Databricks Engineer (Data Engineer)

  • Type: Contract
  • Job #34863
  • Strong hands-on experience with Databricks (PySpark, Delta Lake, notebooks) – core requirement
  • Proven ability to build and optimize ETL/ELT data pipelines in a lakehouse environment
  • Experience with Azure Data Lake Storage (ADLS Gen2) for scalable data storage
  • Hands-on development using Azure Data Factory (ADF) for orchestration and pipelines
  • Experience with Azure Functions for serverless data processing
  • Solid understanding of lakehouse architecture (Databricks + ADLS integration)
  • Strong proficiency in Python and SQL for data transformation and pipeline logic
  • Experience with data modeling, partitioning, and performance optimization
  • Familiarity with Databricks Unity Catalog (data governance, access control)
  • Experience integrating Databricks with Snowflake, APIs, or downstream BI systems
  • Exposure to CI/CD pipelines (Azure DevOps, GitHub Actions) for data workflows
  • Experience with infrastructure-as-code tools (Terraform or similar) is an asset
  • Familiarity with orchestration tools (Airflow, dbt, or ADF pipelines)
  • Strong communication skills, with client-facing or consulting experience
  • Databricks certifications preferred, as a strong indicator of hands-on expertise