Job Description
Databricks Data Engineer Consultant
As a Databricks Data Engineer Consultant, you will design, build, and optimize enterprise-scale data engineering solutions using Databricks on AWS, Azure, or Google Cloud Platform (GCP). You'll help clients modernize data platforms using Lakehouse patterns, deliver reliable and performant pipelines, and translate technical design into measurable business outcomes. You'll contribute to engineering standards, data governance, and delivery excellence while collaborating closely with stakeholders across business and technology teams.
Key Responsibilities
- Data platform delivery: Design, develop, test, and maintain scalable data pipelines and data products on Databricks to support enterprise analytics and reporting needs.
- Lakehouse engineering: Implement Databricks Lakehouse solutions using Apache Spark and Delta Lake, delivering batch/streaming pipelines with Delta Live Tables, Autoloader, Structured Streaming, Workflows, and orchestration (e.g., Apache Airflow).
- Data modeling: Build curated, governed data products by applying metadata-driven ingestion, PySpark incremental loads, and data quality frameworks, plus 3NF/dimensional modeling and Unity Catalog-based security/compliance controls.
- Governance & security: Support implementation of governance, security, and compliance controls in cloud data ecosystems, including Unity Catalog and fine-grained access controls.
- Performance optimization: Monitor and tune jobs, code, clusters, and pipeline designs to improve reliability, throughput, and cost efficiency.
- DevOps & automation: Implement and maintain CI/CD practices for data engineering deployments using tools such as Azure DevOps, AWS CodePipeline, Jenkins, TFS, or PowerShell.
- Communication: Clearly explain technical tradeoffs, implementation choices, and business value to technical teams and non-technical stakeholders; contribute to project plans, status updates, and client-facing deliverables.
- Best practices contribution: Contribute to documentation and reusable patterns for data architecture, integration, modeling, and engineering standards.
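The metadata-driven ingestion and data-quality patterns named in the responsibilities above can be sketched in plain Python. This is an illustrative sketch only: the table names, config keys, and quality rules are hypothetical, and a real Databricks implementation would read with PySpark/Autoloader and write to Delta tables rather than in-memory lists.

```python
# Sketch of a metadata-driven ingestion loop with simple data-quality
# checks. All table names and rules are illustrative; on Databricks the
# load step would be a PySpark/Autoloader read and the sink a Delta table.

# Ingestion metadata: one entry per source table drives the whole loop.
TABLE_CONFIG = [
    {"name": "orders", "key": "order_id", "required": ["order_id", "amount"]},
    {"name": "customers", "key": "customer_id", "required": ["customer_id"]},
]

def quality_check(rows, required):
    """Split rows into those where every required column is non-null, and the rest."""
    good, bad = [], []
    for row in rows:
        (good if all(row.get(col) is not None for col in required) else bad).append(row)
    return good, bad

def ingest(source, config):
    """Drive ingestion purely from metadata: one code path for every table."""
    results = {}
    for table in config:
        rows = source.get(table["name"], [])
        good, bad = quality_check(rows, table["required"])
        results[table["name"]] = {"loaded": good, "quarantined": bad}
    return results

# Toy in-memory "source" standing in for raw files or a landing zone.
source = {
    "orders": [
        {"order_id": 1, "amount": 10.0},
        {"order_id": 2, "amount": None},  # fails the quality check
    ],
    "customers": [{"customer_id": 7}],
}
result = ingest(source, TABLE_CONFIG)
```

The point of the pattern is that adding a new source table means adding one config entry, not new pipeline code.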
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field (Master's preferred).
- 2+ years of hands-on experience in data engineering with a strong focus on Databricks, deployed on any major cloud (AWS, Azure, GCP).
- Minimum of 2 years of technical proficiency with Databricks and cloud-native storage, compute, and distributed processing platforms; Lakehouse architecture, Apache Spark, and Delta Lake; and data warehousing and implementation experience with 3NF, dimensional modeling, and enterprise data lakes.
- Databricks components: Delta Live Tables, Autoloader, Structured Streaming, Databricks Workflows, and orchestration.
- Incremental data loads; building metadata-driven ingestion and data quality frameworks using PySpark.
- Unity Catalog and fine-grained security/access control.
- Proven track record deploying solutions through automated CI/CD pipelines.
- Experience with performance optimization of pipelines, code, and compute resources.
- Ability to travel up to 50% on average, based on the work you do and the clients and industries/sectors you serve.
- Limited immigration sponsorship may be available.
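The incremental-load qualification above typically reduces to a high-watermark pattern: remember the newest change timestamp already loaded and fetch only rows beyond it on the next run. A minimal pure-Python sketch of that logic (the column names and in-memory source are hypothetical; on Databricks the filter would be a PySpark predicate over a Delta source, with the watermark persisted between runs):

```python
# High-watermark incremental load: process only rows strictly newer than
# the last successfully loaded timestamp. Column names are illustrative.

def incremental_load(rows, watermark):
    """Return rows newer than `watermark`, plus the advanced watermark."""
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

# First run: everything is newer than the initial watermark.
source = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 200},
]
batch, wm = incremental_load(source, watermark=0)

# Second run: only rows changed since the stored watermark are picked up.
source.append({"id": 3, "updated_at": 300})
batch2, wm2 = incremental_load(source, watermark=wm)
```

Delta Lake's change data feed or Autoloader's file-notification tracking can replace the hand-rolled watermark in production, but the interview-level idea is the same.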
Preferred Qualifications
- Strong knowledge of one or more cloud ecosystems (AWS, Azure, GCP) and their associated big data stacks is strongly preferred.
- Experience supporting or enabling AI/ML use cases.
- Databricks certifications.
Compensation
The wage range for this role is $84,400 - $155,400 annually. You may also be eligible to participate in a discretionary annual incentive program.
Key Skills/Competencies
- Databricks Data Engineer Consultant
- Data Engineering
- Databricks
- Cloud Data Platforms (AWS, Azure, GCP)
- Lakehouse Architecture
- Apache Spark
- Delta Lake
- Data Modeling
- CI/CD
- Data Governance
How to Get Hired at Deloitte
- Tailor your resume: Highlight your Databricks, Spark, Delta Lake, and cloud platform experience, quantifying achievements.
- Showcase relevant projects: Detail your experience with Lakehouse architecture, CI/CD, and data modeling in your application.
- Prepare for technical questions: Be ready to discuss Databricks components, PySpark, and performance optimization strategies.
- Demonstrate business acumen: Articulate how your technical solutions drive business value and meet client needs.
- Research Deloitte: Understand their consulting approach, values, and recent data/cloud projects.