
Data Engineer II

QuantumBlack, AI by McKinsey

On Site
Full Time
$120,000
Washington, DC


Job Description

About the Role

As a Data Engineer II, you will design, build, and optimize modern data platforms that power advanced analytics and AI solutions. You’ll collaborate with clients and interdisciplinary teams to architect scalable pipelines, manage secure and compliant data environments, and unlock the value of complex datasets across industries. You’ll sharpen your expertise by working on innovative projects, contributing to R&D, and learning from top-tier talent in a dynamic, global environment.

Your work will drive lasting impact. By ensuring data is accurate, accessible, and production-ready, you’ll enable clients to accelerate digital transformations, adopt AI responsibly, and achieve measurable business outcomes.

Key Responsibilities

  • Develop a streaming data platform to integrate telemetry for predictive maintenance in aerospace systems.
  • Implement secure data pipelines that reduce time-to-insight for a Fortune 500 utility company.
  • Optimize large-scale batch and streaming workflows for a global financial services client, cutting infrastructure costs while improving performance.
  • Develop pipelines for embeddings and vector databases to enable retrieval-augmented generation (RAG) for a global defense client.

You’ll work in cross-functional Agile teams with Data Scientists, Machine Learning Engineers, Designers, and domain experts to deliver high-quality analytics solutions. Partnering closely with clients—from data owners to C-level executives—you’ll shape data ecosystems that drive innovation and long-term resilience.

You should expect this role to include at least some work in critical industries (Government, Defense, Aerospace, Utilities, Oil and Gas), but you will have the ability to serve other industries as well. This role offers an exceptional environment to grow as a technologist and collaborator. You’ll develop expertise at the intersection of technology and business by tackling diverse challenges while collaborating with some of the best technical and business talent in the world.

There is flexibility to hire at the Engineer I/II or Senior Engineer level, depending on your experience.

Qualifications and Skills

  • U.S. Citizenship is required for this role (you must be able to be staffed on Critical Industries work, which includes Government, Defense, Aerospace, Utilities, etc.).
  • Degree in Computer Science, Business Analytics, Engineering, Mathematics, or related field.
  • 2+ years of professional experience in data engineering, software engineering, or adjacent technical roles.
  • Proficiency in Python, Scala, or Java for production-grade pipelines, with strong skills in SQL and PySpark.
  • Hands-on experience with cloud platforms (AWS, GCP, Azure, Oracle) and modern data storage/warehouse solutions such as Snowflake, BigQuery, Redshift, and Delta Lake.
  • Practical experience with Databricks, AWS Glue, and transformation frameworks like dbt, Dataform, or Databricks Asset Bundles.
  • Knowledge of distributed systems (Spark, Dask, Flink) and streaming platforms (Kafka, Kinesis, Pulsar) for real-time and batch processing.
  • Familiarity with workflow orchestration tools (Airflow, Dagster, Prefect), CI/CD for data workflows, and infrastructure-as-code (Terraform, CloudFormation).
  • Understanding of DataOps principles including pipeline monitoring, testing, and automation, with exposure to observability tools such as Datadog, Prometheus, and Great Expectations.
  • Exposure to ML platforms (Databricks, SageMaker, Vertex AI), MLOps best practices, and GenAI toolkits (LangChain, LlamaIndex, Hugging Face).
  • Willingness to travel as required.
  • Strong communication, time management, and resilience, with the ability to align technical solutions to business value.

Key Skills and Competencies

  • Data Engineering
  • Python
  • SQL
  • PySpark
  • AWS
  • GCP
  • Azure
  • Snowflake
  • BigQuery
  • Databricks

Tags:

Data Engineer
Data Engineering
Python
SQL
PySpark
AWS
GCP
Azure
Snowflake
BigQuery
Databricks
Data Pipelines
AI
Machine Learning
Cloud Computing
Distributed Systems
Streaming Data
Critical Industries

How to Get Hired at QuantumBlack, AI by McKinsey

  • Tailor your resume: Highlight your 2+ years of data engineering experience, Python, SQL, PySpark, and cloud platform skills (AWS, GCP, Azure).
  • Showcase cloud expertise: Detail your hands-on experience with data warehouses like Snowflake, BigQuery, Redshift, and Delta Lake.
  • Emphasize project impact: Quantify your achievements in optimizing pipelines, reducing costs, and improving performance in previous roles.
  • Demonstrate collaboration: Mention experience working in Agile teams and partnering with stakeholders from data owners to C-level executives.
  • Prepare for technical interviews: Be ready to discuss distributed systems, streaming platforms, workflow orchestration, and DataOps principles.
