Senior Data Science Engineer
@ Dunnhumby

Gurgaon, Haryana, India
On Site
Posted 3 days ago

Job Details

About dunnhumby

dunnhumby is the global leader in Customer Data Science, empowering businesses to compete in a modern data-driven economy. We put the Customer First and have deep expertise in retail.

Role Overview - Senior Data Science Engineer

We are seeking a Senior Data Science Engineer (Automation Engineer) to automate project workflows, manage deployment pipelines, and improve operational efficiency through AI-driven automation.

What You’ll Do

  • Automate Python/PySpark code for data workflows.
  • Collaborate with data scientists to refactor notebooks into production-ready systems.
  • Design and implement scalable data pipelines with AI-driven automation.
  • Build supporting tooling such as job schedulers, version-control workflows, and monitoring solutions.
  • Work closely with teams to identify AI-driven process improvements.
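
The scheduling and monitoring tooling mentioned above can be sketched with the standard library alone; this is a minimal, illustrative runner (the `Task` abstraction and task names are hypothetical, not dunnhumby's actual stack):

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

@dataclass
class Task:
    """One pipeline step: a name, a callable, and a retry budget."""
    name: str
    run: Callable[[], None]
    retries: int = 2

def run_pipeline(tasks: list[Task]) -> dict[str, str]:
    """Run tasks in order, retrying failures; return a per-task status report."""
    report: dict[str, str] = {}
    for task in tasks:
        for attempt in range(task.retries + 1):
            try:
                task.run()
                report[task.name] = "ok"
                break
            except Exception as exc:
                log.warning("%s failed (attempt %d): %s", task.name, attempt + 1, exc)
        else:
            # Retry budget exhausted without a successful run.
            report[task.name] = "failed"
    return report

# Example: one trivially succeeding step.
report = run_pipeline([Task("extract", lambda: None)])
```

In production this pattern is typically delegated to an orchestrator such as Airflow, but the core ideas (named steps, retries, a status report for monitoring) are the same.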

Key Skills/Competency

  • Data Science
  • Automation
  • Python
  • PySpark
  • AI
  • Cloud
  • Git
  • Docker
  • Kubernetes
  • FastAPI

What You Can Expect

Enjoy a comprehensive rewards package, flexible working hours, and personal flexibility, along with investment in cutting-edge technology and a nurturing, inclusive culture.

Our Approach to Flexible Working

dunnhumby values work/life balance and is open to discussing agile working opportunities during the hiring process.

How to Get Hired at dunnhumby

🎯 Tips for Getting Hired

  • Research dunnhumby's culture: Understand their customer-focused, data-driven mission.
  • Tailor your resume: Highlight automation and AI project experience.
  • Showcase technical skills: Emphasize experience with Python, Airflow, Docker, and Kubernetes.
  • Prepare for interviews: Practice technical and behavioral questions.

📝 Interview Preparation Advice

Technical Preparation

Review Python scripting and PySpark basics.
Practice building scalable data pipelines.
Familiarize yourself with cloud deployment strategies.
Learn containerization with Docker and Kubernetes.
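
As pipeline practice, one useful pattern to internalize is chaining small, pure transforms, the same shape you would use for PySpark DataFrame transformations. A plain-Python sketch (the transform names and the 1.18 tax rate are purely illustrative):

```python
from functools import reduce
from typing import Callable, Iterable

def filter_valid(rows: Iterable[dict]) -> Iterable[dict]:
    """Drop rows with a non-positive amount."""
    return (r for r in rows if r.get("amount", 0) > 0)

def add_tax(rows: Iterable[dict]) -> Iterable[dict]:
    """Derive a 'total' field from 'amount' (illustrative 18% rate)."""
    return ({**r, "total": round(r["amount"] * 1.18, 2)} for r in rows)

def pipeline(rows: Iterable[dict],
             steps: list[Callable[[Iterable[dict]], Iterable[dict]]]) -> Iterable[dict]:
    """Apply each transform in order, feeding one step's output to the next."""
    return reduce(lambda acc, step: step(acc), steps, rows)

rows = [{"amount": 100}, {"amount": -5}, {"amount": 20}]
result = list(pipeline(rows, [filter_valid, add_tax]))
# The invalid row is filtered out; surviving rows gain a "total" field.
```

The same composition maps directly onto chained `DataFrame.filter` / `withColumn` calls in PySpark, which is a common interview talking point for this kind of role.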

Behavioral Questions

Discuss teamwork in automation projects.
Explain problem-solving in pipeline development.
Show adaptability in learning new technologies.
Describe handling project challenges effectively.
