Staff Data Engineer
Loblaw Digital

Job Description
Come make your difference in communities across Canada, where authenticity, trust, and making connections are valued – as we shape the future of Canadian retail, together. Our unique position as one of the country's largest employers, coupled with our commitment to positively impact the lives of all Canadians, provides our colleagues a range of opportunities and experiences to help Canadians Live Life Well®.
At Loblaw Companies Limited, we succeed through collaboration and commitment and set a high bar for ourselves and those around us. Whether you are just starting your career, re-entering the workforce, or looking for a new job, this is where you belong.
As a Staff Data Engineer, you will play a technical leadership and architecture role on the Retail Media Platform team. This is a senior, hands-on position focused on designing, building, and evolving high-scale data platforms and backend systems that power advertising measurement, reporting, analytics, and AI-enabled use cases.
You will lead complex design initiatives, improve performance and reliability across critical pipelines, raise the bar on code quality, and mentor engineers across teams. You will also work closely with backend, product, and AI teams to reduce system coupling, evolve metadata and API design, and enable next-generation data and AI capabilities across the platform.
This role is ideal for someone who enjoys being both a system architect and a strong individual contributor, and who wants to shape the future of data and AI platforms at scale.
What You'll Do:
Architecture & Technical Leadership
- Lead system-level design for scalable, reliable, and high-performance data platforms supporting batch, streaming, and real-time use cases.
- Drive architectural improvements across data pipelines, metadata layers, APIs, and service dependencies.
- Partner with Product, Backend, and AI teams to translate complex business requirements into robust technical solutions.
Data Engineering & Platforms
- Architect, build, and optimize large-scale data pipelines using PySpark, Dataproc, Airflow, GCS, Parquet, and GCP services.
- Design and maintain data models that support analytics, reporting, experimentation, and measurement at scale.
- Implement strong data quality, validation, observability, and monitoring practices to ensure data trust and reliability.
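To make the data-quality and validation bullet above concrete, here is a minimal, framework-free sketch of a batch-level null-rate check. The function names, columns, and thresholds are invented for illustration and are not part of Loblaw's actual stack:

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is None or missing."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def validate_batch(rows: list[dict], max_null_rates: dict[str, float]) -> list[str]:
    """Return the columns whose null rate exceeds the allowed threshold."""
    return [col for col, limit in max_null_rates.items() if null_rate(rows, col) > limit]

# Toy batch standing in for a pipeline's output partition.
rows = [
    {"user_id": 1, "campaign": "a", "spend": 10.0},
    {"user_id": 2, "campaign": None, "spend": 3.5},
    {"user_id": 3, "campaign": "b", "spend": None},
    {"user_id": None, "campaign": "b", "spend": 1.0},
]

# user_id must never be null; the others tolerate up to 50% nulls.
failing = validate_batch(rows, {"user_id": 0.0, "campaign": 0.5, "spend": 0.5})
print(failing)  # ['user_id']
```

In a production pipeline, a check like this would typically run as a post-write validation task (e.g. an Airflow task downstream of the load) and fail the run before bad data reaches consumers.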
Backend & API Enablement
- Contribute to the design and evolution of backend services and APIs that expose measurement, filtering, and reporting capabilities.
- Reduce unnecessary API and metadata dependencies to unlock better performance and flexibility, including deeper and more effective use of analytics engines such as Druid.
- Collaborate closely with backend engineers on service design, scalability, and performance tuning.
AI & Data-for-AI Enablement
- Partner with Data & AI teams to enable AI-driven features through high-quality, well-modeled, and inference-ready data pipelines.
- Support Loblaw’s broader AI strategy (including LDIA initiatives) by designing data foundations that power experimentation, automation, and intelligent decision-making.
- Help bridge traditional data engineering with emerging AI-enabled use cases.
Engineering Excellence & Mentorship
- Lead design reviews and code reviews, setting standards for performance, readability, testing, and maintainability.
- Mentor and coach engineers across experience levels, helping raise overall engineering maturity.
- Drive continuous improvement across pipeline performance, cost efficiency, reliability, and operational excellence.
Does This Sound Like You?
Required Qualifications
- BA/BS in Computer Science, Engineering, Math, or a related field (advanced degree is a plus).
- Senior- or Staff-level Data Engineer with experience owning production-critical, large-scale systems.
- Deep hands-on expertise with PySpark and distributed data processing, including performance optimization.
- Strong experience with cloud data platforms, preferably GCP (Dataproc, GCS, BigQuery).
- Strong SQL skills with experience querying and optimizing large analytical datasets.
- Experience with non-relational and analytical data stores (e.g., Druid, Bigtable, Elasticsearch, or similar).
- Solid programming experience in Python, Scala, or Java.
- Experience with orchestration tools such as Airflow and operating production pipelines.
- Strong understanding of data modeling, partitioning strategies, and storage formats (e.g., Parquet).
- Experience working in Agile environments with iterative delivery.
- Strong oral and written communication skills, with the ability to articulate complex technical concepts to technical and non-technical stakeholders alike.
- Proven team player who thrives in a fast-paced, collaborative environment.
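The partitioning-strategy qualification above can be illustrated with a small sketch of Hive-style partition paths and date-based partition pruning, the layout commonly used for Parquet data on GCS. The bucket, column names, and helper functions are hypothetical, chosen only to show the idea:

```python
from datetime import date

def partition_path(base: str, event_date: date, country: str) -> str:
    """Hive-style partition directory, e.g. .../event_date=2024-01-15/country=CA."""
    return f"{base}/event_date={event_date.isoformat()}/country={country}"

def prune_by_date(paths: list[str], start: date, end: date) -> list[str]:
    """Keep only partitions whose event_date falls within [start, end].
    This mirrors the partition pruning a query engine performs when a
    filter predicate matches a partition column."""
    kept = []
    for path in paths:
        key = next(s for s in path.split("/") if s.startswith("event_date="))
        d = date.fromisoformat(key.split("=", 1)[1])
        if start <= d <= end:
            kept.append(path)
    return kept

paths = [partition_path("gs://bucket/events", date(2024, 1, d), "CA") for d in (1, 15, 31)]
print(prune_by_date(paths, date(2024, 1, 10), date(2024, 1, 20)))
# ['gs://bucket/events/event_date=2024-01-15/country=CA']
```

Choosing partition keys that match the most common filter predicates (typically date, plus one low-cardinality dimension) is what lets engines like Spark or BigQuery skip most of the data at read time.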
Nice to Have
- Experience supporting AI/ML workflows or platforms used for model training or inference.
- Experience with real-time or streaming systems (e.g., Kafka or similar).
- Experience in advertising technology, retail media, or large-scale measurement systems.
- Experience designing or evolving metadata-driven systems and APIs.
Key Skills/Competencies
- PySpark
- GCP
- Data Modeling
- Distributed Processing
- SQL
- Airflow
- Data Architecture
- AI/ML Data
- Performance Optimization
- Python
How to Get Hired at Loblaw Digital
- Research Loblaw Digital's culture: Study their mission, values (Care, Ownership, Respect, Excellence), recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume for Staff Data Engineer: Highlight experience with PySpark, GCP, large-scale data platforms, and AI enablement, using keywords from the job description.
- Showcase technical leadership: Prepare examples demonstrating your ability to lead architectural design, mentor engineers, and drive technical improvements in data engineering.
- Prepare for a technical deep-dive: Expect questions on distributed data processing, data modeling, SQL optimization, and cloud data platform best practices (GCP-specific).
- Demonstrate collaboration and communication: Be ready to discuss how you've partnered with product, backend, and AI teams to deliver complex data solutions.
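For the SQL-optimization portion of a technical deep-dive, a classic warm-up topic is how indexing changes a query plan from a full scan to an index search. Here is a self-contained sketch using SQLite; the schema is invented and stands in for a much larger analytical fact table:

```python
import sqlite3

# In-memory table standing in for a large analytical fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_date TEXT, spend REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}", i * 0.5) for i in range(1000)],
)

query = "SELECT SUM(spend) FROM events WHERE user_id = 7"

# Without an index, the plan is a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

# With the index, the same query becomes an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(before)  # e.g. "SCAN events"
print(after)   # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
```

The same reasoning transfers to columnar warehouses like BigQuery, where the analogous levers are partitioning and clustering rather than B-tree indexes.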