
Machine Learning Engineering Manager

Spotify

On Site
Full Time
$250,000
Boston, MA

Job Overview

Job Title: Machine Learning Engineering Manager
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master's
Offered Salary: $250,000
Location: Boston, MA


Job Description

Machine Learning Engineering Manager

The Personalization team at Spotify makes deciding what to play next easier and more enjoyable for every listener. We aim to understand the world of music and podcasts to provide great recommendations, keeping hundreds of millions of people listening daily across products like Home, Search, Discover Weekly, Daylist, and new innovations like AI DJ and AI Playlists.

The Role: LLM Serving & Infrastructure

Generative AI is profoundly transforming Spotify’s product capabilities and technical architecture. Generative recommender systems, agent frameworks, and LLMs offer immense opportunities to serve diverse user needs, unlock richer content understanding, and enhance user engagement. This Machine Learning Engineering Manager role will focus specifically on serving a Unified Recommender model, leveraging open-weight LLM and transformer technology. You will collaborate with a diverse team to establish and implement the machine learning plan for this product domain, developing innovative recommendations and agent interactions. As a technology leader, you will manage a team and influence peers, collaborating with internal customers and platform teams to shape the direction of the entire Spotify experience.

Join us and you’ll keep millions of users listening and engaging with our platform every day!

What You’ll Do

  • Lead a high-performing engineering team to develop, build, and deploy a high-scale, low-latency LLM Serving Infrastructure.
  • Drive the implementation of a unified serving layer supporting multiple LLM models and inference types (batch, offline evaluation flows, and real-time/streaming).
  • Lead all aspects of the development of the Model Registry for deploying, versioning, and running LLMs across production environments.
  • Ensure successful integration with the core Personalization and Recommendation systems to deliver LLM-powered features.
  • Define and champion standardized technical interfaces and protocols for efficient model deployment and scaling.
  • Establish and monitor the serving infrastructure's performance, cost, and reliability, including load balancing, autoscaling, and failure recovery.
  • Collaborate closely with data science, machine learning research, and feature teams (Autoplay, Home, Search, etc.) to drive the active adoption of the serving infrastructure.
  • Scale up the serving architecture to handle hundreds of millions of users and high-volume inference requests for internal domain-specific LLMs.
  • Drive Latency and Cost Optimization: partner with SRE and ML teams to implement techniques like quantization, pruning, and efficient batching to minimize serving latency and cloud compute costs.
  • Develop Observability and Monitoring: build dashboards and alerting for service health, tracing, A/B test traffic, and latency trends to ensure adherence to defined SLAs.
  • Contribute to Core LPM Serving: focus on the technical strategy for deploying and maintaining the core Large Personalization Model (LPM).
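For candidates preparing for this role, the "efficient batching" responsibility above can be illustrated with a minimal, hypothetical sketch (not Spotify's actual implementation): incoming inference requests are queued and flushed as a batch once either a size limit is reached or a short wait deadline expires, trading a little latency for much higher GPU throughput.

```python
import time
from collections import deque


class MicroBatcher:
    """Toy micro-batching queue for LLM inference requests.

    Requests accumulate until either max_batch_size is reached or
    max_wait_s has elapsed since the first queued request; the batch
    is then flushed to the model in one forward pass.
    """

    def __init__(self, max_batch_size: int = 8, max_wait_s: float = 0.01):
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_s
        self._queue: deque = deque()
        self._first_enqueue_at: float | None = None

    def submit(self, request: str) -> None:
        # Start the wait-deadline clock on the first request of a batch.
        if not self._queue:
            self._first_enqueue_at = time.monotonic()
        self._queue.append(request)

    def ready(self) -> bool:
        # A batch is ready when it is full or the deadline has passed.
        if not self._queue:
            return False
        if len(self._queue) >= self.max_batch_size:
            return True
        return time.monotonic() - self._first_enqueue_at >= self.max_wait_s

    def flush(self) -> list[str]:
        # Hand the whole batch to the inference backend and reset state.
        batch = list(self._queue)
        self._queue.clear()
        self._first_enqueue_at = None
        return batch
```

Production systems (e.g., continuous-batching inference servers) are far more sophisticated, but the size-or-deadline trigger shown here is the core trade-off interviewers often probe.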

Who You Are

  • 5+ years of experience in software or machine learning engineering, with 2+ years of experience managing an engineering team.
  • Hands-on with ML Engineering: deep expertise in building, scaling, and governing high-quality ML systems and datasets, including defining data schemas, handling data lineage, and implementing data validation pipelines (e.g., HuggingFace datasets library or similar internal systems).
  • Deep technical background in building and operating large-scale, high-velocity Machine Learning/MLOps infrastructure, ideally for personalization, recommendation, or Large Language Models (LLMs).
  • Proven track record of driving complex projects involving multiple partners and federated contribution models ("one source of truth, many contributors").
  • Expertise in designing robust, loosely coupled systems with clean APIs and clear separation of concerns (e.g., distinguishing between fast dev-time tools and rigorous production-like systems).
  • Experience integrating evaluation and testing into continuous integration/continuous deployment (CI/CD) pipelines to enable rapid 'fork-evaluate-merge' developer workflows.
  • Solid understanding of experiment tracking and results visualization platforms (e.g., MLFlow, custom UIs).
  • A pragmatic leader who can balance the need for speed with progressive rigor and production fidelity.
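The "one source of truth, many contributors" model referenced above, together with the Model Registry responsibility from the job description, can be sketched as a toy registry (purely illustrative, not Spotify's system): many teams register immutable model versions under a shared name, but exactly one version per name is promoted to production.

```python
class ModelRegistry:
    """Toy single-source-of-truth model registry.

    Each model name maps to an append-only list of immutable versions;
    at most one version per name is marked as the production version.
    """

    def __init__(self):
        self._versions: dict[str, list[str]] = {}
        self._production: dict[str, int] = {}

    def register(self, name: str, artifact: str) -> int:
        """Append a new version for `name`; returns its 1-based version number."""
        self._versions.setdefault(name, []).append(artifact)
        return len(self._versions[name])

    def promote(self, name: str, version: int) -> None:
        """Mark an existing version as the production version for `name`."""
        if not 1 <= version <= len(self._versions.get(name, [])):
            raise ValueError(f"unknown version {version} for model {name!r}")
        self._production[name] = version

    def production(self, name: str) -> str:
        """Return the artifact currently serving production traffic."""
        return self._versions[name][self._production[name] - 1]
```

Real registries (MLflow's, for example) add stages, metadata, lineage, and access control, but the append-only versioning plus an explicit promotion step is the pattern this requirement describes.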

Where You’ll Be

This role is based in New York or Boston. We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows flexibility to work from home.

The United States base range for this position is $176,166 - $251,666 plus equity. Benefits include health insurance, six months paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, and paid sick leave. These ranges may be modified in the future.

Spotify is an equal opportunity employer. We welcome you for who you are, no matter your background. Our platform and workplace are for everyone; the more voices we amplify, the more we thrive and innovate. Bring your personal experience, perspectives, and background – our differences power our revolution in listening.

At Spotify, we are passionate about inclusivity and ensuring our recruitment process is accessible. If you need accommodations at any stage of the application or interview, please let us know.

Spotify transformed music listening forever when we launched in 2008. Our mission is to unlock human creativity by empowering artists and connecting billions of fans. Everything we do is driven by our love for music and podcasting. Today, we are the world’s most popular audio streaming subscription service.

Key skills/competency

  • LLM Serving
  • MLOps Infrastructure
  • Machine Learning Engineering
  • Team Leadership
  • Scalable Systems
  • Model Deployment
  • Performance Optimization
  • Data Validation
  • CI/CD
  • Cloud Computing

Tags:

Machine Learning Engineering Manager
LLM serving
MLOps
distributed systems
model deployment
inference
performance optimization
team leadership
stakeholder management
infrastructure development
observability
transformers
HuggingFace
CI/CD
MLFlow
cloud compute
Python
data validation
API design
SRE practices


How to Get Hired at Spotify

  • Research Spotify's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume for ML Engineering: Highlight deep expertise in LLM serving, MLOps, and scalable ML systems.
  • Showcase leadership and project impact: Emphasize managing teams, driving complex projects, and influencing technical direction.
  • Prepare for technical and system design challenges: Focus on LLM inference, performance optimization, and distributed ML infrastructure.
  • Articulate your passion for audio/personalization: Connect your skills to Spotify’s mission of enhancing user listening experiences.
