Research Product Manager AI Systems
Granica

Job Description
Research Product Manager - AI Systems (Structured Data, Evaluation & Learning Efficiency)
About The Role
We’re hiring a Research Product Manager to define and build core systems that determine how AI models are evaluated, improved, and deployed on real-world data. You’ll work on systems spanning model evaluation and benchmarking, post-training and feedback loops, structured and relational data learning, and performance, efficiency, and cost optimization.
This role sits at the intersection of ML infrastructure, research, and product. It is closest to roles like ML platform PM or AI infrastructure PM, but with deeper ownership of how systems are designed and how model performance translates into real-world outcomes. You’ll partner closely with researchers and engineers to move ideas from experiments into production systems used at scale.
The Mission
AI today is no longer bottlenecked by model architecture alone. The real constraints are how models are evaluated, how they improve after training, and how they behave in real-world systems. Granica is building the systems that solve this. We are a research and systems company led by Prof. Andrea Montanari (Stanford), focused on evaluation as a first-class system, post-training as a continuous learning loop, and efficient learning over real-world data. Most real-world data is structured and relational, yet modern AI systems remain poorly optimized to learn from it.
Our Thesis
AI advantage will come from how efficiently models learn from structured data—and how that translates into economic value.
What You’ll Do
- Define and drive systems for model evaluation, benchmarking, and real-world performance.
- Build product direction for post-training systems and feedback loops that continuously improve models.
- Define how models learn from large-scale structured and relational datasets.
- Partner with engineering to build systems that connect data platforms (warehouses, lakehouses) with ML systems.
- Own how improvements move from research experiments into production systems.
- Model trade-offs across compute, data efficiency, performance, and cost.
- Identify where system improvements drive measurable business impact.
Skills And Qualifications
Minimum Qualifications
- 5+ years of experience in product management, technical program management, or similar roles in AI, ML infrastructure, or data systems.
- Strong understanding of machine learning systems, including training, evaluation, and deployment.
- Experience working with large-scale data systems or distributed infrastructure.
- Ability to reason about trade-offs across data, compute, performance, and cost.
- Track record of driving complex technical systems from concept to production.
Preferred Qualifications
- Experience with ML platforms, LLM systems, or AI infrastructure.
- Experience with evaluation systems, observability, or model performance tooling.
- Familiarity with structured or relational data systems (e.g., warehouses, lakehouses).
- Background in engineering, applied research, or ML systems development.
- Experience operating in research-driven or highly ambiguous environments.
Ideal Backgrounds
- ML / AI infrastructure PMs (OpenAI, Google, Meta, Snowflake, Databricks, AWS, or similar).
- Product leaders in model systems, evaluation, or observability.
- Research engineers or applied scientists transitioning into product.
- Engineers who have built ML or data systems and taken on product ownership.
Why This Role Matters
Most AI systems are limited not by model capability but by weak evaluation systems, inefficient learning loops, poor utilization of structured data, and a missing link between performance and real-world outcomes. This role defines how those constraints are solved in production systems. You won’t be optimizing features; you’ll be defining the systems that determine how models improve, how they are trusted, and how they deliver value.
Logistics
Location: Mountain View, CA
Work model: On-site, five days per week
Level: Senior / Staff / Principal (depending on experience)
Compensation & Benefits
Competitive salary, meaningful equity, and a substantial bonus for top performers. Flexible time off plus comprehensive health coverage for you and your family. Support for research, publication, and deep technical exploration.
At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!
Compensation Range: $160K - $250K
Key Skills and Competencies
- Product Management
- AI Systems
- ML Infrastructure
- Structured Data
- Model Evaluation
- Machine Learning
- Research
- System Design
- Data Systems
- Performance Optimization
How to Get Hired at Granica
- Tailor your resume: Highlight experience in AI, ML infrastructure, and data systems, emphasizing your track record with complex technical systems and product management.
- Showcase your impact: Quantify achievements in model evaluation, system development, and driving measurable business impact from research to production.
- Understand Granica's mission: Research their focus on structured data, efficient learning, and evaluation systems; align your application with their thesis.
- Prepare for technical interviews: Be ready to discuss ML systems, trade-offs in compute/data/performance, and system design challenges in AI.
- Show you can navigate ambiguity: Highlight experience in research-driven or ambiguous environments, showcasing problem-solving and strategic thinking.