Senior AI Engineer, World Foundation Models
NVIDIA

Job Description
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
NVIDIA is building the next generation of AI systems that can perceive, reason about, and generate dynamic worlds. Our team advances world foundation models to enable high-fidelity, temporally stable video and world generation for Physical AI, simulation, and interactive experiences. This role operates at the applied-research boundary: developing and validating model improvements, then hardening them into production-grade checkpoints and recipes that teams can reliably build on. The technical focus is human appearance, motion, and interaction, where identity drift, temporal artifacts, and inconsistent contact dynamics often limit real-world usability. Progress is measured through disciplined experimentation, robust diagnostics, and repeatable side-by-side evaluation. Work is delivered in close partnership with data, platform, and product engineering to ensure improvements translate into real-time performance and user-visible quality.
What You'll Be Doing
- Research, implement, and validate model architecture and algorithm changes that improve video generation fidelity, with emphasis on human-centric quality (identity preservation, anatomy, motion coherence, and interaction realism).
- Explore and prototype improvements across spatial multimodal modeling, modality alignment, flow-based or diffusion-based video generation, and neural rendering-inspired representations to improve controllability and long-horizon consistency.
- Improve training and inference efficiency through architectural and post-training techniques (compute/memory optimizations, distillation, pruning, and compression).
- Define model training objectives that improve sim-to-real and real-to-sim generalization, especially for human motion, contact, and interaction dynamics across real-world and synthetic/simulation data.
- Develop detailed, domain-specific benchmarks for evaluating world foundation models, covering both generation and understanding in models that reason about video, simulation, and physical environments.
- Translate research results into robust implementations, such as training code, production-grade checkpoints, model integrations, and demos that clearly showcase capability gains across teams.
What We Need To See
- PhD in Computer Science, Graphics, Computer Engineering, or a closely related field (or equivalent experience).
- 8+ years of applied research and/or industry experience in vision, graphics, or adjacent ML domains (or equivalent experience).
- 4+ years of direct experience designing, training, and evaluating generative models for image/video/audio, with strong fundamentals in modern deep learning.
- Hands-on experience improving generative models with a focus on perceptual quality and temporal stability, especially for generating humans.
- Advanced proficiency in Python, PyTorch, C++, and CUDA with strong research-engineering practices (reproducibility, testing, profiling, experiment tracking).
- Experience training and debugging large models in multi-GPU and/or multi-node environments and distributed training workflows.
- Practical knowledge of inference/runtime bottlenecks and optimization techniques (e.g., batching, parallelism strategies, low-precision/quantization awareness, attention/KV-cache efficiency).
- Strong “eye for quality” and interest in diagnosing visual artifacts (sharpness, texture detail, temporal stability, etc.) using perceptual metrics, human preference signals, or learned evaluators.
Ways To Stand Out From The Crowd
- Proven track record in related research, including publications in top conferences (e.g., NeurIPS, CVPR, ICLR), with clear evidence of impact on model quality or robustness.
- Exposure to closed-loop training setups (e.g., reinforcement learning or preference-based optimization) for improving controllability, stability, and interaction quality in generated sequences.
Key Skills & Competencies
- Generative AI
- Foundation Models
- Video Generation
- Deep Learning
- Python & PyTorch
- CUDA & C++
- Temporal Stability
- Model Optimization
- Distributed Training
- Neural Rendering
How to Get Hired at NVIDIA
- Research NVIDIA's culture: Study its mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor, paying particular attention to its AI leadership.
- Tailor your resume: Highlight extensive experience in generative AI, foundation models, PyTorch, and CUDA, showcasing real-world impact.
- Showcase project impact: Emphasize contributions to video generation fidelity, temporal stability, and human-centric quality improvements.
- Prepare for technical depth: Focus on distributed training, inference optimization, model architecture design, and debugging large-scale ML systems.
- Articulate your vision: Discuss your understanding of Physical AI, world models, and how your expertise aligns with NVIDIA's innovation in this space.