AI ML Engineer, RTL Power Optimization
NVIDIA

Job Description
Today, NVIDIA is tapping into the unlimited potential of AI to define the next era of computing, one in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, encouraging environment where everyone is inspired to do their best work. Come join the team and see how we can make a lasting impact on the world.
About the Team
Our team is privileged to work on power optimization of data center, gaming, and automotive GPU chips, as well as networking chips. This range of products requires the team to provide architecture, micro-architecture, RTL design, methodology, and AI-based power optimization solutions. You will collaborate with architects, performance engineers, software engineers, ASIC design engineers, and physical design teams to study and implement power analysis and reduction techniques for NVIDIA's next-generation GPU and networking products. Your contributions will give us early insight into the energy consumption of graphics and artificial intelligence workloads, and will allow us to influence architectural, design, and power management improvements.
What You'll Be Doing
- Use internally developed tools and industry-standard pre-silicon gate-level and RTL power analysis tools to help improve product power efficiency.
- Apply artificial intelligence to deliver RTL and/or architecture power optimization solutions.
- Develop and share best practices for performing pre-silicon power analysis.
- Perform comparative power analysis to spot trends and anomalies that warrant closer scrutiny.
- Work with architects and RTL designers to help them interpret their power data and identify power bugs, and drive them to implement fixes.
- Select and run a wide variety of workloads for power analysis.
- Prototype new architectural features in Verilog and analyze their power.
What We Need To See
- Pursuing or recently completed an MS or PhD in Electrical Engineering, Computer Engineering, or a related field, with coursework or experience in AI, digital design, and VLSI concepts (or equivalent experience).
- Understanding of RTL power optimization fundamentals, including switching activity, clock/enable efficiency, and common low‑power design patterns at the RTL level.
- Knowledge of backend flows such as logic synthesis and place‑and‑route, and how RTL decisions impact post‑layout power and timing.
- Familiarity with RTL implementation of low‑power techniques (e.g., clock gating, operand isolation, power gating strategies, multi‑VT usage) and their trade‑offs.
- Exposure to industry power analysis tools such as PowerArtist, PrimeTime PX, or equivalent, including running power analysis and interpreting reports to guide design changes.
- Strong Python programming skills for building automation scripts, data pipelines, and AI/ML‑driven analysis flows for RTL power optimization.
- Coursework and/or hands‑on experience in Machine Learning and Artificial Intelligence, with ability to apply ML techniques to EDA and silicon power problems.
- Previous experience debugging RTL or gate‑level power anomalies by tracing logic cones, examining activity/toggle data, and identifying root‑cause structures or scenarios.
- Strong written and verbal communication skills to document methodologies, present power findings, and explain AI/ML‑based insights to both design and tools teams.
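The requirements above pair Python automation with power-analysis data such as toggle activity. As a purely hypothetical illustration (not an NVIDIA tool, and using made-up register names and counts), the sketch below ranks registers by wasted clock activity from toy toggle data, the kind of pre-silicon signal that flags clock-gating candidates:

```python
def gating_candidates(activity, min_waste=0.5):
    """Rank registers by wasted clock activity.

    activity maps register name -> (clock_toggles, data_toggles).
    A register whose data toggles far less often than its clock is a
    candidate for clock gating; 'waste' is the fraction of clock edges
    that did not change the register's value.
    """
    ranked = []
    for reg, (clk, data) in activity.items():
        if clk == 0:  # no clock activity recorded: nothing to gate
            continue
        waste = 1.0 - (data / clk)
        if waste >= min_waste:
            ranked.append((reg, waste))
    # Highest wasted activity first
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Toy data: a config register that rarely changes is a strong candidate,
# while a busy FIFO pointer is not.
toy = {
    "fifo_rd_ptr": (1000, 900),
    "cfg_reg": (1000, 10),
    "dbg_counter": (1000, 400),
}
for reg, waste in gating_candidates(toy):
    print(f"{reg}: {waste:.0%} of clock edges wasted")
```

A real flow would read activity from a simulation database or a power tool's reports rather than a dict, but the shape of the analysis, aggregate, rank, and hand candidates to designers, is the same.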
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
Key Skills/Competencies
- AI/ML
- RTL Power Optimization
- VLSI Design
- Digital Design
- Python Programming
- Power Analysis Tools
- Clock Gating
- Operand Isolation
- ASIC Design
- Verilog
How to Get Hired at NVIDIA
- Research NVIDIA's AI vision: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor, focusing on their AI and GPU leadership.
- Tailor your resume for ML/RTL: Highlight coursework, projects, and internships specifically demonstrating experience in AI, VLSI, digital design, and power optimization techniques.
- Showcase Python and EDA tools expertise: Emphasize practical skills in Python for automation and familiarity with industry-standard power analysis tools like PowerArtist or PrimeTime PX.
- Prepare for technical depth: Expect in-depth questions on digital design fundamentals, low-power design patterns, and applying machine learning to silicon power problems.
- Demonstrate collaborative spirit: Be ready to discuss experiences in cross-functional teams, highlighting how you've effectively communicated complex technical insights to diverse audiences.