Senior Solutions Architect - Generative AI

NVIDIA

On Site
Full Time
$250,000
Bengaluru, Karnataka, India

Job Overview

Job Title: Senior Solutions Architect - Generative AI
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $250,000
Location: Bengaluru, Karnataka, India

Job Description

Senior Solutions Architect - Generative AI at NVIDIA

NVIDIA is seeking a dynamic and experienced Generative AI Solutions Architect with specialized expertise in training Large Language Models (LLMs) and implementing workflows based on pretraining and fine-tuning LLMs and Retrieval-Augmented Generation (RAG). As a key member of our AI Solutions team, you will play a pivotal role in architecting and delivering cutting-edge solutions that leverage the power of NVIDIA's generative AI technologies. This position requires a deep understanding of language models, particularly open-source LLMs, and strong proficiency in designing and implementing RAG-based workflows.

What You Will Be Doing

  • Architect end-to-end generative AI solutions focusing on LLM training, deployment, and RAG workflows.
  • Collaborate closely with customers to understand business challenges and design tailored solutions.
  • Support pre-sales activities, including technical presentations and demonstrations of LLM and RAG capabilities.
  • Work with NVIDIA engineering teams, providing feedback for generative AI software evolution.
  • Engage directly with customers/partners to understand their requirements and challenges.
  • Lead workshops and design sessions to refine generative AI solutions focused on LLMs and RAG.
  • Lead the training and optimization of Large Language Models using NVIDIA’s platforms.
  • Implement strategies for efficient and effective LLM training to achieve optimal performance.
  • Design and implement RAG-based workflows to enhance content generation and information retrieval.
  • Work closely with customers to integrate RAG workflows into their applications and systems.
  • Stay abreast of the latest developments in language models and generative AI technologies.
  • Provide technical leadership and guidance on best practices for LLM training and RAG solutions.

What We Need To See

  • Master's or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.
  • 7+ years of hands-on experience in a technical AI role, specifically in generative AI with LLMs.
  • Proven track record of successfully deploying and optimizing LLM models for inference in production.
  • In-depth understanding of state-of-the-art language models like GPT-3, BERT, or similar.
  • Expertise in training and fine-tuning LLMs using TensorFlow, PyTorch, or Hugging Face.
  • Proficiency in model deployment and optimization for efficient inference on GPUs.
  • Strong knowledge of GPU cluster architecture and parallel processing for accelerated training.
  • Excellent communication and collaboration skills for technical and non-technical stakeholders.
  • Experience leading workshops, training, and presenting technical solutions to diverse audiences.

Ways To Stand Out From The Crowd

  • Experience deploying LLM models in cloud environments (AWS, Azure, GCP) and on-premises.
  • Proven ability to optimize LLM models for inference speed, memory, and resource utilization.
  • Familiarity with containerization (Docker) and orchestration (Kubernetes) for scalable deployment.
  • Deep understanding of GPU cluster architecture, parallel, and distributed computing concepts.
  • Hands-on experience with NVIDIA GPU technologies and GPU cluster management.
  • Ability to design and implement scalable workflows for LLM training and inference on GPU clusters.

Key Skills / Competencies

  • Generative AI
  • Large Language Models (LLMs)
  • Retrieval-Augmented Generation (RAG)
  • Deep Learning
  • PyTorch
  • TensorFlow
  • Hugging Face
  • GPU Architectures
  • Cloud Deployment (AWS, Azure, GCP)
  • Technical Leadership

Tags:

Solutions Architect
Generative AI
LLMs
RAG
AI
Machine Learning
Deep Learning
Architecting
Deployment
Optimization
Training
Customer Engagement
Technical Leadership
PyTorch
TensorFlow
Hugging Face
GPUs
Kubernetes
Docker
AWS
Azure
GCP

How to Get Hired at NVIDIA

  • Research NVIDIA's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume: Highlight Generative AI, LLM training, RAG expertise, and NVIDIA GPU experience to specific job requirements.
  • Showcase deep technical skills: Emphasize proficiency in PyTorch, TensorFlow, Hugging Face, and distributed AI systems relevant to LLMs and RAG.
  • Prepare for technical interviews: Practice system design, machine learning algorithms, and explain Generative AI concepts, especially LLM fine-tuning and RAG implementation.
  • Demonstrate passion for AI innovation: Discuss relevant personal projects, industry trends, and your vision for the future of Generative AI in the interview.
