Researcher, Training
OpenAI
Job Description
About The Team
OpenAI's Training team is responsible for producing the large language models that power our research and our products, and that ultimately bring us closer to AGI. Achieving this goal requires combining deep research into improving our current architectures, datasets, and optimization techniques with long-term bets aimed at improving the efficiency and capability of future generations of models. We are responsible for integrating these techniques, producing model artifacts used by the rest of the company, and ensuring that these models are world-class in every respect. Recent examples of artifacts with major contributions from our team include GPT-4 Turbo, GPT-4o, and o1-mini.
About The Role
As a member of the architecture team, you will push the frontier of architecture development for OpenAI's flagship models, enhancing intelligence, efficiency, and adding new capabilities.
Ideal candidates combine a deep understanding of LLM architectures with a sophisticated understanding of model inference and a hands-on empirical approach. A good fit for this role will be equally happy coming up with a creative breakthrough, investing in strengthening a baseline, designing an eval, debugging a thorny regression, or tracking down a bottleneck.
This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In This Role, You Will
- Design, prototype and scale up new architectures to improve model intelligence
- Execute and analyze experiments autonomously and collaboratively
- Study, debug, and optimize both model performance and computational performance
- Contribute to training and inference infrastructure
You Might Thrive In This Role If You
- Have experience landing contributions to major LLM training runs
- Can thoroughly evaluate and improve deep learning architectures in a self-directed fashion
- Are motivated by safely deploying LLMs in the real world
- Are well-versed in state-of-the-art transformer modifications for efficiency
Key Skills/Competencies
- LLM Architecture
- Model Inference
- Deep Learning
- Experimentation
- Model Optimization
- Computational Performance
- Transformer Models
- Architecture Development
- AI Research
- GPT Models
How to Get Hired at OpenAI
- Research OpenAI's Mission: Study their dedication to AGI benefiting humanity and their product impact.
- Highlight LLM Expertise: Showcase deep understanding of LLM architectures and model inference techniques.
- Emphasize Empirical Approach: Demonstrate hands-on experience in model development, debugging, and optimization.
- Tailor Your Resume: Customize your application to reflect contributions to major LLM training runs and deep learning architecture evaluation.
- Prepare for Technical Depth: Expect interview questions on transformer modifications, efficiency, and solving complex model regressions.