Researcher, Training

OpenAI

On Site
Full Time
$400,000
San Francisco, CA

Job Overview

Job Title: Researcher, Training
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master's
Offered Salary: $400,000
Location: San Francisco, CA

Job Description

About The Team

OpenAI's Training team is responsible for producing the large language models that power our research and our products, and that ultimately bring us closer to AGI. Achieving this goal requires combining deep research into improving our current architectures, datasets, and optimization techniques with long-term bets aimed at improving the efficiency and capability of future generations of models. We are responsible for integrating these techniques, producing the model artifacts used by the rest of the company, and ensuring that these models are world-class in every respect. Recent examples of artifacts with major contributions from our team include GPT-4 Turbo, GPT-4o, and o1-mini.

About The Role

As a member of the architecture team, you will push the frontier of architecture development for OpenAI's flagship models, enhancing intelligence, efficiency, and adding new capabilities.

Ideal candidates have a deep understanding of LLM architectures, a sophisticated understanding of model inference, and a hands-on empirical approach. A good fit for this role will be equally happy coming up with a creative breakthrough, investing in strengthening a baseline, designing an eval, debugging a thorny regression, or tracking down a bottleneck.

This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In This Role, You Will

  • Design, prototype, and scale up new architectures to improve model intelligence
  • Execute and analyze experiments, both autonomously and collaboratively
  • Study, debug, and optimize both model quality and computational performance
  • Contribute to training and inference infrastructure

You Might Thrive In This Role If You

  • Have experience landing contributions to major LLM training runs
  • Can thoroughly evaluate and improve deep learning architectures in a self-directed fashion
  • Are motivated by safely deploying LLMs in the real world
  • Are well-versed in state-of-the-art transformer modifications for efficiency

Key Skills/Competencies

  • LLM Architecture
  • Model Inference
  • Deep Learning
  • Experimentation
  • Model Optimization
  • Computational Performance
  • Transformer Models
  • Architecture Development
  • AI Research
  • GPT Models

Tags:

Researcher, AI Training
LLM architecture
model inference
deep learning
experimentation
model optimization
computational performance
transformer models
AI research
architecture development
evaluation design
Python
PyTorch
TensorFlow
distributed training
GPU optimization
CUDA
machine learning
large language models
AI deployment
data analysis

How to Get Hired at OpenAI

  • Research OpenAI's Mission: Study their dedication to AGI benefiting humanity and their product impact.
  • Highlight LLM Expertise: Showcase deep understanding of LLM architectures and model inference techniques.
  • Emphasize Empirical Approach: Demonstrate hands-on experience in model development, debugging, and optimization.
  • Tailor Your Resume: Customize your application to reflect contributions to major LLM training runs and deep learning architecture evaluation.
  • Prepare for Technical Depth: Expect interview questions on transformer modifications, efficiency, and solving complex model regressions.
