Principal Applied Scientist @ Oracle
Hybrid
$251,600
Full Time
Posted 21 days ago
Job Details
About the Role
Oracle is seeking an exceptional Principal Applied Scientist with deep expertise in Responsible AI to join our fast-growing AI/ML research team. In this role, you will drive the development and evaluation of scalable safeguards for foundation models, focusing on large language and multi-modal models. Your work will directly influence how we design, deploy, and monitor trustworthy AI systems across a broad range of products.
What You’ll Do
- Conduct cutting-edge research in Responsible AI including fairness, robustness, explainability, and safety for generative models.
- Design and implement safeguards, red teaming pipelines, and bias mitigation strategies for LLMs and foundation models.
- Contribute to the fine-tuning and alignment of LLMs using prompt engineering, instruction tuning, and RLHF/DPO.
- Define and implement rigorous evaluation protocols such as bias audits, toxicity analysis, and robustness benchmarks (a minimal sketch follows this list).
- Collaborate with product, policy, legal, and engineering teams to embed Responsible AI principles throughout the model lifecycle.
- Publish in top-tier venues and represent Oracle in academic and industry forums.
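To make the evaluation-protocol bullet above concrete, here is a minimal, purely illustrative Python sketch of a toy robustness check. The `classify_toxicity` function is a hypothetical stand-in for a real safety model, and the perturbation and prompts are made up for the example; a production evaluation would use real models, curated benchmarks, and far richer perturbations.

```python
import random
from typing import Callable, List

def classify_toxicity(text: str) -> int:
    """Hypothetical stand-in for a real model: 1 = toxic, 0 = benign.
    Deliberately case-sensitive so the toy perturbation below can expose a gap."""
    toxic_markers = {"hate", "stupid", "idiot"}
    return int(any(word in toxic_markers for word in text.split()))

def perturb(text: str, rng: random.Random) -> str:
    """Toy perturbation: upper-case one randomly chosen word."""
    words = text.split()
    i = rng.randrange(len(words))
    words[i] = words[i].upper()
    return " ".join(words)

def robustness_flip_rate(model: Callable[[str], int], prompts: List[str], seed: int = 0) -> float:
    """Fraction of prompts whose predicted label changes after perturbation."""
    rng = random.Random(seed)
    flips = sum(model(p) != model(perturb(p, rng)) for p in prompts)
    return flips / len(prompts)

if __name__ == "__main__":
    prompts = ["you are stupid", "have a nice day", "what an idiot move", "thanks for the help"]
    print(f"label flip rate under perturbation: {robustness_flip_rate(classify_toxicity, prompts):.2f}")
```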
Minimum Qualifications
- Ph.D. in Computer Science, Machine Learning, NLP, or a related field with top-tier publications.
- Hands-on experience with LLMs including fine-tuning, evaluation, and prompt engineering.
- Expertise in building or evaluating Responsible AI systems focused on fairness, safety, and interpretability.
- Proficiency in Python and ML/DL frameworks such as PyTorch or TensorFlow.
- Strong understanding of model evaluation techniques and metrics for bias, robustness, and toxicity.
- Creative problem-solving skills with a rapid prototyping mindset and collaborative attitude.
Preferred Qualifications
- Experience with RLHF or other alignment methods.
- Open-source contributions in the AI/ML community.
- Experience with model guardrails, safety filters, or content moderation systems.
Why Join Us
You will work at the intersection of AI innovation and Responsible AI, shaping safe and trustworthy machine learning systems and influencing Oracle's global products. This role includes competitive benefits, comprehensive health plans, and numerous employee support programs.
Key Skills and Competencies
- Responsible AI
- LLMs
- Fairness
- Safety
- Red teaming
- Bias mitigation
- Evaluation protocols
- Prompt engineering
- Python
- ML/DL frameworks
How to Get Hired at Oracle
🎯 Tips for Getting Hired
- Tailor your resume: Highlight AI research and Responsible AI skills.
- Showcase publications: Include top-tier conference contributions.
- Emphasize technical expertise: Detail Python, PyTorch, TensorFlow experience.
- Prepare for cross-team discussions: Demonstrate collaborative project examples.
📝 Interview Preparation Advice
Technical Preparation
- Review LLM fine-tuning techniques and protocols.
- Practice Python coding challenges and framework use.
- Study evaluation metrics for bias and safety (see the sketch after this list).
- Familiarize yourself with RL and prompt engineering methods.
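For the evaluation-metrics item above, the following is a small, self-contained sketch (illustrative only; the data are made up) of one common fairness measure, the demographic parity gap, which is the kind of metric you may be asked to define or implement in an interview.

```python
from collections import defaultdict
from typing import Dict, List

def demographic_parity_gap(preds: List[int], groups: List[str]) -> float:
    """Largest difference in positive-prediction rate between any two groups.
    preds: binary model outputs (1 = positive/flagged); groups: group label per example."""
    totals: Dict[str, int] = defaultdict(int)
    positives: Dict[str, int] = defaultdict(int)
    for pred, group in zip(preds, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Made-up predictions and group labels, for illustration only.
    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```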
Behavioral Questions
- Describe teamwork on cross-functional projects.
- Explain how you handle research challenges under pressure.
- Discuss conflict resolution in collaborative settings.
- Illustrate creativity in problem-solving scenarios.
Frequently Asked Questions
- What background does Oracle seek for a Principal Applied Scientist role?
- How important are publications for the Principal Applied Scientist role at Oracle?
- What technical skills are essential for Oracle's Principal Applied Scientist role?
- How does Oracle approach Responsible AI in this role?
- What collaborative practices are expected for this role at Oracle?