Model Safety Policy Project Intern
ByteDance

Job Description
As a core member of ByteDance's Seed Global Data Team, you will gain first-hand experience with the intricacies of training Large Language Models (LLMs) on diverse data sets. As a project intern, you will engage in impactful short-term projects that offer a glimpse of real-world professional experience, build practical skills through on-the-job learning in a fast-paced environment, and develop a deeper understanding of your career interests.
Applications will be reviewed on a rolling basis, so we encourage you to apply early. Successful candidates must be able to commit to an internship period of at least 3 months.
Your Role Will Involve
- Help to conduct research on the latest developments in AI safety across academia and industry, and support the identification of limitations in existing evaluation paradigms.
- Aid in designing and continuously refining safety evaluation frameworks for multimodal models to assess safety-related behaviors, failure modes, and alignment with responsible AI principles.
- Support projects to enforce safety training or evaluate safety data, which may include data analysis to uncover insights that will inform model iteration and product design improvements.
Role Considerations
Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:
- Hate speech or harassment
- Self-harm or suicide-related content
- Violence or cruelty
- Child safety violations
Resources and resilience training will be provided to support employee well-being.
Qualifications
Minimum Qualifications
- Currently pursuing Bachelor's or Master's degree in AI policy, Computer Science, Engineering, Journalism, International Relations, Law, Regional Studies, or a related discipline.
- Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
- Creative problem-solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.
Preferred Qualifications
- Experience in AI safety, Trust & Safety, risk consulting, or risk management is highly desirable.
- A growth mindset, with a genuine receptiveness and enthusiasm for continuous learning.
- Readiness to actively solicit and apply constructive feedback.
- Intellectually curious, self-motivated, detail-oriented, and team-oriented.
- Deep interest in emerging technologies, user behavior, and the human impact of AI systems.
- Enthusiasm for learning from real-world case studies and applying insights in a high-impact setting.
Key Skills & Competencies
- AI Safety
- Large Language Models (LLMs)
- Data Analysis
- Responsible AI Principles
- Evaluation Frameworks
- Research & Development
- Policy & Governance
- Problem-Solving
- Risk Management
- Machine Learning Ethics