AI Content Safety Evaluator @ Taskify
Hybrid
$90,000
Part Time
Posted 9 hours ago
Job Details
About the AI Content Safety Evaluator Role
Taskify believes that AI safety starts with high-quality human data. We are building a flexible team of Safety Specialists to annotate and evaluate AI-generated content, ensuring outputs are safe, fair, and aligned with human values.
What You’ll Do
- Annotate AI-generated content for safety criteria such as bias, misinformation, disallowed content, and unsafe reasoning.
- Apply harm taxonomies and guidelines consistently.
- Document decision-making processes to improve annotation guidelines.
- Collaborate with researchers and engineers to enhance AI safety research and model development.
Who You Are
You are experienced in model evaluation, structured annotation, or applied research, and skilled at spotting subtle biases and inconsistencies. You must be comfortable in a fast-paced, evolving environment and capable of clearly explaining and defending your judgments.
Additional Information
- Role may involve reviewing sensitive content with support and clear guidelines.
- Work is text-based, remote, and flexible; suitable for both full-time and part-time contributors.
- Preferred candidates operate within US time zones; open to applicants from US, UK, and Canada.
Why Join Taskify?
Work on the cutting edge of AI safety and gain experience in a rapidly expanding, impactful field. Collaborate with experts across law, research, engineering, and creative fields to shape future AI systems.
Key Skills/Competencies
- AI safety
- Data annotation
- Model evaluation
- Bias detection
- Misinformation
- Research collaboration
- Guideline adherence
- Structured analysis
- Decision documentation
- Remote work
How to Get Hired at Taskify
🎯 Tips for Getting Hired
- Prepare a tailored resume: Clearly highlight your AI evaluation and annotation skills.
- Research Taskify: Understand the company's mission and recent projects.
- Customize your cover letter: Emphasize experience with bias detection and content safety.
- Practice interview questions: Be ready to discuss annotation guidelines and your decision-making.
📝 Interview Preparation Advice
Technical Preparation
- Review harm taxonomy guidelines.
- Study AI evaluation methods.
- Practice data annotation exercises.
- Familiarize yourself with structured analysis tools.
Behavioral Questions
- Describe how you resolved an ambiguous safety decision.
- Explain your annotation rationale clearly.
- Discuss how you handle sensitive content review.
- Share experiences collaborating with cross-functional teams.
Frequently Asked Questions
What qualifications does Taskify seek for the AI Content Safety Evaluator role?
How does Taskify support its AI Content Safety Evaluators during sensitive content reviews?
What interview process should applicants expect for the Taskify AI Content Safety Evaluator role?