AI Content Safety Evaluator @ Taskify
About the Role
As an AI Content Safety Evaluator at Taskify, you will use human judgment to ensure AI outputs are safe, fair, and aligned with human values. You will evaluate and annotate AI-generated content, assessing bias, misinformation, disallowed content, and unsafe reasoning.
Responsibilities
- Annotate AI-generated content against safety criteria.
- Apply harm taxonomies and guidelines consistently.
- Document your decision-making process to help improve guidelines.
- Collaborate with researchers and engineers on AI safety.
Who You Are
You have experience in model evaluation, structured annotation, or applied research. You excel at spotting subtle biases and unsafe behaviors and can clearly explain and defend your reasoning in dynamic, fast-paced environments.
Additional Information
This role involves reviewing sensitive content with provided support and clear guidelines. It is text-based, remote, and offers flexible scheduling for both full-time and part-time contributors.
Why Join Taskify?
Work at the cutting edge of AI safety, gain impactful experience, and join a mission-driven team dedicated to building trustworthy AI systems.
Key Skills & Competencies
- AI Safety
- Content Annotation
- Bias Detection
- Research Collaboration
- Decision Documentation
- Applied Research
- Guideline Application
- Remote Work
- Problem Solving
- Flexible Scheduling
🎯 Tips for Getting Hired at Taskify
- Customize your resume: Tailor experiences to AI safety.
- Showcase annotation skills: Highlight similar projects.
- Research Taskify: Understand its AI safety mission.
- Prepare examples: Have past case studies ready to discuss.