AI Content Safety Evaluator
@ Taskify

Hybrid
$85,000
Part Time
Posted 9 hours ago

Job Details

About the Role

As an AI Content Safety Evaluator at Taskify, you will use human judgment to ensure AI outputs are safe, fair, and aligned with human values. You will evaluate and annotate AI-generated content, assessing bias, misinformation, disallowed content, and unsafe reasoning.

Responsibilities

  • Annotate AI-generated content for safety criteria.
  • Apply harm taxonomies and guidelines consistently.
  • Document decision-making processes for improving guidelines.
  • Collaborate with researchers and engineers on AI safety.

Who You Are

You have experience in model evaluation, structured annotation, or applied research. You excel at spotting subtle biases and unsafe behaviors and can clearly explain and defend your reasoning in dynamic, fast-paced environments.

Additional Information

This role involves reviewing sensitive content with provided support and clear guidelines. It is text-based, remote, and offers flexible scheduling for both full-time and part-time contributors.

Why Join Taskify?

Work at the cutting edge of AI safety, gain impactful experience, and join a mission-driven team dedicated to building trustworthy AI systems.

Key Skills and Competencies

  • AI Safety
  • Content Annotation
  • Bias Detection
  • Research Collaboration
  • Decision Documentation
  • Applied Research
  • Guideline Application
  • Remote Work
  • Problem Solving
  • Flexible Scheduling

How to Get Hired at Taskify

🎯 Tips for Getting Hired

  • Customize your resume: Tailor experiences to AI safety.
  • Showcase annotation skills: Highlight similar projects.
  • Research Taskify: Understand their AI safety mission.
  • Prepare examples: Be ready with past case studies.

📝 Interview Preparation Advice

Technical Preparation

Review AI safety annotation guidelines.
Study harm taxonomy frameworks.
Practice documenting decision processes.
Brush up on data evaluation techniques.

Behavioral Questions

Describe a time you handled an ambiguous task.
Explain how you make decisions under pressure.
Share experiences collaborating with engineers.
Discuss how you adapt to rapid change.

Frequently Asked Questions