AI Safety - Trust & Safety Specialist @ Taskify
Job Details
Role Overview
At Taskify, our AI Safety - Trust & Safety Specialist role focuses on producing high-quality human data to power safe and compliant AI systems. You will annotate and evaluate AI outputs for bias, misinformation, unsafe reasoning, and disallowed content.
Key Responsibilities
- Annotate AI outputs against safety criteria.
- Apply harm taxonomies and guidelines consistently.
- Document reasoning to improve guidelines.
- Collaborate with teams to surface risks and enhance model safety.
Who You Are
You have experience in trust & safety, governance, or policy-to-product frameworks. Familiarity with harm taxonomies, safety-by-design principles, and regulatory frameworks such as the EU AI Act or NIST AI RMF is a plus. Your ability to translate abstract policies into concrete evaluation criteria and a commitment to reducing user harm make you a great fit.
Success Metrics
Your annotations must be accurate and consistent. You will help surface risks early, and your feedback will refine our guidelines and taxonomies, directly strengthening AI model safety and compliance.
Why Join Taskify?
Work at the frontier of AI safety, gain impactful experience, and join a team committed to making AI systems safer, more trustworthy, and aligned with human values. Taskify connects top experts with leading AI labs working on cutting-edge projects.
Key Skills/Competencies
- AI annotation
- Trust & safety
- Harm taxonomies
- Regulatory compliance
- Policy interpretation
- Data documentation
- Risk evaluation
- Guideline development
- Collaboration
- Ethical analysis
How to Get Hired at Taskify
🎯 Tips for Getting Hired
- Research Taskify's culture: Study their mission, values, and recent AI projects.
- Customize your resume: Highlight trust and safety experience.
- Showcase regulatory knowledge: Mention EU AI Act and NIST AI RMF.
- Prepare examples: Demonstrate decision-making in ambiguous cases.