Korean Trust & Safety Data Trainer
SME Careers

Job Description
This is a remote, hourly-paid contract role in which you will review AI-generated content and safety decisions, evaluate reasoning quality and step-by-step problem-solving, and provide expert feedback so outputs are accurate, logical, and clearly explained. You will assess solutions for correctness and clarity, spot methodological or conceptual errors, fact-check where needed, and rate and compare multiple responses for safety and policy alignment. You must have near-native or native Korean proficiency and be able to make nuanced judgments across Korean and English content.

This role is with SME Careers, a fast-growing AI data services company and subsidiary of SuperAnnotate, which supports many of the world’s largest AI companies and foundation-model labs. Your annotations on explicit safety tasks will help prevent models from unintentionally or adversarially generating toxic or unsafe outputs. As part of this work to improve the world’s premier AI models, you may be exposed to content that is sexual, violent, or psychologically disturbing in nature.
Key Responsibilities
- Label and quality-check safety data across categories such as hate/harassment, sexual content, self-harm, violence, bias, illegal goods/services, malicious activities, malicious code, and deliberate misinformation.
- Perform red-teaming and adversarial testing by identifying realistic attack patterns, edge cases, and policy gray areas; document rationales and recommend mitigations to reduce unsafe outcomes.
- Apply and localize safety policies consistently across Korean and English: detect cultural nuance, slang, coded language, and context shifts; escalate uncertainty using documented decision paths.
Your Profile
- Bachelor’s degree or higher in a relevant field (e.g., Communications, Linguistics, Psychology, Law/Policy, Security Studies) or equivalent professional experience.
- Near-native or native Korean proficiency (reading/writing) for high-precision safety labeling and cultural-linguistic nuance.
- Minimum C1 English proficiency (reading/writing) for policy interpretation, prompt understanding, and consistent documentation.
- Experience in Trust & Safety, content moderation, policy operations, risk, compliance, investigations, or related safety functions (senior level).
- LLM red-teaming / adversarial testing experience is required (documented examples of edge-case discovery and mitigation recommendations).
- Localization/translation experience is highly preferred; able to preserve meaning, severity, and intent across languages.
- Emotional resilience: comfortable annotating unsafe, explicit, and/or toxic content, including content of a sexual, violent, or psychologically disturbing nature.
- Highly detail-oriented with strong judgment, consistency, and ability to follow evolving written guidelines.
- Strong analytical writing: concise rationales, clear decision paths, and reproducible reasoning for disagreements.
- Secure and confidential handling of sensitive content; reliable remote work practices and time management.
- Strong hands-on experience with AI tools such as Perplexity, Gemini, and ChatGPT.
Key Skills and Competencies
- AI Content Review
- Trust & Safety Policies
- Korean Language Fluency
- LLM Red-Teaming
- Data Labeling
- Adversarial Testing
- Cultural Nuance Detection
- Content Moderation
- Policy Localization
- Risk Mitigation
How to Get Hired at SME Careers
- Research SME Careers' culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume for AI Trust & Safety: Highlight experience in content moderation, linguistic nuance, and LLM red-teaming, using keywords specific to data training and AI safety.
- Showcase Korean language expertise: Provide concrete examples of your near-native Korean proficiency and ability to interpret cultural nuances in a professional context.
- Prepare for technical and behavioral interviews: Be ready to discuss past experiences with explicit content, policy application, and analytical problem-solving in detail.
- Demonstrate LLM red-teaming skills: Be prepared to discuss documented examples of identifying edge cases and recommending mitigations for unsafe AI outputs.