Legal AI Quality Analyst

Sourcebae

Job Overview

  • Job Title: Legal AI Quality Analyst
  • Job Type: Contractor
  • Offered Salary: ₹0
  • Location: Hybrid

Job Description

About Sourcebae

Sourcebae is India’s largest domain-expert network for AI data operations, serving global AI leaders including Google, Microsoft, Amazon, Meta, Anthropic, and NVIDIA. We connect credentialed professionals — licensed lawyers, AIIMS doctors, IIT/NIT engineers, CAs — with the world’s most advanced AI companies to make artificial intelligence smarter, safer, and more reliable. Our legal domain experts play a critical role in training, evaluating, and red-teaming large language models (LLMs) to ensure AI systems produce accurate, legally sound, and ethically responsible outputs. This is not traditional legal work — this is the future of law meeting the future of technology.

Role Overview

As a Legal AI Quality Analyst at Sourcebae, you are the expert gatekeeper ensuring that AI systems do not produce legally dangerous, misleading, or harmful outputs. You will red-team AI models by stress-testing them with complex legal scenarios, adversarial prompts, and edge-case questions designed to expose failures in legal reasoning. You will evaluate AI outputs against Indian and international legal frameworks, flag compliance risks, identify hallucinated case citations, and help build safety guardrails that protect end-users from relying on incorrect legal information. This is a senior-level domain expert role designed for experienced lawyers, compliance professionals, and regulatory specialists who want to apply their deep legal knowledge to the most consequential challenge in AI: making it safe and trustworthy.

What You Will Do

  • Red-Teaming & Adversarial Testing: Design and execute adversarial legal prompts that expose AI model weaknesses, feeding the model edge-case scenarios, ambiguous legal questions, multi-jurisdictional conflicts, and trick questions to identify failure modes.
  • Legal Safety Evaluation: Evaluate AI outputs for dangerous legal advice, unauthorized practice of law risks, jurisdictional errors, outdated legal positions, and potential harm to users who might rely on the AI’s legal guidance.
  • Hallucination Detection: Systematically identify fabricated case citations, invented statutes, non-existent legal principles, and fake court rulings in AI-generated legal content. Verify every citation against authoritative legal databases.
  • Regulatory Compliance Review: Assess AI outputs against Indian regulatory frameworks (DPDP Act 2023, Companies Act 2013, SEBI regulations, RBI guidelines, IT Act 2000, Consumer Protection Act 2019, labour codes) and international standards (GDPR, SOX, HIPAA basics) for compliance accuracy.
  • Safety Guardrail Development: Write detailed failure reports with recommended guardrails, content policies, and response boundaries that AI development teams can implement to prevent legal misinformation.
  • Benchmark Creation: Create adversarial legal evaluation benchmarks: curated sets of tricky legal questions with verified answers that serve as ongoing test suites for AI model releases.
  • Cross-Domain Legal QA: Lead quality assurance reviews of other legal annotators’ work, calibrate scoring rubrics, resolve disagreements, and maintain inter-annotator reliability across the legal annotation team.
  • Policy & Ethics Input: Contribute to Sourcebae’s internal AI ethics and legal safety policy development, advising on where AI should and should not provide legal guidance.

Who Should Apply — Eligibility & Requirements

  • LLB (3-year or 5-year integrated) from a recognized Indian university — mandatory
  • Minimum 2 years of post-qualification legal experience in litigation, corporate advisory, regulatory compliance, or in-house legal roles
  • Deep working knowledge of at least three of the following: constitutional law, criminal law (IPC/BNS), contract law, corporate law (Companies Act 2013), data privacy (DPDP Act/IT Act), consumer protection, labour law, SEBI/securities regulation, IP law, or environmental law
  • Demonstrated ability to identify legal errors, logical fallacies, and factual inaccuracies in written legal analysis
  • Strong legal writing skills with the ability to produce detailed, structured error reports and safety assessments
  • Experience with legal research databases (SCC Online, Manupatra, Indian Kanoon, Westlaw, or equivalent)
  • Professional integrity, independent judgment, and comfort with critical evaluation of AI systems

Preferred / Good-to-Have

  • 5+ years of legal practice experience across multiple areas of law
  • Enrollment with a State Bar Council as a practicing advocate
  • Experience in regulatory compliance, risk advisory, or legal auditing roles
  • Background in policy research, judicial clerkship, or law reform work
  • Familiarity with AI safety concepts, responsible AI frameworks, or legal tech products
  • LL.M. or advanced legal qualification with specialization in regulatory/compliance law
  • Experience working with international legal frameworks (US, UK, EU jurisdictions)
  • Prior involvement in legal quality assurance, peer review, or editorial review of legal publications

Key Skills We Are Looking For

  • Advanced legal research and cross-referencing across jurisdictions
  • Critical analysis and adversarial thinking — the ability to “break” AI with tough legal questions
  • Regulatory knowledge across Indian statutory and regulatory frameworks
  • Legal writing — formal, precise, and capable of supporting safety policy recommendations
  • Quality assurance and calibration of legal annotation standards
  • Independent judgment and professional skepticism
  • Ability to explain complex legal concepts in structured, accessible language
  • Comfort with ambiguity and nuanced legal scenarios that lack clear-cut answers

Why Join Sourcebae?

  • Your legal expertise becomes the safety net for AI systems used by billions of people worldwide
  • Work on the highest-stakes AI safety challenges with clients like Anthropic, Google, and Meta through Sourcebae
  • This is the most senior and highest-paid role in Sourcebae’s legal domain expert network — your experience is valued accordingly
  • Shape the policies and guardrails that determine what AI can and cannot say about law
  • No coding required. Your legal judgment, adversarial thinking, and domain depth ARE the required skills.
  • Join an elite cohort of India’s top legal professionals working at the frontier of AI safety
  • Contribute to a mission that matters: making AI trustworthy, safe, and legally responsible

How to Apply

Apply on LinkedIn or visit careers.sourcebae.com. Submit your resume, a brief summary (3–5 lines) of your legal practice experience and areas of specialization, and optionally a writing sample (legal opinion, case analysis, or compliance memo). Shortlisted candidates will receive a paid red-teaming evaluation (60–90 minutes) testing adversarial legal thinking and error detection. Onboarding within 72 hours of clearing evaluation. Full AI safety training provided.

Key Skills / Competencies

  • Legal AI Quality Analyst
  • Adversarial Testing
  • Legal Safety Evaluation
  • Hallucination Detection
  • Regulatory Compliance
  • AI Ethics
  • Legal Research
  • Critical Analysis
  • Legal Writing
  • Quality Assurance

Tags:

Legal AI Quality Analyst
AI Safety
Legal Compliance
Regulatory Law
Adversarial Testing
LLM Evaluation
Legal Technology
AI Ethics
Litigation
Corporate Law

How to Get Hired at Sourcebae

  • Tailor your resume: Highlight your LLB, 2+ years of legal experience, and specific areas of legal expertise (e.g., data privacy, corporate law).
  • Craft a compelling summary: Clearly articulate your legal practice experience and specialization in 3–5 lines.
  • Showcase your skills: Emphasize your legal research, critical analysis, legal writing, and understanding of regulatory frameworks.
  • Prepare for evaluation: Practice identifying legal errors and adversarial thinking for the paid red-teaming assessment.
  • Demonstrate professional integrity: Highlight your independent judgment and comfort with evaluating AI systems.
