Integrity Operations Manager, Sensitive Content
Bumble Inc.

Job Description
Overview
Every decision in our sensitive content workflow shapes whether people feel safe, respected, and empowered when they show up on Bumble. That responsibility sits at the heart of our mission to build a world where all relationships are healthy and equitable. The Integrity Operations team transforms policy into practice — designing and scaling moderation systems that reduce harm while protecting expression and member trust.
As Integrity Operations Manager, Sensitive Content, you will own the Human-in-the-Loop (HITL) layer of our image classification pipeline, ensuring decisions are consistent, policy-aligned, and grounded in real member impact. Partnering closely with Policy, Product, Engineering, and AI teams, you’ll continuously improve how we detect and respond to sensitive content at scale. This role calls for disciplined ownership, thoughtful AI fluency, and a deep commitment to our values of Respect and Courage — especially when navigating complex or high-risk material.
Please note: this position involves exposure to sensitive and potentially graphic content.
What You'll Do
- Lead day-to-day operations for the Sensitive Content pillar, ensuring accurate, timely, and policy-aligned image classification outcomes that reduce harm and protect member experience.
- Own end-to-end BPO and AI moderation vendor governance, including SLA definition, performance management, quality assurance frameworks, and structured business reviews that drive continuous improvement.
- Translate sensitive content policies and taxonomy updates into clear annotation guidelines, decision trees, and workflow documentation; run calibration sessions and inter-rater alignment exercises to strengthen consistency.
- Design and evolve quality measurement frameworks, including sampling strategies, error trend analysis, reviewer accuracy tracking, and root-cause insights that inform targeted training plans.
- Partner cross-functionally with Policy, Product, Engineering, and Machine Learning teams to improve moderation tooling, classifier performance feedback loops, and pipeline design — demonstrating an agile mindset as systems evolve.
- Coordinate special labeling initiatives (e.g., new harm typologies, taxonomy refinements, model retraining datasets), taking ownership from insight to impact with defined success metrics and clear timelines.
- Build and communicate operational reporting across quality, throughput, backlog health, escalation volumes, and cost efficiency — transforming data into clear narratives and actionable recommendations.
- Model calm, values-led decision-making when managing high-sensitivity escalations, balancing speed, risk, and member impact while upholding Bumble’s values of Respect and Excellence.
About You
- Typically requires 4–6 years of experience, though we welcome candidates with alternative backgrounds that demonstrate equivalent skills.
- Experience leading large-scale vendor or BPO moderation operations, including SLA management, structured QA programs, governance cadences, and distributed team performance oversight.
- Strong working knowledge of Trust & Safety policy taxonomies and demonstrated experience operationalizing them into labeling schemas, annotation standards, and moderation workflows.
- Hands-on experience supporting AI/ML-driven safety systems, including Human-in-the-Loop review design, dataset quality controls, calibration methodologies, and feedback loops for model improvement.
- Comfort with operational data analysis, including building reporting dashboards, conducting trend and variance analysis, identifying error themes, and presenting insights clearly; SQL proficiency is a strong plus.
- Demonstrated ability to collaborate with purpose across Policy, Product, Engineering, QA/Learning & Development, and external vendors — while taking ownership for delivery and outcomes.
- Strong problem-solving judgment under ambiguity, with the ability to see things through from insight to measurable impact and adapt quickly as harm patterns evolve.
- Thoughtful AI fluency: you understand where automation accelerates harm detection, where human judgment is essential, and how to continuously strengthen HITL systems without compromising fairness or member trust.
- A values-driven operator who fosters psychologically safe ways of working, demonstrates Curiosity when evaluating edge cases, and upholds Respect when navigating sensitive subject matter.
Key Skills & Competencies
- Integrity Operations
- Sensitive Content Moderation
- Policy Implementation
- AI Fluency
- BPO Management
- Quality Assurance
- Data Analysis
- Cross-functional Collaboration
- Risk Management
- Trust & Safety