GenAI Team Lead

Alice (Formerly ActiveFence)

On Site
Full Time
$180,000
Ramat Gan, Tel Aviv District, Israel

Job Overview

Job Title: GenAI Team Lead
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master's
Offered Salary: $180,000
Location: Ramat Gan, Tel Aviv District, Israel

Job Description

GenAI Team Lead at Alice

Alice is seeking an experienced and detail-oriented GenAI Team Lead to oversee complex research and delivery efforts focused on identifying and mitigating risks in Generative AI systems. In this role, you will lead a multidisciplinary team conducting adversarial testing, risk evaluations, and data-driven analyses that strengthen AI model safety and integrity.

You will be responsible for ensuring high-quality project delivery, from methodology design and execution to client communication and final approval of deliverables. This position combines hands-on red teaming expertise with operational leadership, strategic thinking, and client-facing collaboration.

Key Responsibilities

Operational and Quality Leadership
  • Oversee the production of datasets, reports, and analyses related to AI safety and red teaming activities.
  • Review and approve deliverables to ensure they meet quality, methodological, and ethical standards.
  • Deliver final outputs to clients following approval and provide actionable insights that address key risks and vulnerabilities.
  • Offer ongoing structured feedback on the quality of deliverables and the efficiency of team workflows, driving continuous improvement.
Methodology and Research Development
  • Design and refine red teaming methodologies for new Responsible AI projects.
  • Guide the development of adversarial testing strategies that target potential weaknesses in models across text, image, and multimodal systems.
  • Support research initiatives aimed at identifying and mitigating emerging risks in Generative AI applications.
Client Engagement and Collaboration
  • Attend client meetings to address broader methodological or operational questions.
  • Represent the red teaming function in cross-departmental collaboration with other Alice teams.

Requirements

Must Have
  • Proven background in red teaming, trust and safety (T&S), AI safety research, or Responsible AI operations.
  • Demonstrated experience managing complex projects or teams in a technical or analytical environment.
  • Strong understanding of adversarial testing methods and model evaluation.
  • Excellent communication skills in English, both written and verbal.
  • Exceptional organizational ability and attention to detail, with experience balancing multiple priorities.
  • Confidence in client-facing environments, including presenting deliverables and addressing high-level questions.
Nice to Have
  • Advanced academic or research background in AI, computational social science, or information integrity.
  • Experience authoring or co-authoring publications, white papers, or reports in the fields of AI Safety, Responsible AI, or AI Ethics.
  • Engagement in professional or academic communities related to Responsible AI, trust and safety, or machine learning security.
  • Participation in industry or academic conferences.
  • Familiarity with developing or reviewing evaluation frameworks, benchmarking tools, or adversarial datasets for model safety testing.
  • Proven ability to mentor researchers and foster professional development within technical teams.
  • A proactive, research-driven mindset and a passion for ensuring safe, transparent, and ethical AI deployment.

About Alice

Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines.

In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.

Key skills/competency

  • Generative AI Safety
  • Red Teaming
  • Adversarial Testing
  • Responsible AI
  • Team Leadership
  • Project Management
  • AI Model Evaluation
  • Risk Mitigation
  • Data Analysis
  • Client Communication

Tags:

GenAI Team Lead
Generative AI Safety
Red Teaming
Adversarial Testing
Risk Mitigation
Team Leadership
Project Management
AI Model Evaluation
Data Analysis
Client Engagement
Responsible AI Operations
Generative AI
Machine Learning
AI Ethics
Natural Language Processing
Computer Vision
Multimodal AI
Data Science
Cybersecurity
Evaluation Frameworks
Trust and Safety

How to Get Hired at Alice (Formerly ActiveFence)

  • Research Alice's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor to understand their commitment to AI safety and trust.
  • Customize your resume for GenAI Team Lead: Highlight your experience in red teaming, AI safety research, project management, and leadership. Use keywords like "adversarial testing," "Responsible AI," and "model evaluation."
  • Prepare for technical discussions: Be ready to discuss your expertise in designing red teaming methodologies, evaluating AI systems, and mitigating risks in Generative AI. Showcase practical examples.
  • Showcase leadership and client skills: Emphasize your ability to lead multidisciplinary teams, manage complex projects, provide structured feedback, and confidently engage with clients on technical and operational matters.
  • Demonstrate passion for AI ethics: Articulate your proactive, research-driven mindset and commitment to safe, transparent, and ethical AI deployment, aligning with Alice's core mission.
