Security Researcher II

Microsoft

Hybrid
Full Time
$175,000

Job Overview

Job Title: Security Researcher II
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master's
Offered Salary: $175,000
Location: Hybrid

Job Description

Overview

Security is a top priority for our customers navigating digital threats, regulatory scrutiny, and complex estates. Microsoft Security aims to create a safer world by reshaping security and empowering users with an end-to-end, simplified security cloud. The Microsoft Security organization drives Microsoft’s mission to secure digital technology platforms, devices, and clouds in diverse customer environments, while also securing our internal estate. Our culture fosters a growth mindset, inspires excellence, and encourages daily contribution to deliver life-changing innovations.

Are you a red teamer eager to enter the AI field? Do you want to uncover AI failures in Microsoft’s largest AI systems, which affect millions of users? We are seeking a Security Researcher II to join Microsoft’s AI Red Team. In this role, you will proactively hack high-stakes Generative AI (GenAI) technology before launch, providing crucial input for mitigations through real-world examples of the security, trust, and safety failures you cause in Microsoft’s largest AI systems. You will lead AI Security and Safety research as a red teamer, enhancing AI security and supporting customers as they expand their use of our AI systems.

Our team comprises an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Safety & Responsible AI experts, and software developers. Our mission is to proactively identify failures in Microsoft’s key AI systems. You will red team AI models and applications across Microsoft’s AI portfolio, including Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot. This sprint-based work involves collaboration with AI Safety, Security, and Product Development teams to conduct operations designed to find safety and security risks that inform critical internal business decisions. This is a dynamic team environment with diverse responsibilities within the AI Security and Safety domain, ideal for individuals who provide agile, practical insights and enjoy tackling ambiguous problems. Learn more about our AI Red Teaming approach: microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-s…

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees, we embrace a growth mindset, innovate to empower others, and collaborate to achieve shared goals. We uphold values of respect, integrity, and accountability to foster an inclusive culture where everyone thrives.

Responsibilities

  • Discover and exploit Generative AI vulnerabilities end-to-end to assess system safety.
  • Manage product group stakeholders as priority recipients and collaborators for operational sprints.
  • Drive clarity in communication and reporting for red teaming peers when engaging with product groups.
  • Develop methodologies, techniques, and research on emerging threats to scale and accelerate AI Red Teaming and AI Safety & Security across Microsoft.
  • Work alongside traditional offensive security engineers, adversarial ML experts, and developers to implement responsible AI operations.
  • Embody Microsoft’s culture and values.

Qualifications

Required Qualifications:

  • Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 2+ years of relevant experience (e.g., statistics, predictive analytics, research) OR
  • Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 1+ year(s) of relevant experience (e.g., statistics, predictive analytics, research) OR
  • Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field OR equivalent experience.
  • 2+ years of experience in AI research, red teaming, pen testing, AI red teaming, Responsible AI roles, or AI product development.

Other Requirements:

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These include, but are not limited to, the following specialized security screenings:

  • Microsoft Cloud Background Check: This position will require passing the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications:

  • Master's Degree in Statistics, Mathematics, Computer Science, or a related field OR
  • 4+ years of experience in the software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection.

Key Skills/Competencies

  • AI Security
  • Red Teaming
  • Vulnerability Exploitation
  • Generative AI
  • Adversarial ML
  • Offensive Security
  • Risk Assessment
  • Security Operations
  • Stakeholder Management
  • Responsible AI

How to Get Hired at Microsoft

  • Research Microsoft's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor. Focus on their growth mindset and commitment to AI safety.
  • Tailor your resume for AI security: Highlight specific experience in red teaming, adversarial ML, Generative AI vulnerabilities, and offensive security. Customize it to showcase relevant projects and achievements.
  • Showcase practical AI security skills: Prepare to discuss real-world examples of exploiting AI systems, penetration testing, or developing security mitigations during interviews. Demonstrate your problem-solving approach.
  • Prepare for technical assessments: Expect in-depth questions on AI/ML security vulnerabilities, threat modeling for GenAI, and offensive security techniques. Familiarize yourself with Microsoft's AI technologies.
  • Practice behavioral and situational questions: Emphasize your collaboration skills, ability to manage stakeholders, drive clarity in complex projects, and adapt to fast-moving, ambiguous problems within a team context.
