Senior Software Engineer LLM Evaluation
Talent Bridge

Job Description
About the Opportunity
One of our global AI research clients is building advanced evaluation and training datasets to improve large language models on realistic software engineering tasks. This project focuses on creating verifiable software engineering challenges derived from public repository histories using a structured, human-in-the-loop approach. The goal is to expand dataset coverage across programming languages, complexity levels, and real-world development scenarios.
Role Overview
We are seeking experienced, tech lead–level software engineers who are comfortable working with high-quality public GitHub repositories (500+ stars). This role combines hands-on engineering work with AI model evaluation, contributing directly to how AI systems interact with real-world codebases.
What You’ll Do
- Analyze and triage GitHub issues across widely used open-source repositories
- Set up and configure repositories, including Dockerization and development environment automation
- Evaluate unit test coverage, quality, and reliability
- Run, modify, and debug real-world codebases locally to assess AI model performance in bug-fixing and implementation tasks
- Collaborate with AI researchers to identify challenging repositories and issue types for LLM evaluation
- Contribute to designing structured, verifiable software engineering tasks
- Potentially lead and mentor junior engineers on repository validation projects
Required Skills
- 5+ years of professional software engineering experience
- Strong expertise in at least one of the following: Python, JavaScript, Java, Go, Rust, C/C++, C#, or Ruby
- Deep understanding of software architecture, debugging, and code quality standards
- Proficiency with Git, Docker, and development pipeline setup
- Ability to navigate and evaluate complex, production-grade codebases
Nice to Have
- Experience contributing to or reviewing open-source projects
- Experience participating in AI/LLM evaluation or research initiatives
- Background in building developer tools, automation systems, or code verification agents
- Experience leading small engineering teams
Engagement Details
- Contractor assignment (no medical or paid leave)
- 20 hours per week with partial PST overlap
- Duration: 3 months
- Expected start date: Next week
- Fully remote
This role offers a unique opportunity to combine deep software engineering expertise with frontier AI research, directly influencing how large language models understand and solve real-world coding problems.
Key Skills and Competencies
- LLM Evaluation
- Software Engineering
- Debugging
- Code Quality
- Git
- Docker
- Software Architecture
- Open Source
- AI Research
- Technical Leadership
How to Get Hired at Talent Bridge
- Research Talent Bridge's client: Study the client's AI research focus, mission, and recent work so you can align your application with their goals.
- Tailor your resume: Highlight extensive software engineering experience, particularly in LLM evaluation, open-source contributions, and relevant tech stacks like Python or Java.
- Showcase technical depth: Prepare to discuss complex debugging scenarios, software architecture, Git workflows, and Dockerization during technical interviews at Talent Bridge.
- Demonstrate collaboration and leadership: Emphasize your ability to work with AI researchers and any experience leading engineering efforts.
- Master codebase navigation: Be ready to articulate your approach to analyzing, setting up, and debugging production-grade public GitHub repositories.