Senior Software Engineer, LLM Evaluation
Talent Bridge
Job Overview

Job Description
About the Opportunity
One of our global AI research clients is building advanced evaluation and training datasets to improve large language models on realistic software engineering tasks. This project focuses on creating verifiable software engineering challenges derived from public repository histories using a structured, human-in-the-loop approach. The goal is to expand dataset coverage across programming languages, complexity levels, and real-world development scenarios.
Role Overview
We are seeking experienced, tech lead–level software engineers who are comfortable working with high-quality public GitHub repositories (500+ stars). This role combines hands-on engineering work with AI model evaluation, contributing directly to how AI systems interact with real-world codebases.
What You’ll Do
- Analyze and triage GitHub issues across widely used open-source repositories
- Set up and configure repositories, including Dockerization and development environment automation
- Evaluate unit test coverage, quality, and reliability
- Run, modify, and debug real-world codebases locally to assess AI model performance in bug-fixing and implementation tasks
- Collaborate with AI researchers to identify challenging repositories and issue types for LLM evaluation
- Contribute to designing structured, verifiable software engineering tasks
- Potentially lead and mentor junior engineers on repository validation projects
Required Skills
- 5+ years of professional software engineering experience
- Strong expertise in at least one of the following: Python, JavaScript, Java, Go, Rust, C/C++, C#, or Ruby
- Deep understanding of software architecture, debugging, and code quality standards
- Proficiency with Git, Docker, and development pipeline setup
- Ability to navigate and evaluate complex, production-grade codebases
- Experience contributing to or reviewing open-source projects is a plus
Nice to Have
- Experience participating in AI/LLM evaluation or research initiatives
- Background in building developer tools, automation systems, or code verification agents
- Experience leading small engineering teams
Engagement Details
- Contractor engagement (no medical benefits or paid leave)
- 20 hours per week with partial PST overlap
- Duration: 3 months
- Expected start date: Next week
- Fully remote
This role offers a unique opportunity to combine deep software engineering expertise with frontier AI research, directly influencing how large language models understand and solve real-world coding problems.
Key Skills/Competencies
- LLM Evaluation
- Software Engineering
- Python
- JavaScript
- Git
- Docker
- Debugging
- Code Quality
- Software Architecture
- Open-source Contributions
How to Get Hired at Talent Bridge
- Research Talent Bridge's mission: Study their AI research clients and project goals, particularly for LLM evaluation.
- Tailor your resume: Highlight your experience in LLM evaluation, software debugging, and contributions to open-source projects.
- Showcase technical depth: Emphasize expertise in relevant programming languages such as Python, JavaScript, Java, and Go, along with proficiency in Git and Docker.
- Prepare for problem-solving: Practice debugging complex codebases and discussing software architecture and design patterns effectively.
- Demonstrate collaboration: Be ready to discuss experiences working with researchers or cross-functional teams in an iterative environment.