Solutions Engineer — Language Models
About Artificial Analysis
Artificial Analysis is the leading independent AI benchmarking and insights company. We help engineers and enterprises understand AI capabilities and make critical decisions about their AI strategies. We are the go-to authority for understanding AI, for AI labs, enterprises, media, investors, and policymakers alike. Our benchmarks don't just measure the cutting edge of AI; they actively shape the frontier. Our benchmarks and analysis are trusted by hundreds of thousands of users and are the standard reference for leading AI labs including OpenAI, Google, Meta, NVIDIA, and Anthropic, and for major publications including the Wall Street Journal, Bloomberg, the Financial Times, and The Economist.

We are a team of 25+, on track to double by mid-year, backed by Nat Friedman (GitHub, Meta), Daniel Gross (SSI), Andrew Ng (Google Brain, DeepLearning.AI, Amazon), Adam D'Angelo (Quora, Poe, OpenAI), Clem Delangue (Hugging Face), and other industry leaders.
The Opportunity
Artificial Analysis maintains one of the most comprehensive language model benchmarking suites in the industry. We're hiring a Solutions Engineer to own the day-to-day operation of our language model benchmarking stack. This is a hands-on, operational role: you'll add new models to our evaluation pipeline, run and debug benchmarks, and serve as the primary technical point of contact for AI lab customers — explaining results, fielding methodology questions, and resolving API endpoint issues over Slack and video calls. This is not a software engineering role focused on building new systems. It's about running a sophisticated existing stack exceptionally well, consistently and reliably, while being the trusted technical face of Artificial Analysis to our customers.
What You’ll Do
- Operate and maintain our Python-based language model benchmarking pipeline end-to-end: onboard new models, configure evaluations, execute benchmark runs, and validate results
- Debug issues across the stack — from API endpoint timeouts and errors to unexpected benchmark outputs — and resolve them quickly
- Serve as the primary technical contact for AI lab customers: communicate benchmarking results clearly, explain methodology, and troubleshoot integration issues via Slack and video calls
- Monitor benchmark runs for anomalies, investigate discrepancies, and ensure the accuracy and integrity of published results
- Maintain documentation of processes, known issues, and model-specific configurations
- Collaborate with the engineering team to flag pipeline improvements and contribute to process refinements
- Stay current with new model releases, API changes, and developments across the language model ecosystem
What We’re Looking For
- 5+ years of experience in a client-facing technical role — solutions engineering, support engineering, technical consulting, or similar
- Strong Python proficiency and comfort working with complex codebases you didn't write
- Hands-on experience working with AI/ML model APIs (OpenAI, Anthropic, Google, Meta, etc.)
- Excellent debugging skills — you can trace issues across APIs, data pipelines, and code
- Strong written and verbal communication skills, with the ability to explain complex technical concepts clearly to technical stakeholders
- Highly responsive and reliable — you take ownership of customer issues and follow through
- Comfortable with operational, repeatable work — you find satisfaction in running things well rather than building from scratch
Why Artificial Analysis?
- Shape how AI gets built: The leading AI labs track our benchmarks and use them to guide their development priorities. Your work will directly influence the direction of AI.
- Become a world expert in AI: You will evaluate every major model, across every major capability, as it is released. Very few roles offer this breadth of exposure to frontier AI.
- Work with the most important players in AI: You'll manage relationships with teams at the leading AI labs and major enterprises as a trusted, independent voice.
- Join at a defining moment: We're 25+ people, doubling this year, backed by some of the most connected investors in AI. The people who join now will shape the product, the team, and the strategy as we scale.
- Competitive compensation, including equity
- Our team is split across San Francisco, Sydney, and Melbourne