Data Scientist, Evals

Perplexity
Berlin, Germany
Full-time
AI tools: Perplexity
Applications go directly to the hiring team

As a Data Scientist at Perplexity, you'll join a small, high-impact team dedicated to enhancing answer quality for a cutting-edge LLM-first search engine used by tens of millions daily. This role focuses on building specialized evaluations and collaborating closely with technical leadership to drive product improvements in real-world applications.

Permanent
4+ years
PhD or MS in a technical field or equivalent experience

Skills & Expertise

Python
SQL
AWS
Databricks
LLMs
machine learning
cloud data stack
agentic coding workflows

Key Responsibilities

Architect and maintain automated evaluation pipelines for answer quality assessment.

Design evaluation sets to measure the impact of tool calls on answer quality.

Develop solutions based on vision-language models (VLMs) to evaluate answer rendering across platforms.

Full Description

Perplexity serves tens of millions of users daily with reliable, high-quality answers grounded in an LLM-first search engine and our specialized data sources. We aim to use the latest models as they are released, but the intelligence frontier is a jagged one, and popular benchmarks do not effectively cover our use cases. In this role, you will build specialized evals to improve answer quality across Perplexity, covering search-based LLM answers and other scenarios popular with our users.

Responsibilities

* Architect and maintain automated evaluation pipelines to assess answer quality across Perplexity's products, ensuring high standards for accuracy and helpfulness (a minimal sketch of such a scoring loop follows this list)

* Design evaluation sets and methods specifically to measure the impact of tool calls (particularly web search retrieval) on the final answer's quality

* Develop VLM-based solutions to programmatically evaluate how final answers render visually across different platforms and devices

* Continuously review public benchmarks and academic evaluations for their applicability to the Perplexity product, adapting and incorporating them into our regular performance measurements

* Operate within a small, high-impact team where your evaluation metrics directly shape product changes, collaborating closely with technical leadership to measure and improve answer quality
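
To make the pipeline work concrete, below is a minimal illustrative sketch of an LLM-as-a-judge scoring loop in Python. The `EvalCase` shape, the rubric, and the `call_judge_model` stub are assumptions for the example, not Perplexity's actual system.

```python
import json
from dataclasses import dataclass

@dataclass
class EvalCase:
    query: str          # user question
    answer: str         # model answer under evaluation
    sources: list[str]  # retrieved documents the answer is grounded in

JUDGE_PROMPT = """\
Rate the answer to the query on a 1-5 scale for factual accuracy and
helpfulness, judging only against the provided sources.
Query: {query}
Sources: {sources}
Answer: {answer}
Reply as JSON: {{"accuracy": <1-5>, "helpfulness": <1-5>}}"""

def call_judge_model(prompt: str) -> str:
    """Stub for a call to a judge LLM; wire in any chat-completion API."""
    raise NotImplementedError

def judge_answer(case: EvalCase) -> dict:
    """Score one answer with the judge model and parse its JSON verdict."""
    prompt = JUDGE_PROMPT.format(
        query=case.query,
        sources="\n".join(case.sources),
        answer=case.answer,
    )
    return json.loads(call_judge_model(prompt))

def run_eval(cases: list[EvalCase]) -> dict[str, float]:
    """Aggregate mean judge scores over an evaluation set."""
    scores = [judge_answer(c) for c in cases]
    return {
        "accuracy": sum(s["accuracy"] for s in scores) / len(scores),
        "helpfulness": sum(s["helpfulness"] for s in scores) / len(scores),
    }
```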

Qualifications

* PhD or MS in a technical field or equivalent experience

* 4+ years of experience in data science or machine learning

* Strong proficiency in Python and SQL (expected to write production-grade code)

* Experience building within a modern cloud data stack, specifically AWS and Databricks

* Comfortable with agentic coding workflows and using AI-assisted development tools to iterate faster

Preferred Qualifications

* 1+ years of experience working with LLMs at scale, specifically with LLM-as-a-judge setups

* Prior experience working on customer-facing web products or consumer apps, with real user traffic at scale

* A strong research background, with experience applying research methods to real-world ML problems

* Experience defining evaluation metrics (e.g., factual consistency, hallucination rate, retrieval precision) and building ground-truth datasets (see the metric sketch after this list)
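
To illustrate the last point, here is a minimal sketch of two such metrics in Python. The data shapes (cited URLs checked against a ground-truth relevance set, and claim-level support labels) are assumptions for the example.

```python
def retrieval_precision(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of retrieved documents found in the ground-truth relevant set."""
    if not retrieved:
        return 0.0
    return sum(doc in relevant for doc in retrieved) / len(retrieved)

def hallucination_rate(claims: list[tuple[str, bool]]) -> float:
    """Fraction of answer claims whose `supported` label (human- or
    judge-assigned) is False, i.e. claims not backed by the sources."""
    if not claims:
        return 0.0
    return sum(not supported for _, supported in claims) / len(claims)

# Toy usage:
retrieved = ["url/a", "url/b", "url/c"]
relevant = {"url/a", "url/c", "url/d"}
claims = [("Berlin is in Germany", True), ("Berlin has 10M residents", False)]
print(retrieval_precision(retrieved, relevant))  # 2/3 ~= 0.667
print(hallucination_rate(claims))                # 1/2 = 0.5
```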
