
Machine Learning Engineer: LLM Interpretability & Systems

CTGT
San Francisco, CA
Full-time
17,500 – 25,000 / year
AI tools:
PyTorch
Applications go directly to the hiring team

Full Description

About CTGT & The Mission

Despite massive investment in commercial AI, organizations often find that demonstrated value is elusive, primarily due to the non-deterministic risk inherent to generative models. CTGT is the deterministic governance layer that enables the most important global institutions to deploy AI workflows with confidence.

Born out of Stanford University research, we provide the control plane that makes this possible: a lightweight, model-agnostic system that enforces policy, prevents drift, and produces auditable decisions in real time.

Working at the edge of AI research, CTGT brings frontier intelligence into real-world environments. We apply cutting-edge theory directly in production to make large language models more reliable, controllable, and performant in practice.

Our mission is to bring models to the level of performance and accountability required by the Fortune 500. By bridging the gap between LLM capabilities and domain-specific requirements, we unlock the true potential of generative AI to solve the most pressing problems in our world today.

The Role

A new open-source model is released and you are compelled to reach inside and understand how it actually works. You instinctively push it beyond what most people already call impressive. When you observe model behavior, you don’t ask, “What’s a better prompt?”, but “How do I improve its fundamentals?”

CTGT’s Senior Machine Learning Engineer will operate deep within the model stack, working directly with weights, activations, and architectures to build the systems that make AI governance deterministic. Your work powers the Policy Engine, the core technology that gives enterprises real-time, auditable control over model behavior in production. Your mandate is simple to state but hard to execute: determine how a model can be improved for a specific purpose, then build the systems that operationalize that within our platform.

Rather than simply using models, you will probe the mechanics of their cognition.

What You Will Do

* Take ideas from mechanistic interpretability and related work and turn them into production code, making research real.

* Work directly with model internals to improve behavior and performance across commercial and open-source models.

* Leverage techniques like activation patching, control vectors, and feature extraction to achieve targeted, repeatable improvements in model output.

* Build the evaluation and deployment loops needed to ship changes reliably into enterprise environments.

* Design and optimize the feature-level intervention systems that enable deterministic policy enforcement at inference time.
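To make the techniques above concrete, here is a hypothetical minimal sketch of one of them: steering a model with a control vector, implemented as a PyTorch forward hook that adds a fixed direction to a layer's activations. This uses a toy network rather than a real LLM, and names like `steering_hook` are illustrative assumptions, not CTGT's actual API; in practice the vector would be derived from model internals (e.g. contrastive activation differences) rather than sampled at random.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-layer network standing in for one block of a transformer.
model = nn.Sequential(
    nn.Linear(8, 8),
    nn.ReLU(),
    nn.Linear(8, 4),
)

# A "control vector": a fixed direction added to hidden activations.
# (Random here; in real use it encodes a behavior to amplify or suppress.)
control_vector = torch.randn(8)

def steering_hook(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output,
    # so the rest of the forward pass sees the steered activations.
    return output + 2.0 * control_vector

x = torch.randn(1, 8)
baseline = model(x)

# Install the hook on the hidden layer, run the steered pass, then remove it.
handle = model[1].register_forward_hook(steering_hook)
steered = model(x)
handle.remove()

# The intervention changes the output; removing the hook restores it,
# which is what makes this kind of edit targeted and repeatable.
```

The same hook mechanism scales to real models: the intervention point becomes a chosen transformer layer's residual stream, and the scale factor becomes a tunable steering strength.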

Who You Are

* Strong understanding of Transformer architectures, PyTorch internals, and the mathematical foundations of deep learning.

* Have trained, fine-tuned, or optimized models at the weight level, beyond superficial prompt-level adjustments.

* Can read a paper, decide what matters, and implement it.

* Notice when something is not working and take ownership of fixing it.

* Motivated by the challenge of making large language models reliable and controllable enough for the highest-stakes enterprise applications.

What We Offer

* Compensation & Equity: Competitive base compensation, plus significant equity in a venture-backed company with institutional investors including Google’s Gradient Ventures, General Catalyst, and Y Combinator. We want people who think and act like owners.

* Real Impact: You will work directly on the core systems that determine how models perform in the wild. Your work ships into real, high-stakes environments where governance, auditability, and performance are non-negotiable.

* Autonomy & Trust: We operate with a high degree of trust. You are expected to form strong technical opinions and execute on them.
