ML Research Scientist

kadence
Montreal, Quebec, Canada
Full-time
AI tools:
PyTorch
Applications go directly to the hiring team

Join a mission-driven AI research lab in Montreal as an ML Research Scientist, focusing on developing safer AI systems. You'll collaborate with a small team of world-class researchers, working on foundational research that aims to shape the future of AI safety without the pressure of commercial cycles.

Full-time
On-site
PhD in Computer Science, Physics, Mathematics, or related field (or equivalent research depth)

Skills & Expertise

Python
PyTorch
JAX
probabilistic modelling
Bayesian inference
statistical learning theory
causality
uncertainty estimation

Key Responsibilities

Conduct foundational research on probabilistic safety guarantees for AI systems.

Develop methodologies for reasoning, causality, robustness, and generalisation.

Design experimental frameworks to test and validate safety-oriented approaches.

Full Description

ML Research Scientist - Leading AI Lab, Montreal

We're partnering with a highly respected, mission-driven AI research lab tackling one of the field's most important long-term challenges: building AI systems that are fundamentally safe by design.

The Role:

We're seeking an ML Research Scientist to contribute to foundational work on safe AI systems. This is a generalist research position focused on developing principled, technically rigorous approaches to AI safety — centred on probabilistic guarantees, structured reasoning, and uncertainty modelling rather than post-training mitigation techniques.

What You'll Do:

• Conduct foundational research on probabilistic safety guarantees for large-scale AI systems

• Develop new methodologies for reasoning, causality, robustness, and generalisation

• Advance work on uncertainty quantification and reliability in language models

• Contribute to distributed training and evaluation of large-scale models under safety constraints

• Design rigorous experimental frameworks to test and validate safety-oriented approaches

• Publish high-impact research advancing the field of AI safety

• Work closely with a small team of world-class researchers operating at the intersection of theory and large-scale experimentation

What We're Looking For:

• PhD in Computer Science, Physics, Mathematics, or related field (or equivalent research depth in industry)

• Strong publication record in leading venues (e.g., NeurIPS, ICML, ICLR, COLT, ACL)

• Deep experience with language models and large-scale training pipelines

• Strong grounding in areas such as probabilistic modelling, Bayesian inference, statistical learning theory, causality, robustness, or uncertainty estimation

• Solid mathematical foundations combined with production-level coding skills (Python; PyTorch/JAX experience preferred)

• Experience designing and running large-scale experiments with rigorous evaluation

• Pragmatic approach to AI safety research (not ideological)

• Collaborative mindset — low ego, team-first mentality

Why This Matters:

This lab operates independently without commercial pressure cycles, allowing full focus on rigorous, long-horizon research. You'll work alongside some of the best researchers globally who are contributing to technically serious solutions that could shape the future of AI safety.

Location: Montreal (on-site) - open to candidates willing to relocate