AI Safety Analyst

Alignerr
Canada
Contract
4,000–12,000 / year
Applications go directly to the hiring team

About The Role

What if your curiosity and critical thinking could make AI safer for millions of people? We're looking for AI Safety Analysts to put AI systems under pressure — probing for harmful outputs, unsafe behavior, and unexpected failures before they cause real-world harm.

This is meaningful, intellectually engaging work at the frontier of AI development. You'll be working alongside top research labs to ensure AI models behave responsibly and safely. No cybersecurity or AI background required — just a sharp, questioning mind and the drive to find what others miss.

* Organization: Alignerr

* Type: Hourly Contract

* Location: Remote

* Commitment: 10–40 hours/week

What You'll Do

* Challenge AI systems with adversarial, edge-case, and creative inputs designed to surface unsafe or unexpected behavior

* Identify harmful, inappropriate, or policy-violating AI outputs across a range of topics and scenarios

* Document safety issues clearly and precisely with supporting examples and explanations

* Rate AI responses on safety, helpfulness, and quality using structured evaluation rubrics

* Follow red-teaming protocols and testing guides to systematically explore AI weaknesses

* Work independently and asynchronously on your own schedule

Who You Are

* A natural critical thinker who enjoys questioning assumptions and exploring edge cases

* Intellectually curious — you're comfortable venturing into unusual or sensitive scenarios to uncover problems

* Clear and precise communicator in written English — you can describe issues in a way others can act on

* Detail-oriented and consistent in your evaluations

* Genuinely interested in AI safety and the responsible development of technology

* No prior AI, cybersecurity, or technical background required

Nice to Have

* Experience in research, journalism, quality assurance, or policy analysis

* Familiarity with AI tools or language models as an end user

* Background in ethics, philosophy, law, or social sciences

* Prior experience with content moderation or trust and safety work

Why Join Us

* Work on cutting-edge AI safety projects alongside leading research labs

* Fully remote and flexible — work when and where it suits you

* Freelance autonomy with the structure of meaningful, task-based work

* Contribute to AI development that has a real and lasting impact on how safely AI operates in the world

* Potential for ongoing work and contract extension as new projects launch