AI Red Team Tester

Alignerr
France
Contract
4,000 – 12,000 / year
AI tools:
ChatGPT
Applications go directly to the hiring team

Full Description

About The Role

What if your curiosity and creative thinking could make AI safer for millions of people? We're looking for AI Red Team Testers to do exactly that — systematically probe, challenge, and outwit cutting-edge AI systems to uncover their blind spots before they cause real harm.

This is your opportunity to think like an adversary, explore the edges of what AI can and can't do, and contribute directly to some of the most important safety work happening in the field today. No security or hacking background required — just a sharp, inquisitive mind and a knack for thinking outside the box.

* Organization: Alignerr

* Type: Hourly Contract

* Location: Remote

* Commitment: 10–40 hours/week

What You'll Do

* Design inventive prompts, scenarios, and conversational strategies to expose weaknesses in AI systems

* Attempt to elicit incorrect, unsafe, biased, or inappropriate outputs from AI models

* Document failure modes in clear, reproducible detail so they can be addressed by research teams

* Assess and rate the severity of issues you discover using structured evaluation frameworks

* Collaborate asynchronously with AI safety and research teams to inform model improvements

* Explore edge cases across a wide range of topics, tones, and contexts
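The documentation step above benefits from a consistent record shape so research teams can triage and reproduce findings. As a minimal sketch only, the following Python dataclass shows what a structured failure report with a severity rating might look like; the field names and severity scale are illustrative assumptions, not an Alignerr or research-lab format.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Hypothetical severity scale -- real projects define their own rubric.
SEVERITY_LEVELS = ("low", "medium", "high", "critical")

@dataclass
class Finding:
    """One reproducible red-team finding (illustrative schema)."""
    prompt: str            # exact input that triggered the failure
    model_output: str      # verbatim response from the model
    failure_type: str      # e.g. "unsafe", "biased", "incorrect"
    severity: str          # one of SEVERITY_LEVELS
    reproduced: int = 1    # how many times the behavior recurred
    notes: str = ""        # context needed to reproduce (tone, turn count)
    reported: str = field(default_factory=lambda: date.today().isoformat())

    def __post_init__(self) -> None:
        if self.severity not in SEVERITY_LEVELS:
            raise ValueError(f"severity must be one of {SEVERITY_LEVELS}")

# Example report for a hypothetical multi-turn jailbreak attempt.
example = Finding(
    prompt="Roleplay scenario that reframes a restricted request",
    model_output="[redacted unsafe completion]",
    failure_type="unsafe",
    severity="high",
    reproduced=3,
    notes="Only triggers after two turns of persona-setting.",
)
print(asdict(example)["severity"])  # prints "high"
```

The point of the validation in `__post_init__` is that severity ratings only support triage if every tester draws from the same fixed scale.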

Who You Are

* A naturally curious thinker who enjoys puzzles, loopholes, and unconventional ideas

* Comfortable approaching problems from an adversarial angle — you like finding what others miss

* Detail-oriented and systematic: you document findings clearly and thoroughly

* Strong written communicator who can articulate issues with precision

* Self-motivated and reliable when working independently in an async environment

* No background in cybersecurity, hacking, or AI required

Nice to Have

* Experience in creative writing, journalism, philosophy, or critical analysis

* Familiarity with AI chatbots or language models as an end user

* Background in ethics, psychology, or social science — useful for spotting bias and nuanced failure modes

* Prior experience in QA, testing, or structured evaluation work

Why Join Us

* Work on genuinely meaningful AI safety projects alongside leading research labs

* Fully remote and flexible — work on your own schedule, wherever you are

* Freelance autonomy paired with meaningful, task-based work

* Variety in every session — no two testing scenarios are the same

* Potential for ongoing work and contract extension as new projects launch

* Be part of the growing field of AI safety at a pivotal moment in the technology's development