Offensive Security Analyst (Structured / Non-Exploit) — AI Training
About The Role
What if your ability to think like an adversary could directly shape how AI understands and reasons about cybersecurity threats? We're looking for Offensive Security Analysts to bring real-world attack knowledge to one of the most impactful applications in tech today — training and evaluating cutting-edge AI systems.
This role is about structured adversarial reasoning, not exploit development. You'll work with realistic attack scenarios, model how threats move through systems, expose where defenses fail, and articulate how risk propagates across modern environments. No CVE writing. No shellcode. Just deep, strategic security thinking applied to frontier AI.
This is a fully remote, flexible contract role built for experienced security professionals who want to do meaningful work on their own schedule.
* Organization: Alignerr
* Type: Hourly Contract
* Location: Remote
* Commitment: 10–40 hours/week
What You'll Do
* Analyze attack paths, kill chains, and adversary strategies across realistic, production-style environments
* Identify weaknesses, misconfigurations, and defensive gaps in complex system architectures
* Review and evaluate red-team narratives, intrusion scenarios, and threat modeling exercises
* Generate, label, and validate adversarial reasoning data used to train and benchmark AI systems
* Clearly articulate attack chains, business impact, and security tradeoffs in structured formats
* Work independently and asynchronously — fully on your own schedule
Who You Are
* 2+ years of hands-on experience in penetration testing, red teaming, or blue-team security, with deep knowledge of attacker techniques
* You understand how real attacks unfold — not just in theory, but in live production environments
* You can break down complex attack chains and communicate them clearly and precisely
* Comfortable working with ambiguity and applying structured thinking to open-ended security scenarios
* Detail-oriented and consistent — you bring rigor to everything you analyze
Nice to Have
* Experience with threat modeling frameworks (MITRE ATT&CK, STRIDE, PASTA, etc.)
* Background in incident response, threat intelligence, or adversary simulation
* Familiarity with cloud environments, Active Directory attack paths, or lateral movement techniques
* Prior experience writing security assessments, red team reports, or technical narratives
* Interest in AI safety, AI security, or responsible AI development
Why Join Us
* Work directly on frontier AI systems alongside leading research labs
* Fully remote and flexible — work when and where it suits you
* Freelance autonomy with the structure of meaningful, task-based work
* Apply your offensive security expertise to a field that's actively shaping the future of technology
* Potential for ongoing work and contract extension as new projects launch