Remote | AI Red-Teamer — Adversarial AI Testing (English & Hebrew) — $57.74/hr

24-MAG
San Francisco, CA
Contract
Applications go directly to the hiring team

Join a specialized remote team of AI red-teamers to enhance the safety of advanced conversational AI systems. This role involves conducting adversarial testing and identifying vulnerabilities while collaborating closely with leading AI research teams. Enjoy flexible scheduling and competitive pay in a dynamic project environment.

Skills & Expertise

Adversarial AI testing
Cybersecurity
Analytical thinking
Documentation
Penetration testing
Vulnerability analysis
Structured problem-solving
Written communication

Key Responsibilities

Conduct adversarial testing of conversational AI systems.

Identify vulnerabilities such as jailbreaks and misuse cases.

Generate structured reports and reproducible datasets.

Full Description

We are sharing a specialized remote opportunity for experienced AI red-teamers, cybersecurity specialists, and adversarial AI experts to support a leading AI lab in identifying vulnerabilities and improving the safety of advanced conversational AI systems.

This project focuses on probing AI models with adversarial prompts, identifying system weaknesses, and generating structured red-team datasets that strengthen the robustness and reliability of next-generation AI systems.

Key Responsibilities

Conduct adversarial testing of conversational AI systems

Identify vulnerabilities such as jailbreaks, prompt injections, misuse cases, and bias exploitation

Evaluate AI responses across multi-turn conversations and manipulation scenarios

Annotate model failures and classify vulnerability types

Generate structured reports, attack cases, and reproducible datasets

Follow testing frameworks, taxonomies, and benchmarks to ensure consistency

Produce clear documentation that helps teams strengthen AI safety systems

Ideal Profile

Strong candidates will typically have:

Native-level fluency in English and Hebrew

Experience in AI red teaming, adversarial machine learning, or cybersecurity

Background in penetration testing, exploit development, or vulnerability analysis

Strong analytical thinking and structured problem-solving abilities

Ability to clearly explain risks to technical and non-technical audiences

Excellent written communication and attention to detail

Comfort working across multiple projects and testing scenarios

Nice to Have:

Experience with adversarial ML techniques (prompt injection, jailbreak testing, RLHF attacks)

Cybersecurity background in penetration testing or reverse engineering

Socio-technical risk analysis (bias, misinformation, abuse detection)

Creative adversarial thinking using psychology, narrative, or behavioral probing

Why This Opportunity

Contribute directly to improving the safety and robustness of advanced AI systems

Work on frontier AI red-teaming projects with leading AI research teams

Help identify vulnerabilities before AI systems reach production environments

Flexible remote project work with competitive compensation

Contract Details

Independent contractor role

Fully remote and flexible scheduling

Open only to candidates based in the USA or Israel

Compensation: $57.74 per hour

Weekly payments via Stripe or Wise

Projects may extend or adjust depending on scope and performance

No access to confidential employer or institutional data required

About The Platform

This opportunity is available through a leading AI-driven work platform.

By submitting this application, you acknowledge that your information may be processed by 24-MAG LLC for recruitment and opportunity matching in accordance with our Privacy Policy: https://www.24-mag.com/privacy-policy