Sr. Data Scientist, Responsible AI

RemoteHunter
United States
Full-time
$139,764 – $287,749 / year
AI tools: GPT-4
Applications go directly to the hiring team

Full Description

1. About Our Client:

The organization operates a leading visual search and discovery platform with over 500 million monthly active users worldwide. Its mission is to inspire creativity and help users turn ideas into action. As the platform expands its Generative AI capabilities, the organization prioritizes the safety, fairness, and trustworthiness of these AI products. The team focuses on responsible AI development, ensuring that AI innovations enhance the user experience while mitigating risks related to bias, safety, and policy compliance.

2. About the Opportunity:

The Senior Data Scientist, Responsible AI will lead efforts to develop automated adversarial testing frameworks for Generative AI products. This role is pivotal in identifying and mitigating vulnerabilities in AI systems to uphold product safety and user trust. The position requires collaboration across multiple teams to design evaluation methods and safety metrics, directly influencing the responsible deployment and continuous improvement of AI features.

3. Responsibilities:

• Design automated adversarial testing methods to detect vulnerabilities in Generative AI products

• Develop hybrid evaluation pipelines combining LLM-based judges, classifiers, and rule-based systems

• Create and apply harm taxonomies based on industry standards and internal threat models

• Build adaptive refinement loops to discover deeper vulnerabilities from testing outcomes

• Apply scientific and statistical rigor to AI safety evaluation, including benchmark dataset creation and calibration

• Collaborate with ML engineers, Trust & Safety specialists, policy teams, product managers, and legal partners to ensure safe product launches

• Drive impact by shaping safety strategies and improving evaluation processes

• Mentor junior data scientists and cross-functional partners on adversarial evaluation and responsible AI practices

4. Requirements:

• 5+ years of experience applying scientific methods to large-scale data problems in fast-paced environments

• Hands-on experience in AI safety, adversarial machine learning, red teaming, responsible AI, or trust & safety

• Deep understanding of large language models, generative AI, and common failure modes such as prompt injection and bias

• Experience designing and calibrating AI evaluation frameworks, including LLM-as-judge and benchmark datasets

• Proficiency in Python programming and data manipulation using SQL/Spark; experience with ML pipelines and large-scale experimentation

• Familiarity with AI safety taxonomies and frameworks (e.g., OWASP LLM Top 10, MITRE ATLAS) preferred

• Ability to independently manage ambiguous projects with high ownership

• Excellent communication skills for explaining complex technical concepts to diverse audiences

• Collaborative mindset to work across Responsible AI, Trust & Safety, Product, Engineering, Policy, and Legal teams

5. Pay Range and Compensation Package:

• The pay range for this US-based role is $139,764 to $287,749 USD

• The position is also eligible for equity

• Final salary depends on location, prior experience, skills, and other factors

Our client is an equal opportunity employer. They celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin.

Note:

RemoteHunter is not the Employer of Record (EOR) for this role. Our purpose in this opportunity is to connect exceptional candidates with leading employers. We help job seekers worldwide discover roles that match their goals and guide them to complete their full application directly through the hiring company’s career page or ATS.