QA Engineer – AI Testing

Intuitive.ai
Canada
Contract
AI tools:
TensorFlow
PyTorch

About us:

Intuitive is an innovation-led engineering company delivering business outcomes for hundreds of enterprises globally. Known as a Tiger Team and a trusted partner of enterprise technology leaders, we help solve the most complex digital transformation challenges across the following Intuitive Superpowers:

Modernization & Migration

* Application & Database Modernization

* Platform Engineering (IaC/EaC, DevSecOps & SRE)

* Cloud Native Engineering, Migration to Cloud, VMware Exit

* FinOps

Data & AI/ML

* Data (Cloud Native / DataBricks / Snowflake)

* Machine Learning, AI/GenAI

Cybersecurity

* Infrastructure Security

* Application Security

* Data Security

* AI/Model Security

SDx & Digital Workspace (M365, G Suite)

* SDDC, SD-WAN, SDN, NetSec, Wireless/Mobility

* Email, Collaboration, Directory Services, Shared Files Services

Intuitive Services:

* Professional and Advisory Services

* Elastic Engineering Services

* Managed Services

* Talent Acquisition & Platform Resell Services

About the job:

Title: QA Engineer - AI Testing

Start Date: Immediately

# of Positions: 1

Position Type: Contract

Location: Remote across Canada

Role Overview

We are seeking a skilled QA Engineer with experience in AI/ML testing to ensure the reliability, accuracy, and performance of AI-powered applications. The ideal candidate will design and execute test strategies for machine learning models, data pipelines, and AI-driven systems while collaborating closely with data scientists, developers, and product teams.

Key Responsibilities

• Design and implement test strategies for AI/ML models and AI-driven applications

• Validate model accuracy, performance, bias, and reliability

• Develop automated test frameworks for AI systems and APIs

• Test data pipelines, model inputs/outputs, and data quality

• Create and execute functional, regression, and performance tests

• Work with data scientists and ML engineers to validate training datasets and model behavior

• Perform model validation, drift detection, and monitoring

• Identify edge cases and ensure AI systems behave reliably in real-world scenarios

• Document defects, test cases, and quality metrics
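The drift-detection responsibility above can be sketched with a small, library-free example. This is a hedged illustration, not Intuitive's actual tooling: it computes the Population Stability Index (PSI), a common heuristic for flagging when live model inputs have shifted away from the training baseline.

```python
# Minimal sketch of data-drift detection via the Population Stability
# Index (PSI). All names and thresholds here are illustrative.
import math
import random

def _hist(sample, lo, hi, bins):
    """Normalized histogram of `sample` over [lo, hi]; floors at 1e-6 to avoid log(0)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in sample:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    n = len(sample)
    return [max(c / n, 1e-6) for c in counts]

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    if hi == lo:  # degenerate case: all values identical
        return 0.0
    e = _hist(expected, lo, hi, bins)
    a = _hist(actual, lo, hi, bins)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Demo: a shifted live sample should score far higher than a matched one.
random.seed(42)
baseline = [random.gauss(0, 1) for _ in range(1000)]
drifted = [random.gauss(1.5, 1) for _ in range(1000)]
print("PSI (baseline vs. drifted):", round(psi(baseline, drifted), 3))
```

In a monitoring pipeline, a check like this would typically run per feature on each scoring batch and raise an alert (or fail a CI gate) when PSI exceeds the agreed threshold.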

Required Skills

• 5+ years of experience in Software QA / Test Automation

• Experience testing AI/ML models or AI-enabled applications

• Strong knowledge of Python, SQL, and API testing

• Experience with automation tools such as Selenium, PyTest, or similar frameworks

• Familiarity with ML frameworks like TensorFlow, PyTorch, or Scikit-learn

• Experience testing data pipelines and large datasets

• Understanding of model validation, bias testing, and data quality checks

• Experience with CI/CD pipelines and cloud environments (AWS, Azure, or GCP)
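To make the PyTest and data-quality requirements concrete, here is a hedged sketch of the kind of automated check this role involves. The field names, sample rows, and thresholds are all hypothetical, not taken from any real dataset:

```python
# Illustrative PyTest-style data-quality checks; SAMPLE_ROWS and all
# thresholds are made-up examples, not a real pipeline's values.
SAMPLE_ROWS = [
    {"age": 34, "income": 72000},
    {"age": None, "income": 58000},
    {"age": 29, "income": None},
    {"age": 41, "income": 91000},
]

def null_rate(rows, field):
    """Fraction of rows where `field` is missing."""
    return sum(r[field] is None for r in rows) / len(rows)

def in_range_rate(rows, field, lo, hi):
    """Fraction of non-null values of `field` inside [lo, hi]."""
    vals = [r[field] for r in rows if r[field] is not None]
    return sum(lo <= v <= hi for v in vals) / len(vals)

def test_age_null_rate_under_threshold():
    # Gate the pipeline if missingness exceeds an agreed budget.
    assert null_rate(SAMPLE_ROWS, "age") <= 0.30

def test_age_values_plausible():
    # Range check: every observed age must be physically plausible.
    assert in_range_rate(SAMPLE_ROWS, "age", 0, 120) == 1.0
```

Run under PyTest (`pytest -q`), checks like these would sit in CI alongside functional and regression suites, failing the build when incoming data quality regresses.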

Preferred Qualifications

• Experience testing LLM-based applications or Generative AI systems

• Knowledge of AI model evaluation metrics

• Experience with MLOps tools

• Familiarity with prompt testing and AI safety testing

Nice to Have

• Experience in performance testing AI workloads

• Understanding of ethical AI and responsible AI testing

Applications go directly to the hiring team.