
Remote AI Jobs Canada

Rex.zone
United States
Full-time
3,000 – 5,000 / year
AI tools: ChatGPT, LLM
Applications go directly to the hiring team

Full Description

Overview

Remote AI jobs Canada on Rex.zone focus on practical AI/ML training workflows: data labeling, RLHF, prompt evaluation, and QA review used to improve large language models and multimodal systems. You will help turn raw text, images, audio, and conversations into high-quality training data and structured evaluations for real production pipelines.

What You’ll Work On

* Training data quality checks for text, image, and multimodal datasets

* RLHF: ranking model outputs, preference labeling, rubric-based evaluation

* Prompt evaluation and response grading for helpfulness, correctness, and safety

* Named entity recognition (NER) and span-level annotation for NLP pipelines

* Computer vision annotation (bounding boxes, polygons, keypoints) plus QA review

* Content safety labeling for policy compliance and risk reduction

* Error analysis and reporting to improve guidelines and reduce rework
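To make the RLHF and rubric-based evaluation work above concrete, here is a minimal sketch of what a single preference-labeling record might look like. The field names, rubric dimensions, and annotator ID are hypothetical, for illustration only; actual project schemas will vary.

```python
# Hypothetical RLHF preference record: an annotator compares two model
# responses to the same prompt and grades each rubric dimension on a 1-5 scale.
preference_record = {
    "prompt": "Explain gradient descent in one paragraph.",
    "response_a": "Gradient descent iteratively adjusts parameters...",
    "response_b": "It is a way to train models...",
    "preferred": "a",              # which response the annotator ranked higher
    "rubric_scores": {             # rubric-based evaluation (assumed 1-5 scale)
        "helpfulness": 4,
        "correctness": 5,
        "safety": 5,
    },
    "annotator_id": "anno_017",    # hypothetical annotator identifier
}

# A basic validity check a QA reviewer might run on each record:
assert preference_record["preferred"] in ("a", "b")
assert all(1 <= s <= 5 for s in preference_record["rubric_scores"].values())
```

Records like this are aggregated across many annotators to produce the preference datasets used to train reward models.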

Responsibilities

* Follow annotation guidelines and document edge cases clearly

* Perform QA evaluation, resolve disagreements, and calibrate with reviewers

* Track labeling accuracy, throughput, and rework rates to protect data integrity

* Provide structured feedback on prompts, rubrics, and evaluation metrics

* Maintain privacy and security standards when handling sensitive content

* Contribute to continuous improvement for LLM training pipelines and evaluation sets
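Tracking labeling accuracy, throughput, and rework rates typically reduces to simple aggregate metrics over audited items. The sketch below assumes a hypothetical record shape (`label`, `gold`, `reworked`); real QA pipelines will define their own fields.

```python
def qa_summary(items):
    """Summarize label quality for a batch of audited items.

    items: list of dicts with hypothetical keys:
      'label'    - the annotator's label
      'gold'     - the reviewer's gold label
      'reworked' - True if the item had to be redone
    """
    n = len(items)
    accuracy = sum(i["label"] == i["gold"] for i in items) / n
    rework_rate = sum(i["reworked"] for i in items) / n
    return {"n": n, "accuracy": accuracy, "rework_rate": rework_rate}

batch = [
    {"label": "cat", "gold": "cat", "reworked": False},
    {"label": "dog", "gold": "cat", "reworked": True},
    {"label": "dog", "gold": "dog", "reworked": False},
    {"label": "cat", "gold": "cat", "reworked": True},
]
summary = qa_summary(batch)  # accuracy 0.75, rework_rate 0.5
```

Reporting these numbers per guideline version makes it easy to see whether a guideline update actually reduced rework.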

Required Qualifications

* 3+ years in AI data operations, data labeling, QA, or model evaluation

* Experience with RLHF-style preference data or prompt evaluation workflows

* Strong attention to detail and ability to apply rubrics consistently

* Comfort with ambiguous language tasks and iterative guideline updates

* Familiarity with NLP concepts (tokenization, entities, intent) or CV concepts (boxes, segmentation)

* Clear communication of findings to engineering and project leads

Nice to Have

* Experience with content safety labeling and policy-based evaluations

* Experience measuring inter-annotator agreement and improving calibration

* Exposure to production QA systems, audit sampling, and escalation workflows
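Inter-annotator agreement, mentioned above, is commonly measured with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal pure-Python sketch (two annotators, same items):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: from each annotator's marginal label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["pos", "pos", "neg", "neg"]
b = ["pos", "neg", "neg", "neg"]
kappa = cohens_kappa(a, b)  # 0.5: moderate agreement beyond chance
```

Low kappa on a task usually signals that the guidelines, not the annotators, need calibration.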

Compensation

Competitive pay: $30–$50 per hour (USD), based on experience and project fit.
