Entry Level AI Jobs in Brazil (Remote)
Full Description
About Rex.zone
Rex.zone connects remote contributors with production AI/ML data operations work that supports LLM training pipelines. Projects commonly include data labeling, RLHF preference data collection, prompt evaluation, QA evaluation, and content safety labeling across NLP, computer vision, and multimodal domains.
About The Role
In this remote, full-time role, you will produce and validate training data used to evaluate and improve large language models. You will follow annotation guidelines and structured rubrics to deliver consistent, high-quality judgments and labels.
Responsibilities
* Deliver accurate data labeling aligned to annotation guidelines and project rubrics
* Perform QA evaluation using checklists to catch schema errors, guideline deviations, and low-confidence labels
* Complete prompt evaluation and RLHF tasks by comparing model outputs, ranking responses, and writing short justifications when required
* Maintain consistent training data quality across large batches and escalate edge cases with clear examples
* Track work in project tools, meet throughput targets, and provide well-documented feedback to support model performance improvement
Core Workstreams
* RLHF preference ranking and conversational AI evaluation
* NLP labeling (classification, named entity recognition)
* Computer vision annotation (bounding boxes, polygons, segmentation masks)
* Content safety labeling for policy compliance and safer model behavior
Required Skills
* Comfort working with structured rubrics, examples, and guideline compliance
* Strong attention to detail for QA evaluation and training data quality checks
* Ability to reason about language and intent for NLP tasks (e.g., NER, classification)
* Basic familiarity with AI/ML concepts such as large language models, RLHF, and evaluation datasets
Remote Work Details
This is a remote, full-time position designed for distributed teams, using cloud-based labeling and evaluation tools with asynchronous collaboration.
How To Apply
Apply through Rex.zone and complete the required screening tasks focused on annotation accuracy, rubric adherence, and practical judgment in prompt evaluation, RLHF, and QA evaluation scenarios.