Remote STEM Engineer (United States)
Rex.zone: Full Description
Remote STEM Jobs in the United States (Full-Time)
Rex.zone connects STEM professionals to real AI/ML production workflows, including LLM training pipelines, RLHF evaluation, data labeling, QA evaluation, prompt evaluation, named entity recognition, computer vision annotation, and content safety labeling. You will support training data quality, annotation guidelines compliance, and model performance improvement across distributed teams.
About The Role
As a Remote STEM Engineer (United States), you will deliver measurable outcomes across applied engineering and AI/ML support workstreams. Your day-to-day may include building and validating data pipelines, improving training data quality, running statistical analyses, and partnering with ML teams on evaluation harnesses.
Key Responsibilities
* Design, implement, and maintain data workflows that support machine learning and large language model evaluation.
* Execute RLHF-related processes including prompt evaluation, preference ranking, and rubric-based QA evaluation.
* Define and operationalize annotation guidelines compliance to improve training data quality and reduce label noise.
* Perform named entity recognition (NER) and schema validation checks; troubleshoot edge cases and ambiguous labeling.
* Support computer vision annotation programs (bounding boxes, polygons, keypoints) and audit inter-annotator agreement.
* Contribute to content safety labeling and policy-driven evaluation for harmful, sensitive, and restricted content.
* Create metrics and dashboards for model performance improvement (accuracy, precision/recall, calibration, and error taxonomy).
* Collaborate asynchronously with distributed teams; document decisions, experiments, and release notes.
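To illustrate two of the quality metrics named above — inter-annotator agreement audits and precision/recall reporting — here is a minimal plain-Python sketch. This is an illustrative example only, not part of Rex.zone's actual tooling; the label values and data are hypothetical.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators on the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] / n * cb[c] / n for c in ca)       # agreement expected by chance
    return (po - pe) / (1 - pe)

def precision_recall(gold, pred, positive):
    """Precision and recall for a single target label."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical NER labels from two annotators on the same five spans:
a = ["PER", "ORG", "PER", "LOC", "PER"]
b = ["PER", "ORG", "LOC", "LOC", "PER"]
print(round(cohens_kappa(a, b), 4))  # → 0.6875
```

Kappa near 1.0 indicates strong agreement; values well below ~0.6 typically trigger a guideline review or targeted re-training of annotators.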
Required Qualifications
* Bachelor’s degree (or higher) in a STEM field (CS, EE, Math, Stats, Physics, or related).
* Mid- to senior-level experience delivering engineering or applied data/ML work in production or research-adjacent environments.
* Proficiency with Python and common data tooling (pandas, NumPy) plus SQL for analysis and reporting.
* Understanding of ML evaluation concepts: ground truth construction, bias/variance, and dataset shift.
* Experience with quality assurance practices: sampling plans, audit checklists, and root-cause analysis.
* Ability to write clear documentation and follow structured rubrics for QA evaluation and labeling tasks.
* Comfort working in a fully remote setting, coordinating across United States time zones.
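As a concrete example of the sampling-plan QA practice listed above, a stdlib-only sketch of a random audit sample with a normal-approximation confidence interval on the error rate is shown below. The sampling rate and seed are hypothetical choices for illustration, not a prescribed process.

```python
import math
import random

def qa_sample(items, rate=0.1, seed=7):
    """Draw a fixed-rate random audit sample (hypothetical sampling plan)."""
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    k = max(1, round(len(items) * rate))
    return rng.sample(items, k)

def error_rate_ci(errors, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for an audit error rate."""
    p = errors / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Audit 10% of 100 labeled items; suppose 3 of 50 reviewed items had errors.
sample = qa_sample(list(range(100)))
rate, low, high = error_rate_ci(errors=3, n=50)
```

If the interval's upper bound exceeds the quality bar, that typically feeds the root-cause analysis step rather than ending the audit.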
Preferred Qualifications
* Exposure to NLP and LLM workflows (prompting, prompt evaluation, instruction tuning concepts).
* Experience with RLHF or human-in-the-loop evaluation pipelines.
* Computer vision annotation familiarity and tooling experience (CVAT, Labelbox, or similar).
* Knowledge of content safety labeling standards and policy frameworks.
* Experience with cloud platforms (AWS/GCP/Azure) and CI/CD or MLOps basics.
* Hands-on experience improving annotation guidelines compliance and inter-annotator agreement.
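For the computer vision annotation work mentioned above, box-level agreement between annotators is commonly scored with intersection-over-union (IoU). A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) form (the coordinates here are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two annotators' boxes for the same object; IoU of 1.0 means identical boxes.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A fixed IoU threshold (0.5 is a common default in detection benchmarks) turns these scores into match/no-match decisions for agreement audits.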
Compensation
Competitive hourly rate: $30–$50/hr (USD).