VLA Annotator
Objectways
Full Description
Location: Phoenix, AZ (On-site)
Employment Type: Full-Time | 40 hours/week
Compensation: $20/hour
About the Role:
We are looking for a detail-oriented and technically capable Vision-Language-Action (VLA) Annotator to join our data operations team in Phoenix, Arizona. In this role, you will be responsible for reviewing, labeling, and quality-checking multimodal datasets used to train and evaluate autonomous driving and robotics models. Your work directly impacts the safety and performance of AI systems operating in the real world.
This is a full-time, 40-hour-per-week position requiring sustained focus, sound judgment, and the ability to apply structured annotation guidelines to complex, real-world scenarios — including frequent edge cases.
Key Responsibilities:
* Review and annotate video footage, sensor telemetry, and camera feeds from autonomous vehicle test drives and robotics platforms.
* Assess vehicle and robotic behavior in 3D space using 2D camera inputs, including approach angles, following distances, trail alignment, and controlled stop quality.
* Use time-series telemetry data — including speed, throttle, steering, and braking charts — to make precise trim and segmentation decisions on data clips.
* Apply annotation guidelines consistently while exercising independent judgment on ambiguous or edge-case scenarios.
* Identify and flag unsafe, incomplete, or anomalous driving behaviors (e.g., rolling stops, improper following distance, out-of-distribution maneuvers).
* Maintain high throughput and accuracy standards; participate in regular quality audits and calibration sessions.
* Work within annotation platforms (e.g., Encord, CVAT, Label Studio, or similar) to complete labeling tasks efficiently.
* Document and communicate recurring issues or ambiguities in the data to improve pipeline quality.
Preferred Qualifications:
* Education: Bachelor's degree with a STEM background preferred (Engineering, Computer Science, Physics, Mathematics, GIS, or related field).
* Spatial & Mechanical Reasoning: Demonstrated ability to interpret vehicle or robotic behavior in 3D space from 2D camera feeds. Backgrounds in robotics, automotive engineering, mechanical engineering, GIS, or simulation are strong indicators.
* Time-Series Data Literacy: Experience reading and interpreting sensor data, telemetry charts, lab instrumentation output, or signal processing data. Comfort with chart-heavy analytical workflows is essential for making precise trim decisions.
* Driving Familiarity: Regular driving experience, ideally in varied or off-road conditions. Must be able to distinguish safe from unsafe driving behavior, recognize complete vs. rolling stops, and assess reasonable following distances.
* Detail Orientation with Tolerance for Ambiguity: Ability to follow precise, rule-based guidelines while also applying sound judgment on frequent edge cases. Prior experience in QA, data annotation, or lab/research settings is a strong signal.
* Video Review Endurance: Comfort with sustained video review tasks. Prior experience in video editing, surveillance monitoring, sports performance analysis, or media production is a plus.
Nice-To-Haves:
* Prior annotation or data labeling experience, especially in autonomy or robotics datasets.
* Familiarity with geospatial tools, map interfaces, or GIS platforms.
* Hands-on experience with Encord, Label Studio, CVAT, Scale AI, or comparable labeling platforms.
* Background in autonomous vehicles, ADAS systems, or driver safety analysis.