Applied Machine Learning Engineer (Robotics Perception)

Alert Venture Foundry
North Billerica, MA
Full-time
AI tools: PyTorch

Alert Venture Foundry (AVF) is a robotics-focused startup studio inventing and launching real-world robotic systems. Our teams work across hardware, embedded software, autonomy, and simulation to build systems that operate reliably outside the lab.

About the Role

We’re looking for an Applied Machine Learning Engineer (Robotics Perception) to help us build practical AI/ML capabilities that improve how robots perceive, understand, and operate in the real world. In this role, you’ll work on computer vision and multimodal perception features, and your biggest impact will come from building the pipelines that turn raw robot data into clean, trainable datasets.

This is a great fit for an early-career ML engineer or applied scientist who enjoys building end-to-end systems: collecting data, shaping it into usable formats, training and evaluating models, and integrating those models into a real robotic stack.

This role is focused on robotics perception and sensor-driven systems. Candidates whose experience is limited to LLM / RAG / chatbot work, without hands-on work with image or sensor data, will not be considered. This is also not a pure research or dashboarding role: you will write production-grade code, work in Linux environments, and integrate directly with robotics software stacks.

What you’ll do:

* Build data pipelines that transform raw robot data streams (images/video, telemetry, voice tags, operator inputs) into clean, structured datasets for training and evaluation

* Design dataset schemas, sampling strategies, and train/validation/test splits to support repeatable experiments and model iteration

* Develop and fine-tune computer vision models for tasks such as image classification and related perception capabilities

* Support multimodal workflows where operator text or voice inputs can inform robot behavior and localization

* Run experiments, track performance metrics, and perform structured error analysis to improve model reliability in real-world conditions

* Integrate trained models into robotics software systems for real-time or near-real-time inference

* Collaborate with autonomy, controls, embedded, and systems engineers to ensure ML outputs are usable, robust, and measurable

* Contribute to testing and validation workflows to connect model performance in the lab with performance on real hardware

* Develop an MLOps framework to train, test, and deploy models

What we’re looking for:

* 3+ years building and deploying ML systems that interact with real-world sensor data (camera, LiDAR, depth, IMU, etc.)

* At least one production system where your model ran on physical hardware or edge devices (robot, vehicle, embedded system)

* Demonstrated experience debugging ML models using real-world failure cases (not just offline benchmarks)

* Strong Python skills and the ability to build maintainable software for data processing, training workflows, and system integration

* Experience training and fine-tuning convolutional or transformer-based vision models

* Experience building dataset pipelines for image/video data (augmentation, balancing, labeling workflows)

* Experience evaluating models using precision/recall, mAP, confusion matrices, and structured error analysis

* Experience optimizing inference performance (latency, throughput, quantization, pruning)

* Comfort working in Linux environments and using standard engineering tools (Git, CMake, debugging tools, scripting)

* Ability to collaborate across disciplines and communicate clearly about model behavior, limitations, and failure modes

* Experience developing object detection, tracking, or feature-extraction models, such as YOLO-family detectors and autoencoders

* Bachelor’s or Master’s in Robotics, Machine Learning, Computer Science, or related field with hands-on robotics or perception system experience

This role is likely not a fit if:

* Your experience is primarily focused on LLM applications, chatbots, or RAG pipelines

* You have not worked with image, video, or robotics sensor data

* You have not deployed ML models into a production or hardware-constrained environment

Bonus points for:

* Experience deploying ML models to edge hardware or real-time systems (e.g., ONNX, TensorRT, Jetson-class devices)

* Familiarity with robotics middleware and integration patterns (e.g., ROS 2)

* Experience with speech-to-text, natural language understanding, or grounding operator intent into system actions

* Comfort writing C++ for performance-critical components or extending existing robotics and ML codebases

* Exposure to planning, optimization, or autonomy systems where ML outputs influence robot behavior

Why AVF?

* Build ML capabilities that move from data to models to real-world robotic behavior

* Work closely with experienced engineers in a fast-paced, high-agency startup studio environment

* Help define practical ML workflows and data pipelines that accelerate iteration and deployment

* Contribute to robotic systems that reshape how physical work is performed in the real world

AVF is an equal opportunity employer. We welcome applicants from all backgrounds who are excited to build and deploy real-world robotic systems. No visa sponsorship is available for this position.

Applications go directly to the hiring team.