Robotics Engineer

LuminX
San Francisco Bay Area
Full-time
AI tools:
TensorRT
ONNX
Applications go directly to the hiring team

Full Description

About LuminX

Warehouses still run on clipboards and barcode guns. Every day, billions of dollars in pallets move through docks where humans manually scan, count, and verify — and when something goes wrong, no one notices until a customer complains.

LuminX is changing that. We build AI camera systems that watch every pallet move in real time, read every label and barcode automatically, and surface errors the moment they happen. We're already deployed with major customers in automotive and cold storage, and we're growing quickly. We raised $5.5M in seed funding from top investors and have a senior team that includes alumni from leading robotics and AI companies.

The Role

We're hiring a robotics software engineer to own the software that runs on our edge AI devices — the camera pipelines, real-time inference, device software, and perception code that makes LuminX work in customer warehouses. You'll spend most of your time on the edge stack, but you're a generalist at heart, comfortable jumping into backend services, cloud tooling, or a customer site visit when the problem calls for it. Our edge devices live in real warehouses with real power, lighting, and networking constraints — debugging at the hardware/software boundary is part of the job, not a side quest.

This is a startup, so the work is real. You'll deploy your code to live customer sites, debug issues that don't appear in simulation, and own the perception stack end-to-end. If you want to write specs and hand them off, this isn't the role. If you want to build, ship, and own — you're in the right place.

What You'll Do

* Manage the software stack running on our edge AI devices, including camera capture, real-time inference, device health, and OTA updates.

* Build and optimize real-time perception pipelines — multi-camera capture, calibration, synchronization, and on-device VLM and CV inference.

* Bring up new cameras, sensors, and edge compute platforms — working close to the driver and OS layer when needed.

* Optimize model inference for edge hardware using TensorRT, ONNX, and quantization to hit our latency targets.

* Partner with the ML team to deploy, evaluate, and iterate on vision-language and detection models at the edge.

* Partner with the hardware team on bring-up, peripheral integration, and validation of new edge platforms.

* Contribute to backend services and cloud tooling that touch the edge stack — telemetry, data ingestion, model orchestration.

* Help validate and harden real customer deployments, including occasional travel to customer sites for installs and debugging.

What We're Looking For

* 3+ years in robotics, computer vision, or embedded software

* Hands-on experience deploying software on edge platforms like NVIDIA Jetson or RK3588

* Strong C++ and Python

* Experience with computer vision or perception pipelines — capture, calibration, ROS/ROS2, or real-time inference

* Comfortable working close to the hardware when needed — camera bring-up, drivers, debugging integration issues

* Comfortable jumping into backend or cloud work when the problem demands it

Bonus Points For

* Experience optimizing ML inference for edge hardware (TensorRT, ONNX, CUDA, quantization)

* Industrial cameras, multi-camera calibration, or sensor fusion

* Linux driver work, V4L2, or low-level camera/sensor integration (MIPI/CSI, USB3, GigE)

* Streaming or video pipeline experience (GStreamer, FFmpeg, RTSP)

* Familiarity with VLMs or LLM-based perception

* Prior experience at a robotics, autonomy, or computer vision startup