ML Engineer (Forward Deployed Engineering)
Applied Computing
Job Description
Orbital is a physics-grounded AI copilot that operates complex industrial systems such as refineries, upstream assets, and energy-intensive plants. It combines real-time time-series forecasting, physics-based models, and domain-trained language models to deliver interpretable insights, anomaly detection, and optimisation pathways directly to operations teams.
As a Forward Deployed ML Engineer, your job is to make Orbital’s AI systems work in customer reality. You will deploy, configure, tune, and operationalise our deep learning models inside live industrial environments, spanning cloud, on-premise, hybrid, and air-gapped infrastructure.
This is not a pure research role.
You are not training experimental models in isolation. You are adapting production AI systems to customer data, configuring agents and RAG pipelines, tuning anomaly detection, and ensuring models deliver value in production workflows.
If Research builds the models, you make them work on-site.
Operating Context
Forward Deployed ML Engineers operate in pods of three alongside:
* Full Stack Engineers
* Data Engineers
Each pod delivers 2–3 customer deployments per quarter, owning AI configuration, model tuning, agent orchestration, and inference reliability in production.
Job Requirements
* MSc in Computer Science, Machine Learning, Data Science, or related field, or equivalent practical experience.
* Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
* Solid software engineering background, including designing and debugging distributed systems.
* Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
* Experience with LLM API integrations (OpenAI, Claude, Gemini) and with FastAPI for ML services and REST inference APIs.
* Familiarity with message brokers (Kafka, RabbitMQ, or similar).
* Comfort working in hybrid cloud/on-prem deployments (AWS, Databricks, or industrial environments).
* Exposure to time-series or industrial data (historians, IoT, SCADA/DCS logs) is a plus.
* Domain experience working as a data scientist in oil and gas or energy is a plus.
* Comfortable in customer-facing technical roles, collaborating directly with customers in forward-deployed settings.
* Strong troubleshooting capability in production AI systems.
What Success Looks Like
* AI systems are deployed and running in customer environments.
* Models are tuned to customer data and delivering operational value.
* Anomalies and predictions are trusted by engineers.
* Multi-agent copilots function reliably in production workflows.
* RAG systems retrieve accurate, domain-relevant insights.
* Inference pipelines run with high uptime and low latency.
Job Responsibilities
* AI System Deployment & Configuration
* Deploy Orbital’s AI/ML services into customer environments.
* Configure inference pipelines across cloud, on-prem, and hybrid infrastructure.
* Package and deploy ML services via Docker/Kubernetes.
* Ensure inference services are reliable, scalable, and production-ready.
* Time Series & Predictive Model Tuning
* Deploy and tune time-series forecasting and anomaly detection models.
* Adapt models to customer-specific industrial processes.
* Configure thresholds, alerting logic, and detection sensitivity.
* Validate model outputs against engineering expectations.
Typical model classes include:
* Gradient boosting models (e.g., LightGBM)
* Transformer models
* Statistical anomaly detection methods
* Multivariate monitoring systems
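To give a flavour of the threshold and detection-sensitivity configuration work described above, here is a minimal rolling z-score anomaly detector in plain Python. It is an illustrative sketch only, not Orbital's actual implementation; the window size and threshold are hypothetical tuning parameters of the kind an engineer would adapt to a customer's process data.

```python
from collections import deque
from math import sqrt

def make_zscore_detector(window=50, threshold=3.0):
    """Flag a point as anomalous when it lies more than `threshold`
    standard deviations from the mean of the trailing `window` values.
    Both parameters are the kind of knobs tuned per customer process."""
    history = deque(maxlen=window)

    def detect(value):
        is_anomaly = False
        if len(history) >= 2:
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
            std = sqrt(var)
            # Skip flat (zero-variance) history to avoid division by zero.
            if std > 0 and abs(value - mean) / std > threshold:
                is_anomaly = True
        history.append(value)
        return is_anomaly

    return detect

# A steady signal followed by a sudden spike: only the spike is flagged.
detect = make_zscore_detector(window=20, threshold=3.0)
flags = [detect(v) for v in [10.0] * 20 + [10.1, 25.0]]
```

Raising `threshold` or widening `window` trades detection sensitivity for fewer false alarms, which is the validation loop done with customer engineers.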
* Multi-Agent & LLM System Configuration
* Deploy and configure multi-agent AI systems for customer workflows.
* Set up LLM provider integrations (OpenAI, Claude, Gemini).
* Configure agent routing and orchestration logic.
* Tune prompts and workflows for operational use cases.
* Retrieval Augmented Generation (RAG)
* Deploy RAG pipelines in customer environments.
* Ingest customer documentation and operational knowledge.
* Configure knowledge graphs and vector databases.
* Tune retrieval pipelines for accuracy and latency.
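The retrieval step at the heart of such a pipeline can be sketched in a few lines. This toy version uses a bag-of-words "embedding" and cosine similarity purely for illustration; a real deployment would use a trained embedding model and a vector database, and the document strings here are invented examples.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency vector.
    Stands in for a real embedding model in this sketch."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "compressor surge alarm troubleshooting procedure",
    "quarterly safety training schedule",
    "pump seal failure maintenance log",
]
top = retrieve("compressor alarm", docs, k=1)
```

Tuning for accuracy and latency in practice means choosing the embedding model, index type, chunking strategy, and `k`, then validating retrieved passages against domain experts.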
* Intelligent Data Agents
* Configure SQL agents for structured customer datasets.
* Deploy visualization agents for exploratory analytics.
* Adapt agents to customer schemas and naming conventions.
* Explainability & Interpretability
* Generate SHAP explanations for model outputs.
* Build interpretability reports for engineering stakeholders.
* Explain anomaly drivers and optimisation recommendations.
* Support trust and adoption of AI insights.
* Forward Deployment & Customer Integration
* Deploy AI systems into restricted industrial networks.
* Integrate inference pipelines with:
* Historians
* OPC UA servers
* IoT data streams
* Process control systems
* Work with IT/OT teams to satisfy infrastructure and security constraints.
* Debug production issues in live operational environments.
* Production Reliability & MLOps
* Monitor inference performance and drift.
* Troubleshoot production model failures.
* Version models and datasets (DVC or equivalent).
* Maintain containerised ML deployments.
* Support CI/CD for model updates.
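As a rough illustration of the drift monitoring mentioned above, the sketch below flags a feature whose live mean has shifted away from a reference baseline. It is a deliberately simple mean-shift check, not Orbital's monitoring stack; the threshold and sample data are hypothetical.

```python
from math import sqrt

def mean_shift_drift(reference, live, z_threshold=3.0):
    """Flag drift when the live window's mean deviates from the
    reference mean by more than `z_threshold` standard errors,
    using the reference window's sample variance."""
    n = len(reference)
    ref_mean = sum(reference) / n
    ref_var = sum((x - ref_mean) ** 2 for x in reference) / (n - 1)
    live_mean = sum(live) / len(live)
    stderr = sqrt(ref_var / len(live))
    if stderr == 0:
        return live_mean != ref_mean
    return abs(live_mean - ref_mean) / stderr > z_threshold

# Hypothetical sensor readings: a stable window passes, a shifted one trips.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
stable = [10.1, 9.9, 10.0, 10.2]
shifted = [12.5, 12.8, 12.6, 12.9]
```

Production drift monitoring would typically track many features at once and use distribution-level tests rather than a single mean comparison, but the alert-threshold trade-off is the same.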