Job Description
Orbital is a physics-grounded AI copilot that operates complex industrial systems such as refineries, upstream assets, and energy-intensive plants. It combines real-time time-series forecasting, physics-based models, and domain-trained language models to deliver interpretable insights, anomaly detection, and optimisation pathways directly to operations teams.
As a Forward Deployed ML Engineer, your job is to make Orbital’s AI systems work in customer reality. You will deploy, configure, tune, and operationalise our deep learning models inside live industrial environments, spanning cloud, on-premise, hybrid, and air-gapped infrastructure.
This is not a pure research role. You are not training experimental models in isolation. You are adapting production AI systems to customer data, configuring agents and RAG pipelines, tuning anomaly detection, and ensuring models deliver value in production workflows.
If Research builds the models, you make them work on-site.
Operating Context
Forward Deployed ML Engineers operate in pods of three alongside:
- Full Stack Engineers
- Data Engineers
Each pod delivers 2–3 customer deployments per quarter, owning AI configuration, model tuning, agent orchestration, and inference reliability in production.
Job Requirements
- MSc in Computer Science, Machine Learning, Data Science, or related field, or equivalent practical experience.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
- Solid software engineering background, including designing and debugging distributed systems.
- Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
- Experience integrating LLM APIs (OpenAI, Claude, Gemini) and building ML services and REST inference APIs with FastAPI.
- Familiarity with message brokers (Kafka, RabbitMQ, or similar).