ML Engineer Forward Deployed (USA)

Posted 5 Days Ago
Hiring Remotely in Houston, TX
In-Office or Remote
Mid level
Artificial Intelligence • Information Technology • Internet of Things • Software
The Role
As a Forward Deployed ML Engineer, you will integrate research models into production using PyTorch, design microservices, and work with customers to deploy AI solutions in industrial environments.

Description

We’re building Orbital, an industrial AI system that runs live in refineries and upstream assets, ingesting sensor data, running deep learning + physics hybrid models, and serving insights in real time. As a Forward Deployed ML Engineer, you’ll sit at the intersection of research and deployment: turning notebooks into containerised microservices, wiring up ML inference pipelines, and making sure they run reliably in demanding industrial environments.

This role is not just about training models. You’ll write PyTorch code when needed, package models into Docker containers, design message-brokered microservice architectures, and deploy them in hybrid on-prem/cloud setups. You’ll also be customer-facing: working with process engineers and operators to integrate Orbital into their workflows.
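
To give a feel for the day-to-day engineering, here is a minimal sketch of the kind of service this involves: a PyTorch model exposed behind a small HTTP API that can be baked into a Docker image. The framework choice (FastAPI) and every name in it are illustrative assumptions, not a description of Orbital's actual stack.

```python
# inference_service.py: minimal sketch of a containerised model endpoint.
# Assumes a TorchScript artefact ("model.pt") baked into the image; all names are illustrative.
import torch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orbital-inference")

# Load the exported model once at startup and keep it in eval mode.
model = torch.jit.load("model.pt").eval()

class SensorWindow(BaseModel):
    # One window of sensor readings, shape [timesteps, channels].
    values: list[list[float]]

@app.post("/predict")
def predict(window: SensorWindow) -> dict:
    x = torch.tensor(window.values, dtype=torch.float32).unsqueeze(0)  # add batch dim
    with torch.inference_mode():
        y = model(x)
    return {"prediction": y.squeeze(0).tolist()}
```

Packaged into a container image, a service like this becomes the unit that Kubernetes schedules and that the message-brokered pipeline described below calls into.
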
Location:
You will be based in the U.S. (or be eligible to work there), ideally in Houston or willing to travel there extensively.

Core Responsibilities


Model Integration & Engineering

  • Take research models (time-series, deep learning, physics-informed) and productionise them in PyTorch (see the export sketch after this list).
  • Wrap models into containerised services (Docker/Kubernetes) with clear APIs.
  • Optimise inference pipelines for latency, throughput, and reliability.
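
To make the first and third bullets concrete, the sketch below shows one common way to harden a research checkpoint: trace it to TorchScript so the deployable artefact carries no research-code dependencies, then run batched inference under torch.inference_mode(). The stand-in architecture and shapes are assumptions for illustration only.

```python
# export_model.py: sketch of turning a research model into a deployable TorchScript artefact.
# TinyTimeSeriesNet is a stand-in; the real architecture lives in the research codebase.
import torch
import torch.nn as nn

class TinyTimeSeriesNet(nn.Module):
    def __init__(self, channels: int = 12, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)           # out: [batch, timesteps, hidden]
        return self.head(out[:, -1])   # predict from the final timestep

model = TinyTimeSeriesNet().eval()

# Trace with a representative input so the saved artefact has no Python class dependency.
example = torch.randn(1, 256, 12)      # [batch, timesteps, channels]
scripted = torch.jit.trace(model, example)
scripted.save("model.pt")

# Batched inference under inference_mode keeps latency and throughput predictable.
with torch.inference_mode():
    preds = scripted(torch.randn(32, 256, 12))
print(preds.shape)  # torch.Size([32, 1])
```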


Microservices & Messaging

  • Design and implement ML pipelines as multi-container microservices.
  • Use message brokers (Kafka, RabbitMQ, etc.) to orchestrate data flow between services (a worked sketch follows this list).
  • Ensure pipelines are fault-tolerant and scalable across environments.
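
As a hypothetical illustration of the brokered pattern above, the sketch below uses the kafka-python client to consume sensor windows from one topic, score them with the containerised model, and publish results to another; the topic names and message schema are assumptions.

```python
# scoring_worker.py: sketch of one service in a message-brokered pipeline,
# assuming the kafka-python client and JSON messages; topics and fields are illustrative.
import json
import torch
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "sensor-windows",
    bootstrap_servers="broker:9092",
    group_id="orbital-scoring",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

model = torch.jit.load("model.pt").eval()

for message in consumer:
    window = torch.tensor(message.value["values"], dtype=torch.float32).unsqueeze(0)
    with torch.inference_mode():
        score = model(window).item()
    # Publish downstream so dashboards and alerting can consume results independently.
    producer.send("model-scores", {"asset": message.value["asset"], "score": score})
```

Because each stage talks only to the broker, scoring workers can be scaled out or restarted independently without the upstream ingestion service noticing.
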


Forward Deployment & Customer Integration

  • Deploy AI services into customer on-prem environments (industrial networks, restricted clouds).
  • Work with customer IT/OT teams to integrate with historians, OPC UA servers, and real-time data feeds (see the sketch after this list).
  • Debug, monitor, and tune systems in the field — ensuring AI services survive messy real-world data.
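
For a flavour of the OT-integration work, here is a hedged sketch of polling live tag values from a customer OPC UA server with the python-opcua client; the endpoint, node IDs, and polling interval are placeholders that would come from the site's own configuration.

```python
# tag_reader.py: sketch of polling live tag values from a customer OPC UA server,
# assuming the python-opcua client; the endpoint and node IDs are placeholders.
import time
from opcua import Client

ENDPOINT = "opc.tcp://historian.plant.local:4840"
TAGS = ["ns=2;s=Unit1.FeedFlow", "ns=2;s=Unit1.ReactorTemp"]

client = Client(ENDPOINT)
client.connect()
try:
    while True:
        # Read each configured tag and hand the readings to the ingestion pipeline.
        readings = {tag: client.get_node(tag).get_value() for tag in TAGS}
        print(readings)  # in practice: publish onto the broker, not stdout
        time.sleep(1)
finally:
    client.disconnect()
```

In a real deployment the readings would be published onto the broker rather than printed, and OPC UA subscriptions would typically replace polling where the server supports them.
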


Software Engineering Best Practices

  • Maintain clean, testable, container-ready codebases (an example test follows this list).
  • Implement CI/CD pipelines for model deployment and updates.
  • Work closely with product and data engineering teams to align system interfaces.
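
As one example of what "testable and container-ready" can mean in practice, the sketch below is a pytest smoke test that a CI pipeline could run before promoting a new image; it assumes the hypothetical inference service sketched earlier in this posting.

```python
# test_inference_service.py: smoke test a CI job could run before promoting an image,
# assuming the hypothetical inference_service module sketched earlier.
from fastapi.testclient import TestClient
from inference_service import app

client = TestClient(app)

def test_predict_returns_a_prediction():
    window = {"values": [[0.0] * 12] * 256}  # zeros in the expected [timesteps, channels] shape
    response = client.post("/predict", json=window)
    assert response.status_code == 200
    assert "prediction" in response.json()
```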


Requirements

  • MSc in Computer Science, Machine Learning, Data Science, or related field, or equivalent practical experience.
  • Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
  • Solid software engineering background — designing and debugging distributed systems.
  • Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
  • Familiarity with message brokers (Kafka, RabbitMQ, or similar).
  • Comfort working in hybrid cloud/on-prem deployments (AWS, Databricks, or industrial environments).
  • Exposure to time-series or industrial data (historians, IoT, SCADA/DCS logs) is a plus.
  • Ability to work in forward-deployed settings, collaborating directly with customers.

What Success Looks Like

  • Research models are hardened into fast, reliable services that run in production.
  • Customers see Orbital AI running live in their environment without downtime.
  • Microservice-based ML pipelines scale cleanly, with message brokering between components.
  • You become the go-to engineer bridging AI research, product, and customer integration.

Top Skills

AWS
Databricks
Docker
Kafka
Kubernetes
Python
PyTorch
RabbitMQ

The Company
HQ: London
30 Employees

What We Do

At Applied Computing, our vision is to deliver sustainable abundance for a growing planet through AI that works for the Energy Industry.

These industries are an enduring necessity: they power our planet, yet their complexity has kept them tethered to legacy systems. Today these critical industries utilise less than 10% of their data for decision-making and process optimisation.

Our flagship product, Orbital, is a Multi-Foundation AI system that enables these industries to finally trust AI in the control room, harness 100% of their data, and optimise in real time for any metric, unlocking faster decisions, safer operations, and higher performance.

With Orbital, we aim to transform how engineers and operators understand and optimise their plants, using trustworthy, physics-grounded AI.

This isn’t the future of Industrial AI; it’s the first time it’s worked.
