Senior ML Ops Engineer (Machine Learning Infrastructure)

Reposted 7 Days Ago
Los Angeles, CA
In-Office
150K-240K Annually
Senior level
Transportation
The Role
Lead design and development of scalable ML infrastructure for autonomous vehicles. Collaborate with ML teams for efficient model training and deployment. Manage end-to-end ML pipelines and cloud-based solutions.
Summary Generated by Built In

Parallel Systems is pioneering autonomous battery-electric rail vehicles designed to transform freight transportation by shifting portions of the $900 billion U.S. trucking industry onto rail. Our innovative technology offers cleaner, safer, and more efficient logistics solutions. Join our dynamic team and help shape a smarter, greener future for global freight.

Senior ML Ops Engineer (Machine Learning Infrastructure)

Parallel Systems is seeking an experienced MLOps/ML Infrastructure Engineer to lead the design and development of the scalable systems that power our autonomy and perception pipelines. As we build the first fully autonomous, battery-electric rail vehicles, you will play a critical role in enabling the ML teams to develop, train, and deploy models efficiently and reliably in both R&D and real-world environments.

This is an opportunity to take full ownership of the ML infrastructure stack, from distributed training environments and experiment tracking to deployment and monitoring at scale. You’ll collaborate closely with world-class engineers in autonomy, robotics, and software, helping shape the core systems that make real-time, safety-critical ML possible. If you're driven by building robust platforms that unlock innovation in AI and robotics, we’d love to work with you. 

This role requires at least one week per month in our LA office.

Responsibilities:

  • Design and implement robust MLOps solutions, including automated pipelines for data management, model training, deployment, and monitoring (a minimal orchestration sketch follows this list).
  • Architect, deploy, and manage scalable ML infrastructure for distributed training and inference. 
  • Collaborate with ML engineers to gather requirements and develop strategies for data management, model development and deployment. 
  • Build and operate cloud-based systems (e.g., AWS, GCP) optimized for ML workloads in R&D and production environments.
  • Build scalable ML infrastructure to support continuous integration/deployment, experiment management, and governance of models and datasets. 
  • Support the automation of model evaluation, selection, and deployment workflows. 
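
Purely as an illustration of the orchestration work described in the first bullet above (not Parallel Systems' actual stack), the sketch below shows how a daily training pipeline could be wired together as an Airflow DAG in Python. The DAG name, schedule, and task bodies are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; a real pipeline would call data, training,
    # evaluation, and deployment services here.
    def ingest_data():
        """Pull newly labeled sensor data into versioned storage (placeholder)."""

    def train_model():
        """Kick off a training job and log artifacts and metrics (placeholder)."""

    def evaluate_model():
        """Score the candidate model against a held-out benchmark (placeholder)."""

    def deploy_if_better():
        """Promote the model only if it beats the current release (placeholder)."""

    with DAG(
        dag_id="perception_training_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                      # Airflow 2.4+ keyword
        catchup=False,
    ) as dag:
        ingest = PythonOperator(task_id="ingest_data", python_callable=ingest_data)
        train = PythonOperator(task_id="train_model", python_callable=train_model)
        evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)
        deploy = PythonOperator(task_id="deploy_if_better", python_callable=deploy_if_better)

        # Linear dependency chain: ingest -> train -> evaluate -> (gated) deploy.
        ingest >> train >> evaluate >> deploy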

What Success Looks Like: 

  • After 30 Days: You have developed a deep understanding of the product goals, existing infrastructure, and stakeholder requirements. You've conducted technical discovery and proposed a preliminary MLOps architecture—evaluating various ML tools, cloud services, and workflow strategies—clearly outlining pros and cons for each option. 
  • After 60 Days: You've delivered a detailed design document that outlines the end-to-end ML pipeline, including data ingestion, model training, deployment, and monitoring. Based on feedback from ML engineers and stakeholders, you've iterated on the design and built a PoC for the core ML workflow aligned with the approved architecture.
  • After 90 Days: You have delivered the core features of the MLOps pipeline and successfully integrated key tools (e.g., MLflow, SageMaker, or Kubeflow); a minimal experiment-tracking sketch follows this list. You've also initiated the implementation of the remaining features, ensuring the infrastructure supports scalable, repeatable workflows for model experimentation and deployment in both R&D and production environments.
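
As a concrete but hypothetical illustration of the experiment tracking mentioned in the 90-day milestone, the sketch below logs a single training run with MLflow against a local file-backed store; the experiment name, parameters, and metric values are placeholders, and a shared tracking server would be used in practice.

    import mlflow

    mlflow.set_tracking_uri("file:./mlruns")      # local store; a shared server would be used in practice
    mlflow.set_experiment("perception-detector")  # hypothetical experiment name

    with mlflow.start_run(run_name="baseline-run"):
        mlflow.log_params({"lr": 1e-3, "batch_size": 64, "epochs": 20})
        # ... the actual training loop would run here ...
        mlflow.log_metric("val_map", 0.71)        # placeholder evaluation metric
        # mlflow.log_artifact("model.onnx")       # would attach the exported model if present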

Basic Requirements: 

  • Bachelor’s or higher degree in Computer Science, Machine Learning, or a relevant engineering discipline. 
  • 5+ years of experience building large-scale, reliable systems; 2+ years focused on ML infrastructure or MLOps. 
  • Proven experience architecting and deploying production-grade ML pipelines and platforms. 
  • Strong knowledge of ML lifecycle: data ingestion, model training, evaluation, packaging, and deployment. 
  • Hands-on experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Airflow, Metaflow, or similar). 
  • Deep understanding of CI/CD practices applied to ML workflows. 
  • Proficiency in Python, Git, and system design with solid software engineering fundamentals. 
  • Experience with cloud platforms (AWS, GCP, or Azure) and designing ML architectures in those environments. 

Preferred Qualifications: 

  • Experience with deep learning architectures (CNNs, RNNs, Transformers) or computer vision. 
  • Hands-on experience with distributed training tools (e.g., PyTorch DDP, Horovod, Ray); see the distributed-training sketch after this list.
  • Background in real-time ML systems and batch inference, including CPU/GPU-aware orchestration. 
  • Previous work in autonomous vehicles, robotics, or other real-time ML-driven systems. 
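
For illustration only, the sketch below shows a minimal single-node PyTorch DistributedDataParallel (DDP) training loop of the kind referenced above, assuming it is launched with torchrun --nproc_per_node=<num_gpus> train_ddp.py; the tiny linear model and random data are stand-ins for a real workload.

    import os

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, WORLD_SIZE, LOCAL_RANK, and MASTER_ADDR/PORT for us.
        dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
        local_rank = int(os.environ.get("LOCAL_RANK", 0))
        device = torch.device(f"cuda:{local_rank}") if torch.cuda.is_available() else torch.device("cpu")

        model = torch.nn.Linear(32, 4).to(device)  # stand-in for a real perception model
        ddp_model = DDP(model, device_ids=[local_rank] if device.type == "cuda" else None)
        optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

        for _ in range(10):                        # toy loop on random data; gradients sync across ranks
            x = torch.randn(64, 32, device=device)
            y = torch.randint(0, 4, (64,), device=device)
            loss = torch.nn.functional.cross_entropy(ddp_model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()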

We are committed to providing fair and transparent compensation in accordance with applicable laws. The target salary range listed below reflects the expected range for new hires in this role, based on factors such as skills, experience, qualifications, and location. Final compensation may vary and will be determined during the interview process.

Target Salary Range:
$150,000-$240,000 USD

Parallel Systems is an equal opportunity employer committed to diversity in the workplace. All qualified applicants will receive consideration for employment without regard to any discriminatory factor protected by applicable federal, state or local laws. We work to build an inclusive environment in which all people can come to do their best work.

Parallel Systems is committed to the full inclusion of all qualified individuals. As part of this commitment, Parallel Systems will ensure that persons with disabilities are provided reasonable accommodations. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact your recruiter.

Top Skills

Airflow
AWS
Azure
GCP
Kubeflow
Metaflow
MLflow
Python
SageMaker

The Company
HQ: Los Angeles, CA
56 Employees
Year Founded: 2020

What We Do

Parallel Systems is a startup company developing the future of intermodal transportation. Our mission is to decarbonize freight while improving supply chain logistics and safety. We are developing vehicles and software to create new autonomous and electric transportation systems for existing rail infrastructure, allowing railroads to convert part of the $700 billion U.S. trucking industry to rail.
