HPC Engineer

Posted Yesterday
Menlo Park, CA, USA
In-Office
350K-450K Annually
Senior level
Artificial Intelligence • Hardware • Information Technology • Robotics
From bits to atoms.
The Role
About Periodic Labs

The most important scientific discoveries of our time won’t happen in a traditional lab. We’re an AI and physical sciences company building state-of-the-art models to accelerate breakthroughs across materials, energy, and beyond. Backed by world-class investors and growing rapidly, we operate at the pace the frontier requires. Our team brings deep expertise, genuine ownership, and an insatiable drive to push the boundaries of what’s scientifically possible.

About the Role

As an HPC Engineer at Periodic Labs, you will design, build, and operate the high-performance computing infrastructure that powers our AI and scientific research. Our models demand extreme compute at scale — large GPU and CPU clusters, high-speed interconnects, low-latency parallel storage, and workload schedulers that make every cycle count. You will work directly with researchers and infrastructure engineers to ensure our compute environment is fast, reliable, and optimized for scientific discovery at the frontier.

This is a deeply hands-on role. You will architect and tune systems, automate provisioning, diagnose performance bottlenecks, and design for resilience at scale. You’ll partner with research and ML teams to understand their workloads and shape an HPC environment that removes friction and accelerates science.

What You’ll Do
  • Design, deploy, and operate large-scale GPU and CPU clusters for AI training, scientific simulation, and research workloads

  • Manage and optimize high-speed interconnect fabrics (InfiniBand, RoCE) and parallel filesystems (Lustre, GPFS, WEKA, or equivalent) for maximum throughput and minimum latency

  • Own workload scheduling and resource management using Slurm, Kubernetes, or similar systems — tuning for throughput, fairness, and researcher productivity

  • Implement and maintain automated cluster provisioning, configuration management, and lifecycle tooling using Ansible, Terraform, or custom orchestration

  • Monitor cluster health, performance, and utilization; build dashboards and alerting to proactively identify and resolve bottlenecks

  • Partner with research and ML engineering teams to profile workloads, diagnose performance issues, and tune hardware and software stacks for specific computational demands

  • Design and implement backup, disaster recovery, and fault-tolerance strategies for research data and compute infrastructure

  • Evaluate and integrate new hardware (GPUs, accelerators, networking) and software technologies as the field evolves

  • Establish standards and runbooks for HPC operations, capacity planning, and incident response

  • Collaborate with security and infrastructure teams to implement access controls, network segmentation, and compliance controls appropriate for a research environment

You Will Thrive in This Role If You Have
  • Experience designing and operating large-scale HPC or GPU clusters in research, cloud, or enterprise environments

  • Deep knowledge of high-speed interconnects such as InfiniBand (HDR/NDR) or RoCE, including fabric management, tuning, and troubleshooting

  • Hands-on experience with parallel and distributed storage systems (Lustre, GPFS, WEKA, BeeGFS, or similar) — configuration, performance tuning, and capacity management

  • Experience with workload managers and schedulers such as Slurm, PBS Pro, LSF, or Kubernetes-based HPC orchestration

  • Linux systems administration at scale, including kernel tuning, NUMA optimization, CPU and memory affinity, and GPU driver management

  • Infrastructure automation using Ansible, Terraform, or equivalent — you treat infrastructure as code

  • Experience with GPU computing environments including CUDA, NCCL, MPI, and multi-node distributed training or simulation setups

  • Performance profiling, benchmarking, and tuning of computational workloads across CPU, GPU, memory, network, and storage

  • Experience with monitoring and observability tooling (Prometheus, Grafana, or equivalent) in large, heterogeneous compute environments

  • Ability to collaborate with researchers or data scientists to understand workload requirements and translate them into infrastructure decisions

Especially Strong Candidates May Also Have
  • Experience operating GPU clusters for large-scale AI or ML training workloads such as multi-node transformer training

  • Familiarity with AI accelerators beyond GPUs, such as TPUs, Trainium, or custom ASIC environments

  • Experience in mixed on-prem and cloud HPC environments, including burst-to-cloud or hybrid scheduling patterns

  • Background in scientific computing domains such as computational chemistry, physics simulation, or bioinformatics

  • Experience with containerized HPC environments (Singularity/Apptainer, Docker, or container-aware schedulers)

  • Knowledge of network security, access control, and compliance requirements for regulated research data

  • Contributions to open-source HPC tooling or published work on HPC system design or performance

Mechanics

Minimum education: Bachelor’s degree or an equivalent combination of education and training or experience

Location: Our lab is in Menlo Park. We prefer candidates based in Menlo Park or San Francisco, but can be flexible depending on the role.

Compensation: The annual base compensation range for this role is $350,000–$450,000.

Visa sponsorship: Yes, we sponsor visas and will do everything we can to assist in this process with our legal support.

We’re building a team of the world’s best — the scientists, engineers, and problem-solvers who don’t just follow the frontier, they define it. If you’re driven to bring AI to life in the physical world and make discoveries that have never been made before, you belong here.

The Company
32 Employees
Year Founded: 2025

What We Do

We're building AI scientists and the autonomous laboratories for them to operate.

