AI Infrastructure Operations Engineer


Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference.

Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and enabling greater intelligence through additional agentic computation.

About The Role

We are seeking a highly skilled and experienced AI Infrastructure Operations Engineer to manage and operate our cutting-edge machine learning compute clusters. These clusters give you the opportunity to work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power.

You will play a critical role in ensuring the health, performance, and availability of our infrastructure, maximizing compute capacity, and supporting our growing AI initiatives. The role requires a deep understanding of Linux-based systems and containerization technologies, as well as experience monitoring and troubleshooting complex distributed systems. The ideal candidate is a proactive, dependable problem-solver with expertise in large-scale compute infrastructure and an advocate for customer success.

Responsibilities
  • Manage and operate multiple advanced AI compute infrastructure clusters. 
  • Monitor and oversee cluster health, proactively identifying and resolving potential issues. 
  • Maximize compute capacity through optimization and efficient resource allocation. 
  • Deploy, configure, and debug container-based services using Docker (a minimal automation sketch follows this list). 
  • Provide 24/7 monitoring and support, leveraging automated tools and performing hands-on troubleshooting as needed. 
  • Handle engineering escalations and collaborate with other teams to resolve complex technical challenges. 
  • Contribute to the development and improvement of our monitoring and support processes. 
  • Stay up-to-date with the latest advancements in AI compute infrastructure and related technologies. 
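
For a concrete flavor of the automation behind these responsibilities, here is a minimal, hypothetical Python sketch that checks Docker container health on a single node and restarts anything reporting unhealthy. The container names and the restart-on-unhealthy policy are illustrative assumptions, not a description of Cerebras' actual tooling.

#!/usr/bin/env python3
"""Hypothetical sketch: restart unhealthy Docker containers on one cluster node."""
import json
import subprocess

# Hypothetical services assumed to run on every node.
MONITORED = ("metrics-exporter", "scheduler-agent")

def health_status(name):
    """Return the Docker health status, or None if the container defines no HEALTHCHECK."""
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(result.stdout)
    return health["Status"] if health else None

def main():
    for name in MONITORED:
        status = health_status(name)
        print(f"{name}: {status or 'no healthcheck'}")
        if status == "unhealthy":
            # Restart and let the alerting pipeline record the event.
            subprocess.run(["docker", "restart", name], check=True)

if __name__ == "__main__":
    main()

In practice a check like this would feed the monitoring and alerting systems described below rather than print to stdout, but it shows the shape of the day-to-day work.
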
Skills And Requirements
  • 6-8 years of relevant experience in managing and operating complex compute infrastructure, preferably in the context of machine learning or high-performance computing. 
  • Strong proficiency in Python scripting for automation and system administration. 
  • Deep understanding of Linux-based compute systems and command-line tools. 
  • Extensive knowledge of Docker containers and of orchestration and workload-management platforms such as Kubernetes and SLURM (see the sketch after this list). 
  • Proven ability to troubleshoot and resolve complex technical issues in a timely and efficient manner. 
  • Experience with monitoring and alerting systems. 
  • A proven track record of owning challenges and driving them to completion. 
  • Excellent communication and collaboration skills. 
  • Ability to work effectively in a fast-paced environment. 
  • Willingness to participate in a 24/7 on-call rotation. 
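
To make the orchestration requirement concrete, here is a similarly hypothetical sketch that flags SLURM nodes sitting in down or drained states. The set of "bad" states and the print-based alert are assumptions standing in for a real monitoring integration.

#!/usr/bin/env python3
"""Hypothetical sketch: flag SLURM nodes that need operator attention."""
import subprocess

# Compact SLURM states that usually mean a node is out of service (assumed list).
BAD_STATES = ("down", "drain", "drng", "fail")

def node_states():
    """Map each node hostname to its compact state, as reported by sinfo."""
    result = subprocess.run(
        ["sinfo", "--noheader", "--format=%n %t"],
        capture_output=True, text=True, check=True,
    )
    states = {}
    for line in result.stdout.splitlines():
        node, state = line.split()
        states[node] = state
    return states

def main():
    for node, state in sorted(node_states().items()):
        if any(state.startswith(bad) for bad in BAD_STATES):
            print(f"ATTENTION: {node} is {state}")

if __name__ == "__main__":
    main()

A real deployment would raise a page or ticket instead of printing, and a Kubernetes cluster would get an equivalent check against the node API, but the pattern is the same.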

Preferred Skills And Requirements

  • Experience operating large-scale GPU clusters.
  • Knowledge of networking technologies such as Ethernet, RoCE, and TCP/IP.
  • Knowledge of cloud computing platforms (e.g., AWS, GCP, Azure).
  • Familiarity with machine learning frameworks and tools.
  • Experience with cross-functional team projects. 
Location 
  • SF Bay Area.
  • Toronto, Canada.
  • Bangalore, India.
Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Thrive in a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2026.

Apply today and join us at the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Top Skills

AWS
Azure
Docker
Ethernet
GCP
Kubernetes
Linux
Python
RoCE
SLURM
TCP/IP

The Company
HQ: Sunnyvale, CA
402 Employees
Year Founded: 2016

What We Do

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, functional business experts and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art.

The CS-2 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE-2). The WSE-2 is the largest chip ever built. It contains 2.6 trillion transistors and covers 46,225 square millimeters of silicon. The largest graphics processor on the market has 54 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now be trained in minutes on the Cerebras CS-2, powered by the WSE-2.

Join us: https://cerebras.net/careers/
