We are looking for an experienced Linux virtualization engineer who can identify and implement virtualization improvements, including live migrations, in DO’s hypervisor fleet. The right candidate will be working in KVM and QEMU to deliver innovative solutions to real world problems that negatively impact our customers. Reporting to the Manager of the Fleet Optimization Engineering team, the Senior Engineer will be responsible not only for the continuous improvement of our customers’ experience but also for the operational optimization of the hypervisor fleet by addressing challenging, low-level virtualization issues.
What You'll Be Doing:
- Work to root-cause and eliminate virtual machine downtime and performance issues in our production fleet.
- Collaborate with open source Linux, QEMU and libvirt communities to drive the evolution of Linux virtualization technologies and incorporate them into the DO fleet.
- Work with cross-team partners to unlock possibilities within our virtualization stack as it relates to performance, security, GPUs, migrations, and overall fleet management.
- Document, measure, and report metrics over time, such as performance, failures, API utilization, and version info.
- Contribute to continuous patch management efforts. Build, backport and deploy patches to our virtualization stack as needed.
- Suggest and implement improvements to our code release pipelines.
What We'll Expect From You:
- Significant experience with the internals of QEMU, KVM, the Linux kernel, and libvirt, along with strong proficiency in C/C++.
- Proven track record of solving problems at scale. You'll help drive homogenization across the fleet while making sustainable, long-term decisions for the organization.
- A strong security mindset. You are proactive when it comes to identifying and implementing security best practices in your domain.
- You are a terrific cross-team collaborator; much of this role involves working with other engineers and teams.
- Experience with Intel and AMD hardware virtualization extensions (VT-x/AMD-V).
Why You'll Like Working for DigitalOcean:
- We innovate with purpose. You'll be part of a cutting-edge technology company with an upward trajectory that is proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.
- We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.
- We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support your overall well-being, from one-time work from home stipend to wellness allowance to flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.
- We reward our employees. The salary range for this position is between $170,000 and $185,000, based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.
- We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.
*This is a remote role
#LI-Remote
What We Do
DigitalOcean is the Inference Cloud — a full-stack, production-ready cloud platform built to run AI applications with predictable performance, sustainable economics, and radically simpler operations at scale.
We are built for teams turning AI into real products — not just training models. Our advantage is not fewer features, but fewer failure modes when operating AI at scale — combining minimal operational overhead, predictable cost efficiency, and a full-stack cloud that works as a system. Hyperscalers are broad by design. Neoclouds are infrastructure-first. DigitalOcean is inference-first — with a real cloud underneath. It combines inference-optimized compute, managed inference software, and integrated cloud capabilities that reduce operational burden for teams running real workloads. Inference is the foundation—not the boundary. Everything else builds on top of it.
Why Work With Us
At DO, we do career-defining work. We innovate with AI and build cutting-edge tech. Our rewards match that intensity: they motivate you, recognize your impact, and give you what you need to thrive. If you have a growth mindset, like to think big and bold, and are energized by a fast-paced environment, you'll find your place here.
Remote Workspace
We are committed to both remote work and in-person collaboration. Ways of working depend on the specific role and are mutually agreed upon with employees. In the US, we are mainly remote; in our APAC locations, we take a hybrid in-office approach.
