Launched in November 2023, the Nebius platform provides high-end infrastructure and tools for training, fine-tuning and inference. Based in Europe with a global footprint, we aspire to become the leading AI cloud for AI practitioners around the world.
Nebius is built around the talents of some 400 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius cloud – from hardware to UI – to be built in-house, differentiating Nebius from the majority of specialized clouds. As a result, Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners.
As an NVIDIA preferred cloud service provider, Nebius offers the latest NVIDIA GPUs, including the H100 and L40S, with H200 and Blackwell chips coming soon.
Nebius owns a data center in Finland, built from the ground up by the company’s R&D team. We are expanding our infrastructure and plan to add new colocation data centers in Europe and North America this year, and to build several greenfield DCs in the near future.
Our Finnish data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 19th most powerful globally (TOP500 list, June 2024). It also epitomizes our commitment to sustainability, with energy efficiency levels significantly above the global average and an innovative system that recovers waste heat to warm 2,000 residential buildings in the nearby town of Mäntsälä.
Nebius is headquartered in Amsterdam, Netherlands, with R&D and commercial hubs across North America, Europe and Israel.
The role
We are looking for a Support Engineer (L2). You will resolve complex issues escalated from L1 support and Technical Account Managers, applying skills in Linux, networking, Kubernetes and scripting for effective troubleshooting. This role requires strong problem-solving, clear communication and a customer-focused approach to ensure seamless service.
You’re welcome to work remotely from the USA.
Your responsibilities will include:
1. Issue resolution
- Diagnose and resolve technical issues efficiently, focusing on Linux, networking and Kubernetes environments.
- Troubleshoot software, network and storage issues, documenting solutions for future reference.
2. Technical expertise
- Apply Linux skills to manage OS-level issues, utilize basic networking knowledge, support Kubernetes environments and use Python/Bash scripting for automation.
- Understand data storage concepts for diagnosing storage-related issues.
3. Customer communication
- Provide timely updates to customers, communicate complex issues clearly and escalate unresolved issues as needed.
4. Documentation and knowledge sharing
- Create and update technical documentation and mentor L1 support staff on recurring issues.
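The scripting and knowledge-sharing duties above can be sketched in a few lines of Python. This is only an illustration; the log lines and the `top_errors` helper are invented for this sketch, not part of any Nebius tooling. The idea is to surface recurring error signatures so a fix can be documented once and reused:

```python
# Minimal sketch: count recurring error signatures in a log excerpt so
# repeat issues can be identified and documented for L1 support.
from collections import Counter

# Invented sample log lines, standing in for real support-ticket output.
LOG = """\
ERROR: disk full on /var
ERROR: timeout connecting to metrics endpoint
ERROR: disk full on /var
"""

def top_errors(log_text, n=2):
    """Return the n most frequent error lines with their counts."""
    counts = Counter(line.strip() for line in log_text.strip().splitlines())
    return counts.most_common(n)

print(top_errors(LOG))
# → [('ERROR: disk full on /var', 2), ('ERROR: timeout connecting to metrics endpoint', 1)]
```

In practice a script like this would read from a ticketing export or journal rather than a hardcoded string, but the triage pattern is the same.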
We expect you to have:
- A bachelor’s degree in Computer Science, Information Technology or a related field (preferred).
- 5+ years in technical support with Linux and networking experience.
- Mid-level Linux skills, plus working knowledge of networking, Kubernetes, Python/Bash scripting and data storage.
- An understanding of how GPUs accelerate ML workloads.
- The ability to assist with resource provisioning, scaling, and integration within ML workflows.
- Familiarity with CUDA, Tensor Cores, and distributed training across multiple GPUs.
- The ability to troubleshoot memory errors, driver/library mismatches, and GPU utilization bottlenecks.
- The ability to debug common errors during model training (e.g., OOM errors, version compatibility issues).
- Knowledge of Docker (for packaging ML workflows) and Kubernetes (for scaling and managing GPU workloads in cloud environments).
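The GPU-troubleshooting bullets above can be illustrated with a minimal sketch. The `nvidia-smi` query fields named in the comment are real options, but the sample readings, thresholds and the `triage` helper are invented for this example, which flags GPUs that are either underutilized or close to out-of-memory:

```python
# Sketch: triaging GPU utilization bottlenecks from nvidia-smi-style output.
# The sample text stands in for a live call such as
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
#              --format=csv,noheader,nounits
# Columns: gpu index, utilization %, memory used (MiB), memory total (MiB).
SAMPLE = """\
0, 97, 30000, 40960
1, 12, 40200, 40960
2, 0, 512, 40960
"""

def triage(csv_text, util_floor=50, mem_ceiling=0.95):
    """Flag GPUs that are near out-of-memory or underutilized."""
    findings = []
    for line in csv_text.strip().splitlines():
        idx, util, used, total = (int(x) for x in line.split(","))
        if used / total > mem_ceiling:
            findings.append((idx, "near OOM"))
        elif util < util_floor:
            findings.append((idx, "underutilized"))
    return findings

print(triage(SAMPLE))
# → [(1, 'near OOM'), (2, 'underutilized')]
```

A GPU that is nearly out of memory often points at batch sizes or caching, while low utilization with healthy memory usually points at a data-loading or CPU bottleneck; distinguishing the two is the kind of first-pass diagnosis this role involves.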
We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!