Senior Software Engineer, LLM Performance

Posted 16 Days Ago
7 Locations
In-Office or Remote
Senior level
Artificial Intelligence • Cloud • Hardware • Information Technology • Software
The Role
Optimize and integrate LLMs across the stack, from GPU kernels to Kubernetes deployments. Improve inference performance via kernel development, algorithmic techniques (quantization, speculative decoding), and contributions to open-source LLM engines like vLLM. Drive hardware utilization, profiling, and scalable, enterprise-grade implementations.
Summary Generated by Built In

Parasail is redefining AI infrastructure by enabling seamless deployment across a distributed network of GPUs, optimizing for cost, performance, and flexibility. Our mission is to empower AI developers with a fast, cost-efficient, and scalable cloud experience—free from vendor lock-in and designed for the next generation of AI workloads.

Job Description:

The Senior Software Engineer, LLM Performance plays a crucial role in delivering a competitive platform by focusing on efficiently scheduling, executing, and managing AI workloads on distributed compute systems. This role is deeply technical, spanning from low-level GPU kernels to distributed AI orchestration and Kubernetes (K8s) deployments. It is about more than optimization; it’s about pioneering efficient infrastructure that supports AI’s transformative role in reshaping productivity, revolutionizing industries, and addressing some of the world’s most challenging problems. You’ll ensure that generative AI — including large language models (LLMs), multi-modal models, and diffusion models — operates efficiently at enterprise scale while driving continuous improvements in cost, performance, and sustainability.

Responsibilities:

  • Add support for new LLMs, working across the stack from low-level GPU kernels to Kubernetes-based deployments.

  • Contribute to cutting-edge open-source LLM engines such as vLLM or SGLang to extend their capabilities and performance (e.g., improving API servers or request schedulers in Python).

  • Operate closer to the hardware, building and integrating solutions that boost performance and hardware utilization. For example, improve attention backends like FlashAttention or FlashInfer by contributing to their development and optimization, or by integrating them into vLLM.

  • Improve LLM performance using advanced algorithmic solutions such as speculative decoding, quantization, or other state-of-the-art techniques. Understand the impact of such techniques on model quality.
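The quantization/quality trade-off in the last bullet can be sketched in a few lines. This is a hypothetical minimal example, not how a production engine does it: real stacks like vLLM use fused GPU kernels and per-channel or per-group scales rather than a single per-tensor scale.

```python
# Minimal sketch of symmetric per-tensor int8 weight quantization.
# Assumption: one scale for the whole tensor; production systems use
# finer-grained scales to reduce quality loss.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights to int8 with a single symmetric scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
# Round-trip error is bounded by half a quantization step (scale / 2),
# which is one way to reason about the quality impact of quantization.
assert err <= scale / 2 + 1e-6
```

Measuring this reconstruction error (and, more importantly, downstream perplexity or task accuracy) is how the "impact on model quality" gets quantified in practice.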

Qualifications:

  • Expertise in GPU computing, from low-level platforms such as CUDA and ROCm to compiler and framework stacks such as XLA, PyTorch, and JAX.

  • Background in performance analysis and optimization of AI/HPC workloads (e.g., profiling or theoretical analysis of FLOPs and bandwidth).

  • Experience writing GPU kernels using technologies like CUDA, CUTLASS, or Triton.

  • Strength in Python and C++.

  • Demonstrated contributions to open-source projects. Contributions to inference engines such as vLLM are a strong plus.

  • A production-oriented mindset emphasizing robust, scalable code suitable for enterprise-grade applications.

  • A relentless curiosity about cutting-edge AI technologies combined with a passion for solving complex problems.
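The "theoretical analysis of FLOPs and bandwidth" named in the qualifications can be illustrated with a back-of-the-envelope roofline estimate. The peak numbers below are assumptions for an H100-class GPU, not datasheet guarantees; substitute your hardware's actual figures.

```python
# Roofline-style estimate: is a GEMM compute-bound or memory-bound?
# PEAK values are assumed H100-class figures (vary by SKU and clocks).
PEAK_FLOPS = 989e12   # assumed dense BF16 peak, FLOP/s
PEAK_BW = 3.35e12     # assumed HBM bandwidth, bytes/s

def gemm_intensity(m: int, n: int, k: int, bytes_per_elem: int = 2) -> float:
    """Arithmetic intensity (FLOPs per byte) of C[m,n] = A[m,k] @ B[k,n]."""
    flops = 2 * m * n * k
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

def bound(m: int, n: int, k: int) -> str:
    # The ridge point is where compute time equals memory time.
    ridge = PEAK_FLOPS / PEAK_BW
    return "compute-bound" if gemm_intensity(m, n, k) > ridge else "memory-bound"

# Single-token decode is a GEMV: it moves the whole weight matrix for
# very few FLOPs, so it is memory-bound. Large prefill GEMMs are not.
print(bound(1, 4096, 4096))       # memory-bound
print(bound(4096, 4096, 4096))    # compute-bound
```

This kind of arithmetic explains why decode throughput tracks memory bandwidth (and why quantization helps it), while prefill tracks peak FLOPs.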

What You Bring to the Table: We are looking for people eager to learn and master the low-level compute concepts critical to the AI revolution. With us, your skills will go beyond writing code: they will directly shape the scalability and efficiency of AI applications at large. If you're ready for the challenge of optimizing AI performance and eager to push our technological capabilities to new heights, we're excited to welcome you aboard.

Top Skills

CUDA, ROCm, XLA, PyTorch, JAX, CUTLASS, Triton, FlashAttention, FlashInfer, vLLM, SGLang, Python, C++, Kubernetes

The Company
HQ: San Mateo, California
23 Employees
Year Founded: 2023

What We Do

Parasail is the first AI Deployment Network built for the new era of open and scalable AI. We connect teams to the world's largest pool of on-demand GPU compute, giving AI builders fast, flexible, and cost-efficient infrastructure to deploy and scale models without contracts, quotas, or cloud complexity. From real-time inference to massive batch jobs, Parasail intelligently matches workloads across a global GPU network, optimizing for performance, price, and geography. No DevOps burden, no vendor lock-in: just plug-and-play access to high-performance infrastructure that works with the latest open-source models and evolving AI stacks.

Companies like Weights & Biases, Elicit, Rasa, Everpilot, and Oumi are already building faster and saving up to 30x on costs with Parasail. The future of AI deployment isn't a single cloud. It's a global compute network. Parasail is making that future a reality. 🔗 www.parasail.io
