Staff Software Engineer - GenAI Performance and Kernel

San Francisco, CA
In-Office
191K-233K Annually
Senior level
Big Data • Machine Learning • Software • Analytics • Big Data Analytics
The Role
Lead the development and optimization of GPU kernels for GenAI inference, focusing on performance improvements and mentorship of engineers.

P-1285

About This Role

As a staff software engineer for GenAI Performance and Kernel, you will own the design, implementation, optimization, and correctness of the high-performance GPU kernels powering our GenAI inference stack. You will lead development of highly tuned, low-level compute paths, manage trade-offs between hardware efficiency and generality, and mentor others in kernel-level performance engineering. You will work closely with ML researchers, systems engineers, and product teams to advance the state of the art in inference performance at scale.

What You Will Do
  • Lead the design, implementation, benchmarking, and maintenance of core compute kernels (e.g. attention, MLP, softmax, layernorm, memory management) optimized for various hardware backends (GPU, accelerators)
  • Drive the performance roadmap for kernel-level improvements: vectorization, tensorization, tiling, fusion, mixed precision, sparsity, quantization, memory reuse, scheduling, auto-tuning, etc.
  • Integrate kernel optimizations with higher-level ML systems
  • Build and maintain profiling, instrumentation, and verification tooling to detect correctness, performance regressions, numerical issues, and hardware utilization gaps
  • Lead performance investigations and root-cause analysis on inference bottlenecks, e.g. memory bandwidth, cache contention, kernel launch overhead, tensor fragmentation
  • Establish coding patterns, abstractions, and frameworks to modularize kernels for reuse, cross-backend portability, and maintainability
  • Influence system architecture decisions to make kernel improvements more effective (e.g. memory layout, dataflow scheduling, kernel fusion boundaries)
  • Mentor and guide other engineers working on lower-level performance, provide code reviews, and help set best practices
  • Collaborate with infrastructure, tooling, and ML teams to roll out kernel-level optimizations into production, and monitor their impact
What We Look For
  • BS/MS/PhD in Computer Science, or a related field
  • Deep hands-on experience writing and tuning compute kernels (CUDA, Triton, OpenCL, LLVM IR, assembly, or similar) for ML workloads
  • Strong knowledge of GPU/accelerator architecture: warp structure, memory hierarchy (global, shared, register, L1/L2 caches), tensor cores, scheduling, SM occupancy, etc.
  • Experience with advanced optimization techniques: tiling, blocking, software pipelining, vectorization, fusion, loop transformations, auto-tuning
  • Familiarity with ML-specific kernel libraries (cuBLAS, cuDNN, CUTLASS, oneDNN, etc.) or open-source kernel implementations
  • Strong debugging and profiling skills (Nsight, nvprof, perf, VTune, custom instrumentation)
  • Experience reasoning about numerical stability, mixed precision, quantization, and error propagation
  • Experience in integrating optimized kernels into real-world ML inference systems; exposure to distributed inference pipelines, memory management, and runtime systems
  • Experience building high-performance products leveraging GPU acceleration
  • Excellent communication and leadership skills — able to drive design discussions, mentor colleagues, and make trade-offs visible
  • A track record of shipping performance-critical, high-quality production software
  • Bonus: publications in systems/ML performance venues (e.g. MLSys, ASPLOS, ISCA, PPoPP), experience with custom accelerators or FPGAs, and experience with sparsity or model compression techniques


Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed below. For more information regarding which range your location is in, visit our page here.


Local Pay Range
$190,900-$232,800 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. 

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Top Skills

cuBLAS
CUDA
cuDNN
CUTLASS
LLVM IR
NVIDIA Nsight
nvprof
oneDNN
OpenCL
perf
Triton
VTune
