At Groq, we believe in an AI economy powered by human agency. We envision a world where AI is accessible to all, a world that demands processing power that is better, faster, and more affordable than what is available today. AI applications are currently constrained by the limitations of the Graphics Processing Unit (GPU), a technology originally developed for the gaming market and soon to become the weakest link in the AI economy.
Enter Groq's LPU™ AI Inference Technology. Specifically engineered for the demands of large language models (LLMs), the Language Processing Unit outpaces the GPU in speed, power, efficiency, and cost-effectiveness. The quickest way to understand the opportunity is to watch the following talk – groq.link/scspdemo.
Why join Groq? AI will change humanity forever, and we believe preservation of human agency and self-determination is only possible if AI is made affordably and universally accessible. Groq’s LPUs will power AI from an early stage, and you will get to leave your fingerprint on civilization.
Principal Inference Stack Engineer
Mission: Lead technical efforts focused on mapping ML workloads onto Groq’s LPU through deep, first-hand working knowledge of Groq’s state-of-the-art spatial compiler and ML inference stack.
Responsibilities & outcomes:
- Analyze the latest ML workloads from Groq partners or Cloud, and develop optimization roadmaps and strategies to improve workload inference performance and operating efficiency
- Design, develop, and maintain the optimizing compiler for Groq's LPU
- Expand the Groq runtime API to simplify the execution model of Groq LPUs
- Benchmark and analyze the output produced by the optimizing compiler and runtime, and drive enhancements that improve quality-of-results as measured on Groq LPU hardware
- Manage large multi-person, multi-geo projects and interface with various leads across the company
- Mentor junior compiler engineers and collaborate with other senior compiler engineers on the team
- Review and accept code updates to compiler passes and IR definitions
- Work with hardware teams and architects to drive improvements in both the architecture and the software compiler
- Publish novel compilation techniques for Groq's TSP at top-tier ML, Applications, Compiler, and Computer Architecture conferences
Ideal candidates have/are:
- 10+ years of experience in computer science/engineering or a related field
- 5+ years of direct experience with C/C++ and runtime frameworks
- Knowledge of LLVM and compiler architecture
- Experience with mapping HPC, ML, or Deep Learning workloads to accelerators
- Knowledge of spatial architectures such as FPGAs or CGRAs an asset
- Knowledge of distributed systems and disaggregated compute desired
- Knowledge of functional programming an asset
- Experience with ML frameworks such as TensorFlow or PyTorch desired
- Knowledge of ML and deep learning IR representations such as ONNX
Attributes of a Groqster:
- Humility - Egos are checked at the door
- Collaborative & Team Savvy - We make up the smartest person in the room, together
- Growth & Giver Mindset - Learn it all versus know it all, we share knowledge generously
- Curious & Innovative - Take a creative approach to projects, problems, and design
- Passion, Grit, & Boldness - no limit thinking, fueling informed risk taking
If this sounds like you, we’d love to hear from you!
Compensation: At Groq, a competitive base salary is part of our comprehensive compensation package, which includes equity and benefits. For this role, the base salary range is $248,710 to $407,100, determined by your skills, qualifications, experience, and internal benchmarks.
Location: Groq is a geo-agnostic company, meaning you work where you are. Exceptional candidates will thrive in asynchronous partnerships and remote collaboration methods. Some roles may require being located near our primary sites, as indicated in the job description.
At Groq: Our goal is to hire and promote an exceptional workforce as diverse as the global populations we serve. Groq is an equal opportunity employer committed to diversity, inclusion, and belonging in all aspects of our organization. We value and celebrate diversity in thought, beliefs, talent, expression, and backgrounds. We know that our individual differences make us better.
Groq is an Equal Opportunity Employer that is committed to inclusion and diversity. Qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, disability or protected veteran status. We also take affirmative action to offer employment opportunities to minorities, women, individuals with disabilities, and protected veterans.
Groq is committed to working with qualified individuals with physical or mental disabilities. Applicants who would like to contact us regarding the accessibility of our website or who need special assistance or a reasonable accommodation for any part of the application or hiring process may contact us at: [email protected]. This contact information is for accommodation requests only. Evaluation of requests for reasonable accommodations will be determined on a case-by-case basis.
What We Do
Groq is an AI solutions company delivering ultra-low latency AI inference with the world's first Language Processing Unit™. With turnkey generalized software and a deterministic Tensor Streaming architecture, Groq offers a synchronous ecosystem built for ultra-fast inference at scale. Groq solutions maximize human capital and innovative technology performance, having been proven to reduce developer complexity and accelerate time-to-production and ROI. Designed, engineered, and manufactured completely in North America, Groq offers a domestically based, scalable supply that is available now, capable of delivering 390 racks in 6-12 months, with ramped lead times of 6-12 weeks. Learn more at groq.com.