About Anyscale
At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We're commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more have Ray in their tech stacks to accelerate getting AI applications out into the real world.
With Anyscale, we’re building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert.
Proud to be backed by Andreessen Horowitz, NEA, and Addition with $250+ million raised to date.
About the role
As a Distributed LLM Inference Engineer, you will help build systems and optimizations that push the boundaries of performance for inference at large scale. This role is critical to Anyscale: it allows us to offer market-leading performance and pricing for AI infrastructure.
As part of this role, you will:
- Iterate quickly with product teams to ship end-to-end solutions for batch and online inference at high scale, which will be used by Anyscale customers
- Work across the stack, integrating Ray Data and the LLM engine and optimizing at every layer to deliver low-cost solutions for large-scale ML inference
- Integrate with open-source software like vLLM, work closely with the community to adopt these techniques in Anyscale solutions, and contribute improvements back to open source
- Follow the latest state of the art in the open-source and research communities, implementing and extending best practices
We'd love to hear from you if you have
- Familiarity with running ML inference at large scale with high throughput
- Familiarity with deep learning and deep learning frameworks (e.g. PyTorch)
- A solid understanding of distributed systems and ML inference challenges
Bonus points!
- ML Systems knowledge
- Experience using Ray Data
- Experience working closely with the community on LLM engines like vLLM and TensorRT-LLM
- Contributions to deep learning frameworks (PyTorch, TensorFlow)
- Contributions to deep learning compilers (Triton, TVM, MLIR)
- Prior experience working on GPUs / CUDA
Compensation
- At Anyscale, we take a market-based approach to compensation. We are data-driven, transparent, and consistent. The target salary for this role is $170,112 to $247,000. As market data changes over time, the target salary for this role may be adjusted.
- Stock Options
- Healthcare plans, with premiums covered by Anyscale at 99%
- 401k Retirement Plan
- Wellness stipend
- Education stipend
- Paid Parental Leave
- Flexible Time Off
- Commute reimbursement
- 100% of in-office meals covered
Anyscale Inc. is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law.
Anyscale Inc. is an E-Verify company, and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.