LLM Inference Frameworks and Optimization Engineer

Remote (Amsterdam)
Mid level
Artificial Intelligence • Information Technology
The Role
Design and optimize distributed inference frameworks for LLMs, ensuring high performance and scalability while collaborating on software-hardware co-design.

About the Role

At Together.ai, we are building state-of-the-art infrastructure to enable efficient and scalable inference for large language models (LLMs). Our mission is to optimize inference frameworks, algorithms, and infrastructure, pushing the boundaries of performance, scalability, and cost-efficiency.

We are seeking an Inference Frameworks and Optimization Engineer to design, develop, and optimize distributed inference engines that support multimodal and language models at scale. This role will focus on low-latency, high-throughput inference, GPU/accelerator optimizations, and software-hardware co-design, ensuring efficient large-scale deployment of LLMs and vision models.

This role offers a unique opportunity to shape the future of LLM inference infrastructure, ensuring scalable, high-performance AI deployment across a diverse range of applications. If you're passionate about pushing the boundaries of AI inference, we’d love to hear from you!

Responsibilities

Inference Framework Development and Optimization

  • Design and develop fault-tolerant, high-concurrency distributed inference engines for text, image, and multimodal generation models.
  • Implement and optimize distributed inference strategies, including Mixture of Experts (MoE) parallelism, tensor parallelism, and pipeline parallelism, for high-performance serving.
  • Apply CUDA graph optimizations, TensorRT/TRT-LLM graph optimizations, PyTorch compilation (torch.compile), and speculative decoding to enhance efficiency and scalability (a minimal illustrative sketch follows this list).
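To give a purely illustrative flavor of the compilation-based optimizations named above (this is a hypothetical sketch, not a description of Together AI's actual stack), the snippet below compiles a Hugging Face causal LM with torch.compile in "reduce-overhead" mode, which lets PyTorch capture the step into CUDA graphs where possible. The model name is a placeholder and a CUDA-capable GPU is assumed.

    # Hypothetical sketch: compile a decode step so PyTorch can use CUDA graphs
    # to cut per-token kernel launch overhead. Requires a CUDA-capable GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "gpt2"  # placeholder model, chosen only for illustration
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id).eval().cuda()

    # "reduce-overhead" asks the compiler to use CUDA graphs where it can.
    compiled_model = torch.compile(model, mode="reduce-overhead")

    inputs = tokenizer("Efficient inference is", return_tensors="pt").to("cuda")
    with torch.inference_mode():
        logits = compiled_model(**inputs).logits
        next_token = logits[:, -1].argmax(dim=-1)
    print(tokenizer.decode(next_token))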

Software-Hardware Co-Design and AI Infrastructure

  • Collaborate with hardware teams on performance bottleneck analysis and co-optimize inference performance for GPUs, TPUs, or custom accelerators (see the profiling sketch after this list).
  • Work closely with AI researchers and infrastructure engineers to develop efficient model execution plans and optimize end-to-end (E2E) model serving pipelines.
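As one small example of the kind of bottleneck analysis mentioned above (an illustrative sketch only; the layer and shapes are arbitrary placeholders), torch.profiler can rank operators by GPU time for a single forward pass:

    # Hypothetical sketch: profile a forward pass to locate GPU-side bottlenecks.
    import torch
    from torch.profiler import profile, ProfilerActivity

    model = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16).eval().cuda()
    x = torch.randn(8, 128, 1024, device="cuda")  # placeholder input shape

    with torch.inference_mode(), profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]
    ) as prof:
        model(x)

    # Rank operators by GPU time to see where optimization effort should go.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))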

Requirements

Must-Have:

  • Experience:
    • 3+ years of experience in deep learning inference frameworks, distributed systems, or high-performance computing.
  • Technical Skills:
    • Familiarity with at least one LLM inference framework (e.g., TensorRT-LLM, vLLM, SGLang, TGI (Text Generation Inference)); a hypothetical usage sketch follows this list.
    • Background knowledge and experience in at least one of the following: GPU programming (CUDA/Triton/TensorRT), compilers, model quantization, or GPU cluster scheduling.
    • Deep understanding of KV cache systems like Mooncake, PagedAttention, or custom in-house variants.
  • Programming:
    • Proficient in Python and C++/CUDA for high-performance deep learning inference.
  • Optimization Techniques:
    • Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
    • Knowledge of inference optimization techniques such as workload scheduling, CUDA graphs, compilation, and efficient kernels.
  • Soft Skills:
    • Strong analytical problem-solving skills with a performance-driven mindset.
    • Excellent collaboration and communication skills across teams.
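For context on the frameworks and KV cache systems referenced above, here is a minimal, hedged sketch of offline generation with vLLM (one of the listed frameworks), whose PagedAttention-based KV cache manager handles memory for concurrent requests. The model name and sampling settings are placeholders chosen only for illustration.

    # Hypothetical sketch: offline generation with vLLM.
    from vllm import LLM, SamplingParams

    # Placeholder model and settings, for illustration only.
    llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    outputs = llm.generate(["The future of AI inference is"], sampling_params)
    for output in outputs:
        print(output.outputs[0].text)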

Nice-to-Have:

  • Experience developing software systems for large-scale data center networks with RDMA/RoCE.
  • Familiarity with distributed filesystems (e.g., 3FS, HDFS, Ceph).
  • Familiarity with open-source distributed scheduling/orchestration frameworks, such as Kubernetes (K8s).
  • Contributions to open-source deep learning inference projects.


About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.


Please see our privacy policy at https://www.together.ai/privacy  


Top Skills

C++
CUDA
Distributed Systems
High-Performance Computing
Python
PyTorch
TensorRT