LLM Inference Deployment Engineer

Sorry, this job was removed at 06:19 p.m. (CST) on Monday, Aug 04, 2025
28 Locations
In-Office or Remote
Artificial Intelligence • Hardware • Software
The Role

EnCharge AI is a leader in advanced AI hardware and software systems for edge-to-cloud computing. EnCharge’s robust and scalable next-generation in-memory computing technology provides orders-of-magnitude higher compute efficiency and density compared to today’s best-in-class solutions. The high-performance architecture is coupled with seamless software integration and will enable the immense potential of AI to be accessible in power-, energy-, and space-constrained applications. EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems.

About the Role

EnCharge AI is seeking an LLM Inference Deployment Engineer to optimize, deploy, and scale large language models (LLMs) for high-performance inference on its energy-efficient AI accelerators. You will work at the intersection of AI frameworks, model optimization, and runtime execution to deliver efficient, low-latency AI inference.

Responsibilities

  • Deploy and optimize post-trained LLMs (GPT, LLaMA, Mistral, Falcon, etc.) from libraries such as Hugging Face Transformers.

  • Utilize inference runtimes such as ONNX Runtime and vLLM for efficient execution.

  • Optimize batching, caching, and tensor parallelism to improve LLM scalability in real-time applications (illustrated in the sketch after this list).

  • Develop and maintain high-performance inference pipelines using Docker, Kubernetes, and dedicated inference servers.
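
As a rough illustration of the serving work described in the bullets above, here is a minimal sketch that loads a Hugging Face checkpoint with vLLM and generates a batch of prompts using continuous batching and tensor parallelism. The model name, device count, and context length are assumptions made for the example, not requirements stated in this posting.

  from vllm import LLM, SamplingParams

  # Assumed model and settings -- any causal LM supported by vLLM would work here.
  llm = LLM(
      model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder checkpoint
      tensor_parallel_size=2,                    # shard weights across two accelerators
      gpu_memory_utilization=0.90,               # leave headroom for the KV cache
      max_model_len=8192,                        # long-context budget for this deployment
  )

  params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

  # vLLM schedules these requests with continuous batching over a paged KV cache.
  prompts = [
      "Summarize the benefits of in-memory computing for AI inference.",
      "Explain tensor parallelism in one paragraph.",
  ]
  for output in llm.generate(prompts, params):
      print(output.outputs[0].text)

In production, logic like this typically sits behind an OpenAI-compatible serving endpoint inside a container image, which is where the Docker and Kubernetes work comes in.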

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.

  • Experience in LLM inference deployment, model optimization, and runtime engineering.

  • Strong expertise in LLM inference frameworks (PyTorch, ONNX Runtime, vLLM, TensorRT-LLM, DeepSpeed).

  • In-depth knowledge of the Python programming language for model integration and performance tuning.

  • Strong understanding of high-level model representations and experience implementing framework-level optimizations for Generative AI use cases.

  • Experience with containerized AI deployments (Docker, Kubernetes, Triton Inference Server, TensorFlow Serving, TorchServe).

  • Strong knowledge of LLM memory optimization strategies for long-context applications (see the KV-cache sizing sketch after this list).

  • Experience with real-time LLM applications (chatbots, code generation, retrieval-augmented generation). 
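
For the long-context memory point above, most of the pressure comes from the KV cache, whose footprint grows linearly with context length and batch size. A back-of-the-envelope sketch, assuming a LLaMA-2-7B-like geometry and fp16 storage (the numbers are illustrative, not taken from this posting):

  def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, batch_size, bytes_per_elem=2):
      # Two tensors (K and V) per layer, each of shape [batch, kv_heads, seq_len, head_dim],
      # stored in fp16/bf16 (2 bytes per element) unless the cache is quantized.
      return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

  # Assumed geometry: 32 layers, 32 KV heads, head dimension 128.
  size = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                        seq_len=32_768, batch_size=4)
  print(f"{size / 1e9:.1f} GB")  # ~68.7 GB: why paged attention, GQA, and cache quantization matter

Techniques such as paged KV caches, grouped-query attention, sliding-window attention, and cache quantization all target this term.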

EnCharge AI is an equal employment opportunity employer in the United States.

The Company
HQ: Santa Clara, CA
31 Employees
Year Founded: 2022

