FuriosaAI is looking for passionate AI Software Engineers to join our Platform Team. You will participate in the research and development of models optimized for our NPU accelerator.
Our team builds the production-grade, streamlined AI software that makes up our SDK. This includes the runtime, LLM serving framework, and PyTorch models/extensions.
Your work on these critical parts of the SDK will directly enable AI developers to efficiently deploy optimized AI models on FuriosaAI NPUs.
Responsibilities
Develop and optimize DNN model implementations in PyTorch for FuriosaAI's Tensor Contraction Processor (TCP) architecture
Analyze the features, implementations, and CUDA/Triton kernels of existing AI inference frameworks such as vLLM, TensorRT-LLM, and DeepSpeed-MII
Research and implement generative AI models, parallelism strategies, and inference techniques to improve performance and efficiency
Collaborate closely with the compiler team to enable and optimize models
Required Qualifications
BS degree in Computer Science, Engineering, or a related field, or equivalent industry experience
Proficiency in Python programming
Experience in developing AI models in DNN frameworks (e.g., PyTorch)
Solid understanding of machine learning, deep learning, natural language processing (NLP), and/or generative AI models
Strong communication skills with the ability to collaborate effectively across cross-functional teams
Preferred Qualifications
Hands-on experience with PyTorch 2.0 technologies (e.g., TorchDynamo) or DNN compiler technologies, such as Triton and MLIR
Proficiency in C++/CUDA or Rust programming
Hands-on experience deploying and optimizing large-scale ML models in production
Hands-on experience in model training and fine-tuning of pre-trained models
Experience with LLM inference frameworks such as vLLM, TensorRT-LLM, and DeepSpeed-MII
Strong background in model quantization and evaluation
Strong background in machine learning, generative AI, and model evaluation techniques
Proven track record of contributing to open-source projects
What We Do
FuriosaAI designs and develops data center accelerators for the most advanced AI models and applications.
Our mission is to make AI computing sustainable so everyone on Earth has access to powerful AI.
Our Background
Three misfit engineers from the hardware, software, and algorithm fields, who had previously worked at AMD, Qualcomm, and Samsung, got together and founded FuriosaAI in 2017 to build the world’s best AI chips.
The company has raised more than $100 million, with investments from DSC Investment, Korea Development Bank, and Naver, the largest internet company in Korea. We have partnered on our first two products with a wide range of industry leaders, including TSMC, ASUS, SK Hynix, GUC, and Samsung. FuriosaAI now has over 140 employees across Seoul, Silicon Valley, and Europe.
Our Approach
We are building full-stack solutions that offer the optimal combination of programmability, efficiency, and ease of use. We achieve this through a “first principles” approach to engineering: we start with the core problem, which is how to accelerate AI workloads.








