Build and Maintain Tooling Infrastructure: Develop and maintain specialized tools to support the NPU compiler workflow, including automated testing frameworks, debugging tools, and performance profiling utilities tailored for AI workloads.
Testing Frameworks and Automation: Design and implement automated testing frameworks to ensure the reliability, accuracy, and stability of the NPU compiler across diverse AI models and NPU configurations.
Continuous Integration and Deployment (CI/CD): Establish and manage CI/CD pipelines that streamline the integration, testing, and deployment of compiler features, with a focus on ensuring NPU stability and supporting AI model compatibility.
Collaboration and Documentation: Work closely with compiler engineers to identify tooling needs specific to NPU compiler requirements, document infrastructure processes, and ensure that tools are efficient, reliable, and accessible for the team.
Minimum Qualifications
Bachelor’s degree in Computer Science, Electrical Engineering, or a closely related field, or equivalent practical experience.
Proficiency in at least one programming language (e.g., Python, C/C++, Go) and a strong understanding of software development best practices, including version control (Git) and code reviews.
Experience setting up and maintaining CI/CD pipelines using industry-standard tools and practices (e.g., Jenkins, GitLab CI, GitHub Actions).
Hands-on experience creating automated testing frameworks and tooling infrastructure, particularly within a Linux-based development environment.
Solid understanding of containerization and virtualization technologies (e.g., Docker, Kubernetes) and familiarity with infrastructure-as-code practices.
Preferred Qualifications
Master’s degree in Computer Science, Electrical Engineering, or a related technical field.
Prior experience working on compiler toolchains, especially for specialized hardware accelerators (e.g., NPUs, GPUs, TPUs) or AI/ML-focused architectures.
Familiarity with performance profiling, code optimization techniques, and debugging tools tailored for heterogeneous computing environments.
Experience with large-scale distributed systems and infrastructure, including orchestration frameworks and resource managers.
Knowledge of AI frameworks (e.g., TensorFlow, PyTorch) and an understanding of common AI workloads and models.
Experience contributing to open-source projects or working in a highly collaborative, cross-functional team setting.
What We Do
FuriosaAI designs and develops data center accelerators for the most advanced AI models and applications.
Our mission is to make AI computing sustainable so everyone on Earth has access to powerful AI.
Our Background
Three misfit engineers from the hardware, software, and algorithm fields, who had previously worked at AMD, Qualcomm, and Samsung, got together and founded FuriosaAI in 2017 to build the world’s best AI chips.
The company has raised more than $100 million, with investments from DSC Investment, Korea Development Bank, and Naver, Korea’s largest internet company. We have partnered on our first two products with a wide range of industry leaders including TSMC, ASUS, SK Hynix, GUC, and Samsung. FuriosaAI now has over 140 employees across Seoul, Silicon Valley, and Europe.
Our Approach
We are building full-stack solutions that offer the optimal combination of programmability, efficiency, and ease of use. We achieve this through a “first principles” approach to engineering: we start with the core problem, which is how to accelerate AI workloads.