What You'll Do
- Design and build a comprehensive performance testing platform for evaluating LLM inference workloads across GPU clusters
- Define and implement the benchmarking methodology, metrics, and test suites that measure latency, throughput, memory utilization, power consumption, and model accuracy
- Establish baseline performance for unoptimized models (Llama 3.2 70B, DeepSeek, etc.) and validate post-optimization improvements
- Develop automated testing pipelines for continuous performance validation across compiler releases and model updates
- Investigate performance bottlenecks using profiling tools (ROCm profilers, GPU traces, system-level monitoring) and work with the compiler team to drive optimizations
- Create dashboards and reporting that provide clear visibility into performance trends, regressions, and wins
- Collaborate cross-functionally with compiler engineers, ML engineers, and DevOps to ensure performance testing is integrated into our development workflow
- Document best practices for performance testing and optimization of ML workloads on GPU hardware
What You'll Bring
- 7+ years of experience in performance engineering, benchmarking, or systems engineering roles
- Deep understanding of ML inference workloads, particularly transformer-based models and LLMs
- Hands-on experience with GPU programming and optimization (CUDA, ROCm, or similar)
- Strong programming skills in Python and C/C++
- Proven track record of building performance testing infrastructure or benchmarking platforms from scratch
- Experience with ML frameworks (PyTorch, TensorFlow, ONNX Runtime, vLLM, TensorRT-LLM, etc.)
- Proficiency with profiling and debugging tools for GPU workloads
- Strong analytical skills with the ability to design experiments, analyze results, and communicate findings clearly
- Experience with CI/CD systems and test automation frameworks
Nice to Have
- Experience with AMD GPUs (MI200/MI300 series) and the ROCm ecosystem
- Knowledge of compiler optimization techniques and their impact on performance
- Experience with distributed inference and multi-GPU workloads
- Familiarity with ML model quantization, pruning, and other optimization techniques
- Background in high-performance computing or systems-level optimization
- Experience with containerization and infrastructure-as-code tooling (Kubernetes, Docker, Terraform)
- Contributions to open-source ML or systems projects
Personal Attributes
- Obsessive about details — you notice the 2% regression that others miss
- Self-driven — you take ownership and don't wait for permission to solve problems
- Collaborative mindset — you work well across teams and help others succeed
- Passionate about sustainability — you care about making AI more efficient and environmentally responsible
- Clear communicator — you can explain complex technical concepts to both engineers and stakeholders
What We Do
At Lemurian Labs, our focus is on unleashing the capabilities of AI for the benefit of humanity. To fulfill this purpose, we are developing a full-stack solution of software and hardware capable of orders-of-magnitude better performance and efficiency than legacy solutions, designed from the outset for scalability.

Massive shifts are underway, moving us from Software 1.0 to Software 2.0 to Software 3.0 and beyond, but realizing their true benefits requires fundamentally new hardware and systems that can keep up with changing compute demands while simultaneously bringing down costs. We are developing software and hardware designed from first principles to deliver unprecedented realizable performance per watt and enable the next generation of AI workloads.

Our diverse team of technologists has decades of experience at the frontiers of high-performance computing, digital arithmetic, cryptography, artificial intelligence, robotics, and networking. There is a lot of talk about what the technology of tomorrow will look like, and a number of companies are developing it. At Lemurian, we believe tomorrow is so yesterday. We are developing the technology for the day after tomorrow.

We are Lemurian Labs. Welcome to the future of artificial intelligence and computing.