At Harmonic, we are building a mathematical reasoning engine that operates with absolute precision. While most AI systems make maximum-likelihood guesses, Harmonic's Aristotle uses Lean 4 and reinforcement learning to verify its reasoning and results.
Following our Gold Medal-level performance on the 2025 International Math Olympiad (IMO) and the successful resolution of long-standing open problems, we are proving that AI can master the most rigorous domains of human thought. Backed by some of the world’s most prominent investors, we are intentionally scaling an elite technical team.
About the Role
We are developing reinforcement learning systems at a scale where standard abstractions frequently fail. Unlike labs that operate primarily through high-level wrappers, we own the entirety of our RL stack. This ownership spans from low-level environment simulators and custom communication primitives to our distributed training loops and inference engines.
We are seeking engineers who treat existing libraries as a baseline and the hardware's raw speed as the true target. You will be responsible for the architecture powering our agents, with a relentless focus on maximizing the throughput of our reinforcement learning and production workflows.
Key Responsibilities
Total Stack Ownership: Maintain and optimize our proprietary RL training and serving infrastructure. You have the authority to refactor any layer, from the Python API down to the CUDA kernels, to achieve peak performance for foundation model workloads.
Optimized Training: Maximize the throughput of our reinforcement learning system, from data generation to model training, with sharded multi-node training and inference algorithms.
High-Performance Serving: Optimize our inference stack for high-throughput reinforcement learning and low-latency LLM production traffic. Tune the inference engine, router, and scheduler, down to custom kernels where needed.
Compute Optimization: Identify and resolve performance bottlenecks within our distributed clusters, balancing memory constraints against compute-heavy training cycles to ensure optimal throughput and memory efficiency for multi-billion-parameter models.
Minimum Qualifications
BS in Computer Science or a related technical field, or equivalent industry experience
2+ years of relevant, hands-on industry experience
Proficiency in Python
Experience building or maintaining components within ML frameworks (e.g., PyTorch, JAX, or TensorFlow)
At least one of the following:
Understanding of distributed training concepts and collective communication primitives (e.g., NCCL), or
Practical experience deploying and profiling models on GPU-accelerated cloud infrastructure
Preferred Qualifications
MS or PhD in Computer Science, Mathematics, or a related field
5+ years of relevant, hands-on industry experience
Proficiency in C++
Experience writing or improving kernels (Triton, CuTeDSL, TileLang, CUDA, CUTLASS, ThunderKittens) to resolve low-level bottlenecks
Proven success deploying performant inference at scale using open-source or custom inference engines, routers, etc.
Direct experience scaling models via FSDP, Tensor Parallelism, or related sharding techniques on multi-node GPU clusters
Experience designing reinforcement learning systems for high-throughput training and asynchronous data sampling
Benefits
Unlimited PTO
401(k) matching
100% employer-paid health, vision, and dental benefits for employees and 50% coverage for dependents. Harmonic offers varied health coverage options to select what is best for you and your family.
Health Savings Account (HSA) available for qualifying health plans
Visit our company blog to learn more about what we are working on!
Equal Opportunity Statement
Harmonic is committed to diversity and inclusivity in the workplace. We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.