Graphcore is one of the world’s leading innovators in Artificial Intelligence compute.
It is developing hardware, software and systems infrastructure that will unlock the next generation of AI breakthroughs and power the widespread adoption of AI solutions across every industry.
As part of the SoftBank Group, Graphcore is a member of an elite family of companies responsible for some of the world’s most transformative technologies. Together, they share a bold vision: to enable Artificial Super Intelligence and ensure its benefits are accessible to everyone.
Graphcore’s teams are drawn from diverse backgrounds and bring a broad range of skills and perspectives. A melting pot of AI research specialists, silicon designers, software engineers and systems architects, Graphcore enjoys a culture of continuous learning and constant innovation.
Graphcore’s AI/ML training and inference infrastructure is rapidly scaling to meet the growing demands of AI workloads across mobile, edge, and datacenter environments. This role focuses on optimizing performance across ARM-based architectures and large-scale distributed systems, ensuring efficiency, scalability, and reliability across the full hardware-software stack.
The Team
The System Engineering Performance team architects and optimizes high-performance infrastructure for large-scale datacenter deployments. The team works across hardware, software, networking, and system architecture to deliver cutting-edge AI solutions and ensure optimal system performance at scale.
Responsibilities and Duties
- Analyze ML models’ compute and memory requirements using roofline analysis and simulations
- Collaborate across hardware and software teams to optimize large-scale AI workloads
- Benchmark, monitor, and troubleshoot system performance across distributed systems
- Optimize communication stacks including MPI, NCCL, UCX, RDMA, and networking fabrics
- Profile and optimize AI workloads, focusing on performance bottlenecks
- Develop high-quality, ARM-compatible code and documentation
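For context on the first responsibility: roofline analysis estimates a kernel's attainable throughput as the minimum of peak compute and memory bandwidth times arithmetic intensity (FLOPs per byte moved). A minimal sketch, using made-up hardware numbers rather than figures for any specific Graphcore part:

```python
def roofline_flops(peak_flops: float, mem_bw_bytes: float,
                   arithmetic_intensity: float) -> float:
    """Attainable performance (FLOP/s) under the roofline model.

    arithmetic_intensity: FLOPs performed per byte of memory traffic.
    A kernel is bandwidth-bound below the ridge point (peak/bw) and
    compute-bound above it.
    """
    return min(peak_flops, mem_bw_bytes * arithmetic_intensity)


# Illustrative accelerator: 100 TFLOP/s peak, 2 TB/s memory bandwidth.
peak = 100e12
bw = 2e12
ridge = peak / bw  # 50 FLOP/byte: intensity where compute becomes the limit

# Memory-bound kernel (e.g. a large GEMV at ~0.25 FLOP/byte):
print(roofline_flops(peak, bw, 0.25))   # limited by bandwidth, not peak

# Compute-bound kernel (e.g. a large GEMM at ~200 FLOP/byte):
print(roofline_flops(peak, bw, 200.0))  # capped at peak FLOP/s
```

Comparing a workload's measured FLOP/s against this bound shows whether optimization effort should target data movement or compute.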
Essential:
- BS/MS in Computer Science, Electrical Engineering, or related field
- Experience with distributed systems and communication libraries (MPI, NCCL, UCX, libfabric)
- Strong programming skills in C++ and Python
- Experience profiling and optimizing HPC or AI/ML workloads
- Familiarity with ML benchmarks such as MLPerf
Desirable:
- Experience with GPUs or accelerated computing architectures
- Knowledge of HPC networking and interconnect technologies (InfiniBand, RoCE)
- Familiarity with ML frameworks such as PyTorch or TensorFlow
- Understanding of ARM architectures and toolchains
- Strong debugging, profiling, and performance optimization skills
What We Do
At Graphcore, we’re building the future of AI compute. We’re a team of semiconductor, software and AI experts, with deep experience in creating the complete AI compute stack - from silicon and software to infrastructure at datacenter scale. As part of the SoftBank Group, backed by significant long-term investment, we are delivering key technology into the fast-growing SoftBank AI ecosystem. To meet the vast and exciting AI opportunity, Graphcore is expanding its teams around the world. We are bringing together the brightest minds to solve the toughest problems, in a place where everyone has the opportunity to make an impact on the company, our products and the future of artificial intelligence.
Why Work With Us
Our team is at the forefront of the machine intelligence revolution, enabling innovators from all industries to build AI-native products to expand human potential. What we do at Graphcore really makes a difference.
Hybrid Workspace
Employees engage in a combination of remote and on-site work.
At Graphcore, we value wellbeing and flexibility to support a healthy work/life balance. Our hybrid approach encourages office-based colleagues to work onsite three days a week, with flexibility built on trust and transparency for everyone.