The demand for new datacenters and AI compute is rapidly outpacing the planet's energy capacity. Digital solutions are hitting a power wall as we approach the physical limits of traditional silicon. Conquering this bottleneck isn’t about bigger chips or more of them; it means rethinking the fundamental architecture. The industry's current path isn’t going to meet the need, so we took a different approach.
Instead of traditional electronic circuits, we use silicon photonics and an active, programmable metasurface to perform matrix multiplications at the speed of light. Our optical cells are 10,000x smaller than traditional photonic components, enabling unprecedented density. Because we compute with photons rather than electrons, our chips become more efficient as they scale. This architecture will deliver up to 100x the energy efficiency of existing solutions while significantly improving performance for large-scale AI inference.
We’ve assembled a world-class team of industry veterans and recently raised a $110M Series A led by Gates Frontier. Participants include M12 (Microsoft’s Venture Fund), Carbon Direct Capital, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital, and others. We have also been recognized on the EE Times Silicon 100 list for several consecutive years.
Join us and shape the future of computing!
Position Overview:
We are seeking a highly experienced Sr. Principal Processor Architect to lead the design of the processing core at the heart of our optical processing units (OPUs). This role is critical to defining the microarchitecture that bridges our revolutionary optical computing engines with efficient, scalable digital control and processing. The ideal candidate will bring deep expertise in advanced processor design, massive parallelism, and specialized accelerator architectures to create a novel compute platform optimized for AI inference workloads.
Location: Austin, TX or San Mateo, CA. Full-time onsite position.
Key Responsibilities:
Lead the architectural design of custom processor cores for Neurophos OPUs, balancing performance, power, and area constraints
Define microarchitectural features, including pipeline organization, execution units, vector/SIMD capabilities, and memory hierarchies
Design for massive-scale parallelism, drawing on GPU shader core and vector processor principles
Architect instruction sets, control flow mechanisms, branch prediction strategies, and exception handling
Evaluate and implement in-order vs. out-of-order execution, superscalar techniques, and multithreading approaches
Collaborate with optical engine designers to optimize the processor-accelerator interface
Work with modeling teams to validate architectural decisions through performance simulation
Drive co-design with compiler and runtime software teams to ensure efficient code generation
Publish research and represent Neurophos in the computer architecture community
Mentor junior architects and establish architectural best practices
Qualifications:
PhD in Computer Science, Electrical Engineering, or related field with focus on computer architecture (or MS with equivalent experience)
15+ years of experience in processor architecture and design
Deep expertise in pipelined processor design, including in-order and out-of-order (OoO) execution
Strong understanding of superscalar architectures, multithreading, and vector/SIMD machines
Extensive knowledge of branch prediction, speculation, exception handling, and architectural state management
Experience with massive parallelism architectures (GPU shader cores, vector processors, or similar)
Track record of shipping processor designs or significant architectural contributions
Strong publication record in computer architecture venues (ISCA, MICRO, ASPLOS, HPCA)
Excellent communication skills and ability to lead cross-functional technical discussions
Preferred Skills:
GPU shader core design experience or deep familiarity with GPU microarchitecture
Experience with domain-specific accelerators (TPU, NPU, DSP, or similar)
Knowledge of ML workload characteristics and accelerator design patterns
Familiarity with near-memory computing, in-memory computing, or optical computing paradigms
Experience with custom instruction set design and compiler co-design
Background in power-efficient microarchitecture techniques
Understanding of datacenter processor requirements and interconnect technologies
Experience with vector processor architectures (Cray, NEC SX, ARM SVE, RISC-V Vector)
This is an opportunity to play a pivotal role in an innovative startup reshaping AI hardware. Work on game-changing technology at the intersection of photonics and AI as part of a collaborative, brilliant team, and contribute to a platform that redefines computational performance and accelerates the future of artificial intelligence. Come help us bring this transformative technology to the world.
Benefits:
Join a team that invests in your future and your well-being. At Neurophos, we offer:
100% coverage of base health plan premiums for you and your dependents, plus HSA contributions.
Unlimited PTO. No rigid vacation banks, just a focus on delivery.
401(k) matching and stock option opportunities to ensure our success is your success.
Full suite of voluntary benefits, including Dental, Vision, Life, Hospital, Critical Illness, and Accident insurance.
Personalized benefits. Choose the plans that fit your life, and take cash in lieu of those that don't.
What We Do
Neurophos is delivering the computational power of the human brain to artificial intelligence. By leveraging decades of metamaterials research and >300 patents, we are unlocking the speed and efficiency of optical compute in an in-memory processor to increase the speed and energy efficiency of AI inference by more than 100X.