About Menlo
Menlo Research is an applied R&D lab building Asimov, an open-source humanoid robot platform, and the full software stack that powers it. Our mission is to make humanoid labor economically viable -- turning software into physical labor at scale. We build across the full stack: hardware architecture, locomotion, autonomy, simulation, and infrastructure. We move fast, ship to real robots, and open-source everything we can. If you want your work to matter beyond a paper or a demo, this is the place.
The Role
We are looking for a Platform Engineer to build and maintain the infrastructure and data systems that power Asimov and Menlo's developer platform. You will work across cloud infrastructure, data pipelines, and production APIs -- keeping our systems reliable, scalable, and fast as our robot fleet and developer ecosystem grow. This is a hands-on engineering role for someone who is as comfortable debugging a flaky pipeline as they are designing a new ingestion architecture.
What You'll Do
- Build and maintain distributed infrastructure handling telemetry, sensor, and control data across cloud and edge environments
- Design and operate data ingestion and streaming pipelines connecting robot fleets to the cloud in real time, covering video, joint states, audio, and LiDAR
- Develop and maintain backend services and APIs that power Menlo's developer-facing platform, with a focus on reliability and developer experience
- Manage and evolve cloud-native infrastructure using Kubernetes, Docker, and infrastructure-as-code tooling
- Ensure platform reliability through monitoring, alerting, autoscaling, failover, and incident response
- Support ML and robotics teams with data infrastructure for training pipelines, policy rollout, and hardware-in-the-loop simulation
- Implement secure APIs with access control, rate limiting, and usage metering as we scale
What You'll Bring
- 4 or more years of professional software engineering experience in platform, infrastructure, or data engineering
- Proficiency in one or more of Go, Rust, Python, or TypeScript, with strong fundamentals in concurrency and systems performance
- Hands-on experience with cloud-native tooling: Kubernetes, Docker, Helm, and gRPC
- Experience building and operating data pipelines and streaming systems -- Kafka, Flink, or similar
- Solid understanding of API design patterns including REST, gRPC, and WebSockets
- Experience with databases spanning PostgreSQL, Redis, and modern vector databases
- Familiarity with observability tooling: Prometheus, Grafana, Datadog, or OpenTelemetry
Bonus Points
- Experience with real-time data streams from physical sensors or robotics systems
- Familiarity with MLOps workflows including model versioning, inference pipelines, and model registries
- Background in distributed training or large-scale simulation infrastructure
- Contributions to open-source infrastructure, robotics middleware, or AI frameworks
- Experience on developer platforms or API products
Why Join Menlo?
Platform and data engineering at most companies means keeping the lights on for someone else's product. At Menlo, the platform is core to what we ship -- it is how robots learn, how developers build on top of us, and how we close the loop between simulation and the real world. You will work on systems that are genuinely hard, with direct visibility into how your infrastructure shapes the robots we deploy. If you want ownership and real problems, this is the place.