About Us
We are building the next generation of embodied intelligence: humanoid and mobile robots capable of performing complex, long-horizon tasks in unstructured real-world environments. Our mission is to enable scalable general-purpose autonomy through large-scale learning, multi-modal data, and robust control.
We are looking for passionate engineers and scientists who thrive at the intersection of machine learning, robotics, and systems engineering, and want to see their research come alive in real robots.
Role Overview
You will lead development of the algorithms and architectures that give our robots the ability to walk, balance, grasp, manipulate, and reason in the physical world. This role bridges foundational model research and real-time robotics. You will design learning systems that power whole-body locomotion, dexterous manipulation, and embodied understanding.
Responsibilities
- Train and adapt large-scale vision-language-action (VLA) and vision-language models (VLMs) that predict multi-modal futures (video, proprioception, audio, actions)
- Design and build reinforcement and imitation learning agents for whole-body locomotion, manipulation, and long-horizon tasks
- Integrate learned policies into real-time control loops for humanoid and mobile robots
- Build closed-loop evaluation and scaling pipelines to measure generalization and safety
- Collaborate with simulation and hardware teams (Isaac Sim, MuJoCo) to close the sim-to-real gap
- Translate research results into robust autonomy deployed across robot fleets
Preferred Qualifications
- BS/MS/PhD in Robotics, AI/Computer Science, or related field
- Proficiency in Python and C++, and deep learning frameworks (PyTorch / JAX)
- Deep experience in RL/IL, control, or multimodal learning
- Understanding of scaling laws, evaluation methodology, and training large models at scale
- Familiarity with real-robot systems, sensing, and embedded control integration
- Familiarity with the industry state of the art and latest research (e.g., GR00T, Pi0)
Bonus Skills
- Experience with transformer-based control policies or diffusion policy learning
- Work on humanoid locomotion, manipulation, or whole-body coordination
- Prior open-source or research contributions in robotics, control, or deep learning
What We Do
Menlo Research is an open AI and robotics lab.
We build the brains for robots. It’s time to tell robots what to do!