We are seeking brilliant Research Scientists to build the artificial intelligence that will power Atlas, the world's most capable humanoid robot. Our mission is to move beyond narrow tasks and create generalist robots that can perform useful work in complex, human environments like factories and warehouses. You will be joining the core product team for Atlas, with unparalleled resources and a direct line to deploying your work on hardware that will change the world.
As a Research Scientist on our team, you will be at the forefront of training Large Behavior Models (LBMs) that translate multimodal understanding into meaningful action. We believe the path to generality lies in scaling learning from diverse data sources: robot teleoperation, human egocentric data, and simulation. You will architect, train, and refine the foundational Vision-Language-Action (VLA) models that enable Atlas to reason, plan, and execute tasks with human-like dexterity and intelligence.
What You Will Do:
Architect and train end-to-end Large Behavior Models (LBMs) using massive datasets of ego-centric human video and robot telemetry.
Pioneer novel research in Vision-Language-Action (VLA) models, tackling core challenges in cross-embodiment learning, spatial reasoning, and long-horizon tasks.
Develop the foundational AI components that serve as the robot's top-level "brain," integrating perception, memory, and semantic reasoning.
Collaborate closely with controls, software, and hardware teams to rapidly test, iterate, and deploy your models on-robot and in the field.
Drive the research agenda that directly impacts the capabilities of our product and defines the future of general-purpose robotics.
What We Are Looking For:
A strong research background in Machine Learning, Computer Vision, Robotics, or a related field, demonstrated by impactful projects or publications.
Deep expertise and hands-on experience in one or more of the following areas:
Behavioral cloning, imitation learning, or reinforcement learning at scale.
Vision-Language-Action (VLA) or Large Behavior Models (LBMs) for manipulation.
Training or fine-tuning Large Language Models (LLMs) or Vision-Language Models (VLMs).
Designing data collection strategies, curating large-scale proprietary datasets (e.g., human demonstrations), and developing data flywheels to drive continuous model improvement.
Learned visual representations for manipulation and navigation.
Proficiency in Python and deep learning frameworks (e.g., PyTorch, JAX).
PhD in a relevant field.
Bonus Points:
Hands-on experience deploying policies on physical robots.
Strong practical understanding of 3D geometry and computer graphics.
Proficiency in C++.
Experience working in large, production-oriented codebases.
What We Do
Boston Dynamics builds advanced mobile manipulation robots with remarkable mobility, dexterity, perception, and agility. We use sensor-based controls and computation to unlock the potential of complex mechanisms. Our world-class development teams build prototypes for wild new concepts, do build-test-build engineering and field testing, and transform successful designs into robot products. Our goal is to change your idea of what robots can do.