As a VLA Research Scientist on the Atlas team, you will architect, train, and deploy the large-scale behavior models that enable high-performance dexterous manipulation on Atlas. Your work will focus on building models that take multimodal inputs (vision, language, task context, robot state) and produce grounded actions that generalize across manipulation tasks, embodiments, and environments.
You will design large-scale behavior cloning and imitation learning pipelines, build hierarchical skill systems, and integrate learned components into a complex, real-world robotic system. You’ll collaborate closely with robotics, controls, and software teams, and rapidly test your work on state-of-the-art humanoid hardware.
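To give a concrete flavor of the model family described above, the sketch below shows the rough shape of a VLA-style policy: it fuses an image, a tokenized language instruction, and proprioceptive robot state into a shared representation, then decodes a chunk of continuous actions. This is a minimal illustration only; every module, dimension, and name here is an assumption, not a description of the actual Atlas stack.

```python
# Minimal sketch of a VLA-style policy interface (illustrative only).
import torch
import torch.nn as nn

class ToyVLAPolicy(nn.Module):
    def __init__(self, vocab_size=32000, state_dim=32, action_dim=24,
                 chunk_len=16, d_model=256):
        super().__init__()
        # Stand-ins for real backbones (a production model would use
        # pretrained vision and language encoders).
        self.vision = nn.Sequential(nn.Conv2d(3, d_model, 16, 16),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.text = nn.EmbeddingBag(vocab_size, d_model)  # mean-pooled tokens
        self.state = nn.Linear(state_dim, d_model)
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4)
        self.action_head = nn.Linear(d_model, chunk_len * action_dim)
        self.chunk_len, self.action_dim = chunk_len, action_dim

    def forward(self, image, instruction_ids, robot_state):
        # One token per modality, fused by a small transformer.
        tokens = torch.stack([self.vision(image),
                              self.text(instruction_ids),
                              self.state(robot_state)], dim=1)  # (B, 3, d_model)
        fused = self.fuse(tokens).mean(dim=1)
        # Decode a short chunk of future actions.
        return self.action_head(fused).view(-1, self.chunk_len, self.action_dim)
```

A real VLA or Large Behavior Model would swap the stand-in encoders for large pretrained backbones and be trained on large demonstration corpora; the interface (multimodal in, action chunk out) is the point of the sketch.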
How You Will Make an Impact:
Architect and train end-to-end VLA and Large Behavior Models for mobile manipulation on Atlas.
Build large-scale imitation learning pipelines that learn from human demonstrations, teleoperation, and simulation data (a minimal training-step sketch follows this list).
Develop policies capable of few-shot generalization across diverse manipulation tasks.
Create hierarchical behavior systems that combine learned skills into long-horizon behaviors.
Integrate your models into Atlas’s autonomy stack in collaboration with controls and platform teams.
Deploy, debug, and iterate your models directly on physical hardware.
Write high-quality, maintainable Python and C++ code that fits into a large production codebase.
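As a rough flavor of the imitation-learning work referenced above, the sketch below shows a single behavior-cloning gradient step over a demonstration batch. The batch keys, the policy signature, and the plain L2 regression loss are all assumptions chosen for brevity; real pipelines typically involve richer objectives, augmentation, and distributed data loading.

```python
# Minimal behavior-cloning step over demonstration data (illustrative only).
import torch

def bc_train_step(policy, batch, optimizer):
    """One supervised gradient step of behavior cloning.

    `batch` is assumed to hold tensors from a demonstration dataset:
    image (B,3,H,W), instruction_ids (B,L), robot_state (B,state_dim),
    and expert_actions (B,chunk_len,action_dim).
    """
    pred = policy(batch["image"], batch["instruction_ids"], batch["robot_state"])
    # Regress the expert's action chunk from the observation.
    loss = torch.nn.functional.mse_loss(pred, batch["expert_actions"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```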
We’re Looking For:
MS with 3+ years of experience or PhD in Machine Learning, Robotics, Computer Science, or related fields.
Prior experience training and deploying learned policies for complex behaviors in robots or simulated characters.
Strong background in:
Behavior cloning / imitation learning
Diffusion policies, ACT, or other modern BC architectures (a chunked-execution sketch follows this list)
Large behavior models or sequence modeling
Multimodal (vision/language/action) learning
Experience with modern ML frameworks (PyTorch, JAX) and large-scale training workflows.
Strong analytical and debugging skills; ability to write reliable, well-structured research code.
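For the action-chunking architectures named above (ACT and similar), deployment commonly uses a receding horizon: predict a chunk of future actions, execute only its first few steps, then re-plan from the new observation. The sketch below illustrates that loop; the `env` object and its `reset`/`step` signatures are hypothetical, and observations are assumed to be batched tensors.

```python
# Illustrative receding-horizon execution of an action-chunking policy.
import torch

@torch.no_grad()
def run_episode(policy, env, instruction_ids, execute_horizon=4, max_steps=200):
    obs = env.reset()  # hypothetical env returning a dict of batched tensors
    for _ in range(max_steps // execute_horizon):
        chunk = policy(obs["image"], instruction_ids, obs["robot_state"])
        # Execute only the first few actions of the chunk, then re-plan.
        for action in chunk[0, :execute_horizon]:
            obs, done = env.step(action)  # hypothetical step signature
            if done:
                return
```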
Nice to Have:
Hands-on experience deploying learned policies on real robots.
Experience creating or curating large demonstration datasets (teleop, human demos, egocentric video).
Experience with RL as a complementary tool within behavior learning pipelines.
Publications in top-tier ML or robotics conferences (e.g., CoRL, RSS, ICRA, NeurIPS).
Experience contributing to large C++ or Python codebases.
Why Join Us:
Direct access to an advanced humanoid robot—test your models on hardware quickly and often.
A collaborative, inclusive team that values diverse perspectives and identities.
The opportunity to do applied VLA research with real-world impact.
A mission-focused environment where your work will define the future of general-purpose humanoids.
What We Do
Boston Dynamics builds advanced mobile manipulation robots with remarkable mobility, dexterity, perception, and agility. We use sensor-based controls and computation to unlock the potential of complex mechanisms. Our world-class development teams develop prototypes for wild new concepts, carry out build-test-build engineering and field testing, and transform successful designs into robot products. Our goal is to change your idea of what robots can do.