I'm pretty sure everything from 2016 onwards is the same frame with the same capabilities, just different videos showing off how versatile it is. They actually retired that frame this year; this is the new one: https://bostondynamics.com/blog/electric-new-era-for-atlas/
I don't think they've been sitting around doing nothing to improve it for eight years, though. The hardware may look the same, but the internal components were being upgraded. Software too. Those tasks are their own set of hard challenges.
Edit: Having said that, the new electric frame is wild!!
Didn't mean to imply they hadn't been doing anything; that's why I used the word "frame." There are all kinds of videos of that robot trying to do backflips and the hydraulics busting in the process, or other issues that would necessitate internal upgrades. I just thought it was strange that each iteration shown up to 2016 was a different model, and then we get six in a row of essentially the same one.
Boston Dynamics is trying to focus on research until they create a product they think has value, rather than release what they have now. AI is mostly running on hype; it's severely underdeveloped compared to what the media says it's capable of. Atlas isn't ready to load baggage alone in an airport. Spot, on the other hand, is ready for survey operations in hazardous areas, and has been released now that it's a viable product.
They also have another robot called Stretch for sale as well. Atlas is a research platform; all the knowledge, algorithms, and tech they got from doing Atlas is applied to their other robots. The new Atlas seems to be moving closer to becoming an actual product too.
They used to have a two-wheeled robot called Handle, but they decided that sort of robot, while able to move fast, is a pain and impractical when its job is simply loading and unloading stuff, since most of the time is spent turning and repositioning. So they scrapped it and turned the concept into Stretch.
Boston Dynamics changes and adapts their robots, building to solve specific problems, while Musky Melon insists his robot has to be a slow-moving, two-legged humanoid with precise hand function.
I used to work across the street from their labs and we would regularly see them testing their robots in the parking lot. There were lots of areas of uneven ground that made for great testing. Eventually they reached out to us and a few other companies to offer tours of their facilities.
When we took them up on the offer one of the things I immediately noticed in one of their indoor testing areas was that everything was marked with what looked kind of like QR codes. The guy giving the tour said that the computing power in the robots was still fairly limited, and they needed as much of it as possible to focus on the actual robotics. So rather than have a ton of image/video processing they opted to label things that told the robot “this is a box”, “this is a door”, “this is a table” etc.
They could then send an instruction like “pick up the box, carry it through the door, and put it on the table”. The robot would then look for the appropriate QR codes and figure out how to complete the task.
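The label-driven approach described above can be sketched roughly like this. Everything here is hypothetical and illustrative (the tag IDs, the `find_label`/`plan_task` names, the instruction format); it's not Boston Dynamics' actual software, just a minimal sketch of how semantic fiducial labels can replace heavy image processing in task planning:

```python
# Hypothetical sketch of fiducial-label task planning, not Boston Dynamics' API.
# The environment is pre-labeled: each tag decodes to a semantic name, so the
# robot never has to visually classify objects -- it just matches labels.
DECODED_TAGS = {
    "tag_017": "box",
    "tag_042": "door",
    "tag_108": "table",
}

def find_label(name):
    """Return the tag id whose decoded label matches the requested object."""
    for tag_id, label in DECODED_TAGS.items():
        if label == name:
            return tag_id
    raise LookupError(f"no tag labeled {name!r} in view")

def plan_task(instruction):
    """Turn 'pick up the box, carry it through the door, put it on the table'
    style steps into (action, tag) pairs a motion layer would execute."""
    return [(verb, find_label(target)) for verb, target in instruction]

steps = [("pick_up", "box"), ("carry_through", "door"), ("place_on", "table")]
print(plan_task(steps))
# -> [('pick_up', 'tag_017'), ('carry_through', 'tag_042'), ('place_on', 'tag_108')]
```

The design point is that all the expensive perception work collapses to a dictionary lookup: the compute budget stays on balance and motion rather than vision.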
That was 10+ years ago at this point. I don’t think they need those QR codes any more.
That's actually cool to know, thanks for expanding on this. The videos are shot to be very impressive, but I wonder how much use this would really be in the real world. Maybe some kind of warehouse stacking where you can QR-label everything, or, if it could truly navigate a semi-set route, maybe some sort of inspection of remote facilities? Maybe building stuff on the moon? (Assuming you could deal with the dust somehow.)