If I tell you that I saw a robot today, what comes to mind? What is a robot?
This is not a trick question.
Robots in science fiction
That’s because we’ve seen so many robots. We’ve seen Robby the Robot from the 1956 movie Forbidden Planet, Rosie the housekeeper from The Jetsons, the animated Gigantor, C-3PO and R2-D2 from the late-1970s Star Wars, Optimus Prime, Data from Star Trek, Arnold Schwarzenegger’s T-800 Terminator, and later robots like WALL-E, Dolores (and all the other hosts) from Westworld, and all the rest of the robots in the various Star Wars spin-offs.
All of these, together, have built up our view of robots over the years, at least in movies and TV.
We’re also familiar with the stories these robots tell. Data from Star Trek just wants to be more human. Isaac from The Orville is a Kaylon, a race of robots that destroy organic creatures. (Yet Isaac’s path has been one of redemption, for he’s a compassionate Kaylon and helps turn the tide for organics.) The Star Wars robots, especially those designed for merchandising, have become friends and companions to their organic buddies.
Then there are the many evil robots bent on destruction, like Ultron; HAL 9000; the Daleks and the Cybermen from Doctor Who; various incarnations of Terminator robots; Nomad, Lore, Peanut Hamper, and Control from Star Trek; and a collection of droids from Star Wars.
These robots have all provided writers with the opportunity to reflect humanity’s traits and problems back on mechanical beings and to play with what happens when you create artificial life with or without the moral constraints that govern most humans.
Also: The Star Wars starter guide: Every movie ranked and graded
Robots in the real world
But robots exist in the real world. And they don’t behave like C-3PO or Mr. Data. Instead, they range from giant automated factories to automobile welding robots, from 3D printers to toys for kids. What makes these robots…robots? And what makes them different from the robots of science fiction?
For starters, the robots of science fiction are often fully autonomous. Mr. Data from Star Trek: The Next Generation and The Doctor from Star Trek: Voyager (a holographic AI) were even declared to be legal people in the eyes of the fictional Federation. Nobody is claiming that my friend’s Tesla is legally a person.
Also: The best robots and AI innovations at CES
In fact, the auto industry has developed a set of aspirational criteria defining the “autonomous-ness” of a robotic vehicle, and we can apply those criteria to other robots as well. The SAE J3016 standard defines six levels: levels 0, 1, and 2 describe limited automation, while levels 3, 4, and 5 describe more fully autonomous functioning:

- Level 0: No driving automation
- Level 1: Driver assistance
- Level 2: Partial driving automation
- Level 3: Conditional driving automation
- Level 4: High driving automation
- Level 5: Full driving automation

In short, levels 0 through 2 specify that a human driver must still be in control, even if assisted by the car, while levels 3 through 5 specify (mostly) that the car is able to make all the necessary decisions.
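For reference, here’s that breakdown reduced to a few lines of Python. The short level names are SAE’s own; the human-versus-system split simply encodes the summary above.

```python
# The six SAE J3016 levels as a trivial lookup table.
SAE_LEVELS = {
    0: "No driving automation",
    1: "Driver assistance",
    2: "Partial driving automation",
    3: "Conditional driving automation",
    4: "High driving automation",
    5: "Full driving automation",
}

def human_must_supervise(level: int) -> bool:
    """At levels 0-2, a human driver is still responsible for the vehicle."""
    return level <= 2

for level, name in SAE_LEVELS.items():
    role = "human in control" if human_must_supervise(level) else "car decides (mostly)"
    print(f"Level {level}: {name} ({role})")
```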
Most real-world robots we have today sit at the lower, human-supervised levels of that spectrum. That’s why I use the terms automated vs. autonomous to differentiate robotic capabilities. Although the words sound similar, here’s how they differ (a quick sketch in code follows the list):
- Automated systems follow pre-defined rules to perform specific tasks.
- Autonomous systems can operate independently, make decisions, and adapt to new situations.
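To make the distinction concrete, here’s a minimal, entirely illustrative Python sketch (no real robot’s control loop looks quite like this): the automated routine replays a fixed recipe, while the autonomous one repeatedly senses, decides, and adapts.

```python
def run_automated(steps):
    """Automated: replay a pre-defined recipe, the same way every time."""
    for step in steps:
        print(f"executing: {step}")

def run_autonomous(sense, decide, goal_reached, max_steps=100):
    """Autonomous: sense the world, choose an action, adapt until the goal is met."""
    for _ in range(max_steps):
        if goal_reached():
            break
        observation = sense()          # perceive the current situation
        action = decide(observation)   # pick a response on the fly
        print(f"chose: {action}")

# An automated 3D printer is essentially just a script:
run_automated(["home axes", "heat extruder", "print layer 1", "print layer 2"])
```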
Today, I contend, most robots are merely automated devices that have some level of movement in the real world. They perform a series of steps, possibly modified based on certain criteria. (For example, a 3D printer will stop printing when it runs out of filament, only to resume once more material is loaded.) Autonomous devices include C-3PO, Mr. Data, the T-800, and the people-free delivery robots of Amazon’s dreams.
Also: BMW tests next-gen LiDAR to beat Tesla to Level 3 self-driving cars
Right now, we do automated really, really well. Autonomous, not so much. But we’re getting there.
Enormous dynamic range
Today’s robots, despite not being as versatile as Mr. Data, are generally quite useful and functional.
These include industrial robots, medical robots, military and defense robots, domestic robots, entertainment robots, space exploration and maintenance robots, agricultural robots, retail robots, underwater robots, and telepresence robots that help people participate in an activity from a distance.
My personal interest has been focused on robots available and accessible to makers and hobbyists: robots that can empower individuals to build, design, and prototype projects previously feasible only for those with a shop full of fabrication machinery.
I’m talking about 3D printers, which build up objects from layers of molten plastic; CNC devices, which often cut, carve, and remove wood or metal to create objects; laser cutters, which are ideal for sign cutting, engraving, and fabricating very detailed parts and circuit boards; and even vinyl cutters, for carefully cutting light, flexible material in intricate patterns.
Also: This is the best and fastest sub-$300 3D printer I’ve tested yet
These machines are programmed using CAD software, which defines (that is, designs) the object being built. Those designs are then converted into a series of motion instructions that guide the machine in making repetitive, complex moves.
I used a CNC, for example, to make a series of identical custom organizer racks for parts storage.
While I designed and assembled the organizer, the robot became a force multiplier, carving precise rack-holding features, a process that was well beyond my woodworking skill set, which is mostly limited to hammering nails and screwing screws.
Robots today have an enormous dynamic range, from children’s learning toys to something as astonishingly complex and enormous as Amazon’s smart warehouses. These warehouses each contain thousands of robots, but because they all work in concert with each other, the entire warehouse can, itself, be considered a giant robot all on its own.
Span of autonomy
Let’s return to our discussion of automated robots vs. autonomous robots. Automated robots can follow a set of tasks, usually overseen (or at least checked on regularly) by a human operator.
My 3D printers are a good example. When I create a design in CAD software and then convert that design into gcode, what I’m creating is a series of movement instructions. Instructions specify the X and Y position of the print head, along with how high the extruder needs to be to account for the layers that are being constantly added. Instructions also specify the temperature of the extruder, determining how quickly and smoothly the plastic melts onto the previous layer.
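To make that concrete, here’s a small Python sketch that generates the kind of movement instructions I’m describing. The G1 (move) and M104 (set extruder temperature) commands are real G-code; the coordinates, feed rate, and temperature are invented for the example, and a real print would also include E values to control filament extrusion.

```python
def gcode_layer(z_height, points, feed_rate=1500):
    """Emit movement instructions for one layer at a fixed extruder height."""
    lines = [f"G1 Z{z_height:.2f} F{feed_rate}"]            # rise to the new layer
    for x, y in points:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate}")  # move the print head
    return lines

program = ["M104 S210"]  # heat the extruder to 210 C (a typical value for PLA)
square = [(10, 10), (50, 10), (50, 50), (10, 50), (10, 10)]
for layer in range(3):
    program += gcode_layer(0.2 * (layer + 1), square)       # each layer 0.2 mm higher

print("\n".join(program))
```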
Usually, I’ll kick off a print and then monitor it via a camera. On the not-rare-enough occasion that the print fails or the printer just decides to spew molten plastic into the air, I can usually catch it fast enough to rush into the Fab Lab and cancel the print. The printer is automated (it’s following instructions), but there’s nothing autonomous about this process.
Also: Generative AI will far surpass what ChatGPT can do
Newer printers are incorporating some AI: The cameras feed images to a processor that uses some machine learning to examine each image and determine if there’s a failure situation. While the machines can’t fix those failures, machine learning can turn off the process, preventing loss of material and a possible safety hazard.
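A monitoring loop along those lines might look like the following sketch. Everything here is hypothetical: capture_frame, spaghetti_probability (failed prints are nicknamed “spaghetti”), and pause_print stand in for whatever camera, model, and printer APIs a particular machine exposes.

```python
import time

FAILURE_THRESHOLD = 0.9  # how confident the model must be before intervening

def monitor_print(capture_frame, spaghetti_probability, pause_print):
    """Watch the camera feed and pause the print if failure looks likely."""
    while True:
        frame = capture_frame()               # grab the latest camera image
        score = spaghetti_probability(frame)  # ML model scores the image
        if score > FAILURE_THRESHOLD:
            pause_print()                     # stop before wasting more filament
            print(f"Likely failure (score={score:.2f}); print paused.")
            break
        time.sleep(10)                        # re-check every ten seconds
```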
When you visit Amazon’s warehouses, you’ll see more robots. There are carting robots that move along the floor, delivering products. Most of these are automated, not autonomous. But Amazon’s wildly complex conveyor systems do have intelligent imaging systems that look at products as they pass by and make some decisions about them. Here, we’re starting to see signs of more management taking place without human supervision.
I have a drone that also exhibits some autonomous behaviors. If, while directing it via a hand-held controller, I send it out of radio range, the drone itself will take over. It will plot a course back to its origin, reverse course, avoid obstacles like trees and power lines, and bring itself back home without any interaction. It will perform the same behavior when it senses its battery is too low for it to continue flying.
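The decision logic behind that kind of failsafe can be sketched in a few lines of Python. The DroneState fields, the reserve math, and the thresholds below are all invented for illustration; they’re not any vendor’s actual firmware.

```python
from dataclasses import dataclass

@dataclass
class DroneState:
    signal_ok: bool         # still hearing from the hand-held controller?
    battery_pct: float      # remaining charge, 0 to 100
    distance_home_m: float  # straight-line distance back to the launch point

RESERVE_PCT_PER_METER = 0.02  # invented figure: battery needed per meter of return

def should_return_home(s: DroneState) -> bool:
    """Decide, without operator input, whether to abort and fly home."""
    if not s.signal_ok:
        return True  # out of radio range: take over and head back
    needed = s.distance_home_m * RESERVE_PCT_PER_METER + 10  # keep a 10% landing margin
    return s.battery_pct < needed

# Example: plenty of signal, but only 15% battery with 800 m to cover -> go home.
print(should_return_home(DroneState(signal_ok=True, battery_pct=15, distance_home_m=800)))
```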
In each of these three examples (AI-assisted 3D printer monitoring, warehouse conveyor monitoring, and return-to-home flight), we’re seeing autonomous behaviors built as an extension of a mostly automated system. I think that’s how we’ll see autonomous features roll out: situation by situation, until more and more situations are taken into account.
Also: The tech behind ChatGPT could power your next car’s AI driving assistant
Eventually, you’ll be able to crawl into your car and get an extra 45 minutes of shuteye while the vehicle drives you to the Starbucks nearest your office. But as the SAE levels of driving automation discussed earlier show, Level 5 is a big step. At that point, we’re trusting the vehicle to handle any and all road conditions and respond intelligently, carefully, quickly, and safely. Most experts believe that we’ll start seeing cars with this capability sometime after 2030.
When ChatGPT blunders with one of its famous hallucinations, it’s merely annoying, and possibly embarrassing if someone uses that material in some writing. But if a robot blunders while operating in the real world, something can go wrong physically, even fatally. Because the stakes are high, much more care needs to be taken not only in the development of fully autonomous systems, but also in the staging and release of those systems, to make sure they’re safe to unleash in our shared environment.
Looking forward: Robots of tomorrow
Let’s review our three main takeaways: First, science fiction has given us a picture of a robot that is both cautionary and aspirational — but not necessarily practical. Second, a great many things can be considered robots in the real world. And third, the range of autonomy can vary among different real-world robots.
At first glance, it seems as though AI and robotics are inextricably linked. But as we’ve seen, AI can inform all, part, or none of a robot’s function, depending on the level of technology involved and the purpose of the robot. While it would be nice for a fabrication robot to know when it’s failing and stop itself, we derive a great deal of value from automated CNC devices and 3D printers that just follow their gcode instructions.
As we look into the future, we’ll see more autonomous systems. Siemens has a fascinating vision of what a factory will look like in the coming decades, and it showcases many autonomous systems interacting with the production process overall.
Also: How horses can inform the future of robot-human interaction
Outside of the world of entertainment characters, robots are complex mechanisms that justify the cost and effort to create them by the value they generate, whether that be cost savings, time savings, the ability to take on otherwise difficult processes, the ability to operate in environments dangerous to humans, or the ability to force-multiply the efforts of their human operators.
The ability to interact with the real world and perform automated steps is table stakes for participating in the robotics revolution. As we move forward, expect a melding of machine learning, intelligent vision, generative AI, traditional programming skills, and mechanical design prowess to open up new doors, provide new opportunities, and help robots of all sizes and capabilities do more to help us.
On the other hand, if — someday in the future — robots start to yell, “Exterminate! Exterminate!” …well, then… Danger, Will Robinson, Biddi Biddi Biddi, These are not the droids you’re looking for.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.