I’ve spent much of the past year discussing generative AI and large language models with robotics experts. It’s become increasingly clear that these sorts of technologies are primed to revolutionize the way robots communicate, learn, look and are programmed.

Accordingly, a number of top universities, research labs and companies are exploring the best methods for leveraging these artificial intelligence platforms. Well-funded Oregon-based startup Agility has been playing around with the tech for a while now using its bipedal robot, Digit.

Today, the company is showcasing some of that work in a short video shared through its social channels.

“[W]e were curious to see what can be achieved by integrating this technology into Digit,” the company notes. “A physical embodiment of artificial intelligence created a demo space with a series of numbered towers of several heights, as well as three boxes with multiple defining characteristics. Digit was given information about this environment, but was not given any specific information about its tasks, just natural language commands of varying complexity to see if it can execute them.”

In the video example, Digit is told to pick up a box the color of “Darth Vader’s lightsaber” and move it to the tallest tower. The process isn’t instantaneous, but rather slow and deliberate, as one might expect from an early-stage demo. The robot does, however, execute the task as described.
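Agility hasn’t published how the demo is wired up, but the description suggests a familiar pattern: hand the LLM a structured description of the scene plus the spoken command, and let the model resolve the references (“Darth Vader’s lightsaber” to the red box, “tallest tower” to a tower ID) into a plan the robot layer can execute. Here’s a minimal sketch of that pattern; every name in it (llm_complete, the prompt format, the scene schema) is a hypothetical stand-in, not Agility’s implementation:

```python
# Hypothetical sketch of an LLM-grounded pick-and-place command pipeline.
# Nothing here reflects Agility's actual stack; it only illustrates the idea.
import json

SCENE = {
    "towers": [{"id": 1, "height_m": 0.5}, {"id": 2, "height_m": 1.1}],
    "boxes": [{"id": "A", "color": "red"}, {"id": "B", "color": "green"}],
}

PROMPT = """You control a bipedal robot. The scene: {scene}
Reply only with JSON: {{"pick": "<box id>", "place_at": <tower id>}}
Command: {command}"""

def llm_complete(prompt: str) -> str:
    """Placeholder: swap in any chat-completion client here."""
    raise NotImplementedError

def execute(command: str) -> None:
    # The LLM does the grounding ("Darth Vader's lightsaber" -> the red box,
    # "tallest tower" -> tower 2); the robot layer only sees resolved IDs.
    prompt = PROMPT.format(scene=json.dumps(SCENE), command=command)
    plan = json.loads(llm_complete(prompt))
    # A real system would dispatch these to manipulation/locomotion stacks.
    print(f"pick box {plan['pick']}; place at tower {plan['place_at']}")

# execute("Pick up the box the color of Darth Vader's lightsaber "
#         "and move it to the tallest tower.")
```

The appeal of this split is that the hard perception and control problems stay in the robot stack, while the LLM handles only the language-to-task translation.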

Agility notes, “Our innovation team developed this interactive demo to show how LLMs could make our robots more versatile and faster to deploy. The demo enables people to talk to Digit in natural language and ask it to do tasks, giving a glimpse at the future.”


Natural language communication has been a key potential application for this technology, along with the ability to program systems via low- and no-code tools.

During my Disrupt panel, Gill Pratt described how the Toyota Research Institute is using generative AI to speed up robot learning:

We have figured out how to do something, which is use modern generative AI techniques that enable human demonstration of both position and force to essentially educate a robot from just a handful of examples. The code is not changed at all. What this is based on is something called diffusion policy. It’s work that we did in collaboration with Columbia and MIT. We’ve taught 60 different skills so far.
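Diffusion policy, in rough terms, treats action prediction like image generation: a network trained on human demonstrations learns to denoise a random action sequence into one that matches the demonstrated behavior, conditioned on what the robot currently observes. Below is a minimal sketch of that inference loop, assuming a generic DDPM-style noise schedule and a placeholder eps_model; it illustrates the published technique (Chi et al., “Diffusion Policy”), not TRI’s internal code:

```python
# Minimal sketch of diffusion-policy inference: denoise random actions into
# a trajectory conditioned on the current observation. Illustrative only.
import torch

def sample_actions(eps_model, obs, horizon=16, act_dim=7, steps=50):
    """eps_model(actions, obs, t) is assumed to predict the added noise."""
    actions = torch.randn(horizon, act_dim)      # start from pure noise
    betas = torch.linspace(1e-4, 0.02, steps)    # toy noise schedule
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    for t in reversed(range(steps)):
        eps = eps_model(actions, obs, t)         # predicted noise at step t
        # One DDPM-style update toward the demonstrated action manifold.
        actions = (actions - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) \
                  / (1 - betas[t]).sqrt()
        if t > 0:
            actions += betas[t].sqrt() * torch.randn_like(actions)
    return actions  # a horizon of action targets for the robot to execute
```

The “handful of examples” part is the selling point: the same denoising network is retrained on a few human demonstrations per skill, with no task-specific code changes.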

MIT CSAIL’s Daniela Rus also recently told me, “It turns out that generative AI can be quite powerful for solving even motion planning problems. You can get much faster solutions and much more fluid and human-like solutions for control than with model predictive solutions. I think that’s very powerful, because the robots of the future will be much less roboticized. They will be much more fluid and human-like in their motions.”

The potential applications here are broad and exciting — and Digit, as an advanced commercially available robotic system that is being piloted at Amazon fulfillment centers and other real-world locations, seems like a prime candidate. If robots are going to work alongside humans, they’ll need to learn to listen to them, as well.
