ATLANTA – The black and yellow robot, meant to resemble a large dog, stood waiting for directions. When they came, the instructions weren’t in code but instead in plain English: “Visit the wooden desk exactly two times; in addition, don’t go to the wooden desk before the bookshelf.”

Four metallic legs whirred into action. The robot went from where it stood in the room to a nearby bookshelf, and then, after a brief pause, shuffled to the designated wooden desk before leaving and returning for a second visit to satisfy the command.

Until recently, such an exercise would have been nearly impossible for navigation robots like this one to carry out. Most current software for navigation robots can't reliably translate English, or any everyday language, into the code that its robots understand and can execute.

And this gets even harder when the software has to make logical leaps based on complex or expressive directions, such as going to the bookshelf before the wooden desk. Traditionally, that has required training on thousands of hours of data so the system knows what the robot should do when it encounters that particular type of command.
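To make the kind of constraint in the demonstration concrete, here is a minimal, illustrative sketch (not the Brown team's actual system) of how a plain-English command like the one above could be reduced to a formal, machine-checkable specification. The function name and the representation of the robot's path as a list of visited landmarks are assumptions made for this example only.

```python
from typing import List

def satisfies_command(visits: List[str]) -> bool:
    """Check a visit sequence against the article's example command:
    'Visit the wooden desk exactly two times; in addition, don't go
    to the wooden desk before the bookshelf.'
    (Hypothetical helper for illustration, not the paper's method.)"""
    desk_idx = [i for i, place in enumerate(visits) if place == "wooden desk"]
    shelf_idx = [i for i, place in enumerate(visits) if place == "bookshelf"]

    # Constraint 1: the wooden desk is visited exactly twice.
    if len(desk_idx) != 2:
        return False
    # Constraint 2: no desk visit may occur before the first bookshelf visit.
    if not shelf_idx or desk_idx[0] < shelf_idx[0]:
        return False
    return True

# The trajectory described in the article satisfies both constraints:
print(satisfies_command(["bookshelf", "wooden desk", "hallway", "wooden desk"]))  # True
# Visiting the desk first violates the ordering constraint:
print(satisfies_command(["wooden desk", "bookshelf", "wooden desk"]))             # False
```

The hard part, and the focus of the research described here, is getting from the free-form English sentence to a specification like this automatically, rather than checking it by hand.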

Advances in so-called large language models that run on artificial intelligence, however, are changing this. These newfound powers of understanding and reasoning are not only helping make experiments like this achievable but also have researchers excited about transferring this kind of success to environments outside of labs, such as people's homes and major cities and towns around the world.

For the past year, researchers at Brown University's Humans to Robots Laboratory have been working on a system with this kind of potential, which they describe in a new paper to be presented at the Conference on Robot Learning in Atlanta on November 8.

Read more at TechXplore