Microsoft tries ChatGPT on robots and the results are impressive

Researchers at Microsoft recently tested ChatGPT on robot arms, drones, and home assistant robots. The company says the experiments delivered excellent results, signaling a bright future for ChatGPT in the robotics realm.

The arrival of ChatGPT signaled the start of the modern AI era worldwide. After Microsoft announced the integration of the AI language model into Bing, other companies like Google began sharing their own work on the technology, and a handful of Chinese firms were reported to be developing ChatGPT-like projects of their own. To stay ahead of the game, Microsoft is now pushing ChatGPT into another area: robotics.

In the work shared by Microsoft’s Autonomous Systems and Robotics Research Group, ChatGPT was given various tasks across different platforms. The tests relied on a set of design principles, including special prompting structures, high-level APIs, and human feedback via text. The group reported that while the tech “still needs some help,” the results of the project proved that “ChatGPT can do a lot by itself.”

“By following our set of design principles, ChatGPT can generate code for robotics scenarios,” the group shared. “Without any fine-tuning we leverage the LLM’s (large language model) knowledge to control different robot form factors for a variety of tasks.”
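As a rough illustration of that prompting approach, the sketch below shows how a robot’s high-level API might be spelled out in a prompt so that the model only writes code against those functions. The function names and prompt wording here are hypothetical stand-ins for illustration, not Microsoft’s actual interface.

```python
# Hypothetical sketch of the prompting pattern described above: the robot's
# high-level API is listed in the prompt, and ChatGPT is asked to respond with
# code that calls only those functions. All names are illustrative assumptions.

SYSTEM_PROMPT = """
You control a robot through the following Python functions ONLY:
  get_position(object_name) -> (x, y, z)   # locate a named object
  move_to(x, y, z)                         # move the end effector
  grab()                                   # close the gripper
  release()                                # open the gripper
Respond with Python code that accomplishes the user's request.
"""

def build_prompt(user_request: str) -> list[dict]:
    """Package the API description and the task as a chat conversation."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

# The messages would then be sent to the chat model, and the code it produces
# reviewed by a human before being executed on the robot.
messages = build_prompt("Pick up the red block and place it on the blue block.")
```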

One of the tests in the project involved giving ChatGPT control of a drone and of the Microsoft AirSim simulator. In the videos shared, ChatGPT was able to execute commands ranging from finding a drink and identifying one based on a description to suggesting a “healthy option.” It also successfully followed text commands to take a selfie in front of a reflective surface and to inspect a shelf in a lawnmower pattern. In a simulated industrial inspection scenario, the researchers reported favorable results in aerial obstacle avoidance, adding that the model “was able to effectively parse the user’s high-level intent and geometrical cues to control the drone accurately.”
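For a sense of what a “lawnmower pattern” inspection looks like in code, here is a minimal sketch of the kind of routine ChatGPT might generate against a simplified drone command. The fly_to() function is a hypothetical high-level wrapper, not the actual AirSim or drone SDK call.

```python
# Rough illustration of the lawnmower (boustrophedon) shelf inspection: the
# drone sweeps back and forth across the shelf face, stepping down one row at
# a time. fly_to() is a hypothetical placeholder for a real flight command.

def fly_to(x: float, y: float, z: float) -> None:
    print(f"flying to ({x:.1f}, {y:.1f}, {z:.1f})")  # stand-in for a real command

def inspect_shelf(width: float, top: float, bottom: float, rows: int, distance: float) -> None:
    """Sweep the shelf face row by row, alternating direction on each pass."""
    step = (top - bottom) / max(rows - 1, 1)
    for row in range(rows):
        z = top - row * step
        # Reverse the sweep direction on every other row so the path never doubles back.
        x_start, x_end = (0.0, width) if row % 2 == 0 else (width, 0.0)
        fly_to(x_start, distance, z)
        fly_to(x_end, distance, z)

inspect_shelf(width=3.0, top=2.0, bottom=0.5, rows=4, distance=1.0)
```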

In an even more impressive scenario, ChatGPT passed a manipulation test by stacking blocks and arranging them into Microsoft’s four-colored logo with the aid of its knowledge base.

“We used conversational feedback to teach the model how to compose the originally provided APIs into more complex high-level functions: that ChatGPT coded by itself,” the group explained. “…The model displayed a fascinating example of bridging the textual and physical domains when tasked with building the Microsoft logo out of wooden blocks. Not only was it able to recall the logo from its internal knowledge base, it was able to ‘draw’ the logo (as SVG code), and then use the skills learned above to figure out which existing robot actions can compose its physical form.”
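The snippet below is a hedged sketch of the kind of composition the group describes: simple, originally provided primitives combined into a higher-level skill. It reuses the hypothetical API names from the earlier prompt example; none of this is Microsoft’s actual code.

```python
# Illustrative composition of primitive actions (hypothetical names) into a
# higher-level function, the sort of skill ChatGPT was taught to write through
# conversational feedback.

def get_position(name: str) -> tuple[float, float, float]:
    positions = {"red block": (0.1, 0.2, 0.0), "blue block": (0.4, 0.2, 0.0)}
    return positions[name]  # stand-in for a perception call

def move_to(x: float, y: float, z: float) -> None:
    print(f"move_to({x:.2f}, {y:.2f}, {z:.2f})")

def grab() -> None:
    print("grab()")

def release() -> None:
    print("release()")

def place_on_top(block: str, base: str, block_height: float = 0.05) -> None:
    """Composed skill: pick up `block` and set it down on top of `base`."""
    bx, by, bz = get_position(block)
    move_to(bx, by, bz)
    grab()
    tx, ty, tz = get_position(base)
    move_to(tx, ty, tz + block_height)
    release()

place_on_top("red block", "blue block")
```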

While the results of the project look promising, Microsoft stressed that the work covers “only a small fraction” of what can be done when large language models are applied to robots. The company also reminded readers that ChatGPT is not yet fully ready to aid robots in performing tasks, warning enthusiasts and other researchers to “always take the necessary safety precautions.”

More about the topics: AI, Artificial Intelligence, ChatGPT, Microsoft Research, OpenAI