Researchers have used a popular video game to develop an algorithm that can help robots learn how to tell which objects and actions might be useful to achieve a goal.
Researchers from Brown University used the video game Minecraft to develop a new algorithm to help robots better plan their actions in complex environments.
Basic action planning, while easy for humans, is a frontier of robotics. Robots do not intuitively ignore objects and actions that are irrelevant to the task at hand, researchers said.
In complex environments, this leads to what computer scientists refer to as the "state-space explosion" - an array of choices so large that it boggles the robot mind.
Stefanie Tellex, assistant professor of computer science at Brown, is developing the algorithm, which augments standard robot planning algorithms with "goal-based action priors" - sets of objects and actions in a given space that are most likely to help an agent achieve a given goal.
The priors for a given task can be supplied by an expert operator, but they can also be learned by the algorithm itself through trial and error.
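To make the idea concrete, here is a minimal sketch of how such priors might be represented and used to shrink a planner's choices. The class and function names (GoalBasedActionPriors, prune_actions) and the simple counting scheme are illustrative assumptions, not the Brown team's actual implementation.

```python
from collections import defaultdict

class GoalBasedActionPriors:
    """Tracks, for each goal, how often each action has proved useful."""

    def __init__(self):
        self.useful = defaultdict(lambda: defaultdict(int))  # goal -> action -> useful count
        self.tried = defaultdict(lambda: defaultdict(int))   # goal -> action -> trial count

    def record(self, goal, action, was_useful):
        """Update counts after a trial - the 'trial and error' learning step."""
        self.tried[goal][action] += 1
        if was_useful:
            self.useful[goal][action] += 1

    def probability(self, goal, action):
        """Estimated probability that `action` helps achieve `goal`."""
        tried = self.tried[goal][action]
        if tried == 0:
            return 0.5  # unseen actions get an uninformative prior
        return self.useful[goal][action] / tried


def prune_actions(priors, goal, actions, threshold=0.2):
    """Keep only the actions the prior rates as likely to help with `goal`,
    so a standard planner searches a much smaller set of choices."""
    return [a for a in actions if priors.probability(goal, a) >= threshold]
```

In this toy version, an expert could seed the counts directly, or the counts could accumulate automatically as the agent practices, which mirrors the two ways the article says priors can be supplied.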
The game Minecraft provided an ideal world to test how well the algorithm learned action priors and implemented them in the planning process.
Minecraft is an open-ended game, where players gather resources and build all manner of structures by destroying or stacking 3-D blocks in a virtual world.
Tellex and her colleagues started by constructing small domains, each just a few blocks square, in a model of Minecraft that the researchers developed.
Then they plunked a character into the domain and gave it a task to solve - perhaps mining some buried gold or building a bridge to cross a chasm.
The agent, powered by the algorithm, then had to try different options in order to learn the task's goal-based priors - the best actions to get the job done.
After the algorithm ran through a number of trials of a given task to learn the appropriate priors, the researchers moved the agent to a new domain it had never seen before, to see whether it could apply what it had learned.
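A rough sketch of that train-then-transfer loop, building on the prior structure above, might look like the following. The `planner.solve` and `domain.available_actions` calls are placeholder interfaces assumed for illustration; the real system plugs the priors into its own planning algorithm.

```python
def learn_priors(priors, goal, training_domains, planner, num_trials=50):
    """Run trials in small practice domains and note which actions
    appear in successful plans, updating the priors for this goal."""
    for domain in training_domains:
        for _ in range(num_trials):
            plan, succeeded = planner.solve(domain, goal)  # placeholder planner API
            if not succeeded:
                continue
            used = set(plan)
            for action in domain.available_actions():
                priors.record(goal, action, was_useful=(action in used))
    return priors


def solve_new_domain(priors, goal, domain, planner):
    """Transfer: in an unseen domain, search only over actions the learned
    prior rates as promising, shrinking the space the planner must explore."""
    candidate_actions = prune_actions(priors, goal, domain.available_actions())
    return planner.solve(domain, goal, actions=candidate_actions)
```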
The researchers found that, armed with priors, their Minecraft agents could solve problems in unfamiliar domains much faster than agents powered by standard planning algorithms.
The researchers then tested the algorithm on a real robot, using it to have the robot help a person with the task of baking brownies.
Tellex said she sees goal-based action priors as a viable strategy to help robots cope with the complexities of unstructured environments.