Microsoft researchers Shital Shah, Ashish Kapoor and Debadeepta Dey are leading development of the Aerial Informatics and Robotics Platform. Photography by Scott Eklund/Red Box Pictures.
Posted February 15, 2017 By Allison Linn
When most people with normal vision walk down the street, they can easily differentiate the things they must avoid – like trees, curbs and glass doors – from the things they don't, such as shadows, reflections and clouds.
Chances are, most people also can anticipate what obstacles they should expect to encounter next — knowing, for example, that at a street corner they should watch out for cars and prepare to step down off the curb.
The ability to differentiate and anticipate comes easily to humans, but it remains very difficult for artificial intelligence-based systems. That's one big reason self-driving cars and autonomous delivery drones are still emerging technologies.
Microsoft researchers are aiming to change that. They are working on a new set of tools that other researchers and developers can use to train and test robots, drones and other gadgets to operate autonomously and safely in the real world. A beta version is available on GitHub via an open source license.
It's all part of a research project the team has dubbed the Aerial Informatics and Robotics Platform. The platform includes software that lets researchers quickly write code to control aerial robots and other gadgets, along with a highly realistic simulator for collecting data to train an AI system and for testing it in the virtual world before deploying it in the real world.
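The article doesn't show the platform's actual API, but the workflow it describes – collect data in simulation, train a model on it, then validate before real-world deployment – can be sketched with a toy example. Every name below (`SimulatedDrone`, `collect_training_data`, `train_threshold`) is hypothetical, not part of the platform; the point is only the pattern of learning from simulated sensor data:

```python
import random

class SimulatedDrone:
    """Toy stand-in for a simulator: emits a 'distance to obstacle' reading."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def read_sensor(self):
        # Half the time an obstacle is ahead (short distance), half the time not.
        obstacle = self.rng.random() < 0.5
        distance = (self.rng.uniform(0.5, 2.0) if obstacle
                    else self.rng.uniform(3.0, 10.0))
        return distance, obstacle

def collect_training_data(sim, episodes=1000):
    """Run the simulator to gather labeled (reading, is_obstacle) pairs."""
    return [sim.read_sensor() for _ in range(episodes)]

def train_threshold(data):
    """'Train' the simplest possible policy: pick the distance cutoff
    that best separates obstacle from non-obstacle readings."""
    best_t, best_acc = 0.0, 0.0
    for t in [x * 0.25 for x in range(1, 41)]:  # candidate cutoffs 0.25..10.0
        acc = sum((d < t) == label for d, label in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

sim = SimulatedDrone(seed=42)
data = collect_training_data(sim)
threshold, accuracy = train_threshold(data)
# The learned cutoff lands in the gap between the two classes of readings.
```

A real system would swap the toy classes for the platform's simulator and replace the threshold search with an actual learning algorithm, but the flow – simulate, label, train, evaluate, only then fly – is the same one the researchers describe.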
Ashish Kapoor, the Microsoft researcher leading the project, said the team hopes the tools will spur major progress toward artificial intelligence gadgets we can trust to drive our cars, deliver our packages and maybe even do our laundry.
“The aspirational goal is really to build systems that can operate in the real world,” he said.
That's different from many other artificial intelligence research projects, which have focused on teaching AI systems to succeed in more artificial environments with well-defined rules, such as board games.
Kapoor said this work aims to help researchers develop more practical tools that can safely augment what people are doing in their everyday lives.
“That’s the next leap in AI, really thinking about real-world systems,” Kapoor said.