Houston: Scientists have trained an artificial intelligence (AI) system to see things the way humans do, inferring a full environment from just a few quick glimpses around, a development that may pave the way for more effective search-and-rescue robots.
Most AI tools are trained for very specific tasks, such as recognising an object or estimating its volume, in an environment they have experienced before. Scientists at the University of Texas at Austin in the US wanted instead to develop a general-purpose AI that gathers visual information usable across a wide range of tasks.
“We want an agent that is generally equipped to enter environments and be ready for new perception tasks as they arise,” said Kristen Grauman, a professor at the University of Texas. The researchers used deep learning, a type of machine learning inspired by the brain’s neural networks, to train their agent on thousands of 360-degree images of different environments.
Now, when presented with a scene it has never seen before, the agent draws on that experience to choose a few glimpses — like a tourist standing in the middle of a cathedral taking a handful of snapshots in different directions — that together add up to less than 20 per cent of the full scene.