Washington: Scientists have developed a new computational model that performs at human levels on a standard intelligence test, an advance that may lead to artificial systems that see and understand the world as we do.
“The model performs in the 75th percentile for American adults, making it better than average,” said Ken Forbus of Northwestern University in the US. “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition,” said Forbus.
The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus’ laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner’s structure-mapping theory. The ability to solve complex visual problems is one of the hallmarks of human intelligence.
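The article does not detail how structure-mapping works, but its core idea is that an analogy is an alignment between two relational descriptions that preserves as much shared structure as possible. The following toy sketch illustrates that idea only; it is a brute-force illustration, not Gentner's actual algorithm or the matcher used inside CogSketch, and the example facts are invented for demonstration.

```python
from itertools import permutations

def structure_map(base, target):
    """Toy structural alignment: find the entity mapping from base to
    target that preserves the most relational facts.

    Facts are (relation, arg1, arg2) tuples. This is a hypothetical
    illustration of the structure-mapping idea, not the SME engine
    used by CogSketch.
    """
    base_entities = sorted({a for _, x, y in base for a in (x, y)})
    target_entities = sorted({a for _, x, y in target for a in (x, y)})
    best_map, best_score = {}, -1
    # Exhaustively try every assignment of base entities to target entities.
    for perm in permutations(target_entities, len(base_entities)):
        mapping = dict(zip(base_entities, perm))
        # Score a mapping by how many base relations it carries over intact.
        score = sum(
            1 for rel, x, y in base
            if (rel, mapping[x], mapping[y]) in target
        )
        if score > best_score:
            best_map, best_score = mapping, score
    return best_map, best_score

# Miniature solar-system / atom analogy (classic structure-mapping example)
base = {("attracts", "sun", "planet"), ("larger", "sun", "planet")}
target = {("attracts", "nucleus", "electron"), ("larger", "nucleus", "electron")}
mapping, score = structure_map(base, target)
```

Here the best mapping pairs the sun with the nucleus and the planet with the electron, because that alignment preserves both relations; real structure-mapping engines use far more efficient matching and also favour deeply nested relational structure.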
Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could potentially shrink the gap between computer and human cognition. While the system can be used to model general visual problem-solving phenomena, the researchers specifically tested it on a nonverbal standardised test that measures abstract reasoning.
All of the test’s problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix. The researchers’ computational model performed better than the average American. “The test is the best existing predictor of what psychologists call ‘fluid intelligence,’ or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships,” said Andrew Lovett, a former Northwestern postdoctoral researcher, now at the US Naval Research Laboratory.
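To make the matrix-completion task concrete, here is a deliberately tiny stand-in for such a problem, not the researchers' model: each cell is reduced to a single number (say, a count of shapes), each row follows a constant progression, and the solver picks the answer choice that completes the final row. The matrix, choices, and rule are all invented for illustration.

```python
def solve_matrix(matrix, choices):
    """Pick the choice that completes a 3x3 matrix whose rows follow a
    constant arithmetic progression in one numeric feature.

    A toy sketch of a matrix-reasoning item: cells are integers, and
    the bottom-right cell is missing (None).
    """
    # Infer the common step from the rows that are complete.
    diffs = {row[1] - row[0] for row in matrix if None not in row}
    assert len(diffs) == 1, "rows must share one progression rule"
    step = diffs.pop()
    # Apply the rule to the incomplete final row to predict the answer.
    predicted = matrix[-1][1] + step
    return choices.index(predicted)

matrix = [
    [1, 2, 3],
    [2, 3, 4],
    [3, 4, None],
]
choices = [2, 7, 5, 4]
answer = solve_matrix(matrix, choices)
```

Real test items vary several visual attributes at once (shape, shading, position, number), which is why solving them is taken as a marker of abstract reasoning rather than simple arithmetic.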