A new study from MIT shows a weakness in the most advanced artificial intelligence (AI) systems.



A new test from MIT's Center for Brains, Minds and Machines (CBMM) shows that even the most advanced artificial intelligence (AI) systems have trouble understanding the basic laws of cause and effect and physics.
Although AI algorithms can describe objects accurately, they have difficulty understanding the effects one object can have on another. Given how quickly autonomous machines running AI are being deployed, this shortcoming could have serious consequences.
Josh Tenenbaum, an MIT professor at CBMM, designed and directed the experiment with fellow MIT researcher Chuang Gan and Harvard PhD student Kexin Yi.
The researchers designed the intelligence test to highlight gaps in the programs' understanding. Each AI system was shown a virtual environment in which a few objects interact with one another.
The AI programs were then asked questions about the objects. They performed very well on descriptive tasks, such as identifying an object's colour, answering correctly more than 90% of the time.
However, the programs struggled with causal and dynamic questions, answering correctly only about 10% of the time.
This study points to a significant problem that needs to be addressed before self-driving cars, autonomous industrial machines and AI-based medical programs are widely deployed.
As human beings, we take this kind of reasoning for granted, so it is difficult to appreciate how hard it is to program these systems to operate safely in every possible situation.
AI systems are great at observing and identifying objects, but they do not realize how their actions can have negative external effects.
David Cox, IBM Director of the MIT-IBM Watson AI Lab, gave the example of a robot that can recognise an object but does not anticipate that pushing it will make it topple over. A blind spot like this can be costly and dangerous.
An AI system is only as smart as the data set it was trained on. Machine learning, especially the increasingly popular “deep learning”, requires a program to ingest large amounts of data in order to learn how to identify things accurately. Problems can occur down the line if that data set is inadequate.
For example, early AI programs for self-driving cars had trouble identifying people with very dark skin because the data sets they were trained on contained only white and Asian faces.
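To make the data-quality point concrete, here is a minimal, hypothetical sketch (not from the study; written in Python using numpy and scikit-learn, with entirely synthetic data) that trains a simple classifier on a set in which one group is heavily under-represented, then shows that accuracy drops for exactly that group:

    # Hypothetical illustration of an inadequate, skewed training set.
    # All data is synthetic; the groups and numbers are invented for the sketch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Two-class data for one group; `shift` moves its feature distribution."""
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
        return X, y

    # Group A dominates the training set; group B is barely represented.
    Xa, ya = make_group(2000, shift=0.0)
    Xb, yb = make_group(20, shift=3.0)
    X_train = np.vstack([Xa, Xb])
    y_train = np.concatenate([ya, yb])

    model = LogisticRegression().fit(X_train, y_train)

    # Evaluate on fresh samples from each group.
    Xa_test, ya_test = make_group(500, shift=0.0)
    Xb_test, yb_test = make_group(500, shift=3.0)
    print("accuracy on well-represented group:", accuracy_score(ya_test, model.predict(Xa_test)))
    print("accuracy on under-represented group:", accuracy_score(yb_test, model.predict(Xb_test)))

In this toy setup the model learns the pattern of the well-represented group and carries it over to the under-represented one, which is essentially the failure mode described above.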
When it comes to training AI programs, coders have to draw on a wide variety of sources so that the program does not learn only a single sense of each word.
For example, a program trained solely on a corpus of medical writing can be completely confused by colloquial speech, especially when everyday slang is used to describe symptoms.
Artificial intelligence is a remarkable technological advancement, but at this stage AI systems are still at the developmental level of a young child.
An associate professor at Harvard who studies machine learning summed up the issue with an example: an AI program that learned men are more likely than women to die from alcohol-related causes could, without understanding why, reach a nonsensical conclusion about how to reduce those deaths.
As long as researchers remember that even the most sophisticated AI programs do not have the same reasoning capabilities as humans, they can avoid mistakes ranging from the humorous to the life-threatening.