Computers ought to be taught like babies.

Why we're wrong about the future of AI

By Hemaja Burud


At chess, poker, Jeopardy, Go, and many other games, artificial intelligence systems have consistently defeated humans.


But when it comes to comprehending the fundamental laws governing the physical world, robots continue to struggle.


Modeling machine learning systems on how infant brains work could make them more effective thinkers.


In the seven decades since this challenge was first posed, we have not created an artificial intelligence model that can match the cognition of a typically developing 1-year-old.


One area where artificial intelligence has fallen short is mimicking a young child's intuitive grasp of physics.


Studies of cognitive development show that within the first several months of life, humans already hold expectations about these basic features of the physical world. We take visual perception for granted.


That is because we have low-level brain machinery dedicated to processing visual information instinctively and subconsciously.


Consider for a moment that, beneath all the differences among people, there is a set of perceptual and cognitive abilities shared by everyone.


These abilities include object permanence: the assumption that objects persist and do not simply vanish (for instance, your missing keys still exist somewhere, even when you can't find them).
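This expectation can be made concrete with a toy sketch. Developmental psychologists probe object permanence with "violation of expectation" setups: an infant looks longer at a scene where an object vanishes in plain view than at one where it slides behind an occluder. The snippet below is a hypothetical illustration of that logic, not any real benchmark or model; all names and the frame format are invented for this example.

```python
def is_surprising(frames):
    """Return True if an object vanishes while unoccluded.

    `frames` is a list of dicts with two keys:
      'visible'  - whether the object appears in this frame
      'occluded' - whether something is blocking the object
    An observer with object permanence expects a visible object
    to stay visible unless it becomes occluded.
    """
    for prev, curr in zip(frames, frames[1:]):
        if prev["visible"] and not curr["visible"] and not curr["occluded"]:
            return True  # object disappeared in plain view: surprising
    return False

# Keys slide behind a couch (occluded): consistent with object permanence.
plausible = [
    {"visible": True, "occluded": False},
    {"visible": False, "occluded": True},
]
# Keys vanish in mid-air: violates object permanence.
implausible = [
    {"visible": True, "occluded": False},
    {"visible": False, "occluded": False},
]
print(is_surprising(plausible))    # False
print(is_surprising(implausible))  # True
```

A 1-year-old applies something like this rule effortlessly and subconsciously; building an AI system that learns such expectations from raw experience, rather than having them hand-coded as above, is the hard part.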


Humans aren't the only species with physical intuitions, either. Other animals also form expectations about an object's location and motion.