In recent decades, artificial intelligence has performed well in many areas of science and technology.
Neural networks, whether real or artificial, learn by adjusting the connections between neurons. As connections strengthen or weaken, some neurons become more active and others less so, until a specific pattern of activity emerges. We call this pattern a "memory." The AI strategy is to use complex and lengthy algorithms that iteratively fine-tune and optimize the connections between neurons. The brain does this much more simply: each connection between two neurons changes only according to how active those two neurons are at the same time. It has long been thought that this simpler rule allows fewer memories to be stored than an AI algorithm.
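To make the contrast concrete, here is a minimal sketch of such a local learning rule, the classic Hebbian outer-product rule used in Hopfield-style networks. This is an illustration of the general principle, not the specific model from the study: every weight changes only as a function of the joint activity of the two neurons it connects, with no global optimization loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store binary (+1/-1) patterns with the Hebbian outer-product rule:
# each weight w_ij changes only with the joint activity x_i * x_j.
n_neurons, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

W = np.zeros((n_neurons, n_neurons))
for p in patterns:
    W += np.outer(p, p) / n_neurons   # purely local update
np.fill_diagonal(W, 0)                # no self-connections

# Retrieval: start from a corrupted cue and let activity settle.
state = patterns[0].copy()
state[:10] *= -1                      # flip 10% of the neurons
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = np.mean(state == patterns[0])
print(overlap)
```

Note that the stored pattern is recovered from a noisy cue even though no error signal was ever propagated through the network; the rule only ever looked at pairs of neurons.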
New research paints a different picture: when the relatively simple strategy the brain uses to change neural connections is combined with biologically plausible patterns of individual neuron responses, it performs as well as or better than AI algorithms.
The reason for this paradox lies in the introduction of errors. A memory counts as successfully retrieved when it is strongly correlated with the original input, not necessarily identical to it. The brain's strategy retrieves memories that differ from the original inputs: it suppresses the activity of neurons that are only weakly active in each pattern. These silenced neurons play little role in distinguishing between the different memories stored in the same network. By ignoring them, the network focuses its resources on the neurons that are relevant to the input being remembered, which yields higher storage capacity.
Overall, this study highlights how biologically plausible, self-organizing learning procedures can be as effective as slow and biologically implausible learning algorithms.