Neuroscience-Inspired Artificial Intelligence
The fields of neuroscience and artificial intelligence are unquestionably intertwined. To bridge the declining communication between them, the authors survey past interactions between the two fields and discuss current advances in AI that are inspired by neuroscience. The paper concludes by outlining how future research can benefit from drawing insights from both disciplines, and also explores how neuroscience can in turn benefit from AI.
The paper begins with the premise that, to build generally intelligent systems of the kind Turing envisioned, one must scrutinize and learn from the human mind, the only existing proof that such intelligence is possible. Neuroscience can hence not only provide inspiration for the development of these intelligent systems but also serve as a validation tool: if a technique developed in AI is subsequently found to be implemented in the human brain, that lends it credibility.
From historic interactions, it is evident that the origins of AI lie in neuroscience. Artificial neural networks were originally constructed to study neural computation, and they became the foundation of deep learning. The backpropagation algorithm originated in the parallel distributed processing (PDP) movement in the study of human cognition, dropout was inspired by the stochasticity of biological neurons, and reinforcement learning originated in the study of animal learning.
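To make the dropout analogy concrete, the following is a minimal sketch, not taken from the paper: dropout randomly silences units during training, loosely mirroring the unreliable firing of biological neurons. The layer size, batch size, and drop probability are illustrative assumptions.

```python
import numpy as np

def dropout_forward(activations, drop_prob=0.5, training=True):
    """Randomly zero out a fraction of activations, echoing the
    stochastic (unreliable) firing of biological neurons.

    Uses "inverted" dropout: surviving units are rescaled so the
    expected activation is unchanged at test time.
    """
    if not training or drop_prob == 0.0:
        return activations
    # Bernoulli mask: each unit survives with probability (1 - drop_prob).
    mask = np.random.rand(*activations.shape) > drop_prob
    return activations * mask / (1.0 - drop_prob)

# Illustrative usage on a small batch of hidden-layer activations.
hidden = np.random.randn(4, 8)          # batch of 4 examples, 8 hidden units
noisy_hidden = dropout_forward(hidden)  # roughly half the units silenced
```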
Among recent advances, although the attention mechanism was developed without conscious reference to its counterpart in neuroscience, neurocomputational models had already used a piecemeal approach (isolating and prioritizing the information that is relevant at the moment) back in 1993. This only emphasizes how significant collaboration between the two domains can be. Parallels are also drawn between episodic memory, supported by the hippocampus in the medial temporal lobe, and the "experience replay" buffer in the deep Q-network (DQN); a DQN can hence even be thought of as a primitive hippocampus. Similarly, insights from human working memory are reflected in RNN and LSTM architectures. Intelligent systems now aim to learn from different tasks over distributed timescales, which is consistent with studies showing that "memories can be protected from interference through synapses that transition between a cascade of states with different levels of plasticity".
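As a rough illustration of the experience-replay idea (the analogue of hippocampal replay), here is a minimal sketch, not taken from the paper; the buffer capacity, batch size, and transition format are illustrative assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions and replays random mini-batches,
    loosely analogous to the hippocampus replaying past episodes
    during offline consolidation."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fade out

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform sampling breaks the temporal correlation between
        # consecutive experiences, which stabilizes Q-learning updates.
        return random.sample(self.buffer, batch_size)

# Illustrative usage inside a training loop (the environment is hypothetical):
# buffer = ReplayBuffer()
# buffer.add(s, a, r, s_next, done)
# if len(buffer.buffer) >= 32:
#     batch = buffer.sample(32)   # train the Q-network on this batch
```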
The authors then discuss how future AI systems can draw on studies of neural computation to bridge the gap between machine and human intelligence. Human cognition can help in developing networks that "learn to learn" and are capable of efficient and transfer learning. The authors believe that by transferring insights from brain mechanisms, one can build agents capable of hierarchical planning, generalization, and genuine creativity. In addition, techniques from neuroimaging and equivalents of single-cell recordings can be employed to make black-box AI interpretable and perhaps even explainable.
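As an illustration of what an "equivalent of a single-cell recording" might look like in practice, here is a minimal sketch, not from the paper, that logs one hidden unit's activation across a batch of stimuli to characterize its tuning; the network, weights, and stimuli are all hypothetical.

```python
import numpy as np

def record_unit(weights, biases, inputs, unit_index):
    """'Single-cell recording' for an artificial network: log one hidden
    unit's activation across a set of stimuli to characterize its tuning."""
    hidden = np.maximum(0.0, inputs @ weights + biases)  # ReLU hidden layer
    return hidden[:, unit_index]

# Illustrative usage with random weights and stimuli (all values are assumptions).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))        # 16 input features -> 32 hidden units
b = np.zeros(32)
stimuli = rng.normal(size=(100, 16)) # 100 stimuli presented to the network
tuning_curve = record_unit(W, b, stimuli, unit_index=7)
print(tuning_curve.shape)            # (100,): activation of unit 7 per stimulus
```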
The paper concludes by acknowledging the value of AI techniques for analyzing neuroimaging datasets and advocates forming a "virtuous cycle" of exchange to advance both fields.