Although there has been good progress in AI development, a fundamentally different approach may be necessary to achieve true artificial general intelligence (AGI). This is apparent given that current approaches do not take full advantage of how data is organized (the logical model) and instead rely on heuristic techniques to make machines behave like humans. AI researchers need a good understanding of the neural signaling pathways of the human brain; without it, the only remaining option is a trial-and-error approach to achieving AGI.
Understanding how the human brain processes data in order to manifest intelligence
Let’s walk through how the brain processes sensory data to make sense of the world.
A photoreceptor cell is a specialized type of cell found in the retina that is capable of visual phototransduction. A protein in these retinal cells converts light into signals, triggering a change in cell membrane potential. The various retinal photoreceptor cells (most predominantly rods and cones) help distinguish different aspects of form, such as shape, depth, and color. Designated ganglion cells pass these individual signal packets on to the lateral geniculate nucleus in the thalamus (located under the brain’s cerebral cortex). As the receiver of the major sensory input from the retina, the lateral geniculate nucleus serves as a relay center for the visual pathway.
The thalamus has multiple functions and is generally believed to act as a relay station that transmits information between subcortical areas and the cerebral cortex. Every sensory system includes a thalamic nucleus that receives sensory signals and sends them to the associated primary cortical area: the lateral geniculate nucleus relays inputs from the retina to the visual cortex, the medial geniculate nucleus relays auditory inputs to the primary auditory cortex, and the ventral posterior nucleus sends touch and proprioceptive information to the primary somatosensory cortex. Notice that every input received by the thalamus is effectively hashed and routed to the respective cortex for storage. Incoming inputs are unified based on the time at which firing takes place (“neurons that fire together wire together”). This binds all incoming inputs into one processing unit, which Hebb’s law refers to as a “cell-assembly.”
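The fire-together-wire-together idea can be made concrete with a toy Hebbian update. This is a minimal sketch of my own, not the author's implementation: inputs active in the same moment have their pairwise connection strengths increased, forming a crude analogue of a cell-assembly. All names and constants here are illustrative assumptions.

```python
import numpy as np

N_INPUTS = 4          # e.g. four sensory channels (illustrative)
ETA = 0.1             # learning rate (arbitrary choice)

weights = np.zeros((N_INPUTS, N_INPUTS))

def hebbian_update(weights, activity, eta=ETA):
    """Strengthen connections between co-active inputs (Hebb's rule)."""
    activity = np.asarray(activity, dtype=float)
    weights += eta * np.outer(activity, activity)  # co-firing pairs grow
    np.fill_diagonal(weights, 0.0)                 # no self-connections
    return weights

# Inputs 0 and 1 repeatedly fire together; inputs 2 and 3 stay silent.
for _ in range(10):
    hebbian_update(weights, [1, 1, 0, 0])

# Connection 0<->1 is now strong; connections involving 2 or 3 stay zero,
# so inputs 0 and 1 have effectively been bound into one assembly.
```

After repeated co-activation the 0–1 connection dominates the matrix, which is the sense in which co-firing inputs become a single processing unit.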
The thalamus is functionally connected to the hippocampus, where these assemblies further signal the creation of memories with respect to spatial memory and spatial sensory data, crucial for human episodic memory. There is support for the hypothesis that the connection between thalamic regions and particular parts of the mesio-temporal lobe provides for the differentiation between recollective and familiarity memory. In other words, when a particular signal is detected (familiarity), it is compared with stored memories (recollection) to identify the object or event through detected similarities and differences. This may account for the brain’s actual learning process.
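The familiarity-then-recollection comparison described above can be sketched as a nearest-match lookup. This is an illustrative simplification under my own assumptions (vector memories, cosine similarity, an arbitrary threshold), not anything specified by the source: a signal is first checked for familiarity, and if anything stored resembles it, the closest memory is recalled.

```python
import numpy as np

# Hypothetical stored memories as feature vectors (illustrative only).
stored_memories = {
    "apple":  np.array([1.0, 0.2, 0.1]),
    "banana": np.array([0.1, 1.0, 0.3]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(signal, memories, familiarity_threshold=0.8):
    """Return (label, similarity) of the best match, or (None, best) if unfamiliar."""
    scores = {label: cosine_similarity(signal, vec)
              for label, vec in memories.items()}
    best_label = max(scores, key=scores.get)
    if scores[best_label] < familiarity_threshold:
        return None, scores[best_label]    # novel signal: nothing familiar
    return best_label, scores[best_label]  # recollection: closest stored memory

label, score = recognize(np.array([0.9, 0.25, 0.1]), stored_memories)
```

A near-duplicate of the "apple" vector is recognized via its similarity to the stored memory, while a sufficiently different signal falls below the threshold and registers as novel.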
The output signals are propagated to the prefrontal cortex, the part of the brain responsible for planning and decision-making. The outputs of the prefrontal cortex are further passed to the primary motor cortex to plan and execute movements.
Note: I have included the processes of the brain related to high-level information processing and excluded parts such as the amygdala, hypothalamus, and others that influence mood, reward, or hormonal regulation, as these parameters do not necessarily contribute to logical intelligence. The emotional outputs are important for the human body to generate energy through hormonal discharge, which is not important in our endeavor to generate human-like intelligence (artificial intelligence). As these processes introduce unwanted biases, we are better off without emotional states, leaving no room for self-importance (ego).
- The brain employs a centralized area to tag every sensory parameter, ensuring that all cortices (silos) remain connected.
- The brain uses a linear input assembly to learn through similarities and differences, and through excitation and/or inhibition feedback.
- Using this feedback, the brain exhibits traits of decision-making, planning, and predictions.
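The feedback loop in the bullet points above can be sketched with a toy linear assembly whose weights are excited (strengthened) or inhibited (weakened) by feedback, then summed to drive a decision. This is my own simplification for illustration; the function names and update rule are assumptions, not the platform's design.

```python
# Toy linear assembly: one weight per sensory input (illustrative).
weights = {"sensor_a": 0.0, "sensor_b": 0.0}

def apply_feedback(weights, active_inputs, feedback, eta=0.5):
    """feedback = +1 (excitation) strengthens active inputs; -1 (inhibition) weakens them."""
    for name in active_inputs:
        weights[name] += eta * feedback
    return weights

def decide(weights, active_inputs):
    """Act only when the assembly's summed weight is net excitatory."""
    score = sum(weights[name] for name in active_inputs)
    return "act" if score > 0 else "wait"

apply_feedback(weights, ["sensor_a"], +1)   # excitatory feedback
apply_feedback(weights, ["sensor_b"], -1)   # inhibitory feedback
```

After this feedback, input from sensor_a drives an "act" decision while sensor_b is suppressed, a minimal stand-in for feedback-driven decision-making.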
Inspired by the workings of the human brain, the Responsible Machines platform is designed to learn and exhibit intelligence the way the human brain does. The platform will allow the user to plug all sensors into a single platform so data can be tagged and auto-assembled using Hebb’s logic. Through this method of auto-assembly (called ‘strings’), the machine will self-learn and exhibit features of the brain’s prefrontal cortex (allowing for decision-making, planning, and predictions).
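One plausible reading of the auto-assembly idea is grouping sensor readings by arrival time, so inputs that co-occur form one unit. The sketch below is hypothetical: the source does not specify how 'strings' are built, and the field names, window size, and grouping scheme here are my assumptions.

```python
from collections import defaultdict

WINDOW = 0.1  # seconds; readings within the same window assemble together (assumed)

def assemble(readings):
    """Group (timestamp, sensor, value) readings into time-windowed 'strings'."""
    strings = defaultdict(list)
    for timestamp, sensor, value in readings:
        window_id = int(timestamp / WINDOW)   # readings in one window share an id
        strings[window_id].append((sensor, value))
    return dict(strings)

readings = [
    (0.01, "camera", "red_blob"),
    (0.03, "microphone", "beep"),
    (0.25, "camera", "green_blob"),
]
strings = assemble(readings)
# The camera and microphone readings at 0.01s and 0.03s fall in the same
# window and so form one assembly; the 0.25s reading forms another.
```

The point of the sketch is only that time-of-arrival, not sensor type, is what binds inputs together, mirroring the Hebbian tagging described above.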
Click here to read about how linear assemblies can be used to learn, plan, predict and make decisions.
This AGI platform can be implemented as the brain of various machines (robots, cars, computers, etc.), wherein the machines auto-learn by collecting and processing data from sensors. Just as with humans, it would be easy to teach them specific tasks or allow them to learn through observation. Such machines can be trained and/or controlled without users having to learn ML/AI programming, creating an opportunity for everyone to benefit from AGI-driven machines without concern for them behaving haphazardly.