How far is AI from being intelligent like humans?

Though there has been good progress in demonstrating AI, it appears we still have a long way to go, or we may be looking in the wrong direction. This is evident in current approaches: they lack an understanding of how data is organized (a logical model) and instead rely on heuristics to make machines behave like humans. AI researchers need a solid understanding of the neural signaling pathways of the human brain; without it, we are left resorting to shot-in-the-dark techniques to exhibit intelligence.

How does the human brain process data to exhibit intelligence?

Let’s understand how the brain processes sensory data to make sense of it.

A photoreceptor cell is a specialized type of cell found in the retina that is capable of visual phototransduction. Photopigment proteins in these cells convert light into signals, triggering a change in the cell's membrane potential. The different kinds of photoreceptor cells (rods and cones) are specialized in picking up different parameters such as shape, depth, and color. These parameters are passed on to the lateral geniculate nucleus, which receives individual packets of these parameters from designated ganglion cells. The lateral geniculate nucleus is a relay center in the thalamus for the visual pathway.

The thalamus has multiple functions and is generally believed to act as a relay station, passing information between subcortical areas and the cerebral cortex. Every sensory system includes a thalamic nucleus that receives sensory signals and sends them to the associated primary cortical area. The lateral geniculate nucleus picks up inputs from the retina and relays them to the visual cortex, the medial geniculate nucleus picks up auditory inputs and relays them to the primary auditory cortex, and the ventral posterior nucleus sends touch and proprioceptive information to the primary somatosensory cortex. Notice that all inputs received by the thalamus are tagged and routed to their respective cortices for storage. Incoming inputs are unified based on timestamp ("neurons that fire together wire together"). This creates an assembly of all incoming inputs as one processing unit, which Hebb referred to as "cell assemblies" (Hebb's law).
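The Hebbian binding described above can be sketched in a few lines of code. This is a minimal, illustrative example, not an implementation of any actual neural circuitry: the function name, learning rate, and toy activity vectors are all my assumptions. It simply shows the rule that connections between units which are active at the same time get stronger.

```python
# Minimal sketch of Hebb's rule ("neurons that fire together wire
# together"): a weight grows only when its pre- and post-synaptic
# units are active together. All names and values are illustrative.

def hebbian_update(weights, pre, post, lr=0.01):
    """w[i][j] is strengthened when post unit i and pre unit j co-fire."""
    return [[w_ij + lr * post[i] * pre[j]
             for j, w_ij in enumerate(row)]
            for i, row in enumerate(weights)]

pre = [1.0, 0.0, 1.0]              # pre-synaptic activity (e.g. one sense)
post = [0.0, 1.0]                  # post-synaptic activity (e.g. cortex)
w = [[0.0] * 3 for _ in range(2)]  # all connections start at zero
w = hebbian_update(w, pre, post)
# only connections between co-active units grew: w[1][0] and w[1][2]
```

Because only co-active pairs are reinforced, inputs that repeatedly arrive with the same timestamp end up wired into one unit, which is the intuition behind the "cell assemblies" above.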

The thalamus is functionally connected to the hippocampus, where these assemblies further signal the creation of memories, particularly spatial memory and spatial sensory data, crucial for human episodic memory. There is support for the hypothesis that the connections between thalamic regions and particular parts of the mesio-temporal lobe differentiate the functioning of recollective and familiarity memory. In other words, when a particular signal is detected (familiarity), it is compared with past memories (recollection) to identify the object or event through similarities and differences. This is the brain's actual learning process.
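The compare-with-past-memories step above can be illustrated with a simple similarity search. This is a hedged sketch, not a model of the hippocampus: the cosine measure, the function names, and the toy patterns are my own assumptions, chosen only to show "identify through similarities and differences" in code.

```python
import math

# Illustrative sketch: a detected signal (familiarity) is compared
# against stored memories (recollection) and matched to the most
# similar one. Cosine similarity is an assumed stand-in measure.

def most_similar(signal, memories):
    """Return the index of the stored memory closest to the signal."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    sims = [cosine(signal, m) for m in memories]
    return sims.index(max(sims))

memories = [[1.0, 0.0], [0.0, 1.0]]       # previously stored patterns
idx = most_similar([0.9, 0.1], memories)  # new signal resembles memory 0
```

The differences between the signal and its best match (here, the residual between [0.9, 0.1] and [1.0, 0.0]) are what would drive further learning.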

The output signals are propagated to the prefrontal cortex, the part of the brain responsible for planning and decision-making. The outputs of the prefrontal cortex are further passed to the primary motor cortex to plan and execute movements.

Note: I have included the brain processes related to high-level information processing and excluded parts such as the amygdala, hypothalamus, and others that influence mood, reward, or hormonal regulation, as these parameters do not necessarily contribute to logical intelligence. Emotional outputs are important for the human body to generate energy through hormonal discharge, but they are not important in our endeavor to generate human-like intelligence (artificial intelligence). Since these processes add unwanted biases, we would be better off without emotional states, leaving no room for self-importance (ego).


  • The brain employs a centralized area to tag every sensory parameter, demonstrating that all cortices (silos) are connected.
  • It uses linear input assemblies to learn through similarities and differences via excitation/inhibition feedback.
  • Using this feedback, the brain exhibits decision-making, planning, and prediction.

Inspired by the simplicity of the human brain, the Responsible Machines platform is designed to learn and exhibit intelligence just as the brain does. The platform allows the user to plug all sensors into a single platform so data can be tagged and auto-assembled using Hebb's logic. Using these auto-assemblies (Strings), the machine can self-learn and exhibit the functions of the prefrontal cortex (decision-making, planning, and prediction).
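The timestamp-based auto-assembly described for the platform can be sketched as follows. This is purely a hypothetical illustration of the idea, not the platform's actual API: the `assemble` function, the sensor names, and the tuple format are all assumptions on my part.

```python
from collections import defaultdict

# Hypothetical sketch of timestamp-based auto-assembly: readings from
# several sensors that share a timestamp are grouped into one "String"
# (one processing unit), echoing Hebb's logic. Names are illustrative.

def assemble(readings):
    """Group (timestamp, sensor, value) tuples by timestamp."""
    strings = defaultdict(dict)
    for ts, sensor, value in readings:
        strings[ts][sensor] = value
    return dict(strings)

readings = [
    (0, "camera", "red_ball"),
    (0, "microphone", "bounce"),
    (1, "camera", "red_ball"),
]
strings = assemble(readings)
# strings[0] binds the camera and microphone inputs into one assembly
```

Each resulting group plays the role of a cell assembly: downstream learning would operate on whole Strings rather than on isolated sensor channels.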

Click here to read how linear assemblies can be used to learn, plan, predict, and make decisions.

Such an AI platform can be implemented as the brain of a machine (humanoids, cars, computers), where the machine auto-learns by collecting data from its sensors. Just like humans, such machines would be easy to teach specific tasks, or they could learn through observation. They could be trained or controlled by any human, young or old, without having to learn ML/AI programming, creating an opportunity for every individual to control AI machines and putting an end to the fear of unpredictability.


