Digital Immortality

This article continues the digital immortality post published here.

To summarize the previous post, we stated that “it might be possible to achieve longevity, irrespective of the target machine we choose to live in, provided that we learn how to extract data holistically from the source machine (the brain) and develop a package to restore that data seamlessly to the new entity to maintain continuity.”

In this article, we put together some assumptions (built by connecting scientific facts) about what would be required to extract data from the internalized memories within the brain.

Overview

Digital immortality (or “virtual immortality”) is the hypothetical concept of storing (or transferring) a person’s personality in a more durable medium, e.g., a computer, and allowing it to communicate with people in the future. The result might look like an avatar behaving, reacting, and thinking like the person on the basis of that person’s digital archive.[1][2][3][4] After the death of the individual, this avatar could remain static or continue to learn and develop autonomously.

This can be categorized as immortality using external experiences. The figure below shows how we could collect all of the experiences of an individual (including their images, videos, posts by timeline, conversations and chats, audio files, etc.) and convert them into inputs so that the virtual machine could learn and memorize patterns for contextual communication.

Because such an approach might not scale sufficiently, we may require an avatar that also stores memories; these memories might not be truly connected, owing to the disconnected data sets they come from. However, one can make a manual effort to connect the extracted data and allow the avatar to learn from these new connections. By providing a self-learning platform, we can also allow the avatar to enhance its intelligence with new data it acquires, using the extracted memories as base aggregated memories.

This version will have limitations. For example, we will not be able to take a 30-second video of me with my family and/or friends and extract a true story out of it. One could, however, help the avatar identify family members, mimic my conversation style, and recall some of the events, giving family members the experience of interacting with me. The self-learning part of the avatar might ask questions or extract key points from conversations and build new memories, enabling the avatar to follow up with contextual conversation with my family members and achieve a life-like presence.

However, providing the avatar with our real memories would make this machine a more realistic replica of ourselves. Theoretically, if my brain data could be recorded and added to a virtual machine without losing any relationships, the machine would have a greater chance of exhibiting thoughts that emulate the way I think. With this foundation, newly acquired data could be seamlessly integrated into the learning process to produce new memories.


The question that captures our imagination is whether there could be a self within such an avatar (rather than it merely behaving like a virtual machine processing data). We understand that there will be a sense of self of some kind, since one is mathematically necessary to construct a 3D vision of the thoughts, but it might not carry the same weights of importance for the self and other high-affinity relationships. That level of human bias could be introduced as a layer, but the question is: would that be necessary?

To implement extraction of internalized memories, we need an understanding of the following:

  1. How might the brain store and process data?
  2. How do we extract the data?
  3. How do we restore the data?

How might the brain store and process data?

What do we know from neuroscience?

Mapping more than 300 components in the brain, across the hindbrain, midbrain, and forebrain, we connected the information pertaining to the functions of these parts to understand how inputs get processed into outputs (decisions/actions), and how the parts might fire when a stimulus is triggered.

A photoreceptor cell is a specialized type of cell found in the retina that is capable of visual phototransduction. The photopigment proteins in these cells convert light into signals, triggering a change in the cell membrane potential. The different kinds of photoreceptor cells (rods, cones) are specialized in picking up different parameters such as shape, depth, and color. These parameters are passed on to the lateral geniculate nucleus, which receives individual packets of these parameters from designated ganglion cells. The lateral geniculate nucleus is a relay center in the thalamus for the visual pathway.

The thalamus has multiple functions and is generally believed to act as a relay station between different subcortical areas and the cerebral cortex. Every sensory system includes a thalamic nucleus that receives sensory signals and sends them to the associated primary cortical area: the lateral geniculate nucleus picks up inputs from the retina and moves them to the visual cortex, the medial geniculate nucleus picks up audio inputs and moves them to the primary auditory cortex, and the ventral posterior nucleus sends touch and proprioceptive information to the primary somatosensory cortex. Notice that all inputs received by the thalamus are hashed and moved to their respective cortices for storage. The incoming inputs are unified based on timestamp (neurons that fire together wire together). This creates an assembly of all incoming inputs as one processing unit, which Hebb referred to as “cell-assemblies” (Hebb’s law).
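
As a toy illustration of this relay-and-bind idea, here is a minimal Python sketch, assuming a simple routing table and a fixed binding window; the names, the window size, and the packet format are our own illustrative choices, not an established model.

```python
from collections import defaultdict

# Hypothetical routing table (our assumption, mirroring the relay roles
# described above): modality -> (thalamic nucleus, target cortex).
THALAMIC_RELAYS = {
    "vision": ("lateral geniculate nucleus", "visual cortex"),
    "audio": ("medial geniculate nucleus", "primary auditory cortex"),
    "touch": ("ventral posterior nucleus", "primary somatosensory cortex"),
}

def bind_by_timestamp(packets, window=0.1):
    """Group sensory packets arriving in the same time window into one
    assembly ("neurons that fire together wire together")."""
    assemblies = defaultdict(list)
    for modality, timestamp, value in packets:
        _nucleus, cortex = THALAMIC_RELAYS[modality]   # relay step
        bucket = int(timestamp // window)              # coarse common time marker
        assemblies[bucket].append((cortex, value))
    return dict(assemblies)

packets = [
    ("vision", 0.02, "red sphere"),
    ("audio", 0.04, "bounce"),
    ("touch", 0.05, "smooth"),
    ("vision", 0.95, "blue cube"),
]
print(bind_by_timestamp(packets))
# The three packets near t=0 fuse into one assembly; the later one stands alone.
```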

The thalamus is functionally connected to the hippocampus, where these assemblies further signal the creation of memories with respect to spatial memory and spatial sensory data, crucial for human episodic memory. There is support for the hypothesis that the connection of thalamic regions to particular parts of the mesio-temporal lobe differentiates the functioning of recollective and familiarity memory.

This can be described as follows: when a particular signal is detected (familiarity), it is compared with past memories (recollection) to identify the object or event through similarities and differences. This would be the actual learning process of the brain.

The output signals are propagated to the prefrontal cortex, the part of the brain responsible for planning and decision-making. The outputs of the prefrontal cortex are further passed to the primary motor cortex to plan and execute movements.

Inference

  • The brain employs a centralized area to tag every sensory parameter, demonstrating that all cortices (silos) are connected.
  • It uses a linear input assembly to learn through similarities and differences, via excitation/inhibition feedback.
  • Using this feedback, the brain exhibits traits of decision-making, planning, and prediction.

Hypothetical working of the brain
The formation of memories may rest on a simple technique.

As the diagram below shows, each sensory input forms a group based on a common time marker. This is also what we learn from Hebb’s postulate: neurons that fire together wire together. When they wire together, they form an assembly (a Hebbian engram).

These assemblies could form hierarchies based on the inputs received from the sensors. For example, the inputs received from the eye form an assembly creating a composite of visual information (based on the same timestamp). This in turn groups with other sensory assemblies from the ears or nose, creating a larger assembly. That is, the eye inputs shape, color, and depth, creating a visual assembly. When we hear a sound or smell an aroma at the same time, we create a relationship between the visual assembly and the audio or aromatic assembly using the same timestamp. This assembly of assemblies forms an object layer. The next level of assembly is created between objects available in the same space and time, creating even larger assemblies called memories.
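
To make the hierarchy concrete, here is a minimal sketch assuming a simple nested representation (unit inputs, then sensory, object, and memory assemblies); the level names come from the text above, everything else is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Assembly:
    """One node in the hypothetical hierarchy: unit, sensory, object, or memory."""
    level: str
    timestamp: float
    parts: tuple = field(default_factory=tuple)

def group(level, timestamp, *parts):
    # Assemblies only bind when they share the same time marker.
    assert all(p.timestamp == timestamp for p in parts), "timestamps must match"
    return Assembly(level, timestamp, parts)

# Unit inputs from the eye at t=1.0 ...
shape = Assembly("unit", 1.0, ("shape", "round"))
color = Assembly("unit", 1.0, ("color", "red"))
depth = Assembly("unit", 1.0, ("depth", "near"))
visual = group("sensory", 1.0, shape, color, depth)

# ... bound with a simultaneous sound into an object, then a memory.
sound = group("sensory", 1.0, Assembly("unit", 1.0, ("audio", "bounce")))
ball = group("object", 1.0, visual, sound)
memory = group("memory", 1.0, ball)
print(memory.level, len(memory.parts))  # memory 1
```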

These assemblies are used to exhibit intelligence by simply comparing and extracting similarities and differences.

In the diagram below:

(a) When assemblies are compared, they group together based on similar tokens (input units), creating a category. During the match, the overlay highlights the similarities and differences between the two assemblies. For every unique difference in an assembly, a new neuron can be generated to hash that unique difference; in the case of an exact match between the two compared strings, no unique hash is created. For every similar attribute within the unit assembly, the brain groups the similar attributes under a marker (a category) named after the attribute. These categories are created at all levels of the string, ranging across the unit, sensory, object, and memory assemblies. This allows the brain to classify at every stage and maintain clusters of similar attributes.
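
A minimal sketch of this compare-and-categorize step, treating assemblies as token sets (our simplification; the “strings” above suggest ordered sequences, but sets keep the idea visible):

```python
def compare(a, b):
    """Overlay two assemblies: shared tokens become a category,
    unique tokens each get a new 'neuron' (here, a fresh hash)."""
    a, b = set(a), set(b)
    category = a & b                                  # similarities -> category marker
    new_neurons = {tok: hash(tok) for tok in a ^ b}   # differences -> new hashes
    return category, new_neurons

apple = {"round", "red", "sweet"}
cherry = {"round", "red", "small"}
category, new_neurons = compare(apple, cherry)
print(category)      # {'round', 'red'} -> e.g. a "red round things" cluster
print(new_neurons)   # unique tokens 'sweet' and 'small' get their own nodes
# An exact match (compare(apple, apple)) creates no new hashes at all.
```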

(b) When compared, the assemblies also allow for predictions from available past assemblies. If the brain is engaged in a prediction, it invokes a response from all related (connected) assemblies and selects the assembly with the highest potential. This allows the brain to predict scenarios based on past learning and quickly come up with a plan, or the steps needed to bring about the predicted occurrence, in an expedient manner.
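
As a sketch, prediction could be approximated as recalling the past assembly that overlaps most with the current partial input; the scoring rule below (a simple overlap count standing in for “potential”) is our assumption:

```python
def predict(current, past_assemblies):
    """Return the stored assembly with the highest 'potential',
    scored here as token overlap with the current partial input."""
    def potential(assembly):
        return len(current & assembly)
    return max(past_assemblies, key=potential)

past = [
    {"dark clouds", "wind", "rain"},
    {"dark clouds", "heat", "dust storm"},
    {"clear sky", "sun", "warmth"},
]
now = {"dark clouds", "wind"}
print(predict(now, past))  # -> the rain assembly; 'rain' is the prediction
```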

(c) Using the same assemblies, the brain can also plan the next best action based on past patterns, selecting the shortest path to a response. This may depend on the speed of response required and the time available to fire it: the brain might access just the limbic system for macro weights if the time is too short, or take time to analyze if enough time is available.
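
One way to sketch this fast/slow split is a hypothetical two-path selector (not a model of the limbic system; the threshold and lookup table are arbitrary illustrative choices):

```python
def respond(stimulus, time_budget, macro_weights, analyze):
    """Fast path: cached macro-weight lookup. Slow path: full analysis.
    The 0.5s cutoff is an arbitrary illustrative threshold."""
    if time_budget < 0.5:
        return macro_weights.get(stimulus, "freeze")   # quick reflex-like answer
    return analyze(stimulus)                           # deliberate, costly path

macro = {"snake": "jump back", "ball": "catch"}
print(respond("snake", 0.1, macro, analyze=lambda s: f"inspect {s}"))  # jump back
print(respond("snake", 3.0, macro, analyze=lambda s: f"inspect {s}"))  # inspect snake
```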

(d) Owing to conflicts in unit weights or micro assemblies, the drive to explore (or reason) continues until resolution occurs. This can be explained as a set of assemblies that still show variance in specific areas even after a certain threshold. These amount to unconfirmed patterns (as in the picture) and are subject to reasoning. This might take place in the neocortex, so extra time is required to perform these actions. When the conflict is resolved, the pattern moves to a confirmed state. Even confirmed patterns can lose their confirmation, for example when a new string matches only 99% of the existing string during comparison.

The relationships strengthen with the repetitive firing of sensory inputs. Neuronal plasticity offers a comparison here: owing to the high occurrence of the same inputs, a higher responsiveness to similar inputs drives a faster response.
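
This strengthening can be sketched as a Hebbian-style weight update; the learning rate and cap below are arbitrary illustrative values:

```python
def strengthen(weights, fired_pair, rate=0.1, cap=1.0):
    """Hebbian-style update: every co-firing of a pair nudges its
    connection weight up, so repeated inputs respond ever faster."""
    w = weights.get(fired_pair, 0.0)
    weights[fired_pair] = min(cap, w + rate * (cap - w))
    return weights

weights = {}
for _ in range(5):                       # the same pair fires five times
    strengthen(weights, ("bell", "food"))
print(weights[("bell", "food")])         # ~0.41: stronger with each repetition
```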

This entire explanation of how the brain works gives an idea of how data is stored, which in turn suggests where data might be stored in the brain, our first step towards planning the extraction.

Extraction

Now that we understand the data points within the brain, our next task is to extract the data from the neurons along with their relationships (neural network).

Here, data extraction would mean extracting information from neurons by excitation to copy the inputs and the relationship patterns.

From research as well as the above theory, we can reckon that brain data is stored at different locations (raw input, aggregated nodes), and that the data can be identified by the particular cell's function, which is either an extraction or an aggregation function.

We could probably detect and further confirm the role of the cells by observing how an incoming stimulated pattern triggers the different parts of the brain at various time gaps, to understand which might be the pivoting node and, from there, the entirety of its relationships.
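
As a toy sketch of that detection step, suppose we could log which regions fire and how long after a stimulus (the log below is fabricated purely for illustration): the earliest consistently firing region would be the candidate pivot.

```python
def find_pivot(firing_log):
    """Given (region, latency_ms) observations after a repeated stimulus,
    treat the earliest consistently firing region as the pivoting node."""
    earliest = {}
    for region, latency in firing_log:
        earliest[region] = min(latency, earliest.get(region, float("inf")))
    return min(earliest, key=earliest.get)

# Illustrative log of two trials of the same stimulus.
log = [
    ("thalamus", 12), ("visual cortex", 35), ("hippocampus", 60),
    ("thalamus", 11), ("visual cortex", 37), ("hippocampus", 58),
]
print(find_pivot(log))  # thalamus: fires first, so we trace relationships from it
```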

Hence, it would be important to understand the computing zone and the necessary data keys that need to be extracted to make sense of the data.

As concerns our understanding of the data itself, the data, which can be assumed to be the weighted response to a particular input, may be expressed within the synaptic connections. Theoretically, one task of the neuron is to trigger the necessary response when a matching stimulus occurs. This information is replicated during cell division, allowing the relevant cell to hold on to the information, which may explain the longevity of memory.

It may be possible to study the raw input cells as well as the centralized hashing cells, probably by performing functional analysis of the 300+ components of the brain with existing methods. We could resort to BCIs (brain–computer interfaces), but since their electrodes tap into motor responses, we can only record or replicate outputs with this technique. There could be a method for reconstructing inputs from the motor responses, too.

Correctly replicating the data would be the most difficult of the three steps because the process involves reproducing absolute relationships between nodes (in order to produce error-free data migration or data restoration within the avatar).

Restoration

Once we have the data in a structured format, restoring it into the virtual machine is not very difficult. The virtual machine will simply mimic the brain to process information. Having exactly the same network model will allow the data to be migrated in an organized fashion, and a matching-and-ranking technique will allow the machine to mimic the detection, reasoning, and comprehension functions of the brain.

Data extracted in formats such as FASTA can help maintain sequential patterns or assemblies. It would be far easier to use this string technique for machine learning in avatars, enabling their self-learning.
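
As an illustration of the string idea, here is a hypothetical FASTA-style serialization of assemblies (FASTA is a plain-text sequence format from bioinformatics; the record layout below, one header line plus one token sequence per assembly, is our own adaptation):

```python
def to_fasta(assemblies):
    """Serialize assemblies as FASTA-like records:
    a '>' header with id/level/timestamp, then the token sequence."""
    lines = []
    for aid, (level, ts, tokens) in assemblies.items():
        lines.append(f">{aid} level={level} t={ts}")
        lines.append("|".join(tokens))
    return "\n".join(lines)

assemblies = {
    "A1": ("sensory", 1.0, ["shape:round", "color:red", "depth:near"]),
    "A2": ("object", 1.0, ["A1", "audio:bounce"]),   # refers to A1, no duplication
}
print(to_fasta(assemblies))
# >A1 level=sensory t=1.0
# shape:round|color:red|depth:near
# >A2 level=object t=1.0
# A1|audio:bounce
```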

Hypothetically, the data model suggested below could allow the target machine to perform exactly the process exhibited by the brain (the process explained above).

The above model is a unified model with no redundancy. It can be observed that the brain stores data with zero duplicates, as it would be inefficient to look for the same information in two different places. The model selects all unique inputs from the sensory organs and creates a hierarchical model with hashing at different levels for quick, efficient data retrieval.
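
A minimal sketch of such a zero-duplicate, hierarchically hashed store (content addressing is our assumed mechanism; the text only specifies uniqueness and level-wise hashing):

```python
import hashlib

class UnifiedStore:
    """Content-addressed store: identical inputs always map to the same
    node, so nothing is ever stored twice."""
    def __init__(self):
        self.nodes = {}

    def put(self, *children):
        key = hashlib.sha256("|".join(map(str, children)).encode()).hexdigest()[:8]
        self.nodes.setdefault(key, children)   # no-op if the node already exists
        return key

store = UnifiedStore()
red1 = store.put("color:red")
red2 = store.put("color:red")          # same hash: no duplicate node created
obj = store.put(red1, store.put("shape:round"))   # higher level references hashes
print(red1 == red2, len(store.nodes))  # True 3
```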

We would like feedback from readers, as this article is an attempt to connect the dots on human brain functions. Views and suggestions will help eliminate gaps in our understanding of brain processes. Once we have an adequate understanding, it will be possible to move a digital immortality project to fruition.
