This is a continuation of the digital immortality post published here.
Summarizing the previous post, we stated that "it might be possible to achieve longevity, irrespective of the target machine we choose to live in, provided that we learn how to extract data holistically from the source machine (brain) and develop a package to restore data seamlessly to the new entity to maintain continuity."
In this article, we try to put together some assumptions (based on connecting scientific facts) that are required to extract data from internalized memories within the brain.
Although there has been good progress in AI development, a fundamentally different approach may be necessary to achieve true artificial general intelligence (AGI). This is apparent, given that current approaches do not take best advantage of data organization (the logical model) and instead rely on heuristic techniques when attempting to make machines behave like humans. AI researchers require a good understanding of the neural signaling pathways of the human brain; without it, the only option is a trial-and-error approach to achieving AGI.
Understanding how the human brain processes data in order to manifest intelligence
Let’s understand how the brain processes sensory data to make sense of it.
A photoreceptor cell is a specialized type of cell found in the retina that is capable of visual phototransduction. A protein in these retinal cells converts light to signals, triggering a change in cell membrane potential. The various retinal photoreceptor cells (most predominantly rods and cones) help distinguish different aspects of form, such as shape, depth, and color. The designated ganglion cells pass on these individual packets to the Lateral Geniculate Nucleus in the thalamus (located under the brain's cerebral cortex). As the receiver of the major sensory input from the retina, the Lateral Geniculate Nucleus serves as a relay center for the visual pathway.
The thalamus has multiple functions and is generally believed to act as a relay station that transmits information between different subcortical areas and the cerebral cortex. Every sensory system includes a thalamic nucleus that receives sensory signals and sends them to the associated primary cortical area. The Lateral Geniculate Nucleus picks up inputs from the retina and moves them to the visual cortex, the medial geniculate nucleus picks up audio inputs and moves them to the primary auditory cortex, and the ventral posterior nucleus sends touch and proprioceptive information to the primary somatosensory cortex. You can observe that all inputs received by the thalamus are hashed and moved to the respective cortices for storage. The incoming inputs are unified based on the time at which the firing takes place (neurons that fire together wire together). This creates an assembly of all incoming inputs as one processing unit, which Hebb's law refers to as "cell-assemblies."
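The "fire together" grouping above can be sketched as simple time-window bucketing of sensor events. The window size, event format, and function name here are assumptions made for illustration, not biological or platform constants.

```python
from collections import defaultdict

def build_assemblies(events, window_ms=50):
    """Group sensory events that fire within the same time window.

    `events` is a list of (timestamp_ms, modality, value) tuples; the
    50 ms window is a hypothetical parameter, not a measured constant.
    """
    assemblies = defaultdict(list)
    for timestamp, modality, value in events:
        bucket = timestamp // window_ms   # events in one bucket "fired together"
        assemblies[bucket].append((modality, value))
    return list(assemblies.values())

events = [
    (1001, "visual", "red"),
    (1012, "audio", "bark"),
    (1950, "visual", "green"),
]
groups = build_assemblies(events)
# the first two events share a 50 ms bucket and form one assembly
```

In this toy model, each bucket plays the role of one cell-assembly: every input that co-occurs in time is treated as one processing unit.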
The thalamus is functionally connected to the hippocampus, where these assemblies further signal the creation of memories with respect to spatial memory and spatial sensory data, crucial for human episodic memory. There is support for the hypothesis that the connection of thalamic regions to particular parts of the mesio-temporal lobe provides for the differentiation of recollective and familiarity memory. This can be described as follows: when a particular signal is detected (familiarity), it is compared with stored memories (recollection) to identify the object/event through the detected similarities and differences. This may account for an actual learning process of the brain.
The output signals are propagated to the prefrontal cortex, the part of the brain responsible for planning and decision-making. The outputs of the prefrontal cortex are further passed to the primary motor cortex to plan and execute movements.
Note: I have included the processes of the brain related to processing information at a high level and excluded other parts of the brain, such as the amygdala and hypothalamus, that influence mood, reward, or hormonal regulation, as these parameters do not necessarily contribute to logical intelligence. The emotional outputs are important for the human body to generate energy through hormonal discharge, which is not important in our endeavor to generate human-like intelligence (artificial intelligence). As these processes add unwanted biases, we would be better off without the emotional states, leaving no room for self-importance (ego).
The brain employs a centralized area to tag every sensory parameter, demonstrating that all cortices (silos) are connected.
The brain uses a linear input assembly to learn through similarities and differences, and through excitation and/or inhibition feedback.
Using this feedback, the brain exhibits traits of decision-making, planning, and predictions.
Inspired by the workings of the human brain, the Responsible Machines platform is designed to learn and exhibit intelligence just like the human brain. The platform allows the user to plug all sensors into a single platform so data can be tagged and auto-assembled using Hebb's logic. Using this method of auto-assembly (called 'strings'), the machine will self-learn and exhibit features of the brain's prefrontal cortex (allowing for decision-making, planning, and predictions).
Click here to read about how linear assemblies can be used to learn, plan, predict and make decisions.
This AGI platform can be implemented as the brain of various machines (robots, cars, computers, etc.), wherein the machines auto-learn by collecting and processing data from sensors. Just as with humans, it would be easy to teach them specific tasks or allow them to learn through observation. Such machines can be trained and/or controlled without the users having to learn ML/AI programming, and this creates an opportunity for everyone to benefit from AGI-driven machines without concern for them behaving haphazardly.
In order to create ‘Strong AI’, we need to look no further than the cognitive processes of the human brain. We will see that processes involving anticipation, prediction, reasoning and abstraction are merely a combination of processes; and these can be mimicked by the machine, in order to behave just like a human.
However, today’s AI experts are faced with two formidable obstacles, as they strive to create an intelligent machine. These are:
Extremely complex building blocks for AI machines
Constant supervision and inputs required to ‘guide’ the learning process
To create true ‘Strong AI’, one needs to begin with simple building blocks that come together to form increasingly more complex structures. And, the learning process needs to be autonomous, in order to reduce complexity and time to derive intelligence.
This article explains how a self-learning machine can exhibit autonomous classification, pattern detection or output prediction, using a simple data organization technique. The data is organized as sequences forming patterns, which can be readily consumed to compute and exhibit artificial intelligence in real-time.
The pattern matching technique can be described as the act of checking a given sequence of tokens for the presence of the constituents of some pattern. Sequence patterns are often described using regular expressions and matched using techniques such as backtracking. By far the most common form of pattern matching involves strings of characters. In many programming languages, a particular syntax of strings is used to represent regular expressions, which are patterns that describe sets of strings. String versions of self-organizing maps and learning vector quantization (LVQ) have already been implemented in the context of speech recognition.
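As a minimal illustration of the regex-based sequence matching described above (the pattern itself is a made-up example, not one from the platform):

```python
import re

# A regular expression describes a set of strings; matching checks a
# token sequence against that pattern (the engine may backtrack).
pattern = re.compile(r"(ab)+c")   # one or more "ab" tokens followed by "c"

assert pattern.fullmatch("ababc") is not None   # matches the full sequence
assert pattern.fullmatch("abab") is None        # missing the trailing "c"
```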
Here we illustrate how using the natural organization of input data can form strings (sequences), and how this generic organization can exhibit classification, feature selection, and intelligence using the patterns available in these string sequences.
Creation of String Complex
Consider a brand new machine in which all the sensors (data collection units) are integrated into a centralized platform (just like the human brain), but which has yet to capture data (a blank slate). As it starts recording inputs, it should begin to organize data and exhibit intelligence, just like humans. Any input captured by the sensors has two attributes: parameter label and value. Using these attributes across many inputs, the machine has to self-organize in order to exhibit intelligence. The value attached to each label could be either dynamic or static. The platform houses a rule for dynamic values wherein the observed values (min-max) create a range scale used to arrive at a threshold for that parameter.
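The dynamic-value rule above might be sketched as follows. The class name, the midpoint-threshold rule, and the sample readings are assumptions made for illustration, not confirmed platform details.

```python
class DynamicParameter:
    """Track a sensor parameter whose value range is learned on the fly."""

    def __init__(self, label):
        self.label = label
        self.min_value = None
        self.max_value = None

    def record(self, value):
        # widen the observed min-max range scale with each new reading
        self.min_value = value if self.min_value is None else min(self.min_value, value)
        self.max_value = value if self.max_value is None else max(self.max_value, value)

    @property
    def threshold(self):
        # assumption: the midpoint of the observed range serves as the threshold
        return (self.min_value + self.max_value) / 2

temperature = DynamicParameter("temperature")
for reading in (18.0, 31.0, 25.0):
    temperature.record(reading)
# observed range is [18.0, 31.0], so the midpoint threshold is 24.5
```

A static value would simply skip the range logic and keep a fixed threshold per label.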
Tree patterns for strings are represented as trees with root StringExpression and all the characters, in order, as children of the root. Thus, to match "any amount of trailing characters", a new wildcard is needed, in contrast to the wildcard that matches only a single character.
The labels of the parameters are unique, and any exact match of a parameter (string match) results in an overlay and filters out the data redundancy in the system.
The unit strings can be a sequence of characters that depicts a unit parameter and its associated weight. For example, a color sensor recording in RGB would input something like R.G.B; the machine could convert the incoming data to a hexadecimal string and store it as FF90F5. Likewise, a shape-extraction algorithm can input the XYZ parameters of objects, which are again stored as strings.
These unit strings created from various inputs are tagged together based on their timestamp. This allows the machine to group strings that fired (recorded) together to form a string complex.
You could say that a combination of unit strings creates a ‘String Complex’.
For example, individual unit parameters recorded for shape will carry information about a particular edge of an object. A set of individual strings would together carry the information of the shape of a particular object. For instance, the shape of a petal might give you individual information in one string, but many petals combine to form a flower. So, the string complex for the shape of the flower would look like [petal information][stamen information][receptacle information], and so on.
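A minimal sketch of unit-string creation and timestamp-based grouping, reusing the FF90F5 hex example from the text; the helper names and the petal/stamen labels are illustrative assumptions.

```python
def rgb_to_unit_string(r, g, b):
    """Encode an RGB reading as a hexadecimal unit string (e.g. FF90F5)."""
    return f"{r:02X}{g:02X}{b:02X}"

def form_string_complex(unit_strings_by_time, timestamp):
    """Tag together the unit strings recorded at the same timestamp."""
    return unit_strings_by_time.get(timestamp, [])

# unit strings from different sensors, keyed by their shared timestamp
unit_strings_by_time = {
    100: [rgb_to_unit_string(0xFF, 0x90, 0xF5), "petal-edge-A", "stamen-B"],
}
complex_at_100 = form_string_complex(unit_strings_by_time, 100)
# -> ['FF90F5', 'petal-edge-A', 'stamen-B']
```

Everything recorded at timestamp 100 becomes one string complex, mirroring the "fired together" grouping rule.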
With just this data, you can see that a simple network is being built. If we consider X, Y, and Z to be three input parameters, the relationship of parameter 'X' is established between the unit string and its corresponding weight. So, every time there is an exact match of string type and corresponding weight, the machine can detect a past instance and quickly correlate it to all the nodes linked to the unit string.
There could be many such parameters recorded by a single sensor in forming a string complex, and with many sensors, the system is full of varied string types. These string complexes are further grouped by the sensors that recorded them. As shown in the diagram, you might have composites from visual sensors, audio sensors, touch sensors, and so on.
The string complex is now sequenced with the unification of all sensory level string complexes. They are further grouped to form a cluster of a single object determined primarily by the string formed by shape parameters. The object sequence will contain complete information about the object.
For differentiation purposes, we can refer to it as the 'Object String'.
Now, many such object strings find a relationship between each other and merge to form a macro composite which we can call the ‘Memory String’.
The Memory String holds complete information about an event with information of every object present in the scenario, along with its behavior and relationships between objects.
To summarize so far, the strings form a hierarchy, which the platform automatically organizes in the tree structure shown below.
The strings formed can be compared to Hebb's second postulate, "Neurons that fire together, wire together," as they group together using the exact timestamp. The pattern structure, a linear juxtaposition of tags and degrees of weight (tag assemblies), can be compared to the Hebbian engrams (cell assemblies) of Hebb's third postulate.
Now that the data is auto-organized in this fashion, we can see how the machine self-learns and makes autonomous decisions. The machine will learn by detecting and matching these strings. The incoming memory string is decomposed to individual unit strings and compared against existing strings.
In the case of existing data, the value of each unit parameter is recomputed by aggregating the new value over the existing aggregated value to arrive at a new synthesized value. For every exact match, the strength of the relationship grows by 1. For every new value of an existing unit parameter, a new relationship is established if the value is unique. This detect-and-match technique helps the machine quickly identify the object or its behavior.
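The detect-and-match update described above might look like the following sketch, assuming a simple nested-dictionary store; the structure and function name are illustrative, not the platform's actual implementation.

```python
def detect_and_match(store, unit_label, value):
    """Update a unit parameter's store with a new observation.

    `store` maps a parameter label to {value: strength}.  An exact match
    grows the relationship strength by 1; an unseen value establishes a
    new relationship node.
    """
    values = store.setdefault(unit_label, {})
    if value in values:
        values[value] += 1      # exact match: strengthen the relationship
    else:
        values[value] = 1       # unique value: new relationship
    return values[value]

store = {}
detect_and_match(store, "color", "FF90F5")
detect_and_match(store, "color", "FF90F5")
strength = detect_and_match(store, "color", "00FF00")
# store["color"] is now {"FF90F5": 2, "00FF00": 1}
```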
During the match, the overlay highlights the similarities and the differences between two strings. For every unique difference of a string, it creates a new node and auto-labels the unique combination. In the case of an exact match between both strings that are compared, no unique label is created. However, for every similarity of an attribute within the unit string, the machine groups similar attributes to create a category in the name of the attribute. These categories are created at all levels of a string type ranging across the unit, sensory, object and memory strings. This allows the machine to classify at every stage and maintain clusters of similar attributes.
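A toy version of the overlay step, treating two string complexes as sets of attribute strings; the category and auto-label naming scheme is an assumption made for illustration.

```python
def overlay(existing, incoming):
    """Overlay two string complexes (modeled here as sets of attributes).

    Shared attributes are grouped into categories named after the
    attribute; each unique difference gets a new auto-labeled node.
    """
    similarities = existing & incoming
    differences = incoming - existing
    categories = {attr: f"category:{attr}" for attr in similarities}
    new_nodes = {attr: f"auto-label:{attr}" for attr in differences}
    return categories, new_nodes

existing = {"FF90F5", "petal-edge-A"}
incoming = {"FF90F5", "stamen-B"}
categories, new_nodes = overlay(existing, incoming)
# "FF90F5" is grouped into a category; "stamen-B" becomes a new labeled node
```

The same comparison could be applied at the unit, sensory, object, and memory levels, maintaining clusters of similar attributes at each stage.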
Using these strings, the machine can quickly pull out the desired event by selecting all strings with the string type that contains the desired parameter.
In case the machine wants to predict the occurrence of a particular event, it can select the string type that is created by past collection and come up with its prediction of the occurrence. This allows the machine to predict scenarios based on past learning and quickly come up with a plan to activate the next steps in order to achieve the occurrence in a minimum number of steps.
Based on continuous exact occurrences, the relationships between strings and string attributes carry more weight, finally reaching a state of confirmation (confirmed patterns). A threshold for confirmation has to be set so that the machine confirms a pattern only after numerous exact occurrences.
Strings that don’t encounter exact matches can be termed as unconfirmed patterns, wherein the machine continuously regresses on the pattern by either establishing more relationships or by appending weights through subsequent data interactions.
There might be a set of strings that show variance only in specific sectors even after a certain threshold. These count as unconfirmed patterns (as in Figure 4) and are pushed back for further regression before being committed as confirmed. Even confirmed patterns can lose their state of confirmation, for example when a new string matches only 99% of the existing string during comparison.
This job can be described as machine reasoning, as the machine explores all possible influencing attributes in order to find the most deterministic pattern. Along with checking the pattern of these strings, the machine also checks whether the weights match. If a string gets an exact 100% match but one of its unit strings differs in value, the machine puts it back into the unconfirmed state for further regression.
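The confirmation logic in the last few paragraphs might be sketched as a small state machine; the threshold of five exact occurrences is an assumed figure, and the mismatch rule (any difference in string or weight demotes the pattern) follows the text above.

```python
CONFIRMATION_THRESHOLD = 5   # assumed number of exact occurrences

class Pattern:
    """A string pattern that is confirmed only after repeated exact matches."""

    def __init__(self, string, weight):
        self.string = string
        self.weight = weight
        self.exact_matches = 0
        self.confirmed = False

    def observe(self, string, weight):
        if string == self.string and weight == self.weight:
            self.exact_matches += 1
            if self.exact_matches >= CONFIRMATION_THRESHOLD:
                self.confirmed = True
        else:
            # any mismatch, even a near-100% one, pushes the pattern
            # back to the unconfirmed state for further regression
            self.exact_matches = 0
            self.confirmed = False

p = Pattern("FF90F5", 3)
for _ in range(5):
    p.observe("FF90F5", 3)   # five exact occurrences confirm the pattern
p.observe("FF90F5", 4)       # a weight mismatch demotes it to unconfirmed
```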
Over a period of time, the machine will develop the capability to learn, understand similarities and differences, find answers to unique patterns and solve problems with greater intelligence. Using this technique, the machine can come up with decisions the same way humans do. And, when integrated with motor parts, the machine can perform actions autonomously in a real-life environment.
To sum up, the pattern matching and strings technique provide a holistic approach to creating a completely autonomous and highly-accurate machine: one that can learn on its own without any human intervention.
In early computing, an avatar was a graphical representation of the user or the user's alter ego or character: for example, an icon or figure representing a particular person in a video game or Internet forum. Now, within the scope of AI, avatars can be seen as virtual embodiments of humans, driven more by artificial intelligence than by real people.
Among the many ideas, the 2045 Avatar Initiative by Dmitry Itskov aims to create technologies enabling the transfer of an individual's personality to a more advanced non-biological carrier (avatar), extending life perhaps to the point of immortality.
Digital Immortality has long been discussed by Gordon Bell, and he describes it as storing (or transferring) a person’s personality to a more durable media, such as a computer, and allowing it to communicate with people of the future. The result might look like an avatar behaving, reacting, and thinking like a person based on a person’s digital archive. After the death of the individual, this avatar could have a static persona based on past data only (no new data gathered after the death of the person) or continue to learn and develop autonomously (by collecting new data and aggregating over past data).
According to Gordon Bell and Jim Gray from Microsoft Research, retaining every conversation that a person has ever heard is already realistic: it needs less than a terabyte of storage (for adequate quality). Martine Rothblatt envisions the creation of “Mindfiles,” which consists of a collection of data from all kinds of sources, including the photos we upload to Facebook, the discussions and opinions we share on forums or blogs, and other social media interactions that reflect our life experiences and our unique perspective.
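A rough back-of-envelope check of the storage estimate, assuming an 8 kbit/s speech codec, 16 waking hours a day, and an 80-year span; all three figures are assumptions for illustration, not values from Bell and Gray.

```python
# Sanity check of the "less than a terabyte" claim at a low speech bitrate.
bitrate_bits_per_s = 8_000              # assumed telephone-quality codec
seconds_per_year = 16 * 3600 * 365      # assumed 16 waking hours per day
years = 80

total_bytes = bitrate_bits_per_s / 8 * seconds_per_year * years
total_terabytes = total_bytes / 1e12
# roughly 1.7 TB of raw audio at this bitrate; heavier compression or
# speech-to-text transcription brings the total well under a terabyte
```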
To realize such an avatar, which can learn from files carrying the external experiences of a particular individual, we will need to provide it with mindware (an AI platform) and enable it to learn using an unsupervised learning model. By bringing together theories such as Kohonen's Self-Organizing Map, Adaptive Resonance Theory (ART), Hopfield networks, and hierarchical hidden Markov models (HHMMs), we believe we should be able to achieve an autonomous learning avatar that can learn from past data and exhibit the unique behavior of an individual's persona.
We believe the RM2 Unsupervised Learning model would be the ideal one to bring avatars into functional existence. Our unified platform can learn from files (text, images, videos, and audio) and build a timeline of memories. These memories, and new ones, accrue weights from extracted sentiments and moods, and these weights are updated with each new avatar conversation.
The unique platform can be used even when we have figured out memory extraction directly from the human brain. The memory extraction process, which would mean extracting episodic memory by time sequences, would be the best way to achieve a very human-like avatar. Data extracted needs to be converted and organized as per the structure proposed earlier in order to allow avatars to behave, think and perform like humans.
In computing, the term minimalism refers to the application of minimalist philosophies and principles in the design and use of hardware and software. Minimalism, in this sense, means designing systems that use the least hardware and software resources possible.
You could compare this with the functioning of the human brain, which exhibits intelligence using the least hardware (sensory organs) and least software (minimal inputs and minimal processing). The human brain demonstrates minimalism in order to rapidly store and synthesize information; which we recognize as rapid thinking and quick reactions.
Our storage and retrieval mechanisms, supported by seemingly automatic computational techniques that allow us to reason and derive answers (often very rapidly), also imply that the brain's output (intelligence) is derived with a high degree of energy optimization. A straightforward form of organic learning involves the process of mimicking, which can itself be reduced to sequencing and relationships. Forms of training can be reduced to a linear process that replicates action with collected data parameters. If the human brain had to rely on traditional machine-learning techniques to extract patterns for every learning exercise, its energy would drain and lead to malfunction.
Human data touch-points are limited, so the data entities are limited, and the entire knowledge structure is created from these attributes. The human brain structures knowledge based on data collected through its touch-points (eyes, ears, nose, tongue, and skin) by building relationships and applying weights for basic computations.
The brain employs the most optimized data architecture design with zero redundancy, which enables it to build myriad complex structures using minimum attributes. A hypothetical hierarchical relationship structure may be built within the brain in order to learn and process intelligence in real-time.
Based on the reasoning above, artificial intelligence can therefore be a direct replica of natural intelligence. The data model that goes into building an intelligent machine is crucial for rendering instant learning and response selection. Using sensory data (collected from sensors), it is vital to lay out a structure for incoming data so that it forms patterns that can be matched, weighted, and synthesized.
Using the timestamp of each data record, relationships are built between data based on the hierarchical design, allowing the machine to extract patterns on the fly. These patterns are converted into strings during comparison in the process flow. Using the strength of the node and its cumulative weight, the machine may prioritize which response to select. The diagram below illustrates how a pattern may become highlighted (based on the focus rule) and how it can detect possible associations that help the machine predict the possible outcomes of a repetitive path. The depth of repetition is used to achieve a confirmation state once it attains a certain threshold.
The simplicity demonstrated in selecting an outcome involves simply following the relationship trail. This method (Sequential Covering Rule Building) may assist in arriving at decisions instantaneously without having to parse through redundant computational steps.
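Following the relationship trail might be sketched as a greedy walk over a weighted graph; the graph, the node names, and the strongest-edge rule are illustrative assumptions rather than the platform's actual rule-building procedure.

```python
def follow_relationship_trail(graph, start):
    """Select an outcome by following the strongest relationship at each node.

    `graph` maps a node to {next_node: cumulative_weight}; at each step
    the edge with the highest cumulative weight wins.
    """
    trail = [start]
    node = start
    while node in graph and graph[node]:
        node = max(graph[node], key=graph[node].get)   # strongest edge wins
        if node in trail:        # guard against cycles in the relationships
            break
        trail.append(node)
    return trail

graph = {
    "see-ball": {"reach": 5, "ignore": 1},
    "reach": {"grasp": 4},
    "grasp": {},
}
trail = follow_relationship_trail(graph, "see-ball")
# -> ['see-ball', 'reach', 'grasp']
```

Because the selection is just a trail of lookups, no redundant computation is parsed at decision time, which is the point the paragraph above makes.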
To achieve such rapid processing, the data model for AI is required to be centered on the object node, which acts as the pivot between macro-clusters (frame, objects) and micro-clusters (shape, depth, color, etc). This entire data relationship of an object is available as a string. These strings are used to match with the incoming dataset, and the differences and similarities are used for auto-classification and auto-labeling.
Correct data relationships define the accuracy of intelligence. If the holistic relationship of a data entity is not computed in full, there is every possibility that the robot/machine will end up on an erroneous route, which we can see occurring even among naturally intelligent models.
Data relationships and the right weights are the two important aspects of accurately deducing correct responses. Without them, we could see artificially intelligent machines failing in their learning methods and ending in non-intelligence.
This post articulates, at a high level, how visual learning works within the RM2 Platform. The visual learning feature in RM2 is part of the unified architecture, where visual object detection and learning are combined to achieve real-time detection and behavior prediction in a given environment.
In order to accurately detect objects and learn from correct data associations, it is critical to extract unit data parameters for the development of a proper foundation for the establishment of a relationship between unique parameters. This approach will result in high accuracy for the identification of objects, or for learning object behavior.
The post provides an overview of how the RM2 Network employs unsupervised learning to process Natural Language using reference visual inputs along with the object label, just as humans do. We believe that in order to deliver effective machine-human communication, we need to integrate visual cues with language that will provide the ability to learn, reason, explain abstracts and understand the sentiment in a given conversation and help maintain context at all times.
In order to explain how the language processing works, we present an overview of the entire network and how unsupervised learning is conducted for autonomous learning
To begin, we recommend watching a brilliant video produced by DARPA (below), which cuts through all of the AI hype clutter to clearly articulate how demonstrated machine intelligence has evolved along with its shortfalls.
In summarizing AI capabilities, we may observe that perception and learning matured during the second wave of development, while the hard reasoning that showed progress in the first wave lost out due to its limitations. We may also observe that the learning feature can deliver erroneous outputs even after training on over a million samples, as in the case of a panda being mistaken for a gibbon.
A demonstration of Unsupervised AI might be when a robot can think and act responsibly within a given environment, akin to what humans would typically do. In order for machines to replicate human intelligence they require two critical elements, as do humans; time and data. A human exhibits intelligence by first collecting/absorbing data over a period of time. With the integration of data and time markers, any machine that can replicate cognitive processes may exhibit intelligence.
An actual unsupervised learning machine requires negligible human interference. In the same way that a human baby expands its intelligence through observation and guidance, an autonomous machine may evolve simply by observation. Guidance expedites the learning process; however, it may have further implications. To prepare a machine for unsupervised learning, you simply need to install an application that will faithfully collect data from an array of integrated sensors. These data are employed for learning and decision making, and the resulting decisions are coordinated back to various motor components without any human interference in the routine. This removes the need for the tech companies or teams of engineers that would otherwise be necessary to make the machine capable.
Unsupervised machine learning is the machine learning task of inferring a function to describe a hidden structure from unlabeled data. We could state that unsupervised learning is a feature of natural neural networks that is required to be replicated in a machine toward the achievement of autonomous unsupervised learning. In order to achieve the outputs of unsupervised learning, the platform designed by Responsible Machines highlights several key features that assist a robot in attaining its self-aware state.
Data Organization: In order to realize unsupervised intelligence, it is important to organize data that is conducive to instant pattern recognition. This may be compared to the Adaptive Resonance Theory (ART), developed by Stephen Grossberg and Gail Carpenter. The basic ART system typically consists of a comparison field, a recognition field that is comprised of neurons, a vigilance parameter (threshold of recognition), and a reset module. This platform organizes data in a manner that simplifies identification and comparison tasks.
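The ART comparison just described can be reduced to a toy vigilance test on binary vectors; the overlap measure and the default vigilance value of 0.8 are simplifications of Carpenter and Grossberg's full model, chosen for illustration.

```python
def art_match(input_vector, prototype, vigilance=0.8):
    """Simplified ART-style vigilance test on binary vectors.

    Resonance occurs when the overlap between the input and the stored
    prototype, relative to the input, meets the vigilance parameter
    (threshold of recognition); otherwise the comparison is reset.
    """
    overlap = sum(i and p for i, p in zip(input_vector, prototype))
    active = sum(input_vector)
    match_score = overlap / active if active else 1.0
    return match_score >= vigilance   # True -> resonance, False -> reset

stored = [1, 1, 0, 1]
assert art_match([1, 1, 0, 1], stored) is True                   # exact input resonates
assert art_match([1, 0, 1, 1], stored, vigilance=0.9) is False   # too different: reset
```

In a full ART network, a reset would trigger the search for (or creation of) another recognition category, which is the behavior the platform's comparison step mirrors.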