
Digital Immortality

This post continues the discussion from the digital immortality post published here.

Summarizing the previous post, we stated that “it might be possible to achieve longevity, irrespective of the target machine we chose to live in, provided that we learn how to extract data holistically from the source machine (brain) and develop a package to restore data seamlessly to the new entity to maintain continuity.”

In this article, we try to put together some assumptions (based on connecting scientific facts) that are required to extract data from internalized memories within the brain.


How far is AI from being intelligent like humans?

Although there has been good progress in AI development, a fundamentally different approach may be necessary to achieve true artificial general intelligence (AGI). This is apparent given that current approaches do not take full advantage of data organization (the logical model) and instead rely on heuristic techniques when attempting to make machines behave like humans. AI researchers require a good understanding of the neural signaling pathways of the human brain, without which the only option is a trial-and-error approach to achieving AGI.

Understanding how the human brain processes data in order to manifest intelligence

Let’s understand how the brain processes sensory data to make sense of it.

A photoreceptor cell is a specialized type of cell found in the retina that is capable of visual phototransduction. A protein in these retinal cells converts light into signals, triggering a change in cell membrane potential. The various retinal photoreceptor cells (most predominantly rods and cones) help distinguish different aspects of form, such as shape, depth, and color. The designated ganglion cells pass these individual packets on to the lateral geniculate nucleus in the thalamus (located under the brain’s cerebral cortex). As the receiver of the major sensory input from the retina, the lateral geniculate nucleus serves as a relay center for the visual pathway.

The thalamus has multiple functions and is generally believed to act as a relay station that transmits information between different subcortical areas and the cerebral cortex. Every sensory system includes a thalamic nucleus that receives sensory signals and sends them to the associated primary cortical area. The lateral geniculate nucleus relays inputs from the retina to the visual cortex, the medial geniculate nucleus relays auditory inputs to the primary auditory cortex, and the ventral posterior nucleus sends touch and proprioceptive information to the primary somatosensory cortex. You can observe that all inputs received by the thalamus are hashed and moved to the respective cortices for storage. The incoming inputs are unified based on the time the firing takes place (“neurons that fire together, wire together”). This creates an assembly of all incoming inputs as one processing unit, which Hebb’s law refers to as “cell-assemblies.”
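
To make the relay-and-assembly idea concrete, here is a minimal Python sketch of a toy model: sensor readings are routed to named storage silos (“cortices”), and readings that share a timestamp are grouped into one assembly. All names here (RELAY_MAP, relay, the channel labels) are illustrative assumptions, not part of any real system.

```python
from collections import defaultdict

# Hypothetical mapping of sensory channels to their "cortices" (storage silos),
# loosely mirroring the thalamic relay described above.
RELAY_MAP = {
    "vision": "visual_cortex",
    "audio": "auditory_cortex",
    "touch": "somatosensory_cortex",
}

def relay(readings):
    """Route each (channel, timestamp, value) reading to its cortex, and
    group readings sharing a timestamp into one cell assembly."""
    cortices = defaultdict(list)
    assemblies = defaultdict(list)
    for channel, timestamp, value in readings:
        cortices[RELAY_MAP[channel]].append(value)
        assemblies[timestamp].append((channel, value))
    return cortices, assemblies

readings = [("vision", 0, "red-circle"), ("audio", 0, "beep"), ("touch", 1, "smooth")]
cortices, assemblies = relay(readings)
print(assemblies[0])  # the two inputs that "fired together" at t=0
```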

The thalamus is functionally connected to the hippocampus, where these assemblies further signal the creation of memories with respect to spatial memory and spatial sensory data, crucial for human episodic memory. There is support for the hypothesis that the connection of thalamic regions to particular parts of the mesio-temporal lobe provides for the differentiation of recollective and familiarity memory. This can be described as follows: when a particular signal is detected (familiarity), it is compared with stored memories (recollection) to identify the object or event through the detected similarities and differences. This may account for the actual learning process of the brain.

The output signals are propagated to the prefrontal cortex, the part of the brain responsible for planning and decision-making. The outputs of the prefrontal cortex are further passed to the primary motor cortex to plan and execute movements.

Note: I have included the processes of the brain related to processing information at a high level and excluded other parts of the brain, such as the amygdala and hypothalamus, that influence mood, rewards, or hormonal regulation, as these parameters do not necessarily contribute to logical intelligence. The emotional outputs are important for the human body to generate energy through hormonal discharge, which is not important in our endeavor to generate human-like intelligence (artificial intelligence). As these processes add unwanted biases, we would be better off without the emotional states, allowing no room for self-importance (ego).

Summary

  • The brain employs a centralized area to tag every sensory parameter, demonstrating that all cortices (silos) are connected.
  • The brain uses a linear input assembly to learn through similarities and differences, and through excitation and/or inhibition feedback.
  • Using this feedback, the brain exhibits traits of decision-making, planning, and predictions.

Inspired by the workings of the human brain, the Responsible Machines platform is designed to learn and exhibit intelligence just like the human brain. The platform will allow the user to plug all sensors into a single platform so data can be tagged and auto-assembled using Hebb’s logic. Using this method of auto-assembly (called ‘strings’), the machine will self-learn and exhibit features of the brain’s prefrontal cortex (allowing for decision-making, planning, and predictions).

Click here to read about how linear assemblies can be used to learn, plan, predict and make decisions.

This AGI platform can be implemented as the brain of various machines (robots, cars, computers, etc.), wherein the machines auto-learn by collecting and processing data from sensors. Just as with humans, it would be easy to teach them to do specific tasks or allow them to learn through observation. Such machines can be trained and/or controlled without the users having to learn ML/AI programming, and this creates an opportunity for everyone to benefit from AGI-driven machines without concern for them behaving haphazardly.


Autonomous Learning Machine

To create ‘Strong AI’, we need to look no further than the cognitive processes of the human brain. We will see that processes involving anticipation, prediction, reasoning, and abstraction are merely combinations of simpler processes, and these can be mimicked by a machine in order to behave just like a human.

However, today’s AI experts are faced with two formidable obstacles, as they strive to create an intelligent machine. These are:

  • Extremely complex building blocks for AI machines
  • Constant supervision and inputs required to ‘guide’ the learning process

To create true ‘Strong AI’, one needs to begin with simple building blocks that come together to form increasingly complex structures. And the learning process needs to be autonomous, in order to reduce the complexity and time needed to derive intelligence.

This article explains how a self-learning machine can exhibit autonomous classification, pattern detection or output prediction, using a simple data organization technique. The data is organized as sequences forming patterns, which can be readily consumed to compute and exhibit artificial intelligence in real-time.

Pattern matching can be described as the act of checking a given sequence of tokens for the presence of the constituents of some pattern. Sequence patterns are often described using regular expressions and matched using techniques such as backtracking. By far the most common form of pattern matching involves strings of characters. In many programming languages, a particular syntax of strings is used to represent regular expressions, which are patterns describing sequences of characters. String versions of self-organizing maps and learning vector quantization (LVQ) have already been implemented in the context of speech recognition.
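
As a simple illustration (a sketch of my own, not taken from any specific implementation), the following Python snippet uses a regular expression to check tokens for an assumed RGB-reading pattern of the form used later in this article:

```python
import re

# A regular expression describing a pattern of characters, matched against
# candidate tokens (the regex engine backtracks internally as needed).
pattern = re.compile(r"R\[\d+\]\.G\[\d+\]\.B\[\d+\]")  # e.g. an RGB reading

for token in ["R[255].G[144].B[245]", "shape:petal-edge"]:
    print(token, "->", bool(pattern.fullmatch(token)))
```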

Here we illustrate how the natural organization of input data can form strings (sequences), and how this generic organization can exhibit classification, feature selection, and intelligence using the patterns available in these string sequences.

Creation of String Complex

Consider a brand-new machine wherein all the sensors (data collection units) are integrated into a centralized platform (just like the human brain), but which has yet to capture data (a blank slate). As it starts recording inputs, it should start to organize data and exhibit intelligence, just like humans.

Any input captured by the sensors has two attributes: a parameter label and a value. Using these attributes across many inputs, the machine has to self-organize in order to exhibit intelligence. The value attached to each label can be either dynamic or static. The platform houses a rule for dynamic values wherein the minimum and maximum values create a range scale used to arrive at a threshold for that parameter.
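
The post does not specify how the threshold is derived from the range scale, so the sketch below assumes a simple midpoint rule; dynamic_threshold is a hypothetical helper, not part of the platform.

```python
def dynamic_threshold(values):
    """One possible reading of the min-max rule: the observed minimum and
    maximum of a dynamic parameter define its range scale, and the midpoint
    serves as the threshold for that parameter (an assumption)."""
    lo, hi = min(values), max(values)
    return lo, hi, (lo + hi) / 2  # range scale plus a midpoint threshold

print(dynamic_threshold([12.0, 30.5, 18.2, 44.9]))  # (12.0, 44.9, 28.45)
```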

Tree patterns for strings are represented as trees with a root StringExpression and all the characters, in order, as children of the root. Thus, to match “any amount of trailing characters”, a new wildcard is needed, in contrast to a wildcard that would match only a single character.

The labels of the parameters are unique, and any exact match of a parameter (a string match) results in an overlay and filters out data redundancy in the system.

The unit strings can be a sequence of characters that depicts the unit parameter and its associated weight. For example, if a color sensor recording in RGB inputs something like R[255].G[144].B[245], the machine could store the incoming data as a string, or convert it to a hexadecimal string and store it as FF90F5. Likewise, a shape extraction algorithm can input the XYZ parameters of objects, which are again stored as a string.
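
A minimal sketch of that encoding, assuming standard two-digit hexadecimal formatting per channel:

```python
def rgb_to_unit_string(r, g, b):
    """Encode an RGB reading as a compact hexadecimal unit string."""
    return f"{r:02X}{g:02X}{b:02X}"

print(rgb_to_unit_string(255, 144, 245))  # FF90F5
```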

These unit strings created from various inputs are tagged together based on their timestamps. This allows the machine to group strings that fired (were recorded) together to form a string complex.

You could say that a combination of unit strings creates a ‘String Complex’.

For example, an individual unit parameter recorded for shape will carry information about a particular edge of an object. A set of individual strings together carries the information of the shape of a particular object. For instance, the shape of a petal might give you individual information in one string, but many petals combine to form a flower. So, the string complex for the shape of the flower would look like [petal information][stamen information][receptacle information], and so on.
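
Here is one possible Python sketch of the grouping step: unit strings tagged with the same timestamp are collected into a string complex. The example data and names are hypothetical.

```python
from collections import defaultdict

# Hypothetical unit strings tagged with the timestamp at which they "fired".
unit_strings = [
    (0, "shape:petal-edge"),
    (0, "shape:stamen"),
    (0, "color:FF90F5"),
    (1, "shape:leaf-edge"),
]

def build_string_complexes(units):
    """Group unit strings recorded at the same instant into a string complex."""
    complexes = defaultdict(list)
    for timestamp, unit in units:
        complexes[timestamp].append(unit)
    return dict(complexes)

print(build_string_complexes(unit_strings))
# {0: ['shape:petal-edge', 'shape:stamen', 'color:FF90F5'], 1: ['shape:leaf-edge']}
```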

With just this data, you can see that a simple network is being built. If we consider that X, Y, and Z are three input parameters, the relationship of parameter ‘X’ is established between the unit string and its corresponding weight. So, every time there is an exact string type with a corresponding weight, the machine can detect a past instance and quickly correlate it to all the nodes linked to that unit string.

There could be many such parameters recorded by a single sensor in forming a string complex, and with many sensors, the system is full of varied string types. These string complexes are further grouped by the sensors that recorded them. As shown in the diagram, you might have composites from visual sensors, audio sensors, touch sensors, and so on.

The string complexes are now sequenced by unifying all sensory-level string complexes. They are further grouped to form a cluster for a single object, determined primarily by the string formed by the shape parameters. The object sequence will contain complete information about the object.

For differentiation purposes, we can refer to it as the ‘Object String’.

Now, many such object strings find relationships with each other and merge to form a macro composite, which we can call the ‘Memory String’.

The Memory String holds complete information about an event, with information on every object present in the scenario, along with each object’s behavior and the relationships between objects.

To summarize so far, the string hierarchy can be represented as the tree structure

Unit Strings >> Sensory Strings >> Object Strings >> Memory Strings

The platform automatically organizes these data strings in the tree structure shown below.
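
As a purely illustrative sketch (the field names and values below are my assumptions, not the platform’s actual schema), the hierarchy could be pictured as a nested structure in Python:

```python
# A toy memory string: one event, containing object strings, which contain
# sensory-level string complexes built from unit strings.
memory_string = {
    "event": "flower_observed",
    "object_strings": [
        {
            "object": "flower",
            "sensory_strings": {
                "visual": ["shape:petal-edge", "color:FF90F5"],  # unit strings
                "touch": ["texture:smooth"],
            },
        }
    ],
}

print(memory_string["object_strings"][0]["object"])  # 'flower'
```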

The strings formed can be compared to Hebb’s second postulate, “neurons that fire together, wire together”, as they group together using the exact timestamp. The pattern structure, which is a linear juxtaposition of tags and degrees of weight (tag assemblies), can be compared to the Hebbian engrams (cell assemblies) stated in Hebb’s third postulate.

Now that the data is auto-organized in this fashion, we can see how the machine self-learns and makes autonomous decisions. The machine will learn by detecting and matching these strings. The incoming memory string is decomposed to individual unit strings and compared against existing strings.

In the case of existing data, the value of each unit parameter is computed by combining the new values with the existing aggregated value to arrive at a new synthesized value. For every exact match, the strength of the relationship grows by 1. For every new value of an existing unit parameter, a new relationship is established if the value is unique. This detect-and-match technique can quickly help the machine identify the object or its behavior.
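
A minimal sketch of the detect-and-match step, under the assumption that the “strength of the relationship” is a simple counter (the store layout and names are hypothetical):

```python
def detect_and_match(store, unit, value):
    """Exact matches strengthen the relationship by 1; a new value for a
    known parameter establishes a new relationship (node)."""
    node = store.setdefault(unit, {})
    if value in node:
        node[value] += 1   # exact match: relationship grows by 1
    else:
        node[value] = 1    # unique value: new relationship established
    return store

store = {}
for reading in [("color", "FF90F5"), ("color", "FF90F5"), ("color", "00FF00")]:
    detect_and_match(store, *reading)
print(store)  # {'color': {'FF90F5': 2, '00FF00': 1}}
```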

During the match, the overlay highlights the similarities and the differences between two strings. For every unique difference of a string, it creates a new node and auto-labels the unique combination. In the case of an exact match between both strings that are compared, no unique label is created. However, for every similarity of an attribute within the unit string, the machine groups similar attributes to create a category in the name of the attribute. These categories are created at all levels of a string type ranging across the unit, sensory, object and memory strings. This allows the machine to classify at every stage and maintain clusters of similar attributes.
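
The overlay itself might be sketched as a set comparison, with shared attributes feeding categories and unique differences becoming new auto-labelled nodes; this is an assumed reading of the mechanism, not a confirmed design.

```python
def overlay(existing, incoming):
    """Compare two string complexes: shared attributes become categories,
    unique differences become new nodes with auto-generated labels."""
    existing, incoming = set(existing), set(incoming)
    similarities = existing & incoming  # grouped into categories
    differences = incoming - existing   # each gets a new node/label
    return similarities, differences

sims, diffs = overlay(["shape:petal-edge", "color:FF90F5"],
                      ["shape:petal-edge", "color:00FF00"])
print(sims)   # {'shape:petal-edge'}
print(diffs)  # {'color:00FF00'}
```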

Using these strings, the machine can quickly pull out the desired event by selecting all strings with the string type that contains the desired parameter.

In case the machine wants to predict the occurrence of a particular event, it can select the string type created from past collections and come up with its prediction of the occurrence. This allows the machine to predict scenarios based on past learning and quickly come up with a plan to activate the next steps, in order to achieve the occurrence in a minimum number of steps.
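
The post leaves the prediction mechanism open; one simple reading is a frequency estimate over past sequences, sketched below with hypothetical data.

```python
def predict_next(history, context):
    """Predict the most frequent follower of a context item across past
    memory strings (a simple frequency estimate; an assumption, since the
    platform's actual method isn't specified)."""
    followers = {}
    for sequence in history:
        for i, item in enumerate(sequence[:-1]):
            if item == context:
                nxt = sequence[i + 1]
                followers[nxt] = followers.get(nxt, 0) + 1
    return max(followers, key=followers.get) if followers else None

history = [["cloud", "rain", "wet-ground"],
           ["cloud", "rain", "puddle"],
           ["cloud", "sun"]]
print(predict_next(history, "cloud"))  # 'rain' (seen twice after 'cloud')
```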

Based on continuous exact occurrences, the strength of the relationship between strings and string attributes carries more weight, finally reaching a state of confirmation (confirmed patterns). A confirmation threshold has to be present, so that the machine confirms a pattern only after numerous exact occurrences.

Strings that don’t encounter exact matches can be termed unconfirmed patterns, wherein the machine continuously regresses on the pattern, either by establishing more relationships or by appending weights through subsequent data interactions.

There might be a set of strings that show variance only in specific sectors even after a certain threshold. These count as unconfirmed patterns (as in Figure 4) and are pushed back for further regression before being committed as confirmed. Even confirmed patterns can lose their state of confirmation, for instance when a new string matches only 99% of the existing string during comparison.

Figure 4

This job can be described as machine reasoning, as the machine will explore all possible influencing attributes in order to find the most deterministic pattern. Along with checking the pattern of these strings, the machine also checks whether the weights match. In case the string gets an exact 100% pattern match but there is a difference in value in one of the unit strings, the machine puts it back into the unconfirmed state for further regression.
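
Putting the confirmation logic together, here is a hedged sketch; the threshold value is an assumption, since the post only states that one must exist.

```python
CONFIRMATION_THRESHOLD = 5  # assumed value; the post only says one must exist

def update_pattern(pattern, new_string):
    """Only repeated 100% matches (pattern and weights) push a pattern toward
    'confirmed'; any mismatch, even a 99% one, demotes it back to
    'unconfirmed' for further regression."""
    if new_string == pattern["string"]:  # exact match, including weights
        pattern["matches"] += 1
        if pattern["matches"] >= CONFIRMATION_THRESHOLD:
            pattern["state"] = "confirmed"
    else:
        pattern["matches"] = 0           # push back for further regression
        pattern["state"] = "unconfirmed"
    return pattern

p = {"string": "color:FF90F5|weight:3", "matches": 4, "state": "unconfirmed"}
print(update_pattern(p, "color:FF90F5|weight:3")["state"])  # 'confirmed'
```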

Over a period of time, the machine will develop the capability to learn, understand similarities and differences, find answers to unique patterns and solve problems with greater intelligence. Using this technique, the machine can come up with decisions the same way humans do. And, when integrated with motor parts, the machine can perform actions autonomously in a real-life environment.

To sum up, the pattern-matching and strings technique provides a holistic approach to creating a completely autonomous and highly accurate machine: one that can learn on its own without any human intervention.

Would love to hear your comments.


Unsupervised Learning With Minimalism


A demonstration of unsupervised AI might be a robot that can think and act responsibly within a given environment, akin to what humans would typically do. In order for machines to replicate human intelligence, they require two critical elements, as do humans: time and data. A human exhibits intelligence by first collecting and absorbing data over a period of time. With the integration of data and time markers, any machine that can replicate cognitive processes may exhibit intelligence.

A true unsupervised learning machine requires negligible human interference. In the same way that a human baby expands its intelligence through observation and guidance, an autonomous machine may evolve simply through observation. Guidance expedites the learning process; however, it might have further implications. You could say that if you aim to prepare a machine for unsupervised learning, you simply need to install an application that will faithfully collect data from an array of integrated sensors. This data will be employed for learning and decision making, and subsequently, these decisions are coordinated back to the various motor components without any human interference in the routine. This eliminates the need for the tech companies or teams of engineers that might otherwise be necessary to make the machine capable.