Digital Immortality

This is a continuation of the digital immortality post published here.

Summarizing the previous post, we stated that “it might be possible to achieve longevity, irrespective of the target machine we chose to live in, provided that we learn how to extract data holistically from the source machine (brain) and develop a package to restore data seamlessly to the new entity to maintain continuity.”

In this article, we put together some assumptions (built by connecting scientific facts) about what is required to extract data from internalized memories within the brain.


How far is AI from being intelligent like humans?

Though there has been good progress in demonstrating AI, it appears we still have a long way to go, or we may be looking in the wrong direction. This is evident in current approaches: they lack an understanding of how data is organized (a logical model) and rely on heuristics to make machines behave like humans. AI researchers need a good understanding of the neural signaling pathways of the human brain; without it, we are left with shoot-in-the-dark techniques for exhibiting intelligence.

How does the human brain process data to exhibit intelligence?

Let’s understand how the brain processes sensory data to make sense of it.

A photoreceptor cell is a specialized type of cell found in the retina that is capable of visual phototransduction. The photoreceptor proteins in these cells convert light to signals, triggering a change in the cell's membrane potential. The different kinds of photoreceptor cells (rods, cones) are specialized to pick up different parameters such as shape, depth, and color. These parameters are passed on to the lateral geniculate nucleus, which receives individual packets of these parameters from designated ganglion cells. The lateral geniculate nucleus is a relay center in the thalamus for the visual pathway.

The thalamus has multiple functions; it is generally believed to act as a relay station between different subcortical areas and the cerebral cortex. Every sensory system includes a thalamic nucleus that receives sensory signals and sends them to the associated primary cortical area. The lateral geniculate nucleus picks up inputs from the retina and moves them to the visual cortex; the medial geniculate nucleus picks up auditory inputs and moves them to the primary auditory cortex; and the ventral posterior nucleus sends touch and proprioceptive information to the primary somatosensory cortex. You can observe that all inputs received by the thalamus are hashed and moved to the respective cortices for storage. The incoming inputs are unified based on timestamp ("neurons that fire together, wire together"). This creates an assembly of all incoming inputs as one processing unit, which Hebb referred to as "cell assemblies" (Hebb's law).

The thalamus is functionally connected to the hippocampus, where these assemblies further signal the creation of memories with respect to spatial memory and spatial sensory data, crucial for human episodic memory. There is support for the hypothesis that the connections between thalamic regions and particular parts of the mesio-temporal lobe differentiate the functioning of recollective and familiarity memory. In other words, when a particular signal is detected (familiarity), it is compared with past memories (recollection) to identify the object or event through similarities and differences. This would be the actual learning process of the brain.

The output signals are propagated to the prefrontal cortex, the part of the brain responsible for planning and decision-making. The outputs of the prefrontal cortex are further passed to the primary motor cortex to plan and execute movements.

Note: I have included the processes of the brain related to high-level information processing and excluded parts such as the amygdala, hypothalamus, and others that influence mood, reward, or hormonal regulation, as these parameters do not necessarily contribute to logical intelligence. Emotional outputs are important for the human body to generate energy through hormonal discharge, but they are not important in our endeavor to generate human-like intelligence (artificial intelligence). As these processes add unwanted biases, we would be better off without the emotional states, leaving no room for self-importance (ego).

Summary

  • The brain employs a centralized area to tag every sensory parameter, demonstrating that all cortices (silos) are connected.
  • It uses a linear input assembly to learn through similarities and differences via excitation/inhibition feedback.
  • Using this feedback, the brain exhibits traits of decision-making, planning, and prediction.

Inspired by the simplicity of the human brain, the Responsible Machines platform is designed to learn and exhibit intelligence just like the human brain. The platform allows the user to plug all sensors into a single platform so data can be tagged and auto-assembled using Hebb's logic. Using this auto-assembly (Strings), the machine will self-learn and exhibit features of the prefrontal cortex (decision-making, planning, and prediction).

Click here to read how linear assemblies can be used to learn, plan, predict, and make decisions.

Such an AI platform can be implemented as the brain of a machine (humanoids, cars, computers), where the machine auto-learns by collecting data from its sensors. Just like humans, these machines would be easy to teach specific tasks, or they could be allowed to learn through observation. Such machines can be trained or controlled by any human, young or old, without having to learn ML/AI programming, creating an opportunity for every individual to control AI machines and putting an end to the fear of unpredictability.

 


Autonomous Learning Machine

In order to create 'Strong AI', we need look no further than the cognitive processes of the human brain. We will see that processes involving anticipation, prediction, reasoning, and abstraction are merely a combination of simpler processes, and these can be mimicked by a machine in order to behave just like a human.

However, today's AI experts face two formidable obstacles as they strive to create an intelligent machine:

  • Extremely complex building blocks for AI machines
  • Constant supervision and inputs required to ‘guide’ the learning process

To create true 'Strong AI', one needs to begin with simple building blocks that come together to form increasingly complex structures. And the learning process needs to be autonomous, in order to reduce complexity and time to intelligence.

This article explains how a self-learning machine can exhibit autonomous classification, pattern detection, and output prediction using a simple data-organization technique. The data is organized as sequences forming patterns, which can be readily consumed to compute and exhibit artificial intelligence in real time.

Pattern matching can be described as the act of checking a given sequence of tokens for the presence of the constituents of some pattern. Sequence patterns are often described using regular expressions and matched using techniques such as backtracking. By far the most common form of pattern matching involves strings of characters. In many programming languages, a particular string syntax is used to represent regular expressions, which are patterns describing string characters. String versions of self-organizing maps and LVQ have already been implemented in the context of speech recognition.

Here we illustrate how the natural organization of input data can form strings (sequences), and how this generic organization can exhibit classification, feature selection, and intelligence using the patterns available in these string sequences.

Creation of String Complex

Consider a brand-new machine with all of its sensors (data-collection units) integrated into a centralized platform (just like the human brain), but yet to capture data (a blank slate). As it starts recording inputs, it should begin to organize data and exhibit intelligence, just like humans.
Any input captured by the sensors has two attributes: a parameter label and a value. Using these attributes across many inputs, the machine has to self-organize in order to exhibit intelligence. The value attached to each label can be either dynamic or static. The platform houses a rule for dynamic values, where the values (min-max) create a range scale in order to arrive at a threshold for that parameter.

Tree patterns for strings are represented as trees with root StringExpression and all the characters, in order, as children of the root. Thus, to match "any amount of trailing characters", a new wildcard ___ is needed, in contrast to _, which would match only a single character.
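The underscore wildcards above are Wolfram Language notation; as a rough analogue (an assumption for illustration only), regular expressions make the same distinction between a single character and a trailing run:

```python
import re

# "_" (one character) roughly corresponds to "." in a regex;
# "___" (any amount of trailing characters) roughly corresponds to ".*".
print(bool(re.fullmatch(r"ab.", "abc")))      # True: "." consumes one character
print(bool(re.fullmatch(r"ab.*", "abcdef")))  # True: ".*" consumes the rest
```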

The labels of the parameters are unique, and any exact match of a parameter (string match) results in an overlay and filters out data redundancy in the system.

The unit strings can be a sequence of characters that depicts a unit parameter and its associated weight. For example: if a color sensor recording in RGB inputs something like R[255].G[144].B[245], the machine could store the incoming data as that string, or convert it to a hexadecimal string and store it as FF90F5. Likewise, a shape-extraction algorithm can input XYZ parameters of objects, which are again stored as a string.
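A minimal sketch of this conversion, assuming the R[..].G[..].B[..] reading from the example above (the function name is hypothetical):

```python
def rgb_to_unit_string(r: int, g: int, b: int) -> str:
    """Pack an RGB sensor reading into a hexadecimal unit string."""
    return f"{r:02X}{g:02X}{b:02X}"

print(rgb_to_unit_string(255, 144, 245))  # -> "FF90F5"
```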

These unit strings, created from various inputs, are tagged together based on their timestamps. This allows the machine to group strings that fired (were recorded) together to form a string complex.

You could say that a combination of unit strings creates a ‘String Complex’.

For example, an individual unit parameter recorded for shape will carry information about a particular edge of an object. A set of individual strings together carries the information of the shape of a particular object. For instance, the shape of a petal might give you individual information in one string, but many petals combine to form a flower. So the string complex for the shape of the flower would look like [petal information][stamen information][receptacle information], and so on.
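A sketch of this timestamp-based grouping, with invented sensor readings and an assumed (timestamp, sensor, unit string) layout:

```python
from collections import defaultdict

# Hypothetical sensor readings: (timestamp, sensor, unit_string).
readings = [
    (1001, "color", "FF90F5"),
    (1001, "shape", "petal:edge#3"),
    (1001, "shape", "stamen:edge#1"),
    (1002, "color", "00FF00"),
]

# Group unit strings that fired (were recorded) together into one
# string complex per timestamp.
complexes = defaultdict(list)
for ts, sensor, unit in readings:
    complexes[ts].append(f"{sensor}[{unit}]")

for ts, units in complexes.items():
    print(ts, "".join(units))
# 1001 color[FF90F5]shape[petal:edge#3]shape[stamen:edge#1]
# 1002 color[00FF00]
```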

With just this data, you can see that a simple network is being built. If we consider that X, Y, and Z are three input parameters, the relationship of parameter X is established between the unit string and its corresponding weight. So every time there is an exact string type with a corresponding weight, the machine can detect a past instance and quickly correlate it to all the nodes linked to that unit string.

There can be many such parameters recorded by a single sensor in forming a string complex, and with many sensors, the system is full of varied string types. These string complexes are further grouped by the sensors that recorded them. As shown in the diagram, you might have composites from visual sensors, audio sensors, touch sensors, and so on.

The string complexes are now sequenced by unifying all sensory-level string complexes. They are further grouped to form a cluster representing a single object, determined primarily by the string formed by shape parameters. The object sequence contains complete information about the object.

For differentiation purposes, we can refer to it as the 'Object String'.

Now, many such object strings find relationships with each other and merge to form a macro composite, which we can call the 'Memory String'.

The Memory String holds complete information about an event, with information on every object present in the scenario along with its behavior and the relationships between objects.

To summarize so far, the string hierarchy forms a tree structure:

Unit Strings >> Sensory Strings >> Object Strings >> Memory Strings
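As a rough sketch, the four levels could be modeled as nested containers; the field names below are assumptions for illustration, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class UnitString:
    label: str            # parameter label, e.g. "color"
    value: str            # encoded value, e.g. "FF90F5"
    weight: float = 1.0   # relationship strength

@dataclass
class SensoryString:
    sensor: str
    units: list[UnitString] = field(default_factory=list)

@dataclass
class ObjectString:
    sensory: list[SensoryString] = field(default_factory=list)

@dataclass
class MemoryString:
    timestamp: int
    objects: list[ObjectString] = field(default_factory=list)
```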

The platform automatically organizes these data strings in the tree structure shown below.

The strings formed can be compared to Hebb's second postulate, "Neurons that fire together, wire together," as they group together using the exact timestamp. The pattern structure, a linear juxtaposition of tags and degrees of weight (tag assemblies), can be compared to the Hebbian engrams (cell assemblies) stated in Hebb's third postulate.

Now that the data is auto-organized in this fashion, we can see how the machine self-learns and makes autonomous decisions. The machine learns by detecting and matching these strings. An incoming memory string is decomposed into individual unit strings and compared against existing strings.

In the case of existing data, the value of each unit parameter is computed by combining the new value with the existing aggregated value to arrive at a new synthesized value. For every exact match, the strength of the relationship grows by 1. For every new value of an existing unit parameter, a new relationship is established if the value is unique. This detect-and-match technique can quickly help the machine identify an object or its behavior.

During the match, the overlay highlights the similarities and differences between the two strings. For every unique difference in a string, the machine creates a new node and auto-labels the unique combination. In the case of an exact match between the two compared strings, no unique label is created. However, for every similarity of an attribute within the unit string, the machine groups similar attributes to create a category named after the attribute. These categories are created at all levels, ranging across unit, sensory, object, and memory strings. This allows the machine to classify at every stage and maintain clusters of similar attributes.
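A minimal sketch of this detect-and-match overlay, assuming unit strings are stored with their relationship weights in a simple map:

```python
# Stored unit strings with accumulated relationship weights (invented data).
store: dict[str, float] = {"color=FF90F5": 3.0, "shape=petal:edge#3": 2.0}

incoming = ["color=FF90F5", "shape=stamen:edge#1"]

for unit in incoming:
    if unit in store:
        store[unit] += 1.0   # exact match: relationship strength grows by 1
    else:
        store[unit] = 1.0    # unique difference: new auto-labelled node

print(store)
# {'color=FF90F5': 4.0, 'shape=petal:edge#3': 2.0, 'shape=stamen:edge#1': 1.0}
```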

Using these strings, the machine can quickly pull out a desired event by selecting all strings of the string type that contains the desired parameter.

If the machine wants to predict the occurrence of a particular event, it can select the string type created by past collections and come up with a prediction of the occurrence. This allows the machine to predict scenarios based on past learning and quickly come up with a plan to achieve the occurrence, or take the next steps toward it, in the minimum number of steps.
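A sketch of this retrieval-and-prediction step, assuming each memory string is a tuple of unit strings recorded in time order (the door/light data is invented for illustration):

```python
from collections import Counter

memories = [
    ("door=open", "light=on"),
    ("door=open", "light=on"),
    ("door=open", "light=off"),
]

# Retrieval: pull every memory string containing the desired parameter.
hits = [m for m in memories if "door=open" in m]

# Prediction: the most frequent co-occurring unit string wins.
followers = Counter(u for m in hits for u in m if u != "door=open")
print(followers.most_common(1))  # [('light=on', 2)]
```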

With continued exact occurrences, the strength of the relationship between strings and string attributes gains more weight, finally reaching the state of confirmation (confirmed patterns). The threshold for confirmation has to be preset, so we know the machine confirms a truth only after numerous exact occurrences.

Strings that don't encounter exact matches can be termed unconfirmed patterns; the machine continuously regresses on such a pattern, either establishing more relationships or appending weights through subsequent data interactions.

There might be a set of strings that show variance only in specific sectors even after a certain threshold. These count as unconfirmed patterns and are pushed back for further regression before they are committed as confirmed. Even confirmed patterns can lose their state of confirmation when a new string matches only 99% of the existing string during comparison.

This job can be described as machine reasoning, as the machine explores every possible influencing attribute in order to find the most deterministic pattern. Along with checking the pattern of these strings, the machine also checks whether the weights match. If a string gets an exact 100% match but there is a difference in value in one of the unit strings, the machine puts it back into the unconfirmed state for further regression.
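A sketch of this confirmation logic under the assumptions above: a preset occurrence threshold, and demotion to the unconfirmed state whenever a comparison falls short of a 100% match (including a weight mismatch):

```python
CONFIRMATION_THRESHOLD = 5  # preset threshold; the value is an assumption

def pattern_state(exact_occurrences: int, match_ratio: float) -> str:
    # Any shortfall from a 100% match (pattern or weight) demotes the
    # pattern back to the unconfirmed state for further regression.
    if match_ratio < 1.0:
        return "unconfirmed"
    # Confirmation is granted only after enough exact occurrences.
    if exact_occurrences >= CONFIRMATION_THRESHOLD:
        return "confirmed"
    return "unconfirmed"

print(pattern_state(6, 1.0))   # -> "confirmed"
print(pattern_state(6, 0.99))  # -> "unconfirmed": pushed back for regression
```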

Over a period of time, the machine will develop the capability to learn, understand similarities and differences, find answers to unique patterns, and solve problems with greater intelligence. Using this technique, the machine can come to decisions just the way humans do, and, when integrated with motor parts, can perform actions autonomously in a real-life environment.

To sum up, the pattern-matching and strings technique provides a holistic approach to creating a completely autonomous and highly accurate machine: one that can learn on its own without any human intervention.

Would love to hear your comments.


Self Learning Avatars

In early computing, an avatar was a graphical representation of a user or the user's alter ego or character: an icon or figure representing a particular person in a video game, an Internet forum, and so on. Within the scope of AI, avatars can now be looked at as virtual embodiments of humans, driven by artificial intelligence rather than by real people.

Among the many ideas, the 2045 Avatar Initiative by Dmitry Itskov aims to create technologies enabling the transfer of an individual's personality to a more advanced non-biological carrier (an avatar), extending life up to the point of immortality.

Digital immortality has long been discussed by Gordon Bell, who describes it as the concept of storing (or transferring) a person's personality in a more durable medium, i.e., a computer, and allowing it to communicate with people in the future. The result might look like an avatar behaving, reacting, and thinking like the person, on the basis of that person's digital archive. After the death of the individual, this avatar could remain static with the past data or continue to learn and develop autonomously.

According to Gordon Bell and Jim Gray of Microsoft Research, retaining every conversation a person has ever heard is already realistic: it needs less than a terabyte of storage (at adequate quality). Experts such as Rothblatt envision the creation of "mindfiles" – collections of data from all kinds of sources, including the photos we upload to Facebook, the discussions and opinions we share on forums or blogs, and other social-media interactions that reflect our life experiences and our unique self.

To realize such an avatar, which can learn from files carrying the external experiences of a particular individual, we will need to provide it with mindware (an AI platform) and enable it to learn using an unsupervised learning model. By bringing together popular theories such as Kohonen's Self-Organizing Map, Adaptive Resonance Theory (ART), Hopfield networks, and hierarchical hidden Markov models (HHMMs), we believe we should be able to achieve an autonomous learning avatar that can learn from past data and exhibit the unique behavior of the individual's persona.

We believe the RM2 unsupervised learning model would be the ideal way to bring avatars alive. The unified platform can learn from files (text, images, videos, and audio) and build a timeline of memories. Memories can accrue weights from extracted sentiment and moods, and these weights are updated as newer conversations develop with the avatar. The slide deck below presents why this unified platform might be the most suitable way to achieve smart avatars.

The platform remains useful even once we have figured out memory extraction directly from the human brain. The memory-extraction process, which would mean extracting episodic memory by time sequences, would be the best way to achieve a very human-like avatar. The extracted data needs to be converted and organized according to the structure proposed earlier in order to allow avatars to behave, think, and perform like humans.


Minimalism (computing)

In computing, minimalism refers to the application of minimalist philosophies and principles in the design and use of hardware and software. Minimalism, in this sense, means designing systems that use the least hardware and software resources possible.

You could compare this with the functioning of the human brain, which exhibits intelligence using the least hardware (sensory organs) and the least software (minimal inputs and minimal processing). The human brain demonstrates minimalism in order to rapidly store and synthesize information, which we recognize as rapid thinking and quick reactions.

Our storage and retrieval mechanisms, supported by seemingly automatic computational techniques that allow us to reason and derive answers (often very rapidly), also imply that the brain's output (intelligence) is derived with a high degree of energy optimization. A straightforward form of organic learning involves mimicking, which can itself be reduced to sequencing and relationships. Forms of training can be reduced to a linear process that replicates action from collected data parameters. If the human brain had to rely on traditional machine-learning techniques to extract patterns for every learning exercise, its energy would drain and lead to malfunction.

However, as human data touch-points are limited, the data entities are limited, and the entire knowledge structure is created using these attributes. The human brain structures knowledge based on data collected through its touch-points (eyes, ears, nose, tongue, and skin), building relationships and applying weights for basic computations.

The brain employs a highly optimized data-architecture design with zero redundancy, which enables it to build myriad complex structures using minimal attributes. A hypothetical hierarchical relationship structure may be built within the brain in order to learn and process intelligence in real time.

Based on the reasoning above, artificial intelligence can be a direct replica of natural intelligence. The data model that goes into building an intelligent machine is crucial for rendering instant learning and response selection. Using sensory data (collected from sensors), it is vital to lay out a structure for incoming data in order to form patterns that can be matched, weighted, and synthesized.

Using the timestamp of each data record, relationships are built between data based on the hierarchical design, allowing the machine to extract patterns on the fly. These patterns are converted into strings during comparison in the process flow. Using the strength of a node and its cumulative weight, the machine may prioritize which response to select. The diagram below illustrates how a pattern may get highlighted (based on the focus rule) and how the machine can detect possible associations that help it predict the possible outcomes of a repetitive path. The depth of repetition is used to reach a confirmation state once it attains a certain threshold.
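A sketch of this weight-based prioritization, with invented nodes and weights: each candidate response sums the cumulative weight of the parts of its relationship trail that are currently active, and the heaviest trail wins:

```python
# Candidate responses and their relationship trails: (node, weight) pairs.
responses = {
    "stop": [("obstacle=near", 4.0), ("speed=high", 2.0)],
    "turn": [("obstacle=near", 1.0), ("lane=blocked", 3.0)],
}

active = {"obstacle=near", "speed=high"}  # nodes firing right now

def cumulative_weight(trail):
    return sum(w for node, w in trail if node in active)

best = max(responses, key=lambda r: cumulative_weight(responses[r]))
print(best)  # -> "stop" (weight 6.0 vs 1.0)
```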

Learning a rule by searching a path through a decision tree

The simplicity demonstrated in selecting an outcome involves simply following the relationship trail. This method (sequential covering rule building) may assist in arriving at decisions instantaneously, without having to parse redundant computational steps.
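Sequential covering is a standard rule-learning scheme; the toy sketch below greedily picks the single condition that covers the most positive examples without covering a negative, removes what it covers, and repeats (real implementations score candidate conditions more carefully):

```python
# Toy labelled examples: (features, is_positive).
examples = [
    ({"color": "red", "shape": "round"}, True),
    ({"color": "red", "shape": "long"}, True),
    ({"color": "green", "shape": "round"}, False),
]

rules = []
remaining = examples[:]
while any(label for _, label in remaining):
    candidates = {(a, v) for feats, _ in remaining for a, v in feats.items()}

    def score(cond):
        a, v = cond
        pos = sum(1 for f, l in remaining if l and f[a] == v)
        neg = sum(1 for f, l in remaining if not l and f[a] == v)
        return pos if neg == 0 else -1  # reject rules covering a negative

    best = max(candidates, key=score)
    if score(best) <= 0:
        break
    rules.append(best)
    # Remove the examples this rule covers, then learn the next rule.
    remaining = [(f, l) for f, l in remaining if f[best[0]] != best[1]]

print(rules)  # -> [('color', 'red')]
```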

To achieve such rapid processing, the data model for AI needs to be centered on the object node, which acts as the pivot between macro-clusters (frames, objects) and micro-clusters (shape, depth, color, etc.). The entire data relationship of an object is available as a string. These strings are matched against the incoming dataset, and the differences and similarities are used for auto-classification and auto-labeling.

Correct data relationships define the truth behind accurate intelligence. If the holistic relationship of a data entity is not computed to the fullest, there is every possibility that the robot/machine will end up on an erroneous route, which we can see occurring even among naturally intelligent models.

Data relationships and the right weights are the two important aspects of accuracy in deducing correct responses. Without them, we could see artificially intelligent machines failing in their learning methods and lapsing into non-intelligence.

 


Handling Visual Parameters

This post articulates, at a high level, how visual learning works in the RM2 platform. The visual learning feature in RM2 is part of the unified architecture, where visual object detection and learning are integrated to achieve real-time detection and behavior prediction in a given environment.

In order to accurately detect objects and learn from correct data associations, it is critical to extract unit data parameters and build a proper foundation for establishing relationships between unique parameters. This approach results in high accuracy when identifying objects or learning object behavior.


Embedding Language Processing

Introduction

This post provides an overview of how the RM2 Network uses unsupervised learning to process natural language using reference visual inputs along with the object label, just the way humans do. We believe that in order to deliver effective machine-human communication, we need to incorporate visual cues within language learning; this gives the machine the ability to learn, reason, explain abstractions, and understand the sentiment in a given conversation, and helps maintain context at all times.

In order to explain how the language processing works, we present an overview of the entire network and how unsupervised learning is conducted for autonomous learning.

RM2 Network

The RM2 Network is a hybrid model for unsupervised learning which combines aspects of Kohonen's Self-Organizing Map (SOM) and recurrent networks such as the Hopfield network. Click here to read more on the existing models that influence the hybrid.


Machine Reasoning and Abstraction

To begin, we recommend watching a brilliant video produced by DARPA (below), which cuts through the AI hype to clearly articulate how demonstrated machine intelligence has evolved, along with its shortfalls.

In summarizing AI capabilities, we may observe that perception and learning capabilities matured during the second wave of development. The hard reasoning that showed progress in the first wave simply lost out due to its limitations. We may also witness that the learning feature can deliver erroneous outputs even after training on over a million examples, as occurred in the case of a panda being mistaken for a gibbon.


Unsupervised Learning With Minimalism


A demonstration of unsupervised AI might be a robot that can think and act responsibly within a given environment, akin to what humans would typically do. In order for machines to replicate human intelligence, they require two critical elements, just as humans do: time and data. A human exhibits intelligence by first collecting/absorbing data over a period of time. With the integration of data and time markers, any machine that can replicate cognitive processes may exhibit intelligence.

A true unsupervised learning machine requires negligible human interference. In the same way that a human baby expands its intelligence through observation and guidance, an autonomous machine may evolve simply through observation. Guidance expedites the learning process; however, it may have further implications. You could say that if you aim to prepare a machine for unsupervised learning, you simply need to install an application that will faithfully collect data from an array of integrated sensors. These data are employed for learning and decision-making, and the resulting decisions are coordinated back to the various motor components without any human interference in the routine. This removes the need for the tech companies or teams of engineers that might otherwise be necessary to make the machine capable.


Pattern Recognition using Symbolic Strings

Unsupervised machine learning is the task of inferring a function to describe hidden structure from unlabeled data. We could state that unsupervised learning is a feature of natural neural networks that needs to be replicated in a machine to achieve autonomous unsupervised learning. In order to achieve the outputs of unsupervised learning, the platform designed by Responsible Machines highlights several key features that assist a robot in attaining a self-aware state.

Data Organization: In order to realize unsupervised intelligence, it is important to organize data that is conducive to instant pattern recognition. This may be compared to the Adaptive Resonance Theory (ART), developed by Stephen Grossberg and Gail Carpenter. The basic ART system typically consists of a comparison field, a recognition field that is comprised of neurons, a vigilance parameter (threshold of recognition), and a reset module. This platform organizes data in a manner that simplifies identification and comparison tasks.