Autonomous Learning Machine

In order to create ‘Strong AI’, we need to look no further than the cognitive processes of the human brain. We will see that processes involving anticipation, prediction, reasoning and abstraction are themselves combinations of simpler processes, and that these can be mimicked by a machine so that it behaves much like a human.

However, today’s AI experts are faced with two formidable obstacles, as they strive to create an intelligent machine. These are:

  • Extremely complex building blocks for AI machines
  • Constant supervision and inputs required to ‘guide’ the learning process

To create true ‘Strong AI’, one needs to begin with simple building blocks that come together to form increasingly complex structures. And the learning process needs to be autonomous, in order to reduce the complexity and the time needed to derive intelligence.

This article explains how a self-learning machine can exhibit autonomous classification, pattern detection and output prediction, using a simple data organization technique. The data is organized as sequences forming patterns, which can be readily consumed to compute and exhibit artificial intelligence in real time.

Pattern matching can be described as the act of checking a given sequence of tokens for the presence of the constituents of some pattern. Sequence patterns are often described using regular expressions and matched using techniques such as backtracking. By far the most common form of pattern matching involves strings of characters. In many programming languages, a particular syntax of strings is used to represent regular expressions, which are patterns describing sets of strings. String versions of self-organizing maps and learning vector quantization (LVQ) have already been implemented in the context of speech recognition.
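
As a minimal illustration of checking a token sequence against a pattern with a regular expression (the pattern and tokens below are hypothetical, not drawn from the platform):

```python
import re

# Match a token sequence such as "shape:petal|shape:stamen|shape:receptacle"
pattern = re.compile(r"(?:[a-z]+:[a-z]+\|)*[a-z]+:[a-z]+")

sequence = "shape:petal|shape:stamen|shape:receptacle"
print(bool(pattern.fullmatch(sequence)))  # True: the sequence contains the pattern's constituents
```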

Here we illustrate how the natural organization of input data can form strings (sequences), and how this generic organization can support classification, feature selection, and intelligence using the patterns available in these string sequences.

Creation of String Complex

Consider a brand new machine in which all the sensors (data collection units) are integrated into a centralized platform (just like the human brain), but which has yet to capture any data (a blank slate). As it starts recording inputs, it should begin to organize data and exhibit intelligence, just as humans do.
Any input captured by the sensors has two attributes: a parameter label and a value. Using these attributes across many inputs, the machine has to self-organize in order to exhibit intelligence. The value attached to each label can be either dynamic or static. For dynamic values, the platform houses a rule whereby the observed values (min-max) create a range scale used to arrive at a threshold for that parameter.
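
One plausible reading of this dynamic-value rule is that the machine keeps a running min and max per parameter label and derives a threshold from that range; the sketch below assumes a simple midpoint threshold, which the text does not specify, and the class and label names are illustrative only.

```python
class DynamicParameter:
    """Tracks the observed range of a dynamic parameter and derives a threshold."""

    def __init__(self, label):
        self.label = label
        self.min_value = None
        self.max_value = None

    def record(self, value):
        # Expand the observed min-max range with every new reading.
        self.min_value = value if self.min_value is None else min(self.min_value, value)
        self.max_value = value if self.max_value is None else max(self.max_value, value)

    @property
    def threshold(self):
        # Assumed rule: the threshold is the midpoint of the observed range.
        return (self.min_value + self.max_value) / 2


temperature = DynamicParameter("temperature")
for reading in (18.0, 24.5, 31.0):
    temperature.record(reading)
print(temperature.threshold)  # 24.5
```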

Tree patterns for strings are represented as trees with a root StringExpression and all the characters, in order, as children of the root. Thus, to match “any amount of trailing characters”, a new wildcard is needed, in contrast to the single-character wildcard that matches only one character.

The labels of the parameters are unique, and any exact match of a parameter (a string match) results in an overlay, filtering out data redundancy in the system.

A unit string is a sequence of characters that depicts a unit parameter and its associated weight. For example, if a color sensor recording in RGB inputs something like R[255].G[144].B[245], the machine could store the incoming data as a string, or convert it to hexadecimal and store it as FF90F5. Likewise, a shape extraction algorithm can input XYZ parameters of objects, which are again stored as a string.
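
A minimal sketch of that serialization step (the function name is illustrative):

```python
def rgb_to_unit_string(r, g, b):
    """Convert an RGB reading such as R[255].G[144].B[245] into a hex unit string."""
    return f"{r:02X}{g:02X}{b:02X}"


print(rgb_to_unit_string(255, 144, 245))  # FF90F5
```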

These unit strings created from various inputs are tagged together based on their timestamp. This allows the machine to group strings that fired (recorded) together to form a string complex.

You could say that a combination of unit strings creates a ‘String Complex’.

For example, individual unit parameters recorded for shape will carry information about a particular edge of an object, and a set of individual strings together carries the information of the shape of a particular object. For instance, the shape of a petal might give you individual information in one string, but many petals combine to form a flower. So, the string complex for the shape of the flower would look like [petal information][stamen information][receptacle information], and so on.
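
One way to picture the timestamp-based grouping into a string complex (a sketch; the timestamps and unit-string formats are assumed for illustration):

```python
from collections import defaultdict

# Unit strings tagged with the timestamp at which they were recorded.
unit_strings = [
    (1001, "petal:FF90F5"),
    (1001, "stamen:A0B1C2"),
    (1001, "receptacle:334455"),
    (1002, "leaf:00FF00"),
]

# Unit strings that fired together (same timestamp) form a string complex.
string_complexes = defaultdict(list)
for timestamp, unit in unit_strings:
    string_complexes[timestamp].append(unit)

print(string_complexes[1001])
# ['petal:FF90F5', 'stamen:A0B1C2', 'receptacle:334455']
```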

With just this data, you can see that a simple network is being built. If we consider X, Y, and Z to be three input parameters, the relationship of parameter ‘X’ is established between the unit string and its corresponding weight. So, every time there is an exact match of string type and corresponding weight, the machine can detect a past instance and quickly correlate to all the nodes linked to the unit string.
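
A simple adjacency structure conveys the idea (the node names and dictionary layout below are hypothetical):

```python
# Each unit string maps to its weight and to the nodes it has been linked with.
network = {
    "X:FF90F5": {"weight": 3, "linked_nodes": {"Y:00FF00", "Z:112233"}},
}

incoming_string, incoming_weight = "X:FF90F5", 3
node = network.get(incoming_string)
if node and node["weight"] == incoming_weight:
    # An exact string-and-weight match recalls every node linked to this unit string.
    print(node["linked_nodes"])
```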

A single sensor may record many such parameters in forming a string complex, and with many sensors the system is full of varied string types. These string complexes are further grouped by the sensor that recorded them. As shown in the diagram, you might have composites from visual sensors, audio sensors, touch sensors and so on.

The string complex is now sequenced by unifying all the sensory-level string complexes. These are further grouped to form a cluster for a single object, determined primarily by the string formed by the shape parameters. The object sequence will contain complete information about the object.

For differentiation purposes, we can refer to it as the ‘Object String’.

Now, many such object strings find relationships with each other and merge to form a macro composite, which we can call the ‘Memory String’.

The Memory String holds complete information about an event, with information on every object present in the scenario, along with its behavior and the relationships between objects.

To summarize so far, the string hierarchy forms the following tree structure:

Unit Strings >> Sensory Strings >> Object Strings >> Memory Strings

The platform automatically organizes these data strings in the tree structure shown below.
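
One way to sketch that hierarchy as nested data (a simplified illustration; the field names and sample values are assumptions):

```python
# Memory String -> Object Strings -> Sensory Strings -> Unit Strings
memory_string = {
    "event": "flower_observed",
    "object_strings": [
        {
            "object": "flower",
            "sensory_strings": {
                "visual": {"unit_strings": ["petal:FF90F5", "stamen:A0B1C2"]},
                "touch": {"unit_strings": ["texture:soft"]},
            },
        }
    ],
}

print(memory_string["object_strings"][0]["sensory_strings"]["visual"]["unit_strings"])
```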

The strings so formed can be compared with Hebb’s second postulate, “Neurons that fire together, wire together”, as they group together using the exact timestamp. The pattern structure, which is a linear juxtaposition of tags and degrees of weight (tag assemblies), can be compared to the Hebbian engrams (cell assemblies) described in Hebb’s third postulate.

Now that the data is auto-organized in this fashion, we can see how the machine self-learns and makes autonomous decisions. The machine learns by detecting and matching these strings: an incoming memory string is decomposed into individual unit strings and compared against existing strings.

Where data already exists, the value of each unit parameter is computed by combining the new value with the existing aggregated value to arrive at a new synthesized value. For every exact match, the strength of the relationship grows by 1. For every new value of an existing unit parameter, a new relationship is established if the value is unique. This detect-and-match technique can quickly help the machine identify the object or its behavior.
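
A hedged sketch of this detect-and-match step (the storage layout and function name are assumptions, not the platform’s actual implementation):

```python
def match_unit_string(store, label, value):
    """Update the store for one incoming unit parameter.

    Exact matches strengthen the relationship by 1; unseen values create a new
    relationship for that label.
    """
    relationships = store.setdefault(label, {})
    if value in relationships:
        relationships[value] += 1          # exact match: strength grows by 1
    else:
        relationships[value] = 1           # unique value: new relationship
    return relationships[value]


store = {}
match_unit_string(store, "color", "FF90F5")
print(match_unit_string(store, "color", "FF90F5"))  # 2: the relationship has strengthened
```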

During the match, the overlay highlights the similarities and the differences between the two strings. For every unique difference in a string, it creates a new node and auto-labels the unique combination. In the case of an exact match between the compared strings, no unique label is created. However, for every similarity of an attribute within the unit string, the machine groups the similar attributes to create a category in the name of that attribute. These categories are created at all levels of a string type, ranging across the unit, sensory, object and memory strings. This allows the machine to classify at every stage and maintain clusters of similar attributes.
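
The overlay can be pictured as a set comparison over the attributes of two strings (a sketch; the categorisation rule is deliberately simplified and the attribute names are hypothetical):

```python
def overlay(existing_units, incoming_units):
    """Compare two string complexes and return their similarities and differences."""
    existing, incoming = set(existing_units), set(incoming_units)
    similarities = existing & incoming      # grouped into categories by attribute
    differences = incoming - existing       # each unique difference becomes a new node
    return similarities, differences


sims, diffs = overlay(["petal:FF90F5", "stamen:A0B1C2"],
                      ["petal:FF90F5", "leaf:00FF00"])
print(sims)   # {'petal:FF90F5'} -> strengthens the 'petal' category
print(diffs)  # {'leaf:00FF00'}  -> new auto-labelled node
```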

Using these strings, the machine can quickly pull out the desired event by selecting all strings with the string type that contains the desired parameter.

If the machine wants to predict the occurrence of a particular event, it can select the string type created from past collections and arrive at its prediction of the occurrence. This allows the machine to predict scenarios based on past learning and quickly come up with a plan to activate the next steps, in order to achieve the occurrence in a minimum number of steps.

With continued exact occurrences, the strength of the relationship between strings and string attributes carries more weight, finally reaching the state of confirmation (confirmed patterns). A confirmation threshold has to be present, so that the machine confirms a pattern only after numerous exact occurrences.
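
A minimal sketch of the confirmation rule (the threshold value is an arbitrary assumption; the article does not state one):

```python
CONFIRMATION_THRESHOLD = 10  # assumed number of exact occurrences before confirmation

def pattern_state(exact_occurrences, threshold=CONFIRMATION_THRESHOLD):
    """A pattern is confirmed only after enough exact occurrences."""
    return "confirmed" if exact_occurrences >= threshold else "unconfirmed"


print(pattern_state(3))   # unconfirmed: keep regressing on the pattern
print(pattern_state(12))  # confirmed
```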

Strings that don’t encounter exact matches can be termed unconfirmed patterns, wherein the machine continuously regresses on the pattern by either establishing more relationships or appending weights through subsequent data interactions.

There might be a set of strings that show variance only in specific sectors even after a certain threshold. These count as unconfirmed patterns (as in Figure 4), and are pushed back for further regression before being committed as confirmed. Even confirmed patterns can lose their state of confirmation, for instance when a new string matches only 99% of the existing string during comparison.

Figure 4

This job can be described as machine reasoning, as the machine explores all possible influencing attributes in order to understand the most deterministic pattern. Along with checking the pattern of these strings, the machine also checks whether the weights match. If a string gets an exact 100% match but there is a difference in value in one of the unit strings, the machine puts it back into the unconfirmed state for further regression.

Over a period of time, the machine will develop the capability to learn, understand similarities and differences, find answers to unique patterns and solve problems with greater intelligence. Using this technique, the machine can come up with decisions the same way humans do. And, when integrated with motor parts, the machine can perform actions autonomously in a real-life environment.

To sum up, the pattern matching and strings technique provides a holistic approach to creating a completely autonomous and highly accurate machine: one that can learn on its own without any human intervention.

Would love to hear your comments.

Minimalism (computing)

In computing, the term minimalism refers to the application of minimalist philosophies and principles in the design and use of hardware and software. Minimalism, in this sense, means designing systems that use the least hardware and software resources possible.

You could compare this with the functioning of the human brain, which exhibits intelligence using the least hardware (sensory organs) and the least software (minimal inputs and minimal processing). The human brain demonstrates minimalism in order to rapidly store and synthesize information, which we recognize as rapid thinking and quick reactions.

Our storage and retrieval mechanisms, supported by seemingly automatic computational techniques that allow us to reason and derive answers (often very rapidly), also imply that the brain’s output (intelligence) is derived with a high degree of energy optimization. A straightforward form of organic learning is mimicking, which can itself be reduced to sequencing and relationships. Forms of training can be reduced to a linear process that replicates action with collected data parameters. If the human brain had to rely on traditional machine-learning techniques to extract patterns for every learning exercise, its energy would drain and lead to malfunction.

However, as human data touch-points are limited, we recognize that the data entities are limited, and the entire knowledge structure is created using these attributes. The human brain structures knowledge based on data collected through its touch-points (eyes, ears, nose, tongue, and skin) by building relationships and applying weights for basic computations.

The brain employs the most optimized data architecture design with zero redundancy, which enables it to build myriad complex structures using minimum attributes. A hypothetical hierarchical relationship structure may be built within the brain in order to learn and process intelligence in real-time.

Based on the reasoning above, artificial intelligence can therefore be a direct replica of natural intelligence. The data model that goes into building an intelligent machine is of the utmost importance for rendering instant learning and response selection. Using sensory data (collected from sensors), it is vital to lay out a structure for incoming data in order to form patterns that can be matched, given weights, and synthesized.

Using the timestamp of each data record, relationships are built between data based on the hierarchical design, allowing the machine to extract patterns on the fly. These patterns are converted into strings during comparison in the process flow. Using the strength of the node and its cumulative weight, the machine may prioritize which response to select. The diagram below illustrates how a pattern may become highlighted (based on the focus rule) and how it can detect possible associations that help the machine predict the possible outcomes of a repetitive path. The depth of repetition is used to achieve a confirmation state once it attains a certain threshold.
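
Prioritizing a response by node strength and cumulative weight could look like the following sketch (the field names, scoring rule and sample values are assumptions for illustration):

```python
# Candidate responses scored by the strength of the node and its cumulative weight.
candidates = [
    {"response": "approach", "strength": 4, "cumulative_weight": 2.5},
    {"response": "avoid", "strength": 7, "cumulative_weight": 3.1},
]

best = max(candidates, key=lambda c: c["strength"] * c["cumulative_weight"])
print(best["response"])  # 'avoid': the strongest, most heavily weighted path wins
```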

Learning a rule by searching a path through a decision tree

The simplicity of selecting an outcome lies in following the relationship trail. This method (sequential covering rule building) may assist in arriving at decisions instantaneously, without having to parse through redundant computational steps.
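
Following the relationship trail can be sketched as walking the strongest link at every step of a small rule tree (the tree and its weights below are hypothetical):

```python
# Each node maps to its children and the strength of the relationship to each child.
trail = {
    "cloudy_sky": {"rain": 6, "clear_evening": 2},
    "rain": {"wet_ground": 9},
    "wet_ground": {},
}

def follow_trail(node):
    """Walk the strongest relationship at every step until a leaf is reached."""
    path = [node]
    while trail.get(node):
        node = max(trail[node], key=trail[node].get)
        path.append(node)
    return path


print(follow_trail("cloudy_sky"))  # ['cloudy_sky', 'rain', 'wet_ground']
```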

To achieve such rapid processing, the data model for AI must be centered on the object node, which acts as the pivot between macro-clusters (frame, objects) and micro-clusters (shape, depth, color, etc.). The entire data relationship of an object is available as a string. These strings are matched against the incoming dataset, and the differences and similarities are used for auto-classification and auto-labeling.

Correct data relationships define the truth behind accurate intelligence. If the holistic relationships of a data entity are not computed to the fullest, there is every possibility that the robot/machine will end up on an erroneous route, something we can see occurring even among naturally intelligent beings.

Data relationships and the right weights are the two important aspects of accuracy in deducing correct responses. Without them, we could see artificially intelligent machines failing in their learning methods and ending up with non-intelligence.

Handling Visual Parameters

This post articulates, at a high level, how visual learning works within the RM2 Platform. The visual learning feature in RM2 is part of the unified architecture, where visual object detection and learning are integrated to achieve real-time detection and behavior prediction in a given environment.

In order to accurately detect objects and learn from correct data associations, it is critical to extract unit data parameters, laying a proper foundation for establishing relationships between unique parameters. This approach results in high accuracy when identifying objects or learning object behavior.

Embedding Language Processing

Introduction

This post provides an overview of how the RM2 Network employs unsupervised learning to process natural language using reference visual inputs along with the object label, just as humans do. We believe that in order to deliver effective machine-human communication, we need to integrate visual cues with language; this provides the ability to learn, reason, explain abstractions and understand the sentiment in a given conversation, and helps maintain context at all times.

In order to explain how the language processing works, we present an overview of the entire network and how unsupervised learning is conducted for autonomous learning.

RM2 Network

The RM2 Network is a hybrid model for unsupervised learning that combines aspects of Kohonen’s Self-Organizing Map (SOM) and recurrent networks such as the Hopfield network. Click here to read more on all the existing models that influence the hybrid.

Unsupervised Learning With Minimalism

A demonstration of unsupervised AI might be a robot that can think and act responsibly within a given environment, akin to what humans would typically do. In order for machines to replicate human intelligence they require, as do humans, two critical elements: time and data. A human exhibits intelligence by first collecting and absorbing data over a period of time. With the integration of data and time markers, any machine that can replicate cognitive processes may exhibit intelligence.

A truly unsupervised learning machine requires negligible human interference. In the same way that a human baby expands its intelligence through observation and guidance, an autonomous machine may evolve simply by observation. Guidance expedites the learning process; however, it may have further implications. You could say that, to prepare a machine for unsupervised learning, you simply need to install an application that will faithfully collect data from an array of integrated sensors. These data are employed for learning and decision making, and the resulting decisions are coordinated back to the various motor components without any human interference in the routine. This removes the need for the tech companies or teams of engineers that might otherwise be necessary to make the machine capable.