How Unsupervised Deep Learning Operates within the RM2 Platform

The RM2 Platform is a hybrid model for unsupervised learning that combines aspects of Kohonen's Self-Organizing Map (SOM) with recurrent networks such as the Hopfield network.

The unsupervised nature of the RM2 Platform, where no known target output is associated with any input pattern, can be compared to Kohonen's SOM, which processes the input patterns and learns to cluster or segment the data by adjusting its weights. A two-dimensional map is typically created in such a way that the topological relationships among the inputs are preserved.
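To make the SOM comparison concrete, here is a minimal sketch of one SOM training step: find the best-matching unit for an input, then pull it and its grid neighbours toward that input. The grid size, learning rate, and neighbourhood width are illustrative, not RM2 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 3              # 4x4 map over 3-dimensional inputs
weights = rng.random((grid_h, grid_w, dim))

def train_step(x, weights, lr=0.5, sigma=1.0):
    # Best-matching unit (BMU): the node whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Move the BMU and its neighbours toward x; a Gaussian neighbourhood
    # means nearby nodes move more than distant ones, which is what
    # preserves the topology of the input space on the 2-D map.
    for i in range(grid_h):
        for j in range(grid_w):
            grid_dist2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            influence = np.exp(-grid_dist2 / (2 * sigma ** 2))
            weights[i, j] += lr * influence * (x - weights[i, j])
    return bmu

for _ in range(100):
    train_step(rng.random(3), weights)
```

After enough steps, similar inputs map to nearby grid cells, which is the clustering behaviour the comparison above refers to.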

The interrelationships among inputs, together with various derived fields and encoded weights, constitute a pattern. Each pattern can be compared to a memory. These patterns can be viewed as alphanumeric strings in which a weight is encoded for each unit-level input tag and each derived (hidden) tag.

The tags form a string under a common time stamp, in keeping with Hebb's second postulate, "neurons that fire together, wire together". The pattern structure, a linear juxtaposition of tags and their degrees of weight (tag assemblies), can be compared to the Hebbian engrams (cell assemblies) of Hebb's third postulate.
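The "fire together, wire together" idea can be sketched as simple co-occurrence counting: tags observed under the same time stamp strengthen the link between each pair, forming a rough cell-assembly analogue. The tag names and increment value are illustrative.

```python
from collections import defaultdict
from itertools import combinations

link_strength = defaultdict(float)

def observe(tags_at_t, increment=0.1):
    # Tags that appear under the same time stamp "fire together":
    # each co-occurrence strengthens the link between the pair.
    for a, b in combinations(sorted(tags_at_t), 2):
        link_strength[(a, b)] += increment

observe({"red", "round", "apple"})
observe({"red", "round", "ball"})
# "red" and "round" have co-occurred twice, so their link is strongest.
```

Repeated exposure thus turns frequently co-active tags into a tightly linked assembly, which is the engram analogy the text draws.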

The core learning function, which recalls the corresponding stored pattern, compares it with the detected pattern, and then produces a clear version of the pattern at the output, resembles the basic ART (Adaptive Resonance Theory) system, an unsupervised learning model. A typical ART system consists of a comparison field and a recognition field composed of neurons, a vigilance parameter (the threshold of recognition), and a reset module. The comparison field takes an input vector (a one-dimensional array of values) and transfers it to its best match in the recognition field: the single neuron whose weight vector most closely matches the input vector. Each recognition-field neuron outputs a negative signal, proportional to its quality of match to the input vector, to each of the other recognition-field neurons, inhibiting their output. In this way the recognition field exhibits lateral inhibition, allowing each of its neurons to represent a category into which input vectors are classified.

The RM2 network is dynamic: its states change continuously until they reach an equilibrium point and remain there until the input changes. The activation of a state is derived from the underlying state changes within its hierarchy. The final state is reached in stages, based on feedback from the previous iteration's output. This approach is comparable to the Hierarchical Hidden Markov Model (HHMM), which is considered a self-contained probabilistic model. The advantage of this approach is that its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy).
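The staged settling to equilibrium can be sketched as a fixed-point iteration: each node recomputes its state from the previous iteration's outputs until nothing changes. The network, weights, and thresholds below are illustrative, not RM2's actual topology.

```python
def settle(states, weights, threshold=1.0, max_iters=100):
    # weights: {node: {input_node: weight}}; nodes absent from `weights`
    # (pure inputs) keep their clamped state.
    for _ in range(max_iters):
        new_states = dict(states)
        for node, inputs in weights.items():
            # Each node fires when the weighted sum of its inputs'
            # states from the previous iteration crosses the threshold.
            total = sum(w * states[src] for src, w in inputs.items())
            new_states[node] = 1 if total >= threshold else 0
        if new_states == states:       # equilibrium: no state changed
            return states
        states = new_states
    return states

# Two active inputs drive a mid-level node, which in turn drives the top:
# activation propagates up the hierarchy one stage per iteration.
weights = {"mid": {"x1": 0.6, "x2": 0.6}, "top": {"mid": 1.0}}
states = settle({"x1": 1, "x2": 1, "mid": 0, "top": 0}, weights)
```

Note that the top node only fires one iteration after the mid node, which is the staged, feedback-driven behaviour described above.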

To summarize, the RM2 Platform can be explained as follows:

(1) The inputs are stored as specified by a data-aware hierarchical semantic model, which organizes the data into tag assemblies (patterns) with encoded weights (0, 1). The objective is to form the object layer from the input layer; these objects then form networks with other objects present at a given time stamp, and the result is further clustered on object reference parameters to create the top layer, which can be termed the memory layer.

The inputs, which could be visual, sound, language, touch, or any other sensory data, are all handled in the same fashion: each builds a relationship to the object layer, making it easy to associate a given visual with an associated memory. The memory layer can be said to hold complete information (in hierarchical fashion) about a particular scenario: time parameters, objects, shapes, colors, labels (names), behavior, derived fields, and outcome associations. All of this data is converted into a tag assembly.
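A tag assembly as described in step (1) might be encoded as a time-stamped string of tag:weight pairs. The field names and the pipe-delimited format below are hypothetical illustrations, not the platform's actual schema.

```python
def encode_assembly(timestamp, tags):
    # tags: {tag_name: weight in {0, 1}} covering input-level tags,
    # object references, derived (hidden) fields, and outcome associations.
    # Sorting makes the encoding canonical so assemblies compare reliably.
    body = "|".join(f"{name}:{w}" for name, w in sorted(tags.items()))
    return f"{timestamp}|{body}"

memory = encode_assembly(
    "2021-06-01T10:00:00",
    {"color.red": 1, "shape.round": 1, "label.apple": 1, "sound.crunch": 0},
)
```

Because every sensory modality reduces to the same tag:weight form, visual, sound, and language inputs can all link into one assembly under a shared time stamp.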

(2) During processing, each newly received tag assembly is compared with the existing tag assembly of an object neuron; the match reveals the differences and similarities between the two. Similarities strengthen the relationship, while differences (which could be a combination of input sets) are matched against the input layer for similarities. If no match is available, a new node is created and auto-labeled with a prefixed alphanumeric string. Through exposure to language, the objects can build a network of labels, updating natural-language words and their associations using visual cues. This allows the machine to recognize conflicting patterns (reasoning) and arrive at candidate pattern types that can resolve the conflict (planning solutions).
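Treating assemblies as tag sets, the comparison in step (2) splits into similarities (shared tags) and differences, with unknown differences triggering auto-labeled new nodes. The `RM2-` label prefix and counter format are hypothetical.

```python
import itertools

_counter = itertools.count()

def compare(stored, incoming, known_inputs):
    # Shared tags would strengthen the object relationship; differing
    # tags are first looked up at the input layer.
    similarities = stored & incoming
    differences = incoming - stored
    new_nodes = {}
    for tag in sorted(differences - known_inputs):
        # Unknown difference: create a node with a prefixed
        # alphanumeric auto-label (format is illustrative).
        new_nodes[tag] = f"RM2-{next(_counter):06d}"
    return similarities, differences, new_nodes

sims, diffs, created = compare(
    stored={"color.red", "shape.round"},
    incoming={"color.red", "shape.square"},
    known_inputs=set(),
)
```

Here `shape.square` is neither stored on the object nor known at the input layer, so it receives a fresh auto-generated node label.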

(3) The weights, which are core to state activation, operate at the individual input layer and cascade all the way to the top level, producing a cumulative weight that is checked against a threshold. On every input, weights are added to the previous iteration's output, and when the total reaches a particular threshold, the state changes to 0 or 1.
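The accumulate-then-cascade behaviour in step (3) can be sketched with nodes that add each incoming weight to their running total and, on crossing their threshold, flip to 1 and pass an activation to their parent. The threshold values are illustrative.

```python
class Node:
    def __init__(self, threshold, parent=None):
        self.total = 0.0       # cumulative weight across inputs
        self.state = 0
        self.threshold = threshold
        self.parent = parent

    def receive(self, weight):
        # Add the new weight to the previous iteration's accumulated total.
        self.total += weight
        if self.state == 0 and self.total >= self.threshold:
            self.state = 1
            if self.parent is not None:
                # Cascade the activation to the level above.
                self.parent.receive(1.0)

top = Node(threshold=2.0)
mid = Node(threshold=1.0, parent=top)
leaf = Node(threshold=1.0, parent=mid)
leaf.receive(0.5)     # below threshold: no activation yet
leaf.receive(0.5)     # threshold reached: cascades to mid, then to top
```

With these values the leaf and mid nodes activate, while the top node (threshold 2.0) has only accumulated 1.0 and stays at 0 until another branch contributes.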

The RM2 design was developed by Responsible Machines to make AI accessible and simple for everyone. The platform lets users manage this unsupervised machine by monitoring or querying a particular node to understand the probable decisions or actions the machine may take, making AI user-friendly and manageable without the need for highly skilled AI engineers.

