Pattern Recognition using Strings

Unsupervised machine learning is the task of inferring a function that describes hidden structure in unlabeled data. It can be viewed as a capability of natural neural networks that must be replicated in a machine before autonomous learning becomes possible. To achieve the outputs of unsupervised learning, the platform designed by Responsible Machines highlights several key features that help a robot attain a self-aware state.

Data Organization: To realize unsupervised intelligence, data must be organized in a way that is conducive to instant pattern recognition. This can be compared to Adaptive Resonance Theory (ART), developed by Stephen Grossberg and Gail Carpenter. A basic ART system typically consists of a comparison field, a recognition field composed of neurons, a vigilance parameter (a threshold of recognition), and a reset module. This platform organizes data in a manner that simplifies identification and comparison tasks, as sketched below.
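
A minimal sketch of such an ART-style comparison/recognition loop is shown below. The binary-feature representation, the overlap-based choice rule, and the default vigilance value are illustrative assumptions, not the platform's actual implementation.

import numpy as np

def art_categorize(patterns, vigilance=0.8):
    """Assign binary patterns to categories with an ART-1-style match/reset
    cycle: a candidate category is accepted only if its prototype overlaps
    the input strongly enough to pass the vigilance test."""
    prototypes = []      # recognition field: one prototype per category
    assignments = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        chosen = None
        # rank candidate categories by how strongly they respond to the input
        order = sorted(range(len(prototypes)),
                       key=lambda j: -np.sum(prototypes[j] & p))
        for j in order:
            overlap = np.sum(prototypes[j] & p)
            if overlap / max(np.sum(p), 1) >= vigilance:   # vigilance test
                prototypes[j] = prototypes[j] & p          # learn: refine prototype
                chosen = j
                break
            # otherwise reset and try the next candidate category
        if chosen is None:
            prototypes.append(p.copy())                    # no match: new category
            chosen = len(prototypes) - 1
        assignments.append(chosen)
    return assignments, prototypes

Raising the vigilance parameter yields finer-grained categories; lowering it merges similar inputs into broader ones.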

Pattern Recognition: The recorded data set is stored in a fashion that creates a unique pattern. As the phrase goes, “Cells that fire together, wire together”: data patterns are formed using spatial and temporal markers. These patterns are used for recognition and comparison, and, based on previous learning, repeated occurrence, or supervision, a specific weight is assigned to the relationship between the underlying parameters.
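
One way to read “cells that fire together, wire together” in code is a simple Hebbian-style update that strengthens the weight between every pair of parameters occurring in the same spatial/temporal window. The learning rate, decay factor, and saturating update rule below are assumptions made for illustration.

from collections import defaultdict

def update_cooccurrence_weights(weights, active_params, lr=0.1, decay=0.01):
    """All existing links decay slightly (gradual forgetting), then the link
    between every pair of parameters that fired together is strengthened."""
    for pair in list(weights):
        weights[pair] *= (1.0 - decay)
    active = sorted(set(active_params))
    for i, a in enumerate(active):
        for b in active[i + 1:]:
            # saturating increase: weights approach 1.0 with repeated co-occurrence
            weights[(a, b)] += lr * (1.0 - weights[(a, b)])
    return weights

weights = defaultdict(float)
update_cooccurrence_weights(weights, ["red", "round", "sweet_smell"])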

String Creation (Tag Assembly): These data patterns comprise a linear array of values, which together form a string. A sample string, similar to the example below, contains an identifier for every micro-data set within it.


E100000||F1000000||O100000000||LAAAAAAAAAAAAAAAAAAAA
||CAAAAAAAAAAAAAAA||V[S(x,y,z,k,o)\D(Dis,LI)\R\G\B]||A[sp,tmp]
||S[]||T[t,p,tmp,o1,o2]||CR[Obj1d,Obj1d,]||SR[Obj1d,Obj1d,]
||TR[Obj1d,Obj1d,]||PO[Dec1(weight), Dec2(weight)]
||AO[Dec1(weight), Dec2(weight)]

The string can be broken down as follows:
 
E stands for Event
F stands for Frame
O stands for Object
L stands for Label (Alphanumeric – max of 35 characters)
C stands for Category (Alphanumeric – max of 35 characters)
V stands for Visual
A stands for Audio
S stands for Smell
T stands for Touch
CR stands for Category cluster
SR stands for Spatial Relationship
TR stands for Temporal Relationship
PO stands for Perceived Outputs
AO stands for Actual Outputs
S(x,y,z,k,o) stands for Shape (Width, Height, Depth, PatternId, Orientation)
D(Dis,LI) stands for Depth (Distance, Light Intensity)
R\G\B – Red, Green, Blue
A[sp,tmp] – Audio (Spatial, Temporal)
T[t,p,tmp,o1,o2] – Touch (Texture, Pressure, Temperature, other associated parameters)
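
To make the layout concrete, the sketch below assembles and splits a tag string of this shape. The helper names and example field values are hypothetical; only the delimiters ("||" between fields) and the prefix letters follow the format shown above.

FIELD_ORDER = ["E", "F", "O", "L", "C", "V", "A", "S", "T",
               "CR", "SR", "TR", "PO", "AO"]

def assemble_tag(fields):
    """Join the micro-data sets into one '||'-delimited string, keeping the
    field order used in the sample string above."""
    return "||".join(prefix + fields.get(prefix, "") for prefix in FIELD_ORDER)

def split_tag(tag):
    """Recover the per-field values from a tag string."""
    parts = tag.split("||")
    return {prefix: part[len(prefix):] for prefix, part in zip(FIELD_ORDER, parts)}

example = assemble_tag({"E": "100000", "F": "1000000", "O": "100000000",
                        "L": "APPLE", "C": "FRUIT"})
print(split_tag(example)["L"])   # -> APPLE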

Whenever a data pattern is fired, every micro-identifier in the string is checked for comparison. Parameters that agree with a stored pattern drive classification, and, combined with the parameters that disagree, they enable the creation of a unique label.
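
A rough illustration of that field-by-field comparison, assuming two tag strings have already been split into per-field dictionaries (the equality-based matching rule is a stand-in, not the platform's actual logic):

def compare_tags(new_fields, stored_fields):
    """Check every micro-identifier; return the fields that agree with the
    stored pattern and the fields that differ from it."""
    agreeing = {k for k in new_fields if new_fields.get(k) == stored_fields.get(k)}
    differing = set(new_fields) - agreeing
    return agreeing, differing

agree, differ = compare_tags({"C": "FRUIT", "V": "red", "A": "crunch"},
                             {"C": "FRUIT", "V": "green", "A": "crunch"})
# 'agree' drives classification into an existing category;
# 'differ' is what makes the new pattern's label unique.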

Based on repeated or supervised occurrences, a linear weight is assigned to the relationship between data parameters. Using these weights, the stored pattern whose set of weights (weight vector) most closely matches the input vector is selected as the response.
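
Selecting the stored pattern whose weight vector most closely matches the input vector can be sketched as a nearest-neighbour lookup. Cosine similarity is used here as an assumed closeness measure; any other distance metric could play the same role.

import numpy as np

def best_match(input_vec, weight_vectors):
    """Return the index of the stored weight vector closest to the input,
    using cosine similarity as the measure of closeness."""
    x = np.asarray(input_vec, dtype=float)
    best_idx, best_sim = -1, -np.inf
    for i, w in enumerate(weight_vectors):
        w = np.asarray(w, dtype=float)
        sim = np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-12)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx

print(best_match([1.0, 0.2, 0.0], [[0.9, 0.1, 0.0], [0.0, 1.0, 0.5]]))   # -> 0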

In terms of how the brain processes information, this platform works very much like the ART model mentioned above.

 
