To begin, we recommend watching a brilliant video produced by DARPA (below), which cuts through the AI hype to clearly articulate how demonstrated machine intelligence has evolved, along with its shortfalls.
In summarizing AI capabilities, we may observe that perception and learning matured during the second wave of development, while the hard reasoning that showed progress in the first wave simply lost out due to its limitations. We may also witness that learning can deliver erroneous outputs even after training on more than a million data samples, as in the well-known case of a panda being mistaken for a gibbon.
The futuristic third wave, which is predicted to demonstrate reasoning and a step forward in articulation, is certainly the next step. However, achieving these capacities is contingent on getting our learning right, such that it leads to sensible reasoning and abstraction. At that point, one might envisage an entirely capable AI, mature in all four capabilities.
In order to rapidly attain the third wave, we need to decode where current learning methodologies are going wrong. You will notice that the present practice of learning over millions of images (or other data) is not precisely derived from how a human brain works. The neural model does not operate on outputs alone (final images); rather, it works on the inputs that develop the image.
Current practice simply attempts to mimic the outputs generated by natural intelligence. This strategy might assist in detecting an object only where there is an absolute or close match of all extracted parameters. Such an approach may or may not capture all parameters in a holistic fashion, thereby leaving gaps in data relationships, which can lead to errors.
Guided by natural neural intelligence, we may observe that every data parameter collected by the touch-points (sensory organs) forms part of a holistic input grid for calculation. Cues from the visual processes demonstrated by natural intelligence encompass the capture of data parameters such as shape, depth, and color from the rod and cone cells, which are integrated at the ganglion cells. Mimicking this methodology allows the machine to compute across all parameters, relationships, and dimensions, thereby driving learning toward more accurate results.
The above process also holds for language processing, where sentence decomposition extracts data parameters and overlays them on the relevant object node. This enables the machine to build natural language that is capable of explaining learning outputs, reasons, and logical decisions, rather than simply mimicking phrases.
The Responsible Machines AI platform will allow AI developers to initially configure data inputs, and then sit back and monitor as the machine expands its intelligence. The supervised platform employs a hierarchical semantic architecture with primary (basic) rules, which allows the machine to auto-classify and auto-label data on its own, without human intervention or supervision.
The illustration below reveals how the platform extracts data down to its base inputs, which are stored as defined by the data model. Data is stored as a string that retains the information of the data parameters and their respective weights, which are used for learning, reasoning, and decision making. Read here to understand the simplicity of information processing on the Responsible Machines platform.
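To make the idea of a parameter-weight string concrete, here is a minimal sketch of how such a record might look. The class name, field names, and serialization format are our own illustrative assumptions, not the platform's actual data model:

```python
# Illustrative sketch only: one stored data "string" holding an object's
# extracted parameters together with their weights (hypothetical format).
from dataclasses import dataclass, field

@dataclass
class ParameterString:
    """A labeled object whose parameters each carry a learned weight."""
    label: str
    parameters: dict = field(default_factory=dict)  # parameter -> weight

    def encode(self) -> str:
        # Serialize as "label|param:weight|param:weight" (assumed layout)
        parts = [f"{p}:{w}" for p, w in self.parameters.items()]
        return "|".join([self.label] + parts)

panda = ParameterString("panda", {"shape": 0.9, "color": 0.8, "depth": 0.6})
print(panda.encode())  # panda|shape:0.9|color:0.8|depth:0.6
```

Keeping every parameter and its weight in one retrievable unit is what lets the later stages reuse the same record for learning, reasoning, and decision making.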
We believe that this approach removes the relationship gaps between data parameters while making learning practices far more accurate and dependable.
We need to understand whether this data model may assist us in achieving reasoning and abstraction, toward ascending to the third wave of AI. The strings (data packets) that hold the relationship weights are the primary enablers for the machine to learn, question, and reason.
Learning and decision-making, which incorporate questioning and reasoning, depend on the way that synthesized data (knowledge derivatives) are organized. Organizing data in a natural fashion allows one to detect spatial and temporal relationships between data parameters. These spatial and temporal data patterns have macro weights that pertain to the pattern set. Conflicts between these weights push data relationships into an ambiguous state, prompting the machine to generate a question or reason why, in order to achieve a confirmative state, as defined by the primary rule of the long-term objective.
It is important to understand how natural intelligence exhibits an integrated approach to managing visual nodes and their respective labels. These labels are utilized to assemble language, so that the machine can orally articulate explanations for the reasoning it performs, or explain an abstraction (quantificational schemata).
For example, if you were wondering how the machine would understand the difference between past, present, and future tenses, these insights may be detected within the temporal layout of the associated data parameters. Using the weight scale, the machine can easily create a time dimension in which to properly place the object and its associated actions, and then translate the results into language.
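The tense idea above can be sketched as a simple mapping from an action's position on a temporal weight scale, relative to "now", to a grammatical tense. The function name and the convention that "now" sits at zero are our own illustrative assumptions:

```python
# Illustrative sketch: an action's temporal weight places it on a time
# dimension; its position relative to "now" (assumed to be 0) maps to a tense.
def tense(temporal_weight, now=0):
    """Negative weight -> past, zero -> present, positive -> future."""
    if temporal_weight < now:
        return "past"
    if temporal_weight > now:
        return "future"
    return "present"

print(tense(-2))  # past
print(tense(0))   # present
print(tense(3))   # future
```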
Below is another example of when a machine might gather the impetus to ask a question, or find itself in a state of doubt. This example was encountered online, from a natural language reasoner. Given the two statements, (a) every person is a man or a woman, and (b) Addison is a man and a woman, how would a computer know which one is true? Such data conflicts lead to questions.
But how did the machine detect this conflict? In the above scenario, the system generates four tags and creates relationships based on the available terms. We thus have two relationships each for the words “man” and “woman”, i.e., with “person” and with “Addison”. However, owing to the weights, the relationship carries a positive weight for “and” and a negative weight for “or”.
Due to the conflicting weights within a similar relationship, the confirmation logic fails to trigger and the relationship remains in an unconfirmed state. As the global rule of the algorithm is to achieve a confirmed state, it revisits the node and generates a question for confirmation.
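The conflict check can be sketched as follows. The weight values (+1 for “and”, -1 for “or”) and the rule that mixed signs put a node in doubt are simplifying assumptions on our part, not the platform's actual numbers:

```python
# Hypothetical sketch of the Addison example: "or" contributes a negative
# weight and "and" a positive one to each word's relationships.
relations = {
    "man":   {"person": -1, "Addison": +1},  # (a) uses "or", (b) uses "and"
    "woman": {"person": -1, "Addison": +1},
}

def needs_question(rels):
    """A word whose relationship weights conflict in sign is unconfirmed."""
    return any(
        min(ws.values()) < 0 < max(ws.values())
        for ws in rels.values()
    )

print(needs_question(relations))  # True -> the machine generates a question
```

With no negative weights present (e.g. if both statements had used “and”), the same check would return False and the confirmation logic could proceed.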
If it is stated that “Addison is not a woman”, the relationship between Addison and woman (which had a positive weight) is negated by “not” and reaches a zero state, where the relationship is lost. Now that there is no conflict, the confirmation state is achieved based on the remaining relationship. It would likewise have confirmed had it been stated that “Every person is a man and a woman.” To avoid confirmation on the first instance, the platform's default confirmation threshold is set to 25 exact matches of the entire association before a confirmation state is achieved. This ensures that the machine arrives at correct decisions only after repeated observations or occurrences.
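A minimal sketch of this negation-and-confirmation behaviour, assuming the default threshold of 25 exact matches stated above (the class and method names are hypothetical):

```python
# Illustrative sketch: "not" negates a relationship's weight toward zero,
# and confirmation requires repeated exact matches (default 25, per the text).
class Relationship:
    CONFIRM_THRESHOLD = 25  # default confirmation weight on the platform

    def __init__(self, a, b):
        self.pair = (a, b)
        self.weight = 0
        self.matches = 0

    def observe(self, weight, negated=False):
        # A negated statement subtracts the weight it would otherwise add.
        self.weight += -weight if negated else weight
        if self.weight > 0:
            self.matches += 1

    @property
    def confirmed(self):
        return self.matches >= self.CONFIRM_THRESHOLD

rel = Relationship("Addison", "woman")
rel.observe(+1)                # "Addison is a man and a woman"
rel.observe(+1, negated=True)  # "Addison is not a woman"
print(rel.weight)     # 0 -> zero state, the relationship is lost
print(rel.confirmed)  # False -> far below the 25-match threshold
```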
During one of our workshops, a question was raised as to how a machine might comprehend words such as “filibuster” or “awakening” when encountered during a conversation or reading. This workshop helped us see how the machine may comprehend a pattern of this sort.
For example, when the word “filibuster” was encountered, the machine scanned for an available match and, when none was located, consulted a dictionary file, just as a human would do. The dictionary file explained filibuster as “an action such as prolonged speaking which obstructs progress in a legislative assembly in a way that does not technically contravene the required procedures.” Using its POS (part-of-speech) segregation routine, the machine was able to detect objects and an action reference, and created a spatial and temporal map between the detected references.
The pattern detected within the machine may be explained as follows. Using the nouns, it creates an object cluster (from “legislative assembly”) with spatial parameters. Using a general word such as “assembly”, it creates an object network, and using the verbs, it creates the relationships between these nodes, which in this case is “speaking”. The word association “prolonged” allows for setting a degree on that association. The word “progress” allows the machine to create a temporal weight for the above set, whereas “obstructs” attaches a degree to that temporal weight. Each time an adjective or adverb is presented, it creates a degree comprising three states (low, normal, and high). As temporal weights take priority over spatial weights, the resulting cumulative weight is negative, inclining the decision toward the “not-to-do” list.
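The arithmetic behind that conclusion can be sketched as follows. The three-state degree values, the priority multiplier, and the hand-tagged weights for the filibuster definition are all our own illustrative assumptions:

```python
# Hypothetical sketch of the degree-and-weight pattern described above.
DEGREE = {"low": -1, "normal": 0, "high": +1}  # three-state degree scale

# Hand-tagged from the dictionary entry for "filibuster" (assumed values):
spatial_weight = DEGREE["high"]    # "prolonged" speaking in the assembly
temporal_weight = -DEGREE["high"]  # "obstructs progress" -> negative over time

# Temporal weights take priority over spatial weights (per the text),
# modelled here as a simple multiplier on the temporal term.
TEMPORAL_PRIORITY = 2
cumulative = spatial_weight + TEMPORAL_PRIORITY * temporal_weight
print(cumulative)  # -1 -> "filibuster" lands on the "not-to-do" list
```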
Relationships and weights play a critical role in enabling the machine to comprehend in a way similar to humans, and to exhibit intelligence through perception, learning, reasoning, and abstraction more naturally than hard-wired machines.