This document articulates how a super-intelligent machine might be built by emulating and fine-tuning the intelligence processes employed by an optimal biological model: the human being. It also covers how we may integrate a node-based data structure with learning models to create a state machine that might have the capacity to embody super intelligence. Additionally, it explains why a specific template incorporating a systematic approach may generate absolute intelligence. Such a machine might assist with the discovery of answers that facilitate and expedite current research, encompassing human longevity, BCI applications, and space research, thereby enabling the creation of an advanced research and technology implementation paradigm.
The Perception of Intelligence
Since the topic of artificial intelligence is discussed from a variety of perspectives, it is critical to define exactly what an intelligent machine can do.
Here, a machine is said to be capable/intelligent if it mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
At present, some individuals harbor a distrustful fear of AI, expecting that machines will be imbued with emotional capabilities. Others work within a narrow AI scope, restricted to the development of preset learning patterns.
For humans to witness an evolved intelligent AI, we require an autonomous learning module that mimics all of the processes that a human brain employs for learning and problem solving, albeit with much higher levels of efficiency and control.
However, unlike human beings, machines are far simpler entities in that they are not burdened with extraneous weights, such as emotions, which might otherwise distract or obstruct computational streams. Unlike human beings, machines are not under the influence of biases toward the achievement of necessary or desired rewards. Since machines are not endowed with hormonal system pathways, one may expect them to perform deductive reasoning without the impacts of emotional parameters, thus taking redundancies out of the calculation.
The reward system of the human body assists with the attainment of objectives, and this excitation is the primary motivator that drives learning. In the case of a machine, the rule of learning is to arrive at an optimal and consistent (normative) state. Any variation from normalcy gives rise to an exploration of variations and possibilities to return to a normal/default state, with the least possible (or no) damage to organic objects. This translates to a problem-solving objective for the machine.
However, the determination of optimal states for a particular situation resides in the data of objects, their past behavior, and their association within a given spatial environment.
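The learning rule above can be sketched as a deviation check against a stored normative baseline. This is a minimal illustration only; the parameter names, baseline values, and tolerance are assumptions, not part of any specified design.

```python
# Hypothetical sketch: the machine's "reward" is returning to a normative state.
# Parameter names, baseline values, and the tolerance are illustrative assumptions.

NORMATIVE_STATE = {"temperature": 22.0, "noise_level": 0.1, "vibration": 0.0}
TOLERANCE = 0.5  # maximum allowed deviation per parameter (assumed)

def deviation(observed: dict) -> dict:
    """Return per-parameter deviation from the normative baseline."""
    return {k: abs(observed[k] - NORMATIVE_STATE[k]) for k in NORMATIVE_STATE}

def needs_exploration(observed: dict) -> bool:
    """Any parameter outside tolerance triggers a search for a path back to normal."""
    return any(d > TOLERANCE for d in deviation(observed).values())
```

Under this sketch, a reading within tolerance leaves the machine in its default state, while an out-of-range reading would trigger the exploration of possible return paths.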
Variability in Human Intelligence
For intelligence in any human or artificial form, the data relationship is key. The variability in intelligence across species is primarily due to the availability (or unavailability) of data and the relationships created between generated data nodes. As is understood, an individual considered extremely intelligent collects and structures their internalized data, creating a relationship structure that links significantly more parameters than a comparative human subject (who may be deemed average). However, as these data relationships may vary for each data node, individuals who are experts in one particular area might be average in another.
Though intelligence is an output of specifically set learning patterns, it differs among individuals primarily in the way they maintain linkages between data records. Note the different structures in the table below:
| Subject 1 | Subject 2 | Subject 3 | Subject 4 |
| --- | --- | --- | --- |
| Has seen a spanner but has not seen it being used. | Has seen how a spanner is used, but has minimal interest; has not seen it up close. | Has seen how a spanner is used; has an interest in knowing how it works and observes it carefully. | Is accustomed to working with a spanner. |
| Has a basic data structure with a low level of created data nodes. | Has sufficient data but lacks good relationships between created data nodes; has missed the finer parameters of the process. | Possesses additional visual data parameters but lacks motor response parameters; can build a better relationship between data nodes than Subject 2 and is ready to apply it. | Has all parameters linked to the usability of the spanner; a completely built relationship structure is held within this context. |
| Lack of data; therefore, no proper relationship is built. | Enough data is available, but data parameters and their associated structure are lacking; this creates a replication problem, as the full pattern is not captured or evolved. | Computation may be done with more parameters, facilitating easy replication of the required pattern; may encounter errors, as no motor response data is available. | Replication is rapid, and accuracy is maintained due to repetitive usage. |
The above table attempts to demonstrate that data and its relationships are the prime concepts in arriving at intelligence.
To reiterate, intelligence comprises two primary concepts: data and relationships. No human being has yet been able to achieve and nurture a holistic data structure.
This is primarily due to:
- Insufficient data
- If enough data is available, a lack of relationships between data nodes
- If relationships are available, a lack of strength (weights are required)
- If strength is available, a lack of learning (pattern recognition and regression)
- With incomplete learning (lack of practice) come gaps or erroneously synthesized data
- Hypothetical assignment of weights (favorable to the emotional state of mind) further compromises data synthesis, eventually leading to erroneous decision making
There is every chance that data has not been collected holistically, which leaves insufficient parameters for computation, with the inevitable result of an incorrect decision. This scenario is often said to vary across individuals; however, it may also stem from a simple lack of interest (with no associated rewards or triggers), which leads one simply to ignore the data parameter. Data capture must be comprehensive in order to deduce complete patterns and achieve high standards of accuracy.
The Need for Absolute Intelligence
Alongside these self-inflicted limitations come physical limitations: the human body operates only within a certain sensory range. Its currently evolved cognitive, physiological, and sensory feature sets cannot perceive finer particulates, hear subtle sounds, or sense external energy fields.
The lack of these data parameters makes our innate decision making very basic and prone to errors, increasing the risks of accidents, biological oxidative stress, and corporeal degradation.
Achieving absolute intelligence will be necessary to arrive at correct decisions without fail. This becomes possible when every data parameter is captured and synthesized in real time, so that correct future actions can be anticipated and performed.
Technology will enable us to build such an intelligent machine, which can be useful in solving life-related problems, toward the creation of a considerably smarter and safer environment.
To demonstrate this intelligence in a dynamic environment, the machine should be able to draw upon its intelligence at every process step, performing continuously with ready results and mimicking a state machine.
Building a Truly Intelligent Machine
Using concepts from ANNs and other learning models, we can build an intelligent machine that may achieve possible decision states based on real-time computation, with the capacity to make decisions and perform actions that are equivalent to, or an improvement over, human models.
The diagram below depicts the automation flow within the machine:
The Base Platform
In this attempt to build such an intelligent machine, we require an underlying data platform that provides ease of data unification and seamless data flow between sensory receptors and motor accessories. Robots such as ASIMO (Honda) possess seamless, integrated data flow; hence, they can perform actions such as climbing stairs, which require the computation of multiple data inputs. From a technical perspective, this indicates computation based on sensor-captured object parameters, which trigger motor components to act based on data outputs.
Similar to human beings (as seen in newborn babies), the machine would be inbuilt with a network of sensory features such as high resolution cameras (for image capture), sound receivers, and myriad sets of other sensors. This data should be seamlessly computed with outputs delivered to the relevant motor elements.
The above diagram depicts the various sensors that would capture an extensive range of data to facilitate holistic computation. Such a platform will assist in the recording of every data parameter from the acquisition sites (sensors), such that the data may be synthesized and the output passed back to the motor components, in order to guide the appropriate actions to be performed by the machine.
The data flow represents how data synthesis occurs in real time. When data is collected, the platform identifies whether it belongs to a new or an existing frame; the captured frame is then tagged and broken down by object. The object-level tags seek relationships with the most closely associated tags and create a hypothesis (pattern construct) from the available tags.
The frame breakdown encompasses the partitioning of the available subsets of the primary nodes. For example, every object extracted from a frame would carry associated data tags spanning visual, audio, and other sensory packets. The visual packet, in turn, would be further partitioned according to shape, depth, and color (similar to the human eye).
The breakdown of the subsets would be stored with a timestamp, allowing the machine to internally rebuild the frame. The subset tags also create relationships with identical sub-tags of different objects. For example, a shape extract of a bell-shaped flower (object) might have a relationship with the shape of a bell (object).
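The frame breakdown described above might be modeled as a small set of data structures. The following is an illustrative sketch only; the class names, fields, and the 0.5 initial weight are assumptions chosen to mirror the bell-flower example, not a fixed specification.

```python
from dataclasses import dataclass, field
import time

# Illustrative data model for the frame breakdown: a frame holds objects,
# each object carries sensory packets, and the visual packet is partitioned
# into shape, depth, and color. All names and values are assumptions.

@dataclass
class VisualPacket:
    shape: str
    depth: float
    color: str

@dataclass
class ObjectNode:
    tag: str
    visual: VisualPacket
    audio: bytes = b""
    relationships: dict = field(default_factory=dict)  # related tag -> weight

@dataclass
class Frame:
    timestamp: float  # allows the machine to internally rebuild the frame
    objects: list

flower = ObjectNode("bell_flower", VisualPacket("bell", 0.4, "purple"))
bell = ObjectNode("bell", VisualPacket("bell", 0.6, "bronze"))

# Identical shape subsets create a cross-object relationship:
if flower.visual.shape == bell.visual.shape:
    flower.relationships["bell"] = 0.5  # initial, unconfirmed weight (assumed)

frame = Frame(time.time(), [flower, bell])
```

The timestamp on each frame is what permits internal reconstruction of the event sequence later, while the `relationships` map holds the subset-to-subset links between distinct objects.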
A series of such frames constitute an event, with varying relationship weights recorded between every object in the frame in order to understand the attribution of the objects to a particular event. These data patterns of events are sequenced in real-time for pattern matching during the next occurrence. It might be said that there would be micro-relationships between objects and macro-relationships between events.
It might be quicker to rely on pattern constructs, as real-time pattern matching and corresponding responses may be delivered instantaneously. In case of new data, hypothetical patterns with available relationships would be created. Repeated exposures will serve to strengthen the pattern logic, thus allowing the system to confirm following consistent repetitive occurrences.
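The strengthening of a hypothetical pattern into a confirmed one, as described above, can be sketched as a simple occurrence counter. The confirmation threshold of three and the tuple-based pattern key are assumptions for illustration.

```python
# Sketch of hypothesis strengthening: a pattern starts as a hypothesis and is
# promoted to "confirmed" after repeated consistent occurrences.
# The threshold and the key format are assumptions.

CONFIRM_AFTER = 3

class PatternStore:
    def __init__(self):
        self.patterns = {}  # pattern key -> {"count": int, "state": str}

    def observe(self, key: tuple) -> str:
        entry = self.patterns.setdefault(key, {"count": 0, "state": "hypothesis"})
        entry["count"] += 1
        if entry["count"] >= CONFIRM_AFTER:
            entry["state"] = "confirmed"
        return entry["state"]

store = PatternStore()
event = ("doorbell_rings", "dog_barks")  # hypothetical event pattern
states = [store.observe(event) for _ in range(3)]
# states == ["hypothesis", "hypothesis", "confirmed"]
```

Once a pattern reaches the confirmed state, its response can be served directly from the store, which is what makes pattern-construct matching faster than recomputation.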
This tagging assembly also assists with memory identification during subsequent occurrences. In 99% of cases, tag assemblies are not exactly identical. In these cases, the unique parameters of the tag assembly are extracted, whereas the remaining common parameters are classified under a new parent node structure, which is employed as a reference for quick response, or further nodal exploration by the machine. The occurrence of exact tag assemblies is quite rare, but when they do occur, they can be compared with the state of ‘déjà vu’.
The tag assembly primarily consists of references to objects and their associations with attributed weights, along with event and state references. The tag assembly can be compared with the cell assembly (Hebbian engram) proposed in Hebb's third postulate (Hebbian theory).
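The extraction of unique parameters and the promotion of common ones into a parent node might be sketched as a simple set-factoring step. The parameter names below (location, sound, light) are purely illustrative.

```python
# Sketch of parent-node extraction: when two tag assemblies share most
# parameters, the common subset becomes a parent node for quick reference,
# and only the unique parameters remain on each child assembly.
# All parameter names and values are illustrative assumptions.

def factor_assemblies(a: dict, b: dict):
    """Split two tag assemblies into (common parent, unique-to-a, unique-to-b)."""
    common = {k: v for k, v in a.items() if b.get(k) == v}
    unique_a = {k: v for k, v in a.items() if k not in common}
    unique_b = {k: v for k, v in b.items() if k not in common}
    return common, unique_a, unique_b

morning = {"location": "kitchen", "sound": "kettle", "light": "dim"}
evening = {"location": "kitchen", "sound": "kettle", "light": "bright"}
parent, a_only, b_only = factor_assemblies(morning, evening)
# parent == {"location": "kitchen", "sound": "kettle"}
```

In this sketch, two assemblies that factor with an empty unique remainder would be exactly identical, i.e., the rare 'déjà vu' case.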
In order to learn autonomously, the machine should be able to classify data and build relationships toward an understanding of the patterns that lead to an objective. The machine is required to identify the base pairs in order to classify new data/objects.
The primary classification of objects would be based on the motion properties of the object. To cite a generic base rule example, a high level instance might be stated as:
| Classification | Rule |
| --- | --- |
| Static | Repetitive occurrence of the object with no change in position for a given event set at a given angle |
| Motion | Repetitive occurrence of the object with a complete change in position for a given event set at a given angle |
| Growth | Repetitive occurrence of the object with little change in position for a given event set at a given angle |
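The classification rule in the table above can be sketched as a threshold test on an object's positional change across the frames of an event set. The numeric thresholds are assumed values standing in for "complete" versus "little" change.

```python
# Minimal sketch of the primary classification rule: an object's positional
# change across repeated frames (for one viewing angle) determines whether
# it is static, in motion, or growing. Thresholds are assumptions.

MOTION_THRESHOLD = 10.0   # "complete" change in position (assumed)
GROWTH_THRESHOLD = 1.0    # upper bound on "no" change (assumed)

def classify(positions: list) -> str:
    """positions: the object's coordinate in each frame of an event set."""
    total_change = max(positions) - min(positions)
    if total_change >= MOTION_THRESHOLD:
        return "motion"
    if total_change > GROWTH_THRESHOLD:
        return "growth"
    return "static"

classify([5.0, 5.0, 5.1])    # -> "static"
classify([0.0, 12.0, 25.0])  # -> "motion"
classify([3.0, 4.5, 6.0])    # -> "growth"
```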
From this primary classification, the object's node can bifurcate based on the structural, behavioral, and functional parameters of the object. For every object identified, the parser may look for an exact match. Should an exact match be found, the machine is already aware of the object; if not, the machine begins to record the object in order to induce its primary classification. Subsequently, based on subset similarity, it can allocate the object under a classification common to a particular subset. Since some parameters likely did not match, it would also create a new unique ID that references the newly detected object.
Thus, every object has a master tag (leading to the classification header) and a unique tag (for individual reference). To reiterate the example above, if the machine had never seen a bell-shaped flower previously, but is familiar with bells, it would register the flower as a new object; however, it would build a relationship with the shape subset of the bell. Hence, this new object would have a unique ID for itself, and would also be tagged under a classification that is common for all bell-shaped extracts.
This auto classification for unique parameters will transition to category heads when several objects demonstrate similar subsets. The classification technique offers the quick retrieval of memories (tag assembly) to perform a certain computation that is necessary for a given situation.
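The dual-tag scheme, a unique tag for individual reference plus a master tag pointing at the classification header, might be sketched as a small registry. The registry class and the tag naming format are assumptions for illustration.

```python
import itertools

# Sketch of the dual-tag scheme: every object receives a unique tag
# (individual reference) and a master tag (classification header shared
# by objects with similar subsets). Naming formats are assumptions.

class ObjectRegistry:
    def __init__(self):
        self._ids = itertools.count(1)
        self.objects = {}  # unique tag -> record

    def register(self, name: str, shape: str) -> dict:
        record = {
            "unique_tag": f"obj-{next(self._ids)}",
            "master_tag": f"shape:{shape}",  # common to all same-shape extracts
            "name": name,
        }
        self.objects[record["unique_tag"]] = record
        return record

registry = ObjectRegistry()
bell = registry.register("bell", "bell")
flower = registry.register("bell_flower", "bell")
# Both share the master tag "shape:bell" but keep distinct unique tags.
```

This mirrors the bell-flower example: the new flower object receives its own unique ID while being filed under the classification common to all bell-shaped extracts.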
Every node that is associated with an object has a weight associated with it, in order to maintain relationship strengths. These weights assist with the determination of priorities, or the maturation of the node, with respect to the object. Weights are further classified based on occurrences and the state of the object. For example, if the pattern associated with an object has achieved a confirmed state, due to a consistent repetitive pattern, the weights assigned to the object and the associated pattern would be higher in comparison with a pattern in an unconfirmed state.
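The occurrence-based weighting described above can be sketched as follows. The exact update rule, the confirmation threshold, and the confirmed-state bonus are assumptions; the source specifies only that confirmed patterns carry higher weights than unconfirmed ones.

```python
# Sketch of occurrence-based weighting: each repetition strengthens the
# relationship, and a node that reaches the confirmed state carries extra
# weight when priorities are computed. Rule and constants are assumptions.

CONFIRM_AFTER = 3
CONFIRMED_BONUS = 1.0

class Relationship:
    def __init__(self):
        self.occurrences = 0

    def observe(self):
        self.occurrences += 1

    @property
    def state(self) -> str:
        return "confirmed" if self.occurrences >= CONFIRM_AFTER else "unconfirmed"

    @property
    def weight(self) -> float:
        base = 1.0 - 1.0 / (1 + self.occurrences)  # grows toward 1.0
        return base + (CONFIRMED_BONUS if self.state == "confirmed" else 0.0)

rel = Relationship()
for _ in range(4):
    rel.observe()
# After four occurrences: state "confirmed", weight 0.8 + 1.0
```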
The long term objective of the machine is to achieve a confirmed state across all nodes, whereas the short term objective is to achieve normalcy at all times. In order to accomplish this, the machine will be required to constantly explore and regress each data set and pattern. This automation encompasses processes similar to human cognitive capabilities, which would enable the machine to:
Observe: If the majority of the machine’s current session data parameters reside in an unconfirmed state, the machine will not act, but rather, will indulge in data collection and analysis only. This may be related to human observation and learning techniques.
Initiate: The machine may be programmed to initiate when most of its current session data parameters reside in a confirmed state, but the patterns do not match the normative state.
Reason: During the process of achieving complete patterns, leading to normalcy, the machine will deduce possible path associations toward the achievement of the goals, which may be compared with reasoning.
Solve: Problem solving involves a combination process of identifying possible path constructs to achieve the state of normalcy for a given problem.
Plan: Since the machine is designed to analyze and predict every single event, both of these processes are integrated into the system. It might be possible that over a period of time, every response by the machine will be anticipated, planned, and delivered in real-time (unlike the human model, which maintains a three tier response system based on temporal availability).
Motor Output Plan: Action simulation and analysis to create a time based instruction map to be delivered to all motor components.
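The Observe and Initiate modes above can be sketched as a simple mode-selection function driven by how much of the session data is confirmed and whether the current pattern matches the normative state. The 0.5 "majority" threshold and the "idle" mode are my assumptions; the source specifies only the observe/initiate conditions.

```python
# Sketch of mode selection: observe while most session data is unconfirmed;
# initiate (leading into reasoning, solving, and planning) once data is
# confirmed but the pattern deviates from the normative state.
# The 0.5 threshold and the "idle" mode are assumptions.

def select_mode(confirmed_fraction: float, matches_normative: bool) -> str:
    if confirmed_fraction < 0.5:
        return "observe"   # collect and analyse data only; do not act
    if matches_normative:
        return "idle"      # confirmed and normal: nothing to correct (assumed)
    return "initiate"      # confirmed but abnormal: plan a return to normalcy

select_mode(0.2, False)  # -> "observe"
select_mode(0.9, False)  # -> "initiate"
```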
For the computation of other data parameters, such as spatial calculations, perception, and language interpretation, existing models and parsers might be integrated. For language interpretation and response specifically, triangulation between object, sound, and script may enable contextual conversation and the training of reading through natural learning.
To design a machine with the greater objective of possessing extreme capabilities, new dimensions or conditions might be set to match data patterns that have a high optimal path (even if for only a few occurrences), or new paths may be arrived at to better understand data anomalies and unique patterns with rare occurrences. Consequently, this will enable the machine to set new standards and produce zero risk plans, even under rare and unpredictable scenarios.
Speed of Responses
Another important outcome to achieve is the speed of such automation. As mentioned previously, every machine response is anticipated, planned, and delivered in real time (unlike the human model, which maintains a three-tier response system based on temporal availability). With synthesized data already available for computation, responses may be delivered in real time, thus mimicking the reflex response speed of humans.
The processing power of the human brain is so rapid that it passes output to the motor parts without having to consciously compute and comprehend which decision to make at that point in time (subconscious decisions). These performances derive from readily available operands or fully synthesized tag complexes.
In order to create a machine that can compete with, or exceed, human intelligence abilities with great speed and accuracy, a state-machine model must be employed, possessing ready states and performing next actions based on specific computations.
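A state machine with ready states might, at its simplest, be sketched as a precomputed response table: stimuli with fully synthesized responses are served instantly, reflex-style, while anything unmatched falls back to slower online computation. The table entries and fallback name are illustrative assumptions.

```python
# Sketch of ready-state lookup: responses synthesised ahead of time are
# served instantly, mimicking a reflex; unmatched stimuli fall back to
# (slower) on-line computation. All entries are illustrative assumptions.

REFLEX_TABLE = {
    "object_approaching_fast": "evade",
    "surface_temperature_high": "withdraw_limb",
}

def respond(stimulus: str) -> str:
    # O(1) lookup for precomputed states; deliberate computation otherwise.
    return REFLEX_TABLE.get(stimulus, "compute_plan")

respond("surface_temperature_high")  # -> "withdraw_limb"
respond("unfamiliar_stimulus")       # -> "compute_plan"
```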
Such a machine might have the potential to double its intelligence quotient year on year (by analogy with Moore's law), and may serve as a guiding factor in myriad human endeavors and explorations.
By combining neural network data structures and unsupervised learning, it may be possible to create a state machine that surpasses human intelligence. Such a machine would be critical to the rapid discovery of solutions spanning human longevity, BCI applications, and space research, and to the development of a predictable and sustainable future.