Self-Learning Avatars

In early computing, an avatar was a graphical representation of the user or of the user's alter ego or character: an icon or figure representing a particular person in a video game, an Internet forum, and so on. Within the scope of AI, avatars can now be seen as virtual embodiments of humans, driven by artificial intelligence rather than by real people.

Among the many ideas in this space, the 2045 Avatar Initiative by Dmitry Itskov aims to create technologies that enable the transfer of an individual's personality to a more advanced non-biological carrier (an avatar), extending life up to and including the point of immortality.

Digital immortality has long been discussed by Gordon Bell, who describes it as the concept of storing (or transferring) a person's personality in a more durable medium, i.e., a computer, and allowing it to communicate with people in the future. The result might look like an avatar that behaves, reacts, and thinks like the person, on the basis of that person's digital archive. After the individual's death, this avatar could remain static, limited to the past data, or continue to learn and develop autonomously.

According to Gordon Bell and Jim Gray of Microsoft Research, retaining every conversation a person has ever heard is already realistic: it would need less than a terabyte of storage (at adequate quality). Experts such as Martine Rothblatt envision the creation of "mindfiles": collections of data from all kinds of sources, including the photos we upload to Facebook, the discussions and opinions we share on forums or blogs, and other social media interactions that reflect our life experiences and our unique selves.
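As a rough sanity check on that storage figure, the short calculation below uses assumed numbers (80 years of life, eight hours of heard speech per day, an 8 kbps speech codec); it is an illustration of the order of magnitude, not Bell and Gray's own arithmetic.

```python
# Back-of-envelope check of the "less than a terabyte" claim. The figures below
# (80 years, 8 hours of speech heard per day, an 8 kbps codec) are illustrative
# assumptions, not numbers taken from Bell and Gray's paper.
YEARS = 80
HOURS_PER_DAY = 8
SECONDS = YEARS * 365 * HOURS_PER_DAY * 3600      # lifetime seconds of heard speech
BYTES_PER_SECOND = 8_000 / 8                      # 8 kbps codec -> 1,000 bytes per second
total_bytes = SECONDS * BYTES_PER_SECOND
print(f"{total_bytes / 1e12:.2f} TB")             # ~0.84 TB, i.e. under a terabyte
```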

To realize such an avatar, one that can learn from files carrying the external experiences of a particular individual, we will need to provide it with mindware (an AI platform) and enable it to learn using an unsupervised learning model. By bringing together established techniques such as Kohonen's Self-Organizing Maps, Adaptive Resonance Theory (ART), Hopfield networks, and hierarchical hidden Markov models (HHMM), we believe we should be able to achieve an autonomously learning avatar that learns from past data and exhibits the unique behavior of the individual's persona.
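Of the techniques listed above, Kohonen's Self-Organizing Map is perhaps the simplest to illustrate. The sketch below is a minimal NumPy implementation clustering hypothetical "memory" feature vectors onto a 2-D grid; it is meant only to show the flavour of this kind of unsupervised learning, not the actual platform discussed here.

```python
import numpy as np

class SOM:
    """Minimal Kohonen Self-Organizing Map: maps input vectors onto a 2-D grid."""

    def __init__(self, grid_h, grid_w, dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per grid node, initialised randomly.
        self.weights = rng.random((grid_h, grid_w, dim))
        # Grid coordinates, pre-computed for neighbourhood calculations.
        self.coords = np.stack(np.meshgrid(np.arange(grid_h),
                                           np.arange(grid_w),
                                           indexing="ij"), axis=-1)

    def best_matching_unit(self, x):
        # Grid node whose weight vector is closest (Euclidean) to the input.
        dists = np.linalg.norm(self.weights - x, axis=-1)
        return np.unravel_index(np.argmin(dists), dists.shape)

    def train(self, data, epochs=20, lr0=0.5, sigma0=3.0):
        for epoch in range(epochs):
            # Decay learning rate and neighbourhood radius over time.
            lr = lr0 * np.exp(-epoch / epochs)
            sigma = sigma0 * np.exp(-epoch / epochs)
            for x in data:
                bmu = self.best_matching_unit(x)
                # Gaussian neighbourhood centred on the best-matching unit.
                d2 = np.sum((self.coords - np.array(bmu)) ** 2, axis=-1)
                h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
                # Pull nearby nodes toward the input vector.
                self.weights += lr * h * (x - self.weights)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    memories = rng.random((200, 16))            # 200 hypothetical feature vectors
    som = SOM(grid_h=5, grid_w=5, dim=16)
    som.train(memories)
    print(som.best_matching_unit(memories[0]))  # grid cell the first memory maps to
```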

We believe the RM2 Unsupervised Learning model would be the ideal way to bring avatars to life. The unified platform can learn from files (text, images, video, and audio) and build a timeline of memories. Each memory can accrue weights from the sentiment and moods extracted from it, and these weights are updated as newer conversations develop with the avatar. The slide deck below presents why this unified platform might be the most suitable for achieving smart avatars.
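As an illustration of what such a weighted memory timeline might look like, here is a minimal Python sketch. The class names, the sentiment range, and the word-overlap reinforcement rule are assumptions made for the example, not details of the RM2 platform itself.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Memory:
    """One entry in the avatar's timeline: what happened, when, and how it felt."""
    timestamp: datetime
    content: str                 # extracted text, caption, transcript, etc.
    sentiment: float             # assumed range [-1.0, 1.0] from a sentiment extractor
    weight: float = 1.0          # salience of this memory, updated over time

@dataclass
class MemoryTimeline:
    """Time-ordered store of memories whose weights shift as new conversations arrive."""
    memories: List[Memory] = field(default_factory=list)

    def add(self, memory: Memory) -> None:
        self.memories.append(memory)
        self.memories.sort(key=lambda m: m.timestamp)

    def reinforce(self, utterance: str, mood: float, boost: float = 0.1) -> None:
        """Naive update: memories sharing words with the new conversation, and matching
        its mood, gain weight; everything else decays slightly (gentle forgetting)."""
        words = set(utterance.lower().split())
        for m in self.memories:
            overlap = len(words & set(m.content.lower().split()))
            if overlap and (m.sentiment * mood) >= 0:   # same emotional sign
                m.weight += boost * overlap
            else:
                m.weight *= 0.99

if __name__ == "__main__":
    timeline = MemoryTimeline()
    timeline.add(Memory(datetime(2017, 5, 20), "trip to the coast with family", 0.8))
    timeline.add(Memory(datetime(2016, 2, 3), "missed the flight home", -0.5))
    timeline.reinforce("do you remember the coast trip?", mood=0.6)
    for m in timeline.memories:
        print(m.timestamp.date(), round(m.weight, 2), m.content)
```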

The unified platform could still be used even once we have figured out how to extract memories directly from the human brain. Such a memory extraction process, which would mean extracting episodic memories as time sequences, would be the best way to achieve a very human-like avatar. The extracted data would finally need to be converted and organized into the structure proposed earlier in order to allow avatars to behave, think, and perform like humans.
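If extracted memories ever arrive as raw time-stamped records, converting them into a time-ordered structure of the kind sketched above might look roughly like this; the field names and sample records are hypothetical.

```python
from datetime import datetime

# Hypothetical raw episodic records, e.g. from a future extraction pipeline.
raw_episodes = [
    {"when": "2019-06-01T09:30:00", "transcript": "graduation ceremony", "sentiment": 0.9},
    {"when": "2018-11-12T18:05:00", "transcript": "argument about the move", "sentiment": -0.6},
]

def to_timeline(records):
    """Order raw episodes by time and normalise them into timeline entries."""
    entries = [
        {
            "timestamp": datetime.fromisoformat(r["when"]),
            "content": r["transcript"],
            "sentiment": float(r["sentiment"]),
            "weight": 1.0,
        }
        for r in records
    ]
    return sorted(entries, key=lambda e: e["timestamp"])

for entry in to_timeline(raw_episodes):
    print(entry["timestamp"], entry["sentiment"], entry["content"])
```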
