Friendly Robotics

by Jonathan Kolber

I just spent three awe-filled days at NextFest, an exhibition co-sponsored by GE and WIRED magazine. It was well worth the time.

Most dramatic was the realization that science – thanks to the technological tidal wave – is finally surpassing science fiction in some areas. I tried the science-fiction ride TimeEscape at Navy Pier, an immersive experience of Chicago’s past, present and future. It was hosted by an allegedly artificially intelligent robot that looked like, well, a robot. As it recited a canned monologue and spoke in a manner that conveyed pseudo-emotions, I was struck by how 20th-century it all seemed. To a discerning observer, the robot lacked credibility. It was like a 3-D cartoon.

That same day, at NextFest, I visited the Philip K. Dick robot pavilion. A joint project of the University of Memphis FedEx Institute of Technology and Hanson Robotics, the exhibit features an animatronic version of the noted science-fiction author reclining on a sofa, able to converse with you.

Using advanced AI techniques, the project’s creators trained the robot to speak like Dick by programming it with his writings. This has remarkable implications, which I’ll explore in a minute.
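
Neither Hanson Robotics nor the FedEx Institute has explained to me exactly how that training works, so take the following only as a toy illustration of the general idea – deriving a speaking style statistically from a body of writing. The Python sketch below builds a simple word-level Markov model from a text file; the corpus.txt filename and the two-word context window are assumptions for illustration, not a description of the actual system.

    import random
    import re
    from collections import defaultdict

    def build_model(text, order=2):
        # Map each run of `order` consecutive words to the words observed after it.
        words = re.findall(r"[\w']+|[.,!?;]", text)
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, length=60):
        # Start from a random context and repeatedly sample a plausible next word.
        state = random.choice(list(model.keys()))
        output = list(state)
        for _ in range(length):
            followers = model.get(state)
            if not followers:                      # dead end: restart the chain
                state = random.choice(list(model.keys()))
                followers = model[state]
            output.append(random.choice(followers))
            state = tuple(output[-len(state):])
        return " ".join(output)

    if __name__ == "__main__":
        # "corpus.txt" is a placeholder: any plain-text collection of the
        # author's writings would serve here.
        with open("corpus.txt", encoding="utf-8") as f:
            model = build_model(f.read())
        print(generate(model))

Any resemblance to the real exhibit ends there: a Markov chain captures surface phrasing, not the conversational intelligence on display in Chicago, but it shows how little machinery it takes to make generated text “sound like” its source.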

Understand, this robot doesn’t just mouth canned blather. It carries on intelligent conversation that is indistinguishable from talking with the actual author 90% of the time, according to FedEx Institute director Eric Mathews, who invited me to Memphis to test it for myself (I plan to).

The robot has fine-motor controls that give its realistic pseudo-flesh face the appearance of emotions. In fact, while watching the robot converse, I had the eerie impression several times – if only for a moment – that this was an actual person sitting across from me.

The crowd was enthralled. People waited up to an hour to interact with the robot. It was the most popular exhibit at an event replete with interesting exhibits.

This was true 21st-century robotics and a harbinger of things to come. I am confident this will lead to widespread use of humanlike robots by 2020. Of course, by that point, the intelligence built into them will begin to surpass that of ordinary humans (I say “ordinary,” but the majority of us will by then see the advantage in linking ourselves to vast computer networks, thereby becoming extraordinary in our perceptions and intellectual capabilities).

If this AI technique is so successful in training a robot to speak like a person, I wonder whether the capability to think in a manner similar to that person will follow. I intend to explore this possibility on a visit to Memphis. If so, in the future, we may enjoy new science-fiction stories written as if by this deceased master, much as computer programs today can play chess in the style of deceased masters such as Morphy and Tal. If the same technology were applied to the visual arts, new works in the styles of Rembrandt and Picasso could follow, and creativity in all areas would flourish.

The key question here is how much intelligence and creativity can be understood and systematized. To the extent that becomes possible, it can then be rendered into computer software and reproduced. That opens staggering possibilities.

For example, eventually, you’ll be able to acquire, on a custom basis, your very own simulation of a celebrity of your choice – or perhaps even a friend or loved one. While it may not offer ALL of the experiences that person would provide, it will be a fascinating companion for conversation, walks and perhaps even shared hobbies.

Sure, the price will be astronomical at first – probably millions of dollars. But as with every advanced technology, the price will plunge toward the cost of manufacture. Eventually, these simulacra (at least, the celebrity ones – which could be mass-produced) should cost $50,000 or less. That’s not cheap, but it makes rental affordable for many and ownership affordable for those who have invested wisely.

The Daily Reckoning