How big a part does technology play in your life?
Sure, you have your smartphone, your netbook, your Facebook/Twitter, your email and your blog. Doesn’t everyone?
Hell, even your grandma knows how to Skype.
Technology is a tool, a means of making our lives easier, a way to communicate with friends.
But what if technology itself became a friend? A robot friend? A friend who smiles at you when you come in through the front door. A friend who listens patiently as you vent about your megalomaniac boss. A friend who is there for you when your girlfriend/boyfriend dumps you.
Sherry Turkle, the Director of the MIT Initiative on Technology and Self, wants to have a debate about this. She describes her own unnerving, albeit brief, attachment to a robot called Cog, and a more troubling incident during one of her studies when a ‘social’ robot malfunctioned. A 12-year-old girl participating in the study interpreted the robot’s lack of response as a dislike of her. She then became uncooperative and “withdrew to load up on snacks provided by the researchers”.
Let’s leave aside the question of why researchers gave a 12-year-old girl snacks because she thought a robot didn’t like her. (‘Hey, little girl, you’re feeling unloved and rejected? Here, have some Krispy Kremes.’ Have they never watched Oprah? There’s a lifetime of eating disorders ahead for that kid.) Instead, let’s look at why friendship with robots could be harmful to your mental health.
It’s normal for kids to talk to their toys as if they’re real and for adults to chat away to their pets. But we all know toys and pets can’t interact with us on an emotional, human level. (We do, don’t we?)
No, the problem comes when we interact with robots programmed to mimic human behaviour in their expressions and movements. We just can’t help ourselves: we respond to these social robots as if they actually have the emotional responses they’re displaying.
Which they don’t, of course. Reacting to a robot as if it really is a friend leaves us with potential problems when a glitch in its programming makes it behave differently. (‘Jeez, what’s wrong with you these days? You’re so moody. You got a problem with me or something?’)
Of course, one day robots may have simulated emotions, courtesy of sophisticated A.I., that would seem so real as to make no difference. And that would be a game changer. Who wouldn’t want a Data (Star Trek) or a Kryten (Red Dwarf) to hang out with? But this also raises the possibility of robots you wouldn’t want to know, like Marvin (Hitchhiker’s Guide to the Galaxy) or the annoying Twiki (Buck Rogers in the 25th Century). There are one or two Cylons you’d want to avoid as well.
But robots with ‘real’ emotions would be fascinating to communicate with. Who knows, maybe they could teach us something about ourselves. Would we be totally cavalier about ‘hurting’ them? After all, they’re only robots. But what if the robots could choose who they were friends with – what would they base those choices on? And what if their own ‘feelings’ and ‘emotions’ mattered just as much to the robots as they do to us?
Here’s Blade Runner’s Roy Batty with possibly the most emotional speech ever from a non-human. Listen and weep.