Let's say that someday a young couple buys a robot to help with childcare. Version 1 of the robot isn't much more capable than a video baby monitor that can also rock the cradle and say, "There, there" as needed, perhaps in the mom's voice. Maybe it can do a few more things such as loading and unloading the dishwasher and feeding the dog. It's useful but still limited.

But here's the interesting part. That robot will mature, and get smarter, at about the same rate as the baby in the crib. Robots will have parts of their "brains" inside their bodies and parts in the cloud of the Internet, connected to the same data that all robots share. As any robot anywhere gains knowledge, that knowledge is uploaded to the cloud and becomes available to every other robot. The day one robot learns how to do your laundry, all robots will acquire that ability simultaneously, although some might need sensor upgrades for new functions.

If robot makers are smart, all of your robot's parts will be modular for easy upgrades. Do you want your robot to have better sensors in its fingers? Just replace the hand with an upgraded version. While the robot's brain is upgrading automatically every minute, you'll be keeping its body upgraded. Eventually the robot will take care of its own hardware upgrades too.

But that's not the interesting part.

I presume that robots will need something like a "personality" for purely functional reasons, and to make decisions when the data is unclear. And they will acquire those necessary personalities largely by observation. For example, if the humans that the robot lives with are the types who are effusive in praise of others, the robot will pick up that trait. If the family is the snarky/jokey type, the robot will pick up on that too.

Like humans, robots will copy the ways and tendencies, and even biases, of the people they associate with the most. The personality factors will be uploaded to the common robot brain in the cloud, but each robot will be programmed to ignore the "average" way people react, favor whatever the locals do, and prefer above all whatever its immediate family of humans typically does. In other words, some robots will be friendly and helpful and some will be total dicks, just like their owners.
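As a rough illustration of that weighting idea, here is a toy sketch. Everything in it is hypothetical: the trait names, the weights, and the blended_personality function are made up for the example, not a description of how any real robot would work.

```python
# Toy sketch: blend personality "trait" scores so the immediate household
# outweighs the neighborhood, and the neighborhood outweighs the global
# cloud average. All names and weights here are illustrative assumptions.

def blended_personality(cloud_avg, local_avg, family_avg,
                        w_cloud=0.1, w_local=0.3, w_family=0.6):
    """Return a trait -> score dict that favors the people the robot lives with."""
    traits = set(cloud_avg) | set(local_avg) | set(family_avg)
    return {
        t: w_cloud * cloud_avg.get(t, 0.0)
           + w_local * local_avg.get(t, 0.0)
           + w_family * family_avg.get(t, 0.0)
        for t in traits
    }

# Example: a snarky household overrides the blander global average.
cloud = {"snark": 0.2, "praise": 0.5}
neighborhood = {"snark": 0.4, "praise": 0.4}
household = {"snark": 0.9, "praise": 0.1}
print(blended_personality(cloud, neighborhood, household))
# snark comes out around 0.68, praise around 0.23 (dict order may vary)
```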

The robot owner will be able to "correct" any bad habits the robot picks up by observation, similar to the way parents correct bad manners in their children. In both cases, the robot and the child are going through a maturation process.

I'm getting to the interesting part of the post. No, really, I am.

I think most of you buy into the notion that robots will eventually be as common as television sets, and that the robots will - for purely practical reasons - adopt personality traits by observation. The robot will want to fit in, to be relevant, to be liked, if for no other reason than to increase its market value.

Eventually there will be templates of personalities, created via robot observations and then loaded to the cloud, so that new robots can start with basic personalities that match their assignments. The robot's personality will be free to evolve, based on its own local observations, but it will have a strong starting point. I would compare this to an Englishman who was born and raised in London and then moved to New York City at the age of twenty. He would retain his base English personality for the most part, but over time it would get a New York City edge.

This is a long way to get to my point: the robots of the future will have base personalities (the templates) that act like time capsules, preserving whatever was normal during the era in which robots matured from tools to intelligent entities. And that maturation will only happen once in the course of human history.

We always hear how each new generation is different from the one before it. Sometimes the new generation cares more about money, or trusts the government more, or whatever. Someday, and perhaps forever, robots will carry with them the base personalities that were common to the era in which robots first acquired their personality templates.

In time, or through intentional human intervention, we might erase those old personality templates because they are no longer relevant to the times. But I'm guessing the robot personality database by that time will be so complex, and spread across owners, that replacing it or even programming around it will be impractical.

In other words, I predict that children being born today will be the prime influencers of what robot personalities will be . . . forever?

Obviously robot personalities will differ by location and culture. And a new Indonesian house robot coming online will borrow its personality template from Indonesian robots that came before. So the personality templates might be frozen within each culture. That means the Israeli robots and the Hamas robots will not be friendly even if their humans have long since made peace.

It sounds like a trivial worry, that robots might acquire tainted personalities from the past. But I think that unless we design the robots right, it could be a big problem.

One solution would be to give robots generic, cookie-cutter personalities. Some robot manufacturers will certainly offer that option. But I think the natural competitiveness of humans will make us want our robot to be learning and maturing as fast as the neighbor's robot. And we will want our robots to have unpredictable personalities - within safe bounds - because it will amuse the hell out of us.

My solution is that all robots must be raised for their first few years in Minnesota, where everyone is kind and generous. I assume there are other spots around the world in which the culture evolved to be unusually friendly. Part of the value of your future robot is where it was imprinted with its base personality. Someday the Minnesota Series of robots will fetch top dollar.

The Adams Law of Slow Moving Disasters states that humans always solve problems, no matter how large, if they can see them coming. So I'm bringing up this robot personality issue now, just to be safe.

The main question of the day is this: Will robots someday have personalities, and if so, will they acquire them, in whole or in part, by observation?

 

Comments

Mar 8, 2013
...and how many years of evolution of robots will go by before two parents who disagree over how to raise their kid (and what kind of robot they want around their kid)...

before they make a robot that won't unlock the pod bay doors?
 
 
Mar 8, 2013
robots have existed for decades. just not anthropomorphic ones. why would we want to replace ourselves with them? if we are just wet sacks of meat that can learn, and they are indestructible soulless metalloid monstrosities that can learn and evolve quickly, what makes you think they won't replace everyone within a few generations at most?
 
 
Mar 8, 2013
@Scott, I love your Law of Slow Moving Disasters and generally agree with it.

I'm not sure how slow-moving this will be, though. If a "rather limited" first-generation robot costs a quarter million dollars, it's already a disaster. Figure human labor at $8 / hour for "rather limited" work, multiply that by three shifts a day, 7 days a week, 365 days a year with no breaks... It undercuts minimum wage if it can be expected to last 4 years. Maybe as little as 3 years, if you factor in training costs, time off, and human liabilities. At 5 years, a $250,000 robot provides labor at ~$6.25 per hour.

The most trivial "but useful" robot that anyone would bother to own provides ridiculously cheap labor, even at price tags that 99% of the country couldn't imagine. I imagine demand would be so high that as robots became more sophisticated, the price wouldn't drop much. Not for a long time... until demand starts to be saturated. Which will be when labor is completely replaced.
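As a quick sanity check of that arithmetic, here is a minimal calculation. The $250,000 price, the $8/hour comparison, and the 5-year lifespan come from the comment above; the ~8,000 operating hours per year (24/7 minus some assumed maintenance downtime) is an assumption chosen to reproduce the ~$6.25 figure.

```python
# Back-of-the-envelope check of the robot-vs-cheap-labor arithmetic above.
robot_cost = 250_000        # dollars (from the comment)
years = 5                   # assumed service life (from the comment)
hours_per_year = 8_000      # assumption: ~91% uptime of the 8,760 hours in a year

print(f"Robot labor: ${robot_cost / (years * hours_per_year):.2f} per hour")
# Robot labor: $6.25 per hour

# A human at $8/hour covering the same round-the-clock schedule for 4 years:
print(f"Human labor: ${8 * 24 * 365 * 4:,} over 4 years")
# Human labor: $280,320 over 4 years -- already more than the robot's price tag
```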
 
 
Mar 8, 2013
The robots that will be made in the future will be designed to fill some human-based need or desire. If they are built to have personalities and the ability to learn, then those capabilities will be limited to whatever human-based need that robot is designed to fulfill (to use your robo-nanny example: the robot will be able to interact positively with the kids, maybe learn when said kids are lying, maybe a few other things, but it will still be largely limited to the robo-nanny role). They won't stray from the roles we assign them. In other words, what you're afraid will happen - that the robots of the future will be limited to their original personalities - is something that will happen.

Now...tell me why this is a bad thing.
 
 
Mar 8, 2013
I went to all the trouble to look up my password so I could actually comment. I recently re-read about the Myers-Briggs personality types and the Keirsey Temperament Sorter. What we think of as a robot is somewhat "sensing" and "judging," but I suspect all 16 personality types could be easily simulated, so that people could pick out a robot that fits their preferences.
 
 
Mar 8, 2013
You're thinking like a 1%-er. Capable robots are an incredible threat to average and sub-average people. The rich will be the first to buy them. Those robots will displace workers, driving non-rich people's salaries down. The poor and middle class won't be able to buy them at first. And as the robots get more capable and cheaper, they'll push more people out of the workforce and drive more-skilled workers into lower salaries. By the time robots are "affordable to all" and reasonably capable, nobody will actually be able to afford them, because nobody will have jobs. Except for the early adopters who have access to nearly free labor.

The voting population will put pressure on politicians to severely restrict the development of robots.

[The Adams Law of Slow Moving Disasters will solve your problem of robots taking all of the jobs. We'll have plenty of time to adjust to it. Interestingly, one pundit is saying that the aging of the boomers, and their massive upcoming retirements, will solve our employment problem. In folksy terms, you never have to worry about the bullet you can hear. -- Scott]
 
 
 