Warning: This blog is written for a rational audience that likes to have fun wrestling with unique or controversial points of view. It is written in a style that can easily be confused as advocacy or opinion. It is not intended to change anyone's beliefs or actions. If you quote from this post or link to it, which you are welcome to do, please take responsibility for whatever happens if you mismatch the audience and the content.
----------------------------------------------------------------------------------------------------------------------------------------------------------

I find that I enjoy crackpot ideas as much as real ones, and sometimes more. My crackpot idea for today is that intelligence is nothing more than pattern recognition. And pattern recognition is nothing more than noting the frequency, timing, and proximity of sensory inputs. Language skill, for example, is nothing but recognizing and using patterns. Math is clearly based on patterns. Our so-called common sense is mostly pattern recognition. Wisdom comes with age because old people have seen more patterns. Even etiquette is nothing more than patterns.

If intelligence is nothing but sophisticated pattern recognition, we'd expect that the creatures with the most sensory faculties would evolve to be the smartest. The more you sense, the more accurate patterns your brain can form. A dog can sniff a mannequin and determine that it belongs in the class of "not living" things even though a mannequin looks like a person. The more senses employed, the better your pattern recognition.

If having more senses makes you smarter, in the evolutionary sense, we'd expect that monkeys would be smarter than clams. And sure enough, that's the case. We'd also expect mammals to be smarter than fish because fish don't do much sensing by touch with their little fins, except perhaps feeling hot and cold. Generally speaking, the creatures with sensitive hands and feet are smarter than creatures with hooves, e.g. monkeys are smarter than cows.

We'd also expect that the more heterogeneous the environment, the smarter the inhabitants would become because there would be more types of input coming through the senses every minute. In general, the creatures with the most varied environments are the ones that are highly mobile, and able to move from one place to another within a day. Elephants, for example, are relatively smart mammals and they can cover many miles a day.

My crackpot point in all of this is that in order to build computers with artificial intelligence, all we need is a robot with lots of sensory inputs (sound, sight, touch, smell, taste) plus a high degree of mobility, plus a pattern recognition and imitation program. And almost nothing more. Like a human baby, the robot would recognize patterns and grow more intelligent over time. When the robot learns to walk, by observing humans and imitating with its own body, it could change its location and start gathering more sensory experiences on its own. Its intelligence would grow as it recognized and stored more patterns.

You might need to seed the robot with a few patterns that humans seem to be born with. For example, human babies apparently recognize faces and can discern human moods easily. That could come in handy. You'd also want your robot seeded with some basic objectives, the way babies are born with the desire to eat and feel comfort from being held. If the robot had no basic impulses, it would just sit around.

A robot's senses would be a bit different from human senses. In some cases the robot's senses would be superior. A robot could potentially see better in the dark and hear a greater range of sound. Robots might sense electrical and magnetic fields, and so on. I'm not sure if a robot will have the sensations of touch and taste in the way humans experience them, but the robot could have some version of those senses.

My crackpot prediction is that robots will develop intelligence when they are designed with mobility, five or more sensory inputs, and spectacularly powerful pattern recognition processors. Intelligence will emerge automatically from those properties.

Compared to humans, robots can easily share their patterns with other robots via the Internet. That means any experience of one robot will be shared by all. It won't take long for the first generations of robots with five senses and mobility to become a thousand times smarter than the smartest human. Eventually each new robot will be born with the intelligence of all existing robots as its starting point. Robots will use the cloud for storage and processing.

I give humanity thirty years of continued dominance on the earth. After that, the age of robots will be upon us. I realize this scenario is the basis for countless science fiction stories. All I'm adding is my prediction that it will happen sooner than you think. And it will all start when you see the headline "Scientists Design Robot Baby."

[New: I will double down on my crackpot idea of intelligence being nothing but pattern recognition by saying that dreams are caused by your brain doing a bubble sort of your newest patterns to get them in the best order. I assume it's hard to be conscious and also sort your patterns at the same time. If you wake up mid-sort, you might remember seeing the stripper in your dreams as your grandmother. It just means two patterns were sorting past each other on their ways to more accurate pattern storage.]
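Since the post names bubble sort specifically, here is a minimal sketch of the idea, assuming we model "patterns" as items to be ordered by some relevance score. All pattern names and scores below are invented for illustration:

```python
def bubble_sort(patterns, key):
    """Sort by repeatedly swapping adjacent out-of-order items --
    the 'two patterns sorting past each other' the post describes."""
    items = list(patterns)
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if key(items[j]) > key(items[j + 1]):
                # Two adjacent patterns swap past each other;
                # mid-sort, unrelated patterns briefly sit side by side.
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # already ordered
    return items

# Invented example: newest patterns with made-up relevance scores.
new_patterns = [("stripper", 0.9), ("grandmother", 0.2), ("lunch", 0.5)]
ordered = bubble_sort(new_patterns, key=lambda p: p[1])
```

Waking "mid-sort" would correspond to observing the list while adjacent items are still swapping past each other, before they reach their final positions.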

 

Comments

Aug 18, 2012
The 2004 to 2009 Battlestar Galactica series is one interpretation of what will happen when robots become self aware and intelligent. With streaming backup of their memories to multiple locations, they could be reborn to a new shell/body when needed. What kind of plans would you be making if you knew your life awareness could outlast the billion or so years that this solar system will continue to exist?
 
 
Aug 14, 2012
Most standard I.Q. tests validate your theory that intelligence is at least in part related to pattern recognition. But we see true genius in those who can start with pattern recognition, and creatively expand it beyond mere repetition. You've mentioned having just a few fundamental blocks for humor - a robot could certainly learn those. And a robot could learn to mimic a comic and repeat a joke so it's funny. But it's much, much harder to creatively put those humor building blocks together in a new and surprising way that makes people laugh.

My second comment regards the purpose you mention giving robots. The world wouldn't function well if everyone had the same skill sets and life purpose. Similarly, if robots just all started with the same pool of data and purpose, they'd all wind up doing the same thing toward the same end. The trick would be creating different models of robots with distinctly different purposes. Set one model on improving everything related to food production - creating larger crops, with higher nutrition, less pesticide, longer shelf life, better taste, etc. Another model tackles the clean water problem, and then there's energy, transportation, etc. Maybe their senses vary based on purpose. The one tackling energy probably doesn't need taste buds, but the one working on food does. And I'm looking forward to being cared for by the robots whose purpose it is to aid the elderly.
 
 
Aug 14, 2012
How would we evaluate a robot's performance if it saw patterns that were irrational?

A pattern could be argued to be useful even though it doesn't have a scientific basis.
Let's say a robot comes up with the pattern that there must be a God who takes care of us (bear with me for a moment...). Let's say it does this because that way it doesn't have to spend CPU cycles on figuring out the laws of nature; it doesn't have to stress about the bigger questions and can spend its cycles on other stuff with more immediate benefits.

Would we dump the robot at the nearest scrapyard?
 
 
Aug 14, 2012
I don't know whether "pattern recognition" is the same thing, but I'd say intelligence goes a bit like this:
You've got knowledge. This is a network of facts connected by relations (tea: hot, water, leaves; leaves: vegetarian, bush, green; green: ...).

Now, at any point in time you receive sensory input (outside: food; inside: hungry; memory: yes, the food is mine and I had intended to have lunch) and jump along associations until you arrive at a response.

The jumping speed determines how many associations you can evaluate before you have to react ("Why didn't you think of that?" gets asked if you're slow in that regard). And the heuristics by which you decide on the order determine how much jumping you need until you arrive at a satisfactory course of action. Within certain limits, the two can compensate for each other.

Which of the two is intelligence? No idea, but the latter I'd sort more into the "experience" or "wisdom" corner (as opposed to "knowledge").

The first one determines how you react in truly unknown situations, and possibly your learning speed. Maybe this is "mental flexibility".
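The network-of-facts idea above can be sketched in a few lines: a tiny association graph plus a breadth-first walk that "jumps along associations" from a sensory cue until it reaches a node tagged as an action. Every node name here is invented for illustration:

```python
from collections import deque

# Hypothetical knowledge network: facts connected by relations.
associations = {
    "food": ["hungry", "lunch"],
    "hungry": ["eat"],
    "lunch": ["mine", "eat"],
    "eat": [],    # terminal node: an action we can take
    "mine": [],
}
actions = {"eat"}

def respond(cue):
    """Jump along associations breadth-first from a sensory cue;
    return the first action node reached and how many jumps it took."""
    seen = {cue}
    queue = deque([(cue, 0)])
    while queue:
        node, jumps = queue.popleft()
        if node in actions:
            return node, jumps
        for nxt in associations.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, jumps + 1))
    return None, None  # no response found
```

In this toy model, the commenter's "jumping speed" is how fast you can pop nodes off the queue, and the "heuristics" would be the order in which neighbors are enqueued.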
 
 
Aug 14, 2012
@ marcoklaue

That's actually not true!
Google has been researching neural networks and AI, and their computers learned that there exists an object we humans know as the "cat". The computer discovered the "cat pattern" on its own by watching YouTube videos, without being pre-fed with cat pictures or videos. After that, it could recognize cats in YouTube videos.

http://www.slate.com/blogs/future_tense/2012/06/27/google_computers_learn_to_identify_cats_on_youtube_in_artificial_intelligence_study.html
 
 
Aug 14, 2012
Scott -- the project you're talking about exists, in some form. It's called Cog, and it was a project under Prof. Rodney Brooks at MIT in CSAIL (the Computer Science and AI Lab) to build a humanoid robot, give it as many sensory inputs as possible, give it some algorithms to interact with the world around it, and see what happens. When I say "algorithms," I'm talking about -- according to the research paper -- "visual-motor routines (smooth-pursuit tracking, saccades, binocular vergence, and vestibular-ocular and opto-kinetic reflexes), orientation behaviors, motor control techniques, and social behaviors (pointing to a visual target, recognizing joint attention through face and eye finding, imitation of head nods, and regulating interaction through expressive feedback)."

I know this because a roommate and good friend of mine was heavily involved in the project. He told me about some of the fun stuff that Cog would do, and how he would "learn" to pay attention to some things more than others, like an infant -- a rainbow colored slinky would always grab his attention away from someone doing a funny dance, for instance.

Google "Cog Rodney Brooks MIT" and you'll certainly find the main site for the project. The research paper is also interesting, albeit densely written in academic-ese. There are videos showing Cog interacting with others.

But definitely check out the FAQ -- clearly the lab agrees a bit with your definition of us humans as moist machines:

Q: Is Cog conscious?

A: We try to avoid using the c-word in our lab. For the record, no. Off the record, we have no idea what that question even means. And, still, no.
 
 
Aug 13, 2012
I think you underestimate the importance of processing power--as well as the sensing abilities of many vertebrates. Take fish, for example, with their massive suites of chemical, pressure and electrical sensors. Sharks, as an example, meet your input and pattern recognition criteria excellently but while they're not dumb, they're certainly not on the octopus level of intelligence. And if they were going to evolve intelligence (what's our definition, anyway? Self-awareness?) they've had ample opportunity to do so--hundreds of millions of years.

Instead, what's resulted is the equivalent of a really, really good version of one of today's machine intelligences. Extremely good at what it does, almost unbeatably so, but not displaying what we would consider the hallmarks of intelligence. Think of it as code that's been running a self-improvement control loop for essentially infinite time. It's the most streamlined, perfect code you can imagine, but not an AI.
 
 
Aug 13, 2012
In the last several years, I've started playing golf a little more seriously (instead of 6-8 times per year, it's now about 35-45 or more). While I'm approaching my next shot, I'm reviewing the mechanics of my previous shot. And now I'm beginning to wonder if there's a part of the brain that works semi-independently (subconsciously?) from whatever part is what we think of as consciousness.

As in: I address the ball on the tee, pointing my body (feet, hips, shoulders) straight down the fairway. The ball slices to the right. I tee up another ball, rotating my body to the left about 10-15 degrees. I slice the ball to almost the same spot. I tee up another, rotate left again about 10 degrees, and slice the ball to basically the same spot (all three balls wound up about 225 yds and within 25 feet of each other). That episode was the most pronounced example, but I'm now recognizing others that are not quite as pronounced, but almost as obvious.

Is a part of my brain trying to compensate for what it sees is happening, contrary to what I'm consciously trying to achieve? Is it semi-independently forcing my motions to act contrary to my expressed desires? How can something like that possibly be written into artificial intelligence? How can something like that possibly be measured, let alone replicated? Or from another perspective, will some future AI artifice be able to make the same mistake several times, and NOT learn from it? To me, AI may exceed human intelligence in many areas, but will never be able to duplicate it.
 
 
Aug 13, 2012
There are a lot of things you'd think computers could do well, but they can't. They can recognize human voices, and they can generally recognize whether there's a human face in a digital photograph. But they can't recognize whether there is, for example, a chair or a ship in a digital photograph. All the processing power and pattern-recognition algorithms haven't yet managed to get a computer to answer the question "is there a chair in this picture?", which any three-year-old human can do.

They have to resort to "cheating", like having the computer test for things it CAN recognize ("does the picture look like the lighting is typical of an office? Is there water in the picture?") and taking a stab at whether there is a chair or a boat in the picture.
 
 
Aug 13, 2012
How much might pattern recognition potentially explain?
How about emotions? Motor control? Self-esteem?

"Also, our ability to reuse patterns in other contexts, is it a pattern very deep down in us or something else?"
That's abstract thinking, right? That might be like being able to recognize partial patterns?
 
 
Aug 13, 2012
Part of our pattern recognition ability is the ability to prioritize which patterns are important and which are not. If you're overloaded with stimulus but unable to prioritize, you'll burn out real quick. For someone with autism, the threshold for which patterns to ignore must be a lot higher.

But yeah, if a robot isn't presented with new stimulus, it may never move beyond its comfort zone. Humans don't always move beyond their comfort zone either, but I suppose that's because their concern for safety overrides their desire to learn. So I wonder whether curiosity is hard-wired into us, or whether we must learn a pattern saying that moving beyond your comfort zone makes you stronger.
Robots would also need some way of unlearning patterns, because they won't always get it right, just like us humans.
 
 
Aug 13, 2012
Well, I think you got half of it. I always say that intelligence is adaptation plus reusing previous knowledge. Adaptation is pattern recognition, so on that we are saying exactly the same thing. But I see people with a lot of learning ability fail miserably once out of their comfort zone. They can recognize and learn new patterns very fast, but not without help.

Most of our great knowledge as humans comes from pure luck, and once acquired it needs to be transmitted. The whole difficulty for a robot is detecting that a pattern really is a pattern and knowing what else it could potentially apply to. The best bet is probably trying to program the methods of rationality into their basic code. Good luck with that.

Also, our ability to reuse patterns in other contexts: is it a pattern very deep down in us, or something else?
 
 
Aug 13, 2012
>You'd also want your robot seeded with some basic objectives, the way babies are born with the desire to eat and feel comfort from being held. If the robot had no basic impulses, it would just sit around.

The pattern recognition idea is very interesting, but I think the ideal set of "basic objectives" here has already been determined: survive and reproduce.

Start with imitating life, then work your way up to human - and you'll probably have to pass through some variation of multicelled, mammal, sentient, etc.
 
 
Aug 12, 2012
Babies can do something that (so far) machines can't: rewire their brains according to sensory input using reward/pain signals. Pain = hungry, reward = food. Pain = separation, reward = smile. Language skills are acquired the same way and even affect hearing; many Chinese speakers are unable to differentiate L from R after a certain age. Maybe a Heuristic Algorithmic Logic computer could do it.
 
 
Aug 12, 2012
On sensory perception, pattern recognition and intelligence:

Asperger's is a disorder on the autism spectrum which is characterized (among other things) by poor social skills. In particular, people with Asperger's syndrome tend not to be able to recognize moods and emotions in others and often are unable to recognize that other people have a different point of view. It is not uncommon, for example, for a person with Asperger's to behave in socially unacceptable ways (e.g. lounging comfortably on a couch while refusing to make space for someone who has no place to sit, demanding someone inconvenience themselves in a large way to avoid his or her own minor inconvenience, etc.).

One theory is that people with Asperger's take in more sensory data of all kinds than "normal" people - and then discard excess information (like facial expressions). In other words, it is not a matter of being unable or unwilling to empathize. It is a matter of having discarded the information as an infant or small child that would have led to the social pattern recognition necessary to function "normally."

Children on the autism spectrum who are diagnosed and treated early can often learn concretely the skills that their "normal" peers learn automatically. They can be taught to recognize facial expressions and link them to emotions, etc.

In other words, better understanding the link between sensory input, pattern recognition and human intelligence can help humans as well as robots...
 
 
Aug 12, 2012
Hmmm...explain dolphins...
 
 
 
Aug 11, 2012
I believe Scott's 100% right.
But I think there are two other things that belong in the definition of intelligence: 1. prediction, and 2. motivation.
The brain's ability to make predictions is what creates "experience" and enables you to learn. It makes pattern recognition more efficient, because once you've seen a connection enough times, you can just assume it without needing to spend pattern recognition resources on it. But then again, making predictions is itself based on recognizing a pattern.

By motivation I don't mean survival instincts, but the drive to make use of pattern recognition. Who told us as babies that being able to recognize patterns would help us survive more efficiently?
Why didn't we just ignore all that weird sensory input the first time we opened our eyes? Is that first pattern recognition step hard-wired into us or something?
 
 
Aug 11, 2012
Intelligence alone is not enough. It needs to be applied to a problem or used to seek a goal. We humans have lots of these around: survival, s-e-x, competition, ego, status, money, satisfaction, etc. Evolution will continue to drive the growth of intelligence until these gaps are fully closed.

Even if robots acquired powerful pattern recognition capabilities, they would just sit there and build a bigger and bigger database of patterns. They would have no way, on their own, of knowing where to apply it or how to act on it.

Rest easy, guys -- the threat to human dominance isn't going to come from AI.
 
 
Aug 11, 2012
I think "intelligence" comes from what you do with the data after it's collected.
More and more data (sensory input) isn't always useful. The trick is sorting out the right kind of data. Quite a lot of what our brains do is ignore the less useful information so we can focus (literally and figuratively) on what is important.

Vision (in humans), for example, is actually quite poor. Look at the letter "J" on your keyboard. If you focus on it, you'll notice that the letter "F" is barely in focus. But when you look around a room, everything seems in focus. What you are "seeing" is a persistent "hallucination" created by your brain to help you find your way through the world.

A robot with multiple visual sensors still needs very intelligent software to put it all together and make sense of it all.

But I think you're right, Scott: within 30 years, robots (technological intelligence) will be able to match or exceed humans. I'm still only 30% through reading Kurzweil's "The Singularity Is Near", but it's quite compelling. Humans are already enhancing themselves with technology (smartphones, etc.) and it's just a matter of time before the integration happens at the biological level.
 
 
 