Saturday, July 25, 2009

A Breakdown of Intelligence

At a recent transhumanism meetup on AI, the following definition of intelligence was brought up: "the measure of intelligence is how well it can preference order possible futures". In simpler words, intelligence is measured by the ability to make decisions.

How does one make decisions then? Let's break down the process into smaller parts:

  • Predict the outcomes of each choice. This is commonly known as "considering the consequences". What will happen if I cut down this forest? What will happen if I give her a compliment? What will happen if I detune the laser by 5 MHz?
    • Coming up with a good model of self, the environment, and other people / intelligent agents.
    • To do this, we need to learn. This is why we study science, why we meditate and moralize, why we study psychology and philosophy.
    • If we don't know enough to get a working model, we need to experiment. Gather data, look at correlations, set up tests that are repeatable, simulate, present theories, debate. We do science.
  • Create new choices. Not every decision rests on a binary choice, black or white, yes or no, heads or tails. There are many shades of gray. We call this creativity, inventiveness, imagination, thinking outside the box. Unlike prediction, which delves deep into the possible futures, creativity shows us new possibilities. This is sometimes called "lateral thinking". We're not just picking the road less traveled; we cut our own path.

    It is worth noting that most people are bad at this. We like to go along with the choices presented to us.
  • Preference order the choices, taking their consequences into consideration. Which outcome is "best"? That is a tough question, and it comes down to how we define our goals and values. In computer science, this is relegated to a heuristic function: take a model of the world as input, return a number as output. The number represents the value of that model world; it can be positive or negative infinity, but most of the time it lies somewhere in between. This process is inherently flawed, since it takes all the complexities and intricacies of a world model and pares them down to one number, a one-dimensional projection of the universe. But it is necessary - that's the way decisions work; in the end we can only choose once: one outcome, one future. (A toy version of this whole loop appears in the first sketch after this list.)

    Even if we could model the future perfectly and knew every possible choice (as is possible in many board games), we would still need to define our goals (e.g. checkmate) and to weigh the outcomes to see which choice best helps us toward that goal.

  • Recursion. How do we know which goal is best? Which heuristic function is best? Which model of the world is best? These are all decisions to be made! Is a utilitarian policy better than one based on natural rights? How do we define utilitarianism? Which rights should be universal? An intelligent being should see the layers and layers of decisions involved in making each decision - it should be able to handle recursive processes. When do we stop the recursion and start relying on assumptions? That is another decision! (The second sketch below plays with this.)
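
To make the steps above concrete, here is a minimal sketch of the whole loop in Python - predict the outcome of each choice, pare each predicted outcome down to one number with a heuristic function, preference order the choices, and commit to exactly one. The toy problem (walking toward a goal square on a grid) and every name in it (State, predict, heuristic, decide) are made up purely for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        x: int
        y: int

    GOAL = State(3, 2)

    # The choices we happen to know about. Creativity, in the sense above,
    # would mean growing this set rather than just ranking it.
    CHOICES = {
        "north": (0, 1),
        "south": (0, -1),
        "east":  (1, 0),
        "west":  (-1, 0),
    }

    def predict(state, choice):
        """Step 1: predict the outcome of a choice, using a (crude) world model."""
        dx, dy = CHOICES[choice]
        return State(state.x + dx, state.y + dy)

    def heuristic(state):
        """Step 3: pare a whole model world down to one number.
        Here: negative Manhattan distance to the goal (bigger is better)."""
        return -(abs(state.x - GOAL.x) + abs(state.y - GOAL.y))

    def decide(state):
        """Preference order the choices by the value of their predicted
        outcomes, then commit to one of them."""
        ranked = sorted(CHOICES, key=lambda c: heuristic(predict(state, c)),
                        reverse=True)
        return ranked[0]

    print(decide(State(0, 0)))  # prints "north" (ties break by listing order)

Note how heuristic() is exactly the collapse described above: a whole (tiny) model of the world goes in, a single number comes out, and that number is all decide() ever looks at.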
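
The recursion point can be sketched the same way: choosing which heuristic function to use is itself a decision, scored by a meta-heuristic one level up - and whichever level we stop at is where we start relying on assumptions. Again, the candidate value functions and meta_heuristic here are invented for illustration.

    # Two candidate value functions for the same kind of outcome
    # (say, dollars gained by a choice).
    candidates = {
        "risk_neutral": lambda outcome: outcome,
        "risk_averse":  lambda outcome: min(outcome, 10.0),  # cap the upside
    }

    def meta_heuristic(h, reference_outcomes):
        """Score a heuristic by the total value it assigns to a batch of
        reference outcomes. This is just another heuristic, one level up;
        accepting it is where this sketch stops the recursion."""
        return sum(h(o) for o in reference_outcomes)

    def choose_heuristic(reference_outcomes):
        """The same decide() pattern as before, applied to the heuristics
        themselves instead of to ordinary choices."""
        return max(candidates,
                   key=lambda name: meta_heuristic(candidates[name],
                                                   reference_outcomes))

    print(choose_heuristic([2.0, 50.0, -3.0]))  # prints "risk_neutral"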
