jonbho

Month: September, 2014

Defining Intelligence

You don’t understand something until you can recreate it.

Introduction

Some concepts describe simple phenomena that are easy to define: “gravity is the force of attraction between masses,” “a prime integer is one that is divisible only by itself and one,” “peace is freedom from disturbance, war, or fighting,” etc.

Other concepts describe aspects of more complex systems, and are thus harder to define. “Happiness” or “intelligence” are two such examples.

A widely accepted definition of intelligence has so far proven elusive. Finding a good definition is not only theoretically interesting; it is also key to building artificial intelligence. At the very least, it would address the complaint that “anything stops being considered artificial intelligence as soon as it is made to work.”

A good, acceptable definition of intelligence should:

  • Clarify the minimum system capable of intelligence, which is an exercise unto itself.
  • Describe known instances of intelligence, and reject behavior that, while sophisticated and efficient, we prefer to label as non-intelligent.
  • Ensure that an embodiment of its terms always results in a system acceptable as intelligent. Naturally, the level of intelligence displayed will depend on the sophistication of the components used. Any artificially-built embodiment of this definition is by definition “artificial intelligence.”

In order to build such a definition, let’s start with a clear scope for intelligence: it is an attribute that can only be exhibited by an active stand-alone system evolving over time while embedded in an environment. The intelligent system should be discernible from the environment it is embedded in, and it needs to be able to perform actions to evolve over time — by acting on the environment, by gathering information about it, or, most often, by doing both. By restricting our scope this way, we are only talking about intelligence in the context of a system where there is an environment and a capability of action over time, but this should cover most of the interesting cases of intelligence, if not all of them.

Model intelligence

The key element in any system capable of intelligence seems to be an explicit model of the environment it is embedded in. A system that has no concept of where it is, even if it can perform actions and improve its situation, can be efficient, effective, good at solving problems, and have many other virtues, but it shouldn’t be called intelligent. Think of passive adaptive temperature control systems, self-restoring stably-balanced systems, and the like: all of these are interesting but unintelligent systems.

A model in this sense is a function that, given the state of the environment at a specific moment and the action the agent performs at that moment, produces the state of the environment at the subsequent moment.
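Such a model can be sketched as a plain Python function over a toy grid world; the environment, state representation, and action names below are illustrative assumptions, not taken from the text:

```python
def model(state, action):
    """Given the environment state at one moment and the action the
    agent performs, return the predicted state at the next moment."""
    x, y = state
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves.get(action, (0, 0))  # unknown actions change nothing
    return (x + dx, y + dy)
```

Any representation works as long as the function answers the question “if the world is here and I do this, where will the world be next?”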

We would describe any system using a model of the environment to operate in it as using “model intelligence.” This usage and the model may be basic, so this kind of intelligence may or may not result in highly intelligent or even particularly effective behavior.

Goal seeking

The next step up the intelligence food chain would be the ability to pursue goals. We will only study this in the context of systems exhibiting model intelligence, since goal-pursuit without a model of the environment is little more than a local gradient descent algorithm.

In order to pursue a goal, it is necessary to find a way to describe it. In the context of modeling the environment, where a language for describing the state of the environment is already available, we can define a scalar function of the state of the environment that increases as the goal gets closer to achievement. Since the goal may be bounded (“reach the summit of Everest”) or unbounded (“collect as much money as possible”), there may or may not be a maximum value for this function.
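The two kinds of goal function can be sketched as follows; the state spaces and targets are illustrative assumptions:

```python
def bounded_goal(position, summit=(10, 10)):
    """'Reach the summit': increases as the agent approaches the target,
    peaking at a maximum value of 0 when the summit is reached."""
    return -(abs(summit[0] - position[0]) + abs(summit[1] - position[1]))

def unbounded_goal(money_collected):
    """'Collect as much money as possible': grows without bound,
    so there is no maximum value."""
    return money_collected
```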

In this context, even if the system just chooses its next action by trying to maximize the goal function in the next step as predicted by the model, we could say it is exhibiting “intelligent goal seeking behavior,” where the intelligence comes from the model of the environment that is being used.
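This simplest form of goal seeking — maximize the goal function one step ahead, as predicted by the model — can be sketched in a few lines. The grid model and target are illustrative assumptions:

```python
ACTIONS = ("up", "down", "left", "right")

def model(state, action):
    # Predicts the next state in a toy grid world.
    x, y = state
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    return (x + dx, y + dy)

def goal(state, target=(5, 5)):
    # Higher is better; reaches its maximum of 0 at the target.
    return -(abs(target[0] - state[0]) + abs(target[1] - state[1]))

def greedy_step(state):
    # Choose the action whose model-predicted next state scores highest.
    return max(ACTIONS, key=lambda a: goal(model(state, a)))
```

All of the “intelligence” here lives in the model; the selection rule itself is trivial.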

However, in most cases such a system will look several steps ahead into the future to choose the actions that best obtain the desired outcome. Since the model will not be completely precise, the information about the environment will be incomplete, and other agents in the environment are probably best modeled as having some freedom of their own (possibly just a consequence of the previous two facts), the best way to pursue goals in a model-intelligence context is something similar to the minimax algorithm used in game theory.
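Looking several steps ahead can be sketched as a depth-limited search over the model-predicted future. This is a single-agent simplification of minimax (no adversary is modeled), and the 1-D world is an illustrative assumption:

```python
ACTIONS = (-1, 0, 1)  # signed steps in a toy 1-D world

def model(state, action):
    return state + action

def goal(state, target=10):
    return -abs(target - state)

def lookahead(state, depth):
    """Best goal value reachable from `state` within `depth` steps,
    according to the model."""
    if depth == 0:
        return goal(state)
    return max(lookahead(model(state, a), depth - 1) for a in ACTIONS)

def best_action(state, depth=3):
    """Choose the first action on the most promising predicted path."""
    return max(ACTIONS, key=lambda a: lookahead(model(state, a), depth - 1))
```

With imprecise models or other agents, the inner `max` would alternate with expectations or adversarial `min` levels, which is where minimax proper comes in.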

In all cases, the more precise the model is, the more accurate its predictions will be, and the more effective the agent will be.

Operational intelligence

An algorithm such as minimax actually has a very short attention span; it can choose the best actions for a particular purpose given a model of the environment and its current state, but a system that uses it exclusively does nothing other than explore a tree of states and perform a sophisticated type of gradient descent.

So, the next logical step up from pure game-theory-style goal seeking involves the ability to build and execute plans. To be useful, this needs to be built using some abstraction on top of the language describing the environment, which is itself a challenge. But even basic abstraction can provide efficient tools to solve complex problems, elaborate sophisticated plans, define a useful scope for actions, reduce the computational power required to operate, and recover resiliently from unforeseen changes in the environment and in the agent itself.

The necessary key elements are the goal-describing abstraction and some type of memory in the system to store the current main goals, plans, and sub-goals, including the specific details on how these apply to the current state of the environment. When a system uses this type of organization, we would say it uses “operational intelligence.”
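The memory this requires can be sketched as a small data structure holding the current main goal, the plan as an ordered list of sub-goals, and progress through it. All field names and the example task are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class OperationalState:
    """Memory for operational intelligence: the main goal, the plan
    (an ordered list of sub-goals), and how far along the plan we are."""
    main_goal: str
    plan: list = field(default_factory=list)
    current_step: int = 0

    def current_subgoal(self):
        # The sub-goal currently being pursued, or None if the plan is done.
        if self.current_step < len(self.plan):
            return self.plan[self.current_step]
        return None

    def complete_subgoal(self):
        self.current_step += 1

agent = OperationalState(main_goal="make tea",
                         plan=["boil water", "steep leaves", "pour"])
```

Goal seeking as described earlier then operates on the current sub-goal rather than on the distant main goal, which is what reduces the required computation.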

An agent using “operational intelligence” can show highly sophisticated behavior. Many common tasks performed by human beings only require this type of intelligence.

Functional intelligence

In a final step towards the general phenomenon known as intelligence, it is necessary to incorporate its cornerstone: learning. For us to accept an agent as intelligent, we expect it to learn about the environment it operates in, while it operates in it. We also expect it to adjust its behavior according to what it learns, to better achieve its goals, to reevaluate them, to possibly find alternatives, and to get better end results using a more informed perspective.

In the framework we have described, the way to achieve learning is for the system to be able to refine its model of the environment over time. The system can use newly gathered information to build new versions of the model that describe the environment more precisely; the system thus becomes progressively better at predicting the results of its actions. At each moment, the system uses the current model to make decisions, come up with plans, choose actions, and act within the environment. Afterwards, it compares the model’s predictions with the actual results, and builds a new, improved model that can be a better predictor, and thus a better decision tool in the future.
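This predict-compare-refine loop can be sketched with a minimal learned model. The linear model, the learning rate, and the toy environment (which really moves 2 units per unit of action) are all illustrative assumptions:

```python
class LearnedModel:
    def __init__(self, effect=0.0, lr=0.5):
        self.effect = effect  # current estimate of what an action does
        self.lr = lr          # how strongly each surprise updates the model

    def predict(self, state, action):
        return state + self.effect * action

    def update(self, state, action, observed_next):
        # Compare the prediction with the actual result, and nudge the
        # model toward the value that would have predicted it exactly.
        error = observed_next - self.predict(state, action)
        if action != 0:
            self.effect += self.lr * error / action

# Acting repeatedly in the environment and learning from each outcome:
m = LearnedModel()
for _ in range(20):
    state, action = 0.0, 1.0
    m.update(state, action, state + 2.0 * action)  # true dynamics: 2x
```

After a few iterations the model’s `effect` converges toward the environment’s true dynamics, so its predictions, and therefore its decisions, improve with experience.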

A system is using “functional intelligence” if it uses operational intelligence and it also builds new and improved versions of the model of the environment based on its experiences.

The name “functional intelligence” reflects the fact that the model is a synthesized function matching the agent’s experience of the environment. A complete “functional intelligence” system is composed of simple operational state and a small number of computable functions, removing any need for “magic” to produce intelligent behavior.

Closing thoughts

To the author, the definition of “functional intelligence” above covers the known and accepted instances of intelligence and nothing else, and any system built according to its principles will necessarily act with a level of intelligence corresponding to the quality of its building blocks.

If this is indeed the case, the definition above is actually a definition of “intelligence” in general, and can be directly used to create working artificial intelligence.

In future articles, I will clarify the concepts and show practical examples. If you are interested in these, you can follow me on Twitter.

Technology Happiness

As a little side-project experiment, I’m going to be sharing some simple tips that will help you reach Technology Happiness. What does that mean? It means getting technology to help you be happy, rather than getting in your way.

Too many people today are overly dependent on technology, and their use of it makes them more stressed rather than less! Technology was meant to make things easier for us, so that we would be more productive, enjoy more pleasant experiences, and worry less. But too often, it’s the other way around! Because we want our lives back, I came up with Technology Happiness: a set of simple tips that are very easy to apply, which I have been applying and testing myself, and which I have validated to really help me be happier and more productive.

Here is the first bite for you, on using email:

If you like the tip and want to receive more like it, you can subscribe to the YouTube channel, or sign up for the Technology Happiness mailing list.
