
General Intelligence:

Social / economic / political issues

This page provides some information (and speculation) for the general public about the social implications of AI.


A brief timeline for the future

Current period: Late Capitalism / Information Age

Computers provide assistance to humans. Some economic processes are automated, but the economy still depends largely on human labor and human decision-making (and, of course, on earlier technologies such as industrialization).

An early form of general AI is present in this period (e.g. Cyc, Soar, Novamente), but it cannot yet replace human labor in a significant way. Economic activities will depend on a combination of AI and human labor for some time to come.

This period is characterized by the development and training of AI toward human-level intelligence.

Transition period

In this period, human-level AI will be achieved. This is the period in which AI and automation spread.

How long will this period last? If the only factor were the spread of automation, it could be extremely rapid (perhaps a few years; even less than a week is imaginable).
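To give a feeling for how rapid the spread could be, here is a toy back-of-the-envelope calculation. The logistic-growth model and the doubling time are illustrative assumptions of mine, not claims from this page:

```python
import math

def time_to_spread(doubling_days, start_frac=0.01, end_frac=0.99):
    """Toy model: the automated share of the economy grows logistically,
    with its odds doubling every `doubling_days` days (an assumed figure)."""
    # Under logistic growth, going from start_frac to end_frac takes
    # log2 of the ratio of the odds at the two endpoints, in doublings.
    odds = lambda f: f / (1.0 - f)
    doublings = math.log2(odds(end_frac) / odds(start_frac))
    return doublings * doubling_days  # in days

# With an assumed 30-day doubling time, going from 1% to 99% of the
# economy takes about 13.3 doublings, i.e. roughly 1.1 years:
print(round(time_to_spread(30) / 365.0, 2), "years")
```

With a 7-day doubling time the same transition fits in about three months, which is why the "extremely rapid" scenario is at least imaginable.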

But another significant factor is the delay caused by human decisions: it may be very time-consuming for humans to agree on how to utilize AI and physical resources.

A third factor is resistance from humans: people who don't want automation to spread so fast.

Complete Automation Age

In this period, machine intelligence will be overwhelmingly more powerful than human intelligence. All aspects of economic production, except those pertaining to emotions and human values, will be automated. The only remaining role for humans is to decide what they desire.

Some authors argue that this period is unpredictable because technological advance will have become so rapid. Below, I offer some speculations...


Can a market economy still exist in the Automation Age?

Characteristics of human life

Much of human life, as we know it so far, is characterized by:

  1. individuals are unique and they display a diversity of traits
  2. individuals compete for scarce resources

These aspects seem essential to the dynamics of human life. Without them, life would change drastically, and it might become "something entirely different or unrecognizable". My outlook is more conservative: I don't expect fundamental changes to occur unless some factors can be shown to cause them.

Will the allocation of resources to individuals be completely decided by AI?

I speculate that this is unlikely:

  1. Human diversity is unlikely to become homogenized.
  2. Competition is in human nature. People may collaborate (and indeed we always do, to various degrees), but competition cannot be completely eliminated.
  3. If you cannot change the amount of resources you own/control, life may become meaningless / aimless.

Thus:

Postulate 1: Humans will continue to compete for resources. They will do so with the assistance of AIs.

Also, machine intelligence will be overwhelmingly superior to human intelligence, so:

Postulate 2: All decisions concerning objective reality should be left to AIs. The only decisions humans need to make are those pertaining to value judgements.

Will money still be used?

I speculate that it will. As long as resources are scarce, and individuals are entitled to own / control them, there should be a medium of exchange for goods. For example, suppose I want to buy a bionic body, and you own one of the AIs that produce bionic bodies or their components. Even though the actual production process is completely automated, I am still buying goods from you with money, and the resources we each own / control are not the same before and after the transaction.


How can people compete, if all work is done by AIs?

Can everyone simply ask their AI to "maximize my resources"?

I speculate that this will not happen:

In order to fight for resources, an AI needs to obtain many value judgements from its human user for particular situations. For example: is it OK to kill people / animals, who to kill / exploit, who are your friends / enemies, etc. These value judgements vary from individual to individual, and cannot be standardized into a single "database of human values".

Making individualistic value judgements may introduce a significant delay into post-AI dynamics. When some humans get mind-uploaded, their decision-making will speed up, but the qualitative dynamics remain unchanged.

Will the person/group with the most powerful AI always win?

When AIs are widely distributed in the human population, the balance of power may prevent a single AI from "wiping out the rest" or establishing permanent dominance. Indeed, the future dynamics of life may be dominated by human value judgements, since we have postulated that human decision-making cannot be taken out of the loop, and would in fact sit at the top of the decision hierarchy.

Will the prices of things become precisely quantified?

There is a worry that the post-AI economy may become stagnant: everything a person owns may become completely quantified. Before AI, human intelligence was not quantifiable because we all have our individualistic styles of thinking. After AI, however, intelligence may be measured precisely by the amount of computing time consumed.

If a person starts with an initial asset of $W, s/he may end up with $W forever, because the values of all goods can be measured precisely and all transactions should be fair (meaning that you'd pay $X to get exactly $X's worth of goods).
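This stagnation argument can be sketched as a toy simulation. The agents, starting assets, and prices below are purely hypothetical; the point is only that if every transaction exchanges $X of money for goods valued at exactly $X, no sequence of trades can move anyone's total measured wealth:

```python
import random

def wealth(agent):
    """Total measured wealth: money plus the precise dollar value of goods."""
    return agent["money"] + agent["goods"]

def fair_trade(agents, seller, buyer, price):
    """A 'fair' transaction: the buyer pays exactly the measured value
    of the goods received, so both sides' totals are unchanged."""
    agents[seller]["money"] += price
    agents[buyer]["money"] -= price
    agents[seller]["goods"] -= price  # goods are valued precisely in dollars
    agents[buyer]["goods"] += price

# Hypothetical agents; money balances may go negative (debt) in this toy.
agents = {
    "A": {"money": 100.0, "goods": 50.0},
    "B": {"money": 80.0, "goods": 70.0},
}
start = {name: wealth(a) for name, a in agents.items()}

random.seed(0)
for _ in range(1000):
    seller, buyer = random.sample(sorted(agents), 2)
    fair_trade(agents, seller, buyer, random.uniform(0.0, 10.0))

# Every agent ends with (essentially) exactly the wealth s/he started with.
assert all(abs(wealth(agents[n]) - start[n]) < 1e-9 for n in agents)
```

The invariant holds no matter how many trades occur; only unequal exchanges, or the creation of genuinely new value, could ever move anyone away from $W.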

More on this later...



May/2005 (C) GIRG