2017: WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?

Tom Griffiths
Henry R. Luce Professor of Information Technology, Consciousness and Culture, and Director of the Computational Cognitive Science Lab, Princeton University; co-author (with Brian Christian) of Algorithms to Live By
Bounded Optimality

How are we supposed to act? To reason, to make decisions, to learn? The classic answer to this question, hammered out over hundreds of years and burnished to a fine luster in the middle of the last century, is simple: update your beliefs in accordance with probability theory and choose the action that maximizes your expected utility. There’s only one problem with this answer: it doesn’t work.
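In the standard textbook formulation, this classic answer combines Bayes' rule for updating beliefs with expected-utility maximization for choosing actions:

$$P(h \mid d) = \frac{P(d \mid h)\,P(h)}{\sum_{h'} P(d \mid h')\,P(h')}, \qquad a^{*} = \arg\max_{a} \sum_{s} P(s \mid d)\, U(a, s),$$

where $h$ ranges over hypotheses about the world, $d$ is the observed data, and $U(a, s)$ is the utility of taking action $a$ when the world turns out to be in state $s$.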

There are two ways in which it doesn’t work. First, it doesn’t describe how people actually act. People systematically deviate from the prescriptions of probability and expected utility. Those deviations are often taken as evidence of irrationality—of our human foibles getting in the way of our aspirations to intelligent action. However, human beings remain the best examples we have of systems that are capable of anything like intelligent action in many domains. Another interpretation of these deviations is thus that we are comparing people to the wrong standard.

The second way in which the classic notion of rationality falls short is that it is unattainable for real agents. Updating beliefs in accordance with probability theory and choosing the action that maximizes expected utility can quickly turn into intractable computational problems. If you want to design an agent that is actually capable of intelligent action in the real world, you need to take into account not just the quality of the chosen action but also how long it took to choose that action. Deciding that you should pull a pedestrian out of the path of an oncoming car isn’t very useful if it takes more than a few seconds to make the decision.
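To get a rough sense of the scaling: maximizing expected utility exactly over a sequence of $d$ decisions, each offering $|A|$ actions with $|S|$ possible outcomes apiece, means evaluating a decision tree with on the order of

$$(|A| \cdot |S|)^{d}$$

leaves. With ten actions, ten outcomes per action, and a horizon of ten steps (illustrative round numbers), that is already $10^{20}$ branches, far more than any agent can examine in the second or two it has to act.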

What we need is a better standard of rational action for real agents. Fortunately, artificial intelligence researchers have developed one: bounded optimality. The bounded-optimal agent navigates the tradeoff between efficiency and error, optimizing not the action that is taken but the algorithm that is used to choose that action. Taking into account the computational resources available to the agent and the cost of using those resources to think rather than act, bounded optimality is about thinking just the right amount before acting.
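A toy sketch makes the tradeoff concrete. In the hypothetical setup below (the actions, noise level, and costs are all invented for illustration), an agent estimates each action's value by running noisy mental simulations; each extra simulation sharpens the estimate but burns time. The bounded-optimal quantity to tune is not the action itself but the amount of deliberation, chosen to maximize utility net of the cost of thinking:

```python
# Illustrative sketch of bounded optimality with made-up numbers.
# An agent picks one of two actions after some number of noisy mental
# simulations; more simulations improve the choice but cost time.
import random

TRUE_VALUES = [0.0, 1.0]   # true payoff of each action (unknown to the agent)
NOISE = 3.0                # standard deviation of a single mental simulation
COST_PER_SAMPLE = 0.005    # utility lost per simulation spent thinking

def simulate(action):
    """One noisy mental rollout of an action's payoff."""
    return random.gauss(TRUE_VALUES[action], NOISE)

def decide(samples_per_action):
    """Estimate each action by averaging rollouts, then pick the best estimate."""
    estimates = [
        sum(simulate(a) for _ in range(samples_per_action)) / samples_per_action
        for a in range(len(TRUE_VALUES))
    ]
    return max(range(len(TRUE_VALUES)), key=lambda a: estimates[a])

def net_utility(samples_per_action, trials=10_000):
    """Average payoff of the chosen action minus the cost of deliberating."""
    payoff = sum(TRUE_VALUES[decide(samples_per_action)] for _ in range(trials)) / trials
    thinking_cost = COST_PER_SAMPLE * samples_per_action * len(TRUE_VALUES)
    return payoff - thinking_cost

if __name__ == "__main__":
    # The bounded-optimal agent optimizes the algorithm, not just the action:
    # it picks the level of deliberation with the highest utility net of cost.
    for n in [1, 5, 10, 20, 40, 80]:
        print(f"{n:3d} simulations per action -> net utility {net_utility(n):+.3f}")
```

With these made-up numbers, net utility should peak at an intermediate amount of deliberation: too little thinking and the agent often picks the worse action, too much and the time spent deliberating costs more than the better choice is worth.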

Bounded optimality deserves to be more widely known because of its implications for both machines and people. As artificial intelligence systems play larger roles in our lives, understanding the tradeoffs that inform their design is critical to understanding the actions that they take—machines are already making decisions that affect the lives of pedestrians. But understanding the same tradeoffs is just as important to thinking about the design of those pedestrians. Human cognition is finely tuned to make the most of limited on-board computational resources. With a more nuanced notion of what constitutes rational action, we might be better able to understand human behavior that would otherwise seem irrational.