Common knowledge (logic)

Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.

The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). It was first given a mathematical formulation, in a set-theoretical framework, by Robert Aumann (1976). Computer scientists developed an interest in epistemic logic in general--and in common knowledge in particular--starting in the 1980s.

Example
It is common to introduce the idea of common knowledge by some variant of the following logic puzzle: On an island, there are k people who have blue eyes, and the rest of the people have green eyes. There is at least one blue-eyed person on the island (k >= 1). If a person ever knows herself to have blue eyes, she must leave the island at dawn the next day. Each person can see every other person's eye color, there are no mirrors, and there is no discussion of eye color. At some point, an outsider comes to the island and makes the following public announcement, heard and understood by all people on the island: "at least one of you has blue eyes". The problem: Assuming all persons on the island are truthful and completely logical, what is the eventual outcome?

The answer is that, on the k dawns after the announcement, all the blue-eyed people will leave the island.

This can be seen by an inductive argument. If k = 1, the lone blue-eyed person, seeing only green eyes among the others, will recognize that she has blue eyes and leave at the first dawn. If k = 2, each blue-eyed person sees exactly one other person with blue eyes; when no one leaves at the first dawn, each concludes that there must be a second blue-eyed person, namely herself, and both leave at the second dawn. In general, no one leaves during the first k-1 dawns if and only if there are at least k blue-eyed people. Each blue-eyed person, seeing k-1 blue-eyed people among the others and knowing there must be at least k, concludes that she herself has blue eyes and leaves at the kth dawn.

What's most interesting about this scenario is that, for k > 1, the outsider is only telling the islanders something they already know: that there are blue-eyed people among them. However, before the announcement, this fact is not common knowledge: for k = 2 it is merely "first-order" knowledge (each blue-eyed person knows there are blue-eyed people, but does not know that the other blue-eyed person knows it), and in general the chain "everyone knows that everyone knows that ..." holds only up to depth k-1. The notion of common knowledge therefore has a palpable effect. Knowing that everyone knows does make a difference. When the outsider's public announcement (a fact already known to all) becomes common knowledge, the blue-eyed people on the island eventually deduce their status and leave.
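The inductive argument above can be checked with a short simulation. The sketch below (the function and variable names are ours, not from any standard source) tracks the one thing the announcement changes: a common-knowledge lower bound on the number of blue-eyed people, which rises by one on each dawn that nobody leaves.

```python
def simulate(num_blue, num_green):
    """Simulate the island dawn by dawn after the public announcement.

    Returns (dawn, departures): the first dawn on which anyone leaves,
    and the indices of those who leave.  Islanders 0..num_blue-1 have
    blue eyes; the rest have green eyes.
    """
    eyes = ["blue"] * num_blue + ["green"] * num_green
    lower_bound = 1  # common-knowledge lower bound on the blue-eyed count
    dawn = 0
    while True:
        dawn += 1
        departures = []
        for i in range(len(eyes)):
            blue_seen = sum(1 for j, c in enumerate(eyes)
                            if j != i and c == "blue")
            # If islander i sees fewer blue-eyed people than the common
            # lower bound, her own eyes must be blue, so she leaves.
            if blue_seen < lower_bound:
                departures.append(i)
        if departures:
            return dawn, departures
        # Nobody left, and everyone observes this: it becomes common
        # knowledge that there are at least lower_bound + 1 blue-eyed people.
        lower_bound += 1

print(simulate(1, 9))  # (1, [0]): a lone blue-eyed islander leaves at once
print(simulate(4, 6))  # (4, [0, 1, 2, 3]): all four leave at the 4th dawn
```

Running it for k = 1, ..., n confirms the answer above: exactly the blue-eyed islanders leave, at the kth dawn.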

Modal logic
Common knowledge can be given a logical definition in multi-modal logic systems in which the modal operators are interpreted epistemically. At the propositional level, such systems are extensions of propositional logic. The extension consists of the introduction of a group G of agents and of n modal operators $$K_i$$ (with i = 1, ..., n), with the intended meaning that "agent i knows." Thus $$K_i \varphi$$ (where $$\varphi$$ is a formula of the calculus) is read "agent i knows $$\varphi$$." We can define an operator $$E_G$$, with the intended meaning "everyone in group G knows," by the axiom


 * $$E_G \varphi \Leftrightarrow \bigwedge_{i \in G} K_i \varphi$$.

By abbreviating $$E_G E_G^{n-1} \varphi$$ as $$E_G^n \varphi$$ and defining $$E_G^0 \varphi = \varphi$$, we can then define common knowledge by the axiom


 * $$C_G \varphi \Leftrightarrow \bigwedge_{n = 1}^{\infty} E_G^n \varphi$$

There is, however, a complication. The languages of epistemic logic are usually finitary, whereas the axiom above defines common knowledge as an infinite conjunction of formulas, hence not a well-formed formula of the language. To overcome this difficulty, a fixed-point definition of common knowledge can be given. Intuitively, common knowledge is the fixed point of the "equation" $$C_G \varphi \Leftrightarrow E_G (\varphi \wedge C_G \varphi)$$. On this basis, an induction rule can be used: to infer common knowledge of $$\varphi$$, it suffices to find a formula $$\psi$$ such that $$\psi$$ implies $$E_G (\varphi \wedge \psi)$$.
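On a finite Kripke model the fixed point can be computed directly. The sketch below (an illustrative model of our own; the names are not standard API) represents each agent's knowledge by the set of worlds she considers possible at each world, and iterates the "everyone knows" operator until it stabilises, which yields $$C_G$$ evaluated on that model.

```python
# Two agents over worlds {0, 1, 2}.  R[i][w] is the set of worlds agent i
# considers possible at world w (reflexive here, as befits knowledge).
Ra = {0: {0}, 1: {1, 2}, 2: {1, 2}}
Rb = {0: {0, 1}, 1: {0, 1}, 2: {2}}
R = [Ra, Rb]

def K(R_i, event):
    """Worlds where agent i knows `event`: every world she considers
    possible there lies inside the event."""
    return {w for w in R_i if R_i[w] <= event}

def E(R, event):
    """Worlds where everyone knows `event` (the E_G operator)."""
    out = set(R[0])
    for R_i in R:
        out &= K(R_i, event)
    return out

def C(R, event):
    """Common knowledge: iterate E until a fixed point is reached.
    Because each R_i[w] contains w, the chain E(e), E(E(e)), ... is
    decreasing, so on a finite model it stabilises at the intersection
    of all iterates."""
    X = E(R, event)
    while E(R, X) != X:
        X = E(R, X)
    return X

print(E(R, {0, 1}))  # {0}: at world 0 every agent knows the event ...
print(C(R, {0, 1}))  # set(): ... yet it is common knowledge nowhere
```

The example makes the gap between "everyone knows" and common knowledge concrete: E holds at world 0, but no iterate beyond the first survives.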

Set theoretic
Alternatively, common knowledge has been formalized using set theory (most notably by Robert Aumann). Start with a set of states S, and define an event e as a subset of S. For each agent i, define a partition Pi of S, and write Pi(s) for the element of the partition containing the state s. This partition represents agent i's state of knowledge: in state s, agent i knows that one of the states in Pi(s) obtains, but not which one.

We can now define a knowledge function $$K_i$$ in the following way:


 * $$K_i(e) = \{ s \in S \mid P_i(s) \subseteq e\}$$

That is, $$K_i(e)$$ is the set of states in which agent i knows that event e obtains.

Similar to the modal logic formulation above, we can define an operator for the idea that "everyone knows e".


 * $$E(e) = \bigcap_i K_i(e)$$

As with the modal operator, we will iterate the E function, $$E^1(e) = E(e)$$ and $$E^{n+1}(e) = E(E^{n}(e))$$. Using this we can then define a common knowledge function,


 * $$C(e) = \bigcap_{n=1}^{\infty} E^n(e)$$
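On a small finite state space, these definitions can be carried out directly. In the sketch below (an illustrative example of our own devising), each agent's partition is a list of blocks, and $$C(e)$$ is obtained by iterating E until the decreasing chain $$E(e) \supseteq E^2(e) \supseteq \dots$$ stabilises, as it must when S is finite.

```python
S = {1, 2, 3, 4}
# Each agent's knowledge partition of S (hypothetical example).
P = {
    "Alice": [{1, 2}, {3, 4}],
    "Bob":   [{1, 2}, {3}, {4}],
}

def cell(partition, s):
    """P_i(s): the block of the partition containing state s."""
    return next(block for block in partition if s in block)

def K(i, e):
    """K_i(e): the states in which agent i knows that event e obtains."""
    return {s for s in S if cell(P[i], s) <= e}

def E(e):
    """E(e): the states in which every agent knows e."""
    out = set(S)
    for i in P:
        out &= K(i, e)
    return out

def C(e):
    """C(e): the intersection of E(e), E(E(e)), ...; since E(e) is a
    subset of e for partitions, iterating E terminates on a finite S."""
    X = E(e)
    while E(X) != X:
        X = E(X)
    return X

e = {1, 2, 3}
print(K("Bob", e))  # {1, 2, 3}: Bob's cells {1,2} and {3} lie inside e
print(E(e))         # {1, 2}: at state 3, Alice cannot rule out state 4
print(C(e))         # {1, 2}: on this block, e is common knowledge
```

Note that $$C(e)$$ is smaller than each $$K_i(e)$$: common knowledge of e holds only on blocks that both partitions agree on, here the block {1, 2}.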

Applications
Common knowledge was used by David Lewis in his pioneering game-theoretical account of convention. In this sense, common knowledge is a concept still central for linguists and philosophers of language (see Clark 1996) maintaining a Lewisian, conventionalist account of language.

Robert Aumann introduced a set-theoretical formulation of common knowledge (theoretically equivalent to the one given above) and used it to prove the so-called "agreement theorem": if two agents have a common prior probability distribution, and their posterior probabilities for a given event are common knowledge, then those posterior probabilities must be equal. A result based on the agreement theorem and proven by Milgrom shows that, given certain conditions on market efficiency and information, speculative trade is impossible.

The concept of common knowledge is central in game theory. For many years it was thought that the assumption of common knowledge of rationality among the players was fundamental. It turns out (Aumann and Brandenburger 1995) that, in two-player games, common knowledge of rationality is not needed as an epistemic condition for Nash equilibrium strategies.

Computer scientists use languages incorporating epistemic logics (and common knowledge) to reason about distributed systems. Such systems can be based on logics more complicated than simple propositional epistemic logic; see Wooldridge, Reasoning about Rational Agents (2000), in which a first-order logic incorporating epistemic and temporal operators is used, or van der Hoek et al., "Alternating Time Epistemic Logic".