Scott Aaronson does a very nice job explaining exactly what it is that William proved. Instead of replicating the whole endeavor, I’ll send you there. I’ll spend just a couple of paragraphs fixing some notation and recalling standard definitions; if you’re already familiar with the subject then you might skip past it.

In Scott’s post, he discusses the hierarchy of four kinds of quantum correlation classes, which I’ll give explicit names to here. Formally, these classes are sets of bipartite probability distributions $p(a, b \mid x, y)$. $C_q$ (the $q$ stands for quantum) is the class of distributions which can be generated by two non-communicating parties sharing finite-dimensional entanglement. $C_{qs}$ (the $s$ stands for spatial) is the class which can be generated with infinite-dimensional entanglement in the usual spatial tensor product model, where Alice’s observables look like $A \otimes I$ and Bob’s like $I \otimes B$. $C_{qa}$ (the $a$ stands for approximation) consists of those distributions which can be obtained as a limit of distributions in $C_q$ (or equivalently as a limit of distributions in $C_{qs}$). $C_{qc}$ (the $c$ stands for commuting) consists of those distributions which can be generated in the commuting-operator model, where Alice and Bob’s observables don’t have a tensor product structure, but we still have $A_i B_j = B_j A_i$ for all $i, j$, i.e. all of Alice’s observables commute with all of Bob’s.

This framework covers general correlations, but now we’ll be interested only in *games* and strategies thereof. A game $G$ consists of question sets $X$ and $Y$, a probability distribution $\pi$ on $X \times Y$, answer sets $A$ and $B$, and a valuation function $V : X \times Y \times A \times B \to \{0, 1\}$.

If $t \in \{q, qs, qa, qc\}$, then a $C_t$-strategy is a correlation from $C_t$.

The game is played as follows:

An arbiter samples a pair $(x, y)$ from the distribution $\pi$ and sends $x$ to Alice and $y$ to Bob. Then Alice and Bob send answers $a$ and $b$ respectively to the arbiter, with joint probability $p(a, b \mid x, y)$. The value of $G$ with respect to strategy $p$, sometimes denoted $\omega(G, p)$, is just the expectation value of $V(x, y, a, b)$ when the game is played.

We’re particularly interested in *perfect strategies*, i.e. strategies $p$ such that $\omega(G, p) = 1$.

For each $t$, let $G_t$ be the set of games which have a perfect $C_t$-strategy, i.e. a strategy that wins with probability $1$. In other words, $G_t$ is the set of games which Alice and Bob can always win without communicating if they act according to the laws of quantum mechanics prescribed by $C_t$. (By performing real world experiments, we can in principle set $C_t$ to be “whatever laws the real world obeys”. We just have to be careful to perform the experiment at time-scales comparable to the time it takes a lightspeed signal to travel between Alice and Bob. Experimentalists at TU Delft and elsewhere have done this!)

**If you tuned out for the standard background stuff, this is a good place to tune back in.**

William’s paper gives two main results of interest to quantum theorists.

- $G_{qc}$ strictly contains $G_{qs}$, by demonstration of an explicit game with a perfect winning distribution which is in $C_{qc}$ but not in $C_{qs}$. This in fact shows that $C_{qc}$ strictly contains $C_{qs}$.
- Membership in $G_{qc}$ is an undecidable decision problem, by a recursive reduction from the word problem for groups.

This resolves what was an important open question: pinning down the complexity of the class $G_{qc}$. However, one should note that William only explicitly proves a lower bound, i.e. that deciding membership in $G_{qc}$ is at least as hard as the word problem for groups (which in turn is exactly as hard as HALT). It’s not hard to see from the proof that the reduction also goes the other way. Indeed, the main contribution of a previous work by Slofstra, Cleve, and Liu was to characterize the problem of deciding whether a linear system game has a perfect commuting-operator strategy as a special case of the word problem for groups.

However, there’s another way to see that $G_{qc}$ is at most as hard as HALT, and this method applies also to $G_q$. First, let’s fix some notions from computability theory.

**Definition.** Informally, a language $L$ is *recursively enumerable (r.e.)* if there is an algorithm to spit out all of the elements of $L$, in some order. More formally, we demand the existence of a Turing machine $M$ such that

- for all $n$, either $M(n) \in L$ or $M$ does not halt on input $n$, and
- for all $x \in L$, there is some $n$ so that $M(n) = x$.

A language $L$ is co-r.e. if the complement of $L$ is r.e.

Another way to think of r.e. and co-r.e. is as computability versions of NP and coNP. (For a broader view, see the Wikipedia page on the arithmetical hierarchy.) Suppose we have a language $L$ and a Turing machine $M$ with the property that $x \in L$ iff there is some $y$ such that $M$ accepts $(x, y)$, while $x \notin L$ iff for all $y$, $M$ rejects $(x, y)$. Then $L$ is recursively enumerable. (To prove this, pick an enumeration of the space of pairs $(x, y)$. Run $M$ on each pair in turn, and spit out $x$ whenever $M$ accepts $(x, y)$.) The converse is also true.
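This dovetailing argument is easy to make concrete. Here’s a minimal Python sketch; the divisor-checking verifier below is just a toy stand-in for the machine $M$:

```python
def enumerate_up_to_stage(verifier, inputs, max_stage):
    """Dovetail over pairs (x, y): at stage s, check every pair whose
    indices sum to s, so each pair is examined after finitely many steps."""
    found = []
    for s in range(max_stage + 1):
        for i in range(min(s + 1, len(inputs))):
            x, y = inputs[i], s - i
            if verifier(x, y) and x not in found:
                found.append(x)
    return found

# Toy stand-in for M: accept (x, y) iff y is a nontrivial divisor of x,
# so the language being enumerated is the set of composite numbers.
def has_divisor_witness(x, y):
    return 2 <= y < x and x % y == 0

composites = enumerate_up_to_stage(has_divisor_witness, list(range(30)), 60)
```

In the real construction, the stage loop runs forever and each accepting pair causes $x$ to be spat out; capping the stage count just keeps the sketch finite.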

The NPA hierarchy can be used to give a sequence of semidefinite programs whose values converge to the $C_{qc}$ value of a game from above. So if a game is in $G_{qc}$, then the value of the SDP at each level of its NPA hierarchy is $1$. If a game is not in $G_{qc}$, then there is some finite level of the NPA hierarchy witnessing this fact. This suggests the following recursive enumeration procedure for the set of games with no perfect $C_{qc}$-strategy: try the levels of the NPA hierarchy until one of them proves the value is less than $1$.

We conclude that $G_{qc}$ is co-r.e. Similarly, the Lasserre hierarchy converges to the $C_q$ value from below, so we can conclude that $G_q$ is r.e. Now we observe that this gives us an upper bound on the computational complexity of these problems.

**Remark:** If a language is r.e. or co-r.e., then it is decidable with access to a HALT oracle.

These upper bounds lead us to the implication between Slofstra’s two results. First, we need one more tool.

**Fact:** A language $L$ is decidable iff it is both r.e. and co-r.e.

The proof uses the duality approach familiar to computer scientists from {semidefinite, linear} programming.

If $G_q = G_{qc}$, then $G_{qc}$ is both r.e. and co-r.e. and therefore decidable. But $G_{qc}$ is undecidable; contradiction! This proof is “worse” than that given in Slofstra’s paper in the sense that it’s not effective: it doesn’t give an explicit game separating the two classes. In return for being less effective, this idea lets us say some other stuff that I think is kind of neat.

**Proof of fact.**

Suppose we’re given input $x$. We run the r.e. and co-r.e. algorithms in parallel: the r.e. algorithm enumerates $L$, “approximating the language from below”, while the co-r.e. algorithm enumerates the complement, “approximating it from above”. Since $x$ lies in exactly one of $L$ and its complement, one of the two algorithms will eventually spit out $x$, in finite (but arbitrarily long!) time. This certifies either that $x \in L$ or that $x \notin L$, depending on which algorithm spat it out.
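In executable miniature, the argument looks like this; the toy enumerators below stand in for the r.e. and co-r.e. algorithms (which in general may emit elements in any order, with repetitions):

```python
from itertools import count

def decide(x, enumerate_L, enumerate_complement):
    """Interleave an enumerator for L with one for its complement;
    whichever spits out x first certifies the answer. Since x lies in
    exactly one of the two sets, this always halts, though after an
    a priori unbounded number of steps."""
    gen_in, gen_out = enumerate_L(), enumerate_complement()
    while True:
        if next(gen_in) == x:
            return True
        if next(gen_out) == x:
            return False

# Toy instance: L is the set of even naturals.
evens = lambda: (2 * n for n in count())
odds = lambda: (2 * n + 1 for n in count())
```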

We can use this idea to tell us something about approximation schemes for deciding $G_{qa}$ in the sense of the NPA and Lasserre hierarchies.

If $G_{qa}$ is r.e., then $G_{qa} \neq G_{qc}$: otherwise, $G_{qc}$ would be both r.e. and co-r.e. and therefore decidable. In that case, the weak Tsirelson conjecture (that $C_{qa} = C_{qc}$) is resolved in the negative. But the weak Tsirelson conjecture is equivalent to the Connes embedding conjecture, one of the biggest open problems in operator algebras.

In other words, if $G_{qa}$ has a Lasserre-style approximation, then the Connes embedding conjecture is false. In fact, we get this conclusion even if membership in $G_{qa}$ has finite-length certificates with respect to any decidable proof system! Taking the contrapositive, if the Connes embedding conjecture is true, then the general mathematical problem of showing that particular games lie in $G_{qa}$ is hopeless. Hopefully, we would still be able to cross this “Turing barrier” for specific games that we care about.

Can we get any upper bound on the complexity of $G_{qa}$? If the answer is somehow very shockingly no, then we shouldn’t expect to find a proof very easily. As before, if $G_{qa}$ is not computable with respect to HALT, then it’s different from $G_{qc}$ and Connes is false. This would similarly separate $G_{qa}$ from $G_q$, which would be nice but less surprising. Similarly, proving the wild noncomputability of $G_{qs}$ would separate it from both $G_q$ and $G_{qc}$.

Why would this be shocking? It would violate a deeply held belief that one might call The Computable Church-Turing Thesis For People That Aren’t Logicians:

If a computational problem is defined in terms of natural computer science notions, then it’s no harder than HALT.

Of course, many people used to believe the Extended Church-Turing thesis for polynomial-time computations in the physical world, up until Shor’s algorithm happened. So the question remains: does $G_{qa}$ (or $G_{qs}$) have a recursive reduction from HALT?


There is a countable sequence of face-down cups and an infinite bag of balls. First, player one places balls underneath finitely many of the cups. Next, player two turns over some (possibly infinite) set of cups and looks to see if they have balls underneath them. Finally, player two points to a cup that she has not yet turned over and declares “this cup does not have a ball underneath it.” Player two wins if her guess is correct; otherwise player one wins.

What’s a good strategy for player two in this game?

(For ease of reading, we’ll consistently refer to player two with “she/her” pronouns and player one with “he/him” pronouns.)

First, let’s see that any deterministic strategy for player two has a counter-strategy that beats it. A deterministic strategy consists of some infinite test set $T$ of cups to flip over, together with a function $F$ which decides what cup to point to based on what was seen in the uncovered cups. To beat this, the first player picks any finite-support assignment $t$ of balls to the cups in $T$ and considers $c = F(t)$. He places balls in $T$ according to $t$ and then also places a ball at $c$.

In fact, we can take this proof a little bit farther and say that player one can still deterministically win even if the second player uses randomness in the second part (but not the first part) of her turn. Actually, we’ll prove something slightly weaker: if the second player chooses $T$ deterministically, then for any $\epsilon > 0$, the first player has a deterministic strategy guaranteeing that the second player wins with probability at most $\epsilon$.

Suppose $F$ is a randomized strategy in the following sense: for any assignment $t$ of balls to cups in $T$, $F(t)$ is an $\mathbb{N}$-valued random variable which dictates which cup player two will point to. Fix $t$ to be any finite-support assignment of balls to cups in $T$. $F(t)$ induces a probability distribution on the integers, i.e. $p_m = \Pr[F(t) = m]$. In particular, the chance that $F(t)$ is very large must go to $0$ as “very large” goes to infinity. To be more precise, we have that for any $\epsilon > 0$, there is some $N$ such that $\Pr[F(t) > N] \le \epsilon$. So the first player uses the following strategy: inside of $T$, he places balls according to $t$. Outside of $T$, he places balls under all cups up to cup $N$. With probability at least $1 - \epsilon$, the second player will point to a cup with index at most $N$ and lose.

In particular, our proof shows that if $F(t)$ is bounded, say if player two has access to only a finite amount of randomness (more precisely, if her random source has finite max-entropy), then player one can guarantee a win.

What if the second player is allowed to randomize her choice of $T$? In that case, how well she can do depends sensitively on the amount of randomness she has access to. One appropriate measure of “amount of randomness” is min-entropy. We say that a random variable $X$ has min-entropy $H_{\min}(X) = -\log_2 \max_x \Pr[X = x]$. In particular, a deterministic source has min-entropy $0$ and a uniform source with $2^k$ possibilities has min-entropy $k$.
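In code, the definition is a one-liner; the toy distributions below, given as dictionaries mapping outcomes to probabilities, illustrate the two extremes:

```python
import math

def min_entropy(dist):
    """Min-entropy of a finite distribution: -log2 of the largest point mass."""
    return -math.log2(max(dist.values()))

uniform = {x: 1 / 8 for x in range(8)}           # uniform over 2^3 outcomes
deterministic = {"fixed": 1.0}                    # no randomness at all
lopsided = {"a": 1 / 2, "b": 1 / 4, "c": 1 / 4}   # best guess succeeds half the time
```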

Operationally, $2^{-H_{\min}(X)}$ is the maximum *guessing probability* for $X$. If you use the source to generate a secret, then $2^{-H_{\min}(X)}$ is the maximum probability with which an adversary can guess your secret.

A source which has min-entropy $k$ is “almost as good” as a source which gives a uniform random string on $k$ bits. This is because of the existence of efficient randomness extractors. These are functions which, with the help of a little bit of independent random seed, can turn a source with min-entropy $k$ into something which is almost indistinguishable from a uniform distribution on close to $k$ bits. (For more on extractor theory, I recommend Chapter 6 of Salil Vadhan’s monograph on pseudorandomness.)
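As a toy illustration, here is the one-bit inner-product extractor, a much simpler gadget than the many-bit extractors Vadhan covers: for any fixed nonzero sample, its inner product with a uniform seed is an exactly unbiased bit.

```python
import random

def inner_product_extract(sample_bits, seed_bits):
    """Output the GF(2) inner product of the sample with the seed."""
    return sum(x & s for x, s in zip(sample_bits, seed_bits)) % 2

# A defective 8-bit source with only 2 bits of min-entropy:
# two genuinely random bits, six stuck at 1 (so samples are never all-zero).
def defective_source(rng):
    return [rng.randrange(2), rng.randrange(2), 1, 1, 1, 1, 1, 1]

rng = random.Random(0)
counts = [0, 0]
for _ in range(10_000):
    sample = defective_source(rng)
    seed = [rng.randrange(2) for _ in range(8)]
    counts[inner_product_extract(sample, seed)] += 1
```

The extracted bit comes out essentially unbiased even though six of the eight source bits are worthless.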

Because randomness is such a useful resource in computer science, one important problem is that of *randomness verification*. Suppose you buy a “random number generator,” which is to say you buy a box with a button and a display screen. Whenever you press the button, the display screen spits out a $0$ or a $1$. How can you tell whether the bits coming out are actually random? In other words, can you give some lower bound on the min-entropy of the random variable that you sample when you press the button $n$ times? How do you know, for example, that the first $n$ bits you observe aren’t just hardcoded into the box? We’ll return to our countable cup game to give a partial answer to this question.

Now that we have our notion of min-entropy fixed, we can state the following theorem:

- If the second player has access to a random source with min-entropy at least $k$, she has a strategy winning with probability at least $1 - 2^{-k}$, regardless of the first player’s strategy.
- If the min-entropy of the second player’s source is at most $k$, then there exists a first-player strategy ensuring that the second player wins with probability at most $1 - 2^{-k}$.

The second result is one we’ve basically already established, but let’s spell it out. Let $F$ be a randomized strategy for player two. Let $D$ be a deterministic strategy such that $F = D$ with probability at least $2^{-k}$; this exists by the definition of min-entropy. Then the first player plays the deterministic strategy which always wins against $D$, as detailed at the beginning of this post. (If you like, you can think that the second player starts by sampling her source for a random seed. The first player’s strategy is then to make an exact guess for what the seed is, and then play to win against that seed.)

To prove the first result, we need to find a set of deterministic strategies for player two such that a counter to one of these will lose to the rest. That way, the strategy from the previous paragraph is the best that player one can do. Player two’s strategy is as follows: fix some partition $\mathbb{N} = S_1 \cup \cdots \cup S_n$ into a collection of infinite sets (e.g. the equivalence classes mod $n$). On her turn, she picks $i \in \{1, \dots, n\}$ at random (in such a way that $\Pr[i = i_0] \le 2^{-k}$ for every $i_0$) and then looks under all of the cups outside of $S_i$. Finally, she points to a cup in $S_i$ with a higher index than any ball-containing cup outside of $S_i$. (More precisely, she can point to the minimal $m \in S_i$ such that $m$ is larger than the index of every ball she observed.)

To see that this works, notice that no matter how player one places his balls, there is a unique $j$ such that the highest-numbered ball is in $S_j$ (if he places no balls at all, player two wins automatically). Player two wins as long as she picks $i \neq j$. By the operational interpretation of min-entropy, this happens with probability at least $1 - 2^{-k}$.
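Here’s a quick simulation of the partition strategy with the mod-$n$ partition; the random ball placement below is just for illustration, since the guarantee holds against every placement:

```python
import random

def player_two_wins(balls, n, i):
    """She inspects every cup outside residue class i (mod n), then points
    at the first cup in class i whose index exceeds every ball she saw."""
    observed = [c for c in balls if c % n != i]
    threshold = max(observed, default=-1)
    # Minimal cup congruent to i (mod n) beyond everything observed:
    guess = next(c for c in range(threshold + 1, threshold + n + 1) if c % n == i)
    return guess not in balls

rng = random.Random(1)
n, trials = 8, 2000
wins = sum(
    player_two_wins(set(rng.sample(range(100), 10)), n, rng.randrange(n))
    for _ in range(trials)
)
```

With $n = 8$ classes and a uniform choice of $i$ (min-entropy $3$), the theorem promises a win rate of at least $1 - 2^{-3} = 0.875$.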

This theorem shows that our countable cup game is, in a somewhat trivial sense, a one-box randomness verification test: it is a task that can be achieved by a data-generating box iff the box has sufficient min-entropy. However, this test is not *effective* in the following sense: there is no single “test” strategy for the first player for which the win probability of the second player is determined by her min-entropy as in the theorem. To see this, we can use an argument similar to one we’ve seen before.

Fix any strategy for player one; let $B$ be the random variable denoting the set of places where he places balls. Since $B$ is surely finite, we have by Borel-Cantelli that $\Pr[N \in B] \to 0$ as $N \to \infty$. Therefore, for any $\epsilon > 0$, there is some $N$ such that $\Pr[N \in B] < \epsilon$. If player two uses the deterministic strategy “turn over no cups and then point to cup $N$”, she’ll win with probability at least $1 - \epsilon$, despite using no randomness at all.

We can try to solve this non-effectiveness problem by appealing to Kolmogorov complexity. The Kolmogorov complexity $K(x)$ of a string $x$ is equal to the length of the shortest program which can output that string. (This notion depends on the programming language one uses, but only up to an additive constant.) If the Kolmogorov complexity of the output string is greater than the number of particles in the box, then we know that the box couldn’t have come up with that string deterministically. Furthermore, if the box actually is giving us identical, independent random samples, this bound will eventually come into play. (The number of strings with Kolmogorov complexity at most $k$ is only as large as the number of programs of length at most $k$, say $2^{k+1}$ if our programming language has its source code in binary. But then of the $2^n$ strings of length $n$, only a $2^{-c+1}$ fraction of them have Kolmogorov complexity at most $n - c$.)
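The counting in that parenthetical is simple enough to sanity-check numerically:

```python
def low_complexity_fraction_bound(n, c):
    """At most 2^(n-c+1) - 1 binary programs have length <= n - c, and each
    outputs at most one string, so at most that many of the 2^n strings of
    length n can have Kolmogorov complexity at most n - c."""
    programs = 2 ** (n - c + 1) - 1
    return programs / 2 ** n
```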

Unfortunately for us, Kolmogorov complexity is uncomputable. (This makes the sentence at the beginning of the previous paragraph quite laughable.) This means that for any reasonable notion of “algorithm”, there is no algorithm which, given an arbitrary string and a natural number $k$, tells whether the Kolmogorov complexity of the string is at least $k$.

Generalizing a bit, if we don’t make any assumptions on the computational power of the boxes, then no deterministic algorithm should be able to serve as a good randomness test (by arguments similar to ones we used in analyzing the countable cup game).

Things get a little better if we do make some assumptions about the boxes. In particular, let’s modify our notion of Kolmogorov complexity to include a time parameter. Instead of only requiring that the box be at most a given size while outputting a given string, we also require that the output is achieved within a given time. (Think of this as the amount of time that we measure in our laboratory when we press the button and wait for the result.) This is now computable: just take all of the algorithms of at most the specified length, run them all for the specified amount of time, and see if any produce the string in question. Unfortunately again, this approach has some problems. First, it requires explicit bounds on the running time and on the size of the box’s source code. Secondly, computing the time-parameterized Kolmogorov complexity doesn’t seem easy: the naive algorithm just described takes time exponential in the length bound. (Possibly an argument similar to the proof of the time hierarchy theorem can give a lower bound on the time complexity of this problem.)

Furthermore, this approach as written only lets us prove that the box is not deterministic, but it doesn’t give us any nonzero lower bound on the amount of randomness used by the program. (One can imagine a kind of randomness-parameterized Kolmogorov complexity as follows: fix $n$. What is the length of the smallest program with access to $n$ bits of randomness which produces a given string with positive probability? Probably someone has considered this before, but I’m not privy to it.)

Let’s end on a positive note and discuss a setting where randomness certification *is* possible. This idea was introduced by Thomas Vidick and Umesh Vazirani in their paper on Certifiable Quantum Dice. They require that the box being tested comes in the form of two boxes which don’t communicate with each other. The test requires drawing samples from the boxes and then performing some statistical tests on part of the data. If the tests are passed, then

- The boxes make nontrivial use of shared quantum entanglement
- The rest of the data has min-entropy remaining.

Furthermore, the test only requires a small seed of randomness if we assume that there’s no active interference by outside parties. This is an amazing result: if you have access to a small amount of randomness, and somebody else gives you access to these quantum boxes, then you can generate exponentially much more (trusted, certified) randomness than you started with. (For a more formal statement, read the paper! The exposition is very nice, and the non-adversarial case is accessible to anyone with a bit of computer science background, no quantum required.)

**Is there any hope for a similar result in a one-box setting?** It seems like the ideas in the post dance around some kind of nice impossibility proof, but I don’t know what the right statement is. If you have any ideas on this or any other comments/questions about the above exposition, please leave a comment!


Countably many cooperating players stand in a line, indexed by $\mathbb{N}$. A neutral game master walks down the row of players, placing black and white hats on the players’ heads independently and uniformly at random. (In other words, the color of each player’s hat is decided by a fresh fair coin flip.) Afterwards, the game master goes through again, asking each player in turn to announce a guess for their own hat color. Each player hears the guesses of the players that came before them and sees all of the hats on the heads of people ahead of them. The players win if everyone except for the first person guesses correctly. (The first person, of course, is doomed to only guess their hat correctly 50% of the time, regardless of their strategy. However, the rest of the players have a chance to come out ahead by using the information provided by those behind them.)

The players conspire beforehand to come up with a strategy. Is it possible for them to cook up a strategy for which they always win? A bit more conservatively, how about one that wins with probability $1$?

It is somewhat well-known that in this game and similar ones, the players can always win if they make use of the Axiom of Choice. (It seems to me that this was first discovered by Yuval Gabay and Michael O’Connor in 2004 but not formally published until Hardin and Taylor’s 2008 paper. Despite this, I’ve usually heard it presented as folklore.) Choice is awfully nonconstructive and inelegant, so one might ask whether the use of the Axiom of Choice is necessary. Hardin and Taylor answer this in the affirmative, showing that some nontrivial form of Choice is required for this result. They do this by showing that if all sets of real numbers have the property of Baire, then there are no strategies that always win. The existence of sets of reals without the property of Baire is a consequence of the Axiom of Choice, and it is known that even Dependent Choice is not enough to prove it. Therefore, the amount of Choice required for this result lies somewhere between Dependent Choice and full Choice. (See Section 3.4 of Hardin and Taylor’s book for a more technical discussion of the role of the Axiom of Choice in hat games.)

Such matters are a bit out of place in an intro probability class. Professor Tamuz posed a slightly different question which gets at the same idea: demonstrate a *measurable* strategy which wins with probability $1$, or else show that none exists. (If you’re not familiar with measurability, an informal definition is that a player’s strategy is measurable if it always makes sense to ask “what is the probability that this player guesses that their own hat is black?”) The existence of non-measurable sets of reals is also a consequence of the Axiom of Choice, and it plays a crucial role in the Banach–Tarski Theorem.

In my notes, I first show how the players can win this game in a way that naturally extends the winning strategy for finite versions of the game. Next, I answer Professor Tamuz’s question in the negative by proving that all measurable strategies must fail with positive probability. The proof is slightly more technical than I’d like to write in a blog post, but I will show you something else as a nice warm-up.
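For concreteness, the finite-version strategy can be simulated directly. The sketch below assumes the standard parity convention, in which player 0’s announcement doubles as her (sacrificial) guess:

```python
import random
from functools import reduce
from operator import xor

def play_finite_hat_game(hats):
    """Hats are bits. Player 0 announces the XOR of all hats she sees;
    each later player recovers their own hat from that parity, the
    (correct) guesses heard so far, and the hats visible ahead."""
    guesses = []
    for i in range(len(hats)):
        if i == 0:
            guess = reduce(xor, hats[1:], 0)
        else:
            heard = reduce(xor, guesses[1:i], 0)   # hats 1..i-1, already revealed
            ahead = reduce(xor, hats[i + 1:], 0)   # hats i+1.., still visible
            guess = guesses[0] ^ heard ^ ahead     # isolates hat i
        guesses.append(guess)
    return guesses

rng = random.Random(2)
all_correct = True
for _ in range(200):
    hats = [rng.randrange(2) for _ in range(12)]
    guesses = play_finite_hat_game(hats)
    all_correct = all_correct and guesses[1:] == hats[1:]
```

Everyone except player 0 is always right; player 0 herself is right only half the time, as she must be.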

The main ingredient of the proof of non-measurability is the Kolmogorov zero-one law, which we’ll informally state a special case of now. Suppose we have an infinite sequence of independent random variables. (For example, consider the sequence of hat colors assigned by the game master in the countable hat game.) Consider an event which is a function of the sequence but is also independent of any finite subsequence. (For example: are infinitely many hats black?) Then Kolmogorov says this event either almost surely occurs or almost surely does not occur. That is, its probability is $0$ or $1$. (It is easy to see that our example has probability $1$, even by using very basic tools.)

I’ll illustrate the Kolmogorov zero-one law with an application I learned from a probability theory class with Professor Leonard Schulman. Suppose a drunk person is wandering on the integer number line, trying to find their way home. Each minute, they flip a weighted coin and travel one unit left if heads but one unit right if tails. In other words, let their coin flip at time $t$ be a random variable $X_t$ which is $-1$ with probability $p$ or $+1$ with probability $1 - p$. Then their position at time $T$ is the random variable $S_T = X_1 + X_2 + \cdots + X_T$.

Consider the window $[-k, k]$. What is the probability that the drunk stays inside this prison for an infinitely long time? Intuitively, this shouldn’t happen: it would equate to the existence of an invisible cage guiding the drunk’s coin flips. In order to escape the cage, the drunk need only get $2k + 1$ consecutive flips of the same side. The chance that any particular $2k + 1$ consecutive flips come up the same is only $p^{2k+1} + (1-p)^{2k+1}$, but if we try infinitely many times, it’ll surely happen at some point. (To formalize this, apply Borel-Cantelli, a very simple and very useful fact about probability, to infinitely many disjoint blocks of $2k + 1$ flips.)
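A quick simulation agrees that the cage can’t hold the drunk; the step cap below is an ad hoc stand-in for “running forever”:

```python
import random

def escape_time(k, p, rng, cap=1_000_000):
    """Walk one unit left with probability p (else right) until the
    position leaves [-k, k]; return the number of steps taken."""
    position = 0
    for t in range(1, cap + 1):
        position += -1 if rng.random() < p else 1
        if abs(position) > k:
            return t
    return None  # never escaped within the cap

rng = random.Random(3)
times = [escape_time(5, 0.5, rng) for _ in range(500)]
```

Every run escapes, typically after a few dozen steps (the expected exit time of $[-5, 5]$ for the fair walk is $36$).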

Rephrasing, we have that with probability $1$, the drunk will be to the right of $k$ or to the left of $-k$ infinitely often. (Otherwise, there would be a largest time at which they were outside of $[-k, k]$, and they’d stay inside for the rest of time!) Call these events $R$ and $L$. Each of these events is independent of any finite number of steps of the walk, so we can apply Kolmogorov to say that they each have probability $0$ or $1$. Since their union has probability $1$, at least one of them must have probability $1$.

When $p < \frac{1}{2}$, we think of the drunk as having a rightward drift, so “obviously” the probability that they’re to the right of $k$ infinitely many times should be bigger than the probability that they’re to the left of $-k$ infinitely many times. This “obvious” fact can be proven precisely with a technique known as *coupling*. This requires a bit more about what a probability space is than we’re using in this post; see lecture 25 of Prof. Tamuz’s lecture notes for a demonstration of the idea. This shows that $\Pr[R] = 1$ when $p < \frac{1}{2}$ and $\Pr[L] = 1$ when $p > \frac{1}{2}$.

When $p = \frac{1}{2}$, we must have $\Pr[R] = \Pr[L]$ by symmetry. (Formally, consider the measure-preserving transformation on the space of coin flips which reverses every coin.) Then we must have that both are equal to $1$. This is enough to show that the random walk is *recurrent*, which is to say that no matter where their home is, with probability $1$, the drunk will find it infinitely many times.

In the proof of the nonmeasurability of winning strategies in the countable hat game, we use Kolmogorov in the same way as in the preceding paragraph, but with one minor change. As above, we consider two events (that the first player guesses black and that the first player guesses white) which have the same probability and to which we can apply Kolmogorov. However, these events are mutually exclusive and exhaustive, so by the inequalities $0 + 0 < 1 < 1 + 1$, we achieve a contradiction. We conclude that the first player’s strategy is non-measurable. To see this argument in full detail, check out my notes. There are two followup questions which I’ve yet to resolve:

- We show that all measurable strategies win with probability strictly less than $1$. Might it be true that all measurable strategies win with probability $0$?
- Consider the game where we don’t allow the players to hear the guesses of others.

If you want to learn much more about hat games, do check out Hardin and Taylor’s book, The Mathematics of Coordinated Inference. The first three chapters are written in a fairly accessible manner and should be an enjoyable read for undergrads with a touch of set theory or topology under their belt.
