Some problems take so long to solve that we consider them essentially intractable

Proving intractability is actually quite difficult; this section is essentially a more general look at P versus NP and related questions

Hierarchy Theorems

Common sense suggests that giving a Turing machine more time or more space increases the class of problems it can solve

Subject to certain conditions, the hierarchy theorems confirm this intuition

Definition: A function f: ℕ → ℕ, where f(n) is at least O(log n), is called space constructible if there is some O(f(n)) space TM that maps 1^n to the binary representation of f(n)

Commonly occurring functions are space constructible, like log n, n log n, and n²

This definition is useful because it lets us prove that an asymptotically larger space bound allows a machine to decide strictly more languages

Space Hierarchy Theorem: For any space constructible function f: ℕ → ℕ, a language A exists that is decidable in O(f(n)) space but not in o(f(n)) space

We describe A in terms of an algorithm D that decides it in O(f(n)) space and guarantees it is different from any language decidable in o(f(n)) space, using the diagonalization method

“On input w:

  1. Let n be the length of w.
  2. Compute f(n) using space constructibility and mark off this much tape. If later stages ever attempt to use more, reject.
  3. If w is not of the form ⟨M⟩10* for some TM M, reject.
  4. Simulate M on w while counting the number of steps used in the simulation. If the count ever exceeds 2^f(n), reject.
  5. If M accepts, reject. If M rejects, accept.”
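
The diagonalization at the heart of D can be sketched concretely. The toy below ignores all resource bounds and encodings: plain Python predicates stand in for TMs, and the input 1^i stands in for ⟨M_i⟩10*; it only illustrates the flipping step.

```python
# Toy diagonalization: given any enumeration of deciders (Python predicates
# standing in for TMs), build a language D that none of them decides.
deciders = [
    lambda w: True,              # accepts everything
    lambda w: False,             # rejects everything
    lambda w: len(w) % 2 == 0,   # accepts even-length strings
]

def D(w):
    i = len(w)                   # treat the input 1^i as the encoding of machine i
    if w == "1" * i and i < len(deciders):
        return not deciders[i](w)   # step 5: flip machine i's answer
    return False

# D differs from the i-th decider on the input 1^i, so D equals none of them
for i, M in enumerate(deciders):
    assert D("1" * i) != M("1" * i)
```

The real proof adds the space and step bounds so that D itself runs in O(f(n)) space, which is what forces the contradiction against o(f(n)) space deciders.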

An important detail is that we use the 10* padding to guarantee the asymptotic behavior kicks in for some input: a machine M running in o(f(n)) space fits within the marked-off tape on all sufficiently long inputs ⟨M⟩10^k

This machine is amazingly contrived but does work! It must behave differently from any decider that uses o(f(n)) space, and assuming otherwise leads to a clear contradiction

Corollary: If f₁(n) is o(f₂(n)) and f₂ is space constructible, then SPACE(f₁(n)) ⊊ SPACE(f₂(n))

With some work, we can show that n^c is space constructible for any rational number c > 0, and since the rationals are dense in the reals, we can extend the separation to any real exponents

Corollary: For any two real numbers 0 ≤ ε₁ < ε₂, SPACE(n^ε₁) ⊊ SPACE(n^ε₂)

Corollary: NL ⊊ PSPACE
This follows since we already had NL ⊆ SPACE(log² n) from Savitch’s theorem, and SPACE(log² n) ⊊ SPACE(n) ⊆ PSPACE by the hierarchy theorem

Furthermore, this establishes the main objective of this section, which is the existence of intractable problems

Corollary: PSPACE ⊊ EXPSPACE, where EXPSPACE = ⋃_k SPACE(2^(n^k))

Definition: A function t: ℕ → ℕ, where t(n) is at least n log n, is called time constructible if there is some O(t(n)) time TM that maps 1^n to the binary representation of t(n)

This behaves analogously to our class of space constructible functions

Time Hierarchy Theorem: For any time constructible function t: ℕ → ℕ, a language A exists that is decidable in O(t(n)) time but not decidable in time o(t(n)/log t(n))

The construction of D is exactly the same, except we decrement a counter initialized to t(n)/log t(n) at every simulated step and reject if it hits 0

We can simulate each step of M at a constant-factor overhead by storing M’s transition function and current state on separate tracks of the tape, dragging this information along with the simulated head

The essential difficulty is actually the need to update the counter each step, which adds a log t(n) factor to the whole process, bringing the algorithm to the desired O(t(n)) time
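
The counter bookkeeping is easy to model. Below is a toy sketch (not the tape-track implementation itself): the count is kept in binary, so one decrement touches at most len(bits), i.e. O(log t(n)), bits per simulated step.

```python
# Toy model of the step counter in the time hierarchy simulation: the count
# t(n)/log t(n) is stored in binary, so each decrement costs O(log t(n)).
def decrement(bits):
    """Decrement a big-endian bit list in place; return how many bits were touched."""
    for touched, i in enumerate(range(len(bits) - 1, -1, -1), start=1):
        if bits[i] == 1:
            bits[i] = 0
            return touched
        bits[i] = 1          # borrow: 0 becomes 1, keep moving left
    return len(bits)         # counter was already zero (wrapped around)

bits = [1, 0, 0]             # the count 4
assert decrement(bits) == 3 and bits == [0, 1, 1]   # 4 -> 3, touched 3 bits
assert decrement(bits) == 1 and bits == [0, 1, 0]   # 3 -> 2, touched 1 bit
```

Since the counter has O(log t(n)) bits, running it down from t(n)/log t(n) costs O(t(n)) total, which is where the desired bound comes from.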

Then our proof works with the same contradiction argument

A tighter time hierarchy theorem is conceivable, especially if we could simulate a Turing machine within its time allotment at less than a logarithmic overhead, but this has not been proven

Corollary: If t₁(n) is o(t₂(n)/log t₂(n)) and t₂ is time constructible, then TIME(t₁(n)) ⊊ TIME(t₂(n))

Corollary: For any two real numbers 1 ≤ ε₁ < ε₂, TIME(n^ε₁) ⊊ TIME(n^ε₂)

Corollary: P ⊊ EXPTIME

Therefore, intractable problems do exist, in both time and space

Exponential Space Completeness

The hierarchy theorems establish the existence of intractable problems but only implicitly

We use completeness to demonstrate actual intractable problems

Let ↑ be the exponentiation operation
If R is a regular expression and k is a nonnegative integer, we write R↑k to mean the concatenation of R with itself k times
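
The point of ↑ is succinctness: writing the exponent k in binary takes only O(log k) symbols, while expanding R↑k into an ordinary expression multiplies the length by k. A small sketch (using `|` for union, Python-style):

```python
# Writing R↑k with the exponent in binary keeps the expression short, but
# expanding the exponent away blows the length up exponentially.
def expand(r: str, k: int) -> str:
    """Rewrite r↑k as an ordinary regular expression: k copies of r."""
    return r * k

r = "(0|1)"
k = 2 ** 20
compact_len = len(r) + len(format(k, "b"))   # length of "r↑k", exponent in binary
expanded_len = len(expand(r, k))
assert compact_len < 30
assert expanded_len == len(r) * k            # 5 * 2^20 symbols after expansion
```

This exponential gap between the compact and expanded forms is exactly what the EXPSPACE-hardness proof below exploits.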

Generalized regular expressions allow the exponentiation operation ↑ in addition to the usual regular operations ∪, ∘, and *

Checking the equivalence of two DFAs can be done in polynomial time with a table-filling algorithm over the product automaton, but this does not apply to generalized regular expressions, because these expressions can have exponential size when expanded

Definition: A language B is EXPSPACE-complete if B ∈ EXPSPACE and every A in EXPSPACE is polynomial time reducible to B

Theorem: EQ_REX↑ (the equivalence problem for regular expressions with exponentiation) is EXPSPACE-complete

First, note that we can test that two NFAs with a total of n states are inequivalent by nondeterministically guessing an input one symbol at a time, feeding each symbol into both NFAs while tracking the set of states each could be in, and accepting if one ever accepts without the other

This takes O(n) nondeterministic space, so Savitch’s theorem gives us a deterministic O(n²) space algorithm for the problem
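
The subset-tracking idea can be made concrete. The sketch below is a deterministic brute-force version (a BFS over pairs of reachable state sets, standing in for the nondeterministic guessing plus Savitch’s recursion); the dict encoding of NFAs is chosen purely for illustration.

```python
from collections import deque

def nfa_inequivalent(delta1, start1, acc1, delta2, start2, acc2, alphabet):
    """Return True iff some string is accepted by exactly one of the two NFAs,
    tracking the set of states each NFA could be in after the input so far."""
    def step(states, delta, sym):
        return frozenset(t for s in states for t in delta.get((s, sym), ()))
    start = (frozenset({start1}), frozenset({start2}))
    seen, queue = {start}, deque([start])
    while queue:
        s1, s2 = queue.popleft()
        if bool(s1 & acc1) != bool(s2 & acc2):
            return True                       # one accepts, the other rejects
        for a in alphabet:
            nxt = (step(s1, delta1, a), step(s2, delta2, a))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False                              # all reachable subset pairs agree

# NFA accepting strings ending in 0 vs. NFA accepting strings ending in 1
ends0 = {(0, "0"): (0, 1), (0, "1"): (0,)}
ends1 = {(0, "1"): (0, 1), (0, "0"): (0,)}
assert nfa_inequivalent(ends0, 0, {1}, ends1, 0, {1}, "01") is True
assert nfa_inequivalent(ends0, 0, {1}, ends0, 0, {1}, "01") is False
```

Note the state space here is pairs of subsets, exponential in n, which is why the theorem settles for a space bound rather than a time bound.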

Then we can determine equivalence in EXPSPACE by converting the two expressions to ordinary regular expressions (of at most exponential size), then to NFAs, and then using the above deterministic algorithm

The tricky part is showing that EQ_REX↑ is EXPSPACE-hard

Let A be a language that is decided by a TM M running in space 2^(n^k) for some constant k

Our reduction will produce a pair ⟨R, Δ*⟩: the expression Δ* generates the set of all possible strings over the alphabet Δ, and R is an expression which we set up to generate all strings that aren’t rejecting computation histories of M on w; then R is equivalent to Δ* exactly when M does not reject w, i.e. exactly when w ∈ A

We divide R = R_bad-start ∪ R_bad-window ∪ R_bad-reject, representing the different places a string could fail to be a proper rejecting computation history

The start configuration looks like q₀w₁w₂⋯wₙ␣⋯␣, padded with blanks to length 2^(n^k), and we match the strings that fail to begin with it using the union of several subexpressions

For R_bad-start, we write (Δ − q₀) Δ* for strings whose first symbol is wrong, where Δ − q₀ is a shorthand for the union of all characters in Δ except q₀, and handle the positions of w₁, …, wₙ similarly

The subexpression Δ^(n+1) (Δ ∪ ε)^(2^(n^k) − n − 2) (Δ − ␣) Δ* generates all strings that fail to contain a blank symbol in some position n + 2 through 2^(n^k)

Note that while the exponentiation operator wasn’t strictly necessary for the first few subexpressions, there was no way we could have matched the exponentially many blank positions with a polynomially sized expression without it

R_bad-reject is trivial: (Δ − q_reject)* generates exactly the strings that never contain the reject state

We recall the idea from the proof of the Cook-Levin theorem that we can check the validity of a transition by examining every window of three consecutive symbols against the corresponding three symbols one configuration later

Thus, R_bad-window = ⋃ Δ* abc Δ^(2^(n^k) − 2) def Δ*, where the union ranges over all window pairs (abc, def) such that abc doesn’t yield def according to the transition function, a set of constant size

The largest parts of R are the exponents, of magnitude roughly 2^(n^k), which are written in binary; thus the length of R is polynomial in n and our mapping is complete

Relativization

Ultimately, the proof that EQ_REX↑ is intractable rests on the hierarchy theorems’ use of the diagonalization method

This raises the question: can we use a similar method to show a strict hierarchy between P and NP? Unfortunately, this would be tricky, as the method of relativization shows

Definition: An oracle for a language A is a device that is capable of reporting whether any string w is a member of A

An oracle Turing machine M^A is a modified Turing machine that has the additional capability of querying an oracle for A in one step, by writing a string onto a special oracle tape

M^A is said to be computing relative to A, hence our term relativization

Definition: Let P^A be the class of languages decidable with a polynomial time oracle Turing machine that uses oracle A, and define NP^A similarly

It naturally follows that NP ⊆ P^SAT and coNP ⊆ P^SAT

Furthermore, NP^SAT contains languages we believe are not in NP, such as the set of nonminimal Boolean formulas (Boolean formulas that are not minimal): a nondeterministic machine can guess a shorter equivalent formula and verify the equivalence with queries to the SAT oracle

Theorem: An oracle B exists such that P^B = NP^B, and an oracle A exists such that P^A ≠ NP^A

This is a major barrier towards the use of diagonalization as a technique for proving P ≠ NP, since diagonalization ought to be invariant towards the inclusion of oracles: such a proof would relativize to show P^B ≠ NP^B for every oracle B, contradicting the theorem

How do we prove this theorem?

Proof:
Let B be any PSPACE-complete problem, such as TQBF

Then NP^TQBF ⊆ NPSPACE ⊆ PSPACE ⊆ P^TQBF: the first containment follows because oracle queries to TQBF can be answered directly in polynomial space, the second by Savitch’s theorem, and the third because TQBF is PSPACE-complete; hence P^TQBF = NP^TQBF

Let L_A = {w | some string of length |w| is in A}, i.e. the collection of all strings for which a string of equal length appears in A

We can easily construct a nondeterministic polynomial time machine with an oracle for A that checks all strings of length n, one per branch, so L_A ∈ NP^A for every oracle A

We index all polynomial time oracle TMs M₁, M₂, …, and assume for simplicity that M_i runs in time n^i (this can be enforced fairly easily)

We decide what belongs to A in stages; for each i, we declare strings part of A (or not) such that M_i cannot decide L_A

We choose n greater than the length of any string already declared in or out of A, with n^i < 2^n, and run M_i on input 1^n, responding with a no to any oracle queries about undeclared strings

M_i cannot query every string of length n in n^i time, so it is missing information about A

If M_i finally answers false, then we add to A one string of length n that it hasn’t queried, and if it answers true then we leave A with no strings of size n, so no matter what, M_i answers 1^n incorrectly
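
The counting argument behind this stage is easy to check numerically. A toy sketch, with random queries standing in for whatever M_i happens to ask:

```python
import random

# A machine running in time n**i can make at most n**i oracle queries, but
# there are 2**n strings of length n, so some string is never queried and the
# adversary is free to put it in (or keep it out of) the oracle A.
n, i = 16, 2
budget = n ** i                       # n^i = 256 queries vs. 2^n = 65536 strings
queried = {format(random.randrange(2 ** n), f"0{n}b") for _ in range(budget)}
unqueried = [s for s in (format(j, f"0{n}b") for j in range(2 ** n))
             if s not in queried]
assert len(queried) <= budget < 2 ** n
assert unqueried                      # an unqueried string of length n exists
```

Whatever answer M_i commits to, the adversary settles the unqueried string the opposite way, which is exactly how A is built.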

In summary, a method to solve P versus NP must somehow analyze computations, not just simulate machines

Circuit Complexity

Boolean circuits are the theoretical counterpart to digital circuits

Many believe that circuits are a convenient model for approaching theorems on computation, and they also provide a neat alternative proof that 3SAT is NP-complete

Definition: A Boolean circuit is a collection of gates and inputs connected by wires

  • Cycles are not permitted, and gates take the form of AND, OR, and NOT
  • Wires carry the Boolean values 0 and 1, and gates compute their respective Boolean functions
  • One gate is designated as the output gate
  • The size of a circuit is the number of gates it contains
  • The depth of a circuit is the length of the longest path from an input variable to an output gate
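
A minimal evaluator for this definition is below; the encoding of a circuit as a topologically ordered gate list whose last gate is the output is my own choice for illustration, not standard notation.

```python
# Evaluate a Boolean circuit given as a topologically ordered list of gates.
# Each gate is (op, args); an argument index < n refers to an input variable,
# and an index >= n refers to the output wire of an earlier gate.
def eval_circuit(gates, inputs):
    wires = list(inputs)
    for op, args in gates:
        vals = [wires[a] for a in args]
        if op == "AND":
            wires.append(vals[0] and vals[1])
        elif op == "OR":
            wires.append(vals[0] or vals[1])
        elif op == "NOT":
            wires.append(not vals[0])
    return wires[-1]          # the last gate is the designated output gate

# Circuit for x0 AND (NOT x1): 2 inputs, size 2 (two gates), depth 2
gates = [("NOT", (1,)), ("AND", (0, 2))]
assert eval_circuit(gates, [True, False]) is True
assert eval_circuit(gates, [False, False]) is False
```

The acyclicity requirement is what makes the single topological pass sufficient.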

To a Boolean circuit C with n input variables, we associate a function f_C: {0,1}^n → {0,1} that matches C’s behavior

In order to handle languages with variable length, we need some broader concepts

Definition: A circuit family C is an infinite list of circuits (C₀, C₁, C₂, …), where C_n has n input variables

We say C decides a language A over {0,1} if for every string w, w ∈ A if and only if C_n(w) = 1, where n is the length of w

A circuit family is minimal if every C_i in the list has no smaller equivalent circuit

The size complexity of a circuit family is the function f: ℕ → ℕ, where f(n) is the size of C_n

We define depth complexity similarly

Definition: The circuit complexity of a language is the size complexity of a minimal circuit family for that language, and the circuit depth complexity is defined similarly using depth instead of size

Theorem: Let t: ℕ → ℕ be a function, where t(n) ≥ n; if A ∈ TIME(t(n)), then A has circuit complexity O(t²(n))

The key idea in proving this is that we can construct a circuit that simulates M on inputs of length n, one row of circuitry for each of the t(n) steps in M’s computation

Let M decide A in time t(n) and let w be an input of length n to M

We look at a tableau for M on w, a table whose rows are configurations of M, where the top row contains the start configuration of M on w and the i-th row contains the configuration after the i-th step of the computation

  • For convenience, this tableau uses a composite character for the state of the tape head and the symbol at its location
  • So each entry of the tableau contains either a tape symbol s or a composite of state q and symbol s, notated ⟨q, s⟩

We simplify by taking M to only accept when the head is on the leftmost tape cell, marked with the accepting composite, and to remain there indefinitely

Importantly, the content of each cell is determined by the (at most) three cells bordering it in the row above

With this in mind, we can design the circuit so a block of gates represents each cell, with a wire for each possible value in Γ ∪ (Q × Γ), using a combination of AND and OR gates to wire cells to the three cells above them

The cells in the first row are wired directly to the input variables, and the output gate is assigned to the wire denoting that the leftmost cell of the last row contains the accepting composite

It’s clear that the size of this circuit is proportional to t²(n): the tableau has t(n)² cells, and each cell requires only a constant number of gates

This theorem links circuit complexity with time complexity; if we demonstrate that some language in NP has superpolynomial circuit complexity, then P ≠ NP

We can also derive a few more cool results with these tools

Let CIRCUIT-SAT = {⟨C⟩ | C is a satisfiable Boolean circuit}

Theorem: CIRCUIT-SAT is NP-complete

To prove this, we must give, for each language A in NP, a polynomial time reduction mapping each string w to a Boolean circuit C such that w ∈ A if and only if C is satisfiable

Because A is in NP, it has a polynomial time verifier V that takes inputs of the form ⟨w, c⟩, where c is a certificate

We obtain the circuit simulating V and fill in the inputs that correspond to w with the symbols of w, leaving the certificate inputs as variables; if the resulting circuit is satisfiable then a certificate c exists, meaning w is in A (and the converse also holds)

If the running time of the verifier is O(n^k) then the size of the circuit is O(n^(2k)), so the reduction is polynomial

Theorem: 3SAT is NP-complete (recall how we proved this before)

Without going into the details, it should be clear that a circuit can be reduced to a Boolean formula by representing each wire with a variable and converting each gate into a small subformula relating its input and output wires

The size of the resulting expression is proportional to the size of the circuit, so the reduction is polynomial time
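
The wire-per-variable translation can be sketched as follows (the helper name and the string encoding of formulas are mine, and the final rewriting into 3cnf clauses is omitted): each gate becomes an equivalence forcing its output wire to equal its function of its input wires, conjoined with an assertion that the output wire is true.

```python
# Turn a circuit (topologically ordered gate list over n input wires) into a
# Boolean formula: one variable w<i> per wire, one equivalence per gate.
def circuit_to_formula(gates, n_inputs):
    parts = []
    for out, (op, args) in enumerate(gates, start=n_inputs):
        if op == "AND":
            parts.append(f"(w{out} <-> (w{args[0]} & w{args[1]}))")
        elif op == "OR":
            parts.append(f"(w{out} <-> (w{args[0]} | w{args[1]}))")
        elif op == "NOT":
            parts.append(f"(w{out} <-> ~w{args[0]})")
    parts.append(f"w{n_inputs + len(gates) - 1}")   # assert the output wire is true
    return " & ".join(parts)

# x0 AND (NOT x1), encoded as two gates after the two input wires w0, w1
phi = circuit_to_formula([("NOT", (1,)), ("AND", (0, 2))], 2)
assert phi == "(w2 <-> ~w1) & (w3 <-> (w0 & w2)) & w3"
```

One equivalence of constant size per gate is why the formula’s length stays proportional to the circuit’s size.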