Linear speedup theorem

In computational complexity theory, the linear speedup theorem for Turing machines states that given any real c > 0 and any k-tape Turing machine (k > 1) solving a problem in time f(n), there is another k-tape machine that solves the same problem in time at most f(n)/c + 2n + 3.[1][2] If the original machine is non-deterministic, then the new machine is also non-deterministic. The constants 2 and 3 in 2n + 3 can be lowered, for example, to n + 2.[1]

Proof

The construction is based on packing several tape symbols of the original machine M into one tape symbol of the new machine N. It has an effect similar to using longer words and commands in processors: it speeds up the computation but increases the machine size. How many old symbols are packed into one new symbol depends on the desired speed-up. Suppose the new machine packs three old symbols into one new symbol. Then the alphabet of the new machine is Σ ∪ Σ³, where Σ is the alphabet of M: it consists of the original symbols and the packed symbols (a sketch of this packed alphabet is given below). The new machine has the same number k > 1 of tapes. A state of N consists of the following components:
- the state of M;
- for each tape, the packed symbol under the head of N together with the two neighbouring packed symbols (components that are filled in by the machine during the simulation); and
- for each tape, the position of the head of M within the packed symbol under the head of N.
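To make the packing concrete, the following Python sketch builds such a packed alphabet for three symbols per cell; the names SIGMA, M_PACK and alphabet_of_N are illustrative choices, not part of the construction.

    from itertools import product

    # Alphabet of the original machine M; "_" denotes the blank symbol.
    SIGMA = ["_", "a", "b"]
    M_PACK = 3  # number of old symbols packed into one new symbol

    # Every triple of old symbols becomes one packed symbol of N.
    packed_symbols = list(product(SIGMA, repeat=M_PACK))

    # The alphabet of N: the original symbols plus the packed symbols.
    alphabet_of_N = SIGMA + packed_symbols

    print(len(SIGMA), len(alphabet_of_N))  # 3 and 3 + 3**3 = 30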
The new machine N starts by encoding the given input into the new alphabet (this is why its alphabet must include the original alphabet Σ). For example, suppose the input to a 2-tape machine M is a string over {a, b} written on its first tape.
The new machine packs three old symbols (e.g., the blank symbol _, the symbol a, and the symbol b) into one new symbol (here (_, a, b)) and copies it to the second tape, while erasing the first tape. At the end of the initialization, the new machine moves its head back to the beginning. Overall, this takes 2n + 3 steps (the re-encoding is sketched below). After the initialization, the state of N records the initial state of M, the packed symbol under each head with neighbouring components that will be filled in by the machine later, and, for each tape, the fact that the head of the original machine points to the first symbol inside the packed symbol under the head.
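The following Python sketch illustrates the re-encoding performed during initialization; pack_input is a hypothetical helper that only produces the packed contents of the second tape and does not count the 2n + 3 head moves.

    from itertools import zip_longest

    BLANK = "_"

    def pack_input(tape, group=3):
        """Pack the input three symbols per cell, padding the last cell with blanks."""
        heads = [iter(tape)] * group
        return [tuple(chunk) for chunk in zip_longest(*heads, fillvalue=BLANK)]

    # Example: a 7-symbol input over {a, b} occupies three packed cells.
    print(pack_input("abbabaa"))  # [('a', 'b', 'b'), ('a', 'b', 'a'), ('a', '_', '_')]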
The machine then simulates m = 3 transitions of M using six of its own transitions (in this concrete case there is no speed-up, but in general m can be much larger than six). One simulation round proceeds as follows:
- In four head moves (one packed symbol to the left, back, one packed symbol to the right, and back again), N records in its state the packed symbol under its head together with its two neighbouring packed symbols.
- These three packed symbols cover every tape cell that M can reach within the next m steps, so N computes the effect of m transitions of M entirely within its finite control, updating the stored state of M and the stored symbols.
- In at most two further moves, N rewrites the packed symbol under its head and the one neighbouring packed symbol that may have changed, and leaves its head on the packed symbol that now contains the head of M.
Thus six transitions of N simulate m transitions of M, as in the sketch below.
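A single simulation round can be sketched in Python as follows. The sketch assumes one work tape and uses hypothetical names (simulate_round, delta); the actual machine N performs the same head moves on all k tapes in parallel and keeps everything except the tape contents in its finite control.

    BLANK = "_"
    M_PACK = 3  # symbols per packed cell (the m of the construction)

    def simulate_round(packed_tape, cell, inner, state, delta):
        """Simulate M_PACK steps of M on a single packed work tape of N.

        packed_tape : list of packed symbols (tuples of M_PACK old symbols)
        cell, inner : N's head position and M's head position inside that cell
        state       : current state of M, kept in N's finite control
        delta       : transition function of M; delta(state, symbol) returns
                      (new_state, new_symbol, move) with move in {-1, 0, +1}
        """
        # Four head moves (left, back, right, back) let N record the packed
        # symbol under its head together with both neighbours in its state.
        window = list(packed_tape[cell - 1] + packed_tape[cell] + packed_tape[cell + 1])
        pos = M_PACK + inner  # position of M's head inside the 3 * M_PACK window

        # M cannot leave this window within M_PACK steps, so N computes the
        # effect of M_PACK transitions of M entirely in its finite control.
        for _ in range(M_PACK):
            state, window[pos], move = delta(state, window[pos])
            pos += move

        # Write the window back. (The real machine rewrites at most the cell
        # under its head and one neighbour, costing two more head moves.)
        for i in range(3):
            packed_tape[cell - 1 + i] = tuple(window[i * M_PACK:(i + 1) * M_PACK])

        # N ends the round on the packed cell that now contains the head of M.
        return packed_tape, cell - 1 + pos // M_PACK, pos % M_PACK, state

    def toy_delta(q, s):
        """Toy transition function of M: write "x" and move right."""
        return q, "x", +1

    tape = [(BLANK,) * 3, ("a", "b", "b"), ("a", "b", "a"), ("a", BLANK, BLANK), (BLANK,) * 3]
    print(simulate_round(tape, 2, 0, "q0", toy_delta))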
Complexity

Initialization requires 2n + 3 steps. In the simulation, 6 steps of N simulate m steps of M. Choosing m > 6c produces a running time bounded by f(n)/c + 2n + 3.

Machines with a read-only input tape

The theorem as stated above also holds for Turing machines with a 1-way, read-only input tape and work tapes.[3]

Single-tape machines

For single-tape Turing machines, linear speedup holds for machines with execution time at least n². It provably does not hold for certain machines with subquadratic running time.[3]

Dependence on tape compression

The proof of the speedup theorem clearly hinges on the capability to compress storage by replacing the alphabet with a larger one. Geffert[4] showed that for nondeterministic single-tape Turing machines of sufficiently large time complexity, linear speedup can be achieved without increasing the alphabet.

Dependence on the shape of storage

Regan[5] considered a property of a computational model called information vicinity. This property is related to the memory structure: a Turing machine has linear vicinity, while a Kolmogorov-Uspenskii machine and other pointer machines have an exponential one. Regan’s thesis is that the existence of linear speedup is tied to having polynomial information vicinity. The salient point of this claim is that a model with exponential vicinity will not admit speedup even if changing the alphabet is allowed (for models with a discrete memory that stores symbols). Regan did not, however, prove any general theorem of this kind. Hühne[6] proved that if the speedup is required to be obtained by an on-line simulation (which is the case for the speedup on ordinary Turing machines), then linear speedup does not exist on machines with tree storage.

References