This article uses technical mathematical notation for logarithms. All instances of log(x) without a subscript base should be interpreted as a natural logarithm, also commonly written as ln(x) or logₑ(x).
The first such distribution found is π(N) ~ N/log(N), where π(N) is the prime-counting function (the number of primes less than or equal to N) and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1 / log(N). Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (log(10^1000) ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (log(10^2000) ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N).[3]
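As a rough illustration of this density, the following minimal Python sketch (standard library only; the helper name sieve_prime_count is illustrative, not from any particular library) compares π(N) with N/log N for a few values of N; at these sizes the approximation undercounts by several percent, consistent with the table given later in this article.

```python
import math

def sieve_prime_count(limit):
    """Count primes <= limit with a basic sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return sum(is_prime)

for n in (10**4, 10**5, 10**6):
    approx = n / math.log(n)          # the density heuristic: about 1/log(n) of integers are prime
    print(n, sieve_prime_count(n), round(approx, 1))
```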
Statement
Let π(x) be the prime-counting function defined to be the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / log x is a good approximation to π(x) (where log here means the natural logarithm), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:
$$\lim_{x\to\infty}\frac{\pi(x)}{x/\log x}=1,$$
known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as
$$\pi(x)\sim\frac{x}{\log x}.$$
This notation (and the theorem) does not say anything about the limit of the difference of the two functions as x increases without bound. Instead, the theorem states that x / log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound.
The prime number theorem is equivalent to the statement that the nth prime number pₙ satisfies
$$p_{n}\sim n\log n,$$
the asymptotic notation meaning, again, that the relative error of this approximation approaches 0 as n increases without bound. For example, the 2×10^17th prime number is 8512677386048191063,[4] and (2×10^17) log(2×10^17) rounds to 7967418752291744388, a relative error of about 6.4%.
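A quick numerical check of this approximation at a more modest index is sketched below, assuming the SymPy library is available (its prime(n) function returns the nth prime); the relative error is still roughly ten percent around n = 10^6 and shrinks only slowly as n grows.

```python
import math
from sympy import prime  # assumes SymPy is installed

for n in (10**5, 10**6):
    p = prime(n)                     # the n-th prime number
    approx = n * math.log(n)         # leading-order approximation p_n ~ n log n
    print(n, p, round(approx), f"{(p - approx) / p:.1%}")
```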
On the other hand, the following asymptotic relations are logically equivalent:[5]: 80–82
As outlined below, the prime number theorem is also equivalent to
$$\lim_{x\to\infty}\frac{\vartheta(x)}{x}=\lim_{x\to\infty}\frac{\psi(x)}{x}=1,$$
where ϑ(x) and ψ(x) are the first and the second Chebyshev functions respectively.
History of the proof of the asymptotic law of prime numbers
Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a / (A log a + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question at age 15 or 16 "in the year 1792 or 1793", according to his own recollection in 1849.[6] In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / log(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for its use of the zeta function ζ(s) for real values of the argument s, as in works of Leonhard Euler as early as 1737. Chebyshev's papers predated Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit as x goes to infinity of π(x) / (x / log(x)) exists at all, then it is necessarily equal to one.[7] He was able to prove unconditionally that this ratio lies between 0.92129 and 1.10555 for all sufficiently large x.[8][9] Although Chebyshev's paper did not prove the prime number theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.
An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir "On the Number of Primes Less Than a Given Magnitude", the only paper he ever wrote on the subject. Riemann introduced new ideas into the subject, chiefly that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper that the idea to apply methods of complex analysis to the study of the real function π(x) originates. Extending Riemann's ideas, two proofs of the asymptotic law of the distribution of prime numbers were found independently by Jacques Hadamard[1] and Charles Jean de la Vallée Poussin[2] and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is nonzero for all complex values of the variable s that have the form s = 1 + it with t > 0.[10]
During the 20th century, the theorem of Hadamard and de la Vallée Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg[11] and Paul Erdős[12] (1949). Hadamard's and de la Vallée Poussin's original proofs are long and elaborate; later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A short proof was discovered in 1980 by the American mathematician Donald J. Newman.[13][14] Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis.
Proof sketch
Here is a sketch of the proof referred to in one of Terence Tao's lectures.[15] Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function. The idea is to count the primes (or a related set such as the set of prime powers) with weights to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function ψ(x), defined by
$$\psi(x)=\sum_{p^{k}\le x}\log p,$$
where the sum runs over all prime powers p^k not exceeding x.
The next step is to find a useful representation for ψ(x). Let ζ(s) be the Riemann zeta function. It can be shown that ζ(s) is related to the von Mangoldt function Λ(n), and hence to ψ(x), via the relation
$$-\frac{\zeta'(s)}{\zeta(s)}=\sum_{n=1}^{\infty}\Lambda(n)\,n^{-s}\qquad\text{for }\operatorname{Re}(s)>1.$$
A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer x the equation
$$\psi(x)=x-\sum_{\rho}\frac{x^{\rho}}{\rho}-\log 2\pi$$
holds, where the sum is over all zeros (trivial and nontrivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term x (claimed to be the correct asymptotic order of ψ(x)) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms.
The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately:
$$\sum_{n=1}^{\infty}\frac{1}{2n\,x^{2n}}=-\frac{1}{2}\log\!\left(1-\frac{1}{x^{2}}\right),$$
which vanishes for large x. The nontrivial zeros, namely those on the critical strip 0 ≤ Re(s) ≤ 1, can potentially be of an asymptotic order comparable to the main term x if Re(ρ) = 1, so we need to show that all zeros have real part strictly less than 1.
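As a numerical sanity check of the claim that x is the correct asymptotic order of ψ(x), here is a minimal Python sketch (standard library only; chebyshev_psi is an illustrative name) that evaluates the Chebyshev function directly from its definition as a sum of log p over prime powers p^k ≤ x.

```python
import math

def chebyshev_psi(x):
    """psi(x) = sum of log p over all prime powers p^k <= x."""
    x = int(x)
    is_prime = bytearray([1]) * (x + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(x**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    total = 0.0
    for p in range(2, x + 1):
        if is_prime[p]:
            pk = p
            while pk <= x:            # one log p for each power p, p^2, p^3, ... <= x
                total += math.log(p)
                pk *= p
    return total

for x in (10**4, 10**5, 10**6):
    print(x, chebyshev_psi(x) / x)    # the ratio is close to 1 and slowly approaches it
```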
Non-vanishing on Re(s) = 1
To do this, we take for granted that ζ(s) is meromorphic in the half-plane Re(s) > 0, and is analytic there except for a simple pole at s = 1, and that there is a product formula
$$\zeta(s)=\prod_{p}\frac{1}{1-p^{-s}}$$
for Re(s) > 1. This product formula follows from the existence of unique prime factorization of integers, and shows that ζ(s) is never zero in this region, so that its logarithm is defined there and
$$\log\zeta(s)=-\sum_{p}\log\!\left(1-p^{-s}\right)=\sum_{p,n}\frac{p^{-ns}}{n}.$$
Write s = x + iy; then
$$\bigl|\zeta(x+iy)\bigr|=\exp\!\left(\sum_{p,n}\frac{\cos(ny\log p)}{n\,p^{nx}}\right).$$
Now observe the identity
$$3+4\cos\varphi+\cos 2\varphi=2(1+\cos\varphi)^{2}\ge 0,$$
so that
$$\bigl|\zeta(x)^{3}\,\zeta(x+iy)^{4}\,\zeta(x+2iy)\bigr|\ge 1$$
for all x > 1. Suppose now that ζ(1 + iy) = 0. Certainly y is not zero, since ζ(s) has a simple pole at s = 1. Suppose that x > 1 and let x tend to 1 from above. Since ζ(s) has a simple pole at s = 1 and ζ(x + 2iy) stays analytic, the left-hand side of the previous inequality tends to 0: the factor ζ(x)³ blows up only like (x − 1)^−3, while the factor ζ(x + iy)⁴ vanishes at least like (x − 1)⁴. This is a contradiction.
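The inequality used above can also be checked numerically. The following minimal sketch, assuming the mpmath library is available (mpmath.zeta accepts complex arguments), evaluates |ζ(x)³ζ(x+iy)⁴ζ(x+2iy)| for x approaching 1 from above, with y fixed near the ordinate of the first nontrivial zero.

```python
import mpmath  # assumes mpmath is installed

def three_four_one(x, y):
    """|zeta(x)^3 * zeta(x+iy)^4 * zeta(x+2iy)|, which should be >= 1 for x > 1."""
    value = (mpmath.zeta(x) ** 3
             * mpmath.zeta(x + 1j * y) ** 4
             * mpmath.zeta(x + 2j * y))
    return abs(value)

y = 14.134725  # close to the imaginary part of the first nontrivial zero
for x in (1.5, 1.1, 1.01, 1.001):
    print(x, three_four_one(x, y))
```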
Finally, we can conclude that the PNT is heuristically true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for ψ(x) does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but many of them require rather delicate complex-analytic estimates. Edwards's book[16] provides the details. Another method is to use Ikehara's Tauberian theorem, though this theorem is itself quite hard to prove. D.J. Newman observed that the full strength of Ikehara's theorem is not needed for the prime number theorem, and one can get away with a special case that is much easier to prove.
Newman's proof of the prime number theorem
D. J. Newman gives a quick proof of the prime number theorem (PNT). The proof is "non-elementary" by virtue of relying on complex analysis, but uses only elementary techniques from a first course in the subject: Cauchy's integral formula, Cauchy's integral theorem and estimates of complex integrals. Here is a brief sketch of this proof. See [14] for the complete details.
The proof uses the same preliminaries as in the previous section, except instead of the function ψ(x), the Chebyshev function
$$\vartheta(x)=\sum_{p\le x}\log p$$
is used, which is obtained by dropping some of the terms from the series for ψ(x). Similar to the argument in the previous proof based on Tao's lecture, we can show that ϑ(x) ≤ π(x) log x, and ϑ(x) ≥ (1 − ε)(π(x) + O(x^(1−ε))) log x for any 0 < ε < 1. Thus, the PNT is equivalent to lim_{x→∞} ϑ(x)/x = 1.
Likewise, instead of −ζ′(s)/ζ(s), the function
$$\Phi(s)=\sum_{p}(\log p)\,p^{-s}$$
is used, which is obtained by dropping some terms in the series for −ζ′(s)/ζ(s). The functions Φ(s) and −ζ′(s)/ζ(s) differ by a function holomorphic in a neighbourhood of the line Re(s) = 1. Since, as was shown in the previous section, ζ(s) has no zeroes on the line Re(s) = 1, Φ(s) − 1/(s − 1) has no singularities on Re(s) = 1.
One further piece of information needed in Newman's proof, and which is the key to the estimates in his simple method, is that ϑ(x)/x is bounded. This is proved using an ingenious and easy method due to Chebyshev.
Integration by parts shows how ϑ(x) and Φ(s) are related. For Re(s) > 1,
$$\Phi(s)=\int_{1}^{\infty}x^{-s}\,d\vartheta(x)=s\int_{1}^{\infty}\vartheta(x)\,x^{-s-1}\,dx=s\int_{0}^{\infty}\vartheta(e^{t})\,e^{-st}\,dt.$$
Newman's method proves the PNT by showing the integral
$$I=\int_{1}^{\infty}\frac{\vartheta(x)-x}{x^{2}}\,dx=\int_{0}^{\infty}\bigl(\vartheta(e^{t})\,e^{-t}-1\bigr)\,dt$$
converges, and therefore that the integrand goes to zero as t → ∞, which is the PNT. In general, the convergence of an improper integral does not imply that the integrand goes to zero at infinity, since it may oscillate, but since ϑ is increasing, it is easy to show in this case.
To show the convergence of I, for Re(z) > 0 let
$$g_{T}(z)=\int_{0}^{T}f(t)\,e^{-zt}\,dt\quad\text{and}\quad g(z)=\int_{0}^{\infty}f(t)\,e^{-zt}\,dt,\qquad\text{where}\quad f(t)=\vartheta(e^{t})\,e^{-t}-1;$$
then
$$g(z)=\frac{\Phi(z+1)}{z+1}-\frac{1}{z},$$
which is equal to a function holomorphic on the line Re(z) = 0.
The convergence of the integral I, and thus the PNT, is proved by showing that lim_{T→∞} g_T(0) = g(0). This involves a change of order of limits, since it can be written
$$\lim_{T\to\infty}\lim_{z\to 0}g_{T}(z)=\lim_{z\to 0}\lim_{T\to\infty}g_{T}(z)$$
and is therefore classified as a Tauberian theorem.
The difference g(0) − g_T(0) is expressed using Cauchy's integral formula and then shown to be small for large T by estimating the integrand. Fix R > 0 and δ > 0 such that g(z) is holomorphic in the region where |z| ≤ R and Re(z) ≥ −δ, and let C be the boundary of this region. Since 0 is in the interior of the region, Cauchy's integral formula gives
$$g(0)-g_{T}(0)=\frac{1}{2\pi i}\int_{C}\bigl(g(z)-g_{T}(z)\bigr)\,F(z)\,\frac{dz}{z},$$
where
$$F(z)=e^{zT}\left(1+\frac{z^{2}}{R^{2}}\right)$$
is the factor introduced by Newman, which does not change the integral since F(z) is entire and F(0) = 1.
To estimate the integral, break the contour C into two parts, C = C₊ + C₋, where C₊ = C ∩ {z : Re(z) > 0} and C₋ = C ∩ {z : Re(z) ≤ 0}. Then
$$g(0)-g_{T}(0)=\frac{1}{2\pi i}\int_{C_{+}}\!\left(\int_{T}^{\infty}f(t)\,e^{-zt}\,dt\right)F(z)\,\frac{dz}{z}\;-\;\frac{1}{2\pi i}\int_{C_{-}}\!g_{T}(z)\,F(z)\,\frac{dz}{z}\;+\;\frac{1}{2\pi i}\int_{C_{-}}\!g(z)\,F(z)\,\frac{dz}{z},$$
where g(z) − g_T(z) = ∫_T^∞ f(t)e^(−zt) dt for Re(z) > 0. Since ϑ(x)/x, and hence f(t), is bounded, let B be an upper bound for the absolute value of f(t). This bound, together with the estimate |F(z)/z| = 2e^(T Re(z))|Re(z)|/R² for |z| = R, gives that the first integral in absolute value is at most B/R. The integrand over C₋ in the second integral is entire, so by Cauchy's integral theorem, the contour C₋ can be modified to a semicircle of radius R in the left half-plane without changing the integral, and the same argument as for the first integral gives that the absolute value of the second integral is at most B/R. Finally, letting T → ∞, the third integral goes to zero since e^(zT), and hence F(z), goes to zero on the contour. Combining the two estimates and the limit, we get
$$\limsup_{T\to\infty}\,\bigl|g(0)-g_{T}(0)\bigr|\le\frac{2B}{R}.$$
This holds for any R, so lim_{T→∞} g_T(0) = g(0), and the PNT follows.
Prime-counting function in terms of the logarithmic integral
In a handwritten note on a reprint of his 1838 paper "Sur l'usage des séries infinies dans la théorie des nombres", which he mailed to Gauss, Dirichlet conjectured (under a slightly different form appealing to a series rather than an integral) that an even better approximation to π(x) is given by the offset logarithmic integral function Li(x), defined by
$$\operatorname{Li}(x)=\int_{2}^{x}\frac{dt}{\log t}=\operatorname{li}(x)-\operatorname{li}(2).$$
Indeed, this integral is strongly suggestive of the notion that the "density" of primes around t should be 1 / log t. This function is related to the logarithm by the asymptotic expansion
$$\operatorname{Li}(x)\sim\frac{x}{\log x}\sum_{k=0}^{\infty}\frac{k!}{(\log x)^{k}}=\frac{x}{\log x}+\frac{x}{(\log x)^{2}}+\frac{2x}{(\log x)^{3}}+\cdots$$
So, the prime number theorem can also be written as π(x) ~ Li(x). In fact, in another paper[17] in 1899 de la Vallée Poussin proved that
$$\pi(x)=\operatorname{Li}(x)+O\!\left(x\,e^{-a\sqrt{\log x}}\right)\qquad\text{as }x\to\infty$$
for some positive constant a, where O(...) is the big O notation. This has been improved to
$$\pi(x)=\operatorname{li}(x)+O\!\left(x\exp\!\left(-\frac{c\,(\log x)^{3/5}}{(\log\log x)^{1/5}}\right)\right)$$
for some constant c > 0.
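The quality of the two approximations can be compared directly with a few lines of Python, assuming the SymPy and mpmath libraries are available (primepi computes π(x) exactly, and the helper Li below, an illustrative name, evaluates the offset logarithmic integral by numerical quadrature); Li(x) tracks π(x) far more closely than x/log x, as the table later in this article shows.

```python
import math
import mpmath
from sympy import primepi  # assumes SymPy and mpmath are installed

def Li(x):
    """Offset logarithmic integral Li(x) = integral of dt/log(t) from 2 to x."""
    return float(mpmath.quad(lambda t: 1 / mpmath.log(t), [2, x]))

for x in (10**4, 10**6, 10**7):
    pi_x = int(primepi(x))
    print(x, pi_x, round(x / math.log(x)), round(Li(x)))
```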
The connection between the Riemann zeta function and π(x) is one reason the Riemann hypothesis has considerable importance in number theory: if established, it would yield a far better estimate of the error involved in the prime number theorem than is available today. More specifically, Helge von Koch showed in 1901[20] that if the Riemann hypothesis is true, the error term in the above relation can be improved to
$$\pi(x)=\operatorname{Li}(x)+O\!\left(\sqrt{x}\,\log x\right)$$
(this last estimate is in fact equivalent to the Riemann hypothesis). The constant involved in the big O notation was estimated in 1976 by Lowell Schoenfeld,[21] assuming the Riemann hypothesis:
$$\bigl|\pi(x)-\operatorname{li}(x)\bigr|<\frac{\sqrt{x}\,\log x}{8\pi}\qquad\text{for all }x\ge 2657.$$
He also derived a similar bound for the Chebyshev prime-counting function ψ:
$$\bigl|\psi(x)-x\bigr|<\frac{\sqrt{x}\,\log^{2}x}{8\pi}$$
for all x ≥ 73.2. This latter bound has been shown to express a variance to mean power law (when regarded as a random function over the integers) and 1/f noise, and to also correspond to the Tweedie compound Poisson distribution. (The Tweedie distributions represent a family of scale invariant distributions that serve as foci of convergence for a generalization of the central limit theorem.[22]) A lower bound is also derived by J. E. Littlewood, assuming the Riemann hypothesis:[23][24][25]
$$\pi(x)-\operatorname{li}(x)=\Omega_{\pm}\!\left(\frac{\sqrt{x}\,\log\log\log x}{\log x}\right).$$
The logarithmic integral li(x) is larger than π(x) for "small" values of x. This is because it is (in some sense) counting not primes, but prime powers, where a power pⁿ of a prime p is counted as 1/n of a prime. This suggests that li(x) should usually be larger than π(x) by roughly li(√x)/2, and in particular should always be larger than π(x). However, in 1914, Littlewood proved that π(x) − li(x) changes sign infinitely often.[23] The first value of x where π(x) exceeds li(x) is probably around x ~ 10^316; see the article on Skewes' number for more details. (On the other hand, the offset logarithmic integral Li(x) is smaller than π(x) already for x = 2; indeed, Li(2) = 0, while π(2) = 1.)
Elementary proofs
In the first half of the twentieth century, some mathematicians (notably G. H. Hardy) believed that there exists a hierarchy of proof methods in mathematics depending on what sorts of numbers (integers, reals, complex) a proof requires, and that the prime number theorem (PNT) is a "deep" theorem by virtue of requiring complex analysis.[9] This belief was somewhat shaken by a proof of the PNT based on Wiener's tauberian theorem, though Wiener's proof ultimately relies on properties of the Riemann zeta function on the line Re(s) = 1, where complex analysis must be used.
In 1948, Atle Selberg proved, by elementary means, the asymptotic formula
$$\vartheta(x)\log x+\sum_{p\le x}\log p\;\vartheta\!\left(\frac{x}{p}\right)=2x\log x+O(x),\qquad\text{where}\qquad\vartheta(x)=\sum_{p\le x}\log p$$
for primes p.[11] By July of that year, Selberg and Paul Erdős[12] had each obtained elementary proofs of the PNT, both using Selberg's asymptotic formula as a starting point.[9][26] These proofs effectively laid to rest the notion that the PNT was "deep" in that sense, and showed that technically "elementary" methods were more powerful than had been believed to be the case. On the history of the elementary proofs of the PNT, including the Erdős–Selberg priority dispute, see an article by Dorian Goldfeld.[9]
There is some debate about the significance of Erdős and Selberg's result. There is no rigorous and widely accepted definition of the notion of elementary proof in number theory, so it is not clear exactly in what sense their proof is "elementary". Although it does not use complex analysis, it is in fact much more technical than the standard proof of PNT. One possible definition of an "elementary" proof is "one that can be carried out in first-order Peano arithmetic." There are number-theoretic statements (for example, the Paris–Harrington theorem) provable using second order but not first-order methods, but such theorems are rare to date. Erdős and Selberg's proof can certainly be formalized in Peano arithmetic, and in 1994, Charalambos Cornaros and Costas Dimitracopoulos proved that their proof can be formalized in a very weak fragment of PA, namely IΔ0 + exp.[27] However, this does not address the question of whether or not the standard proof of PNT can be formalized in PA.
A more recent "elementary" proof of the prime number theorem uses ergodic theory, due to Florian Richter.[28] The prime number theorem is obtained there in an equivalent form that the Cesàro sum of the values of the Liouville function is zero. The Liouville function is where is the number of prime factors, with multiplicity, of the integer . Bergelson and Richter (2022) then obtain this form of the prime number theorem from an ergodic theorem which they prove:
Let X be a compact metric space, T a continuous self-map of X, and μ a T-invariant Borel probability measure for which (X, T) is uniquely ergodic. Then, for every f ∈ C(X) and every x ∈ X,
$$\frac{1}{N}\sum_{n=1}^{N}f\!\left(T^{\Omega(n)}x\right)\;\longrightarrow\;\int_{X}f\,d\mu\qquad\text{as }N\to\infty.$$
In 2005, Avigad et al. employed the Isabelle theorem prover to devise a computer-verified variant of the Erdős–Selberg proof of the PNT.[29] This was the first machine-verified proof of the PNT. Avigad chose to formalize the Erdős–Selberg proof rather than an analytic one because while Isabelle's library at the time could implement the notions of limit, derivative, and transcendental function, it had almost no theory of integration to speak of.[29]: 19
In 2009, John Harrison employed HOL Light to formalize a proof employing complex analysis.[30] By developing the necessary analytic machinery, including the Cauchy integral formula, Harrison was able to formalize "a direct, modern and elegant proof instead of the more involved 'elementary' Erdős–Selberg argument".
Prime number theorem for arithmetic progressions
Let π_{d,a}(x) denote the number of primes in the arithmetic progression a, a + d, a + 2d, a + 3d, ... that are less than x. Dirichlet and Legendre conjectured, and de la Vallée Poussin proved, that if a and d are coprime, then
$$\pi_{d,a}(x)\sim\frac{\operatorname{Li}(x)}{\varphi(d)},$$
where φ is Euler's totient function. In other words, the primes are distributed evenly among the residue classes [a] modulo d with gcd(a, d) = 1. This is stronger than Dirichlet's theorem on arithmetic progressions (which only states that there is an infinity of primes in each class) and can be proved using methods similar to those used by Newman for his proof of the prime number theorem.[31]
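A minimal empirical check of this equidistribution, assuming SymPy is available for its primerange and totient functions, counts primes below one million in each reduced residue class modulo 10; the four counts come out nearly equal.

```python
from collections import Counter
from math import gcd
from sympy import primerange, totient  # assumes SymPy is installed

d, limit = 10, 10**6
counts = Counter(p % d for p in primerange(2, limit) if gcd(p, d) == 1)
expected = sum(counts.values()) / totient(d)   # phi(10) = 4 classes: 1, 3, 7, 9
for a in sorted(counts):
    print(a, counts[a], round(expected))
```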
The Siegel–Walfisz theorem gives a good estimate for the distribution of primes in residue classes.
Bennett et al.[32] proved the following estimate that has explicit constants A and B (Theorem 1.3):
Let d be an integer and let a be an integer that is coprime to d. Then there are positive constants A and B such that
where
and
Prime number race
Although we have in particular
$$\pi_{4,1}(x)\sim\pi_{4,3}(x),$$
empirically the primes congruent to 3 are more numerous and are nearly always ahead in this "prime number race"; the first reversal occurs at x = 26861.[33]: 1–2  However, Littlewood showed in 1914[33]: 2  that there are infinitely many sign changes for the function
$$\pi_{4,1}(x)-\pi_{4,3}(x),$$
so the lead in the race switches back and forth infinitely many times. The phenomenon that π_{4,3}(x) is ahead most of the time is called Chebyshev's bias. The prime number race generalizes to other moduli and is the subject of much research; Pál Turán asked whether it is always the case that π_{c,a}(x) and π_{c,b}(x) change places when a and b are coprime to c.[34] Granville and Martin give a thorough exposition and survey.[33]
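The first reversal mentioned above is easy to locate by brute force. The following sketch, again assuming SymPy's primerange, runs the mod-4 race prime by prime and reports the first prime at which the count of primes ≡ 1 (mod 4) pulls ahead; it is expected to agree with the value 26861 cited above.

```python
from sympy import primerange  # assumes SymPy is installed

count_1 = count_3 = 0
first_reversal = None
for p in primerange(3, 30000):      # odd primes are congruent to 1 or 3 mod 4
    if p % 4 == 1:
        count_1 += 1
    else:
        count_3 += 1
    if first_reversal is None and count_1 > count_3:
        first_reversal = p
print(first_reversal)
```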
Another example is the distribution of the last digit of prime numbers. Except for 2 and 5, all prime numbers end in 1, 3, 7, or 9. Dirichlet's theorem states that asymptotically, 25% of all primes end in each of these four digits. However, empirical evidence shows that, for a given limit, there tend to be slightly more primes that end in 3 or 7 than end in 1 or 9 (a generalization of Chebyshev's bias).[35] This follows from the fact that 1 and 9 are quadratic residues modulo 10, while 3 and 7 are quadratic nonresidues modulo 10.
Non-asymptotic bounds on the prime-counting function
The prime number theorem is an asymptotic result. It gives an ineffective bound on π(x) as a direct consequence of the definition of the limit: for all ε > 0, there is an S such that for all x > S,
$$(1-\varepsilon)\,\frac{x}{\log x}<\pi(x)<(1+\varepsilon)\,\frac{x}{\log x}.$$
However, better bounds on π(x) are known, for instance Pierre Dusart's
The first inequality holds for all x ≥ 599 and the second one for x ≥ 355991.[36]
The proof by de la Vallée Poussin implies the following bound: for every ε > 0, there is an S such that for all x > S,
$$\frac{x}{\log x-(1-\varepsilon)}<\pi(x)<\frac{x}{\log x-(1+\varepsilon)}.$$
The value ε = 3 gives a weak but sometimes useful bound for x ≥ 55:[37]
$$\frac{x}{\log x+2}<\pi(x)<\frac{x}{\log x-4}.$$
In Pierre Dusart's thesis there are stronger versions of this type of inequality that are valid for larger x. Later in 2010, Dusart proved:[38]
$$\frac{x}{\log x-1}<\pi(x)\quad\text{for }x\ge 5393,\qquad\text{and}\qquad\pi(x)<\frac{x}{\log x-1.1}\quad\text{for }x\ge 60184.$$
Note that the first of these obsoletes the ε > 0 condition on the lower bound.
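The weak x ≥ 55 bound quoted above is simple enough to spot-check numerically, for instance with SymPy's primepi; the following is a minimal sketch, not an exhaustive verification.

```python
import math
from sympy import primepi  # assumes SymPy is installed

def bounds_hold(x):
    """Check x/(log x + 2) < pi(x) < x/(log x - 4), the x >= 55 bound quoted above."""
    pi_x = int(primepi(x))
    return x / (math.log(x) + 2) < pi_x < x / (math.log(x) - 4)

print(all(bounds_hold(x) for x in range(55, 5000)))   # expected: True
```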
Approximations for the nth prime number
As a consequence of the prime number theorem, one gets an asymptotic expression for the nth prime number, denoted by pₙ:
$$p_{n}\sim n\log n.$$
A better approximation is
$$\frac{p_{n}}{n}=\log n+\log\log n-1+\frac{\log\log n-2}{\log n}-\frac{(\log\log n)^{2}-6\log\log n+11}{2\log^{2}n}+o\!\left(\frac{1}{\log^{2}n}\right).$$
Again considering the 2×10^17th prime number 8512677386048191063, assuming the trailing error term is zero gives an estimate of 8512681315554715386; the first 5 digits match and the relative error is about 0.46 parts per million.
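The worked example above can be reproduced (up to double-precision rounding in the last few digits) by evaluating the truncated expansion directly; the following minimal Python sketch sets the trailing error term to zero, as in the text.

```python
import math

def pn_estimate(n):
    """Truncated expansion for p_n quoted above, with the o(1/log^2 n) term set to zero."""
    L = math.log(n)
    LL = math.log(L)
    return n * (L + LL - 1 + (LL - 2) / L - (LL**2 - 6 * LL + 11) / (2 * L**2))

n = 2 * 10**17
print(f"{pn_estimate(n):.0f}")   # close to the cited estimate 8512681315554715386
```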
Cipolla (1902)[41][42] showed that these are the leading terms of an infinite series which may be truncated at arbitrary degree, with
where each Pᵢ is a degree-i monic polynomial. (P₁(y) = y − 2, P₂(y) = y² − 6y + 11, P₃(y) = y³ − (21/2)y² + 42y + 131/2, and so on.[42])
Dusart (1999)[43] found tighter bounds using the form of the Cesàro/Cipolla approximations but varying the lowest-order constant term. Bₖ(x; C) is the same function as above, but with the lowest-order constant term replaced by a parameter C:
The upper bounds can be extended to smaller n by loosening the parameter. For example, pₙ < n B₁(log n; 0.5) for all n ≥ 20.[44]
Axler (2019)[44] extended this to higher order, showing:
Again, the bound on n may be decreased by loosening the parameter. For example, pₙ < n B₂(log n; 0) for n ≥ 3468.
Table of π(x), x / log x, and li(x)
The table compares exact values of π(x) to the two approximations x / log x and li(x). The approximation difference columns are rounded to the nearest integer, but the "% error" columns are computed based on the unrounded approximations. The last column, x / π(x), is the average prime gap below x.
| x | π(x) | π(x) − x/log(x) | li(x) − π(x) | % error of x/log(x) | % error of li(x) | x/π(x) |
|---|---|---|---|---|---|---|
| 10 | 4 | 0 | 2 | 8.22% | 42.606% | 2.500 |
| 10^2 | 25 | 3 | 5 | 14.06% | 18.597% | 4.000 |
| 10^3 | 168 | 23 | 10 | 14.85% | 5.561% | 5.952 |
| 10^4 | 1,229 | 143 | 17 | 12.37% | 1.384% | 8.137 |
| 10^5 | 9,592 | 906 | 38 | 9.91% | 0.393% | 10.425 |
| 10^6 | 78,498 | 6,116 | 130 | 8.11% | 0.164% | 12.739 |
| 10^7 | 664,579 | 44,158 | 339 | 6.87% | 0.051% | 15.047 |
| 10^8 | 5,761,455 | 332,774 | 754 | 5.94% | 0.013% | 17.357 |
| 10^9 | 50,847,534 | 2,592,592 | 1,701 | 5.23% | 3.34×10^−3 % | 19.667 |
| 10^10 | 455,052,511 | 20,758,029 | 3,104 | 4.66% | 6.82×10^−4 % | 21.975 |
| 10^11 | 4,118,054,813 | 169,923,159 | 11,588 | 4.21% | 2.81×10^−4 % | 24.283 |
| 10^12 | 37,607,912,018 | 1,416,705,193 | 38,263 | 3.83% | 1.02×10^−4 % | 26.590 |
| 10^13 | 346,065,536,839 | 11,992,858,452 | 108,971 | 3.52% | 3.14×10^−5 % | 28.896 |
| 10^14 | 3,204,941,750,802 | 102,838,308,636 | 314,890 | 3.26% | 9.82×10^−6 % | 31.202 |
| 10^15 | 29,844,570,422,669 | 891,604,962,452 | 1,052,619 | 3.03% | 3.52×10^−6 % | 33.507 |
| 10^16 | 279,238,341,033,925 | 7,804,289,844,393 | 3,214,632 | 2.83% | 1.15×10^−6 % | 35.812 |
| 10^17 | 2,623,557,157,654,233 | 68,883,734,693,928 | 7,956,589 | 2.66% | 3.03×10^−7 % | 38.116 |
| 10^18 | 24,739,954,287,740,860 | 612,483,070,893,536 | 21,949,555 | 2.51% | 8.87×10^−8 % | 40.420 |
| 10^19 | 234,057,667,276,344,607 | 5,481,624,169,369,961 | 99,877,775 | 2.36% | 4.26×10^−8 % | 42.725 |
| 10^20 | 2,220,819,602,560,918,840 | 49,347,193,044,659,702 | 222,744,644 | 2.24% | 1.01×10^−8 % | 45.028 |
| 10^21 | 21,127,269,486,018,731,928 | 446,579,871,578,168,707 | 597,394,254 | 2.13% | 2.82×10^−9 % | 47.332 |
| 10^22 | 201,467,286,689,315,906,290 | 4,060,704,006,019,620,994 | 1,932,355,208 | 2.03% | 9.59×10^−10 % | 49.636 |
| 10^23 | 1,925,320,391,606,803,968,923 | 37,083,513,766,578,631,309 | 7,250,186,216 | 1.94% | 3.76×10^−10 % | 51.939 |
| 10^24 | 18,435,599,767,349,200,867,866 | 339,996,354,713,708,049,069 | 17,146,907,278 | 1.86% | 9.31×10^−11 % | 54.243 |
| 10^25 | 176,846,309,399,143,769,411,680 | 3,128,516,637,843,038,351,228 | 55,160,980,939 | 1.78% | 3.21×10^−11 % | 56.546 |
| 10^26 | 1,699,246,750,872,437,141,327,603 | 28,883,358,936,853,188,823,261 | 155,891,678,121 | 1.71% | 9.17×10^−12 % | 58.850 |
| 10^27 | 16,352,460,426,841,680,446,427,399 | 267,479,615,610,131,274,163,365 | 508,666,658,006 | 1.64% | 3.11×10^−12 % | 61.153 |
| 10^28 | 157,589,269,275,973,410,412,739,598 | 2,484,097,167,669,186,251,622,127 | 1,427,745,660,374 | 1.58% | 9.05×10^−13 % | 63.456 |
| 10^29 | 1,520,698,109,714,272,166,094,258,063 | 23,130,930,737,541,725,917,951,446 | 4,551,193,622,464 | 1.53% | 2.99×10^−13 % | 65.759 |
The value for π(1024) was originally computed assuming the Riemann hypothesis;[45] it has since been verified unconditionally.[46]
Analogue for irreducible polynomials over a finite field
There is an analogue of the prime number theorem that describes the "distribution" of irreducible polynomials over a finite field; the form it takes is strikingly similar to the case of the classical prime number theorem.
To state it precisely, let F = GF(q) be the finite field with q elements, for some fixed q, and let Nₙ be the number of monic irreducible polynomials over F whose degree is equal to n. That is, we are looking at polynomials with coefficients chosen from F, which cannot be written as products of polynomials of smaller degree. In this setting, these polynomials play the role of the prime numbers, since all other monic polynomials are built up of products of them. One can then prove that
$$N_{n}\sim\frac{q^{n}}{n}.$$
If we make the substitution x = qⁿ, then the right hand side is just
$$\frac{x}{\log_{q}x},$$
which makes the analogy clearer. Since there are precisely qn monic polynomials of degree n (including the reducible ones), this can be rephrased as follows: if a monic polynomial of degree n is selected randomly, then the probability of it being irreducible is about 1/n.
One can even prove an analogue of the Riemann hypothesis, namely that
$$N_{n}=\frac{q^{n}}{n}+O\!\left(\frac{q^{n/2}}{n}\right).$$
The proofs of these statements are far simpler than in the classical case. They involve a short, combinatorial argument,[47] summarised as follows: every element of the degree n extension of F is a root of some irreducible polynomial whose degree d divides n; by counting these roots in two different ways one establishes that
$$q^{n}=\sum_{d\mid n}d\,N_{d},$$
where the sum is over all divisors d of n. Möbius inversion then yields
$$N_{n}=\frac{1}{n}\sum_{d\mid n}\mu\!\left(\frac{n}{d}\right)q^{d},$$
where μ(k) is the Möbius function. (This formula was known to Gauss.) The main term occurs for d = n, and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of n can be no larger than n/2.
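The counting formula above is easy to evaluate. A minimal, standard-library Python sketch (the helper names are illustrative) computes Nₙ by Möbius inversion and compares it with the qⁿ/n approximation for q = 2:

```python
def mobius(k):
    """Moebius function mu(k) by trial factorization (fine for small k)."""
    result, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            k //= d
            if k % d == 0:
                return 0          # k has a squared prime factor
            result = -result
        d += 1
    return -result if k > 1 else result

def irreducible_count(q, n):
    """Number of monic irreducible polynomials of degree n over GF(q)."""
    return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

q = 2
for n in range(1, 9):
    print(n, irreducible_count(q, n), round(q**n / n, 1))
```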
^Gauss, C. F. (1863), Werke, vol. 2 (1st ed.), Göttingen: Teubner, pp. 444–447.
^Costa Pereira, N. (August–September 1985). "A Short Proof of Chebyshev's Theorem". American Mathematical Monthly. 92 (7): 494–495. doi:10.2307/2322510. JSTOR 2322510.
^Nair, M. (February 1982). "On Chebyshev-Type Inequalities for Primes". American Mathematical Monthly. 89 (2): 126–129. doi:10.2307/2320934. JSTOR 2320934.
^Schoenfeld, Lowell (1976). "Sharper Bounds for the Chebyshev Functions ϑ(x) and ψ(x). II". Mathematics of Computation. 30 (134): 337–360. doi:10.2307/2005976. JSTOR 2005976. MR 0457374.
^Jørgensen, Bent; Martínez, José Raúl; Tsao, Min (1994). "Asymptotic behaviour of the variance function". Scandinavian Journal of Statistics. 21 (3): 223–243. JSTOR 4616314. MR 1292637.
^Bergelson, V.; Richter, F. K. (2022). "Dynamical generalizations of the prime number theorem and disjointness of additive and multiplicative semigroup actions". Duke Mathematical Journal. 171 (15): 3133–3200.
^Cipolla, Michele (1902). "La determinazione assintotica dell'nimo numero primo" [The asymptotic determination of the nth prime number]. Matematiche Napoli (in Italian). 8 (3): 132–166.