WIP: work in progress.
Following p. 370 of Cramér's 1946 Mathematical Methods of Statistics, define $\Xi_n = n(1 - \Phi(Z_n))$, where $Z_n = \max_{1 \le i \le n} X_i$ is the maximum of $n$ i.i.d. standard normal random variables.
Here $\Phi$ is the cumulative distribution function of the standard normal distribution $N(0,1)$. As a consequence of this definition, we are guaranteed that $0 \le \Xi_n \le n$ almost surely.
Consider a given realization $\omega \in \Omega$ of our sample space. Then $Z_n$ is a function of both $n$ and $\omega$, and $\Xi_n$ is a function of $Z_n$, $n$, and $\omega$. For fixed $\omega$, we may consider $Z_n$ as a deterministic function of $n$, and $\Xi_n$ as a deterministic function of $Z_n$ and $n$, thereby simplifying the problem. Our goal is to show that the results hold for almost every $\omega \in \Omega$, so that we can transfer the results from the deterministic analysis to the probabilistic setting.
Following p. 374 of Cramér's 1946 Mathematical Methods of Statistics, assume for now (I intend to come back later and provide a proof) that we can show (for any given $\omega \in \Omega$) that the following asymptotic expansion holds (using integration by parts and the definition of $\Phi$):
$$\sqrt{2\pi}\,\frac{\Xi_n}{n} = \frac{1}{Z_n}\,e^{-Z_n^2/2}\left(1 + O\!\left(\frac{1}{Z_n^2}\right)\right) \quad \text{as } Z_n \to \infty. \tag{$\sim$}$$
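As a numerical sanity check of the expansion $(\sim)$, one can compute the normal tail via the standard library's complementary error function, using the identity $1 - \Phi(z) = \tfrac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})$ (a minimal sketch; the sample values of $z$ are arbitrary):

```python
import math

def gauss_tail(z):
    # 1 - Phi(z) for the standard normal CDF, via 1 - Phi(z) = erfc(z / sqrt(2)) / 2.
    return 0.5 * math.erfc(z / math.sqrt(2))

# (~) asserts sqrt(2*pi) * (1 - Phi(z)) = (1/z) * exp(-z^2/2) * (1 + O(1/z^2)),
# so r(z) below should approach 1, with z^2 * (1 - r(z)) staying bounded.
for z in [2.0, 5.0, 10.0]:
    r = math.sqrt(2 * math.pi) * z * math.exp(z * z / 2) * gauss_tail(z)
    print(z, r, z * z * (1 - r))
```

The third column stays bounded as $z$ grows, consistent with an error of order $1/Z_n^2$.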
Clearly we have $Z_{n+1} \ge Z_n$ for any $n$, and $Z_n$ is almost surely an increasing function of $n$ with $Z_n \to \infty$ as $n \to \infty$. Hence throughout the sequel we have, for (almost surely every) fixed $\omega$: $Z_n \to \infty \iff n \to \infty$.
Therefore we conclude (where $\sim$ denotes asymptotic equivalence):
$$\sqrt{2\pi}\,\frac{\Xi_n}{n} \sim \frac{1}{Z_n}\,e^{-Z_n^2/2} \quad \text{as } Z_n \to \infty \iff n \to \infty.$$
How we proceed from here essentially amounts to the method of dominant balance, and our manipulations will be justified formally by the following lemma:
Lemma: Assume $f(n) \sim g(n)$ as $n \to \infty$, and that $f(n) \to \infty$ (hence also $g(n) \to \infty$). Then for any function $h$ formed via composition, sums, and products of logarithms and power laws (essentially any "polylog" function), we also have, as $n \to \infty$: $h(f(n)) \sim h(g(n))$.
In other words, such "polylog" functions preserve asymptotic equivalence.
The truth of this lemma is a consequence of Theorem 2.1 as written here. Note also that what follows is essentially an expanded (more detailed) version of the answer to a similar question found here.
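The lemma can at least be sanity-checked numerically (an illustration, not a proof; the particular choices of $f$, $g$, and $h$ below are arbitrary):

```python
import math

# f(n) ~ g(n) with both tending to infinity, and a "polylog" h:
# the lemma predicts h(f(n)) / h(g(n)) -> 1 as n -> infinity.
f = lambda n: n + math.sqrt(n)          # f(n) ~ n, since sqrt(n) = o(n)
g = lambda n: n
h = lambda x: math.log(x) ** 2 + math.log(math.log(x))

for n in [10**3, 10**6, 10**12]:
    print(n, h(f(n)) / h(g(n)))
```

The printed ratios approach $1$ as $n$ grows.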
Taking logarithms of both sides, we conclude:
$$\log\left(\sqrt{2\pi}\,\Xi_n\right) - \log n \sim -\log Z_n - \frac{Z_n^2}{2}. \tag{1}$$
This is where Cramér is somewhat vague. He simply says "suppose that $\Xi_n$ is bounded", from which we may conclude such-and-such. But showing that $\Xi_n$ is suitably bounded almost surely seems somewhat non-trivial. It appears that this may essentially be part of what is discussed on pp. 265-267 of Galambos, but since I am still working on understanding the contents of that book, I am not sure.
In any case, assume for now that one can show $\log \Xi_n = o(\log n)$; then it follows (since the $-Z_n^2/2$ term dominates the $-\log Z_n$ term) that:
$$-\log n \sim -\frac{Z_n^2}{2} \implies Z_n \sim \sqrt{2\log n}.$$
This is nice, since it is already most of what we want to show, although it is worth noting again that this essentially just kicks the can down the road, since now we must show a certain almost-sure boundedness of $\Xi_n$. On the other hand, $\Xi_n$ has the same distribution for the maximum of any i.i.d. continuous random variables, so this may well be tractable.
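On that note, the distribution-free nature of $\Xi_n$ is easy to check by simulation: $\Phi(Z_n)$ is the maximum of $n$ i.i.d. $\mathrm{Uniform}(0,1)$ variables, which has the same distribution as $U^{1/n}$ for a single uniform $U$, so $\Xi_n \stackrel{d}{=} n(1 - U^{1/n})$, which converges in distribution to $\mathrm{Exp}(1)$. A minimal sketch (the values of $n$ and the number of trials are arbitrary choices):

```python
import math, random

random.seed(0)
n, trials = 10**6, 20000

# Phi(Z_n) is the max of n i.i.d. uniforms, distributed as U**(1/n), so
# Xi_n = n * (1 - Phi(Z_n)) =d n * (1 - U**(1/n)), regardless of the
# continuous distribution of the underlying X_i.
samples = [n * (1 - random.random() ** (1 / n)) for _ in range(trials)]

mean = sum(samples) / trials
tail = sum(s > 1.0 for s in samples) / trials
print(mean)  # an Exp(1) limit would give mean close to 1
print(tail)  # an Exp(1) limit would give P(Xi_n > 1) close to exp(-1)
```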
In any case, if $Z_n \sim \sqrt{2\log n}$ a.s., then it also follows that $Z_n \sim \sqrt{2\log n}\,(1 + \alpha(n))$ for some $\alpha(n)$ which is $o(1)$ as $n \to \infty$. Using the above lemma about polylog functions preserving asymptotic equivalence, we can substitute this expression into $(1)$ to get:
$$\log\left(\sqrt{2\pi}\,\Xi_n\right) - \log n \sim -\log(1+\alpha) - \frac{1}{2}\log 2 - \frac{1}{2}\log\log n - \log n - 2\alpha\log n - \alpha^2\log n$$
$$\implies -\log\left(\sqrt{2\pi}\,\Xi_n\right) \sim \log(1+\alpha) + \frac{1}{2}\log 2 + \frac{1}{2}\log\log n + 2\alpha\log n + \alpha^2\log n.$$
Here we must go further and assume that $\log \Xi_n = o(\log\log n)$ as $n \to \infty$ almost surely. Again, all Cramér says is "suppose that $\Xi_n$ is bounded". But since all one can say a priori about $\Xi_n$ is that $0 \le \Xi_n \le n$, it hardly seems obvious that we should have $\Xi_n = O(1)$ almost surely, which appears to be the substance of Cramér's claim.
But in any case, assuming one grants this, one can conclude that the dominant term not containing $\alpha$ is $\frac{1}{2}\log\log n$. Since $\alpha = o(1)$, it follows that $\alpha^2 = o(\alpha)$, and clearly $\log(1+\alpha) \sim \alpha = o(\alpha\log n)$, so the dominant term containing $\alpha$ is $2\alpha\log n$. Therefore we can balance the two dominant terms (dividing through by $\frac{1}{2}\log\log n$ or $2\alpha\log n$) to find
$$-\frac{1}{2}\log\log n \sim 2\alpha\log n \implies \alpha \sim -\frac{\log\log n}{4\log n}.$$
Therefore, substituting this back in above, we get:
$$Z_n \sim \sqrt{2\log n} - \frac{\log\log n}{2\sqrt{2\log n}},$$
again granting that we can prove certain things about $\Xi_n$.
We now repeat the same technique. Since $Z_n \sim \sqrt{2\log n} - \frac{\log\log n}{2\sqrt{2\log n}}$, it also follows that
$$Z_n \sim \sqrt{2\log n} - \frac{\log\log n}{2\sqrt{2\log n}}\,(1+\beta(n)) = \sqrt{2\log n}\left(1 - \frac{\log\log n}{4\log n}(1+\beta(n))\right),$$
where $\beta(n) = o(1)$. Before substituting directly into $(1)$, let us simplify first; we get:
$$\log Z_n \sim \log\left(\sqrt{2\log n}\right) + \log\left(1 - \frac{\log\log n}{4\log n}(1+\beta)\right) \sim \log\left(\sqrt{2\log n}\right),$$
since the second term is $\log(1 + o(1)) = o(1)$, and
$$\frac{Z_n^2}{2} \sim \log n - \frac{1}{2}(1+\beta)\log\log n + \frac{(\log\log n)^2}{16\log n}(1+\beta)^2 \sim \log n - \frac{1}{2}(1+\beta)\log\log n,$$
since the quadratic term is $o\big((1+\beta)\log\log n\big)$.
Substituting these into $(1)$, we find:
$$\log\left(\sqrt{2\pi}\,\Xi_n\right) - \log n \sim -\log\left(\sqrt{2\log n}\right) - \log n + \frac{1}{2}(1+\beta)\log\log n \implies \beta \sim \frac{\log\left(4\pi\,\Xi_n^2\right)}{\log\log n}.$$
Therefore, we conclude that almost surely
$$Z_n \sim \sqrt{2\log n} - \frac{\log\log n}{2\sqrt{2\log n}}\left(1 + \frac{\log(4\pi) + 2\log \Xi_n}{\log\log n}\right) = \sqrt{2\log n} - \frac{\log\log n + \log(4\pi)}{2\sqrt{2\log n}} - \frac{\log \Xi_n}{\sqrt{2\log n}}.$$
This corresponds to the final result on p. 374 of Cramér's 1946 Mathematical Methods of Statistics, except that here the exact order of the error term isn't given. Apparently carrying the procedure one step further gives the exact order of the error term, but in any case that doesn't seem necessary for proving the results about the maxima of i.i.d. standard normals in which we are interested.
Given the result above, namely that almost surely:
$$Z_n \sim \sqrt{2\log n} - \frac{\log\log n + \log(4\pi)}{2\sqrt{2\log n}} - \frac{\log \Xi_n}{\sqrt{2\log n}} \implies Z_n = \sqrt{2\log n} - \frac{\log\log n + \log(4\pi)}{2\sqrt{2\log n}} - \frac{\log \Xi_n}{\sqrt{2\log n}} + o(1). \tag{$\dagger$}$$
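As a Monte Carlo sanity check of the leading terms in $(\dagger)$ (a sketch only): since $\Phi(Z_n)$ is distributed as the maximum of $n$ i.i.d. uniforms, we have the exact representation $Z_n \stackrel{d}{=} \Phi^{-1}(U^{1/n})$ for a single uniform $U$, which makes sampling cheap. The choices $n = 10^6$ and $500$ trials are arbitrary.

```python
import math, random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()
n, trials = 10**6, 500

# Sample the maximum of 10^6 standard normals with one inverse-CDF call per trial,
# using Z_n =d Phi^{-1}(U**(1/n)), and average over the trials.
zbar = sum(nd.inv_cdf(random.random() ** (1 / n)) for _ in range(trials)) / trials

L = math.log(n)
leading = math.sqrt(2 * L)
refined = leading - (math.log(L) + math.log(4 * math.pi)) / (2 * math.sqrt(2 * L))
print(zbar, leading, refined)  # the refined approximation should sit much closer to zbar
```

The remaining gap between `zbar` and `refined` reflects the $-\log\Xi_n/\sqrt{2\log n}$ term and the $o(1)$.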
2. Then by linearity of expectation it follows that:
$$\mathbb{E}Z_n = \sqrt{2\log n} - \frac{\log\log n + \log(4\pi)}{2\sqrt{2\log n}} - \frac{\mathbb{E}[\log \Xi_n]}{\sqrt{2\log n}} + o(1) \implies \frac{\mathbb{E}Z_n}{\sqrt{2\log n}} = 1 - \frac{\mathbb{E}[\log \Xi_n]}{2\log n} + o(1).$$
Therefore, we have shown that
$$\lim_{n\to\infty} \frac{\mathbb{E}Z_n}{\sqrt{2\log n}} = 1,$$
as long as we can also show that
$$\mathbb{E}[\log \Xi_n] = o(\log n).$$
This might not be too difficult to show, since again $\Xi_n$ has the same distribution for the maximum of any i.i.d. continuous random variables. This would give us the second result from above.
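Indeed, using the distribution-free representation $\Xi_n \stackrel{d}{=} n(1 - U^{1/n})$ for a single $\mathrm{Uniform}(0,1)$ variable $U$ (since $\Phi(Z_n)$ is distributed as the maximum of $n$ i.i.d. uniforms), one can estimate $\mathbb{E}[\log\Xi_n]$ directly. If $\Xi_n$ behaves like its $\mathrm{Exp}(1)$ limit (granting enough uniform integrability), this should approach the mean of $\log$ of an $\mathrm{Exp}(1)$ variable, namely $-\gamma \approx -0.5772$, which is certainly $o(\log n)$. A sketch (sample sizes arbitrary):

```python
import math, random

random.seed(2)
trials = 50000

# Monte Carlo estimate of E[log Xi_n], using Xi_n =d n * (1 - U**(1/n)).
for n in [10**2, 10**4, 10**8]:
    m = sum(math.log(n * (1 - random.random() ** (1 / n))) for _ in range(trials)) / trials
    print(n, m, m / math.log(n))  # last column should shrink toward 0
```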
1. Similarly, we also have from the above that almost surely:
$$\frac{Z_n}{\sqrt{2\log n}} = 1 - \frac{\log \Xi_n}{2\log n} + o(1).$$
Therefore, if we can show that:
$$\log \Xi_n = o(\log n) \ \text{almost surely}, \tag{*}$$
then $\frac{Z_n}{\sqrt{2\log n}} \to 1$ almost surely, thereby also giving us the first result from above (and showing $\mathbb{E}[\log \Xi_n] = o(\log n)$ in addition would give the second).
Also note that in the proof of $(\dagger)$ above we needed to assume in any case that $\Xi_n = o(\log n)$ almost surely (or at least something similar), so that if we are able to show $(\dagger)$, then in the process we will most likely also have needed to show $\Xi_n = o(\log n)$ almost surely; therefore, if we can prove $(\dagger)$, we will most likely be able to immediately reach all of these conclusions.
3. However, if we have this result, then I don't understand how one would also have that $\mathbb{E}Z_n = \sqrt{2\log n} + \Theta(1)$, since $o(1) \ne \Theta(1)$. But at the very least it would seem to be true that $\mathbb{E}Z_n = \sqrt{2\log n} + O(1)$.
So then it seems that we can focus on answering the question of how to show that $\Xi_n = o(\log n)$ almost surely.
We will also need to do the grunt work of providing a proof for $(\sim)$, but to the best of my knowledge that is just calculus and involves no probability theory, although I have yet to sit down and try it.
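For what it's worth, here is the calculus I expect that proof to amount to (a sketch only; the error bound still needs to be checked carefully). Integrating by parts with $u = 1/t$ and $dv = t\,e^{-t^2/2}\,dt$:

```latex
% Sketch of the integration by parts behind (~):
\begin{aligned}
1 - \Phi(z)
  &= \frac{1}{\sqrt{2\pi}} \int_z^{\infty} e^{-t^2/2}\,dt
   = \frac{1}{\sqrt{2\pi}} \int_z^{\infty} \frac{1}{t}\cdot t\,e^{-t^2/2}\,dt \\
  &= \frac{1}{\sqrt{2\pi}} \left[ \frac{e^{-z^2/2}}{z}
     - \int_z^{\infty} \frac{e^{-t^2/2}}{t^2}\,dt \right],
\end{aligned}
```

and a second integration by parts of the same kind shows $\int_z^{\infty} t^{-2} e^{-t^2/2}\,dt \le e^{-z^2/2}/z^3$, so that $\sqrt{2\pi}\,(1-\Phi(z)) = \frac{1}{z}e^{-z^2/2}\left(1 + O(z^{-2})\right)$; taking $z = Z_n$ and recalling $\Xi_n/n = 1 - \Phi(Z_n)$ should give $(\sim)$.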
First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $\Xi_n \ge 0$):
$$\Xi_n = o(\log n) \iff \lim_{n\to\infty} \frac{\Xi_n}{\log n} = 0 \iff \forall \varepsilon > 0,\ \frac{\Xi_n}{\log n} > \varepsilon \ \text{only finitely many times} \iff \forall \varepsilon > 0,\ \Xi_n > \varepsilon\log n \ \text{only finitely many times}.$$
One also has that:
$$\Xi_n > \varepsilon\log n \iff n(1 - F(Z_n)) > \varepsilon\log n \iff 1 - F(Z_n) > \frac{\varepsilon\log n}{n} \iff F(Z_n) < 1 - \frac{\varepsilon\log n}{n} \iff Z_n \le \inf\left\{y : F(y) \ge 1 - \frac{\varepsilon\log n}{n}\right\}.$$
Correspondingly, define for all $n$:
$$u_n^{(\varepsilon)} = \inf\left\{y : F(y) \ge 1 - \frac{\varepsilon\log n}{n}\right\}.$$
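In the Gaussian case $F = \Phi$ is continuous and strictly increasing, so the infimum is attained at a quantile: $u_n^{(\varepsilon)} = \Phi^{-1}\!\left(1 - \frac{\varepsilon\log n}{n}\right)$, which is easy to compute with the standard library's inverse normal CDF (a sketch; the sample values of $n$ and $\varepsilon$ are arbitrary):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def u(n, eps):
    # u_n^(eps) = Phi^{-1}(1 - eps * log(n) / n) in the Gaussian case.
    return nd.inv_cdf(1 - eps * math.log(n) / n)

# The thresholds sit below sqrt(2 log n) and grow as eps shrinks:
for n in [10**3, 10**6]:
    print(n, u(n, 1.0), u(n, 0.1), math.sqrt(2 * math.log(n)))
```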
Therefore the above steps show us that:
$$\Xi_n = o(\log n) \ \text{a.s.} \iff \mathbb{P}(\Xi_n = o(\log n)) = 1 \iff \mathbb{P}(\forall \varepsilon > 0,\ \Xi_n > \varepsilon\log n \ \text{only finitely many times}) = 1 \iff \mathbb{P}(\forall \varepsilon > 0,\ Z_n \le u_n^{(\varepsilon)} \ \text{only finitely many times}) = 1 \iff \mathbb{P}(\exists \varepsilon > 0:\ Z_n \le u_n^{(\varepsilon)} \ \text{infinitely often}) = 0,$$
where the last step passes to the complement, turning the "for all" into a "there exists".
Notice that we can write:
$$\left\{\exists \varepsilon > 0:\ Z_n \le u_n^{(\varepsilon)} \ \text{infinitely often}\right\} = \bigcup_{\varepsilon > 0}\left\{Z_n \le u_n^{(\varepsilon)} \ \text{infinitely often}\right\}.$$
The sequences $u_n^{(\varepsilon)}$ become uniformly larger as $\varepsilon$ decreases, so the events $\{Z_n \le u_n^{(\varepsilon)} \ \text{infinitely often}\}$
are increasing as $\varepsilon$ goes to $0$. By this monotonicity, the union over all $\varepsilon > 0$ equals the union along any sequence $\varepsilon_k \downarrow 0$, so continuity from below of probability measures allows us to conclude that:
$$\mathbb{P}\left(\exists \varepsilon > 0:\ Z_n \le u_n^{(\varepsilon)} \ \text{i.o.}\right) = \mathbb{P}\left(\bigcup_{\varepsilon > 0}\left\{Z_n \le u_n^{(\varepsilon)} \ \text{i.o.}\right\}\right) = \mathbb{P}\left(\lim_{\varepsilon\downarrow 0}\left\{Z_n \le u_n^{(\varepsilon)} \ \text{i.o.}\right\}\right) = \lim_{\varepsilon\downarrow 0}\mathbb{P}\left(Z_n \le u_n^{(\varepsilon)} \ \text{i.o.}\right).$$
Therefore it suffices to show that for all $\varepsilon > 0$,
$$\mathbb{P}\left(Z_n \le u_n^{(\varepsilon)} \ \text{infinitely often}\right) = 0,$$
because of course the limit of any constant sequence is that constant.
Here is somewhat of a sledgehammer result:
Theorem 4.3.1, p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, \dots$ be i.i.d. random variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < \sup\{x : F(x) < 1\}$,
$$\mathbb{P}(Z_n \le u_n \ \text{infinitely often}) = 0 \ \text{or} \ 1$$
according as
$$\sum_{j=1}^{\infty}\left[1 - F(u_j)\right]\exp\left(-j\left[1 - F(u_j)\right]\right) < +\infty \ \text{or} \ = +\infty.$$
The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel-Cantelli lemmas. I may get around to trying to condense the proof so that it uses only the parts required for this analysis and only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't), and type it up here, but holding your breath is not recommended. Note that in this case $\omega(F) := \sup\{x : F(x) < 1\} = +\infty$, so that condition is vacuous, and $n(1 - F(u_n^{(\varepsilon)})) = \varepsilon\log n$ is thus clearly nondecreasing.
Anyway, the point being that, appealing to this theorem, it suffices to show that:
$$\sum_{j=1}^{\infty}\left[1 - F\!\left(u_j^{(\varepsilon)}\right)\right]\exp\left(-j\left[1 - F\!\left(u_j^{(\varepsilon)}\right)\right]\right) = \sum_{j=1}^{\infty}\left[\frac{\varepsilon\log j}{j}\right]\exp(-\varepsilon\log j) = \varepsilon\sum_{j=1}^{\infty}\frac{\log j}{j^{1+\varepsilon}} < +\infty.$$
Note that since logarithmic growth is slower than any power-law growth with positive exponent (logarithms and exponentials preserve monotonicity, so $\log\log n \le \alpha\log n \iff \log n \le n^{\alpha}$, and the former inequality can be seen to hold for all $n$ large enough from the fact that $\log n \le n$ together with a change of variables), we have $\log j \le j^{\varepsilon/2}$ for all $j$ large enough, and hence, up to finitely many initial terms (which do not affect convergence):
$$\sum_{j=1}^{\infty}\frac{\log j}{j^{1+\varepsilon}} \le \sum_{j=1}^{\infty}\frac{j^{\varepsilon/2}}{j^{1+\varepsilon}} = \sum_{j=1}^{\infty}\frac{1}{j^{1+\varepsilon/2}} < +\infty,$$
since the $p$-series is known to converge for all $p > 1$, and $\varepsilon > 0$ of course implies $1 + \varepsilon/2 > 1$.
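The convergence can also be observed numerically (a sketch; $\varepsilon = 1/2$ is an arbitrary choice):

```python
import math

eps = 0.5

# Partial sums of eps * sum_j log(j) / j^(1+eps): they should visibly
# stabilize, consistent with convergence of the series.
s, checkpoints = 0.0, {}
for j in range(1, 10**6 + 1):
    s += eps * math.log(j) / j ** (1 + eps)
    if j in (10**3, 10**4, 10**5, 10**6):
        checkpoints[j] = s
print(checkpoints)
```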
Thus, using the above theorem, we have shown that for all $\varepsilon > 0$, $\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \ \text{i.o.}) = 0$, which, to recapitulate, should mean that $\Xi_n = o(\log n)$ almost surely.
We still need to show that $\log \Xi_n = o(\log\log n)$. This doesn't follow from the above, since, e.g.,
$$\frac{1}{n\log n} = o(\log n), \quad \text{yet} \quad \log\left(\frac{1}{n\log n}\right) = -\log n - \log\log n \ne o(\log n).$$
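Numerically, this cautionary example looks like the following (a quick check of both claims):

```python
import math

# x_n = 1 / (n log n) is o(log n), yet log(x_n) / log(n) -> -1, not 0.
for n in [10**3, 10**6, 10**9]:
    x = 1 / (n * math.log(n))
    print(n, x / math.log(n), math.log(x) / math.log(n))
```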
However, given a sequence $x_n$, if one can show that $x_n = o((\log n)^{\delta})$ for arbitrary $\delta > 0$, then it does follow that $\log(x_n) = o(\log\log n)$, at least provided $x_n$ is also suitably bounded below. Ideally I would like to be able to show this for $\Xi_n$ using the above lemma (assuming it's even true), but I am not able to (as of yet).