Random variables


15

Can we say anything about the dependence between a random variable and a function of that random variable? For example, is $X^2$ dependent on $X$?


5
If $X$ and $f(X)$ are independent, then $f(X)$ is almost surely constant. That is, there exists some $a$ such that $P(f(X) = a) = 1$.
cardinal

2
@cardinal - why not post that as an answer?
Carl

@cardinal, I took the liberty of elaborating on your comment. I take it for granted that the function under consideration is a given, deterministic function. In the process, I ended up writing out an argument for the result you state. Any comments are most welcome and appreciated.
NRH

Yes, $X^2$ is dependent on $X$: if you know $X$, then you know $X^2$. $X$ and $Y$ are independent only if knowing the value of $X$ does not affect your knowledge of the distribution of $Y$.
Henry

2
@iamrohitbanga: If $X \in \{-1, 1\}$, then $X^2 = 1$ almost surely. So, in this very special case, $X$ is independent of $X^2$.
cardinal
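A quick Monte Carlo sketch of this special case (an illustration only, assuming numpy is available): with $X$ uniform on $\{-1, 1\}$, $X^2 = 1$ almost surely, and the empirical joint probabilities of $(X, X^2)$ factor into the product of the marginals, as independence requires.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=100_000)  # X uniform on {-1, 1}
y = x ** 2                             # f(X) = X^2, equal to 1 almost surely

# Check P(X = 1, X^2 = 1) against P(X = 1) * P(X^2 = 1)
p_joint = np.mean((x == 1) & (y == 1))
p_product = np.mean(x == 1) * np.mean(y == 1)
print(p_joint, p_product)  # both approximately 0.5, consistent with independence
```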

Answers:


18

This is a proof of @cardinal's comment with a small twist. If $X$ and $f(X)$ are independent, then, for events $A$ and $B$,

$$P(X \in A \cap f^{-1}(B)) = P(X \in A, f(X) \in B) = P(X \in A)\,P(f(X) \in B) = P(X \in A)\,P(X \in f^{-1}(B)).$$

Taking $A = f^{-1}(B)$ yields the equation

$$P(f(X) \in B) = P(f(X) \in B)^2,$$

which has the two solutions 0 and 1. Thus $P(f(X) \in B) \in \{0, 1\}$ for all $B$. In complete generality, it is not possible to say more: if $X$ and $f(X)$ are independent, then $f(X)$ is a variable such that, for any $B$, it is either in $B$ or in $B^c$ with probability 1. To say more, one needs more assumptions, e.g. that singleton sets $\{b\}$ are measurable.

However, details at the measure-theoretic level do not seem to be the main concern of the OP. If $X$ is real and $f$ is a real function (and we use the Borel $\sigma$-algebra, say), then taking $B = (-\infty, b]$ it follows that the distribution function of $f(X)$ takes only the values 0 and 1, hence there is a $b$ at which it jumps from 0 to 1, and $P(f(X) = b) = 1$.

At the end of the day, the answer to the OP's question is that $X$ and $f(X)$ are in general dependent, and independent only in very special cases. Moreover, the Dirac measure $\delta_{f(x)}$ always qualifies as a conditional distribution of $f(X)$ given $X = x$, which is a formal way of saying that if you know $X = x$, then you also know exactly what $f(X)$ is. This particular form of dependence, with a degenerate conditional distribution, is characteristic of functions of random variables.
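As a rough numerical companion to that last point (a sketch only, assuming numpy), take a discrete $X$ and $f(x) = x^2$: the empirical conditional distribution of $f(X)$ given $X = x$ puts all of its mass on the single point $f(x)$, i.e. it is the Dirac measure $\delta_{f(x)}$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-2, -1, 0, 1, 2], size=100_000)  # a simple discrete X
fx = x ** 2                                      # f(X) = X^2

for value in (-2, -1, 0, 1, 2):
    conditional = fx[x == value]
    # Conditional on X = value, f(X) is degenerate at value**2
    print(value, np.unique(conditional), np.mean(conditional == value ** 2))
```

Each line prints a single support point and a conditional frequency of 1.0: the degenerate form of dependence described above.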


(+1) Sorry about that. While composing my answer, I didn't receive the update that you had also submitted one. :)
cardinal

21

Lemma: Let $X$ be a random variable and let $f$ be a (Borel measurable) function such that $X$ and $f(X)$ are independent. Then $f(X)$ is almost surely constant. That is, there is some $a \in \mathbb{R}$ such that $P(f(X) = a) = 1$.

The proof is below; but first, some remarks. The Borel measurability is just a technical condition to ensure that we can assign probabilities in a reasonable and consistent way. The "almost surely" statement is also a technicality.

The essence of the lemma is that if we want $X$ and $f(X)$ to be independent, then our only candidates are functions of the form $f(x) = a$.

Contrast this with the case of functions $f$ that make $X$ and $f(X)$ uncorrelated. This is a much, much weaker condition. Indeed, consider any random variable $X$ with mean zero, finite absolute third moment, and symmetric about zero. Take $f(x) = x^2$, as in the example in the question. Then $\mathrm{Cov}(X, f(X)) = \mathbb{E}[X f(X)] = \mathbb{E}[X^3] = 0$, so $X$ and $f(X) = X^2$ are uncorrelated.
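Here is a small simulation sketch of that contrast (hypothetical, assuming numpy): for $X \sim N(0, 1)$, which has mean zero and is symmetric about zero, the empirical covariance of $X$ and $X^2$ is near zero even though the two variables are plainly dependent.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)  # X ~ N(0, 1): mean zero, symmetric about zero
fx = x ** 2                         # f(X) = X^2

print(np.cov(x, fx)[0, 1])          # near 0: X and X^2 are uncorrelated

# But they are far from independent: the conditional mean of X^2
# changes drastically with the magnitude of X.
print(np.mean(fx[np.abs(x) > 2]), np.mean(fx[np.abs(x) <= 2]))
```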

Below, I give the simplest proof I could come up with for the lemma. I've made it exceedingly verbose so that all the details are as obvious as possible. If anyone sees ways to improve or simplify it, I'd enjoy knowing.

Idea of proof: Intuitively, if we know X, then we know f(X). So, we need to find some event in σ(X), the sigma algebra generated by X, that relates our knowledge of X to that of f(X). Then, we use that information in conjunction with the assumed independence of X and f(X) to show that our available choices for f have been severely constrained.

Proof: Recall that $X$ and $Y$ are independent if and only if $P(X \in A, Y \in B) = P(X \in A)\,P(Y \in B)$ for all $A \in \sigma(X)$ and $B \in \sigma(Y)$. Here we take $Y = f(X)$ for a Borel-measurable $f$. Fix $y \in \mathbb{R}$ and define the event $A(y) = \{\omega : f(X(\omega)) \le y\}$. Then,

$$A(y) = \{\omega : X(\omega) \in f^{-1}((-\infty, y])\}.$$

Since $(-\infty, y]$ is a Borel set and $f$ is Borel-measurable, $f^{-1}((-\infty, y])$ is also a Borel set. This implies that $A(y) \in \sigma(X)$ (by definition(!) of $\sigma(X)$).

Since $X$ and $Y$ are assumed independent and $A(y) \in \sigma(X)$, then

$$P(X \in A(y), Y \le y) = P(X \in A(y))\,P(Y \le y) = P(f(X) \le y)\,P(f(X) \le y),$$

and this holds for all $y \in \mathbb{R}$. But, by definition of $A(y)$,

$$P(X \in A(y), Y \le y) = P(f(X) \le y, Y \le y) = P(f(X) \le y).$$

Combining these last two, we get that for every $y \in \mathbb{R}$,

$$P(f(X) \le y) = P(f(X) \le y)\,P(f(X) \le y),$$

so $P(f(X) \le y) = 0$ or $P(f(X) \le y) = 1$. This means there must be some constant $a \in \mathbb{R}$ such that the distribution function of $f(X)$ jumps from zero to one at $a$. In other words, $f(X) = a$ almost surely.
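To watch the proof's machinery reject a non-constant $f$ (a numerical sketch, assuming numpy), take $X \sim N(0, 1)$ and $f(x) = x^2$, so that $A(y) = \{|X| \le \sqrt{y}\}$ for $y \ge 0$. The factorization that independence would force then fails:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1_000_000)  # X ~ N(0, 1)
y = 1.0                             # a fixed threshold; A(y) = {|X| <= sqrt(y)}
in_A = np.abs(x) <= np.sqrt(y)      # the event A(y) from the proof
fx_le_y = x ** 2 <= y               # the event {f(X) <= y}; equals A(y) here

lhs = np.mean(in_A & fx_le_y)           # P(X in A(y), f(X) <= y) = P(f(X) <= y)
rhs = np.mean(in_A) * np.mean(fx_le_y)  # what independence would require
print(lhs, rhs)  # ~0.68 vs ~0.47: p = p^2 fails, so X and X^2 are not independent
```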

NB: Note that the converse is also true by an even simpler argument. That is, if $f(X) = a$ almost surely, then $X$ and $f(X)$ are independent.


+1, I'm the one who should be sorry - stealing your argument like that was not very polite. It's great that you spelled out the difference between independence and being uncorrelated in this context.
NRH

No "stealing" involved, nor impoliteness. :) Though many of the ideas and comments are similar (as you'd expect for a question like this!), I think the two posts are nicely complementary. In particular, I like how at the beginning of your post you didn't constrain yourself to real-valued random variables.
cardinal

@NRH, accepting your answer, as the initial part of your proof seems easier to grasp for a novice like me. Nevertheless, +1 to cardinal for your answer.
Rohit Banga