Does zero covariance imply independence for binary random variables?



If X and Y are two random variables that can each take only two possible states, how do I show that Cov(X,Y) = 0 implies independence? This seems to contradict what I learned, namely that Cov(X,Y) = 0 does not imply independence...

The hint says to start with 0 and 1 as the possible states and generalize from there. I can do that and show E(XY) = E(X)E(Y), but doesn't that fall short of independence?

I guess I'm a bit confused about how to do this mathematically.


The title of your question suggests it is not true in general...
Michael Chernick

The statement you want to prove is indeed true. If X and Y are Bernoulli random variables with parameters p1 and p2 respectively, then E[X] = p1 and E[Y] = p2. So Cov(X,Y) = E[XY] - E[X]E[Y] equals 0 only if E[XY] = P{X=1, Y=1} equals p1·p2 = P{X=1}P{Y=1}, which means that {X=1} and {Y=1} are independent events. A standard result is that if A and B are a pair of independent events, then so are A, B^c and A^c, B and A^c, B^c, which is to say that X and Y are independent random variables. Now generalize.
Dilip Sarwate

Answers:



For binary variables, their expected value equals the probability that they are equal to 1. Therefore,

E(XY) = P(XY = 1) = P(X = 1, Y = 1)
E(X) = P(X = 1)
E(Y) = P(Y = 1)

If the covariance of the two is zero, that means E(XY) = E(X)E(Y), which implies

P(X = 1, Y = 1) = P(X = 1) P(Y = 1).

Using basic rules about independent events (i.e., if A and B are independent, then their complements are independent, and so on), it is straightforward to see that all the other joint probabilities multiply as well, which means the joint mass function factors, so the two random variables are independent.
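(For anyone who wants to check this numerically, here is a minimal sketch in Python; the marginals p and q below are arbitrary illustrative values, not anything from the question.)

```python
# Sketch: for binary X and Y, once P(X=1, Y=1) = P(X=1) P(Y=1),
# every cell of the joint pmf factors, i.e. X and Y are independent.
p, q = 0.3, 0.7                       # illustrative marginals P(X=1), P(Y=1)
joint = {
    (1, 1): p * q,                    # zero covariance forces this cell to equal p*q
    (1, 0): p - p * q,                # P(X=1) - P(X=1, Y=1)
    (0, 1): q - p * q,                # P(Y=1) - P(X=1, Y=1)
    (0, 0): 1 - p - q + p * q,        # remaining probability mass
}
marg_x = {1: p, 0: 1 - p}
marg_y = {1: q, 0: 1 - q}
for (x, y), pxy in joint.items():
    # factorization holds at all four points, not just (1, 1)
    assert abs(pxy - marg_x[x] * marg_y[y]) < 1e-12
print("joint pmf factors everywhere -> X and Y are independent")
```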


Concise and elegant. Excellent! +1 =D
Marcelo Ventura


Correlation and covariance measure linear association between two given variables, and they are under no obligation to detect any other form of association.

So the two variables may be associated in several other, nonlinear ways, and covariance (and therefore correlation) cannot distinguish those cases from independence.

As a very didactic, artificial, and unrealistic example, consider X such that P(X = x) = 1/3 for x = -1, 0, 1, and also consider Y = X². Notice that they are not merely associated: one is a function of the other. Nevertheless, their covariance is 0, because their association is orthogonal to the kind of association that covariance can detect.
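(For concreteness, here is a short exact check of that example; a sketch using only the standard library, not anything from the original answer.)

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = X^2: dependent, yet Cov(X, Y) = 0.
support = [-1, 0, 1]
p = Fraction(1, 3)

E_X  = sum(p * x        for x in support)   # 0
E_Y  = sum(p * x**2     for x in support)   # 2/3
E_XY = sum(p * x * x**2 for x in support)   # E[X^3] = 0
print(E_XY - E_X * E_Y)                      # Cov(X, Y) = 0 -> uncorrelated

# Yet Y is a deterministic function of X, so they are clearly dependent:
# P(Y = 1 | X = 0) = 0, whereas P(Y = 1) = 2/3.
```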

Edit

Indeed, as @whuber pointed out, the original answer above was really just a comment about how the assertion is not true in general when the two variables are not necessarily dichotomous. My bad!

So let's math it up! (The equivalent of Barney Stinson's "Suit up!")

Special Case

If both X and Y are dichotomous, then, without loss of generality, we can assume that both take only the values 0 and 1, with arbitrary probabilities p, q and r given by

P(X = 1) = p ∈ [0, 1]
P(Y = 1) = q ∈ [0, 1]
P(X = 1, Y = 1) = r ∈ [0, 1],

which completely characterize the joint distribution of X and Y. Taking up @DilipSarwate's hint, notice that those three values are enough to determine the joint distribution of (X, Y), since

P(X = 0, Y = 1) = P(Y = 1) - P(X = 1, Y = 1) = q - r
P(X = 1, Y = 0) = P(X = 1) - P(X = 1, Y = 1) = p - r
P(X = 0, Y = 0) = 1 - P(X = 0, Y = 1) - P(X = 1, Y = 0) - P(X = 1, Y = 1) = 1 - (q - r) - (p - r) - r = 1 - p - q + r.

(On a side note, r must of course respect p - r ∈ [0, 1], q - r ∈ [0, 1] and 1 - p - q + r ∈ [0, 1] in addition to r ∈ [0, 1], which is to say r ∈ [max(0, p + q - 1), min(p, q)].)

Notice that r=P(X=1,Y=1) might be equal to the product pq=P(X=1)P(Y=1), which would render X and Y independent, since

P(X = 0, Y = 0) = 1 - p - q + pq = (1 - p)(1 - q) = P(X = 0) P(Y = 0)
P(X = 1, Y = 0) = p - pq = p(1 - q) = P(X = 1) P(Y = 0)
P(X = 0, Y = 1) = q - pq = (1 - p) q = P(X = 0) P(Y = 1).

Yes, r might be equal to pq, BUT it can be different, as long as it respects the boundaries above.

Well, from the above joint distribution, we would have

E(X) = 0 · P(X = 0) + 1 · P(X = 1) = P(X = 1) = p
E(Y) = 0 · P(Y = 0) + 1 · P(Y = 1) = P(Y = 1) = q
E(XY) = 0 · P(XY = 0) + 1 · P(XY = 1) = P(XY = 1) = P(X = 1, Y = 1) = r
Cov(X, Y) = E(XY) - E(X) E(Y) = r - pq.

Now, notice that X and Y are independent if and only if Cov(X,Y) = 0. Indeed, if X and Y are independent, then P(X = 1, Y = 1) = P(X = 1) P(Y = 1), which is to say r = pq, and therefore Cov(X,Y) = r - pq = 0. On the other hand, if Cov(X,Y) = 0, then r - pq = 0, which is to say r = pq, and therefore, by the factorization displayed above, X and Y are independent.
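(Here is a small sketch of this special case in Python; the helper names and the particular values of p, q, r are mine, chosen only for illustration.)

```python
def binary_joint(p, q, r):
    """Joint pmf of (X, Y) on {0,1}^2 with P(X=1)=p, P(Y=1)=q, P(X=1,Y=1)=r."""
    assert max(0.0, p + q - 1) <= r <= min(p, q), "r outside the admissible range"
    return {(1, 1): r, (1, 0): p - r, (0, 1): q - r, (0, 0): 1 - p - q + r}

def cov_and_independence(p, q, r):
    joint = binary_joint(p, q, r)
    cov = r - p * q                                    # Cov(X, Y) = E[XY] - E[X] E[Y]
    independent = all(
        abs(joint[(x, y)] - (p if x else 1 - p) * (q if y else 1 - q)) < 1e-12
        for x in (0, 1) for y in (0, 1)
    )
    return cov, independent

print(cov_and_independence(0.4, 0.5, 0.20))   # (0.0, True):  r = pq, so independent
print(cov_and_independence(0.4, 0.5, 0.35))   # (~0.15, False): r != pq, so dependent
```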

General Case

About the without loss of generality clause above, if X and Y were distributed otherwise, let's say, for a<b and c<d,

P(X = b) = p,  P(Y = d) = q,  P(X = b, Y = d) = r,

then the variables X′ and Y′ given by

X′ = (X - a)/(b - a)  and  Y′ = (Y - c)/(d - c)

would be distributed just as characterized above, since

X = a ⇔ X′ = 0,  X = b ⇔ X′ = 1,  Y = c ⇔ Y′ = 0  and  Y = d ⇔ Y′ = 1.

So X and Y are independent if and only if X′ and Y′ are independent.

Also, we would have

E(X′) = E((X - a)/(b - a)) = (E(X) - a)/(b - a)
E(Y′) = E((Y - c)/(d - c)) = (E(Y) - c)/(d - c)
E(X′Y′) = E((X - a)/(b - a) · (Y - c)/(d - c))
        = E[(X - a)(Y - c)] / ((b - a)(d - c))
        = E(XY - cX - aY + ac) / ((b - a)(d - c))
        = (E(XY) - c E(X) - a E(Y) + ac) / ((b - a)(d - c))
Cov(X′, Y′) = E(X′Y′) - E(X′) E(Y′)
        = (E(XY) - c E(X) - a E(Y) + ac) / ((b - a)(d - c)) - (E(X) - a)/(b - a) · (E(Y) - c)/(d - c)
        = ([E(XY) - c E(X) - a E(Y) + ac] - [E(X) - a][E(Y) - c]) / ((b - a)(d - c))
        = ([E(XY) - c E(X) - a E(Y) + ac] - [E(X) E(Y) - c E(X) - a E(Y) + ac]) / ((b - a)(d - c))
        = (E(XY) - E(X) E(Y)) / ((b - a)(d - c))
        = Cov(X, Y) / ((b - a)(d - c)).
So Cov(X′, Y′) = 0 if and only if Cov(X, Y) = 0.
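(And a quick numeric check of that rescaling identity, a sketch with arbitrary illustrative values of a, b, c, d, p, q, r.)

```python
# Two-point variables: X in {a, b}, Y in {c, d}, with X' = (X-a)/(b-a), Y' = (Y-c)/(d-c).
a, b, c, d = 2.0, 5.0, -1.0, 3.0     # illustrative support points (a < b, c < d)
p, q, r = 0.4, 0.5, 0.35             # P(X=b), P(Y=d), P(X=b, Y=d)

E_X  = a * (1 - p) + b * p
E_Y  = c * (1 - q) + d * q
E_XY = (a * c * (1 - p - q + r) + a * d * (q - r)
        + b * c * (p - r) + b * d * r)
cov_XY = E_XY - E_X * E_Y             # Cov(X, Y) on the original scale

cov_prime = r - p * q                 # Cov(X', Y') from the special case above
assert abs(cov_XY - (b - a) * (d - c) * cov_prime) < 1e-12
print(cov_XY, (b - a) * (d - c) * cov_prime)   # both 1.8: Cov(X,Y) = (b-a)(d-c) Cov(X',Y')
```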

=D


I recycled that answer from this post.
Marcelo Ventura

Verbatim cut and paste from your other post. Love it. +1
gammer

The problem with copy-and-paste is that your answer no longer seems to address the question: it is merely a comment on the question. It would be better, then, to post a comment with a link to your other answer.
whuber

How is this an answer to the question asked?
Dilip Sarwate

Your edits still don't answer the question, at least not at the level the question is asked. You write "Notice that r is not necessarily equal to the product pq. That exceptional situation corresponds to the case of independence between X and Y," which is a perfectly true statement, but only for the cognoscenti, because for the hoi polloi independence requires not just that
P(X = 1, Y = 1) = P(X = 1) P(Y = 1)    (1)
but also
P(X = u, Y = v) = P(X = u) P(Y = v) for all u, v ∈ {0, 1}.    (2)
Yes, (1) implies (2), as the cognoscenti know; for lesser mortals, a proof that (1) implies (2) is helpful.
Dilip Sarwate


IN GENERAL:

The criterion for independence is F(x, y) = F_X(x) F_Y(y). Or, equivalently,

f_{X,Y}(x, y) = f_X(x) f_Y(y)    (1)

"If two variables are independent, their covariance is 0. But, having a covariance of 0 does not imply the variables are independent."

This is nicely explained by Macro here, and in the Wikipedia entry for independence.

independence ⟹ zero covariance, yet

zero covariance ⇏ independence.

Great example: X ∼ N(0, 1), and Y = X². Covariance is zero (and E(XY) = 0, which is the criterion for orthogonality), yet they are dependent. Credit goes to this post.
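(A quick Monte Carlo illustration of that example, a sketch using numpy; the seed and sample size are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x ** 2                           # Y = X^2 is a deterministic function of X

print(np.cov(x, y)[0, 1])            # close to 0: X and Y are uncorrelated
# ...yet clearly dependent: E[Y] = 1, but conditioning on |X| > 2 changes Y's mean a lot.
print(y.mean(), y[np.abs(x) > 2].mean())
```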


IN PARTICULAR (OP problem):

These are Bernoulli rv's X and Y, with probabilities of success Pr(X = 1) and Pr(Y = 1).

Cov(X, Y) = E[XY] - E[X] E[Y] = Pr(X = 1 ∩ Y = 1) - Pr(X = 1) Pr(Y = 1)    (∗)

so Cov(X, Y) = 0 exactly when Pr(X = 1, Y = 1) = Pr(X = 1) Pr(Y = 1).

This is equivalent to the condition for independence in Eq. (1).


(∗): here E[XY] = Pr(X = 1 ∩ Y = 1) because

E[XY] = Σ_{x, y} x · y · Pr(X = x ∩ Y = y) = Pr(X = 1 ∩ Y = 1),

since every term with x · y = 0 vanishes.

(∗∗): the first equality above holds by LOTUS.


As pointed out below, the argument is incomplete without the result Dilip Sarwate mentioned in his comments shortly after the OP appeared. After searching around, I found this proof of the missing part here:

If events A and B are independent, then events A^c and B are independent, and events A^c and B^c are also independent.

Proof. By definition,

A and B are independent ⟺ P(A ∩ B) = P(A) P(B).

But B = (A ∩ B) ∪ (A^c ∩ B), a disjoint union, so P(B) = P(A ∩ B) + P(A^c ∩ B), which yields:

P(A^c ∩ B) = P(B) - P(A ∩ B) = P(B) - P(A) P(B) = P(B) [1 - P(A)] = P(B) P(A^c).

Repeat the argument for the events A^c and B^c, this time starting from the statement that A^c and B are independent and taking the complement of B.

Similarly, A and B^c are independent events.

So, we have shown already that

Pr(X=1,Y=1)=Pr(X=1)Pr(Y=1)
and the above shows that this implies that
Pr(X = i, Y = j) = Pr(X = i) Pr(Y = j) for all i, j ∈ {0, 1},
that is, the joint pmf factors into the product of marginal pmfs everywhere, not just at (1,1). Hence, uncorrelated Bernoulli random variables X and Y are also independent random variables.

Actually that's not an equivalent condition to Eq. (1). All you showed was that f_{X,Y}(1, 1) = f_X(1) f_Y(1).
gammer

Please consider replacing that image with your own equations, preferably ones that don't use overbars to denote complements. The overbars in the image are very hard to see.
Dilip Sarwate

@DilipSarwate No problem. Is it better, now?
Antoni Parellada

Thanks. Also, note that strictly speaking, you also need to show that A and B^c are independent events, since the factorization of the joint pmf into the product of the marginal pmfs must hold at all four points. Perhaps adding the sentence "Similarly, A and B^c are independent events" right after the proof that A^c and B are independent events will work.
Dilip Sarwate

@DilipSarwate Thank you very much for your help getting it right. The proof as it was before all the editing seemed self-explanatory, because of all the inherent symmetry, but it clearly couldn't be taken for granted. I am very appreciative of your assistance.
Antoni Parellada
Licensed under cc by-sa 3.0 with attribution required.