Wednesday, January 18, 2012

Independence

Whenever I read about "independence", it raises a red flag in my mind. If a paper contains a probabilistic analysis, bugs or even outright errors can often be hidden behind the word "independent". I am slow to believe claims of independence, and especially of conditional independence; that is where I have seen many bugs come up, in my own proofs and in others'. I am now skeptical of any unproved claim of independence.

Is the pro-Colbert super-PAC "Americans for a Better Tomorrow, Tomorrow" independent of Colbert? In real life we assume, barring evidence to the contrary, that it is not coordinated with him. But if that claim came up in a scientific paper, we would ask to see a proof.

8 comments:

  1. I had a lot of trouble figuring out what to say about independence when teaching elementary probability.

    Usually, textbooks say, "Events A and B are defined to be independent if Pr[A and B] = Pr[A]Pr[B]".

    That's fine and all, but 99.9% of the time when independence is used in proofs, the proofs look like this:

    "Let A be the event ... and let B be the event ... Now A and B are independent because [natural language explanation]. Therefore we can use the fact that Pr[A and B] = Pr[A]Pr[B]..."

    Obviously, this looks circular and invalid. Yet that's how mathematicians write.

    So you need to explain that there are techniques for proving that A and B are independent without appealing to the definition directly, so that the defining identity Pr[A and B] = Pr[A]Pr[B] can then be deduced and used.
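
    One standard lemma of this kind (a sketch in my own wording, not a quote from any textbook): if the experiment consists of two sub-experiments performed independently (formally, the sample space is a product space carrying the product measure), and A is determined by the outcome of the first sub-experiment alone while B is determined by the outcome of the second alone, then A and B are independent; Pr[A and B] factors into the two marginal probabilities by the very definition of the product measure. Natural-language justifications of the form "A and B are independent because they involve disjoint sets of coin flips" are usually invoking a lemma like this implicitly.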

  2. Forgive my ignorance, but I always argue (admittedly using English) that Pr[A|B] = Pr[A] to establish that A and B are independent. Is there a more rigorous method?

    Replies
    1. I like the definition Pr[A|B]=Pr[A] better than Pr[A and B]=Pr[A]Pr[B], even though it breaks symmetry (and requires Pr[B] > 0). It's the definition I usually use in proofs.

      When such a claim, argued in English, looks slightly dubious, my main step towards rigor is to get rid of probability: break everything down into elementary events, look at all the possibilities, and reduce the problem to counting them one by one. Tedious, but necessary (and sufficient) when I am suspicious.
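
      For instance, here is that kind of brute-force check written out in Python for a toy example of my own (two fair dice, with A = "the first die is even" and B = "the sum is 7"); the events and the code are only an illustration, not from any particular paper:

          from fractions import Fraction
          from itertools import product

          # The 36 equally likely elementary events of two fair dice.
          omega = list(product(range(1, 7), repeat=2))

          def prob(event):
              # Pr[event] = (number of elementary events in it) / 36.
              return Fraction(sum(1 for w in omega if event(w)), len(omega))

          def A(w): return w[0] % 2 == 0     # the first die is even
          def B(w): return w[0] + w[1] == 7  # the sum is 7

          lhs = prob(lambda w: A(w) and B(w))  # Pr[A and B] = 1/12
          rhs = prob(A) * prob(B)              # Pr[A] * Pr[B] = (1/2)*(1/6) = 1/12
          print(lhs == rhs)                    # True, so A and B are indeed independent

      Swapping in different events (or a different distribution on the elementary events) is a one-line change, which is what makes this kind of check cheap enough to actually carry out.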

    2. There are little lemmas that can sometimes be used towards actual proofs of independence.
      For instance, if X and Y are independent random variables and f is a function, then the random variables f(X) and f(Y) are independent (really!).
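
      A one-line sketch of why, in the discrete case (my own wording): for any values s and t, the event "f(X) = s" is just the event "X lands in the set of values that f sends to s", and likewise "f(Y) = t" is an event about Y alone; so independence of X and Y already gives Pr[f(X) = s and f(Y) = t] = Pr[f(X) = s] * Pr[f(Y) = t]. The same argument gives the more general statement that f(X) and g(Y) are independent for any two functions f and g.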

  3. Here's another potential way. If X_1, ..., X_n are bounded, real-valued random variables, then they are independent if and only if E[X_1^m_1 X_2^m_2 ... X_n^m_n] = E[X_1^m_1]*E[X_2^m_2]*...*E[X_n^m_n] for all non-negative integers m_1, ..., m_n. This follows from the Weierstrass approximation theorem.
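
    As a sanity check on the smallest case (a worked example of my own, not part of the claim above): if X and Y take only the values 0 and 1, then X^m = X and Y^m = Y for every m >= 1, so the condition boils down to E[XY] = E[X]*E[Y]. Since E[XY] = Pr[X=1 and Y=1] and E[X]*E[Y] = Pr[X=1]*Pr[Y=1], this is exactly the usual definition of independence applied to the events "X=1" and "Y=1", which for 0/1-valued variables is the same as independence of X and Y.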

    Replies
    1. Can you give an example where that would be the right way to present independence?

    2. I can't think of an example off-hand. But I remember finding this formulation useful in a real-life situation. It's often the case that expectations are easier to calculate than probabilities.

  4. Another belated example: if A, B, C, D are four mutually independent events, then A union B is independent of C union D. This can be generalized to intersections, complements, more than four events...
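
    A quick sketch of why, via complements (the calculation is mine): mutual independence is preserved when events are replaced by their complements, and not(A union B) = (not A) and (not B), so Pr[not(A union B) and not(C union D)] = Pr[not A]*Pr[not B]*Pr[not C]*Pr[not D] = (Pr[not A]*Pr[not B]) * (Pr[not C]*Pr[not D]) = Pr[not(A union B)] * Pr[not(C union D)]. Hence the complements of the two unions are independent, and two events are independent exactly when their complements are.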

