Independent Random Variables: Definition, Examples


You may want to read this article first: What is a Random Variable?

What are Independent Random Variables?

An independent random variable is a random variable that doesn’t have an effect on the other random variables in your experiment. In other words, it doesn’t affect the probability of another event happening. For example, let’s say you wanted to know the average weight of a bag of sugar so you randomly sample 50 bags from various grocery stores. You wouldn’t expect the weight of one bag to affect another, so the variables are independent. The opposite is a dependent random variable, which does affect probabilities of other random variables.

Another way of looking at it

Knowing the value of X, an independent random variable, doesn’t help us to predict a value for Y and vice versa.

A random variable is a variable associated with an experiment, like n tosses of a coin or d draws of cards. From a more technical standpoint, two random variables are independent if either of the following statements is true:

  1. P(x|y) = P(x), for all values of X and Y.
  2. P(x∩y) = P(x) * P(y), for all values of X and Y.

The two are equivalent.

The first statement, P(x|y) = P(x), for all values of X and Y, says "the probability of x, given y, equals the probability of x." In other words, knowing y makes no difference to the probability of x — it's still going to be P(x) no matter what the value of y is.

You may recognize the second statement as the multiplication rule for independent events, which states that if you have two independent events, you multiply their probabilities together. For example, let's say your probability of winning a prize in bingo is 1/1000 and your probability of finding a parking space right next to the bingo hall is 1/20. Your probability of finding a parking space next to the bingo hall and winning at bingo is 1/1000 * 1/20 = 1/20,000.
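The multiplication above can be checked with exact fractions. This is a minimal sketch using the hypothetical bingo and parking probabilities from the example:

```python
from fractions import Fraction

# Hypothetical probabilities from the bingo example above.
p_win = Fraction(1, 1000)    # probability of winning a prize in bingo
p_parking = Fraction(1, 20)  # probability of a parking space next to the hall

# For independent events, the joint probability is the product.
p_both = p_win * p_parking
print(p_both)  # 1/20000
```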

How to Tell if Random Variables are Independent

1. The Intuitive Explanation

You can tell if two random variables are independent by looking at their individual probabilities. If those probabilities don't change when the events occur together, then the variables are independent. Another way of saying this is that if the two variables are correlated, then they are not independent.

As a simple example, let’s say you have two random variables X and Y. X can equal 0, 1, or 2 and Y can equal 0 or 1. First, let’s take a look at their probabilities:

  • The probability that X = 0 is 20%: Or, more formally — P(X = 0) = 0.2.
  • The probability that X = 1 is 30%: P(X = 1) = 0.3.
  • The probability that X = 2 is 50%: P(X = 2) = 0.5.
  • The probability that Y is 0 is 40%: P(Y = 0) = 0.4.
  • The probability that Y is 1 is 60%: P(Y = 1) = 0.6.

But what happens to the probabilities when the two happen at the same time?

For each possible combination of X, given that Y has happened (in notation, that’s (X|Y)), the probabilities are:

  • P(x = 0 | y = 0) = 0.2;
  • P(x = 1 | y = 0) = 0.3;
  • P(x = 2 | y = 0) = 0.5;
  • P(x = 0 | y = 1) = 0.2;
  • P(x = 1 | y = 1) = 0.3;
  • P(x = 2 | y = 1) = 0.5;

The changing y-values have no effect on the x probabilities. Assuming the reverse is also true (that changing x-values would have no effect on the y-values), these are independent random variables.
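The check above can be automated: build the joint probabilities P(X = x, Y = y) from the conditional table, then compare each one against the product of the marginals. This is a sketch using the probabilities listed above:

```python
import math

# Marginal probabilities from the bullet lists above.
p_x = {0: 0.2, 1: 0.3, 2: 0.5}
p_y = {0: 0.4, 1: 0.6}

# Conditional probabilities P(X = x | Y = y) from the list above.
p_x_given_y = {(0, 0): 0.2, (1, 0): 0.3, (2, 0): 0.5,
               (0, 1): 0.2, (1, 1): 0.3, (2, 1): 0.5}

# Joint probabilities: P(X = x, Y = y) = P(X = x | Y = y) * P(Y = y).
joint = {(x, y): p_x_given_y[(x, y)] * p_y[y] for (x, y) in p_x_given_y}

# Independence check: P(X = x, Y = y) == P(X = x) * P(Y = y) for every pair.
independent = all(math.isclose(joint[(x, y)], p_x[x] * p_y[y])
                  for (x, y) in joint)
print(independent)  # True
```

If even one (x, y) pair failed the product check, `independent` would come back False and the variables would be dependent.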

2. The More Formal Definition (Discrete Variables)

Independent events are defined as meeting the following condition:

  • P(x|y) = P(x), for all values of X and Y.

Independent random variables that are also discrete variables can be described in a similar way:

  • P(X = x, Y = y) = P(X = x) P(Y = y), for all values of x and y.

Extending this to a set of n discrete random variables, we can say:

  • P(X1 = x1, X2 = x2, …, Xn = xn) = P(X1 = x1) P(X2 = x2)…P(Xn = xn), for all values of x1, x2, …, xn.
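The n-variable product rule can be illustrated with a small hypothetical example: three independent fair coin flips, where every joint probability is the product of the three marginals.

```python
from itertools import product

# Hypothetical example: three independent fair coin flips (0 = tails, 1 = heads).
p = {0: 0.5, 1: 0.5}  # marginal distribution of each flip

# Joint probability of each outcome = product of the three marginals,
# per the n-variable independence condition above.
joint = {outcome: p[outcome[0]] * p[outcome[1]] * p[outcome[2]]
         for outcome in product(p, repeat=3)}

print(joint[(1, 1, 1)])      # 0.125, i.e. (1/2)^3
print(sum(joint.values()))   # 1.0 — the joint probabilities sum to one
```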
