Conditional Probability and Independence

We have learned how to compute probabilities of events as well as of their unions and intersections. Now, let us consider the following situation.

At the commencement of the academic year, suppose the probability that Raju will score a distinction in Statistics at the annual examination is 0.5. At the end of the first term, Raju scores 80% marks in Statistics. The question is whether this information will change (increase or decrease) Raju's probability of scoring a distinction at the annual examination.

Such questions are answered with the help of conditional probability and independence. If the two events are independent, the probability 0.5 will remain as it is. On the other hand, if they are dependent, a conditional probability is to be calculated: the information that Raju has scored 80% marks at the terminal examination will affect his probability of scoring a distinction at the annual examination.

We introduce first the concept of independence of events.

Independence of Two Events

Two events are said to be independent if the occurrence or non-occurrence of one does not affect the occurrence of the other. This gives rise to the following definition of independent events.

Definition: Two events A and B defined on a sample space Ω are said to be independent if and only if

P(A ∩ B) = P(A) · P(B)

Illustration 1: Consider the experiment of rolling a fair die.

Let

Ω = {1, 2, 3, 4, 5, 6}

A = occurrence of an even number = {2, 4, 6}

B = occurrence of a number greater than 4 = {5, 6}

Note that

P(A) = 3/6 = 1/2, P(B) = 2/6 = 1/3

and

P(A ∩ B) = P({6}) = 1/6 = P(A) · P(B)

By definition, A and B are independent events.

Remark 1: Note that here A ∩ B ≠ ∅; therefore A and B are not mutually exclusive, although A and B are independent.
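Illustration 1 can be verified by exact enumeration. The following is a minimal sketch (the helper `prob` is ours, not from the text) that computes the probabilities on the equiprobable sample space of a fair die:

```python
from fractions import Fraction

# Equiprobable sample space of a fair die, and the two events of
# Illustration 1: A = even number, B = number greater than 4.
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {5, 6}

def prob(event):
    # Probability of an event on an equiprobable finite sample space.
    return Fraction(len(event & omega), len(omega))

p_A, p_B = prob(A), prob(B)
p_AB = prob(A & B)

print(p_A, p_B, p_AB)      # 1/2 1/3 1/6
print(p_AB == p_A * p_B)   # True: A and B are independent
print(len(A & B) > 0)      # True: yet A and B are not mutually exclusive
```

The last line makes Remark 1 concrete: the product condition holds even though A ∩ B is non-empty.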


Illustration 2: Consider the experiment of tossing three fair coins simultaneously.

Ω = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}

A = getting two heads = {HHT, HTH, THH}

B = getting two tails = {HTT, THT, TTH}

Observe that P(A) = 3/8 and P(B) = 3/8.

Since A ∩ B = ∅,

P(A ∩ B) = 0 ≠ P(A) · P(B)

Therefore A and B are not independent.

Remark 2: In the above example the events A and B are mutually exclusive (A ∩ B = ∅) but they are not independent.

Thus from the two illustrations it is clear that

(i) independence does not imply mutual exclusiveness, and (ii) mutual exclusiveness does not imply independence. In fact, what we have are the following results.

Result 1: If A and B are independent events with P (A) and P(B) both non-zero, then A and B cannot be mutually exclusive.

Proof: Since A and B are independent,

P(A ∩ B) = P(A) · P(B) ≠ 0, since P(A) ≠ 0 and P(B) ≠ 0.

Therefore A ∩ B ≠ ∅, i.e. A and B cannot be mutually exclusive.

Result 2 If A and B are mutually exclusive with P(A) and P(B) both non-zero, then A and B cannot be independent; i.e. A and B are dependent.

Proof: Since A and B are mutually exclusive,

A ∩ B = ∅, so P(A ∩ B) = 0.

However, P(A) · P(B) ≠ 0, since P(A) ≠ 0 and P(B) ≠ 0.

Therefore P(A ∩ B) ≠ P(A) · P(B)

⇒ A and B are not independent. In other words, A and B are dependent events.

Fig. 3.1 depicts the relationship between independent events and mutually exclusive events.

[Fig. 3.1: diagram with regions labelled Dependent Events, Independent Events and Mutually Exclusive]

Remark 3: If one of the events A and B is an impossible event (∅), then A and B are independent as well as mutually exclusive. For example, suppose A = ∅; then A ∩ B, being a subset of A, is ∅.

Therefore A and B are mutually exclusive.

Also,

P(A ∩ B) = 0 = P(A) · P(B), since P(A) = 0.

Therefore A and B are also independent.

We shall now consider some of the important implications of independence of two events.

Theorem 1: Suppose A and B are two events defined on a sample space Ω. If A and B are independent, then

(i) A and B' are independent.

(ii) A' and B are independent.

(iii) A' and B' are independent.

Proof: (i) Consider

P(A ∩ B') = P(A) − P(A ∩ B)

= P(A) − P(A) · P(B)     ... by independence of A and B

= P(A) [1 − P(B)]

= P(A) · P(B')

Therefore A and B' are independent.

(ii) By interchanging the roles of A and B in (i), it can be proved that A' and B are independent.

(iii) Consider

P(A' ∩ B') = P((A ∪ B)')     ... by De Morgan's law

= 1 − P(A ∪ B)

= 1 − [P(A) + P(B) − P(A ∩ B)]

= 1 − P(A) − P(B) + P(A) · P(B)     ... by independence of A and B

= [1 − P(A)] [1 − P(B)]

= P(A') · P(B')

Therefore A' and B' are independent.
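Theorem 1 can be checked numerically. Reusing the die events of Illustration 1 (a sketch; the loop and helper are ours), all four pairs (A, B), (A, B'), (A', B), (A', B') satisfy the product condition:

```python
from fractions import Fraction

# Die sample space and the independent events of Illustration 1.
omega = frozenset(range(1, 7))
A = frozenset({2, 4, 6})   # even number
B = frozenset({5, 6})      # number greater than 4

def prob(event):
    return Fraction(len(event), len(omega))

Ac, Bc = omega - A, omega - B   # complements A', B'

# Theorem 1: independence of (A, B) carries over to the complements.
for X, Y in [(A, B), (A, Bc), (Ac, B), (Ac, Bc)]:
    assert prob(X & Y) == prob(X) * prob(Y)
print("all four pairs independent")
```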

Independence of Three Events

Extension of the definition of independence of two events to the case of three events is not straightforward.

Definition 1: Mutual independence of three events A, B, C (complete independence of A, B, C). (April 2015, 2014)

Let A, B, C be three events defined on Ω. The three events A, B, C are said to be mutually independent or completely independent if and only if the following conditions are satisfied:

(i) P(A ∩ B) = P(A) · P(B)

(ii) P(B ∩ C) = P(B) · P(C)

(iii) P(A ∩ C) = P(A) · P(C)

(iv) P(A ∩ B ∩ C) = P(A) · P(B) · P(C)

Definition 2: Pairwise independence of three events. The three events A, B, C defined on Ω are said to be pairwise independent if and only if the following conditions are satisfied:

(i) P(A ∩ B) = P(A) · P(B)

(ii) P(B ∩ C) = P(B) · P(C)

(iii) P(A ∩ C) = P(A) · P(C)

Remark: Note that the definition of mutual independence needs the condition P(A ∩ B ∩ C) = P(A) · P(B) · P(C) in addition to the pairwise independence of A, B and C. Therefore,

A, B, C mutually independent ⇒ A, B, C pairwise independent.

The converse implication is not true, as can be seen from a counterexample.
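A classic counterexample (a sketch of our own, not taken from the text) uses two fair coin tosses: let A = first coin shows H, B = second coin shows H, C = both coins show the same face. Each pair satisfies the product condition, but the triple condition fails:

```python
from fractions import Fraction
from itertools import product

# Equiprobable sample space of two fair coin tosses.
omega = list(product("HT", repeat=2))

def prob(event):
    # event is a predicate on outcomes (tuples like ('H', 'T'))
    return Fraction(len([w for w in omega if event(w)]), len(omega))

A = lambda w: w[0] == "H"      # first coin heads
B = lambda w: w[1] == "H"      # second coin heads
C = lambda w: w[0] == w[1]     # both coins match

both = lambda e, f: (lambda w: e(w) and f(w))
all3 = lambda w: A(w) and B(w) and C(w)

# Pairwise independence: each product condition holds (all equal 1/4).
assert prob(both(A, B)) == prob(A) * prob(B)
assert prob(both(B, C)) == prob(B) * prob(C)
assert prob(both(A, C)) == prob(A) * prob(C)

# But mutual independence fails: P(A ∩ B ∩ C) = 1/4, not 1/8.
assert prob(all3) != prob(A) * prob(B) * prob(C)
print("pairwise independent, but not mutually independent")
```

The key observation is that A ∩ B ∩ C = A ∩ B = {HH}, so knowing A and B together determines C completely.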

Conditional Probability

Consider a family having two children. Suppose we wish to find the probability of the event A that both the children are males.

Ω = {MM, MF, FM, FF} and

A = {MM}

Assuming the sample space to be equiprobable, P(A) = 1/4. Now suppose it is already known that at least one of the children is a boy. Then the sample space will not contain FF and reduces to a new sample space, say, B = {MM, MF, FM}.

As there are only 3 elements in the reduced sample space, the probability of A with respect to the reduced sample space B will be 1/3. Thus, the knowledge about the occurrence of some other event has altered the probability of the event under consideration. The above probability is called the conditional probability of A given B and is denoted by P(A|B). Observe that, initially,

P(A) = Number of elements in A / Number of elements in Ω = n(A)/n(Ω), say.

Then,

P(A|B) = Number of elements in A ∩ B / Number of elements in B

= n(A ∩ B)/n(B)

= [n(A ∩ B)/n(Ω)] / [n(B)/n(Ω)]

Therefore,

P(A|B) = P(A ∩ B)/P(B),  P(B) ≠ 0

Remark 1: This expression, though derived for events on an equiprobable finite sample space, remains valid for sample spaces that are not equiprobable, including countably infinite and continuous sample spaces. Remark 2: The probability P(A) is called the unconditional probability of A.
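The two-children calculation above can be sketched directly from the formula P(A|B) = P(A ∩ B)/P(B) (the helpers `prob` and `cond_prob` are ours, for illustration):

```python
from fractions import Fraction

# Equiprobable sample space of a two-child family.
omega = {"MM", "MF", "FM", "FF"}
A = {"MM"}                  # both children are males
B = {"MM", "MF", "FM"}      # at least one child is a boy

def prob(event):
    return Fraction(len(event & omega), len(omega))

def cond_prob(a, b):
    # P(a|b) = P(a ∩ b) / P(b), defined only when P(b) > 0
    return prob(a & b) / prob(b)

print(prob(A))          # 1/4  (unconditional probability)
print(cond_prob(A, B))  # 1/3  (conditional, given at least one boy)
```

The conditional value 1/3 agrees with counting on the reduced sample space B.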

Definition: Conditional probability of A given B: Suppose A and B are two events defined on a sample space Ω; then the conditional probability of A given B, denoted by P(A|B), is defined as

P(A|B) = P(A ∩ B)/P(B),  P(B) > 0

Similarly, the conditional probability of B given A is

P(B|A) = P(A ∩ B)/P(A),  P(A) > 0

Remark: In the above example, if we calculate P(B|A), i.e. the conditional probability of B given the event A, it will be

P(B|A) = P(A ∩ B)/P(A) = (1/4)/(1/4) = 1

Thus, in general,

P(A|B) ≠ P(B|A)

Theorem 2: Conditional probability satisfies all the axioms of unconditional probability, viz., for an event B defined on Ω with P(B) > 0:

A-1: 0 ≤ P(A|B) ≤ 1 for any A ⊂ Ω

A-2: P(Ω|B) = 1

A-3: If A and C are mutually exclusive events defined on Ω, then

P(A ∪ C|B) = P(A|B) + P(C|B)

Proof: A-1: P(A|B) = P(A ∩ B)/P(B) ≥ 0, being a ratio of non-negative numbers with P(B) > 0.

Also, A ∩ B ⊂ B, so P(A ∩ B) ≤ P(B), and hence

P(A|B) = P(A ∩ B)/P(B) ≤ 1

A-2: P(Ω|B) = P(B ∩ Ω)/P(B) = P(B)/P(B) = 1, since B ∩ Ω = B.

A-3: A and C are mutually exclusive events, so A ∩ C = ∅, and hence A ∩ B and C ∩ B are disjoint (see Fig. 3.2).

P(A ∪ C|B) = P[(A ∪ C) ∩ B]/P(B)

= P[(A ∩ B) ∪ (C ∩ B)]/P(B)

= P(A ∩ B)/P(B) + P(C ∩ B)/P(B)

= P(A|B) + P(C|B)

[Fig. 3.2: Venn diagram showing the disjoint regions A ∩ B and C ∩ B]

In what follows, we establish some further properties of conditional probability.

Theorem 3: If A, B, C are any three events defined on Ω with P(B) > 0, then

P(A ∪ C|B) = P(A|B) + P(C|B) − P(A ∩ C|B)

Proof: Recall that

(A ∪ C) ∩ B = (A ∩ B) ∪ (C ∩ B)

Therefore,

P(A ∪ C|B) = P[(A ∪ C) ∩ B]/P(B)

= P[(A ∩ B) ∪ (C ∩ B)]/P(B)

= {P(A ∩ B) + P(C ∩ B) − P[(A ∩ B) ∩ (C ∩ B)]}/P(B)     ... by the addition theorem

= P(A|B) + P(C|B) − P(A ∩ C ∩ B)/P(B)

= P(A|B) + P(C|B) − P(A ∩ C|B)

Theorem 4: If A and B are events defined on Ω with P(B) > 0, then

P(A'|B) = 1 − P(A|B)

Proof: Consider

P(A'|B) = P(A' ∩ B)/P(B)

= [P(B) − P(A ∩ B)]/P(B)

= 1 − P(A ∩ B)/P(B)

= 1 − P(A|B)

[Fig. 3.3: Venn diagram of A and B in Ω with the region A' ∩ B shaded]

Comparison of Magnitudes of Conditional and Unconditional Probability

Let A and B be two events defined on Ω with P(A) > 0. We wish to compare the magnitudes of P(A|B) and P(A) in each of the following cases.

(i) A and B are mutually exclusive (Fig. 3.4).

P(A|B) = P(A ∩ B)/P(B) = P(∅)/P(B) = 0

Therefore P(A|B) < P(A).

(ii) A ⊂ B (Fig. 3.5).

P(A|B) = P(A ∩ B)/P(B) = P(A)/P(B) ≥ P(A), since P(B) ≤ 1.

Therefore P(A|B) ≥ P(A).

(iii) B ⊂ A (Fig. 3.6).

P(A|B) = P(A ∩ B)/P(B) = P(B)/P(B) = 1

Therefore P(A|B) ≥ P(A).

(iv) A ∩ B ≠ ∅, with none of the above special relationships (Fig. 3.7).

P(A|B) = P(A ∩ B)/P(B), which may or may not be bigger than P(A).

(v) A and B are independent.

P(A|B) = P(A ∩ B)/P(B) = P(A) · P(B)/P(B)     ... by independence

= P(A)

Remark: P(A|B) = P(A) when A and B are independent. The above relation is also stated as a definition of independent events.

Definition: Events A and B defined on a sample space are called independent iff

P(A|B) = P(A), P(B) > 0, or equivalently P(B|A) = P(B), P(A) > 0.

The statement P(A|B) = P(A) means that the information that 'event B has occurred' does not have any effect on the probability of A. In other words, whenever the occurrence or non-occurrence of one event does not have any impact on the occurrence of the other event, the two events are said to be independent. Therefore, in practice, when we feel that two events are not related in the above sense, we assume independence and calculate the probability of their joint occurrence by taking the product of the individual probabilities, i.e. we use P(A ∩ B) = P(A) · P(B). Whenever the two events are not independent, they are said to be dependent. In that case we cannot use P(A ∩ B) = P(A) · P(B), but have to use the theorem of compound probability, which we study next.

 Multiplication Theorem

(Theorem of Compound Probability)

Theorem 5: Let A and B be any two events defined on a sample space Ω. Then,

P(A ∩ B) = P(A|B) · P(B) = P(B|A) · P(A)

(April 2014)

Proof: From the definition of conditional probability,

P(A|B) = P(A ∩ B)/P(B)

Therefore,

P(A ∩ B) = P(A|B) · P(B)

Similarly, P(A ∩ B) = P(B|A) · P(A)

Remark 1: The theorem holds even if P(B) = 0, provided we interpret P(A|B) · P(B) = 0 when P(B) = 0. This is so because A ∩ B ⊂ B, so P(B) = 0 implies P(A ∩ B) = 0.

Remark 2 The multiplication theorem is used for obtaining probability of simultaneous occurrence of two events, whenever conditional probability is given. This will be clear from the examples we discuss in this section.

Example: A lot contains 12 items of which 4 are defective. Two items are drawn at random from the lot, one after the other (without replacement). Find the probability that both items are non-defective.

Solution:

Let,

A = event that the item drawn first is non-defective (ND),

B = event that the item drawn at the second draw is non-defective.

The lot of 12 items contains 4 defective (D) and 8 non-defective (ND) items, so

P(A) = 8/12 = 2/3

Since the item drawn at the first draw is kept aside, 11 items remain, of which 7 are ND. Hence the probability that the second item is ND, given that the first item is ND, is

P(B|A) = 7/11

Therefore,

P(both items non-defective) = P(A ∩ B)

= P(B|A) · P(A)     ... by Theorem 5

= (7/11) × (2/3) = 14/33
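The answer 14/33 can be cross-checked exactly, both via the multiplication theorem and by brute-force enumeration of ordered draws (a sketch; the enumeration code is ours):

```python
from fractions import Fraction
from itertools import permutations

# Multiplication theorem: P(A ∩ B) = P(A) · P(B|A)
p = Fraction(8, 12) * Fraction(7, 11)
assert p == Fraction(14, 33)

# Cross-check: enumerate all ordered pairs of distinct items drawn
# from a lot of 4 defective ("D") and 8 non-defective ("N") items.
items = ["D"] * 4 + ["N"] * 8
pairs = list(permutations(range(12), 2))          # 12 · 11 = 132 pairs
fav = [(i, j) for i, j in pairs
       if items[i] == "N" and items[j] == "N"]    # 8 · 7 = 56 pairs
assert Fraction(len(fav), len(pairs)) == Fraction(14, 33)

print(p)   # 14/33
```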

The multiplication theorem can be extended to more than two events. Here we give the theorem for three events A, B, C.

Theorem 6: If A, B, C are any three events defined on a sample space, then

P(A ∩ B ∩ C) = P(A) · P(B|A) · P(C|A ∩ B)

Proof: Let us call the event A ∩ B as D. Then

P(A ∩ B ∩ C) = P(D ∩ C)

= P(D) · P(C|D)

= P(A ∩ B) · P(C|A ∩ B)

= P(A) · P(B|A) · P(C|A ∩ B)
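Theorem 6 can be sketched numerically by extending the earlier lot example to three draws without replacement (the three-draw scenario is our illustration, not from the text):

```python
from fractions import Fraction

# Lot of 12 items, 8 non-defective (ND). Draw three without replacement.
# Chain rule: P(A ∩ B ∩ C) = P(A) · P(B|A) · P(C|A ∩ B)
p_A    = Fraction(8, 12)   # first draw ND
p_B_A  = Fraction(7, 11)   # second ND, given one ND already removed
p_C_AB = Fraction(6, 10)   # third ND, given two ND already removed

p_all_ND = p_A * p_B_A * p_C_AB
print(p_all_ND)   # 14/55
```

Each conditional factor simply updates the counts after the previous draw, which is exactly what the chained conditioning in the theorem expresses.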

Posterior Probabilities

One of the important applications of conditional probability and multiplication theorem is computation of posterior probabilities on the basis of the information supplied by the experiment.

For example, consider the following situation.

A certain factory has three machines M1, M2 and M3. From past experience it is observed that the probabilities of producing defective articles by M1, M2, M3 are 0.01, 0.03 and 0.05 respectively. These probabilities are called prior probabilities. Now suppose an item is drawn at random at the end of the day's production, and this item is found to be defective. Then, naturally, the quality control engineer will be interested in knowing the probabilities that the defective item was produced by M1, M2 or M3, since this enables him to maintain the quality of the product. These probabilities, which are computed after the performance of the experiment using the information on its outcome, are called posterior probabilities.

These calculations are facilitated by the famous Bayes' theorem. Bayes, a British mathematician, postulated this theorem; it was published in 1763. Bayes' theorem is of great help to business and management executives in arriving at valid decisions.

Let us first introduce ourselves to the concept of partition of sample space as a prerequisite to the Bayes' theorem.

Partition of a Sample Space

A collection of mutually exclusive and exhaustive events is called a partition of the sample space. More specifically, the events A1, A2, ..., An defined on a sample space Ω are said to form a partition of Ω if and only if

(i) Ai ∩ Aj = ∅ for all i and j, i ≠ j

(ii) A1 ∪ A2 ∪ ... ∪ An = Ω

Illustration: Let

Ω = {1, 2, 3, 4, 5, 6}

Let,

A = {2, 4, 6} and B = {1, 3, 5}

Then,

A ∩ B = ∅ and A ∪ B = Ω

Thus A and B form a partition of Ω.

Here note that B = A'.

Bayes' Theorem

Theorem 7: Suppose events A1, A2, ..., An form a partition of the sample space Ω of a random experiment. Suppose B is any other event defined on Ω with P(B) > 0. Then,

P(Ai|B) = P(Ai) · P(B|Ai) / Σ P(Aj) · P(B|Aj),  for all i = 1, 2, ..., n,

where the sum in the denominator runs over j = 1, 2, ..., n.

Proof: Events A1, A2, ..., An form a partition of Ω, so

A1 ∪ A2 ∪ ... ∪ An = Ω

Consider,

B = Ω ∩ B

= (A1 ∪ A2 ∪ ... ∪ An) ∩ B

= (A1 ∩ B) ∪ (A2 ∩ B) ∪ ... ∪ (An ∩ B)     ... by the distributive property

Now (Aj ∩ B), j = 1, 2, ..., n are mutually exclusive events.
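Bayes' theorem can be sketched on the three-machines situation described earlier. The defect rates 0.01, 0.03, 0.05 are from the text, but the production shares below are assumed (equal thirds) purely for illustration, since the text does not give them:

```python
from fractions import Fraction

# ASSUMED priors: each machine produces one third of the output.
prior  = {"M1": Fraction(1, 3), "M2": Fraction(1, 3), "M3": Fraction(1, 3)}
# Defect rates from the text: P(defective | machine).
defect = {"M1": Fraction(1, 100), "M2": Fraction(3, 100), "M3": Fraction(5, 100)}

# Denominator of Bayes' theorem: total probability of a defective item.
p_D = sum(prior[m] * defect[m] for m in prior)

# Posterior: P(M_i | defective) = P(M_i) · P(D | M_i) / P(D)
posterior = {m: prior[m] * defect[m] / p_D for m in prior}
for m, p in posterior.items():
    print(m, p)   # M1 1/9, M2 1/3, M3 5/9
```

Under these assumed priors, M3 is the most likely source of the defective item, matching the intuition that the machine with the highest defect rate should receive the largest posterior probability.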



