Probability

 

The theory of probability has its origin in the seventeenth century. The Chevalier de Méré, a French nobleman who was fond of gambling, consulted the great mathematicians Pascal and Fermat for solving his gambling problems. These two scientists tried to solve the problems using probability concepts, which laid the foundations of probability theory.

We have learned that a random experiment has a number of possible outcomes, and that the outcome cannot be predicted before performing the experiment. However, we often find ourselves making statements such as:

(i) My chances of getting first class are good.

(ii) It may rain today.

(iii) Possibility of Indians entering the World Cup Cricket finals is high.

Probability theory tries to measure the possibility of an outcome in numeric terms. Thus the probability of an outcome is a numeric measure of the possibility, or chance, of the occurrence of that outcome. Unless we have numbers measuring the chances of various outcomes, we cannot analyze them mathematically.

There are several definitions of 'probability'. We shall consider only (i) the classical definition and (ii) the axiomatic approach.

Classical Definition of Probability

The classical definition assumes that the sample space under consideration is equiprobable.

Definition (Equiprobable Sample Space): Consider a random experiment which has n mutually exclusive outcomes. The sample space Ω is said to be an equiprobable sample space if all the outcomes are equally likely, that is, if there is no reason to say that their chances of occurrence are different. In other words, the probability of occurrence of every outcome is the same.

For example, when we toss a fair coin, the probability of getting 'head' is the same as that of getting 'tail'. Therefore, if the total probability is assumed to be equal to 1,

P(Head) = P(Tail) = 1/2

Similarly, if Ω = {ω₁, ω₂, ..., ωᵢ, ..., ωₙ}, then 'Ω is equiprobable' would mean that the probability of each ωᵢ is 1/n, for 1 ≤ i ≤ n.

If we denote the probability of ωᵢ by P(ωᵢ), then we have

P(ω₁) = P(ω₂) = ... = P(ωᵢ) = ... = P(ωₙ) = 1/n

Definition (Probability of an event): If a random experiment results in n mutually exclusive and equally likely outcomes, out of which m are favourable to the event A, then the probability of occurrence of the event A is denoted by P(A) and is given by

P(A) = m/n,   0 ≤ m ≤ n

In other words,

P(A) = (Number of elements belonging to A) / (Total number of elements in the sample space)

Remarks:

1. Note that for any event A, 0 ≤ P(A) ≤ 1, since 0 ≤ m ≤ n. Thus, the probability of any event always lies between 0 and 1.

2. The classical definition of probability does not require the performance of the experiment. Probability is obtained using logical reasoning without conducting the experiment.

3. n is the total number of mutually exclusive, equally likely and exhaustive outcomes in Ω.
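The m/n rule can be sketched as a small counting function. This is an illustrative helper (the name `classical_probability` and the die example are not from the text); `Fraction` keeps the answer exact:

```python
from fractions import Fraction

def classical_probability(favourable, sample_space):
    """P(A) = m/n for an equiprobable sample space."""
    m = len(favourable & sample_space)  # outcomes favourable to A
    n = len(sample_space)               # total equally likely outcomes
    return Fraction(m, n)

# Example: rolling a fair die, A = "an even number appears"
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
print(classical_probability(A, omega))  # 1/2
```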


Limitations of Classical Definition of Probability

(i) The classical definition assumes that all outcomes of the experiment under consideration are equally likely. This is not always the case. For instance, the probabilities of passing and failing an examination are not the same. In such cases, probabilities of events cannot be calculated using the classical definition.

(ii) Sometimes n, the total number of possible outcomes, is infinite. For example, for the experiment of tossing a coin until a head appears, the sample space is Ω = {H, TH, TTH, TTTH, ...}. Here also the classical definition fails.

(iii) If the actual value of n is not known, then probabilities cannot be computed using the classical definition. For example, in the experiment of capturing fish from a pond, the total number of fish in the pond is not known. Hence, we cannot find the probability of the concerned events.

Due to these drawbacks, the classical definition of probability is used only in limited situations. The Russian mathematician A. N. Kolmogorov formulated the axiomatic approach to modern probability theory in 1933. This approach begins with certain notions and axioms, from which the further theory is developed using logical reasoning. For finite sample spaces, the axiomatic approach reduces to the probability assignment approach, which we consider first.

Probability Model (Probability Assignment Approach)

Let Ω be a finite sample space containing the points ω₁, ω₂, ..., ωₙ. That is,

Ω = {ω₁, ω₂, ..., ωₙ}

Assign a real number P{ωᵢ} to each ωᵢ ∈ Ω such that

(i) 0 ≤ P{ωᵢ} ≤ 1, for i = 1, 2, ..., n, and

(ii) P{ω₁} + P{ω₂} + ... + P{ωₙ} = 1, i.e. Σ P{ωᵢ} = 1, the sum taken over i = 1 to n.

P{ωᵢ} is called the probability of the elementary event {ωᵢ}. In other words, P{ωᵢ} is the probability of occurrence of ωᵢ in a single trial of the experiment.

Probability model: Suppose Ω = {ω₁, ω₂, ...} is a sample space; then {(ωᵢ, P{ωᵢ}), i = 1, 2, ...} is a probability model if

P{ωᵢ} ≥ 0 for every i, and Σ P{ωᵢ} = 1.
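The two conditions of a probability model can be checked mechanically. A minimal sketch, assuming the assignment is stored as a dictionary mapping each outcome to its probability (the helper name and the biased-coin numbers are illustrative):

```python
def is_probability_model(model, tol=1e-9):
    """Check that an assignment {outcome: probability} is a valid
    probability model: every P{w} >= 0 and the total mass is 1."""
    probs = model.values()
    return all(p >= 0 for p in probs) and abs(sum(probs) - 1) < tol

# Example: a (hypothetical) biased coin
print(is_probability_model({"H": 0.6, "T": 0.4}))  # True
print(is_probability_model({"H": 0.5}))            # False: mass sums to 0.5
```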

 Axiomatic Approach (Modern Approach) to Probability

As stated earlier, we shall now introduce ourselves to the axiomatic approach given by A. N. Kolmogorov in 1933. An axiom is a statement which is accepted without proof. Based on these axioms, the further theory is developed.

Axioms of probability: Let Ω be the sample space of a random experiment and let A be any event of Ω. The probability of A, denoted by P(A), is defined as a real-valued function on the events of Ω which satisfies the following axioms.

Axiom 1: P(A) ≥ 0

Axiom 2: P(Ω) = 1


Axiom 3: If A₁, A₂, ..., Aₙ are any mutually exclusive events of Ω, then

P(A₁ ∪ A₂ ∪ ... ∪ Aₙ) = P(A₁) + P(A₂) + ... + P(Aₙ)

In particular, if A and B are two mutually exclusive (disjoint) events, then

P(AUB) = P(A) + P(B)

Remark: The above definition applies to countably infinite as well as uncountably infinite sample spaces. These types of sample spaces are beyond the scope of this book. We shall always consider a finite sample space like

Ω = {ω₁, ω₂, ..., ωₙ}

Important Theorems on Probability

We now learn some fundamental theorems on probability. Proofs of these theorems are based on the axioms given in 2.5. Application of these results makes the computation of probabilities of complex events very easy.

Theorem 1: P(A') = 1 − P(A), where A' is the complement of A.

Proof: Note that, for any event A, A and A' are mutually exclusive events. Also,

A ∩ A' = φ and A ∪ A' = Ω

[Fig. 2.1]

Therefore, using Axiom 3,

P(A ∪ A') = P(A) + P(A')

∴ P(Ω) = P(A) + P(A')

∴ 1 = P(A) + P(A')   ... since P(Ω) = 1, by Axiom 2

Hence, P(A') = 1 − P(A)
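Theorem 1 can be spot-checked numerically on a small equiprobable space. This is a sketch; the fair-die space and the event A are illustrative choices, not from the text:

```python
from fractions import Fraction

# Verify Theorem 1, P(A') = 1 - P(A), by direct counting on a fair die.
omega = set(range(1, 7))        # equiprobable sample space
A = {1, 2}                      # event "1 or 2 appears"
A_complement = omega - A        # A' = Ω \ A

P = lambda E: Fraction(len(E), len(omega))
print(P(A_complement) == 1 - P(A))  # True
```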

Theorem 2: P(φ) = 0. That is, the probability of an impossible event is zero.

Proof: We know that φ' = Ω.

∴ P(φ) = 1 − P(φ')   ... by Theorem 1
        = 1 − P(Ω)
        = 1 − 1   ... by Axiom 2
        = 0

Theorem 3: For any event A of Ω, 0 ≤ P(A) ≤ 1.

Proof:

(i) P(A) ≥ 0, by Axiom 1.

(ii) Also, if A' is the complement of A, then

P(A') ≥ 0   ... by Axiom 1

∴ 1 − P(A) ≥ 0   ... by Theorem 1

∴ P(A) ≤ 1

Hence, 0 ≤ P(A) ≤ 1.

Theorem 4: If A ⊆ B, then P(A) ≤ P(B).

Proof: Observe Fig. 2.2. The event B is composed of the two disjoint events A and A' ∩ B,

i.e. B = A ∪ (A' ∩ B)

[Fig. 2.2]

Using Axiom 3,

P(B) = P(A) + P(A' ∩ B)

Now, P(A' ∩ B) ≥ 0, by Axiom 1.

∴ P(B) ≥ P(A)

Theorem 5: Addition theorem of probability (Theorem of total probability)

If A and B are any two events defined on Ω, then (April 12, 14)

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Proof: Since A and B are any two events, we assume that they are not disjoint. For such a general case the Venn diagram is as in Fig. 2.3. Note that the event A ∪ B is composed of the three mutually exclusive events A ∩ B', A ∩ B and A' ∩ B.

Therefore, using Axiom 3, we write

P(A ∪ B) = P(A ∩ B') + P(A ∩ B) + P(A' ∩ B)   ... (1)

[Fig. 2.3: Venn diagram showing the disjoint regions A ∩ B', A ∩ B and A' ∩ B]

Similarly, A is the union of the disjoint events A ∩ B' and A ∩ B.

∴ P(A) = P(A ∩ B') + P(A ∩ B)   ... (2)

On similar lines, we see that

P(B) = P(A' ∩ B) + P(A ∩ B)   ... (3)

Adding (2) and (3), we get

P(A) + P(B) = P(A ∩ B') + P(A' ∩ B) + 2P(A ∩ B)
            = [P(A ∩ B') + P(A ∩ B) + P(A' ∩ B)] + P(A ∩ B)
            = P(A ∪ B) + P(A ∩ B)   ... from (1)

∴ P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Hence the proof.

Remark: Note that P(A ∪ B) is the probability of occurrence of at least one of the events A and B.
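The addition theorem is easy to confirm by counting, since unions and intersections of events are just set operations. A sketch on a fair-die sample space (the events A and B are illustrative):

```python
from fractions import Fraction

# Check P(A ∪ B) = P(A) + P(B) - P(A ∩ B) on an equiprobable space.
omega = set(range(1, 7))
P = lambda E: Fraction(len(E), len(omega))

A = {1, 2, 3}   # "at most 3 appears"
B = {2, 4, 6}   # "an even number appears"
print(P(A | B) == P(A) + P(B) - P(A & B))  # True
```

Subtracting P(A ∩ B) corrects the double count: the outcome 2 lies in both A and B, so it is counted twice in P(A) + P(B).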

Theorem 6: If A, B and C are any three events defined on Ω, then

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(A ∩ C) + P(A ∩ B ∩ C)

Proof: Let B ∪ C = D. Therefore,

P(A ∪ B ∪ C) = P(A ∪ D)
= P(A) + P(D) − P(A ∩ D)   ... by Theorem 5
= P(A) + P(B ∪ C) − P[A ∩ (B ∪ C)]
= P(A) + P(B) + P(C) − P(B ∩ C) − P[(A ∩ B) ∪ (A ∩ C)]
= P(A) + P(B) + P(C) − P(B ∩ C) − P(E ∪ F),   where E = A ∩ B and F = A ∩ C
= P(A) + P(B) + P(C) − P(B ∩ C) − P(E) − P(F) + P(E ∩ F)
= P(A) + P(B) + P(C) − P(B ∩ C) − P(A ∩ B) − P(A ∩ C) + P(A ∩ B ∩ C)
   ... [since (A ∩ B) ∩ (A ∩ C) = A ∩ B ∩ C]

Note: The above result can also be proved by an alternative method, in which we express A ∪ B ∪ C as the union of the 7 mutually exclusive events A ∩ B' ∩ C', A' ∩ B ∩ C', A' ∩ B' ∩ C, A ∩ B ∩ C', A ∩ B' ∩ C, A' ∩ B ∩ C and A ∩ B ∩ C, and then apply the axioms.
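The three-event formula can be verified by the same counting device. A sketch with illustrative events on a fair-die space:

```python
from fractions import Fraction

# Check the three-event addition theorem by direct counting.
omega = set(range(1, 7))
P = lambda E: Fraction(len(E), len(omega))

A, B, C = {1, 2, 3}, {2, 4, 6}, {3, 4, 5}
lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(B & C) - P(A & C)
       + P(A & B & C))
print(lhs == rhs)  # True
```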

Extension of the addition theorem for n > 3 events is also possible. Here we state the generalization without proof.

Generalization: For any n events A₁, A₂, ..., Aₙ on Ω, we have the following extension.

P(A₁ ∪ A₂ ∪ ... ∪ Aₙ) = Σ P(Aᵢ) − Σ P(Aᵢ ∩ Aⱼ) + Σ P(Aᵢ ∩ Aⱼ ∩ Aₖ) − Σ P(Aᵢ ∩ Aⱼ ∩ Aₖ ∩ Aᵣ) + ... + (−1)ⁿ⁻¹ P(A₁ ∩ A₂ ∩ ... ∩ Aₙ)

where the successive sums run over 1 ≤ i ≤ n, 1 ≤ i < j ≤ n, 1 ≤ i < j < k ≤ n and 1 ≤ i < j < k < r ≤ n respectively.

Theorem 7 (Boole's inequality): If A and B are any two events defined on Ω, then

P(A ∪ B) ≤ P(A) + P(B)

Proof: By Theorem 5,

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

and P(A ∩ B) ≥ 0, by Axiom 1.

∴ P(A ∪ B) ≤ P(A) + P(B)
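Boole's inequality, and its n-event extension below, can be spot-checked numerically; when events overlap, the right-hand side overcounts the intersections. A sketch with illustrative events:

```python
from fractions import Fraction
from functools import reduce

# Boole's inequality: P(A1 ∪ ... ∪ An) <= Σ P(Ai), checked by counting
# on a fair-die sample space with overlapping events.
omega = set(range(1, 7))
P = lambda E: Fraction(len(E), len(omega))

events = [{1, 2, 3, 4}, {3, 4, 5}, {5, 6}]
union = reduce(set.union, events)
print(P(union) <= sum(P(E) for E in events))  # True
```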

Extension of Boole's inequality is given in the following theorem.

Theorem 8: If A₁, A₂, ..., Aₙ are n events, then

P(A₁ ∪ A₂ ∪ ... ∪ Aₙ) ≤ P(A₁) + P(A₂) + ... + P(Aₙ)

Proof: We prove the theorem by the method of induction.

(i) By Theorem 7, we note that the result is true for n = 2.

(ii) Let us assume that the result holds for n = m, say. That is,

P(A₁ ∪ A₂ ∪ ... ∪ Aₘ) ≤ P(A₁) + P(A₂) + ... + P(Aₘ)

(iii) Now, to prove that the result is true for n = m + 1, let

B = A₁ ∪ A₂ ∪ ... ∪ Aₘ

so that A₁ ∪ A₂ ∪ ... ∪ Aₘ₊₁ = B ∪ Aₘ₊₁. Then, by Theorem 7 and the assumption in (ii),

P(B ∪ Aₘ₊₁) ≤ P(B) + P(Aₘ₊₁) ≤ P(A₁) + ... + P(Aₘ) + P(Aₘ₊₁)

Hence the result holds for n = m + 1, and by induction it holds for all n.


