Bivariate Discrete Probability Distribution

Introduction

So far we have learnt what is meant by a discrete random variable, its probability distribution and its expectation. Throughout, we assumed that a single characteristic, say X, is of interest. For example, X was the number on the face of a die, the number of heads when 3 coins are tossed, the sum of the two faces when two dice are thrown, etc. Hence, the set-up was univariate, concerning a single r.v. However, there are many situations in which we are interested in two characteristics, say X and Y, at the same time, related to the same item. That is, there are two r.v.s. to be observed simultaneously. In other words, X and Y are two r.v.s. defined on the same sample space Ω. The following examples illustrate this point.

1. In a family planning survey, number of children (X), as well as number of girls (Y) in a family are recorded.

2. The sex of a person (X) as well as whether the person is a smoker or not (Y) are recorded.

3. In forestry, diseased plants from two species are identified.

X = 1 if the plant is diseased and X = 0 otherwise.

Y = 1 if the plant is of species I and Y = 0 otherwise.

4. X may be the sum of the two numbers and Y may be the maximum of the two numbers when two dice are thrown.

5. X may be the sale in kg of tea of Brand A while Y may be the sale of tea of Brand B.

These situations demand handling two variables at a time. Hence, we combine them into an ordered pair, say (X, Y), and call it a bivariate random variable or two-dimensional r.v. Note that both X and Y are defined on the same sample space Ω. Whenever both X and Y are discrete (finite in this course), we say that (X, Y) is a two-dimensional discrete r.v.

Definition 1: Bivariate (Two-Dimensional) Discrete r.v.: Let Ω be the sample space corresponding to a random experiment. Let X and Y be two real-valued functions (r.v.s.) defined on Ω. The ordered pair (X, Y) defined by (X, Y)(ω) = (X(ω), Y(ω)) is called a bivariate or two-dimensional random variable on Ω. If the number of possible values of (X, Y) is finite, then (X, Y) is called a bivariate discrete r.v.

Note that (X, Y) is discrete if and only if X and Y are both discrete. Hence, if X takes values x1, x2, ..., xm and Y takes values y1, y2, ..., yn, then the range space of (X, Y) is

R(X, Y) = {(xi, yj); i = 1, 2, ..., m; j = 1, 2, ..., n}

R(X, Y) is nothing but the Cartesian product of RX and RY, the ranges of X and Y respectively. R(X, Y) contains m × n points.
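As a small illustration (a Python sketch with made-up ranges, not taken from the text), the range space can be formed as a Cartesian product of the individual ranges:

```python
from itertools import product

# Hypothetical ranges of X and Y: X takes m = 3 values, Y takes n = 2 values
range_x = [0, 1, 2]
range_y = [10, 20]

# R(X, Y) is the Cartesian product of the two ranges; it contains m * n ordered pairs
range_xy = list(product(range_x, range_y))

print(range_xy)        # [(0, 10), (0, 20), (1, 10), (1, 20), (2, 10), (2, 20)]
print(len(range_xy))   # 6 = 3 * 2
```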

 Bivariate (Joint) Probability Distribution

Recall that the p.m.f. P(x) of a single r.v. X was derived by adding the probabilities of the elements in Ω which give rise to the event [X = x]. On similar lines one can construct the joint probability mass function of the bivariate r.v. (X, Y).

Definition 2: Joint Probability Mass Function of (X, Y): Let (X, Y) be a discrete bivariate r.v. defined on Ω. Let the range space of (X, Y) be

R(X, Y) = {(xi, yj); i = 1, ..., m; j = 1, 2, ..., n}.

For each (xi, yj) ∈ R(X, Y) we define a function P(xi, yj) = pij as follows:

pij = P(xi, yj) = P[X = xi, Y = yj];   i = 1, 2, ..., m; j = 1, 2, ..., n.

If (i) pij ≥ 0 for all i, j, and

(ii) Σ_{i=1}^{m} Σ_{j=1}^{n} pij = 1,

then the above function P is called the joint probability mass function of (X, Y).

The set {(xi, yj, pij); i = 1, 2, ..., m; j = 1, 2, ..., n} is called the joint probability distribution of (X, Y), which can be represented in the following tabular form.

P[X = xi, Y = yj] is obtained by adding the probabilities of those elements in Ω which give rise to the event [X = xi] ∩ [Y = yj].

Table 2.1

X \ Y     y1      y2      ...     yj      ...     yn      Total
x1        p11     p12     ...     p1j     ...     p1n     p1.
x2        p21     p22     ...     p2j     ...     p2n     p2.
:         :       :               :               :       :
xi        pi1     pi2     ...     pij     ...     pin     pi.
:         :       :               :               :       :
xm        pm1     pm2     ...     pmj     ...     pmn     pm.
Total     p.1     p.2     ...     p.j     ...     p.n     1

The row totals p1., p2., ..., pm. represent the marginal probabilities of X, while the column totals p.1, p.2, ..., p.n represent the marginal probabilities of Y.
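For instance, a joint p.m.f. laid out as in Table 2.1 can be stored as a two-dimensional array and the two conditions of Definition 2 checked directly; the sketch below uses made-up probabilities purely for illustration.

```python
# Joint p.m.f. pij laid out as in Table 2.1: rows correspond to values of X,
# columns to values of Y.  The numbers are made up purely for illustration.
p = [
    [0.10, 0.20, 0.10],   # p11, p12, p13
    [0.15, 0.25, 0.20],   # p21, p22, p23
]

# Condition (i): every pij >= 0.
assert all(pij >= 0 for row in p for pij in row)
# Condition (ii): the double sum of pij over i and j equals 1.
assert abs(sum(pij for row in p for pij in row) - 1.0) < 1e-12
print("p is a valid joint p.m.f.")
```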

Marginal Probability Distributions

Let {(xi, yj, pij); i = 1, ..., m; j = 1, ..., n} represent the joint probability distribution of a two-dimensional discrete r.v. (X, Y). We know that by pij we mean P[X = xi, Y = yj]. Using these, we can obtain the (univariate) probability distribution of X as follows.

pi. = P(X = xi)

    = P[(X = xi) ∩ Ω]

    = P[(X = xi) ∩ (∪_{j=1}^{n} (Y = yj))]

    = P[∪_{j=1}^{n} {(X = xi) ∩ (Y = yj)}]      ... using the distributive property

    = Σ_{j=1}^{n} P(X = xi, Y = yj)             ... since the events (X = xi, Y = yj) for fixed xi and different yj are disjoint

    = Σ_{j=1}^{n} pij ;    i = 1, 2, ..., m

The probability distribution obtained above is called the marginal probability distribution of X. It can be presented in the following tabular form.

X            x1      x2      ...     xi      ...     xm      Total
P(X = xi)    p1.     p2.     ...     pi.     ...     pm.     1

On similar lines we can derive the marginal probability distribution of Y.

p.j = P(Y = yj)

    = P[∪_{i=1}^{m} (X = xi, Y = yj)]

    = Σ_{i=1}^{m} pij ;    j = 1, 2, ..., n

It can be presented in the following tabular form.

Y            y1      y2      ...     yj      ...     yn      Total
P(Y = yj)    p.1     p.2     ...     p.j     ...     p.n     1

Obviously,

Σ_{i=1}^{m} pi. = Σ_{j=1}^{n} p.j = 1

Thus, the row totals in Table 2.1 give the probabilities P(X = xi) for the various values of xi, while the column totals give the probabilities P(Y = yj) for the various values of yj.
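A minimal sketch of this bookkeeping (illustrative values only, not from the text): the marginal of X is obtained by summing each row of the joint table, and the marginal of Y by summing each column.

```python
# Values of X and Y and a hypothetical joint p.m.f. (rows = X, columns = Y); not from the text.
x_vals = [0, 1]
y_vals = [1, 2, 3]
p = [
    [0.10, 0.20, 0.10],
    [0.15, 0.25, 0.20],
]

# Marginal of X:  p_i. = sum over j of p_ij   (row totals)
marginal_x = {x: sum(p[i]) for i, x in enumerate(x_vals)}
# Marginal of Y:  p_.j = sum over i of p_ij   (column totals)
marginal_y = {y: sum(p[i][j] for i in range(len(x_vals))) for j, y in enumerate(y_vals)}

print(marginal_x)   # roughly {0: 0.4, 1: 0.6} (up to floating point)
print(marginal_y)   # roughly {1: 0.25, 2: 0.45, 3: 0.3}
```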

Solved Examples

Example: Two fair dice are thrown. Let X denote the absolute difference between the two scores and Y denote the maximum of the two scores.

(i) Derive the joint probability distribution of (X, Y).

(ii) Also obtain the marginal probability distributions of X and Y.

Solution: We know that Ω contains 36 elements, each with probability 1/36. The possible values of X are 0, 1, 2, 3, 4, 5 and those of Y are 1, 2, 3, 4, 5, 6. Now, consider P(X = 0, Y = 1). It means that the difference between the two numbers is 0 and the maximum of the two numbers is 1. This can happen only when (1, 1) occurs, that is, when both the faces show 1.

∴ P(X = 0, Y = 1) = 1/36

Similarly, P(X = 0, Y = 2) = P[(2, 2)] = 1/36, ...,

P(X = 0, Y = 6) = P[(6, 6)] = 1/36

Further, P(X = 1, Y = 1) = probability that the difference between the two numbers is 1 while the maximum is 1. This is an impossible event, as no face of the die is marked 0.

∴ P(X = 1, Y = 1) = 0
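The full table can be generated mechanically. Below is a short Python sketch (using only the setup stated in the example; the variable names are ours) that enumerates the 36 equally likely outcomes, builds the joint distribution of X = |difference| and Y = maximum, and sums rows and columns to obtain the marginals:

```python
from fractions import Fraction
from collections import defaultdict

joint = defaultdict(Fraction)          # joint p.m.f. of (X, Y)
for d1 in range(1, 7):
    for d2 in range(1, 7):             # each of the 36 outcomes has probability 1/36
        x = abs(d1 - d2)               # X = absolute difference of the two scores
        y = max(d1, d2)                # Y = maximum of the two scores
        joint[(x, y)] += Fraction(1, 36)

print(joint[(0, 1)])                   # 1/36, as computed above
print(joint[(1, 1)])                   # 0 (impossible event)

# Marginal distributions of X and Y, obtained by summing over the other variable
marginal_x = defaultdict(Fraction)
marginal_y = defaultdict(Fraction)
for (x, y), prob in joint.items():
    marginal_x[x] += prob
    marginal_y[y] += prob

print(dict(marginal_x))   # X = 0,...,5 with probabilities 6/36, 10/36, 8/36, 6/36, 4/36, 2/36 (printed in reduced form)
print(dict(marginal_y))   # Y = 1,...,6 with probabilities 1/36, 3/36, 5/36, 7/36, 9/36, 11/36
```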

 Independence of Two Discrete Random Variables

In an earlier chapter, we defined the independence of two events: events A and B are independent if and only if P(A ∩ B) = P(A) · P(B). Extending this definition to two discrete r.v.s. X and Y, we say that X and Y are independent if and only if the events [X = x] and [Y = y] are independent for all values x of X and y of Y. The definition in terms of p.m.f.s is as follows.


Definition 3: Let (X, Y) be a discrete bivariate r.v. defined on a sample space Ω. Let pij = P(X = xi, Y = yj); i = 1, ..., m; j = 1, ..., n denote the joint p.m.f. of (X, Y). Let pi., i = 1, 2, ..., m and p.j, j = 1, 2, ..., n be the marginal p.m.f.s of X and Y respectively. Then X and Y are called independent r.v.s. if and only if,

pij = pi. × p.j ;   i = 1, ..., m; j = 1, ..., n.

In other words,

P[X = xi, Y = yj] = P(X = xi) P(Y = yj) for all i and j.
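Definition 3 can be checked numerically by comparing each pij with the product of its row total and column total. The sketch below (made-up tables, not from the text) illustrates this:

```python
def is_independent(p, tol=1e-9):
    """Return True if p_ij = p_i. * p_.j for every cell of the joint p.m.f. table."""
    row_totals = [sum(row) for row in p]          # p_i.
    col_totals = [sum(col) for col in zip(*p)]    # p_.j
    return all(
        abs(p[i][j] - row_totals[i] * col_totals[j]) < tol
        for i in range(len(p))
        for j in range(len(p[0]))
    )

# Hypothetical examples (not from the text):
p_indep = [[0.12, 0.18, 0.30],     # factorises as 0.6 * (0.2, 0.3, 0.5)
           [0.08, 0.12, 0.20]]     # and 0.4 * (0.2, 0.3, 0.5)
p_dep   = [[0.25, 0.25],
           [0.15, 0.35]]           # sums to 1 but does not factorise

print(is_independent(p_indep))   # True
print(is_independent(p_dep))     # False
```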

 Conditional Probability Distributions

The conditional probability of event A given event B is defined as

P(A|B) = P(A ∩ B) / P(B),    P(B) > 0.

Similarly, the conditional probability of (X = xi) given that (Y = yj) has occurred is

P(X = xi | Y = yj) = P(X = xi, Y = yj) / P(Y = yj) = pij / p.j ,    provided p.j > 0.

When Y is fixed at yj and we obtain the conditional probabilities of each xi, i = 1, ..., m, given the event (Y = yj), what we get is the conditional probability distribution of X given Y = yj. It is represented as follows:

X                       x1           x2           ...     xi           ...     xm           Total
P(X = xi | Y = yj)      p1j / p.j    p2j / p.j    ...     pij / p.j    ...     pmj / p.j    1


When we fix X at a particular value xi, say, and obtain the conditional probabilities of Y = yj for all j, given X = xi, we get the conditional probability distribution of Y given X = xi.

P(Y = yj | X = xi) = P(X = xi, Y = yj) / P(X = xi) = pij / pi. ,    provided pi. > 0.

It is tabulated as follows.

Y                       y1           y2           ...     yj           ...     yn           Total
P(Y = yj | X = xi)      pi1 / pi.    pi2 / pi.    ...     pij / pi.    ...     pin / pi.    1

Remark 1: Notice that when we obtain the conditional distribution of X given Y = yj, the underlying variable is X and not Y; in fact Y is held constant at yj. Similarly, in the case of the conditional distribution of Y given X = xi, the variable is Y and not X; in fact X is held constant at xi. Accordingly, conditional probability distributions are univariate probability distributions.

Remark 2:  Σ_{i=1}^{m} P(X = xi | Y = yj) = Σ_{i=1}^{m} pij / p.j = p.j / p.j = 1

and  Σ_{j=1}^{n} P(Y = yj | X = xi) = Σ_{j=1}^{n} pij / pi. = pi. / pi. = 1.

These conditions have to be fulfilled as these are p.m.f.s.

Remark 3: When X and Y are independent, the conditional probability distributions are nothing but the respective marginal distributions. For,

P(X = xi | Y = yj) = P(X = xi, Y = yj) / P(Y = yj)

                   = P(X = xi) P(Y = yj) / P(Y = yj)

                   = pi. p.j / p.j

                   = pi.        for i = 1, ..., m.

Also, P(Y = yj | X = xi) = pij / pi. = pi. p.j / pi. = p.j        for j = 1, 2, ..., n.

This is consistent with P(A|B) = P(A) when A and B are independent.

Remark 4: To obtain the conditional probability distribution of X given Y = yj, mark the column of Y = yj and divide each entry in this column by the column total. Similarly, to get the conditional probability distribution of Y given X = xi, mark the i-th row and divide each value in this row by the row total.
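Remark 4 translates directly into code. A minimal sketch (the joint table is hypothetical, not from the text): dividing a column by its total gives the conditional distribution of X given Y = yj, and dividing a row by its total gives the conditional distribution of Y given X = xi.

```python
def conditional_x_given_y(p, j):
    """Conditional p.m.f. of X given Y = y_j: divide column j by the column total p_.j."""
    col_total = sum(row[j] for row in p)
    return [row[j] / col_total for row in p]

def conditional_y_given_x(p, i):
    """Conditional p.m.f. of Y given X = x_i: divide row i by the row total p_i."""
    row_total = sum(p[i])
    return [pij / row_total for pij in p[i]]

# Hypothetical joint p.m.f. (rows = values of X, columns = values of Y); not from the text.
p = [
    [0.10, 0.20, 0.10],
    [0.15, 0.25, 0.20],
]

print(conditional_x_given_y(p, 0))   # ≈ [0.4, 0.6]          -- sums to 1, as Remark 2 requires
print(conditional_y_given_x(p, 1))   # ≈ [0.25, 0.417, 0.333] -- also sums to 1
```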

Joint Distribution Function of Two Dimensional Discrete r.v.

Definition 4: Let (X, Y) be a bivariate discrete r.v. defined on Ω. The joint (cumulative) distribution function F(x, y) is defined as,

F(x, y) = P[X ≤ x, Y ≤ y],    x, y ∈ R

We state below some of the important properties of the distribution function.

1. 0 ≤ F(x, y) ≤ 1.

2. The function is non-decreasing in each of the variables, i.e.

(i) F(x, y) ≤ F(x, y') if y < y'

(ii) F(x, y) ≤ F(x', y) if x < x'

3. lim_{x → -∞} F(x, y) = 0 and lim_{y → -∞} F(x, y) = 0.

4. lim_{x → ∞, y → ∞} F(x, y) = 1.

5. Let a, b, c, d be any real numbers with a < b, c < d. Then,

P[a < X ≤ b, c < Y ≤ d] = F(b, d) + F(a, c) - F(b, c) - F(a, d)
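As a sketch (with a hypothetical joint distribution, not from the text), F(x, y) for a discrete (X, Y) is just the sum of pij over the pairs with xi ≤ x and yj ≤ y; the code below also checks property 5 against a direct sum over the rectangle.

```python
# Hypothetical discrete joint distribution: value pairs and their probabilities (not from the text).
p = {(1, 1): 0.10, (1, 2): 0.20,
     (2, 1): 0.15, (2, 2): 0.25,
     (3, 1): 0.10, (3, 2): 0.20}

def F(x, y):
    """Joint c.d.f.: F(x, y) = P[X <= x, Y <= y], a double sum of pij over xi <= x, yj <= y."""
    return sum(prob for (xi, yj), prob in p.items() if xi <= x and yj <= y)

# Property 5: P[a < X <= b, c < Y <= d] = F(b, d) + F(a, c) - F(b, c) - F(a, d)
a, b, c, d = 1, 3, 1, 2
lhs = sum(prob for (xi, yj), prob in p.items() if a < xi <= b and c < yj <= d)
rhs = F(b, d) + F(a, c) - F(b, c) - F(a, d)
print(lhs, rhs)   # both ≈ 0.45, the probability of the rectangle
```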

 Probability Distribution of (X + Y) when X & Y are Independent

Let (X, Y) be a bivariate r.v. with range space {(xi, yj), i = 1, 2, ..., m; j = 1, 2, ..., n}, where X and Y are independent. Define Z = X + Y. Let P1(x) and P2(y) denote the marginal p.m.f.s of X and Y respectively. Consider

P[Z = z] = Σ_{i=1}^{m} P[X = xi, Y = z - xi]

         = Σ_{i=1}^{m} P[X = xi] P[Y = z - xi]        ... since X and Y are independent

         = Σ_{i=1}^{m} P1(xi) P2(z - xi)
i=1

