PROBABILITY Part 2
Main points to be discussed here:

1. Random Experiment
2. Different types of events
3. Variable description of the events
4. Conditional Probability
5. Multiplication Theorem on Probability
6. Laws of Total Probability
7. Bayes' Theorem
8. Bernoulli's Trial
9. Mean and Variance of the Distribution
10. Shortcut Method of finding Mean, Variance and Standard Deviation
Conditional Probability:
\[P(A/B)=\frac{P(A\cap B)}{P(B)},\: where\: P(B)\neq 0\]
If A and B are independent events, then
\[P(A\cap B)=P(A).P(B)\]
If A, B and C are independent events, then
\[P(A\cap B\cap C)=P(A).P(B).P(C)\]
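As a quick check of the conditional-probability formula, here is a small sketch in Python; the single-die sample space and the two events are hypothetical examples chosen for illustration, not taken from the text:

```python
from fractions import Fraction

# Sample space for one roll of a fair die.
S = {1, 2, 3, 4, 5, 6}
A = {n for n in S if n % 2 == 0}   # event: an even number shows
B = {n for n in S if n > 3}        # event: a number greater than 3 shows

def prob(E):
    """Probability of an event E under equally likely outcomes."""
    return Fraction(len(E), len(S))

# Conditional probability: P(A/B) = P(A ∩ B) / P(B)
p_A_given_B = prob(A & B) / prob(B)
print(p_A_given_B)  # 2/3
```

Here A ∩ B = {4, 6}, so P(A ∩ B) = 1/3 and P(B) = 1/2, giving P(A/B) = 2/3.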
Explanation of some symbols and terms used in the problems
Variable description of the event ⇒ Set theory notation
A or B (at least one of A or B) = A ∪ B
A and B = A ∩ B
Not A = Ā
\[A\: but\: not\: B = A\cap\overline{B}\]
\[Either\: A\: or\: B = A\cup B\]
\[Neither\: A\: nor\: B = \overline{A}\cap \overline{B}\]
\[All\: three\: of\: A, B\: and\: C = A\cap B\cap C\]
\[At\: least\: one\: of\: A, B\: or\: C = A\cup B\cup C\]
\[Exactly\: one\: of\: A\: and\: B = (A\cap \overline{B})\cup (\overline{A}\cap B)\]
\[De\: Morgan's\: Laws:\: A'\cap B'=(A\cup B)'\]\[A'\cup B'=(A\cap B)'\]
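The set-theory translations above can be checked directly with Python sets; the sample space and events below are illustrative choices:

```python
S = set(range(1, 11))              # sample space: 1..10
A = {n for n in S if n % 2 == 0}   # even numbers
B = {n for n in S if n > 6}        # numbers above 6

# "A but not B" = A ∩ B̄
print(A & (S - B))                 # {2, 4, 6}
# "Neither A nor B" = Ā ∩ B̄
print((S - A) & (S - B))           # {1, 3, 5}
# De Morgan: A' ∩ B' = (A ∪ B)'
assert (S - A) & (S - B) == S - (A | B)
# De Morgan: A' ∪ B' = (A ∩ B)'
assert (S - A) | (S - B) == S - (A & B)
```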
\[P(A\cup B)=P(A)+P(B)-P(A\cap B)\]\[P(A\cup B\cup C)=P(A)+P(B)+P(C)-P(A\cap B)-P(B\cap C)-P(C\cap A)+P(A\cap B\cap C)\]
Multiplication Theorem on Probability
If A and B are two events associated with a random experiment, then
\[P(A\cap B)=P(A)P(B/A),\: where\: P(A)\neq 0\]
\[P(A\cap B)=P(B)P(A/B),\: where\: P(B)\neq 0\]
Extension of Multiplication Theorem
If A1, A2, A3, ....., An are n events associated with the random experiment, then
\[P(A_{1}\cap A_{2}\cap A_{3}\cap.....\cap A_{n})=P(A_{1})P(A_{2}/A_{1})P(A_{3}/A_{1}\cap A_{2})P(A_{4}/A_{1}\cap A_{2}\cap A_{3}).....P(A_{n}/A_{1}\cap A_{2}\cap.....\cap A_{n-1})\]
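The extended multiplication theorem can be illustrated with a classic draw-without-replacement setup; the deck example is an assumption added for illustration:

```python
from fractions import Fraction

# Drawing 3 aces in a row from a standard 52-card deck without
# replacement, by the extended multiplication theorem:
# P(A1 ∩ A2 ∩ A3) = P(A1) P(A2/A1) P(A3/A1 ∩ A2)
p = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)
print(p)  # 1/5525
```

Each conditional factor reflects the deck after the previous draws: 4 aces in 52 cards, then 3 in 51, then 2 in 50.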
BAYES' THEOREM
Let S be the sample space and let E1, E2, E3, ....., En be mutually exclusive and exhaustive events associated with a random experiment. If A is any event which occurs with E1 or E2 or E3 or .... or En, then
\[P(E_{i}/A)=\frac{P(E_{i})P(A/E_{i})}{\sum_{j=1}^{n} P(E_{j})P(A/E_{j})},\: \: i=1,2,3,....,n\]
If n = 2, Bayes' theorem is written as:
\[P(E_{1}/A)=\frac{P(E_{1})P(A/E_{1})}{P(E_{1})P(A/E_{1})+P(E_{2})P(A/E_{2})}\]
\[P(E_{2}/A)=\frac{P(E_{2})P(A/E_{2})}{P(E_{1})P(A/E_{1})+P(E_{2})P(A/E_{2})}\]
If n = 3, Bayes' theorem is written as:
\[P(E_{1}/A)=\frac{P(E_{1})P(A/E_{1})}{P(E_{1})P(A/E_{1})+P(E_{2})P(A/E_{2})+P(E_{3})P(A/E_{3})}\]
\[P(E_{2}/A)=\frac{P(E_{2})P(A/E_{2})}{P(E_{1})P(A/E_{1})+P(E_{2})P(A/E_{2})+P(E_{3})P(A/E_{3})}\]
\[P(E_{3}/A)=\frac{P(E_{3})P(A/E_{3})}{P(E_{1})P(A/E_{1})+P(E_{2})P(A/E_{2})+P(E_{3})P(A/E_{3})}\]
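A sketch of Bayes' theorem for n = 2, using a hypothetical two-urn example whose numbers are invented for illustration:

```python
from fractions import Fraction

# Hypothetical setup: urn E1 holds 3 red and 2 black balls; urn E2
# holds 1 red and 4 black. An urn is picked at random, then one ball
# is drawn. Given the ball is red (event A), find P(E1/A).
P_E1, P_E2 = Fraction(1, 2), Fraction(1, 2)
P_A_E1 = Fraction(3, 5)   # P(A/E1): red from urn E1
P_A_E2 = Fraction(1, 5)   # P(A/E2): red from urn E2

# Bayes' theorem with n = 2
P_E1_A = (P_E1 * P_A_E1) / (P_E1 * P_A_E1 + P_E2 * P_A_E2)
print(P_E1_A)  # 3/4
```

The denominator is the law of total probability for A; a red ball is three times as likely to have come from E1 as from E2.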
\[Mean(\overline{X})=\sum_{i=1}^{n} p_{i}x_{i}\]\[Variance(X)=\sum_{i=1}^{n} p_{i}x_{i}^{2}-\left ( \sum_{i=1}^{n} p_{i}x_{i} \right )^{2}\]
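These mean and variance formulas can be evaluated directly; the fair-die distribution below is an illustrative choice:

```python
from fractions import Fraction

# Distribution of X = number shown on one roll of a fair die.
xs = [1, 2, 3, 4, 5, 6]
ps = [Fraction(1, 6)] * 6

# Mean = Σ p_i x_i ;  Variance = Σ p_i x_i² − (Σ p_i x_i)²
mean = sum(p * x for p, x in zip(ps, xs))
variance = sum(p * x * x for p, x in zip(ps, xs)) - mean ** 2
print(mean)      # 7/2
print(variance)  # 35/12
```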
\[P(X=r)=\:^{n}C_{r}\: p^{r}q^{n-r},\: where\: r=0,1,2,3,....,n\]
Bernoulli’s Trial
The probability of r successes in n Bernoulli trials is given by
\[P(r\: successes)=\:^{n}C_{r}\: p^{r}q^{n-r}=\frac{n!}{r!(n-r)!}\: p^{r}q^{n-r}\]
where
n = number of trials
r = number of successful trials = 0, 1, 2, 3, ....., n
p = probability of success in a trial
q = probability of failure in a trial, and p + q = 1
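The Bernoulli-trial formula translates directly to code; `binomial_pmf` below is a hypothetical helper name, not from the text:

```python
from fractions import Fraction
from math import comb

def binomial_pmf(r, n, p):
    """P(r successes in n Bernoulli trials), with q = 1 - p."""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

# Probability of exactly 2 heads in 4 tosses of a fair coin.
print(binomial_pmf(2, 4, Fraction(1, 2)))  # 3/8
```

Here `comb(n, r)` computes the binomial coefficient nCr = n!/(r!(n−r)!).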
The probability distribution of the number of successes for a random variable X can be written as:
\[\begin{array}{c|ccccc} X & 0 & 1 & 2 & .... & n \\ \hline P(X) & ^{n}C_{0}\,q^{n} & ^{n}C_{1}\,pq^{n-1} & ^{n}C_{2}\,p^{2}q^{n-2} & .... & ^{n}C_{n}\,p^{n} \end{array}\]
This probability distribution is called the binomial distribution with parameters n and p.
Shortcut Method of finding Mean, Variance and Standard Deviation
The binomial distribution with n Bernoulli trials and probability of success p is also denoted by B(n, p), where n is the number of Bernoulli trials, p denotes the probability of success and q = 1 − p denotes the probability of failure. Then:
Mean = np
Variance = npq
Standard Deviation = √(npq)
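As a check, computing the mean and variance directly from the distribution B(n, p) reproduces the shortcut formulas; n = 4 and p = 1/2 are illustrative values:

```python
from fractions import Fraction
from math import comb

n, p = 4, Fraction(1, 2)
q = 1 - p

# Direct computation from the binomial distribution B(n, p)...
pmf = {r: comb(n, r) * p**r * q**(n - r) for r in range(n + 1)}
mean = sum(pr * r for r, pr in pmf.items())
variance = sum(pr * r * r for r, pr in pmf.items()) - mean**2

# ...matches the shortcut formulas Mean = np, Variance = npq.
assert mean == n * p
assert variance == n * p * q
print(mean, variance)  # 2 1
```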