Announcements
- Our November 12th class will be held online. Information on how to connect is forthcoming.
Agenda
- Distributions (chapter 3) Lecture
- Work on lab 1 (this will be the first graded lab)
October 8, 2018
coins <- sample(c(-1, 1), 100, replace=TRUE)
plot(1:length(coins), cumsum(coins), type='l')
abline(h=0)
cumsum(coins)[length(coins)]
## [1] -12
samples <- rep(NA, 1000)
for(i in seq_along(samples)) {
	coins <- sample(c(-1, 1), 100, replace=TRUE)
	samples[i] <- cumsum(coins)[length(coins)]
}
head(samples)
## [1] -8 8 -2 -10 -8 6
hist(samples)
(m.sam <- mean(samples))
## [1] 0.162
(s.sam <- sd(samples))
## [1] 9.883088
within1sd <- samples[samples >= m.sam - s.sam & samples <= m.sam + s.sam]
length(within1sd) / length(samples)
## [1] 0.677
within2sd <- samples[samples >= m.sam - 2 * s.sam & samples <= m.sam + 2 * s.sam]
length(within2sd) / length(samples)
## [1] 0.951
within3sd <- samples[samples >= m.sam - 3 * s.sam & samples <= m.sam + 3 * s.sam]
length(within3sd) / length(samples)
## [1] 0.999
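These simulated proportions track the theoretical 68-95-99.7 rule, which we can compute directly from the standard normal distribution:

```r
# Theoretical proportions within 1, 2, and 3 standard deviations of the mean
pnorm(1) - pnorm(-1)  # ~0.683
pnorm(2) - pnorm(-2)  # ~0.954
pnorm(3) - pnorm(-3)  # ~0.997
```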
\[ f\left( x|\mu ,\sigma \right) =\frac { 1 }{ \sigma \sqrt { 2\pi } } { e }^{ -\frac { { \left( x-\mu \right) }^{ 2 } }{ { 2\sigma }^{ 2 } } } \]
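As a sanity check, `dnorm()` should agree with this density formula evaluated by hand:

```r
# Verify that dnorm() matches the normal density formula at a few points
x <- c(-1, 0, 2)
mu <- 0
sigma <- 1
manual <- 1 / (sigma * sqrt(2 * pi)) * exp(-(x - mu)^2 / (2 * sigma^2))
all.equal(manual, dnorm(x, mean = mu, sd = sigma))  # TRUE
```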
x <- seq(-4, 4, length=200)
y <- dnorm(x, mean=0, sd=1)
plot(x, y, type="l", lwd=2, xlim=c(-3.5, 3.5), ylab='', xlab='z-score', yaxt='n')
pnorm(15, mean=mean(samples), sd=sd(samples))
## [1] 0.9333678
1 - pnorm(15, mean=mean(samples), sd=sd(samples))
## [1] 0.06663219
SAT scores are distributed nearly normally with mean 1500 and standard deviation 300. ACT scores are distributed nearly normally with mean 21 and standard deviation 5. A college admissions officer wants to determine which of the two applicants scored better on their standardized test with respect to the other test takers: Pam, who earned an 1800 on her SAT, or Jim, who scored a 24 on his ACT?
\[ Z = \frac{observation - mean}{SD} \]
Converting Pam and Jim's scores to z-scores:
\[ Z_{Pam} = \frac{1800 - 1500}{300} = 1 \]
\[ Z_{Jim} = \frac{24-21}{5} = 0.6 \]
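The same comparison can be done in R, and `pnorm()` converts each z-score into the proportion of test takers scoring below it:

```r
# z-scores for Pam (SAT: mean 1500, sd 300) and Jim (ACT: mean 21, sd 5)
(z.pam <- (1800 - 1500) / 300)  # 1
(z.jim <- (24 - 21) / 5)        # 0.6
# Percentile of each score among the other test takers
pnorm(z.pam)  # ~0.84: Pam scored better than about 84% of SAT takers
pnorm(z.jim)  # ~0.73: Jim scored better than about 73% of ACT takers
```

Since Pam's z-score (and percentile) is higher, she performed better relative to her test's takers.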
SAT scores are distributed nearly normally with mean 1500 and standard deviation 300.
To use the 68-95-99.7 rule, we must verify the normality assumption. We will also want to do this later when we discuss various (parametric) models. Consider a sample of 100 male heights (in inches).
The histogram looks normal, but we can overlay a normal curve to help evaluate it.
DATA606::qqnormsim(heights)
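The `heights` vector and `qqnormsim()` come from the course's DATA606 package; a base-R sketch of the same normal probability plot, using simulated heights (the mean and standard deviation here are assumed for illustration), looks like this:

```r
# Base-R normal probability plot on simulated male heights (in inches);
# mean = 70 and sd = 3 are assumed values, not from the course data
set.seed(2112)
heights <- rnorm(100, mean = 70, sd = 3)
qqnorm(heights)  # points near a straight line suggest normality
qqline(heights)  # reference line through the quartiles
```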
A random variable X has a Bernoulli distribution with parameter p if
\[ P(X=1) = p \quad and \quad P(X=0) = 1 - p \] for \(0 < p < 1\)
Dr. Smith wants to repeat Milgram’s experiments but she only wants to sample people until she finds someone who will not inflict a severe shock. What is the probability that she stops after the first person?
\[P(1^{st}\quad person\quad refuses) = 0.35\]
the third person?
\[ P(1^{st}\quad and\quad 2^{nd}\quad shock,\quad 3^{rd}\quad refuses) = \underbrace{0.65}_{S} \times \underbrace{0.65}_{S} \times \underbrace{0.35}_{R} = 0.65^{2} \times 0.35 \approx 0.15 \]
the tenth person?
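These probabilities can be computed with R's `dgeom()`; note that `dgeom(k, p)` gives the probability of `k` failures (here, shocks) before the first success (a refusal):

```r
# dgeom(k, p): probability of k failures before the first success
p <- 0.35
dgeom(0, p)  # first person refuses: 0.35
dgeom(2, p)  # first two shock, third refuses: 0.65^2 * 0.35 ~ 0.148
dgeom(9, p)  # first nine shock, tenth refuses: 0.65^9 * 0.35 ~ 0.0072
```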
The geometric distribution describes the waiting time until the first success for independent and identically distributed (iid) Bernoulli random variables.
Geometric probabilities
If \(p\) represents the probability of success, \((1 - p)\) represents the probability of failure, and \(n\) represents the number of independent trials
\[ P(success\quad on\quad the\quad { n }^{ th }\quad trial) = (1 - p)^{n-1}p \]
How many people is Dr. Smith expected to test before finding the first one that refuses to administer the shock?
The expected value, or the mean, of a geometric distribution is defined as \(\frac{1}{p}\).
\[ \mu = \frac{1}{p} = \frac{1}{0.35} = 2.86 \]
She is expected to test 2.86 people before finding the first one that refuses to administer the shock.
But how can she test a non-whole number of people?
\[ \mu = \frac{1}{p} \qquad\qquad \sigma = \sqrt{\frac{1-p}{p^2}} \]
Going back to Dr. Smith’s experiment:
\[ \sigma = \sqrt{\frac{1-p}{p^2}} = \sqrt{\frac{1-0.35}{0.35^2}} = 2.3 \]
Dr. Smith is expected to test 2.86 people before finding the first one that refuses to administer the shock, give or take 2.3 people.
These values only make sense in the context of repeating the experiment many many times.
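Repeating the experiment many times can be simulated directly. `rgeom()` counts failures before the first success, so adding 1 gives the number of people tested:

```r
# Simulate Dr. Smith's experiment many times to check mu = 1/p and
# sigma = sqrt((1 - p)/p^2)
set.seed(2112)
p <- 0.35
tested <- rgeom(100000, p) + 1  # people tested, including the one who refuses
mean(tested)  # ~2.86, close to 1/0.35
sd(tested)    # ~2.3, close to sqrt((1 - 0.35)/0.35^2)
```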
Suppose we randomly select four individuals to participate in this experiment. What is the probability that exactly 1 of them will refuse to administer the shock?
Let’s call these people Allen (A), Brittany (B), Caroline (C), and Damian (D). Each one of the four scenarios below will satisfy the condition of “exactly 1 of them refuses to administer the shock”:
The probability of exactly 1 of the 4 people refusing to administer the shock is the sum of all of these probabilities.
0.0961 + 0.0961 + 0.0961 + 0.0961 = 4 × 0.0961 = 0.3844
The question from the prior slide asked for the probability of a given number of successes, \(k\), in a given number of trials, \(n\) (here, \(k = 1\) success in \(n = 4\) trials), and we calculated this probability as
\[ \#\quad of\quad scenarios \times P(single\quad scenario) \]
Number of scenarios: there is a less tedious way to figure this out; we’ll get to that shortly…
\[ P(single\quad scenario) = p^k (1 - p)^{(n-k)} \]
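Plugging in the values for this example:

```r
# Probability of a single scenario: 1 refusal (p = 0.35), 3 shocks (0.65 each)
p <- 0.35
n <- 4
k <- 1
p^k * (1 - p)^(n - k)  # ~0.0961, matching each of the four scenarios above
```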
The Binomial distribution describes the probability of having exactly k successes in n independent Bernoulli trials with probability of success p.
The choose function is useful for calculating the number of ways to choose k successes in n trials.
\[ \left( \begin{matrix} n \\ k \end{matrix} \right) =\frac { n! }{ k!(n-k)! } \]
For example:
\[ \left( \begin{matrix} 9 \\ 2 \end{matrix} \right) =\frac { 9! }{ 2!(9-2)! } =\frac { 9\times 8\times 7! }{ 2\times 1\times 7! } =\frac { 72 }{ 2 } =36 \]
choose(9,2)
## [1] 36
If \(p\) represents the probability of success, \((1 - p)\) represents the probability of failure, \(n\) represents the number of independent trials, and \(k\) represents the number of successes
\[ P(k\quad successes\quad in\quad n\quad trials)=\left( \begin{matrix} n \\ k \end{matrix} \right) { p }^{ k }{ (1-p) }^{ (n-k) } \]
n <- 4
p <- 0.35
barplot(dbinom(0:n, n, p), names.arg=0:n)
dbinom(1, 4, p)
## [1] 0.384475