STK4011 – Statistical Inference Theory
From Chapter 1: Exercises 1, 2, 3, 4, 7, 8, 9, 10, 12, 15, 16, 17, 18, 21, 23, 24, 29, 30, 31, 33, 40, 41, 43, 45, 49, 50. Gently asterisked: 12, 30, 33, 41, 45.
From Chapter 2: Exercises 1, 2, 6, 7, 8, 9, 10, 11, 12, 13, 14, 18, 19, 21, 23, 29, 32, 33, 39, 41, 42, 43, 52, 54, 55. Gently asterisked: 2, 6, 10, 19, 21, 29, 32, 41, 42.
From Chapter 3: Exercises 1, 2, 4, 5, 6, 7, 8, 9, 11, 12, 15, 16, 17, 18, 24, 27, 30, 31, 32, 34. Gently asterisked: 1, 6, 11, 17, 18, 24, 30, 31.
From Chapter 4: Exercises 1, 2, 3, 4, 5, 7, 8, 9, 12, 16, 18, 24, 25, 26, 27, 29, 32, 34, 35, 41. Gently asterisked: 1, 2, 5, 7, 16, 18, 24, 25, 26, 32, 34, 41.
From Chapter 5 (in the PartOne version from October 2024): Exercises 1, 2, 6, 14, 15, 16, 17, 18, 23, 29, 30, 31, 42, 44, 46, 51. Gently asterisked: 2, 14, 15, 17, 42, 44.
From Chapter 6 (in the PartOne version from October 2024): Exercises 1, 2, 3, 4, 6, 8, 15, 16, 27, 28.
From Chapter 7 (in the PartOne version from October 2024): Exercises 1, 2, 3, 6, 9, 10, 12, 14.
From Chapter 8 (in the PartOne version from October 2024): Exercises 1, 2, 3, 4, 7, 8, 9, 10.
From the Statistical Stories: v.6, Who wins?, dataset: denmark-norway-handballnov2022; v.7, The turn-around operation, dataset: 0-2 to 3-2; iv.1, New Haven temperatures, dataset: newhaven-data; iv.3, How special are You, dataset: sleep14; vi.1, Checking out the CLT; i.6, Mothers, babies, birthweights, factors; vii.4, Time-to-failure for machine components; iv.6, Birds on islands outside Ecuador; ...
I Short & crisp
1 Statistical models
Normal, bi- and multinomial, exponential, gamma, mixing
The normal distribution
Normal sums
Binomial distribution
Trinomial probabilities
Hazard rates and survival functions
The exponential distribution
The Gamma distribution
Mixing the exponential
Transformations, uniform, Pareto, Cauchy, Beta, Dirichlet
Transformation from \(X\) to \(Y\)
Ordering exponentials
The Pareto distribution
Maxima of i.i.d. samples
Ratios and the Cauchy
Dirichlets inside Dirichlets
The Dirichlet-multinomial distribution
Laws of small numbers, Poisson, geometric, negative binomial
The Poisson distribution
Conditioning on Poisson sums and a generalised binomial
Moments, moment-generating functions, characteristic functions
Moments
Moment-generating functions
The Laplace distribution
Sums of random lengths and the compound Poisson
The logarithmic distribution
The multinormal, the t, the chi-squared, the F
The multinormal distribution
How tall is Nils?
The t distribution
The noncentral t distribution
2 Large-sample theory
Modes of convergence
The normal distribution
The Borel–Cantelli lemma and convergence almost surely
Scheffé’s lemma
From discrete to continuous
Many small probabilities give a Poisson
Maximum of uniforms
Stochastic \(O_{pr}\) and \(o_{pr}\) symbols
Convergence in probability and tail bounds
Markov, Chebyshov, and the Law of Large Numbers
The binomial and the empirical distribution function
Continuous mapping for convergence in probability
Toeplitz lemma and convergence in probability
Further tail bound inequalities
Bernshteı̆n and Weierstraß
Convergence in distribution
More in the Portmanteau
Skorokhod’s theorem
Central limit theorems
The central limit theorem, Lindeberg’s proof
Showing convergence in distribution via moment-generating functions
Proving the CLT with moment-generating functions
Lindeberg, Liapunov, etc.
Higher-order expansions of m.g.f.s
Continuity theorem for vector variables
The multidimensional CLT
The delta method
The empirical correlation coefficient, general case
Stretching the delta method
The strong law of large numbers
The Strong Law of Large Numbers: the basics
3 Parameters, estimators, precision, confidence
Precise estimation in a few classical models
Mean squared error
Binomial estimation
Estimating the normal variance
Confidence interval
Confidence interval for a normal variance
Confidence interval for a normal mean
Normal quantiles: estimation and confidence
The empirical distribution function
Confidence via normal approximations
Confidence intervals via normal approximations
Confidence intervals for the standard deviation, outside normality
The binomial, the normal approximation, and confidence intervals
Quantiles and sample quantiles
Uniform ordering
Min and max of two uniforms
The sample median
Moment matching methods
Moment matching estimators
Moment method estimators for the exponential family
Quantile matching methods
Moment fitting and quantile fitting for the Weibull
Minimum sum of squares and linear regression
Linear regression and least squares estimation
Linear multiple regression and least squares
Separate and joint confidence intervals
4 Testing, sufficiency, power
Testing, testing
Testing a null hypothesis
Connections from confidence intervals to testing
Confidence intervals for quantiles
t testing, one and two samples
Wald tests
Neyman–Pearson optimal testing
The Neyman–Pearson Lemma
The Neyman–Pearson Lemma: more details
The Neyman–Pearson Lemma: applications
The t test and its power
Optimal average power
Sufficiency, factorisation theorem, completeness
The factorisation theorem
Completeness for the exponential family
Basu’s Lemma
Optimal conditional testing
Conditional tests
Conditional tests: pairs of exponentials
Conditional tests: Poisson
Conditional tests: multiparameter exponential models
A generalised Poisson distribution
Testing for correlation
Testing equality across groups, vector parameter case
5 Minimum divergence and maximum likelihood
Maximising the log-likelihood
The likelihood and log-likelihood functions
The Kullback–Leibler divergence and the maximum likelihood method
Score functions and the Fisher information matrix
Cramér–Rao lower bounds for estimators
Cramér–Rao bounds for the multidimensional case
Fisher information and parameter transformations
Maximum likelihood estimators
Completing increasingly simple tasks
Wilks theorems
Log-likelihood and ML for the binomial, trinomial, multinomial
The maximum likelihood estimator: examples
Extending theory and methods to regression setups
Logistic regression
Poisson regression
We can do things
6 Bayesian inference and computation
Poisson data with gamma priors
The Bayesian Master Recipe
Some loss functions and their associated Bayes rules
How many streetcars in San Francisco?
A Bayesian take on hypothesis testing
The binomial-beta setup
MCMC, II: simulating from the posterior
The normal prior and posterior with normal data
Bernshteı̆n–von Mises approximations
Multiparameter inference, I
7 CDs, confidence curves, combining information
The probability transform
Recipe One: via the c.d.f. of an estimator
Confidence distribution and confidence curve for the normal standard deviation
Computing a CD with simulation and isotonic repair
Recipe Four: confidence curves via Wilks theorems
Median age for men and women in Roman Era Egypt
Recipe Six: CDs in exponential families
Bayesian posteriors as approximate CDs
8 Loss, risk, performance, optimality
Coin tossing
Uniformly minimum variance unbiased estimators
Cramér–Rao and Cauchy–Schwarz
Sufficiency and Rao–Blackwell
A weird unbiased estimator
More Rao–Blackwellisation
Tools for minimaxity
Minimax estimators for binomial and multinomial parameters
II Stories
Who wins? Computing probabilities as a function of match time
The turn-around operation: from 0-2 to 3-2
New Haven annual temperatures 1912–1971
Mammals and their bodies and brains
Checking out the CLT
Mothers, babies, birthweights, factors
Time-to-failure for machine components
Birds on islands outside Ecuador