Probability theory and statistics
by Lund University
A first university-level course in probability theory and statistics, appropriate for economists and social scientists
The purpose of this chapter is to give a rigorous introduction to probability. In the first section we look at the foundations, defining experiments, outcomes, and events. We then move on to the main section on probability and conditional probability. The chapter concludes with a short section defining independent events.
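As a small illustration of conditional probability and independence (my own example, not taken from the course: a single roll of a fair six-sided die), the defining formula P(A | B) = P(A ∩ B) / P(B) can be checked with exact arithmetic:

```python
from fractions import Fraction

# Illustrative example (assumed): one roll of a fair six-sided die.
# Sample space: outcomes 1..6, each with probability 1/6.
A = {2, 4, 6}          # event "the roll is even"
B = {4, 5, 6}          # event "the roll is greater than 3"

def P(event):
    """Probability of an event under the uniform measure on the die."""
    return Fraction(len(event), 6)

# Conditional probability: P(A | B) = P(A and B) / P(B)
P_A_given_B = P(A & B) / P(B)
print(P(A), P(B), P_A_given_B)   # 1/2 1/2 2/3

# Independence would require P(A and B) == P(A) * P(B);
# here 1/3 != 1/4, so A and B are not independent.
print(P(A & B) == P(A) * P(B))   # False
```

Learning that the die landed above 3 raises the probability of "even" from 1/2 to 2/3, which is exactly why A and B fail the independence test.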
The random variable is probably the most important concept in probability theory. Based on the setup from chapter one, we give a somewhat simplified definition of a random variable. A given random variable will have associated distribution functions, which will help us calculate probabilities; these are analyzed in the second section. We then look at the most important random variable, the standard normal.
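A random variable is just a function from outcomes to numbers, and its distribution function F(x) = P(X ≤ x) collects the probabilities we need. A minimal sketch, using an assumed example (X = number of heads in two fair coin tosses, which is not from the text):

```python
from fractions import Fraction
from itertools import product

# Assumed illustration: two fair coin tosses, four equally likely outcomes.
sample_space = list(product("HT", repeat=2))

def X(outcome):
    """A random variable: maps each outcome to a number (count of heads)."""
    return outcome.count("H")

def cdf(x):
    """Distribution function F(x) = P(X <= x)."""
    favorable = [w for w in sample_space if X(w) <= x]
    return Fraction(len(favorable), len(sample_space))

print(cdf(0), cdf(1), cdf(2))   # 1/4 3/4 1
```

The jumps of F at 0, 1, and 2 recover the individual probabilities P(X = 0) = 1/4, P(X = 1) = 1/2, and P(X = 2) = 1/4.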
A given random variable will have associated moments, which can be calculated from its distribution functions, and we look at moments in the first section. Once we have a random variable X, we can create a new one, Y, by composition: Y = g(X) for some function g. This is the topic of section two. We conclude the chapter with the family of normal random variables.
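Moments, and the moments of a transformed variable Y = g(X), can be computed directly from the distribution of X. A sketch under an assumed setup (X = the value of a fair die, g(x) = x², neither taken from the course):

```python
from fractions import Fraction

# Assumed example: X = value shown by a fair six-sided die.
support = range(1, 7)
p = Fraction(1, 6)                      # each value equally likely

EX  = sum(p * x for x in support)       # first moment E[X]
EX2 = sum(p * x**2 for x in support)    # second moment E[X^2]
var = EX2 - EX**2                       # Var(X) = E[X^2] - (E[X])^2

def g(x):                               # the composing function in Y = g(X)
    return x**2

# E[g(X)] computed straight from X's distribution,
# without first deriving the distribution of Y.
EY = sum(p * g(x) for x in support)

print(EX, var, EY)   # 7/2 35/12 91/6
```

Note that E[g(X)] = 91/6 differs from g(E[X]) = 49/4: taking a function of a random variable and taking its expectation do not commute in general.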
Our setup in probability theory is always an experiment, a collection of possible outcomes, a collection of events, and probabilities assigned to each of these events. On top of this we have so far defined a single random variable. What we will do now is define several random variables on top of the same experiment. Defining several random variables is no more mysterious than defining several functions on the same domain: they simply map a given outcome to different numbers. More on this in the first section. The second section then introduces joint distribution functions for these random variables. We then introduce conditional distribution functions in section three, which will simplify calculations of the conditional probabilities introduced in chapter 1. Given several random variables, we can create a new one by taking a function of them. Section four looks at this, as well as how to find moments of a random variable defined in this way. Several random variables will also have associated joint moments, and section five looks at the most important joint moment, the covariance. Sections six and seven look at another type of moment, namely conditional moments. Section six focuses on conditional expectations while section seven looks at conditional variances and the important law of iterated expectations.
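Two random variables on the same experiment, their covariance, and the law of iterated expectations can all be illustrated in one small sketch. The setup is my own assumed example (two fair coin tosses, X = heads on the first toss, S = total heads), not from the course:

```python
from fractions import Fraction
from itertools import product

# Assumed experiment: two fair coin tosses, outcomes equally likely.
space = list(product((0, 1), repeat=2))
p = Fraction(1, len(space))

def X(w): return w[0]           # heads on the first toss (0 or 1)
def S(w): return w[0] + w[1]    # total number of heads

EX  = sum(p * X(w) for w in space)
ES  = sum(p * S(w) for w in space)
EXS = sum(p * X(w) * S(w) for w in space)
cov = EXS - EX * ES             # Cov(X, S) = E[XS] - E[X]E[S]

def E_S_given(x):
    """Conditional expectation E[S | X = x] (outcomes stay equally likely)."""
    ws = [w for w in space if X(w) == x]
    return Fraction(sum(S(w) for w in ws), len(ws))

# Law of iterated expectations: E[ E[S | X] ] = E[S]
lie = sum(Fraction(1, 2) * E_S_given(x) for x in (0, 1))
print(cov, ES, lie)   # 1/4 1 1
```

Here Cov(X, S) = 1/4 > 0, as expected since more heads on the first toss pushes the total up, and averaging the two conditional means E[S | X = 0] = 1/2 and E[S | X = 1] = 3/2 recovers E[S] = 1.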
In the last chapter we considered the case of defining several random variables, although most of it was done using only two. If we want to look at more than two random variables, then it is much more useful to define a k×1 random vector containing k random variables. We will see that many expressions, such as a linear function of several random variables, can be expressed more compactly using vector and matrix notation.
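For a linear function Y = a′X of a k×1 random vector X, the compact formulas are E[Y] = a′μ and Var(Y) = a′Σa, where μ = E[X] and Σ is the covariance matrix of X. A pure-stdlib sketch with assumed numbers for μ and Σ (chosen only for illustration):

```python
from fractions import Fraction

# Assumed 3x1 random vector: mean vector mu and covariance matrix Sigma.
mu = [Fraction(1), Fraction(2), Fraction(3)]
Sigma = [[Fraction(1), Fraction(0), Fraction(0)],
         [Fraction(0), Fraction(4), Fraction(0)],
         [Fraction(0), Fraction(0), Fraction(9)]]
a = [Fraction(2), Fraction(0), Fraction(1)]   # coefficient vector in Y = a'X

# E[a'X] = a'mu, by linearity of expectation
E_Y = sum(ai * mi for ai, mi in zip(a, mu))

# Var(a'X) = a' Sigma a
k = len(a)
var_Y = sum(a[i] * Sigma[i][j] * a[j]
            for i in range(k) for j in range(k))

print(E_Y, var_Y)   # 5 13
```

With a diagonal Σ the quadratic form collapses to 2²·1 + 1²·9 = 13; in general the off-diagonal covariances contribute too, which is exactly what the matrix notation keeps track of.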
The final chapter of this course is an introduction to statistics. The basic idea of statistics is to make inferences about a population given a sample drawn from that population. We begin by translating the population/sample concepts into a framework consistent with formal probability theory in section one. Section two looks at some important distributions that we often end up using in statistics. Section three looks at the simplest problems in statistics: making inference about the mean and the variance. We then move on to a more general study of statistics, introducing estimators and their small-sample properties. The following section is devoted to the more important, but also more difficult, large-sample properties of estimators. The final section generalizes what we have done so far in this chapter by looking at several estimators collected in a vector.
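The population/sample idea can be sketched by simulation: draw a sample from a known population and see how the standard estimators behave. This is an assumed toy setup (normal population with mean 10 and standard deviation 2), not an example from the course:

```python
import random
import statistics

# Assumed population: normal with mean 10 and standard deviation 2.
random.seed(0)                  # fixed seed so the sketch is reproducible
population_mean, population_sd = 10.0, 2.0

# Draw a sample of size 1000 from the population.
sample = [random.gauss(population_mean, population_sd) for _ in range(1000)]

# Estimators: the sample mean estimates the population mean, and the
# sample variance (n-1 denominator) estimates the population variance.
x_bar = statistics.fmean(sample)
s2 = statistics.variance(sample)

print(round(x_bar, 2), round(s2, 2))   # close to 10 and 4
```

With n = 1000 both estimates land near the population values (10 and 2² = 4); rerunning with larger n shrinks the sampling error, which is the large-sample behavior the later sections study formally.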