The linear regression model (LRM)
Summary
Given a random sample \(\left( y_1,x_1 \right),…,\left( y_n,x_n \right)\), a linear regression model can be formulated as follows:
\[y_i=β_1+β_2x_i+ε_i,\quad i=1,…,n\]
- \(y\) is called the dependent or explained variable
- \(x\) is called the explanatory or independent variable
- \(ε\) is called the error term
- \(β_1\) and \(β_2\) are two unknown parameters (called simply the beta-parameters)
- It is assumed that \(y\) is explained partly by \(x\), through the linear expression \(β_1+β_2x_i\), and partly by an unexplained error term.
- The error term \(ε_i\) is an unobserved random variable: since \(β_1\) and \(β_2\) are unknown, its realization cannot be recovered from the observed data.
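The roles of the quantities above can be illustrated by simulating data from the model. This is a minimal sketch; the parameter values, sample size, and the normal distribution of the error term are illustrative assumptions, not part of the model definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" parameters, which in practice are unknown.
beta1, beta2 = 2.0, 0.5
n = 100

x = rng.uniform(0, 10, size=n)       # explanatory variable (observed)
eps = rng.normal(0, 1, size=n)       # error term (unobserved in practice)
y = beta1 + beta2 * x + eps          # dependent variable (observed)
```

Only `x` and `y` would be available to the analyst; `eps`, `beta1`, and `beta2` exist here solely because we generated the data ourselves.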
LRM, main point:
- Observe data \(x_1,…,x_n\) and \(y_1,…,y_n\).
- Make the LRM assumption.
- Find estimates of \(β_1\) and \(β_2\) .
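The steps above can be sketched end to end. The model itself does not prescribe an estimation method; the sketch below uses ordinary least squares, the standard choice, via its closed-form formulas for the two-parameter case. The data-generating parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: observe data (here simulated, with hypothetical true parameters).
beta1, beta2 = 2.0, 0.5
n = 200
x = rng.uniform(0, 10, size=n)
y = beta1 + beta2 * x + rng.normal(0, 1, size=n)

# Step 2: assume the LRM  y_i = b1 + b2 x_i + e_i.
# Step 3: ordinary least squares estimates, closed form:
#   b2 = sum((x_i - x̄)(y_i - ȳ)) / sum((x_i - x̄)²),   b1 = ȳ - b2·x̄
xbar, ybar = x.mean(), y.mean()
b2 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
b1 = ybar - b2 * xbar

print(b1, b2)  # should land near the true values 2.0 and 0.5
```

With a moderate sample size the estimates fall close to the parameters used to generate the data, which is the sense in which the procedure "finds" \(β_1\) and \(β_2\).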