Simple Linear regression

Linear regression answers a simple question: can you measure the relationship between one target variable and a set of predictors?

The simplest of probabilistic models is the straight line model: \[y=\beta_0+\beta_1x+\epsilon\] where

  • \(y\) = dependent variable
  • \(x\) = independent variable
  • \(\epsilon\) = random error component
  • \(\beta_0\) = intercept
  • \(\beta_1\) = coefficient of \(x\)

Consider the following plot:

The equation is \(y=\beta_0+\beta_1x+\epsilon\). \(\beta_0\) is the intercept: if \(x\) equals \(0\), \(y\) is equal to the intercept, \(4.77\). \(\beta_1\) is the slope of the line: it tells by how much \(y\) changes when \(x\) increases by one unit.

To estimate the optimal values of \(\beta_0\) and \(\beta_1\), you use a method called Ordinary Least Squares (OLS). This method finds the parameters that minimize the sum of the squared errors, that is, the vertical distances between the actual \(y\) values and the predicted \(y\) values. Each difference is known as an error term.
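To make the objective concrete, here is a minimal sketch (the function name sse is purely illustrative) of the quantity that OLS minimizes for candidate values of \(\beta_0\) and \(\beta_1\):

# Sum of squared errors for candidate intercept b0 and slope b1
sse <- function(b0, b1, x, y) {
  y_hat <- b0 + b1 * x    # predicted values
  sum((y - y_hat)^2)      # squared vertical distances, summed
}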

Before you estimate the model, you can determine whether a linear relationship between \(y\) and \(x\) is plausible by plotting a scatterplot.

Scatterplot

We will use a very simple dataset to explain the concept of simple linear regression: the average heights and weights of American women. The dataset contains 15 observations. You want to measure whether height is positively correlated with weight.

library(ggplot2)
# Import the women dataset (height in inches, weight in pounds)
path <- 'https://raw.githubusercontent.com/guru99-edu/R-Programming/master/women.csv'
df <- read.csv(path)
# Scatterplot of weight against height
ggplot(df, aes(x = height, y = weight)) + geom_point()

The scatterplot suggests a general tendency for \(y\) to increase as \(x\) increases. In the next step, you will measure by how much \(y\) increases for each additional unit of \(x\).
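As a quick numerical complement to the plot, you can compute the Pearson correlation between the two variables with cor(); this step is optional and not required for the regression itself:

# Pearson correlation between height and weight; a value close to 1
# indicates a strong positive linear association
cor(df$height, df$weight)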

Least Squares Estimates

In a simple OLS regression, the computation of \(\beta_0\) and \(\beta_1\) is straightforward. The goal of this tutorial is not to show the derivation; you will only write the formulas.

You want to estimate: \(y=\beta_0+\beta_1x+\epsilon\)

The goal of the OLS regression is to minimize the following equation: \[\sum(y_i-\hat{y}_i)^2=\sum e_i^2\] where

\(y_i\) is the actual value and \(\hat{y}_i\) is the predicted value.

The solution for \(\beta_0\) is \(\hat{\beta}_0=\bar{y}-\hat{\beta}_1\bar{x}\)

Note that \(\bar{x}\) means the average value of \(x\)

The solution for \(\beta_1\) is \(\hat{\beta}_1=\frac{\text{Cov}(x,y)}{\text{Var}(x)}\)

In R, you can use the cov() and var() functions to estimate \(\beta_1\), and the mean() function to estimate \(\beta_0\).

# Slope: covariance of height and weight divided by the variance of height
beta <- cov(df$height, df$weight) / var(df$height)
beta
## [1] 3.45
# Intercept: mean of y minus slope times mean of x
alpha <- mean(df$weight) - beta * mean(df$height)
alpha
## [1] -87.51667

The beta coefficient implies that for each additional inch of height, weight increases by \(3.45\) pounds.

Estimating a simple linear equation by hand, as you just did, is not ideal. R provides a suitable function to estimate these parameters; you will see it shortly. In your journey as a data scientist, you will rarely estimate a simple linear model by hand; in most situations, regression tasks involve many predictors.
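As a preview of that function, a one-line sketch with lm() should reproduce the intercept and slope you just computed by hand (about -87.52 and 3.45):

# Simple linear regression of weight on height; the coefficients should
# match alpha and beta computed above
lm(weight ~ height, data = df)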

Multiple Linear regression

More practical applications of regression analysis employ models that are more complex than the simple straight-line model. A probabilistic model that includes more than one independent variable is called a multiple regression model. The general form of this model is: \[y=\beta_0+\beta_1x_1+\beta_2x_2+\cdots+\beta_kx_k+\epsilon\] In matrix notation, you can rewrite the model as: \[Y=X\beta+\epsilon\] The dependent variable \(y\) is now a function of \(k\) independent variables. The coefficient \(\beta_i\) determines the contribution of the independent variable \(x_i\), and \(\beta_0\) is the intercept.

We briefly introduce the assumption we made about the random error \(\epsilon\) of the OLS:

  • Mean equal to \(0\)
  • Variance equal to \(\sigma^2\)
  • Normal distribution
  • Random errors are independent (in a probabilistic sense)

You need to solve for \(\beta\), the vector of regression coefficients that minimise the sum of the squared errors between the predicted and actual y values.

The closed-form solution is: \[\hat{\beta}=(X^TX)^{-1}X^Ty\]

with:

  • \(X^T\) indicates the transpose of the matrix \(X\)
  • \((X^TX)^{-1}\) indicates the inverse of the matrix \(X^TX\) (a sketch of this computation in R follows this list)
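As an illustration only (the lm() function introduced below does this for you), here is a minimal sketch of the closed-form solution in R, using for concreteness the four mtcars predictors estimated later in this tutorial:

# Design matrix: a column of 1s for the intercept plus the predictors
X <- cbind(1, as.matrix(mtcars[, c("disp", "hp", "drat", "wt")]))
y <- mtcars$mpg
# Closed-form OLS estimate: (X'X)^(-1) X'y; should match the lm() output below
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y
beta_hat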

We use the mtcars dataset. You are already familiar with it. Our goal is to predict miles per gallon (mpg) from a set of features.

Continuous variables

For now, you will only use the continuous variables and put aside categorical features. The variable am is a binary variable taking the value of 1 if the transmission is manual and 0 for automatic cars; vs is also a binary variable.

library(dplyr)
df <- mtcars %>% select(-c(am, vs, cyl, gear, carb))
glimpse(df)
## Rows: 32
## Columns: 6
## $ mpg  <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8…
## $ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 140.8, 1…
## $ hp   <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 18…
## $ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.92, 3.92…
## $ wt   <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3.150, 3…
## $ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 22.90, 1…

You can use the lm() function to compute the parameters. The basic syntax of this function is:

lm(formula, data, subset)

Arguments:

  • formula: The equation you want to estimate
  • data: The dataset used
  • subset: Estimate the model on a subset of the dataset

Remember, an equation is of the following form: \[y=\beta_0+\beta_1x_1+\beta_2x_2+\cdots+\beta_kx_k+\epsilon\] In R:

  • The symbol = is replaced by ~
  • Each \(x\) is replaced by the variable name
  • If you want to drop the constant, add -1 at the end of the formula

Example:

You want to estimate the weight of individuals based on their height and revenue. The equation is \[weight = \beta_0+\beta_1height+\beta_2revenue+\epsilon\] The equation in R for this example is written as: \(weight\) ~ \(height + revenue\)

Your objective is to estimate miles per gallon based on a set of variables. The equation to estimate is: \[mpg=\beta_0+\beta_1disp+\beta_2hp+\beta_3drat+\beta_4wt+\epsilon\] You will estimate your first linear regression and store the result in the fit object.

model <- mpg ~ disp + hp + drat + wt
fit <- lm(model, data = df)
fit
## 
## Call:
## lm(formula = model, data = df)
## 
## Coefficients:
## (Intercept)         disp           hp         drat           wt  
##   29.148738     0.003815    -0.034784     1.768049    -3.479668

Code Explanation

  • model <- mpg ~ disp + hp + drat+ wt: Store the model to estimate
  • lm(model, data = df): Estimate the model with the data frame df

The output does not provide enough information about the quality of the fit. You can access more details, such as the significance of the coefficients, the degrees of freedom and the shape of the residuals, with the summary() function.

summary(fit)
## 
## Call:
## lm(formula = model, data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.5077 -1.9052 -0.5057  0.9821  5.6883 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 29.148738   6.293588   4.631  8.2e-05 ***
## disp         0.003815   0.010805   0.353  0.72675    
## hp          -0.034784   0.011597  -2.999  0.00576 ** 
## drat         1.768049   1.319779   1.340  0.19153    
## wt          -3.479668   1.078371  -3.227  0.00327 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.602 on 27 degrees of freedom
## Multiple R-squared:  0.8376, Adjusted R-squared:  0.8136 
## F-statistic: 34.82 on 4 and 27 DF,  p-value: 2.704e-10

Inference from the above table output

  • The above table shows that there is a strong negative relationship between wt and mileage, a small negative relationship with hp and a positive relationship with drat. However, only the variables wt and hp have a statistically significant impact on mpg. Remember, to test a hypothesis in statistics, we use:
    • H0: The predictor has no statistical impact on \(y\)
    • H1: The predictor has a meaningful impact on \(y\)
    • If the \(p\)-value is lower than 0.05, it indicates the variable is statistically significant
  • Adjusted R-squared: Variance explained by the model. Here, the model explains 81 percent of the variance of \(y\). R-squared is always between 0 and 1; the higher, the better. (A short sketch after this list shows how to extract these quantities from the summary object.)
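If you want to work with these quantities programmatically, they can be extracted from the object returned by summary(); a minimal sketch:

# Coefficient table: estimates, standard errors, t values and p-values
summary(fit)$coefficients
# Adjusted R-squared reported in the output above
summary(fit)$adj.r.squared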

You can run an ANOVA test to estimate the effect of each feature on the variance with the anova() function.

anova(fit)
            Df    Sum Sq    Mean Sq    F value    Pr(>F)
disp         1 808.88850 808.888498 119.450231 0.0000000
hp           1  33.66525  33.665254   4.971418 0.0342811
drat         1  30.14753  30.147534   4.451948 0.0442702
wt           1  70.50834  70.508336  10.412111 0.0032722
Residuals   27 182.83757   6.771762         NA        NA

A more conventional way to assess the model performance is to display the residuals against different measures.

You can use the plot() function to show four graphs:

  • Residuals vs Fitted values
  • Normal Q-Q plot: Theoretical Quantiles vs Standardized residuals
  • Scale-Location: Fitted values vs Square roots of the standardised residuals
  • Residuals vs Leverage: Leverage vs Standardized residuals

You add the code par(mfrow = c(2, 2)) before plot(fit). If you don’t add this line of code, R prompts you to press Enter to display the next graph.

par(mfrow = c(2, 2))
plot(fit)
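If you prefer to inspect a single diagnostic at a time, plot() for lm objects also accepts a which argument; for example, which = 1 selects the Residuals vs Fitted plot:

# Display only the Residuals vs Fitted diagnostic
plot(fit, which = 1)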

The lm() function returns a list containing a lot of useful information. You can access it with the fit object you created, followed by the $ sign and the name of the component you want to extract, for example:

  • coefficients: fit$coefficients
  • residuals: fit$residuals
  • fitted values: fit$fitted.values
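For example, a brief sketch of how these components can be inspected:

# Coefficients as a named vector
fit$coefficients
# First few residuals and fitted values
head(fit$residuals)
head(fit$fitted.values)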

Factors regression

In the last model estimation, you regressed mpg on continuous variables only. It is straightforward to add factor variables to the model: you now add the categorical variables (cyl, vs, am, gear and carb). It is important to make sure each of these variables is stored as a factor and not as a continuous variable.

df <- mtcars %>% 
  mutate(cyl = factor(cyl), 
         vs = factor(vs),
         am = factor(am),
         gear = factor(gear),
         carb = factor(carb))
model <- mpg ~ .
summary(lm(model, data = df))
## 
## Call:
## lm(formula = model, data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.5087 -1.3584 -0.0948  0.7745  4.6251 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)  
## (Intercept) 23.87913   20.06582   1.190   0.2525  
## cyl6        -2.64870    3.04089  -0.871   0.3975  
## cyl8        -0.33616    7.15954  -0.047   0.9632  
## disp         0.03555    0.03190   1.114   0.2827  
## hp          -0.07051    0.03943  -1.788   0.0939 .
## drat         1.18283    2.48348   0.476   0.6407  
## wt          -4.52978    2.53875  -1.784   0.0946 .
## qsec         0.36784    0.93540   0.393   0.6997  
## vs1          1.93085    2.87126   0.672   0.5115  
## am1          1.21212    3.21355   0.377   0.7113  
## gear4        1.11435    3.79952   0.293   0.7733  
## gear5        2.52840    3.73636   0.677   0.5089  
## carb2       -0.97935    2.31797  -0.423   0.6787  
## carb3        2.99964    4.29355   0.699   0.4955  
## carb4        1.09142    4.44962   0.245   0.8096  
## carb6        4.47757    6.38406   0.701   0.4938  
## carb8        7.25041    8.36057   0.867   0.3995  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.833 on 15 degrees of freedom
## Multiple R-squared:  0.8931, Adjusted R-squared:  0.779 
## F-statistic:  7.83 on 16 and 15 DF,  p-value: 0.000124

R uses the first factor level as a base group. You need to compare the coefficients of the other groups against the base group.
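If you prefer a different base group, you can change the reference level of a factor with the base R relevel() function. The choice of 8 cylinders below is purely illustrative:

# Use 8-cylinder cars as the reference level for cyl (illustration only)
df$cyl <- relevel(df$cyl, ref = "8")
summary(lm(mpg ~ ., data = df))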

Stepwise regression

The last part of this tutorial deals with the stepwise regression algorithm. The purpose of this algorithm is to add and remove potential candidates in the model and keep those that have a significant impact on the dependent variable. This algorithm is meaningful when the dataset contains a large list of predictors: you don’t need to add and remove the independent variables manually, because the stepwise regression is built to select the best candidates to fit the model.

Let’s see how it works in action. You use the mtcars dataset with the continuous variables only, for pedagogical illustration. Before you begin the analysis, it is good to explore the relationships between the variables with a correlation matrix. The GGally library is an extension of ggplot2.

The library includes different functions to show summary statistics such as correlation and distribution of all the variables in a matrix. We will use the ggscatmat function, but you can refer to the vignette for more information about the GGally library.

The basic syntax for ggscatmat() is:

ggscatmat(df, columns = 1:ncol(df), corMethod = "pearson")

arguments:

  • df: A matrix of continuous variables
  • columns: Pick up the columns to use in the function. By default, all columns are used
  • corMethod: Define the function to compute the correlation between variables. By default, the algorithm uses the Pearson formula

You display the correlation for all your variables and decide which ones will be the best candidates for the first step of the stepwise regression. There are some strong correlations between your variables and the dependent variable, mpg.

library(GGally)
df <- mtcars %>% select(-c(am, vs, cyl, gear, carb))
ggscatmat(df, columns = 1:ncol(df))

Variable selection is an important part of fitting a model. The stepwise regression performs the searching process automatically. To estimate how many possible models there are in the dataset, you compute \(2^k\), where \(k\) is the number of predictors. The number of possibilities grows quickly with the number of independent variables, which is why you need an automatic search.
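As a quick illustration with the five continuous predictors kept here (disp, hp, drat, wt, qsec), the count of candidate subsets is:

# Number of possible subsets of k predictors (2^k, including the empty model)
k <- 5
2^k
## [1] 32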

You need to install the olsrr package from CRAN. You can install it directly from the command line:

install.packages("olsrr")

You can plot all the possible subsets of predictors together with the fit criteria (e.g. R-squared, adjusted R-squared, Bayesian information criterion). The model with the lowest AIC will be the final model.

library(olsrr)
model <- mpg ~ .
fit <- lm(model, data = df)
test <- ols_step_all_possible(fit)
plot(test)

Code Explanation

  • mpg ~ .: Construct the model to estimate
  • lm(model, data = df): Run the OLS model
  • ols_step_all_possible(fit): Compute the relevant statistics for all possible models
  • plot(test): Plot the graphs

Linear regression models use the t-test to estimate the statistical impact of an independent variable on the dependent variable. Researchers typically set the maximum threshold at 10 percent, with lower values indicating a stronger statistical link. The strategy of the stepwise regression is constructed around this test to add and remove potential candidates. The algorithm works as follows:

  • Step 1: Regress \(y\) on each predictor separately, namely \(y\) on \(x_1\), \(y\) on \(x_2\), up to \(y\) on \(x_n\) (see the sketch after this list). Store the \(p\)-values and keep the regressor with a \(p\)-value lower than a defined threshold (0.1 by default). The predictors with a significance lower than the threshold will be added to the final model. If no variable has a \(p\)-value lower than the entering threshold, the algorithm stops, and you have your final model with a constant only.
  • Step 2: Keep the predictor with the lowest \(p\)-value and add each remaining variable separately. That is, you regress \(y\) on a constant, the best predictor of step 1 and a second variable. You add to the stepwise model the new predictors with a \(p\)-value lower than the entering threshold. If no variable has a \(p\)-value lower than 0.1, the algorithm stops, and you have your final model with one predictor only. You then re-check the significance of the step 1 best predictor in the stepwise model: if its \(p\)-value is still lower than the removing threshold, you keep it; otherwise, you exclude it.
  • Step 3: You replicate step 2 on the new best stepwise model. The algorithm adds predictors to the stepwise model based on the entering threshold and excludes a predictor from the stepwise model if it does not satisfy the excluding threshold.
  • The algorithm keeps on going until no variable can be added or excluded.
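To make step 1 concrete, here is a minimal sketch (illustrative only, not the olsrr implementation; the names predictors and p_values are arbitrary) that regresses mpg on each predictor separately and collects the p-values:

# Step 1 by hand: one simple regression per predictor, store the slope p-value
predictors <- setdiff(names(df), "mpg")
p_values <- sapply(predictors, function(x) {
  single_fit <- lm(reformulate(x, response = "mpg"), data = df)
  summary(single_fit)$coefficients[2, 4]   # p-value of the predictor
})
sort(p_values)   # the smallest p-value identifies the first candidate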

You can perform the algorithm with the function ols_step_both_p() from the olsrr package.

ols_step_both_p(fit, pent = 0.1, prem = 0.3, details = FALSE)

Summary

Ordinary least squared regression can be summarized in the table below:

Library   Objective                      Function             Arguments
base      Compute a linear regression    lm()                 formula, data
base      Summarize model                summary()            fit
olsrr     Run stepwise regression        ols_step_both_p()    fit, pent = 0.1, prem = 0.3, details = FALSE
 

A work by Gianluca Sottile

gianluca.sottile@unipa.it