Elastic Net (ridge and lasso combined)¶
Although the lasso has proved to be effective as a variable selection technique, it has several problems, such as the following:

- If there is a group of highly correlated variables, the lasso tends to select only one of them, chosen rather arbitrarily.
- In the \(D > N\) case, the lasso can select at most \(N\) variables.
- If \(N > D\) but the variables are correlated, it has been empirically observed that the prediction performance of ridge regression is better than that of lasso.
To overcome these shortcomings, we use an approach called the elastic net, which is a hybrid between lasso and ridge regression.
Objective function:

\(J(w) = \|y - Xw\|^2 + \lambda_2 \|w\|_2^2 + \lambda_1 \|w\|_1\)
This penalty is strictly convex (assuming \(\lambda_2 > 0\)), so there is a unique global minimum, even if \(X\) is not full rank. Moreover, since the penalty on \(w\) is strictly convex, the solution exhibits a grouping effect, which means that the regression coefficients of highly correlated variables tend to be equal.
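As an illustration of the grouping effect, here is a minimal sketch (not from the text; it uses scikit-learn's `Lasso` and `ElasticNet` with arbitrarily chosen penalty strengths) comparing the two fits on a pair of nearly identical features:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

# Two nearly identical (highly correlated) features, both predictive of y.
rng = np.random.default_rng(42)
N = 100
x = rng.standard_normal(N)
X = np.column_stack([x, x + 0.01 * rng.standard_normal(N)])
y = 3 * x + 0.1 * rng.standard_normal(N)

# Lasso typically concentrates the weight on one of the two columns...
print(Lasso(alpha=0.1).fit(X, y).coef_)
# ...while the elastic net's strictly convex penalty tends to split the
# weight roughly equally across the correlated group (the grouping effect).
print(ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_)
```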
Reduction to lasso¶
The elastic net problem can be reduced to a lasso problem. Define the augmented data

\(\tilde{X} = c \begin{pmatrix} X \\ \sqrt{\lambda_2} I_D \end{pmatrix}, \quad \tilde{y} = \begin{pmatrix} y \\ \mathbf{0}_{D \times 1} \end{pmatrix}, \quad c = (1 + \lambda_2)^{-1/2}\)

This can be solved as a lasso problem:

\(\tilde{J}(\tilde{w}) = \|\tilde{y} - \tilde{X}\tilde{w}\|^2 + c\lambda_1 \|\tilde{w}\|_1\)

with the elastic net solution recovered as \(w = c\tilde{w}\). Note that the Gram matrix of the augmented data is

\(\tilde{X}^T\tilde{X} = \frac{X^TX + \lambda_2 I}{1 + \lambda_2} = (1 - \rho)\hat{\Sigma} + \rho I, \quad \rho = \frac{\lambda_2}{1 + \lambda_2}\)

where \(\hat{\Sigma} = X^TX\) is the empirical correlation matrix for standardized data. So the elastic net behaves like a lasso on data whose correlation matrix has been shrunk toward the identity.
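A quick numerical check of this identity (a self-contained sketch; the data and \(\lambda_2\) below are arbitrary, since the identity is purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 20, 5
X = rng.standard_normal((N, D))
lam2 = 0.7

c = (1.0 + lam2) ** -0.5
X_tilde = c * np.vstack([X, np.sqrt(lam2) * np.eye(D)])  # augmented design

lhs = X_tilde.T @ X_tilde                                # Gram matrix
mid = (X.T @ X + lam2 * np.eye(D)) / (1.0 + lam2)
rho = lam2 / (1.0 + lam2)
rhs = (1.0 - rho) * (X.T @ X) + rho * np.eye(D)          # Sigma_hat = X^T X

print(np.allclose(lhs, mid), np.allclose(mid, rhs))      # True True
```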
Bayesian view¶
The implicit prior used by the elastic net has the form

\(p(w \mid \sigma^2) \propto \exp\left(-\frac{\gamma_1}{\sigma}\|w\|_1 - \frac{\gamma_2}{2\sigma^2}\|w\|_2^2\right)\)

which is the product of Gaussian and Laplace distributions. It can be written as a hierarchical prior as follows:

\(w_j \mid \sigma^2, \tau_j^2 \sim \mathcal{N}\left(0, \sigma^2 (\tau_j^{-2} + \gamma_2)^{-1}\right)\)

\(\tau_j^2 \sim \text{Expon}\left(\frac{\gamma_1^2}{2}\right)\)

If \(\gamma_2 = 0\), this reduces to the hierarchical prior for the lasso.
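A minimal sketch of ancestral sampling from this hierarchy (the values of \(\gamma_1\), \(\gamma_2\), and \(\sigma\) are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma1, gamma2, sigma = 1.0, 0.5, 1.0  # arbitrary illustrative values
D, S = 5, 10_000                       # dimension, number of samples

# tau_j^2 ~ Expon(rate = gamma1^2 / 2); numpy parametrizes by scale = 1/rate.
tau2 = rng.exponential(scale=2.0 / gamma1**2, size=(S, D))

# w_j | sigma^2, tau_j^2 ~ N(0, sigma^2 (tau_j^-2 + gamma2)^-1)
var = sigma**2 / (1.0 / tau2 + gamma2)
w = rng.normal(0.0, np.sqrt(var))

# With gamma2 = 0, each w_j marginalizes to a Laplace (lasso) prior.
print(w.mean(), w.std())
```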