Drew Altschul, Department of Psychology
Basic linear model: \( \hat{y} = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_p x_p \)
The lasso fits this model subject to the constraint \( \sum_{j=1}^{p} |b_j| \leq s \)
When \( s \) is large, the constraint has no effect and the solution is the usual multiple regression.
As \( s \) becomes smaller, the coefficients are shrunk, sometimes all the way to 0.
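A minimal sketch of this shrinkage in R with glmnet, on simulated data (the data, seed, and penalty values here are illustrative assumptions, not from the talk):

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 10), nrow = 100, ncol = 10)  # 10 candidate predictors
y <- 2 * x[, 1] - x[, 2] + rnorm(100)                # only 2 truly matter

fit <- glmnet(x, y, alpha = 1)  # alpha = 1 gives the lasso penalty

# glmnet parameterizes the penalty by lambda rather than the bound s:
# a larger lambda corresponds to a smaller s, so more coefficients
# are shrunk exactly to 0.
coef(fit, s = 0.01)  # mild penalty: most coefficients survive
coef(fit, s = 1)     # strong penalty: most coefficients are exactly 0
```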
Elastic nets use a mixing parameter \( \alpha \) to combine lasso and ridge regression: the penalty is \( \alpha \sum_j |b_j| + \frac{1 - \alpha}{2} \sum_j b_j^2 \), so \( \alpha = 1 \) gives the lasso and \( \alpha = 0 \) gives ridge regression
Package glmnet
glmnet works with binomial, multinomial, Poisson, and Cox models
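For instance, a hedged sketch of an elastic-net logistic regression with cross-validated penalty selection (the simulated data and the choice \( \alpha = 0.5 \) are illustrative assumptions):

```r
library(glmnet)

set.seed(2)
x <- matrix(rnorm(200 * 20), nrow = 200, ncol = 20)
y <- rbinom(200, size = 1, prob = plogis(x[, 1] - x[, 2]))

# alpha = 0.5 mixes the lasso and ridge penalties equally;
# cv.glmnet picks lambda by k-fold cross-validation.
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)
coef(cvfit, s = "lambda.min")  # coefficients at the CV-optimal lambda
```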
Other R packages for regularized models and related methods:
tune {e1071}
glmmLasso
LDlasso
EBglmnet
ahaz
covTest
regsem
sparseSEM
qgraph
stabs
c060
If you're using glmnet to its fullest potential, in many cases you won't need a separate variable selection step.
But if you do, stability selection will allow you to use these regularization techniques to identify which variables consistently contribute to the model.
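A minimal sketch of stability selection with the stabs package, using glmnet's lasso as the underlying selector (the data, cutoff, and PFER values are illustrative assumptions):

```r
library(stabs)
library(glmnet)

set.seed(3)
x <- matrix(rnorm(200 * 20), nrow = 200, ncol = 20)
y <- 2 * x[, 1] - x[, 3] + rnorm(200)

# Refit the lasso on many random subsamples; keep variables selected
# in at least 75% of them. PFER bounds the expected number of
# falsely selected variables.
stab <- stabsel(x, y, fitfun = glmnet.lasso, cutoff = 0.75, PFER = 1)
stab$selected  # indices of the stably selected variables
```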
@dremalt