Is there any place in the scikit-learn Lasso/Quantile Regression source code where L1 regularization is applied?
I could not find where the L1 norm of the weights (the "Manhattan distance") is calculated and multiplied by alpha (the L1 regularization coefficient) in the scikit-learn source code for Lasso Regression and Quantile Regression.
I was trying to implement Lasso Regression and Quantile Regression with NumPy and compare the results with the scikit-learn models.
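For reference, scikit-learn documents the Lasso objective as `(1 / (2 * n_samples)) * ||y - Xw||^2 + alpha * ||w||_1`. A minimal NumPy sketch of that penalized loss (the function name and signature below are illustrative, not part of scikit-learn):

```python
import numpy as np

def lasso_objective(X, y, w, alpha):
    """Penalized loss that scikit-learn's Lasso minimizes (no intercept):
    (1 / (2 * n_samples)) * ||y - X @ w||_2^2 + alpha * ||w||_1
    """
    n_samples = X.shape[0]
    residual = y - X @ w
    return (residual @ residual) / (2 * n_samples) + alpha * np.abs(w).sum()
```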
Solution 1:[1]
I don't believe the loss function (including the regularization penalty) is ever explicitly calculated, no.
Instead, the loss function is optimized by coordinate descent, so we only ever need the coordinate-wise update rules derived from it: the derivative of the smooth squared-error part, plus a soft-thresholding step for the non-smooth L1 term. That happens in the `enet_coordinate_descent` function (or its relatives) in `sklearn/linear_model/_cd_fast.pyx`, and I think the relevant bit is the soft-thresholding update, which is where `alpha` enters.
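To make that concrete, here is a simplified NumPy sketch of Lasso coordinate descent with soft-thresholding, the same idea `enet_coordinate_descent` implements in Cython. This is a sketch under simplifying assumptions (plain Lasso, no intercept, a fixed iteration count instead of scikit-learn's duality-gap stopping rule); all names are my own, not scikit-learn's:

```python
import numpy as np
from sklearn.linear_model import Lasso

def soft_threshold(rho, threshold):
    # Proximal operator of the L1 penalty: the only place alpha acts.
    return np.sign(rho) * np.maximum(np.abs(rho) - threshold, 0.0)

def lasso_coordinate_descent(X, y, alpha, n_iter=1000):
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    # scikit-learn minimizes (1/(2n)) * ||y - Xw||^2 + alpha * ||w||_1;
    # scaling alpha by n_samples matches the unscaled squared loss used below.
    threshold = alpha * n_samples
    col_norms = (X ** 2).sum(axis=0)
    residual = y - X @ w  # equals y initially, since w = 0
    for _ in range(n_iter):
        for j in range(n_features):
            # Add back feature j's contribution to get the partial residual.
            residual += X[:, j] * w[j]
            rho = X[:, j] @ residual
            # Soft-threshold the univariate least-squares solution.
            w[j] = soft_threshold(rho, threshold) / col_norms[j]
            residual -= X[:, j] * w[j]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.standard_normal(100)

alpha = 0.1
w_np = lasso_coordinate_descent(X, y, alpha)
w_sk = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
print(np.allclose(w_np, w_sk, atol=1e-4))  # expect True, within solver tolerance
```

Note that the penalty never appears as an explicitly computed term `alpha * np.abs(w).sum()`: the L1 regularization acts only through the `np.maximum(np.abs(rho) - threshold, 0.0)` shrinkage applied to each coordinate update, which is why you won't find the penalized loss written out in the solver.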
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Ben Reiniger |