Lorenz.FABS {LorenzRegression}    R Documentation
Lorenz.FABS

Description

Lorenz.FABS solves the penalized Lorenz regression with (adaptive) Lasso penalty on a grid of lambda values. For each value of lambda, the function returns the estimated vector of parameters, the estimated explained Gini coefficient, and the Lorenz-R^2 of the regression.

Usage
Lorenz.FABS(
y,
x,
standardize = TRUE,
weights = NULL,
kernel = 1,
h = length(y)^(-1/5.5),
gamma = 0.05,
lambda = "Shi",
w.adaptive = NULL,
eps = 0.005,
iter = 10^4,
lambda.min = 1e-07
)
Arguments

y: a vector of responses.

x: a matrix of explanatory variables.

standardize: should the variables be standardized before the estimation process? Default value is TRUE.

weights: vector of sample weights. By default, each observation is given the same weight.

kernel: integer indicating which kernel function to use. The value 1 (default) corresponds to the Epanechnikov kernel, the value 2 to the biweight kernel.

h: bandwidth of the kernel, determining the smoothness of the approximation of the indicator function. Default value is n^(-1/5.5), where n is the sample size.

gamma: value of the Lagrange multiplier in the loss function. Default value is 0.05.

lambda: determines the values of the regularization parameter. Several options are available; the default is "Shi".

w.adaptive: vector of size equal to the number of covariates, where each entry indicates the weight in the adaptive Lasso. By default, each covariate is given the same weight (standard Lasso).

eps: step size in the FABS algorithm. Default value is 0.005.

iter: maximum number of iterations. Default value is 10^4.

lambda.min: lower bound of the penalty parameter. Only used for certain choices of the lambda argument. Default value is 1e-07.
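To illustrate how the arguments above fit together, here is a minimal sketch of a call combining several non-default settings (the specific values are illustrative choices, not recommendations):

## Illustrative call with non-default arguments (values chosen for illustration only).
data(Data.Incomes)
y <- Data.Incomes[, 1]
x <- as.matrix(Data.Incomes[, -c(1, 2)])
fit <- Lorenz.FABS(y, x,
                   weights = rep(1, nrow(x)),   # uniform sample weights (same as the default behaviour)
                   kernel  = 2,                 # biweight instead of Epanechnikov kernel
                   h       = length(y)^(-1/5),  # alternative bandwidth choice
                   eps     = 0.01)              # larger step size in the FABS algorithm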
Details

The regression is solved using the FABS algorithm developed by Shi et al. (2018) and adapted to our case. For a comprehensive explanation of the penalized Lorenz regression, see Jacquemain et al. (2024). In order to ensure identifiability, theta is forced to have an L2-norm equal to one.
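As a quick check of this identifiability constraint, one can verify numerically that every column of the returned theta matrix has unit L2-norm (a small sketch using the same example data as below):

## Sketch: each column of theta should have an L2-norm (approximately) equal to one.
data(Data.Incomes)
y <- Data.Incomes[, 1]
x <- as.matrix(Data.Incomes[, -c(1, 2)])
fit <- Lorenz.FABS(y, x)
apply(fit$theta, 2, function(b) sqrt(sum(b^2)))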
Value

A list with several components:

lambda: vector gathering the different values of the regularization parameter.

theta: matrix where column i provides the vector of estimated coefficients corresponding to the value lambda[i] of the regularization parameter.

LR2: vector where element i provides the Lorenz-R^2 attached to the value lambda[i] of the regularization parameter.

Gi.expl: vector where element i provides the estimated explained Gini coefficient related to the value lambda[i] of the regularization parameter.
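For instance, the components above can be used to inspect the regularization path and retrieve the fit attaining the highest Lorenz-R^2 (a sketch; this selection rule is illustrative, not a procedure prescribed by the package):

## Sketch: inspect the path and pick the penalty value with the largest Lorenz-R^2.
data(Data.Incomes)
y <- Data.Incomes[, 1]
x <- as.matrix(Data.Incomes[, -c(1, 2)])
fit  <- Lorenz.FABS(y, x)
best <- which.max(fit$LR2)
fit$lambda[best]    # selected value of the regularization parameter
fit$theta[, best]   # corresponding vector of estimated coefficients
fit$Gi.expl[best]   # corresponding estimated explained Gini coefficient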
References

Jacquemain, A., C. Heuchenne, and E. Pircalabelu (2024). A penalised bootstrap estimation procedure for the explained Gini coefficient. Electronic Journal of Statistics 18(1), 247-300.

Shi, X., Y. Huang, J. Huang, and S. Ma (2018). A Forward and Backward Stagewise Algorithm for Nonconvex Loss Function with Adaptive Lasso. Computational Statistics & Data Analysis 124, 235-251.
Examples

data(Data.Incomes)
y <- Data.Incomes[,1]
x <- as.matrix(Data.Incomes[,-c(1,2)])
Lorenz.FABS(y, x)
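Continuing with y and x defined above, the w.adaptive argument can be used for a two-step adaptive Lasso; the weighting scheme below (inverse absolute coefficients from a first Lasso fit) is one common choice, shown only as an illustration and not as the package's prescribed procedure:

## Illustrative two-step adaptive Lasso: build covariate weights from a first fit.
fit1   <- Lorenz.FABS(y, x)
theta1 <- fit1$theta[, which.max(fit1$LR2)]
w      <- 1 / pmax(abs(theta1), 1e-6)   # avoid division by zero for null coefficients
Lorenz.FABS(y, x, w.adaptive = w)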