mmd_reg {regMMD}    R Documentation
Description

Fits a regression model to the data, using the robust procedure based on maximum mean discrepancy (MMD) minimization introduced and studied in Alquier and Gerber (2024).
Usage

mmd_reg(y, X, model, intercept, par1, par2, kernel.y, kernel.x, bdwth.y, bdwth.x,
        control = list())
Arguments

y: Response variable. Must be a vector of length n, the sample size.
X: Design matrix, with one row per observation in y.
model: Regression model to be fitted to the data. By default, the linear regression model with unknown noise level \phi is used (model="linearGaussian"); see Details for the available models.
intercept: If intercept=TRUE an intercept is added to the model; no intercept is added otherwise.
par1: Values of the regression coefficients of the model used as starting values to numerically optimize the objective function.
par2: A value for the auxiliary parameter \phi of the model, if relevant (see Details), used as starting value when \phi is estimated and as its fixed value otherwise.
kernel.y: Kernel applied on the response variable. Available options for kernel.y are "Gaussian", "Laplace" and "Cauchy".
kernel.x: Kernel applied on the explanatory variables. Available options for kernel.x are "Gaussian", "Laplace" and "Cauchy".
bdwth.y: Bandwidth parameter for the kernel kernel.y; either a strictly positive real number or "auto" (see Details).
bdwth.x: Bandwidth parameter for the kernel kernel.x; either a non-negative real number or "auto" (see Details).
control: A list of control parameters for the numerical optimization; see Details.
Details

Available options for model are:

"linearGaussian": Linear regression model with \mathcal{N}_1(0,\phi^2) error terms, with \phi unknown.

"linearGaussian.loc": Linear regression model with \mathcal{N}_1(0,\phi^2) error terms, with \phi known.

"gamma": Gamma regression model with unknown shape parameter \phi. The inverse function is used as link function.

"gamma.loc": Gamma regression model with known shape parameter \phi. The inverse function is used as link function.

"beta": Beta regression model with unknown precision parameter \phi. The logistic function is used as link function.

"beta.loc": Beta regression model with known precision parameter \phi. The logistic function is used as link function.

"logistic": Logistic regression model.

"exponential": Exponential regression model.

"poisson": Poisson regression model.
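For instance, a model from this list is selected via the model argument, as in the following minimal sketch (not from the package manual; the simulated data are an illustrative assumption, using the logistic link documented above for the "beta" model):

#Sketch: fitting the beta regression model listed above (illustrative data)
set.seed(1)
n0<-200
x<-rnorm(n0)
mu<-plogis(0.5+0.8*x)     #mean linked to the covariate through the logistic function
phi0<-10                  #true precision parameter of the simulated data
y.beta<-rbeta(n0, mu*phi0, (1-mu)*phi0)
fit<-mmd_reg(y.beta, as.matrix(x), model="beta")
summary(fit)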
When bdwth.x>0 the function mmd_reg computes the estimator \hat{\theta}_n introduced in Alquier and Gerber (2024). When bdwth.x=0 the function mmd_reg computes the estimator \tilde{\theta}_n introduced in Alquier and Gerber (2024). The former estimator has stronger theoretical properties but is more expensive to compute (see below).
When bdwth.x=0 and model is "linearGaussian", "linearGaussian.loc" or "logistic", the objective function and its gradient can be computed in \mathcal{O}(n) operations, where n is the sample size (i.e. the dimension of y). In this case, gradient descent with backtracking line search is used to perform the minimization. The algorithm stops when the maximum number of iterations maxit is reached, or as soon as the change in the objective function is less than eps_gd times the current function value. In the former case, a warning message is generated. By default, maxit=5\times 10^4 and eps_gd=sqrt(.Machine$double.eps), and the value of these two parameters can be changed using the control argument of mmd_reg. A sketch of this stopping rule is given below.
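The following minimal sketch illustrates this scheme for a generic objective (an illustration, not the package's internal code; f, grad_f and gd_backtracking are placeholder names):

#Sketch: gradient descent with backtracking line search and the
#relative-change stopping rule described above
gd_backtracking<-function(f, grad_f, theta, maxit=5e4,
                          eps_gd=sqrt(.Machine$double.eps), alpha=0.8){
  f_cur<-f(theta)
  for(it in seq_len(maxit)){
    g<-grad_f(theta)
    step<-1
    #shrink the step until the objective decreases sufficiently (backtracking)
    while(f(theta-step*g)>f_cur-0.5*step*sum(g^2)){
      step<-alpha*step
    }
    theta<-theta-step*g
    f_new<-f(theta)
    #stop when the relative change in the objective is small enough
    if(abs(f_cur-f_new)<eps_gd*abs(f_cur)) return(theta)
    f_cur<-f_new
  }
  warning("maximum number of iterations reached")
  theta
}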
When bdwth.x>0 and model is "linearGaussian", "linearGaussian.loc" or "logistic", the objective function and its gradient can be computed in \mathcal{O}(n^2) operations. To reduce the computational cost the objective function is minimized using norm AdaGrad (Duchi et al. 2011), an adaptive step size stochastic gradient algorithm. Each iteration of the algorithm requires \mathcal{O}(n) operations. However, the algorithm has an initialization step that requires \mathcal{O}(n^2) operations and has a memory requirement of size \mathcal{O}(n^2).
When model is not in c("linearGaussian", "linearGaussian.loc", "logistic"), the objective function and its gradient cannot be computed explicitly and the minimization is performed using norm AdaGrad. The cost per iteration of the algorithm is \mathcal{O}(n) but, for bdwth.x>0, the memory requirement and the initialization cost are both of size \mathcal{O}(n^2).
When AdaGrad is used, burnin iterations are performed as a warm-up step. The algorithm then stops when burnin+maxit iterations are performed, or as soon as the norm of the average value of the gradient evaluations computed in all the previous iterations is less than eps_sg. A warning message is generated if the maximum number of iterations is reached. By default, burnin=10^3, maxit=5\times 10^4 and eps_sg=10^{-5}, and the value of these three parameters can be changed using the control argument of mmd_reg. A sketch of this stopping rule is given below.
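The update and stopping rule can be sketched as follows (an illustration, not the package's internal code; grad_f stands for a stochastic gradient of the objective and adagrad_norm is a placeholder name):

#Sketch: "norm" variant of AdaGrad with the stopping rule described above
adagrad_norm<-function(grad_f, theta, burnin=1e3, maxit=5e4,
                       eps_sg=1e-5, stepsize=1, epsilon=1e-4){
  G2<-0          #running sum of squared gradient norms
  g_sum<-0*theta #running sum of all gradient evaluations
  for(it in seq_len(burnin+maxit)){
    g<-grad_f(theta)
    G2<-G2+sum(g^2)
    g_sum<-g_sum+g
    #single scalar step size for the whole vector ("norm" version of AdaGrad)
    theta<-theta-(stepsize/sqrt(epsilon+G2))*g
    #stop once past the burn-in if the averaged gradient is small
    if(it>burnin && sqrt(sum((g_sum/it)^2))<eps_sg) return(theta)
  }
  warning("maximum number of iterations reached")
  theta
}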
If bdwth.y="auto"
then the value of the bandwidth parameter of kernel.y
is equal to H_n/\sqrt{2}
with H_n
the median value of the set \{|y_i-y_j|\}_{i,j=1}^n
, where y_i
denote the ith component of y
. This definition of bdwth.y
is motivated by the results in Garreau et al. (2017). If H_n=0
the bandwidth parameter of kernel.y
is set to 1.
If bdwth.x="auto"
then the value of the bandwidth parameter of kernel.x
is equal to 0.01H_n/\sqrt{2}
with H_n
is the median value of the set \{\|x_i-x_j\|\}_{i,j=1}^n
, where x_i
denote the ith row of the design matrix X
. If H_n=0
the bandwidth parameter of kernel.x
is set to 1.
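The two heuristics can be written compactly as follows (a minimal sketch of an assumed implementation, for illustration only; how the i=j pairs enter the median is a package implementation detail):

#Sketch of the median heuristics described above
median_bdwth_y<-function(y){
  H_n<-median(abs(outer(y,y,"-"))) #median of {|y_i-y_j|}
  if(H_n==0) 1 else H_n/sqrt(2)
}
median_bdwth_x<-function(X){
  D<-as.matrix(dist(X)) #pairwise Euclidean distances ||x_i-x_j||
  H_n<-median(D)
  if(H_n==0) 1 else 0.01*H_n/sqrt(2)
}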
The control argument is a list that can supply any of the following components (an example call is sketched after the list):
rescale: If rescale=TRUE the (non-constant) columns of X are rescaled before performing the optimization, while if rescale=FALSE no rescaling is applied. By default rescale=TRUE.
burnin: A non-negative integer; the number of warm-up iterations performed by AdaGrad (see above).

eps_gd: A non-negative real number; the convergence tolerance of the gradient descent algorithm (see above).

eps_sg: A non-negative real number; the convergence tolerance of AdaGrad (see above).

maxit: An integer strictly larger than 2; the maximum number of iterations (see above).
stepsize: Scaling constant for the step sizes used by AdaGrad. stepsize must be a strictly positive number and by default stepsize=1.
trace: If trace=TRUE then the parameter value obtained at the end of each iteration (after the burn-in period for AdaGrad) is returned. By default, trace=FALSE, and trace is automatically set to TRUE if the maximum number of iterations is reached.
epsilon: Parameter used in AdaGrad to avoid numerical errors in the computation of the step sizes. epsilon must be a strictly positive real number and by default epsilon=10^{-4}.
alpha: Parameter for the backtracking line search. alpha must be a real number in (0,1) and by default alpha=0.8.
c_det: Parameter used to control the computational cost of the algorithm when bdwth.x>0; see the Supplementary material in Alquier and Gerber (2024) for more details. c_det must be a real number in (0,1) and by default c_det=0.2.
c_rand: Parameter used to control the computational cost of the algorithm when bdwth.x>0; see the Supplementary material in Alquier and Gerber (2024) for more details. c_rand must be a real number in (0,1) and by default c_rand=0.1.
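For instance, the defaults can be overridden as follows (a minimal sketch; the simulated data are illustrative, and bdwth.x="auto" is used so that the AdaGrad-related options are relevant):

#Sketch: overriding control parameters (names as documented above)
set.seed(1)
X<-matrix(rnorm(200),ncol=2)
y<-as.vector(1+X%*%c(1,2)+rnorm(100))
fit<-mmd_reg(y, X, bdwth.x="auto",
             control=list(burnin=500, maxit=2e4, stepsize=0.5, trace=TRUE))
summary(fit)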
Value

mmd_reg returns an object of class "regMMD". The function summary can be used to print the results.

An object of class regMMD is a list containing the following components:
coefficients: Estimated regression coefficients.
intercept: If intercept=TRUE, the estimated value of the intercept of the model.
phi: If relevant (see Details), either the estimated value of the auxiliary parameter \phi of the model or the value of par2 used to fit the model.
kernel.y: Kernel applied on the response variable used to fit the model.

bdwth.y: Value of the bandwidth for the kernel applied on the response variable used to fit the model.

kernel.x: Kernel applied on the explanatory variables used to fit the model.

bdwth.x: Value of the bandwidth for the kernel applied on the explanatory variables used to fit the model.
par1: Value of the parameter par1 used to fit the model.

par2: Value of the parameter par2 used to fit the model.
trace: If the control parameter trace=TRUE, the parameter values obtained at the end of each iteration of the optimization algorithm.
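With a fitted object Est as in the Examples section, these components can be accessed directly, e.g.:

Est$coefficients #estimated regression coefficients
Est$phi          #estimated or supplied auxiliary parameter, if relevant
Est$bdwth.y      #bandwidth actually used for kernel.y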
References

Alquier P, Gerber M (2024). “Universal robust regression via maximum mean discrepancy.” Biometrika, 111(1), 71-92.

Duchi J, Hazan E, Singer Y (2011). “Adaptive subgradient methods for online learning and stochastic optimization.” Journal of Machine Learning Research, 12(7), 2121-2159.

Garreau D, Jitkrittum W, Kanagawa M (2017). “Large sample analysis of the median heuristic.” arXiv preprint arXiv:1707.07269.
Examples

#Simulate data
set.seed(0) #for reproducibility
n<-1000
p<-4
beta<-1:p
phi<-1
X<-matrix(data=rnorm(n*p,0,1),nrow=n,ncol=p)
data<-1+X%*%beta+rnorm(n,0,phi)
##Example 1: Linear Gaussian model
y<-data
Est<-mmd_reg(y, X)
summary(Est)
##Example 2: Logistic regression model
y<-data
y[data>5]<-1
y[data<=5]<-0
Est<-mmd_reg(y, X, model="logistic")
summary(Est)
Est<-mmd_reg(y, X, model="logistic", bdwth.x="auto")
summary(Est)
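##Example 3 (illustrative sketch, not from the package manual): robustness
##of the MMD fit when a small fraction of responses is contaminated; the
##contamination scheme below is an illustrative assumption
y.cont<-as.vector(data)
y.cont[1:20]<-y.cont[1:20]+50 #contaminate 2% of the responses
Est.mmd<-mmd_reg(y.cont, X)   #MMD fit, expected to be less affected by outliers
Est.ols<-lm(y.cont~X)         #least squares fit, for comparison
summary(Est.mmd)
coef(Est.ols)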