autogam_processor | Function that creates a layer for each processor
check_and_install | Function to check the Python environment and install necessary packages
check_input_args_fit | Function to check if inputs are supported by corresponding fit function |
choose_kernel_initializer_torch | Function to choose a kernel initializer for a torch layer |
coef.deepregression | Generic functions for deepregression models |
coef.drEnsemble | Method for extracting ensemble coefficient estimates |
collect_distribution_parameters | Character-to-parameter collection function needed for mixtures of the same distribution (torch)
combine_penalties | Function to combine two penalties |
create_family | Function to create (custom) family |
create_family_torch | Function to create (custom) family |
create_penalty | Function to create mgcv-type penalty |
cv | Generic cv function |
cv.deepregression | Generic functions for deepregression models |
deepregression | Fitting Semi-Structured Deep Distributional Regression |
distfun_to_dist | Function to define output distribution based on dist_fun |
ensemble | Generic deep ensemble function |
ensemble.deepregression | Ensembling deepregression models |
extractlen | Formula helpers |
extractval | Formula helpers |
extractvals | Formula helpers |
extractvar | Extract variable from term |
extract_pure_gam_part | Extract the smooth term from a deepregression term specification |
extract_S | Convenience function to extract penalty matrix and value |
family_to_tfd | Character-tfd mapping function |
family_to_trafo | Character-to-transformation mapping function |
family_to_trafo_torch | Character-to-transformation mapping function |
family_to_trochd | Character-torch mapping function |
fit.deepregression | Generic functions for deepregression models |
fitted.deepregression | Generic functions for deepregression models |
fitted.drEnsemble | Method for extracting the fitted values of an ensemble |
form2text | Formula helpers |
form_control | Options for formula parsing |
from_distfun_to_dist_torch | Function to define output distribution based on dist_fun |
from_dist_to_loss | Function to transform a distribution layer output into a loss function
from_dist_to_loss_torch | Function to transform a distribution layer output into a loss function |
from_preds_to_dist | Define Predictor of a Deep Distributional Regression Model |
from_preds_to_dist_torch | Define Predictor of a Deep Distributional Regression Model |
gam_plot_data | Used by gam_processor
gam_processor | Function that creates a layer for each processor
get_distribution | Function to return the fitted distribution |
get_ensemble_distribution | Obtain the conditional ensemble distribution |
get_gamdata | Extract property of gamdata |
get_gamdata_reduced_nr | Extract number in matching table of reduced gam term |
get_gam_part | Extract gam part from wrapped term |
get_help_forward_torch | Helper function to calculate the number of layers; needed when shared layers are used, because those layers have the same names
get_layernr_by_opname | Function to return layer number given model and name |
get_layernr_trainable | Function to return layer numbers with trainable weights |
get_layer_by_opname | Function to return layer given model and name |
get_luz_dataset | Helper function to create a function that generates R6 instances of class dataset
get_names_pfc | Extract term names from the parsed formula content |
get_nodedata | Extract attributes/hyper-parameters of the node term |
get_node_term | Extract variables from wrapped node term |
get_partial_effect | Return partial effect of one smooth term |
get_processor_name | Extract processor name from term |
get_special | Extract terms defined by specials in formula |
get_type_pfc | Function to subset parsed formulas |
get_weight_by_name | Function to retrieve the weights of a structured layer |
get_weight_by_opname | Function to return weight given model and name |
handle_gam_term | Function to define smoothness and call mgcv's smooth constructor |
import_packages | Function to import required packages |
import_tf_dependings | Function to import required packages for TensorFlow (tensorflow, tfprobability, keras)
import_torch_dependings | Function to import required packages for torch (torch, torchvision, luz)
int_processor | Function that creates a layer for each processor
inverse_group_lasso_pen | Hadamard-type layers |
keras_dr | Compile a Deep Distributional Regression Model |
layer_add_identity | Convenience layer function |
layer_concatenate_identity | Convenience layer function |
layer_dense_module | Function to create a custom nn_linear module to overwrite reset_parameters
layer_dense_torch | Function to define a torch layer similar to a tf dense layer |
layer_generator | Function that creates a layer for each processor
layer_group_hadamard | Hadamard-type layers |
layer_hadamard | Hadamard-type layers |
layer_hadamard_diff | Hadamard-type layers |
layer_node | NODE/ODTs Layer |
layer_sparse_batch_normalization | Sparse Batch Normalization layer |
layer_sparse_conv_2d | Sparse 2D Convolutional layer |
layer_spline | Function to define spline as TensorFlow layer |
layer_spline_torch | Function to define spline as Torch layer |
lin_processor | Function that creates a layer for each processor
log_score | Function to return the log_score |
loop_through_pfc_and_call_trafo | Function to loop through parsed formulas and apply data trafo |
makeInputs | Convenience layer function |
makelayername | Function that takes a term and creates a layer name
make_folds | Generate folds for CV out of a one-hot encoded matrix
make_generator | Creates a generator for training
make_generator_from_matrix | Make a DataGenerator from a data.frame or matrix |
make_tfd_dist | Families for deepregression |
make_torch_dist | Families for deepregression |
mean.deepregression | Generic functions for deepregression models |
model_torch | Function to initialize an nn_module; its forward function works with a list whose entries are the inputs of the subnetworks
multioptimizer | Function to define an optimizer combining multiple optimizers |
names_families | Returns the parameter names for a given family |
na_omit_list | Function to exclude NA values |
nn_init_no_grad_constant_deepreg | Custom nn_linear module to overwrite reset_parameters (nn_init_constant works only if the value is a scalar, so warm starts for GAM terms would not work)
node_processor | Function that creates a layer for each processor
orthog_control | Options for orthogonalization |
orthog_P | Function to compute adjusted penalty when orthogonalizing |
orthog_post_fitting | Orthogonalize a Semi-Structured Model Post-hoc |
orthog_structured_smooths_Z | Orthogonalize structured term by another matrix |
penalty_control | Options for penalty setup in the pre-processing |
pen_layer | Random effect layer
plot.deepregression | Generic functions for deepregression models |
plot_cv | Plot CV results from deepregression |
precalc_gam | Pre-calculate all gam parts from the list of formulas |
predict.deepregression | Generic functions for deepregression models |
predict_gam_handler | Handler for prediction with gam terms |
predict_gen | Generator function for deepregression objects |
prepare_data | Function to prepare data based on parsed formulas |
prepare_data_torch | Function to additionally prepare data for fit process (torch) |
prepare_input_list_model | Function to prepare the input list for the fit process, which differs between approaches
prepare_newdata | Function to prepare new data based on parsed formulas |
prepare_torch_distr_mixdistr | Prepares distributions for mixture process |
print.deepregression | Generic functions for deepregression models |
process_terms | Control function to define the processor for terms in the formula |
quant | Generic quantile function |
quant.deepregression | Generic functions for deepregression models |
regularizer_group_lasso | Hadamard-type layers |
reinit_weights | Generic function to re-initialize model weights
reinit_weights.deepregression | Method to re-initialize weights of a "deepregression" model
re_layer | Random effect layer
ri_processor | Function that creates a layer for each processor
separate_define_relation | Function to define orthogonalization connections in the formula |
simplyconnected_layer | Hadamard-type layers |
simplyconnected_layer_torch | Hadamard-type layers (torch)
stddev | Generic sd function |
stddev.deepregression | Generic functions for deepregression models |
stop_iter_cv_result | Function to get the stopping iteration from CV
subnetwork_init | Initializes a Subnetwork based on the Processed Additive Predictor |
subnetwork_init_torch | Initializes a Subnetwork based on the Processed Additive Predictor |
tfd_mse | For using mean squared error via TFP |
tfd_zinb | Implementation of a zero-inflated negative binomial distribution for TFP
tfd_zip | Implementation of a zero-inflated Poisson distribution for TFP
tf_repeat | TensorFlow repeat function which is not available for TF 2.0 |
tf_row_tensor | Row-wise tensor product using TensorFlow |
tf_split_multiple | Split a tensor into multiple parts
tf_stride_cols | Function to index tensor columns
tf_stride_last_dim_tensor | Function to index a tensor's last dimension
tibgroup_layer | Hadamard-type layers |
tibgroup_layer_torch | Hadamard-type layers (torch)
tiblinlasso_layer_torch | Hadamard-type layers (torch)
tib_layer | Hadamard-type layers |
tib_layer_torch | Hadamard-type layers (torch)
torch_dr | Compile a Deep Distributional Regression Model (Torch) |
update_miniconda_deepregression | Function to update miniconda and packages |
weight_control | Options for weights of layers |
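
The entries above are documented individually; taken together, the typical workflow combines deepregression() with the S3 methods fit, coef, plot, and predict listed in this index. The following is a minimal usage sketch, not a verbatim example from the package: the toy data, the argument names shown for deepregression() (y, data, list_of_formulas, list_of_deep_models, family), the keras-style definition of the deep model, and the fit() arguments are assumptions based on the entries above and may need adjusting to the installed version.

    library(deepregression)

    # Toy data: one structured predictor x and one predictor z for a small deep net (illustrative only)
    set.seed(1)
    n <- 500
    data <- data.frame(x = rnorm(n), z = rnorm(n))
    y <- 2 + sin(data$x) + 0.5 * data$z + rnorm(n, sd = 0.3)

    # Deep model given as a function of an input tensor (assumed keras-style pattern)
    deep_net <- function(x) x |>
      keras::layer_dense(units = 16, activation = "relu") |>
      keras::layer_dense(units = 1)

    # Semi-structured predictors for the location and scale of a normal distribution
    mod <- deepregression(
      y = y,
      data = data,
      list_of_formulas = list(loc = ~ 1 + s(x) + deep_net(z), scale = ~ 1),
      list_of_deep_models = list(deep_net = deep_net),
      family = "normal"
    )

    # Fit and inspect via the S3 methods listed in this index
    fit(mod, epochs = 100, verbose = FALSE)
    coef(mod)                     # structured coefficients
    plot(mod)                     # partial effect of s(x)
    predict(mod, newdata = data)  # predicted mean for new observations

Engine-specific compilation (keras_dr for TensorFlow, torch_dr for torch) and the cv, ensemble, and quant generics listed above operate on the same fitted object in an analogous way.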