Fits an XGBoost model (boosted decision tree ensemble) to given x/y data.

See the tutorial Introduction to Boosted Trees for a longer explanation of what XGBoost does, and the rest of the XGBoost Tutorials for further explanations of XGBoost's features and usage.

This function is intended to provide a user-friendly interface for XGBoost that follows R's conventions for model fitting and predictions, but which doesn't expose all of the possible functionalities of the core XGBoost library.

See xgb.train() for a more flexible low-level alternative which is similar across different language bindings of XGBoost and which exposes additional functionalities such as training on external memory data and learning-to-rank objectives.

See also the migration guide if coming from a previous version of XGBoost in the 1.x series.

By default, most of the parameters here have a value of NULL, which signals XGBoost to use its default value. Default values are automatically determined by the XGBoost core library, and are subject to change over XGBoost library versions. Some of them might differ according to the booster type (e.g. defaults for regularization are different for linear and tree-based boosters). See xgb.params() and the online documentation for more details about parameters - but note that some of the parameters are not supported in the xgboost() interface.

Usage

xgboost(
  x,
  y,
  objective = NULL,
  nrounds = 100L,
  max_depth = NULL,
  learning_rate = NULL,
  min_child_weight = NULL,
  min_split_loss = NULL,
  reg_lambda = NULL,
  weights = NULL,
  verbosity = if (is.null(eval_set)) 0L else 1L,
  monitor_training = verbosity > 0,
  eval_set = NULL,
  early_stopping_rounds = NULL,
  print_every_n = 1L,
  eval_metric = NULL,
  nthreads = parallel::detectCores(),
  seed = 0L,
  base_margin = NULL,
  monotone_constraints = NULL,
  interaction_constraints = NULL,
  reg_alpha = NULL,
  max_bin = NULL,
  max_leaves = NULL,
  booster = NULL,
  subsample = NULL,
  sampling_method = NULL,
  feature_weights = NULL,
  colsample_bytree = NULL,
  colsample_bylevel = NULL,
  colsample_bynode = NULL,
  tree_method = NULL,
  max_delta_step = NULL,
  scale_pos_weight = NULL,
  updater = NULL,
  grow_policy = NULL,
  num_parallel_tree = NULL,
  multi_strategy = NULL,
  base_score = NULL,
  seed_per_iteration = NULL,
  device = NULL,
  disable_default_eval_metric = NULL,
  use_rmm = NULL,
  max_cached_hist_node = NULL,
  extmem_single_page = NULL,
  max_cat_to_onehot = NULL,
  max_cat_threshold = NULL,
  sample_type = NULL,
  normalize_type = NULL,
  rate_drop = NULL,
  one_drop = NULL,
  skip_drop = NULL,
  feature_selector = NULL,
  top_k = NULL,
  tweedie_variance_power = NULL,
  huber_slope = NULL,
  quantile_alpha = NULL,
  aft_loss_distribution = NULL,
  ...
)

Arguments

x

The features / covariates. Can be passed as:

  • A numeric or integer matrix.

  • A data.frame, in which all columns are one of the following types:

    • numeric

    • integer

    • logical

    • factor

    Columns of factor type will be assumed to be categorical, while other column types will be assumed to be numeric.

  • A sparse matrix from the Matrix package, either as dgCMatrix or dgRMatrix class.

Note that categorical features are only supported for data.frame inputs, and are automatically determined based on their types. See xgb.train() with xgb.DMatrix() for more flexible variants that would allow something like categorical features on sparse matrices.
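
For illustration, a minimal sketch (using the built-in iris data, with arbitrary column choices) of passing a data.frame in which the factor column is treated as a categorical feature:

data(iris)
# 'Species' is a factor, so it is automatically treated as categorical
x_df <- iris[, c("Species", "Sepal.Width", "Petal.Length")]
model_cat <- xgboost(x_df, iris$Sepal.Length, nthreads = 1, nrounds = 5)
predict(model_cat, head(x_df))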

y

The response variable. Allowed values are:

  • A numeric or integer vector (for regression tasks).

  • A factor or character vector (for binary and multi-class classification tasks).

  • A logical (boolean) vector (for binary classification tasks).

  • A numeric or integer matrix or data.frame with numeric/integer columns (for multi-task regression tasks).

  • A Surv object from the 'survival' package (for survival tasks).

If objective is NULL, the right task will be determined automatically based on the class of y.

If objective is not NULL, it must match with the type of y - e.g. factor types of y can only be used with classification objectives and vice-versa.

For binary classification, the last factor level of y will be used as the "positive" class - that is, the numbers from predict will reflect the probabilities of belonging to this class instead of to the first factor level. If y is a logical vector, then TRUE will be set as the last level.
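
As a small illustrative sketch (using the built-in ToothGrowth data), the factor levels of y can be reordered before fitting so that the desired class is last and therefore treated as the "positive" class:

data(ToothGrowth)
# Make "VC" the last level, so predictions are probabilities of "VC"
y_bin <- factor(ToothGrowth$supp, levels = c("OJ", "VC"))
model_bin <- xgboost(ToothGrowth[, c("len", "dose")], y_bin, nthreads = 1, nrounds = 3)
head(predict(model_bin, ToothGrowth[, c("len", "dose")]))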

objective

Optimization objective to minimize based on the supplied data, to be passed by name as a string / character (e.g. reg:absoluteerror). See the Learning Task Parameters page and the xgb.params() documentation for more detailed information on allowed values.

If NULL (the default), will be automatically determined from y according to the following logic:

  • If y is a factor with 2 levels, will use binary:logistic.

  • If y is a factor with more than 2 levels, will use multi:softprob (number of classes will be determined automatically, should not be passed under params).

  • If y is a Surv object from the survival package, will use survival:aft (note that the only types supported are left / right / interval censored).

  • Otherwise, will use reg:squarederror.

If objective is not NULL, it must match with the type of y - e.g. factor types of y can only be used with classification objectives and vice-versa.

Note that not all possible objective values supported by the core XGBoost library are allowed here - for example, objectives which are a variation of another but with a different default prediction type (e.g. multi:softmax vs. multi:softprob) are not allowed, and neither are ranking objectives, nor custom objectives at the moment.

Supported values are:

  • "reg:squarederror": regression with squared loss.

  • "reg:squaredlogerror": regression with squared log loss \(\frac{1}{2}[log(pred + 1) - log(label + 1)]^2\). All input labels are required to be greater than -1. Also, see metric rmsle for possible issue with this objective.

  • "reg:pseudohubererror": regression with Pseudo Huber loss, a twice differentiable alternative to absolute loss.

  • "reg:absoluteerror": Regression with L1 error. When tree model is used, leaf value is refreshed after tree construction. If used in distributed training, the leaf value is calculated as the mean value from all workers, which is not guaranteed to be optimal.

  • "reg:quantileerror": Quantile loss, also known as "pinball loss". See later sections for its parameter and Quantile Regression for a worked example.

  • "binary:logistic": logistic regression for binary classification, output probability

  • "binary:hinge": hinge loss for binary classification. This makes predictions of 0 or 1, rather than producing probabilities.

  • "count:poisson": Poisson regression for count data, output mean of Poisson distribution. "max_delta_step" is set to 0.7 by default in Poisson regression (used to safeguard optimization)

  • "survival:cox": Cox regression for right censored survival time data (negative values are considered right censored).

    Note that predictions are returned on the hazard ratio scale (i.e., as HR = exp(marginal_prediction) in the proportional hazard function h(t) = h0(t) * HR).

  • "survival:aft": Accelerated failure time model for censored survival time data. See Survival Analysis with Accelerated Failure Time for details.

  • "multi:softprob": multi-class classification throgh multinomial logistic likelihood.

  • "reg:gamma": gamma regression with log-link. Output is a mean of gamma distribution. It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be gamma-distributed.

  • "reg:tweedie": Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be Tweedie-distributed.

The following values are NOT supported by xgboost, but are supported by xgb.train() (see xgb.params() for details):

  • "reg:logistic"

  • "binary:logitraw"

  • "multi:softmax"

  • "rank:ndcg"

  • "rank:map"

  • "rank:pairwise"

nrounds

Number of boosting iterations / rounds.

Note that the default number of boosting rounds here is not automatically tuned, and different problems will have vastly different optimal numbers of boosting rounds.

max_depth

(for Tree Booster) (default=6, type=int32) Maximum depth of a tree. Increasing this value will make the model more complex and more likely to overfit. 0 indicates no limit on depth. Beware that XGBoost aggressively consumes memory when training a deep tree. "exact" tree method requires non-zero value.

range: \([0, \infty)\)

learning_rate

(alias: eta) Step size shrinkage used in update to prevent overfitting. After each boosting step, we can directly get the weights of new features, and learning_rate shrinks the feature weights to make the boosting process more conservative.

  • range: \([0,1]\)

  • default value: 0.3 for tree-based boosters, 0.5 for linear booster.

min_child_weight

(for Tree Booster) (default=1) Minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In a linear regression task, this simply corresponds to the minimum number of instances needed in each node. The larger min_child_weight is, the more conservative the algorithm will be.

range: \([0, \infty)\)

min_split_loss

(for Tree Booster) (default=0, alias: gamma) Minimum loss reduction required to make a further partition on a leaf node of the tree. The larger min_split_loss is, the more conservative the algorithm will be. Note that a tree where no splits were made might still contain a single terminal node with a non-zero score.

range: \([0, \infty)\)

reg_lambda

(alias: lambda)

  • For tree-based boosters:

    • L2 regularization term on weights. Increasing this value will make model more conservative.

    • default: 1

    • range: \([0, \infty)\)

  • For linear booster:

    • L2 regularization term on weights. Increasing this value will make model more conservative. Normalised to number of training examples.

    • default: 0

    • range: \([0, \infty)\)

weights

Sample weights for each row in x and y. If NULL (the default), each row will have the same weight.

If not NULL, should be passed as a numeric vector with length matching to the number of rows in x.

verbosity

Verbosity of printing messages. Valid values are 0 (silent), 1 (warning), 2 (info), and 3 (debug).

monitor_training

Whether to monitor objective optimization progress on the input data. Note that the same 'x' and 'y' data are used for both model fitting and evaluation.

eval_set

Subset of the data to use as evaluation set. Can be passed as:

  • A vector of row indices (base-1 numeration) indicating the observations that are to be designated as evaluation data.

  • A number between zero and one indicating a random fraction of the input data to use as evaluation data. Note that the selection will be done uniformly at random, regardless of argument weights.

If passed, this subset of the data will be excluded from the training procedure, and the evaluation metric(s) supplied under eval_metric will be calculated on this dataset after each boosting iteration (pass verbosity>0 to have these metrics printed during training). If eval_metric is not passed, a default metric will be selected according to objective.

If passing a fraction, in classification problems, the evaluation set will be chosen in such a way that at least one observation of each class will be kept in the training data.

For more elaborate evaluation variants (e.g. custom metrics, multiple evaluation sets, etc.), one might want to use xgb.train() instead.

early_stopping_rounds

Number of boosting rounds after which training will be stopped if there is no improvement in performance (as measured by the last metric passed under eval_metric, or by the default metric for the objective if eval_metric is not passed) on the evaluation data from eval_set. Must pass eval_set in order to use this functionality.

If NULL, early stopping will not be used.
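
As a brief sketch combining the two arguments above (values chosen only for illustration), a random fraction of the rows can be held out for evaluation and used to stop training early:

data(ToothGrowth)
model_es <- xgboost(
  ToothGrowth[, c("len", "dose")],
  ToothGrowth$supp,
  eval_set = 0.25,                # hold out 25% of rows for evaluation
  eval_metric = "logloss",
  early_stopping_rounds = 3,      # stop if logloss does not improve for 3 rounds
  verbosity = 1,
  nthreads = 1,
  nrounds = 20
)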

print_every_n

When passing verbosity>0 and either monitor_training=TRUE or eval_set, evaluation logs (metrics calculated on the training and/or evaluation data) will be printed every nth iteration according to the value passed here. The first and last iteration are always included regardless of this 'n'.

Only has an effect when passing verbosity>0.

eval_metric

(default according to objective)

  • Evaluation metrics for validation data. A default metric will be assigned according to objective (rmse for regression, logloss for classification, mean average precision for rank:map, etc.)

  • User can add multiple evaluation metrics.

  • The choices are listed below:

    • "rmse": root mean square error

    • "rmsle": root mean square log error: \(\sqrt{\frac{1}{N}[log(pred + 1) - log(label + 1)]^2}\). Default metric of "reg:squaredlogerror" objective. This metric reduces errors generated by outliers in dataset. But because log function is employed, "rmsle" might output nan when prediction value is less than -1. See "reg:squaredlogerror" for other requirements.

    • "mae": mean absolute error

    • "mape": mean absolute percentage error

    • "mphe": mean Pseudo Huber error. Default metric of "reg:pseudohubererror" objective.

    • "logloss": negative log-likelihood

    • "error": Binary classification error rate. It is calculated as #(wrong cases)/#(all cases). For the predictions, the evaluation will regard the instances with prediction value larger than 0.5 as positive instances, and the others as negative instances.

    • "error@t": a different than 0.5 binary classification threshold value could be specified by providing a numerical value through 't'.

    • "merror": Multiclass classification error rate. It is calculated as #(wrong cases)/#(all cases).

    • "mlogloss": Multiclass logloss.

    • "auc": Receiver Operating Characteristic Area under the Curve. Available for classification and learning-to-rank tasks.

      • When used with binary classification, the objective should be "binary:logistic" or similar functions that work on probability.

      • When used with multi-class classification, objective should be "multi:softprob" instead of "multi:softmax", as the latter doesn't output probability. Also the AUC is calculated by 1-vs-rest with reference class weighted by class prevalence.

      • When used with LTR task, the AUC is computed by comparing pairs of documents to count correctly sorted pairs. This corresponds to pairwise learning to rank. The implementation has some issues with average AUC around groups and distributed workers not being well-defined.

      • On a single machine the AUC calculation is exact. In a distributed environment the AUC is a weighted average over the AUC of training rows on each node - therefore, distributed AUC is an approximation sensitive to the distribution of data across workers. Use another metric in distributed environments if precision and reproducibility are important.

      • When input dataset contains only negative or positive samples, the output is NaN. The behavior is implementation defined, for instance, scikit-learn returns \(0.5\) instead.

    • "aucpr": Area under the PR curve. Available for classification and learning-to-rank tasks.

      After XGBoost 1.6, the requirements and restrictions for using "aucpr" in classification problems are similar to those for "auc". For the ranking task, only binary relevance label \(y \in [0, 1]\) is supported. Different from "map" (mean average precision), "aucpr" calculates the interpolated area under the precision-recall curve using continuous interpolation.

    • "pre": Precision at \(k\). Supports only learning to rank task.

    • "ndcg": Normalized Discounted Cumulative Gain

    • "map": Mean Average Precision

      The average precision is defined as:

      \(AP@l = \frac{1}{\min(l, N)}\sum^l_{k=1}P@k \cdot I_{(k)}\)

      where \(I_{(k)}\) is an indicator function that equals to \(1\) when the document at \(k\) is relevant and \(0\) otherwise. The \(P@k\) is the precision at \(k\), and \(N\) is the total number of relevant documents. Lastly, the mean average precision is defined as the weighted average across all queries.

    • "ndcg@n", "map@n", "pre@n": \(n\) can be assigned as an integer to cut off the top positions in the lists for evaluation.

    • "ndcg-", "map-", "ndcg@n-", "map@n-": In XGBoost, the NDCG and MAP evaluate the score of a list without any positive samples as \(1\). By appending "-" to the evaluation metric name, we can ask XGBoost to evaluate these scores as \(0\) to be consistent under some conditions.

    • "poisson-nloglik": negative log-likelihood for Poisson regression

    • "gamma-nloglik": negative log-likelihood for gamma regression

    • "cox-nloglik": negative partial log-likelihood for Cox proportional hazards regression

    • "gamma-deviance": residual deviance for gamma regression

    • "tweedie-nloglik": negative log-likelihood for Tweedie regression (at a specified value of the tweedie_variance_power parameter)

    • "aft-nloglik": Negative log likelihood of Accelerated Failure Time model. See Survival Analysis with Accelerated Failure Time for details.

    • "interval-regression-accuracy": Fraction of data points whose predicted labels fall in the interval-censored labels. Only applicable for interval-censored data. See Survival Analysis with Accelerated Failure Time for details.

nthreads

Number of parallel threads to use. If passing zero, will use all CPU threads.

seed

Seed to use for random number generation. If passing NULL, will draw a random number using R's PRNG system to use as seed.

base_margin

Base margin used for boosting from existing model.

If passing it, will start the gradient boosting procedure from the scores that are provided here - for example, one can pass the raw scores from a previous model, or some per-observation offset, or similar.

Should be either a numeric vector or numeric matrix (for multi-class and multi-target objectives) with the same number of rows as x and number of columns corresponding to number of optimization targets, and should be in the untransformed scale (for example, for objective binary:logistic, it should have log-odds, not probabilities; and for objective multi:softprob, should have number of columns matching to number of classes in the data).

Note that, if it contains more than one column, then columns will not be matched by name to the corresponding y - base_margin should have the same column order that the model will use (for example, for objective multi:softprob, columns of base_margin will be matched against levels(y) by their position, regardless of what colnames(base_margin) returns).

If NULL, will start from zero, but note that for most objectives, an intercept is usually added (controllable through parameter base_score instead) when base_margin is not passed.
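
As a hedged illustration (the exposure variable below is simulated purely for the sketch), a common pattern is to pass a per-observation offset on the link scale through base_margin, e.g. a log-exposure offset with a Poisson count objective:

data(mtcars)
set.seed(123)
exposure <- runif(nrow(mtcars), 0.5, 2)               # hypothetical exposure per row
counts <- rpois(nrow(mtcars), lambda = 2 * exposure)  # simulated count outcome
model_offset <- xgboost(
  mtcars[, c("cyl", "hp", "wt")],
  counts,
  objective = "count:poisson",
  base_margin = log(exposure),   # offset on the log (link) scale
  nthreads = 1,
  nrounds = 3
)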

monotone_constraints

Optional monotonicity constraints for features.

Can be passed either as a named list (when x has column names), or as a vector. If passed as a vector and x has column names, will try to match the elements by name.

A value of +1 for a given feature makes the model predictions / scores constrained to be a monotonically increasing function of that feature (that is, as the value of the feature increases, the model prediction cannot decrease), while a value of -1 makes it a monotonically decreasing function. A value of zero imposes no constraint.

The input for monotone_constraints can be a subset of the columns of x if named, in which case the columns that are not referred to in monotone_constraints will be assumed to have a value of zero (no constraint imposed on the model for those features).

See the tutorial Monotonic Constraints for a more detailed explanation.
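
A minimal sketch of passing constraints as a named list for a subset of columns (the signs chosen here are purely illustrative):

data(mtcars)
# Predictions constrained to be non-increasing in 'wt' and non-decreasing in 'qsec'
model_mono <- xgboost(
  mtcars[, c("wt", "qsec", "hp")],
  mtcars$mpg,
  monotone_constraints = list(wt = -1, qsec = 1),
  nthreads = 1,
  nrounds = 5
)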

interaction_constraints

Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a list of vectors referencing columns in the data, e.g. list(c(1, 2), c(3, 4, 5)) (with these numbers being column indices, numeration starting at 1 - i.e. the first sublist references the first and second columns) or list(c("Sepal.Length", "Sepal.Width"), c("Petal.Length", "Petal.Width")) (references columns by names), where each vector is a group of indices of features that are allowed to interact with each other.

See the tutorial Feature Interaction Constraints for more information.
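
A minimal sketch of referencing interaction groups by column name (the grouping is purely illustrative):

data(mtcars)
# 'wt' may interact only with 'hp', and 'qsec' only with 'drat'
model_int <- xgboost(
  mtcars[, c("wt", "hp", "qsec", "drat")],
  mtcars$mpg,
  interaction_constraints = list(c("wt", "hp"), c("qsec", "drat")),
  nthreads = 1,
  nrounds = 5
)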

reg_alpha

(alias: alpha)

  • L1 regularization term on weights. Increasing this value will make model more conservative.

  • For the linear booster, it's normalised to number of training examples.

  • default: 0

  • range: \([0, \infty)\)

max_bin

(for Tree Booster) (default=256, type=int32)

  • Only used if tree_method is set to "hist" or "approx".

  • Maximum number of discrete bins to bucket continuous features.

  • Increasing this number improves the optimality of splits at the cost of higher computation time.

max_leaves

(for Tree Booster) (default=0, type=int32) Maximum number of nodes to be added. Not used by "exact" tree method.

booster

(default= "gbtree") Which booster to use. Can be "gbtree", "gblinear" or "dart"; "gbtree" and "dart" use tree based models while "gblinear" uses linear functions.

subsample

(for Tree Booster) (default=1) Subsample ratio of the training instances. Setting it to 0.5 means that XGBoost would randomly sample half of the training data prior to growing trees, which helps prevent overfitting. Subsampling will occur once in every boosting iteration.

range: \((0,1]\)

sampling_method

(for Tree Booster) (default= "uniform") The method to use to sample the training instances.

  • "uniform": each training instance has an equal probability of being selected. Typically set "subsample" >= 0.5 for good results.

  • "gradient_based": the selection probability for each training instance is proportional to the regularized absolute value of gradients (more specifically, \(\sqrt{g^2+\lambda h^2}\)). "subsample" may be set to as low as 0.1 without loss of model accuracy. Note that this sampling method is only supported when "tree_method" is set to "hist" and the device is "cuda"; other tree methods only support "uniform" sampling.

feature_weights

Feature weights for column sampling.

Can be passed either as a vector with length matching the number of columns of x, or as a named list (only if x has column names) with names matching the columns of x. If it is a named vector, entries will be matched to the column names of x by name.

If NULL (the default), all columns will have the same weight.

colsample_bytree, colsample_bylevel, colsample_bynode

(for Tree Booster) (default=1) This is a family of parameters for subsampling of columns.

  • All "colsample_by*" parameters have a range of \((0, 1]\), the default value of 1, and specify the fraction of columns to be subsampled.

  • "colsample_bytree" is the subsample ratio of columns when constructing each tree. Subsampling occurs once for every tree constructed.

  • "colsample_bylevel" is the subsample ratio of columns for each level. Subsampling occurs once for every new depth level reached in a tree. Columns are subsampled from the set of columns chosen for the current tree.

  • "colsample_bynode" is the subsample ratio of columns for each node (split). Subsampling occurs once every time a new split is evaluated. Columns are subsampled from the set of columns chosen for the current level. This is not supported by the exact tree method.

  • "colsample_by*" parameters work cumulatively. For instance, the combination {'colsample_bytree'=0.5, 'colsample_bylevel'=0.5, 'colsample_bynode'=0.5} with 64 features will leave 8 features to choose from at each split.

One can set feature_weights to define the probability of each feature being selected when using column sampling.
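
As a rough sketch (weights chosen only for illustration), per-node column subsampling can be combined with feature_weights so that some columns are more likely to be sampled:

data(mtcars)
model_cols <- xgboost(
  mtcars[, c("cyl", "hp", "wt", "qsec", "drat")],
  mtcars$mpg,
  colsample_bynode = 0.6,
  feature_weights = c(cyl = 1, hp = 3, wt = 3, qsec = 1, drat = 1),
  nthreads = 1,
  nrounds = 5
)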

tree_method

(for Tree Booster) (default= "auto") The tree construction algorithm used in XGBoost. See description in the reference paper and Tree Methods.

Choices: "auto", "exact", "approx", "hist", this is a combination of commonly used updaters. For other updaters like "refresh", set the parameter updater directly.

  • "auto": Same as the "hist" tree method.

  • "exact": Exact greedy algorithm. Enumerates all split candidates.

  • "approx": Approximate greedy algorithm using quantile sketch and gradient histogram.

  • "hist": Faster histogram optimized approximate greedy algorithm.

max_delta_step

(for Tree Booster) (default=0) Maximum delta step we allow each leaf output to be. If the value is set to 0, it means there is no constraint. If it is set to a positive value, it can help make the update step more conservative. Usually this parameter is not needed, but it might help in logistic regression when classes are extremely imbalanced. Setting it to a value of 1-10 might help control the update.

range: \([0, \infty)\)

scale_pos_weight

(for Tree Booster) (default=1) Control the balance of positive and negative weights, useful for unbalanced classes. A typical value to consider: sum(negative instances) / sum(positive instances). See Parameters Tuning for more discussion. Also, see Higgs Kaggle competition demo for examples: R, py1, py2, py3.

updater

(for Linear Booster) (default= "shotgun") Choice of algorithm to fit linear model

  • "shotgun": Parallel coordinate descent algorithm based on shotgun algorithm. Uses 'hogwild' parallelism and therefore produces a nondeterministic solution on each run.

  • "coord_descent": Ordinary coordinate descent algorithm. Also multithreaded but still produces a deterministic solution. When the device parameter is set to "cuda" or "gpu", a GPU variant would be used.

grow_policy

(for Tree Booster) (default= "depthwise")

  • Controls a way new nodes are added to the tree.

  • Currently supported only if tree_method is set to "hist" or "approx".

  • Choices: "depthwise", "lossguide"

    • "depthwise": split at nodes closest to the root.

    • "lossguide": split at nodes with highest loss change.

num_parallel_tree

(for Tree Booster) (default=1) Number of parallel trees constructed during each iteration. This option is used to support boosted random forest.

multi_strategy

(for Tree Booster) (default = "one_output_per_tree") The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

  • "one_output_per_tree": One model for each target.

  • "multi_output_tree": Use multi-target trees.

Version added: 2.0.0

Note: This parameter is a work in progress.

base_score

  • The initial prediction score of all instances, global bias

  • The parameter is automatically estimated for selected objectives before training. To disable the estimation, specify a real number argument.

  • If base_margin is supplied, base_score will not be added.

  • For a sufficient number of iterations, changing this value will not have much effect.

seed_per_iteration

(default= FALSE) Seed the PRNG deterministically via the iteration number.

device

(default= "cpu") Device for XGBoost to run. User can set it to one of the following values:

  • "cpu": Use CPU.

  • "cuda": Use a GPU (CUDA device).

  • "cuda:<ordinal>": <ordinal> is an integer that specifies the ordinal of the GPU (which GPU do you want to use if you have more than one devices).

  • "gpu": Default GPU device selection from the list of available and supported devices. Only "cuda" devices are supported currently.

  • "gpu:<ordinal>": Default GPU device selection from the list of available and supported devices. Only "cuda" devices are supported currently.

For more information about GPU acceleration, see XGBoost GPU Support. In distributed environments, ordinal selection is handled by distributed frameworks instead of XGBoost. As a result, using "cuda:<ordinal>" will result in an error. Use "cuda" instead.

Version added: 2.0.0

Note: if XGBoost was installed from CRAN, it won't have GPU support enabled, thus only "cpu" will be available. To get GPU support, the R package for XGBoost must be installed from source or from the GitHub releases - see instructions.

disable_default_eval_metric

(default= FALSE) Flag to disable default metric. Set to 1 or TRUE to disable.

use_rmm

Whether to use RAPIDS Memory Manager (RMM) to allocate cache GPU memory. The primary memory is always allocated on the RMM pool when XGBoost is built (compiled) with the RMM plugin enabled. Valid values are TRUE and FALSE. See Using XGBoost with RAPIDS Memory Manager (RMM) plugin for details.

max_cached_hist_node

(for Non-Exact Tree Methods) (default = 65536) Maximum number of cached nodes for histogram. This can be used with the "hist" and the "approx" tree methods.

Version added: 2.0.0

  • For most of the cases this parameter should not be set except for growing deep trees. After 3.0, this parameter affects GPU algorithms as well.

extmem_single_page

(for Non-Exact Tree Methods) (default = FALSE) This parameter is only used for the "hist" tree method with device="cuda" and subsample != 1.0. Before 3.0, pages were always concatenated.

Version added: 3.0.0

Whether the GPU-based "hist" tree method should concatenate the training data into a single batch instead of fetching data on-demand when external memory is used. For GPU devices that don't support address translation services, external memory training is expensive. This parameter can be used in combination with subsampling to reduce overall memory usage without significant overhead. See Using XGBoost External Memory Version for more information.

max_cat_to_onehot

(for Non-Exact Tree Methods) A threshold for deciding whether XGBoost should use one-hot encoding based splits for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories will be partitioned into children nodes.

Version added: 1.6.0

max_cat_threshold

(for Non-Exact Tree Methods) Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting.

Version added: 1.7.0

sample_type

(for Dart Booster) (default= "uniform") Type of sampling algorithm.

  • "uniform": dropped trees are selected uniformly.

  • "weighted": dropped trees are selected in proportion to weight.

normalize_type

(for Dart Booster) (default= "tree") Type of normalization algorithm.

  • "tree": new trees have the same weight of each of dropped trees.

    • Weight of new trees are 1 / (k + learning_rate).

    • Dropped trees are scaled by a factor of k / (k + learning_rate).

  • "forest": new trees have the same weight of sum of dropped trees (forest).

    • Weight of new trees are 1 / (1 + learning_rate).

    • Dropped trees are scaled by a factor of 1 / (1 + learning_rate).

rate_drop

(for Dart Booster) (default=0.0) Dropout rate (a fraction of previous trees to drop during the dropout).

range: \([0.0, 1.0]\)

one_drop

(for Dart Booster) (default=0) When this flag is enabled, at least one tree is always dropped during the dropout (allows Binomial-plus-one or epsilon-dropout from the original DART paper).

skip_drop

(for Dart Booster) (default=0.0) Probability of skipping the dropout procedure during a boosting iteration.

  • If a dropout is skipped, new trees are added in the same manner as "gbtree".

  • Note that non-zero skip_drop has higher priority than rate_drop or one_drop.

range: \([0.0, 1.0]\)
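
A minimal sketch of using the DART booster with the dropout parameters described above (values are illustrative only):

data(mtcars)
model_dart <- xgboost(
  mtcars[, -1],
  mtcars$mpg,
  booster = "dart",
  rate_drop = 0.1,   # drop ~10% of previous trees at each iteration
  skip_drop = 0.5,   # skip the dropout half of the time
  nthreads = 1,
  nrounds = 10
)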

feature_selector

(for Linear Booster) (default= "cyclic") Feature selection and ordering method

  • "cyclic": Deterministic selection by cycling through features one at a time.

  • "shuffle": Similar to "cyclic" but with random feature shuffling prior to each update.

  • "random": A random (with replacement) coordinate selector.

  • "greedy": Select coordinate with the greatest gradient magnitude. It has O(num_feature^2) complexity. It is fully deterministic. It allows restricting the selection to top_k features per group with the largest magnitude of univariate weight change, by setting the top_k parameter. Doing so would reduce the complexity to O(num_feature*top_k).

  • "thrifty": Thrifty, approximately-greedy feature selector. Prior to cyclic updates, reorders features in descending magnitude of their univariate weight changes. This operation is multithreaded and is a linear complexity approximation of the quadratic greedy selection. It allows restricting the selection to top_k features per group with the largest magnitude of univariate weight change, by setting the top_k parameter.

top_k

(for Linear Booster) (default=0) The number of top features to select when using the "greedy" and "thrifty" feature selectors. A value of 0 means using all features.
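
A minimal sketch of a linear booster restricted to the top features via the "thrifty" selector; it is paired here with the "coord_descent" updater on the assumption that this selector is used with coordinate descent (values are illustrative):

data(mtcars)
model_lin <- xgboost(
  mtcars[, -1],
  mtcars$mpg,
  booster = "gblinear",
  updater = "coord_descent",
  feature_selector = "thrifty",
  top_k = 3,              # restrict updates to the 3 most promising features
  nthreads = 1,
  nrounds = 10
)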

tweedie_variance_power

(for Tweedie Regression ("objective=reg:tweedie")) (default=1.5)

  • Parameter that controls the variance of the Tweedie distribution var(y) ~ E(y)^tweedie_variance_power

  • range: \((1,2)\)

  • Set closer to 2 to shift towards a gamma distribution

  • Set closer to 1 to shift towards a Poisson distribution.
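
A minimal sketch of Tweedie regression on a strictly positive outcome (the variance power is chosen only for illustration):

data(mtcars)
model_tweedie <- xgboost(
  mtcars[, c("cyl", "hp", "wt")],
  mtcars$mpg,
  objective = "reg:tweedie",
  tweedie_variance_power = 1.3,
  nthreads = 1,
  nrounds = 5
)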

huber_slope

(for using Pseudo-Huber ("reg:pseudohubererror")) (default = 1.0) A parameter used for Pseudo-Huber loss to define the \(\delta\) term.

quantile_alpha

(for using Quantile Loss ("reg:quantileerror")) A scalar or a list of targeted quantiles (passed as a numeric vector).

Version added: 2.0.0
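
A minimal sketch of quantile regression for several quantiles at once (the prediction output is expected to contain one value per requested quantile):

data(mtcars)
model_q <- xgboost(
  mtcars[, -1],
  mtcars$mpg,
  objective = "reg:quantileerror",
  quantile_alpha = c(0.1, 0.5, 0.9),
  nthreads = 1,
  nrounds = 5
)
predict(model_q, head(mtcars[, -1]))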

aft_loss_distribution

(for using AFT Survival Loss ("survival:aft") and Negative Log Likelihood of AFT metric ("aft-nloglik")) Probability Density Function, "normal", "logistic", or "extreme".
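
A minimal sketch of an AFT survival fit, assuming the 'survival' package is installed (columns and distribution chosen only for illustration); a right-censored Surv outcome automatically selects the survival:aft objective:

library(survival)
# Keep rows with complete predictor values from the 'lung' dataset
lung_cc <- lung[complete.cases(lung[, c("age", "sex", "ph.ecog")]), ]
y_surv <- Surv(lung_cc$time, lung_cc$status == 2)   # status == 2 means event observed
model_aft <- xgboost(
  lung_cc[, c("age", "sex", "ph.ecog")],
  y_surv,
  aft_loss_distribution = "logistic",
  nthreads = 1,
  nrounds = 5
)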

...

Not used.

Some arguments that were part of this function in previous XGBoost versions are currently deprecated or have been renamed. If a deprecated or renamed argument is passed, will throw a warning (by default) and use its current equivalent instead. This warning will become an error if using the 'strict mode' option.

If some additional argument is passed that is neither a current function argument nor a deprecated or renamed argument, a warning or error will be thrown depending on the 'strict mode' option.

Important: ... will be removed in a future version, and all the current deprecation warnings will become errors. Please use only arguments that form part of the function signature.

Value

A model object, inheriting from both xgboost and xgb.Booster. Compared to the regular xgb.Booster model class produced by xgb.train(), this xgboost class will have an additional attribute metadata containing information which is used for formatting prediction outputs, such as class names for classification problems.

Details

For package authors using 'xgboost' as a dependency, it is highly recommended to use xgb.train() in package code instead of xgboost(), since it has a more stable interface and performs fewer data conversions and copies along the way.

References

  • Chen, Tianqi, and Carlos Guestrin. "XGBoost: A Scalable Tree Boosting System." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.

  • https://xgboost.readthedocs.io/en/stable/

Examples

data(mtcars)

# Fit a small regression model on the mtcars data
model_regression <- xgboost(mtcars[, -1], mtcars$mpg, nthreads = 1, nrounds = 3)
predict(model_regression, mtcars, validate_features = TRUE)

# Task objective is determined automatically according to the type of 'y'
data(iris)
model_classif <- xgboost(iris[, -5], iris$Species, nthreads = 1, nrounds = 5)
predict(model_classif, iris[1:10,])
predict(model_classif, iris[1:10,], type = "class")

# Can nevertheless choose a non-default objective if needed
model_poisson <- xgboost(
  mtcars[, -1], mtcars$mpg,
  objective = "count:poisson",
  nthreads = 1,
  nrounds = 3
)

# Can calculate evaluation metrics during boosting rounds
data(ToothGrowth)
xgboost(
  ToothGrowth[, c("len", "dose")],
  ToothGrowth$supp,
  eval_metric = c("auc", "logloss"),
  eval_set = 0.2,
  monitor_training = TRUE,
  verbosity = 1,
  nthreads = 1,
  nrounds = 3
)