Python API Reference

This page gives the Python API reference of xgboost. Please also refer to Python Package Introduction for more information about the Python package.

Global Configuration

xgboost.config_context(**new_config)

Context manager for global XGBoost configuration.

Global configuration consists of a collection of parameters that can be applied in the global scope. See Global Configuration for the full list of parameters supported in the global configuration.

Note

All settings, not just those presently modified, will be returned to their previous values when the context manager is exited. This is not thread-safe.

Added in version 1.4.0.

Parameters:

new_config (Dict[str, Any]) – Keyword arguments representing the parameters and their values

Return type:

Iterator[None]

Example

import xgboost as xgb

# Show all messages, including ones pertaining to debugging
xgb.set_config(verbosity=2)

# Get current value of global configuration
# This is a dict containing all parameters in the global configuration,
# including 'verbosity'
config = xgb.get_config()
assert config['verbosity'] == 2

# Example of using the context manager xgb.config_context().
# The context manager will restore the previous value of the global
# configuration upon exiting.
with xgb.config_context(verbosity=0):
    # Suppress warning caused by model generated with XGBoost version < 1.0.0
    bst = xgb.Booster(model_file='./old_model.bin')
assert xgb.get_config()['verbosity'] == 2  # old value restored

Nested configuration context is also supported:

Example

with xgb.config_context(verbosity=3):
    assert xgb.get_config()["verbosity"] == 3
    with xgb.config_context(verbosity=2):
        assert xgb.get_config()["verbosity"] == 2

xgb.set_config(verbosity=2)
assert xgb.get_config()["verbosity"] == 2
with xgb.config_context(verbosity=3):
    assert xgb.get_config()["verbosity"] == 3

See also

set_config

Set global XGBoost configuration

get_config

Get current values of the global configuration

xgboost.set_config(**new_config)

Set global configuration.

Global configuration consists of a collection of parameters that can be applied in the global scope. See Global Configuration for the full list of parameters supported in the global configuration.

Added in version 1.4.0.

Parameters:

new_config (Dict[str, Any]) – Keyword arguments representing the parameters and their values

Return type:

None

Example

import xgboost as xgb

# Show all messages, including ones pertaining to debugging
xgb.set_config(verbosity=2)

# Get current value of global configuration
# This is a dict containing all parameters in the global configuration,
# including 'verbosity'
config = xgb.get_config()
assert config['verbosity'] == 2

# Example of using the context manager xgb.config_context().
# The context manager will restore the previous value of the global
# configuration upon exiting.
with xgb.config_context(verbosity=0):
    # Suppress warning caused by model generated with XGBoost version < 1.0.0
    bst = xgb.Booster(model_file='./old_model.bin')
assert xgb.get_config()['verbosity'] == 2  # old value restored

Nested configuration context is also supported:

Example

with xgb.config_context(verbosity=3):
    assert xgb.get_config()["verbosity"] == 3
    with xgb.config_context(verbosity=2):
        assert xgb.get_config()["verbosity"] == 2

xgb.set_config(verbosity=2)
assert xgb.get_config()["verbosity"] == 2
with xgb.config_context(verbosity=3):
    assert xgb.get_config()["verbosity"] == 3

xgboost.get_config()

Get current values of the global configuration.

Global configuration consists of a collection of parameters that can be applied in the global scope. See Global Configuration for the full list of parameters supported in the global configuration.

Added in version 1.4.0.

Returns:

args – The global parameters and their values

Return type:

Dict[str, Any]

Example

import xgboost as xgb

# Show all messages, including ones pertaining to debugging
xgb.set_config(verbosity=2)

# Get current value of global configuration
# This is a dict containing all parameters in the global configuration,
# including 'verbosity'
config = xgb.get_config()
assert config['verbosity'] == 2

# Example of using the context manager xgb.config_context().
# The context manager will restore the previous value of the global
# configuration upon exiting.
with xgb.config_context(verbosity=0):
    # Suppress warning caused by model generated with XGBoost version < 1.0.0
    bst = xgb.Booster(model_file='./old_model.bin')
assert xgb.get_config()['verbosity'] == 2  # old value restored

Nested configuration context is also supported:

Example

with xgb.config_context(verbosity=3):
    assert xgb.get_config()["verbosity"] == 3
    with xgb.config_context(verbosity=2):
        assert xgb.get_config()["verbosity"] == 2

xgb.set_config(verbosity=2)
assert xgb.get_config()["verbosity"] == 2
with xgb.config_context(verbosity=3):
    assert xgb.get_config()["verbosity"] == 3

Core Data Structure

Core XGBoost Library.

class xgboost.DMatrix(data, label=None, *, weight=None, base_margin=None, missing=None, silent=False, feature_names=None, feature_types=None, nthread=None, group=None, qid=None, label_lower_bound=None, label_upper_bound=None, feature_weights=None, enable_categorical=False, data_split_mode=DataSplitMode.ROW)

Bases: object

Data Matrix used in XGBoost.

DMatrix is an internal data structure that is used by XGBoost, which is optimized for both memory efficiency and training speed. You can construct DMatrix from multiple different sources of data.

Parameters:
  • data (Any) – Data source of DMatrix. See Supported data structures for various XGBoost functions for a list of supported input types.

  • label (Any | None) – Label of the training data.

  • weight (Any | None) –

    Weight for each instance.

    Note

    For ranking task, weights are per-group. In ranking task, one weight is assigned to each group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • missing (float | None) – Value in the input data which is to be treated as missing. If None, defaults to np.nan.

  • silent (bool) – Whether to print messages during construction.

  • feature_names (Sequence[str] | None) – Set names for features.

  • feature_types (Sequence[str] | None) –

    Set types for features. If data is a DataFrame type and passing enable_categorical=True, the types will be deduced automatically from the column types.

    Otherwise, one can pass a list-like input with the same length as number of columns in data, with the following possible values:

    • ”c”, which represents categorical columns.

    • ”q”, which represents numeric columns.

    • ”int”, which represents integer columns.

    • ”i”, which represents boolean columns.

    Note that, while categorical types are treated differently from the rest for model fitting purposes, the other types do not influence the generated model, but have effects in other functionalities such as feature importances.

    For categorical features, the input is assumed to be preprocessed and encoded by the users. The encoding can be done via sklearn.preprocessing.OrdinalEncoder or pandas dataframe .cat.codes method. This is useful when users want to specify categorical features without having to construct a dataframe as input.

  • nthread (int | None) – Number of threads to use for loading data when parallelization is applicable. If -1, uses maximum threads available on the system.

  • group (Any | None) – Group size for all ranking groups.

  • qid (Any | None) – Query ID for data samples, used for ranking.

  • label_lower_bound (Any | None) – Lower bound for survival training.

  • label_upper_bound (Any | None) – Upper bound for survival training.

  • feature_weights (Any | None) – Set feature weights for column sampling.

  • enable_categorical (bool) –

    Added in version 1.3.0.

    Note

    This parameter is experimental

    Experimental support of specializing for categorical features.

    If passing ‘True’ and ‘data’ is a data frame (from supported libraries such as Pandas, Modin or cuDF), columns of categorical types will automatically be set to be of categorical type (feature_type=’c’) in the resulting DMatrix.

    If passing ‘False’ and ‘data’ is a data frame with categorical columns, it will result in an error being thrown.

    If ‘data’ is not a data frame, this argument is ignored.

    JSON/UBJSON serialization format is required for this.

  • data_split_mode (DataSplitMode)
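
For illustration, a minimal sketch of constructing a DMatrix from in-memory NumPy data (the arrays and feature names are made up for the example):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((100, 4))          # 100 rows, 4 features
y = rng.integers(0, 2, size=100)  # binary labels

# Basic construction with labels, per-instance weights and feature names.
dtrain = xgb.DMatrix(
    X,
    label=y,
    weight=np.ones(100),
    feature_names=["f0", "f1", "f2", "f3"],
)
print(dtrain.num_row(), dtrain.num_col())  # 100 4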

data_split_mode()

Get the data split mode of the DMatrix.

Added in version 2.1.0.

Return type:

DataSplitMode

property feature_names: Sequence[str] | None

Labels for features (column labels).

Setting it to None resets existing feature names.

property feature_types: Sequence[str] | None

Type of features (column types).

This is for displaying the results and categorical data support. See DMatrix for details.

Setting it to None resets existing feature types.

get_base_margin()

Get the base margin of the DMatrix.

Return type:

base_margin

get_data()

Get the predictors from DMatrix as a CSR matrix. This getter is mostly for testing purposes. If this is a quantized DMatrix then quantized values are returned instead of input values.

Added in version 1.7.0.

Return type:

csr_matrix

get_float_info(field)

Get float property from the DMatrix.

Parameters:

field (str) – The field name of the information

Returns:

info – a numpy array of float information of the data

Return type:

array

get_group()

Get the group of the DMatrix.

Return type:

group

get_label()

Get the label of the DMatrix.

Returns:

label

Return type:

array

get_quantile_cut()

Get quantile cuts for quantization.

Added in version 2.0.0.

Return type:

Tuple[ndarray, ndarray]

get_uint_info(field)

Get unsigned integer property from the DMatrix.

Parameters:

field (str) – The field name of the information

Returns:

info – a numpy array of unsigned integer information of the data

Return type:

array

get_weight()

Get the weight of the DMatrix.

Returns:

weight

Return type:

array

num_col()

Get the number of columns (features) in the DMatrix.

Return type:

int

num_nonmissing()

Get the number of non-missing values in the DMatrix.

Added in version 1.7.0.

Return type:

int

num_row()

Get the number of rows in the DMatrix.

Return type:

int

save_binary(fname, silent=True)

Save DMatrix to an XGBoost buffer. Saved binary can be later loaded by providing the path to xgboost.DMatrix() as input.

Parameters:
  • fname (string or os.PathLike) – Name of the output buffer file.

  • silent (bool (optional; default: True)) – If set, the output is suppressed.

Return type:

None

set_base_margin(margin)

Set base margin of booster to start from.

This can be used to specify a prediction value of an existing model to be the base_margin. However, remember that the margin is needed instead of the transformed prediction, e.g. for logistic regression the value before the logistic transformation must be supplied. See also example/demo.py.

Parameters:

margin (array like) – Prediction margin of each datapoint

Return type:

None

set_float_info(field, data)

Set float type property into the DMatrix.

Parameters:
  • field (str) – The field name of the information

  • data (numpy array) – The array of data to be set

Return type:

None

set_float_info_npy2d(field, data)

Set float type property into the DMatrix for numpy 2d array input.

Parameters:
  • field (str) – The field name of the information

  • data (numpy array) – The array of data to be set

Return type:

None

set_group(group)

Set group size of DMatrix (used for ranking).

Parameters:

group (array like) – Group size of each group

Return type:

None

set_info(*, label=None, weight=None, base_margin=None, group=None, qid=None, label_lower_bound=None, label_upper_bound=None, feature_names=None, feature_types=None, feature_weights=None)

Set meta info for DMatrix. See doc string for xgboost.DMatrix.

Parameters:
  • label (Any | None)

  • weight (Any | None)

  • base_margin (Any | None)

  • group (Any | None)

  • qid (Any | None)

  • label_lower_bound (Any | None)

  • label_upper_bound (Any | None)

  • feature_names (Sequence[str] | None)

  • feature_types (Sequence[str] | None)

  • feature_weights (Any | None)

Return type:

None

set_label(label)

Set label of the DMatrix.

Parameters:

label (array like) – The label information to be set into DMatrix

Return type:

None

set_uint_info(field, data)

Set uint type property into the DMatrix.

Parameters:
  • field (str) – The field name of the information

  • data (numpy array) – The array of data to be set

Return type:

None

set_weight(weight)

Set weight of each instance.

Parameters:

weight (array like) –

Weight for each data point

Note

For ranking task, weights are per-group.

In ranking task, one weight is assigned to each group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.

Return type:

None

slice(rindex, allow_groups=False)

Slice the DMatrix and return a new DMatrix that only contains rindex.

Parameters:
  • rindex (List[int] | ndarray) – List of indices to be selected.

  • allow_groups (bool) – Allow slicing of a matrix with a groups attribute

Returns:

A new DMatrix containing only selected indices.

Return type:

res
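
A small, self-contained sketch of slice() (the data is illustrative):

import numpy as np
import xgboost as xgb

dmat = xgb.DMatrix(np.arange(20.0).reshape(10, 2), label=np.zeros(10))
dsub = dmat.slice([0, 1, 2, 5])  # keep only the four selected rows
assert dsub.num_row() == 4 and dsub.num_col() == dmat.num_col()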

class xgboost.QuantileDMatrix(data, label=None, *, weight=None, base_margin=None, missing=None, silent=False, feature_names=None, feature_types=None, nthread=None, max_bin=None, ref=None, group=None, qid=None, label_lower_bound=None, label_upper_bound=None, feature_weights=None, enable_categorical=False, data_split_mode=DataSplitMode.ROW)

Bases: DMatrix

A DMatrix variant that generates quantilized data directly from input for the hist tree method. This DMatrix is primarily designed to save memory in training by avoiding intermediate storage. Set max_bin to control the number of bins during quantisation, which should be consistent with the training parameter max_bin. When QuantileDMatrix is used for a validation/test dataset, ref should be another QuantileDMatrix (or DMatrix, but not recommended as it defeats the purpose of saving memory) constructed from the training dataset. See xgboost.DMatrix for documents on meta info.

Note

Do not use QuantileDMatrix as validation/test dataset without supplying a reference (the training dataset) QuantileDMatrix using ref as some information may be lost in quantisation.

Added in version 1.7.0.

Parameters:
  • max_bin (int | None) – The number of histogram bins; should be consistent with the training parameter max_bin.

  • ref (DMatrix | None) – The training dataset that provides quantile information, needed when creating a validation/test dataset with QuantileDMatrix. Supplying the training DMatrix as a reference means that the same quantisation applied to the training data is applied to the validation/test data.

  • data (Any) – Data source of DMatrix. See Supported data structures for various XGBoost functions for a list of supported input types.

  • label (Any | None) – Label of the training data.

  • weight (Any | None) –

    Weight for each instance.

    Note

    For ranking task, weights are per-group. In ranking task, one weight is assigned to each group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • missing (float | None) – Value in the input data which is to be treated as missing. If None, defaults to np.nan.

  • silent (bool) – Whether to print messages during construction.

  • feature_names (Sequence[str] | None) – Set names for features.

  • feature_types (Sequence[str] | None) –

    Set types for features. If data is a DataFrame type and passing enable_categorical=True, the types will be deduced automatically from the column types.

    Otherwise, one can pass a list-like input with the same length as number of columns in data, with the following possible values:

    • ”c”, which represents categorical columns.

    • ”q”, which represents numeric columns.

    • ”int”, which represents integer columns.

    • ”i”, which represents boolean columns.

    Note that, while categorical types are treated differently from the rest for model fitting purposes, the other types do not influence the generated model, but have effects in other functionalities such as feature importances.

    For categorical features, the input is assumed to be preprocessed and encoded by the users. The encoding can be done via sklearn.preprocessing.OrdinalEncoder or pandas dataframe .cat.codes method. This is useful when users want to specify categorical features without having to construct a dataframe as input.

  • nthread (int | None) – Number of threads to use for loading data when parallelization is applicable. If -1, uses maximum threads available on the system.

  • group (Any | None) – Group size for all ranking groups.

  • qid (Any | None) – Query ID for data samples, used for ranking.

  • label_lower_bound (Any | None) – Lower bound for survival training.

  • label_upper_bound (Any | None) – Upper bound for survival training.

  • feature_weights (Any | None) – Set feature weights for column sampling.

  • enable_categorical (bool) –

    Added in version 1.3.0.

    Note

    This parameter is experimental

    Experimental support of specializing for categorical features.

    If passing ‘True’ and ‘data’ is a data frame (from supported libraries such as Pandas, Modin or cuDF), columns of categorical types will automatically be set to be of categorical type (feature_type=’c’) in the resulting DMatrix.

    If passing ‘False’ and ‘data’ is a data frame with categorical columns, it will result in an error being thrown.

    If ‘data’ is not a data frame, this argument is ignored.

    JSON/UBJSON serialization format is required for this.

  • data_split_mode (DataSplitMode)
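
A hedged sketch of building a training QuantileDMatrix and a validation one that reuses its quantile cuts through ref (the arrays and parameter values are illustrative):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X_train, y_train = rng.random((500, 8)), rng.random(500)
X_valid, y_valid = rng.random((100, 8)), rng.random(100)

Xy_train = xgb.QuantileDMatrix(X_train, label=y_train, max_bin=256)
# The validation matrix shares the quantisation of the training matrix.
Xy_valid = xgb.QuantileDMatrix(X_valid, label=y_valid, ref=Xy_train)

booster = xgb.train(
    {"tree_method": "hist", "max_bin": 256},
    Xy_train,
    num_boost_round=10,
    evals=[(Xy_valid, "valid")],
)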

class xgboost.Booster(params=None, cache=None, model_file=None)

Bases: object

A Booster of XGBoost.

Booster is the model of xgboost, that contains low level routines for training, prediction and evaluation.

Parameters:
attr(key)

Get attribute string from the Booster.

Parameters:

key (str) – The key to get attribute from.

Returns:

The attribute value of the key; returns None if the attribute does not exist.

Return type:

value

attributes()

Get attributes stored in the Booster as a dictionary.

Returns:

result – Returns an empty dict if there are no attributes.

Return type:

dictionary of attribute_name: attribute_value pairs of strings.

property best_iteration: int

The best iteration during training.

property best_score: float

The best evaluation score during training.

boost(dtrain, iteration, grad, hess)

Boost the booster for one iteration with customized gradient statistics. Like xgboost.Booster.update(), this function should not be called directly by users.

Parameters:
  • dtrain (DMatrix) – The training DMatrix.

  • grad (Any) – The first order gradient.

  • hess (Any) – The second order gradient (hessian).

  • iteration (int)

Return type:

None

copy()

Copy the booster object.

Returns:

A copied booster model

Return type:

booster

dump_model(fout, fmap='', with_stats=False, dump_format='text')

Dump model into a text or JSON file. Unlike save_model(), the output format is primarily used for visualization or interpretation, hence it’s more human readable but cannot be loaded back to XGBoost.

Parameters:
  • fout (str | PathLike) – Output file name.

  • fmap (str | PathLike) – Name of the file containing feature map names.

  • with_stats (bool) – Controls whether the split statistics are output.

  • dump_format (str) – Format of model dump file. Can be ‘text’ or ‘json’.

Return type:

None

eval(data, name='eval', iteration=0)

Evaluate the model on the given data.

Parameters:
  • data (DMatrix) – The dmatrix storing the input.

  • name (str) – The name of the dataset.

  • iteration (int) – The current iteration number.

Returns:

result – Evaluation result string.

Return type:

str

eval_set(evals, iteration=0, feval=None, output_margin=True)

Evaluate a set of data.

Parameters:
Returns:

result – Evaluation result string.

Return type:

str

property feature_names: Sequence[str] | None

Feature names for this booster. Can be directly set by input data or by assignment.

property feature_types: Sequence[str] | None

Feature types for this booster. Can be directly set by input data or by assignment. See DMatrix for details.

get_dump(fmap='', with_stats=False, dump_format='text')

Returns the model dump as a list of strings. Unlike save_model(), the output format is primarily used for visualization or interpretation, hence it’s more human readable but cannot be loaded back to XGBoost.

Parameters:
  • fmap (str | PathLike) – Name of the file containing feature map names.

  • with_stats (bool) – Controls whether the split statistics are output.

  • dump_format (str) – Format of model dump. Can be ‘text’, ‘json’ or ‘dot’.

Return type:

List[str]

get_fscore(fmap='')

Get feature importance of each feature.

Note

Zero-importance features will not be included

Keep in mind that this function does not include zero-importance features, i.e. those features that have not been used in any split condition.

Parameters:

fmap (str | PathLike) – The name of feature map file

Return type:

Dict[str, float | List[float]]

get_score(fmap='', importance_type='weight')

Get feature importance of each feature. For tree models, the importance type can be defined as:

  • ‘weight’: the number of times a feature is used to split the data across all trees.

  • ‘gain’: the average gain across all splits the feature is used in.

  • ‘cover’: the average coverage across all splits the feature is used in.

  • ‘total_gain’: the total gain across all splits the feature is used in.

  • ‘total_cover’: the total coverage across all splits the feature is used in.

Note

For linear models, only “weight” is defined, and it is the normalized coefficients without bias.

Note

Zero-importance features will not be included

Keep in mind that this function does not include zero-importance features, i.e. those features that have not been used in any split condition.

Parameters:
  • fmap (str | PathLike) – The name of feature map file.

  • importance_type (str) – One of the importance types defined above.

Returns:

A map between feature names and their scores. When gblinear is used for multi-class classification, the scores for each feature are a list with length n_classes; otherwise they are scalars.

Return type:

Dict[str, float | List[float]]
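
For illustration, a minimal sketch of querying feature importances from a freshly trained booster (the data and parameters are made up):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((200, 3)), label=rng.random(200),
                     feature_names=["f0", "f1", "f2"])
bst = xgb.train({"max_depth": 3}, dtrain, num_boost_round=5)

# Split counts versus average gain per split; zero-importance features are omitted.
print(bst.get_score(importance_type="weight"))
print(bst.get_score(importance_type="gain"))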

get_split_value_histogram(feature, fmap='', bins=None, as_pandas=True)

Get split value histogram of a feature

Parameters:
  • feature (str) – The name of the feature.

  • fmap (PathLike | str) – The name of feature map file.

  • bins (int | None) – The maximum number of bins. The number of bins equals the number of unique split values n_unique if bins == None or bins > n_unique.

  • as_pandas (bool) – Return pd.DataFrame when pandas is installed. If False or pandas is not installed, return numpy ndarray.

Returns:

  • a histogram of used splitting values for the specified feature

  • either as numpy array or pandas DataFrame.

Return type:

ndarray | DataFrame

inplace_predict(data, iteration_range=(0, 0), predict_type='value', missing=nan, validate_features=True, base_margin=None, strict_shape=False)

Run prediction in-place when possible. Unlike the predict() method, in-place prediction does not cache the prediction result.

Calling only inplace_predict in multiple threads is safe and lock-free, but the safety does not hold when it is used in conjunction with other methods. For example, you can’t train the booster in one thread and perform prediction in another.

Note

If the device ordinal of the input data doesn’t match the one configured for the booster, data will be copied to the booster device.

booster.set_param({"device": "cuda:0"})
booster.inplace_predict(cupy_array)

booster.set_param({"device": "cpu"})
booster.inplace_predict(numpy_array)

Added in version 1.1.0.

Parameters:
Returns:

prediction – The prediction result. When input data is on GPU, prediction result is stored in a cupy array.

Return type:

numpy.ndarray/cupy.ndarray
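
A minimal sketch of in-place prediction on CPU data, without constructing a DMatrix for inference (the data is illustrative):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((200, 3)), rng.random(200)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=5)

preds = booster.inplace_predict(X)                           # transformed predictions
margins = booster.inplace_predict(X, predict_type="margin")  # raw margins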

load_config(config)

Load configuration returned by save_config.

Added in version 1.0.0.

Parameters:

config (str)

Return type:

None

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw)

Return type:

None

num_boosted_rounds()

Get number of boosted rounds. For gblinear this is reset to 0 after serializing the model.

Return type:

int

num_features()

Number of features in booster.

Return type:

int

predict(data, output_margin=False, pred_leaf=False, pred_contribs=False, approx_contribs=False, pred_interactions=False, validate_features=True, training=False, iteration_range=(0, 0), strict_shape=False)

Predict with data. The full model will be used unless iteration_range is specified, meaning the user has to either slice the model or use the best_iteration attribute to get predictions from the best model returned by early stopping.

Note

See Prediction for issues like thread safety and a summary of outputs from this function.

Parameters:
  • data (DMatrix) – The dmatrix storing the input.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • pred_leaf (bool) – When this option is on, the output will be a matrix of (nsample, ntrees) with each record indicating the predicted leaf index of each sample in each tree. Note that the leaf index of a tree is unique per tree, so you may find leaf 1 in both tree 1 and tree 0.

  • pred_contribs (bool) – When this is True the output will be a matrix of size (nsample, nfeats + 1) with each record indicating the feature contributions (SHAP values) for that prediction. The sum of all feature contributions is equal to the raw untransformed margin value of the prediction. Note the final column is the bias term.

  • approx_contribs (bool) – Approximate the contributions of each feature. Used when pred_contribs or pred_interactions is set to True. Changing the default of this parameter (False) is not recommended.

  • pred_interactions (bool) – When this is True the output will be a matrix of size (nsample, nfeats + 1, nfeats + 1) indicating the SHAP interaction values for each pair of features. The sum of each row (or column) of the interaction values equals the corresponding SHAP value (from pred_contribs), and the sum of the entire matrix equals the raw untransformed margin value of the prediction. Note the last row and column correspond to the bias term.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • training (bool) –

    Whether the prediction value is used for training. This can affect the dart booster, which performs dropouts during training iterations but uses all trees for inference. If you want to obtain results with dropouts, set this parameter to True. Also, the parameter is set to True when obtaining predictions for a custom objective function.

    Added in version 1.0.0.

  • iteration_range (Tuple[int | integer, int | integer]) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during the [10, 20) (half-open set) rounds are used in this prediction.

    Added in version 1.4.0.

  • strict_shape (bool) –

    When set to True, output shape is invariant to whether classification is used. For both value and margin prediction, the output shape is (n_samples, n_groups), with n_groups == 1 when multi-class is not used. Defaults to False, in which case the output shape can be (n_samples,) if multi-class is not used.

    Added in version 1.4.0.

Returns:

prediction

Return type:

numpy array
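
A hedged sketch of a few common predict() variants (the data and the number of rounds are illustrative):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((200, 4)), rng.integers(0, 2, size=200)
bst = xgb.train({"objective": "binary:logistic"}, xgb.DMatrix(X, label=y), num_boost_round=20)

dtest = xgb.DMatrix(rng.random((5, 4)))
probs = bst.predict(dtest)                               # transformed predictions
margins = bst.predict(dtest, output_margin=True)         # raw untransformed margins
shap = bst.predict(dtest, pred_contribs=True)            # shape (5, 4 + 1); last column is the bias term
first_10 = bst.predict(dtest, iteration_range=(0, 10))   # use only the first 10 rounds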

save_config()

Output internal parameter configuration of Booster as a JSON string.

Added in version 1.0.0.

Return type:

str

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

save_raw(raw_format='ubj')

Save the model to an in-memory buffer representation instead of a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

Parameters:

raw_format (str) – Format of output buffer. Can be json, ubj or deprecated.

Return type:

An in-memory buffer representation of the model
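
A small sketch of a save_raw() / load_model() round trip through memory (the toy model is illustrative):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
bst = xgb.train({}, xgb.DMatrix(rng.random((50, 2)), label=rng.random(50)), num_boost_round=2)

raw = bst.save_raw(raw_format="ubj")  # bytearray holding the serialized model
bst2 = xgb.Booster()
bst2.load_model(raw)                  # load_model also accepts an in-memory buffer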

set_attr(**kwargs)

Set the attribute of the Booster.

Parameters:

**kwargs (Any | None) – The attributes to set. Setting a value to None deletes an attribute.

Return type:

None

set_param(params, value=None)

Set parameters into the Booster.

Parameters:
  • params (Dict | Iterable[Tuple[str, Any]] | str) – list of key,value pairs, dict of key to value or simply str key

  • value (str | None) – value of the specified parameter, when params is str key

Return type:

None

trees_to_dataframe(fmap='')

Parse a boosted tree model text dump into a pandas DataFrame structure.

This feature is only defined when the decision tree model is chosen as base learner (booster in {gbtree, dart}). It is not defined for other base learner types, such as linear learners (booster=gblinear).

Parameters:

fmap (str | PathLike) – The name of feature map file.

Return type:

DataFrame

update(dtrain, iteration, fobj=None)

Update for one iteration, with objective function calculated internally. This function should not be called directly by users.

Parameters:
Return type:

None

class xgboost.DataIter(cache_prefix=None, release_data=True)

Bases: ABC

The interface for user-defined data iterators. The iterator facilitates distributed training, QuantileDMatrix, and external memory support using DMatrix. Most of the time, users don’t need to interact with this class directly.

Note

The class caches some intermediate results using the data input (predictor X) as the key. Don’t repeat the same X for multiple batches with different meta data (like label); make a copy if necessary.

Parameters:
  • cache_prefix (str | None) – Prefix to the cache files, only used in external memory.

  • release_data (bool) – Whether the iterator should release the data during iteration. Set it to True if the data transformation (converting data to np.float32 type) is memory intensive. Otherwise, if the transformation is computation intensive then we can keep the cache.
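
A minimal sketch of a user-defined iterator feeding two in-memory batches into a QuantileDMatrix, following the next()/reset() contract described below (the batch data is illustrative):

import numpy as np
import xgboost as xgb

class InMemoryIter(xgb.DataIter):
    def __init__(self, batches):
        self._batches = batches  # list of (X, y) tuples
        self._it = 0
        super().__init__(cache_prefix=None)

    def next(self, input_data):
        # Return 0 when there is no more batch, otherwise feed one batch and return 1.
        if self._it == len(self._batches):
            return 0
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1

    def reset(self):
        self._it = 0

rng = np.random.default_rng(0)
batches = [(rng.random((100, 4)), rng.random(100)) for _ in range(2)]
Xy = xgb.QuantileDMatrix(InMemoryIter(batches))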

get_callbacks(enable_categorical)

Get callback functions for iterating in C. This is an internal function.

Parameters:

enable_categorical (bool)

Return type:

Tuple[Callable, Callable]

abstract next(input_data)

Set the next batch of data.

Parameters:

input_data (Callable) – A function with same data fields like data, label with xgboost.DMatrix.

Return type:

0 if there’s no more batch, otherwise 1.

property proxy: _ProxyDMatrix

Handle of DMatrix proxy.

reraise()

Reraise the exception thrown during iteration.

Return type:

None

abstract reset()

Reset the data iterator. Prototype for user defined function.

Return type:

None

Learning API

Training Library containing training routines.

xgboost.train(params, dtrain, num_boost_round=10, *, evals=None, obj=None, feval=None, maximize=None, early_stopping_rounds=None, evals_result=None, verbose_eval=True, xgb_model=None, callbacks=None, custom_metric=None)

Train a booster with given parameters.

Parameters:
  • params (Dict[str, Any]) – Booster params.

  • dtrain (DMatrix) – Data to be trained.

  • num_boost_round (int) – Number of boosting iterations.

  • evals (Sequence[Tuple[DMatrix, str]] | None) – List of validation sets for which metrics will be evaluated during training. Validation metrics will help us track the performance of the model.

  • obj (Callable[[ndarray, DMatrix], Tuple[ndarray, ndarray]] | None) – Custom objective function. See Custom Objective for details.

  • feval (Callable[[ndarray, DMatrix], Tuple[str, float]] | None) –

    Deprecated since version 1.6.0: Use custom_metric instead.

  • maximize (bool | None) – Whether to maximize feval.

  • early_stopping_rounds (int | None) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in evals. The method returns the model from the last iteration (not the best one). Use custom callback or model slicing if the best model is desired. If there’s more than one item in evals, the last entry will be used for early stopping. If there’s more than one metric in the eval_metric parameter given in params, the last metric will be used for early stopping. If early stopping occurs, the model will have two additional fields: bst.best_score, bst.best_iteration.

  • evals_result (Dict[str, Dict[str, List[float] | List[Tuple[float, float]]]] | None) –

    This dictionary stores the evaluation results of all the items in watchlist.

    Example: with a watchlist containing [(dtest,'eval'), (dtrain,'train')] and a parameter containing {'eval_metric': 'logloss'}, the evals_result returns

    {'train': {'logloss': ['0.48253', '0.35953']},
     'eval': {'logloss': ['0.480385', '0.357756']}}
    

  • verbose_eval (bool | int | None) – Requires at least one item in evals. If verbose_eval is True then the evaluation metric on the validation set is printed at each boosting stage. If verbose_eval is an integer then the evaluation metric on the validation set is printed at every given verbose_eval boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed. Example: with verbose_eval=4 and at least one item in evals, an evaluation metric is printed every 4 boosting stages, instead of every boosting stage.

  • xgb_model (str | PathLike | Booster | bytearray | None) – Xgb model to be loaded before training (allows training continuation).

  • callbacks (Sequence[TrainingCallback] | None) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        xgboost.train(params, Xy, callbacks=callbacks)
    

  • custom_metric (Callable[[ndarray, DMatrix], Tuple[str, float]] | None) –

    Custom metric function. See Custom Metric for details.

Returns:

Booster

Return type:

a trained booster model
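
A hedged end-to-end sketch of train() with a validation set, early stopping and a recorded evaluation history (the data and parameter values are illustrative):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((1000, 10)), rng.integers(0, 2, size=1000)
dtrain = xgb.DMatrix(X[:800], label=y[:800])
dvalid = xgb.DMatrix(X[800:], label=y[800:])

evals_result = {}
bst = xgb.train(
    {"objective": "binary:logistic", "eval_metric": "logloss", "max_depth": 4},
    dtrain,
    num_boost_round=100,
    evals=[(dtrain, "train"), (dvalid, "valid")],
    early_stopping_rounds=10,   # stop if "valid" logloss stops improving
    evals_result=evals_result,
    verbose_eval=False,
)
print(bst.best_iteration, evals_result["valid"]["logloss"][-1])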

xgboost.cv(params, dtrain, num_boost_round=10, nfold=3, stratified=False, folds=None, metrics=(), obj=None, feval=None, maximize=None, early_stopping_rounds=None, fpreproc=None, as_pandas=True, verbose_eval=None, show_stdv=True, seed=0, callbacks=None, shuffle=True, custom_metric=None)

Cross-validation with given parameters.

Parameters:
  • params (dict) – Booster params.

  • dtrain (DMatrix) – Data to be trained.

  • num_boost_round (int) – Number of boosting iterations.

  • nfold (int) – Number of folds in CV.

  • stratified (bool) – Perform stratified sampling.

  • folds (a KFold or StratifiedKFold instance or list of fold indices) – Sklearn KFolds or StratifiedKFolds object. Alternatively may explicitly pass sample indices for each fold. For n folds, folds should be a length n list of tuples. Each tuple is (in,out) where in is a list of indices to be used as the training samples for the n th fold and out is a list of indices to be used as the testing samples for the n th fold.

  • metrics (string or list of strings) – Evaluation metrics to be watched in CV.

  • obj (Callable[[ndarray, DMatrix], Tuple[ndarray, ndarray]] | None) – Custom objective function. See Custom Objective for details.

  • feval (function) –

    Deprecated since version 1.6.0: Use custom_metric instead.

  • maximize (bool) – Whether to maximize feval.

  • early_stopping_rounds (int) – Activates early stopping. Cross-Validation metric (average of validation metric computed over CV folds) needs to improve at least once in every early_stopping_rounds round(s) to continue training. The last entry in the evaluation history will represent the best iteration. If there’s more than one metric in the eval_metric parameter given in params, the last metric will be used for early stopping.

  • fpreproc (function) – Preprocessing function that takes (dtrain, dtest, param) and returns transformed versions of those.

  • as_pandas (bool, default True) – Return pd.DataFrame when pandas is installed. If False or pandas is not installed, return np.ndarray

  • verbose_eval (bool, int, or None, default None) – Whether to display the progress. If None, progress will be displayed when np.ndarray is returned. If True, progress will be displayed at boosting stage. If an integer is given, progress will be displayed at every given verbose_eval boosting stage.

  • show_stdv (bool, default True) – Whether to display the standard deviation in progress. Results are not affected and always contain std.

  • seed (int) – Seed used to generate the folds (passed to numpy.random.seed).

  • callbacks (Sequence[TrainingCallback] | None) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        xgboost.train(params, Xy, callbacks=callbacks)
    

  • shuffle (bool) – Shuffle data before creating folds.

  • custom_metric (Callable[[ndarray, DMatrix], Tuple[str, float]] | None) –

    Custom metric function. See Custom Metric for details.

Returns:

evaluation history

Return type:

list(string)
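
For illustration, a minimal cv() sketch on a toy binary classification DMatrix (the data and parameters are made up):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((500, 6)), label=rng.integers(0, 2, size=500))

cv_results = xgb.cv(
    {"objective": "binary:logistic", "eval_metric": "logloss"},
    dtrain,
    num_boost_round=50,
    nfold=5,
    early_stopping_rounds=10,
    seed=0,
)
# With pandas installed, this is a DataFrame with train/test mean and std columns.
print(cv_results.tail())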

Scikit-Learn API

Scikit-Learn Wrapper interface for XGBoost.

class xgboost.XGBRegressor(*, objective='reg:squarederror', **kwargs)

Bases: XGBModel, RegressorMixin

Implementation of the scikit-learn API for XGBoost regression. See Using the Scikit-Learn Estimator Interface for more information.

Parameters:
  • n_estimators (Optional[int]) – Number of gradient boosted trees. Equivalent to number of boosting rounds.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root, i.e. grows the tree depth-wise.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document: tree method.

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight (hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data which is to be treated as missing. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree models, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear models, only “weight” is defined, and it is the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameters.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use a one-hot encoding based split for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories will be partitioned into child nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgboost.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

    Note

    Custom objective function

    A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> [grad, hess] or objective(y_true, y_pred, *, sample_weight) -> [grad, hess]:

    y_true: array_like of shape [n_samples]

    The target values

    y_pred: array_like of shape [n_samples]

    The predicted values

    sample_weight :

    Optional sample weights.

    grad: array_like of shape [n_samples]

    The value of the gradient for each sample point.

    hess: array_like of shape [n_samples]

    The value of the second derivative for each sample point
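
As an illustration of the signature above, a hedged sketch of a custom squared-error objective passed to the estimator (the function and data are made up, not a built-in):

import numpy as np
import xgboost as xgb

def squared_error_obj(y_true, y_pred):
    # Gradient and hessian of 0.5 * (y_pred - y_true) ** 2 with respect to y_pred.
    grad = y_pred - y_true
    hess = np.ones_like(y_pred)
    return grad, hess

rng = np.random.default_rng(0)
X, y = rng.random((100, 3)), rng.random(100)
reg = xgb.XGBRegressor(objective=squared_error_obj, n_estimators=5)
reg.fit(X, y)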

apply(X, iteration_range=None)

Return the predicted leaf of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit() function, the evals_result will contain the eval_metrics passed to the fit() function.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result

property feature_importances_: ndarray

Feature importances property, return depends on importance_type parameter. When model trained with multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is sum of loss change for each split from all trees.

Returns:

feature_importances_ – array of shape [n_features], except for multi-class linear model, which returns an array with shape (n_features, n_classes)

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.

fit(X, y, *, sample_weight=None, base_margin=None, eval_set=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting model.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (Any) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When the tree_method is set to hist, internally, the QuantileDMatrix will be used instead of the DMatrix for conserving memory. However, this has performance implications when the device of input data is not matched with algorithm. For instance, if the input is a numpy array on CPU but cuda is used for training, then the data is first processed on CPU then transferred to GPU.

  • y (Any) – Labels

  • sample_weight (Any | None) – instance weights

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[Any, Any]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • verbose (bool | int | None) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | XGBModel | str | None) – File name of a stored XGBoost model or a ‘Booster’ instance; the XGBoost model is loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[Any] | None) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.

  • base_margin_eval_set (Sequence[Any] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.

  • feature_weights (Any | None) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

XGBModel
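
A minimal sketch of fit() with a validation set and early stopping (the dataset choice and parameter values are illustrative):

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
import xgboost as xgb

X, y = load_diabetes(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

reg = xgb.XGBRegressor(n_estimators=200, early_stopping_rounds=10, eval_metric="rmse")
reg.fit(X_tr, y_tr, eval_set=[(X_va, y_va)], verbose=False)
print(reg.best_iteration, reg.evals_result()["validation_0"]["rmse"][-1])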

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception if fit was not called.

Returns:

booster

Return type:

an xgboost booster of the underlying model

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based model, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw)

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction
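
A short sketch of iteration_range (illustrative data and values; not part of the upstream reference):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X.sum(axis=1)

reg = xgb.XGBRegressor(n_estimators=20).fit(X, y)
# Only the first 10 boosting rounds are used; the range is half open, [0, 10).
partial = reg.predict(X, iteration_range=(0, 10))
full = reg.predict(X)  # all rounds (or up to best_iteration with early stopping)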

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type:

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
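
A minimal sketch showing that score() matches sklearn.metrics.r2_score (illustrative data only):

import numpy as np
import xgboost as xgb
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] - X[:, 1]

reg = xgb.XGBRegressor(n_estimators=10).fit(X, y)
# score() is the coefficient of determination of predict(X) with respect to y.
assert np.isclose(reg.score(X, y), r2_score(y, reg.predict(X)))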

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (XGBRegressor)

Returns:

self – The updated object.

Return type:

object
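
A hedged sketch of metadata routing, assuming a scikit-learn version whose Pipeline supports routing (the pipeline layout and data are illustrative):

import numpy as np
import sklearn
import xgboost as xgb
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

sklearn.set_config(enable_metadata_routing=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0]
w = rng.uniform(0.5, 1.5, size=100)

# Request sample_weight for the regressor and explicitly decline it for the
# scaler, so the pipeline knows where to route the metadata.
reg = xgb.XGBRegressor(n_estimators=10).set_fit_request(sample_weight=True)
scaler = StandardScaler().set_fit_request(sample_weight=False)
pipe = Pipeline([("scale", scaler), ("model", reg)])
pipe.fit(X, y, sample_weight=w)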

set_params(**params)

Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)
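
A small sketch of set_params() with both an estimator attribute and a native XGBoost parameter (values are illustrative):

import xgboost as xgb

reg = xgb.XGBRegressor(n_estimators=10)
# "max_depth" is a regular estimator attribute; "eta" is a native XGBoost
# parameter accepted through the unknown-kwargs extension described above.
reg.set_params(max_depth=4, eta=0.3)
assert reg.get_params()["max_depth"] == 4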

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (XGBRegressor)

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (XGBRegressor)

Returns:

self – The updated object.

Return type:

object

class xgboost.XGBClassifier(*, objective='binary:logistic', **kwargs)

Bases: XGBModel, ClassifierMixin

Implementation of the scikit-learn API for XGBoost classification. See Using the Scikit-Learn Estimator Interface for more information.

Parameters:
  • n_estimators (Optional[int]) – Number of boosting rounds.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the tree method section of the parameters document.

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight (hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability

      when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data to be treated as missing. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameters.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use one-hot encoding based splits for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories are partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    import xgboost as xgb
    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgb.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

    Note

    Custom objective function

    A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> [grad, hess] or objective(y_true, y_pred, *, sample_weight) -> [grad, hess]:

    y_true: array_like of shape [n_samples]

    The target values

    y_pred: array_like of shape [n_samples]

    The predicted values

    sample_weight :

    Optional sample weights.

    grad: array_like of shape [n_samples]

    The value of the gradient for each sample point.

    hess: array_like of shape [n_samples]

    The value of the second derivative for each sample point
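
    Building on this signature, a minimal sketch of a custom binary logistic objective (the function name and data are illustrative, not part of the XGBoost API):

    import numpy as np
    import xgboost as xgb

    def logistic_obj(y_true, y_pred):
        # y_pred holds raw (untransformed) margins for a binary classifier.
        p = 1.0 / (1.0 + np.exp(-y_pred))
        grad = p - y_true      # first derivative of the log loss
        hess = p * (1.0 - p)   # second derivative of the log loss
        return grad, hess

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    clf = xgb.XGBClassifier(n_estimators=10, objective=logistic_obj).fit(X, y)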

apply(X, iteration_range=None)

Return the predicted leaf index of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
  • X (Any) – Input features matrix.

  • iteration_range (Tuple[int | integer, int | integer] | None) – See predict().

Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit() function, the evals_result will contain the eval_metrics passed to the fit() function.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result
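
A minimal sketch of retrieving the recorded evaluation history (illustrative data and metric):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

clf = xgb.XGBClassifier(n_estimators=5, eval_metric="logloss")
clf.fit(X[:150], y[:150], eval_set=[(X[150:], y[150:])], verbose=False)
# One entry per eval_set item, keyed validation_0, validation_1, ...
history = clf.evals_result()
print(history["validation_0"]["logloss"])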

property feature_importances_: ndarray

Feature importances property, return depends on importance_type parameter. When model trained with multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is sum of loss change for each split from all trees.

Returns:

feature_importances_ – array of shape [n_features], except for the multi-class linear model, which returns an array of shape (n_features, n_classes)

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.

fit(X, y, *, sample_weight=None, base_margin=None, eval_set=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting classifier.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (Any) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When the tree_method is set to hist, internally, the QuantileDMatrix will be used instead of the DMatrix for conserving memory. However, this has performance implications when the device of input data is not matched with algorithm. For instance, if the input is a numpy array on CPU but cuda is used for training, then the data is first processed on CPU then transferred to GPU.

  • y (Any) – Labels

  • sample_weight (Any | None) – instance weights

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[Any, Any]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • verbose (bool | int | None) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | str | XGBModel | None) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[Any] | None) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.

  • base_margin_eval_set (Sequence[Any] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.

  • feature_weights (Any | None) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

XGBClassifier

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception if fit has not been called.

Returns:

booster

Return type:

an xgboost booster of the underlying model

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based model, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw)

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction

predict_proba(X, validate_features=True, base_margin=None, iteration_range=None)

Predict the probability of each X example being of a given class. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) – Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

Returns:

a numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class.

Return type:

prediction
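
A short sketch of predict_proba() on a binary problem (illustrative data only):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

clf = xgb.XGBClassifier(n_estimators=10).fit(X, y)
proba = clf.predict_proba(X)
# One column per class; each row sums to one.
assert proba.shape == (200, 2)
assert np.allclose(proba.sum(axis=1), 1.0)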

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type:

float

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (XGBClassifier)

Returns:

self – The updated object.

Return type:

object

set_params(**params)

Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)

set_predict_proba_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict_proba method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict_proba.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict_proba.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict_proba.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict_proba.

  • self (XGBClassifier)

Returns:

self – The updated object.

Return type:

object

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (XGBClassifier)

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (XGBClassifier)

Returns:

self – The updated object.

Return type:

object

class xgboost.XGBRanker(*, objective='rank:ndcg', **kwargs)

Bases: XGBModel, XGBRankerMixIn

Implementation of the Scikit-Learn API for XGBoost Ranking.

See Learning to Rank for an introduction.

See Using the Scikit-Learn Estimator Interface for more information.

Parameters:
  • n_estimators (Optional[int]) – Number of gradient boosted trees. Equivalent to number of boosting rounds.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the tree method section of the parameters document.

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight (hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability

      when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data to be treated as missing. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameters.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use one-hot encoding based splits for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories are partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    import xgboost as xgb
    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgb.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

    Note

    A custom objective function is currently not supported by XGBRanker.

    Note

    Query group information is only required for ranking training but not prediction. Multiple groups can be predicted on a single call to predict().

    When fitting the model with the group parameter, your data need to be sorted by the query group first. group is an array that contains the size of each query group.

    Similarly, when fitting the model with the qid parameter, the data should be sorted according to query index and qid is an array that contains the query index for each training sample.

    For example, if your original data look like:

    qid   label   features
    1     0       x_1
    1     1       x_2
    1     0       x_3
    2     0       x_4
    2     1       x_5
    2     1       x_6
    2     1       x_7

    then the fit() method can be called with either the group array [3, 4] or with qid as [1, 1, 1, 2, 2, 2, 2], that is, the qid column, as sketched below. Also, qid can be a special column of the input X instead of a separate parameter; see fit() for more info.
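
    A minimal sketch of the two equivalent ways to pass the grouping above (synthetic features and labels are illustrative):

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(7, 3))           # seven samples, x_1 ... x_7 in the table above
    y = np.array([0, 1, 0, 0, 1, 1, 1])   # relevance labels, sorted by query group

    ranker = xgb.XGBRanker(n_estimators=5)
    # Either pass the group sizes ...
    ranker.fit(X, y, group=[3, 4])
    # ... or, equivalently, the query id of every row.
    ranker.fit(X, y, qid=[1, 1, 1, 2, 2, 2, 2])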

apply(X, iteration_range=None)

Return the predicted leaf index of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
  • X (Any) – Input features matrix.

  • iteration_range (Tuple[int | integer, int | integer] | None) – See predict().

Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit() function, the evals_result will contain the eval_metrics passed to the fit() function.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result

property feature_importances_: ndarray

Feature importances property, return depends on importance_type parameter. When model trained with multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is sum of loss change for each split from all trees.

Returns:

feature_importances_ – array of shape [n_features], except for the multi-class linear model, which returns an array of shape (n_features, n_classes)

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.

fit(X, y, *, group=None, qid=None, sample_weight=None, base_margin=None, eval_set=None, eval_group=None, eval_qid=None, verbose=False, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting ranker

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (Any) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When this is a pandas.DataFrame or a cudf.DataFrame, it may contain a special column called qid for specifying the query index. Using a special column is the same as using the qid parameter, except for being compatible with sklearn utility functions like sklearn.model_selection.cross_validate(). The same convention applies to XGBRanker.score() and XGBRanker.predict().

    qid   feat_0        feat_1
    0     \(x_{00}\)    \(x_{01}\)
    1     \(x_{10}\)    \(x_{11}\)
    1     \(x_{20}\)    \(x_{21}\)

    When the tree_method is set to hist, internally, the QuantileDMatrix will be used instead of the DMatrix for conserving memory. However, this has performance implications when the device of input data is not matched with algorithm. For instance, if the input is a numpy array on CPU but cuda is used for training, then the data is first processed on CPU then transferred to GPU.

  • y (Any) – Labels

  • group (Any | None) – Size of each query group of training data. Should have as many elements as the query groups in the training data. If this is set to None, then user must provide qid.

  • qid (Any | None) – Query ID for each training sample. Should have the size of n_samples. If this is set to None, then user must provide group or a special column in X.

  • sample_weight (Any | None) –

    Query group weights

    Note

    Weights are per-group for ranking tasks

    In ranking task, one weight is assigned to each query group/id (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[Any, Any]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • eval_group (Sequence[Any] | None) – A list in which eval_group[i] is the list containing the sizes of all query groups in the i-th pair in eval_set.

  • eval_qid (Sequence[Any] | None) – A list in which eval_qid[i] is the array containing query ID of i-th pair in eval_set. The special column convention in X applies to validation datasets as well.

  • verbose (bool | int | None) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | str | XGBModel | None) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[Any] | None) –

    A list of the form [L_1, L_2, …, L_n], where each L_i is a list of group weights on the i-th validation set.

    Note

    Weights are per-group for ranking tasks

    In ranking task, one weight is assigned to each query group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.

  • base_margin_eval_set (Sequence[Any] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.

  • feature_weights (Any | None) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

XGBRanker
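
A minimal sketch of the special qid column described above (the synthetic DataFrame and labels are illustrative):

import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(6, 2)), columns=["feat_0", "feat_1"])
df["qid"] = [0, 0, 0, 1, 1, 1]        # special column; rows are sorted by query
y = np.array([1, 0, 0, 1, 1, 0])

ranker = xgb.XGBRanker(n_estimators=5)
ranker.fit(df, y)            # qid is taken from the DataFrame column
scores = ranker.predict(df)  # the same column convention applies at prediction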

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception if fit has not been called.

Returns:

booster

Return type:

an xgboost booster of the underlying model

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based model, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw)

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y)

Evaluate score for data using the last evaluation metric. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
  • X (Union[pd.DataFrame, cudf.DataFrame]) – Feature matrix. A DataFrame with a special qid column.

  • y (Any) – Labels

Returns:

The result of the first evaluation metric for the ranker.

Return type:

score
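
Because score() requires a DataFrame with the special qid column, a minimal sketch (assuming an XGBoost version whose scikit-learn ranking interface accepts a qid column, 2.0+) might look like this; the data is purely illustrative.

import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(8, 3)), columns=["f0", "f1", "f2"])
X["qid"] = [0, 0, 0, 0, 1, 1, 1, 1]      # query ids, sorted by group
y = rng.integers(0, 3, size=8)           # relevance labels

ranker = xgb.XGBRanker(n_estimators=10, tree_method="hist")
ranker.fit(X, y)                         # query ids are read from the qid column
print(ranker.score(X, y))                # evaluation metric computed on (X, y)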

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_group='$UNCHANGED$', eval_qid='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', group='$UNCHANGED$', qid='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_group (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_group parameter in fit.

  • eval_qid (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_qid parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • group (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for group parameter in fit.

  • qid (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for qid parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (XGBRanker)

Returns:

self – The updated object.

Return type:

object

set_params(**params)

Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (XGBRanker)

Returns:

self – The updated object.

Return type:

object

class xgboost.XGBRFRegressor(*, learning_rate=1.0, subsample=0.8, colsample_bynode=0.8, reg_lambda=1e-05, **kwargs)

Bases: XGBRegressor

scikit-learn API for XGBoost random forest regression. See Using the Scikit-Learn Estimator Interface for more information.
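
A minimal usage sketch with synthetic data (the array shapes and parameter values are illustrative only, not recommendations):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# A single random forest of 100 trees, grown in one boosting round.
reg = xgb.XGBRFRegressor(n_estimators=100, max_depth=4, random_state=0)
reg.fit(X, y)
pred = reg.predict(X[:5])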

Parameters:
  • n_estimators (Optional[int]) – Number of trees in random forest to fit.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the tree method section of the parameters document.

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight (hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability

      when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data to be treated as missing. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameter.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use one-hot encoding based splits for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories will be partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    import xgboost as xgb
    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    # `parameters_grid`, `custom_rates`, `X`, and `y` are assumed to be defined elsewhere.
    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgb.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

    Note

    Custom objective function

    A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> [grad, hess] or objective(y_true, y_pred, *, sample_weight) -> [grad, hess]:

    y_true: array_like of shape [n_samples]

    The target values

    y_pred: array_like of shape [n_samples]

    The predicted values

    sample_weight:

    Optional sample weights.

    grad: array_like of shape [n_samples]

    The value of the gradient for each sample point.

    hess: array_like of shape [n_samples]

    The value of the second derivative for each sample point.
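
As an illustration of the objective(y_true, y_pred) -> [grad, hess] signature described above, a hand-written squared-error objective might look like the following sketch (the built-in reg:squarederror objective should normally be preferred):

import numpy as np

def squared_error_objective(y_true, y_pred):
    # Gradient and hessian of 0.5 * (y_pred - y_true) ** 2 for each sample.
    grad = y_pred - y_true
    hess = np.ones_like(y_pred)
    return grad, hess

# reg = xgb.XGBRFRegressor(objective=squared_error_objective)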

apply(X, iteration_range=None)

Return the predicted leaf of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
  • X (Any) – Input features matrix.

  • iteration_range (Tuple[int | integer, int | integer] | None) – See predict().

Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]
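
A brief sketch, assuming reg is a fitted XGBRFRegressor and X is a feature matrix:

leaves = reg.apply(X)
print(leaves.shape)   # (n_samples, n_trees): leaf index of each sample in each tree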

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit() function, the evals_result will contain the eval_metrics passed to the fit() function.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result
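
A minimal sketch, assuming X_train, y_train, X_valid, y_valid are already defined and eval_metric was set in the constructor:

reg = xgb.XGBRFRegressor(eval_metric="rmse")
reg.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
history = reg.evals_result()
print(history["validation_0"]["rmse"])   # one value per boosting round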

property feature_importances_: ndarray

Feature importances property, return depends on importance_type parameter. When model trained with multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is sum of loss change for each split from all trees.

Returns:

  • feature_importances_ (array of shape [n_features], except for multi-class linear model, which returns an array with shape (n_features, n_classes))

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.
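
A brief sketch pairing the two properties, assuming reg was fitted on a pandas DataFrame with string column names:

import pandas as pd

importances = pd.Series(reg.feature_importances_, index=reg.feature_names_in_)
print(importances.sort_values(ascending=False))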

fit(X, y, *, sample_weight=None, base_margin=None, eval_set=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting model.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (Any) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When the tree_method is set to hist, the QuantileDMatrix will be used internally instead of the DMatrix to conserve memory. However, this has performance implications when the device of the input data does not match the device used by the algorithm. For instance, if the input is a numpy array on CPU but cuda is used for training, then the data is first processed on CPU and then transferred to GPU.

  • y (Any) – Labels

  • sample_weight (Any | None) – instance weights

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[Any, Any]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • verbose (bool | int | None) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | str | XGBModel | None) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[Any] | None) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.

  • base_margin_eval_set (Sequence[Any] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.

  • feature_weights (Any | None) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

XGBRFRegressor

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception if fit() has not been called.

Returns:

booster

Return type:

an xgboost Booster of the underlying model

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based models, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw).

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type:

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (XGBRFRegressor)

Returns:

self – The updated object.

Return type:

object
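
A minimal routing sketch, assuming a scikit-learn version with metadata routing support (1.4 or later) and pre-existing arrays X, y, w:

import sklearn
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import xgboost as xgb

sklearn.set_config(enable_metadata_routing=True)

model = xgb.XGBRFRegressor().set_fit_request(sample_weight=True)
pipe = Pipeline([("scale", StandardScaler()), ("model", model)])

# With routing enabled, sample_weight is forwarded to XGBRFRegressor.fit only.
pipe.fit(X, y, sample_weight=w)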

set_params(**params)

Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)
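
Because unknown keyword arguments are accepted, XGBoost-specific parameters can be tuned directly in a scikit-learn grid search; a minimal sketch, with X and y assumed to be regression data:

from sklearn.model_selection import GridSearchCV
import xgboost as xgb

grid = GridSearchCV(
    xgb.XGBRFRegressor(random_state=0),
    param_grid={"max_depth": [3, 6], "colsample_bynode": [0.5, 0.8]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)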

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (XGBRFRegressor)

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (XGBRFRegressor)

Returns:

self – The updated object.

Return type:

object

class xgboost.XGBRFClassifier(*, learning_rate=1.0, subsample=0.8, colsample_bynode=0.8, reg_lambda=1e-05, **kwargs)

Bases: XGBClassifier

scikit-learn API for XGBoost random forest classification. See Using the Scikit-Learn Estimator Interface for more information.
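
A minimal usage sketch on a toy scikit-learn dataset (parameter values are illustrative only):

import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = xgb.XGBRFClassifier(n_estimators=100, max_depth=4, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))   # mean accuracy on the held-out split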

Parameters:
  • n_estimators (Optional[int]) – Number of trees in random forest to fit.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the tree method section of the parameters document.

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight (hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability

      when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data to be treated as missing. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameter.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use one-hot encoding based splits for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories will be partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    import xgboost as xgb
    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    # `parameters_grid`, `custom_rates`, `X`, and `y` are assumed to be defined elsewhere.
    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgb.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

    Note

    Custom objective function

    A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> [grad, hess] or objective(y_true, y_pred, *, sample_weight) -> [grad, hess]:

    y_true: array_like of shape [n_samples]

    The target values

    y_pred: array_like of shape [n_samples]

    The predicted values

    sample_weight:

    Optional sample weights.

    grad: array_like of shape [n_samples]

    The value of the gradient for each sample point.

    hess: array_like of shape [n_samples]

    The value of the second derivative for each sample point.

apply(X, iteration_range=None)

Return the predicted leaf of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
  • X (Any) – Input features matrix.

  • iteration_range (Tuple[int | integer, int | integer] | None) – See predict().

Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit() function, the evals_result will contain the eval_metrics passed to the fit() function.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result

property feature_importances_: ndarray

Feature importances property, return depends on importance_type parameter. When model trained with multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is sum of loss change for each split from all trees.

Returns:

  • feature_importances_ (array of shape [n_features], except for multi-class linear model, which returns an array with shape (n_features, n_classes))

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.

fit(X, y, *, sample_weight=None, base_margin=None, eval_set=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting classifier.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (Any) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When the tree_method is set to hist, the QuantileDMatrix will be used internally instead of the DMatrix to conserve memory. However, this has performance implications when the device of the input data does not match the device used by the algorithm. For instance, if the input is a numpy array on CPU but cuda is used for training, then the data is first processed on CPU and then transferred to GPU.

  • y (Any) – Labels

  • sample_weight (Any | None) – instance weights

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[Any, Any]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • verbose (bool | int | None) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | str | XGBModel | None) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[Any] | None) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.

  • base_margin_eval_set (Sequence[Any] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.

  • feature_weights (Any | None) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

XGBRFClassifier

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception if fit() has not been called.

Returns:

booster

Return type:

an xgboost Booster of the underlying model

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based models, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw).

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction

predict_proba(X, validate_features=True, base_margin=None, iteration_range=None)

Predict the probability of each X example being of a given class. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (Any) – Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (Any | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) – Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

Returns:

a numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class.

Return type:

prediction
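
A brief sketch, assuming clf is a fitted XGBRFClassifier and X_new a feature matrix:

proba = clf.predict_proba(X_new)   # shape (n_samples, n_classes)
best = proba.argmax(axis=1)        # index of the most probable class per sample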

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type:

float

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (XGBRFClassifier)

Returns:

self – The updated object.

Return type:

object

set_params(**params)

Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)

set_predict_proba_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict_proba method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict_proba.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict_proba.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict_proba.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict_proba.

  • self (XGBRFClassifier)

Returns:

self – The updated object.

Return type:

object

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (XGBRFClassifier)

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (XGBRFClassifier)

Returns:

self – The updated object.

Return type:

object
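
The request methods above can be combined with a scikit-learn Pipeline once metadata routing is enabled. The following is a minimal sketch, not an official example; the synthetic data, the StandardScaler step, and the estimator settings are illustrative assumptions.

import numpy as np
import sklearn
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRFClassifier

# Enable scikit-learn's metadata routing mechanism.
sklearn.set_config(enable_metadata_routing=True)

X = np.random.rand(128, 4)
y = np.random.randint(0, 2, size=128)
w = np.random.rand(128)

# Do not route the weights to the scaler, but do route them to the classifier.
scaler = StandardScaler().set_fit_request(sample_weight=False)
clf = XGBRFClassifier(n_estimators=10).set_fit_request(sample_weight=True)

pipe = Pipeline([("scale", scaler), ("model", clf)])
pipe.fit(X, y, sample_weight=w)  # sample_weight is routed to XGBRFClassifier.fit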

Plotting API

Plotting Library.

xgboost.plot_importance(booster, ax=None, height=0.2, xlim=None, ylim=None, title='Feature importance', xlabel='F score', ylabel='Features', fmap='', importance_type='weight', max_num_features=None, grid=True, show_values=True, values_format='{v}', **kwargs)

Plot importance based on fitted trees.

Parameters:
  • booster (XGBModel | Booster | dict) – Booster or XGBModel instance, or dict taken by Booster.get_fscore()

  • ax (matplotlib Axes) – Target axes instance. If None, new figure and axes will be created.

  • grid (bool) – Turn the axes grids on or off. Default is True (On).

  • importance_type (str) –

    How the importance is calculated: either “weight”, “gain”, or “cover”

    • “weight” is the number of times a feature appears in a tree

    • “gain” is the average gain of splits which use the feature

    • “cover” is the average coverage of splits which use the feature, where coverage is defined as the number of samples affected by the split

  • max_num_features (int | None) – Maximum number of top features displayed on plot. If None, all features will be displayed.

  • height (float) – Bar height, passed to ax.barh()

  • xlim (tuple | None) – Tuple passed to axes.xlim()

  • ylim (tuple | None) – Tuple passed to axes.ylim()

  • title (str) – Axes title. To disable, pass None.

  • xlabel (str) – X axis title label. To disable, pass None.

  • ylabel (str) – Y axis title label. To disable, pass None.

  • fmap (str | PathLike) – The name of feature map file.

  • show_values (bool) – Show values on plot. To disable, pass False.

  • values_format (str) – Format string for values. “v” will be replaced by the value of the feature importance. e.g. Pass “{v:.2f}” in order to limit the number of digits after the decimal point to two, for each value printed on the graph.

  • kwargs (Any) – Other keywords passed to ax.barh()

Returns:

ax

Return type:

matplotlib Axes
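
A short usage sketch, assuming matplotlib is installed; the synthetic data and parameter choices below are illustrative only.

import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((256, 5))
y = rng.integers(0, 2, size=256)
bst = xgb.train({"objective": "binary:logistic"}, xgb.DMatrix(X, label=y), num_boost_round=10)

# Rank features by average split gain and show two decimals per bar.
xgb.plot_importance(bst, importance_type="gain", max_num_features=5, values_format="{v:.2f}")
plt.show()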

xgboost.plot_tree(booster, fmap='', num_trees=0, rankdir=None, ax=None, **kwargs)

Plot specified tree.

Parameters:
  • booster (Booster, XGBModel) – Booster or XGBModel instance

  • fmap (str (optional)) – The name of feature map file

  • num_trees (int, default 0) – Specify the ordinal number of target tree

  • rankdir (str, default "TB") – Passed to graphviz via graph_attr

  • ax (matplotlib Axes, default None) – Target axes instance. If None, new figure and axes will be created.

  • kwargs (Any) – Other keywords passed to to_graphviz

Returns:

ax

Return type:

matplotlib Axes
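
A usage sketch, assuming matplotlib and the graphviz package are installed; the synthetic data is only for illustration.

import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((256, 5))
y = rng.integers(0, 2, size=256)
bst = xgb.train({"objective": "binary:logistic"}, xgb.DMatrix(X, label=y), num_boost_round=3)

# Draw the first tree with a left-to-right layout.
xgb.plot_tree(bst, num_trees=0, rankdir="LR")
plt.show()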

xgboost.to_graphviz(booster, fmap='', num_trees=0, rankdir=None, yes_color=None, no_color=None, condition_node_params=None, leaf_node_params=None, **kwargs)

Convert specified tree to graphviz instance. IPython can automatically plot the returned graphviz instance. Otherwise, you should call the .render() method of the returned graphviz instance.

Parameters:
  • booster (Booster | XGBModel) – Booster or XGBModel instance

  • fmap (str | PathLike) – The name of feature map file

  • num_trees (int) – Specify the ordinal number of target tree

  • rankdir (str | None) – Passed to graphviz via graph_attr

  • yes_color (str | None) – Edge color when meets the node condition.

  • no_color (str | None) – Edge color when doesn’t meet the node condition.

  • condition_node_params (dict | None) –

    Condition node configuration for graphviz. Example:

    {'shape': 'box',
     'style': 'filled,rounded',
     'fillcolor': '#78bceb'}
    

  • leaf_node_params (dict | None) –

    Leaf node configuration for graphviz. Example:

    {'shape': 'box',
     'style': 'filled',
     'fillcolor': '#e48038'}
    

  • kwargs (Any) – Other keywords passed to graphviz graph_attr, e.g. graph [ {key} = {value} ]

Returns:

graph

Return type:

graphviz.Source
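
A usage sketch; it assumes bst is a trained Booster (for example, the one produced in the plot_tree example above) and that graphviz is installed.

import xgboost as xgb

# Assuming `bst` is a trained Booster.
graph = xgb.to_graphviz(
    bst,
    num_trees=0,
    condition_node_params={"shape": "box", "style": "filled,rounded", "fillcolor": "#78bceb"},
)
graph.render("tree")  # outside IPython, call .render() to write the output file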

Callback API

Callback library containing training routines. See Callback Functions for a quick introduction.

class xgboost.callback.TrainingCallback

Interface for training callback.

Added in version 1.3.0.

after_iteration(model, epoch, evals_log)

Run after each iteration. Returns True when training should stop.

Parameters:
  • model (Any) – Either a Booster object or a CVPack if the cv function in xgboost is being used.

  • epoch (int) – The current training iteration.

  • evals_log (Dict[str, Dict[str, List[float] | List[Tuple[float, float]]]]) –

    A dictionary containing the evaluation history:

    {"data_name": {"metric_name": [0.5, ...]}}
    

Return type:

bool

after_training(model)

Run after training is finished.

Parameters:

model (Any)

Return type:

Any

before_iteration(model, epoch, evals_log)

Run before each iteration. Returns True when training should stop. See after_iteration() for details.

Parameters:
Return type:

bool

before_training(model)

Run before training starts.

Parameters:

model (Any)

Return type:

Any
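
A minimal sketch of a custom callback that stops training once a monitored metric drops below a threshold; the class name and the stopping rule are illustrative, and the data/metric names must match whatever is passed via evals.

import xgboost as xgb

class ThresholdStop(xgb.callback.TrainingCallback):
    """Illustrative callback: stop once the monitored metric drops below a threshold."""

    def __init__(self, data_name, metric_name, threshold):
        super().__init__()
        self.data_name = data_name
        self.metric_name = metric_name
        self.threshold = threshold

    def after_iteration(self, model, epoch, evals_log):
        latest = evals_log[self.data_name][self.metric_name][-1]
        # Returning True tells the training loop to stop.
        return latest < self.threshold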

class xgboost.callback.EvaluationMonitor(rank=0, period=1, show_stdv=False)

Bases: TrainingCallback

Print the evaluation result at each iteration.

Added in version 1.3.0.

Parameters:
  • rank (int) – Which worker should be used for printing the result.

  • period (int) – How many epochs between printing.

  • show_stdv (bool) – Used in cv to show standard deviation. Users should not specify it.
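
As a minimal sketch, the monitor can be passed to xgboost.train() through the callbacks argument; the synthetic data and settings below are illustrative assumptions.

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((256, 4)), label=rng.integers(0, 2, size=256))

# Print the evaluation result every second iteration; verbose_eval=False avoids
# adding the default per-iteration monitor on top of this one.
monitor = xgb.callback.EvaluationMonitor(period=2)
xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10,
          evals=[(dtrain, "train")], callbacks=[monitor], verbose_eval=False)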

after_iteration(model, epoch, evals_log)

Run after each iteration. Returns True when training should stop.

Parameters:
  • model (Any) – Either a Booster object or a CVPack if the cv function in xgboost is being used.

  • epoch (int) – The current training iteration.

  • evals_log (Dict[str, Dict[str, List[float] | List[Tuple[float, float]]]]) –

    A dictionary containing the evaluation history:

    {"data_name": {"metric_name": [0.5, ...]}}
    

Return type:

bool

after_training(model)

Run after training is finished.

Parameters:

model (Any)

Return type:

Any

class xgboost.callback.EarlyStopping(rounds, metric_name=None, data_name=None, maximize=None, save_best=False, min_delta=0.0)

Bases: TrainingCallback

Callback function for early stopping

Added in version 1.3.0.

Parameters:
  • rounds (int) – Early stopping rounds.

  • metric_name (str | None) – Name of metric that is used for early stopping.

  • data_name (str | None) – Name of dataset that is used for early stopping.

  • maximize (bool | None) – Whether to maximize evaluation metric. None means auto (discouraged).

  • save_best (bool | None) – Whether training should return the best model or the last model.

  • min_delta (float) –

    Added in version 1.5.0.

    Minimum absolute change in score to be qualified as an improvement.

Examples

es = xgboost.callback.EarlyStopping(
    rounds=2,
    min_delta=1e-3,
    save_best=True,
    maximize=False,
    data_name="validation_0",
    metric_name="mlogloss",
)
clf = xgboost.XGBClassifier(tree_method="hist", device="cuda", callbacks=[es])

X, y = load_digits(return_X_y=True)
clf.fit(X, y, eval_set=[(X, y)])
after_iteration(model, epoch, evals_log)

Run after each iteration. Returns True when training should stop.

Parameters:
  • model (Any) – Either a Booster object or a CVPack if the cv function in xgboost is being used.

  • epoch (int) – The current training iteration.

  • evals_log (Dict[str, Dict[str, List[float] | List[Tuple[float, float]]]]) –

    A dictionary containing the evaluation history:

    {"data_name": {"metric_name": [0.5, ...]}}
    

Return type:

bool

after_training(model)

Run after training is finished.

Parameters:

model (Any)

Return type:

Any

before_training(model)

Run before training starts.

Parameters:

model (Any)

Return type:

Any

class xgboost.callback.LearningRateScheduler(learning_rates)

Bases: TrainingCallback

Callback function for scheduling learning rate.

Added in version 1.3.0.

Parameters:

learning_rates (Callable[[int], float] | Sequence[float]) – If it’s a callable object, then it should accept an integer parameter epoch and return the corresponding learning rate. Otherwise it should be a sequence, such as a list or tuple, with the same length as the number of boosting rounds.
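
A minimal sketch using a callable schedule; the decay rate, data, and parameters are illustrative assumptions.

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((256, 4)), label=rng.random(256))

# Start at eta=0.3 and decay it by 5% each boosting round.
scheduler = xgb.callback.LearningRateScheduler(lambda epoch: 0.3 * 0.95 ** epoch)
xgb.train({"objective": "reg:squarederror"}, dtrain,
          num_boost_round=20, callbacks=[scheduler])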

after_iteration(model, epoch, evals_log)

Run after each iteration. Returns True when training should stop.

Parameters:
  • model (Any) – Either a Booster object or a CVPack if the cv function in xgboost is being used.

  • epoch (int) – The current training iteration.

  • evals_log (Dict[str, Dict[str, List[float] | List[Tuple[float, float]]]]) –

    A dictionary containing the evaluation history:

    {"data_name": {"metric_name": [0.5, ...]}}
    

Return type:

bool

class xgboost.callback.TrainingCheckPoint(directory, name='model', as_pickle=False, interval=100)

Bases: TrainingCallback

Checkpointing operation. Users are encouraged to create their own callbacks for checkpointing, as XGBoost doesn’t handle distributed file systems. When checkpointing on distributed systems, be sure to know the rank of the worker to avoid multiple workers checkpointing to the same place.

Added in version 1.3.0.

Since XGBoost 2.1.0, the default format is changed to UBJSON.

Parameters:
  • directory (str | PathLike) – Output model directory.

  • name (str) – Pattern of the output model file. Models will be saved as name_0.ubj, name_1.ubj, name_2.ubj ….

  • as_pickle (bool) – When set to True, all training parameters will be saved in pickle format, instead of saving only the model.

  • interval (int) – Interval of checkpointing. Checkpointing is slow, so setting a larger number can reduce the performance hit.
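
A minimal local sketch; the directory name, interval, and synthetic data are illustrative assumptions (on distributed systems, follow the note above and handle worker ranks yourself).

import os
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((256, 4)), label=rng.random(256))

os.makedirs("checkpoints", exist_ok=True)
# Write a checkpoint into ./checkpoints every 5 boosting rounds.
ckpt = xgb.callback.TrainingCheckPoint(directory="checkpoints", name="model", interval=5)
xgb.train({"objective": "reg:squarederror"}, dtrain,
          num_boost_round=20, callbacks=[ckpt])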

after_iteration(model, epoch, evals_log)

Run after each iteration. Returns True when training should stop.

Parameters:
  • model (Any) – Either a Booster object or a CVPack if the cv function in xgboost is being used.

  • epoch (int) – The current training iteration.

  • evals_log (Dict[str, Dict[str, List[float] | List[Tuple[float, float]]]]) –

    A dictionary containing the evaluation history:

    {"data_name": {"metric_name": [0.5, ...]}}
    

Return type:

bool

before_training(model)

Run before training starts.

Parameters:

model (Any)

Return type:

Any

Dask API

Dask extensions for distributed training

See Distributed XGBoost with Dask for a simple tutorial and XGBoost Dask Feature Walkthrough for some examples.

There are two sets of APIs in this module: one is the functional API, including the train and predict methods; the other is the stateful Scikit-Learn wrapper inherited from the single-node Scikit-Learn interface.

The implementation is heavily influenced by dask_xgboost: https://github.com/dask/dask-xgboost

Optional dask configuration

  • xgboost.scheduler_address: Specify the scheduler address, see Troubleshooting.

    Added in version 1.6.0.

    dask.config.set({"xgboost.scheduler_address": "192.0.0.100"})
    # We can also specify the port.
    dask.config.set({"xgboost.scheduler_address": "192.0.0.100:12345"})
    
class xgboost.dask.DaskDMatrix(client, data, label=None, *, weight=None, base_margin=None, missing=None, silent=False, feature_names=None, feature_types=None, group=None, qid=None, label_lower_bound=None, label_upper_bound=None, feature_weights=None, enable_categorical=False)

Bases: object

DMatrix holding references to a Dask DataFrame or Dask Array. Constructing a DaskDMatrix forces all lazy computation to be carried out. Wait for the input data explicitly if you want to see the actual computation of constructing a DaskDMatrix.

See doc for xgboost.DMatrix constructor for other parameters. DaskDMatrix accepts only dask collection.

Note

DaskDMatrix does not repartition or move data between workers. It’s the caller’s responsibility to balance the data.

Added in version 1.0.0.

Parameters:
  • client (distributed.Client) – Specify the dask client used for training. Use default client returned from dask if it’s set to None.

  • data (da.Array | dd.DataFrame)

  • label (da.Array | dd.DataFrame | dd.Series | None)

  • weight (da.Array | dd.DataFrame | dd.Series | None)

  • base_margin (da.Array | dd.DataFrame | dd.Series | None)

  • missing (float | None)

  • silent (bool)

  • feature_names (Sequence[str] | None)

  • feature_types (Sequence[str] | None)

  • group (da.Array | dd.DataFrame | dd.Series | None)

  • qid (da.Array | dd.DataFrame | dd.Series | None)

  • label_lower_bound (da.Array | dd.DataFrame | dd.Series | None)

  • label_upper_bound (da.Array | dd.DataFrame | dd.Series | None)

  • feature_weights (da.Array | dd.DataFrame | dd.Series | None)

  • enable_categorical (bool)
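
A minimal sketch assuming a local dask cluster; the cluster size and synthetic collections are illustrative.

from dask import array as da
from dask.distributed import Client, LocalCluster
import xgboost as xgb

with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = da.random.random(1000, chunks=100)
    # Construction triggers computation of the dask collections.
    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    print(dtrain.num_col())  # 10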

num_col()

Get the number of columns (features) in the DMatrix.

Return type:

number of columns

class xgboost.dask.DaskQuantileDMatrix(client, data, label=None, *, weight=None, base_margin=None, missing=None, silent=False, feature_names=None, feature_types=None, max_bin=None, ref=None, group=None, qid=None, label_lower_bound=None, label_upper_bound=None, feature_weights=None, enable_categorical=False)

Bases: DaskDMatrix

A dask version of QuantileDMatrix.

Parameters:
  • client (distributed.Client)

  • data (da.Array | dd.DataFrame)

  • label (da.Array | dd.DataFrame | dd.Series | None)

  • weight (da.Array | dd.DataFrame | dd.Series | None)

  • base_margin (da.Array | dd.DataFrame | dd.Series | None)

  • missing (float | None)

  • silent (bool)

  • feature_names (Sequence[str] | None)

  • feature_types (Any | List[Any] | None)

  • max_bin (int | None)

  • ref (DMatrix | None)

  • group (da.Array | dd.DataFrame | dd.Series | None)

  • qid (da.Array | dd.DataFrame | dd.Series | None)

  • label_lower_bound (da.Array | dd.DataFrame | dd.Series | None)

  • label_upper_bound (da.Array | dd.DataFrame | dd.Series | None)

  • feature_weights (da.Array | dd.DataFrame | dd.Series | None)

  • enable_categorical (bool)
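
A sketch of sharing quantile information between training and validation data via ref; it assumes client, X, and y from the DaskDMatrix example above, and X_valid / y_valid stand for a held-out pair of dask collections.

# Quantile cuts are computed from the training data and shared with the
# validation matrix via `ref`, avoiding a second, inconsistent binning pass.
Xy = xgb.dask.DaskQuantileDMatrix(client, X, y, max_bin=256)
Xy_valid = xgb.dask.DaskQuantileDMatrix(client, X_valid, y_valid, ref=Xy)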

num_col()

Get the number of columns (features) in the DMatrix.

Return type:

number of columns

xgboost.dask.train(client, params, dtrain, num_boost_round=10, *, evals=None, obj=None, feval=None, early_stopping_rounds=None, xgb_model=None, verbose_eval=True, callbacks=None, custom_metric=None)

Train XGBoost model.

Added in version 1.0.0.

Note

Other parameters are the same as xgboost.train() except for evals_result, which is returned as part of function return value instead of argument.

Parameters:
Returns:

results – A dictionary containing the trained booster and evaluation history. The history field is the same as evals_result from xgboost.train.

{'booster': xgboost.Booster,
 'history': {'train': {'logloss': ['0.48253', '0.35953']},
             'eval': {'logloss': ['0.480385', '0.357756']}}}

Return type:

dict
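
A minimal end-to-end sketch on a local dask cluster; the parameters and synthetic data are illustrative assumptions.

from dask import array as da
from dask.distributed import Client, LocalCluster
import xgboost as xgb

with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = da.random.random(1000, chunks=100)
    dtrain = xgb.dask.DaskDMatrix(client, X, y)

    output = xgb.dask.train(
        client,
        {"objective": "reg:squarederror", "tree_method": "hist"},
        dtrain,
        num_boost_round=10,
        evals=[(dtrain, "train")],
    )
    booster = output["booster"]    # trained xgboost.Booster
    history = output["history"]    # evaluation history, structured as shown above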

xgboost.dask.predict(client, model, data, output_margin=False, missing=nan, pred_leaf=False, pred_contribs=False, approx_contribs=False, pred_interactions=False, validate_features=True, iteration_range=(0, 0), strict_shape=False)

Run prediction with a trained booster.

Note

Using inplace_predict might be faster when some features are not needed. See xgboost.Booster.predict() for details on various parameters. When output has more than 2 dimensions (shap value, leaf with strict_shape), input should be da.Array or DaskDMatrix.

Added in version 1.0.0.

Parameters:
  • client (distributed.Client | None) – Specify the dask client used for training. Use default client returned from dask if it’s set to None.

  • model (TrainReturnT | Booster | distributed.Future) – The trained model. It can be a distributed.Future so the user can pre-scatter it onto all workers.

  • data (DaskDMatrix | da.Array | dd.DataFrame) – Input data used for prediction. When input is a dataframe object, prediction output is a series.

  • missing (float) – Used when input data is not DaskDMatrix. Specify the value considered as missing.

  • output_margin (bool)

  • pred_leaf (bool)

  • pred_contribs (bool)

  • approx_contribs (bool)

  • pred_interactions (bool)

  • validate_features (bool)

  • iteration_range (Tuple[int | integer, int | integer])

  • strict_shape (bool)

Returns:

prediction – When the input data is dask.array.Array or DaskDMatrix, the return value is an array; when the input data is dask.dataframe.DataFrame, the return value can be dask.dataframe.Series or dask.dataframe.DataFrame, depending on the output shape.

Return type:

dask.array.Array/dask.dataframe.Series
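
Continuing the training sketch above (client, booster, and X are assumed from that example):

# Predictions are returned lazily as a dask collection; call .compute() to materialize.
predt = xgb.dask.predict(client, booster, X)
print(predt.compute()[:5])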

xgboost.dask.inplace_predict(client, model, data, iteration_range=(0, 0), predict_type='value', missing=nan, validate_features=True, base_margin=None, strict_shape=False)

Inplace prediction. See doc in xgboost.Booster.inplace_predict() for details.

Added in version 1.1.0.

Parameters:
Returns:

When the input data is dask.array.Array, the return value is an array; when the input data is dask.dataframe.DataFrame, the return value can be dask.dataframe.Series or dask.dataframe.DataFrame, depending on the output shape.

Return type:

prediction

class xgboost.dask.DaskXGBClassifier(max_depth=None, max_leaves=None, max_bin=None, grow_policy=None, learning_rate=None, n_estimators=None, verbosity=None, objective=None, booster=None, tree_method=None, n_jobs=None, gamma=None, min_child_weight=None, max_delta_step=None, subsample=None, sampling_method=None, colsample_bytree=None, colsample_bylevel=None, colsample_bynode=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, base_score=None, random_state=None, missing=nan, num_parallel_tree=None, monotone_constraints=None, interaction_constraints=None, importance_type=None, device=None, validate_parameters=None, enable_categorical=False, feature_types=None, max_cat_to_onehot=None, max_cat_threshold=None, multi_strategy=None, eval_metric=None, early_stopping_rounds=None, callbacks=None, **kwargs)

Bases: DaskScikitLearnBase, ClassifierMixin

Implementation of the scikit-learn API for XGBoost classification. See Using the Scikit-Learn Estimator Interface for more information.

Parameters:
  • n_estimators (Optional[int]) – Number of gradient boosted trees. Equivalent to number of boosting rounds.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Default to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option from the parameters document tree method

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight(hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability

      when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data which is to be treated as a missing value. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameters.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use a one-hot encoding based split for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories will be partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at the end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in callback are not preserved during training, which means callback objects can not be reused for multiple training sessions without reinitialization or deepcopy.

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgboost.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
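
A minimal usage sketch for this estimator on a local dask cluster; the cluster size, synthetic data, and parameter choices are illustrative assumptions.

from dask import array as da
from dask.distributed import Client, LocalCluster
import xgboost as xgb

with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = (da.random.random(1000, chunks=100) > 0.5).astype(int)

    clf = xgb.dask.DaskXGBClassifier(n_estimators=10, tree_method="hist")
    clf.client = client                # optional; the current client is used by default
    clf.fit(X, y, eval_set=[(X, y)])
    proba = clf.predict_proba(X).compute()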

apply(X, iteration_range=None)

Return the predicted leaf index of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property client: distributed.Client

The dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit() function, the evals_result will contain the eval_metrics passed to the fit() function.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result

property feature_importances_: ndarray

Feature importances property, return depends on importance_type parameter. When model trained with multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is sum of loss change for each split from all trees.

Returns:

  • feature_importances_ (array of shape [n_features], except for the multi-class linear model, which returns an array with shape (n_features, n_classes))

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.

fit(X, y, *, sample_weight=None, base_margin=None, eval_set=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting model.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (da.Array | dd.DataFrame) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When the tree_method is set to hist, internally, the QuantileDMatrix will be used instead of the DMatrix for conserving memory. However, this has performance implications when the device of the input data does not match the device used by the algorithm. For instance, if the input is a numpy array on CPU but cuda is used for training, then the data is first processed on CPU and then transferred to GPU.

  • y (da.Array | dd.DataFrame | dd.Series) – Labels

  • sample_weight (da.Array | dd.DataFrame | dd.Series | None) – instance weights

  • base_margin (da.Array | dd.DataFrame | dd.Series | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[da.Array | dd.DataFrame | dd.Series, da.Array | dd.DataFrame | dd.Series]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • verbose (int | bool) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | XGBModel | None) – File name of a stored XGBoost model or a Booster instance of an XGBoost model to be loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[da.Array | dd.DataFrame | dd.Series] | None) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.

  • base_margin_eval_set (Sequence[da.Array | dd.DataFrame | dd.Series] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.

  • feature_weights (da.Array | dd.DataFrame | dd.Series | None) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

DaskXGBClassifier

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception when fit was not called.

Returns:

booster

Return type:

an xgboost booster of the underlying model

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based model, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw)

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (da.Array | dd.DataFrame) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (da.Array | dd.DataFrame | dd.Series | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half open set) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction

predict_proba(X, validate_features=True, base_margin=None, iteration_range=None)

Predict the probability of each X example being of a given class. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (da.Array | dd.DataFrame | dd.Series) – Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (da.Array | dd.DataFrame | dd.Series | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) – Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half open set) are used in this prediction.

Returns:

a numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class.

Return type:

prediction

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type:

float

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (DaskXGBClassifier)

Returns:

self – The updated object.

Return type:

object

set_params(**params)

Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)

set_predict_proba_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict_proba method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict_proba.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict_proba.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict_proba.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict_proba.

  • self (DaskXGBClassifier)

Returns:

self – The updated object.

Return type:

object

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (DaskXGBClassifier)

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (DaskXGBClassifier)

Returns:

self – The updated object.

Return type:

object

class xgboost.dask.DaskXGBRegressor(max_depth=None, max_leaves=None, max_bin=None, grow_policy=None, learning_rate=None, n_estimators=None, verbosity=None, objective=None, booster=None, tree_method=None, n_jobs=None, gamma=None, min_child_weight=None, max_delta_step=None, subsample=None, sampling_method=None, colsample_bytree=None, colsample_bylevel=None, colsample_bynode=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, base_score=None, random_state=None, missing=nan, num_parallel_tree=None, monotone_constraints=None, interaction_constraints=None, importance_type=None, device=None, validate_parameters=None, enable_categorical=False, feature_types=None, max_cat_to_onehot=None, max_cat_threshold=None, multi_strategy=None, eval_metric=None, early_stopping_rounds=None, callbacks=None, **kwargs)

Bases: DaskScikitLearnBase, RegressorMixin

Implementation of the Scikit-Learn API for XGBoost. See Using the Scikit-Learn Estimator Interface for more information.

Parameters:
  • n_estimators (Optional[int]) – Number of gradient boosted trees. Equivalent to number of boosting rounds.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Default to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option from the parameters document tree method

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight(hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability

      when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data which is to be treated as a missing value. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameters.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use a one-hot encoding based split for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories will be partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and early stopping. It can be a string or list of strings as names of predefined metric in XGBoost (See doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user defined metric that looks like sklearn.metrics.

    If custom objective is also provided, then custom metric should implement the corresponding reverse link function.

    Unlike the scoring parameter commonly used in scikit-learn, when a callable object is provided, it’s assumed to be a cost function and by default XGBoost will minimize the result during early stopping.

    For advanced usage on Early stopping like directly choosing to maximize instead of minimize, see xgboost.callback.EarlyStopping.

    See Custom Objective and Evaluation Metric and Custom objective and metric for more information.

    from sklearn.datasets import load_diabetes
    from sklearn.metrics import mean_absolute_error
    X, y = load_diabetes(return_X_y=True)
    reg = xgb.XGBRegressor(
        tree_method="hist",
        eval_metric=mean_absolute_error,
    )
    reg.fit(X, y, eval_set=[(X, y)])
    

  • early_stopping_rounds (Optional[int]) –

    Added in version 1.6.0.

    • Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set in fit().

    • If early stopping occurs, the model will have two additional attributes: best_score and best_iteration. These are used by the predict() and apply() methods to determine the optimal number of trees during inference. If users want to access the full model (including trees built after early stopping), they can specify the iteration_range in these inference methods. In addition, other utilities like model plotting can also use the entire model.

    • If you prefer to discard the trees after best_iteration, consider using the callback function xgboost.callback.EarlyStopping.

    • If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping.
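
    A minimal sketch of wiring up early stopping (single-node estimator and synthetic data for brevity; the split and the value of early_stopping_rounds are arbitrary):

    import numpy as np
    import xgboost as xgb
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(512, 10)), rng.normal(size=512)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

    reg = xgb.XGBRegressor(
        n_estimators=1000,
        eval_metric="rmse",
        early_stopping_rounds=10,  # stop if the metric does not improve for 10 rounds
    )
    # The last entry in eval_set is the one used for early stopping.
    reg.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
    print(reg.best_iteration, reg.best_score)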

  • callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –

    List of callback functions that are applied at the end of each iteration. It is possible to use predefined callbacks by using Callback API.

    Note

    States in the callback are not preserved during training, which means callback objects cannot be reused for multiple training sessions without reinitialization or deepcopy.

    # parameters_grid, custom_rates, X and y are assumed to be defined elsewhere.
    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        reg = xgb.XGBRegressor(**params, callbacks=callbacks)
        reg.fit(X, y)
    

  • kwargs (Optional[Any]) –

    Keyword arguments for the XGBoost Booster object. Full documentation of parameters can be found in the XGBoost parameters documentation. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.

    Note

    **kwargs unsupported by scikit-learn

    **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
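
    For example (a hedged sketch; the single-node estimator and the parameter choices are arbitrary), a booster-level alias that is not a named constructor argument can be forwarded through **kwargs, while duplicating a named argument raises a TypeError:

    import xgboost as xgb

    # min_split_loss is the booster-level alias of gamma and is forwarded via **kwargs.
    reg = xgb.XGBRegressor(n_estimators=10, min_split_loss=0.1)

    # Supplying the same parameter through both the constructor signature and a
    # **kwargs dict results in a TypeError.
    try:
        xgb.XGBRegressor(max_depth=3, **{"max_depth": 4})
    except TypeError as err:
        print(err)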

apply(X, iteration_range=None)

Return the predicted leaf of every tree for each sample. If the model is trained with early stopping, then best_iteration is used automatically.

Parameters:
  • X (da.Array | dd.DataFrame) – Input features matrix.

  • iteration_range (Tuple[int | integer, int | integer] | None) – See predict().

Returns:

X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.

Return type:

array_like, shape=[n_samples, n_trees]
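
A short sketch of inspecting leaf indices with the Dask estimator (local cluster and random data purely for illustration):

import dask.array as da
from dask.distributed import Client
from xgboost import dask as dxgb

client = Client()  # small local cluster for demonstration

X = da.random.random((1000, 10), chunks=(250, 10))
y = da.random.random(1000, chunks=250)

reg = dxgb.DaskXGBRegressor(n_estimators=4)
reg.fit(X, y)

# One column of leaf indices per tree, returned lazily as a dask array.
leaves = reg.apply(X)
print(leaves.compute().shape)  # expected (1000, 4): one leaf index per tree per sample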

property best_iteration: int

The best iteration obtained by early stopping. This attribute is 0-based, for instance if the best iteration is the first round, then best_iteration is 0.

property best_score: float

The best score obtained by early stopping.

property client: distributed.Client

The Dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker.

property coef_: ndarray

Coefficients property

Note

Coefficients are defined only for linear learners

Coefficients are only defined when the linear model is chosen as the base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).

Returns:

coef_

Return type:

array of shape [n_features] or [n_classes, n_features]
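
A minimal sketch of reading the coefficients with a linear booster (random data; the single-node estimator is used for brevity, and the property behaves the same way on the Dask estimator):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

lin = xgb.XGBRegressor(booster="gblinear", n_estimators=10)
lin.fit(X, y)
print(lin.coef_.shape)  # (5,) for a single-target regressor
print(lin.intercept_)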

evals_result()

Return the evaluation results.

If eval_set is passed to the fit() function, you can call evals_result() to get evaluation results for all the passed eval_sets. When eval_metric is also specified, the evals_result will contain the requested eval_metrics.

The returned evaluation result is a dictionary:

{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Return type:

evals_result
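
A sketch of how such a dictionary could be produced (synthetic data; the single-node classifier is used only to mirror the logloss entries above, and the number of boosting rounds is arbitrary):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

clf = xgb.XGBClassifier(n_estimators=2, eval_metric="logloss")
clf.fit(X, y, eval_set=[(X, y)], verbose=False)
print(clf.evals_result())  # {'validation_0': {'logloss': [..., ...]}}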

property feature_importances_: ndarray

Feature importances property; the return value depends on the importance_type parameter. When the model is trained on a multi-class/multi-label/multi-target dataset, the feature importance is “averaged” over all targets. The “average” is defined based on the importance type. For instance, if the importance type is “total_gain”, then the score is the sum of the loss change for each split from all trees.

Returns:

  • feature_importances_ (array of shape [n_features], except for multi-class linear models, which return an array with shape (n_features, n_classes))

property feature_names_in_: ndarray

Names of features seen during fit(). Defined only when X has feature names that are all strings.

fit(X, y, *, sample_weight=None, base_margin=None, eval_set=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, base_margin_eval_set=None, feature_weights=None)

Fit gradient boosting model.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass xgb_model argument.

Parameters:
  • X (da.Array | dd.DataFrame) –

    Feature matrix. See Supported data structures for various XGBoost functions for a list of supported types.

    When the tree_method is set to hist, internally, the QuantileDMatrix will be used instead of the DMatrix for conserving memory. However, this has performance implications when the device of the input data does not match the device used by the algorithm. For instance, if the input is a numpy array on CPU but CUDA is used for training, then the data is first processed on CPU and then transferred to GPU.

  • y (da.Array | dd.DataFrame | dd.Series) – Labels

  • sample_weight (da.Array | dd.DataFrame | dd.Series | None) – Instance weights.

  • base_margin (da.Array | dd.DataFrame | dd.Series | None) – Global bias for each instance. See Intercept for details.

  • eval_set (Sequence[Tuple[da.Array | dd.DataFrame | dd.Series, da.Array | dd.DataFrame | dd.Series]] | None) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.

  • verbose (int | bool) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is printed to stdout at each boosting stage. If verbose is an integer, the evaluation metric is printed at each verbose boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed.

  • xgb_model (Booster | XGBModel | None) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).

  • sample_weight_eval_set (Sequence[da.Array | dd.DataFrame | dd.Series] | None) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array-like object storing instance weights for the i-th validation set.

  • base_margin_eval_set (Sequence[da.Array | dd.DataFrame | dd.Series] | None) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array-like object storing the base margin for the i-th validation set.

  • feature_weights (da.Array | dd.DataFrame | dd.Series | None) – Weight for each feature; defines the probability of each feature being selected when column sampling is used. All values must be greater than 0, otherwise a ValueError is thrown.

Return type:

DaskXGBRegressor
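
A minimal end-to-end sketch of fitting this estimator (local cluster and random data purely for illustration):

import dask.array as da
from dask.distributed import Client, LocalCluster
from xgboost import dask as dxgb

cluster = LocalCluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

X = da.random.random((10_000, 20), chunks=(1_000, 20))
y = da.random.random(10_000, chunks=1_000)

reg = dxgb.DaskXGBRegressor(n_estimators=50, tree_method="hist")
reg.client = client  # optional: the current client is picked up by default
reg.fit(X, y, eval_set=[(X, y)])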

get_booster()

Get the underlying xgboost Booster of this model.

This will raise an exception when fit was not called.

Returns:

booster

Return type:

The xgboost Booster of the underlying model.

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_num_boosting_rounds()

Gets the number of xgboost boosting rounds.

Return type:

int

get_params(deep=True)

Get parameters.

Parameters:

deep (bool)

Return type:

Dict[str, Any]

get_xgb_params()

Get xgboost specific parameters.

Return type:

Dict[str, Any]

property intercept_: ndarray

Intercept (bias) property

For tree-based models, the returned value is the base_score.

Returns:

intercept_

Return type:

array of shape (1,) or [n_classes]

load_model(fname)

Load the model from a file or a bytearray.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.load_model("model.json")
# or
model.load_model("model.ubj")
Parameters:

fname (str | bytearray | PathLike) – Input file name or memory buffer (see also save_raw).

Return type:

None

property n_features_in_: int

Number of features seen during fit().

predict(X, output_margin=False, validate_features=True, base_margin=None, iteration_range=None)

Predict with X. If the model is trained with early stopping, then best_iteration is used automatically. The estimator uses inplace_predict by default and falls back to using DMatrix if devices between the data and the estimator don’t match.

Note

This function is only thread safe for gbtree and dart.

Parameters:
  • X (da.Array | dd.DataFrame) – Data to predict with.

  • output_margin (bool) – Whether to output the raw untransformed margin value.

  • validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.

  • base_margin (da.Array | dd.DataFrame | dd.Series | None) – Global bias for each instance. See Intercept for details.

  • iteration_range (Tuple[int | integer, int | integer] | None) –

    Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.

    Added in version 1.4.0.

Return type:

prediction
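
A short sketch of iteration_range (reusing reg and X from the fit() example above; the range is arbitrary):

# Use only the trees built during boosting rounds [0, 10).
partial_pred = reg.predict(X, iteration_range=(0, 10))
full_pred = reg.predict(X)  # all trees, or up to best_iteration with early stopping
print(partial_pred.compute()[:5], full_pred.compute()[:5])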

save_model(fname)

Save the model to a file.

The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) are only saved when using JSON or UBJSON (default) format. See Model IO for more info.

model.save_model("model.json")
# or
model.save_model("model.ubj")
Parameters:

fname (str | PathLike) – Output file name

Return type:

None

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type:

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_fit_request(*, base_margin='$UNCHANGED$', base_margin_eval_set='$UNCHANGED$', eval_set='$UNCHANGED$', feature_weights='$UNCHANGED$', sample_weight='$UNCHANGED$', sample_weight_eval_set='$UNCHANGED$', verbose='$UNCHANGED$', xgb_model='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in fit.

  • base_margin_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin_eval_set parameter in fit.

  • eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for eval_set parameter in fit.

  • feature_weights (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for feature_weights parameter in fit.

  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

  • sample_weight_eval_set (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight_eval_set parameter in fit.

  • verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.

  • xgb_model (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for xgb_model parameter in fit.

  • self (DaskXGBRegressor)

Returns:

self – The updated object.

Return type:

object

set_params(**params)

Set the parameters of this estimator. This is a modification of the sklearn method to allow unknown kwargs, which makes it possible to use the full range of xgboost parameters that are not defined as member variables in sklearn grid search.

Return type:

self

Parameters:

params (Any)
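
For example (a hedged sketch using the single-node estimator), regular member variables and booster-only parameters can be mixed in a single call, which is what allows grid search over arbitrary xgboost parameters:

import xgboost as xgb

reg = xgb.XGBRegressor(n_estimators=10)
# max_depth is a regular member variable; min_split_loss is a booster-level alias.
reg.set_params(max_depth=4, min_split_loss=0.2)
print(reg.get_xgb_params()["max_depth"])  # 4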

set_predict_request(*, base_margin='$UNCHANGED$', iteration_range='$UNCHANGED$', output_margin='$UNCHANGED$', validate_features='$UNCHANGED$')

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • base_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for base_margin parameter in predict.

  • iteration_range (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for iteration_range parameter in predict.

  • output_margin (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for output_margin parameter in predict.

  • validate_features (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validate_features parameter in predict.

  • self (DaskXGBRegressor)

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (DaskXGBRegressor)

Returns:

self – The updated object.

Return type:

object

class xgboost.dask.DaskXGBRanker(*, objective='rank:pairwise', **kwargs)

Bases: DaskScikitLearnBase, XGBRankerMixIn

Implementation of the Scikit-Learn API for XGBoost Ranking.

Added in version 1.4.0.

See Using the Scikit-Learn Estimator Interface for more information.
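
A minimal usage sketch (synthetic data; assumes rows are already grouped by query ID within each partition and that query groups are supplied through the qid argument of fit()):

import dask.array as da
import numpy as np
from dask.distributed import Client
from xgboost import dask as dxgb

client = Client()  # local cluster for illustration

# 100 queries with 8 documents each; rows are grouped by query ID.
n_query, n_docs, n_features = 100, 8, 16
rng = np.random.default_rng(0)
qid = np.repeat(np.arange(n_query), n_docs)

X = da.from_array(rng.normal(size=(n_query * n_docs, n_features)), chunks=(200, n_features))
y = da.from_array(rng.integers(0, 4, size=n_query * n_docs), chunks=200)
q = da.from_array(qid, chunks=200)

ranker = dxgb.DaskXGBRanker(n_estimators=10)
ranker.fit(X, y, qid=q)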

Parameters:
  • n_estimators (Optional[int]) – Number of gradient boosted trees. Equivalent to number of boosting rounds.

  • max_depth (Optional[int]) – Maximum tree depth for base learners.

  • max_leaves (Optional[int]) – Maximum number of leaves; 0 indicates no limit.

  • max_bin (Optional[int]) – If using histogram-based algorithm, maximum number of bins per feature

  • grow_policy (Optional[str]) –

    Tree growing policy.

    • depthwise: Favors splitting at nodes closest to the root.

    • lossguide: Favors splitting at nodes with highest loss change.

  • learning_rate (Optional[float]) – Boosting learning rate (xgb’s “eta”)

  • verbosity (Optional[int]) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).

  • objective (Union[str, xgboost.sklearn._SklObjWProto, Callable[[Any, Any], Tuple[numpy.ndarray, numpy.ndarray]], NoneType]) –

    Specify the learning task and the corresponding learning objective or a custom objective function to be used.

    For custom objective, see Custom Objective and Evaluation Metric and Custom objective and metric for more information, along with the end note for function signatures.

  • booster (Optional[str]) – Specify which booster to use: gbtree, gblinear or dart.

  • tree_method (Optional[str]) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the tree method section of the parameters document.

  • n_jobs (Optional[int]) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.

  • gamma (Optional[float]) – (min_split_loss) Minimum loss reduction required to make a further partition on a leaf node of the tree.

  • min_child_weight (Optional[float]) – Minimum sum of instance weight (hessian) needed in a child.

  • max_delta_step (Optional[float]) – Maximum delta step we allow each tree’s weight estimation to be.

  • subsample (Optional[float]) – Subsample ratio of the training instance.

  • sampling_method (Optional[str]) –

    Sampling method. Used only by the GPU version of hist tree method.

    • uniform: Select random training instances uniformly.

    • gradient_based: Select random training instances with higher probability when the gradient and hessian are larger. (cf. CatBoost)

  • colsample_bytree (Optional[float]) – Subsample ratio of columns when constructing each tree.

  • colsample_bylevel (Optional[float]) – Subsample ratio of columns for each level.

  • colsample_bynode (Optional[float]) – Subsample ratio of columns for each split.

  • reg_alpha (Optional[float]) – L1 regularization term on weights (xgb’s alpha).

  • reg_lambda (Optional[float]) – L2 regularization term on weights (xgb’s lambda).

  • scale_pos_weight (Optional[float]) – Balancing of positive and negative weights.

  • base_score (Optional[float]) – The initial prediction score of all instances, global bias.

  • random_state (Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]) –

    Random number seed.

    Note

    Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.

  • missing (float) – Value in the data to be treated as missing. Defaults to numpy.nan.

  • num_parallel_tree (Optional[int]) – Used for boosting random forest.

  • monotone_constraints (Union[Dict[str, int], str, NoneType]) – Constraint of variable monotonicity. See tutorial for more information.

  • interaction_constraints (Union[str, List[Tuple[str]], NoneType]) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information

  • importance_type (Optional[str]) –

    The feature importance type for the feature_importances_ property:

    • For tree model, it’s either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.

    • For linear model, only “weight” is defined and it’s the normalized coefficients without bias.

  • device (Optional[str]) –

    Added in version 2.0.0.

    Device ordinal, available options are cpu, cuda, and gpu.

  • validate_parameters (Optional[bool]) – Give warnings for unknown parameters.

  • enable_categorical (bool) – See the same parameter of DMatrix for details.

  • feature_types (Optional[Sequence[str]]) –

    Added in version 1.7.0.

    Used for specifying feature types without constructing a dataframe. See DMatrix for details.

  • max_cat_to_onehot (Optional[int]) –

    Added in version 1.6.0.

    Note

    This parameter is experimental

    A threshold for deciding whether XGBoost should use one-hot encoding based splits for categorical data. When the number of categories is less than the threshold, one-hot encoding is chosen; otherwise the categories are partitioned into children nodes. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • max_cat_threshold (Optional[int]) –

    Added in version 1.7.0.

    Note

    This parameter is experimental

    Maximum number of categories considered for each split. Used only by partition-based splits for preventing over-fitting. Also, enable_categorical needs to be set to have categorical feature support. See Categorical Data and Parameters for Categorical Feature for details.

  • multi_strategy (Optional[str]) –

    Added in version 2.0.0.

    Note

    This parameter is a work in progress.

    The strategy used for training multi-target models, including multi-target regression and multi-class classification. See Multiple Outputs for more information.

    • one_output_per_tree: One model for each target.

    • multi_output_tree: Use multi-target trees.

  • eval_metric (Union[str, List[str], Callable, NoneType]) –

    Added in version 1.6.0.

    Metric used for monitoring the training result and for early stopping. It can be a string or a list of strings naming predefined metrics in XGBoost (see doc/parameter.rst), one of the metrics in sklearn.metrics, or any other user-defined metric with the same call signature as the metrics in sklearn.metrics.