Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
Note: the source distribution of Python XGBoost 1.7.0 was defective (#8415). Since PyPI does not allow us to replace existing artifacts, we released version 1.7.0.post0 to upload the new source distribution. Everything in 1.7.0.post0 is otherwise identical to 1.7.0.
We are excited to announce the feature-packed XGBoost 1.7 release. This release note walks through some of the major new features first, then summarizes other improvements and language-binding-specific changes.
XGBoost 1.7 features initial support for PySpark integration. The new interface is adapted from the existing PySpark XGBoost interface developed by Databricks, with additional features like QuantileDMatrix and rapidsai plugin (GPU pipeline) support. The new Spark XGBoost Python estimators not only benefit from PySpark ML facilities for powerful distributed computing but also enjoy the rest of the Python ecosystem. Users can define a custom objective, callbacks, and metrics in Python and use them with this interface on distributed clusters. The support is labeled as experimental, with more features to come in future releases. For a brief introduction, please visit the tutorial on XGBoost's document page. (#8355, #8344, #8335, #8284, #8271, #8283, #8250, #8231, #8219, #8245, #8217, #8200, #8173, #8172, #8145, #8117, #8131, #8088, #8082, #8085, #8066, #8068, #8067, #8020, #8385)
Due to its initial support status, the new interface has some limitations; categorical features and multi-output models are not yet supported.
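As a brief, hedged sketch of the new estimators (the dataset path and column names are placeholders, and a running Spark session is assumed):

from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession
from xgboost.spark import SparkXGBClassifier

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("train.parquet")  # placeholder dataset

# Assemble raw columns into the single vector column the estimator expects.
assembler = VectorAssembler(inputCols=["f0", "f1", "f2"], outputCol="features")
train = assembler.transform(df)

clf = SparkXGBClassifier(
    features_col="features",
    label_col="label",
    num_workers=4,  # number of distributed workers
)
model = clf.fit(train)
preds = model.transform(train)

Because the estimator is a standard PySpark ML stage, it can also be placed inside a Pipeline and tuned with CrossValidator like any other Spark estimator.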
More progress has been made on the experimental support for categorical features. In 1.7, XGBoost can handle missing values in categorical features and gains a new parameter, max_cat_threshold, which limits the number of categories that can be used in split evaluation. The parameter is enabled when the partitioning algorithm is used and helps prevent over-fitting. Also, the sklearn interface can now accept the feature_types parameter, allowing categorical features to be specified for input types other than dataframes. (#8280, #7821, #8285, #8080, #7948, #7858, #7853, #8212, #7957, #7937, #7934)
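As an illustrative sketch of these parameters (toy data; the threshold value is arbitrary), feature_types marks a plain numpy column as categorical without requiring a dataframe:

import numpy as np
import xgboost as xgb

# Column 0 holds category codes (0, 1, 2, ...); column 1 is numeric.
X = np.array([[0, 1.5], [1, 2.0], [2, 0.5], [0, 3.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

reg = xgb.XGBRegressor(
    tree_method="hist",
    enable_categorical=True,
    feature_types=["c", "q"],  # "c" = categorical, "q" = quantitative
    max_cat_threshold=64,      # cap on categories considered per split
)
reg.fit(X, y)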
An exciting addition to XGBoost is the experimental federated learning support. Federated learning is implemented with a gRPC federated server that aggregates allreduce calls, and federated clients that train on local data using the existing tree methods (approx, hist, gpu_hist). Currently, only horizontal federated learning is supported (samples are split across participants, and each participant has all the features and labels). Future plans include vertical federated learning (features split across participants) and stronger privacy guarantees with homomorphic encryption and differential privacy. See the demo with NVFlare integration for example usage with nvflare.
As part of this work, XGBoost 1.7 replaces the old rabit module with the new collective module as the network communication interface, with added support for runtime backend selection. In previous versions, the backend was defined at compile time and could not be changed once built. In this new release, users can choose between rabit and federated. (#8029, #8351, #8350, #8342, #8340, #8325, #8279, #8181, #8027, #7958, #7831, #7879, #8257, #8316, #8242, #8057, #8203, #8038, #7965, #7930, #7911)
The feature is available in the public PyPI binary package for testing.
Before 1.7, XGBoost had an internal data structure called DeviceQuantileDMatrix (and its distributed version). We have now extended its support to CPU and renamed it to QuantileDMatrix. This data structure is used for optimizing memory usage for the hist and gpu_hist tree methods. The new feature helps reduce CPU memory usage significantly, especially for dense data. The new QuantileDMatrix can be initialized from both CPU and GPU data, and regardless of where the data comes from, the constructed instance can be used by both the CPU and GPU algorithms, including training and prediction (with some conversion overhead if the device of the data and the training algorithm don't match). Also, a new parameter, ref, is added to QuantileDMatrix, which can be used to construct validation/test datasets. Lastly, it is used by default in the scikit-learn interface when a supported tree method is specified by users. (#7889, #7923, #8136, #8215, #8284, #8268, #8220, #8346, #8327, #8130, #8116, #8103, #8094, #8086, #7898, #8060, #8019, #8045, #7901, #7912, #7922)
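A minimal sketch of the new workflow with synthetic data; note how ref lets the validation set reuse the training quantile cuts:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 16)), rng.random(1000)
X_valid, y_valid = rng.random((200, 16)), rng.random(200)

# Quantization happens once at construction, which is where the memory saving comes from.
dtrain = xgb.QuantileDMatrix(X_train, label=y_train)
dvalid = xgb.QuantileDMatrix(X_valid, label=y_valid, ref=dtrain)

booster = xgb.train(
    {"tree_method": "hist"},
    dtrain,
    num_boost_round=10,
    evals=[(dvalid, "valid")],
)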
The mean absolute error (MAE) is a new member of the collection of objectives in XGBoost. It is noteworthy since MAE has a zero Hessian value, which is unusual for XGBoost, as XGBoost relies on Newton optimization. Without valid Hessian values, convergence can be slow. As part of the support for MAE, we added line searches to the XGBoost training algorithm to overcome the difficulty of training without valid Hessian values. In the future, we will extend the line search to other objectives where appropriate, for faster convergence. (#8343, #8107, #7812, #8380)
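Using the new objective only requires selecting it by name; random data is used below purely for illustration:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((256, 8)), rng.random(256)

reg = xgb.XGBRegressor(objective="reg:absoluteerror", tree_method="hist")
reg.fit(X, y)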
With the help of the pyodide project, you can now run XGBoost in the browser. (#7954, #8369)
With the growing adoption of IPv6, XGBoost has joined the club. In the latest release, the Dask interface can be used on IPv6 clusters; see XGBoost's Dask tutorial for details. (#8225, #8234)
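A rough sketch of connecting to an IPv6 scheduler (the address is a placeholder and a running Dask cluster is assumed; note the bracketed IPv6 host in the URL):

from dask.distributed import Client
from xgboost import dask as dxgb

with Client("tcp://[::1]:8786") as client:  # placeholder IPv6 address
    clf = dxgb.DaskXGBClassifier(tree_method="hist")
    # clf.fit(X, y)  # X and y would be dask collections living on the cluster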
We have new optimizations for both the hist and gpu_hist tree methods to make XGBoost's training even more efficient.
Hist
Hist now supports an optional by-column histogram build, which is automatically configured based on various conditions of the input data. This helps the XGBoost CPU hist algorithm scale better with different shapes of training datasets. (#8233, #8259) Also, the histogram build kernel can now better utilize CPU registers. (#8218)
GPU Hist
GPU hist performance is significantly improved for wide datasets. GPU hist now supports batched node build, which reduces kernel latency and increases throughput. The improvement is particularly significant when growing deep trees with the default depthwise policy. (#7919, #8073, #8051, #8118, #7867, #7964, #8026)
Breaking changes made in the 1.7 release are summarized below.
- The grow_local_histmaker updater is removed. This updater was rarely used in practice and had no tests. We decided to remove it and let XGBoost focus on other, more efficient algorithms. (#7992, #8091)
- The rabit module is replaced with the new collective module. It is a drop-in replacement with added runtime backend selection; see the federated learning section for more details. (#8257)

Before diving into package-specific changes, some general new features other than those listed at the beginning are summarized here.
- DMatrix and QuantileDMatrix can now return their data to users. In previous versions, only getters for meta info like labels were available. The new method is available in Python (DMatrix.get_data) and C. (#8269, #8323)
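For illustration, a small sketch of the new getter; the data round-trips as a SciPy CSR matrix:

import numpy as np
import xgboost as xgb

X = np.random.rand(8, 4)
dtrain = xgb.DMatrix(X, label=np.random.rand(8))

csr = dtrain.get_data()  # returns the features as a scipy.sparse CSR matrix
print(csr.shape)         # (8, 4)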
Some noteworthy bug fixes that are not related to a specific language binding are listed in this section.
Python 3.8 is now the minimum required Python version. (#8071)
More progress on type hint support. Except for the new PySpark interface, the XGBoost module is fully typed. (#7742, #7945, #8302, #7914, #8052)
XGBoost now validates the feature names in inplace_predict, which also affects the predict function in scikit-learn estimators, as it uses inplace_predict internally. (#8359)
Users can now get the data from DMatrix using DMatrix.get_data or QuantileDMatrix.get_data.
Show libxgboost.so path in build info. (#7893)
Raise import error when using the sklearn module while scikit-learn is missing. (#8049)
Use config_context in the sklearn interface. (#8141)
Validate features for inplace prediction. (#8359)
Pandas dataframe handling is refactored to reduce data fragmentation. (#7843)
Support more pandas nullable types. (#8262)
Remove pyarrow workaround. (#7884)
Binary wheel size
We aim to enable as many features as possible in XGBoost's default binary distribution on PyPI (the package installed with pip), but there is an upper limit on the size of the binary wheel. In 1.7, XGBoost reduces the size of the wheel by pruning unused CUDA architectures. (#8179, #8152, #8150)
Fixes
Some noteworthy fixes are listed here:
Fix a potential error in the DMatrix constructor on 32-bit platforms. (#8369)
Maintenance work
- Apply isort and black to selected files. (#8137, #8096)
- Remove use_label_encoder in XGBClassifier. The label encoder was already deprecated and removed in the previous version; these changes only affect the indicator parameter. (#7822)

Documents
We summarize improvements for the R package briefly here:
The consistency between the JVM packages and other language bindings is greatly improved in 1.7; improvements range from the model serialization format to the default values of hyper-parameters.
- timeoutRequestWorkers is now removed. With support for barrier mode, this parameter is no longer needed. (#7839)
- pytest-timeout is added as an optional dependency for running Python tests, to keep the test time in check. (#7772, #8291, #8286, #8276, #8306, #8287, #8243, #8313, #8235, #8288, #8303, #8142, #8092, #8333, #8312, #8348)

Roadmap: https://github.com/dmlc/xgboost/issues/8282
Release note: https://github.com/dmlc/xgboost/pull/8374
Release status: https://github.com/dmlc/xgboost/issues/8366
This is a patch release for bug fixes.
This is a patch release for bug fixes and Spark barrier mode support. The R package is unchanged.
We replaced the old parallelism tracker with the Spark barrier mode to improve the robustness of the JVM package and fix the GPU training pipeline.
You can verify the downloaded packages by running this on your Unix shell:
echo "<hash> <artifact>" | shasum -a 256 --check
2633f15e7be402bad0660d270e0b9a84ad6fcfd1c690a5d454efd6d55b4e395b ./xgboost.tar.gz
After a long period of development, XGBoost v1.6.0 is packed with many new features and improvements. We summarize them in the following sections starting with an introduction to some major new features, then moving on to language binding specific changes including new features and notable bug fixes for that binding.
This version of XGBoost features new improvements and full coverage of the experimental categorical data support in the Python and C packages for tree models. hist, approx, and gpu_hist all now support training with categorical data. Also, a partition-based categorical split is introduced in this release. This split type was first available in LightGBM in the context of gradient boosting. The previous XGBoost release supported one-hot splits, where the splitting criterion is of the form x \in {c}, i.e. the categorical feature x is tested against a single candidate. The new release allows for more expressive conditions: x \in S, where the categorical feature x is tested against multiple candidates. Moreover, it is now possible to use any of the tree algorithms (hist, approx, gpu_hist) when creating categorical splits. For more information, please see our tutorial on categorical data, along with the examples linked on that page. (#7380, #7708, #7695, #7330, #7307, #7322, #7705, #7652, #7592, #7666, #7576, #7569, #7529, #7575, #7393, #7465, #7385, #7371, #7745, #7810)
In the future, we will continue to improve categorical data support with new features and optimizations. We also look forward to bringing the feature beyond the Python binding; contributions and feedback are welcome! Lastly, as a result of its experimental status, the behavior might be subject to change, especially the default values of related hyper-parameters.
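As a hedged sketch of the partition-based split with toy data: max_cat_to_onehot is the 1.6 parameter that controls when one-hot splits are preferred, so setting it to 1 effectively forces the new x \in S splits:

import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "cat": pd.Categorical(["a", "b", "c", "a"]),  # pandas categorical column
    "num": [1.0, 2.0, 3.0, 4.0],
})
y = [0, 1, 1, 0]

clf = xgb.XGBClassifier(
    tree_method="hist",
    enable_categorical=True,
    max_cat_to_onehot=1,  # prefer x \in S splits over one-hot splits
)
clf.fit(df, y)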
XGBoost 1.6 features initial support for multi-output models, which includes multi-output regression and multi-label classification. Along with this, the XGBoost classifier gains proper support for base margin, without the need for users to flatten the input. In this initial support, XGBoost builds one model for each target, similar to the sklearn meta estimator; for more details, please see our quick introduction.
(#7365, #7736, #7607, #7574, #7521, #7514, #7456, #7453, #7455, #7434, #7429, #7405, #7381)
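A minimal sketch with synthetic data; each of the three targets gets its own model internally:

import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 10))
y = rng.random((100, 3))  # three regression targets

reg = XGBRegressor(tree_method="hist")
reg.fit(X, y)
assert reg.predict(X).shape == (100, 3)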
External memory support for both the approx and hist tree methods is considered feature complete in XGBoost 1.6. Building upon the iterator-based interface introduced in the previous version, both hist and approx now iterate over each batch of data during training and prediction. In previous versions, hist concatenated all the batches into an internal representation; this is removed in this version. As a result, users can expect higher scalability in terms of data size, but might experience lower performance due to disk IO. (#7531, #7320, #7638, #7372)
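A compact sketch of the iterator-based interface; for brevity the batches here live in memory, whereas a real external-memory use case would load each batch from disk:

import numpy as np
import xgboost as xgb

class BatchIter(xgb.DataIter):
    def __init__(self, batches):
        self._batches = batches  # list of (X, y) pairs
        self._it = 0
        # cache_prefix tells XGBoost where to place its on-disk cache.
        super().__init__(cache_prefix="./cache")

    def next(self, input_data):
        # Return 1 to continue iterating, 0 when the data is exhausted.
        if self._it == len(self._batches):
            return 0
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1

    def reset(self):
        self._it = 0

rng = np.random.default_rng(0)
batches = [(rng.random((512, 8)), rng.random(512)) for _ in range(4)]
Xy = xgb.DMatrix(BatchIter(batches))  # batches are pulled through the iterator
booster = xgb.train({"tree_method": "approx"}, Xy, num_boost_round=10)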
The approx tree method is rewritten based on the existing hist tree method. The rewrite closes the feature gap between approx and hist and improves performance. The behavior of approx should now be more aligned with hist and gpu_hist. Here is a list of user-visible changes:
- Supports both max_leaves and max_depth.
- Supports grow_policy.
- Uses max_bin to replace sketch_eps.
- Improved performance when the depthwise policy is used.

Based on the existing JSON serialization format, we introduce UBJSON support as a more efficient alternative. Both formats will be available in the future, and we plan to gradually phase out support for the old binary model format. Users can opt for a format in the serialization functions by providing the file extension json or ubj. Also, the save_raw function in all supported language bindings gains a new parameter for exporting the model in different formats; the available options are json, ubj, and deprecated. See the document for the language binding you are using for details. Lastly, the default internal serialization format is set to UBJSON, which affects Python pickle and R RDS. (#7572, #7570, #7358, #7571, #7556, #7549, #7416)
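A short sketch of choosing the format by file extension and via save_raw (synthetic data for brevity):

import numpy as np
import xgboost as xgb

dtrain = xgb.DMatrix(np.random.rand(32, 4), label=np.random.rand(32))
booster = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=5)

booster.save_model("model.ubj")   # UBJSON, selected by the file extension
booster.save_model("model.json")  # plain JSON
raw = booster.save_raw(raw_format="ubj")  # in-memory buffer; "json" and "deprecated" also accepted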
Aside from the major new features mentioned above, some others are summarized here:
- seed_per_iteration is removed; distributed training should now generate results closer to single-node training when sampling is used. (#7009)
- A new parameter huber_slope is introduced for the Pseudo-Huber objective.
- The aucpr metric is rewritten for better performance and GPU support. (#7297, #7368)
- Handling of max_leaves and max_depth is now unified. (#7302, #7551)
- Additional improvements for gpu_hist. (#7507)

Most of the performance improvements are integrated into other refactors during feature development. approx should see a significant performance gain for many datasets, as mentioned in the previous section, while the hist tree method also enjoys improved performance with the removal of the internal pruner, along with some other refactoring. Lastly, gpu_hist no longer synchronizes the device during training. (#7737)
This section lists bug fixes that are not specific to any language binding.
- num_parallel_tree is now a model parameter instead of a training hyper-parameter, which fixes model IO with random forests. (#7751)
- Fixed prediction when iteration_range is provided. (#7409)

Other than the changes in Dask, the XGBoost Python package gained some new features and improvements, along with small bug fixes.
- Run pip install xgboost to install XGBoost; macOS users no longer need to install libomp from Homebrew, as the XGBoost wheel now bundles the libomp.dylib library.
- Parameters in fit that are not related to input data are moved into the constructor and can be set by set_params. (#6751, #7420, #7375, #7369)
- A new method get_group is introduced for DMatrix, allowing users to get the group information in a custom objective function. (#7564)
- Improved handling of **kwargs. (#7629)
- feature_names_in_ is defined for all sklearn estimators like XGBRegressor, following the sklearn convention. (#7526)
- DMatrix construction in Dask now honors the thread configuration. (#7337)
- Fixed the nthread configuration when using the Dask sklearn interface. (#7633)

This section summarizes the new features, improvements, and bug fixes to the R package.
- load.raw can optionally construct a booster as its return value. (#7686)

Some new features for the JVM packages are introduced for a more integrated GPU pipeline and better compatibility with musl-based Linux. Aside from this, we have a few notable bug fixes.
- Added DeviceQuantileDMatrix to the Scala binding. (#7459)
- Fixes around multi:softmax. (#7694)

Other than the changes in the Python package and serialization, we removed some features deprecated in previous releases. Also, as mentioned in the previous section, we plan to phase out the old binary format in future releases.
This section lists some of the general changes to XGBoost's documentation; for language-binding-specific changes, please visit the related sections.
This is a summary of maintenance work that is not specific to any language binding.
Some fixes and updates to XGBoost's CI infrastructure. (#7739, #7701, #7382, #7662, #7646, #7582, #7407, #7417, #7475, #7474, #7479, #7472, #7626)
Roadmap: https://github.com/dmlc/xgboost/issues/7726
Release note: https://github.com/dmlc/xgboost/pull/7746
This is a patch release for compatibility with the latest dependencies and bug fixes.
- Fix num_boosted_rounds for the linear model.

This is a patch release for compatibility with the latest dependencies and bug fixes. Also, all GPU-compatible binaries are built with CUDA 11.0.
[Python] Handle missing values in dataframe with category dtype. (#7331)
[R] Fix R CRAN failures about prediction and some compiler warnings.
[JVM packages] Fix compatibility with latest Spark (#7438, #7376)
Support building with CTK 11.5. (#7379)
Check user input for iteration in inplace predict.
Handle the OMP_THREAD_LIMIT environment variable.
[doc] Fix broken links. (#7341)
You can verify the downloaded packages by running this on your Unix shell:
echo "<hash> <artifact>" | shasum -a 256 --check
3a6cc7526c0dff1186f01b53dcbac5c58f12781988400e2d340dda61ef8d14ca xgboost_r_gpu_linux_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
6f74deb62776f1e2fd030e1fa08b93ba95b32ac69cc4096b4bcec3821dd0a480 xgboost_r_gpu_win64_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
565dea0320ed4b6f807dbb92a8a57e86ec16db50eff9a3f405c651d1f53a259d xgboost.tar.gz
This release comes with many exciting new features and optimizations, along with some bug fixes. We will describe the experimental categorical data support and the external memory interface independently. Package-specific new features will be listed in respective sections.
In version 1.3, XGBoost introduced an experimental feature for handling categorical data natively, without one-hot encoding. XGBoost can fit categorical splits in decision trees. (Currently, the generated splits will be of the form x \in {v}, where the input is compared to a single category value. A future version of XGBoost will generate splits that compare the input against a list of multiple category values.) Most of the other features, including prediction, SHAP value computation, feature importance, and model plotting, were revised to natively handle categorical splits. Also, all Python interfaces, including the native interface with and without quantized DMatrix, the scikit-learn interface, and the Dask interface, now accept categorical data with a wide range of supported data structures, including numpy/cupy arrays and cuDF/pandas/modin dataframes. In practice, the following are required for enabling categorical data support during training:
- Use gpu_hist to train the model.

Once the model is trained, it can be used with most of the features that are available in the Python package. For a quick introduction, see https://xgboost.readthedocs.io/en/latest/tutorials/categorical.html
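A sketch of the 1.5-era workflow with the native interface (the toy dataframe is made up, and gpu_hist requires a CUDA device):

import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "c": pd.Categorical(["a", "b", "a", "c"]),
    "f": [1.0, 2.0, 3.0, 4.0],
})
y = [0, 1, 0, 1]

# enable_categorical keeps the category encoding instead of raising an error.
dtrain = xgb.DMatrix(df, label=y, enable_categorical=True)
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=10)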
Related PRs: (#7011, #7001, #7042, #7041, #7047, #7043, #7036, #7054, #7053, #7065, #7213, #7228, #7220, #7221, #7231, #7306)
Next steps
- Support splits of the form x \in S, where the input is compared with multiple category values. (#7081)

This release features a brand-new interface and implementation for external memory (also known as out-of-core training). (#6901, #7064, #7088, #7089, #7087, #7092, #7070, #7216) The new implementation leverages the data iterator interface, which is currently used to create DeviceQuantileDMatrix. For a quick introduction, see https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html#data-iterator. During the development of this new interface, lz4 compression was removed. (#7076)

Please note that external memory support is still experimental and not ready for production use yet. All future development will focus on this new interface, and users are advised to migrate. (You are using the old interface if you are using a URL suffix to enable external memory.)
- DMatrix construction and inplace_predict no longer make a data copy when the input is a numpy array view. (#6998, #7003)
- A min_delta parameter is added to control the early-stopping behavior. (#7137)
- A new parameter iteration_range is available for the predict function, which can be used to specify the range of trees for running prediction. (#6819, #7126)
- The nthread parameter is now honored in DMatrix construction. (#7127)
- Added DeviceQuantileDMatrix to the JVM packages. (#7195) Constructing DMatrix with GPU data structures and the interface for quantized DMatrix were first introduced in the Python package and are now available in the xgboost4j package.

The performance of both hist and gpu_hist has been significantly improved in 1.5 through several optimizations.
- The deterministic_histogram parameter is removed, and the GPU algorithm is now always deterministic.
- n_gpus was deprecated in the 1.0 release and is now removed.
- Fixed behavior when gpu_id is specified. (#6891, #6987)
- Fixed the gamma negative log-likelihood evaluation metric. (#7275)
- Fixed verbose_eval for the xgboost.cv function in Python. (#7291)
- Fixed handling of UINT32_MAX with missing values. (#7026)
- Fixed the softmax objective. (#7104)

Other than the items mentioned in the previous sections, there are some Python-specific improvements.
(#6988)__sklearn_is_fitted__
is
implemented as part of the changes (#7130, #7230)DaskDMatrix
with iteration_range
. (#7005)Improvements other than new features on R package:
Improvements other than new features on JVM packages:
- process_type. (#7135)
- use_rmm. (#6808)

Some refactoring around the CPU hist tree method led to better performance, but is listed under general maintenance tasks.

Others
- gpu_id with a custom objective. (#7015)
- dh::CopyIf. (#6828)
- ncclUnhandledCudaError. (#7190)

You can verify the downloaded packages by running this on your Unix shell:
echo "<hash> <artifact>" | shasum -a 256 --check
2c63e8abd3e89795ac9371688daa31109a9514eebd9db06956ba5aa41d0c0e20 xgboost_r_gpu_linux_1.5.0.tar.gz
8b19f817dcb6b601b0abffa9cf943ee92c3e9a00f56fa3f4fcdfe98cd3777c04 xgboost_r_gpu_win64_1.5.0.tar.gz
25ee3adb9925d0529575c0f00a55ba42202a1cdb5fdd3fb6484b4088571326a5 xgboost.tar.gz