[ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
Thrust 1.16.0 provides a new “nosync” hint for the CUDA backend, as well as numerous bugfixes and stability improvements.
New thrust::cuda::par_nosync Execution Policy

Most of Thrust's parallel algorithms are fully synchronous and will block the calling CPU thread until all work is completed. This design avoids many pitfalls associated with asynchronous GPU programming, resulting in simpler and less error-prone usage for new CUDA developers. Unfortunately, this improvement in user experience comes at a performance cost that often frustrates more experienced CUDA programmers.
Prior to this release, the only synchronous-to-asynchronous migration path for existing Thrust codebases involved significant refactoring: replacing calls to thrust:: algorithms with a limited set of future-based thrust::async algorithms, or dropping down to lower-level CUB kernels. The new thrust::cuda::par_nosync execution policy provides a less invasive entry point for asynchronous computation.
par_nosync is a hint to the Thrust execution engine that any non-essential internal synchronizations should be skipped, and that the caller will perform an explicit synchronization before accessing results.
While some Thrust algorithms require internal synchronization to safely compute their results, many do not. For example, multiple thrust::for_each
invocations can be launched without waiting for earlier calls to complete:
// Queue three `for_each` kernels:
thrust::for_each(thrust::cuda::par_nosync, vec1.begin(), vec1.end(), Op{});
thrust::for_each(thrust::cuda::par_nosync, vec2.begin(), vec2.end(), Op{});
thrust::for_each(thrust::cuda::par_nosync, vec3.begin(), vec3.end(), Op{});
// Do other work while kernels execute:
do_something();
// Must explicitly synchronize before accessing `for_each` results:
cudaDeviceSynchronize();
Thanks to @fkallen for this contribution.
A future version of Thrust will remove support for CUDA Dynamic Parallelism (CDP).
This will only affect calls to Thrust algorithms made from CUDA device-side code; such calls currently launch a device-wide kernel, but will instead execute sequentially on the calling GPU thread.
Other changes in this release:
- Thrust's CUB dependency no longer aliases the cub namespace to thrust::cub. The alias caused ambiguous-namespace issues for projects that declare using namespace thrust; from the global namespace. We recommend against this practice.
- Add the thrust::cuda::par_nosync execution policy. Thanks to @fkallen for this contribution.
- Use CUB's DeviceMergeSort API and remove Thrust's internal merge sort implementation.
- Updates to thrust::shuffle. Thanks to @djns99 for this contribution.
- Use the CMAKE_INSTALL_INCLUDEDIR values in Thrust's CMake install rules. Thanks to @robertmaynard for this contribution.
- Fix icc builds.
- Work around the min/max macros defined in windows.h.
- Fix nvc++ builds.
- Work around the small macro defined in windows.h.
Thrust 1.15.0 provides numerous bugfixes, including non-numeric thrust::sequence support, several MSVC-related compilation fixes, fewer conversion warnings, counting_iterator initialization, and documentation updates.

A future version of Thrust will remove support for CUDA Dynamic Parallelism (CDP). This will only affect calls to Thrust algorithms made from CUDA device-side code; such calls currently launch a device-wide kernel, but will instead execute sequentially on the calling GPU thread.

- Fix thrust::sequence to work with non-numeric types. Thanks to Ben Jude (@bjude) for this contribution.
- Fix max() usage on MSVC. Thanks to Thomas (@tomintheshell) for this contribution.
- Fix counting_iterator's default constructor.
- Fix std::allocator_traits usage on MSVC + C++17.
- Fix several -Wconversion warnings. Thanks to Matt Stack (@matt-stack) for this contribution.
- Update the thrust::for_each documentation. Thanks to Salman (@untamedImpala) for this contribution.
- Avoid the B0 macro in the termios.h system header. Thanks to Philip Deegan (@PhilipDeegan) for this contribution.

Thrust 1.13.1 is a minor release accompanying the CUDA Toolkit 11.5.
This release provides a new hook for embedding the thrust::
namespace inside a custom namespace. This is intended to work around various issues related to linking multiple shared libraries that use Thrust. The existing CUB_NS_PREFIX
and CUB_NS_POSTFIX
macros already provided this capability for CUB; this update provides a simpler mechanism that is extended to and integrated with Thrust. Simply define THRUST_CUB_WRAPPED_NAMESPACE
to a namespace name, and both thrust::
and cub::
will be placed inside the new namespace. Using different wrapped namespaces for each shared library will prevent issues like those reported in NVIDIA/thrust#1401.
- Add THRUST_CUB_WRAPPED_NAMESPACE hooks.

Thrust 1.14.0 is a major release accompanying the NVIDIA HPC SDK 21.9.
This release adds the ability to wrap the thrust::
namespace in an external namespace, providing a workaround for a variety of shared library linking issues. Thrust also learned to detect when CUB's symbols are in a wrapped namespace and properly import them. To enable this feature, use #define THRUST_CUB_WRAPPED_NAMESPACE foo
to wrap both Thrust and CUB in the foo::
namespace. See thrust/detail/config/namespace.h
for details and more namespace options.
Several bugfixes are also included: The tuple_size
and tuple_element
helpers now support cv-qualified types.
scan_by_key
uses less memory. thrust::iterator_traits
is better integrated with std::iterator_traits
. See below for more details and references.
- Allow thrust:: to be wrapped in an external namespace, and support cases when CUB is wrapped in an external namespace.
- Add cv-qualified type support to thrust::tuple_size and thrust::tuple_element. Thanks to Jake Hemstad for this contribution.
- Reduce memory usage in scan_by_key. Thanks to Lilo Huang for this contribution.
- Fix the expand example. Thanks to Lilo Huang for this contribution.
- Fix the find_package configuration files.
- Use std::iterator_traits when no thrust::iterator_traits specialization exists for an iterator type. Thanks to Divye Gala for this contribution.

Thrust 1.13.0 is the major release accompanying the NVIDIA HPC SDK 21.7 release.
Notable changes include bfloat16
radix sort support (via thrust::sort
) and memory handling fixes in the reserve
method of Thrust's vectors. The CONTRIBUTING.md
file has been expanded to include instructions for building CUB as a component of Thrust, and API documentation now refers to cppreference instead of SGI's STL reference.
- thrust::host_space_tag and thrust::device_space_tag have been removed. Use the equivalent thrust::host_system_tag and thrust::device_system_tag instead.
- Support bfloat16 in thrust::sort. Thanks to Xiang Gao (@zasdfgbnm) for this contribution.
- thrust::transform_iterator now supports non-copyable types. Thanks to Jake Hemstad (@jrhemstad) for this contribution.
- Add a THRUST_IGNORE_DEPRECATED_API macro that disables deprecation warnings on Thrust and CUB APIs.
- Dispatch more thrust::sort calls into cub::DeviceRadixSort. Thanks to Andy Adinets (@canonizer) for this contribution.
- Fix thrust::sort's merge sort implementation.
- Fix memory handling in reserve on Thrust's vector containers. Thanks to Kai Germaschewski (@germasch) for this contribution.
- Expand CONTRIBUTING.md to include details on building CUB's test suite as part of Thrust.

Thrust 1.12.1 is a trivial patch release that slightly changes the phrasing of a deprecation message.
Thrust 1.12.0 is the major release accompanying the NVIDIA HPC SDK 21.3 and the CUDA Toolkit 11.4.
It includes a new thrust::universal_vector
, which holds data that is accessible from both host and device. This allows users to easily leverage CUDA's unified memory with Thrust.
New asynchronous thrust::async::exclusive_scan and inclusive_scan algorithms have been added, and the synchronous versions of these have been updated to use cub::DeviceScan directly.
Many compilation warnings and subtle overflow bugs were fixed in the device algorithms, including a long-standing bug that returned invalid temporary storage requirements when num_items
was close to (but not exceeding) INT32_MAX
.
This release deprecates support for Clang < 7.0 and MSVC < 2019 (aka 19.20/16.0/14.20).
- Update the thrust::scan_by_key functors / accumulator types. This may change the results from scan_by_key when input, output, and initial value types are not the same type.
- Add new thrust::async:: algorithms: inclusive_scan and exclusive_scan.
- Add thrust::universal_vector, universal_ptr, and universal_allocator.
- Fix make_reverse_iterator.
- Fix invalid temp_storage_bytes results when num_items is close to (but not over) INT32_MAX.
- Fix GridEvenShare with unsigned offsets.
- Fix thrust::transform_reduce.
- Fix thrust::counting_iterator.
- Fix thrust::optional. Thanks to Vukasin Milovanovic for this contribution.
- Fix the signbit(double) implementation on MSVC.
- Use cub::DeviceScan to implement thrust::exclusive_scan and thrust::inclusive_scan.
- Fix the set_operation documentation. Thanks to Hongyu Cai for this contribution.
- Fix the thrust::complex implementation.
- Update the thrust::gather documentation.

Thrust 1.11.0 is a major release providing bugfixes and performance enhancements. It includes a new sort algorithm that provides up to 2x more performance from thrust::sort
when used with certain key types and hardware. The new thrust::shuffle
algorithm has been tweaked to improve the randomness of the output. Our CMake package and build system continue to see improvements with better add_subdirectory
support, installation rules, status messages, and other features that make Thrust easier to use from CMake projects. The release includes several other bugfixes and modernizations, and received updates from 12 contributors.
- Use a faster radix sort for thrust::sort on CUDA when using 32/64-bit numeric keys on Pascal and up (SM60+). This improved radix sort algorithm provides up to 2x more performance. Thanks to Andy Adinets for this contribution.
- Improve add_subdirectory support. Thanks to Kai Germaschewski for this contribution.
- Update thrust::shuffle to produce better quality random distributions. Thanks to Rory Mitchell and Daniel Stokes for this contribution.
- Fix transform_inclusive_scan and transform_exclusive_scan.
- Fix a middle calculation to avoid overflows. Thanks to Richard Barnes for this contribution.
- Use size_t for the index type parameter in thrust::tuple_element. Thanks to Andrew Corrigan for this contribution.
- Fix thrust::device_vector in MSVC Debug builds. Thanks to Ben Jude for this contribution.
- Fix the thrust::detail::is_pod implementation. Thanks to Anatoliy Tomilov for this contribution.
- Fix an iter_swap call when using thrust::partition with STL containers. Thanks to Isaac Deutsch for this contribution.
- Update the FindTBB.cmake module to support the latest MSVC.
- Use FindPackageHandleStandardArgs to print standard status messages when our CMake package is found. Thanks to Kai Germaschewski for this contribution.
- Add thrust::remove_cvref. Thanks to Andrew Corrigan for this contribution.
- The thrust/cub repository is now NVIDIA/cub.
- Rename the master branch to the main branch.

Thrust 1.10.0 is the major release accompanying the NVIDIA HPC SDK 20.9 release and the CUDA Toolkit 11.2 release. It drops support for C++03, GCC < 5, Clang < 6, and MSVC < 2017. It also overhauls CMake support. Finally, we now have a Code of Conduct for contributors: https://github.com/thrust/thrust/blob/main/CODE_OF_CONDUCT.md
- C++03 support has been removed. Deprecation warnings for older dialects can be suppressed by defining THRUST_IGNORE_DEPRECATED_CPP_DIALECT or THRUST_IGNORE_DEPRECATED_CPP_11. Suppression is only a short term solution. We will be dropping support for C++11 in the near future.
- The default branch is now main.
- Support add_subdirectory with the Thrust source root (see thrust/thrust#976). An example can be found here: https://github.com/thrust/thrust/blob/main/examples/cmake/add_subdir/CMakeLists.txt
- A THRUST_CPP_DIALECT option has been added for single config mode. Logic that modified CMAKE_CXX_STANDARD and CMAKE_CUDA_STANDARD has been eliminated.
- Refactor the CMake build across testing/CMakeLists.txt, examples/CMakeLists.txt, cmake/ThrustHeaderTesting.cmake, and cmake/ThrustCUDAConfig.cmake.
- Use include(cmake/*.cmake) files rather than searching CMAKE_MODULE_PATH - we only want to use the ones in the repo.
- Add thrust::transform_input_output_iterator, a variant of the transform iterator adapter that works as both an input iterator and an output iterator. The given input function is applied after reading from the wrapped iterator while the output function is applied before writing to the wrapped iterator. Thanks to Trevor Smith for this contribution.
- Add a thrust::plus<> specialization.
- The thrust::intermediate_type_from_function_and_iterators helper is no longer needed and has been removed.
- Use cudaStreamSynchronize instead of cudaDeviceSynchronize if the execution policy has a stream attached to it. Thanks to Rong Ou for this contribution.
- Fix thrust::transform_inclusive_scan with heterogeneous types. Thanks to Rong Ou for this contribution.
- Evaluate the CUDA_CUB_RET_IF_FAIL macro argument only once. Thanks to Jason Lowe for this contribution.
- Add a missing <stdexcept> header.
- Fix the THRUST_DECLTYPE_RETURNS macros in async test implementations.
- Use std::iota in CUDATestDriver::target_devices. Thanks to Michael Francis for this contribution.
- Make the out_of_memory_recovery test trigger faster.
- Fix thrust::device_reference with placeholder expressions and thrust::find with asymmetric equality operators.
- Change thrust::detail::predicate_to_integral from bool to IntegralType. Thanks to Andreas Hehn for this contribution.
- Add a missing <thrust/system/cuda/memory.h> include to <thrust/system/cuda/detail/malloc_and_free.h>. Thanks to Robert Maynard for this contribution.
- Fix testing/copy.cu.
- Fix thrust::wrapped_function for void return types because MSVC is not a fan of the pattern return static_cast<void>(expr);.
- Replace tbb/tbb_thread.h with <thread>.
- Use thrust::advance instead of += for generic iterators.
- Pass compiler flags with -Xcompiler for NVCC.
- Add an ASSERT_STATIC_ASSERT skip for the OMP backend.
- Fix the tbb.cuda configs.
- s/fopen/fstream/