

RAPIDS is a suite of accelerated libraries for data science and machine learning on GPUs: cuDF for pandas-like data structures, cuGraph for graph data, and cuML for machine learning. In many data analytics and machine learning algorithms, computational bottlenecks tend to come from a small subset of steps that dominate the end-to-end performance. Reusable solutions for these steps often require low-level primitives that are non-trivial and time-consuming to write well. NVIDIA built RAPIDS RAFT to address these bottlenecks and maximize reuse when building algorithms for multidimensional data, such as what is often encountered in machine learning and data analytics.

RAFT presents these essential elements as building blocks for developers. The core tools of applied math, such as linear algebra, sparse matrix operations, sampling, optimization, and statistical moments, are supported. Neighborhood methods cover a large portion of machine learning algorithms, such as clustering and dimensionality reduction; they are common, useful, and computationally intensive.
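To make one of these building blocks concrete, the minimal sketch below computes a pairwise distance matrix on the GPU through pylibraft, RAFT's Python bindings. It assumes pylibraft and CuPy are installed; the module path and signature of `pairwise_distance` reflect recent RAFT releases and should be checked against the version you use.

```python
# Minimal sketch: pairwise Euclidean distances with RAFT's Python bindings.
# Assumes pylibraft and CuPy are installed; module path and signature are
# based on recent pylibraft releases and may differ across versions.
import cupy as cp
from pylibraft.distance import pairwise_distance

n_samples = 5000
n_features = 50

# Random single-precision data resident in GPU memory
in1 = cp.random.random_sample((n_samples, n_features), dtype=cp.float32)
in2 = cp.random.random_sample((n_samples, n_features), dtype=cp.float32)

# Returns an n_samples x n_samples device matrix of distances
output = pairwise_distance(in1, in2, metric="euclidean")
print(cp.asarray(output).shape)  # (5000, 5000)
```

The result exposes `__cuda_array_interface__`, so it can be handed to CuPy, Numba, or other GPU libraries without copying, which is what makes these primitives easy to drop into an existing pipeline.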

The highly optimized RAFT computational patterns constitute a rich catalog of modular, drop-in accelerators, providing you with powerful elements to compose new algorithms or speed up existing libraries. RAFT enables you to spend your time designing and developing your applications, rather than worrying about whether you are getting the most out of your GPU hardware. This is just the beginning: RAFT components will continue to be optimized as new GPU architectures are released, ensuring that you always get the best performance out of your hardware.

In this post, I discuss where RAFT belongs in a developer's toolbox, the circumstances in which to use it, and, more importantly, how to wield RAFT when it is needed.
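As a sketch of the drop-in accelerator idea, here is one such neighborhood primitive: a brute-force k-nearest-neighbors search, the kind of step that dominates runtime in algorithms like KNN classification or UMAP. The `pylibraft.neighbors.brute_force` module and `knn` signature are assumptions based on recent pylibraft releases; this functionality may live elsewhere (for example, in the cuVS library) in newer versions.

```python
# Sketch: brute-force k-NN search, a typical neighborhood primitive that
# higher-level algorithms can drop in. Module path and signature are
# assumptions based on recent pylibraft releases.
import cupy as cp
from pylibraft.neighbors.brute_force import knn

dataset = cp.random.random_sample((10000, 64), dtype=cp.float32)
queries = cp.random.random_sample((100, 64), dtype=cp.float32)

# For each query row, find the 10 closest dataset rows
distances, indices = knn(dataset, queries, k=10)
print(cp.asarray(indices).shape)  # (100, 10)
```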
