CANCELED: Create CUDA kernels from Python using Numba and CuPy.
Valentin Haenel
We'll explain how to do GPU-accelerated numerical computing from Python using the Numba Python compiler in combination with the CuPy GPU array library. Numba is an open source compiler that can translate Python functions for execution on the GPU without requiring users to write any C or C++ code. Numba's just-in-time compilation makes it easy to experiment interactively with GPU computing in the Jupyter notebook. Combining Numba with CuPy, a nearly complete implementation of the NumPy API for CUDA, creates a high-productivity GPU development environment. Learn the basics of using Numba with CuPy, techniques for automatically parallelizing custom Python functions on arrays, and how to create and launch CUDA kernels entirely from Python. Appropriate hardware will be provided in the form of access to GPU-based cloud resources.
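A minimal sketch of the kind of workflow described above (not taken from the talk materials), assuming a CUDA-capable GPU with Numba and CuPy installed: a Numba @cuda.jit kernel is written in pure Python and launched directly on CuPy arrays, which works because CuPy arrays expose the CUDA array interface that Numba understands. The kernel name, array sizes, and launch configuration are illustrative assumptions.

```python
import cupy as cp
from numba import cuda

@cuda.jit
def scaled_add(x, y, out, alpha):
    # Compute out = alpha * x + y, one element per GPU thread.
    i = cuda.grid(1)            # absolute index of this thread in the grid
    if i < x.shape[0]:          # guard against threads beyond the array end
        out[i] = alpha * x[i] + y[i]

n = 1_000_000
x = cp.arange(n, dtype=cp.float32)   # data allocated on the GPU by CuPy
y = cp.ones(n, dtype=cp.float32)
out = cp.empty_like(x)

# Choose a launch configuration covering all n elements.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block

# Launch the kernel entirely from Python; CuPy arrays are passed in directly.
scaled_add[blocks, threads_per_block](x, y, out, 2.0)

print(out[:5])   # the result stays on the GPU as a CuPy array
```

Because no copies are made between CuPy and Numba, the same arrays can then be fed back into CuPy's NumPy-style operations, which is the interactive mix-and-match style the tutorial is aimed at.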
Valentin Haenel
Affiliation: Anaconda Inc.
Valentin is a long-time "Python for Data" user and developer who still remembers hearing Travis Oliphant's keynote at EuroScipy 2007, around the time he first became aware of the nascent scientific Python stack. He started using Python for simple modeling of spiking neurons and for evaluating data from perception experiments during his Master's degree in computational neuroscience. Since then he has contributed to more than 75 open source projects, for example within the Blosc ecosystem, where he still maintains and contributes to Python-Blosc and Bloscpack. He has also acquired significant experience as a Git trainer and consultant and published the first German-language book on the topic in 2011. In 2014 and 2015 he helped kickstart the PyData Berlin community alongside a few other volunteers and co-organized the first two editions of the PyData Berlin Conference. He now works at Anaconda as a software engineer and open source developer on the Numba project.