I will introduce the latest frameworks from NVIDIA that enable large-scale AI, and research that exploits them.
Abstract: The deep-learning revolution has achieved impressive progress through the convergence of data, algorithms, and computing infrastructure. The availability of web-scale labeled data and the parallelism of GPUs enabled us to harness the power of neural networks. However, further progress cannot rely solely on bigger models. We need to reduce our dependence on labeled data and design algorithms that incorporate more structure and domain knowledge, such as tensors, graphs, physical laws, and simulations. I will describe efficient frameworks that enable developers to easily prototype such models: TensorLy for tensorized architectures, NVIDIA Isaac for physically valid simulations, and NVIDIA RAPIDS for end-to-end data analytics. I will then lay out some outstanding problems in this area.