Jupyter notebooks have seen enthusiastic adoption over time. With increased adoption come new challenges: how to manage different environments for kernels, how to update an environment without breaking anything, how to maintain different kernel versions for development and production, and how to share a notebook so that it runs everywhere. This talk will explain how kernel life cycle management simplifies these challenges.
Jupyter Notebook has seen enthusiastic adoption in the data science and engineering community, to the extent that it has become the default environment for research. Many enterprises use notebooks as the de facto tool for their data and machine learning development activities, and some are even productionizing notebooks. The majority of data scientists and engineers at PayPal use notebooks for all development activities. In fact, the entire data and AI/ML platforms are exposed through notebooks.
With increased usage come new challenges. With thousands of users on notebooks, different environments are needed for different use cases, along with governance around those environments. Here are some of the challenges we faced:
• Supporting different Python versions and packages for different environments has become a management nightmare
• Sharing a notebook is not simple across different notebook environments
• It is hard to maintain different kernels and versions for environments such as development and production
So, we implemented a solution named kernel life cycle management to address these challenges. Praveen and Ayushi will explain:
• Kernel life cycle management – how life cycle management was introduced for kernels at PayPal by leveraging Kubernetes and Docker
• Sharing a notebook – how a shared notebook can be executed by any user
• Making changes to a kernel environment without breaking scheduled notebooks in production
• Kernel as a Docker image for isolating environments
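As a rough illustration of the "kernel as a Docker image" idea, a Jupyter kernelspec (kernel.json) can point its launch command at a container instead of a locally installed interpreter, so each kernel's environment is isolated in an image. This is a minimal sketch under our own assumptions, not PayPal's actual implementation; the image name, tag, and mount flags are hypothetical:

```json
{
  "argv": ["docker", "run", "--rm", "--network=host",
           "-v", "{connection_file}:/connection.json",
           "example/py39-datasci-kernel:1.0",
           "python", "-m", "ipykernel_launcher", "-f", "/connection.json"],
  "display_name": "Python 3.9 (Docker, hypothetical)",
  "language": "python"
}
```

Here `{connection_file}` is the placeholder Jupyter substitutes with the kernel connection file at launch. Because the environment lives in a versioned image tag, it can be upgraded by publishing a new tag while production notebooks keep running against the old one.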
Background knowledge for attendees:
• Basic understanding of notebooks and kernels