Shen Liang
June 15, 4 PM
Online (Zoom)

Abstract

This tutorial provides an overview of two important and closely related (often overlapping) topics in deep learning: transfer learning and multi-task learning. Transfer learning focuses on transferring knowledge learned on one problem to a different yet related problem, while multi-task learning attempts to solve multiple tasks simultaneously by exploiting the commonalities across those tasks. In this tutorial, I will present an overview of the motivations, methodologies, and applications of these two learning paradigms in fields such as natural language processing and computer vision. I will also work through two concrete examples of these paradigms in a hands-on workshop.
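For readers unfamiliar with the two paradigms, the minimal sketch below illustrates the usual starting points: a transfer-learning setup that freezes a pretrained encoder and trains only a new task head, and a multi-task setup that shares one encoder across several task-specific heads. It is not taken from the tutorial or workshop material; the layer sizes, the task heads, and the stand-in "pretrained" backbone are hypothetical placeholders.

import torch
import torch.nn as nn

# --- Transfer learning: reuse a (stand-in) pretrained encoder on a new task ---
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())       # placeholder for a pretrained encoder
for p in backbone.parameters():
    p.requires_grad = False                                  # freeze the transferred knowledge
transfer_model = nn.Sequential(backbone, nn.Linear(64, 10))  # only the new head is trained

# --- Multi-task learning: one shared encoder, one head per task ---
class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # shared across tasks
        self.head_a = nn.Linear(hidden, 5)   # e.g. a 5-class classification task
        self.head_b = nn.Linear(hidden, 1)   # e.g. a regression task

    def forward(self, x):
        z = self.shared(x)                   # commonalities are exploited in the shared encoder
        return self.head_a(z), self.head_b(z)

x = torch.randn(4, 32)
print(transfer_model(x).shape)               # torch.Size([4, 10])
out_a, out_b = MultiTaskNet()(x)
print(out_a.shape, out_b.shape)              # torch.Size([4, 5]) torch.Size([4, 1])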

Dr Shen Liang
(Université Paris Cité, diiP)
Shen Liang is a research associate at the Data Intelligence Institute of Paris (diiP), affiliated with Université Paris Cité. He has worked on a variety of data management and mining problems, including time series analysis, semi-supervised learning, knowledge-guided deep learning, and GPU-accelerated computation, in fields such as healthcare, manufacturing, geosciences, and astrophysics. He holds a PhD in software engineering from Fudan University, China.
