Why you don’t see many real-world applications of Reinforcement Learning.
Yurii Tolochko
As the third pillar of machine learning, reinforcement learning unsurprisingly enjoys opportunities and suffers from challenges that are unique to its domain. Many of the difficulties of supervised and unsupervised learning are simply not an issue here, but this comes at the cost of dealing with a completely different set of problems.
We have seen the tremendous success of RL in creating AIs for various games - from tic-tac-toe through chess and Go and up to Starcraft 2 and Dota 2. But what about successful applications in fields that are not inherently game-related? It turns out we won't find that many, even if we dig pretty deep. Why is that?
This is due to problems inherent to the current state of RL as a field. In this talk we will address these limitations. We will see that many of the reported findings don't hold up under scrutiny. We will see how and why many state-of-the-art algorithms break down when compared to much simpler solutions. However, we will also identify the conditions under which RL already shines or might shine in the future. In the end we will discuss some promising avenues for future research.
The talk assumes familiarity with the fundamental principles of machine learning in general, as well as basic knowledge of reinforcement learning concepts specifically, so as not to waste time introducing common terminology.
Yurii Tolochko
Affiliation: BASF
I am a Machine Learning Engineer working with, and passionate about, all branches of ML and, of course, Python.
The talk is partly inspired by my own attempts to introduce Reinforcement Learning into my working environment, and by the apparent chasm in efficacy and applicability between conventional (un)supervised ML and Reinforcement Learning.