Value Function Approximation

The tabular methods we have considered so far scale well within a small state space. However, when dealing with Reinforcement Learning problems in a continuous state space, an exact solution is nearly impossible to find; instead, we look for an approximate one. ...
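
As a one-line preview of the idea (using a linear approximator here, one common choice rather than necessarily the one this note settles on): represent the value function with a parameter vector $\mathbf{w}$ and a feature vector $\mathbf{x}(s)$,

$$\hat{v}(s, \mathbf{w}) \doteq \mathbf{w}^\top \mathbf{x}(s), \qquad \mathbf{w} \leftarrow \mathbf{w} + \alpha \big[ U_t - \hat{v}(S_t, \mathbf{w}) \big] \mathbf{x}(S_t),$$

where $U_t$ is some update target, e.g. a Monte Carlo return or a TD target.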

February 11, 2022 · 21 min · Trung H. Nguyen

Temporal-Difference Learning

So far in this series, we have gone through the ideas of dynamic programming (DP) and Monte Carlo methods. What happens if we combine these ideas? Temporal-difference (TD) learning is the answer. ...
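
As a preview, the tabular TD(0) update shows the combination directly: it learns from sampled transitions like Monte Carlo, while bootstrapping from the current estimate like DP:

$$V(S_t) \leftarrow V(S_t) + \alpha \big[ R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \big].$$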

January 31, 2022 · 21 min · Trung H. Nguyen

Monte Carlo Methods in Reinforcement Learning

Recall that when using Dynamic Programming algorithms to solve RL problems, we assumed complete knowledge of the environment. With Monte Carlo methods, we require only experience: sample sequences of states, actions, and rewards from simulated or real interaction with an environment. ...
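
As a preview, the simplest Monte Carlo estimate of a state's value is just the average of the returns $G_i$ observed after visits to that state:

$$V(s) \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} G_i,$$

where $N(s)$ counts the visits to $s$.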

August 21, 2021 · 20 min · Trung H. Nguyen

Solving MDPs with Dynamic Programming

In the two previous notes, MDPs and Bellman equations and Optimal Policy Existence, we saw how MDPs and Bellman equations are defined and how they work. In this note, we are going to solve MDPs with Dynamic Programming. ...
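
For instance, value iteration (one standard DP method) repeatedly applies the Bellman optimality backup until the value estimates converge:

$$v_{k+1}(s) = \max_a \sum_{s', r} p(s', r \mid s, a) \big[ r + \gamma v_k(s') \big].$$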

July 25, 2021 · 9 min · Trung H. Nguyen

Optimal Policy Existence

In the previous note, Markov Decision Processes, Bellman equations, we mentioned that there exists a policy $\pi_*$ that is better than or equal to all other policies. In this note, we prove that claim. ...
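
Formally, the claim to be proved is that there exists a policy $\pi_*$ such that

$$v_{\pi_*}(s) \geq v_\pi(s) \quad \text{for all } s \in \mathcal{S} \text{ and all policies } \pi.$$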

July 10, 2021 · 7 min · Trung H. Nguyen

Markov Decision Processes, Bellman equations

You may have heard of a computer program called AlphaGo, the AI that beat Lee Sedol, winner of 18 world Go titles. One of the techniques it used is self-play against other instances of itself, trained with Reinforcement Learning. ...

June 27, 2021 · 5 min · Trung H. Nguyen