Read-through: Measure theory - Lebesgue measure
Note II of the measure theory series. Materials are mostly taken from Tao’s book, except for some needed notations extracted from Stein’s book. ...
Note I of the measure theory series. Materials are mostly taken from Tao’s book, except for some needed notations extracted from Stein’s book. ...
Connection between the likelihood ratio policy gradient method and the importance sampling method. ...
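To make that connection concrete, here is the standard derivation sketch, with $p_{\boldsymbol{\theta}}(\tau)$ denoting the trajectory distribution induced by the policy and $R(\tau)$ the trajectory return (this notation is assumed here, not taken from the post):

$$
J(\boldsymbol{\theta}) = \mathbb{E}_{\tau\sim p_{\boldsymbol{\theta}_{\text{old}}}}\!\left[\frac{p_{\boldsymbol{\theta}}(\tau)}{p_{\boldsymbol{\theta}_{\text{old}}}(\tau)}\,R(\tau)\right]
\quad\Longrightarrow\quad
\nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})\Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{\text{old}}}
= \mathbb{E}_{\tau\sim p_{\boldsymbol{\theta}_{\text{old}}}}\!\left[\nabla_{\boldsymbol{\theta}}\log p_{\boldsymbol{\theta}}(\tau)\Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{\text{old}}}\,R(\tau)\right],
$$

i.e. differentiating the importance-sampling estimate of the objective and evaluating it at the sampling parameters recovers the likelihood ratio (score function) policy gradient.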
Recall that when using dynamic programming (DP) methods to solve reinforcement learning problems, we require a model of the environment, whereas with Monte Carlo methods and temporal-difference learning, a model is unnecessary. Methods that require a model, as in the case of DP, are called model-based, while methods that do not use a model are called model-free. Model-based methods primarily rely on planning; model-free methods, on the other hand, primarily rely on learning. ...
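As a rough illustration of the distinction (a minimal sketch, not code from the post; the tabular setup, the array names `P` and `R`, and the hyperparameters are all assumptions), a model-based value-iteration sweep consults the model $(P, R)$ directly, while a model-free Q-learning update only needs sampled transitions:

```python
import numpy as np

# Hypothetical tabular MDP: known transition probabilities P[s, a, s'] and rewards R[s, a].
n_states, n_actions, gamma = 5, 2, 0.9
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = np.random.rand(n_states, n_actions)

def value_iteration_sweep(V):
    """Model-based (planning): one sweep uses the model (P, R) explicitly."""
    return np.max(R + gamma * P @ V, axis=1)

def q_learning_update(Q, s, a, r, s_next, alpha=0.1):
    """Model-free (learning): update from a single sampled transition, no model needed."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```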
So far in the series, we have been choosing actions based on the estimated action value function. Alternatively, we can learn a parameterized policy, with parameter vector $\boldsymbol{\theta}$, that selects actions without consulting a value function, by updating $\boldsymbol{\theta}$ on each step in the direction of an estimate of the gradient of some performance measure w.r.t. $\boldsymbol{\theta}$. Such methods are called policy gradient methods. ...
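As one concrete instance (a sketch in Sutton & Barto's notation; the particular performance measure and the REINFORCE update below are assumptions about one such method, not necessarily the one the post develops), take $J(\boldsymbol{\theta}) \doteq v_{\pi_{\boldsymbol{\theta}}}(s_0)$ and perform stochastic gradient ascent:

$$
\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t + \alpha\,\widehat{\nabla J(\boldsymbol{\theta}_t)},
\qquad\text{e.g.}\qquad
\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t + \alpha\, G_t\, \nabla_{\boldsymbol{\theta}} \ln \pi(A_t \mid S_t, \boldsymbol{\theta}_t),
$$

where $G_t$ is the return from time step $t$.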
Notes on Exponential Family & Generalized Linear Models. ...
Besides $n$-step TD methods, there is another mechanism, called eligibility traces, that unifies TD and Monte Carlo. Varying $\lambda$ in TD($\lambda$) from $0$ to $1$ yields a spectrum ranging from TD methods at $\lambda=0$ to Monte Carlo methods at $\lambda=1$. ...
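As a quick reminder of where that spectrum comes from (with $G_{t:t+n}$ denoting the $n$-step return and $T$ the terminal time, notation assumed here), the $\lambda$-return compounds all $n$-step returns:

$$
G_t^{\lambda} \doteq (1-\lambda)\sum_{n=1}^{T-t-1}\lambda^{n-1}G_{t:t+n} \;+\; \lambda^{T-t-1}G_t,
$$

so that $\lambda=0$ leaves only the one-step TD target $G_{t:t+1}$, while $\lambda=1$ collapses onto the full Monte Carlo return $G_t$.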