reinforcement-learning
Maximum Entropy Reinforcement Learning via Soft Q-learning & Soft Actor-Critic
Notes on Entropy-Regularized Reinforcement Learning via SQL & SAC ...
Deterministic Policy Gradients
The generalization of the policy gradient theorem to the deterministic case, and the corresponding policy gradient algorithms. ...
Likelihood Ratio Policy Gradient via Importance Sampling
The connection between the likelihood ratio policy gradient method and importance sampling. ...
Planning & Learning
Recall that when using dynamic programming (DP) methods to solve reinforcement learning problems, we require a model of the environment, whereas with Monte Carlo methods and temporal-difference learning, no model is necessary. Methods that require a model, as DP does, are called model-based, while methods that do not are called model-free. Model-based methods primarily rely on planning; model-free methods, on the other hand, primarily rely on learning. ...
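As a rough illustration of this planning/learning split, here is a minimal Dyna-Q-style sketch in Python. It is an assumed example, not code from the post: the state/action sizes, step sizes, and the `dyna_q_step` helper are illustrative choices, and the model-learning step assumes a deterministic environment.

```python
import numpy as np

n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
model = {}                       # learned model: (s, a) -> (r, s_next)
alpha, gamma = 0.1, 0.99
rng = np.random.default_rng(0)

def q_update(s, a, r, s_next):
    """One-step Q-learning backup, used for both real and simulated data."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def dyna_q_step(s, a, r, s_next, n_planning_steps=5):
    q_update(s, a, r, s_next)    # learning: direct RL from real experience
    model[(s, a)] = (r, s_next)  # model learning (assumes determinism)
    keys = list(model.keys())
    for _ in range(n_planning_steps):  # planning: backups from simulated steps
        ps, pa = keys[rng.integers(len(keys))]
        pr, ps_next = model[(ps, pa)]
        q_update(ps, pa, pr, ps_next)
```

With `n_planning_steps=0` this reduces to plain model-free Q-learning; raising it shifts more of the work onto planning with the learned model.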
Policy Gradient Theorem
So far in the series, we have been choosing actions based on an estimated action value function. Alternatively, we can learn a policy parameterized by $\boldsymbol{\theta}$ that selects actions without consulting a value function, by updating $\boldsymbol{\theta}$ on each step in the direction of an estimate of the gradient of some performance measure w.r.t. $\boldsymbol{\theta}$. Such methods are called policy gradient methods. ...
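A minimal sketch of such an update, assuming a tabular softmax policy and a REINFORCE-style Monte Carlo gradient estimate (the sizes and the `reinforce_update` helper are illustrative assumptions, not code from the post):

```python
import numpy as np

n_states, n_actions = 4, 2
theta = np.zeros((n_states, n_actions))  # policy parameters
alpha, gamma = 0.01, 0.99

def pi(s):
    """Softmax policy over the action preferences theta[s]."""
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def reinforce_update(episode):
    """episode: list of (s, a, r) tuples; one Monte Carlo gradient step."""
    G = 0.0
    for t, (s, a, r) in reversed(list(enumerate(episode))):
        G = r + gamma * G                # return from time t
        grad_log_pi = -pi(s)             # ∂/∂θ[s,b] log π(a|s) = 1{b=a} − π(b|s)
        grad_log_pi[a] += 1.0
        theta[s] += alpha * (gamma ** t) * G * grad_log_pi
```

The key point is that $\boldsymbol{\theta}$ is moved along an estimate of the performance gradient; no value function is consulted when selecting actions.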
Eligibility Traces
Besides $n$-step TD methods, there is another mechanism, called eligibility traces, that unifies TD and Monte Carlo methods. Varying $\lambda$ in TD($\lambda$) from $0$ to $1$ yields a spectrum ranging from one-step TD methods, when $\lambda=0$, to Monte Carlo methods, when $\lambda=1$. ...
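A minimal backward-view TD($\lambda$) sketch with accumulating traces, under assumed sizes and step sizes (the `td_lambda_step` helper is an illustrative assumption, not code from the post); setting `lam = 0` recovers one-step TD(0), while `lam = 1` approaches a Monte Carlo update:

```python
import numpy as np

n_states = 10
V = np.zeros(n_states)
z = np.zeros(n_states)            # eligibility trace per state
alpha, gamma, lam = 0.1, 0.99, 0.9

def td_lambda_step(s, r, s_next, done):
    """One backward-view TD(lambda) update for a single transition."""
    delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
    z *= gamma * lam              # decay all traces
    z[s] += 1.0                   # accumulate trace for the visited state
    V[:] += alpha * delta * z     # credit all recently visited states
    if done:
        z[:] = 0.0                # reset traces at episode boundaries
```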