Temporal consistency loss & Ape-X DQfD

The algorithm consists of three components: the transformed Bellman operator, the temporal consistency (TC) loss, and the combination of Ape-X DQN and DQfD, used to learn a more consistent human-level policy. ...
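Assuming the post follows Pohlen et al. (2018), the transformed Bellman operator referred to here squashes the action-value targets with an invertible function $h$ (a sketch of the definitions, not quoted from the post itself):

$$(\mathcal{T}_h Q)(x, a) := \mathbb{E}_{x' \sim P(\cdot \mid x, a)}\Big[ h\Big( R(x, a) + \gamma \max_{a'} h^{-1}\big(Q(x', a')\big) \Big) \Big], \qquad h(z) = \operatorname{sign}(z)\big(\sqrt{|z| + 1} - 1\big) + \varepsilon z.$$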

March 12, 2024 · 4 min · Trung H. Nguyen

Multi-agent Deep Deterministic Policy Gradient

May 25, 2023 · 5 min · Trung H. Nguyen

Maximum Entropy Reinforcement Learning via Soft Q-learning & Soft Actor-Critic

Notes on Entropy-Regularized Reinforcement Learning via SQL & SAC ...

December 27, 2022 · 11 min · Trung H. Nguyen

Deterministic Policy Gradients

Notes on Deterministic Policy Gradient algorithms ...

December 2, 2022 · 12 min · Trung H. Nguyen

Likelihood Ratio Policy Gradient via Importance Sampling

Connection between Likelihood ratio policy gradient method and Importance sampling method. ...

May 25, 2022 · 5 min · Trung H. Nguyen

Planning & Learning

Recall that when using dynamic programming (DP) methods to solve reinforcement learning problems, we required the availability of a model of the environment, whereas with Monte Carlo methods and temporal-difference learning, a model is unnecessary. Methods that require a model, as DP does, are called model-based, while methods that work without a model are called model-free. Model-based methods primarily rely on planning; model-free methods, on the other hand, primarily rely on learning. ...
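To make the planning/learning distinction concrete, here is a minimal sketch in a tabular setting (the names `Q`, `model`, and the two update functions are hypothetical, not taken from the post): the learning update uses a real transition, while the planning update replays a simulated transition drawn from a learned model, as in Dyna-Q.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99
Q = defaultdict(float)   # action-value table keyed by (state, action)
model = {}               # learned model: (state, action) -> (reward, next_state)

def learning_update(s, a, r, s_next, actions):
    """Model-free (learning): Q-learning update from a real transition (s, a, r, s')."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    model[(s, a)] = (r, s_next)  # also record the transition for later planning

def planning_update(actions):
    """Model-based (planning): same update applied to a simulated transition from the model."""
    s, a = random.choice(list(model.keys()))
    r, s_next = model[(s, a)]
    target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```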

May 19, 2022 · 7 min · Trung H. Nguyen

Policy Gradient Theorem

So far in the series, we have been choosing actions based on the estimated action value function. Alternatively, we can learn a policy parameterized by $\boldsymbol{\theta}$ that selects actions without consulting a value function, by updating $\boldsymbol{\theta}$ on each step in the direction of an estimate of the gradient of some performance measure w.r.t. $\boldsymbol{\theta}$. Such methods are called policy gradient methods. ...
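Concretely, the update described above is a gradient-ascent step on the performance measure; with $J(\boldsymbol{\theta})$ denoting that measure and $\alpha$ a step size (standard notation, assumed here rather than quoted from the post), it reads

$$\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t + \alpha \, \widehat{\nabla J(\boldsymbol{\theta}_t)},$$

where $\widehat{\nabla J(\boldsymbol{\theta}_t)}$ is a stochastic estimate whose expectation approximates the gradient of the performance measure w.r.t. $\boldsymbol{\theta}_t$.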

May 4, 2022 · 8 min · Trung H. Nguyen