Policy Gradient Learning (ongoing)
Policy Gradient Algorithms: REINFORCE, Actor-Critic, A2C, A3C, PPO.

Policy Gradient Learning is a family of reinforcement learning algorithms that directly learns a policy mapping states to action probabilities. The learned policy approximates the optimal policy: it outputs how probable it is that the optimal policy would choose each of the actions. We then pick an action greedily or by sampling.
Another family of reinforcement learning algorithms is Value Based methods such as Q-Learning. These algorithms learn how good it is to be in a given state, or how good it is to take a certain action in a given state. Given this, they choose the action with the highest expected return.
REINFORCE
It is the simplest Policy Gradient based algorithm.
We have an objective function, the Expected Cumulative Discounted Return:

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[R(\tau)\right]$$

with $\tau = (s_0, a_0, s_1, a_1, \dots)$ being a trajectory sampled from the policy $\pi_\theta$.

We want to maximize our objective function, with $R(\tau) = \sum_{t} \gamma^t r_t$ being the total discounted return of trajectory $\tau$.

We can write this as:

$$J(\theta) = \int P(\tau \mid \theta) \, R(\tau) \, d\tau$$

Now we need to find each individual term. The probability of the trajectory $\tau$ under policy $\pi_\theta$ is:

$$P(\tau \mid \theta) = \rho(s_0) \prod_{t} \pi_\theta(a_t \mid s_t) \, P(s_{t+1} \mid s_t, a_t)$$

We now try to take the gradient of $J(\theta)$:

$$\nabla_\theta J(\theta) = \int \nabla_\theta P(\tau \mid \theta) \, R(\tau) \, d\tau \quad (*)$$
$-J(\theta)$ will be used as our loss function (we negate it in order to minimize it), and we thus need to be able to take its derivative. But the form above is intractable because it requires us to enumerate all possible trajectories, which is impossible. On top of that, it requires us to know the environment dynamics $P(s_{t+1} \mid s_t, a_t)$, which we usually don't. Even if we did know all of that, the integral would be too expensive to compute and training would be too slow.
The solution to this is the log derivative trick:

$$\nabla_\theta \log P(\tau \mid \theta) = \frac{\nabla_\theta P(\tau \mid \theta)}{P(\tau \mid \theta)}$$

And we have that:

$$\nabla_\theta P(\tau \mid \theta) = P(\tau \mid \theta) \, \nabla_\theta \log P(\tau \mid \theta)$$

Substituting back into (*) gives us:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\nabla_\theta \log P(\tau \mid \theta) \, R(\tau)\right]$$
Back to $P(\tau \mid \theta)$: using the definition we gave earlier and introducing the log we get:

$$\log P(\tau \mid \theta) = \log \rho(s_0) + \sum_t \log \pi_\theta(a_t \mid s_t) + \sum_t \log P(s_{t+1} \mid s_t, a_t)$$

Now taking the gradient gives:

$$\nabla_\theta \log P(\tau \mid \theta) = \nabla_\theta \log \rho(s_0) + \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t) + \sum_t \nabla_\theta \log P(s_{t+1} \mid s_t, a_t)$$

The initial state and environment terms both vanish as they don't depend on $\theta$:

$$\nabla_\theta \log P(\tau \mid \theta) = \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)$$

Now we substitute back into (*) again to get:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t) \, R(\tau)\right]$$
This new form of (*) is tractable: we no longer need the environment dynamics, and the expectation can be estimated by sampling trajectories instead of computing the integral.
Now we consider $L(\theta) = -\sum_t \log \pi_\theta(a_t \mid s_t) \, R(\tau)$ as a loss function and we are going to minimize it using gradient descent. In practice the full expectation is approximated with a single sample (or a small batch):

$$\nabla_\theta J(\theta) \approx \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t) \, R(\tau)$$
The default REINFORCE algorithm uses a single trajectory. This results in unbiased estimates of the gradient, but with high variance. More sophisticated variants reduce the variance by using $N$ trajectories to estimate the gradient (to replace the expectation).
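The update above can be sketched end to end. Below is a minimal REINFORCE loop on a toy 2-armed bandit (the environment, reward values, and hyperparameters are assumptions for illustration, not from these notes): the policy is a softmax over two logits, arm 1 always pays +1, arm 0 pays 0.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)   # policy parameters (logits), one per arm
alpha = 0.1           # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for episode in range(500):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)   # sample an action from pi_theta
    r = 1.0 if a == 1 else 0.0   # reward: arm 1 pays +1, arm 0 pays 0
    # gradient of log pi_theta(a) w.r.t. the logits: one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha * grad_log_pi * r   # gradient ascent on J(theta)

print(softmax(theta))   # probability of arm 1 should approach 1
```

Note that when the reward is 0 the update is a no-op: raw returns only push probabilities up, which is exactly the weakness the baseline in the next section addresses.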
High Variance
The problem with REINFORCE is that it has very high variance. This is because the returns can vary wildly: in one episode we might get lucky and collect +100, and in the next we'll be getting -50, even if we were in the same state and took the same action. We got different outcomes purely because of environment randomness.
This variance makes learning slow and unstable.
To mitigate this, we judge an action as good or bad relative to the average return we got when we were in that same state. For example, a return of, say, +50:
- on its own: we can't tell anything about it, because we don't know what we usually get from that state
- when the average from that state is +100: this is bad
- when the average is -20: this is very good

So we want to know whether this action is better or worse than the typical action in this state.
To achieve this we define a new policy gradient / loss function:

$$\nabla_\theta J(\theta) = \mathbb{E}\left[\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left(G_t - V(s_t)\right)\right]$$

with $V(s_t)$ being an estimate of the expected return from state $s_t$: the value function. This value will be learned by another neural network.
This doesn't affect our gradient and keeps it unbiased because, for any baseline $b(s)$ that doesn't depend on the action:

$$\mathbb{E}_{a \sim \pi_\theta}\left[\nabla_\theta \log \pi_\theta(a \mid s) \, b(s)\right] = b(s) \sum_a \nabla_\theta \pi_\theta(a \mid s) = b(s) \, \nabla_\theta \sum_a \pi_\theta(a \mid s) = b(s) \, \nabla_\theta 1 = 0$$

The new term we added will just vanish.
The goal of the value network is to predict the total expected cumulative discounted reward starting from the current state. We have access to the ground truth value of that at the end of each episode and for each state, since it is just the sum $G_t = \sum_{k \ge t} \gamma^{k-t} r_k$.
In order to learn $V$ we could do it in a separate phase: collect episodes with a random policy and fit $V$ to the returns in a supervised-learning way. But this isn't a good approach because the learned $V$ will quickly become outdated as the policy improves.
The better option is to train $V$ jointly with the policy. We already know the loss for the policy network. The loss for the value network is the mean squared error between its predictions and the observed returns: $L_V = \left(V(s_t) - G_t\right)^2$.
So at the end of each episode, we compute (predict) the values using the current value network. We then update it using the loss (mse_loss(values, returns)), and then, using the predicted values and the returns $G_t$, we compute the advantages $A_t = G_t - V(s_t)$, which we use to compute the loss for the policy network.
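The returns-and-advantages step described above can be sketched as follows (the reward and value numbers are made up for illustration): walk the rollout backwards accumulating $G_t = r_t + \gamma G_{t+1}$, then subtract the critic's predictions.

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0, 2.0])   # r_t collected during one episode
values = np.array([1.5, 1.0, 1.8])    # critic predictions V(s_t)

# returns-to-go, computed backwards: G_t = r_t + gamma * G_{t+1}
returns = np.zeros_like(rewards)
G = 0.0
for t in reversed(range(len(rewards))):
    G = rewards[t] + gamma * G
    returns[t] = G

advantages = returns - values   # A_t = G_t - V(s_t)
print(returns)      # approximately [2.9602, 1.98, 2.0]
print(advantages)   # approximately [1.4602, 0.98, 0.2]
```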
This reduces the variance: without the baseline, the raw returns $R(\tau)$ can take large values, and thus the variance of the gradient estimate $\nabla_\theta \log \pi_\theta(a_t \mid s_t) \, R(\tau)$ will be large. When using the advantages "trick", $A_t = G_t - V(s_t)$, the multiplier is centered near zero, which results in a smaller variance.
The value function centers the signal around zero: good actions get positive advantage, bad actions get negative advantage, regardless of whether the state itself is good or bad.
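The variance-reduction claim can be checked numerically. The sketch below (the policy and return numbers are assumed toy values) fixes a 2-action softmax policy, gives each action a deterministic return, and compares the score-function gradient estimator with and without the average return as a baseline: the mean stays the same while the variance drops.

```python
import numpy as np

rng = np.random.default_rng(1)
probs = np.array([0.5, 0.5])        # fixed 2-action policy
returns = np.array([10.0, 12.0])    # deterministic return per action (toy values)
baseline = probs @ returns          # average return under the policy = 11

def grad_sample(a, b):
    # one sample of grad log pi(a) * (return - baseline), softmax policy
    g = -probs.copy()
    g[a] += 1.0
    return g * (returns[a] - b)

no_baseline = np.array([grad_sample(rng.choice(2, p=probs), 0.0) for _ in range(10_000)])
with_baseline = np.array([grad_sample(rng.choice(2, p=probs), baseline) for _ in range(10_000)])

print(no_baseline.mean(axis=0), with_baseline.mean(axis=0))    # roughly equal: unbiased either way
print(no_baseline.var(axis=0).sum(), with_baseline.var(axis=0).sum())  # variance shrinks with baseline
```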
Episodic Case
The episodic case is when a given task has an end (a terminal state).
The objective function, the total expected discounted reward, can also be written as:

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$$
As said earlier, in practice we aren't using $\nabla_\theta J(\theta)$ directly. We are instead using an unbiased approximation $\hat{g}$ (the approximation's expectation is equal to the actual gradient).

We use an approximation because that's all we have access to: computing the true gradient would require gathering all possible trajectories / episodes ever. We only use batches, just like in supervised learning.
Actor Critic
An important distinction to make is between $G_t$ and $V^\pi(s)$.

$G_t$ is the return, the actual ground truth: the sum of discounted rewards you collected from timestep $t$ until the end of the episode in one specific trajectory, $G_t = \sum_{k \ge t} \gamma^{k-t} r_k$.

$G_t$ is a random variable. It changes every time you run an episode, even if you start from the exact same state and take the exact same first action, because the environment might be stochastic (the same action from state $s$ doesn't necessarily lead to the same outcome).

$V^\pi(s)$ is the value function, the prediction: $V^\pi(s) = \mathbb{E}_\pi\left[G_t \mid s_t = s\right]$. It is the expected (average) return if you start from state $s$ and follow policy $\pi$ forever. It is a deterministic number once the policy is fixed: the theoretical average over infinite possible futures. It doesn't change between episodes because it's an expectation, not a specific outcome.
We already defined the critic above. Now to extend: there are other ways to update the critic rather than waiting for the entire episode to end. We can update the critic after seeing just one step.
To do that we can use bootstrapping (TD learning), just like we do with DQN:

$$V(s_t) \approx r_t + \gamma V(s_{t+1})$$
So in plain English this reads: "The value of being here at state $s_t$ equals the immediate reward I get right now, plus the discounted value of wherever I end up next".
So instead of waiting for the entire episode / future (which gives us $G_t$), we can estimate the future using our current guess of $V$.
Bootstrapping means using your own current prediction to update yourself.
Without bootstrapping, to update $V(s_t)$ we need to unroll an entire episode and we would do as follows:
- Start at $s_t$
- Play out the entire episode to the end
- Calculate $G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots$
- Update: $V(s_t) \leftarrow V(s_t) + \alpha \left(G_t - V(s_t)\right)$
With bootstrapping (TD learning):
- Start at $s_t$
- Take one step (action $a_t$), get a reward $r_t$ and land in $s_{t+1}$ after observing the environment once, applying $a_t$
- Look up your current estimate $V(s_{t+1})$ (even though it's probably wrong)
- Calculate the target: $r_t + \gamma V(s_{t+1})$
- Update immediately: $V(s_t) \leftarrow V(s_t) + \alpha \left(r_t + \gamma V(s_{t+1}) - V(s_t)\right)$
The target is an approximation of $G_t$ that we are trying to make our critic network learn. We compute it by basically taking the current reward and adding the discounted value estimate of the next state.
The difference $\delta = r_t + \gamma V(s_{t+1}) - V(s_t)$ represents the error / surprise: how wrong our prediction was. Even though we are mainly using the model's own output, there is still some signal in the target which comes from the environment, namely $r_t$. With time it'll converge.
The two ways of updating the critic network are called:
- Monte Carlo: when we only update at the end of the entire episode (at each step we use the critic to predict the returns and then we just compare the predicted returns with the actual ground truth returns we got at the end of the episode)
- TD (Temporal Difference): when we update at each step
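The TD update can be demonstrated on a tiny example. A minimal sketch, assuming a 3-state deterministic chain (the environment and hyperparameters are made up for illustration): reward 1 per transition and gamma = 0.9, so the true values are V(s1) = 1 and V(s0) = 1 + 0.9 * 1 = 1.9, and the bootstrapped updates converge to them.

```python
import numpy as np

gamma, alpha = 0.9, 0.1
V = np.zeros(3)   # tabular value estimates; V[2] is terminal and stays 0

for _ in range(1000):          # many short episodes
    for s in (0, 1):           # walk the chain: s0 -> s1 -> s2
        r, s_next = 1.0, s + 1
        target = r + gamma * V[s_next]     # bootstrapped TD target
        V[s] += alpha * (target - V[s])    # update immediately, mid-episode

print(V)   # should be close to [1.9, 1.0, 0.0]
```

Note how $V(s_0)$ is updated using $V(s_1)$ while $V(s_1)$ is itself still wrong; the reward signal flowing in from the environment is what eventually pulls both to the correct values.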
A2C (Advantage Actor-Critic)
A2C is what you end up with when you implement Actor-Critic following the math. The setup is as follows:
- One agent exploring the environment
- One actor network, one critic network
- Collect a batch of experience (either one step, $N$ steps, or a full episode)
- Compute gradients, update the network and repeat
The Advantage part is the component added to stabilize learning and reduce the variance:

$$A(s_t, a_t) = G_t - V(s_t)$$
This is called Advantage because:
- If $A > 0$, the action was better than average (advantageous). Then we increase its probability.
- If $A < 0$, the action was worse than average (disadvantageous). Decrease its probability.
A3C (Asynchronous Advantage Actor-Critic)
It was introduced by DeepMind in 2016. It solves the same problem as A2C but uses parallelism and asynchrony. The issue with A2C which is solved by A3C is that in A2C there is a single process: only one agent explores the environment. If the environment is slow (such as in Atari games), the GPU will sit idle while the CPU runs the game. Learning will be bottlenecked by the environment speed.
A3C solves this by having multiple agents exploring in parallel on different CPU threads, each with their own copy of the environment, and having them asynchronously push updates to shared global networks.
The setup is as follows:
- One shared Actor and one shared Critic
- N workers (parallel agents), each with:
  - Their own copy of the environment
  - Local copies of the Actor and Critic
  - Their own exploration trajectory
There is no synchronization: multiple workers might read the global weights, compute gradients and write back at the same time. This works because the noise it generates is actually helpful: it prevents getting stuck in local minima.
PPO (Proximal Policy Optimization)
PPO is just Actor-Critic that doesn't break things. It is the same core idea but with a safety guardrail that prevents the policy from changing too dramatically in one update.
In vanilla Actor-Critic we minimize:

$$L(\theta) = -\log \pi_\theta(a_t \mid s_t) \, A_t$$

The danger is that if the gradient says to increase the probability of a specific action (e.g. going left) a lot, we might go from 50% chance left, 50% chance right to 99% chance left in a single update.
This is problematic because:
- The action might actually be bad and we just got lucky once. Now we're locked into doing it forever because the probability is 99%.
- Exploration will die as we might never try other actions again even though they might be better.
- The advantage estimates become wrong. Our critic learned $V$ under the old policy. If the policy changes drastically, all our value estimates are garbage now.
We prefer to have smooth controlled updates. To achieve this, PPO asks a different question than vanilla Actor Critic.
- Vanilla Actor Critic asks: How can I change my policy to make this good action more likely?
- PPO asks: How can I change my policy to make this good action more likely BUT NOT TOO MUCH MORE LIKELY.
The "how much we've changed" is measured using a probability ratio:

$$r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$$

- $r_t(\theta) = 1$ means the new and old policy are the same
- $r_t(\theta) = 2$ means the new policy is twice as likely to take this action as the old one
- $r_t(\theta) = 0.5$ means the new policy is half as likely to take this action (we're suppressing it)
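For reference, the guardrail PPO builds on this ratio is the clipped surrogate objective from the PPO paper, $L = -\min\left(r_t(\theta) A_t,\ \text{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) A_t\right)$. A minimal sketch below ($\epsilon = 0.2$ is the paper's default; the function name is made up):

```python
import numpy as np

EPS = 0.2   # PPO clipping range, the commonly used default

def ppo_clip_loss(logp_new, logp_old, advantage):
    # probability ratio r = pi_new(a|s) / pi_old(a|s), computed from log-probs
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - EPS, 1.0 + EPS) * advantage
    # take the pessimistic (smaller) objective, negate it to get a loss
    return -min(unclipped, clipped)

# Once the ratio passes 1 + EPS, the objective stops rewarding further increases:
print(ppo_clip_loss(np.log(0.9), np.log(0.5), 1.0))   # ratio 1.8, clipped to 1.2
```

Because the clipped term is constant in $\theta$ outside the $[1-\epsilon, 1+\epsilon]$ band, the gradient there is zero: the policy simply cannot be pushed much further than the band in a single update.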
Additional Notes and Concepts
Policy Gradient Loss is defined as $L = -\log \pi_\theta(a_t \mid s_t) \, A_t$, with $A_t$ being the advantage. It estimates the relative value of the selected action in the current state. $A_t$ is equal to the discounted sum of rewards minus a baseline estimate we get using some critic network or any other way. The baseline is usually a value-function neural network that is updated at each step, in a supervised-learning way.
What this loss does: if the advantage is positive (the action was better than the average return), we increase the probability of selecting that action again when we encounter the same state in the future, and we decrease it when the advantage is negative.
When using policy gradient methods (REINFORCE, A2C, PPO, etc.), which are on-policy methods, we cannot reuse the same batch. This is because the derivatives assume our data was generated by the current version of the policy.
Vanilla REINFORCE with large step size is very unstable. Taking a big step will dramatically change the policy and this makes the advantage estimates that got computed with the old policy unreliable and not accurate (they don't reflect the current policy).
Monte Carlo methods also have very high variance.
PPO was built on the "Trust Region Policy Optimization" (TRPO) paper.
TODOs and Future Topics
- Add drawbacks, limitations, pros, cons
- Add some analysis of the algo, when to use it and when not to
- Interpretations of how the loss will behave
- What comes next: PPO, GRPO, etc.
- Proof that REINFORCE results in unbiased estimates of the gradient
- Give the example from StatQuest, where you use your own existing data to recreate a sub-dataset and get a better estimation or something like that. Bootstrapping is also used in tree-based machine learning I believe.
- Theoretical proof that it'll converge (TD learning convergence)
- Soft Actor Critic
- PPO (handles continuous action spaces apparently)
- Rainbow DQN
- Try to learn about some model-based reinforcement learning
- Try to learn about Reinforcement Learning from Demonstration (RLfD) / Imitation Learning + RL
- Model-based RL, Deep MPC, etc.
- Add coding part for the REINFORCE algorithm
- TRPO