Differences between Q-learning, SARSA and Expected Value SARSA for dummies

I have finally implemented three different algorithms that are all based on Q-values. They all use a state-action value table and an ε-greedy policy when choosing the next action. The only thing that differs is how the Q-value is updated.

I have prepared a simple mapping for those who have trouble understanding the differences (a short code sketch follows the list):

  • Q-learning: use the maximum Q-value over the next state's actions as `next_value`.
  • SARSA: generate the next action with the same ε-greedy policy and use its Q-value from the table as `next_value`.
  • Expected Value SARSA: use the expected Q-value of the next state's actions under the policy as `next_value` (with a uniform policy this is just the average over all actions).
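Here is a minimal sketch of the three `next_value` computations, assuming `Q` is a states × actions matrix of `Float64` and `ε` is the exploration rate of the ε-greedy policy (names and helpers here are for illustration, not my exact implementation):

```julia
# ε-greedy action selection over the Q-values of one state
function epsilon_greedy(Q::AbstractMatrix, s::Int, ε::Float64)
    if rand() < ε
        return rand(1:size(Q, 2))          # explore: pick a random action
    else
        return argmax(Q[s, :])             # exploit: pick the best-known action
    end
end

# Q-learning: maximum Q-value over the next state's actions
next_value_qlearning(Q, s′) = maximum(Q[s′, :])

# SARSA: Q-value of the next action actually chosen by the same ε-greedy policy
function next_value_sarsa(Q, s′, ε)
    a′ = epsilon_greedy(Q, s′, ε)
    return Q[s′, a′]
end

# Expected Value SARSA: expected Q-value of the next state's actions
# under the ε-greedy policy (reduces to the plain average for a uniform policy)
function next_value_expected(Q, s′, ε)
    n = size(Q, 2)
    probs = fill(ε / n, n)                 # exploration mass spread over all actions
    probs[argmax(Q[s′, :])] += 1 - ε       # extra mass on the greedy action
    return sum(probs .* Q[s′, :])
end
```

In all three cases the table update itself is identical, something like `Q[s, a] += α * (r + γ * next_value - Q[s, a])`; only `next_value` changes.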

Julia: Solving OpenAI Taxi-v2 using the SARSA algorithm

So, I’ve been very active these two days and managed to implement the SARSA algorithm for solving Taxi-v2.

Q-learning and SARSA look mostly the same, except that in Q-learning we update our Q-function by assuming we take the action `a` that maximises the next state's Q-function.

In SARSA, we use the same policy that generated the previous action `a` to generate the next action, `a'` (a-prime), and it is this action's Q-value that goes into the update.
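To make the "same policy generates the next action" point concrete, here is a rough sketch of one SARSA episode. The environment interface (`reset!(env)` returning a state and `step!(env, a)` returning `(s′, r, done)`) is hypothetical and only for illustration; it reuses the `epsilon_greedy` helper sketched earlier.

```julia
# Rough sketch of one SARSA episode; reset!/step! stand in for a
# hypothetical environment API, epsilon_greedy is the helper from above.
function run_sarsa_episode!(Q, env; α = 0.1, γ = 0.99, ε = 0.1)
    s = reset!(env)                         # initial state
    a = epsilon_greedy(Q, s, ε)             # first action from the ε-greedy policy
    done = false
    while !done
        s′, r, done = step!(env, a)         # take the action, observe reward and next state
        a′ = epsilon_greedy(Q, s′, ε)       # next action from the SAME policy
        # SARSA update: bootstraps on Q[s′, a′] of the action we will actually take;
        # swapping this term for maximum(Q[s′, :]) turns it into Q-learning
        Q[s, a] += α * (r + γ * Q[s′, a′] - Q[s, a])
        s, a = s′, a′
    end
    return Q
end
```

With a zero-initialised Q-table the terminal state contributes nothing to the bootstrap term, which keeps the sketch short; the single `Q[s′, a′]` term is really the whole difference from Q-learning.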

It might all sound very complicated, but it boils down to a very small change to the Q-learning algorithm. You can compare my implementations of SARSA and Q-learning to see the difference.

I have managed to reach the same score as with Q-learning in about the same time. I guess the Taxi-v2 problem is solved once and for all.

You should also be able to launch Taxi-v2 using the code from my GitHub repository.
