So, I’ve been very active these past two days and managed to implement the SARSA algorithm for solving Taxi-v2.

SARSA and Q-learning look mostly the same, except that in Q-learning we update our Q-function by assuming we take the action `a` that maximises the Q-value of the next state.

In SARSA, we use the same policy that generated the previous action `a` to pick the next action, `a'`, which we then plug into our Q-function update.

It might all sound complicated, but it amounts to a very small change to the Q-learning algorithm. You can compare my implementations of SARSA and Q-learning to see the difference.
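To make the difference concrete, here is a minimal sketch of the two update rules on a plain dictionary Q-table. This is not my repository's code; the function names and hyperparameters (`alpha`, `gamma`, `epsilon`) are my own illustrative choices, though the updates themselves are the standard ones.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    # With probability epsilon explore; otherwise pick the greedy action.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.99):
    # Off-policy: bootstrap from the *best* action in the next state,
    # regardless of which action the behaviour policy will actually take.
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.99):
    # On-policy: bootstrap from the action a' the policy actually chose.
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
```

The only difference is the bootstrap target: Q-learning takes a `max` over next actions, while SARSA uses the Q-value of the action `a'` that the same epsilon-greedy policy actually selected.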

I managed to reach the same score as with Q-learning in about the same time. I guess the Taxi-v2 problem is solved once and for all.

You should also be able to launch Taxi-v2 using the code from my GitHub repository.