The differences between Q-learning, SARSA, and Expected Value SARSA, for dummies

I have finally implemented three different algorithms that are all based on Q-values. They all use a state-action value table and an ε-greedy policy when choosing the next action. The only thing that differs is how the Q-value is updated.
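For concreteness, here is a minimal sketch of that shared setup in Python, assuming a dict-based Q-table over a small discrete action set; the names (`q_table`, `actions`, `get_action`) are my own illustration, not taken from the actual implementation:

```python
import random
from collections import defaultdict

q_table = defaultdict(float)   # Q[(state, action)] -> value, defaults to 0.0
actions = [0, 1, 2, 3]         # a discrete action set, e.g. up/down/left/right
epsilon = 0.1                  # exploration rate of the ε-greedy policy

def get_action(state):
    """ε-greedy: explore with probability epsilon, otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])
```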

I have prepared a simple mapping for those who have trouble understanding the differences (a code sketch follows the list):

  • Q-learning: use the maximum value of the next state's actions as the `next_value`.
  • SARSA: sample another action from the existing (ε-greedy) policy and use its value from the mapping table as the `next_value`.
  • Expected Value SARSA: use the expected value of the next state's actions under the current policy as the `next_value`; with an ε-greedy policy this is a weighted mix of the greedy action's value and the plain average of all actions' values (the simple average only holds for a uniformly random policy).
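
Here is a minimal sketch of the three `next_value` choices plus the shared temporal-difference update, continuing the assumed setup above; `alpha` and `gamma` are assumed values for the learning rate and discount factor:

```python
alpha, gamma = 0.5, 0.99  # learning rate and discount factor (assumed values)

def next_value_q_learning(next_state):
    # Q-learning: maximum over the next state's actions (off-policy).
    return max(q_table[(next_state, a)] for a in actions)

def next_value_sarsa(next_state):
    # SARSA: sample the next action from the same ε-greedy policy and look
    # up its value; in a real loop this action is also the one executed next.
    next_action = get_action(next_state)
    return q_table[(next_state, next_action)]

def next_value_expected_sarsa(next_state):
    # Expected Value SARSA: expectation under the ε-greedy policy, i.e. a
    # mix of the greedy value and the plain average of all action values.
    values = [q_table[(next_state, a)] for a in actions]
    return (1 - epsilon) * max(values) + epsilon * sum(values) / len(values)

def update(state, action, reward, next_value):
    # The TD update shared by all three; only next_value differs.
    td_target = reward + gamma * next_value
    q_table[(state, action)] += alpha * (td_target - q_table[(state, action)])
```

In a training loop you would call, say, `update(state, action, reward, next_value_sarsa(next_state))`; only that one argument changes between the three algorithms.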