AIMA: Artificial Intelligence: A Modern Approach

AIMA-exercises is an open-source community of students, instructors, and developers. Anyone can add an exercise, suggest answers to existing questions, or simply help us improve the platform. We accept contributions through this GitHub repository.

Exercise 17.6 [reward-equivalence-exercise]

Sometimes MDPs are formulated with a reward function $R(s,a)$ that depends on the action taken, or with a reward function $R(s,a,s')$ that also depends on the outcome state.
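For reference, the formulation used in the book has a reward $R(s)$ that depends only on the current state, with the Bellman equation

$$U(s) = R(s) + \gamma \max_{a \in A(s)} \sum_{s'} P(s' \mid s,a)\, U(s').$$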

  1. Write the Bellman equations for these formulations.

  2. Show how an MDP with reward function $R(s,a,s')$ can be transformed into a different MDP with reward function $R(s,a)$, such that optimal policies in the new MDP correspond exactly to optimal policies in the original MDP (a sketch of one standard approach appears after this list).

  3. Now do the same to convert MDPs with $R(s,a)$ into MDPs with $R(s)$.
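A hedged sketch for part 2 (one standard argument, not the only one): since only the expected reward of a state-action pair affects a policy's expected utility, the outcome state can be averaged out against the transition model,

$$R(s,a) = \sum_{s'} P(s' \mid s,a)\, R(s,a,s'),$$

leaving the states, actions, and transition model unchanged. Every policy then has the same expected utility in the new MDP as in the original, so the optimal policies coincide. The same averaging does not apply directly to part 3, because the agent chooses the action; the usual remedy there is to enlarge the state space (for example, with intermediate "pre-action" states) so that an action-dependent reward can be attached to a state instead.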
