{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 4: Dynamic Programming\n", "\n", "In this assignment you will implement the *value iteration* algorithm and apply it to the Frozen Lake and Gambler's environments. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from gym.envs.toy_text.frozen_lake import FrozenLakeEnv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Excerise 1 (10 Points):\n", "\n", "Implement value iteration for the gambler's problem, detailed below.\n", "\n", "#### Gambler's Environment\n", "\n", "This exercise uses the Gambler's Problem detailed in Example 4.3 in [SB](http://incompleteideas.net/book/the-book-2nd.html). The important details are described here:\n", "\n", "A gambler has the opportunity to make bets on the out comes of a sequence of coin flips. If the coin comes up as heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the either the gambler reaches his goal of \\$100 or he loses all his money. On each flip, the gambler must decide how much of his capital to stake.\n", "\n", "The state-value function gives the probability of winning from each state. A policy is a mapping from the amount of capital to states. The optimal policy maximizes the probablitiy of reaching the goal. Let $p_h$ denote the probability of the coin coming up heads.\n", "\n", "* **State Space:** $s \\in \\{1, 2, ..., 100\\}$\n", "* **Action Space:** $a \\in \\{0, 1, ..., \\min(s, 100-s)\\}$\n", "* **Reward:** $\\begin{cases}\n", " +1 & s = 100 \\\\\n", " 0 & otherwise\n", " \\end{cases}$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def gamblers_value_iteration(p_h, theta=1e-4, gamma=1.0):\n", " \"\"\"\n", " Args:\n", " env: OpenAI Gym environment\n", " theta: Threshold used to determine accuracy of estimation\n", " gamma: Discount factor\n", " Returns:\n", " A tuple (policy, value function)\n", " \"\"\"\n", " V = np.zeros(101)\n", " policy = np.zeros(100)\n", " \n", " return policy, V" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Show your results in terms of the greedy policy as a function of state for your calculated value function. You should break ties at random. As there are mulitple optimal solutions to this problem, your results do not have to match those in Figure 4.3 from SB." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "policy, V = gamblers_value_iteration(0.4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extra Credit: Characterize the class of possible optimal solutions for the problem setting (100 capital, 0.4p) given in the problem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Excerise 2 (10 Points):\n", "\n", "Implement value iteration for any arbritary Gym environemnt provided there is a perfect model of the environment as a MDP. In order for a OpenAI Gym environment to have this perfect model it must have nS, nA, and P as attributes.\n", "\n", "* **P:** Represents the transition probabilities of the environment. 
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def gym_value_iteration(env, theta=1e-4, gamma=1.0, max_iterations=1000):\n", "    \"\"\"\n", "    Args:\n", "        env: OpenAI Gym environment which has P, nS, and nA as attributes\n", "        theta: Threshold used to determine accuracy of estimation\n", "        gamma: Discount factor\n", "        max_iterations: Maximum number of value iterations to run\n", "    Returns:\n", "        A tuple (policy, value function)\n", "    \"\"\"\n", "    \n", "    V = np.zeros(env.nS)\n", "    policy = np.zeros(env.nS)\n", "    \n", "    return policy, V" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "#### Test your implementation on the Frozen Lake environment" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frozen_lake = FrozenLakeEnv()\n", "policy, V = gym_value_iteration(frozen_lake)\n", "\n", "expected_policy = np.array([0, 3, 3, 3, 0, 0, 0, 0, 3, 1, 0, 0, 0, 2, 1, 0])\n", "np.testing.assert_array_equal(policy, expected_policy)" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }
 ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 2 }