{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 3: Monte Carlo\n", "\n", "In this assignment you will implement off-policy every-visit Monte Carlo Control and off-policy every-visit Monte Carlo Control with Weighted Importance Sampling. You will apply both of these algorithms on the frozen lake and blackjack environments and visualize their performance." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import copy\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import tqdm\n", "from collections import defaultdict\n", "\n", "from envs.frozen_lake import FrozenLakeEnv\n", "from envs.blackjack import BlackjackEnv\n", "\n", "plt.style.use('ggplot')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup the Environment\n", "\n", "This assignment introduces you to two environments derived from OpenAI gym: the frozen lake environment\n", "and the blackjack environment. Frozen lake is a grid world where the agent must reach a goal state while\n", "avoiding holes. You can get the full description of frozen lake [here](https://gym.openai.com/envs/FrozenLake-v0/) or by looking at frozen lake.py in\n", "your code directory. The other environment is in blackjack.py. This implements the same version of\n", "blackjack described in Example 5.1 in [SB](http://incompleteideas.net/book/the-book-2nd.html)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "blackjack_env = BlackjackEnv()\n", "frozen_lake_env = FrozenLakeEnv(desc=None, map_name=\"4x4\",is_slippery=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1 (5 pts):\n", "\n", "Implement e-greedy action selection based on the current Q-values. Break ties between equal Q-values uniformly randomly. Remeber an action should be a number in the range: [0, num_actions]." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def eGreedyActionSelection(q_curr, eps):\n", " '''\n", " Preforms epsilon greedy action selectoin based on the Q-values.\n", " \n", " Args:\n", " q_curr: A numpy array that contains the Q-values for each action for a state.\n", " eps: The probability to select a random action. Float between 0 and 1.\n", " \n", " Returns:\n", " The selected action.\n", " '''\n", " \n", " return 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2 (10 pts):\n", "\n", "Implement the off-policy every-visit Monte Carlo update using the incremental update formula in Section 2.4 in SB. Recall that in Chapter 2 we averaged the *rewards* whereas in MC we average the *returns*. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2 (10 pts):\n", "\n", "Implement the off-policy every-visit Monte Carlo update using the incremental update formula in Section 2.4 of SB. Recall that in Chapter 2 we averaged the *rewards*, whereas in MC we average the *returns*. In the formula below, $G_n$ denotes the $n$-th observed return and $\\alpha$ is the step size.\n", "\n", "$$\n", "\\begin{align}\n", "    Q_{n+1} = Q_n + \\alpha [G_n - Q_n]\n", "\\end{align}\n", "$$" ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def updateMCValues(Q_func, episode_transitions, gamma, alpha):\n", "    '''\n", "    Updates the Q-function according to the given episode transitions.\n", "    \n", "    Args:\n", "        Q_func: A dictionary mapping state -> action values.\n", "        episode_transitions: A list of (state, action, reward) tuples describing the episode.\n", "        gamma: The discount factor.\n", "        alpha: The step size.\n", "    \n", "    Returns:\n", "        The updated Q-function.\n", "    '''\n", "    \n", "    return Q_func" ] },
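{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the update concrete, the cell below sketches one way to apply the rule with a backward pass over the episode: walk the transitions in reverse, accumulate the discounted return $G$, and move each visited $Q(s, a)$ toward $G$ by a step of size $\\alpha$. The helper name `mc_update_sketch` and the toy episode are illustrative assumptions; your graded implementation still belongs in `updateMCValues` above." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from collections import defaultdict\n", "\n", "def mc_update_sketch(Q_func, episode_transitions, gamma, alpha):\n", "    # Illustrative sketch only: every-visit incremental MC update, Q <- Q + alpha * (G - Q).\n", "    G = 0.0\n", "    for state, action, reward in reversed(episode_transitions):\n", "        # Return from this step onward: G_t = r_t + gamma * G_{t+1}.\n", "        G = reward + gamma * G\n", "        # Every-visit: update on each occurrence of (state, action).\n", "        Q_func[state][action] += alpha * (G - Q_func[state][action])\n", "    return Q_func\n", "\n", "# Tiny check on a hand-made two-step episode.\n", "Q = defaultdict(lambda: np.zeros(2))\n", "Q = mc_update_sketch(Q, [('s0', 0, 0.0), ('s1', 1, 1.0)], gamma=1.0, alpha=0.5)\n", "print(Q['s0'], Q['s1'])" ] },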
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 3 (5 pts):\n", "\n", "Add code just after the commented line, ```YOUR CODE HERE to display E-GREEDY ACTION SELECTION```, that prints the state, action, next state, reward, and done flag to\n", "the screen for each transition experienced. Display actions using the corresponding English words. For\n", "frozen lake, display the state as an integer. For blackjack, display the state as the set of cards expressed in English.\n", "Turn in these two transition sequences as your answer to this question." ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def train_mc_agent(env, num_episodes, eps=0.1, gamma=1.0, alpha=0.1, logging=True):\n", "    '''\n", "    Trains an off-policy every-visit MC agent.\n", "    \n", "    Args:\n", "        env: The environment to train the agent on.\n", "        num_episodes: The number of episodes to train the agent for.\n", "        eps: The probability of selecting a random action. Float between 0 and 1.\n", "        gamma: The discount factor.\n", "        alpha: The step size.\n", "        logging: Boolean flag which turns logging on/off.\n", "    \n", "    Returns:\n", "        A tuple: (Q_func, episode_rewards)\n", "        Q_func is a dictionary mapping state -> action values.\n", "        episode_rewards is a list containing the rewards obtained for each episode during training.\n", "    '''\n", "    \n", "    # Create the Q-function dict with default values.\n", "    init_q_value = 0.0\n", "    Q_func = defaultdict(lambda: np.ones(env.action_space.n) * init_q_value)\n", "    \n", "    episode_rewards = [0.0]\n", "    pbar = tqdm.trange(num_episodes-1) if logging else range(num_episodes-1)\n", "    for curr_episode in pbar:\n", "        episode_transitions = list()\n", "        state = env.reset()\n", "        is_done = False\n", "        while not is_done:\n", "            # Get the next action and execute it.\n", "            action = eGreedyActionSelection(Q_func[state], eps)\n", "            new_state, reward, is_done, _ = env.step(action)\n", "            episode_transitions.append((state, action, reward))\n", "            \n", "            if logging:\n", "                # **** YOUR CODE HERE to display E-GREEDY ACTION SELECTION ****\n", "                # Display experienced obs, action, new_obs, rew, done tuples.\n", "                pass\n", "\n", "            state = copy.deepcopy(new_state)\n", "\n", "        # Update the Q-function.\n", "        Q_func = updateMCValues(Q_func, episode_transitions, gamma, alpha)\n", "        \n", "        # Bookkeeping: store episode rewards to measure performance.\n", "        episode_rewards[-1] += reward\n", "        episode_rewards.append(0.0)\n", "        mean_100ep_reward = round(np.mean(episode_rewards[-101:-1]), 1)\n", "        if logging:\n", "            pbar.set_description('Mean Reward: {}'.format(mean_100ep_reward))\n", "    \n", "    return Q_func, episode_rewards" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 4 (10 pts):\n", "\n", "Calculate a learning curve averaged over 50 runs for a step-size parameter of α = 0.1. The learning curve should plot the average reward per episode as a function of episode number, starting with the first episode and going up to 50k episodes. Plot the resulting learning curve for both the frozen lake environment and the blackjack environment." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 5 (10 pts):\n", "\n", "Run your code for up to 500k episodes on the blackjack domain with a step-size parameter that enables the value function to converge. Plot the value function as a color plot with a layout similar to that shown in SB Figure 5.1. Make sure you include a color bar or some other key that indicates the values of the colors. Also plot the learned blackjack policy, showing something similar to SB Figure 5.2. It's okay if your policy is slightly different from theirs, but please explain why this is. You may need to adjust α to ensure convergence." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 6 (20 pts):\n", "\n", "Implement the function below to train an off-policy every-visit MC agent that uses weighted importance sampling. As before, it should use epsilon-greedy action selection. Feel free to reuse any code from above." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train_mc_agent_importance_sampling(env, num_episodes, eps=0.1, gamma=1.0, alpha=0.1, logging=True):\n", "    '''\n", "    Trains an off-policy every-visit MC agent with weighted importance sampling.\n", "    \n", "    Args:\n", "        env: The environment to train the agent on.\n", "        num_episodes: The number of episodes to train the agent for.\n", "        eps: The probability of selecting a random action. Float between 0 and 1.\n", "        gamma: The discount factor.\n", "        alpha: The step size.\n", "        logging: Boolean flag which turns logging on/off.\n", "    \n", "    Returns:\n", "        A tuple: (Q_func, episode_rewards)\n", "        Q_func is a dictionary mapping state -> action values.\n", "        episode_rewards is a list containing the rewards obtained for each episode during training.\n", "    '''\n", "    init_q_value = 0.0\n", "    Q_func = defaultdict(lambda: np.ones(env.action_space.n) * init_q_value)\n", "    episode_rewards = list()\n", "    \n", "    # Your implementation here!\n", "    \n", "    return Q_func, episode_rewards" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 7 (20 pts):\n", "\n", "Repeat Exercises 4 and 5 using the new MC agent with weighted importance sampling. Compare the results and explain your reasoning." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ],
"metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 2 }