{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 6: Deep Q-Networks\n", "\n", "In this assignment you will implement deep q-learning and test this algorithm on the Frozen-lake environment. This assigment asks you to use Tensorflow to implement a DQN, if you are unframiliar for Tensorflow I suggest you take a look [here](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/01_Simple_Linear_Model.ipynb) to get a general idea of how it works. After that it might be helpful to look into [tf.keras](https://www.tensorflow.org/guide/keras) which makes it easier to define networks." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import os\n", "import numpy as np\n", "import numpy.random as npr\n", "import random\n", "import matplotlib.pyplot as plt\n", "import copy\n", "from collections import defaultdict, namedtuple\n", "from itertools import count\n", "from more_itertools import windowed\n", "from tqdm import tqdm\n", "\n", "import tensorflow as tf\n", "import tensorflow.contrib.slim as slim\n", "\n", "from gym.envs.toy_text.frozen_lake import FrozenLakeEnv\n", "\n", "plt.style.use('ggplot')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### $\\epsilon$-Greedy Decay" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class LinearSchedule(object): \n", " def __init__(self, schedule_timesteps, final_p, initial_p=1.0): \n", " '''\n", " Linear interpolation between initial_p and final_p over \n", " schedule_timesteps. After this many timesteps pass final_p is \n", " returned. \n", " \n", " Args: \n", " - schedule_timesteps: Number of timesteps for which to linearly anneal initial_p to final_p \n", " - initial_p: initial output value \n", " -final_p: final output value \n", " ''' \n", " self.schedule_timesteps = schedule_timesteps \n", " self.final_p = final_p \n", " self.initial_p = initial_p \n", " \n", " def value(self, t): \n", " fraction = min(float(t) / self.schedule_timesteps, 1.0) \n", " return self.initial_p + fraction * (self.final_p - self.initial_p) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Replay Buffer" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward', 'done'))\n", "\n", "class ReplayMemory(object):\n", " def __init__(self, size):\n", " '''\n", " Replay buffer used to store transition experiences. Experiences will be removed in a \n", " FIFO manner after reaching maximum buffer size.\n", " \n", " Args:\n", " - size: Maximum size of the buffer.\n", " '''\n", " self.size = size\n", " self.memory = list()\n", " self.idx = 0\n", " \n", " def add(self, *args):\n", " if len(self.memory) < self.size:\n", " self.memory.append(None)\n", " self.memory[self.idx] = Transition(*args)\n", " self.idx = (self.idx + 1) % self.size\n", " \n", " def sample(self, batch_size):\n", " return random.sample(self.memory, batch_size)\n", " \n", " def __len__(self):\n", " return len(self.memory)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1 (15 Points):\n", "\n", "Implement the necessary Tensorflow operations for a Deep Q-Network. This should include the following:\n", "1. 
"1. A placeholder variable to take state input, a placeholder variable for the target Q-values, and a placeholder variable for the selected actions; see [tf.placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder). You will also probably want to transform the input for selected actions into a one-hot representation, see [tf.one_hot](https://www.tensorflow.org/api_docs/python/tf/one_hot).\n", "2. Any number of fully connected layers which take the state as input and output the Q-values for each possible action. You should only need a few small fully connected layers as the environments are relatively simple. I used 3 fully connected layers but you could probably get away with using fewer. I find the best way to create layers in Tensorflow is to use [tf.slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim) but a lot of people like to use [tf.keras](https://www.tensorflow.org/guide/keras). Feel free to use either of these APIs or just vanilla Tensorflow.\n", "3. A prediction operation which returns the index of the best action, see [tf.argmax](https://www.tensorflow.org/api_docs/python/tf/math/argmax).\n", "4. Operations to compute the loss. A common loss function used for DQNs is the [Huber loss](https://en.wikipedia.org/wiki/Huber_loss), but for simple environments it is sufficient to just use the mean squared TD error:\n", "$$\n", "\\begin{align}\n", "    MSE = \\frac{\\sum_{i=1}^n (Q_i^{target} - Q_i^{estimate})^2}{n}\n", "\\end{align}\n", "$$\n", "5. An optimizer to minimize the loss, e.g. [SGD](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer), [RMSProp](https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer), or [Adam](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer). The most commonly used optimizer these days is Adam, so you should probably use that one, but it is not required. Don't forget to create an operation which uses the optimizer to minimize the loss." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class DQN(object):\n", "    def __init__(self, state_shape, action_shape, lr=0.001):\n", "        '''\n", "        Deep Q-Network Tensorflow model.\n", "\n", "        Args:\n", "            - state_shape: Input state shape\n", "            - action_shape: Output action shape\n", "            - lr: Learning rate for the optimizer\n", "        '''\n", "\n", "        pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2 (15 Points):\n", "\n", "Implement the following method which will be used when optimizing the model. The optimize_model method should compute the target Q-values from the batch and use the optimizer op created in the previous exercise. Given a randomly sampled batch of transitions $(s_i, a_i, r_i, s'_i, d_i)$, where $d_i$ is a boolean indicating whether the episode terminated at step $i$, we can compute the target Q-value used in the update as follows:\n", "\n", "$$\n", "\\begin{align}\n", "    Q_i^{target} = \n", "    \\begin{cases}\n", "        r_i & \\text{if $d_i = True$} \\\\\n", "        r_i + \\gamma \\underset{a'}{\\max} \\hat{Q}(s'_i, a'; \\theta) & \\text{otherwise}\n", "    \\end{cases}\n", "\\end{align}\n", "$$\n", "\n", "**Note:** We are using a target network, $\\hat{Q}$, to compute the target Q-values in order to stabilize training."
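, "\n", "As a rough sketch of how this target computation could fit together (not a required implementation), the snippet below assumes the `DQN` class from Exercise 1 exposes hypothetical attributes named `state_input`, `action_input`, `q_target`, `q_values`, and `train_op`, and that states are fed in whatever shape your state placeholder expects; your own attribute names may differ:\n", "\n", "```python\n", "# Hypothetical sketch -- the attribute names on policy_net/target_net are assumptions.\n", "states, actions, next_states, rewards, dones = map(np.array, zip(*batch))\n", "\n", "# Q-values of the next states according to the (frozen) target network.\n", "next_q = session.run(target_net.q_values, {target_net.state_input: next_states})\n", "\n", "# r_i if the episode terminated, otherwise r_i + gamma * max_a' Q_hat(s'_i, a')\n", "targets = rewards + gamma * np.max(next_q, axis=1) * (1.0 - dones.astype(np.float32))\n", "\n", "# One gradient step on the policy network towards the targets.\n", "session.run(policy_net.train_op, {policy_net.state_input: states,\n", "                                  policy_net.action_input: actions,\n", "                                  policy_net.q_target: targets})\n", "```"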
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def optimize_model(session, policy_net, target_net, batch, gamma):\n", " '''\n", " Calculates the target Q-values for the given batch and uses them to update the model.\n", " \n", " Args:\n", " - session: Tensorflow session\n", " - policy_net: Policy DQN model\n", " - target_net: DQN model used to generate target Q-values\n", " - batch: Batch of experiences uesd to optimize model\n", " - gamma: Discount factor\n", " '''\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These methods are used to set the target network weights to the policy network weights. You can do this by either copying the policy networks weights over to the target network every n timesteps or by updating the target network weights a small amount at each timestep. The latter of which tends to be more stable and is therefor used below. `update_target_graph_op` creates the various `tf.assign` operations and `update_target` calls these operations. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def update_target_graph_op(tf_vars, tau):\n", " '''\n", " Creates a Tensorflow op which updates the target model towards the policy model by a small amount.\n", " \n", " Args:\n", " - tf_vars: All trainable variables in the Tensorflow graph\n", " - tau: Amount to update the target model\n", " '''\n", " total_vars = len(tf_vars)\n", " update_ops = list()\n", " for idx,var in enumerate(tf_vars[0:total_vars//2]):\n", " op = tf_vars[idx + total_vars//2].assign((var.value()*tau) + \\\n", " ((1-tau)*tf_vars[idx+total_vars//2].value()))\n", " update_ops.append(op)\n", " return update_ops\n", "\n", "def update_target(session, update_ops):\n", " '''\n", " Calls each update op to update the target model.\n", " \n", " Args:\n", " - session: Tensorflow session\n", " - update_ops: The update ops which moves the target model towards the policy model\n", " '''\n", " for op in update_ops:\n", " session.run(op)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 3 (20 Points):\n", "\n", "Implement the below method to train the model in the given environment. You should choose actions in a $\\epsilon$-greedy fashion while annealing $\\epsilon$ over time as we did in the previous assignment." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def train(env, num_episodes=500, gamma=0.99, batch_size=64,\n", " annealing_steps=1000, s_epsilon=1.0, f_epsilon=0.1):\n", " '''\n", " DQN algorithm\n", " \n", " Args:\n", " - env: The environment to train the agent on\n", " - num_episodes: The number of episodes to train the agent for\n", " - gamma: The discount factor\n", " - batch_size: Number of experiences in a batch\n", " - annealing_steps: The number of steps to anneal epsilon over\n", " - s_epsilon: The initial epsilon value for e-greedy action selection\n", " - f_epsilon: The final epsilon value for the e-greedy action selection\n", " \n", " Returns: (policy_net, episode_rewards)\n", " - policy_net: Trained DQN model\n", " - episode_rewards: Numpy array containing the reward of each episode during training\n", " '''\n", " policy_net = DQN(1, env.action_space.n)\n", " target_net = DQN(1, env.action_space.n)\n", " target_ops = update_target_graph_op(tf.trainable_variables(), 0.1)\n", " \n", " memory = ReplayMemory(20000)\n", " epsilon = LinearSchedule(annealing_steps, f_epsilon, s_epsilon)\n", " \n", " total_steps = 0\n", " episode_rewards = list()\n", " \n", " return policy_net, episode_rewards" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Test your implentation on the Frozen-lake (both slippery and not slippery) environment. You should plot the episode rewards over time. It might be helpful to smooth this curve over a time window of 100 episodes in order to get a more clear picture of the learning process." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "frozen_lake_env = FrozenLakeEnv(desc=None, map_name=\"4x4\",is_slippery=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "slippery_frozen_lake_env = FrozenLakeEnv(desc=None, map_name=\"4x4\",is_slippery=True)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 2 }