{ "cells": [
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Homework 5: TD Learning\n", "\n", "In this assignment you will implement Sarsa, Expected Sarsa, and Q-learning, and test these algorithms on the Frozen-lake environment and a CartPole environment with a discretized state space." ] },
{ "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import numpy.random as npr\n", "import matplotlib.pyplot as plt\n", "import copy\n", "from collections import defaultdict\n", "\n", "from gym.envs.toy_text.frozen_lake import FrozenLakeEnv\n", "from gym.envs.classic_control.cartpole import CartPoleEnv\n", "\n", "plt.style.use('ggplot')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### $\\epsilon$-Greedy Decay\n", "\n", "A fairly typical approach when using an $\\epsilon$-greedy exploration strategy is to anneal $\\epsilon$ from its starting value to some final value over a number of timesteps. This lets your agent explore the environment more at the beginning of training, when it knows very little about the environment, and explore less as its policy becomes closer to optimal. For the CartPole environment I recommend a final $\\epsilon$ of 0.1, and for the Frozen-lake environment I recommend a final $\\epsilon$ of 0.0. Feel free to play around with these parameters, however, as your results might differ a bit from mine." ] },
{ "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "class LinearSchedule(object):\n", "    def __init__(self, schedule_timesteps, final_p, initial_p=1.0):\n", "        '''\n", "        Linear interpolation between initial_p and final_p over\n", "        schedule_timesteps. After this many timesteps pass, final_p is\n", "        returned.\n", "        \n", "        Args:\n", "        - schedule_timesteps: Number of timesteps over which to linearly anneal initial_p to final_p\n", "        - final_p: final output value\n", "        - initial_p: initial output value\n", "        '''\n", "        self.schedule_timesteps = schedule_timesteps\n", "        self.final_p = final_p\n", "        self.initial_p = initial_p\n", "    \n", "    def value(self, t):\n", "        '''Return the schedule value after t timesteps.'''\n", "        fraction = min(float(t) / self.schedule_timesteps, 1.0)\n", "        return self.initial_p + fraction * (self.final_p - self.initial_p)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Discrete Cart Pole\n", "\n", "In order to train a tabular agent, such as Sarsa or Q-learning, on an environment with a continuous state space, we first need to discretize the state space. Generally, the finer the discretization, the better the resulting policy can be, but the longer training will take.\n", "\n", "For additional info on the Cart-pole problem see [here](https://github.com/openai/gym/wiki/CartPole-v0)." ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "class DiscreteCartPole(object):\n", "    def __init__(self):\n", "        self.env = CartPoleEnv()\n", "        self.action_space = self.env.action_space\n", "        # Number of decimal places each observation dimension is rounded to\n", "        self.round = 1\n", "    \n", "    def reset(self):\n", "        obs = self.env.reset()\n", "        return tuple(np.around(obs, self.round).tolist())\n", "    \n", "    def step(self, action):\n", "        obs, reward, done, info = self.env.step(action)\n", "        return tuple(np.around(obs, self.round).tolist()), reward, done, info" ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\n" ] } ], "source": [ "frozen_lake_env = FrozenLakeEnv(desc=None, map_name=\"4x4\", is_slippery=False)\n", "slippery_frozen_lake_env = FrozenLakeEnv(desc=None, map_name=\"4x4\", is_slippery=True)\n", "cart_pole_env = DiscreteCartPole()" ] },
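{ "cell_type": "markdown", "metadata": {}, "source": [ "Below is a quick, ungraded sanity check of the pieces defined above: it prints a few values of a linear $\\epsilon$ schedule and one discretized CartPole observation. The particular numbers (1,000 annealing steps, a final $\\epsilon$ of 0.1) are only illustrative, not a required setting." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative only: anneal epsilon from 1.0 down to 0.1 over 1000 timesteps\n", "eps_schedule = LinearSchedule(schedule_timesteps=1000, final_p=0.1, initial_p=1.0)\n", "for t in [0, 250, 500, 750, 1000, 2000]:\n", "    print('epsilon at t={}: {:.2f}'.format(t, eps_schedule.value(t)))\n", "\n", "# Each DiscreteCartPole observation is a tuple of floats rounded to 1 decimal place\n", "print('discretized initial observation: {}'.format(cart_pole_env.reset()))" ] },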
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1 (10 Points):\n", "\n", "Implement the on-policy TD control algorithm known as SARSA. You should find the optimal $\\epsilon$-greedy policy." ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def sarsa(env, num_episodes, gamma=1.0, alpha=0.1,\n", "          start_eps=0.2, final_eps=0.1, annealing_steps=1000,\n", "          max_episode_steps=200):\n", "    '''\n", "    Sarsa algorithm.\n", "    \n", "    Args:\n", "    - env: The environment to train the agent on\n", "    - num_episodes: The number of episodes to train the agent for\n", "    - gamma: The discount factor\n", "    - alpha: The stepsize\n", "    - start_eps: The initial epsilon value for e-greedy action selection\n", "    - final_eps: The final epsilon value for e-greedy action selection\n", "    - annealing_steps: The number of steps to anneal epsilon over\n", "    - max_episode_steps: The maximum number of steps an episode can take\n", "    \n", "    Returns: (Q, episode_rewards, episode_lengths)\n", "    - Q: Dictionary mapping state -> action values\n", "    - episode_rewards: Numpy array containing the reward of each episode during training\n", "    - episode_lengths: Numpy array containing the length of each episode during training\n", "    '''\n", "    Q = defaultdict(lambda: np.zeros(env.action_space.n))\n", "    episode_rewards = np.zeros(num_episodes)\n", "    episode_lengths = np.zeros(num_episodes)\n", "    \n", "    # Anneal epsilon from start_eps down to final_eps over annealing_steps timesteps\n", "    exploration = LinearSchedule(annealing_steps, final_p=final_eps, initial_p=start_eps)\n", "    \n", "    return Q, episode_rewards, episode_lengths" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Test your implementation on the Frozen-lake environment (both slippery and not slippery) and the discrete Cart-pole environment. You should plot the episode rewards over time, averaged over 50 training runs. It might be helpful to smooth this curve over a window of 100 episodes in order to get a clearer picture of the learning process." ] },
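{ "cell_type": "markdown", "metadata": {}, "source": [ "The helpers below are one possible (optional) way to produce the averaged, smoothed learning curves described above: `run_experiment` repeats a training function over several independent runs and averages the per-episode rewards, and `smooth` applies a simple moving average via `np.convolve`. The names and signatures are only a suggestion, e.g. `plt.plot(smooth(run_experiment(sarsa, frozen_lake_env)))`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def smooth(values, window=100):\n", "    '''Moving average over a trailing window (output is shorter by window - 1).'''\n", "    if len(values) < window:\n", "        return np.asarray(values)\n", "    return np.convolve(values, np.ones(window) / float(window), mode='valid')\n", "\n", "def run_experiment(algo, env, num_runs=50, num_episodes=1000, **kwargs):\n", "    '''Run `algo` (e.g. sarsa) num_runs times and average the per-episode rewards.'''\n", "    all_rewards = []\n", "    for _ in range(num_runs):\n", "        _, episode_rewards, _ = algo(env, num_episodes, **kwargs)\n", "        all_rewards.append(episode_rewards)\n", "    return np.mean(all_rewards, axis=0)" ] },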
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2 (10 Points):\n", "\n", "Implement the on-policy TD control algorithm known as expected SARSA. You should find the optimal $\\epsilon$-greedy policy." ] },
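{ "cell_type": "markdown", "metadata": {}, "source": [ "For reference (this is the standard textbook definition, included here only as a reminder): the algorithms in this assignment differ only in their bootstrap target. Sarsa updates toward the sampled next action value,\n", "\n", "$$Q(S_t, A_t) \\leftarrow Q(S_t, A_t) + \\alpha \\big[ R_{t+1} + \\gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \\big],$$\n", "\n", "while expected SARSA replaces $Q(S_{t+1}, A_{t+1})$ with its expectation under the current $\\epsilon$-greedy policy $\\pi$,\n", "\n", "$$Q(S_t, A_t) \\leftarrow Q(S_t, A_t) + \\alpha \\big[ R_{t+1} + \\gamma \\sum_a \\pi(a \\mid S_{t+1}) \\, Q(S_{t+1}, a) - Q(S_t, A_t) \\big].$$\n", "\n", "Q-learning (Exercise 3) instead bootstraps from $\\max_a Q(S_{t+1}, a)$." ] },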
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def expected_sarsa(env, num_episodes, gamma=1.0, alpha=0.1,\n", "                   start_eps=0.2, final_eps=0.1, annealing_steps=1000,\n", "                   max_episode_steps=200):\n", "    '''\n", "    Expected Sarsa algorithm.\n", "    \n", "    Args:\n", "    - env: The environment to train the agent on\n", "    - num_episodes: The number of episodes to train the agent for\n", "    - gamma: The discount factor\n", "    - alpha: The stepsize\n", "    - start_eps: The initial epsilon value for e-greedy action selection\n", "    - final_eps: The final epsilon value for e-greedy action selection\n", "    - annealing_steps: The number of steps to anneal epsilon over\n", "    - max_episode_steps: The maximum number of steps an episode can take\n", "    \n", "    Returns: (Q, episode_rewards, episode_lengths)\n", "    - Q: Dictionary mapping state -> action values\n", "    - episode_rewards: Numpy array containing the reward of each episode during training\n", "    - episode_lengths: Numpy array containing the length of each episode during training\n", "    '''\n", "    Q = defaultdict(lambda: np.zeros(env.action_space.n))\n", "    episode_rewards = np.zeros(num_episodes)\n", "    episode_lengths = np.zeros(num_episodes)\n", "    \n", "    # Anneal epsilon from start_eps down to final_eps over annealing_steps timesteps\n", "    exploration = LinearSchedule(annealing_steps, final_p=final_eps, initial_p=start_eps)\n", "    \n", "    return Q, episode_rewards, episode_lengths" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Test your implementation on the Frozen-lake environment (both slippery and not slippery) and the discrete Cart-pole environment. You should plot the episode rewards over time, averaged over 50 training runs. It might be helpful to smooth this curve over a window of 100 episodes in order to get a clearer picture of the learning process." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 3 (10 Points):\n", "\n", "Implement the off-policy TD control algorithm known as Q-learning. You should find the optimal greedy policy while following an $\\epsilon$-greedy policy." ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def q_learning(env, num_episodes, gamma=1.0, alpha=0.1,\n", "               start_eps=0.2, final_eps=0.1, annealing_steps=1000,\n", "               max_episode_steps=200):\n", "    '''\n", "    Q-learning algorithm.\n", "    \n", "    Args:\n", "    - env: The environment to train the agent on\n", "    - num_episodes: The number of episodes to train the agent for\n", "    - gamma: The discount factor\n", "    - alpha: The stepsize\n", "    - start_eps: The initial epsilon value for e-greedy action selection\n", "    - final_eps: The final epsilon value for e-greedy action selection\n", "    - annealing_steps: The number of steps to anneal epsilon over\n", "    - max_episode_steps: The maximum number of steps an episode can take\n", "    \n", "    Returns: (Q, episode_rewards, episode_lengths)\n", "    - Q: Dictionary mapping state -> action values\n", "    - episode_rewards: Numpy array containing the reward of each episode during training\n", "    - episode_lengths: Numpy array containing the length of each episode during training\n", "    '''\n", "    Q = defaultdict(lambda: np.zeros(env.action_space.n))\n", "    episode_rewards = np.zeros(num_episodes)\n", "    episode_lengths = np.zeros(num_episodes)\n", "    \n", "    # Anneal epsilon from start_eps down to final_eps over annealing_steps timesteps\n", "    exploration = LinearSchedule(annealing_steps, final_p=final_eps, initial_p=start_eps)\n", "    \n", "    return Q, episode_rewards, episode_lengths" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Test your implementation on the Frozen-lake environment (both slippery and not slippery) and the discrete Cart-pole environment. You should plot the episode rewards over time, averaged over 50 training runs. It might be helpful to smooth this curve over a window of 100 episodes in order to get a clearer picture of the learning process." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }
], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 2 }