{% extends "layout.html" %}

{% block content %}

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Study Guide: RL Core Concepts</title>
    <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
    <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
    <style>
        body {
            background-color: #ffffff;
            color: #000000;
            font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
            font-weight: normal;
            line-height: 1.8;
            margin: 0;
            padding: 20px;
        }

        .container {
            max-width: 800px;
            margin: 0 auto;
            padding: 20px;
        }

        h1, h2, h3 {
            color: #000000;
            border: none;
            font-weight: bold;
        }

        h1 {
            text-align: center;
            border-bottom: 3px solid #000;
            padding-bottom: 10px;
            margin-bottom: 30px;
            font-size: 2.5em;
        }

        h2 {
            font-size: 1.8em;
            margin-top: 40px;
            border-bottom: 1px solid #ddd;
            padding-bottom: 8px;
        }

        h3 {
            font-size: 1.3em;
            margin-top: 25px;
        }

        strong {
            font-weight: 900;
        }

        p, li {
            font-size: 1.1em;
            border-bottom: 1px solid #e0e0e0;
            padding-bottom: 10px;
            margin-bottom: 10px;
        }

        li:last-child {
            border-bottom: none;
        }

        ol {
            list-style-type: decimal;
            padding-left: 20px;
        }

        ol li {
            padding-left: 10px;
        }

        ul {
            list-style-type: none;
            padding-left: 0;
        }

        ul li::before {
            content: "•";
            color: #000;
            font-weight: bold;
            display: inline-block;
            width: 1em;
            margin-left: 0;
        }

        pre {
            background-color: #f4f4f4;
            border: 1px solid #ddd;
            border-radius: 5px;
            padding: 15px;
            white-space: pre-wrap;
            word-wrap: break-word;
            font-family: "Courier New", Courier, monospace;
            font-size: 0.95em;
            font-weight: normal;
            color: #333;
            border-bottom: none;
        }

        .story-rl {
            background-color: #f0f8ff;
            border-left: 4px solid #0c8599;
            margin: 15px 0;
            padding: 10px 15px;
            font-style: italic;
            color: #555;
            font-weight: normal;
            border-bottom: none;
        }

        .story-rl p, .story-rl li {
            border-bottom: none;
        }

        .example-rl {
            background-color: #e3fafc;
            padding: 15px;
            margin: 15px 0;
            border-radius: 5px;
            border-left: 4px solid #3bc9db;
        }

        .example-rl p, .example-rl li {
            border-bottom: none !important;
        }

        .quiz-section {
            background-color: #fafafa;
            border: 1px solid #ddd;
            border-radius: 5px;
            padding: 20px;
            margin-top: 30px;
        }

        .quiz-answers {
            background-color: #e3fafc;
            padding: 15px;
            margin-top: 15px;
            border-radius: 5px;
        }

        table {
            width: 100%;
            border-collapse: collapse;
            margin: 25px 0;
        }

        th, td {
            border: 1px solid #ddd;
            padding: 12px;
            text-align: left;
        }

        th {
            background-color: #f2f2f2;
            font-weight: bold;
        }

        @media (max-width: 768px) {
            body, .container {
                padding: 10px;
            }
            h1 { font-size: 2em; }
            h2 { font-size: 1.5em; }
            h3 { font-size: 1.2em; }
            p, li { font-size: 1em; }
            pre { font-size: 0.85em; }
            table, th, td { font-size: 0.9em; }
        }
    </style>
</head>
<body>

<div class="container">
    <h1>🤖 Study Guide: Core Concepts of Reinforcement Learning</h1>

    <h2>🔹 Introduction to RL</h2>
    <div class="story-rl">
        <p><strong>Story-style intuition: Training a Dog</strong></p>
        <p>Imagine you are training a new puppy. You don't give it a textbook on how to behave. Instead, you use a system of rewards and consequences. When the puppy sits on command, you give it a treat (a positive reward). When it chews on the furniture, you say "No!" (a negative reward). Through a process of <strong>trial-and-error</strong>, the puppy gradually learns a set of behaviors (a "policy") that maximizes the number of treats it receives over its lifetime. This is the essence of <strong>Reinforcement Learning (RL)</strong>. It's about learning what to do—how to map situations to actions—so as to maximize a numerical reward signal.</p>
    </div>
    <p><strong>Reinforcement Learning (RL)</strong> is a type of machine learning where an agent learns to make a sequence of decisions in an environment to achieve a long-term goal. It is fundamentally different from other learning paradigms:</p>
    <ul>
        <li><strong>vs. Supervised Learning:</strong> In supervised learning, you have a labeled dataset (the "answer key"). The model learns by comparing its predictions to the correct answers.
            <div class="example-rl"><p><strong>Example:</strong> This is like a student studying for a test with a complete set of practice questions and the correct answers. They learn by correcting their mistakes.</p></div>
        </li>
        <li><strong>vs. Unsupervised Learning:</strong> In unsupervised learning, the goal is to find hidden structure in unlabeled data. There are no right or wrong answers, just patterns.
            <div class="example-rl"><p><strong>Example:</strong> This is like a historian being given a thousand ancient, untranslated texts and trying to group them by language or topic, without any prior knowledge.</p></div>
        </li>
        <li><strong>Reinforcement Learning:</strong> The agent learns from the consequences of its actions, not from being told what to do. The feedback is a scalar reward, which is often delayed.
            <div class="example-rl"><p><strong>Example:</strong> This is like a person learning to play a video game. They don't have an answer key. They learn that certain actions lead to points (rewards) and others lead to losing a life (negative rewards), and their goal is to get the highest score possible.</p></div>
        </li>
    </ul>

    <h2>🔹 Core Components of RL</h2>
    <p>The "Training a Dog" analogy helps us define the core building blocks of any RL problem.</p>
    <ul>
        <li><strong>Agent:</strong> The learner and decision-maker. It perceives the environment and chooses actions.
            <div class="example-rl"><p><strong>Example:</strong> The puppy is the <strong>agent</strong>. In a video game, the character you control is the agent.</p></div>
        </li>
        <li><strong>Environment:</strong> Everything the agent interacts with. It represents the world or the task the agent is trying to solve.
            <div class="example-rl"><p><strong>Example:</strong> Your house, including the furniture, your commands, and the treats, is the <strong>environment</strong>. The game world, including its rules, levels, and enemies, is the environment.</p></div>
        </li>
        <li><strong>State (S):</strong> A complete description of the environment at a specific moment. It's the information the agent uses to make a decision.
            <div class="example-rl"><p><strong>Example:</strong> A <strong>state</strong> for the puppy could be a snapshot: "in the living room, toy is on the floor, owner is holding a treat." For a chess game, the state is the position of every piece on the board.</p></div>
        </li>
        <li><strong>Action (A):</strong> A choice the agent can make from a set of possibilities.
            <div class="example-rl"><p><strong>Example:</strong> In the given state, the puppy's available <strong>actions</strong> might be "sit," "bark," "run," or "chew toy."</p></div>
        </li>
        <li><strong>Reward (R):</strong> The immediate feedback signal from the environment after the agent performs an action. The agent's sole objective is to maximize the total reward it accumulates.
            <div class="example-rl"><p><strong>Example:</strong> If the puppy sits, it gets a +10 reward (a treat). If it barks, it gets a -1 reward (a stern look).</p></div>
        </li>
        <li><strong>Policy (π):</strong> The agent's strategy or "brain." It's a function that maps a state to an action. A good policy will consistently choose actions that lead to high rewards.
            <div class="example-rl"><p><strong>Example:</strong> An initial, untrained <strong>policy</strong> for the puppy is random. A final, well-trained policy is a smart set of rules: "If I see my owner holding a treat, the best action is to sit immediately."</p></div>
        </li>
        <li><strong>Value Function (V):</strong> A prediction of the total future reward an agent can expect to get, starting from a particular state. It represents the long-term desirability of a state.
            <div class="example-rl"><p><strong>Example:</strong> The puppy learns that the state "sitting by the front door in the evening" has a high <strong>value</strong>. While this state itself doesn't give an immediate reward, it often leads to a highly rewarding future state: going for a walk.</p></div>
        </li>
    </ul>
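    <p>To tie these pieces together, here is a minimal sketch in Python. It is an invented toy, not a standard library or a real training setup: the state strings, the value numbers, and the function names are made up to mirror the puppy story, while the +10 and -1 rewards are the ones from the examples above.</p>
    <pre>
# A toy, hand-written policy: a mapping from states to actions.
def puppy_policy(state):
    if state == "owner holding treat":
        return "sit"
    return "wander"

# The environment's immediate reward signal (the +10 treat and -1 stern look from above).
def reward(state, action):
    if state == "owner holding treat" and action == "sit":
        return 10
    if action == "bark":
        return -1
    return 0

# A value estimate: how much total future reward each state tends to lead to (numbers invented).
value_estimate = {"owner holding treat": 9.5, "sitting by the front door": 8.0}

state = "owner holding treat"
action = puppy_policy(state)
print(action, reward(state, action), value_estimate[state])   # sit 10 9.5
</pre>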

    <h2>🔹 The Interaction Flow (Agent–Environment Loop)</h2>
    <p>RL is a continuous loop of interaction between the agent and the environment, where each step refines the agent's understanding (a minimal code sketch follows the list).</p>

    <ol>
        <li>The agent observes the current <strong>State</strong> (\( S_t \)).</li>
        <li>Based on its <strong>Policy</strong> (π), the agent chooses an <strong>Action</strong> (\( A_t \)).</li>
        <li>The environment receives the action, transitions to a new <strong>State</strong> (\( S_{t+1} \)), and gives the agent a <strong>Reward</strong> (\( R_{t+1} \)).</li>
        <li>The agent uses this reward and new state to update its knowledge (its policy and value functions).</li>
        <li>This loop repeats, allowing the agent to learn from experience and adapt its behavior over time.</li>
    </ol>
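    <p>Here is that loop as runnable Python. The environment is a made-up, five-step toy loosely modeled on the reset()/step() pattern used by common RL toolkits; it is not a real library API, and the random policy stands in for a learned one.</p>
    <pre>
import random

# A made-up two-state environment: the episode lasts 5 steps.
class ToyEnvironment:
    def reset(self):
        self.t = 0
        self.state = "bowl empty"
        return self.state

    def step(self, action):
        self.t += 1
        if self.state == "bowl empty" and action == "sit by bowl":
            self.state, reward = "bowl full", 10   # dinner is served
        else:
            self.state, reward = "bowl empty", -1  # nothing good happens
        done = self.t >= 5
        return self.state, reward, done

def random_policy(state):
    return random.choice(["sit by bowl", "bark", "wander"])

env = ToyEnvironment()
state = env.reset()                               # step 1: observe the state
total_reward, done = 0, False
while not done:
    action = random_policy(state)                 # step 2: the policy picks an action
    next_state, reward, done = env.step(action)   # step 3: environment returns reward and next state
    total_reward += reward                        # step 4: a learning agent would update itself here
    state = next_state                            # step 5: repeat
print("return for this episode:", total_reward)
</pre>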

    <h2>🔹 Mathematical Foundations</h2>
    <div class="story-rl">
        <p>To formalize this process, mathematicians use a framework called a <strong>Markov Decision Process (MDP)</strong>. It's simply a way of writing down all the rules of the "game" the agent is playing, assuming that the future depends only on the current state and action, not on the past (the Markov Property).</p>
    </div>
    <p>An MDP is defined by a tuple: \( (S, A, P, R, \gamma) \)</p>
    <ul>
        <li>\( S \): A set of all possible states (all possible configurations of the environment).</li>
        <li>\( A \): A set of all possible actions.</li>
        <li>\( P \): The state transition probability function, \( P(s'|s, a) \). This is the "physics" of the environment.
            <div class="example-rl"><p><strong>Example:</strong> In a slippery, icy world, if a robot in state "at square A" takes the action "move North," the transition probability might be: 80% chance of ending up in the state "at square B (north of A)," 10% chance of slipping and ending up "at square C (east of A)," and 10% chance of not moving at all ("at square A").</p></div>
        </li>
        <li>\( R \): The reward function, \( R(s, a) \). This defines the goal of the problem.
            <div class="example-rl"><p><strong>Example:</strong> In a maze, the reward is -1 for every step taken (to encourage finishing quickly) and +100 for taking the action that leads to the exit state.</p></div>
        </li>
        <li>\( \gamma \): The <strong>discount factor</strong> (a number between 0 and 1). It determines the present value of future rewards.
            <div class="example-rl"><p><strong>Example:</strong> A reward of 100 you receive in two steps is worth \(100 \times \gamma^2\) to you right now. If \( \gamma = 0.9 \), that future reward is worth 81 now. If \( \gamma = 0.1 \), it's worth only 1 now. Discounting keeps the total return finite in ongoing tasks and makes the agent prioritize rewards that are closer in time.</p></div>
        </li>
    </ul>
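    <p>The quantity the agent tries to maximize is the <strong>discounted return</strong>, the weighted sum of all future rewards:</p>
    <p>\[ G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \]</p>
    <p>The short Python sketch below is only an illustration: the transition table copies the icy-world probabilities from the example above, and the reward list is invented so that the 100-point reward arrives two steps after the first reward.</p>
    <pre>
import random

# Transition probabilities P(s' | s, a) from the icy-world example above.
P = {
    ("square A", "move North"): {"square B": 0.8, "square C": 0.1, "square A": 0.1},
}

def sample_next_state(state, action):
    """Draw the next state according to P, the 'physics' of the environment."""
    outcomes = P[(state, action)]
    return random.choices(list(outcomes), weights=list(outcomes.values()))[0]

def discounted_return(rewards, gamma):
    """Sum of gamma**k * R_{t+k+1} over a list of future rewards."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

print(sample_next_state("square A", "move North"))   # usually "square B"
rewards = [0, 0, 100]                                # the 100 arrives two steps later
print(discounted_return(rewards, gamma=0.9))         # about 81  (100 * 0.9**2)
print(discounted_return(rewards, gamma=0.1))         # about 1   (100 * 0.1**2)
</pre>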

    <h2>🔹 Detailed Examples</h2>
    <h3>Chess</h3>
    <ul>
        <li><strong>Agent:</strong> The chess-playing program (e.g., AlphaZero).</li>
        <li><strong>Environment:</strong> The chessboard and the rules of chess, including the opponent's moves. The opponent is considered part of the environment because the agent cannot control their actions.</li>
        <li><strong>State:</strong> The exact position of all pieces on the board, plus whose turn it is.</li>
        <li><strong>Action:</strong> Making a legal move with one of the pieces.</li>
        <li><strong>Reward:</strong> A positive reward (+1) for winning, a negative reward (-1) for losing, and a reward of 0 for every other move until the end of the game. This is an example of a <strong>sparse reward</strong>: most moves receive no informative immediate feedback.</li>
    </ul>
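    <p>A sparse, outcome-only reward like this can be written as a tiny function. The sketch below is an illustration of the idea, not AlphaZero's actual reward code; the outcome labels are invented.</p>
    <pre>
# Sparse, terminal-only reward: zero everywhere except when the game ends.
def chess_reward(game_over, outcome):
    if not game_over:
        return 0           # no feedback during the game
    if outcome == "win":
        return 1
    if outcome == "loss":
        return -1
    return 0               # draw

print(chess_reward(False, None))    # 0
print(chess_reward(True, "win"))    # 1
</pre>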
    <h3>Self-Driving Car</h3>
    <ul>
        <li><strong>Agent:</strong> The car's control system (the AI).</li>
        <li><strong>Environment:</strong> The road, other cars, pedestrians, traffic lights, and weather conditions.</li>
        <li><strong>State:</strong> A combination of the car's current speed, position, steering angle, and processed data from its sensors (e.g., detected lane lines from the camera, distances to obstacles from LiDAR).</li>
        <li><strong>Action:</strong> Can be discrete (turn left, turn right) or continuous (adjust the steering wheel by 3.5 degrees, accelerate by 5%).</li>
        <li><strong>Reward:</strong> The reward function is carefully designed ("reward shaping") to encourage good behavior: a small positive reward for every meter it moves forward safely, a small negative reward for jerky movements, and a large negative reward for any collision or traffic violation.</li>
    </ul>
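    <p>In contrast to the sparse chess reward, a shaped driving reward gives feedback at every time step. The sketch below illustrates that contrast only; the weights and argument names are invented, not taken from any real driving system.</p>
    <pre>
# A shaped (dense) reward: some feedback arrives at every time step.
def driving_reward(meters_advanced, jerk, collided, violated_rule):
    reward = 0.1 * meters_advanced      # small bonus for safe forward progress
    reward -= 0.5 * jerk                # small penalty for jerky control
    if collided or violated_rule:
        reward -= 100.0                 # large penalty for unsafe events
    return reward

print(driving_reward(meters_advanced=10, jerk=0.2, collided=False, violated_rule=False))  # 0.9
</pre>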

    <h2>🔹 Advantages & Challenges</h2>
    <table>
        <thead>
            <tr>
                <th>Advantages of RL</th>
                <th>Challenges in RL</th>
            </tr>
        </thead>
        <tbody>
            <tr>
                <td>✅ Can solve complex problems that are difficult to program explicitly.<br><strong>Example:</strong> It's nearly impossible to write rules by hand for all situations a self-driving car might face. RL allows the car to learn these rules from experience.</td>
                <td>❌ <strong>Large State Spaces:</strong> For problems like Go, the number of possible board states is greater than the number of atoms in the observable universe, making it impossible to explore them all.</td>
            </tr>
            <tr>
                <td>✅ The agent can adapt to dynamic, changing environments.<br><strong>Example:</strong> A trading bot can adapt its strategy as market conditions change over time.</td>
                <td>❌ <strong>Sparse Rewards:</strong> In many problems, rewards are only given at the very end (like winning a game). This is the "credit assignment problem": it is hard for the agent to figure out which of its many early actions were actually responsible for the final win.</td>
            </tr>
            <tr>
                <td>✅ A very general framework that can be applied to many different fields.</td>
                <td>❌ <strong>Exploration vs. Exploitation:</strong> This is a fundamental trade-off (see the code sketch after this table).
                    <div class="example-rl"><p><strong>Example:</strong> When choosing a restaurant, do you <strong>exploit</strong> your knowledge and go to your favorite place that you know is great? Or do you <strong>explore</strong> a new restaurant that might be even better, but also risks being terrible?</p></div>
                </td>
            </tr>
        </tbody>
    </table>
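    <p>A standard way to balance the two is the <strong>epsilon-greedy</strong> rule: with a small probability \( \epsilon \) the agent explores a random option, and otherwise it exploits the option with the best current estimate. The sketch below uses made-up restaurant names and value estimates.</p>
    <pre>
import random

# Current value estimates for each option (numbers invented).
estimated_value = {"favourite bistro": 8.5, "new ramen place": 5.0, "untried cafe": 5.0}

def epsilon_greedy(values, epsilon=0.1):
    explore = random.random() > 1.0 - epsilon   # True with probability epsilon
    if explore:
        return random.choice(list(values))      # explore: try any option at random
    return max(values, key=values.get)          # exploit: the best-known option

for _ in range(5):
    print(epsilon_greedy(estimated_value))      # mostly "favourite bistro", occasionally a random try
</pre>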

    <div class="quiz-section">
        <h2>📝 Quick Quiz: Test Your Knowledge</h2>
        <ol>
            <li><strong>What is the main difference between the feedback an agent gets in Reinforcement Learning versus Supervised Learning?</strong></li>
            <li><strong>What is a "policy" in RL? Give a simple real-world analogy.</strong></li>
            <li><strong>In the MDP formulation, what does the discount factor (gamma, γ) control? What would γ = 0 mean?</strong></li>
            <li><strong>What is the "Exploration vs. Exploitation" dilemma? Provide an example from your own life.</strong></li>
        </ol>
        <div class="quiz-answers">
            <h3>Answers</h3>
            <p><strong>1.</strong> In Supervised Learning, the feedback is the "correct answer" from a labeled dataset. In Reinforcement Learning, the feedback is a scalar "reward" signal, which only tells the agent how good its action was, not what the best action would have been.</p>
            <p><strong>2.</strong> A policy is the agent's strategy for choosing an action in a given state. A simple analogy is a recipe: for a given state ("I have eggs, flour, and sugar"), the policy (recipe) tells you which action to take ("mix them together").</p>
            <p><strong>3.</strong> The discount factor controls how much the agent cares about future rewards versus immediate rewards. A γ = 0 would mean the agent is completely "myopic" or short-sighted, only caring about the immediate reward from its next action and ignoring any long-term consequences.</p>
            <p><strong>4.</strong> It's the dilemma of choosing between trying something new (exploration) to potentially find a better outcome, versus sticking with what you know works well (exploitation). An example is choosing a restaurant: do you go to your favorite restaurant that you know is great (exploitation), or do you try a new one that might be even better, or might be terrible (exploration)?</p>
        </div>
    </div>

    <h2>🔹 Key Terminology Explained</h2>
    <div class="story-rl">
        <p><strong>The Story: Decoding the Dog Trainer's Manual</strong></p>
    </div>
    <ul>
        <li>
            <strong>Policy (π):</strong>
            <br>
            <strong>What it is:</strong> The agent's brain or strategy. It's the rulebook the agent uses to decide what action to take in any given state.
            <br>
            <strong>Story Example:</strong> The puppy's final, well-trained <strong>policy</strong> is a set of rules like: "If my human is home, and it's 6 PM, and my bowl is empty, then the best action is to go sit by my bowl."
        </li>
        <li>
            <strong>Markov Decision Process (MDP):</strong>
            <br>
            <strong>What it is:</strong> The mathematical framework used to describe an RL problem. It formalizes the agent, environment, states, actions, and rewards.
            <br>
            <strong>Story Example:</strong> The <strong>MDP</strong> is the complete "rulebook of the universe" for the puppy. It contains a list of every possible room configuration (states), every possible puppy action, the rules of what happens after each action, and the rewards for each action.
        </li>
        <li>
            <strong>Discount Factor (γ):</strong>
            <br>
            <strong>What it is:</strong> A number between 0 and 1 that represents the importance of future rewards.
            <br>
            <strong>Story Example:</strong> A puppy with a high <strong>discount factor</strong> is patient. It's willing to perform a series of less-rewarding actions (like "come," "heel," "stay") because it knows it will lead to a very big treat at the end. A puppy with a low discount factor is impatient and will always choose the action that gets it a small treat <em>right now</em>.
        </li>
    </ul>
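    <p>The patient-versus-impatient contrast can be checked with a few lines of arithmetic. The two reward sequences below are invented: a long obedience routine that ends in a big treat, and a small treat available immediately.</p>
    <pre>
# Two invented options for the puppy:
#   patient route:   three obedience steps worth 1 each, then a big treat worth 20
#   impatient route: a small treat worth 3 right now, then nothing
patient_route   = [1, 1, 1, 20]
impatient_route = [3, 0, 0, 0]

def discounted_return(rewards, gamma):
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

for gamma in (0.9, 0.1):
    patient = discounted_return(patient_route, gamma)
    impatient = discounted_return(impatient_route, gamma)
    better = "patient route" if patient > impatient else "small treat now"
    print(f"gamma={gamma}: the {better} looks better")
# gamma=0.9 favours the patient route (about 17.3 vs 3.0);
# gamma=0.1 favours the small treat now (about 1.1 vs 3.0).
</pre>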

</div>

</body>
</html>

{% endblock %}