memento-value-function-approximation

Differences

This shows you the differences between two versions of the page.

memento-value-function-approximation [2025/10/18 11:17]
66.249.74.131 old revision restored (2025/08/28 22:22)
memento-value-function-approximation [2025/10/19 23:11] (current)
216.73.216.169 old revision restored (2025/10/19 07:57)
Line 9:
    * ...
  
-===Gradient Algorithm===
+===Gradient Descent===
  
 With J(w), a differentiable function of the parameter w (w being a vector containing all the state values).
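As a reminder, gradient descent adjusts w in the direction of steepest descent of J(w), with a step-size parameter α:

\[
\nabla_w J(w) = \left( \frac{\partial J(w)}{\partial w_1}, \ldots, \frac{\partial J(w)}{\partial w_n} \right)^{\top},
\qquad
\Delta w = -\alpha \, \nabla_w J(w)
\]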
Line 39:
  
 An algorithm that finds the parameter w that minimizes the sum of squared errors between the approximation function and the target value.
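A minimal sketch of that least-squares fit, assuming a linear approximator v(s, w) = x(s) · w (the feature vectors and target values below are made-up placeholders):

<code python>
import numpy as np

# Placeholder data: one feature vector x(s) per state and one target value per state.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
targets = np.array([0.5, 1.0, 1.5])

w = np.zeros(2)      # parameter vector to learn
alpha = 0.1          # step size

for _ in range(500):
    v_hat = X @ w                 # current value estimates
    error = targets - v_hat       # per-state errors
    grad = -2.0 * X.T @ error     # gradient of the sum of squared errors w.r.t. w
    w -= alpha * grad             # gradient descent step

print(w)   # parameters minimizing the sum of squared errors
</code>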
- 
-===Stochastic Gradient Descent with Experience Replay=== 
- 
-Experience given in the form of <State, Value> pairs.
-(See slide 37 for more details.)
- 
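A minimal sketch of this, reusing a linear value function and treating a hypothetical list of <state features, value> pairs as the stored experience:

<code python>
import numpy as np

# Hypothetical experience: <state feature vector, target value> pairs.
experience = [
    (np.array([1.0, 0.0]), 0.5),
    (np.array([0.0, 1.0]), 1.0),
    (np.array([1.0, 1.0]), 1.5),
]

w = np.zeros(2)                  # parameters of a linear value function v(s) = x(s) . w
alpha = 0.05                     # step size
rng = np.random.default_rng(0)

for _ in range(5000):
    # Sample a <State, Value> pair from the stored experience
    x, target = experience[rng.integers(len(experience))]
    # One stochastic gradient descent step on the squared error (target - x . w)^2
    error = target - x @ w
    w += alpha * error * x

print(w)   # approaches the least-squares fit to the stored pairs
</code>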
-===Experience Replay in Deep Q-Network=== 
- 
-   * DQN uses experience replay.
-   * Chooses actions according to a greedy policy.
-   * Stores transitions in replay memory.
-   * Optimizes the MSE (mean squared error) between the Q-network and the Q-learning targets.
-   * Uses a variant of stochastic gradient descent (a minimal sketch follows below).
- 
- 
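A minimal sketch of the loop described in the list above, using a linear Q-function on a made-up toy environment instead of a deep network; the environment, features and constants are placeholders, and exploration uses a small ε on top of the greedy choice (a common variant):

<code python>
import random
from collections import deque

import numpy as np

N_STATES, N_ACTIONS = 4, 2           # hypothetical toy problem size
GAMMA, ALPHA, EPSILON = 0.9, 0.05, 0.1

def features(s, a):
    """One-hot features for a (state, action) pair; stands in for a deep network's input."""
    x = np.zeros(N_STATES * N_ACTIONS)
    x[s * N_ACTIONS + a] = 1.0
    return x

def q(w, s, a):
    return features(s, a) @ w        # linear Q-value estimate

def env_step(s, a):
    """Made-up environment dynamics, purely illustrative."""
    s_next = (s + a + 1) % N_STATES
    reward = 1.0 if s_next == 0 else 0.0
    return s_next, reward

w = np.zeros(N_STATES * N_ACTIONS)   # "Q-network" parameters
w_target = w.copy()                  # frozen copy used for the Q-learning targets
replay = deque(maxlen=1000)          # replay memory
rng = random.Random(0)

s = 0
for t in range(5000):
    # Act (mostly) greedily with respect to the current Q estimate
    if rng.random() < EPSILON:
        a = rng.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda a_: q(w, s, a_))
    s_next, r = env_step(s, a)
    replay.append((s, a, r, s_next))             # store the transition in replay memory

    # Sample a stored transition and take one SGD step on the squared TD error (MSE)
    s_i, a_i, r_i, s_n = rng.choice(replay)
    target = r_i + GAMMA * max(q(w_target, s_n, b) for b in range(N_ACTIONS))
    td_error = target - q(w, s_i, a_i)
    w += ALPHA * td_error * features(s_i, a_i)   # gradient step for the linear approximator

    if t % 100 == 0:
        w_target = w.copy()                      # periodically refresh the target parameters
    s = s_next
</code>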
  
  