D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, 1996.

D. P. Bertsekas, Dynamic programming and suboptimal control: A survey from ADP to MPC, European Journal of Control, vol.11, issue.4-5, pp.310-334, 2005.

D. P. Bertsekas, Abstract Dynamic Programming, Athena Scientific, 2018.

T. Bian and Z. Jiang, Value iteration and adaptive dynamic programming for data-driven adaptive optimal control design, Automatica, vol.71, pp.348-360, 2016.

L. Buşoniu, D. Ernst, B. De Schutter, and R. Babuška, Approximate dynamic programming with a fuzzy parameterization, Automatica, vol.46, issue.5, pp.804-814, 2010.

A. M. Farahmand, M. Ghavamzadeh, C. Szepesvári, and S. Mannor, Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems, American Control Conference, pp.725-730, 2009.

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability analysis of discrete-time finite-horizon discounted optimal control, IEEE Conference on Decision and Control, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01877140

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Finite-horizon discounted optimal control: stability and performance, submitted for journal publication, 2019.

G. Grimm, M. J. Messina, S. E. Tuna, and A. R. Teel, Model predictive control: for want of a local control Lyapunov function, all is not lost, IEEE Transactions on Automatic Control, vol.50, issue.5, pp.546-558, 2005.

A. Heydari, Theoretical and numerical analysis of approximate dynamic programming with approximation errors, Journal of Guidance, Control, and Dynamics, vol.39, issue.2, pp.301-311, 2015.

A. Heydari, Stability analysis of optimal adaptive control under value iteration using a stabilizing initial policy, IEEE Transactions on Neural Networks and Learning Systems, vol.29, issue.9, pp.4522-4527, 2018.

A. Heydari, Stability analysis of optimal adaptive control using value iteration with approximation errors, IEEE Transactions on Automatic Control, vol.63, issue.9, pp.3119-3126, 2018.

Z. Jiang, A. R. Teel, and L. Praly, Small-gain theorem for ISS systems and applications, Mathematics of Control, Signals, and Systems, vol.7, pp.95-120, 1994.

C. M. Kellett and A. R. Teel, On the robustness of KL-stability for difference inclusions: smooth discrete-time Lyapunov functions, SIAM Journal on Control and Optimization, vol.44, issue.3, pp.777-800, 2005.

H. K. Khalil, Nonlinear Systems, Prentice Hall, 2002.

D. Liu and Q. Wei, Finite-approximation-error-based optimal control approach for discrete-time nonlinear systems, IEEE Transactions on Cybernetics, vol.43, issue.2, pp.779-789, 2013.

D. Liu, Q. Wei, D. Wang, X. Yang, and H. Li, Finite approximation error-based value iteration ADP, Adaptive Dynamic Programming with Applications in Optimal Control, pp.91-149, 2017.

D. Liu, Q. Wei, D. Wang, X. Yang, and H. Li, Value iteration ADP for discrete-time nonlinear systems, Adaptive Dynamic Programming with Applications in Optimal Control, pp.37-90, 2017.

R. Munos and C. Szepesvári, Finite-time bounds for fitted value iteration, Journal of Machine Learning Research, vol.9, pp.815-857, 2008.
URL : https://hal.archives-ouvertes.fr/inria-00120882

R. Postoyan, Commande et construction d'observateurs pour les systèmes non linéaires à données échantillonnées et en réseau, PhD thesis, 2009.

R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability analysis of discrete-time infinite-horizon optimal control with discounted cost, IEEE Transactions on Automatic Control, vol.62, issue.6, pp.2736-2749, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01452929

M. Rinehart, M. Dahleh, and I. Kolmanovsky, Value iteration for (switched) homogeneous systems, IEEE Transactions on Automatic Control, vol.54, issue.6, pp.1290-1294, 2009.

B. Scherrer and B. Lesner, On the use of non-stationary policies for stationary infinite-horizon Markov decision processes, Advances in Neural Information Processing Systems (NIPS), 2012.
URL : https://hal.archives-ouvertes.fr/hal-00758809

S. P. Singh and R. C. Yee, An upper bound on the loss from approximate optimal-value functions, Machine Learning, vol.16, issue.3, pp.227-233, 1994.

A. R. Teel and L. Praly, A smooth Lyapunov function from a class-KL estimate involving two positive semidefinite functions. ESAIM: Control, Optimisation and Calculus of Variations, vol.5, pp.313-367, 2000.

Q. Wei and D. Liu, Stable iterative adaptive dynamic programming algorithm with approximation errors for discrete-time nonlinear systems, Neural Computing and Applications, vol.24, issue.6, pp.1355-1367, 2014.

Q. Wei, D. Liu, and H. Lin, Value iteration adaptive dynamic programming for optimal control of discrete-time nonlinear systems, IEEE Transactions on Cybernetics, vol.46, issue.3, pp.840-853, 2016.

R. J. Williams and L. C. Baird, Tight performance bounds on greedy policies based on imperfect value functions, Proceedings of the 8th Yale Workshop on Adaptive and Learning Systems, pp.108-113, 1994.

A. Al-Tamimi, F. L. Lewis, and M. Abu-Khalaf, Discrete-time nonlinear HJB solution using approximate dynamic programming: Convergence proof, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol.38, issue.4, pp.943-949, 2008.

B. D. Anderson and J. B. Moore, Optimal control: linear quadratic methods. Courier Corporation, 2007.

D. Antunes and W. P. Heemels, Linear quadratic regulation of switched systems using informed policies, IEEE Transactions on Automatic Control, vol.62, issue.6, pp.2675-2688, 2017.

G. I. Bara and M. Boutayeb, A new sufficient condition for the static output feedback stabilization of linear discrete-time systems, IEEE Conference on Decision and Control, pp.4723-4728, 2006.

R. Bellman, Dynamic Programming, Princeton University Press, 2010.

D. P. Bertsekas, Dynamic programming and suboptimal control: A survey from ADP to MPC, European Journal of Control, vol.11, issue.4-5, pp.310-334, 2005.

D. P. Bertsekas, Dynamic Programming and Optimal Control, Athena Scientific, vol.2, 2012.

D. P. Bertsekas, Dynamic Programming and Optimal Control, Athena Scientific, vol.1, 2012.

D. P. Bertsekas, Value and policy iterations in optimal control and adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, vol.28, issue.3, pp.500-509, 2017.

D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, 1996.

S. Bhasin, R. Kamalapurkar, M. Johnson, K. G. Vamvoudakis, F. L. Lewis, and W. E. Dixon, A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems, Automatica, vol.49, issue.1, pp.82-92, 2013.

T. Bian, Y. Jiang, and Z. Jiang, Adaptive dynamic programming and optimal control of nonlinear nonaffine systems, Automatica, vol.50, issue.10, pp.2624-2632, 2014.

C. I. Boussios, M. A. Dahleh, and J. N. Tsitsiklis, Semiglobal nonlinear stabilization via approximate policy iteration, American Control Conference, vol.6, pp.4675-4680, 2001.

L. Buşoniu, R. Munos, B. De Schutter, and R. Babuška, Optimistic planning for sparsely stochastic systems, IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp.48-55, 2011.

L. Buşoniu, J. Daafouz, M. C. Bragagnolo, and I. Morărescu, Planning for optimal control and performance certification in nonlinear systems with controlled or uncontrolled switches, Automatica, vol.78, pp.297-308, 2017.

L. Buşoniu, D. Ernst, B. De Schutter, and R. Babuška, Approximate dynamic programming with a fuzzy parameterization, Automatica, vol.46, issue.5, pp.804-814, 2010.

L. Buşoniu, R. Postoyan, and J. Daafouz, Near-optimal strategies for nonlinear and uncertain networked control systems, IEEE Transactions on Automatic Control, vol.61, issue.8, pp.2124-2139, 2016.

X. Chang and G. Yang, New results on output feedback H∞ control for linear discrete-time systems, IEEE Transactions on Automatic Control, vol.59, issue.5, pp.1355-1359, 2014.

G. Feng, E. Meyer, and Y. Liu, A new digital control algorithm to achieve optimal dynamic performance in DC-to-DC converters, IEEE Transactions on Power Electronics, vol.22, issue.4, pp.1489-1498, 2007.

C. A. Gonzaga, M. Jungers, and J. Daafouz, Stability analysis of discrete-time Lur'e systems, Automatica, vol.48, issue.9, pp.2277-2283, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00717576

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Finite-horizon discounted optimal control: stability and performance, submitted for journal publication, 2019.

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability analysis of discrete-time finite-horizon optimal control with discounted cost, IEEE Conference on Decision and Control, pp.2322-2327, 2018.

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Optimistic planning for the near-optimal control of nonlinear switched discrete-time systems with stability guarantees, IEEE Conference on Decision and Control, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02308366

G. Grimm, M. J. Messina, S. E. Tuna, and A. R. Teel, Model predictive control: for want of a local control Lyapunov function, all is not lost, IEEE Transactions on Automatic Control, vol.50, issue.5, pp.546-558, 2005.

L. Grüne and A. Rantzer, On the infinite horizon performance of receding horizon controllers, IEEE Transactions on Automatic Control, vol.53, issue.9, pp.2100-2111, 2008.

L. Grüne, E. D. Sontag, and F. R. Wirth, Asymptotic stability equals exponential stability, and ISS equals finite energy gain - if you twist your eyes, Systems & Control Letters, vol.38, issue.2, pp.127-134, 1999.

A. Heydari, Revisiting approximate dynamic programming and its convergence, IEEE Transactions on Cybernetics, vol.44, issue.12, pp.2733-2743, 2014.

A. Heydari, Theoretical and numerical analysis of approximate dynamic programming with approximation errors, Journal of Guidance, Control, and Dynamics, vol.39, issue.2, pp.301-311, 2015.

A. Heydari, Analyzing policy iteration in optimal control, IEEE American Control Conference, pp.5728-5733, 2016.

A. Heydari, Stability analysis of optimal adaptive control under value iteration using a stabilizing initial policy, IEEE Transactions on Neural Networks and Learning Systems, vol.29, issue.9, pp.4522-4527, 2018.

A. Heydari, Stability analysis of optimal adaptive control using value iteration with approximation errors, IEEE Transactions on Automatic Control, vol.63, issue.9, pp.3119-3126, 2018.

J.-F. Hren and R. Munos, Optimistic planning of deterministic systems, European Workshop on Reinforcement Learning, pp.151-164, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00830182

M. Höger and L. Grüne, On the relation between detectability and strict dissipativity for nonlinear discrete time systems, IEEE Control Systems Letters, vol.3, issue.2, pp.458-462, 2019.

S. Ibrir, Static output feedback and guaranteed cost control of a class of discrete-time nonlinear systems with partial state measurements, Nonlinear Analysis: Theory, Methods & Applications, vol.68, pp.1784-1792, 2008.

Y. Jiang and Z. Jiang, Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics, Automatica, vol.48, issue.10, pp.2699-2704, 2012.

Z. Jiang and Y. Wang, Input-to-state stability for discrete-time nonlinear systems, Automatica, vol.37, issue.6, pp.857-869, 2001.

R. E. Kalman, Contributions to the theory of optimal control, Bol. soc. mat. mexicana, vol.5, issue.2, pp.102-119, 1960.

S. Keerthi and E. Gilbert, An existence theorem for discrete-time infinite-horizon optimal control problems, IEEE Transactions on Automatic Control, vol.30, issue.9, pp.907-909, 1985.

C. M. Kellett and A. R. Teel, On the robustness of KL-stability for difference inclusions: Smooth discrete-time Lyapunov functions, SIAM Journal on Control and Optimization, vol.44, issue.3, pp.777-800, 2005.

J. B. Lasserre, D. Henrion, C. Prieur, and E. Trélat, Nonlinear optimal control via occupation measures and LMI-relaxations, SIAM Journal on Control and Optimization, vol.47, issue.4, pp.1643-1666, 2008.
URL : https://hal.archives-ouvertes.fr/hal-00136032

F. L. Lewis and D. Vrabie, Reinforcement learning and adaptive dynamic programming for feedback control, IEEE Circuits and Systems Magazine, vol.9, pp.32-50, 2009.

D. Liberzon, Switching in Systems and Control. Systems & Control: Foundations & Applications. Birkhäuser Boston, 2003.

B. Lincoln and A. Rantzer, Relaxing dynamic programming, IEEE Transactions on Automatic Control, vol.51, issue.8, pp.1249-1260, 2006.

D. Liu and Q. Wei, Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems, IEEE Transactions on Neural Networks and Learning Systems, vol.25, issue.3, pp.621-634, 2014.

D. Liu, Q. Wei, D. Wang, X. Yang, and H. Li, Finite approximation error-based value iteration ADP, Adaptive Dynamic Programming with Applications in Optimal Control, pp.91-149, 2017.

D. Liu, Q. Wei, D. Wang, X. Yang, and H. Li, Value iteration ADP for discrete-time nonlinear systems, Adaptive Dynamic Programming with Applications in Optimal Control, pp.37-90, 2017.

B. Luo, D. Liu, T. Huang, and D. Wang, Model-free optimal tracking control via critic-only Q-learning, IEEE Transactions on Neural Networks and Learning Systems, vol.27, pp.2134-2144, 2016.

D. Q. Mayne, Model predictive control: Recent developments and future promise, Automatica, vol.50, issue.12, pp.2967-2986, 2014.

J. D. Meiss, Differential Dynamical Systems (Monographs on Mathematical Modeling and Computation), Society for Industrial and Applied Mathematics, 2007.

F. Morbidi, R. Cano, and D. Lara, Minimum-energy path generation for a quadrotor UAV, IEEE International Conference on Robotics and Automation (ICRA), pp.1492-1498, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01276199

R. Munos and C. Szepesvári, Finite-time bounds for fitted value iteration, Journal of Machine Learning Research, vol.9, pp.815-857, 2008.
URL : https://hal.archives-ouvertes.fr/inria-00120882

J. J. Murray, C. J. Cox, G. G. Lendaris, and R. Saeks, Adaptive dynamic programming, IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol.32, issue.2, pp.140-153, 2002.

M. A. Müller and L. Grüne, On the relation between dissipativity and discounted dissipativity, IEEE 56th Annual Conference on Decision and Control (CDC), pp.5570-5575, 2017.

R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability analysis of discrete-time infinite-horizon optimal control with discounted cost, IEEE Transactions on Automatic Control, vol.62, issue.6, pp.2736-2749, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01452929

R. Postoyan, M. Granzotto, L. Buşoniu, B. Scherrer, D. Nešić, and J. Daafouz, Stability guarantees for nonlinear discrete-time systems controlled by approximate value iteration, IEEE Conference on Decision and Control, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02271268

C. Prieur, Uniting local and global controllers with robustness to vanishing noise, Mathematics of Control, Signals and Systems, vol.14, issue.2, pp.143-172, 2001.

C. Prieur and L. Praly, Uniting local and global controllers, IEEE Conference on Decision and Control, vol.2, pp.1214-1219, 1999.
URL : https://hal.archives-ouvertes.fr/hal-00608429

C. Prieur and A. R. Teel, Uniting local and global output feedback controllers, IEEE Transactions on Automatic Control, vol.56, issue.7, pp.1636-1649, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00608429

J. B. Rejeb, L. Buşoniu, I. Morărescu, and J. Daafouz, Near-optimal control of nonlinear switched systems with non-cooperative switching rules, American Control Conference, pp.2648-2653, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01538139

A. Richards, T. Schouwenaars, J. P. How, and E. Feron, Spacecraft trajectory planning with avoidance constraints using mixed-integer linear programming, Journal of Guidance, Control, and Dynamics, vol.25, issue.4, pp.755-764, 2002.

P. Riedinger, A switched LQ regulator design in continuous time, IEEE Transactions on Automatic Control, vol.59, issue.5, pp.1322-1328, 2014.
URL : https://hal.archives-ouvertes.fr/hal-00874920

R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, vol.317, Springer, 2009.

C. Savorgnan, J. B. Lasserre, and M. Diehl, Discrete-time stochastic optimal control via occupation measures and moment relaxations, Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference, pp.519-524, 2009.

B. Scherrer and B. Lesner, On the use of non-stationary policies for stationary infinite-horizon Markov decision processes, Advances in Neural Information Processing Systems, pp.1826-1834, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00758809

S. P. Singh and R. C. Yee, An upper bound on the loss from approximate optimal-value functions, Machine Learning, vol.16, issue.3, pp.227-233, 1994.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2017.

E. Trélat, Contrôle optimal : théorie & applications, Vuibert, 2005.

S. E. Tuna, M. J. Messina, and A. R. Teel, Shorter horizons for model predictive control, American Control Conference, pp.863-868, 2006.

K. G. Vamvoudakis and F. L. Lewis, Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem, Automatica, vol.46, issue.5, pp.878-888, 2010.

V. I. Vorotnikov, Partial stability and control, 2012.

D. Vrabie and F. Lewis, Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems, Neural Networks, vol.22, issue.3, pp.237-246, 2009.

D. Vrabie, O. Pastravanu, M. Abu-Khalaf, and F. L. Lewis, Adaptive optimal control for continuous-time linear systems based on policy iteration, Automatica, vol.45, issue.2, pp.477-484, 2009.

Q. Wei and D. Liu, Stable iterative adaptive dynamic programming algorithm with approximation errors for discrete-time nonlinear systems, Neural Computing and Applications, vol.24, issue.6, pp.1355-1367, 2014.

Q. Wei, D. Liu, and H. Lin, Value iteration adaptive dynamic programming for optimal control of discrete-time nonlinear systems, IEEE Transactions on Cybernetics, vol.46, issue.3, pp.840-853, 2016.

M. Wiering and M. van Otterlo, Reinforcement Learning: State of the Art, Springer, vol.12, 2012.

F. Zhang, H. L. Trentelman, G. Feng, and J. M. Scherpen, Absolute stabilization of Lur'e systems via dynamic output feedback, European Journal of Control, vol.44, pp.15-26, 2018.

W. Zhang, J. Hu, and A. Abate, A study of the discrete-time switched LQR problem, ECE Technical Reports, 2009.

W. Zhang, J. Hu, and A. Abate, Infinite-horizon switched LQR problems in discrete time: A suboptimal algorithm with performance analysis, IEEE Transactions on Automatic Control, vol.57, issue.7, pp.1815-1821, 2012.

In exchange, however, the optimal cost function, the central object of R. E. Bellman's dynamic programming, must be known. This is where a number of algorithms have been developed to approximate the optimal cost function. By means of a "discount" or "forgetting" factor, error bounds can be obtained between the desired optimal cost and the one obtained by numerical computation. These methods are applicable to broad classes of discrete-time nonlinear systems.
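For reference, the objects this paragraph alludes to can be written in standard dynamic-programming notation; the display below is a reconstruction in common notation, not a quotation of the thesis.

```latex
% Discounted infinite-horizon cost for x_{k+1} = f(x_k, u_k), stage cost \ell,
% and discount ("forgetting") factor \gamma \in (0, 1):
\[
  J_\gamma(x, \mathbf{u}) = \sum_{k=0}^{\infty} \gamma^k \, \ell(x_k, u_k),
  \qquad
  V_\gamma^\star(x) = \min_{\mathbf{u}} J_\gamma(x, \mathbf{u}).
\]
% Value iteration approximates V_\gamma^\star through the Bellman recursion
\[
  V_{i+1}(x) = \min_{u} \left\{ \ell(x, u) + \gamma \, V_i\bigl(f(x, u)\bigr) \right\},
  \qquad V_0 \equiv 0,
\]
% which, for bounded stage costs, yields the classical error bound
\[
  \| V_i - V_\gamma^\star \|_\infty \le \frac{\gamma^i}{1 - \gamma} \, \| \ell \|_\infty .
\]
```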

Indeed, these works focus on optimality and in most cases ignore the stability of the controlled system, which lies at the heart of control theory. A fundamental question therefore remains to be elucidated: that of stability. Stability provides analytical guarantees on the behavior of the solutions of controlled systems over time: that they converge to a desired point or set, and that they remain close to an attractor when they start near it.
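For concreteness, the stability property alluded to here is typically formalized as a KL-bound, as in the Kellett and Teel reference cited above; the notation below is standard, not a quotation of the thesis.

```latex
% KL-stability of a set \mathcal{A} for the closed loop x_{k+1} = f(x_k, u(x_k)):
% there exists \beta \in \mathcal{KL} such that every solution \phi satisfies
\[
  |\phi(k, x)|_{\mathcal{A}} \le \beta\bigl( |x|_{\mathcal{A}}, \, k \bigr)
  \qquad \text{for all } x \text{ and all } k \in \mathbb{Z}_{\ge 0},
\]
% where |x|_{\mathcal{A}} is the distance of x to \mathcal{A}: the bound encodes
% both convergence to \mathcal{A} and closeness to \mathcal{A} for solutions
% starting near it.
```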

Stability also ensures nominal robustness to unmodeled disturbances, although particular care is needed to guarantee this for discrete-time nonlinear systems.

Some authors have studied stability in this context, but their results suffer from one or several limitations, such as: 1) an initial globally stabilizing controller must be known, which can be difficult to determine; 2) the stability of a point is studied, whereas the closed-loop system may admit a more general type of attractor; 3) we want to consider more general stage costs, not necessarily quadratic, nor positive definite functions of the state and the input; 4) these results do not take the discount factor into account. The forgetting factor is itself a source of difficulties when stability is of interest. Indeed, recent works [PBND14, PBND16] demonstrate the technical difficulties caused by the presence of the forgetting factor, and show that it must be chosen sufficiently large (that is, with negligible forgetting); however, these results are not suited to the value iteration algorithm. On the face of it, the forgetting factor poses a dilemma between guaranteeing (fast) convergence and stability on the one hand, and the suboptimality guarantees of dynamic programming algorithms on the other.

Under the same assumptions as [PBND14, PBND16], I prove the stability of optimal control with discounted finite-horizon cost when the forgetting factor and the horizon are sufficiently large. These assumptions differ from those considered in dynamic programming. In fact, the discounted infinite-horizon cost functions are approximated by discounted finite-horizon cost functions.
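A minimal way to state the approximation in question, in the same standard notation as above (a reconstruction, not the thesis' exact statement):

```latex
% Discounted cost truncated at horizon N:
\[
  V_{\gamma, N}(x) = \min_{u_0, \dots, u_{N-1}} \; \sum_{k=0}^{N-1} \gamma^k \, \ell(x_k, u_k),
\]
% which coincides with the N-th value-iteration iterate started from V_0 \equiv 0,
% and converges to V_\gamma^\star as N grows; the stability guarantees discussed
% above hold when \gamma and N are both sufficiently large.
```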

Moreover, my results differ from the existing ones in dynamic programming in two respects: they (i) generalize to the undiscounted case (without a forgetting factor); (ii) apply to costs that are more common for control engineers.

Next, I extend the analysis to the more general framework of approximate value iteration, where the discounted finite-horizon cost function is itself approximated: stability is guaranteed when the approximation is sufficiently good. I finally apply my results to a simulated inverted pendulum whose input is computed by approximate value iteration; the simulation corroborates my analytical results, which hold under certain conditions verified in several examples.
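To make the setting concrete, here is a minimal sketch of value iteration with a gridded, hence approximate, value function on an inverted pendulum. The dynamics, stage cost, grids, and parameter values are illustrative assumptions, not the thesis' exact experimental setup.

```python
import numpy as np

g, m, l, dt = 9.81, 0.1, 0.5, 0.05            # pendulum parameters (assumed)
thetas = np.linspace(-np.pi, np.pi, 41)        # angle grid
omegas = np.linspace(-8.0, 8.0, 41)            # angular-velocity grid
actions = np.linspace(-2.0, 2.0, 5)            # torque levels
gamma = 0.95                                   # discount ("forgetting") factor

def step(th, om, u):
    # Euler discretization of the pendulum dynamics.
    om2 = float(np.clip(om + dt * ((g / l) * np.sin(th) + u / (m * l**2)),
                        omegas[0], omegas[-1]))
    th2 = ((th + dt * om2 + np.pi) % (2 * np.pi)) - np.pi
    return th2, om2

def cost(th, om, u):
    # Quadratic stage cost, zero only at the upright equilibrium.
    return th**2 + 0.1 * om**2 + 0.01 * u**2

def lookup(V, th, om):
    # Nearest-neighbor projection onto the grid: this is the approximation
    # step whose error the stability analysis has to tolerate.
    return V[int(np.abs(thetas - th).argmin()), int(np.abs(omegas - om).argmin())]

V = np.zeros((thetas.size, omegas.size))       # V_0 = 0
for _ in range(100):                           # approximate value iteration
    Vnew = np.empty_like(V)
    for i, th in enumerate(thetas):
        for j, om in enumerate(omegas):
            Vnew[i, j] = min(cost(th, om, u) + gamma * lookup(V, *step(th, om, u))
                             for u in actions)
    V = Vnew

def policy(th, om):
    # Greedy (near-optimal) input used in closed-loop simulation.
    return min(actions, key=lambda u: cost(th, om, u) + gamma * lookup(V, *step(th, om, u)))
```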

The journal version is currently under review.

In a second stage, I turned to the optimistic planning algorithm [LV06], since it too approximates the infinite-horizon cost function by a finite-horizon problem. By exploiting the structure of the system, the algorithm computes the finite-horizon cost function exactly while making intelligent use of the available computational budget. However, it suffers from the same restrictions as the earlier study: the optimality guarantees do not generalize to the undiscounted case and rely on cost functions that are unusual for control engineers. I therefore propose an algorithm, called OPmin, in two versions, which does not suffer from these restrictions.
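As a rough illustration of the optimistic-planning idea, here is a sketch in the spirit of optimistic planning for deterministic systems (Hren and Munos, 2008, cited above), written for discounted cost minimization with stage costs in [0, 1]. The interface and the recommendation rule are assumptions for illustration; this is not the OPmin algorithm itself.

```python
import heapq
from itertools import count

def optimistic_planning(x0, actions, f, ell, gamma, budget):
    # f: deterministic dynamics x' = f(x, u); ell: stage cost in [0, 1];
    # gamma in (0, 1); budget: number of node expansions allowed.
    tie = count()  # tie-breaker so the heap never compares raw states
    # Leaf entry: (lower bound = accumulated discounted cost, tie-breaker,
    #              accumulated cost, depth, state, first input on the path).
    leaves = [(0.0, next(tie), 0.0, 0, x0, None)]
    best_ub, best_u0 = float('inf'), None
    for _ in range(budget):
        lb, _, acc, d, x, u0 = heapq.heappop(leaves)   # most optimistic leaf
        # Upper bound through this leaf: each remaining stage costs at most 1,
        # so the tail contributes at most gamma**d / (1 - gamma).
        ub = acc + gamma**d / (1.0 - gamma)
        if u0 is not None and ub < best_ub:
            best_ub, best_u0 = ub, u0
        for u in actions:                              # expand: one child per input
            c = acc + gamma**d * ell(x, u)             # accumulated cost lower-bounds the optimum
            heapq.heappush(leaves, (c, next(tie), c, d + 1, f(x, u),
                                    u if u0 is None else u0))
    return best_u0  # first input of the best sequence found within the budget
```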

With hindsight, I was able to give a concise and intuitive interpretation of my previous results by exploiting the notion of what I call "regret". Here, the notion of regret is tied to the consequences (cost) of "regrettable" actions: the regret of an action at a state is defined as the total excess cost resulting from this possibly suboptimal input.
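One natural way to formalize this notion, reconstructed from the description above for an optimal value function V*:

```latex
% Regret of applying input u in state x and then behaving optimally:
\[
  \mathcal{R}(x, u) = \ell(x, u) + V^\star\bigl( f(x, u) \bigr) - V^\star(x) \;\ge\; 0,
\]
% i.e., the total excess cost caused by the (possibly suboptimal) input u,
% with \mathcal{R}(x, u) = 0 exactly when u is an optimal input at x.
```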

I also took part in a study on the stability analysis of systems controlled by approximate value iteration.

This thesis is supervised by Jamal Daafouz and Romain Postoyan, both members of the 'Contrôle, Identification, Diagnostic' department of CRAN (UMR 7039), and the work is carried out in collaboration with Lucian Buşoniu.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2017.

D. P. Bertsekas, Dynamic Programming and Optimal Control, Athena Scientific, vol.2, 2012.

L. Buşoniu, R. Babuška, B. De Schutter, and D. Ernst, Reinforcement Learning and Dynamic Programming Using Function Approximators, CRC Press, 2010.

S. M. LaValle, Planning Algorithms, Cambridge University Press, 2006.
URL : https://hal.archives-ouvertes.fr/hal-01993243

R. Munos, The optimistic principle applied to games, optimization and planning: towards foundations of Monte-Carlo tree search, Foundations and Trends in Machine Learning, vol.7, pp.1-130, 2014.

R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability of infinite-horizon optimal control with discounted cost, IEEE Conference on Decision and Control (CDC), 2014.
URL : https://hal.archives-ouvertes.fr/hal-01080084

R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability analysis of discrete-time infinite-horizon optimal control with discounted cost, IEEE Transactions on Automatic Control, vol.62, issue.6, pp.2736-2749, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01452929

G. Grimm, M. J. Messina, S. E. Tuna, and A. R. Teel, Model predictive control: for want of a local control Lyapunov function, all is not lost, IEEE Transactions on Automatic Control, 2005.

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Stability analysis of discrete-time finite-horizon discounted optimal control, 57th IEEE Conference on Decision and Control, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01877140

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Finite-horizon discounted optimal control: stability and performance, 2018.

M. Granzotto, R. Postoyan, L. Buşoniu, D. Nešić, and J. Daafouz, Optimistic planning for the near-optimal control of nonlinear switched discrete-time systems with stability guarantees, IEEE Conference on Decision and Control, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02308366

R. Postoyan, M. Granzotto, L. Buşoniu, B. Scherrer, D. Nešić, and J. Daafouz, Stability guarantees for nonlinear discrete-time systems controlled by approximate value iteration, IEEE Conference on Decision and Control, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02271268
