
E. Klein, B. Piot, M. Geist, and O. Pietquin, « Classification structurée pour l'apprentissage par renforcement inverse », Actes de la Conférence Francophone sur l'Apprentissage Automatique (CAp), 2012.

E. Klein, B. Piot, M. Geist, and O. Pietquin, « Classification structurée pour l'apprentissage par renforcement inverse », Revue d'intelligence artificielle, vol. 27, no. 2, 2013.
DOI: 10.3166/ria.27.155-169

E. Klein, B. Piot, M. Geist, and O. Pietquin, « Apprentissage par renforcement inverse en cascadant classification et régression », Journées Francophones de Planification, Décision et Apprentissage (JFPDA), 2013. This work will also be presented in Prague as « A cascaded supervised learning approach to inverse reinforcement learning », Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2013), Prague (Czech Republic), Sept. 2013. Finally, a communication studying the links between classification and inverse reinforcement learning, gathering this work and the work on SCIRL, will take place at RLDM 2013:

M. Geist, E. Klein, B. Piot, Y. Guermeur, and O. Pietquin, « Around inverse reinforcement learning and score-based classification », Reinforcement Learning and Decision Making Meetings, 2013. The theoretical analysis of CSI is due to another PhD student of the MaLIS team.

E. Klein, M. Geist, and O. Pietquin, « Reducing the dimensionality of the reward space in the Inverse Reinforcement Learning problem », Proceedings of the IEEE Workshop on Machine Learning Algorithms, Systems and Applications, 2011.

L. Bougrain, M. Duvinage, and E. Klein, Inverse reinforcement learning to control a robotic arm using a Brain-Computer Interface, technical report, 2012.
URL: https://hal.archives-ouvertes.fr/hal-00924653

Bibliography

M. Aizerman, E. Braverman, and L. Rozoner, « Theoretical foundations of the potential function method in pattern recognition learning », Automation and Remote Control, vol. 25, pp. 821-837, 1964.

P. Abbeel, A. Coates, and A. Ng, « Autonomous Helicopter Aerobatics through Apprenticeship Learning », The International Journal of Robotics Research, vol. 29, no. 13, pp. 1608-1639, 2010.

P. Abbeel and A. Ng, « Apprenticeship learning via inverse reinforcement learning », Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04), 2004.
DOI: 10.1145/1015330.1015430
URL: http://www.aicml.cs.ualberta.ca/banff04/icml/pages/papers/335.pdf

S. Bradtke and A. Barto, « Linear least-squares algorithms for temporal difference learning », Machine Learning, vol. 22, pp. 33-57, 1996.

L. Bougrain, M. Duvinage, and E. Klein, Inverse reinforcement learning to control a robotic arm using a Brain-Computer Interface, technical report, 2012.
URL: https://hal.archives-ouvertes.fr/hal-00924653

R. Bellman, Dynamic Programming, 2003.

B. Boser, I. Guyon, and V. Vapnik, « A training algorithm for optimal margin classifiers », Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT '92), pp. 144-152, 1992.
DOI: 10.1145/130385.130401

R. Bonidal, « Analyse des systèmes discriminants multi-classes à grande marge », PhD thesis, 2013.

M. Bain and C. Sammut, « A framework for behavioural cloning », Machine Intelligence 15, p. 103, 2000.

U. Chajewska, D. Koller, and D. Ormoneit, « Learning an agent's utility function by observing behavior », International Conference on Machine Learning (ICML), pp. 35-42, 2001.

S. Chernova and M. Veloso, « Confidence-based policy learning from demonstration using Gaussian mixture models », Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '07), 2007.
DOI: 10.1145/1329125.1329407

M. Deisenroth, C. Rasmussen, and J. Peters, « Gaussian process dynamic programming », Neurocomputing, vol. 72, no. 7-9, pp. 1508-1524, 2009.
DOI: 10.1016/j.neucom.2008.12.019

K. Dvijotham and E. Todorov, « Inverse Optimal Control with Linearly-Solvable MDPs », Proceedings of the International Conference on Machine Learning (ICML), 2010.

D. Ernst, P. Geurts, and L. Wehenkel, « Tree-based batch mode reinforcement learning », Journal of Machine Learning Research, vol. 6, pp. 503-556, 2005.

J. Friedman, « Greedy function approximation: a gradient boosting machine », Annals of Statistics, vol. 29, no. 5, pp. 1189-1232, 2001.

M. Geist, B. Scherrer, A. Lazaric, and M. Ghavamzadeh, « A Dantzig Selector Approach to Temporal Difference Learning », International Conference on Machine Learning (ICML), 2012.

M. Geist, E. Klein, B. Piot, Y. Guermeur, and O. Pietquin, « Around inverse reinforcement learning and score-based classification », Reinforcement Learning and Decision Making Meetings, 2013.

G. Gordon, « Stable function approximation in dynamic programming », technical report, DTIC Document, 1995.

Y. Guermeur, « A generic model of multi-class support vector machine », International Journal of Intelligent Information and Database Systems, vol. 6, no. 6, pp. 555-577, 2012.

Z. Jin, H. Qian, and M. Zhu, « Gaussian processes in inverse reinforcement learning », 2010 International Conference on Machine Learning and Cybernetics, pp. 225-230, 2010.
DOI: 10.1109/ICMLC.2010.5581063

E. Klein, M. Geist, and O. Pietquin, « Apprentissage par imitation étendu au cas batch, off-policy et sans modèle », Sixièmes Journées Francophones de Planification, Décision et Apprentissage pour la conduite de systèmes (JFPDA), 2011.

E. Klein, M. Geist, and O. Pietquin, « Batch, Off-policy and Model-Free Apprenticeship Learning », IJCAI Workshop on Agents Learning Interactively from Human Teachers, Barcelona (Spain), July 2011.

E. Klein, M. Geist, and O. Pietquin, « Batch, Off-policy and Model-free Apprenticeship Learning », Proceedings of the European Workshop on Reinforcement Learning (EWRL), 2011.

E. Klein, M. Geist, and O. Pietquin, « Reducing the dimensionality of the reward space in the Inverse Reinforcement Learning problem », Proceedings of the IEEE Workshop on Machine Learning Algorithms, Systems and Applications, 2011.

E. Klein, B. Piot, M. Geist, and O. Pietquin, « Classification structurée pour l'apprentissage par renforcement inverse », Actes de la Conférence Francophone sur l'Apprentissage Automatique (CAp), 2012.

E. Klein, B. Piot, M. Geist, and O. Pietquin, « A cascaded supervised learning approach to inverse reinforcement learning », Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2013), Prague (Czech Republic), Sept. 2013.

E. Klein, B. Piot, M. Geist, and O. Pietquin, « Apprentissage par renforcement inverse en cascadant classification et régression », Journées Francophones de Planification, Décision et Apprentissage (JFPDA), 2013.

E. Klein, B. Piot, M. Geist, and O. Pietquin, « Classification structurée pour l'apprentissage par renforcement inverse », Revue d'intelligence artificielle, vol. 27, no. 2, 2013.

J. Z. Kolter and A. Ng, « Regularization and feature selection in least-squares temporal difference learning », Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), pp. 521-528, 2009.
DOI: 10.1145/1553374.1553442

Y. LeCun, U. Muller, J. Ben, E. Cosatto, B. Flepp, et al., « Off-road obstacle avoidance through end-to-end learning », Advances in Neural Information Processing Systems, pp. 739-746, 2006.

A. Lazaric, M. Ghavamzadeh, and R. Munos, « Finite-sample analysis of LSTD », Proceedings of the 27th International Conference on Machine Learning (ICML), 2010.

M. Lopes, F. Melo, and L. Montesano, « Active Learning for Reward Estimation in Inverse Reinforcement Learning », Machine Learning and Knowledge Discovery in Databases (ECML/PKDD), pp. 31-46, 2009.

M. Lagoudakis and R. Parr, « Least-squares policy iteration », Journal of Machine Learning Research, vol. 4, pp. 1107-1149, 2003.

S. Levine, Z. Popovic, and V. Koltun, « Feature construction for inverse reinforcement learning », Advances in Neural Information Processing Systems, vol. 23, pp. 1342-1350, 2010.

L. Mason, J. Baxter, P. Bartlett, and M. Frean, « Functional gradient techniques for combining hypotheses », Advances in Neural Information Processing Systems, pp. 221-246, 1999.

J. Mercer, « Functions of positive and negative type, and their connection with the theory of integral equations », Philosophical Transactions of the Royal Society of London, Series A, vol. 209, pp. 415-446, 1909.

F. Melo and M. Lopes, « Learning from Demonstration Using MDP Induced Metrics », Machine Learning and Knowledge Discovery in Databases (ECML/PKDD), pp. 385-401, 2010.
DOI: 10.1007/978-3-642-15883-4_25

A. Moore, « Efficient Memory-Based Learning for Robot Control », PhD thesis, 1990.

R. Munos, « Performance bounds in Lp norm for approximate value iteration », SIAM Journal on Control and Optimization, vol. 46, no. 2, pp. 541-561, 2007.

A. Ng, D. Harada, and S. Russell, « Policy invariance under reward transformations: Theory and application to reward shaping », International Conference on Machine Learning (ICML), pp. 278-287, 1999.

J. Norris, Markov Chains, 1998.

A. Ng and S. Russell, « Algorithms for inverse reinforcement learning », International Conference on Machine Learning (ICML), pp. 663-670, 2000.

G. Neu and C. Szepesvári, « Apprenticeship learning using inverse reinforcement learning and gradient methods », Conference on Uncertainty in Artificial Intelligence (UAI), pp. 295-302, 2007.

G. Neu and C. Szepesvári, « Training parsers by inverse reinforcement learning », Machine Learning, vol. 77, pp. 303-337, 2009.

D. Pomerleau, « Knowledge-based training of artificial neural networks for autonomous robot driving », pp. 19-43, 1993.

M. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994.

Q. Qiao and P. Beling, « Inverse reinforcement learning with Gaussian process », American Control Conference (ACC), pp. 113-118, 2011.

D. Ramachandran and E. Amir, « Bayesian inverse reinforcement learning », Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 2586-2591, 2007.

N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt, « Boosting structured prediction for imitation learning », Advances in Neural Information Processing Systems, p. 1153, 2007.

S. Ross and J. A. Bagnell, « Efficient reductions for imitation learning », Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.

N. Ratliff, J. A. Bagnell, and S. Srinivasa, « Imitation learning for locomotion and manipulation », IEEE-RAS International Conference on Humanoid Robots, pp. 392-397, 2007.

S. Ross, G. Gordon, and J. A. Bagnell, « A reduction of imitation learning and structured prediction to no-regret online learning », 2010.

M. Riedmiller, « Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method », Machine Learning: ECML 2005, pp. 317-328, 2005.

C. Rasmussen and M. Kuss, « Gaussian processes in reinforcement learning », Advances in Neural Information Processing Systems, pp. 751-759, 2004.

N. Ratliff, D. Silver, and J. A. Bagnell, « Learning to search: Functional gradient techniques for imitation learning », Autonomous Robots, vol. 27, no. 1, pp. 25-53, 2009.

S. Russell, « Learning agents for uncertain environments (extended abstract) », Annual Conference on Computational Learning Theory (COLT), p. 103, 1998.

R. Sutton and A. Barto, Reinforcement Learning: An Introduction, 1998.

U. Syed, M. Bowling, and R. Schapire, « Apprenticeship learning using linear programming », Proceedings of the 25th International Conference on Machine Learning (ICML '08), pp. 1032-1039, 2008.
DOI: 10.1145/1390156.1390286
URL: http://icml2008.cs.helsinki.fi/papers/645.pdf

A. Srinivasan and R. Camacho, « Inductive Logic Programming Applied to an Area of Flight Control », Machine Intelligence, vol. 15, 1998.

B. Scherrer and M. Geist, « Recursive Least-Squares Learning with Eligibility Traces », Proceedings of the European Workshop on Reinforcement Learning (EWRL 2011), Lecture Notes in Computer Science (LNCS), Athens (Greece), 2011.
DOI: 10.1007/978-3-642-29946-9_14
URL: https://hal.archives-ouvertes.fr/hal-00644511

S. Safavian and D. Landgrebe, « A survey of decision tree classifier methodology », IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 3, pp. 660-674, 1991.
DOI: 10.1109/21.97458

J. Saunders, C. Nehaniv, and K. Dautenhahn, « Teaching robots by moulding behavior and scaffolding the environment », Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI '06), pp. 118-125, 2006.
DOI: 10.1145/1121241.1121263

U. Syed and R. Schapire, « A game-theoretic approach to apprenticeship learning », Advances in Neural Information Processing Systems, pp. 1449-1456, 2008.

G. Shiraz and C. Sammut, « Combining knowledge acquisition and machine learning to control dynamic systems », International Joint Conference on Artificial Intelligence (IJCAI), vol. 15, pp. 908-913, 1997.

D. Stirling, « CHURPs: Compressed Heuristic Universal Reaction Planners », PhD thesis, 1995.

B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin, « Learning structured prediction models: A large margin approach », Proceedings of the 22nd International Conference on Machine Learning (ICML), p. 903, 2005.