Y. Akimoto, Y. Nagata, I. Ono, and S. Kobayashi, Bidirectional relation between CMA evolution strategies and natural evolution strategies, Proc. of PPSN, pp.154-163, 2010.

B. D. Anderson and J. B. Moore, Optimal filtering. Courier Corporation, 2012.

M. Andrychowicz et al., Hindsight experience replay, Proc. of NIPS, 2017.

G. Antonelli, T. I. Fossen, and D. R. Yoerger, Underwater robotics. Springer handbook of robotics, vol.15, pp.987-1008, 2008.

B. D. Argall, S. Chernova, M. Veloso, and B. Browning, A survey of robot learning from demonstration, Robotics and Autonomous Systems, vol.57, issue.5, pp.469-483, 2009.

K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, Deep reinforcement learning: A brief survey, IEEE Signal Processing Magazine, vol.34, issue.6, pp.26-38, 2017.

C. G. Atkeson, A. W. Moore, and S. Schaal, Locally weighted learning, Lazy learning, pp.11-73, 1997.

C. G. Atkeson and J. C. Santamaria, A comparison of direct and model-based reinforcement learning, Proc. of IEEE International Conference on Robotics and Automation (ICRA), vol.4, pp.3557-3564, 1997.

P. Auer, N. Cesa-Bianchi, and P. Fischer, Finite-time analysis of the multiarmed bandit problem, Machine learning, vol.47, issue.2-3, pp.235-256, 2002.

A. Auger and N. Hansen, A restart CMA evolution strategy with increasing population size, Congress on Evolutionary Computation, 2005.

J. Baxter and P. L. Bartlett, Infinite-horizon policy-gradient estimation, Journal of Artificial Intelligence Research, vol.15, pp.319-350, 2001.

M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton et al., Unifying count-based exploration and intrinsic motivation, Proc. of NIPS, 2016.

J. G. Bellingham and K. Rajan, Robotics in remote and hostile environments, Science, vol.318, issue.5853, pp.1098-1102, 2007.

R. Bellman, A Markovian decision process, Journal of Mathematics and Mechanics, pp.679-684, 1957.

S. Bhatnagar, M. Ghavamzadeh, M. Lee, and R. S. Sutton, Incremental natural actor-critic algorithms, Advances in neural information processing systems, pp.105-112, 2008.

A. Billard, S. Calinon, R. Dillmann, and S. Schaal, Robot programming by demonstration, Springer handbook of robotics, pp.1371-1394, 2008.

C. M. Bishop, Pattern recognition and machine learning, 2006.

M. Blum and M. A. Riedmiller, Optimization of Gaussian process hyperparameters using Rprop, Proc. of ESANN, 2013.

J. Bongard, V. Zykov, and H. Lipson, Resilient machines through continuous self-modeling, Science, vol.314, issue.5802, pp.1118-1121, 2006.

Z. I. Botev, The cross-entropy method for optimization, Handbook of statistics, vol.31, pp.35-59, 2013.

L. Breiman, Random forests. Machine learning, vol.45, pp.5-32, 2001.

E. Brochu, V. M. Cora, and N. de Freitas, A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning, 2010.

R. A. Brooks, Elephants don't play chess, Robotics and autonomous systems, vol.6, issue.1-2, pp.3-15, 1990.

R. A. Brooks, Intelligence without reason, Proceedings of the 12th international joint conference on Artificial intelligence, vol.1, pp.569-595, 1991.

R. A. Brooks, Intelligence without representation. Artificial intelligence, vol.47, pp.139-159, 1991.

R. A. Brooks, The role of learning in autonomous robots, Proceedings of the fourth annual workshop on Computational learning theory, pp.5-10, 1991.

S. Calinon, F. Guenter, and A. Billard, On learning, representing, and generalizing a task in a humanoid robot, IEEE Transactions on Systems, Man, and Cybernetics, vol.37, issue.2, pp.286-298, 2007.

E. F. Camacho and C. B. Alba, Model predictive control, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00256633

A. Cangelosi and M. Schlesinger, Developmental robotics: From babies to robots, 2015.

K. Chatzilygeroudis and J. Mouret, Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics, Proc. of ICRA, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01768285

K. Chatzilygeroudis, R. Rama, R. Kaushik, D. Goepp, V. Vassiliades et al., Black-Box Data-efficient Policy Search for Robotics, Proc. of IROS, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01576683

K. Chatzilygeroudis, V. Vassiliades, and J. Mouret, Reset-free trial-and-error learning for robot damage recovery, Robotics and Autonomous Systems, vol.100, pp.236-250, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01654641

K. Chatzilygeroudis, V. Vassiliades, F. Stulp, S. Calinon, and J. Mouret, A survey on policy search algorithms for learning robot controllers in a handful of trials, IEEE Transactions on Robotics, vol.36, issue.2, pp.328-347, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02393432

K. Chua, R. Calandra, R. Mcallister, and S. Levine, Deep reinforcement learning in a handful of trials using probabilistic dynamics models, Advances in Neural Information Processing Systems, pp.4754-4765, 2018.

I. Clavera, J. Rothfuss, J. Schulman, Y. Fujita, T. Asfour et al., Model-based reinforcement learning via meta-policy optimization, Conference on Robot Learning, pp.617-629, 2018.

C. Colas, O. Sigaud, and P. Oudeyer, GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms, Proc. of ICML, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01840576

R. Collobert et al., Natural language processing (almost) from scratch, Journal of machine learning research, vol.12, pp.2493-2537, 2011.

E. Coumans, Bullet physics library. Open source: bulletphysics.org, 2013.

A. Cully, K. Chatzilygeroudis, F. Allocati, and J. Mouret, Limbo: A Flexible High-performance Library for Gaussian Processes modeling and Data-Efficient Optimization, The Journal of Open Source Software, vol.3, issue.26, p.545, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01884299

A. Cully, J. Clune, D. Tarapore, and J. Mouret, Robots that can adapt like animals, Nature, vol.521, issue.7553, pp.503-507, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01158243

A. Cully and Y. Demiris, Quality and diversity optimization: A unifying modular framework, IEEE Transactions on Evolutionary Computation, vol.22, issue.2, pp.245-259, 2018.

A. Cully and Y. Demiris, Hierarchical behavioral repertoires with unsupervised descriptors, Proceedings of the Genetic and Evolutionary Computation Conference, pp.69-76, 2018.

A. Cully and J. Mouret, Evolving a behavioral repertoire for a walking robot, Evolutionary Computation, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01095543

M. Cutler and J. P. How, Efficient reinforcement learning for robots using informative simulated priors, Proc. of ICRA, 2015.

P. Dayan and G. E. Hinton, Using expectation-maximization for reinforcement learning, Neural Computation, vol.9, issue.2, pp.271-278, 1997.

K. Deb, Multi-objective optimization using evolutionary algorithms, vol.16, 2001.

K. Deb and H. Beyer, Self-adaptive genetic algorithms with simulated binary crossover, 1999.

K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. on Evolutionary Computation, vol.6, issue.2, pp.182-197, 2002.

M. P. Deisenroth, D. Fox, and C. E. Rasmussen, Gaussian processes for data-efficient learning in robotics and control, IEEE Trans. Pattern Anal. Mach. Intell, vol.37, issue.2, pp.408-423, 2015.

M. P. Deisenroth, G. Neumann, and J. Peters, A survey on policy search for robotics, Foundations and Trends in Robotics, vol.2, issue.1, pp.1-142, 2013.

M. P. Deisenroth and C. E. Rasmussen, PILCO: A model-based and data-efficient approach to policy search, Proc. of ICML, 2011.

S. Doncieux, N. Bredeche, J. Mouret, and A. E. Eiben, Evolutionary robotics: what, why, and where to, Frontiers in Robotics and AI, vol.2, p.4, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01131267

S. Doncieux, D. Filliat, N. Díaz-Rodríguez, T. Hospedales, R. Duro et al., Open-ended learning: a conceptual framework based on representational redescription, Frontiers in Neurorobotics, vol.12, p.59, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01889947

S. Doncieux, A. Laflaquière, and A. Coninx, Novelty search: a theoretical perspective, Proceedings of the Genetic and Evolutionary Computation Conference, pp.99-106, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02561846

S. Doncieux and J. Mouret, Beyond black-box optimization: a review of selective pressures for evolutionary robotics, Evolutionary Intelligence, vol.7, issue.2, pp.71-93, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01150254

S. Doncieux, J. Mouret, N. Bredeche, and V. Padois, Evolutionary robotics: Exploring new horizons, New horizons in evolutionary robotics, pp.3-25, 2011.
URL : https://hal.archives-ouvertes.fr/inria-00566896

M. Duarte, J. Gomes, S. M. Oliveira, and A. L. Christensen, Evolution of repertoire-based control for robots with complex locomotor systems, IEEE Transactions on Evolutionary Computation, vol.22, issue.2, pp.314-328, 2018.

P. Fidelman and P. Stone, Learning ball acquisition on a physical robot, International Symposium on Robotics and Automation (ISRA), p.6, 2004.

C. Finn, P. Abbeel, and S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.1126-1135, 2017.

C. Florensa, D. Held, M. Wulfmeier, and P. Abbeel, Reverse curriculum generation for reinforcement learning, Conference on Robot Learning, 2017.

Y. Gal and Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, Proc. of ICML, 2016.

Y. Gal, J. Hron, and A. Kendall, Concrete dropout, Advances in Neural Information Processing Systems, pp.3581-3590, 2017.

Y. Gal, R. T. Mcallister, and C. E. Rasmussen, Improving PILCO with Bayesian neural network dynamics models, Data-Efficient Machine Learning workshop, 2016.

J. C. Gallagher, R. D. Beer, K. S. Espenschied, and R. D. Quinn, Application of evolved locomotion controllers to a hexapod robot, Robotics and Autonomous Systems, vol.19, issue.1, pp.95-103, 1996.

L. K. Le Goff, O. Yaakoubi, A. Coninx, and S. Doncieux, Building an affordances map with interactive perception, 2019.

D. E. Goldberg and K. Deb, A comparative analysis of selection schemes used in genetic algorithms, Foundations of genetic algorithms, vol.1, pp.69-93, 1991.

J. Gottlieb and P. Oudeyer, Information-seeking, curiosity, and attention: computational and neural mechanisms, Trends in Cognitive Sciences, vol.17, issue.11, pp.585-593, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00913646

GPy, GPy: A Gaussian process framework in Python, 2012.

R. J. Griffin, G. Wiedebach, S. Bertrand, A. Leonessa, and J. Pratt, Straight-leg walking through underconstrained whole-body control, 2018 IEEE International Conference on Robotics and Automation (ICRA), pp.1-5, 2018.

S. Gu, E. Holly, T. Lillicrap, and S. Levine, Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates, 2017 IEEE international conference on robotics and automation (ICRA), pp.3389-3396, 2017.

C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, On calibration of modern neural networks, Proc. of ICML, 2017.

D. Ha and J. Schmidhuber, Recurrent world models facilitate policy evolution, Advances in Neural Information Processing Systems, pp.2450-2462, 2018.

T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, International Conference on Machine Learning, pp.1861-1870, 2018.

D. Hafner, D. Tran, T. Lillicrap, A. Irpan, and J. Davidson, Noise contrastive priors for functional uncertainty, 2018.

N. Hansen, The CMA Evolution Strategy: A Comparing Review, 2006.

N. Hansen, Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed, Proc. of GECCO, pp.2389-2396, 2009.
URL : https://hal.archives-ouvertes.fr/inria-00382093

N. Hansen, Benchmarking a BI-population CMA-ES on the BBOB-2009 noisy testbed, Proc. of GECCO, pp.2397-2402, 2009.
URL : https://hal.archives-ouvertes.fr/inria-00382101

N. Hansen, A. S. Niederberger, L. Guzzella, and P. Koumoutsakos, A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion, IEEE Trans. on Evolutionary Computation, vol.13, issue.1, pp.180-197, 2009.
URL : https://hal.archives-ouvertes.fr/inria-00276216

N. Hansen and A. Ostermeier, Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation, Proc. of IEEE international conference on evolutionary computation, pp.312-317, 1996.

N. Heess et al., Emergence of locomotion behaviours in rich environments, 2017.

R. Herbrich, N. D. Lawrence, and M. Seeger, Fast sparse Gaussian process methods: The informative vector machine, Advances in neural information processing systems, pp.625-632, 2003.

J. C. Higuera, D. Meger, and G. Dudek, Synthesizing neural network controllers with probabilistic model-based reinforcement learning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.2538-2544, 2018.

J. Hollerbach, W. Khalil, and M. Gautier, Model identification, Springer Handbook of Robotics, pp.113-138, 2016.

R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. De Turck et al., VIME: Variational information maximizing exploration, Proc. of NIPS, 2016.

P. Huang, J. Lehman, A. K. Mok, R. Miikkulainen, and L. Sentis, Grasping novel objects with a dexterous robotic hand through neuroevolution, 2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA), pp.1-8, 2014.

A. J. Ijspeert, J. Nakanishi, and S. Schaal, Learning attractor landscapes for learning motor primitives, Advances in neural information processing systems, pp.1547-1554, 2003.

Y. Jin and J. Branke, Evolutionary optimization in uncertain environments-a survey, IEEE Trans. on Evolutionary Computation, vol.9, issue.3, pp.303-317, 2005.

D. R. Jones, C. D. Perttunen, and B. E. Stuckman, Lipschitzian optimization without the Lipschitz constant, Journal of Optimization Theory and Applications, vol.79, issue.1, pp.157-181, 1993.

R. Jonschkowski and O. Brock, Learning state representations with robotic priors, Autonomous Robots, vol.39, issue.3, pp.407-428, 2015.

S. J. Julier and J. K. Uhlmann, Unscented filtering and nonlinear estimation, Proceedings of the IEEE, vol.92, issue.3, pp.401-422, 2004.

L. P. Kaelbling, M. L. Littman, and A. W. Moore, Reinforcement learning: A survey, Journal of artificial intelligence research, vol.4, pp.237-285, 1996.

S. M. Kakade, A natural policy gradient, Advances in neural information processing systems, pp.1531-1538, 2002.

A. Karmiloff-Smith, Beyond modularity: A developmental perspective on cognitive science, European journal of disorders of communication, vol.29, issue.1, pp.95-105, 1994.

R. Kaushik, T. Anne, and J. Mouret, Fast Online Adaptation in Robotics through Meta-Learning Embeddings of Simulated Priors, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
URL : https://hal.archives-ouvertes.fr/hal-02909452

R. Kaushik, K. Chatzilygeroudis, and J. Mouret, Multi-objective model-based policy search for data-efficient learning with sparse rewards, Conference on Robot Learning, pp.839-855, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01884294

R. Kaushik, P. Desreumaux, and J. Mouret, Adaptive prior selection for repertoire-based online adaptation in robotics, Frontiers in Robotics and AI, vol.6, p.151, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02462935

S. Keerthi and W. Chu, A matching pursuit approach to sparse Gaussian process regression, Advances in neural information processing systems, pp.643-650, 2006.

E. Keogh and A. Mueen, Curse of dimensionality. Encyclopedia of machine learning, pp.257-258, 2010.

J. Kober, J. A. Bagnell, and J. Peters, Reinforcement learning in robotics: A survey, International Journal of Robotics Research, vol.32, issue.11, pp.1238-1274, 2013.

J. Kober and J. R. Peters, Policy search for motor primitives in robotics, Advances in neural information processing systems, pp.849-856, 2009.

N. Kohl and P. Stone, Policy gradient reinforcement learning for fast quadrupedal locomotion, Proc. of ICRA, vol.3, pp.2619-2624, 2004.

S. Koos, A. Cully, and J. Mouret, Fast damage recovery in robotics with the t-resilience algorithm, The International Journal of Robotics Research, vol.32, issue.14, pp.1700-1723, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00932862

S. Koos and J. Mouret, Online discovery of locomotion modes for wheel-legged hybrid robots: A transferability-based approach, Field Robotics, pp.70-77, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00633930

S. Koos, J. Mouret, and S. Doncieux, The transferability approach: Crossing the reality gap in evolutionary robotics, IEEE Transactions on Evolutionary Computation, vol.17, issue.1, pp.122-145, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00687617

R. Koppejan and S. Whiteson, Neuroevolutionary reinforcement learning for generalized helicopter control, Proceedings of the 11th Annual conference on Genetic and evolutionary computation, pp.145-152, 2009.

E. Krotkov, D. Hackett, L. Jackel, M. Perschbacher, J. Pippine et al., The DARPA Robotics Challenge finals: Results and perspectives, Journal of Field Robotics, vol.34, issue.2, pp.229-240, 2017.

V. Kumar, E. Todorov, and S. Levine, Optimal control with learned local models: Application to dexterous manipulation, 2016 IEEE International Conference on Robotics and Automation (ICRA), pp.378-383, 2016.

A. Kupcsik, M. P. Deisenroth, J. Peters, A. P. Loh, P. Vadakkepat et al., Model-based contextual policy search for data-efficient generalization of robot skills, Artificial Intelligence, vol.247, pp.415-439, 2017.

T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel, Model-ensemble trust-region policy optimization, Proc. of ICLR, 2018.

B. Lakshminarayanan, A. Pritzel, and C. Blundell, Simple and scalable predictive uncertainty estimation using deep ensembles, Advances in Neural Information Processing Systems, pp.6402-6413, 2017.

J. Lee, M. X. Grey, S. Ha, T. Kunz, S. Jain et al., DART: Dynamic animation and robotics toolkit, The Journal of Open Source Software, 2018.

J. Lehman, J. Chen, J. Clune, and K. O. Stanley, ES is more than just a traditional finite-difference approximator, 2017.

J. Lehman and K. O. Stanley, Abandoning objectives: Evolution through the search for novelty alone, Evolutionary computation, vol.19, issue.2, pp.189-223, 2011.

J. Lehman and K. O. Stanley, Evolving a diversity of virtual creatures through novelty search and local competition, Proceedings of the 13th annual conference on Genetic and evolutionary computation, pp.211-218, 2011.

T. Lesort, N. Díaz-Rodríguez, J. Goudou, and D. Filliat, State representation learning for control: An overview, Neural Networks, vol.108, pp.379-392, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01858558

S. Levine and P. Abbeel, Learning neural network policies with guided policy search under unknown dynamics, Proc. of NIPS, pp.1071-1079, 2014.

S. Levine and V. Koltun, Guided policy search, Proc. of ICML, 2013.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, Learning hand-eye coordination for robotic grasping with deep learning and largescale data collection, The International Journal of Robotics Research, vol.37, issue.4-5, pp.421-436, 2018.

T. P. Lillicrap et al., Continuous control with deep reinforcement learning, Proc. of ICLR, 2016.

H. Liu, Y. Ong, X. Shen, and J. Cai, When Gaussian process meets big data: A review of scalable GPs, IEEE Transactions on Neural Networks and Learning Systems, 2020.

D. J. Lizotte, T. Wang, M. H. Bowling, and D. Schuurmans, Automatic gait optimization with Gaussian process regression, IJCAI, vol.7, pp.944-949, 2007.

M. Lopes, T. Lang, M. Toussaint, and P. Oudeyer, Exploration in model-based reinforcement learning by empirically estimating learning progress, Proc. of NIPS, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00755248

I. Loshchilov, CMA-ES with restarts for solving CEC 2013 benchmark problems, Congress on Evolutionary Computation, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00823880

M. Lungarella, G. Metta, R. Pfeifer, and G. Sandini, Developmental robotics: a survey, Connection science, vol.15, issue.4, pp.151-190, 2003.

K. M. Lynch and F. C. Park, Modern Robotics: Mechanics, Planning, and Control, 2017.

D. J. Mackay, A practical Bayesian framework for backpropagation networks, Neural computation, vol.4, issue.3, pp.448-472, 1992.

H. Mania, A. Guy, and B. Recht, Simple random search of static linear policies is competitive for reinforcement learning, Advances in Neural Information Processing Systems, pp.1800-1809, 2018.

S. Mannor, R. Y. Rubinstein, and Y. Gat, The cross entropy method for fast policy search, Proc. of ICML, pp.512-519, 2003.

V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap et al., Asynchronous methods for deep reinforcement learning, Proc. of ICML, 2016.

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou et al., Playing Atari with deep reinforcement learning, 2013.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness et al., Human-level control through deep reinforcement learning, Nature, vol.518, issue.7540, pp.529-533, 2015.

J. Mouret, Novelty-based Multiobjectivization, New Horizons in Evolutionary Robotics, pp.139-154, 2011.
URL : https://hal.archives-ouvertes.fr/hal-01300711

J. Mouret, Micro-data learning: The other end of the spectrum, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01374786

J. Mouret and J. Clune, Illuminating search spaces by mapping elites, 2015.

J. Mouret and S. Doncieux, Sferes v2: Evolvin' in the multi-core world, Proc. of CEC, 2010.
URL : https://hal.archives-ouvertes.fr/hal-00687633

J. Mouret, S. Koos, and S. Doncieux, Crossing the reality gap: a short introduction to the transferability approach, 2013.
URL : https://hal.archives-ouvertes.fr/hal-01300706

R. M. Murray, Z. Li, and S. S. Sastry, A mathematical introduction to robotic manipulation, 1994.

A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel et al., Learning to adapt in dynamic, real-world environments through meta-reinforcement learning, Proc. of ICLR, 2019.

A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine, Neural network dynamics for model-based deep reinforcement learning with model-free finetuning, 2018 IEEE International Conference on Robotics and Automation (ICRA), pp.7559-7566, 2018.

K. Nagatani et al., Emergency response to the nuclear accident at the Fukushima Daiichi Nuclear Power Plants using mobile rescue robots, Journal of Field Robotics, vol.30, issue.1, pp.44-63, 2013.

R. M. Neal, Bayesian learning for neural networks, vol.118, 2012.

A. Y. Ng and M. Jordan, PEGASUS: a policy search method for large MDPs and POMDPs, Proc. of Uncertainty in Artificial Intelligence, pp.406-415, 2000.

A. Y. Ng, H. J. Kim, M. I. Jordan, and S. Sastry, Autonomous helicopter flight via reinforcement learning, Proceedings of the 16th International Conference on Neural Information Processing Systems, NIPS'03, pp.799-806, 2003.

D. Nguyen-Tuong and J. Peters, Model learning for robot control: a survey, Cognitive Processing, vol.12, issue.4, pp.319-340, 2011.

A. Nichol, J. Achiam, and J. Schulman, On first-order meta-learning algorithms, 2018.

S. Nolfi and D. Floreano, Evolutionary robotics: The biology, intelligence, and technology of self-organizing machines, 2000.

M. Oliveira, S. Doncieux, J. Mouret, and C. P. Santos, Optimization of humanoid walking controller: Crossing the reality gap, 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp.106-111, 2013.
URL : https://hal.archives-ouvertes.fr/hal-01300704

T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel et al., An algorithmic perspective on imitation learning, Foundations and Trends in Robotics, vol.7, issue.1-2, pp.1-179, 2018.

I. Osband, Risk versus uncertainty in deep learning: Bayes, bootstrap and the dangers of dropout, NIPS Workshop on Bayesian Deep Learning, 2016.

P. Oudeyer, F. Kaplan, and V. V. Hafner, Intrinsic motivation systems for autonomous mental development, IEEE Trans. on Evolutionary Computation, vol.11, issue.2, pp.265-286, 2007.

P. Oudeyer, F. Kaplan, V. V. Hafner, and A. Whyte, The playground experiment: Task-independent development of a curious robot, Proc. of the AAAI Spring Symposium on Developmental Robotics, pp.42-47, 2005.

V. Padois, S. Ivaldi, J. Babič, M. Mistry, J. Peters et al., Whole-body multi-contact motion in humans and humanoids: Advances of the CoDyCo European project, Robotics and Autonomous Systems, vol.90, pp.97-117, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01399360

Y. Pan, E. Theodorou, and M. Kontitsis, Sample efficient path integral control under uncertainty, Advances in Neural Information Processing Systems, pp.2314-2322, 2015.

G. Paolo, A. Laflaquiere, A. Coninx, and S. Doncieux, Unsupervised learning and exploration of reachable outcome space, 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020.
URL : https://hal.archives-ouvertes.fr/hal-02951255

V. Papaspyros, K. Chatzilygeroudis, V. Vassiliades, and J. Mouret, Safety-aware robot damage recovery using constrained Bayesian optimization and simulated priors, BayesOpt '16 Workshop at NIPS, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01407757

C. Park and D. Apley, Patchwork kriging for large-scale Gaussian process regression, 2017.

R. Pautrat, K. Chatzilygeroudis, and J. Mouret, Bayesian optimization with automatic prior selection for data-efficient direct policy search, Proc. of ICRA, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01768279

A. Pervez and D. Lee, Learning task-parameterized dynamic movement primitives using mixture of GMMs, Intelligent Service Robotics, vol.11, issue.1, pp.61-78, 2018.

J. Peters, K. Mülling, and Y. Altun, Relative entropy policy search, Proc. of AAAI, 2010.

J. Peters and S. Schaal, Reinforcement learning by reward-weighted regression for operational space control, Proceedings of the 24th international conference on Machine learning, pp.745-750, 2007.

J. Peters and S. Schaal, Natural actor-critic, Neurocomputing, vol.71, issue.7-9, pp.1180-1190, 2008.

J. Peters and S. Schaal, Reinforcement learning of motor skills with policy gradients, Neural Networks, vol.21, issue.4, pp.682-697, 2008.

J. Peters, S. Vijayakumar, and S. Schaal, Reinforcement learning for humanoid robotics, Proceedings of the third IEEE-RAS international conference on humanoid robots, pp.1-20, 2003.

R. Pfeifer and J. Bongard, How the body shapes the way we think: a new view of intelligence, 2006.

A. S. Polydoros and L. Nalpantidis, Survey of model-based reinforcement learning: Applications on robotics, Journal of Intelligent & Robotic Systems, pp.1-21, 2017.

J. K. Pugh, L. B. Soros, and K. O. Stanley, Quality diversity: A new frontier for evolutionary computation, Frontiers in Robotics and AI, vol.3, p.40, 2016.

J. Quiñonero-Candela and C. E. Rasmussen, A unifying view of sparse approximate Gaussian process regression, JMLR, vol.6, pp.1939-1959, 2005.

A. Rai, R. Antonova, F. Meier, and C. G. Atkeson, Using simulation to improve sample-efficiency of Bayesian optimization for bipedal robots, Journal of machine learning research, vol.20, issue.49, pp.1-24, 2019.

A. V. Rao, A survey of numerical methods for optimal control, Advances in the Astronautical Sciences, vol.135, issue.1, pp.497-528, 2009.

C. E. Rasmussen and C. K. Williams, Gaussian processes for machine learning, 2006.

M. Riedmiller and H. Braun, Rprop: a fast adaptive learning algorithm, Proc. of ISCIS VII, 1992.

G. A. Rummery and M. Niranjan, On-line Q-learning using connectionist systems, 1994.

T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever, Evolution strategies as a scalable alternative to reinforcement learning, 2017.

V. G. Santucci, P. Oudeyer, A. Barto, and G. Baldassarre, Intrinsically motivated open-ended learning in autonomous robots, Frontiers in Neurorobotics, vol.13, p.115, 2020.

M. Saveriano, Y. Yin, P. Falco, and D. Lee, Data-efficient control policy search using residual dynamics learning, Proc. of IROS, 2017.

S. Schaal, P. Mohajerian, and A. Ijspeert, Dynamics systems vs. optimal control: a unifying view, Progress in brain research, vol.165, pp.425-445, 2007.

T. Schaul and J. Schmidhuber, Metalearning. Scholarpedia, vol.5, issue.6, p.4650, 2010.

J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel, Trust region policy optimization, Proc. of ICML, 2015.

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, Proximal policy optimization algorithms, 2017.

M. Seeger, Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations, 2003.

F. Sehnke et al., Policy gradients with parameter-based exploration for control, Proc. of ICANN, pp.387-396, 2008.

F. Sehnke, C. Osendorfer, T. Rückstieß, A. Graves, J. Peters et al., Parameter-exploring policy gradients, Neural Networks, vol.23, issue.4, pp.551-559, 2010.

R. Sellaouti, O. Stasse, S. Kajita, K. Yokoi, and A. Kheddar, Faster and smoother walking of humanoid HRP-2 with passive toe joints, IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.4909-4914, 2006.

B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas, Taking the human out of the loop: A review of Bayesian optimization, Proceedings of the IEEE, vol.104, pp.148-175, 2016.

A. Sharma, S. Gu, S. Levine, V. Kumar, and K. Hausman, Dynamics-aware unsupervised discovery of skills, 2019.

D. Silver et al., Mastering the game of Go with deep neural networks and tree search, Nature, vol.529, issue.7587, pp.484-489, 2016.

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai et al., Mastering chess and shogi by self-play with a general reinforcement learning algorithm, 2017.

D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra et al., Deterministic policy gradient algorithms, Proc. of ICML, 2014.
URL : https://hal.archives-ouvertes.fr/hal-00938992

E. Snelson and Z. Ghahramani, Sparse Gaussian processes using pseudo-inputs, Proc. of NIPS, 2005.

J. Spitz, K. Bouyarmane, S. Ivaldi, and J. Mouret, Trial-and-error learning of repulsors for humanoid QP-based whole-body control, 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pp.468-475, 2017.

K. O. Stanley, J. Clune, J. Lehman, and R. Miikkulainen, Designing neural networks through neuroevolution, Nature Machine Intelligence, vol.1, issue.1, pp.24-35, 2019.

K. O. Stanley and R. Miikkulainen, Evolving neural networks through augmenting topologies, Evolutionary computation, vol.10, issue.2, pp.99-127, 2002.

F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley et al., Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning, 2017.

M. Sugiyama, I. Takeuchi, T. Suzuki, T. Kanamori, H. Hachiya et al., Least-squares conditional density estimation, IEICE Transactions on Information and Systems, vol.93, issue.3, pp.583-594, 2010.

R. S. Sutton, Learning to predict by the methods of temporal differences, Machine learning, vol.3, issue.1, pp.9-44, 1988.

R. S. Sutton, Generalization in reinforcement learning: Successful examples using sparse coarse coding, Advances in neural information processing systems, pp.1038-1044, 1996.

R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction, 1998.

R. S. Sutton, D. A. Mcallester, S. P. Singh, and Y. Mansour, Policy gradient methods for reinforcement learning with function approximation, Advances in neural information processing systems, pp.1057-1063, 2000.

V. Tangkaratt, S. Mori, T. Zhao, J. Morimoto, and M. Sugiyama, Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation, Neural Networks, vol.57, pp.128-140, 2014.

M. Tesch, J. Schneider, and H. Choset, Using response surfaces and expected improvement to optimize snake robot gait parameters, 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1069-1074, 2011.

E. Theodorou, J. Buchli, and S. Schaal, A generalized path integral control approach to reinforcement learning, JMLR, vol.11, pp.3137-3181, 2010.

S. Thrun and T. M. Mitchell, Lifelong robot learning. Robotics and autonomous systems, vol.15, pp.25-46, 1995.

J. Trevelyan, W. R. Hamel, and S. Kang, Robotics in hazardous applications, Springer handbook of robotics, pp.1521-1548, 2016.

V. Vassiliades, K. Chatzilygeroudis, and J. Mouret, Using centroidal Voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm, IEEE Transactions on Evolutionary Computation, vol.22, issue.4, pp.623-630, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01630627

N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis, Learning model-free robot control by a Monte Carlo EM algorithm, Autonomous Robots, vol.27, issue.2, pp.123-130, 2009.

C. J. Watkins and P. Dayan, Q-learning, Machine learning, vol.8, issue.3-4, pp.279-292, 1992.

D. Wierstra, T. Schaul, J. Peters, and J. Schmidhuber, Natural evolution strategies, IEEE Congress on Evolutionary Computation, pp.3381-3387, 2008.

G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou, Aggressive driving with model predictive path integral control, 2016 IEEE International Conference on Robotics and Automation (ICRA), pp.1433-1440, 2016.

R. J. Williams, Simple statistical gradient-following algorithms for connectionist reinforcement learning, Machine learning, vol.8, issue.3-4, pp.229-256, 1992.

A. Wilson, A. Fern, and P. Tadepalli, Using trajectory data to improve Bayesian optimization for reinforcement learning, JMLR, vol.15, issue.1, pp.253-282, 2014.

X. Yao, Evolving artificial neural networks, Proceedings of the IEEE, vol.87, pp.1423-1447, 1999.

W. Yu, C. K. Liu, and G. Turk, Policy transfer with strategy optimization, International Conference on Learning Representations, 2019.

W. Yu, J. Tan, Y. Bai, E. Coumans, and S. Ha, Learning fast adaptation with meta strategy optimization, 2019.