Markov chain stochastic DCA and applications in deep learning with PDEs regularization - Université de Lorraine
Journal article in Neural Networks, 2023

Markov chain stochastic DCA and applications in deep learning with PDEs regularization

Abstract

This paper addresses a large class of nonsmooth nonconvex stochastic DC (difference-of-convex functions) programs where endogenous uncertainty is involved and i.i.d. (independent and identically distributed) samples are not available. Instead, we assume that it is only possible to access Markov chains whose sequences of distributions converge to the target distributions. This setting is natural, as Markovian noise arises in many contexts, including Bayesian inference, reinforcement learning, and stochastic optimization in high-dimensional or combinatorial spaces. We then design a stochastic algorithm named Markov chain stochastic DCA (MCSDCA) based on DCA (DC algorithm), a well-known method for nonconvex optimization. We establish convergence results in both asymptotic and nonasymptotic senses. MCSDCA is then applied to deep learning via PDEs (partial differential equations) regularization, where two realizations of MCSDCA are constructed, namely MCSDCA-odLD and MCSDCA-udLD, based on overdamped and underdamped Langevin dynamics, respectively. Numerical experiments on time series prediction and image classification problems with a variety of neural network topologies show the merits of the proposed methods.
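To make the construction concrete, the sketch below pairs a DCA outer loop with samples drawn from an overdamped Langevin chain, in the spirit of the MCSDCA-odLD realization described above. It is only an illustration under assumed choices: the one-dimensional DC decomposition g(x) = x^2/2, h(x) = E[log cosh(x - xi)], the potential in grad_U, and the helper names langevin_chain, subgrad_h, and mcsdca_sketch are all hypothetical and are not taken from the paper.

```python
import numpy as np

# Toy DC objective f(x) = g(x) - h(x) with g(x) = 0.5*x**2 (convex) and
# h(x) = E_xi[log cosh(x - xi)] (convex), where xi follows a target density
# that is only reachable through a Markov chain. All modelling choices here
# are illustrative assumptions, not the paper's actual formulation.

def grad_U(xi):
    # Gradient of the assumed potential U(xi) = 0.5*xi**2 (standard normal target).
    return xi

def langevin_chain(xi0, n_steps, step, rng):
    # Overdamped Langevin dynamics: xi <- xi - step*grad_U(xi) + sqrt(2*step)*noise.
    xi, samples = xi0, []
    for _ in range(n_steps):
        xi = xi - step * grad_U(xi) + np.sqrt(2.0 * step) * rng.standard_normal()
        samples.append(xi)
    return np.asarray(samples)

def subgrad_h(x, xis):
    # Monte Carlo estimate of a subgradient of h at x, namely E[tanh(x - xi)].
    return np.tanh(x - xis).mean()

def mcsdca_sketch(x0=2.0, n_outer=50, samples_per_iter=200, step=0.01, seed=0):
    # DCA outer loop: linearize the concave part -h at the current iterate using
    # Markov-chain samples, then solve the convex subproblem min_x g(x) - y*x,
    # whose closed-form solution for g(x) = 0.5*x**2 is simply x = y.
    rng = np.random.default_rng(seed)
    x, xi = x0, 0.0
    for _ in range(n_outer):
        chain = langevin_chain(xi, samples_per_iter, step, rng)
        xi = chain[-1]              # warm-start the chain at the next outer iteration
        y = subgrad_h(x, chain)     # stochastic subgradient of h at x
        x = y                       # solution of the convex subproblem
    return x

if __name__ == "__main__":
    print("approximate DC critical point:", mcsdca_sketch())
```

In this toy setup the convex subproblem has a closed-form solution; in practice each DCA step would solve (approximately) a convex program, and the Langevin chain would target the paper's endogenous, parameter-dependent distribution rather than a fixed Gaussian.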
No file deposited

Dates and versions

hal-04283561, version 1 (13-11-2023)

Cite

Hoang Phuc Hau Luu, Hoai Minh Le, Hoai An Le Thi. Markov chain stochastic DCA and applications in deep learning with PDEs regularization. Neural Networks, 2023, 170, pp.149-166. ⟨10.1016/j.neunet.2023.11.032⟩. ⟨hal-04283561⟩