Recurrent Neural Network Training with Convex Loss and Regularization Functions by Extended Kalman Filtering
Bemporad, A.
2023-01-01
Abstract
This article investigates the use of extended Kalman filtering to train recurrent neural networks with rather general convex loss functions and regularization terms on the network parameters, including ℓ1-regularization. We show that the learning method is competitive with respect to stochastic gradient descent in a nonlinear system identification benchmark and in training a linear system with binary outputs. We also explore the use of the algorithm in data-driven nonlinear model predictive control and its relation with disturbance models for offset-free closed-loop tracking.
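The core idea named in the abstract, using an extended Kalman filter (EKF) as the training algorithm, amounts to treating the network weights as states of a nonlinear system and updating them from prediction errors. The following is a minimal generic sketch of joint EKF estimation of the hidden state and weights of a scalar RNN with the classical quadratic loss; it is not the paper's algorithm (which further handles general convex losses and ℓ1-regularization), and the model structure, noise covariances, and all numerical values are illustrative assumptions.

```python
# Minimal joint-EKF sketch (assumed example, not the paper's method):
# scalar RNN  x_{k+1} = tanh(a*x_k + b*u_k),  y_k = c*x_k.
# Extended state z = [x, a, b, c]; the weights follow a random walk.
import numpy as np

def f(z, u):
    # State transition: RNN dynamics for x, random walk for (a, b, c).
    x, a, b, c = z
    return np.array([np.tanh(a * x + b * u), a, b, c])

def h(z):
    # Output map y = c * x.
    return np.array([z[3] * z[0]])

def jac(fun, z, *args, eps=1e-6):
    # Forward-difference Jacobian of fun at z.
    f0 = fun(z, *args)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (fun(z + dz, *args) - f0) / eps
    return J

rng = np.random.default_rng(0)
true_z = np.array([0.0, 0.8, 0.5, 1.2])    # "true" x0 and weights (assumed)
z = np.array([0.0, 0.1, 0.1, 0.5])         # initial estimate
P = np.eye(4)                              # extended-state covariance
Q = np.diag([1e-4, 1e-5, 1e-5, 1e-5])      # process noise (tuning knob)
R = np.array([[1e-2]])                     # measurement noise

xt = true_z[0]
for k in range(500):
    u = rng.standard_normal()
    # Simulate the true system and a noisy output sample.
    xt = np.tanh(true_z[1] * xt + true_z[2] * u)
    y = true_z[3] * xt + 0.05 * rng.standard_normal()
    # EKF prediction step.
    F = jac(f, z, u)
    z = f(z, u)
    P = F @ P @ F.T + Q
    # EKF measurement update (quadratic loss).
    H = jac(h, z)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    z = z + K @ (y - h(z))
    P = (np.eye(4) - K @ H) @ P

print("estimated (a, b, c):", z[1:])
```

In this formulation the covariances Q and R play the role of learning-rate and loss-weighting hyperparameters; replacing the quadratic measurement update with one derived from a general convex loss and adding ℓ1-regularization on the weights is precisely the extension the article studies.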
Files in this item:

File | Type | License | Size | Format | Availability
---|---|---|---|---|---
Recurrent_Neural_Network_Training_With_Convex_Loss_and_Regularization_Functions_by_Extended_Kalman_Filtering.pdf | Publisher's version (PDF) | Publisher's copyright | 656.97 kB | Adobe PDF | Not available (request a copy)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.