Back to recurrent processing at the crossroad of transformers and state-space models
Alessandro Betti; Marco Gori
2025-01-01
Abstract
It is a longstanding challenge for the machine learning community to develop models that are capable of processing and learning from long sequences of data. The exceptional results of transformer-based approaches, such as large language models, promote the idea of parallel attention as the key to succeeding in such a challenge, temporarily obscuring the role of the classic sequential processing of recurrent models. However, in the past few years, a new generation of neural models has emerged, combining transformers and recurrent networks, motivated by concerns over the quadratic complexity of self-attention. Meanwhile, (deep) state-space models have also emerged as robust approaches to function approximation over time, thus opening a new perspective on learning from sequential data. Here we provide an overview of these trends, unified under the umbrella of recurrent models, and discuss their likely crucial impact on the development of future architectures for large generative models.
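As a rough illustration of the kind of recurrent computation the abstract alludes to, the sketch below (not taken from the paper; all names and sizes are hypothetical) runs a diagonal linear state-space recurrence over an input sequence: each step updates a fixed-size state, so the cost grows linearly with sequence length, in contrast with the quadratic pairwise comparisons of full self-attention.

```python
# Minimal sketch (illustrative only): a diagonal linear state-space
# recurrence x_t = A x_{t-1} + B u_t, y_t = C x_t. Each step costs O(d),
# so a sequence of length T is processed in O(T * d) time and constant
# memory in T, unlike the O(T^2) cost of full self-attention.
import numpy as np

def linear_ssm_scan(u, A_diag, B, C):
    """Sequentially scan the recurrence over the input sequence u.

    u      : (T, m) input sequence
    A_diag : (d,)   diagonal state-transition coefficients
    B      : (d, m) input matrix
    C      : (k, d) readout matrix
    Returns a (T, k) output sequence.
    """
    T = u.shape[0]
    x = np.zeros(A_diag.shape[0])      # hidden state, fixed size d
    ys = np.empty((T, C.shape[0]))
    for t in range(T):                 # one pass over the sequence
        x = A_diag * x + B @ u[t]      # state update
        ys[t] = C @ x                  # readout
    return ys

# Toy usage with hypothetical sizes
rng = np.random.default_rng(0)
T, m, d, k = 16, 3, 8, 2
y = linear_ssm_scan(rng.normal(size=(T, m)),
                    A_diag=0.9 * np.ones(d),
                    B=rng.normal(size=(d, m)),
                    C=rng.normal(size=(k, d)))
print(y.shape)  # (16, 2)
```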
File | Description | Type | License | Size | Format | Availability
---|---|---|---|---|---|---
1.pdf | Back to recurrent processing at the crossroad of transformers and state-space models | Editorial version (PDF) | Publisher's copyright | 1.63 MB | Adobe PDF | Not available (request a copy)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.