
Personalized incentives as feedback design in generalized Nash equilibrium problems

Fabiani, Filippo;
2023-01-01

Abstract

We investigate both stationary and time-varying, nonmonotone generalized Nash equilibrium problems that exhibit symmetric interactions among the agents, which are known to be potential. As may happen in practical cases, however, we envision a scenario in which the formal expression of the underlying potential function is not available, and we design a semidecentralized Nash-equilibrium-seeking algorithm. In the proposed two-layer scheme, a coordinator iteratively integrates the possibly noisy and sporadic feedback of the agents to learn their pseudogradients, and then designs personalized incentives for them. For their part, the agents receive those personalized incentives, compute a solution to an extended game, and return feedback measurements to the coordinator. In the stationary setting, our algorithm returns a Nash equilibrium when the coordinator is endowed with standard learning policies, while in the time-varying case it returns a Nash equilibrium up to a constant, yet adjustable, error. As a motivating application, we consider the ride-hailing service provided by several competing companies under a mobility-as-a-service orchestration, which is necessary both to handle competition among the firms and to avoid traffic congestion.
Keywords: Game theory, machine learning, time-varying optimization
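
The abstract describes the two-layer scheme only at a high level. The following is a minimal numerical sketch of such a loop, under purely illustrative assumptions: quadratic agent costs with symmetric couplings (hence a potential game), exponential averaging as the coordinator's learning policy, and a proximal personalized incentive. All names, model choices, and parameter values are hypothetical and do not reproduce the paper's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)

N = 4                            # number of agents
a = rng.uniform(2.0, 4.0, N)     # curvature of each agent's local cost
b = rng.uniform(-1.0, 1.0, N)    # linear term of each agent's local cost
c = 0.3                          # symmetric coupling; c*(N-1) < min(a) keeps the game well behaved

def pseudogradient(x):
    # Stacked partial gradients d f_i / d x_i of the quadratic costs
    # f_i(x) = 0.5*a_i*x_i**2 + b_i*x_i + c*x_i*sum_{j != i} x_j
    return a * x + c * (x.sum() - x) + b

# Coordinator state and tuning (all values illustrative)
g_hat = np.zeros(N)   # learned pseudogradient estimate
beta = 0.2            # averaging rate of the learning policy
gamma = 0.1           # incentive step size
rho = 5.0             # proximal weight of the personalized incentive

x = rng.uniform(-1.0, 1.0, N)    # agents' initial decisions
for k in range(500):
    # Coordinator layer: integrate noisy, sporadic feedback into the estimate
    if rng.random() < 0.8:       # feedback arrives only sporadically
        feedback = pseudogradient(x) + 0.05 * rng.standard_normal(N)
        g_hat = (1.0 - beta) * g_hat + beta * feedback

    # Coordinator layer: personalized incentives as proximal targets,
    # one learned-gradient step ahead of the current decisions
    target = x - gamma * g_hat

    # Agent layer: each agent solves its extended game
    #   min_{x_i} f_i(x_i, x_{-i}) + 0.5*rho*(x_i - target_i)**2,
    # which for these quadratic costs has the closed form below
    x = (rho * target - c * (x.sum() - x) - b) / (a + rho)

print("residual pseudogradient norm:", np.linalg.norm(pseudogradient(x)))

At the fixed point the incentive target coincides with the agents' decisions, so the residual pseudogradient vanishes up to the noise level, mirroring the "Nash equilibrium up to a constant, yet adjustable, error" behavior described in the abstract.
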
Files in this record:

Personalized_Incentives_as_Feedback_Design_in_Generalized_Nash_Equilibrium_Problems.pdf
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 3.03 MB
Format: Adobe PDF
Access: not available (a copy may be requested)
2203.12948.pdf
Type: Preprint
License: Creative Commons
Size: 1.49 MB
Format: Adobe PDF
Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/20.500.11771/25787
Citations
  • Scopus: 0