Multi-Agent Active Learning for Distributed Black-Box Optimization

M. Zhu; A. Bemporad
2023-01-01

Abstract

Global optimization problems over a multi-agent network are addressed in this letter. The objective function, possibly subject to global constraints, is not known analytically but can only be evaluated at query points. It is assumed that the cost function to be minimized is the sum of local cost functions, each of which can be evaluated only by the associated agent. At each iteration, the proposed algorithm asks the agents first to fit a surrogate function to their local samples and then to cooperatively minimize an acquisition function in order to generate new points to query. In this letter the acquisition function is built as the sum of the local surrogates, so as to exploit the knowledge encoded in these estimates, plus an exploration term that drives the minimization towards unexplored regions of the feasible space, where better values of the objective function may be found. The proposed scheme is a distributed version of the existing algorithm GLIS (GLobal optimization based on Inverse distance weighting and Surrogate radial basis functions) and shares its low complexity and its competitiveness with respect to, for instance, Bayesian optimization (BO). Experimental results on benchmark problems and on the distributed calibration of Model Predictive Controllers (MPC) for autonomous driving applications demonstrate the effectiveness of the proposed method.
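The following is a minimal Python sketch of one iteration of the surrogate-plus-exploration scheme outlined in the abstract, under stated assumptions: the inverse-multiquadric RBF kernel, the inverse-distance-weighting exploration term, the weight delta, and the random multistart used to minimize the acquisition are all illustrative choices, not the authors' implementation. In particular, the paper minimizes the acquisition cooperatively among the agents, whereas here it is minimized centrally for brevity.

```python
# Illustrative sketch (assumptions noted above), not the authors' exact method:
# each agent fits an RBF surrogate to its local samples; the next query point
# minimizes the sum of local surrogates minus an IDW exploration term.
import numpy as np

def fit_rbf_surrogate(X, y, eps=1.0):
    """Fit an inverse-multiquadric RBF surrogate to one agent's samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = 1.0 / np.sqrt(1.0 + eps**2 * d2)
    coef = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)  # small ridge for stability
    def surrogate(x):
        d2x = ((x[None, :] - X) ** 2).sum(-1)
        return (1.0 / np.sqrt(1.0 + eps**2 * d2x)) @ coef
    return surrogate

def idw_exploration(x, X_all):
    """Inverse-distance-weighting term: large far from existing samples, zero at them."""
    d2 = ((x[None, :] - X_all) ** 2).sum(-1)
    if np.any(d2 == 0.0):
        return 0.0
    return (2.0 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))

def next_query(agent_data, bounds, delta=1.0, n_candidates=2000, seed=None):
    """Pick the next query point by minimizing sum of surrogates minus exploration."""
    rng = np.random.default_rng(seed)
    surrogates = [fit_rbf_surrogate(X, y) for X, y in agent_data]
    X_all = np.vstack([X for X, _ in agent_data])
    lb, ub = bounds
    cands = rng.uniform(lb, ub, size=(n_candidates, len(lb)))  # random multistart grid
    acq = np.array([sum(s(x) for s in surrogates) - delta * idw_exploration(x, X_all)
                    for x in cands])
    return cands[np.argmin(acq)]
```

In this sketch, `agent_data` is a list of per-agent sample sets `(X_i, y_i)`; the surrogates remain local objects, and only the selected query point would need to be shared back to the agents for evaluation at the next iteration.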
Keywords: Multi-agent networks; black-box optimization; distributed optimization; model predictive control; surrogate models

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/27958