We study the long-run conventions emerging in a stag-hunt game when agents are myopic best responders. Our main novel assumption is that errors converge to zero at a rate that is positively related to the payoff earned in the past. To fully explore the implications of this error model, we introduce a further novelty in the way we model the interaction structure, assuming that with positive probability agents remain matched in the next period. We find that, if interactions are sufficiently persistent over time, then the payoff-dominant convention emerges in the long run, while if interactions are sufficiently volatile, then the maximin convention can emerge even if it is not risk-dominant. We contrast these results with those obtained under two alternative error models: uniform mistakes and payoff-dependent mistakes.
|Title:||The evolution of conventions under condition-dependent mistakes|
|Publication date:||2019|
|Appears in the categories:||1.1 Journal article|