Fluid Rewards for a Stochastic Process Algebra

Tribastone, M.
2012

Abstract

Reasoning about the performance of models of software systems typically entails the derivation of metrics such as throughput, utilization, and response time. If the model is a Markov chain, these are expressed as real functions of the chain, called reward models. The computational complexity of reward-based metrics is of the same order as that of solving the Markov chain, making the analysis infeasible for large-scale systems. In the context of the stochastic process algebra PEPA, the underlying continuous-time Markov chain has been shown to admit a deterministic (fluid) approximation as the solution of an ordinary differential equation, which effectively circumvents state-space explosion. This paper is concerned with approximating Markovian reward models for PEPA with fluid rewards, i.e., functions of the solution of the differential equation. It shows 1) that the Markovian reward models for typical performance metrics converge asymptotically to their fluid analogues, and 2) through numerical tests, that the approximation yields satisfactory accuracy in practice.
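As an informal illustration of the idea described in the abstract (not the paper's own formulation), the following Python sketch integrates the fluid ODEs of a hypothetical PEPA-style client-server model and evaluates two typical fluid rewards, throughput and utilization, as functions of the ODE solution. The component structure, the rates lam, r, mu, and the population sizes are assumptions chosen purely for illustration; the minimum over the cooperating components' apparent rates stands in for PEPA's fluid treatment of a shared action.

from scipy.integrate import solve_ivp

# Hypothetical rates and populations, chosen for illustration only.
lam, r, mu = 1.0, 2.0, 0.5        # think, request, log rates
n_clients, n_servers = 100, 40

def fluid_ode(t, y):
    c_think, c_req, s_idle, s_log = y
    # Shared action "request": the fluid rate is the minimum of the
    # apparent rates of the two cooperating component populations.
    req_rate = r * min(c_req, s_idle)
    return [req_rate - lam * c_think,     # clients returning to thinking
            lam * c_think - req_rate,     # clients waiting to request
            mu * s_log - req_rate,        # servers becoming idle again
            req_rate - mu * s_log]        # servers logging the request

sol = solve_ivp(fluid_ode, (0.0, 50.0), [n_clients, 0, n_servers, 0])

# Fluid rewards: functions of the ODE solution rather than of the CTMC.
c_think, c_req, s_idle, s_log = sol.y[:, -1]
throughput = r * min(c_req, s_idle)       # request throughput
utilization = 1.0 - s_idle / n_servers    # fraction of busy servers
print(f"throughput ~ {throughput:.3f}, utilization ~ {utilization:.3f}")

On this instance the trajectories settle quickly, and the rewards are read off the ODE solution directly, rather than from the stationary distribution of a continuous-time Markov chain whose state space would grow with the client and server populations.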


Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11771/4067
Citations
  • Scopus: 26