ABM Benchmark

The frameworks' performance was tested with several model configurations, starting from a 100x100 field, 1000 agents, and 200 steps, which gives an agent density of 10%. Each subsequent configuration doubles the number of agents and enlarges the field so as to preserve the initial agent density:

  • Agents: 1000 - Field: 100x100
  • Agents: 2000 - Field: 141x141
  • Agents: 4000 - Field: 200x200
  • Agents: 8000 - Field: 282x282
  • Agents: 16000 - Field: 400x400
  • Agents: 32000 - Field: 565x565
  • Agents: 128000 - Field: 1131x1131
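
The field sizes above follow directly from the fixed 10% density. As a minimal sketch (an assumed helper, not taken from the actual benchmark scripts), the side of the square field can be derived from the agent count:

```rust
// Assumed helper: derive the side of the square field that keeps the
// agent density at the initial 10% for a given number of agents.
fn field_side(agents: u32, density: f64) -> u32 {
    // total cells = agents / density, rounded to avoid float error on
    // exact squares; the field side is the truncated square root
    let cells = (agents as f64 / density).round();
    cells.sqrt() as u32
}

fn main() {
    // Reproduces the configurations listed above.
    for agents in [1000u32, 2000, 4000, 8000, 16000, 32000, 128000] {
        let side = field_side(agents, 0.10);
        println!("Agents: {agents} - Field: {side}x{side}");
    }
}
```

For example, 2000 agents at 10% density require 20000 cells, and the truncated square root of 20000 is 141, giving the 141x141 field listed above.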

Each experiment was executed 10 times, repeating a run in case of failure, to collect the average execution time of each model.
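
The measurement protocol above can be sketched as a small harness (a hypothetical illustration; the actual benchmark scripts are the ones referenced below): run a model a fixed number of times, repeat any failed run without counting it, and average the wall-clock time of the successful runs.

```rust
use std::time::Instant;

// Hypothetical harness for the protocol described above: `run_model`
// stands in for launching any of the benchmarked engines.
fn average_time<F: FnMut() -> Result<(), String>>(runs: u32, mut run_model: F) -> f64 {
    let mut total_secs = 0.0;
    let mut done = 0;
    while done < runs {
        let start = Instant::now();
        match run_model() {
            Ok(()) => {
                // only successful runs contribute to the average
                total_secs += start.elapsed().as_secs_f64();
                done += 1;
            }
            Err(_) => { /* failure: repeat the run */ }
        }
    }
    total_secs / runs as f64
}
```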

ForestFire is the only example without a well-defined number of agents; instead, a simulation parameter controls the density (set to 70%). All the scripts and files used to benchmark the engines are available at ABM_comparison.

We compared our results with the most common frameworks for ABM:

  • MASON is a fast discrete-event multiagent simulation library with a core written in Java.
  • Agents.jl is a pure Julia framework for agent-based modeling.
  • Repast is a tightly integrated, richly interactive, cross-platform Java-based modeling system.
  • NetLogo is a multi-agent programmable modeling environment.
  • Mesa is an agent-based modeling framework written in Python.

There are two types of charts:

  • Time: this chart plots the average simulation time of each engine.
  • Speedup: this chart plots the speedup of krABMaga over the other engines.
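
As an illustrative definition (assumed from the chart description, not stated explicitly in the source), the speedup is the ratio of the two engines' average execution times on the same configuration:

```rust
// Assumed definition: speedup of krABMaga over another engine is the
// ratio of their average execution times for the same configuration.
// Values above 1.0 mean krABMaga completed the same model faster.
fn speedup(other_engine_secs: f64, krabmaga_secs: f64) -> f64 {
    other_engine_secs / krabmaga_secs
}
```

For instance, if another engine averages 4.0 s and krABMaga averages 2.0 s on the same model, the speedup plotted is 2.0.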

You can select one of the models using the radio buttons at the bottom of the chart, and choose which type of chart to display using the combobox. Clicking an engine in the top legend shows or hides its corresponding values on the chart.