You just built a model, everything works perfectly, and suddenly you get a request for a change (minor or major). You make the change, run the model, and things look good. But how can you be sure the model still works correctly in all possible cases?
The larger the model, the harder that question is to answer, but you can reduce the chances of introducing bugs into your simulation model when you change it if you follow certain practices.
Let’s assume here that you work alone on your model: when you are not part of a team, there is less pressure to test, and you are probably not using any version control such as Git or SVN.
Print The Results
To check that things are going well, in AnyLogic you can call the function traceln(msg) at strategic places in your model so you can clearly see whether you are getting the results you expect. This is the most obvious thing to do, and there is an article you can read to understand how to do it efficiently:
https://www.anylogic.com/blog/improve-model-testing-with-one-simple-function/
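A useful refinement of this idea is to put a switch in front of your trace calls so you can silence all diagnostic output before delivering the model. Here is a minimal plain-Java sketch of that pattern; the helper below is a hypothetical stand-in for AnyLogic's built-in traceln(), and the logged events are invented examples:

```java
public class Trace {
    // Plain-Java analogue of AnyLogic's traceln(): one guard flag
    // silences every diagnostic line at once before delivery.
    static boolean enabled = true;
    static int lines = 0;  // counts emitted trace lines (handy for checks)

    static void traceln(String msg) {
        if (enabled) {
            System.out.println(msg);
            lines++;
        }
    }

    public static void main(String[] args) {
        // Hypothetical checkpoints you would sprinkle around a model:
        traceln("t=8.0h: truck 1 arrived at dock");
        traceln("t=8.2h: forklift assigned to order 42");
    }
}
```

Flip `enabled` to false and the model runs clean, without hunting down and deleting every trace statement.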
Prepare your testing
In an ideal world, whenever you build a new feature, you want all your previous features to keep working. Unlike websites, where you can automate your testing with Selenium, simulations are much more complicated to automate, and in most cases you have to do part of the testing manually.
Step #1
Make a list of all the features that you want to test. For instance:
- 14 trucks arrive at the docks between 8:00am and 8:00pm
- Rain increases delivery times by 20%
- When the simulation ends, data is automatically exported to an Excel file
- When an order is created, a forklift is immediately assigned to that order
- The forklifts avoid each other when they are on the same path
- Etc.
You don’t need to build this list while you are developing the model. Build it when you want to test your whole model for the first time, or whenever you are testing something. You can, of course, add and remove tests at any point.
Step #2
Create an experiment that tests everything and logs the results. For instance, you can create a parameter variation experiment or a custom experiment that covers all the features you need to test.
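The idea can be sketched in plain Java as a table of named checks that a testing experiment would run after the model finishes. Everything below is an assumption for illustration: `trucksArrived` and `rainFactor` are hypothetical stand-ins for values you would actually read off the model (e.g. from the root agent of a custom experiment), and the two checks mirror items from the feature list above:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class FeatureTests {
    // Hypothetical post-run model outputs; in a real custom experiment
    // you would read these from the model after engine.runFast().
    static int trucksArrived = 14;
    static double rainFactor = 1.2;

    // One named check per feature on the list.
    static Map<String, BooleanSupplier> tests = new LinkedHashMap<>();
    static {
        tests.put("14 trucks arrive between 8:00 and 20:00",
                  () -> trucksArrived == 14);
        tests.put("rain increases delivery times by 20%",
                  () -> Math.abs(rainFactor - 1.2) < 1e-9);
    }

    // Runs every check, logs PASS/FAIL, and returns the failure count.
    static int runAll() {
        int failures = 0;
        for (Map.Entry<String, BooleanSupplier> t : tests.entrySet()) {
            boolean ok = t.getValue().getAsBoolean();
            System.out.println((ok ? "PASS: " : "FAIL: ") + t.getKey());
            if (!ok) failures++;
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println(runAll() + " failure(s)");
    }
}
```

Keeping the checks in one table means the log itself becomes your test report: a single FAIL line tells you exactly which feature broke.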
Step #3
Get ready to test the rest manually. Sometimes you need to review things visually and you can’t automate the test: the animation has to look the way you expect, or the buttons have to respond as expected. Make a clear distinction in your list between the things you must test manually and the ones you can test automatically.
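That manual/automatic distinction can live right in your checklist data. Here is a minimal sketch, again in plain Java with invented example items, that tags each feature and prints the ones still needing a human pass before delivery:

```java
import java.util.List;

public class Checklist {
    static class Item {
        final String feature;
        final boolean automated;
        Item(String feature, boolean automated) {
            this.feature = feature;
            this.automated = automated;
        }
    }

    // Hypothetical split of the feature list into automated vs manual checks.
    static List<Item> items = List.of(
        new Item("data exported to Excel at end of run", true),
        new Item("forklift animation avoids overlaps", false),
        new Item("UI buttons respond as expected", false));

    // Prints the items needing a manual pass; returns how many there are.
    static long printManual() {
        long n = 0;
        for (Item i : items) {
            if (!i.automated) {
                System.out.println("MANUAL: " + i.feature);
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(printManual() + " manual check(s) before delivery");
    }
}
```

Printing the manual items as a short pre-delivery list makes it harder to forget one when you are in a hurry.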
Be disciplined
Ok, now you have finished your new feature and it’s time to test everything. This is not a fun task, but it is absolutely necessary if you want to be sure you are not sending a buggy model to your boss, client, or professor. Run the testing experiment(s) and check that you get the right results and that your model behaves as expected. Then run the model and go through the things you need to test manually.
Don’t avoid tests
I have lost count of how many times a problem appeared in a part of the model that I thought my changes couldn’t possibly affect. That said, you don’t need to test everything all the time. Just do it when you feel ready to send the model to your stakeholder.
Conclusion
This is the method I use myself to test models. Sometimes I automate things even further by simulating a real user of the model, with random behavior, to see if I get an error or unexpected result, but I do that rarely and only with models that have heavy user interaction.
Of course, none of this guarantees that you won’t have bugs, but it dramatically reduces the chances of shipping obvious ones that will embarrass you when your stakeholder runs the model. So be cautious and test your models.