After compilation, the next step is running automated tests. Best practice would be to run unit tests, but unfortunately it's not always easy to write tests for single modules, so the tests tend to become something closer to integration tests. Since in our case we haven't made very strict rules about this, I'll simply adopt the vocabulary from How Google Tests Software and call them Small and Medium tests.
After these shorter tests are run, the following round includes Large tests. These are end-to-end tests that exercise the functionality in many realistic use cases, or even the integration between different products.
Going into a bit more detail: a year ago the execution time for all Small and Medium tests was around 10 hours. They were executed on physical machines, orchestrated by TeamCity. The tests were actually run for both x86 and x64 builds, with around 5,000 tests for each bitness.
The Large end-to-end tests took 8-12 hours as well, but they had not been automated. Visibility into these tests was also rather poor: they were executed by the system team, but the results weren't transparently available. The best way to find out whether the tests were passing was to ask a team member.
The radical change happened when the build machines were virtualized. Instead of a few high-clock-rate physical machines, we ended up with a set of servers filled with Hyper-V virtual machines, each configured with 8 processors. We also made some modifications to run the tests in parallel (previously they were all run consecutively). The CI system was moved from TeamCity to Atlassian Bamboo.
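The move from consecutive to parallel test execution can be sketched with a simple worker pool. The batch names and the echo-based "runner" below are purely illustrative, not our actual test framework:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

# Hypothetical test batches; the real suites are split differently.
BATCHES = ["smoke", "core", "io", "ui"]

def run_batch(name: str) -> int:
    # Each batch runs as its own process, so a pool of workers
    # executes the batches concurrently instead of one after another.
    result = subprocess.run(["echo", f"running {name} tests"])
    return result.returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    codes = list(pool.map(run_batch, BATCHES))

# The build fails if any batch fails.
success = all(code == 0 for code in codes)
```

With independent batches, the wall-clock time approaches the duration of the slowest batch rather than the sum of all of them.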
Build times (or build plus test execution) dropped significantly. One thing we also learned was that the virtualization platform makes a big difference: with identical settings, VMware was almost 20% faster than Hyper-V. The good thing is that both can be scripted nicely using PowerShell. We created VMware build machines with 16 logical processors.
During the same effort we also managed to get our Large tests running in parallel in our continuous integration system. This wasn't as straightforward, because the test framework used fixed file names and the test runs interfered with each other. But in the end all the problems were resolved and the tests now run automatically.
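One generic way around fixed file names is to give each parallel run its own private working directory. This is a minimal sketch of the idea, not the actual fix made in our framework:

```python
import os
import tempfile

def run_isolated(run_id: str) -> str:
    # Each test run writes its "fixed" file inside a private
    # directory, so concurrent runs can no longer clobber each other.
    workdir = tempfile.mkdtemp(prefix=f"testrun-{run_id}-")
    fixed_path = os.path.join(workdir, "results.xml")  # same name, unique dir
    with open(fixed_path, "w") as f:
        f.write(f"<results run='{run_id}'/>")
    return fixed_path

paths = [run_isolated(str(i)) for i in range(3)]
# All three runs used the file name "results.xml" without conflict.
distinct = len(set(paths)) == 3
```

The test framework keeps its hard-coded file name; only the directory it resolves against becomes unique per run.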
After the changes, our build plus Small and Medium tests now take 30-45 minutes, depending a bit on bitness and other factors. Large tests are executed in about 45 minutes. So, for any change to the codebase we now get the following phases:
- Changes are checked into the continuous integration system and compiled.
- Small and Medium tests are executed.
- Large tests are executed.
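The phases above, with the installer build running alongside the test step, could be sketched like this (the stage names and return values are illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

def compile_sources() -> str:
    return "binaries"

def small_medium_tests(artifact: str) -> str:
    return f"tested({artifact})"

def build_installer(artifact: str) -> str:
    return f"installer({artifact})"

def large_tests(artifact: str) -> str:
    return f"e2e({artifact})"

artifact = compile_sources()                            # phase 1
with ThreadPoolExecutor(max_workers=2) as pool:
    tests = pool.submit(small_medium_tests, artifact)   # phase 2
    installer = pool.submit(build_installer, artifact)  # alongside phase 2
    tested = tests.result()
    setup = installer.result()
result = large_tests(tested)                            # phase 3
```

Since the installer build finishes well before the tests, the test execution determines how long phase 2 takes overall.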
In parallel with step 2, we have other builds that produce an installer for manual testing. Their build time is much shorter, so the test execution still dictates the overall build length.
Before our latest efforts, steps 1-3 would have taken almost 22 hours, including some manual steps. Now they take around 1.5 hours for any change, everything is fully automated, and the results are transparently available to anyone at any time. Since nothing is ever enough, we aim to go even further, but I think the current results are already worth a small celebration!