A.I.-based testing
Software that optimizes itself
Autonomous, A.I.-based testing at Cognizant Mobility
Software development has made enormous progress in recent years: more and more automated processes and a new, agile mindset among developers. Thanks to machine learning, we are even talking about the beginning of a paradigm shift in software development. However, one area has always been the bottleneck: testing.
Testing software in conjunction with hardware consists mainly of writing so-called test cases based on previously defined requirements, performing the test and subsequently analyzing any errors that have occurred. The more complex the systems become, the clearer one thing gets: it is an almost impossible task to test all conceivable scenarios and to find, among the multitude of input variables, exactly the combination that causes the error. It becomes even more complicated when you consider that the test itself can already be faulty.
This means that companies are constantly confronted with high resource costs resulting from manual work steps and the time-consuming use of test hardware. Most of the errors that occur follow recurring patterns that are difficult to detect manually - and therefore force testers to repeat the same work again and again. A heterogeneous system landscape does the rest.
While many companies from a wide range of industries engage in "A.I. washing" (similar to "greenwashing", in which PR departments market an undeserved green image of their company), artificial intelligence can create actual added value in testing. Only by applying the full range of machine and deep learning methods can recurring error patterns be distinguished from new ones, putting an end to the tedious duplication of work by the testers. However, there are still some steps to be taken before this happens.
Model-based testing
The test cases are modeled with the help of visual representations (e.g. in the modeling language UML). This saves time because, once modeled, many individual test cases can be derived automatically. If the system requirements change, a change to the model can automatically adapt all affected test cases. Test data can also be generated from the model and the test run can be prepared. The model-based approach also supports increasingly demanded agile development methods such as Scrum: thanks to the gain in speed, test cases can be derived in parallel with feature development within the same sprint (e.g. a two-week cycle). A minimal sketch of this derivation idea follows below.
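To make this more concrete, here is a minimal, purely illustrative Python sketch of the basic idea: a feature is described as a small state-machine model, and individual test cases are derived automatically by enumerating paths through it. The model, states and events are invented for this example and do not come from a real project; in practice the model would typically be maintained in UML and processed by a dedicated model-based testing tool.

```python
# Purely illustrative model of an ECU feature as states and events.
MODEL = {
    "Off":     {"ignition_on": "Standby"},
    "Standby": {"request_heat": "Heating", "ignition_off": "Off"},
    "Heating": {"target_reached": "Standby", "ignition_off": "Off"},
}

def derive_test_cases(model, start="Off", max_depth=4):
    """Enumerate event sequences up to max_depth; each path becomes one test case."""
    cases = []
    stack = [(start, [])]
    while stack:
        state, path = stack.pop()
        if path:                      # every non-empty path is a candidate test case
            cases.append(path)
        if len(path) < max_depth:     # bound the search to keep the example small
            for event, target in model.get(state, {}).items():
                stack.append((target, path + [(state, event, target)]))
    return cases

if __name__ == "__main__":
    for i, case in enumerate(derive_test_cases(MODEL), start=1):
        steps = " -> ".join(f"{src} --{event}--> {dst}" for src, event, dst in case)
        print(f"TC{i:03d}: {steps}")
```

The appeal of this approach is visible even in such a toy example: if a transition in the model changes, all derived test cases change with it, without anyone rewriting them by hand.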
In A Nutshell
- Model-based Testing
- Continuous Integration
- Automated Testing from ECU and Backend to Frontend
- Jenkins
- Artificial Intelligence
Automated test procedure with Continuous Integration
Before the actual test run, a selection of the previously generated test cases must be made. The correct test hardware - i.e. the ECU on which a specific feature will later run - must also be reserved. After the test sequence, the results must be exported together with the reports. To ensure that this can run in parallel throughout the project, the Cognizant Mobility testing experts rely on continuous integration environments such as Jenkins and CircleCI. The approach behind it is both simple and ingenious: changed test cases are automatically pushed through the entire process by the tool, without the need for a manual trigger. The sketch below outlines the stages of such a run.
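The following Python sketch illustrates, in a simplified and purely hypothetical form, the stages such a CI job could walk through: selecting the changed test cases, reserving an ECU from a pool, executing the tests and exporting the reports. The class and function names (EcuPool, execute_on_ecu, export_reports) are invented for illustration only; the real pipeline definition would live in the CI tool itself, e.g. as a Jenkins pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class TestRun:
    test_case: str
    passed: bool = False
    report: dict = field(default_factory=dict)

class EcuPool:
    """Very small stand-in for a hardware reservation service."""
    def __init__(self, ecus):
        self.free = list(ecus)

    def reserve(self):
        if not self.free:
            raise RuntimeError("no ECU available")
        return self.free.pop()

    def release(self, ecu):
        self.free.append(ecu)

def execute_on_ecu(case, ecu):
    # Placeholder: in reality this would stimulate the ECU and read back signals.
    return True

def export_reports(results):
    for r in results:
        print(r.report)

def run_pipeline(changed_test_cases, pool):
    """Select changed test cases, reserve hardware, execute, export results."""
    results = []
    for case in changed_test_cases:
        ecu = pool.reserve()                        # stage 1: reserve test hardware
        try:
            run = TestRun(test_case=case)
            run.passed = execute_on_ecu(case, ecu)  # stage 2: run on the ECU
            run.report = {"ecu": ecu, "case": case, "passed": run.passed}
            results.append(run)                     # stage 3: collect results
        finally:
            pool.release(ecu)
    export_reports(results)                         # stage 4: export reports
    return results

if __name__ == "__main__":
    pool = EcuPool(["ECU-A", "ECU-B"])
    run_pipeline(["TC001", "TC002", "TC003"], pool)
```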
The biggest advantage lies in the error analysis, and this becomes clear when you consider the complexity a tester faces today. Each failed test must be analyzed manually, one by one, to find out why it failed - based on its parameters and time series. Several tests can fail because of the same parameter or because they depend on each other. The function under test may also be flawless and fail simply because of faulty test hardware or an insufficiently modeled test case. These sources of error are very difficult to distinguish from one another, and practically impossible to identify across domain boundaries. Once the faulty hardware or the faulty test case has been corrected, manual post-tests are performed to check the actual function once again. The errors finally confirmed are logged and returned to the software development department.
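One conceivable first step toward automating this analysis is to group failed tests by a normalized error signature, so that failures caused by the same parameter surface as one recurring pattern instead of many individual findings. The sketch below is a deliberately simple illustration of that idea; the record fields and the coarse time bucketing are invented, and a production system would use far richer features (full time series, hardware state, test-case metadata).

```python
from collections import defaultdict

def signature(failure):
    """Reduce a failure record to the features that characterize its likely cause."""
    return (
        failure["error_code"],
        failure["failed_parameter"],
        round(failure["time_of_failure_s"]),  # coarse bucket: whole seconds
    )

def cluster_failures(failures):
    """Group failed test cases that share the same error signature."""
    clusters = defaultdict(list)
    for f in failures:
        clusters[signature(f)].append(f["test_case"])
    return clusters

if __name__ == "__main__":
    failures = [
        {"test_case": "TC001", "error_code": "E42", "failed_parameter": "bus_voltage", "time_of_failure_s": 3.14},
        {"test_case": "TC007", "error_code": "E42", "failed_parameter": "bus_voltage", "time_of_failure_s": 3.18},
        {"test_case": "TC019", "error_code": "E07", "failed_parameter": "timeout",     "time_of_failure_s": 9.00},
    ]
    for sig, cases in cluster_failures(failures).items():
        label = "recurring pattern" if len(cases) > 1 else "possibly new error"
        print(sig, "->", cases, f"({label})")
```

Even this naive grouping already separates the two failures sharing one root cause from the genuinely new one; machine and deep learning methods extend the same principle to patterns that no hand-written signature would capture.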
A strong A.I. can detect patterns in the errors and automatically draw conclusions about their source. Not only is the similarity between individual errors uncovered at high speed; new test cases are generated completely autonomously and post-tests are initiated. Today, it is not yet possible to estimate exactly how high the savings potential of fully autonomous testing will be. However, initial estimates by Cognizant Mobility experts suggest that there is a great deal to be gained. After all, it is nothing less than a revolution in this area of development. Ultimately, the vision of software that writes and improves itself is within reach.