Deep Learning (DL) is transforming many fields these days, such as computer vision, natural language processing, and machine translation, and it now underpins many science-driven products and technology companies.
In this post, Next Generation Automation shares its experience building Deep Learning solutions for GUI testing for its clients in the North America region.
Current GUI testing largely falls into Functional Testing, which focuses on the system's external behavior and its elements, and Structural Testing, which focuses on the internal implementation, such as business workflows.
These methods are susceptible to changes and usually involve extensive automation efforts. Cross-screen testing, as in desktop Web, mobile Web, or mobile app testing, accentuates these risks and costs. Testing across multiple operating systems, devices, screen resolutions, and browser versions quickly becomes a huge challenge that is difficult to execute and govern.
Quality control measures, such as coverage-based or usage-based testing, address some of these uncertainties, but only to a degree, and they come at a cost to overall quality.
As product developers wrap up GUI implementation, quality engineers begin breaking down the screen into its elements, identifying locators for each UI component and writing large pieces of code asserting the elements' aspects, such as dimension, position, and color, to make sure the GUI implementation matches the design. Even a slight design change or refactoring of product code can fail the regression suites and may involve significant rework for QE to fix the automation code.
As a result, writing and maintaining test suites and scripts for multiple platforms takes considerable time and effort and comes with the risk of reducing test scope.
Contemporary developments in the Deep Learning space can potentially unlock efficiencies in GUI testing and across the software lifecycle. A recent proof of concept, described below, showed this approach to be realistic and practical.
Deep Learning Technology
Deep Learning simulates the human way of finding errors or anomalies. Humans draw on past experience and conditioning to make decisions. Machines, given the proper training or conditioning, can detect errors with a precision that surpasses humans.
The process of learning from training data and validating against test data is called modelling. When you model an application under test, the algorithm first takes a set of examples, called the training data, and learns insights about the application. Once learning is complete, the algorithm is given test data to validate what it has learned.
The more robust your algorithm, the more stable its predictions will be for any new scenario.
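The train-then-validate loop above can be sketched in a few lines. The toy data and the nearest-centroid classifier below are illustrative stand-ins, not Next Generation Automation's actual model:

```python
# Minimal sketch of modelling: learn from training data, validate on test data.
# The nearest-centroid classifier is a deliberately simple stand-in.

def train_centroids(training_data):
    """Learn one centroid per class from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in training_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify by the nearest class centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy "pass"/"fail" examples: (feature vector, label).
training_data = [([0.0, 0.1], "pass"), ([0.1, 0.0], "pass"),
                 ([0.9, 1.0], "fail"), ([1.0, 0.9], "fail")]
test_data = [([0.05, 0.05], "pass"), ([0.95, 0.95], "fail")]

model = train_centroids(training_data)
# Held-out accuracy is what validates the learned model.
accuracy = sum(predict(model, f) == y for f, y in test_data) / len(test_data)
print(accuracy)
```

The held-out test data is never seen during training; its accuracy is the measure of how stable the model's predictions will be on new scenarios.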
Neural Nets (NN)
A Neural Net is a group of logically connected entities that transfer a signal from one end to another. Similar to the brain cells, or neurons, that connect to give the human brain its cognitive powers, these logically connected entities, called perceptrons, allow signals to move across the network to perform formidable computations, for example, differentiating a lily from a lotus or understanding the different signals in traffic.
These operations become possible when we expose our Neural Nets to a significant amount of data. A deep neural net (DNN) stacks multiple such layers, arranged in the order shown in the figure above. This mathematical lattice is the core structure that drives complex business workflows.
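A tiny sketch of perceptrons passing signals across layers can make this concrete. The weights below are hand-picked to compute XOR, purely to illustrate how connected units perform a computation no single unit can:

```python
# Sketch of signals flowing through connected perceptrons.
# Weights are hand-picked (not learned) to implement XOR, for illustration.

def perceptron(inputs, weights, bias):
    """Fire (1) if the weighted sum of incoming signals crosses the bias."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def forward(x):
    """Two-layer net: hidden perceptrons feed an output perceptron."""
    hidden = [perceptron(x, [1, 1], -0.5),    # fires if either input is on
              perceptron(x, [1, 1], -1.5)]    # fires only if both are on
    return perceptron(hidden, [1, -2], -0.5)  # OR minus AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print((a, b), forward([a, b]))
```

In a real deep net the weights are learned from data rather than hand-picked, and the layers are far wider and deeper, but the signal flow is the same.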
The suggested methodology begins with capturing the entire webpage as an image. This image is then divided into multiple UX components.
The division into UX components, or groups of components, helps in generating training and test data to feed our model. Once the model is ready, it can test any new UX component across browsers, resolutions, and additional test dimensions by feeding the model an image of the UX component at the desired specification.
The model classifies whether the UX component passes the desired quality criteria or not. This process of assigning each image to one of the classes (passing or failing the quality criteria) is called classification.
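The pipeline above can be sketched end to end: cut the full-page screenshot into component-sized tiles and run each tile through a classifier. Here the screenshot is a toy 2D brightness grid and the classifier is a stand-in rule; a real system would use a trained model:

```python
# Sketch of the pipeline: screenshot -> UX-component tiles -> pass/fail.
# The "screenshot" is a toy brightness grid; classify() is a stand-in rule.

def split_into_tiles(screenshot, tile_h, tile_w):
    """Divide the page image into non-overlapping UX-component tiles."""
    tiles = []
    for r in range(0, len(screenshot), tile_h):
        for c in range(0, len(screenshot[0]), tile_w):
            tiles.append([row[c:c + tile_w]
                          for row in screenshot[r:r + tile_h]])
    return tiles

def classify(tile):
    """Stand-in classifier: flag tiles that are mostly blank (mean < 0.2)."""
    cells = [v for row in tile for v in row]
    return "pass" if sum(cells) / len(cells) >= 0.2 else "fail"

screenshot = [[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]]

results = [classify(t) for t in split_into_tiles(screenshot, 2, 2)]
print(results)
```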
Training data and test data creation
Training and test data created by automated modification of UX components taken from the webpage wireframes. Based on the design guidelines and the test variations, potential flaws introduced in direct correlation to the design input. These flawed design mockups are manifested as images. Proper labeling of these images ensure proper organization of test data. Once minimal set of images generated, model can be trained for right UI predictions.
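One hedged way to sketch this generation step: take a component specification from the wireframe, emit the correct version labelled "pass", and apply automated mutations that produce deliberately flawed variants labelled "fail". The component fields and mutations below are illustrative, not the actual design guidelines:

```python
# Sketch of automated training-data generation from a wireframe component.
# Fields and mutation choices are illustrative examples.

import copy

baseline = {"name": "login_button", "x": 40, "y": 200,
            "width": 120, "height": 36, "color": "#0066CC"}

def mutate(component, field, value):
    """Return a copy of the component with one field deliberately broken."""
    flawed = copy.deepcopy(component)
    flawed[field] = value
    return flawed

def generate_examples(component):
    """Yield (component_spec, label) pairs for training a classifier."""
    yield component, "pass"
    yield mutate(component, "x", component["x"] + 15), "fail"          # shifted
    yield mutate(component, "width", component["width"] // 2), "fail"  # squashed
    yield mutate(component, "color", "#FF0000"), "fail"                # wrong color

dataset = list(generate_examples(baseline))
print(len(dataset), sum(1 for _, label in dataset if label == "fail"))
```

In practice each specification would then be rendered to an image, so the labels attach to pictures of correct and flawed components rather than to raw dictionaries.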
Based on the training data and the complexity of the test scenarios, different models such as Convolutional Neural Nets (CNNs), Support Vector Machines (SVMs), or Random Forests (RFs) can be chosen. Once a model is selected, it can be trained to capture GUI defects.
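The selection heuristic can be sketched as a small decision rule. The thresholds and criteria below are assumptions for illustration, not prescriptive guidance:

```python
# Hedged sketch of model selection for a GUI-testing dataset.
# Thresholds are illustrative assumptions, not tuned recommendations.

def choose_model(n_examples, raw_images):
    """Suggest a model family from dataset size and input type."""
    if raw_images and n_examples >= 10_000:
        return "CNN"            # enough data to learn visual features directly
    if n_examples >= 1_000:
        return "Random Forest"  # robust on moderate amounts of feature data
    return "SVM"                # small datasets with well-separated classes

print(choose_model(50_000, raw_images=True))
print(choose_model(2_000, raw_images=False))
print(choose_model(300, raw_images=False))
```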
Deep Learning Model Benefits and Learnings
Traditional approaches and tools come at a high cost to the individual engineer. Ramping up on some test applications can take more than a week, and proficiency takes much longer. Machine Learning calls for a different developer skill set and removes the need to master many traditional validation and verification techniques and tools, such as Selenium WebDriver or the iOS and Android drivers.
The new approach eliminated the need for deep and intimate domain knowledge. A new QA engineer was able to ramp up on Next Generation Automation's Deep Learning models in a matter of a day or two and start generating test data for training an ML model.
ML-based testing helps a single QA engineer prepare test automation to run against the main UX components in a day or two. Test teams usually invest multiple weeks to achieve such coverage with the traditional UI testing approach.
Next Generation Automation's Deep Learning models kicked off the quality assurance process early on, using design mockups. Training the model with the available wireframes allowed QA teams to begin quality engineering work before substantial development had even started, making implementation details far less important to quality engineering teams.
Some findings were particularly prominent: it became clear that the defects detected by the deep learning model would have been practically impossible to capture by any other means of manual or automated testing.
The model produced a classification score per asserted output. These results allowed the quality engineering teams to focus their attention on the GUI artifacts with the highest probability of having a fault.
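This triage step can be sketched as sorting components by their fault probability so engineers review the riskiest artifacts first. The component names and scores below are made up for illustration:

```python
# Sketch of score-based triage: review the highest-risk GUI artifacts first.
# Component names and fault probabilities are illustrative only.

results = [
    {"component": "nav_bar",      "fault_probability": 0.12},
    {"component": "login_button", "fault_probability": 0.91},
    {"component": "footer",       "fault_probability": 0.04},
    {"component": "hero_image",   "fault_probability": 0.67},
]

# Review queue: artifacts above a threshold, most likely faulty first.
queue = sorted((r for r in results if r["fault_probability"] >= 0.5),
               key=lambda r: r["fault_probability"], reverse=True)
print([r["component"] for r in queue])
```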
Maintaining test suites and scripts for several platforms takes a considerable amount of time and effort, and it comes at a high risk of reducing test scope when time is of the essence. Even a slight design change or refactoring of product code can fail the regression suites and may involve significant rework for a quality engineer to fix the automation code. The Next Generation Automation ML process became agnostic to implementation details and less sensitive to the platforms it runs on.
At the same time, QA teams were excited about using these innovative techniques and unleashing their potential. It inspired engineers to hone their skills and learn new tools and approaches.
At Next Generation Automation, the team has developed some of the best Deep Learning models to help clients test the UI of complex business applications rendered across multiple platforms (Android, iOS, Windows, Linux) and multiple devices supporting multiple browsers, such as Chrome, IE, Firefox, and Safari.
For a detailed discussion, please send an expression of interest to email@example.com.
Building better QA for tomorrow