DETAILED READ SECTION

Learn Framework Design Best Practices



Here are 23 framework design best practices that any automation engineer can put into action while designing a test automation framework for an assigned project:


1. Configurability

Configurable items such as the application, URLs, versions, paths, IPs, etc. used by a script should be kept in an external file. Once deployed to a system, no manual configuration changes should be required; scripts should pick up the required settings automatically.
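A minimal sketch of this idea using the JDK's own `java.util.Properties`; the key names (`base.url`, `browser`) are hypothetical, and a real framework would point the reader at an external `config.properties` file instead of a string:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Configurable items (URLs, browser, timeouts) live in an external source,
// not in the scripts themselves.
class FrameworkConfig {
    private final Properties props = new Properties();

    FrameworkConfig(Reader source) {
        try {
            props.load(source);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Missing keys fall back to a default, so a fresh deployment works
    // without manual edits.
    String get(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }
}
```

In practice the `Reader` would wrap a file shipped alongside the scripts, so changing an environment means editing that file, never the code.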


2. Setup Information

Any kind of setup should be done in a method annotated with @Before, or a similar annotation.

This way, the framework knows these actions need to be executed before the automated tests run.


Examples of such setup information: the type of browser, which URL should be opened, which timeout should be respected, whether the browser should be maximized, and so on.
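To illustrate the mechanism (not JUnit itself), here is a hypothetical miniature of what a setup annotation does: the runner finds methods carrying the annotation and invokes them before the test method. The `@BeforeEach` annotation and `MiniRunner` below are illustrative stand-ins, not real JUnit/TestNG classes:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Stand-in for JUnit's @Before / @BeforeEach.
@Retention(RetentionPolicy.RUNTIME)
@interface BeforeEach {}

class LoginTests {
    static StringBuilder log = new StringBuilder();

    @BeforeEach
    public void setUp() {            // browser type, URL, timeouts would go here
        log.append("setup;");
    }

    public void testLogin() {
        log.append("test;");
    }
}

class MiniRunner {
    // Invoke every @BeforeEach-annotated method, then the named test method.
    static void run(Object instance, String testName) {
        try {
            for (Method m : instance.getClass().getDeclaredMethods()) {
                if (m.isAnnotationPresent(BeforeEach.class)) m.invoke(instance);
            }
            instance.getClass().getMethod(testName).invoke(instance);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Real frameworks do essentially this, which is why setup code belongs in the annotated method rather than at the top of each test.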


3. Test Folder and Source Folder


All test classes should be saved inside a folder called “test”, which MUST mirror the source folder: it follows the same structure as the main project folder, but contains only tests.


For example:

Main source folder = Project A > Sanity > src > main > java

Main test folder = Project A > Sanity > src > test > java


4. Maintain Object identification repository


The most common issues faced during automation are object identification changes. The framework should make such changes easy to patch.


This can be achieved by storing all object identification settings at a shared location, in the form of an external XML file, Excel file, database, or proprietary automation format.
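A minimal sketch of such a shared repository, assuming a properties-style external source (an XML file or database would be loaded the same way behind this interface); the logical name `login.button` and the locator value are hypothetical:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Shared object repository: when the application UI changes, only this
// external mapping needs patching, not every test script.
class ObjectRepository {
    private final Properties locators = new Properties();

    ObjectRepository(Reader source) {
        try {
            locators.load(source);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Tests refer to elements by logical name; the physical locator is
    // resolved here, in one place.
    String locatorFor(String logicalName) {
        String value = locators.getProperty(logicalName);
        if (value == null) {
            throw new IllegalArgumentException("No locator mapped for: " + logicalName);
        }
        return value;
    }
}
```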


5. Status monitoring using debug logs and Exception Handling


A framework should allow monitoring of the execution status in real time and should be capable of sending alerts in case of failure, using exception handling. This ensures a quick turnaround in the event of a failure.
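One way to sketch this, using only `java.util.logging`: each step is logged as it runs, and a failure is caught and routed to an alert callback. The `alertChannel` hook is an assumption standing in for e-mail, chat, or dashboard notification:

```java
import java.util.function.Consumer;
import java.util.logging.Level;
import java.util.logging.Logger;

// Real-time status monitoring via debug logs plus exception handling:
// failures never pass silently, and an alert fires immediately.
class StepExecutor {
    private static final Logger LOG = Logger.getLogger("framework");
    private final Consumer<String> alertChannel;   // hypothetical alert hook

    StepExecutor(Consumer<String> alertChannel) {
        this.alertChannel = alertChannel;
    }

    boolean run(String stepName, Runnable step) {
        try {
            LOG.info("Starting step: " + stepName);
            step.run();
            LOG.info("Step passed: " + stepName);
            return true;
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "Step failed: " + stepName, e);
            alertChannel.accept(stepName + " failed: " + e.getMessage());
            return false;
        }
    }
}
```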


6. Reporting


The framework should support HTML/Excel/PDF report formats, with pass/fail details for each test case, suite, and test run.
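As a minimal sketch of the HTML variant (real frameworks typically delegate to a reporting library such as ExtentReports; the table layout below is a deliberate simplification):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal HTML report builder: one table row per test case with its
// pass/fail status, preserving execution order.
class HtmlReport {
    private final Map<String, Boolean> results = new LinkedHashMap<>();

    void record(String testCase, boolean passed) {
        results.put(testCase, passed);
    }

    String render(String suiteName) {
        StringBuilder html = new StringBuilder("<html><body><h1>" + suiteName + "</h1><table>");
        for (Map.Entry<String, Boolean> e : results.entrySet()) {
            html.append("<tr><td>").append(e.getKey()).append("</td><td>")
                .append(e.getValue() ? "PASS" : "FAIL").append("</td></tr>");
        }
        return html.append("</table></body></html>").toString();
    }
}
```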


7. Test Scripts and Test Data


Test scripts and test data should always be separated from each other. Test data input can take various forms, such as XML, JSON, Excel, text files, database inputs, hash maps, etc.
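A sketch of the data-driven side of this separation, assuming a simple comma-separated external source (an Excel or database reader would expose the same row shape): the script logic stays fixed while each row supplies one scenario.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Adding a test scenario means adding a data row externally,
// not editing the test script.
class CsvTestData {
    static List<String[]> rows(Reader source) {
        List<String[]> rows = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(source)) {
            String line;
            while ((line = br.readLine()) != null) {
                if (!line.isBlank()) rows.add(line.split(","));
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return rows;
    }
}
```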


8. Libraries


A library should contain all reusable components and external connections, such as databases, generic functions, application functions, etc. Tests should be exposed only to the implemented libraries, and should be performed by invoking them.


9. Performance impacts


A framework should also consider the performance impact of its implementation. Techniques such as compiling all code into a single library/DLL before execution, avoiding hard sleeps, etc. should be used to improve performance whenever possible.
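The "no hard sleeps" point can be sketched as a polling wait: instead of `Thread.sleep(30_000)` on every step, poll a condition and return the moment it holds. This is the idea behind Selenium's explicit waits, shown here in plain Java:

```java
import java.util.function.BooleanSupplier;

// Polling wait instead of a hard sleep: returns as soon as the condition
// is true rather than always burning the full timeout.
class Waits {
    static boolean until(BooleanSupplier condition, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean();   // one last check at the deadline
    }
}
```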


10. Script/Framework Versioning


Versions of the framework/scripts should be maintained either in a local repository or in a versioning tool such as TFS, which helps in easy monitoring of changes to the code.


11. Modular, re-usable and maintainable test code and data


Test code should always be modular so that it can be re-used. Modularity also ensures that whenever existing features change or new features need to be added, minimum effort is required from the engineers. For example, a reusable library can be created to support new application features with minimal effort.


12. Avoid Hard Coding values


Always check for:

  • Should this be a parameterized value for the method/function/step using it?

  • Should this be passed into the test as an external input (such as from a config file or the command line)?


13. Documentation for developed common utility functions / APIs


Documentation is vital for good testing and good maintenance. When a test fails, the documentation it provides (both in the logs it prints and in its own code) significantly assists triage.


14. Poor Code Placement

  • Automation projects tend to grow fast. Along with new tests, new shared code like page objects and data models is added all the time.

  • Maintaining a good, organized structure is necessary for project scalability and teamwork.

  • Test cases should be organized by feature area. Common code should be abstracted from test cases and put into shared libraries.

  • Framework-level code for things like inputs and logging should be separated from test-level code.

  • If code is put in the wrong place, it could be difficult to find or reuse.

  • It could also create a dependency nightmare. For example, non-web tests should not have a dependency on Selenium WebDriver.

  • Make sure new code is put in the right place.


15. Avoid Duplication

  • Many testing operations are inherently repetitive.

  • Engineers sometimes just copy-paste code blocks, rather than seek existing methods or add new helpers, to save development time.

  • Plus, it can be difficult to find reusable parts that meet immediate needs in a large code base.


16. Uniform Syntax


Keep a uniform coding syntax when writing variables, common functions, and test case methods.


17. Batch Execution


During any new script development, make sure all previous scripts are executed along with the new scripts.


18. Locator Strategies


Locators should be relative, not absolute. Use locator axes wherever needed to make locators less dependent on changes in the application design. Some pointers below:

  • Id / Name: these locators are easy and efficient, and improve performance and readability.

  • XPath: although slow in some browsers, sometimes it’s the only way to get an object. It is also a good option if you need to ensure that one object appears after another, like div//a.

  • Link Text or <a>: efficient, good performance, but take care if the text changes too often.

  • Dynamic Elements / AJAX: these elements are generated by the server, and normally the id/name changes each time the page is opened, so the best way is to use a relative XPath to map them.
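The relative-vs-absolute point can be demonstrated with the JDK's own XPath engine on a toy page: when the page gains a wrapper element, the absolute path stops matching while the relative locator survives. The markup and locators below are illustrative only:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

// Evaluates an XPath locator against a (well-formed) page snippet and
// returns the matching node, or null if the locator no longer matches.
class LocatorDemo {
    static Node find(String xml, String xpath) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            return (Node) XPathFactory.newInstance().newXPath()
                    .evaluate(xpath, doc, XPathConstants.NODE);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

After the page wraps the form in a `<div>`, `/html/body/form/input` finds nothing, but `//input[@id='user']` still resolves.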

19. Fluent APIs


Use a fluent API calling approach while writing test scripts.
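In a fluent style, each action returns the page object itself so a script reads as one sentence. A sketch with a hypothetical page object (the class and method names are illustrative):

```java
// Fluent page object: every action returns `this`, enabling method chaining.
class LoginPage {
    private final StringBuilder actions = new StringBuilder();

    LoginPage open()                  { actions.append("open;"); return this; }
    LoginPage typeUser(String user)   { actions.append("user=" + user + ";"); return this; }
    LoginPage typePassword(String pw) { actions.append("password;"); return this; }
    LoginPage submit()                { actions.append("submit;"); return this; }

    String performedActions() { return actions.toString(); }
}
```

A test script then chains the whole flow: `new LoginPage().open().typeUser("qa").typePassword("secret").submit();`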


20. Multi Platform Support


The framework core must be written to allow the addition of new platforms: mobile, desktop, and multiple browsers (Chrome, IE, Firefox).
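One common way to keep the core open to new platforms is a driver factory: adding a platform means registering one more supplier, with no change to existing test code. The `Driver` interface below is a hypothetical framework-core abstraction, not Selenium's `WebDriver`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical core abstraction over browsers / mobile / desktop drivers.
interface Driver {
    String platform();
}

// New platforms plug in by registration; tests only ever call create().
class DriverFactory {
    private static final Map<String, Supplier<Driver>> REGISTRY = new HashMap<>();

    static void register(String name, Supplier<Driver> supplier) {
        REGISTRY.put(name, supplier);
    }

    static Driver create(String name) {
        Supplier<Driver> s = REGISTRY.get(name);
        if (s == null) throw new IllegalArgumentException("Unsupported platform: " + name);
        return s.get();
    }
}
```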


21. Environment Configuration


Environment-related dependencies must be handled in setup annotations rather than in the test method, and should not be bound to a single test case method.


22. Making Your Own Set of Methods


This is often done to speed up the code and easily reuse repeatable code pieces, such as: click on this, then wait for that. So just create a method that clicks on something and waits for another thing, and call this method every time you need it.


The example above is frequently called “click and wait actions” or the “clickAndWait method”.


You can improve your Selenium call wrappers by including many other commands. For example, add a verification that the element is still available before clicking on it, and then wait for the page. But be careful not to include too many Selenium calls in the same method.
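A plain-Java sketch of such a wrapper, over a hypothetical `Element` abstraction (a real framework would delegate to Selenium's `WebElement` and an explicit wait):

```java
// clickAndWait wrapper: verify availability, click, then wait for the page.
// Element and the waitForPage hook are illustrative stand-ins for Selenium calls.
interface Element {
    boolean isAvailable();
    void click();
}

class Actions {
    static boolean clickAndWait(Element element, Runnable waitForPage) {
        if (!element.isAvailable()) return false;  // verify before clicking
        element.click();
        waitForPage.run();                          // e.g. wait for the next page load
        return true;
    }
}
```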


23. Assert vs. Verify (Using Junit or TestNG)


Asserts: will break the test and give an immediate response as soon as the test fails, and will not perform any further action.


Verify: will continue your tests, executing the other commands even after a failed result.


Deciding which one to use depends on the case. Use asserts for critical functionality checks and verify for all the rest.
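The verify behavior can be sketched as a "soft assert": failed checks are collected instead of thrown, so the test keeps running, and a single failure is raised at the end. TestNG's SoftAssert works along these lines; the class below is a simplified stand-in:

```java
import java.util.ArrayList;
import java.util.List;

// Soft-assert sketch: verifyTrue() records failures without aborting;
// assertAll() fails once, after all checks have executed.
class SoftVerify {
    private final List<String> failures = new ArrayList<>();

    void verifyTrue(boolean condition, String message) {
        if (!condition) failures.add(message);   // record, do not abort
    }

    void assertAll() {
        if (!failures.isEmpty()) throw new AssertionError("Failures: " + failures);
    }

    int failureCount() { return failures.size(); }
}
```

A hard assert would have stopped at the first failed check; here every check runs and the test reports all failures together.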


#NGAutomation

Building better QA for tomorrow