A NALASHAA OPEN LEARNING INITIATIVE
Pragmatic Test Automation – Systematic Investment Plan
Piyush S Bhatnagar
Director – Delivery, Nalashaa
WWW.NALASHAA.COM
TABLE OF CONTENTS

License
Preface
Chapter Zero – The Trailer
    Typical Software Test Automation
    Test Automation SIP
Chapter One – Test code is a first-class citizen like application code
    A. BUFD (Big Up-Front Design)
    B. NUFD (No Up-Front Design)
    C. Poor Structure
Chapter Two – Power of Abstractions
    A. Abstractions
    B. Concept of Zoom
Chapter Three – A test is as strong as the test data
    A. Impact of Test Data on Test Automation
    B. Common Pitfalls in Test Data Implementation
    C. So What Do We Do? A Few Tips from Our Experience
Chapter Four – Do not reinvent the wheel
    A. Free/Open Source Tools Available
Chapter Five – Invest in tracing and logging in test automation logic
    A. The Long Story
    B. Early Investment in Tracing and Logging Mechanisms
Chapter Six – Integrate with the daily build
    A. Continuous Integration
    B. Test Results as Execution Log and Trace
Chapter Seven – Epilogue
    A. Last but not the least
LICENSE
Pragmatic Test Automation-Systematic Investment Plan - Nalashaa by Piyush Bhatnagar, Nalashaa
is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
Based on a work at www.nalashaa.com. Permissions beyond the scope of this license may be available at http://www.nalashaa.com. The Nalashaa logo is the property of Nalashaa Private Limited. All other logos and trademarks, wherever shown and mentioned, belong to their respective owners.
PREFACE

The need for automating functional tests for software has been well established over the past few years, with ever-growing application complexity and ever-shrinking customer release cycles. While putting time and effort into any kind of test automation helps in the application life-cycle, we have seen that a systematic investment plan in developing the right test automation recipes and frameworks yields significant savings of time and money. Writing code for automating test cases is still writing code at the end of the day, and it faces challenges similar to those of traditional software development, plus a few more unique to the world of testing. Being considered mostly a support activity in the software development lifecycle, test automation efforts usually fall short of what is required and desired. In this short e-book we discuss the need for a regular and systematic investment in automating functional test cases for any application, and provide an insight into how such a planned effort leads to significant savings of money and time (which again is equivalent to money). We hope that our distilled experience will save you some heartburn and headaches and will help you focus your energies on solving other important problems!
CHAPTER ZERO – THE TRAILER

In the coming chapters we will talk about multiple facets of software test automation and how various factors influence the success of the endeavor. This small summary "zero" chapter provides a glimpse of what we will discuss in detail in subsequent chapters.
TYPICAL SOFTWARE TEST AUTOMATION

Traditionally, software test automation is seen as a project activity, with efforts focused on the particular application/system at hand. While this approach makes it easy to initiate the automation effort, it does not promote the benefits of reusing design ideas, implementation and architecture. As the application goes through churn and evolution, so does the need to update and maintain the test automation suite. Complexity in the application has a cascading effect on the complexity of the automation code, sometimes even more, to account for the plumbing required to automate newer, more complex functionality. Like any other piece of written code, test automation logic can suffer from defects, and this adds yet another dimension to an already complicated puzzle.
TEST AUTOMATION SIP

Having faced similar situations in our multiple engagements, we realised that it is best to treat test automation as an umbrella activity in the application development and to invest a little, but invest consistently, in the automation code. We have captured the essence of our multi-man-month experience in developing test automation suites for a multitude of applications of varying sizes and complexity in this eBook.
1. TEST CODE IS A FIRST-CLASS CITIZEN LIKE APPLICATION CODE

Test code is as important as the bits which make up any application, so it is subject to the same rules of engagement - coding standards, naming conventions, design patterns, architectural best practices and well-maintained documentation.
2. ABSTRACTIONS HELP

While it is legitimate to write test automation logic using low-level objects like buttons and text boxes, it makes a tremendous difference when such objects are encapsulated and abstracted to represent high-level objects in the ecosystem, like forms, widgets, etc.
3. A TEST IS AS STRONG AS THE TEST DATA

While test automation engineers focus their time and energy on developing test scripts, the data which is fed to these scripts often remains ignored. The probability of a test catching a bug in the system increases manifold if the test data is intelligently randomized.
4. DO NOT REINVENT THE WHEEL
Many software vendors have published libraries for UI automation of applications being developed on their platform. Microsoft, for example, has made an automation API an integral part of newer versions of the .Net framework. Microsoft UI Automation is the new accessibility framework for Microsoft Windows, available on all operating systems that support Windows Presentation Foundation (WPF). UI Automation provides programmatic access to most user interface (UI) elements on the desktop, enabling assistive technology products such as screen readers to provide information about the UI to end users and to manipulate the UI by means other than standard input. UI Automation also allows automated test scripts to interact with the UI.
5. INVEST IN TRACING AND LOGGING IN TEST AUTOMATION LOGIC

Since test automation scripts will, in most cases, run unsupervised on a schedule without any manual intervention, it becomes imperative to have solid tracing and logging mechanisms built into the automation framework, so that if anything goes wrong it is still possible to recreate the error and make meaningful logical inferences about its source. Another important piece to make available to the test developer is a screenshot API, so that scripts can take screenshots when things do not go as planned.
6. INTEGRATE WITH THE DAILY BUILD

A test script which is executed with the daily build matures early and proves to be a very important first gate for common errors. Without going through the rigor of being executed frequently, a test script will remain weak and purposeless.
CHAPTER ONE – TEST CODE IS A FIRST-CLASS CITIZEN LIKE APPLICATION CODE

Continuing our discussion of our distilled experience in test automation, let us start with the basics. Anyone who can type a semi-colon can write code. Writing code is easy. Writing good code is not. And writing good code which can test other code is even more difficult. Common mistakes to avoid when embarking on the test automation journey are:
A. BUFD (BIG UP-FRONT DESIGN)

In today's ever-evolving world, it has never helped developers to create huge design artifacts at the very beginning, only to dismantle, re-do, rework and waste a lot of time (and money) in the process. This holds true even more for test code. Investing a great amount of time in designing and creating testing frameworks which are not immediately put to use in a live test script is bound to put pressure on the test engineers as timelines shrink and delivery deadlines converge. The result will be beautiful but unusable test code and, even with a good deal of investment in test automation, a poorly tested application. Needless to say, this creates a worsening spiral: budgets will be questioned by stakeholders and will shrink, since it is very difficult to demonstrate value-add from code which is still in the kitchen. No customer will pay for a dish as long as it is in the kitchen!
B. NUFD (NO UP-FRONT DESIGN)

This is the opposite of BUFD, but equally catastrophic. Jumping directly into creating test scripts without thinking through aspects of maintainability, extensibility, code reuse, SOLID design principles and simplicity will result in poorly developed scripts which are bug-ridden, fragile to change, and tough to evolve as the application goes through churn and evolution cycles. This can be a fairly tricky situation to handle if there are multiple test developers involved in the development of scripts.

SIDE NOTE: It is a balancing act to invest the right amount of time and energy into the design of test code. What is "just enough" is a tough question to ask, and tougher to answer. Simple design choices are the best. As Kelly Johnson wisely said - "Keep it Simple, Stupid" - the KISS principle is the best yardstick for this tough decision. If it requires reams of comment lines to explain that pretty little hack, it is probably not worth keeping in.
C. POOR STRUCTURE

Good (test) code looks good too. If it is messy looking, it is highly likely that the implementation is messy too! A clean, well-thought-out folder structure, namespaces, packages, etc., depending on the language used, go a long way in making code readable and maintainable by posterity; a sample layout is sketched below.
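For illustration only - the exact layout will depend on your language, platform and tooling - a test automation code base might be organised along these lines:

    test-automation/
        framework/        reusable plumbing: driver wrappers, logging, screenshot helpers
        pages/            high-level abstractions of application screens and widgets
        testdata/         test data providers and strategies
        tests/
            smoke/        fast checks run with every build
            regression/   the full functional suite
        reports/          execution logs, traces and screenshots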
CHAPTER TWO – POWER OF ABSTRACTIONS

Writing test automation code has traditionally been a low-level activity. Find a window, find a control, set the value, click the button, and then - find the window, find the control, check the value, and so on. More often than not, a major portion of the effort goes into what we call "infrastructure" code rather than "heartbeat" code. The core logic of the test is cluttered with plumbing to deal with the interaction with the application user interface. As a rule of thumb, we have observed that in order to test X lines of code, it takes at least 2.5X lines of code to test the basic flow and about 4-5X lines for reasonable coverage. Given the scale of modern rich-UI applications, the complexity of the required tests can be anyone's guess. Many test automation attempts start with a straight translation of manual testing steps, interlaced with API calls to identify forms, fields and other controls. This approach often produces quick results, with a few test cases getting automated in a very short period of time. However, like any other quick-and-dirty solution, if it is not corrected at an early stage it results in reams of test automation logic which is monolithic in structure and extremely fragile to maintain. So what can help us here? The power of abstractions.
A. ABSTRACTIONS

In our daily life we all use many gadgets - laptops, cars, watches, refrigerators, etc. - without ever worrying about the complexities of the underlying subsystems. Each of these devices provides specific features by combining many smaller subsystems behind the scenes. This layering makes it easy to install a program on a laptop, adjust the time on a watch, make ice cubes in a refrigerator, and so on. Similarly, while programming test automation, if we write test logic in terms of high-level widgets instead of their underlying representations, our tests become a lot more readable and far more amenable to change. After all,
LoginForm.UserName = "UserName"

is a lot easier on the brain cells to understand than

Windows.Application.Find("My Application").FindWindow("LoginForm").FindControl("UserName").SetValue("UserName")*

*Not an actual API, just representational
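To make the idea concrete, here is a minimal sketch in Java of what such an abstraction could look like. Every name in it (UiControl, UiWindow, Desktop, LoginForm) is a hypothetical placeholder and not the API of any particular library; the point is only that the plumbing is written once inside the abstraction while the test reads in domain terms.

    // Hypothetical low-level driver interfaces; stand-ins for whatever
    // automation library is actually in use (UIA, White, Selenium, etc.).
    interface UiControl { void setValue(String value); void click(); }
    interface UiWindow  { UiControl findControl(String name); }
    interface Desktop   { UiWindow findWindow(String application, String form); }

    // "Infrastructure" code: LoginForm encapsulates the control lookups, once.
    class LoginForm {
        private final UiWindow window;
        LoginForm(Desktop desktop) { this.window = desktop.findWindow("My Application", "LoginForm"); }
        void setUserName(String value) { window.findControl("UserName").setValue(value); }
        void setPassword(String value) { window.findControl("Password").setValue(value); }
        void submit()                  { window.findControl("LoginButton").click(); }
    }

    // "Heartbeat" code: the test now reads at the level of the business flow.
    class LoginTest {
        static void happyPathLogin(Desktop desktop) {
            LoginForm login = new LoginForm(desktop);
            login.setUserName("testuser");
            login.setPassword("secret");
            login.submit();
        }
    }

When the application's login form changes, only LoginForm needs to change; every test that uses it stays untouched.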
B. CONCEPT OF ZOOM

Another way to grasp this concept is to look at the idea of zoom. Zoom lets us look at the same landscape at different levels of understanding. Each level of zoom shows the entities relevant at that level. In test automation parlance, this means that before even thinking of writing test flows, we should think of the entities participating in these flows. Tests should be viewed as interactions between these entities rather than as sequential steps. This approach immediately adds to the ROI of the effort, because it is easier to devise and test complex interactions between various entities than to write complex sequential scripts. It also instantly shifts attention from data to tests by separating the two, with data being an input to the interaction rather than the primary factor driving the test.
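A rough sketch, again with entirely hypothetical names, of what a test written as an interaction between entities might look like. The entities would internally hide the UI or API plumbing; the test only wires entities and data together.

    // Test-side entities; each hides the plumbing needed to drive its part of the app.
    class BankAccount {
        private final String accountId;
        BankAccount(String accountId) { this.accountId = accountId; }
        String id() { return accountId; }
        double balance() {
            // placeholder: in a real suite this would read the balance via the UI or an API
            return 0.0;
        }
    }

    class FundsTransfer {
        // The interaction is modelled once; individual tests only vary the entities and the data.
        void execute(BankAccount from, BankAccount to, double amount) {
            // placeholder: drive the transfer screen - select accounts, enter amount, confirm
        }
    }

    class TransferTest {
        // The data (which accounts, what amount) is an input to the interaction,
        // not a hard-coded step buried inside a long sequential script.
        static boolean transferMovesMoney(BankAccount from, BankAccount to, double amount) {
            double fromBefore = from.balance();
            double toBefore   = to.balance();
            new FundsTransfer().execute(from, to, amount);
            return from.balance() == fromBefore - amount && to.balance() == toBefore + amount;
        }
    }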
CHAPTER THREE – A TEST IS AS STRONG AS THE TEST DATA

A. IMPACT OF TEST DATA ON TEST AUTOMATION

What can make a good test a weak one? What can make a simple test an effective one? Data. Or test data, to be precise. A test and its test data are like body and soul. But, as in real life, the soul often gets neglected. While test developers focus a lot of their energy on automating the most complex of test scenarios, the same extent of attention is often denied to the test data. This neglect can manifest in different ways.
B. COMMON PITFALLS IN TEST DATA IMPLEMENTATION

A. DIDN'T KNOW, DIDN'T DO

This means the test developer was not aware of the significance of test data and did not implement any special strategy for it.

B. STRONG COUPLING WITH LOGIC
Another common mistake is tight coupling of data with test logic. The flow gets intermingled with the setting/reading of data, which becomes strongly embedded in the DNA of the test. It becomes difficult to see the test separately from its data. For example, a test to check that an account gets locked after multiple repeated attempts can suffer from this issue when the number of attempts is hard-coded in the test. When this business condition changes in the future, the test becomes invalid. Sometimes changing such hard-coded data has a severe impact on the overall test, and results may get misinterpreted.
C. PURELY RANDOM TEST DATA
Quite often, test data is left entirely to random data generators. While this can be useful in certain stress-testing scenarios, it is counter-productive for rich test executions which require meaningful data.

D. REPEATED DATA
Every test run gets executed with the same test data. This monotonous repeat execution may fail to detect bugs associated with the usage of data, since some of the data flows may never get exercised.
C. SO WHAT DO WE DO? A FEW TIPS FROM OUR EXPERIENCE...

So what should be the approach for proper data handling in test automation scenarios? Well, the key lies in moderation and in mixing and matching approaches.

A. GIVE IMPORTANCE TO TEST DATA
First and foremost, give test data the importance it deserves. Invest in test-data planning when you plan your test automation scenarios.

B. SEPARATION OF CONCERNS
Keep test logic separate from the data. Rather than testing for 3 attempts, test for n attempts, where n is picked based on a test strategy, as in the sketch below.
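A minimal sketch of this point, with hypothetical names (LoginScreen is not a real API): the lockout threshold is supplied by the caller rather than hard-coded as 3 inside the flow.

    interface LoginScreen {
        void attemptLogin(String user, String password);
        boolean isAccountLocked();
    }

    class AccountLockoutTest {
        static boolean accountLocksAfterFailedAttempts(LoginScreen login, int maxAttempts) {
            // maxAttempts ultimately comes from the test data strategy, not from the test flow.
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                login.attemptLogin("testuser", "wrong-password-" + attempt);
            }
            return login.isAccountLocked();
        }
    }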
C. RANDOMIZE WITH CAUTION
Instead of having a randomizer as the sole test data provider, implement a test data strategy factory. This provides multiple implementations of test data providers: a random test data generator can be one of the implementations, and others can be implemented as required. This ensures that test logic is not tightly bound to the test data and can change independently of it. A minimal sketch of such a factory follows.
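A hedged sketch of such a factory in Java; all names here are our own, not a specific library's API. The test asks the factory for a provider by strategy name, and a pure randomizer is just one interchangeable strategy among several.

    import java.util.Random;

    interface TestDataProvider {
        String nextUserName();
    }

    class RandomDataProvider implements TestDataProvider {
        private final Random random = new Random();
        public String nextUserName() { return "user" + random.nextInt(100000); }
    }

    class BoundaryDataProvider implements TestDataProvider {
        // Meaningful "awkward" values: empty, single character, very long, non-ASCII.
        private final String[] values = { "", "a", "x".repeat(255), "ümlaut-üser" };
        private int index = 0;
        public String nextUserName() { return values[index++ % values.length]; }
    }

    class TestDataStrategyFactory {
        static TestDataProvider forStrategy(String name) {
            switch (name) {
                case "random":   return new RandomDataProvider();
                case "boundary": return new BoundaryDataProvider();
                default: throw new IllegalArgumentException("Unknown strategy: " + name);
            }
        }
    }

Usage stays the same regardless of the strategy chosen, so test logic and test data can change independently:

    TestDataProvider data = TestDataStrategyFactory.forStrategy("boundary");
    login.setUserName(data.nextUserName());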
D. CHANGE THE TEST DATA
If you have invested in a test data strategy factory, this is an added advantage: there could be a test data strategy that modifies the data for each test run, or for any run, based on rules relevant to the scenario at hand. There are multiple programming techniques which can aid in effective test data planning and execution. The important thing to remember is that test data should be given equal importance, if not more, as the actual test logic. This approach yields multiple benefits in the future in terms of reusable test data, which makes tests more useful and effective with each run.
CHAPTER FOUR – DO NOT REINVENT THE WHEEL
Continuing our conversation about tips for better software test automation, this time we talk about reuse. Given that many test automation initiatives suffer from a time crunch, this aspect becomes even more important to understand. While it is very tempting to want to write your own framework for everything you need, it is not practical. If we pause and think, we can overcome the urge to write our own libraries suited uniquely to the task at hand. While there are legitimate scenarios where such an approach might be required, more often than not it is counter-productive. The contrast between generic and specific implementations is too stark, yet too subtle at the same time. It is advisable to seek the middle ground in our work. As a test automation developer it is important to first learn what is already available in terms of frameworks and libraries, and to refer to other people's documented experience. Only when we are sure that an existing implementation will not cover more than 80% of what we need should we look for custom/specific solutions, because in most cases an 80% requirement match is a large enough ground to start with.
A. FREE/OPEN SOURCE TOOLS AVAILABLE

There are multiple competing tools available in both the commercial and open-source categories, and we can evaluate them before embarking on our own implementation journey. Following is a brief list of open-source/free-to-use tools and frameworks available for various types of automation needs -
1. SELENIUM FRAMEWORK

Selenium is a suite of tools to automate web browsers across many platforms. A minimal usage sketch follows.
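A minimal Selenium WebDriver sketch in Java; the URL and element ids are placeholders for your own application, and driver setup details vary by browser.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();        // assumes chromedriver is on the PATH
            try {
                driver.get("https://example.com/login");  // placeholder URL
                driver.findElement(By.id("username")).sendKeys("testuser");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();
                // A real test would assert on the landing page here.
                System.out.println("Title after login: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }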
2. ROBOT FRAMEWORK

Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has an easy-to-use tabular test data syntax and utilizes the keyword-driven testing approach. Its testing capabilities can be extended by test libraries implemented in either Python or Java, and users can create new keywords from existing ones using the same syntax that is used for creating test cases. Robot Framework is open source software released under the Apache License 2.0.
3. FITNESSE

FitNesse is an acceptance testing framework which also doubles as a fully integrated standalone wiki.
4. WATIR

Watir, pronounced "water", is an open-source (BSD) family of Ruby libraries for automating web browsers. It allows you to write tests that are easy to read and maintain. It is simple and flexible.
5. WATIN

Inspired by Watir, development of WatiN started in December 2005 to make a similar kind of web application testing possible for the .Net languages. Since then WatiN has grown into an easy-to-use, feature-rich and stable framework.
6. FRANK

Frank allows you to write structured-text acceptance tests/requirements (using Cucumber) and have them execute against your iOS application.
7. MONKEYTALK

MonkeyTalk is an integrated environment for recording, customizing, running and managing test suites for native mobile applications. It supports iOS (iPhone + iPad) and Android.
8. MICROSOFT UI AUTOMATION LIBRARY

Microsoft UI Automation is the new accessibility framework for Microsoft Windows, available on all operating systems that support Windows Presentation Foundation (WPF). UI Automation provides programmatic access to most user interface (UI) elements on the desktop, enabling assistive technology products such as screen readers to provide information about the UI to end users and to manipulate the UI by means other than standard input. UI Automation also allows automated test scripts to interact with the UI.
9. WHITE

White is a framework for automating rich-client applications based on the Win32, WinForms, WPF, Silverlight and SWT (Java) platforms. It makes use of the Microsoft UI Automation library. There are many commercially available tools too which can help in your test-automation adventure! Again, remember: if you really wish to re-invent the wheel, you can, but the actual fun begins when you start putting wheels together and build a car!
CHAPTER FIVE – INVEST IN TRACING AND LOGGING IN TEST AUTOMATION LOGIC

Since test automation scripts will, in most cases, run unsupervised on a schedule without any manual intervention, it becomes imperative to have solid tracing and logging mechanisms built into the automation framework, so that if anything goes wrong it is still possible to recreate the error and make meaningful logical inferences about its source.
A. THE LONG STORY

All remains well when nothing goes wrong. But that, unfortunately, is not the case in the real world. In the real world, disks get full, memory becomes corrupt, file handles become invalid, power goes off - in short, so many things which "should" not happen indeed happen. This reality of our world is best expressed by Murphy's Law - "If anything can go wrong, it will." And when things go wrong, we want to set them right. Further, being human, we want to know the circumstances which led to the error and take precautionary measures for the future. Such "Holmes"-like investigation is never easy. More often than not, the exact sequence of events leading to the error is unknown. In other cases, the same conditions do not give the same error!
So what can aid the poor test developer in debugging such errors when they occur?
B. EARLY INVESTMENT IN TRACING AND LOGGING MECHANISMS

While the choice of exact mechanism for logging and tracing will depend on multiple factors - platform of development, underlying operating system, language/compiler support and system constraints, amongst many others - a few guidelines can help in making the right choices.

1. First and foremost, if the tracing and logging effort is included in the architecture and design, this in itself is a big step in the right direction. It ensures that the implementation is natively available when required.

2. On platforms like .Net and Java, tools like log4net, log4j, slf4j and Enterprise Library can be of immense use. These are well-crafted, feature-rich libraries which make the task of logging a breeze. Before trying to "re-invent the wheel", it is a good idea to explore these libraries and check your requirements against their feature sets.

3. As we have discussed before, keeping the logging implementation simple is a good idea.

4. Keeping Murphy's Law in mind, conditions which should never occur should always be logged.
5. It is worthwhile to invest in a screenshot utility so that errors are captured at the UI level. Experience says that these screenshots become immensely useful evidence later in the debugging process.

6. All popular logging frameworks have a notion of log levels, and while writing test logic it is imperative to use the log levels judiciously. Too little logging and the message is lost in the woods; too much and it is all noise. Log levels provide a throttling mechanism so that at runtime we can tweak which messages we really want logged and which we want filtered, depending on our current situation. (A combined sketch of points 2, 5 and 6 appears at the end of this chapter.)

Not all good things in a developer's life are free. Investing in a tracing and logging mechanism is in reality an architectural decision which should be translated into non-intrusive design elements, further making its way into the actual implementation. If tracing and logging is not given its due importance, and is instead meted out second-grade treatment by being added as an afterthought to the implementation, the results will fall short of what is desired and will not be adequate to serve as a strategic aid in debugging.
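As a combined sketch of points 2, 5 and 6 above, here is what levelled logging plus a screenshot-on-failure hook might look like with log4j2 in Java. The takeScreenshot() helper is a placeholder for whatever screenshot API your UI driver provides (for example, Selenium's TakesScreenshot); only the log4j2 calls are real.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class TracedTestRunner {
        private static final Logger log = LogManager.getLogger(TracedTestRunner.class);

        public void runStep(String stepName, Runnable step) {
            log.debug("Starting step: {}", stepName);      // DEBUG: verbose, filtered out in normal runs
            try {
                step.run();
                log.info("Step passed: {}", stepName);     // INFO: the normal heartbeat of the run
            } catch (Exception e) {
                // "Should never happen" conditions are always logged, with full context.
                log.error("Step failed: {}", stepName, e);
                takeScreenshot(stepName);
            }
        }

        private void takeScreenshot(String stepName) {
            // Placeholder: call your UI driver's screenshot API and save the image
            // alongside the log so the evidence and the trace line up.
            log.warn("Screenshot captured for failed step: {}", stepName);
        }
    }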
CHAPTER SIX – INTEGRATE WITH THE DAILY BUILD

A. CONTINUOUS INTEGRATION

Practice makes perfect - so goes the old adage. This is even truer for tests. While there may be a few scripts to be executed on demand for certain situations, as a rule of thumb an attempt should be made to include automation test scripts in a continuous integration system. If there is no continuous integration system in place, it is absolutely a great idea to invest time and energy in creating one! A good frequency of test execution helps in quickly identifying bad builds when an error occurs. By comparing the test reports of successive runs, one can quickly ascertain when a test case first failed. This can save a tremendous amount of time otherwise wasted in tracking down the source of bugs.
B. TEST RESULTS AS EXECUTION LOG AND TRACE

As software ages, new features are introduced, and sometimes these can break existing functionality. A daily-build integration of test scripts pre-empts such avoidable errors. A test execution report with all "greens" and no "reds" or "yellows" is a tremendous confidence booster as well as a discipline inducer for the development team. Knowing that a failed test in the automated build will result in rejection of the task at hand instills the discipline of good software development practices in the team. And such a report is a direct and very accurate indicator of the health of the software being developed. A test script which is executed with the daily build matures early and proves to be a very important first gate for common errors. Without going through the rigor of being executed frequently, a test script will remain weak and purposeless.
CHAPTER SEVEN – EPILOGUE

A. LAST BUT NOT THE LEAST

We hope that we were able to provide some practical words of advice which will help you in your test automation efforts. More importantly, we hope that we were able to give you some food for thought on test automation. While we make no claim that following everything we talked about as a prescription will solve all your test automation woes, we do feel that being sensitised to the issues discussed will certainly help you make the right choices in your context. We would love to hear from you about your experiences, your findings and what you have learned - we are always greedy to learn, and eager to help. You can write to us at -
dev<at>nalashaa.com