TEST – Innovation for Software Quality
Volume 3: Issue 5: October 2011
Tact & Diplomacy – Devyani Borade on developing the right attitude
Inside: Load testing | Test automation | Requirements
Visit TEST online at www.testmagazine.co.uk
BORLAND SOLUTIONS FROM MICRO FOCUS DESIGNED TO DELIVER BETTER SOFTWARE, FASTER
Borland Solutions are designed to:
• Align development to business needs;
• Strengthen development processes;
• Ensure testing occurs throughout the lifecycle;
• Deliver higher quality software, faster.
Borland Solutions from Micro Focus make up a comprehensive quality toolset for embedding quality throughout the software development lifecycle. Software that delivers precisely what is needed, when it is needed, is crucial for business success. Borland Solutions embed quality into software delivery from the very beginning of the development lifecycle whichever methodology you use – traditional, Agile or a mixture – from requirements, through regression, functional, performance and load testing. The result: you meet both business requirements and quality expectations with better quality software, delivered faster.
Visit Micro Focus at EuroSTAR 2011, stand 45 & 46, and check out what the Borland Solutions have to offer you. Register now at http://www.eurostarconferences.com/ Micro Focus Ltd. Email: microfocus.communications@microfocus.com © 2011 Micro Focus IP Development Limited. All rights and marks acknowledged.
Tact and diplomacy
© 2011 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160
First of all, can I just say how great it is to be back. Sometimes it's good to try something new, if only to prove that you shouldn't have gone in the first place. And so here I am, back in the editor's chair of TEST after my brief summer sojourn. Sincere thanks, of course, to John Hancock for doing such a sterling job during my sabbatical. He did much more than simply keep the train on the tracks.
The theme this month is diplomacy; diplomacy and tact (see cover story on page 6). These are skills that testers should have, and ones they will have to deploy frequently. When it is your job to point out the failings in other people's work, the way in which you do it is crucial. When, as an editor, I receive a text from a contributor it quite often has to be taken apart and put back together again – although, without sounding too much like the prodigal editor trying to get back in your good books, I have to say that the standard of literacy in the testing industry is as good as, if not better than, any I have worked in. What I hope to achieve from this exercise is a more coherent and functional piece of writing. The message should still be intact, but the method of delivery should ideally be better, more efficient and concise. The contributors are usually pleased with the result – writing, as a rule, not being their day job. And so it should be in software testing. You are helping to produce a better, more functional end product. Developers, though, are professionals, so they may well take things more personally. I guess that when you are a software developer and you have had the code that you have (hopefully) poured your heart and soul into demolished (see page 24), then you are perhaps entitled to feel a little dis-chuffed when the glitches are pointed out. Hence tact and diplomacy. There are ways, and there are ways, of delivering bad news, and this issue's cover story gives some helpful pointers to sweeten the pill. More often these days the quality of the code is crucial and can even be a life saver, so those faults and glitches that you are highlighting could be vital. And anyway, the thing about writing, whether it be prose or code, is that when you stick your head above the parapet, someone is bound to take a potshot sooner or later. Until next time...

Matt Bailey, Editor

Editor: Matthew Bailey, matthew.bailey@31media.co.uk, Tel: +44 (0)203 056 4599
To advertise contact: Grant Farrell, grant.farrell@31media.co.uk, Tel: +44 (0)203 056 4598
Production & Design: Toni Barrington, toni.barrington@31media.co.uk; Dean Cook, dean.cook@31media.co.uk
Editorial & Advertising Enquiries: 31 Media Ltd, Three Tuns House, 109 Borough High Street, London SE1 1NL. Tel: +44 (0) 870 863 6930. Fax: +44 (0) 870 085 8837. Email: info@31media.co.uk. Web: www.testmagazine.co.uk. Printed by Pensord, Tram Road, Pontllanfraith, Blackwood NP12 2YA.
Join the Revolution
Don't let your legacy application quality systems hamper your business agility.
At Original Software, we have listened to market frustrations and want you to share in our visionary approach for managing the quality of your applications. We understand that the need to respond faster to changing business requirements means you have to adapt the way you work when you're delivering business-critical applications. Our solution suite aids business agility and provides an integrated approach to solving your software delivery process and management challenges.
Find out why leading companies are switching to Original Software by visiting: www.origsoft.com/business_agility
Contents... October 2011

1 Leader column
The prodigal editor is talking tact and diplomacy.

4 News

6 Cover story – Developing the right attitude
Mighty tester Devyani Borade lists ten all too familiar tester/developer conflict situations and shows how a little tact and diplomacy is better for you and the business.

12 Choosing the right load testing tool
Freelance tester Jason Buksh explains the difference between types of performance tools and offers useful advice for those assessing which load tool is right for their purposes. He also looks at what makes a good performance tester.

16 Is your application ready for test automation?
Suri Chitti makes a case for due diligence before making any moves into test automation and provides a few guidelines about how it should be done.

20 Training corner – Testers be free
With the whiff of freedom in the air, Angelina Samaroo makes the case for freedom for testers to do their job.

24 Construction or demolition?
Testers are often seen as the people that come along late and destroy all the good work done so far, earning themselves a reputation as demolition experts. Susan Chadwick wants to set the record straight.

26 Design for test – Taking a holistic approach to software development
In his first column for TEST, Mike Holcombe, Professor of Computer Science at Sheffield University, suggests that the testing process should be considered from the start of any software project.

27 Test Digest
Hacking scandals in the headlines may not be good news for sufferers, but Ray Bryant, CEO at Idappcom, believes that they will open everybody's eyes to the importance of security and how to achieve it.

41 Test Directory
It's not so much the date by which the project has been promised, Milan Sengupta, Senior Software QA/Testing consultant, explains; it's more important that the target is realistic.

48 Last word – Dave Whalen
Dave Whalen says Agile is like the dune buggy of his youth in that you need to peek under the bonnet before you take it for a drive.
Do we need testing degree courses?

In a time where countless innovations are occurring on a weekly basis in IT, recruiters appear to be struggling to find software testing candidates that fit the bill. There are currently no universities that offer courses on software testing (an industry predicted to reach $56 billion by 2013) and few include even a module on it, despite the fact it is a fundamental part of the software development lifecycle. So with no specific degree to prove any credentials, what kind of people do software testing companies employ? Sathish Ramadoss, a senior recruitment consultant at Test People in Cheltenham, comments: "The candidates I usually screen for junior roles do not have any experience in software testing, so I look at whether they have a general interest in IT and IT-based degrees, such as Computer Science and Information Technology. I've even happened to come across a candidate with a degree in Gaming Technology!" However, UWE Computer Systems lecturer Rob Williams believes software testing should not have a place in academia: "Introducing a significant amount of testing theory/practice with CS undergraduates may act to reduce our recruitment," he argues. "My experience is that we cannot cover the whole range of topics and skills in much depth within four years, and that employers mainly expect graduates to be able to program in a couple of languages and understand the general principles of software development." Network designer at HCL Technologies Sparsh Bali counters Williams' argument by stating that testing should be integrated into a regular Computer Engineering degree. "Those who want to probe deeper could opt for an ISTQB/ISEB exam, separately," he contends. "But a full degree might not be practically feasible, as testing alone cannot cover whole aspects of the software engineering domain."

Testing "fastest growing niche in IT"

According to the Indian publication The Economic Times, software testing is becoming the fastest-growing niche within the IT space. In an article on 17th August this year, the magazine said: "For long the poor cousin of software development, the testing function is forcing its way into the limelight, say software companies. And that trend happens to coincide with testing becoming the fastest-growing niche within the IT space. Technology company officials say that the biggest difference in recent times is a surge in interest among youngsters to choose testing as a career, as against companies having to push a few staffers into the testing function out of necessity." "The turn of events leading to an increased interest in testing has been quite dramatic," says Ranga Reddy, CEO of Maveric Systems, an independent software testing company based in India, quoted in the story. "Even up to the year 2000, testing was almost like an afterthought, with no specific budgets." Sumithra Gomatam, senior VP and global head of testing practice at Cognizant, says there has been significant growth in the testing function over the past five years. "We are very committed to growing this business," says Gomatam, "and Cognizant has seen its employees in the testing function grow from 4,000 at the end of 2004 to more than 17,500 now."
Online learning centre

A new online learning centre for the scriptless web application performance testing tool StressTester has been launched, to help testers execute performance tests for web or Enterprise Asset Management (EAM) applications within just a few days, regardless of their familiarity with the software. It is said to be the first time a scriptless performance testing tool has been made available with a user-friendly, step-by-step online learning resource which combines simple video tutorials with hints and tips to guide users through the process, from tool configuration to running their first performance tests. The StressTester online learning centre is divided into easy-to-follow, logical chapters covering each of the key areas needed to test any web or EAM application: installation, the StressTester GUI, recording, user journeys, system monitoring, running tests, analysing results and advanced features.

Graham Parsons, CEO of Reflective Solutions, comments: "We really believe that StressTester, when combined with the new online learning centre, represents a breakthrough for the performance testing community. For too long, the ability to correctly and realistically performance test applications, or even to performance test at all, has been restricted by the cost and timescales associated with deploying script-based solutions. Testers have traditionally been required to spend weeks reading user guides and lengthy manuals before starting the testing process.

"The intention behind the online learning centre launch was to take a zero-scripting performance testing tool that is already simple and cost-effective to deploy, and make it even simpler and easier to access and use. It allows users to track their progress and manage their own learning, without cost, and at a pace convenient to them." www.stresstester.com/tutorial_home.php

Data obfuscation patent granted

Direct Computer Resources, a provider of data privacy, file management and application development testing software, has announced that it has been granted a patent in connection with the data obfuscation technology used in its DataVantage Global software. The software is used for the management and testing of databases and database applications, data migration and the protection of sensitive data. It is designed to protect personally identifiable information and other sensitive data by utilising encryption, masking, de-identification, data substitution and other obfuscation methods. Using a rules-based technology, the software identifies the data to be obfuscated and selects the appropriate method for obfuscating the data. The software gives organisations the ability to obfuscate large collections of data and to test and validate the obfuscated data across an enterprise. Joseph Buonomo, the company's CEO, comments: "We have been at the forefront of the data privacy business for more than a decade. The patent confirms our recognition as a leading innovator in the field."
Resurfacing of old worm proves the case for thorough pen testing

News that new versions of an old worm – Agent.btz – which attacked the US military back in 2008 are still appearing and causing problems confirms that a strategy of boosting the efficiency of an organisation's IT security is the best defence. According to Ray Bryant, CEO of the data traffic analysis and security specialist Idappcom, with many tens of thousands of new malware and attack variants arriving daily in cyberspace, it is natural that the focus of IT security defence strategies will be on the latest attack methodologies. "Many IT security technology users assume – incorrectly as it turns out – that the older attack vectors used by malware, phishing attacks and other electronic nasties are all countered by today's IPS, IDS, UTM or firewall systems, but the reality is that old attack vectors can be modified and re-used by cybercriminals," he said. According to Bryant, the reality with IT security defences – no matter what strategy they employ – is that there are only so many processor cycles to go around. Put simply, he explained, this means that an IT security platform needs to be regularly tuned and refined over time, in order to balance the areas of defence it needs to focus on. And the more efficient the security platform is, he said, the more cycles there are to cope with less popular attack vectors, such as reworked and re-energised malware, as exemplified by the Agent.btz worm which is now causing headaches for President Obama's military IT specialists. The story here, says the Idappcom CEO, is that old worms and viruses can never be ignored. They may appear to offer a lesser risk profile than today's headline attack code, but the reality is that they will pose a risk – and a risk that needs to be countered.

High demand for software testers

Facebook recently paid an elite team of software testers £24,500 to uncover bugs and security flaws on its social media website. In an age where business is taking place more and more in the digital arena, companies are following Facebook's lead by using testers to locate problems within their applications. According to BugFinders, a Cheltenham, UK-based testing company, there has been tremendous growth in the sector. The company's website provides free bug tracking tools and complimentary training to all users in the form of downloads and video tutorials, explaining the various testing models and the differences between manual and automated testing. "The software problems that have recently been highlighted in the press have led companies to paying significantly more attention to the quality of their software," says Martin Mudge, director of BugFinders. "Poor quality software can cost billions a year, which means testing and bug finding have become a priority. We created BugFinders for people who are looking to start a career in software testing and so far we've had more than 3,000 people sign up." The company reports that both demand and salaries for testers in the UK are particularly high. The fact that candidates with five years' experience are able to earn between £50,000 and £80,000 has been a key factor for interest in testing roles. With the current economic gloom still causing alarm among job seekers, companies like BugFinders are offering a welcome opportunity in an emerging sector, and one with rewarding prospects ahead.
Oldest website celebrates 20th birthday
The first website is twenty years old this year. On 6th August 1991 the first website went live; it was produced by CERN to help demonstrate the potential of the World Wide Web. Perhaps not even Tim Berners-Lee himself could have predicted that in a mere 20 years the use of the internet would have exploded so dramatically, with billions and billions of websites now accessible. "Despite the billions of sites out there, there are still many companies around the world who still don't have one of their own," says Chris Winstanley of BaseKit, an organisation that helps companies create, host and manage their own websites. "At just 20 years old, the website has come a long way. As internet access begins to spread across developing countries, and new tools are created that make it even easier for individuals and companies to create their own sites, we believe that it has an incredibly bright future ahead of it, and will evolve to have uses we cannot even begin to imagine at this stage."
Test-centric vendors outperform Tier 1 Indian vendors

According to a survey conducted by Forrester Consulting to evaluate software testing trends, test-centric vendors are outperforming Tier 1 Indian vendors. The study yielded the following key findings: those companies that have outsourced testing are seeing impressive results; IT spending is coming back and, therefore, testing engagements will increase; a significant percentage of companies don't separate development/programming from testing but believe they should; companies not currently outsourcing will be a potential market for services; and Agile software development is tied to advanced testing. To explore these trends, Forrester developed a survey to identify the best practices, understandings and results from Fortune 500 companies and studied additional relevant Forrsights survey results. "It is great to see Fortune 500 companies confirm the value they've realised in outsourced testing," said Sashi Reddi, founder and CEO of AppLabs. "Testing-centric vendors have a higher percentage of testing professionals, lower attrition rates, and a greater variety of testing tools and domain experience, and customers today find great value in them."
Developing the right attitude
Self-styled 'mighty tester' Devyani Borade's motto is "The bug stops here". Here she lists ten all too familiar tester/developer conflict situations and shows how a little tact and diplomacy is better for you and the business.
"A degree of tact and diplomacy is an essential tool in the tester's arsenal. Testers must be aware that their task is, in its baser nature, finding faults in another person's work and nobody likes that. It is important to learn how to avoid confrontational situations."

"Testers must try to make their bug reports impersonal and unbiased. Focus on reporting the bug in the most comprehensive way, not on placing the blame on the developer."

Sound advice; and advice that is familiar to all testers. The ubiquitous phrase 'conflict resolution' crops up regularly in the tester's day-to-day life – be it at a job interview, a bug triage meeting or a performance appraisal. Every tester who really tests knows well the dangers of getting into an argument with a developer, and will do just about anything to avert such an occurrence. Being branded 'The Enemy', having the work environment turn toxic, not being taken notice of – the effects of a tester-developer face-off are often most harmful to the tester, frequently to the extent of affecting their performance. From an organisation's point of view, the worst outcome may be a marked absence of team spirit, resulting in hiccups in project management, resulting in unstable product deliveries, resulting in unsatisfied customers and finally culminating in no payments and no business. From the personal point of view, it is lack of job satisfaction, increase in mental stress or even end of employment. Be that as it may, every tester worth their weight in bugs has had exactly these arguments with a developer at some time or other. We can't seem to avoid them. Conflicts follow us like dogs follow scents, often at the most inopportune moments (read: when performance appraisals are just round the corner). Those of us who have been burnt enough times by these flames of dispute have developed some useful solutions to tackle this problem. After all, let's face it: the problem itself is never going to go away, unless they start teaching developers the same doctrine that they inculcate into testers at the School of Development, which frankly looks unlikely to happen this side of the year 2020. The problem may
hide itself cleverly, by keeping things quiet in the office for a while and letting everyone get on with doing what they do best. But like the proverbial calm before the storm, it will come back with a vengeance at the next tight deadline, customer complaint or threat of redundancy. So how do we make sure that, as the 'odd man out', we don't end up stressed and facing the prospect of another day at work with dread? Here are ten of the most common examples of typical tester-developer standoffs you are likely to face in the performance of your day-to-day duties at work, and what you could do to ease the situation:

1. "It's not a bug, it's a feature. It's supposed to work that way!"
Ignore the unsaid but implied 'stupid', obviously. As for the rest, if you can get a specification to spell out how that 'feature' is actually meant to work, then the developer is going to be the one with some explaining to do. What not to do is blow your cool and exchange dirty looks. No one profits that way, despite the brief relief your feelings may get after you vent your frustration. If you don't have cold hard documentation to back up your reasoning, and only a 'gut feeling' that something is wrong, then take a step back. Do you really need to raise this issue? Do you need to do it now? While it may not exactly be a desirable 'feature' in the system, is it undesirable to the extent that it is becoming an obstacle to using the system? If so, then you need to change tactics and rope in a third person to act as mediator. If not, keep your peace for the moment. There may be a more opportune time to raise it again.

2. "Well, if it works on my machine and I can't replicate the problem, then I can't really fix it, can I?"
Sure they can. Get them to run the test on your machine. Or offer to do the test on their computer yourself. Your customers are not all going to have the exact same configurations and setups as the developer does, are they? Someone is going to face the same problem as you are and then the complaints will start trickling in. For the worst of the difficult-to-replicate bugs, try using a screen-capture tool to track your exact clicks
and provide as much information about the environment as you can. What not to do is give in and give up too easily. Yes, we would all like to avoid confrontations. But we would also like to avoid embarrassment if there is the slightest chance that something critical may go wrong in production.
3. "Someone's messing around with my code. It was working yesterday!"
First, express your agreement. It is possible that more than one developer is working on the same system at the same time. Perhaps multiple people code on shared files. Perhaps an update to a reference library has had an impact in an unexpected area. Always give the developer the benefit of the doubt. Then, confirm what the real issue is. If you have a copy of the previous release, check it. If it turns out that it wasn't working even in the previous release, gently inform the developer of this fact. Be careful that your tone of voice does not sound accusing, contemptuous or ridiculing. These things happen. It's software, as unstable as a pin standing on its head, perhaps even more so. It's not worth getting into an ugly fight over.

4. "Stop raising nice-to-have suggestions. Start logging some useful bugs instead."
Ouch, this one really hurts. Most testers truly believe that the purpose of testing is not just to log bugs; it is also to be a beacon of light for the product's progress; to raise questions from a perspective that no one else may have thought of; to highlight gaps in requirements, understanding and implementation; and to keep the customer's best interests at heart. Hence most testers go the extra mile in logging issues about the product's usability, as well as any scope for improvement that they can anticipate as a business user or any potential problems that could crop up in the future. To ignore these or keep them 'on hold' for a while due to project deadlines or resource constraints is the product owner's or triage committee's prerogative, and they are fully entitled to exercise it. They are also, as a result, fully responsible and answerable for the consequences of their decision, but that's another story. To ignore these simply on the whim of a developer or a miscommunication between members of a team is quite another matter. Make sure that that is not what is happening. Try to be objective and honestly assess your bug list. Do you see more non-critical, non-functional types of item? Recall your previous experiences on a similar project, or working with similarly competent developers and within similar constraints. Does that number now begin to look a tad on the high side? Maybe some of the items could be assigned a lower priority without seriously impacting the working of the product. Perhaps there is a mismatch between your understanding of the objective and scope of that particular test and the stakeholders' expectations of the test. Maybe you just need a clearer and more focussed direction for testing. If that is the case, back off gracefully. Admit that you may have become overzealous in your quest for perfect quality. Make it clear that, at the right times, this attitude is an asset. Perhaps some examples from the past of where it has come in handy may help support your retreat. Admitting that you have made a mistake is not all that difficult. We are not machines. To err is human. You might just be surprised that the developers understand and appreciate your commitment to your work, after all. However, if after your review you are confident that the items still stand, then, by all means, be firm (but polite) and say so. Call in a senior stakeholder like a product owner, project manager or subject matter expert to take a call on the items. Then ensure that you respect the agreed decision. Often, seeking help is the easiest and least rocky option. It takes the pressure off your hands and the third person also acts as a mediator in a tense situation.

5. "I don't really have the time to look at your bugs right now."
This one can be tricky. Here the developer is not really being awkward, but has himself been put into an awkward situation. He probably understands your concerns and may go so far as to even agree with you.
But at the end of the day, he is the creator of that tangible product that the company is selling. There is a lot of pressure on developers, especially towards the end of the life cycle as the go-live date looms closer. At this time, you must be tactful. Log all bugs diligently as usual, but don't approach developers for every little thing. Keep interruptions down. Use your judgement and only raise a flag for a really crucial thing that may render the product completely unusable or unfit for purpose. Nowadays, in the agile environment, customers work closely with the development team and chances are that they may already be aware of the issue from a previous iteration. If you feel that there is a bug that the developer should be taking seriously and is not, first give him the chance to explain his reasons. It may just turn out that he has a valid, genuine and very real reason to ignore a certain item. Perhaps there are other higher priority items that need his immediate attention. Perhaps he has already fixed the issue and it may be available to you in the next release. Perhaps he knows that while the bug may look serious, it is not actually causing any real harm. If his reasons don't convince you, explain your own reasons why he should be looking into it from a different angle – the user's. And then leave it up to him to take the final call. Remember, you are not there to direct and order your colleagues in what they should be doing. You are part of a team where everyone does their best and works as a collective to achieve a common goal. You've done your end of the job. Now leave him to his.

6. "This is out of scope. I've changed only this part of the system, I haven't touched that, so your bug is irrelevant and must always have existed there."
Testers probably get to hear this more than anything else, especially those of us who practise what is now being called 'exploratory testing'. The very nature of a developer's job is narrow and compartmentalised. It is a sad but true fact that a developer has barely just enough time to program what he believes he has been asked to program and cannot spare a lot of thought
for the surrounding system. Second- and third-line support personnel are the people most likely to be heard mouthing this refrain. Their job is to provide quick fixes for the problem that was reported. It isn't to sit and dig out what else is wrong. And there is plenty wrong with software. That is why it keeps them in the money! Trouble is, the nature of a tester's job is nearly the antithesis. We look at things from a macro level – the proverbial 'bigger picture'. We are professional testers. We are trained (or have trained ourselves) to observe, detect, analyse and report several things happening simultaneously. It goes against our grain to be asked to 'not log' a bug. We generally don't go looking for unrelated bugs, but when we come across one, we cannot ignore it and pretend that we never saw it. As a user of the system, we don't care if the bug is the result of the current fix or has been lying around dormant since the first release. It is a bug if it annoys us or stops us doing what we want to do. And so we will log it. As such, the 'scope' of testing is much wider than the 'scope' of development. So where should one draw the line? This is the ideal time for middle management to step in. Let them decide whether to consider the bug as part of the current fix or open a new ticket for it. As long as the bug is serious enough to warrant a fix and gets fixed, you needn't really be worried whether it is part of your current remit or outside it.

7. "That's rejected. Why? Because I say so," or "You are just the company tester. We'll see what the customer says."
Log it. State the situation as comprehensively as you can, then leave it alone. There is not much you can do when met with a response like that. Clearly, the developer is in no mood to get into a discussion about it, for whatever reason. Trying to start one is going to be a waste of your time and energy.

8. "Bah, causal analysis meetings are so useless!"
One word: education. Developers who don't realise the value of project end review meetings are not to be faulted. Most likely they have either
never attended one, or never truly participated in one with the right spirit. Causal analysis meetings, when conducted in the correct manner, are phenomenal learning tools that only add to the overall know-how that you build up for each project that you execute, and take away nothing from you. Imagine: you are not only gaining from your own experience, but by listening and sharing you are gaining from others' experiences, without going through it all yourself the hard way. What could be better? Help developers understand the importance of these reviews by encouraging active participation and a healthy competitive atmosphere. No finger-pointing (or raising), superiority complexes, abuse or sycophancy, please – save all that melodrama for your blog.
9. "That is not a requirement. The customer has not explicitly stated it in the specification and so I am not coding that."
Again, this is similar to point number one above. Explain as neutrally and persuasively as you possibly can about implicit and tacitly understood requirements. If the developer still has his ears set back, leave a more senior person to deal with it. Scope creep happens all the time. It is up to the project manager to decide whether the issue is important enough to be absorbed within the current budget or whether it will have to be charged for additionally.

10. "Hey, you are the tester. You do your own automation tests," or "What do you mean I didn't tell you about the new release? Didn't you notice I was working until late yesterday?" or "It's not my fault. It's the lousy unreliable configuration/release management system that we have."
... And variations thereof. They are all instances of developers being uncooperative, unhelpful, un-team-spirited or, worse, adopting a 'blame the system' attitude. This smacks of a breakdown of communication between members of a team. This is a dangerous and potentially harmful situation and one that should be remedied as soon as possible.
Perhaps it is time to take the offending developer aside into a meeting room and try a bit of straight talking. Perhaps it will help instead to get a mutual colleague to intervene. Perhaps higher management needs to set down urgent, immediate and clear guidelines for communication. Whatever the method, this needs to be defused and resolved fast, before it starts harming your morale or his, influencing the team's morale, or affecting the project's outcome. Loss of business is catastrophic and most organisations will not take it lightly or lying down. This will have a direct impact on your own future in the company. Granted, this is a worst-case scenario, but such things have come to pass regularly in software history and will continue to do so. Until testers and developers learn to work together.
What you can do to minimise conflicts within the workplace
• Empathise with developers and try to see things from their point of view – they may have a valid point that you have not thought of.
• Attack the issue, not the person – never introduce a personal motive into any argument. Always keep things on a professional level.
• Anticipate and come prepared – like any good battlefield strategy, pre-empt your opponents armed with knowledge.
• Take it slowly – introduce change or a difference of opinion gradually so that people get time to let the idea sink in.
• Get assistance – seek out the key players and bring them over to your side. They will, in turn, be able to influence others.
Use these tips to enjoy a healthier and more mutually supportive office environment!
Devyani Borade, software tester. http://devyaniborade.blogspot.com
Industry-leading Cloud, CEP, SOA and BPM test automation
Putting you and the testing team in control

Since 1996, Green Hat has been helping organisations around the world test smarter. Our industry-leading solutions help you overcome the unique challenges of testing complex integrated systems, such as multiple dependencies, the absence of a GUI, or systems unavailable for testing. Discover issues earlier, deploy in confidence quicker, turn recordings into regression tests in under five minutes and avoid relying on development teams coding to keep your testing cycles on track.

GH Tester ensures integrated systems go into production faster:
• Easy end-to-end continuous integration testing
• Single suite for functional and performance needs
• Development cycles and costs down by 50%

GH VIE (Virtual Integration Environment) delivers advanced virtualized applications without coding:
• Personal testing environments easily created
• Subset databases for quick testing and compliance
• Quickly and easily extensible for non-standard systems

See Green Hat's capabilities for yourself at EuroSTAR in Manchester, November 21-24.

Every testing team has its own unique challenges. Visit www.greenhat.com to find out how we can help you and arrange a demonstration tailored to your particular requirements. Discover why our customers say, "It feels like GH Tester was written specifically for us".

Support for 70+ systems including: Web Services • TIBCO • webMethods • SAP • Oracle • IBM • EDI • HL7 • JMS • SOAP • SWIFT • FIX • XML
Choosing the right load testing tool
Freelance tester Jason Buksh explains the difference between the different types of performance tools and offers useful advice for those assessing which load tool is right for their purposes. He also looks at what makes a good performance tester.
There are many load testing tools available – far too many for me to research individually. I've thought long and hard about how best to present this information in a way that is easy to digest. What follows is my interpretation and a first stab, but there are always exceptions to the rule.
The tools tend to fall into three distinct categories:
Freeware performance test tools
• Open source and free;
• Specialist knowledge required: developer-level interest required to learn;
• Learning curve: high;
• Development time: high;
• Handover time: high;
• No commercial support: reliance on forums and goodwill;
• Best suited to: in-house, sub-system testing, 'quick and dirty' testing, teams with high access to insider knowledge of the application under test.
These tend to concentrate on generating load only. The largest benefit of open source tools is that they are free and extremely flexible. Time to record and play back scripts can be lengthy and can require different toolsets, but they are especially suitable for developers who wish to quickly perform sub-system (or system) load tests, or for the performance tester who has close access to the development team. I view the development time needed to create performance tests for large applications as a major factor when considering them, eg where client-server traffic is changing frequently with new builds. I personally think there is a large hole in the market to pull together a number of open source tools and build a solution that replicates the commercial tools – and potentially does a better job (more on this in another article).
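To make 'quick and dirty' concrete, the sketch below shows the sort of minimal driver a developer might knock together with nothing more than the Windows scripting host and a loop – the URL is hypothetical. What the commercial and cloud categories described next add is everything around this loop: concurrent virtual users, ramp-up, think time, monitoring and analysis.

' A crude single-user load driver, for illustration only (run under cscript):
Set http = CreateObject("MSXML2.XMLHTTP")
started = Timer
For i = 1 To 100
    http.Open "GET", "http://intranet.example/app/login", False
    http.Send   ' synchronous request – this measures serial throughput only
Next
WScript.Echo "100 requests in " & (Timer - started) & " seconds"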
Commercial performance test tools
• All-in-one package: a lot of what you need is included, eg record and playback functionality, analysis graphs, monitoring capability, debugging ability;
• Costs involved;
• Learning curve: medium;
• Development time: medium;
• Handover time: medium;
• Commercially supported: protocol updates available quickly, help for issues and fixes can be speedy;
• Best suited to: in-house testing of applications prior to release where aggressive releases and builds are scheduled;
• Examples: Loadrunner, Forecast, SilkPerformer.

Commercial tools are generally suitable for medium- to large-sized customers that have complex n-tier architectures and large development teams. If there are a number of different protocols that need supporting, then you can generally find a commercial tool that will support them. Commercial tools have a number of sub-products bundled together: record and playback, scenario creation, injectors, real-time analysis, post analysis and tie-in with monitoring products (eg memory, CPU, network I/O). Together these sub-products greatly enhance the productivity of a performance tester. These tools tend to be suitable for a large development team dropping regular releases into performance testing. Most commercial tools work on the same principles and generate load in slightly different ways.

Cloud-based performance test tools
• No hardware or installation required;
• Very cheap and easy pricing structure;
• Learning curve: low;
• Development time: low;
• Handover time: low;
• Best suited to: simple load scenarios executed over the web on HTTP-based applications, true E2E testing;
• Disadvantages: low protocol support, no in-house test ability, difficult to create complex load scenarios;
• Examples: Loadstorm, LoadImpact, Gomez.

These are the new kids on the block and they are shaking up the old guard. Cloud-based tools are an exciting development as they are a fraction of the cost of the commercial tools. They tend to work on a memory and CPU sledgehammer approach, which means they consume lots of hardware resources, eg Amazon cloud services. This is, in a way, a minor drawback, as a lot of them do not give you the ability to inject their services in-house and past the firewall. Scripting ability and logic tend to be simple – which means complex load testing scenarios are sometimes not achievable, depending on your requirements. These tools have a time and a place – if you are considering them then also consider your hardware environment and the infrastructure in place. You may only be able to test during quiet periods of the day (as you are testing from the internet) and you will have to look carefully at the infrastructure outside the lab, eg firewall, network capacity – the web-facing infrastructure. I feel that this is the future – it's only a matter of time before these services improve on their weak points.
More tools
There is a fourth category – network-based performance test tools, which operate on the lower levels of the OSI model, eg Spirent Avalanche – but these are out of scope for this article. As I said earlier, this is a generalisation of the different load testing tools available (there are over 100), but it should point you in the right direction and give you a good starting point when assessing which tool to use.
What makes a good Performance Tester?
Jason Buksh, freelance technical PM & performance consultant. Blog: www.perftesting.co.uk
Performance testing is a high-profile and high-impact activity; mistakes made here are very visible and costly. Performance testing is akin to a technical project manager role: the quality of requirements and stakeholder management are as important as tool experience. The fact of the matter is that I've found performance testing to be a largely misunderstood area, particularly by the people that are recruiting. Very often the person doing the recruiting has very little knowledge of what is required – other than specific toolset knowledge. I've also found that once into a company, many stakeholders have very different ideas of what the performance testing role entails. It's a tricky balance to achieve and it takes time to correctly set expectations. There are the obvious and generic qualities such as attention to detail, diligence and discipline; everyone should have this core set of values. The key core skill – the ability to script in the chosen performance tool – shouldn't have the weight and precedence that people assign to it. Here are the attributes I look for when interviewing, in priority order:
• Previous development experience;
• Performance tools experience;
• Willingness to share and transfer knowledge;
• Techniques adopted when tackling a problem;
• Knowledge of the performance testing lifecycle;
• Ability to learn and adapt to different systems;
• Graphing/monitoring experience;
• UNIX/3GL/SQL experience.
It's not an exhaustive list, but I've found it to be a good indicator. Notice that a specific toolset is not on the list, eg Loadrunner, Performance Center, JMeter, Facilita. If someone has good experience with other performance tools it really won't take very long to come up to speed with the in-house toolset; they mostly work on the same principles. I've cross-trained some very talented individuals and these have become more productive
than some people that have had more than five years' experience. I look for ability first and foremost, and then experience. A willingness to extend outside their comfort zone and come across well to other stakeholders is another key attribute. A good performance tester really should want to understand the application, architecturally and from a business perspective. Do you really want someone to just sit, script, run tests and report results? If they have no interest in understanding the application then they will write poor performance tests. This means you should also look for people who are able to communicate effectively; the performance role is directly related to the quality of the requirements driving the test. The role needs someone that is always interested in questioning and evidencing what lands at their desk. A healthy scepticism is always good. I also have a selfish requirement – I want my team to grow and not need me. I deliberately go for personnel that can learn, step up and eventually lead the team. There is nothing worse than going away and coming back to a series of high-profile failures. The quality of management within a team is directly related to the amount of issues caused by them. Good teams can often go unnoticed.
Key takeaways
• Experience does not equal ability;
• A lack of specific performance toolset experience need not be an automatic disqualification;
• Strong programming ability is a good indicator;
• Performance testing is a high-profile and high-impact activity.
A word of warning – performance testers tend to be a funny old lot; they tend to have a quirky sense of humour and strong personalities. This may not be immediately apparent, but over time you will find they can enrich your team and working environment immeasurably.
Powerful multi-protocol testing software
Can you predict the future? Don’t leave anything to chance. Forecast tests the performance, reliability and scalability of your business critical IT systems backed up by Facilita’s specialist professional services, training and expert support.
Facilita Software Development Limited. Tel: +44 (0)1260 298 109 | email: enquiries@facilita.com | www.facilita.com
Is your application ready for test automation?
Suri Chitti makes a case for due diligence before making any moves into test automation and provides a few guidelines about how it should be done.
Many automation projects have been derailed because of issues found between the tool and the application during automated test development (scripting) and playback. This is not because teams do not know how to make estimations; it is more likely because they have done estimations without due diligence, or have mixed the two up. While test automation has been around for more than a decade, and while teams have matured in their approaches to attempting automation, it will still help to clarify the distinction between the two and the order in which they must be done.
In many test automation projects I have worked on, due diligence ended with building a sound business case for automation. We would then begin estimations. We would typically sample the application for large, medium and small-sized test cases and base our estimations on the effort (amount of time taken) to automate our sample cases. In most of these projects we ended up being thrown off track during automated test development, almost invariably due to issues found between the tool and the application. In a few projects we took a little time to explore the application, exercising various application functionalities with the tool, attempting to record and
play back the steps and verifications in the test cases. We were not bothered about the size of test cases at this point; we were more interested in sampling the different complexities that existed, not with a view to making estimations but with a view to making the application amenable to being scripted with our tool. Yet it was in these few projects that we ended up making realistic estimations and delivering on time. What was better, we could assume a linear correlation between the size of test cases and the effort for automation in our estimations, and achieve them during automated test development. The improvement we achieved in these few projects was because we did not stop due diligence with building a business case. We went a few steps further and explored the application with the tool – what many would call a feasibility study – which in other projects we had mixed with estimations. Estimations should not be done before due diligence is completed, and a feasibility study is an important part of the due diligence process.
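To make the linear correlation concrete with a purely hypothetical illustration: if a sampled ten-step test case took two hours to automate once the tool and application were known to work together, a thirty-step case could reasonably be estimated at around six hours. It is only after the feasibility work has removed the surprises that this simple proportional scaling holds.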
Feasibility
If you have already automated tests for an earlier version of your application, you may omit the feasibility study, provided there are no major changes in the architecture, technology layers, design and coding styles behind the newer version of your application. Having a test automation framework is not a valid reason to omit the exercise.
A framework will not guarantee that your application will work with your tool without issue, unless issues have been previously identified and addressed in your framework. Your framework, however advanced, will ultimately interface your application to the tool, and any issues cannot be magically precluded by it. The most valuable asset, even for your automated testing, is the set of manual test cases. The scope of your automation work is determined by your manual test cases and not the software requirements. Hence the way your test cases have been written decides, to a large degree, the success of your test automation. The bad news is that most of the time, test cases are not written to help automation. Good test cases do not need the tester to be conversant with the application, because there is enough detail to perform navigations and verifications with accuracy. Bad test cases need the tester to be wholly conversant with the application and rely more on his expertise with the application than on the detail they provide. Good test cases are needed by the automation engineer, not just to help him pick his way through the application but to understand his scope of work. If the test cases are not in shape, do not attempt to take the situation in stride by including an estimate to rewrite your test cases. Repair your test cases as part of your due diligence. A bad test case (Fig 1) does not mention application controls, unlike the good one (Fig 2).
Exploring the application
Once your test cases are in good shape, you need to explore your application with your tool – in other words, begin your feasibility study. You should not mix exploring your application with making estimates for your automation project; you are not ready to make estimates at this point. Nor should you put off this activity until you start making estimations, for if you do so you may go through several iterations of trial and estimation or, as mentioned earlier, you may deliver inaccurate estimations that will trouble you during automated test development. The bad news for your feasibility study is that no application is ready to be automated with any tool. The good news is that most applications are amenable to being automated with most tools.
Fig 1: a bad test case; Fig 2: a good test case (screenshots not reproduced).
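Since the screenshots are not reproduced here, a hypothetical pair of steps illustrates the difference. A bad test case step might read: "Enter the customer's details and check they are saved." A good version of the same step names the application controls and the expected outcome: "Type 'John Smith' into the Customer Name text field, click the Save button, and verify that the Status label displays 'Record saved'." The second form can be followed – and automated – by someone who has never seen the application before.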
I am assuming here that you know what third-party technologies are utilised in your application and that your tool explicitly supports them. Otherwise you may have to ponder what it takes to extend your tool to work with them, and this requires a much higher level of effort. Your feasibility study will be a concrete attempt at making your application amenable to your tool. You can make valid estimates only after you have confidence that your application will work with your tool. The amount of time your feasibility study needs does not depend so much on the size of your application as on factors like the presence of different technology layers behind your application, the way your application has been built to facilitate navigations (like validations, prompting, auto-filling etc), how test verifications can be effected on your application, and how many different systems you need to access in your testing, like other applications, databases, command line interfaces, consoles etc. Done properly, this exercise will take a few weeks' time even in the most complicated cases and is worth its weight in gold. There is no prescribed way of exploring your application with your tool. It is rather a key skill
that automation engineers build and refine over years of experience. I will elaborate a little on how to go about it.

Starting to explore
Where you start depends on the tool and its methodology; I will make a few recommendations with QTP as an example. Firstly, you should attempt to capture every page of the test application. In this process, you should take each page (or screen) of the application and capture it by using QTP's learn feature. When you capture a page, ensure that all constructs on that page (links, text fields, dropdowns, radio buttons, forms etc.) that will be interacted with during your testing are being recognised, and recognised correctly. This activity should be attempted even if you do not plan to use an object repository and resort to descriptive programming instead. It will help you identify and resolve any controls in the application that the tool has difficulty recognising – a difficulty which will be there even if you choose not to use the object repository. At the least, this activity will identify key tool configurations and extensions that need to be enabled to work with your application. It could also identify any bugs present in the application that will impede automation (and they may not impede functionality). For example, in a recent project we found that several HTML input objects in a web application were using the same name in the source code. (On a different note, such duplication should be discouraged for reasons of good coding practice, if for no other reason.) While this was not an issue for functionality, it was causing the object repository manager to generate a Visual C++ runtime error when the page was being captured. We escalated the issue to the application developers, who resolved it and removed a serious impediment to our automation. The next task of exploration is to arrive at a set of paths that addresses every feature of the application and every complication in testing. These paths need not be a selected bunch of test cases, but can be shortened or extended versions, or combinations, of test cases. When you arrive at these
TEST | October 2011
paths keep in mind that you should include from the test cases, the steps and verifications performed on all the different features in your application like forms, stylesheets, reports etc. You should include the different systems you need to access, connections you need to make and any other complication. Take the help of the manual testers to build such paths. The more inclusive you’re set of paths the more likely you will identify issues your application has with your tool, so building the right set of paths is the crux of your whole automation affair. Once you have your paths you should automate and run them in a simple record and playback fashion. You may use the object repository or descriptive programming in doing so. You need not attempt to build functions or other complicated structures during this activity, unless programmatically required to handle a situation. The objective is to unravel any difficulties in automating the procedures involved in your testing. While you previously attempted capturing all application controls in the object repository and resolved recognition issues, recording and running test procedures can have further difficulties, hence the need for this task. For example in a recent project we had a page where there were four sequential text input fields. The application was designed such that a keyboard input event into a text field would enable the next text field. QTP did not have any issue capturing the set of sequential text fields, yet when a script was recorded to input values into the series of text fields and run, only the first text field was getting populated and the others did not get enabled. This was because QTP recorded the action of inputting a value into a text field by using the VBScript set method and this method did not fire a keyboard event that would cause the application to enable the next field. We rectified the problem by modifying the script to use another method called Sendkeys.
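A minimal sketch of the kind of change involved – the object hierarchy here is hypothetical, and SendKeys is shown via the WScript.Shell object, one common way of sending genuine keystrokes from a QTP script:

' Recorded step: Set writes the value directly, without keyboard events,
' so the application never enables the second field.
Browser("App").Page("Form").WebEdit("field1").Set "1234"

' Reworked step: click into the field, then send real keystrokes so the
' application's key-event handler fires and enables the next field.
Dim shell
Set shell = CreateObject("WScript.Shell")
Browser("App").Page("Form").WebEdit("field1").Click
shell.SendKeys "1234"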
Tools vs applications

The automation world abounds with reported problems between tools and applications. Fortunately, during your feasibility study you will not run into many of them; most projects report only a few major problems that have had to be overcome. If you are finding too many, you are not wrong to wonder whether your application is simply a poor fit for automation – and choosing another tool may not deliver a miracle either. You should then seriously weigh the option of not having automation, or tone down your expectations of it.

By making sure your test cases are in good shape and by doing a feasibility study, you will derive a few major benefits. You are likely to identify and resolve showstoppers for your proposed automation. You can make reliable estimates, because you know what it takes to automate your application. And since all the complexities have been identified and handled, your estimation can assume a linear relation between the effort required and the size of the test cases (the number of steps and verifications) to be automated. I agree that issues between the tool and the application are only a subset of the issues one may face during the entire process of automation, and that I am stretching the assertion; yet these are the issues that decide whether your application is fit for automation at all. Once you begin your automation project, your estimates are not likely to be thrown off by unanticipated issues with the tool.
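As an illustration of what such a linear model looks like in practice (the calibration figures below are invented for the example – in reality you would derive them from the scripts built during the feasibility study):

' Hypothetical linear estimate: with tool/application issues resolved,
' effort scales with the steps and verifications to be automated.
Function EstimateHours(steps, verifications, hrsPerStep, hrsPerVerification)
    EstimateHours = steps * hrsPerStep + verifications * hrsPerVerification
End Function

' Example: 400 steps and 150 verifications, calibrated from a pilot script.
WScript.Echo EstimateHours(400, 150, 0.25, 0.5)   ' 175 hours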
About the author: Suri Chitti is a software test professional. His extensive experience includes testing software across several industry verticals, automated functional testing and non-functional testing. He has worked around the world for major companies involved in the production of commercial software.
Training Corner

Testers be free

Angelina Samaroo smells a whiff of freedom in the air and makes the case for the freedom of testers to do their job.
As the world clamours for its view of freedom, we too have a need for freedom. For many of us, that need is simple: we just want to get on and carry out testing. Not managing testing, just doing it. To just do it requires everyone else to do their bit, so let's summarise the (raw) deliverables we need from each discipline. From the business we need a defined and signed-off need. From project management we need the start and end dates for our piece of work. From the developers we need a stable system. From us, they need information – what did we test, what did we find out and what did we do about it?
Defined requirements?

In the spiced-up reality we often do not get a written requirement from the business. If it is written, it is often not signed off. It was hastily put together, under duress, and it shows. There are many reasons for this; the most compelling is that the business may simply not know what it wants at the start. That, of course, is fine in Agile. However, if the development lifecycle is not defined, and no one cares enough to attempt a definition, the lack of a properly documented requirement tends to be a cause of unease.

The tester is not quite sure what to test, and thus becomes shackled in uncertainty. The road to freedom in this instance is to recognise that there can never be code without a requirement. A developer who churns out code vaguely resembling your system must have had some idea of the requirements; the probability of a random coding exercise producing a system matching your business is nil. He must have known what he was coding at the time he wrote it. This is a given. So, if the business is not forthcoming with a requirement, your next port of call is the developer – ask what he did. Let the project manager know that you have asked the questions. Record the answers.

Testing without a requirement?

Now, I'm hearing the screams – "we do not have time to chase the developers". Accepted. So now you are apparently testing without a requirement. However, as for the developer, the chances of you knowing how to access and test the system without some prior need are also nil. Thus, you too have a requirement. Unlike the developer or the business, though, you cannot carry on regardless. Your only value-add to the software development process is the provision of accurate and timely information. In today's world, long-haul projects that reach the end date without significant change will probably be going dead, not live. Thus the question "what is the likely risk to product, project and business if we had to release today?" may become the norm. You cannot provide that information without a set of requirements; as we said, though, in this instance they were not forthcoming. Your job, however, is to define what you did test. So the spreadsheet you used for your test steps is not enough on its own – it will not give the big picture. It must now be preceded by a summary of what you were testing: in sentences, generally starting with a capital letter and ending with a full stop. These sentences then become paragraphs. Eventually you might even have a set of requirements. Then you're in Agile without even trying. The project which changes is your goal – stagnation makes for a very murky pool.
Semantic shift

Now, you may have been wrong in your interpretation of the requirements, but in test best practice you will have succeeded. To test without tracing back to some requirement is anathema to the test professional. When you are shown to be wrong in your interpretation of the (undocumented) requirements, take it as progress: someone became interested in what you were doing. You, who already knew what you were doing, now have traceability. Track the changes for comparison later. You may find that you were right in the first instance; it just took the business a while to catch up. This is often the case when the business people around you come and go. You quickly become (as do the developers) the business expert in some areas. No bad thing for the CV, so don't forget to update it, for recall and editing later when you're on the move.

With the CV in mind, Boris Beizer's 1983 book Software Testing Techniques can be judged by its cover. It provides the how-to guide to systematic techniques (outside the bounds of a magazine article, so I will not elaborate). Worthy of a mention, though, is his concept of the semantic shift – two meanings for the same wording. Working the term 'semantic shift' into your vocabulary may be of value on many projects. Take for instance a date: 06/10/2012. In the UK this would usually mean 6th October 2012; in the US it could mean 10th June 2012. This, you would surmise, is a well-known risk. So imagine my surprise when trying to book a flight to the US via a US-based airline's website. I was aware of the risk, so was vigilant when clicking away. Yet even though I had selected the required date via their calendar, the date was still the wrong way round on confirmation. Presumably one coder recognised the issue and 'translated' the date for the UK site, while another recognised the problem and simply offered a calendar up-front – a very sensible approach. Did the two ever meet, I wonder? Each was right in his own world; they fell over in the joined-up thinking, aka end-to-end testing. I am happy to report that the site in question no longer displays a calendar: it shows the day first, followed by the month spelt out in words. Problem solved, once and for all – if the configuration management team keep up their end of the bargain.
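The trap is easy to reproduce. In VBScript, for instance, CDate parses an ambiguous string according to the machine's regional settings, so the same literal can yield two different dates – a minimal sketch:

' On a UK-locale machine CDate("06/10/2012") gives 6 October 2012;
' on a US-locale machine the very same line gives 10 June 2012.
Dim ambiguous
ambiguous = CDate("06/10/2012")

' Building the date from explicit parts removes the ambiguity entirely.
Dim unambiguous
unambiguous = DateSerial(2012, 10, 6)   ' always 6 October 2012

Which is, in effect, what the airline's fixed site now does by spelling the month out in words.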
Starting and finishing
Angelina Samaroo Managing director Pinta Education www.pintaed.com
Let us now look at those start and end dates. The start date is often not an issue – life can be kind. It knows that the bigger hurdle is to finish, so it makes starting easy. Don't bother with market research, don't conduct due diligence on suppliers, don't stand up to the business. Just start, and focus on the finish line. I was taught this at university: never look down. This works when I'm running, and panting – just a few more steps. When running, however, I would have made sure that I was properly dressed for the terrain and had eaten at the right time. I have no need to look down because I have already prepared for a short sprint. An orienteering trip at night, however, would require me to know the start and end points, and the waypoints along the route. I would need to know what to do if it rains, if the flashlight gets left behind or if I actually get lost.

Managing a single IT project is no different; it requires significant thinking beforehand to avoid the darkness. Not being able to fix the issues is not the problem; that may be outside the project manager's control. For him, success is in knowing the course and putting relevant and realistic contingencies in place. His contingency paves the way for our freedom to test what is relevant and realistic, with confidence. It is the novice project manager who falls over the predictable things in his own domain.
The code

Now let's talk about the code itself. Here, freedom for us is being able to test from an end-to-end perspective. That is not as easy as you might think. Consider the case where a navigation link is broken: you will not be able to reach the end of your thread, and the test will be suspended pending a fix for the link. You will not be able to down tools and read a book to find the reason for your very existence; you will need to do something relevant – run another test, perhaps. When the fix has been applied you will need to resume testing. This may require you to spend time setting up the test environment all over again, which could be a real drag, especially if you need other technical skills and people to do it. While you're finding, and getting fixed, all these unit and integration (in the small) bugs, you're not really doing your job as a system tester. You are doing the job of a developer, in the test environment, from your perspective and in your test window. How long do you have left before it closes? Whilst you were doing their job, who was doing yours? Who is responsible for what you do? These are questions for the novice tester. The test professional who is free knows the answers. He knows that the 'you' in Dr Boehm's summation – that downstream swimming is much harder if you do not remove the rocks upstream – means him. This thought he relishes; he will stand up and be counted.
What the cloud offers test management With the news that it has launched a cloud-based test management tool, Matt Bailey spoke to Automation Consultants’ director Francis Miers about the testing business in general and what TestWave can offer the tester in particular.
IT services and software company Automation Consultants recently announced the launch of TestWave, a full-service, cloud-based test management tool. With a fixed price of £75 per user per month, and instant access through a web browser, Automation Consultants says TestWave fills a gap in the market for a full-service test tool that is affordable and easy to use.
Francis Miers Director TestWave Automation Consultants www.automationconsultants.com
"As a company that does a lot of testing ourselves, we felt there was a need for a management tool that was easier to install and cheaper," says Automation Consultants' TestWave director Francis Miers. "Many organisations are big enough to do their own testing but not sufficiently large to warrant buying a commercial test management tool, and that's where we come in."

The software was developed in-house by a team of engineers who have consulted on testing projects and IT transformations for a range of clients. It enables teams to store test scripts, analyse results, and record and track defects. Test managers can see the progress of testing in real time through intuitive dashboards, and map the testing to releases and requirements. "Having worked in this sector for a decade, we knew there was a need for a test management tool that could reduce waste in IT projects without breaking the bank," adds Miers. "It enables the test manager to keep track of the progress of the many tests involved in any significant IT project, regardless of the scale of the project or whether it is spread over multiple locations."
Managing test assets

Compared with other methods, such as tracking testing through spreadsheets and email, TestWave reduces duplication and omission, and makes management easier by showing the current state of testing in a series of intuitive real-time dashboards. It is therefore ideal for software development, or for major changes to existing systems such as upgrades. "We have created a cloud-based tool that manages the assets of the test team," says Miers. "It stores test scripts, tracks their execution and tracks defects; it maps tests onto requirements and releases; and it is intuitive and easy to use. We naturally turned to the cloud to support TestWave, as we knew the demand was there for a simple-to-use tool that can be instantly accessed via a browser and incurs no installation or management costs. We believe that by offering a more accessible test management tool, test practices will improve."

Security is perennially top of the list when IT professionals are asked for their main objections to using cloud services. "We take security very seriously," says Miers. "Obviously companies need to be sure that their data is being protected. We use top-quality data centres in the UK, where all communications are encrypted. But we are seeing a growing acceptance and willingness to trust data to the cloud."
Testing business

The company is a tester itself, and Francis Miers has seen many changes over recent years. "The testing business in the UK never stops changing," he confirms. "There has been a trend for off-shoring among the larger organisations, but that hasn't cut down on the on-shore testing. Anecdotally, I am hearing that the recession has driven many organisations to reduce their risk and cut down on off-shore testing where possible. My guess is that an equilibrium will be reached where the off-shore testers specialise in what they do best and the more reactive, faster-turnaround work is tackled on-shore. Of course, labour costs in India are rising, and this will have an impact on off-shoring too."

Another trend highlighted by Miers is the drive to automate. "There is a trend for increasing automation in testing," he says. "Automated tools are improving and becoming easier to use, and we have seen a growing trend towards agile automation. In fact we have plans to launch an Agile module for TestWave, so it's a trend we intend to follow!"
Construction or demolition?

All too often test teams are seen as the people who come along late in the process and destroy all the good work done so far, earning themselves a reputation as demolition experts. Susan Chadwick, co-founder of Edge Testing, wants to set the record straight.
Some time ago I found a quote which seemed to encapsulate what is often a key challenge for the testing community. It went something like: 'testing should be seen as part of the construction process, not as a demolition exercise'. All too often, in practice, this is not the case, and the test team are seen as the people who come along late in the process and destroy all the good work done so far, earning themselves a reputation as demolition experts.
I have seen test teams get caught up in a whirlwind of finding defects – then finding more defects – then having a celebration for finding the hundredth defect – then watching any relationship that did exist with the development team disintegrate totally.
Construction time

Lyndon Johnson once said that 'any jackass can kick down a barn but it takes a good carpenter to build one'. It is probably too harshly worded an analogy, but testers have to be aware of how behaviour plays a key part in the successful outcome of a project. A lot of this can be caused by fundamental flaws in the approach to testing, the solution to which includes:
• Engaging testing at the outset – starting with requirements and then making it an integral part of construction;
• Ensuring everyone involved understands each other's roles and is working towards the same business objectives;
• Making testing responsible not just for finding defects but for preventing them from entering the process – it is far too easy to come in at the end and criticise what has gone before, and it is a no-brainer that prevention is more cost-effective than cure.

This sounds easy when put in a few bullet points; none of it is new, none of it is 'rocket science', and yet many projects fall, or at least trip, at the first hurdle: requirements. The logic seems infallible – if requirements are explicit, discrete, unambiguous and well tested, you have a solid foundation for any project.

Many people are familiar with the parable of the man who built his house on sand and the one who built his on rock. One interpretation is that of the two builders, one is a thoughtful man who deliberately plans with an eye to the future; the other is not a bad man, but thoughtless, casually building in the easiest way. One is earnest; the other content with a careless and unexamined life. The latter wants to avoid the hard work of digging deep to ensure a strong foundation, and takes a short-range view, never thinking what life will be like six months into the future. He trades away future good for present pleasure and ease. The analogy with establishing strong requirements and testing them well and thoroughly is obvious. It also focuses the mind on how key the early business analysis is. It can be a case of more haste – to sign off the requirements – and less speed, if the quality is poor and a lack of support during development and testing leaves it to developers and testers to try to complete the analysis and establish the full requirements. In the worst-case scenario this approach is excused as being 'agile', which I think does a great disservice to Agile methodology.
Rising to the challenge

Software testing is a difficult role, and testing teams are regularly faced with the challenge of delivering without complete or detailed requirements and specifications. They have to rise to that challenge: 'a successful man (tester) is one who can lay a firm foundation with the bricks others have thrown at him' (David Brinkley). I have since hunted (well, Googled) high and low for the source of my opening quote – other than my own imagination – but to no avail. It did, however, as I struggled for ever more tenuous words to Google, take me to, of all things, the Building Regulations and sound foundations. Those readers who have built extensions or made other home alterations are likely to be much more familiar with them than I am, and may be able to correct me on the detail, but I have distilled the essence of what is covered – in an industry whose regulations do strive to view testing as part of the construction process! They state that building work may be checked for compliance with the requirements of the Building Regulations in a number of ways:
• Inspecting plans and drawings;
• Checking that materials and products meet relevant standards;
• Visual inspection of construction details during and after construction;
• Checking commissioning results;
• Use of accredited construction details by the builder, combined with formal on-site inspection and sample testing;
• Pre-completion testing.
Software testing adopts similar techniques, with parallels in walkthroughs and inspections, code reviews, testing to specification, confirming that the solution is fit for purpose, and quality gates.
Susan Chadwick Co-founder Edge Testing www.edgetesting.co.uk
In the midst of achieving benefits through innovation and agility, there is a need to remember the strong foundations which underpin all project delivery – including testing – and to make that testing an integral part of the construction process. Whole-project-team collaboration, with a collective focus on the overall project objectives and business outcomes, is critical to the ultimate success of any software project.
Taking a holistic approach to software development

In his first column for TEST, Mike Holcombe, Professor of Computer Science at Sheffield University, suggests that the testing process should be considered from the start of any software project.
I guess people should feel sorry for testers. It's often a case of responsibility without power. They are usually faced with the daunting task of trying to fix problems not of their making, in a timescale that is usually impossible. If the release bombs, they are in the firing line because they didn't solve or predict all of the problems. The problem for testers is to convince the managers and financial big shots that a more holistic approach to a project pays off in the long term. All projects involve testing. Testing plays a number of roles, one of which is, of course, to make sure that what is delivered works. But testing also contributes in a number of other ways. Testing carried out early, when the project is in its infancy, can also identify problems in the requirements, in the architecture and in the design.

Mike Holcombe Professor of Computer Science Sheffield University www.dcs.shef.ac.uk/~wmlh/
Design for test

One of the key things that hardware chip designers discovered decades ago is that leaving the design of the tests until after the system has been designed is a disaster. They identified the concepts of 'design for test' and 'testability' as important factors in ensuring the quality of the final product. Quite often they were faced with trying to test some function or behaviour that was essentially impossible to test because of design decisions already taken – perhaps part of the circuit could not be accessed fully to check it out. This can happen in software too.

In an ideal world testers should be involved at the conceptual stage of any project to provide expertise about how the final system will be tested. In this way they can influence the architecture and the design so as to make testing easier, more reliable and, ultimately, cheaper. During requirements development, requirements for testing should also be considered, and as the project moves into the design specification stages, test specifications should proceed in tandem. Whether at the level of a major feature or in the details of unit code, there are two fundamental questions testers should ask if they are to be assured that their testing can reveal the maximum number of critical faults:
1. Can I find a way to trigger a function/action under ALL the conditions it will operate under?
2. If I carry out a test, can I access all of the results of that test from the system?
These two questions relate to the properties of controllability and observability. They are fundamental to all systems testing but are often overlooked, and overlooking them makes testing much, much harder and more expensive. A strategy of early involvement – thinking about testing from the beginning and ensuring these conditions are satisfied – will make the project more successful, more likely to be on time and within budget, as well as giving testers a more rewarding and productive role.
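A minimal sketch of what the two properties mean at code level – the example and names are invented for illustration. A routine that reads the system clock itself cannot be triggered under all date conditions (no controllability); passing the date in, and returning the result to the caller, restores both properties:

' Hard to test: the date is fixed by the system clock, so a tester
' cannot trigger the month-start condition on demand.
Function IsRenewalDueToday()
    IsRenewalDueToday = (Day(Now) = 1)   ' renewals fall due on the 1st
End Function

' Testable: the caller supplies the date (controllability) and the
' result comes back where a test can inspect it (observability).
Function IsRenewalDue(onDate)
    IsRenewalDue = (Day(onDate) = 1)
End Function

' A test can now exercise both outcomes deterministically.
WScript.Echo IsRenewalDue(DateSerial(2011, 10, 1))   ' True
WScript.Echo IsRenewalDue(DateSerial(2011, 10, 2))   ' False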
TEST Digest
A vendor perspective of current software testing processes
Welcome

Welcome to the TEST Digest. The Digest is a round-up of comment and thought leadership from the testing vendor community. What is especially interesting about this group of companies is their closeness to their customers. Perhaps all vendor groups would like to think they understand and share the concerns of their clients, but in the testing vendor community many of those supplying testing tools and other products and services are actually testers themselves, so clearly they couldn't be closer. And even when this is not the case, the relationships are often symbiotic in nature.

Inside we discuss topics ranging from the perennial favourite, Agile, where Seapine relates how a major international customer has adopted Agile to meet its business needs; automation, with Green Hat looking at the next generation of automated tools; testing mainframe applications, with Micro Focus; to addressing the diversity of testing sectors, with T-Plan. Enjoy!

Matt Bailey, Editor

Contents
29 Adopting Agile to meet business needs
Peter Varhol of Seapine Software explains how the company's TestTrack is helping Fitness First to align development and testing with an Agile approach.
32 Next generation automation
The chance to reduce testing time and deploy personnel to areas where they can add more value is a tempting proposition. Matt Bailey spoke to Green Hat chief technology officer Peter Cole about the next generation of highly productive test automation tools.
34 From cavemen to manual testing – the evolution of a smarter approach
Original Software takes us on a journey back through time to discover why manual testing was, is, and will always be needed.
36 Maintaining mainframe horse-power
By providing a compatible mainframe testing environment away from the mainframe, Micro Focus is revolutionising how mainframe applications are tested and how key IT services are delivered.
38 Addressing the diversity
To satisfy the number and diversity of systems under test, the traditional demarcation lines between toolsets have become blurred. Charlie Wheeler, director of T-Plan, believes that companies now need a variety of tools from different vendors to achieve blanket coverage for their testing activities.
Adopting Agile to meet business needs

Peter Varhol of Seapine Software explains how the company's TestTrack is helping Fitness First to align development and testing with an Agile approach.
Becoming agile in response to the needs of the organisation is easier said than done. With the right plan, and software tools to support it, software project teams can adopt enough Agile practices to respond better to business needs while retaining familiar and proven practices. That's the case with the Agile project teams working for Fitness First.
Fitness First is the largest privately-owned health club group in the world, with over 540 clubs worldwide reaching more than 1.4 million members in 20 countries. While at first glance it may be difficult to see the relationship between pumping iron and building state-of-the-art software, Fitness First has an active set of software project teams that provide essential support to its mission and operations. Much of the development effort centres on its website and web store. This portal, plus the back-end services supporting products and services, keeps the Fitness First software development teams busy and engaged.

Seapine Software's TestTrack enables Agile teams to take user stories and define tasks for individual sprints, while also tracking overall progress and quality on the entire application. Easy-to-use web applications, like Find a Club, give Fitness First a competitive advantage in the fitness industry.

The website and its associated applications are critical components of the business strategy. Fitness First recruits new members heavily from the web, regularly posts special offers, and uses the site to sell fitness-related products. Being agile in web application development is therefore an important part of the development teams' mission.

Fitness First primarily uses Microsoft development tools, including Visual Studio and C#, for the majority of projects. In addition, the teams make heavy use of Seapine Software's TestTrack Pro, TestTrack TCM and TestTrack RM to manage development and test artifacts, track requirements, user stories and test cases, and manage defects. "The development group currently works in two teams, each consisting of two developers, four testers, and a business analyst," explained Kevin
Moore, the Scrum Master for the development group. “The exact size and makeup of the teams may vary, depending on the features being implemented. But, whatever the size of the team and nature of the project, the teams leverage automation to achieve speed and agility.” Specifically, the project teams use TestTrack Pro for all defect tracking, user story, and item tracking, and have started using the Agile reporting toolset for burn down and burn up charts that come standard with TestTrack. In addition, the teams use TestTrack TCM to track their test cases, as well as TestTrack RM and TestTrack Pro to track defects, work items, and user stories throughout an Agile sprint.
Adopting an Agile process

The Fitness First project teams have adopted an Agile development methodology to deliver mission-critical features quickly and with significant involvement from user representatives. In particular, Fitness First uses a Scrum process, and has committed to regular two-week sprints, at the end of which they deliver working code with new features. Applications typically require multiple sprints to implement all the defined features.

"The project process starts with the design team," Moore continued. "During any sprint, the design team requests an elicitation meeting with the project team for the upcoming sprint. In these meetings, the design team gathers information from testers and developers before committing any story to the backlog." The meeting is usually attended by at least one developer, one tester and the product owner, in addition to the software designers. At the end, development and testing both provide a high-level estimate of the effort required for each story. This estimate, which includes assumptions about the types of features needed to support a story, allows the product owner to go back to the business with any questions.

Pre-sprint planning takes place on the first day of the sprint, with the teams dedicating half a day to planning. The teams use the planning session to prioritise the stories and to get final commitment from the team before the sprint starts. They use the traditional Agile stationery, including Post-It notes, index cards and Planning Poker cards, to get a picture of the sprint. Once the stories are committed to and prioritised, they are broken down into smaller pieces of deliverable work. Both developers and testers are involved at this stage, and hourly estimates are placed on each of the tasks.

Seapine's TestTrack Pro comes into play once the planning has been completed. Each member of the team picks a story to break down into deliverable work. The stories have already been added to TestTrack Pro, and it is up to the team member to add the tasks to the story, with estimated hours, and to provide a short description of each task. By the end of the planning day the team has estimated the effort for every story, so it has a clean burn down chart for the start of the sprint. From then on, all defects and tasks to be completed are managed in TestTrack Pro. Each team member updates the hours worked against a work item at the end of each day, and closes the item once it is complete. Once all the tasks against a story are complete, the story can be moved into a Done status and the product owner will verify that the feature is complete. At the end of the sprint the team conducts a demo, attended by the entire team plus management and customers.
This process has proven very successful, and having a preview of features allows the business to actually see what it asked for before the features are delivered. "The tester has a very important part to play throughout this process," Moore said. "The project teams rely on the tester in the elicitation meetings and the planning meetings to give clear insight into what is involved with testing. All the estimates include developer and test time. The tester makes sure the team can deliver what was promised, to a fully tested and shippable standard."

Every tester has the same knowledge of the system, and can move between its different areas. That ability, coupled with the insight gained into each story at the elicitation and planning sessions, gives the teams an enormous advantage, because everyone knows what everyone else is doing. The project teams also have a wealth of automated testing processes, which work well with the continuous integration methods used.
Aligning people and tools

The main result that Fitness First has achieved in adopting an Agile process is better aligning development and testing to work toward a common goal. This enables the teams to commit to, and deliver, fully tested features at the end of every sprint. The story estimations include developer and testing time, and no commitment is made to any story unless the whole team understands it. The teams also have a strong definition of when a feature is completed, requiring that a fixed set of criteria has been met, including unit tests, automated functional tests and regression tests.

This approach to supporting the business could not have been accomplished without a strong set of application lifecycle management tools for documenting stories, tasks, defects and results. Seapine's solution was a natural fit to manage the Agile methodology, while also collecting all development assets in a single place for all team members. In addition, TestTrack made it possible to set the criteria for shipping and to determine what had to be accomplished to meet them. The result was the alignment of development and testing toward the goal of quickly delivering high-quality software to the business. Being agile is all about teamwork, and ensuring that all team members have the information they need to do their jobs.
Peter Varhol Solutions evangelist Seapine Software www.seapine.com
Next generation automation The chance to reduce testing time and deploy personnel to areas where they can add more value is a tempting proposition. Matt Bailey spoke to Green Hat chief technology officer Peter Cole about the next generation of highly productive test automation tools.
Green Hat has been providing testing tools for the last ten years. It aims to make automated testing simple for complex systems relying on cloud, web services, messaging, SOA (Service Oriented Architecture), ESB (Enterprise Service Bus), BPM (Business Process Management), CEP (Complex Event Processing), SAP and other distributed computing technologies. "We enable customers to get these into production faster by automating difficult testing scenarios," says Peter Cole, the company's chief technology officer. "Quite often in these processes there aren't any actual humans involved – it's about computers talking to computers, while the people are doing the exceptions."
GH Tester

Green Hat's automation offering takes two forms. First, there is a testing tool called GH Tester, which provides a user interface for systems that don't have one – regardless of the underlying system technology. "With GH Tester, the tester doesn't need strong technical domain knowledge in order to do their job," says Cole, "and if there is no UI, the tester is otherwise stuck with basic tools like text editors, or even the 'Mk 1 human eyeball', which can make things very difficult for the usual tools and time-consuming for the tester. In one case we worked on, the manual testing effort took three weeks, but once it was automated using our tool it took ten minutes! When regression testing can be done overnight or in minutes, then development lifecycles can be as long or as short as it takes to deliver meaningful functionality for the business."

GH Tester is a modular test automation suite specifically designed to address the challenges of testing distributed and/or complex systems. Often these systems do not have user interfaces, but where they do, Green Hat has a UI testing module or can integrate with the customer's existing products. "It is part of the new generation of test automation tools where the tester doesn't need to write code," says Cole. "The last generation of test automation products required users to write code. GH Tester utilises a series of configurable test steps that can be used to model any test script and that correspond more closely to the actions a tester would take.

"There is no coding involved. As there is no coding, emphasis is placed on wizards to help with operations such as test creation from schemas and message exchange patterns, as well as repairing damaged tests. At all times the objective is to remove manual, repetitive, time-consuming actions and allow the tester to concentrate on value-add activities."
Virtual Integration Environment

Virtualisation as a concept offers a tantalising opportunity to reduce costs in the IT sector, and Green Hat's second automation offering applies many of its benefits to the testing sphere. "If we want to run end-to-end testing, we have to duplicate all of the systems the software is running on," explains Cole. "Creating these test environments can be very expensive. One of our banking customers estimated they were going to spend almost $2 million on building testing environments, and that it was going to take them 16 weeks to do it. What we're able to do is build a model of what the inputs and outputs of that system look like and use that model to create a simulation, or virtual application. This is our Virtual Integration Environment, or VIE." GH VIE allows you to create understandable, virtualised applications without relying on teams of developers for coding.

Testing of integration projects has always been complicated by multiple dependencies: deliverables from different teams, systems owned and managed by third parties, and systems that are simply not available in the test environment, whether due to expense or complexity. The historic approach to these complications has been the creation of 'stubs' simulating the part of the system that is not available for testing. Stubbing has traditionally been undertaken by developers and requires coding knowledge and specialist skills. This has often resulted in sparsely documented and poorly understood testing environments that rely on key individuals, making continuous integration testing unachievable or prohibitively expensive to maintain. "GH VIE changes all of this," says Cole. "Testers can now take control of their own destiny without needing any programming language knowledge, creating virtualised applications in a commonly understood form that can be readily shared and easily updated as underlying systems and data change. And they can control this testing environment from a web browser, without having to modify the application that you are testing."
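The stub idea itself is simple, which is also why hand-coded stubs proliferate and decay. A minimal sketch of the concept (the service and message names are invented for illustration; this is the hand-built approach that virtualisation tools replace with recorded, shareable models of a system's inputs and outputs):

' A hand-rolled stub: canned responses standing in for a back-end
' system that is not available in the test environment.
Dim canned
Set canned = CreateObject("Scripting.Dictionary")
canned.Add "getBalance:12345", "<balance currency='GBP'>150.00</balance>"
canned.Add "getStatus:12345", "<status>ACTIVE</status>"

Function StubbedCall(request)
    If canned.Exists(request) Then
        StubbedCall = canned(request)
    Else
        StubbedCall = "<fault>no canned response for '" & request & "'</fault>"
    End If
End Function

' The system under test is pointed at the stub instead of the real service.
WScript.Echo StubbedCall("getBalance:12345")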
Away from open source and into the cloud

Looking at testing industry trends, Peter Cole sees a loss of confidence in open source solutions. "As a vendor, one of the things we are starting to see is a second wave of customers who are migrating from the open source tools that they are currently making do with, because they are finding that these tools are not as productive as they had expected them to be," he says. "Test case maintenance in open source testing tools is painful and time-consuming. Our clients soon let us know if test case maintenance is painful using our tools!"

While the cloud is often touted as a panacea, private cloud options have obvious benefits for testing. "One of the most exciting things that we are helping our clients to do is to use private cloud for testing," confirms Cole. "We can demonstrate how you can schedule an entire performance testing environment to appear on hardware which is under-utilised at, say, 2am, and execute your performance test then. The resources for the test will be allocated dynamically, scaling out with additional machines being spun up in the private cloud as needed. Reports from the test can be automatically emailed to interested parties before the servers have shut down, and in the morning it's almost as if nothing has happened."
Peter Cole Chief technology officer Green Hat www.greenhat.com
From cavemen to manual testing – the evolution of a smarter approach

Original Software takes us on a journey back through time to discover why manual testing was, is, and will always be needed.
Man developed the ability to make fire, and just as quickly as a bush fire spreads, man developed many more tools for his survival. With every development, a test was also created. This process is the very fibre of life.

Through the evolutionary process we develop and adapt to changing environments, evolving our minds and creating new technologies. By looking back and understanding evolution, it is easy to see how quickly we have become skilled at moving forward. Technology has rapidly evolved to meet our demands, and we have continued to learn from our mistakes along the way. As we move forward we often wonder, "How on earth did we cope in the past?" What was once cutting-edge technology has become as primitive as striking a stone with a flint to make fire.

Software may not have been around in 10,000 BC; manual testing, however, was around from the start. "If I club that cave-woman on the head, will she go out with me?" It took some time for man to realise that a bunch of flowers would have been better. We test manually today as much as we did back then; the difference is that we have moved on from fire to fibre optics, from the club to clubbing, and from ad hoc tests to testing as a profession. The speed at which our software and technology evolves does not change the fact that manual testing will always be a fundamental part of the process. No matter how complicated the software becomes, manual testing is here to stay as long as man is the one pushing the buttons. So let's take a journey back in time to learn more about the evolution of software testing and discover why manual testing will always be needed.
The dawn of testing

Our hairy-knuckled ancestors were the first 'testers', and their ability to develop and test early innovations paved the way for new technological horizons. Caveman: "I've made something I call fire!" Cave-woman: "Yes, and you also made me a nag!"

Over the past 70 years, software has transformed the way we live: from morning till dusk, technology is there helping us to communicate, trade, record, learn and create. We could not have advanced this far in 70 years had we not applied the manual testing processes used by our primitive ancestors at every development stage of an application. Manual software testing can be traced back to the first software bug in 1947. At the time, testing carried the 'debugging' label rather than being identified as the skilled process it is today. The separation of debugging from testing was introduced thirty-two years later, by Glenford Myers in 1979, with a focus on breakage testing. Eventually the software engineering community developed classifications for each stage of the software testing process, and the software testing lifecycle was established. Until the 1980s the term 'software tester' was not in general use.

Today's software testing community, while only having been around for the last thirty years, has done well to keep up with the speed of new developments and technology, but it's getting tough! So many new testing methodologies have now come into force that it has become difficult to distinguish which is the most effective. This has created discontent within QA teams, and in many cases QA is seen as a development bottleneck. Meeting the demands of the business has meant that software testers have had to rethink the way they perform their tests. Pressure to adapt in a fast-changing environment has become the driving force for improving the manual testing process, and while some software testers embrace change, others lack confidence, fearing that the basic principles of testing will be challenged and the quality of the test will be compromised.
The manual testing dark ages

While technology has charged ahead, manual testing has stood still, and it is no surprise that testers are now struggling to keep up. Software test automation was heralded as the new way, promising to help manual testers perform their tests in an automated fashion; but in reality only five to 20 per cent of testing has been automated (Original Software AQM Survey 2010). This is mainly because only the tests for stable applications, which don't change, can be automated. Not exactly the saviour from the testing bottleneck that we had hoped for! While manual testing has a number of advantages – after all, a machine can't do everything – its limitations are apparent. How does one recreate an issue? How does the tester document the tests carried out? What about exploratory testing? The limitations are endless, but we cannot ignore the fact that there is no complete substitute for manual testing. So how can manual testing be improved, without being eliminated and without automation being promoted as the only approach? A dynamic manual testing solution is the evolution that testers have been crying out for!
The modern age of manual testing

Manual testing need not be stuck in the dark ages. The ripples of past bad automation splashes have slowly dispersed, and a new, dynamic way of manual testing has taken the testing community a radical step forward. With the 21st century in mind, Original Software pioneered a new approach to the manual testing effort. 'Dynamic manual testing' was a huge evolutionary leap, offering a significant move forward for manual testers. Software testers are gladly laying down their fire stick and flint for a new glimmer of hope that is as easy as the flick of a light switch.

Manual testers are now able to capture everything they do. Every mouse click and every submission made during a test is recorded and documented as a script, and this can then feed the next testing cycle. It doesn't matter whether the tester is performing exploratory, functional, unit or any other tests. Valuable reports are also created, and used for training and business process documentation. With powerful tracking functions that operate in a non-intrusive and natural fashion, testers can detect and mark up defects quickly and effectively. Every user action is tracked, both at the visual layer and at the database layer. The captured information can be used for auditing, and test results can be reported in black and white automatically and without recreation. Developers can correct defects faster and more efficiently, and the resulting assets can be used to build automated tests.

What is this evolutionary approach? It is the approach provided by TestDrive-Assist, a manual testing solution launched over four years ago to make the life of a tester easier. They say that imitation is the greatest form of flattery, and we were slightly amused when the giant HP started paying attention to its more innovative and visionary competitor in an attempt to play catch-up. With HP plugging a 'copy cat' manual testing tool, we knew that this was a market that was largely untapped. So if you would like to learn more about TestDrive-Assist, or find out how it differs from HP Sprinter, please feel free to get in touch via our website, LinkedIn group or Twitter. www.origsoft.com/products/testdrive-assist LinkedIn: http://www.linkedin.com/groups/Application-QualityManagement-3185646 Twitter: @origsoft
Maintaining mainframe horse-power

By providing a compatible mainframe testing environment away from the mainframe, Micro Focus is revolutionising how mainframe applications are tested and how key IT services are delivered. The company's Eddie Houghton reports.
For many organisations the natural home for their critical and pervasive applications is the mainframe. After all, mainframes are optimised for scalable, resilient production performance. So when banking systems need to keep the financial enterprise running, trading systems need to process commodity transactions quickly, billing and invoicing applications need to compute overnight without fail, and insurance quotations need to reach clients fast – it's usually a mainframe providing the horse-power. Ensuring these systems deliver trusted performance year after year, and continue to support the need to reduce cost and accelerate time to market, is a continual challenge.
Background and business context

As mainframe applications are used to supply services to customers and deliver vital competitive advantage, they are frequently updated and enhanced to meet changing business requirements. And because the applications are critical to the enterprise, it is essential to test the updates thoroughly before releasing them into production. Testing mainframe applications is often a time-critical activity, as key updates have to meet aggressive delivery timescales. Today's IT organisation faces increased demand for better service from all internal providers, and mainframe application service delivery is no exception. As a result, even essential pre-production test phases are being viewed as potential bottlenecks in the release process.

FIG.1 THE MAINFRAME ENVIRONMENT AS A BOTTLENECK FOR SERVICE DELIVERY

Looking more closely at the mainframe testing cycle bottlenecks puts the spotlight on supporting technology and incumbent skills, as well as on the process itself. Pre-production testing can only be scheduled according to available capacity (measured in mainframe MIPS). It follows that inadequate mainframe capacity for testing can compromise the delivery of updates that meet both the functional and time demands of the business. QA directors and service delivery managers are tied to the existing capacity the mainframe can provide, and are typically unable to improve delivery throughput in this environment. With expanded pre-production test cycles you can deliver more functionality to the business, with a proportional increase in quality, in less time – but this can require significant additional investment in mainframe capacity to accommodate the expanded test phases. Even then, many mainframe environments will prioritise production needs over test, so there is no guarantee that the additional capacity is ring-fenced, and testing MIPS may be lost to a critical production run. To make application change and service delivery meet business demands, you need the flexibility to increase capacity easily, quickly and reliably.
Changing the game for mainframe application testing and delivery

With processes and schedules for mainframe application delivery handcuffed to the mainframe environment and its capacity constraints, delivery managers and CIOs need a game changer. The intelligent approach is to move the testing environment off the mainframe and onto lower-cost commodity hardware – ensuring that the applications behave the same on Windows, for example, as they do on the mainframe. That is the premise behind Micro Focus Studio Enterprise Edition Test Server (Test Server), a mainframe application execution environment on Windows. Because the work takes place on a Windows server rather than the expensive and resource-constrained mainframe, the mainframe testing bottleneck is eliminated and IT application service delivery dramatically accelerated. Exploiting a low-cost commodity platform provides highly flexible test capacity, so testing can scale up to meet the delivery expectations set by the business. Providing the execution environment for testing mainframe applications off the mainframe breaks the vicious cycle of resource dependency and frees teams to set their own testing and delivery schedules (Figure 2). What's more, developers, test teams, quality assurance engineers, end users and non-mainframe programmers (Java or .NET programmers, for example) developing composite applications that use mainframe resources can access the applications to conduct their testing while consuming little or no mainframe processing power.
Too good to be true?

Transforming mainframe application delivery has been a long-running objective for Micro Focus. The company has provided mainframe compatibility and tooling for nearly 30 years, including mainframe equivalence for CICS, VSAM, IMS, DB2, JCL, Assembler and COBOL applications. Micro Focus has worked with thousands of mainframe clients and core systems. Test Server is part of the latest release of its mainframe-centric technology, which also provides state-of-the-art mainframe development tooling as well as full rehost deployment (or 'migration') technology. Importantly, and acknowledging that no mainframe environment is 'straightforward', Micro Focus gives clients the ability to approach mainframe testing flexibly. For example, data can remain on the mainframe or be brought down to the server; alternatively, unchanged code can remain on the mainframe, so that only the changed elements that need testing are focused on in isolation.
what IT and business benefits could you expect? The benefits of this approach fall into three main areas:
FIG.2: EFFICIENT SERVICE DELIVERY USING TEST SERVER
FIG.3: PRE-PRODUCTION TESTING FOCUS
capacity/time to market: QA and delivery teams can complete testing phases faster and with higher quality, as test cycles are not constrained by scarce mainframe processing power. Test capacity can be scaled up immediately to meet fresh business demands, and the test environment being on Windows opens up access to other stakeholders, including business users and front-end (Java) developers.
cost containment/reduction: Increasing test capacity on a low-cost commodity platform avoids the need for substantial investment in new mainframe MIPS. In fact, organisations may even be able to reduce mainframe MIPS consumption as more testing is performed on Test Server.
quality: Of course, none of these benefits would count for anything if the quality of delivered applications were put at risk. In fact quality improves, as teams are able to identify issues earlier in the development cycle and reduce costly rework. With more testing achievable in shorter timeframes, increased testing raises quality. Additionally, many organisations will be able, for the first time, to deliver genuine end-to-end testing of composite COBOL and Java applications using a single environment, again improving overall quality.
A MIPS management sweet spot? As IT organisations look to save cost and provide better value of service, they are compelled to look at all aspects of their operations. Managing and reducing expensive OpEx items is a natural point of scrutiny, and this inevitably includes mainframe MIPS costs. As Figure 3 illustrates, pre-production testing is very much in the firing line in terms of suitability for efficiency improvements, and with the advent of Test Server very real savings and operational service delivery improvements are achievable. By providing a compatible mainframe testing environment away from the mainframe, Micro Focus is changing the game – revolutionising how mainframe applications are tested and how key IT services are delivered.
Eddie Houghton, Director of Product Management – Enterprise Solutions
Micro Focus www.microfocus.com
Addressing the diversity
To satisfy the number and diversity of systems under test, the traditional demarcation lines between toolsets have become blurred. Charlie Wheeler, director of T-Plan, believes that companies now need a variety of tools from different vendors to achieve blanket coverage for their testing activities.
Founded in 1990, T-Plan developed a test management tool which ensured that applications of critical importance to the UK and global economy were developed, tested and launched successfully. The company has worked closely with some of the most prestigious institutions in the UK.
T-Plan Robot Enterprise is the most flexible and universal black box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail. It runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix and Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE and Symbian. Because automation runs at the GUI level, via the use of VNC, the tool can automate virtually any application.
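T-Plan's own scripting interface is not reproduced here; purely as an illustration of the image-based, GUI-level approach described above, the following plain-Java sketch uses only the JDK's java.awt.Robot to capture the screen, locate a reference image, and click it. The image file name and the naive exact-match search are assumptions for illustration; commercial tools add tolerant matching, OCR and VNC transport on top of this idea.

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ImageClickDemo {

    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();

        // Capture the whole screen, much as a VNC-based tool captures
        // the remote desktop.
        BufferedImage screen = robot.createScreenCapture(
                new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));

        // Reference image of the button to press (file name is
        // illustrative only).
        BufferedImage target = ImageIO.read(new File("ok-button.png"));

        // Naive exact-match search; real tools use tolerant matching.
        for (int x = 0; x <= screen.getWidth() - target.getWidth(); x++) {
            for (int y = 0; y <= screen.getHeight() - target.getHeight(); y++) {
                if (matches(screen, target, x, y)) {
                    robot.mouseMove(x + target.getWidth() / 2,
                                    y + target.getHeight() / 2);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                    return;
                }
            }
        }
        System.out.println("Button not found on screen");
    }

    private static boolean matches(BufferedImage screen, BufferedImage target,
                                   int ox, int oy) {
        for (int x = 0; x < target.getWidth(); x++) {
            for (int y = 0; y < target.getHeight(); y++) {
                if (screen.getRGB(ox + x, oy + y) != target.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}

Because the automation sees only pixels and synthesises only mouse and keyboard events, it is indifferent to the technology behind the screen – which is exactly why this style of tool can drive applications that object-level tools cannot reach.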
Challenges
The market is currently presenting some interesting challenges for software testing. With minds set on budgets, and on obtaining the same quality of testing for a smaller spend, we are seeing a shift away from the historic main conglomerate testing providers,
towards more focused independent companies like ourselves. For our test management tool this has been great news, as our tool is very similar to the major players' but is strategically more cost-effective. For our test automation software we are also seeing a trend towards maximising the automation potential of a collection of tools and techniques, rather than a silver bullet solution from one large provider. When a company, like ourselves, does only testing, it can be very nimble in bringing solutions to market quickly where new technology or technical implementations require a customised solution.
Opportunities
The opportunities in this space right now are tremendous and, with the plethora of tools and services being provided at the moment, it is an exciting industry to be part of. With this sentiment in mind, it is my belief that we will continue to see increased collaboration and communication between different companies, and therefore technologies, when testing projects are being carried out for customers. We have heard for some time now about grey box testing (essentially a combination of black box and white box testing), and it is my belief that we
shall see an explosion of these sorts of testing practices being deployed more and more in test environments. For some this will mean using a combination of different toolsets, whereas for others we will see developers and testers alike working to get a coherent test structure, utilising a combination of different techniques, data sets and, of course, tools. For example, take the case of a company that has to produce a product that must now function not just in a Windows environment, but in mobile and Linux ones also. Users expect a unique but consistent experience across the different technologies, but above all the present-day consumer expects quality and security. We have become failure-intolerant when using technologies that are no longer emerging and have become intertwined with everyday life. What this means for the software testing industry is the procurement of a selection of tools to test systems across a wide range of different environments. Historically we have been able to organise the testing into fairly neat packages, as the tests conducted on the different environments all used similar hardware and networks – for example, tests carried out against different versions of Windows such as XP, Vista and 7. However, in the
mobile and tablet world, not only do the hardware and networks need to change, but also the nature of the tests being deployed, as you are now in a world of widely different operating systems. Last but not least, add Linux, Android and Chrome into the mix alongside the traditional desktop testing, and you have a whole range of tools that have to act in harmony.
Automated future
Coming back to the grey box testing ethos, it means that in the future we can expect test automation tools to have a firm foothold in the development of the systems under test, and the data deployed end to end within them. Test automation tools in this rapidly changing landscape need to be positioned at the black box or user level, as after all we need to make sure that the end user is getting the same unique and consistent experience, whatever the system or device they are running. What I mean by unique is that an Apple user expects a Mac look and feel to their application, while a Windows user expects a Windows look and feel to their application, ie a unique experience, but consistent functionality across the different platforms. The positioning of these tools at the black box level is also important, as the
traditional object-oriented automation tools offered by the large providers cannot keep pace with the wide range of technologies. The rapid development interaction required to make sure that objects are embedded at the correct code level, to ensure workable automation solutions, is no longer the intelligent approach in a world of many different devices and platforms. Of course, in these situations, environment-agnostic tools built on platforms like Java, which therefore function across multiple different systems, are ideally placed to drive the test automation. Indeed, our test automation tool T-Plan Robot, with its open architecture, can readily achieve this level of synergy by controlling multiple toolsets and interacting with data feeds. One can truly say that image-based testing tools as a whole have become more and more integrated into the full test process and the system development lifecycle. In order to satisfy the number and diversity of systems under test, the traditional demarcation lines between toolsets have become blurred. It is my belief that a company can no longer consider just one suite of tools from a traditional vendor, but rather a variety of tools from different vendors to achieve blanket coverage for their testing activities.
Charlie Wheeler Director T-Plan www.t-plan.com
Subscribe to TEST free!
For exclusive news, features, opinion, comment, directory, digital archive and much more visit www.testmagazine.co.uk
Published by 31 Media Ltd. Telephone: +44 (0) 870 863 6930. Facsimile: +44 (0) 870 085 8837. Email: info@31media.co.uk. Website: www.31media.co.uk
Facilita
Load testing solutions that deliver results
Facilita has created the Forecast™ product suite, which is used across multiple business sectors to performance-test applications, websites and IT infrastructures of all sizes and complexity. With this class-leading testing software and unbeatable support and services, Facilita will help you ensure that your IT systems are reliable, scalable and tuned for optimal performance.
Forecast, the thinking tester's power tool
A sound investment: A good load testing tool is one of the most important IT investments that an organisation can make. The risks and costs associated with inadequate testing are enormous. Load testing is challenging and, without good tools and support, will consume expensive resources and waste a great deal of effort. Forecast has been created to meet the challenges of load testing, now and in the future. The core of the product is tried and trusted and incorporates more than a decade of experience, but is designed to evolve in step with advancing technology.
Realistic load testing: Forecast tests the reliability, performance and scalability of IT systems by realistically simulating from one to many thousands of users executing a mix of business processes using individually configurable data.
Comprehensive technology support: Forecast provides one of the widest ranges of protocol support of any load testing tool.
1. Forecast Web thoroughly tests web-based applications and web services, identifies system bottlenecks, improves application quality and optimises network and server infrastructures. Forecast Web supports a comprehensive and growing list of protocols, standards and data formats including HTTP/HTTPS, SOAP, XML, JSON and Ajax.
2. Forecast Java is a powerful and technically advanced solution for load testing Java applications. It targets any non-GUI client-side Java API, with support for all Java remoting technologies including RMI, IIOP, CORBA and Web Services.
3. Forecast Citrix simulates multiple Citrix clients and validates the Citrix environment for scalability and reliability, in addition to the performance of the hosted applications. This non-intrusive approach provides very accurate client performance measurements, unlike server-based solutions.
4. Forecast .NET simulates multiple concurrent users of applications with client-side .NET technology.
5. Forecast WinDriver is a unique solution for performance testing Windows applications that are impossible or uneconomic to test using other methods, or where user experience timings are required. WinDriver automates the client user interface and can control from one to many hundreds of concurrent client instances or desktops.
6. Forecast can also target less mainstream technology, such as proprietary messaging protocols and systems using the OSI protocol stack.
Powerful yet easy to use: Skilled testers love using Forecast because of the power and flexibility that it provides. Creating working tests is made easy with Forecast's script recording and generation features and the ability to compose complex test scenarios rapidly with a few mouse clicks. The powerful functionality of Forecast ensures that even the most challenging applications can be fully tested.
Supports Waterfall and Agile (and everything in between): Forecast has the features demanded by QA teams, like automatic test script creation, test data management, real-time monitoring and comprehensive charting and reporting. Forecast is successfully deployed in Agile 'Test Driven Development' (TDD) environments and integrates with automated test (continuous build) infrastructures. The functionality of Forecast is fully programmable and test scripts are written in standard languages (Java, C#, C++ etc). Forecast provides the flexibility of open source alternatives along with comprehensive technical support and the features of a high-end enterprise commercial tool.
Flexible licensing: Geographical freedom allows licenses to be moved within an organisation without additional costs. Temporary high-concurrency licenses for 'spike' testing are available with a sensible pricing model. Licenses can be rented for short-term projects with a 'stop the clock' agreement, or purchased for perpetual use. Our philosophy is to provide value and to avoid hidden costs. For example, server monitoring and the analysis of server metrics are not separately chargeable items, and a license for Web testing includes all supported Web protocols.
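As noted above, Forecast test scripts are written in standard languages such as Java. Forecast's own APIs are not shown in this profile, so the sketch below is a deliberately minimal, generic illustration of what any programmable load script boils down to: concurrent virtual users, each timing a business step against a target URL. The URL, user count and iteration count are hypothetical.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Not Forecast's API: a bare-bones illustration of a programmable load
// script - many virtual users, each timing a business step.
public class MiniLoadTest {

    private static final String TARGET = "http://test.example.com/login"; // hypothetical
    private static final int USERS = 50;
    private static final int ITERATIONS = 20;

    public static void main(String[] args) throws InterruptedException {
        Thread[] users = new Thread[USERS];
        for (int i = 0; i < USERS; i++) {
            users[i] = new Thread(new VirtualUser(i));
            users[i].start();
        }
        for (Thread t : users) t.join();
    }

    static class VirtualUser implements Runnable {
        private final int id;
        VirtualUser(int id) { this.id = id; }

        public void run() {
            for (int i = 0; i < ITERATIONS; i++) {
                long start = System.currentTimeMillis();
                try {
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL(TARGET).openConnection();
                    InputStream in = conn.getInputStream();
                    while (in.read() != -1) { /* drain the response */ }
                    in.close();
                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println("user " + id + " iteration " + i
                            + ": " + elapsed + " ms");
                } catch (Exception e) {
                    System.out.println("user " + id + " failed: " + e);
                }
            }
        }
    }
}

A commercial tool adds what this sketch lacks: realistic pacing and think time, per-user test data, protocol depth, and the aggregation of thousands of such timings into charts and reports.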
Services In addition to comprehensive support and training, Facilita offers mentoring where an experienced Facilita consultant will work closely with the test team either to ‘jump start’ a project or to cultivate advanced testing techniques. Even with Forecast’s outstanding script automation features, scripting is challenging for some applications. Facilita offers a direct scripting service to help clients overcome this problem. We can advise on all aspects of performance testing and carry out testing either by providing expert consultants or fully managed testing services.
Facilita Tel: +44 (0) 1260 298109 Email: enquiries@facilita.co.uk Web: www.facilita.com
Parasoft
Improving productivity by delivering quality as a continuous process
For over 20 years Parasoft has been studying how to efficiently create quality computer code. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Oriented Architectures (SOA), evolving legacy systems, or improving quality processes – draw on our expertise and award-winning products to increase productivity and the quality of your business applications.
Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components – as well as reduce the complexity of testing in today's distributed, heterogeneous environments.
What we do
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.
End-to-end testing: Continuously validate all critical aspects of complex transactions which may extend through web interfaces, backend services, ESBs, databases, and everything in between.
Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly-dynamic browser-based applications.
Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).
Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.
Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.
Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.
Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.
Policy enforcement: Provide governance and policy-validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers. Please contact us to arrange either a one-to-one briefing session or a free evaluation.
Application behavior virtualisation: Automatically emulate the behavior of services, then deploy them across multiple environments – streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data (a generic sketch of the idea follows this feature list).
Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.
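As promised above, here is a generic sketch of service emulation, the idea behind behavior virtualisation; it is not Parasoft's product API. A canned HTTP response stands in for a backend service so that end-to-end tests can run when the real system is unavailable. It uses only the JDK's built-in com.sun.net.httpserver; the endpoint, port and payload are invented for illustration.

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class StubQuoteService {

    public static void main(String[] args) throws IOException {
        // A hypothetical pricing service is emulated on localhost:8088 so
        // that end-to-end tests can run while the real backend is absent.
        HttpServer server = HttpServer.create(new InetSocketAddress(8088), 0);
        server.createContext("/quote", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                String response =
                        "<quote><symbol>ABC</symbol><price>101.25</price></quote>";
                byte[] bytes = response.getBytes("UTF-8");
                exchange.getResponseHeaders().set("Content-Type", "text/xml");
                exchange.sendResponseHeaders(200, bytes.length);
                OutputStream body = exchange.getResponseBody();
                body.write(bytes);
                body.close();
            }
        });
        server.start();
        System.out.println("Stub quote service listening on port 8088");
    }
}

Commercial virtualisation tools go further, recording real traffic and replaying parameterised, stateful responses, but the principle is the same: the system under test talks to a stand-in it cannot distinguish from the real dependency.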
Parasoft
Tel: +44 (0) 208 263 6005 Email: sales@parasoft-uk.com Web: www.parasoft.com
Seapine Software™
With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. Built on flexible architectures using open standards, Seapine Software's cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.
TestTrack RM TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine’s integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.
TestTrack Pro TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.
TestTrack TCM TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.
QA Wizard Pro
QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies like C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.
Surround SCM Surround SCM, Seapine’s cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.
www.seapine.com
United Kingdom, Ireland, and Benelux: Seapine Software Ltd. Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA UK. Phone: +44 (0) 208 899 6775. Email: salesuk@seapine.com
Americas (Corporate Headquarters): Seapine Software, Inc. 5412 Courseview Drive, Suite 200, Mason, Ohio 45040 USA. Phone: 513-754-1655
Micro Focus
Deliver better software, faster
Software quality that matches requirements and testing to business needs. Making sure that business software delivers precisely what is needed, when it is needed is central to business success. Getting it right first time hinges on properly defined and managed requirements, the right testing and managing change. Get these right and you can expect significant returns: Costs are reduced, productivity increases, time to market is greatly improved and customer satisfaction soars. The Borland software quality solutions from Micro Focus help software development organizations develop and deliver better applications through closer alignment to business, improved quality and faster, stronger delivery processes – independent of language or platform. Combining Requirements Definition and Management, Testing and Software Change Management tools, Micro Focus offers an integrated software quality approach that is positioned in the leadership quadrant of Gartner Inc's Magic Quadrant. The Borland Solutions from Micro Focus are both platform and language agnostic – so whatever your preferred development environment you can benefit from world-class tools to define and manage requirements, test your applications early in the lifecycle, and manage software configuration and change.
Requirements
Defining and managing requirements is the bedrock for application development and enhancement. Micro Focus uniquely combines requirements definition, visualization, and management into a single '3-Dimensional' solution, giving managers, analysts and developers precise detail for engineering their software. By cutting ambiguity, the direction of development and QA teams is clear, strengthening business outcomes. For one company this delivered an ROI of 6-8 months, a 20% increase in project success rates, a 30% increase in productivity and a 25% increase in asset re-use. Using Micro Focus tools to define and manage requirements helps your teams:
• Collaborate, using pictures to build mindshare, drive a common vision and share responsibility with role-based review and simulations.
• Reduce waste by finding and removing errors earlier in the lifecycle, eliminating ambiguity and streamlining communication.
• Improve quality by taking the business need into account when defining the test plan.
Caliber® is an enterprise software requirements
definition and management suite that facilitates collaboration, impact analysis and communication, enabling software teams to deliver key project milestones with greater speed and accuracy.
Software Change Management StarTeam® is a fully integrated, cost-effective software change and configuration management tool. Designed for both centralized and geographically distributed software development environments, it delivers: • A single source of key information for distributed teams • Streamlined collaboration through a unified view of code and change requests • Industry leading scalability combined with low total cost of ownership
Testing Automating the entire quality process, from inception through to software delivery, ensures that tests are planned early and synchronize with business goals even as requirements and realities change. Leaving quality assurance to the end of the lifecycle is expensive and wastes improvement opportunities. Micro Focus delivers a better approach: Highly automated quality tooling built around visual interfaces and reusability. Tests can be run frequently, earlier in the development lifecycle to catch and eliminate defects rapidly. From functional testing to cloud-based performance testing, Micro Focus tools help you spot and correct defects rapidly across the application portfolio, even for Web 2.0 applications. Micro Focus testing solutions help you: • Align testing with a clear, shared understanding of business goals focusing test resources where they deliver most value • Increase control through greater visibility over all quality activities • Improve productivity by catching and driving out defects faster Silk is a comprehensive automated software quality management solution suite which enables users to rapidly create test automation, ensuring continuous validation of quality throughout the development lifecycle. Users can move away from manual-testing dominated software lifecycles, to ones where automated tests continually test software for quality and improve time to market.
Take testing to the cloud
Users can test and diagnose Internet-facing applications under immense global peak loads on the cloud without having to manage complex infrastructures. Among other benefits, SilkPerformer® CloudBurst gives development and quality teams:
• Simulation of peak demand loads through onsite and cloud-based resources for scalable, powerful and cost-effective peak load testing
• Web 2.0 client emulation to test even today's rich internet applications effectively
Micro Focus, a member of the FTSE 250, provides innovative software that enables companies to dramatically improve the business value of their enterprise applications. Micro Focus Enterprise Application Modernization, Testing and Management software enables customers' business applications to respond rapidly to market changes and embrace modern architectures with reduced cost and risk.
For more information, please visit www.microfocus.com/solutions/softwarequality
Original Software
Delivering quality through innovation
With a world-class record of innovation, Original Software offers a solution focused completely on the goal of effective software quality management. By embracing the full spectrum of Application Quality Management (AQM) across a wide range of applications and environments, we partner with customers and help make quality a business imperative. Our solutions include a quality management platform, manual testing, test automation and test data management software, all delivered with the control of business risk, cost, time and resources in mind. Our test automation solution is particularly suited for testing in an agile environment.
Setting new standards for application quality Managers responsible for quality must be able to implement processes and technology that will support their important business objectives in a pragmatic and achievable way, and without negatively impacting current projects. These core needs are what inspired Original Software to innovate and provide practical solutions for Application Quality Management (AQM) and Automated Software Quality (ASQ). We have helped customers achieve real successes by implementing an effective ‘application quality eco-system’ that delivers greater business agility, faster time to market, reduced risk, decreased costs, increased productivity and an early return on investment. Our success has been built on a solution suite that provides a dynamic approach to quality management and automation, empowering all stakeholders in the quality process, as well as uniquely addressing all layers of the application stack. Automation has been achieved without creating a dependency on specialised skills and by minimising ongoing maintenance burdens.
An innovative approach Innovation is in the DNA at Original Software. Our intuitive solution suite directly tackles application quality issues and helps you achieve the ultimate goal of application excellence.
Empowering all stakeholders The design of the solution helps customers build an ‘application quality eco-system’ that extends beyond just the QA team, reaching all the relevant stakeholders within the business. Our technology enables everyone involved in the delivery of IT projects to participate in the quality process – from the business analyst to the business user and from the developer to the tester. Management executives are fully empowered by having instant visibility of projects underway.
Quality that is truly code-free
We have observed the script maintenance and exclusivity problems caused by code-driven automation solutions and have built a solution suite that requires no programming skills. This empowers all users to define and execute their tests without the need to use any kind of code, freeing them from the automation specialist bottleneck. Not only is our technology easy to use, but quality processes are accelerated, allowing for faster delivery of business-critical projects.
Top to bottom quality
Quality needs to be addressed at all layers of the business application. We give you the ability to check every element of an application – from the visual layer, through to the underlying service processes and messages, as well as into the database.
Addressing test data issues Data drives the quality process and as such cannot be ignored. We enable the building and management of a compact test environment from production data quickly and in a data privacy compliant manner, avoiding legal and security risks. We can also manage the state of that data, so that it is synchronised with test scripts, enabling swift recovery and shortening test cycles.
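As a concrete, purely illustrative example of the privacy technique described above (not Original Software's implementation), deterministic masking replaces personal values with repeatable pseudonyms, so a compacted copy of production data stays anonymised while referential integrity across tables is preserved:

import java.security.MessageDigest;

// Illustrative only - not Original Software's product. Deterministic
// masking: the same input always maps to the same masked token, so
// foreign-key relationships in the test data still line up.
public class DataMasker {

    public static String mask(String value) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(value.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder("CUST_");
        for (int i = 0; i < 4; i++) {
            sb.append(String.format("%02x", digest[i] & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mask("Jane Smith")); // same token every run
        System.out.println(mask("Jane Smith")); // identical to the line above
        System.out.println(mask("John Doe"));   // a different token
    }
}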
A holistic approach to quality Our integrated solution suite is uniquely positioned to address all the quality needs of an application, regardless of the development methodology used. Being methodology neutral, we can help in Agile, Waterfall or any other project type. We provide the ability to unite all aspects of the software quality lifecycle. Our solution helps manage the requirements, design, build, test planning and control, test execution, test environment and deployment of business applications from one central point that gives everyone involved a unified view of project status and avoids the release of an application that is not ready for use.
Helping businesses around the world Our innovative approach to solving real pain-points in the Application Quality Life Cycle has been recognised by leading multinational customers and industry analysts alike. In a 2011 report, Ovum stated: “While other companies have diversified, into other test types and sometimes outside testing completely, Original Software has stuck more firmly to a value proposition almost solely around unsolved challenges in functional test automation. It has filled out some yawning gaps and attempted to make test automation more accessible to non-technical testers.” More than 400 organisations operating in over 30 countries use our solutions and we are proud of partnerships with the likes of Coca-Cola, Unilever, HSBC, Barclays Bank, FedEx, Pfizer, DHL, HMV and many others.
www.origsoft.com Email: solutions@origsoft.com Tel: +44 (0)1256 338 666 Fax: +44 (0)1256 338 678 Grove House, Chineham Court, Basingstoke, Hampshire, RG24 8AG
Green Hat
The Green Hat difference
In one software suite, Green Hat automates the validation, visualisation and virtualisation of unit, functional, regression, system, simulation, performance and integration testing, as well as performance monitoring. Green Hat offers code-free and adaptable testing from the User Interface (UI) through to back-end services and databases. Reducing testing time from weeks to minutes, Green Hat customers enjoy rapid payback on their investment. Green Hat's testing suite supports quality assurance across the whole lifecycle, and different development methodologies including Agile and test-driven approaches. Industry vertical solutions using protocols like SWIFT, FIX, IATA or HL7 are all simply handled. Unique pre-built quality policies enable governance, and the re-use of test assets promotes high efficiency. Customers experience value quickly through the high usability of Green Hat's software. Focusing on minimising manual and repetitive activities, Green Hat works with other application lifecycle management (ALM) technologies to provide customers with value-add solutions that slot into their Agile testing, continuous testing, upgrade assurance, governance and policy compliance. Enterprises invested in HP and IBM Rational products can simply extend their test and change management processes to the complex test environments managed by Green Hat and get full integration. Green Hat provides the broadest set of testing capabilities for enterprises with a strategic investment in legacy integration, SOA, BPM, cloud and other component-based environments, reducing the risk and cost associated with defects in processes and applications. The Green Hat difference includes:
• Purpose-built end-to-end integration testing of complex events, business processes and composite applications. Organisations benefit by having UI testing combined with SOA, BPM and cloud testing in one integrated suite.
• Unrivalled insight into the side-effect impacts of changes made to composite applications and processes, enabling a comprehensive approach to testing that eliminates defects early in the lifecycle.
• Virtualisation for missing or incomplete components to enable system testing at all stages of development. Organisations benefit through being unhindered by unavailable systems or costly access to third party systems, licences or hardware. Green Hat pioneered 'stubbing', and organisations benefit by having virtualisation as an integrated function, rather than a separate product.
• 'Out-of-the-box' support for over 70 technologies and platforms, as well as transport protocols for industry vertical solutions. Also provided is an application programming interface (API) for testing custom protocols, and integration with UDDI registries/repositories.
• Helping organisations at an early stage of project or integration deployment to build an appropriate testing methodology as part of a wider SOA project methodology.
• Scaling out these environments, test automations and virtualisations into the cloud, with seamless integration between Green Hat's products and leading cloud providers, freeing you from the constraints of real hardware without the administrative overhead.
• 'Out-of-the-box' deep integration with all major SOA, enterprise service bus (ESB) platforms, BPM runtime environments, governance products, and application lifecycle management (ALM) products.
Corporate overview
Since 1996, Green Hat has constantly delivered innovation in test automation. With offices that span North America, Europe and Asia/Pacific, Green Hat's mission is to simplify the complexity associated with testing, and make processes more efficient. Green Hat delivers the market-leading combined, integrated suite for automated, end-to-end testing of the legacy integration, Service Oriented Architecture (SOA), Business Process Management (BPM) and emerging cloud technologies that run Agile enterprises. Green Hat partners with global technology companies including HP, IBM, Oracle, SAP, Software AG, and TIBCO to deliver unrivalled breadth and depth of platform support for highly integrated test automation. Green Hat also works closely with the horizontal and vertical practices of global system integrators including Accenture, Atos Origin, CapGemini, Cognizant, CSC, Fujitsu, Infosys, Logica, Sapient, Tata Consulting and Wipro, as well as a significant number of regional and country-specific specialists. Strong partner relationships help deliver on customer initiatives, including testing centres of excellence. Supporting the whole development lifecycle and enabling early and continuous testing, Green Hat's unique test automation software increases organisational agility, improves process efficiency, assures quality, lowers costs and mitigates risk.
Helping enterprises globally
Green Hat is proud to have hundreds of global enterprises as customers, and this number does not include the consulting organisations who are party to many of these installations with their own staff or outsourcing arrangements. Green Hat customers enjoy global support and cite outstanding responsiveness to their current and future requirements. Green Hat's customers span industry sectors including financial services, telecommunications, retail, transportation, healthcare, government, and energy.
sales@greenhat.com www.greenhat.com
T-Plan
Since 1990 T-Plan has supplied best-of-breed solutions for testing. The T-Plan method and tools allow both the business unit manager and the IT manager to manage costs, reduce business risk and regulate the process. By providing order, structure and visibility throughout the development lifecycle, from planning to execution, they accelerate the "time to market" for business solutions. The T-Plan Product Suite allows you to manage every aspect of the Testing Process, providing a consistent and structured approach to testing at the project and corporate level.
What we do
Test Management: The T-Plan Professional product is modular in design, clearly differentiating between the Analysis, Design, Management and Monitoring of the Test Assets. It helps answer questions such as:
• What coverage back to requirements has been achieved in our testing so far?
• What requirement successes have we achieved so far?
• Can I prove that the system is really tested?
• If we go live now, what are the associated Business Risks?
Test Automation: Cross-platform (Java) test automation is also integrated into the test suite package via T-Plan Robot, creating a full testing solution. T-Plan Robot Enterprise is the most flexible and universal black box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail.
• Platform independence (Java). T-Plan Robot runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix and Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE and Symbian.
• Test almost ANY system. As automation runs at the GUI level, via the use of VNC, the tool can automate any application, eg Java, C++/C#, .NET, HTML (web/browser), mobile and command line interfaces; also applications usually considered impossible to automate, like Flash/Flex.
Incident Management: Errors or queries found during Test Execution can also be logged and tracked throughout the Testing Lifecycle in the T-Plan Incident Manager. "We wanted an integrated test management process; T-Plan was very flexible and excellent value for money." Francesca Kay, Test Manager, Virgin Mobile
Web: www.t-plan.com
The last word... Dune buggies
Dune buggies and Agile? Dave Whalen says whatever you do, don't look under the bonnet!
When I was a wee lad – somewhere in my early twenties – I decided that I wanted a dune buggy. I did live near the beach in Miami, after all. I really just wanted a cool car to pick up girls. As a young military guy I didn't have lots of cash to buy an already-built dune buggy, or so I thought. I went for the cheap option – build it myself! I perused the car and hot rod magazines in search of a mail-order 'dune buggy kit'. For those of you that may not be familiar with these kits, they were essentially a fibreglass dune buggy body that you were supposed to be able to just "bolt on" to an existing Volkswagen Beetle chassis. Sounds simple, right? I didn't quite think this one out. So I saved up my hard-earned money and bought a 1968, piece-of-garbage Volkswagen Beetle. The body was atrocious, but the chassis and motor seemed to be in pretty good shape and the price was right. So I bought it. I took it apart and scrapped the interior and body. Step 1 accomplished. Next, I ordered my dune buggy kit. It arrived in crates. Since I was living in a dormitory at the time, I had prearranged to have it delivered to a friend's house. We planned to build it together in his garage. We figured, at most, it would be a weekend project... Right! What they didn't tell me in the ad was that I still needed gauges, wiring, lights, and a long list of other things to get it running. Then I had to paint it. To cut a long story short, it took me months and ended up costing a small fortune to finish my dune buggy. I should have bought the used one. We finally got it done after about a year. In the end – it looked awesome. Underneath, of course, it was still a 1968 Volkswagen Beetle. Maintaining it became an expensive nightmare. It pretty much
just sat in the parking lot, looking pretty. Occasionally, I would sit in it and just listen to the 8-track player. It was a girl magnet alright – until they wanted to go somewhere and I couldn't afford to put gas in it. I eventually scrapped it. I often think of my dune buggy when I hear about Agile implementations. Many new Agile adopters say all the correct buzzwords like sprint, or backlog. They even hold daily stand-up meetings. From all outward appearances it looks really nice. Very Agile. Just don't look under the bonnet. If you do, you will most likely find an old piece-of-crap Volkswagen Beetle. I call it the Agile Illusion. It looks great on the surface. Sadly, like my dune buggy project, many managers think converting to Agile is a process that can be accomplished in only a few days and reap immediate rewards. Not so much. As I mentioned in an earlier issue, I am now part of an Agile team, a highly successful one. Actually, the company declared itself 'Agile' a while back. I can honestly say most, if not all, of the other teams are dune buggies. Not us! We're doing it right. Are we always successful? No, not always, but with a dozen or so sprints behind us, we've pretty much got it down. We're not sprucing up an old model, or putting an old model inside a shiny new package. No, we started from the ground up. Everything is new. New processes, new ideas, new technologies, you name it. Look under our bonnet and you will see the real thing – no surprises. Was it easy? No way. We struggled from the start. But it really does work. It's not something you can just take out of a box, or shine up an old one. It takes a lot of work. There are no shortcuts. You're going to fail at times. That's something the Agile pundits will never tell you. But now that we're up and running, we're pretty darned proud of our dune buggy. We love to show it off – and we do – often!
A key result is our flexibility. We can shift on a dime. In fact, since I last discussed it here, we have shifted direction twice as our potential customer base and technology have shifted. When it comes to adopting Agile, many companies experience what I did with my dune buggy. They throw tons of money at it. It often looks really nice. They love to show it off. Just don't try to use it. I have shifted from Agile hater to Agile evangelist – with a caveat: I don't sugar-coat it. Start fresh. Toss out the old Volkswagen and build it from the ground up. It will hurt your wallet in the beginning. People will think you're nuts. But in the end it will be a much more rewarding experience. Feel the wind in your hair!
Dave Whalen
President and senior software entomologist, Whalen Technologies
softwareentomologist.wordpress.com