Innovation for Software Quality | Volume 3: Issue 4: August 2011
Apps don't stand alone: Professor Mike Holcombe on fitting good ideas into the world
Inside: Performance testing | Getting real | The ethics of testing Visit TEST online at www.testmagazine.co.uk
BORLAND SOLUTIONS FROM MICRO FOCUS DESIGNED TO DELIVER BETTER SOFTWARE, FASTER
Borland Solutions are designed to:
• Align development to business needs
• Strengthen development processes
• Ensure testing occurs throughout the lifecycle
• Deliver higher quality software, faster
Borland Solutions from Micro Focus make up a comprehensive toolset for embedding quality throughout the software development lifecycle. Software that delivers precisely what is needed, when it is needed, is crucial for business success. Borland Solutions embed quality into software delivery from the very beginning of the development lifecycle, whichever methodology you use – traditional, Agile or a mixture – from requirements, through regression, functional, performance and load testing. The result: you meet both business requirements and quality expectations with better quality software, delivered faster.
Visit Micro Focus at EuroSTAR 2011, stand 45 & 46, and check out what the Borland Solutions have to offer you. Register now at http://www.eurostarconferences.com/ Micro Focus Ltd. Email: microfocus.communications@microfocus.com © 2011 Micro Focus IP Development Limited. All rights and marks acknowledged.
Not just a slice of life: testing is part of it all
Well, you are an interesting lot. When I took on the TEST editor's chair, I carried no preconceptions except an awareness of the high editorial standards to which Matt Bailey had led you to become accustomed. Further than that, in editing this issue, I find that not only are you an interesting lot but also that your role is critical to the operations of modern organisations. Far from being boffins immersed in the rarefied environment of the laboratory, software testers are right at the heart of things.
I'm a bit of a Formula One nut, and one driver who sprang to mind while I was reading the excellent articles in this issue was Michael Schumacher. He can follow a test driving sequence exactly as the engineers dictate. But, more importantly, he can run a Grand Prix race and, in addition to everything that a driver has to deal with, can make mental notes of observations under race conditions that will cut days or even weeks off the team's testing schedule. It is this ability to observe, through the milieu, those specific functions and operations that appear out of kilter that, it seems, marks out software testers from the crowd. At the opposite end of the frantic scale from motorsport, farmers, by walking their land, identify where things seem not right and then investigate more thoroughly. A farmer's feet, they say, make the best manure. Perhaps a software tester's eyes make the best designs and fixes.

Anyway, enough philosophy; in this issue we have some great articles written from the real experience of people working in the sector. Professor Mike Holcombe has written our cover article on the key matter of testing apps in the context of where they have to operate. Mike is an additional columnist for TEST who, from the next issue, will be regularly sharing his considerable knowledge with us. We look at performance-tester skills and performance testing, testing security, and attitudes to third-party code. Rob Lambert offers a challenging view on learning for testers, and we cover testing for protection, bringing reality into testing and, in Angelina Samaroo's continuing series, the ethical responsibilities of testers. Eitan Lavie explains the impact of IP networks on testing and, last but never least, Dave Whalen offers his bracing approach, asking: what's in your toolbox?

There's lots between these covers to keep you informed until we next land on your desk and, in the meantime, why not visit www.testmagazine.co.uk where you can review past issues and keep up to date. And, as it's summer, enjoy the cricket and time with the people whose company you like. Oh, and if you are hit by an inspiration about software testing, jot down the key points in an email, send it to me at john.hancock@31media.co.uk and, you never know, you could be in a future issue. Until the next issue.

John Hancock, Editor

Editor: John Hancock, john.hancock@31media.co.uk, Tel: +44 (0)203 056 4599. To advertise contact: Grant Farrell, grant.farrell@31media.co.uk, Tel: +44 (0)203 056 4598. Production & Design: Toni Barrington, toni.barrington@31media.co.uk; Dean Cook, dean.cook@31media.co.uk

Editorial & Advertising Enquiries: 31 Media Ltd, Three Tuns House, 109 Borough High Street, London SE1 1NL. Tel: +44 (0) 870 863 6930. Fax: +44 (0) 870 085 8837. Email: info@31media.co.uk. Web: www.testmagazine.co.uk. Printed by Pensord, Tram Road, Pontllanfraith, Blackwood NP12 2YA

© 2011 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160
Powerful multi-protocol testing software
Can you predict the future? Don't leave anything to chance. Forecast tests the performance, reliability and scalability of your business-critical IT systems, backed by Facilita's specialist professional services, training and expert support.
Facilita Software Development Limited. Tel: +44 (0)1260 298 109 | email: enquiries@facilita.com | www.facilita.com
Contents... August 2011

1 Leader column
Not just a slice of life: testing is part of it all.

4 News

6 Cover story – Testing new apps for old systems
When testing an app, as Professor Mike Holcombe of Sheffield University explains, you need also to make sure that you're testing against the right system and that tests include the environment where the app will be used.
10 Scriptless tools: focus on what matters
Graham Parsons, CEO at Reflective Solutions, explains how the introduction and rising popularity of scriptless testing tools is gradually changing the role and required skill-sets of the modern-day performance tester.

14 Testing all-IP networks
The boundaries between traditional telephony and Internet services are fast disappearing. This opens up tremendous opportunities but also poses new challenges for software testers, as Eitan Lavie, VP Product Marketing at QualiSystems, explains.

18 Process of Security Testing a System
Greater sophistication is often accompanied by greater exposure to risk. Ashwin Palaparthi, Vice President, Innovation at AppLabs, considers the threats and makes the case for a systematic approach to testing for those risks.

22 Running the risk with third party code
Rutul Dave, Senior Development Manager at Coverity, discloses the findings of Forrester Research's 'Software Integrity Risk Report' and offers ideas for best practices to apply when managing software integrity.

24 It's what you know not the paper you hold
Just because certificates are viewed as a proof of skills doesn't mean that they are. Rob Lambert, Creative Director at The Software Testing Club, believes there are other ways for testers to build their skill bases.

28 The security audit imperative
Hacking scandals in the headlines may not be good news for sufferers, but Ray Bryant, CEO at Idappcom, believes that they will open everybody's eyes to the importance of security and how to achieve it.

30 Let's get real
It's not so much the date by which the project has been promised, Milan Sengupta, Senior Software QA/Testing consultant, explains; it's more important that the target is realistic.

32 Testing in ethical times
In 'Training Corner' Angelina Samaroo considers the ethical responsibilities of testers for the way that systems impact on consumers, their decisions, their security and their outcomes.

36 Network Performance Testing and WAN Acceleration
As corporate networks expand, global network efficiency becomes a crucial factor in the performance of a given production network. One efficient solution, says Arupratan Santra and colleagues, is optimizing the Wide Area Network (WAN) through WAN accelerators.

40 TEST Directory

48 Last Word – Dave Whalen
Dave Whalen wants to know what's in your toolbox.
News

SilkPerformer CloudBurst – Micro Focus enables software application performance testing on-premise and in the cloud
SilkPerformer® CloudBurst™ is a cloud-based extension to SilkPerformer, the performance testing offering from Micro Focus that enables software quality teams to rapidly launch any size peak-load performance test without having to set up and manage complex infrastructures. The systems that run business have to be available and responsive at all times. Today's applications are under massive strains, which can be prompted by a big promotional event, a seasonal activity spike, unusual end-of-month processing, the merging of two systems and so on. Testing to ensure performance under these conditions demands an innovative approach.

Take testing to the cloud
SilkPerformer CloudBurst enables software quality teams to easily take performance testing to the cloud. As an integrated component of the SilkPerformer solution, CloudBurst enables performance teams to instantly deploy existing performance test scripts to cloud-based load generators for a seamless transition to the cloud. When a massive peak load test is needed, SilkPerformer CloudBurst quickly and economically simulates the load without requiring investment in load testing hardware and setup.

Confidence that applications perform under immense loads
SilkPerformer CloudBurst not only simulates peak loads, but also diagnoses problems, giving organizations insight into their resolution even during a performance fire drill. Because SilkPerformer CloudBurst has a hybrid on-premise and cloud model, the on-premise capabilities make it possible to diagnose performance issues in the data center while an application is put under any size peak load from the cloud.
Renesas Electronics Selects Wind River FAST for Android Software Testing
Embedded and mobile software supplier Wind River has announced that Renesas Electronics Corporation, a supplier of advanced semiconductor solutions, has selected Wind River Framework for Automated Software Testing (FAST) to test Renesas Electronics' Cortex-A9-based system-on-chip (SoC) platform for Android-based smartphones and consumer products. Wind River FAST for Android is a fully automated software testing solution that assists silicon vendors, device manufacturers and mobile operators in improving the software quality of their Android implementations and in testing performance, compliance, device user interface and user experience.

"Wind River FAST for Android helps companies, especially those with demanding product schedules and complex designs, tackle the challenges of Android software testing and deliver high-quality products on time and on budget," said Jerry Ashford, vice president and general manager of mobile solutions at Wind River. "By investing in our… test solution, customers can focus efforts and resources on critical issues such as developing unique features that differentiate their mobile devices and verifying Android compliancy. Wind River FAST significantly reduces the overhead associated with quality testing and compliance."

Renesas Electronics is a supplier of advanced semiconductor solutions, including microcontrollers, SoC solutions and a broad range of analogue and power devices, as well as LCD modules. Renesas Electronics uses Wind River FAST for Android to streamline its software testing process, improve software quality and stability through the use of a variety of tests, including the Wind River Advanced Device Characterization Suite, and verify compliance with the Android Compatibility Test Suite (CTS). For easy access and reporting of test results, Wind River FAST for Android automates thousands of heterogeneous tests and then consolidates the results into a single uniform database.

"To conduct software testing for Android would have required our teams to spend extra time and resources during the testing process. Instead, we're leveraging Wind River FAST for Android's out-of-the-box tests for performance, stability and compliance," said Hiromi Watanabe, general manager of SoC Software Platform Division at Renesas Electronics Corporation.

Additionally, Wind River provides Renesas Electronics with support, Wind River FAST installation, and engineering consulting and training services, including integration of software components, assistance throughout software testing for graphics and multimedia framework customization, and optimization work to enable Android on Renesas Electronics' SoCs.
Shunra Partners with Capgemini Group

Shunra, the specialist in Application Performance Engineering (APE) solutions for WAN, Web, Mobile and Cloud-based networks, has announced a new Worldwide Consulting Partnership agreement with Capgemini Group, providers of consulting, technology and outsourcing services. The agreement will focus on specialist testing solutions. Shunra's solutions allow organizations to implement an APE process that covers: discovery of performance conditions from the target deployment environment; incorporation of real-world network conditions into performance testing environments; reporting of performance results against targets; and analysis of performance bottlenecks.

Capgemini Group's Global Testing Service Line, comprising both Capgemini and wholly-owned subsidiary Sogeti, will adopt Shunra's APE technology as best practice as part of their performance testing service. According to a report from the National Institute of Standards and Technology, "four of every five dollars of the total cost of ownership of an application is spent and directly attributable to finding and fixing performance issues post-deployment. A full one third of this cost could be avoided if better software testing was performed."

Capgemini Group will use Shunra's innovative solutions to help optimize and accelerate non-functional performance testing projects, mitigate the associated risks, and demonstrate results more quickly and at reduced cost. As part of the relationship, Shunra and Capgemini Group will immediately roll out a training and certification program to Capgemini and Sogeti consultants in key geographies.

"Partnering with Capgemini and Sogeti will help provide broad market benefit from Shunra capability and significantly wider awareness of our solutions," said Bill Varga, Shunra COO. "Capgemini Group has a history of leveraging technical insights to the best advantage for their clients. We are delighted to have the opportunity to jointly deliver our solutions to their client base."

"For many of our clients, particularly in the financial services, telecommunications and government sectors, ensuring the performance of a business-critical application in a variety of environment scenarios is a very significant challenge. This is especially the case for major application rollouts, large-scale infrastructure changes, and database migrations and consolidations," said Charlie Li, Vice President of Global Quality and Testing Services for Capgemini Group. Li added, "In the testing space, Shunra can deliver… accurate results… because of their ability to simulate network conditions without actually burdening the network. In addition, their technology can reduce the cost of load testing, particularly for mobile applications. For our clients, this represents a higher degree of certainty of system load capability and an indication of the scalability of the server stack – significant value for application owners. Therefore the business case for partnering with Shunra and leveraging their technology is a strong one."
Lattice's New Mixed Signal Design Software Simplifies Platform Management Design

Lattice Semiconductor has announced release 6.1 of its PAC-Designer® mixed signal design software, with updated support for Lattice's Platform Manager™, Power Manager II and ispClock™ devices. Users designing with Platform Manager devices will now have access to the Lattice Diamond® 1.3 software design environment, which has also been announced. This integration of the PAC-Designer 6.1 and Diamond 1.3 design software tools will make more advanced digital design options available with Platform Manager products. An automated simulation environment, not previously available to Platform Manager designers, is a primary benefit of the design software integration. "With the integration of PAC-Designer 6.1 and Lattice Diamond 1.3 software, [users] will be able to design and simulate Platform Manager devices at an even higher level of productivity while maintaining the ease of use for which PAC-Designer software is widely recognized," said Shakeel Peera, Lattice Director of Marketing for Silicon and Solutions.

New Automated Simulation Capability
Whether testing the functionality of critical analog I/O pin functions controlled by the Platform Manager's internal CPLD, or checking the integration of enhanced digital control functions coded in Verilog or VHDL within the Platform Manager's FPGA control section, PAC-Designer 6.1 software integrates seamlessly with Diamond 1.3 design tools to compile the entire design, create the necessary stimulus template file and then automatically generate initial timing waveforms within the Aldec Active-HDL simulator. This previously complex, manual design flow has been optimized and automated in PAC-Designer 6.1 software to generate all the necessary design files and deliver the initial timing flow diagram with just the click of a mouse.

Comprehensive Analog and Digital Design Support
PAC-Designer 6.1 software provides a GUI-based design methodology for analog engineers that uses intuitive dialog boxes to configure the Platform Manager's analog sections; LogiBuilder design methodology to integrate power management functions into the on-chip CPLD; and LogiBuilder or Lattice Diamond Verilog/VHDL design methodology to integrate digital board management functions into the FPGA section of the Platform Manager device.
Zoltes Releases the Most Flexible IO Benchmarking Tool for the Virtualization Market
Zoltes, a provider of software-only testing solutions for data centers, cloud computing, and voice and video technologies, has released a new version of Zoltes Test Fabric (ZTF) with IO benchmarking capabilities for VMware virtualization environments. ZTF is a cost-effective, software-only solution that provides an IO benchmarking framework for VMware virtualization environments, leveraging its flexibility, use-case coverage, performance metrics and scalability. Customers can easily generate any type of IO traffic under the VMware environment and scale up to thousands of VMs. ZTF can run on the existing blade servers in the lab and save significant time on test bed configuration and management.

Using the GUI or automation CLI interface, users have full control of industry standard tools, like Iometer and Iorate, which run on dedicated virtual machines, generating various flavours of realistic IO traffic by combining block and file modes with read, write, sequential and random access patterns. In addition, ZTF performs data verification of the IO operations generated during testing. ZTF collects and centralizes the IO statistics and results, presenting them to the user for comprehensive characterization of the system under test. Depending on customer needs, ZTF can be easily integrated with any other third-party test tools, with full control over test execution and results, and can be instructed to simultaneously generate IP background traffic in combination with the IO operations in order to measure in a realistic manner the performance of any storage device (FC, iSCSI, NAS) over any type of interface (HBA, Ethernet, CNA) connected to the VMware ESX hosts.
No app is an island – entire of itself
You don't just have to test the app, says Professor Mike Holcombe of the University of Sheffield; you have to test how it fits into the whole system and how users will interact with it
The hand-over went well, with the client very much liking the app, especially the interface. The app went live a few days later. After a couple of hours we were getting reports of problems: users of the app found that it kept crashing. The support phone lines were buzzing – lots of queries about problems users were having, what to do under different conditions, not being able to see the results of their transactions and so on. It got worse. Other users of the main system found that they were locked out and some data seemed to be getting lost, and this got so bad that the client decided to turn off the system and reinstate the original software that was in use before the new app was delivered. So, what had happened during the development of the app?

What went wrong?
The client was a wholesaler who had previously dealt with customers over the telephone. The new app was an e-commerce version of the sales system and needed to interface with the company's main database. The failure of this project led to a post mortem with the suppliers and some experts on testing. This looked at what testing was done during the project. To cut a long story short, the lessons from this failed project can be summarised first in what was not done and then in what should have been done.

The developers were given the current data model by the client. They implemented this in the latest version of the DBMS (database management system) being used by the client company – however the company was using an older version. The developers populated the database with some of their own artificial data – but it only consisted of a few records. The developers each took a different aspect of the app and developed the code. They then integrated the code and carried out some tests, using a small selection of the basic data users would typically use. The system was then delivered and combined with the existing system. A brief user manual accompanied it.
Let’s start afresh
The first decision is about the database. Should they upgrade the main database with the latest version of the DBMS? This is a decision that should not be taken lightly – there are pros and cons. If it is updated then it needs to be rigorously tested using all the application features. Probably the best thing is to run it in parallel with the original system to see that there are no nasty surprises. The developers need to take a full copy of the original database for their own testing purposes. If any of the data is too sensitive to be released to the developers then it needs to be replaced with test data of a similar volume and complexity. Also, the test data needs to include examples of invalid data that may be present in the original database, but which the new app will perhaps in future prevent through more rigorous input validation.

Now we need to think about the design of the app and in particular how the customers will be able to achieve their objectives. We need to articulate the business processes involved and check these against the business processes in the original system. These may not be the same but they should be related, so that customers do not find themselves in difficulties – perhaps some critical aspect of the transaction has been left out or is incomplete. A good way to do this is to think about the different screens and how these provide the mechanisms for each part of the business process/transaction, and how these are then related to provide the workflow that represents a complete transaction process. At this point we need to define the specific functions involved AND WRITE DOWN A FULL SET OF TESTS for these functions. As yet we have no code written but we know what it has to do and how we will assure ourselves that it will do it.
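To make 'tests before code' concrete, here is a minimal sketch in JUnit (the Java unit test tool mentioned below). The OrderService class, its methods and the TestDatabase helper are hypothetical stand-ins for one of the app's transaction functions – at this point no production code exists, which is exactly the point: the tests define the contract the code must meet.

import static org.junit.Assert.*;
import org.junit.Before;
import org.junit.Test;

// Tests written before any production code exists; the names below
// are invented for illustration and describe one screen's function.
public class OrderServiceTest {

    private OrderService service;

    @Before
    public void setUp() {
        // Point the service at a full-size copy of the real database,
        // as recommended above - not a handful of artificial rows.
        service = new OrderService(TestDatabase.fullCopy());
    }

    @Test
    public void acceptsValidOrderForReturningCustomer() {
        Order order = service.placeOrder("CUST-1001", "SKU-42", 5);
        assertEquals(Order.Status.ACCEPTED, order.getStatus());
    }

    @Test(expected = InsufficientStockException.class)
    public void rejectsOrderExceedingStock() {
        service.placeOrder("CUST-1001", "SKU-42", 1000000);
    }

    @Test
    public void rejectsMalformedCustomerIds() {
        // Invalid data of the kind that may already exist in the
        // legacy database must be caught by the new validation.
        assertFalse(service.isValidCustomerId(""));
        assertFalse(service.isValidCustomerId("'; DROP TABLE orders;"));
    }
}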
Coding and testing
Now we come to the coding. We choose a key screen – maybe the log-on screen where customers are faced with a number of choices, e.g. new customer, returning customer etc. The functions relating to these are then written. There is a lot of empirical evidence that pair programming will be more productive, with fewer mistakes and more shared knowledge about the architecture and progress of the app. When a function is implemented it is connected to the full database and tested using the test sets previously defined. At this stage we should also run some of the tests that we used for the original database. Now we turn to another screen or key function and implement that. Here we carry out the same process, testing the new function fully and then integrating it into the system. In this way we build a working system in increments – we know it works and we therefore have much more confidence about the state of progress and the likelihood of meeting the delivery date.

In terms of testing such systems it is clearly valuable to use automated test sets. Many unit test tools are available, such as SimpleTest for PHP, JUnit for Java, and RSpec for Ruby. There is an investment in defining the tests, but the benefits of well-defined and easily repeatable tests are considerable. As this is a web-based system – maybe on a tablet as well – suitable behaviour testing systems should also be used. For checking out the navigation of a web page, tools such as Cucumber, for describing user stories, are very useful. Coupled with browser simulation tools, such as Capybara or Webrat, test sets involving data entry and user interaction with web pages can be simulated, which will save a lot of time compared to manual testing. They can also be reliably repeated to aid in regression testing.

People are also very important in testing. Walkthroughs of features with other developers can identify design defects at an early stage when they are still easy (and cheap) to fix. Asking another developer to spend a few hours reviewing some code might well detect more defects than the equivalent time spent in testing. We also need to test the system with people who might represent typical customers and assess their experience – this can be very revealing. Finally, if all tests pass, we may be ready to deliver and install with the client. Even then it is best to run the new system including the app alongside the old system for a time, just to make sure that everything still works in a real-world environment.

Acknowledgement – Chris Murray, epiGenesys, for helpful comments on a draft.
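Capybara and Webrat are Ruby tools; for a Java shop the same browser-simulation idea might look like the sketch below, which swaps in Selenium WebDriver's headless HtmlUnit driver. The staging URL and element IDs are invented for illustration.

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

// A repeatable user-journey test: log on, add an item to the basket
// and check the confirmation - the walk-through a manual tester
// would otherwise repeat before every release.
public class CheckoutJourneyTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new HtmlUnitDriver(); // headless browser simulation
    }

    @Test
    public void returningCustomerCanPlaceAnOrder() {
        driver.get("http://staging.example.com/login"); // invented URL
        driver.findElement(By.id("username")).sendKeys("demo-customer");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("login-button")).click();

        driver.findElement(By.linkText("Catalogue")).click();
        driver.findElement(By.id("add-to-basket-SKU-42")).click();
        driver.findElement(By.id("checkout")).click();

        String confirmation = driver.findElement(By.id("status")).getText();
        assertTrue(confirmation.contains("Order accepted"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}

Because such journeys run unattended, they can be added to a regression suite and executed on every integration of a new screen, as described above.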
Professor Mike Holcombe
Professor of Computer Science, University of Sheffield
www.shef.ac.uk

He has been teaching practical software engineering for 25 years. He adopted Agile development – specifically Extreme Programming – in 2000 and has carried out empirical research into the benefits of this new approach.
Scriptless tools: focus on what matters
Graham Parsons, CEO at Reflective Solutions, considers the changing role and skill sets of a performance tester
"When I grow up I want to be…. a performance tester"
In the world of application development, becoming a performance tester has long been regarded as a specialist career choice. The complexity of the mainstream performance test tools has meant that one pre-requisite for becoming a performance tester has been the high level of programming skill needed to write the testing scripts these tools require. Not for the faint-hearted, becoming an expert tester has up until now involved extensive classroom and on-the-job training with their tool of choice before testers can even start thinking about applying their brain power to the more important tasks of designing correct performance tests, analysing results and suggesting fixes.

The performance tester has a crucial role in the development of any application that is important to an organisation. Ensuring that an application will eventually scale and respond quickly, even under peak user loads, is central to the satisfaction of the user base. Getting this wrong can have a significant impact on a business, whether through productivity issues, in the case of core internal applications, or lost revenue and brand standing from badly performing ecommerce websites. A few brands have suffered some high profile failures recently when site traffic levels rose beyond 'normal' and/or tested levels; the London 2012 Olympic ticketing site, Amazon (Lady GaGa album discount), the Sunday Times (Social Media list) and Police.uk are all sites that have fallen over since the beginning of 2011. The varying levels of controversy, and media coverage, caused by these site crashes prove that end-users are growing increasingly dissatisfied when websites fail to deliver on performance. So while performance testing might not be the most obvious career choice for school leavers, it is certainly a career that carries great responsibility and offers increasing opportunities, as the need to give greater priority to optimising application performance continues to be driven by the market.
Evolving skill-sets and mindsets
Typically, the best performance testers have been required to invest a lot of time and energy into developing their skills in using a particular testing tool. It can take a very long time to 'learn' the detail associated with traditional script-based test tools, which still dominate the market, and to gain specialist experience in relevant scripting languages. Complex script-based tools require testers to use their expertise to develop test scripts before a test can be configured. This process alone can take weeks to complete. So even when experienced performance testers know the tool inside out, they have to dedicate a lot of their testing time on any project to writing scripts before they can get onto the intended, and most important, act of 'testing'. In addition, with the pressure on getting the scripts completed and executing the test, there is often not enough time for the tester to use the real skill they possess: using their experience to design a performance test that will accurately simulate how the application will be used by real-world users.

The irony is that because it can take weeks, if not months, to set up a performance test with a script-based tool, performance testing does not usually start until the very end of a project. At this point, application launch deadlines are looming, budgets might have overrun and the pressure to deliver an end product is really on. This means that the most important part of the testing process – ensuring that the scripts correctly simulate in all ways how the user community will interact with the application – is often the part which testers are forced to rush. In response to the time and financial constraints they all too often find themselves under, a performance testing team might be forced to take testing shortcuts. Typically this can result in failure to fully test the whole user journey, or all the different routes through the journey, or testing without exploring all the potential user loads and scenarios that could occur in the production environment. The consequences can be disastrous when the application is launched.

This situation is all too familiar and seems to suggest that the central role and most important output of the tester – the ability to design realistic tests, analyse results and report back on the issues encountered – has fallen victim to budget and time constraints. After all, what is the point of a tester knowing a test tool inside out, and spending time configuring and executing tests, if he or she is not given adequate time and resource to design correct tests and analyse the results properly?

The good news is that new, innovative performance testing tools are liberating performance testers from the time-consuming technical tasks of script writing, by providing easy-to-learn and quick-to-use features which allow even the most complex performance tests to be carried out within days, rather than weeks. Driven by intuitive GUIs, scriptless tools feature wizards which prompt for information in order to complete configuration automatically, and provide context-specific help through online resource centres, which can include video tutorials. They are generally revolutionising the process of performance testing by simplifying and automating it, with adopters reporting effort and timescale reductions of up to 80 per cent.

The impact of the introduction of scriptless tools on the role of the performance tester is significant. With less time needed to configure tests, performance testers can spend longer applying their advanced skills, both before the tool is configured, by designing correct tests, and after the test has been executed, by fully analysing the test results and identifying the root causes of performance issues, prior to ensuring that fixes have corrected the problem.

Performance testers using scriptless tools also find that they experience a welcome cultural shift in terms of their way of working alongside the development team. With performance testing suddenly and dramatically made quicker and much less expensive, more members of the project team can execute tests themselves throughout the development process, enabling errors to be rectified much earlier in the project. As such, performance testers are now becoming integrated members of the lifecycle management team, rather than being viewed as specialists who come in at the end of a project. This marks a sea change in terms of how performance testing is likely to be perceived and paves the way for it to be assigned as much importance as functional and unit testing. Equally, it will enable performance testers to focus more attention on their people and multitasking skills as their role becomes less 'technical' (in a sense) and more 'analytical' or problem-solving.
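To see why hand-written scripts consume so much of a tester's time, consider this deliberately minimal Java sketch of the kind of load driver a script-based tool asks a tester to produce. The URL is invented, and a real script would also need session and cookie correlation, parameterised test data, ramp-up profiles, think times and result collation – which is where the weeks go.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// A bare-bones load driver: N virtual users each hit one URL and
// response times are averaged. Everything a realistic performance
// test needs beyond this must also be scripted by hand.
public class MiniLoadTest {

    private static final AtomicLong totalMillis = new AtomicLong();
    private static final AtomicLong requests = new AtomicLong();

    public static void main(String[] args) throws Exception {
        final int virtualUsers = 50;
        final int requestsPerUser = 20;
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);

        for (int u = 0; u < virtualUsers; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    timeOneRequest("http://staging.example.com/home");
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        System.out.printf("avg response: %d ms over %d requests%n",
                totalMillis.get() / Math.max(1, requests.get()),
                requests.get());
    }

    private static void timeOneRequest(String url) {
        long start = System.currentTimeMillis();
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            InputStream in = conn.getInputStream();
            while (in.read() != -1) { /* drain the response */ }
            in.close();
        } catch (Exception e) {
            // a real script would log and count errors separately
        }
        totalMillis.addAndGet(System.currentTimeMillis() - start);
        requests.incrementAndGet();
    }
}

A scriptless tool generates the equivalent of this plumbing from a recorded session, leaving the tester free to spend the saved effort on test design and results analysis.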
Demonstrating competence
The competence of a performance tester or performance testing team is often ultimately judged on how well an application performs and scales in the production environment. Until now, this evaluation methodology has perhaps been unfair, given the limitations performance testers faced: the complexity of the job they have to do in a limited testing time; insufficient resources; and forced shortcuts due to tool, time and financial constraints, leading to unrealistic test scenarios. The advent of scriptless tools will make it much easier for performance testers to deliver a more thorough job, in spite of the time and budget restrictions. This, in turn, will ultimately allow them to receive a fairer verdict in terms of the application's success reflecting their professional competence. After all, there is some truth in the old adage 'a workman is only as good as his tools.'

Graham Parsons
CEO, Reflective Solutions
www.stresstester.com
Testing all-IP networks
From lab management to data analysis, Eitan Lavie, VP Product Marketing at QualiSystems, explains how the advent of IP networks has changed testing priorities
Until relatively recently, voice, data and video services (known as 'triple play') were carried separately on networks designed specifically for their unique requirements. With the advent of IP convergence, however, the Internet is increasingly used as a single platform for these three different functions. The benefits of IP convergence for consumers and businesses are clear. IP systems are cheaper and more flexible. New technologies can easily be incorporated into the network without adding expensive infrastructure. Employees can now be more mobile, accessing information remotely over the network. A single device can now be used for different technologies. Smartphones, for example, are used to surf the Internet, as cameras, video and music players, and as Voice over Internet Protocol (VoIP) phones as well as traditional cellphones.
As the boundaries between traditional telephony and Internet services disappear, telecom companies providing the new multimedia and converged IP networks face increasing challenges. So do the enterprises (such as banks and insurance companies) relying on these complex networks for their business-critical services.
Network testing challenges As network complexity increases, so does the need for network testing. To ensure services maintain a high degree of flexibility and reliability while remaining competitive, Service Providers (SPs) and enterprises must conduct rigorous and thorough network testing, which is often made more complex by the nature of the converged all-IP network.
Upgrading the network
In order to handle the increase in both volume and type of communications, organizations today are investing massively in infrastructure improvements. But upgrading the network infrastructure is no simple matter, and network crashes or outages are an SP's worst nightmare. There are many recent examples that emphasize the importance of this issue; the damage extends beyond financial losses to brand reputation. Last April, Verizon suffered its most severe network crash, shortly after implementing LTE. Skype's outage last December resulted in 24 hours without service for many of its worldwide users. The UK's HSBC Bank suffered a major network crash when upgrading its entire business infrastructure. And this is just the tip of the iceberg.
Managing resources
In order to secure network stability, organizations need to mimic network infrastructures in the testing lab, where frequent topology changes are required for a large variety of scenarios. One of the major reasons for the complexity is the endless variation of equipment in the network – including different types, models and versions from a growing number of vendors. Without centralized management, resource tracking and allocation, managing equipment is a great challenge, especially when it is shared among testing teams in multiple, distributed sites. In a large-scale lab, multiple test engineers or testing teams compete to use the same network elements and equipment for their tests. In the absence of a resource tracking system, this inevitably results in overlapping and test downtime. Testing teams must then manage ongoing resource conflicts, and under-utilization of costly network resources.

Setup and tear down
Network infrastructure comprises multiple elements, usually from different vendors, including routers, switches, gateways, BSS/OSS, traffic generators, sniffers, network monitors and more. While modelling equipment setup in the lab, and taking into account the complex matrix of combinations, all these elements must be configured to communicate with each other and support test requirements. The need to mimic the network structure requires extensive cabling and configuration to connect all network elements. Manual equipment set-up and cabling for each test is time consuming, and also comes with an inherent risk of human error. Configuring these different elements requires time and effort, as well as extensive specialist knowledge, such as familiarity with the various interfaces and protocols of the resource. The expanding range of equipment, for which new hardware and software versions are regularly released, requires testers to periodically re-program drivers to interface with test software (a sketch of such a driver layer follows). Testing lab knowledge management is therefore problematic, and the gap can be costly to bridge when skilled personnel are absent. When the setup/teardown and configuration effort is huge, some labs avoid it by keeping setups static, while purchasing more equipment for diverse topologies. As a result, resources are under-utilized.
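As an illustration of that driver problem, a lab automation layer typically hides each vendor's command interface behind a common abstraction, so a firmware change means updating one small driver rather than every test script. The interface names and the IOS-style command sequence below are assumptions made for the example, not any particular product's API.

// Hypothetical device-driver abstraction for a test lab: tests talk
// only to DeviceDriver, so vendor or version differences stay inside
// one driver class.
public interface DeviceDriver {
    void applyConfig(String configBlock);
    String runCommand(String command);
}

// Transport is left abstract so the example stays self-contained;
// a real lab would plug in telnet/SSH/SNMP implementations here.
interface CommandChannel {
    String send(String line);
}

class IosStyleRouterDriver implements DeviceDriver {
    private final CommandChannel channel;

    IosStyleRouterDriver(CommandChannel channel) {
        this.channel = channel;
    }

    public void applyConfig(String configBlock) {
        channel.send("configure terminal");
        for (String line : configBlock.split("\n")) {
            channel.send(line);
        }
        channel.send("end");
    }

    public String runCommand(String command) {
        return channel.send(command);
    }
}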
Expanding testing matrices, expanding time-to-market
As new network technologies are introduced, it becomes increasingly challenging for testing teams to maintain a competitive time-to-market while still guaranteeing the high quality of service that customers demand. An enormous variety of tests must be executed, including traffic blasting, load, stress, QoS, QoE and interoperability tests and more. Each test case may measure several different parameters and make multiple validations. The result is an ever-expanding testing matrix, and an ever-shrinking timeframe in which to cover it. When the huge volume of data that needs to be handled is added to the equation, timing becomes an even greater issue. If test data are not thoroughly analyzed, crucial trends are missed, and the influence on test results of parameters like network configuration changes and version upgrades is easily overlooked. Manual report generation is time consuming, requiring engineers to aggregate non-standard test data and undertake extensive data mining. All too often, the end result gives only a limited view of a test run. If test data cannot be read and analyzed as soon as possible after a test ends, the result is costly, business-critical delays.

Breaking conceptions of network testing
Rather than relying on traditional testing technology and paradigms, converged telecom service providers, carriers and enterprises need new tools that take into consideration the reality and complexity of new converged IP networks, the wide variety of services that run on them, and the need for end-to-end testing covering all network endpoints. The need for increased efficiency has encouraged many companies to take up the challenge and embrace automation in their testing labs. Some invest effort in designing custom frameworks that meet the company's unique requirements, but these 'home-grown' solutions become an exhausting burden for the R&D team. The build-or-buy question for an automation framework can now be resolved with innovative new off-the-shelf technologies that are scalable and flexible enough to fit unique and changing testing lab requirements. Organizations can concentrate development efforts where they are most needed – in the product – while test duration is reduced, test coverage is increased and a single testing solution can be successfully used across the entire testing infrastructure.
Central lab management
To guarantee efficient resource management, central control of all equipment and its properties is required. All resources need to be managed in a single repository, easily accessed and regulated for all users. In addition, resource properties should be available on-the-fly to all users. Such structured management allows automated tracking of required resources according to set parameters. The outcome is a drastic saving of time and effort.
Since resources are shared among users, a reservation, scheduling and locking mechanism is required. A central shared calendar to schedule equipment usage can guarantee resource locking under a required topology for specific timeframes. With such mechanisms, resource availability can be tracked at a glance, improving efficiency by avoiding overlaps and test breaks. The direct impact is an increase in resource utilization; a secondary benefit is a 'greener' test lab, as equipment power consumption is not wasted, but is well-managed and controlled.
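A minimal sketch of such a reservation-and-locking mechanism is below. The class names and in-memory storage are illustrative assumptions, standing in for the shared calendar a lab management product would keep in a central database.

import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: a tiny in-memory reservation calendar that locks
// a piece of lab equipment for a time window and rejects overlaps -
// the core behaviour a central lab-management calendar provides.
public class EquipmentCalendar {

    static final class Reservation {
        final String user;
        final LocalDateTime from, to;
        Reservation(String user, LocalDateTime from, LocalDateTime to) {
            this.user = user; this.from = from; this.to = to;
        }
        boolean overlaps(LocalDateTime f, LocalDateTime t) {
            // half-open intervals [from, to) overlap iff each starts
            // before the other ends
            return f.isBefore(to) && from.isBefore(t);
        }
    }

    // device id -> its booked time slots
    private final Map<String, List<Reservation>> bookings = new HashMap<>();

    /** Reserve a device; returns false if the slot is already locked. */
    public synchronized boolean reserve(String deviceId, String user,
                                        LocalDateTime from, LocalDateTime to) {
        List<Reservation> slots =
                bookings.computeIfAbsent(deviceId, k -> new ArrayList<>());
        for (Reservation r : slots) {
            if (r.overlaps(from, to)) {
                return false; // conflict: another team holds the lock
            }
        }
        slots.add(new Reservation(user, from, to));
        return true;
    }
}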
Automated topology setup and configuration
To avoid considerable effort spent building and tearing down test topologies for the all-IP test matrix, SPs need a user-friendly method to set up and modify topologies. A graphical interface for drawing topologies and associating the devices with active automated configuration procedures enables users to build and tear down topologies at the click of a button. Combined with embedded physical switch control, the application can automatically set connection routes between devices, saving cabling effort and simplifying the whole process even further. By providing out-of-the-box interface integration together with device control templates for many popular brands of test equipment, the time to build automated procedures is reduced substantially, allowing testers without programming skills to generate the automation directly.

Increasing the testing coverage
With a drag-and-drop graphical test creation interface, testers are able to easily create complex automated tests and regressions independently. This reduces the burden on overstretched R&D teams, while allowing testing teams to become independent and easily adjust to a changing network environment. By automating the test execution process and using a scheduling mechanism, tests and regressions can be executed 24/7 over thousands of different test combinations, shrinking execution cycles and increasing testing coverage, including integration testing, IP performance, CPE, conformance, load and stress testing and much more.

Data aggregation and analysis
Centralised management of the testing process enables cross-organizational knowledge sharing, and avoids duplication of effort by allowing test scenarios and assets to be recycled. Using a unified framework to cover the testing process brings great value at the analysis stage, as all components are integrated into a single central server. Automatic aggregation of standardised test data into a central database can simplify test result tracking, help to avoid manual data mining, and allow test data to be leveraged for timely, business-critical decisions. By shrinking overall test cycle time and maximizing the usage of lab equipment, an end-to-end test automation and lab management framework can help Service Providers, Carriers and Enterprises dealing with business-critical networks keep time-to-market to a minimum, ensuring competitiveness and, moreover, guaranteeing network stability and adaptation to change during the ongoing journey towards the next network upgrade.
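As a sketch of what that automatic aggregation can look like, the snippet below normalises one tool's result into a standard record and writes it to a central results database via plain JDBC. The schema, connection URL and credentials are invented for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Instant;

// Illustrative sketch: one standardised record per test run, stored
// centrally so trends can be queried instead of mined from per-tool
// report files.
public class ResultAggregator {

    public void store(String testName, String topology,
                      boolean passed, double responseMillis) throws Exception {
        try (Connection db = DriverManager.getConnection(
                 "jdbc:postgresql://labdb.example.com/results", "lab", "secret");
             PreparedStatement insert = db.prepareStatement(
                 "INSERT INTO test_results " +
                 "(test_name, topology, passed, response_ms, run_at) " +
                 "VALUES (?, ?, ?, ?, ?)")) {
            insert.setString(1, testName);
            insert.setString(2, topology);
            insert.setBoolean(3, passed);
            insert.setDouble(4, responseMillis);
            insert.setString(5, Instant.now().toString());
            insert.executeUpdate();
        }
    }
}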
Eitan Lavie
VP Product Marketing, QualiSystems
www.qualisystems.com
Process of Security Testing a System
Thorough testing, as Ashwin Palaparthi, Vice President, Innovation at AppLabs, explains, is best done to a plan
In this new era where technologies converge with user experience, most enterprises across various industries use and depend on a growing number of relatively new and emerging applications. While technology advances focus on providing a rich user experience, they also pose a greater threat to enterprises because they present a larger attack surface. The sophistication of attacks has increased over time, and more varied and advanced techniques have evolved to exploit vulnerabilities in emerging technologies. Added to this is the growing pressure on enterprises to comply with numerous regulatory mandates from government and private industry bodies.

In the current computing environment, where IT infrastructure has become increasingly complex and interconnected, it has become imperative for businesses to assess their security vulnerabilities and defence mechanisms time and again. No matter how well a given system might have been developed, the nature of today's complex systems – with huge volumes of code, interconnectivity with several disparate external applications and devices, and varied internal connections – exposes them to a higher risk of attack. Even a small security breach, or compromise by hackers of any system in an enterprise, can prove very expensive, with time-consuming rework and huge damage to a brand reputation built over years of good service.
We need a plan
Hence, there is a need for enterprises to meet security requirements in a fast-changing threat landscape, while at the same time delivering applications and services in the given time span. Implementing the right security testing best practices will help enterprises to enhance the security posture of their systems. While organizations are investing heavily to boost their security efforts, ensuring the security of enterprise systems requires a combination of the right security testing approach, a suitable strategy, intelligent technology choices and experienced security testing professionals.

Security testing effectively deals with several issues, ranging from performance issues to security failures, while keeping a tab on overspending. It attempts to break the security mechanisms and identifies various scenarios where an application fails or behaves abnormally. Enterprises need to establish a systematic and comprehensive security testing approach to deal effectively and efficiently with a broad range of security issues across several systems. An ideal approach to security testing a system will essentially involve four significant steps:
• analysis of the system;
• identification of potential weak points;
• assessment of the risks posed by weak points; and
• actions to be taken following those findings.
Analyse the system
It is quite essential for enterprises to analyse the system in order to identify the possible attacks. There is a difference between understanding the functional behaviour of a system and understanding it against the system's business goals. Knowing the business goals will help to predict what an attacker can compromise and, potentially, to pre-empt such risks. The industry has an informally evolved process for this called threat modelling, and a few free tools have been developed by Microsoft to help organizations gather and model threats from a design and testing perspective. Given the goals, we can then index the threats. This includes identifying all the roles of a system, the data assets that the application is dealing with, and the privileges that each of the roles possesses (such as Create, Read, Update, Delete). A suite of 'use cases' for the application can be built covering the capabilities, benefits, limitations and boundaries of different roles on different data assets within a system. For each use case, we can then devise a list of 'abuse cases' that map the relevant attacks that could compromise the expected path of system usage. These relevant attacks could be based on the Top Ten Vulnerabilities listed by OWASP (www.owasp.org), the SANS Top 25 and so on. The OWASP top ten list includes injection, cross-site scripting (XSS), broken authentication and session management, insecure direct object references, cross-site request forgery, security misconfiguration, insecure cryptographic storage, failure to restrict URL access, insufficient transport layer protection, and unvalidated redirects and forwards.

Identify weak points
The next step is to identify the potential weak points in a system. Entry/exit, communication and storage are the three key weak points where the three facets of security – Confidentiality, Integrity and Availability (the CIA triad) – can be compromised. Extranets, intranets, firewalls, public web servers, routers and several other critical systems can act as an entry point for hackers to exploit any mechanism to gain access to the protected network. Remote access is one of the most common points of entry for intruders. Hackers can also exploit any communication channels between the enterprise and its customers and suppliers. And with increasing volumes of data stored across various data repositories in an organisation, enterprises need to protect critical data from attack.

Assess the risks
Once the weak points are identified, the potential risks should be assessed. An attack matrix has to be prepared to catalogue all known vulnerabilities in a given type of application that might potentially be found and exploited. It is also important to identify the heuristics of the unknown vulnerabilities.
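As a concrete illustration of such an attack matrix, the sketch below pairs each use case with abuse cases drawn from the OWASP categories listed above, and attaches a simple likelihood-times-impact score so that fixes can be prioritised. The enum values, example rows and 1-5 scoring scale are assumptions made for the example, not part of any standard.

import java.util.Comparator;
import java.util.List;

// Illustrative attack matrix: each use case is mapped to candidate
// abuse cases with a naive risk score of likelihood x impact.
public class AttackMatrix {

    enum Threat { INJECTION, XSS, BROKEN_AUTH, CSRF, URL_ACCESS }

    record AbuseCase(String useCase, Threat threat,
                     int likelihood, int impact) {
        int risk() { return likelihood * impact; }
    }

    public static void main(String[] args) {
        List<AbuseCase> matrix = List.of(
            new AbuseCase("Customer logs in", Threat.BROKEN_AUTH, 4, 5),
            new AbuseCase("Customer searches catalogue", Threat.INJECTION, 3, 5),
            new AbuseCase("Customer posts a review", Threat.XSS, 4, 3),
            new AbuseCase("Admin views all orders", Threat.URL_ACCESS, 2, 5),
            new AbuseCase("Customer updates address", Threat.CSRF, 3, 3));

        // Highest risk first: drives the order of testing and fixing.
        matrix.stream()
              .sorted(Comparator.comparingInt(AbuseCase::risk).reversed())
              .forEach(a -> System.out.printf("%-28s %-12s risk=%d%n",
                      a.useCase(), a.threat(), a.risk()));
    }
}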
Take action
Once the risks are identified, the necessary measures should be taken to rectify the vulnerabilities and prevent possible attacks. The identified vulnerabilities have to be 'scored' to determine the financial priority and influence in driving a fix. After investigating the potential risks associated with each of the identified vulnerabilities in an application, the IT team should explore the best practices for fixing the issues. Unlike regular bugs in an application, most security issues have recommended ways of fixing them. A safety measure also has to be built into the design process for future detection and prevention of the same kind of issue.

Some of the key security testing measures, such as network security assessment, vulnerability scanning, log reviews, penetration testing and password cracking techniques, can be implemented to identify the vulnerabilities in a system. While each security technique has its own benefits, most of these are used in combination to make a comprehensive security assessment. Network security assessment examines the entire external and internal IT infrastructure and identifies the presence of any environment-related vulnerabilities, errors in configuration, missing hot fixes and patches, as well as compliance with specific security standards, while penetration testing can be used to identify any vulnerable hosts and services that could potentially be targeted by hackers.

Establishing a security testing framework helps enterprises to execute tests in a systematic way, take a common approach to testing various emerging technologies, reduce cycle time, and minimize the cost of bug finding. There may be many patches enterprises need to apply to their systems on a regular basis as a result of security testing, which ultimately results in the reduction of overall vulnerabilities. Priority should be given to the most important and critical systems, and to the most crucial patches, first. While several security testing best practices will enable enterprises to uncover potential vulnerabilities, a well-defined process will help in analysing an organization's ability to surmount attacks, identify weak points, assess the potential risks and implement the necessary measures – ultimately enhancing the security posture of the organization.

Ashwin Palaparthi
Vice President, Innovation, AppLabs
www.applabs.com
Running the risk with third party code A study reveals, as Rutul Dave, Senior Development Manager at Coverity, explains, that less than fifty per cent of third party code is tested for quality and security in development
The issue of software integrity has become a vital source of competitive differentiation for companies across all industries. Whether direct and obvious, or indirect and so harder to realise, there is real business value associated with software quality – and major potential for losses from insecure code. Many recent news headlines about security breaches, stolen user data and unauthorised access can be traced back to common programming mistakes and defects introduced during software development. Errors, bugs, defects (whatever you wish to call them) are also the cause of insecure code, and can add up to major financial and business costs that cannot be ignored. According to a 2009 survey from Software Productivity Research LLC, poor software quality costs companies more than US$500 billion a year globally in financial, competitive, and brand equity losses.
A significant source of insecure code these days is third-party software. The Software Integrity Risk Report, a study commissioned by Coverity and conducted with Forrester Consulting, surveyed 336 software development influencers in North America and Europe on current practices and market trends for managing software quality, security and safety.
Some of the results
According to the study, more than 90 per cent of respondents admit to using software source code from multiple third parties, and this code is not tested for quality, safety and security with the same rigour as in-house developed software. The study also reveals a skewed risk-to-responsibility culture forming in development, highlights the impact software defects have on business, and uncovers significant adoption of third party code and the impact of that code on business priorities. Here are some of the figures that stand out:
• Of the 90 per cent of respondents using third party supplied code, more than 40 per cent cited that problems with that code (product delays or recalls, security vulnerabilities, increased development time and revenue impact) have caused them to seek greater visibility into code integrity.
• Roughly 65 per cent of companies say that customer satisfaction is impacted by software defects, while 47 per cent believe time-to-market is also impacted by software defects.
The research study also highlights gaps between testing internally developed code and third party software:
• Only 44 per cent of companies conduct automated code testing during development for third party code, compared to 69 per cent that use automated code testing for internally developed software;
• Only 35 per cent conduct risk, security or vulnerability assessments for third party code, compared to 70 per cent deploying these methods on their internally developed software;
• Only 35 per cent apply manual code review to third party supplied software, compared to 68 per cent who perform manual code review on internally developed code;
• Quality assurance gaps are also indicated, with 51 per cent of respondents stating they perform automated functional, load and unit testing for supplied software, compared to 75 per cent who apply these QA testing methods to internally developed software.
The skewed risk-to-responsibility culture forming in development was identified because:
• In nearly one out of every two cases, the buyer side is held 100 per cent responsible for quality and security issues found in third-party code, compared to one in every ten cases where the third party supplier is held 100 per cent accountable;
• The study also confirms that developers are taking on additional responsibility, with more than 74 per cent of respondents stating that developers are held more accountable for quality and security goals than a year ago.
The Software Integrity Risk Report data is very telling of the drivers for change in software code accountability. Today's development teams are in a real pinch: developers are held accountable for the outcome of their software, yet cannot control the software supplied by third parties. This has led to strong demand from customers seeking control and governance over the entire software supply chain. One of the best ways to tackle software integrity when third party
software is involved is to leverage a solution like static analysis. This technology is adept not only at facilitating the discovery of security defects, but also at assuring the overall quality of the software. Static analysis is done by an automated engine, without actually executing the programmes built from the code. As the code is compiled, the analysis engine identifies execution paths and patterns that could result in unintended system behaviour, system crashes or security vulnerabilities. Mainstream adoption of static code analysis is gaining speed across industries using software to run their businesses and equipment. In some cases the analysis process is close to becoming a legislated requirement. Take, for example, the growing number of highly sophisticated medical software packages operating across hospitals to help patients and potentially save lives. The technology is so advanced and complex that, in America, the FDA has identified the use of static code analysis as a means of improving the quality of software. Another strength of the technology is that developers can start analysing code and identifying defects and vulnerabilities as soon as the code has been written. Contrast this with dynamic testing approaches: an expert would need at least a partially working, executable test harness, some dependencies, perhaps a set of test data to feed it as input and, potentially, a working test environment which mimics what the 'real world' will look like. Of course, there will always be certain classes of bugs that depend upon runtime or environmental factors, which is why it is still important to also have a solid dynamic testing strategy.
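To make this concrete, here is an illustrative defect pattern of the kind a path-sensitive static analysis engine reports without running the code; this is a generic example, not output from any particular tool:

# One execution path through find_user() returns None; static analysis
# traces that path into greeting() and reports a possible None
# dereference, with no test harness or test data required.

def find_user(users, user_id):
    for user in users:
        if user["id"] == user_id:
            return user
    return None  # the path the analyser cares about

def greeting(users, user_id):
    user = find_user(users, user_id)
    # Reported defect: if the lookup failed, `user` is None here and
    # subscripting it raises a TypeError at runtime.
    return "Hello, " + user["name"]

def greeting_safe(users, user_id):
    # The fix the tool's report would point towards: guard the None path.
    user = find_user(users, user_id)
    if user is None:
        return "Hello, guest"
    return "Hello, " + user["name"]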
Summary
With the increasingly complex software stack and reliance on third party suppliers, companies need better visibility into the security, quality and safety of their code. Finding a way to effectively test all the source code found in software can give developers control over the supply chain.
To view a copy of the full Forrester Research study, visit www.coverity.com/forrestersoftware-integrity-risk.
Rutul Dave Senior Development Manager Coverity www.coverity.com
It's what you know, not the paper you hold Focusing on certification, in the view of Rob Lambert, Creative Director at The Software Testing Club, may be blinding testers to real learning opportunities
I have many concerns about the state of testing, but one area that really stands out for me is self-education and the perennial reliance on certifications for learning – or, looked at another way, the standardisation of testers and learning. Certifications are hotly contested and passionately discussed in the testing community; so much so that a sensible conversation can rarely be had on the topic. I wouldn't go so far as to say certifications are ruining our industry, but we need to ask some important questions about the future of software testing and testers' reliance on just one form of education. I'm only asking questions, not apportioning blame – asking whether there is an alternative to current solutions. A rhetorical question really; I know there are alternatives. No doubt the certification courses on the market today evolved from a movement that probably had good intentions: to raise the testers' skills bar by offering an entry level set of standard knowledge and terminology. The problem is that the
bar, if it was raised at all, didn't stay up for long. It is also difficult to standardise testing terminology, especially when the testing team is part of a bigger department. Trying to change an entire department or company's use of terminology is often a waste of time. Does it matter if company A calls it a Test Case whilst company B calls it a Test Script? Not if everyone within each company shares a common terminology.
Certification inflation
Reliance on certifications has pushed software testing into certification inflation, where an increasing supply of eligible candidates with certifications, in a market with too few jobs, inflates the certification requirement for the remaining roles. A job that once needed a Foundation certificate now requires an Intermediate, and so on. Testers are demanding more certification levels and more variants of existing levels, which vendors appear all too happy to provide. It is fuelling a mentality where we continue to certify because we have already started to certify, rather than seeking an alternative. The really concerning side effect is that I don't believe it is making people better testers. It doesn't feel like it is working. Many testers look to the certification providers and standards boards to provide a learning structure for them throughout their career in software testing; I believe they are being failed by the very system that they help fuel. When looking at certification schemes I always see a direct comparison with the school system in the UK, and much of the Western world. In fact, any process of mass 'certification' suffers the same problems. Over time these systems of mass education become the norm, even if they don't work. But, for testers, I believe it is not too late to change. We can still seek out
alternatives and experiment before the norm of certification standardisation becomes so ingrained that we find ourselves questioning this very same process in another 30 years, when changing it will be a whole lot harder.
Education assembly lines
In 'The Importance of Living', Lin Yutang [1] wrote about the school system as he saw it in the 1930s; much of what he says remains true today and is directly applicable to the way the mainstream testing community is approaching certifications. "We have this system (placing more value on measured, passive cramming, mechanical and uniform assessments) because we are educating people in masses, as if in a factory, and anything which happens inside a factory must go by a dead and mechanical system. In order to protect its name and standardize its products, a school must certify them with diplomas. With diplomas, then, comes the necessity of grading, and with the necessity of grading come school marks, and in order to have school marks, there must be recitations, examinations, and tests. The whole thing forms an entirely logical sequence and there is no escape from it." Lin then moves on to say: "But the consequences of having mechanical examinations and tests are more fatal than we imagine. For it immediately throws the emphasis on memorisation of facts rather than on the development of taste or judgment." In the testing world we have further shortened the learning cycle. When I last checked, foundation course training was just one day followed by a multiple choice exam... and employers are hiring based on this certificate? I refer to this shortened, cash-for-education process as a Trolley Dash, where 'buying' solutions is preferred over long-term continuous learning and self-education. The problem with any
mass mechanical examination is that it treats each person and situation as a single, measurable, constant and predictable unit to be instructed, tested and graded. The real world is far more complex.
Not the only way
As with almost any contentious subject, there are extremes of hard line thinkers and others somewhere on a scale in between. Throw in the apathetic, the lethargic and the stragglers and you have a varied mix of views and opinions. Yet many people find certifications incredibly helpful and informative, and they can be useful elements on a wider learning path. The problem arises when the certification scheme is the only learning and is assumed by the market to be a complete marker of excellence. Our value on the open market should not be derived from certificates alone but from the value we add to the community and the businesses we work for. Our own sense of self-value should come from a learning path through many different sources of education. In my eBook 'The Problems With Testing' I jokingly suggest that the money being made from certification courses and exams could actually help to pay for a free or open source Foundation course. But I'm not entirely joking: if the Foundation course is truly about creating an open introduction to terminology and fundamental concepts in testing, then why isn't it free? There are many alternative education paths available to testers, many of which are free, so I suspect the
time will come when people question why they are paying for certifications. You could do some testing at the weekend or on weeknights, join an online community, sit a peer-reviewed course, subscribe to a growing number of blogs, join the social network conversations (Twitter, LinkedIn, Facebook) or simply sit down and read a book. There are countless Open Source, Not-For-Profit or Community projects you could volunteer to help with. There are a huge number of Open Source or free test tools available; try downloading one and mastering it. Each of these alternatives will provide valuable skills and insight into alternative forms of learning. They will also build your social network and widen your awareness above and beyond anything a standardised, generic course could ever hope to do.
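As a small, hypothetical example of that last suggestion, here is a first exercise with a free tool – Python's built-in unittest module – where the function under test and its expected behaviour are invented for illustration:

# Self-education with a free tool: pick a small function, write down its
# expected behaviour as tests, and let the tool check your understanding.
import unittest

def slugify(title):
    # The (invented) function under test: turn a title into a URL slug.
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Learning testing skills"),
                         "learning-testing-skills")

    def test_single_word_is_lowercased(self):
        self.assertEqual(slugify("Testing"), "testing")

if __name__ == "__main__":
    unittest.main()

An evening spent writing and breaking tests like these teaches more about boundaries and expectations than a day of memorising terminology.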
Conclusion
Whether you find certifications useful to your learning or not, it is always worth seeking out alternatives such as open and free foundation courses, advanced courses combined with mentoring, apprenticeships and on-the-job training. Whether these sources complement or replace certifications is a decision only you can make; but without knowing about the alternatives, it is going to be increasingly difficult to stand out in a crowded job market or develop your skills beyond everyone else.
[1] – http://www.amazon.co.uk/Importance-Living-Yutang-Lin/dp/0688163521
Sources:
The Problems With Testing eBook is available for free download at: http://blog.softwaretestingclub.com/wp-content/uploads/theproblemswithtesting.pdf
The Software Testing Club: http://www.softwaretestingclub.com/
Michael Bolton's Rapid Software Testing Course: http://www.developsense.com/courses.html
Weekend Testing: http://weekendtesting.com/
Weeknight Testing: https://twitter.com/#!/wntesting
Association For Software Testing: http://www.associationforsoftwaretesting.org/
Rob Lambert Creative Director The Software Testing Club www.softwaretestingclub.com
The security audit imperative Ray Bryant, CEO at Idappcom, explains why the News of the World voicemail hacking saga will help IT professionals in the longer term
The recent media frenzy surrounding the News of the World hacking saga is likely to have a lasting effect on the world of security, namely that it will help people understand when security is not strong enough. No-one is suggesting we introduce voice biometrics or two-factor authentication (2FA) systems for cellular voicemail, but a higher degree of security than a default PIN is now required.
And as the security posture of voicemail is increased, so it helps to set the scene for a better understanding of
IT security generally – and the need for better education on the reasons why it is needed. From there, it then falls to the management and members of the company IT department to plan – and deploy – better and more effective security systems to defend the company’s IT resources and, of course, the company’s Web site, many of which have been hacked in recent months. Thankfully for most of us working in the IT industry, the majority of the mainstream Web site hacks have been amusing or embarrassing for the organisation concerned. In
many cases, no lasting harm is done, although some IT managers will have had to burn the midnight oil to remediate the fallout from a system or Web site hack. What these hacks prove, though, is how easily hackers are getting past lax security. This is despite the fact that many experts calculate the cost of IT security defences to be rising significantly, to the point where the bean counters are starting to get agitated.
The problem
The problem our industry faces is that there is a dearth of experienced people who can understand, develop and deploy effective IT security defences. And, unfortunately for us all, there is now a culture of hackers strutting their stuff for fun. The good news is that there is an increased awareness of security issues in most companies: IT security has become an understandable business risk issue for boardrooms and managers generally. The problem is that, as demand for expertise in a given area of IT such as security increases, while the actual supply is relatively finite, a scarcity of resources starts to occur. Against this backdrop, costs inevitably rise and, as many organisations are now finding, if you can't get the resource – or afford the resource – then you need to start buying into a shared resource, which typically involves outsourcing elements of your IT security. But whilst IT professionals have never had it so good in terms of the variety and ease of deployment of IT security systems, the need to audit the efficiency of those systems has also never been greater.
What has to be done?
With corporate regulation such as the PCI DSS (Payment Card Industry Data Security Standard) increasingly mandating the use of security systems that can be audited, it is not enough simply to install effective IT security: its efficiency needs to be both auditable and provable. This introduces testing and auditing of security defences… on a regular basis… and using verifiable means. Increasingly, IT departments must perform regular security audits and penetration tests on their IT systems to verify that the defences are working, and continue to work as new software is added.
The good news here is that the tools available to perform these audits vary from those, like scanners, that produce a list of what you might be vulnerable to, to those that actually test and enhance your network perimeter defences. Most tools that test the defences are Unix-based freeware and unsupported utilities. There is, to my certain knowledge, only one commercially developed and supported tool on the market. Depending on your IT perspective, this can be viewed as either positive or negative. I prefer positive; I'm a great believer in getting to know how your software ticks, as this level of expertise rarely goes amiss in today's IT-driven organisations. Oh, and it also looks good on your CV.
How do you perform this task?
Well, in today's hacker-rich world, you need to raise IT security to the highest level of business risk awareness. You need to deploy all known methods of defence and set about the process of improving those defences. You may have a line of firewalls, UTMs, IDS/IPS and other security systems installed on your network, but how do you know they really stop the attacks? There are many factors that can render your particular IT security configuration ineffective, and it is even possible that the security signatures at the heart of your antivirus software may not really work. In order to plan an effective security strategy, an organisation should consider how risk averse it is. It is then down to the IT manager to explain what can and cannot be done on a given budget. But since these applications are now prime targets for cybercriminals and hackers, it is crucial that the latest patches are deployed across all terminals in your organisation. And this is where effective IT security auditing enters the frame. A good network and security testing suite can spot the machines that make your entire IT platform weaker than it should be, and recommend the remedial steps needed. It is pointless to let threats through believing that your servers or workstations are protected. No matter what defences you have, and whatever you use to test them and meet compliance, it is the connection to the outside world that you need to enhance to its maximum capability.
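As a minimal sketch of one such audit step – confirming which service ports on a host actually answer – the following uses only Python's standard library; the host address and port list are placeholders, and a real audit would use a dedicated, supported scanning tool:

# Stdlib-only check of which TCP ports accept connections on a host you
# own. Illustrative of one audit step, not a substitute for a real scanner.
import socket

HOST = "192.0.2.10"          # documentation address; substitute your own host
PORTS = [22, 80, 443, 3389]  # the services your security policy cares about

def port_is_open(host, port, timeout=1.0):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "OPEN" if port_is_open(HOST, port) else "closed/filtered"
    print(f"{HOST}:{port} -> {state}")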
Ray Bryant CEO Idappcom www.idappcom.com
Let’s get real Reality in software testing and project management, says Milan Sengupta, Senior Software QA/Testing consultant, is important if results are to match expectations
I often find it challenging, in development and testing projects, to judge how 'real' we are in doing what we do. On several occasions timelines have been drawn, estimates done and promises made (both to internal stakeholders and customers) that a release would happen on 'so-and-so' date, all without adequate analysis of whether we have been realistic about what, and how much, was considered in the exercise. This crucial factor is often ignored, yet it is extremely important to the feasibility and do-ability of an effort.
It happened in one of the projects I was working on a couple of months back, which had been running for over a year and where the overall project completion deadline had been extended several times (the initial plan was for six months). The deadline was extended once more, for just two months, at which point I asked myself: if it didn't work for the last 16 months, why do we think it will work in the next two months? I am not comfortable simply believing that additional time will resolve a problem unless we have considered doing something different from what has been done in the past and we know that this new strategy should work. When this thought was voiced to the Project Management/
Leadership team, it was returned with a slightly philosophical note that we should 'stay positive and hope things will get better'. I am sorry; positivity and hope do help, but they alone can never save a project. What does save a project is meaningful action that has been adequately analysed, strategised, estimated and, essentially, thought through. One cannot simply keep extending timelines and expect that another two weeks or another six weeks will do the magic – unless the budget is unlimited. I do suggest (at a high level) that some of the following things will help in making things realistic and workable:
• Size: take stock of the current situation and, given the time remaining before the project release and the promises made to clients, understand how much has been completed and what is left to do.
• Prioritise: understand which requirements are most critical for the business and ensure they are addressed to the fullest, as matters of priority, without any compromise on quality. It is better to release something that works than everything that may, or most likely will, not work.
• Estimate/plan/strategise: estimate the effort and resources that would be required for the outstanding tasks to be completed, using realistic rules of thumb.
• Be creative: bring some creative thinking to the table. Try a new way of doing things as opposed to the approaches taken in the past that have not worked. Determine the pros and cons of the various options available. Review the project with
the same team that will deliver, and get their confidence. Oftentimes project team members find the timelines and expectations laughable because their views have not been accounted for and they know it can never be done.
• Review and learn from mistakes: never repeat the same mistakes. Learn from the past and review the decisions made this time with other teams and mentors who have been involved with similar challenges.
• Gut feel: never ignore your gut! Many a time the inner voice tells you how things will end up; if your gut says it will not work, think again – but be practical, not emotional, about it.
• Monitor: set daily targets. Monitoring progress every day works very well in the beginning to see where things are going. Ensure every resource is utilised to the fullest and determine who has the right skills for which jobs (a back-of-envelope sketch of this follows below).
• Issues/risks: do not ignore issues that are uncovered as you move through the project; ensure they are noted and resolved.
• No gossip: there is strictly no place for gossip in the team. Keep the team's spirits high and give them confidence with results achieved every day. Even the most cynical team member will start believing in the progress when results prove them wrong.
Finally, having done our best to find an innovative solution to reach our objectives, we can all be hopeful and positive that things will work this time, because it is not just time that went into the solution; there was also substance and meaningful action: what you might call reality.
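To make the sizing and monitoring advice concrete, here is a back-of-envelope sketch (all figures invented) of the question every extended deadline should answer: does the remaining work, divided by the time left, match the rate the team has actually demonstrated?

# Invented figures: compare the daily rate the plan implies with the rate
# recent monitoring shows the team actually achieving.
remaining_test_cases = 240   # sized: what is left to execute
working_days_left = 20       # the latest extended deadline
achieved_per_day = 9.5       # measured from daily progress monitoring

required_per_day = remaining_test_cases / working_days_left
print(f"Plan requires {required_per_day:.1f} cases/day; "
      f"the team delivers {achieved_per_day:.1f}.")

if achieved_per_day < required_per_day:
    extra_days = remaining_test_cases / achieved_per_day - working_days_left
    print(f"Unrealistic: at the current rate this needs about "
          f"{extra_days:.0f} more working days, or a different approach.")

If the two rates do not meet, hoping harder will not close the gap; re-scoping, adding skills or changing approach will.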
Milan Sengupta Senior Software QA/Testing consultant milansengupta@yahoo.com
Testing in ethical times What responsibilities, asks Angelina Samaroo, do testers have for ethics in the systems they test?
In the first article in this series, we started with risk-based testing. In this second instalment, we continue the story. However, the humour is all gone – for now. The news all week, on all media platforms, has concentrated on one issue: privacy. The ease with which our personal data can be hacked throws into debate the use of the words 'smart' followed by 'phone'. As the News of the World (NOTW) in the UK closes, perhaps it presents an opportunity to examine our part in the process.
Ease of use, criminals welcome?
In the first article we focused on our role in the risk reduction process, mainly from a product risk perspective. Let us now look at the bigger and much wider picture. Product risks can lead to project risks… financial risks... political risks… reputational risks...
and your day(s) in court(s). News reporting, it now seems, did not need this specific event to have its many days in many courts. We will now have another opportunity for unwanted edification: after the banks taught us 'quantitative easing', the volcano 'ash clouds', and the tsunami 'nuclear meltdown-while-you-watch', we will, in the coming weeks, learn all about 'blagging'. In the case of accessing voicemail, anyone who has my phone in their hand can access my voicemail – just dial 121 (if inactive, then a PIN is needed; so perhaps not that easy). As for hacking into it, I don't know how this is done, but it does not appear to be that difficult if your need is great. On both counts, perhaps we have compromised security too much in the march towards complete ease of use. So now we do not have complete peace of mind, neither as consumer nor as provider. Consider yourself as consumer; think about your day yesterday, from dawn to dusk. Perhaps you used your phone to ring home to check on problems your child is having at school; maybe you responded to several emails on issues at work, copying live data for testing; perhaps you texted your friend to catch up for lunch, then ordered a takeaway for the weekday night off from cooking. What if you did some banking on the train home? If security here has been compromised, then the banks will share the clean-up process. You are partners in solving the crime and putting things right. You will show that you took all safety precautions, and they will seek to put you back in your rightful financial position. It may take a while, and a few sleepless nights, but it should be all right in the end.
However, what if your phone calls and emails had been intercepted, stored and published? Things cannot be put right. You cannot be restored to your rightful position of privacy; all you can do is seek justice. Justice, however, is a term we use after the crime. With the banking scenario we do not usually talk about justice, just about getting our money back. Seeking justice comes rather late in the process.
Problems, people and processes
When, in the 1980s, Dr Barry Boehm conducted experiments in software development which showed that failing to solve problems upstream makes swimming a whole lot harder downstream, he probably didn't quite envision a torpedo being shot straight at you while you were already under – and from the inside, your sub being a tiny and clearly dispensable part of the whole – as happened with the NOTW (and yes, the cynics will know that we will surely soon have a Sun on Sunday). 'Solving problems upstream' means checking that requirements for the project have been adequately defined. The project will usually follow the traditional 'V' or the Agile software development lifecycle. For the former, this will generally involve creating a full set of requirements, followed by high and low level designs, each subject to internal checks (that they are complete and accurate) and external checks (that they meet the requirements laid down at the previous stages, and that the next stage of development can proceed from them). For the latter, this could be a set of user stories giving us the business needs as understood at the time. These of course will change, but the changes must be recorded and tracked as the project progresses.
In both cases, the focus of the theory is usually that we have correctly understood, captured and reviewed the business needs. In an ideal world, we would then have the perfect software build, test and project management processes. We would list all risks on the wall as well as on file and deal with them systematically. We would then be able to deliver a system that has the 'wow' factor – everyone's happy. As I said, no humour here; so what if the risk identification process missed the real-world risk that the system being built to fulfil the business need may encroach on the needs of the real (and little) people? I say 'little' because in the last week we have heard people at the highest levels of political power admit their rightful place amongst the hitherto unheard-of people in journalism.
Clear systems for fair dealings
For instance, many of us buy books online; the 1-click 'buy now' button was, for me, a great invention – really convenient. However, I had been busy for a while, so reading for leisure became a 'nice to have'. On my next visit I carried on as usual, 1-clicking merrily, knowing that I had two hours to change my mind. The trouble was, the rules had changed and I now had only 30 minutes. As I entered the site, there was no warning of what I would have thought was big news. I can of course cancel orders or return goods, so nothing went dramatically wrong, but suddenly the system was not as convenient for me.
And, whilst I'm at it, is it reasonable to have a flight advertised at £32 only to end up paying £160+? Should the requirement not be to display the minimum that must be paid to get body and belongings transported? What is the value of advertising a number which has no bearing on the mandatory payments – taxes, baggage fees and the now seemingly reasonable 'check-in' fees? Is it really a fare? 'Fare' surely means, in this context, 'transportation fee'.
Responsible testing
The theory tells us that we should consider all risks, write everything down so that it is traceable, and conduct ourselves in a professional manner. It does not say what to do in today's world when personal professionalism conflicts with business drivers; it does not say how to test in testing times; it does not define 'ethical testing'. The real world will almost certainly not support it unless there's a financial incentive. The green movement has long recognised that we're all glad to be green when it suits us and when there's money in that rubbish. Back in our privacy context, many testers are involved in systems involving the collection and use of personal data, often on a large scale. The front end testers may be focused on the usability of the overall system. The back end testers may be involved in the performance and security of databases and data centres. There is legislation surrounding the creation of databases: it provides ownership to those involved in obtaining, verifying or presenting the contents of a database. With
ownership come responsibilities. If, as testers, we have been involved in testing a database of ill-gotten data, could we be held liable for its subsequent use? Those in employment may be covered by their employer, but what about the consultants and contractors? I love going to the theatre, so I signed up to a couple of local theatres; now I get invited to shows in theatres hundreds of miles away. There is, of course, legislation to protect our privacy: we have Data Protection, Freedom of Information, Computer Misuse, and Privacy laws. Perhaps they could be rolled up into one – a Technology Misuse Act (TMA) – covering not just use of the end product, but its creation in the first place. Then all IT projects could start with this as the ultimate business requirement: the user needs the users need never know about. With communications as free and easy as they are now, we may have another unique selling point – 'never in the news'.
Angelina Samaroo Managing director Pinta Education www.pintaed.com
Network Performance Testing and WAN Acceleration Santosh Kumar, Jasleen Kaur Bhatia, Arupratan Santra and Venkata Durgempudi of Infosys offer a solution for network performance
As corporate networks expand, global network efficiency becomes crucial to the performance of a given production network. One way of dealing with this is to optimise the Wide Area Network (WAN) using a WAN accelerator. But a wide range of solutions is available, and it is a tough challenge to select the right kind of WAN accelerator with the optimum configuration. This paper describes an approach that can be used when selecting a suitable WAN accelerator for implementation. Geographical locations were simulated in the laboratory and network performance tests were executed for the respective locations; the tests were then run again for those locations with the WAN accelerator switched on alongside the WAN simulation.
Network performance testing
Application performance is most thoroughly checked when testing is performed in production-like conditions. Various factors such as latency, bandwidth, an overloaded network and dropped packets can all impact the performance of an application. The technique that can help in identifying application performance under both favourable and unfavourable conditions is to simulate the network environment. This type of environment can be simulated in a lab by means of a network emulator, which reproduces actual network conditions, including impairments that can be added to the network traffic.
WAN accelerators
For the purposes of this paper, we have considered an application deployed across the world. At each location, users are connected over a Local Area Network (LAN), and a WAN is used to connect the LANs of the various locations. The bandwidth and latency constraints imposed by the WAN will have a considerable effect on the responsiveness of the application and the behaviour of users; in the case of an online shopping system, for example, if users find that the application's response is very slow, the result will be abandoned shopping carts. Steps that can be taken to resolve these issues include increasing the bandwidth and employing Quality of Service (QoS) classes to prioritise the traffic. However, WAN acceleration offers an efficient route to increasing network efficiency. WAN accelerators improve performance in a number of ways:
• Increased network efficiency through QoS: traffic generated over the network is prioritised for transmission on the WAN, in which case some of the traffic may have to wait before it is transmitted. Some WAN accelerating solutions offer a QoS feature for traffic prioritisation.
• Caching: an efficient method of improving network efficiency. When a file is transferred over the WAN from one location to another, a copy of it is stored at the client-side WAN accelerator. When another request comes in for the same data, it can be served from the locally stored copy in the cache.
• Mirroring: a local copy of the database is maintained at every remote location. If any change is made to the database at one location, the change is replicated to the databases at all other locations.
• TCP optimisation: Transmission Control Protocol (TCP) maintains source-to-destination integrity by requiring frequent periodic acknowledgements from the destination, and uses a sliding window to avoid congestion; the greater the window size, the more data can be in flight (Figure 1). The WAN accelerator uses TCP optimisation techniques to handle the acknowledgements and improve performance: whenever a transmission is made by the source, the session is terminated at the local accelerator rather than at the destination, as shown in Figure 2, ensuring an almost instantaneous response rather than delayed acknowledgements. (A back-of-envelope calculation of why the window matters follows this list.)
Fig 1: Before TCP optimisation
Fig 2: After TCP optimisation
• Compression: identifying patterns in the streamed data forms the basis for compression, as shown in Figure 3. An algorithm is applied to the streamed data to remove redundancy.
Fig 3: Compression
• Forward error correction (FEC): some WAN accelerators use FEC, which allows lost packets to be recovered and reconstructed, reducing the need for retransmission of the data.
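The sliding window is why latency, rather than raw bandwidth, often caps WAN throughput, and hence why local acknowledgement helps. A back-of-envelope calculation (illustrative figures): without window scaling, a single TCP connection's throughput cannot exceed the window size divided by the round-trip time, however fat the link:

# Throughput ceiling of a single un-tuned TCP connection:
#   max throughput <= window_size / round_trip_time
window_bytes = 65_535   # classic 64 KB receive window, no window scaling
rtt_seconds = 0.200     # e.g. an intercontinental round trip

max_bps = window_bytes * 8 / rtt_seconds
print(f"Ceiling: about {max_bps / 1e6:.1f} Mbit/s")  # ~2.6 Mbit/s

At a 200 ms round trip the connection tops out at roughly 2.6 Mbit/s even on a gigabit link, which is precisely the gap that local acknowledgement, caching and compression set out to close.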
How to undertake network performance testing
This can be done by validating the application in a lab, where the real-world network is simulated through a network emulation tool. This network replication consists of:
• an emulated geographical region with the desired latency and bandwidth;
• the application under test;
• the actual network endpoint elements (servers, switches, routers, storage arrays, mobile devices, etc.).
To simulate a network location, it is imperative to have as much information as possible about the final production environment. Factors such as the distance involved in traversing the network, the amount of data sent across the network, the speed at which the data is sent, the bandwidth guaranteed, the maximum delay that can occur in the network and the data error rate are good starting points. The expected bandwidth usage of the application, the number of users at the geographical location and the amount of traffic generated by the application are also important.
In the first step, a baseline test is conducted on the LAN with no congestion, establishing the absolute application performance and failure thresholds. For the second step, the network simulator is configured to create the delay, packet loss and errors that the application traffic is likely to encounter while traversing the network. The same load test that was executed on the LAN is then executed under this network profile. Network delay and impairment settings are introduced in a step pattern in order to identify the failure point due to network congestion.
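As a hedged sketch of that second step, assuming a Linux load-generation host where the tc/netem queueing discipline is available (the interface name and the impairment steps below are placeholders, not values from the study):

# Step up delay and packet loss between load-test runs to find the failure
# point. Requires root on a Linux host; 'eth1' is an assumed interface.
import subprocess

IFACE = "eth1"

def set_impairment(delay_ms, loss_pct):
    # (Re)apply a netem profile: fixed one-way delay plus random loss.
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
                    "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
                   check=True)

for delay_ms, loss_pct in [(50, 0.1), (150, 0.5), (300, 1.0)]:
    set_impairment(delay_ms, loss_pct)
    print(f"Re-run the baseline load test at {delay_ms} ms / {loss_pct}% loss")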
Choosing a WAN accelerator solution
In the case we used to illustrate this, the initial rounds of test executions were carried out on the LAN, with network performance testing for 16 locations. For network performance testing, a physical load generator at the particular location was used; where locations had no physical load generator available, the bandwidth and latency were simulated in the lab and the test executions carried out there. Potential bottlenecks were identified and the application was tuned for performance issues. The next step was to evaluate the option of using a WAN accelerator. For this purpose, a set of WAN accelerators was chosen. First of all, baseline testing was conducted on the LAN as per the workload model and with the WAN accelerator. The first test was executed using the Riverbed WAN accelerator solution, the second with the Cisco WAN accelerator, the third with F5 and the fourth with Direct CE. Response times for the transactions were then compared for each of the WAN accelerator solutions, along with response times on the LAN.
Observations
A significant reduction of about 76 per cent was observed in response times when WAN accelerator solutions were utilised, as shown in Figure 4.
Fig 4: Response times with accelerator – 17 per cent improvement in response times; the Direct CE test also gave a very good improvement in response time; no impact on transactional response times from turning compression on and off.
Analysis
For further evaluation of the tool, another set of executions was carried out from three different geographical locations: with the WAN simulator alone, and with the WAN simulator plus WAN accelerator. A baseline test was conducted on the LAN and a comparative analysis was done between the tests on the LAN, with the WAN simulator, and with the WAN simulator plus WAN accelerator. Significant improvements in response times were observed, as shown in Figure 5.
Fig 5: Response times for Cisco WAN for different locations
Arupratan Santra Infosys Technologies Ltd. www.infosys.com
Seapine Software™
With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. Built on flexible architectures using open standards, Seapine Software’s cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.
TestTrack RM TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine’s integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.
TestTrack Pro TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.
TestTrack TCM TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.
QA Wizard Pro QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies like C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.
Surround SCM Surround SCM, Seapine’s cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.
www.seapine.com Phone: +44 (0) 208 899 6775 Email: salesuk@seapine.com United Kingdom, Ireland, and Benelux: Seapine Software Ltd. Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA UK Americas (Corporate Headquarters): Seapine Software, Inc. 5412 Courseview Drive, Suite 200, Mason, Ohio 45040 USA Phone: 513-754-1655
Facilita Load testing solutions that deliver results Facilita has created the Forecast™ product suite, which is used across multiple business sectors to performance test applications, websites and IT infrastructures of all sizes and complexity. With this class-leading testing software and unbeatable support and services, Facilita will help you ensure that your IT systems are reliable, scalable and tuned for optimal performance.
Forecast, the thinking tester's power tool A sound investment: A good load testing tool is one of the most important IT investments that an organisation can make. The risks and costs associated with inadequate testing are enormous. Load testing is challenging and without good tools and support will consume expensive resources and waste a great deal of effort. Forecast has been created to meet the challenges of load testing, now and in the future. The core of the product is tried and trusted and incorporates more than a decade of experience but is designed to evolve in step with advancing technology. Realistic load testing: Forecast tests the reliability, performance and scalability of IT systems by realistically simulating from one to many thousands of users executing a mix of business processes using individually configurable data. Comprehensive technology support: Forecast provides one of the widest ranges of protocol support of any load testing tool. 1. Forecast Web thoroughly tests web-based applications and web services, identifies system bottlenecks, improves application quality and optimises network and server infrastructures. Forecast Web supports a comprehensive and growing list of protocols, standards and data formats including HTTP/HTTPS, SOAP, XML, JSON and Ajax. 2. Forecast Java is a powerful and technically advanced solution for load testing Java applications. It targets any non-GUI client-side Java API with support for all Java remoting technologies including RMI, IIOP, CORBA and Web Services. 3. Forecast Citrix simulates multiple Citrix clients and validates the Citrix environment for scalability and reliability in addition to the performance of the hosted applications. This non-intrusive approach provides very accurate client performance measurements unlike server based solutions. 4. Forecast .NET simulates multiple concurrent users of applications with client-side .NET technology. 5. Forecast WinDriver is a unique solution for performance testing Windows applications that are impossible or uneconomic to test using other methods or where user experience timings are required. WinDriver automates the client user interface and can control from one to many hundreds of concurrent client instances or desktops.
6. Forecast can also target less mainstream technology, such as proprietary messaging protocols and systems using the OSI protocol stack. Powerful yet easy to use: Skilled testers love using Forecast because of the power and flexibility that it provides. Creating working tests is made easy with Forecast's script recording and generation features and the ability to compose complex test scenarios rapidly with a few mouse clicks. The powerful functionality of Forecast ensures that even the most challenging applications can be fully tested.
Supports Waterfall and Agile (and everything in between): Forecast has the features demanded by QA teams like automatic test script creation, test data management, real-time monitoring and comprehensive charting and reporting. Forecast is successfully deployed in Agile ‘Test Driven Development’ (TDD) environments and integrates with automated test (continuous build) infrastructures. The functionality of Forecast is fully programmable and test scripts are written in standard languages (Java, C#, C++ etc). Forecast provides the flexibility of open source alternatives along with comprehensive technical support and the features of a high-end enterprise commercial tool. Flexible licensing: Geographical freedom allows licenses to be moved within an organisation without additional costs. Temporary high concurrency licenses for ‘spike’ testing are available with a sensible pricing model. Licenses can be rented for short term projects with a ‘stop the clock’ agreement or purchased for perpetual use. Our philosophy is to provide value and to avoid hidden costs. For example, server monitoring and the analysis of server metrics are not separately chargeable items and a license for Web testing includes all supported Web protocols.
Services In addition to comprehensive support and training, Facilita offers mentoring where an experienced Facilita consultant will work closely with the test team either to ‘jump start’ a project or to cultivate advanced testing techniques. Even with Forecast’s outstanding script automation features, scripting is challenging for some applications. Facilita offers a direct scripting service to help clients overcome this problem. We can advise on all aspects of performance testing and carry out testing either by providing expert consultants or fully managed testing services.
Facilita Tel: +44 (0) 1260 298109 Email: enquiries@facilita.co.uk Web: www.facilita.com
Parasoft Improving productivity by delivering quality as a continuous process For over 20 years Parasoft has been studying how to efficiently create quality computer code. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Orientated Architectures (SOA), evolving legacy systems, or improving quality processes – draw on our expertise and award winning products to increase productivity and the quality of your business applications.
What we do Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency. Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components – as well as to reduce the complexity of testing in today's distributed, heterogeneous environments.
End-to-end testing: Continuously validate all critical aspects of complex transactions, which may extend through web interfaces, backend services, ESBs, databases, and everything in between.
Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly-dynamic browser-based applications.
Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).
Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.
Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.
Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.
Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.
Policy enforcement: Provide governance and policy validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers.
Application behavior virtualisation: Automatically emulate the behavior of services, then deploy them across multiple environments – streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data.
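As a rough, generic sketch of the idea – a hand-rolled stub built on the JDK's own com.sun.net.httpserver, not Parasoft's virtualisation product, with an invented port, path and payload – a few lines are enough to stand in for a backend service so that components depending on it can be tested in isolation:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Generic illustration of service emulation: a stub that answers like a
    // backend quote service. NOT Parasoft's product; port, path and payload
    // are invented for this example.
    public class QuoteServiceStub {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/quotes", exchange -> {
                byte[] body = "{\"symbol\":\"ABC\",\"price\":42.0}"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length); // canned response
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start(); // dependent components can now be tested against the stub
            System.out.println("Stub listening on http://localhost:8080/quotes");
        }
    }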
Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.
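To make the load-reuse idea concrete, here is a minimal, generic sketch – plain JDK concurrency, not Parasoft's load engine, and the user and iteration counts are arbitrary – in which an existing functional check is simply re-run by many concurrent 'virtual users' while failures are counted:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Generic sketch: drive an existing functional check with concurrent
    // "virtual users" and count failures under load.
    public class LoadSketch {

        // Stand-in for an existing end-to-end functional test.
        static boolean functionalCheck() {
            return true; // e.g. call the service and validate the response
        }

        public static void main(String[] args) throws InterruptedException {
            int virtualUsers = 50, iterationsPerUser = 20; // arbitrary values
            AtomicInteger failures = new AtomicInteger();
            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            for (int u = 0; u < virtualUsers; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < iterationsPerUser; i++) {
                        if (!functionalCheck()) failures.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.println("failures: " + failures.get() + " of "
                    + virtualUsers * iterationsPerUser + " runs");
        }
    }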
Please contact us to arrange either a one-to-one briefing session or a free evaluation.
Email: sales@parasoft-uk.com Tel: +44 (0) 208 263 6005 Web: www.parasoft.com
31 Media 31 Media is a business-to-business media company that publishes high quality magazines and organises dynamic events across various market sectors. As a young, vibrant, and forward-thinking company we are flexible, proactive, and responsive to our customers' needs. www.31media.co.uk
T.E.S.T Online Since its launch in 2008 T.E.S.T has rapidly established itself as the leading European magazine in the software testing market. T.E.S.T is a publication that aims to give a true reflection of the issues affecting the software testing market. In practice this means the content is challenging but informative, pragmatic yet inspirational, and includes, but is not limited to: in-depth thought leadership; customer case studies; news; cutting-edge opinion pieces; and best practice and strategy articles. The good news is that the T.E.S.T website, T.E.S.T Online, has had a root-and-branch overhaul and now contains a complete archive of previous issues as well as exclusive web-only content and testing and IT news.
At T.E.S.T our mission is to show the importance of software testing in modern business and capture the current state of the market for the reader. www.testmagazine.co.uk
VitAL Magazine VitAL is a journal for directors and senior managers who are concerned about the business issues surrounding the implementation of IT and the impact it has on their customers. Today senior management are starting to realise that implementing IT effectively has a positive impact on the internal and external customer and also influences profitability. VitAL magazine was launched to help ease the process. vital – inspiration for the modern business. www.vital-mag.net
VitAL Focus Groups VitAL Magazine, the authoritative, thought-provoking, and informative source of information on all issues related to IT service, IT delivery and IT implementation, is launching a specifically designed programme of Focus Groups that bring together senior decision makers for a series of well-thought-out debates, peer-to-peer networking, and supplier interaction. Held on the 21st June 2011 at the Park Inn Hotel, Heathrow, the VitAL Focus Groups promise to be a dynamic event that provides a solid platform for the most influential professionals in the IT industry to discuss and debate their issues, voice their opinions, swap & share advice, and source the latest products and services. For more information visit: www.vitalfocusgroups.com or contact Grant Farrell on +44 (0) 203 056 4598
31 Media Limited www.31media.co.uk info@31media.co.uk Media House, 16 Rippolson Road, London SE18 1NS United Kingdom Phone: +44 (0) 870 863 6930 Fax: +44 (0) 870 085 8837
Micro Focus Deliver better software, faster. Software quality that matches requirements and testing to business needs. Making sure that business software delivers precisely what is needed, when it is needed, is central to business success. Getting it right first time hinges on properly defined and managed requirements, the right testing, and well-managed change. Get these right and you can expect significant returns: costs are reduced, productivity increases, time to market is greatly improved and customer satisfaction soars. The Borland software quality solutions from Micro Focus help software development organizations develop and deliver better applications through closer alignment to business, improved quality and faster, stronger delivery processes – independent of language or platform. Combining Requirements Definition and Management, Testing and Software Change Management tools, Micro Focus offers an integrated software quality approach that is positioned in the leadership quadrant of Gartner Inc's Magic Quadrant. The Borland Solutions from Micro Focus are both platform and language agnostic – so whatever your preferred development environment, you can benefit from world-class tools to define and manage requirements, test your applications early in the lifecycle, and manage software configuration and change.
Requirements Defining and managing requirements is the bedrock for application development and enhancement. Micro Focus uniquely combines requirements definition, visualization, and management into a single '3-Dimensional' solution, giving managers, analysts and developers precise detail for engineering their software. By cutting ambiguity, the direction of development and QA teams is clear, strengthening business outcomes. For one company this delivered a return on investment within 6–8 months, a 20% increase in project success rates, a 30% increase in productivity and a 25% increase in asset re-use. Using Micro Focus tools to define and manage requirements helps your teams: • Collaborate, using pictures to build mindshare, drive a common vision and share responsibility with role-based review and simulations. • Reduce waste by finding and removing errors earlier in the lifecycle, eliminating ambiguity and streamlining communication. • Improve quality by taking the business need into account when defining the test plan. Caliber® is an enterprise software requirements
definition and management suite that facilitates collaboration, impact analysis and communication, enabling software teams to deliver key project milestones with greater speed and accuracy.
Software Change Management StarTeam® is a fully integrated, cost-effective software change and configuration management tool. Designed for both centralized and geographically distributed software development environments, it delivers: • A single source of key information for distributed teams • Streamlined collaboration through a unified view of code and change requests • Industry leading scalability combined with low total cost of ownership
Testing Automating the entire quality process, from inception through to software delivery, ensures that tests are planned early and stay synchronized with business goals even as requirements and realities change. Leaving quality assurance to the end of the lifecycle is expensive and wastes improvement opportunities. Micro Focus delivers a better approach: highly automated quality tooling built around visual interfaces and reusability. Tests can be run frequently, earlier in the development lifecycle, to catch and eliminate defects rapidly. From functional testing to cloud-based performance testing, Micro Focus tools help you spot and correct defects rapidly across the application portfolio, even for Web 2.0 applications. Micro Focus testing solutions help you: • Align testing with a clear, shared understanding of business goals, focusing test resources where they deliver most value • Increase control through greater visibility over all quality activities • Improve productivity by catching and driving out defects faster Silk is a comprehensive automated software quality management suite that enables users to rapidly create test automation, ensuring continuous validation of quality throughout the development lifecycle. Users can move away from manual-testing-dominated software lifecycles to ones where automated tests continually test software for quality and improve time to market.
Take testing to the cloud Users can test and diagnose Internet-facing applications under immense global peak loads on the cloud without having to manage complex infrastructures. Among other benefits, SilkPerformer® CloudBurst gives development and quality teams: • Simulation of peak demand loads through onsite and cloud-based resources for scalable, powerful and cost-effective peak load testing • Web 2.0 client emulation to test even today's rich internet applications effectively Micro Focus, a member of the FTSE 250, provides innovative software that enables companies to dramatically improve the business value of their enterprise applications. Micro Focus Enterprise Application Modernization, Testing and Management software enables customers' business applications to respond rapidly to market changes and embrace modern architectures with reduced cost and risk.
For more information, please visit www.microfocus.com/solutions/softwarequality
QualiSystems QualiSystems is an automation pioneer providing unparalleled Test Automation and Lab Management solutions that drive innovation, efficiency and ROI. Beyond Testing – TestShell Framework TestShell Framework is an extensive software suite offering a complete Test Automation and Lab Management solution for engineers, enabling them to increase testing coverage and improve product quality while reducing time, effort and equipment usage.
Network, System and Device Testing QualiSystems customers are using TestShell for the different types of tests required in different environments, including:
• Functional Testing
• Integration Testing
• QoS / QoE
• IP Performance
• End to End Testing
• Conformance Testing
• Load and Stress Testing
• Benchmark Testing
and more.
TestShell cuts costs and time while significantly enhancing product quality through the entire development, production and service cycle. Providing central management of the testing process and equipment in a single framework, TestShell offers:
• Complete test and lab management
• Simple creation of test scenarios directly by test engineers
• Vendor-agnostic integration with any equipment in the test environment
• Supreme usability and reusability of test assets
• Central control of all test stations, allowing parallel execution, locally and remotely, 24x7
• Across-the-board standardization for collaborative data collection and sharing
• Real-time view of test results and trends
• User-defined reporting and analysis dashboards
• Automated setup and provisioning, activated in a click
Optimizing testing with TestShell delivers:
• Shorter release time to market – with a smooth and efficient testing process, repeated efforts are eliminated and sharing is easily achieved
• Improved product quality – reaching higher testing coverage and ensuring accurate and thorough analysis
• Optimized resource utilization, tracking and sharing
• Reduced expenditures – optimizing the organization's human and physical resource utilization and reducing the cost of poor-quality product releases
TestShell has already proven itself as an industry-critical solution in Europe, North America, APAC and the Middle East, where it is used by market leaders from a wide spectrum of industries including network equipment manufacturers, telecom operators, electronics and flash memory device manufacturers, and enterprises dealing with business-critical networks.
Web: www.qualisystems.com Email: info@qualisystems.com Tel: +44 8456 808715 QualiSystems UK, Cattle Lane Farm, Cattle Lane, Abbotts Ann, Andover SP11 7DS United Kingdom
TechExcel TechExcel is the leader in unified Application Lifecycle Management as well as Support and Service solutions that bridge the divide between product development and service/support. This unification enables enterprises to focus on the strategic goals of product design, project planning, development and testing, while providing transparent visibility into all customer-facing initiatives. TechExcel has over 1,500 customers in 45 countries and maintains offices in the UK, US, China and Japan.
Application Lifecycle Management DevSuite is built around the best-practices insight that knowledge is central to any product development initiative. By eliminating the silos of knowledge that exist between different teams and in different locales, DevSuite helps enterprises transform their development processes, increasing efficiency and overall quality. DevSpec DevSpec is an integrated requirements
management solution that is specifically designed to provide visibility, traceability and validation of your product or project requirements. DevSpec provides a framework to create new requirements, specifications and features that can be linked to development and testing implementation projects.
DevPlan DevPlan is a project, resource, and task management tool. It allows users to plan high level areas of work, assign team members to work in these areas, and then track the tasks needed to complete the activities. DevTrack DevTrack is the leading project issue and defect tracking tool that is used by development teams of all sizes around the globe. Its configurable workflows allow DevTrack to meet the needs of any organisation's development processes. DevTest From test case creation, planning and
execution through defect submission and resolution, DevTest tracks and manages the complete quality lifecycle. DevTest combines the test management features of DevTest and DevTrack with TestLink integration for test automation in one integrated solution.
KnowledgeWise KnowledgeWise is the
knowledge management solution at the core of the entire suite. It is the centralised knowledge base for all company documents including: contracts, processes, planning information and other important records as well as customer-facing articles, FAQs, technical manuals and installation guides. More information at: www.techexcel.com/products/devsuite.
Service and Support Management Service and Support Management solutions provide enterprises with total visibility and actionable intelligence for all service desk, asset management and CRM business processes. ServiceWise ServiceWise is a customisable and comprehensive internal helpdesk solution that is ITSM- and ITIL-compliant. Automate and streamline
services and helpdesk activities with configurable workflows, process management, email notifications and a searchable knowledge base. The self-service portal includes online incident submission, status updates, online conversations and a knowledge base. ServiceWise includes modules such as incident management, problem escalation and analysis, change management and asset management.
CustomerWise CustomerWise is an integrated
CRM solution focused on customer service throughout the entire customer lifecycle. CustomerWise allows you to refine sales, customer service and support processes to increase cross-team communication and efficiency while reducing your overall costs. Combine sophisticated process automation, knowledge base management, workflow, and customer self-service to improve the business processes that translate into better customer relationships.
AssetWise AssetWise aids the process of
monitoring, controlling and accounting for assets throughout their lifecycle. A single, centralised location enables businesses to monitor all company IT assets, manage asset inventories, and track customer-owned assets.
FormWise FormWise is a web-based form management solution for ServiceWise and CustomerWise. Create fully customised online forms and integrate them directly with your workflow processes. Forms can even be routed automatically to the appropriate individuals for completion, approval, and processing, improving your team's efficiency. Web-based forms may be integrated into existing websites to improve customer interactions including customer profiling, surveys, product registration, feedback, and more. DownloadPlus DownloadPlus is an easy-to-use
website management application for monitoring file downloads and analysing website download activities. DownloadPlus does not require any programming or HTML. DownloadPlus provides controlled download management for all downloadable files, from software products and documentation, to marketing materials and multimedia files. More information at: www.techexcel.com/products/itsm/
Training Further your investment with TechExcel: effective training is essential to getting the most from an organisation's investment in products and people. We deliver professional instructor-led training courses on every aspect of the implementation and use of all TechExcel's software solutions, as well as both service management and industry training. We are also a Service Desk Institute accredited training partner and deliver their certification courses. More information at: www.techexcel.com/support/techexceluniversity/servicetraining.html
For more information, visit www.techexcel.com or call 0207 470 5650.
T-Plan T-Plan has supplied best-of-breed testing solutions since 1990. The T-Plan method and tools allow both the business unit manager and the IT manager to manage costs, reduce business risk and regulate the process. By providing order, structure and visibility throughout the development lifecycle, from planning to execution, T-Plan accelerates the time to market for business solutions. The T-Plan Product Suite allows you to manage every aspect of the Testing Process, providing a consistent and structured approach to testing at the project and corporate level.
What we do Test Management: The T-Plan Professional product is modular in design, clearly differentiating between the Analysis, Design, Management and Monitoring of the Test Assets. It helps answer questions such as:
• What coverage back to requirements has been achieved in our testing so far?
• What requirement successes have we achieved so far?
• Can I prove that the system is really tested?
• If we go live now, what are the associated Business Risks?
Test Automation: Cross-platform, Java-based test automation is also integrated into the test suite package via T-Plan Robot, creating a full testing solution. T-Plan Robot Enterprise is the most flexible and universal black-box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail.
• Platform independence (Java). T-Plan Robot runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix, Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE and Symbian.
• Test almost ANY system. As automation runs at the GUI level, via the use of VNC, the tool can automate any application – e.g. Java, C++/C#, .NET, HTML (web/browser), mobile and command-line interfaces – as well as applications usually considered impossible to automate, such as Flash/Flex (see the generic sketch below).
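To make the GUI-level approach concrete, here is a generic sketch using only the JDK's java.awt.Robot – deliberately not T-Plan's scripting API, and with invented screen coordinates – showing how a tool can drive an application purely through mouse, keyboard and screen capture, with no hooks into the application's code:

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.event.InputEvent;
    import java.awt.event.KeyEvent;
    import java.awt.image.BufferedImage;

    // Generic illustration of GUI-level automation: drive whatever is on
    // screen via mouse and keyboard, then capture the screen to verify the
    // result. NOT T-Plan's API; the coordinates are invented.
    public class GuiLevelSketch {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();

            robot.mouseMove(200, 150);                      // invented coordinates
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK); // click a control
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

            robot.keyPress(KeyEvent.VK_T);                  // type into it
            robot.keyRelease(KeyEvent.VK_T);

            // An image-based tool would now compare this capture against an
            // expected template to decide pass or fail.
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage actual = robot.createScreenCapture(screen);
            System.out.println("captured " + actual.getWidth() + "x" + actual.getHeight());
        }
    }

A VNC-based tool such as T-Plan Robot applies the same principle over the network, which is what lets it reach platforms that the JDK's Robot cannot.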
Incident Management: Errors or queries found during Test Execution can also be logged and tracked throughout the Testing Lifecycle in the T-Plan Incident Manager. “We wanted an integrated test management process; T-Plan was very flexible and excellent value for money.” – Francesca Kay, Test Manager, Virgin Mobile
the last word... What's in your toolbox? He's not being nosey but Dave Whalen has a few suggestions
When I was in the Air Force, one tool was in every technician's toolbox – safety wire pliers. When repairing aircraft, every nut, bolt, fastener and connection had to be wired to a fixed surface to prevent it from vibrating loose during flight. We called it ‘safety wiring’. Did we need the pliers? Not really. We were trained to do it by hand. But when you're hanging upside down in the cockpit of a jet fighter, sweat dripping in your eyes, trying to safety wire a component in place, they were nice to have. There were two reasons we carried safety wire pliers in our toolboxes. First, we had to safety wire stuff and they made the job easy. Second, they were a universal tool, or as close to one as you could find. They could cut and strip wires, break off stripped or broken fasteners, work as rudimentary clamps, etc. Since leaving the Air Force and becoming a dedicated software tester, I've searched for a similar universal software test tool: so far, without luck. Rather, I've found it important to have a well-stocked toolbox with many different tools. If you consider software testing a craft (and I do) then it's important to have a well-stocked toolbox. For software testers, our experience with various testing methodologies may be considered tools. Test process methodologies, test development methodologies, defect management methodologies, etc., are all tools.
Pick ‘n Mix Will we use every tool on every project? I doubt it. Some tools are better than others. Some tools aren't so good. Over time we tend to develop a comfort level, a relationship even, with our favourites. Since all tools have their good and bad points, a combination usually works best in any given situation. I have a favourite Test Management tool, Defect Management tool, etc. Some tools work best in very well-defined situations; others, not so much. Some tools will be great doing tasks that they were not originally designed to do – like safety wire pliers. Is there one perfect tool? No, I don't think so. The most valuable tool in my toolbox is Microsoft Excel. I can, and have, used it to develop an entire Test Management system. Excel, I've found, is the closest thing to safety wire pliers. The important thing is to have many tools, know their advantages and disadvantages, and know when to use them. The bigger and better stocked your toolbox, the more valuable you become. One of the most valuable tools you can stock in your toolbox is knowledge of various test processes or methodologies, such as Waterfall, Iterative, Agile, etc., and how to apply each. I've found that one tool alone is rarely enough to do the job. When I was a consultant, one of my favourite stages was the initial client meeting. The client would typically espouse how they had implemented the latest methodology, yada, yada, yada – typically Agile. That was usually the first bubble I burst – sometimes tactfully... Because I've been involved in software testing for a while, I've seen and used most of the current and some not-so-current methodologies. One of my responsibilities as a staff consultant was to evaluate and recommend tools, such as test methodologies, for client companies. You name it; I've probably used or evaluated it. I've also been witness to a number of tool implementations. What I've found consistently is:
• No one uses just one tool or methodology; most use a combination.
• Most formal methodologies are adopted in name only, rarely in practice.
Investigation typically revealed that most clients really didn't even use their chosen methodology. They tended to use parts of them. Just because you have a daily stand-up meeting, or do frequent, small releases, doesn't make you an ‘Agile’ shop. The best methodology is more like a well-stocked toolbox. You use the tool that works best for the task, adapted as needed. At the risk of offending the Context-Driven school – let's call these tools ‘best practices’. Best practices, as far as I'm concerned, are really nothing more than tools – and they work well in very well-defined situations. I find there is no one ‘best’ practice, but rather a number of them. Each has parts that work great, while others are not so great, depending on the situation. To be successful I typically take a little of this, and a little of that, until I find the combination that works best. So Agile, as a whole, may not be the best solution given the current situation, but there are parts of it that are wonderful. I'm going to use them. The same goes for Waterfall and Iterative. Some good parts and we'll use some of that. Throwing out a tool name is not good enough. You need to know what tool, or combination of tools, works best in a given situation and know how to use them. How does each tool work? What are the strengths and weaknesses of each? Can I customize the tool, or combine tools to fit the current situation? Lastly, you need to care for your tools. They get rusty if you don't pull them out and use them occasionally. Keep your tools sharp, keep them current, and always be on the lookout for new and better tools.
• Software test industry periodicals and websites are a great source of information.
• Try tools out.
• Build a network and ask others.
And ask yourself: what's in your toolbox?
Dave Whalen
President and senior software entomologist Whalen Technologies softwareentomologist.wordpress.com
Subscribe to TEST free!
[Previous issue covers: Volume 3, Issue 1 (February 2011); Issue 2 (April 2011) – Chris Livesey on the massive potential of testing; Issue 3 (June 2011) – Ari Takanen on the power of fuzzing]
For exclusive news, features, opinion, comment, directory, digital archive and much more visit
www.testmagazine.co.uk
Published by 31 Media Ltd Telephone: +44 (0) 870 863 6930 Facsimile: +44 (0) 870 085 8837
Email: info@31media.co.uk Website: www.31media.co.uk
INNOVATION FOR SOFTWARE QUALITY