TEST Magazine - December 2009-January 2010


In Touch With Technology

The European Software Tester
Volume 1: Issue 4: December 2009

I hate Agile! Dave Whalen takes on the cult of Agile. Inside: Data security | Testing virtual worlds | User acceptance testing. Visit T.E.S.T online at www.testmagazineonline.com



Seapine Software: Satisfy your quality obsession.

TestTrack® TCM – Test Case Management
TestTrack® Studio – Test Planning & Tracking
TestTrack® Pro – Issue Management
Surround SCM® – Configuration Management
Seapine CM® – Change Management
QA Wizard® Pro – Automated Testing

TestTrack TCM puts you in control of test case planning and tracking, providing better visibility over the testing effort and giving you more time to manage your team. With TestTrack TCM your team can write and manage thousands of test cases, select sets of tests to run against builds, and process the pass/fail results using your development workflow.

• Achieve complete traceability between test cases and defects with seamless TestTrack Pro integration.
• Ensure all steps are executed, and in the same order, for more consistent testing.
• Streamline the QA > Fix > Re-test cycle by pushing test failures immediately into the defect management workflow.
• Manage suites of platform-specific compliance tests, functional tests, and performance tests in one central location.
• Assign tests to your QA team, track results, and report on performance and workload.
• Know instantly which test cases have been executed, what your coverage is, and how much testing remains.
• Use test variants to target multiple platforms with the same test case for more efficient test case management.

Full-Time Quality Assurance Manager—Immediate Opening

Don’t work yourself to death. Use TestTrack® TCM to manage your testing effort.

www.seapine.com/testmag
© 2009 Seapine Software, Inc. All rights reserved.


Leader | 1

Season’s Greetings

Cover credit: Shawn Fury and Kathy Esetenson

Editor: Matthew Bailey, matthew.bailey@31media.co.uk, Tel: +44 (0)1293 934464
To advertise contact: Grant Farrell, grant.farrell@31media.co.uk, Tel: +44 (0)1293 934461
Production & Design: Dean Cook, dean.cook@31media.co.uk; Toni Barrington, toni.barrington@31media.co.uk
Editorial & Advertising Enquiries: 31 Media, Crawley Business Centre, Stephenson Way, Crawley, West Sussex, RH10 1TN
Tel: +44 (0) 870 863 6930; Fax: +44 (0) 870 085 8837
Email: info@31media.co.uk; Web: www.testmagazineonline.com

Well, it’s the end of the first decade of the 21st century and our first full year in operation, and it’s been an at times bumpy, but overall fruitful and interesting journey. This issue I decided to see what some of our main contacts in the industry made of the last year and how they see things panning out for testing in 2010. I think the results make interesting reading (see page 24).

One topic most of those quizzed alighted on was cloud computing. Estimates vary about the speed with which, and the extent to which, all our regular software applications and services will migrate to the cloud. Of course many of the applications and services we use every day are already there: all the social networking sites (another hot topic for testers), YouTube, Google Apps; the list goes on. And there will probably be stages – private cloud etc – and a one foot in, one foot out approach for some time to come, before we all switch over to dumb terminals connected to the Web, reminiscent of the ones some of the more mature amongst us may have used to access the mainframes of our youth. In my opinion the benefits are too great to ignore and the drawbacks easy enough to overcome.

In the summer I attended a small conference in the ‘Strangers’ Dining Room’ of the House of Commons, chaired by the environmental IT charity Global Action Plan (www.globalactionplan.org.uk), about making IT greener. It was targeted mainly at the various layers of bureaucracy, both local and national, in the UK, and the main aim was to show how IT could be made ‘greener’ in the public sector through various processes, technologies and practices. The elephant in the room was, I thought, the potentially massive savings which could be delivered by cloud services to the behemoth that is the public sector – along with the environmental benefits. Shift it all to the cloud and pay for it by the seat! Of course it’s all a little more complicated than that, especially when you’re dealing with public sector databases, many of which legally have to remain on-shore and under strict Government supervision. But if there’s one thing experience has proven to me it’s that where there’s a few quid to be saved, there’s usually a way; especially with the way the public finances are looking at the moment.

One thing is for certain: whether the software is delivered through the post on a disc or from the cloud, paid for by the seat, it still has to be tested. On that positive note, have a great Christmas break and a prosperous New Year.

Printed by Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA. © 2009 31 Media Limited. All rights reserved. T.E.S.T Magazine is edited, designed, and published by 31 Media Limited. No part of T.E.S.T Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available.

Opinions expressed in this journal do not necessarily reflect those of the editor or T.E.S.T Magazine or its publisher, 31 Media Limited. ISSN 2040-0160

Matt Bailey, Editor



SUBSCRIBE TO T.E.S.T

Simply visit www.testmagazine.co.uk/subscribe or email subscriptions@testmagazine.co.uk
*Please note that subscription rates vary depending on geographical location.

Published by 31 Media Ltd
Telephone: +44 (0) 870 863 6930; Facsimile: +44 (0) 870 085 8837
Email: info@31media.co.uk; Website: www.31media.co.uk


Contents | 3

CONTENTS: December 2009

1

Leader column

Editor Matt Bailey sees interesting shapes in the clouds.

4

Cover story – I hate Agile!

Testing heretic and software ‘entomologist’ Dave Whalen takes on the cult of Agile and says the evangelists are really starting to get to him.

8

Knowing when to stop
Mitigating risk and achieving quality faster by using optimal testing efforts and intelligent analysis, Stephen Sanjay Desmond Emmanuel and Sindhu Mahesh explain the benefits of point of stop testing (POST).

12

Ignore data security at your peril

Peter Mollins looks at organisations’ shaky defences during the testing process and what can be done to secure them.

15

When requirements go bad – Part II

In the second of his features on the crucial field of testing requirements, Kurt Bittner tackles specification and implementation.

20

The e-volution of testing security


To paraphrase Darwin, the strong survive and the weak perish. According to Rodrigo Marcos, events during 2009 in the IT industry suggest that similar rules may also apply in software development.

24

...and a happy New Year!

As we leave the first decade of the 21st century, T.E.S.T editor Matt Bailey picks the brains of some of the industry’s movers and shakers about developments in 2009 and what to expect in 2010.

28

Testing virtual worlds

Currently working on their second project together, Dominic Mason and Kerry Fraser Roberts share their insights into testing the complex, dynamic area of online virtual worlds.

32

Striking a balance

How best can organisations strike a balance between competitiveness and product accuracy? Simon Morris walks the tightrope between speed to market and quality.

35

Performance assured

Paul Caine discusses the dangers facing ecommerce businesses that fail to include performance testing as an integral part of their online store-front development life-cycle.

38


Get Smart

Is user acceptance testing (UAT) really your best testing option? Brian Hambling says it could be if you get Smart.

42

T.E.S.T Directory

48

The Last Word – Tom Millichamp
Will 2010 see a move away from proprietary testing tools and towards open source equivalents, or is the interest in open source tools just a short-term reaction to the global credit crunch?



4 | Test cover story

I hate Agile! Testing heretic Dave Whalen, president and senior software entomologist (“Bugs are my life!”) at Whalen Technologies, takes on the religious cult of Agile and says the evangelists are really starting to get to him.


OK, before you start the hate mail or roll the pillory out into the town square, maybe hate is too strong a word. Actually it is not the Agile (or Scrum, or XP) process that I hate, but the Agile practitioners are really starting to get to me. I now cringe whenever I see the word. I recently read a blog post where one of the high priests of Agile (I’m not going to name anyone specific... you know who you are) was ranting about how some newly-converted software teams are beginning to fall off the Agile wagon, back into old habits. He chose of course to publicly ridicule the backsliders as a group (thankfully, no specific names). Another Agile queen likes to refer to these transgressions as ‘mini waterfalls’. The problem with most of these born-again Agile cultists is that if you don’t strictly adhere to their rules or their definition of what Agile is, you are no longer worthy of their support, and are subject to public condemnation.



Sadly, Agile has become a religious cult and the practitioners have become like early American religious fanatics (you know, the ones kicked out of England for their fanatic beliefs). “Believe as we do or we shall excommunicate you from the flock!” Well then point me to the pillory – I confess – I’m a non-believer!

The meaning of Agile

Let’s take a quick look at the meaning of the word Agile from Webster’s: 1) Marked by ready ability to move with quick easy grace; 2) Having a quick resourceful and adaptable character. A couple of key terms: ‘quick’, and ‘adaptable’. No one familiar with any Agile process will deny that ‘quick’ is a goal of Agile. Quick equals cheap. Unfortunately many company leaders usually stop reading there. The ‘adaptable’ concept seems to have become lost somewhere. Just try to adapt and the Agile purists will roll out the pillory. If you research the early days of Agile, you will find that ‘adaptable’ was strongly emphasised. Somewhere along the line this fundamental concept was dropped and replaced by a strict adherence to the rules. But whose rules?

I first ran into this a few years ago as a new software test consultant. We were called into a client company to evaluate their development and testing processes and then make recommendations for improvement. The first question I always ask during the initial interview is “Do you follow any specific development process?” Invariably I will hear “We’re an Agile shop.” I then say, “OK, let’s look into that shall we? Do you do X, Y, or Z?” referring to basic Agile stuff like daily stand-ups, backlogs, etc. The answer I typically receive is something like: “Well, we do X. We sort of do Y. But we tried Z and it didn’t really work here, but we’re trying to change that.” After some investigation and a few interviews, I agreed - Z didn’t really work there. Instead, they had created a completely different idea based on Z that was actually an amazing process and worked very well.

I wouldn’t recommend it for everyone, but it worked extremely well for this company. The development manager caught me in the break room afterwards and asked me not to mention “the Z thing” in the formal report. He told me they were doing it behind leadership’s back because it wasn’t really ‘Agile’! We finished our assessment, wrote the report, and briefed leadership on our findings. What happened next completely shocked me. We were essentially accused of siding with the development teams. We were told in no uncertain terms that “we didn’t understand what it meant to be Agile”. Seriously?

Agile overload

Apparently, one of the senior leaders had read about Agile on a plane trip to California. Upon his return, he declared the company was now Agile and would adopt the ‘Agile Methodology’. A team of Agile consultants was hired to help implement Agile. They bought all the Agile posters and other propaganda and plastered the walls with it.



There were a handful of team members who had concerns about the ‘all or nothing’ approach to Agile as it was being recommended and dared to question the implementation. These people were actually strongly in favour of what they understood was the Agile process. They wanted to implement it with a couple of minor modifications (the same ones I recommended). Nope – all or nothing! Needless to say these naysayers were eventually excommunicated from the company. The moral of the story: they ignored our findings and recommendations. They hired another consulting company that basically told them what they wanted to hear. To my knowledge the company is still in business, but barely. They have had close to a 100 percent turnover in the software development department. Is this story unique to this company? I wish it was, but no. I saw something similar with every Agile client we visited. Without fail, most of these companies adopted Agile in the early days.

Then the Agile high-priests and priestesses began writing about companies falling off the wagon. Leaders got alarmed that they weren’t ‘doing it right’. They brought in a team of Agile consultants who confirmed their suspicions – they weren’t doing it right (according to the consultants’ definition of ‘right’ anyway). So they focused on doing it right with little success.

The Gumbo process

I wish there were one single, out-of-the-box development process that would work universally for everyone. There isn’t! There never will be! Personally, I like to follow what I call the Gumbo process. A little bit of this, a little bit of that, add what works, subtract what doesn’t – heat and stir and voila! – software (or as Eddie Izzard would say: “Hootcha, hootcha, hootcha... Software!”) Don’t get me wrong – Agile has some really good ideas, but so does Iterative, RUP, and believe it or not – Waterfall! That’s right – I said Waterfall – rack me! Hand me my scarlet letter – I will wear it proudly! Let the emailing and blog posting begin!

Dave Whalen
President and senior software entomologist
Whalen Technologies
Blog: http://softwareentomologist.wordpress.com/



8 | Testing techniques

Knowing when to stop Mitigating risk and achieving quality faster by using optimal testing efforts and intelligent analysis, Stephen Sanjay Desmond Emmanuel, delivery manager with Infosys’ Validation and Testing Practice, and Infosys test manager Sindhu Mahesh explain the benefits of point of stop testing (POST).

In most IT projects, Quality Assurance (QA) teams are looked upon as the go-to members of a cross-functional application development team when a predictable point to stop testing and move into production needs to be established. Often the mechanics behind this prediction lean towards intuition and general knowledge of the application, as opposed to formula-based discrete numbers. However, even to predict intuitively one needs a sound knowledge of the domain, past quality levels and defect semantics. Further, QA teams find themselves with their backs to the wall with inflexible schedules and budget constraints. So, if QA teams are to answer the question “What is the Point Of Stop in Testing (POST)?”, they need quantitative and scientific data that convinces not only them, but also the larger application development community.

STOP making sense

Let us consider the following example where the IT manager is articulating their requirements to a QA manager: “We are going to release the next build of the ‘Payments Processing System’ for our bank using .NET technology. The development team will have their first incremental build ready for testing in the next six weeks and we have



multiple iterations of release planned over the next year. There is no scheduled relief since this is to get us in compliance with financial regulation. How much time do you need for testing? When will you start and stop? How do we know we are ready to go to production and will you stop testing if we reach that point?”

Putting ourselves in the shoes of the QA manager, most of us would have chosen to answer the question posed by the IT manager in one or more of the following ways:
• Agree with the IT manager and commit to the least-impact path for testing.
• Stick our neck out and stand for quality no matter how long it takes to achieve it.
• Insist that 100 percent RTM coverage is non-negotiable and the rest of the activities can revolve around it.
• Allow the development team to borrow test cases and test their own code while we focus on the most important items and audit their testing.
• Just keep testing and reporting defects until the IT manager decides to throw this build into production.

However, for us to respond to the IT manager’s query with conviction, we need to have answers to the following question: “What is my desired quality level and when will we attain it?”

Let’s assume that to test the payments processing application, which is undergoing maintenance-type changes, we require 1,000 test cases. Let us also assume that these 1,000 test cases meet 100 percent RTM coverage with an unknown amount of room for improvement via optimisation.

Let us assume that there is evidence from past historical data that the system needs two iterations to achieve a pass/fail ratio of 100 percent. In addition, if each test case takes 30 minutes to execute, we would require 60,000 minutes to execute all of the test cases in two iterations, in no particular order. Based on the above data points, the imperative to stop testing for the application can be stated as, “execute all 1,000 test cases until a pass/fail ratio of 100 percent is achieved”. While this may be true from a requirements standpoint, it may not be the best practice to adopt. The actual POST (Point of Stop Testing) for the application can be determined as follows.

First, apply the rules of business criticality to the application’s test case suite. Any test case that aligns to business-critical imperatives like exposure to financial and legal implications, loss of brand and credibility, high frequency of usage and high value usage has to be categorised as ‘must have’. Others will have to be categorised as ‘nice to have’. For the above mentioned example, let’s assume that at the end of this exercise there are 700 test cases in the ‘must have’ bucket and 300 in the ‘nice to have’ bucket.

Now, let us consider the history of the application from a defect semantics standpoint. To this end, if in the lifetime of the application there were 4,000 defects logged and the application has undergone 20 test iterations, then the application’s defect discovery rate per test iteration will be 200. Similarly, the defect discovery rate per test case for the application will be 0.2 (200 defects per iteration spread across the 1,000 test cases).




Having determined the application’s ‘must have’ test cases and defect semantics, the imperative to stop testing can be restated as: execute the 700 ‘must have’ end-to-end test cases until a pass/fail ratio of 100 percent is achieved. Based on historical evidence, we already know 100 percent pass/fail of the 700 ‘must have’ test cases is achievable in two iterations, yielding a total of 280+ defects. This means that even if the remaining 300 non-business-critical test cases fail upon execution, the QA manager will still have enough confidence in the application to stop testing and give the green light for production to begin. Further, the total time required for executing the 700 ‘must have’ test cases will be 42,000 minutes or less, which is a 30 percent reduction over the previously estimated timeline of 60,000 minutes.
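To make the arithmetic concrete, the sketch below re-runs the worked example in Python. It is only an illustration of the reasoning, using the article’s hypothetical figures; it is not a published Infosys tool.

    # POST arithmetic from the worked example; all figures are hypothetical.
    total_cases = 1000        # full suite giving 100 percent RTM coverage
    must_have = 700           # business-critical ('must have') bucket
    minutes_per_case = 30
    iterations = 2            # historically needed for a 100 percent pass/fail ratio

    # Defect semantics from the application's history
    defects_logged = 4000
    past_iterations = 20
    rate_per_iteration = defects_logged / past_iterations   # 200 defects per iteration
    rate_per_case = rate_per_iteration / total_cases        # 0.2 defects per test case

    expected_defects = must_have * iterations * rate_per_case   # about 280 defects
    full_effort = total_cases * minutes_per_case * iterations   # 60,000 minutes
    post_effort = must_have * minutes_per_case * iterations     # 42,000 minutes
    saving = 1 - post_effort / full_effort                      # 0.30, ie 30 percent

    print(f"Expected defects over {iterations} iterations: {expected_defects:.0f}")
    print(f"POST effort: {post_effort} of {full_effort} minutes ({saving:.0%} saved)")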

Following orders

To further strengthen the process, it is essential to ensure that we adopt the right sequence (order) for test case execution. Traditionally we have seen that QA teams often run the easiest test cases first and leave the most complex and important ones for last, or execute test cases in a random fashion with the aim of providing 100 percent coverage. Both approaches end up in a situation where defects with high development costs are not revealed until later and testing stops only after 100 percent test case coverage is achieved, delaying the move to production and adding IT expenditure. With the POST approach, we are able to attain 100 percent pass/fail status with only 70 percent coverage of all the mentioned test cases.

For IT managers, this means early feedback on the system, allowing developers to concentrate first on the features that have to hit the market immediately and then on the less important ones. For both QA and IT managers it means considerable money and time savings without compromising the desired quality for stamping the product successful. Circling back to the genesis of testing, the process has always been evolving and so have the hard and fast rules governing it. So, as applications become more and more complex, and testing moves from ‘test to break’ towards ‘test to release’, QA managers and teams have had to adjust to this dynamic environment.

Tried and tested

Identifying POST, through scientific and proven methods, is an attempt to help QA managers successfully address the needs of this ever-changing environment. POST can be adapted to draw the thin line between testing for go-live (ie, what to test and how much to test) and testing endlessly for 100 percent coverage. By setting dials like business criticality and past testing track record, POST helps QA managers attain higher confidence levels in application quality in shorter time periods. The method could also be used by QA managers to calibrate their test plans, Quality of Systems (QoS) activities and the resultant schedules for optimal performance. Also, considering that QA is one of the last stops before production, POST can assist with being more accurate with any predictive models that dictate the controlling measures within the QA widget station.

Stephen Sanjay Desmond Emmanuel
Delivery manager, Infosys Validation and Testing Practice
www.infosys.com

Sindhu Mahesh
Test manager, Infosys Validation and Testing Practice
www.infosys.com


The Whole Story: Print | Digital | Online

For exclusive news, features, opinion, comment, directory, digital archive and much more visit www.testmagazineonline.com


12 | Test security

Ignore data security at your peril Peter Mollins of Micro Focus looks at organisations’ shaky defences during the testing process and what can be done to secure them.

When HM Revenue & Customs revealed in November 2007 that it had lost 25 million records, the profile of data breaches exploded. Since that day, we have seen countless organisations, both in the public and private sector, losing sensitive data, whether through leaving a laptop on a train or through being subject to an external hack. With breaches still regularly occurring today, it seems clear that many organisations still refuse to take the necessary precautions to secure their data, despite the fact there are plenty of solutions available to enable this. The failings of HMRC certainly saw the topic of security moving higher up both CIOs’ and CEOs’ agendas and there was an increased push to secure customer data. However, has enough been done to ensure this information remains secure? Is an extra firewall or a new layer of encryption technology enough to ensure that another organisation does not suffer similar embarrassment to its customers? It is clear that the answer is no.

Business agility



Today’s unprecedented rate of change requires business agility and faster time-to-market, whether it involves introducing new products and services or responding to mergers and acquisitions. The only effective way to ensure IT systems are still operating at full capacity after any change to the business is through application testing using realistic data. A recent survey indicates that the root cause of many data breaches is the use of live data in testing and development. The survey, conducted amongst 1,350 IT practitioners in companies with revenues from $10 million to over $20 billion, looked at data security trends in testing and development. Two thirds of all respondents experience change on a weekly basis, with a further quarter declaring this takes place at least monthly. To be absolutely sure that IT systems are fully functional in production, the vast majority of surveyed organisations use live production data, such as customer records, employee records, credit cards and other business confidential information, in the testing process. This may raise a few eyebrows, but as long as the right security techniques are in place, organisations have nothing to worry about. They will be very aware of the risks of data breaches, due to their high exposure in the press, so surely they do not want to fall foul of one, right? Wrong. The survey went on to reveal that over two thirds (70 percent) of companies do not have the measures in place to mask this live data during development and testing. This alarming statistic is made even more staggering by the fact that over three quarters (79 percent) of all organisations have experienced a data breach in the last 12 months. Despite having their fingers burned once already this year, they are still putting their customers and their own information and reputation at stake by leaving themselves liable to another breach, for the majority, on a weekly basis. The risk is intensified by the unmanageable sizes of data being tested. Three-quarters of respondents confirmed they use test data files that are larger than one terabyte, with some testing more than 50 terabytes of test data. To give an example of the potential cost that could be incurred by a data breach, a recent study by the Ponemon Institute revealed that each record that is lost or stolen costs an organisation an average of $202. In today’s economic climate, this is a penalty no business can afford to experience.

Mitigating the risk

So the question that needs to be asked is: how can organisations mitigate this risk and guarantee their data are watertight during development and testing? To guarantee secure and realistic testing, businesses should implement an automated and repeatable test data management process.


First, realistic testing requires realistic data – so, they must begin by accessing relational and hierarchical databases and other data stores from the mainframe and distributed systems. Next, this test data should be subsetted, both to make it more manageable and to reduce data storage and test execution costs. Following this stage, organisations must ensure that this process conceals private data within test data sets to adhere to data privacy regulations and eliminate the risk of data breaches. With a better test data management process, companies can accelerate and lower the cost of testing high-quality applications. At the same time, they avoid the loss of goodwill, costly penalties, and regulatory non-compliance stemming from data breaches.
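As a rough illustration of the subset-then-mask step, here is a minimal Python sketch over an in-memory list of records. The field names and masking scheme are assumptions made for the example; real test data management tools work directly against production databases and preserve referential integrity across tables.

    # A minimal subset-then-mask sketch; field names are hypothetical.
    import hashlib
    import random

    def subset(records, fraction=0.1, seed=42):
        """Take a repeatable sample to shrink the test data volume."""
        rng = random.Random(seed)
        return rng.sample(records, max(1, int(len(records) * fraction)))

    def mask(record, sensitive=("name", "card_number")):
        """Replace sensitive fields with deterministic surrogate values."""
        out = dict(record)
        for field in sensitive:
            if field in out:
                digest = hashlib.sha256(str(out[field]).encode()).hexdigest()[:8]
                out[field] = f"MASKED-{digest}"
        return out

    production = [{"id": i, "name": f"Customer {i}", "card_number": 4000000000000000 + i}
                  for i in range(1000)]
    test_data = [mask(r) for r in subset(production)]   # 100 masked records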

Data masking

In March 2009, Joseph Feiman, research VP and Gartner Fellow, confirmed the capability of data masking, stating: “Data masking raises enterprises’ security and privacy assurance against insiders’ abuses and helps enterprises to be compliant with data-centric regulations. It is an integral part of software life cycle (SLC) processes.”

As businesses strive to achieve organic growth in the toughest recession for 100 years, they need to guarantee they are looking after the information they already possess. Testing is always going to be an integral part of a company’s development – the study above showed just how often this process has to take place. CIOs need to establish a firm data protection strategy for the production environment as well as for the use of live data in testing and application development. And the assessment and implementation of these masking and subsetting techniques need to be an integral part of this. Cutting corners leaves organisations everywhere vulnerable to a major data leak, an event that could cause irreparable damage to a company’s database and reputation.

Peter Mollins
Director of product marketing
Micro Focus
www.microfocus.com



Testing requirements | 15

When requirements go bad Part II In the second of his features on the crucial field of testing requirements, Kurt Bittner, chief technology officer at Ivar Jacobson International, tackles specification and implementation.

Errors of specification, or errors arising from the way a requirement is described, are the most common types of requirement errors. For the purposes of discussion, it’s useful to break these errors into several categories:
– Under-specification, resulting from incomplete or vague descriptions;
– Incorrect specifications;
– Inappropriate techniques used for description;
– Over-specification.
Each of these errors has different root causes that require different avoidance strategies.

Under-specification

Under-specification is the most frequent type of specification error, and it takes a variety of forms. For people employing scenario-based requirements description approaches such as use-case modelling, the most typical error is to omit important alternative flows. Alternative flows describe error handling and alternate behaviour paths; failing to identify important alternative flows results in the system failing to handle important exceptions or error conditions, or failing to offer behaviour important to the stakeholders of the solution. If an alternative flow is not described, you have to assume that the system does not support that behaviour. Careful attention must therefore be paid to missing functionality in reviews and walk-throughs. Also common is what can only be described as ‘lazy description’: pointing to an example and expecting the reader to use their imagination to fill in the details. These descriptions often look like the following:

“The system must support the translation of foreign currency transactions, eg US Dollars to Euros.” The offending phrase is the trailing “eg US Dollars to Euros”. The problems here are two-fold: firstly the algorithms used to translate currency need to be specified, and secondly, the specific currencies to be handled must be enumerated. The reason for the first is obvious - the specific steps required to perform a currency translation need to be made clear; one cannot assume that a software developer will know how to handle the calculation. The reason for the second is that specific work will need to be done to make sure that each specific currency of interest can be handled; if the currencies are not specified then you cannot be sure that the one that you want will be supported.

As with the case of missing alternative flows, missing details can be exposed by asserting that unless something is specified it will not be delivered. There are many ways to specify behaviour (some of which I will discuss later when talking about problems related to using the wrong technique for the specification), but if something is not specified in some way you have to assume that it will not be delivered, at least in the way that you want it delivered. Tools that can help you to visualise the flows can help you to see what is missing.

Only somewhat less frequent is the missing definition of common terms. Some of this occurs where a term is defined in one context (such as ‘in-line’ in a use case) but not in a common place where it can be referenced from multiple places. Keeping a glossary of common terms can help here, but you should also link from the places where the term is used to its definition in the glossary in order for this to be useful.


Linking does not require special tooling other than the ability to embed hyperlinks or URLs in the descriptive text.

A final aspect of under-specification is vague description. In writing requirements, precision is required, and this requires precision in the language used. I have talked to people about tooling that can analyse text and identify ambiguous descriptions, but my view is that ambiguity is in the eye of the beholder - what you really want to ensure is that everyone has the same understanding, and you only really know whether you have the same understanding when you have active discussions about the requirements. We need to banish the practice of writing requirements and then ‘throwing them over the wall’ to developers or testers in favour of a more open, communicative approach. In this view, requirements are vehicles that initiate and capture discussions, but it is the discussion that matters most.


Incorrect specification

Sometimes the specification is complete but wrong. Often the errors are not obvious, and it can take some careful review to uncover them. Just as a misplaced semi-colon can completely change the meaning of a statement in a computer program, a mistake in the way a requirement is described can result in a lot of wasted effort. A common technique in safety-critical software development is to perform code reviews; a similar technique can be applied to requirements.

In both cases the effort is labour-intensive and tedious, but it is often the only way to avoid errors. A common problem is that most teams barely have enough time to specify requirements in the first place, let alone review them with a high degree of scrutiny. A strategy for working within the usual constraints is to not document and review all of your requirements, but focus only on the most important ones (the ‘must do’ requirements). To the purist this is going to sound like heresy, but here is the rationale: if you really can’t review all of the requirements for correctness, it is better to review some of the requirements than none, and it makes sense to focus on the most important requirements. Most people will agree with that. If you have time left you can move on to the ‘should do’ requirements, but if you can’t afford the time to review the requirements, it may not be worth the time to write them down. Instead, stay focused on the most important requirements with the idea that if the team is really that constrained, the ‘should’ requirements will probably not get implemented anyway, at least not in the immediate release. In other words, start managing scope early and only describe the requirements that you know are going to get implemented. The keys to uncovering incorrect specifications are walk-throughs and discussions.



I find reading most requirements specifications to be mind-numbingly dull, and I expect that I am not alone. Subject matter experts from the sponsoring business organisation are probably even less receptive to reviewing long specifications. In today’s world, most people have the attention span of a gnat (present company excluded, of course), so sending around requirement specification documents for review is usually too passive an approach to get good results. Some different strategies are needed. A successful approach is to use storyboards (informal sketches of the user interface, but not full-fidelity prototypes) to walk through use case scenarios, highlighting specific areas for discussion. Walking through scenarios in this way will be more engaging for everyone, but I raise a couple of cautions:
– Don’t use high-fidelity prototypes that look and behave like the real application. This is based on practical experience: you want to get rapid, early feedback, and if you spend too much time developing a high-fidelity prototype, you will delay getting feedback unnecessarily. In addition, if the prototype looks ‘too good’ both you and the stakeholder may be reluctant to make changes. Rough sketches, like those used in the animation industry to sketch out the plot line for a film, are sufficient for having useful discussions about the desired behaviour.
– Don’t wait to get feedback. Start the feedback sessions early, and have a lot of them.

A number of short, informal ten-minute sessions are better than one huge two-hour one. You can have the ten-minute sessions very frequently and much sooner than the two-hour ones, and the quality of feedback will be better. Your goal should be to have continuous (or nearly continuous) feedback.

Over-specification

Sometimes the specification is overdone, inappropriate or excessive. I once encountered a project that had 18,000 requirements for a customer service system. No one could comprehend the specification, and unsurprisingly the system had failed to be built - not just once but several times. Usually this over-specification begins with good intentions, often as an outgrowth of prior failures from under-specification. Over-specification brings its own problems, the foremost of which is that it usually presents as requirements things that are not really ‘required’ - dictating things that should be left to the creativity of the developer. Over-specifications tend to treat symptoms (and lots of them) without getting to the root causes of the problems the system needs to solve. Over-specification often occurs in the user interface, where users sometimes have grand ideas about how things should look and feel, and they are often not shy about sharing these ideas. User interface design is a very challenging thing, however, and

the best solutions are usually simple and intuitive, utilising effective metaphors and organisation of information. Most users are not very experienced with this – they don’t know what they want, but they know what they don’t want when they see it. As a result, it’s best to employ people with deep experience in human factors and user interface design and engage them as part of a team that includes the users and developers. The team can then work through the best way to represent information, the best way to provide the most natural flow of information and function, and iteratively come up with a design for the user interface. This process usually starts with storyboards and outlines for the use cases, and then evolves the two in parallel, using the storyboards, and later, prototypes, to describe the look and feel of the system, and using use cases to describe the flow of the system. People working with use cases often fall into the habit of putting details about the UI into their use case descriptions. The practice initially seems harmless, but usually becomes unwieldy and impractical. Prototypes and storyboards are usually better tools for describing the UI, but they lack the facility to describe flows of behaviour effectively. Use case descriptions (usually in text but also in flow diagrams) lack the facilities to capture the dynamic nature of the UI. Keeping the two things separate but related is usually the best approach:



keep the use case description to capture the flow of events and user context, but use storyboards or prototyping tools to capture the UI, with the use case providing a kind of script for the enactment of the storyboard.

Over-specification can also rear its head in the form of describing implementation details. Consider this fragment from a use case: “... the user enters their password. The system then goes to the User Authentication System and retrieves the stored password using the user ID. The system then validates the entered password...” What’s wrong with this? The author has over-specified details that don’t need to be specified. First, there may or may not be a User Authentication System, and whether there is or is not should not matter from the perspective of the use case. The user’s ID and password need to get validated, but it really is up to the developer to decide how to do this. Instead, the use case fragment should read: “... the user enters their password. The system then attempts to match the entered password with the password on record for the entered user ID. If there is a match, the user is granted access ...” If there actually is a User Authentication System and the developer must use this to validate the password, this can be stated in a constraint requirement: “Constraint: The User Authentication System is used to validate all user information.” The benefit of expressing the constraint in this way, rather than in the use case itself, is that if the authentication mechanism changes you only have to change it in one place rather than in all the places where user information might be authenticated. The use of the User Authentication System will be of great importance when designing and implementing the solution, but the requirements don’t have to describe how this will be done. It is sufficient to state that it needs to be done and leave it at that.

Other areas of over-specification, or more precisely misplaced specification, occur when there is a need to describe the information that is captured or manipulated in the use case. Putting the information “in-line”, in the text of the use case, usually starts innocently enough but quickly becomes burdensome. Consider this fragment: “... The user then enters the customer’s first name, their last name, their street address, their city, their state, their home phone number...” Let’s call all this information related to the customer the customer information. Rather than repeating each piece of information every time we need to reference it, we can create a glossary term called customer information and use it every place where we might have need to refer to it. This saves a lot of typing and allows us to change the information in one place rather than in lots of different places. Taking this one step further, there are sometimes relationships between information. For example, if we are an insurance company a customer may have lots of different bits of information, such as billing addresses, insured property addresses, coverages for the insured properties, names and addresses for beneficiaries, and so forth. Keeping all this information in a glossary is possible but it is hard to visualise the relationships between the different types of information. What we really need is what I call a domain model (or a model of the concepts used in the problem domain), though different terms like business entity model can be used as well. What you call it is not important – the important thing is to have a way to describe the information and the concepts that are embedded in the requirements. By having a richer set of conceptual tools for describing the requirements, including use cases, prototypes or storyboards, and glossaries and domain models, the requirements actually become simpler, easier to understand and easier to manage.
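As a sketch of what such a domain model might look like for the insurance example, the Python dataclasses below make the relationships between the types of information explicit. The field names are illustrative assumptions, not taken from any real specification.

    # A hypothetical domain model sketch for the insurance example.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Address:
        street: str
        city: str
        state: str

    @dataclass
    class Coverage:
        kind: str          # eg 'fire' or 'flood'
        limit: float

    @dataclass
    class InsuredProperty:
        address: Address
        coverages: List[Coverage] = field(default_factory=list)

    @dataclass
    class Customer:
        first_name: str
        last_name: str
        billing_address: Address
        properties: List[InsuredProperty] = field(default_factory=list)
        beneficiaries: List[str] = field(default_factory=list)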

Errors of implementation, or, the importance of testing


No discussion of requirements errors could be considered complete without discussing implementation errors: those cases where the concept was right, the specification was right, but the requirement was not implemented correctly, or possibly at all. It is easy to say that this is not a requirements problem but rather a project management problem or a testing problem, but I view such distinctions as artificial and ultimately not very useful. There is no point in writing requirements if you are not going to implement them, and the only way to know if they are implemented correctly is to test them. The functional areas of project management and testing are inextricably linked with requirements. Let me restate something I just said a little differently: every requirement should have one or more associated tests that verify that the requirement was implemented correctly. If you’re not going to test it, don’t bother with writing the requirement because you’ll have no way to know if it ever got done. It is possible to combine test cases and requirements (as some approaches do) by writing all requirements in the form of a test case, saving the ‘overhead’ of a separate requirement. I think this works fairly well in some cases but not so well in others. Consider the case where we want to make sure that the system is able to support 1,000 concurrent users. We could, with some work, define a set of tests that simulate the load created by 1,000 concurrent users, but the tests are likely to have a lot of implementation details that obscure the actual requirement. This makes it hard to review the requirements with people who can tell you whether 1,000 users is the right number or not. As a result I generally prefer to have a separate statement of what we’re trying to achieve and let the test cases focus on how to measure whether the requirement was met. Failure to test requirements usually arises from a resource problem – there are not enough people to test. This in turn usually arises out of what I call the

‘dysfunctional functional organisation’ mindset that puts people into narrow functional silos. Developers need to test, but so do analysts and users, as well as people who might have ‘tester’ somewhere in their role description. There are specialised skills needed to deliver successful results, but there is no reason for rigid barriers between team members. Just as there is no reason to write requirements for things you’re not going to test, there is no reason to develop things that you are not going to test. If you’re getting behind on testing, someone is not pitching in and is potentially doing valueless work. Assuming that you have tests for every requirement, figuring out what did or did not get implemented becomes much simpler – either the test ran and it passed, or it ran and it failed. Failure may be because the requirement was not implemented, or because it was not implemented correctly. Either way, the desired outcome behind the requirement is not being achieved. The availability of test data to support whether a requirement was implemented brings me to a final point on the implementation of requirements. I sometimes work with project teams who have an overall mandate that every requirement must be traced down to code, usually to ensure that a particular requirement was implemented. I find this practice to be costly to the point of impracticality, and ultimately ineffective. Many requirements, such as the ‘support 1,000 concurrent users’ requirement, are satisfied by architectural considerations that do not have a single locus in the code – they emerge from the overall capability of the system. Requiring that these kinds of requirements be traced down to code is pointless. In addition, just because the developer says that a particular section of code was designed to satisfy the requirement does not mean that it does. As the old saying goes, “the proof of the pudding is in the eating,” meaning that only actual experience will tell you if it is good. Only testing under real-life conditions will tell you whether the requirement is really met.
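One lightweight way to keep the requirement-to-test link visible is to tag each test with the requirement it verifies. The sketch below does this with pytest’s custom markers (which should be registered in pytest.ini to silence warnings); the authenticate() function and the requirement ID are hypothetical stand-ins, not from any real project.

    # Hypothetical requirement-to-test traceability using pytest markers.
    import pytest

    USERS = {"alice": "correct-password"}   # stand-in for the password store

    def authenticate(user_id, password):
        """Grant access only when the password matches the one on record."""
        return USERS.get(user_id) == password

    @pytest.mark.requirement("REQ-001")   # 'matching credentials grant access'
    def test_valid_login_grants_access():
        assert authenticate("alice", "correct-password")

    @pytest.mark.requirement("REQ-001")
    def test_wrong_password_is_rejected():
        assert not authenticate("alice", "wrong-password")

Collecting the marker values at report time then gives a pass/fail status per requirement ID, without tracing anything down to individual lines of code.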

Conclusion


Errors in the specification of requirements can take several forms – requirements can contain too little information, leaving them vague and incomplete. Requirements can also be over-specified, leaving them full of irrelevant and extraneous detail that makes the important parts hard to see. Striking the right balance often relies on knowing what details must be conveyed, and which details can be left to the developer to decide for themselves. Making the right choices usually involves having open and meaningful conversations between team members to agree on the appropriate level of detail for the project at hand. Using appropriate techniques for different aspects of the description can help to simplify the description and yet make it more complete. Using storyboarding and UI prototyping tools can supplement the requirements specification with additional essential information, and domain models and glossaries can augment the requirements specification with important information in a form that is easier to understand and manage. Once the appropriate level of description is agreed it becomes easier to spot remaining requirements problems: cases where the specification is simply wrong. These usually arise from misunderstanding or miscommunication, but they are easily remedied through frequent and open feedback between members of the extended team (including subject matter experts from the line of business). Finally, requirements need to be tested, and testing is the only reliable way to know if a requirement was actually satisfied. Every requirement should have one or more tests, and no tests should exist that do not derive from requirements (unless you have made the explicit decision that the requirement and the test are one and the same thing).

Kurt Bittner
Chief technology officer, Ivar Jacobson International
www.ivarjacobson.com


20 | Test security

The e-volution of testing security Darwin's On the Origin of Species introduced evolution by natural selection, where the strong survive and the weak die out. According to Rodrigo Marcos, principal consultant at Secforce, events during 2009 in the IT industry suggest that similar rules may also apply in software development.

In the Darwinian analogy each programming language is a species, development projects are individuals and the threats are the loss of confidentiality, integrity or availability. There is one species which is quickly becoming dominant and famous for its merciless attacks: the botnet. Botnets usually exploit client-side vulnerabilities affecting popular web browsers to execute code on the victims’ operating systems. The victims will in turn be compromised and will become part of the botnet. In recent years we have experienced an interesting twist on this situation and seen some botnets focusing not only on clients but also on servers, compromising web applications using simple and generic SQL injection attacks. The purpose of these server-side attacks is to further propagate malicious content, which makes the botnet grow. Only the strong survive. Compromising bespoke applications using generic attacks

is something that proves how immature software development still is. Many companies run penetration tests just to ‘tick the box’ and do not implement security early in the software development life cycle. These poor practices create weak individuals. However, there is a natural selection of software driven by the market, and those individuals are likely to disappear.
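To see why one generic payload can compromise many bespoke applications, consider this minimal sketch using Python’s built-in sqlite3 module; the table and input are contrived for illustration. Any application that concatenates user input into SQL falls to the same classic payload, while a bound parameter defeats it.

    # Classic SQL injection versus a bound parameter, using sqlite3.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

    payload = "x' OR '1'='1"   # generic attacker-supplied value

    # Vulnerable: concatenation lets the input rewrite the query itself
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()
    print(len(rows))   # 2 -- the OR clause matched every row

    # Safe: a bound parameter is treated as data, never as SQL
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
    print(len(rows))   # 0 -- no user is literally named that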

Web programming frameworks

The trend of using programming frameworks such as Ruby on Rails, Zope or Django will become even more apparent in coming years. The use of these kinds of frameworks in web environments will grow not only because important security measures are already implemented by design but also because they lead to easy-to-maintain, elegant and simple code. We will witness a constant increase in the use of these kinds of technologies. Each programming framework is different but they all share some relevant features


Test security | 21

interesting option when choosing the technology used in a development project: Layered approach by design: The application is divided into three core components making a tidy separation of duties: – The front-end, which represents the interface with the user. It usually makes use of templates and in the case of web applications it returns HTML code but it is up to the developer to return any format. By default frameworks encode all the active content included in the template preventing cross-site scripting attacks; – The model is a representation of the data schema that the application is going to interact with. The framework provides an interface to access and modify the data held in the database. Moreover, it makes it transparent to the kind of database held in the back-end; – The view represents the logic of the application. It links and interacts with the other two layers and it is where most of the code resides. Abstraction layer to access to database: Frameworks provide an abstraction layer to access the data. Only in exceptional circumstances the developer crafts an SQL query that is sent to the database. This usually prevents a number of security issues, the most remarkable being SQL injection attacks. Additionally database schema relationships defined in the model are inherited and enforced by the framework and therefore access control is not delegated to the programmer, ensuring that given an appropriate data model no unauthorised access to information occurs. www.testmagazineonline.com
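As a rough illustration of what that abstraction layer looks like to the developer, here is a query written against the Django ORM; the model and its fields are invented for the example, and in a real project this would live inside a configured Django application.

    from django.db import models

    class Customer(models.Model):
        name = models.CharField(max_length=100)
        email = models.EmailField()

    # No hand-written SQL: the framework generates a parameterised
    # query, so quotes and SQL keywords in the input stay plain data.
    suspicious = "' OR '1'='1"
    matches = Customer.objects.filter(name=suspicious)   # safely returns no rows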

Sanitised user entry: Insufficient sanitisation of user-input data is one of the main causes of the software bugs that lead to security issues. In web environments user entry is received by the web server in the form of GET and POST requests. In both cases the input received is subject to strong typing, so for instance the application will not accept a text string if it is expecting a number. Likewise, injection vulnerabilities such as SQL injection or XPath injection are unlikely to happen, as there is a solid separation between user-supplied input data and the rest of the application.
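In Django, for instance, that strong typing is commonly expressed through form fields; the form below is a hypothetical sketch (again assuming a configured Django project) showing non-numeric input being rejected before it reaches any application logic.

    from django import forms

    class TransferForm(forms.Form):
        account_id = forms.IntegerField(min_value=1)
        amount = forms.DecimalField(max_digits=10, decimal_places=2)

    form = TransferForm({"account_id": "abc", "amount": "10.00"})
    print(form.is_valid())              # False: "abc" is not an integer
    print(form.errors["account_id"])    # framework-supplied error message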

Cloud computing

If one had to choose the buzzword of the year there would be no doubt: cloud computing. This is a broad term covering any service delivered over the Internet, usually involving virtualisation, scalability on demand and paying only for the resources used. This year we have witnessed interesting examples of business models based on cloud computing, and this tendency is likely to continue, accelerated by a weak economy looking to cut costs and by improvements in virtualisation technologies and high-speed Internet access. The term is often mistakenly used to refer to Software as a Service (SaaS), which represents only a subset of what cloud computing is. The existing models of cloud computing are:

Software as a Service (SaaS): offers a software service over the Internet, often eliminating the burden of installing, updating and maintaining a local application. Salesforce.com is a good example of SaaS, where the sales management system runs on remote servers and is accessed using the web browser. Google Docs is another good example, providing web-based documents, spreadsheets and presentations.

Platform as a Service (PaaS): offers a platform for scalable deployment of applications. Amazon EC2 and Google App Engine are examples of this service for programming platforms. Users deploy applications on the platform and pay based on storage, bandwidth or CPU cycle usage. Other examples include Amazon SimpleDB, which offers distributed access to databases, and Nirvanix, which offers remote storage services.

Infrastructure as a Service (IaaS): offers a computing infrastructure as a service, usually a virtualised environment. Some people see it as the evolution of hosting and virtualisation: instead of buying or renting hardware, users simply buy resources. Amazon EC2 and Rackspace Cloud Servers are examples of this kind of service.

There are interesting security implications affecting cloud computing. Whereas with traditional technologies there is a real possibility of going through a process of software testing, in cloud computing that possibility vanishes. How do I know that my sales management information stored in Salesforce.com is properly protected? How do I know that the credit card numbers I store in Amazon SimpleDB cannot be accessed by a potential attacker? There is a high degree of trust surrounding cloud computing, and it exists because it is in the providers' best interest to ensure that security is maintained at all times.

Social networks

On 7 January 2009 Mark Zuckerberg, co-founder of Facebook, announced on the company's blog that it had reached 150 million active users. Less than a year later, on 1 December 2009, it was announced that the figure had reached 350 million users. That is an increase of more than 600,000 users every day! Twitter experienced even faster growth: statistics show growth of 1,448 percent over a twelve-month period (May 2008 – May 2009). The rise of social networks is matched by a decline in personal privacy, which in turn leads to security issues. There is little security awareness about the kind of information published by users, which may jeopardise individuals using social networks. This is already a live issue, and it is likely to grow at a similar rate to the networks themselves. Social networks have also gone beyond the social environment. In 2009 they were heavily used by many businesses for marketing and PR purposes, especially for interactive marketing campaigns, new product launches and promotions, viral marketing and so on. This is possibly just the tip of the iceberg, as social media becomes consolidated and increasingly professionalised as a branch of marketing strategy.
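The daily figure is easy to check from the two announcement dates:

    from datetime import date

    # 150M users announced 7 Jan 2009; 350M announced 1 Dec 2009.
    days = (date(2009, 12, 1) - date(2009, 1, 7)).days    # 328 days
    print((350_000_000 - 150_000_000) / days)             # ~609,756 new users/day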


Rodrigo Marcos Principal consultant Secforce www.secforce.co.uk





...and a happy New Year! As we leave the first decade of the 21st century, T.E.S.T editor Matt Bailey picks the brains of some of the industry’s movers and shakers about developments in 2009 and what to expect in 2010.

While many may yet say that 2009 was not the best time to launch any new business endeavour, T.E.S.T magazine hit the ground running in March with a clear brief to keep testers informed about what is important to them in their professional life; and, so far so good, the magazine has received excellent feedback and we have now launched online. Growth in the circulation and a growing network of contributors and commentators has followed, and we're looking forward to further expansion in 2010. But enough about T.E.S.T; to get a real feel for the important talking points of 2009 and what to look forward to in 2010, I contacted some of our contributing alumni and asked them for their unique perspectives...

Time for change

Looking forward to 2010, Angelina Samaroo, MD of Pinta Education, is after some affirmative action: "As we start a new decade we need to make that change. There are still those who do not see the value of testing. Think of the last website you commissioned; how often did the developers say it was 'basically ready to go', before you'd had a chance to try it out for yourself? As we stand up and face the next year and a new decade, we remember not just our rights, but our responsibilities. It doesn't really matter whether we're testing in a crowd or in the cloud, testing is already a service.

"To deliver, we need to know what the system is supposed to do. So it's back to basics. If the requirements aren't clear, then sticking our heads in the sand won't give us clarification. We have a responsibility to make those reviews happen – and get the thing signed, in blood, and in triplicate if necessary – maybe not today, maybe not tomorrow, but someday, if we stick at it.

"If the code doesn't do what it says on the tin, then make sure that you know how to let them know this; seek out the truth, how much code was tested, do they really know what it says on the tin? Assuming of course that we got those requirements sorted when we could. If the system passes UAT but fails in live, run and hide. Then," says Samaroo, "quietly, check your test conditions and designs; were they extensive enough? Did you thoroughly check every relevant combination, or did you bury your head in the sand, again?"

Testing blogger Andréas Prins says the next change in software testing is a change to our mindset. "Everybody is talking about the economic crisis in terms of cost reduction. I don't like this discussion because mostly this has nothing to do with software testing," he says. "But what I've seen this year is that cost reduction has an impact on software testing, and more specifically on the preparation phase of software testing. This expensive preparation phase has to change, and the drive for cost reduction has made people aware of this. It will impact our work in 2010.

"Another change will happen due to more general trends," says Prins. "Think about social networks. It won't only be formal test methods that will lead the way we think, but blogs, tweets, and other social networks will play a role in this field. We have to change our mindset to accept the changes in the way we do our work and the manner in which we gather our information in the near future. Besides this, we will have to use our creativity to deal with the increasing complexity of software."

System tester with British Car Auctions, Sally McCowan-Wright, says it's business as usual and that cutting quality for short-term gain is not an option. "Very little has changed with regards to quality levels," she says. "While compromising quality may save in the short term, most organisations surely realise that long-term costs are almost certain to rise due to support issues and other associated activity. When projects are undertaken they are done so for a very specific reason, and the overall return on the project is shorter and highly focused."

Looking forward to 2010, McCowan-Wright sees quality as the key: "Whether companies are looking for future growth or seeking to maintain what they already have, quality will remain a key issue and this can only be determined on a project by project basis."

A global service perspective

Makarand Teje is the president and CEO of AppLabs, one of the world's largest software testing and quality management companies, so he is well placed to offer a perspective on the growth of testing services. "As the market starts looking up, the road to quality seems to be the new business imperative," he predicts. "Businesses want agile methodologies to meet customer requirements, reduce delivery costs, improve quality and get to market faster. We witnessed the testing services market mature in terms of offerings from service providers, as well as continued consolidation in the tools market.

"Testers are now expected to put themselves in the shoes of the end-users, as every application is developed with the end-user experience in view. Defect prevention at every phase, and especially at the design phase, is emphasised, as defects found in the final stages of the SDLC always prove costlier. 'Don't make users your testers!' seems to be the mandate of every CIO," concludes Teje. "The term 'career tester' is fast gaining acceptance in people management practices, unlike earlier times when testing was more likely to be delegated to the bottom 20 percent of development talent. Not any more: testing has become a serious business with direct, measured business impact."

Up in the cloud

One all-pervasive phrase has rapidly come to prominence in all sectors of the IT business: cloud computing. "We are now moving our IT infrastructures into the cloud," says Angelina Samaroo. "The question is, when it rains, do those inside the cloud get wet!? This could be big trouble. We are talking more about Software (and thus Testing) as a Service, with capital letters to boot, so it must be serious!"

Performance testing expert Thomas Barns is also alert to the potential of the cloud in these straitened times. "Server virtualisation has continued to be a popular technology as organisations seek to reduce cost through consolidation," he explains. "In 2009 we have seen a desire to exploit server virtualisation technology to reduce the cost of delivering load when conducting performance testing. Organisations are increasingly implementing virtual servers to provision load generators, to generate large transaction volumes in a performance test environment. This has some obvious advantages. The number of machines used for generating load can be scaled up and down as required, eliminating the need to set aside physical boxes as load generators. Instead, the load generators share the virtual capacity used by, perhaps, the development and functional testing environments, which are already virtualised in many organisations."
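Whatever the load generators run on, physical or virtual, their core job is simple: many concurrent workers replaying requests against the system under test and recording the results. A minimal, standard-library-only sketch of the idea (the target URL and volumes are placeholders to suit your own test environment):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen
    import time

    TARGET_URL = "http://test-env.example.com/"   # placeholder system under test
    WORKERS = 50          # concurrent virtual users
    REQUESTS = 1000       # total requests to send

    def hit(_):
        start = time.time()
        try:
            with urlopen(TARGET_URL, timeout=10) as resp:
                return resp.status, time.time() - start
        except Exception:
            return "error", time.time() - start

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hit, range(REQUESTS)))

    errors = sum(1 for status, _ in results if status == "error")
    mean = sum(t for _, t in results) / len(results)
    print(f"{errors} errors, mean response {mean:.3f}s")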

Keeping it Agile

No one in the testing industry can have escaped the Agile 'movement'. Elsewhere in this issue (cover story, p04) we even have our first dissenting voice, but Angelina Samaroo is still a believer: "Of course we are all Agile. As a tester, this is one I believe in! We have had to be agile just to survive, and as a profession we have done more than that."

While every bit as committed, independent testing consultant James Christie sees some resistance in the larger IT institutions. "Maybe I'm slow on the uptake," he confesses, "but until 2009 I didn't believe that the big beasts of IT really wanted to take Agile on board. I knew many people in these organisations wanted to; I just thought that corporate inertia would defeat them. Ultimately the big suppliers would shelter behind the pretence that Agile was OK for the little guys, boutique suppliers, 'craftsmen' in cottage industries; it just didn't fit the business model, or the self-image, of 'software engineers' doing serious IT for big clients.

"Moving a big company towards Agile is fiendishly difficult. I've worked in those companies; I know how difficult it is to make radical changes. It requires a new mindset, different styles of project management and governance. It means a different approach to writing contracts, to deploying and targeting staff. Everything is up for change, and the size of the problem is terrifying. Why would you want to try and teach an elephant to tap dance? I thought that Agile for the big consultancies would remain just a buzzword, something to drop into the marketing mix while serious projects were driven by contracts, finance and techniques that were proven failures.

"So this year I've been watching with fascinated respect as IBM has been going Agile. It's not just that their commitment seems real, which is impressive enough. More significant is the fact that they have hired Mary and Tom Poppendieck. Instead of glossing over problems and creating the conditions for Agile to 'fail', they are addressing the deep changes that are required.

"And if IBM is doing it then anyone can. Maybe I can see an end to the name-calling, the presumptions that Agile is amateurish and lightweight, and the inflated counter-claims that Agile is always best in every circumstance.

"Perhaps," concludes Christie, "we're moving into a future where flexibility, iterative development and effective, early user testing are the norm, and different approaches are seen as valid for different problems. It's not just about Agile. If we move away from a world where rigid processes have to be applied in every case, that's got to make life more challenging and rewarding for testers... perhaps. Or maybe I've just taken too much Christmas spirit on board!"

Testing Tools

Not surprisingly, during the past 12 months many organisations have been trying to reduce costs and streamline their testing projects. Tom Millichamp, training director of Edgewords, explains that he has been asked many times recently about the open source testing tools that are available, but he questions whether these tools really offer a credible alternative to the current market-leading proprietary tools.

"So will 2010 see continued growth and interest in open source testing tools?" asks Millichamp. "That really is up to you, as the testers in the industry, but after considering the advantages and disadvantages, we expect that as the economy continues to recover, the interest and investment will return to the more mainstream products. Resources may be better utilised in up-skilling testers on the new features of the latest proprietary software through training, enabling them to make the most of their established and fully supported tools." (For Tom's full opinion piece see p48.)

The future is bright, the future is complex

Testing blogger Ewald Roodenrijs is looking forward to a more complex future for all in testing. "Future systems will be more complex," he predicts. "And this increased complexity will have an effect on testing. We need more information about the systems, the clients' real wishes and the business processes. With this information we can estimate what is needed for testing and make the correct adjustments to our testing process.

"We already have this information, but we need to interpret it correctly: information about the interfaces, business processes, testers, developers, techniques used and earlier testing, for example. But we need to streamline this information and use it in a real-time view of the system, with perhaps all its defects and missing items shown; something like Augmented Testing. With Augmented Testing this information becomes a layer over the system under test, so that everything known about the system is available in real-time while testing. This will help us test this increased complexity, and by combining all this information with a good process and good tools, the testing process can be made a lot more efficient."

Whatever your approach and whatever your philosophy, T.E.S.T wishes you the most prosperous of years in 2010. And the last word goes to the ever-ebullient Angelina Samaroo: "Let us, for 2010 and beyond, say to the banks, to the politicians, to the IT community, 'We are organised, we are motivated, and we are educated – we have the papers to prove it – so bring it on!' Merry Christmas and a Happy New Year to you all."





Testing virtual worlds

AtomFire Productions, an interactive entertainment agency, and RedBedlam, a multiplayer games specialist, have teamed up to create what they describe as a faster, more stable way of building virtual worlds. Currently working on their second project together, CEOs Dominic Mason and Kerry Fraser Roberts share their insights into testing these kinds of complex, dynamic software projects.

Our aim was to combine forces to find a faster, more stable way of building virtual worlds. AtomFire is the design and strategy part of the joint venture, while RedBedlam's remit is technology and software development, but we share a testing regime. Our testing 'headline' is Total Quality. The phrase is borrowed from Johan Cruijff's 'Total Football' strategy, where each player on the field can replace another to some extent, but there is still some strategic bedrock.

In our development environment, that bedrock is the technical architecture, which is accessible and available to all team members. From artists through management to the technical staff, each person knows what the others should be doing, can double-check their own work, and doesn't need to escalate every testing failure to a senior programmer.

It needn't be complicated

A common error in interactive entertainment development teams is not building a good technical foundation, and not building it in an inclusive fashion, meaning that key members of staff feel excluded and programmers feel responsible. This wastes time, builds barriers between team members and also makes valuable insight – which can come from any member of the team – difficult to gather and quantify. The capability to shine a light, at any time, into the darkest corners of our development means that our biggest pitfall – reopening development decisions – can hopefully be avoided.

When developing projects like the ones we specialise in, you need an explosive, creative, front-loaded decision-making phase that then settles down into a productive, progressive development phase, much of which is similar to development and testing regimes that you have used in the past, but are repeating with novel content. The development-phase testing, which is threaded through development in our 'Total Quality' way, simply checks that the decisions made in the creative, energetic first phase are on track. I say 'simply', but of course we are talking about multi-dimensional, massively scalable, dynamic systems, so the actual projects themselves are anything but simple. Controlling them does not need to be complicated though.

The shift to online

I used to work for one of the world's largest videogames outsourced-services companies. One of its core services was testing. Many of its customers were interactive entertainment companies who owned large development projects and platforms. At the time, online was nascent and most products were offline, boxed products that would be patched (in the case of PC) by releases which were downloaded and installed. Many of these customers wanted 'just-in-time' testing of the most quantitative nature. As they saw what we could offer, they then wanted more qualitative testing, which for some customers progressed to more and more early-stage testing that was then fed back into design.

With the advent of more online projects and connected platforms, this situation has been flipped around. More and more products are designed and tested to be released 'incomplete' or in Beta form, and then data from those products is used to design them outwards. This additional development, or these feature releases, can be, and should be, affected by players' actions. The nature of QA and testing has therefore changed, and is still changing, as we are no longer talking about boxes whose contents are impenetrable until after purchase, but live, active, changing products whose success relies upon pleasing users continuously and dynamically.

Enabling business change

In this way, the QA department or QA services provider is essential in ensuring that both the gameplay test results and the data results that the management and design team receive are of high enough quality to enable business-changing decisions. These decisions are not remote ones based on designers' experience and whims, but data-led ones. The testing regime determines the quality of the data, and therefore the quality of the business decisions. To put it bluntly, high-quality testing equals a better chance of success and more revenue. With testing and data much closer to core business and management than ever before, testing now has a better chance of making a case for its services than when products were 'fire and forget' or 'fire and give lip service to' in the past.

We take this approach very seriously, and the future success of our organisations and joint venture is dependent upon it. We very much live by the development ethos of 'it will be released when it is ready'. But now, instead of a small team of people in a room being the ultimate arbiters of when something is right, the end users are the ones to tell us when a product is working to their satisfaction, and there are potentially millions of them who, given the right tools, can give you endless feedback to help you make a better product for them.

An internet of testers

That is our biggest challenge – a whole Internet of testers. Imagine a virtual world with a billion registered users. It's not as fantastical as it might seem. Now imagine the incoming data from those users, even the five million that play on a daily basis. Imagine the 500,000 simultaneous connections. These are nice problems to have, but unless you control and test each element in your development chain, you can very rapidly suffer as products are required to scale up, or scale down. If you have either 'Western' or 'Eastern' skewed products then there will be inevitable downtime, so what does your platform do when it's scaled right down – and how can you test that scenario? Similarly, when 500,000 users log in simultaneously, what happens? Unless you have modelled these scenarios and are also cycling live data continuously back into your development environment for testing, you are never going to know.

The next twelve months will be very interesting for our respective and collective projects, as we are involved in such cutting-edge areas of development as the exponential growth of the 3D web; Facebook games which go from ten to 580,000 users in three weeks; and high-value, rapid and flexible MMOG platform development. However, no matter how fun or challenging our projects are, we will always adhere to our 'Total Quality' standards. These standards bring a level of cohesion, via an intrinsically simple understanding of what testing means, to the whole team, our customers and our partners – and that is invaluable.

Dominic Mason Director/Owner AtomFire Productions

www.atomfireproductions.com

Kerry Fraser Roberts Chief executive RedBedlam

www.redbedlam.com





Striking a balance

How best can organisations strike a balance between competitiveness and product accuracy? Simon Morris, founder and R&D director of Pentura, walks the tightrope between speed to market and quality.





Organisations need to strike a balance between being first to market and being the best in the market. In most businesses a sales and marketing team will push for new products to be ready in time for certain market conditions, to be ahead of competitors and to position the organisation as the leader in its marketplace. This is something of an issue when it comes to testing security products, because the two tactics don't sit well together. Testing is not a process that can be rushed; it is essential for businesses to make sure their products meet their customers' objectives and needs, yet the product needs to be ready in time to ensure a competitive edge. So how best can an organisation realise this balance between competitiveness and product accuracy?

Getting the right balance

Organisations need to take a semi-formalised approach to testing in order to keep a 'real world' aspect. Many organisations in the past have used mathematical testing to prove a product is robust and safe to use; however, some of the maths behind the software testing algorithms has been shown to have flaws of its own. Some organisations don't use validation testing at all, and instead release a BETA so that they can make corrections as they go along. These tactics only work to a certain extent; organisations need to prioritise the risks involved with the product and decide how accurate the testing needs to be. Online banking products, for example, must be safe and have no bugs or vulnerabilities before they are introduced to customers, and so must undergo a meticulous testing process.

Organisations also need to ensure that the testing process is as efficient, accurate and fast as possible. In many cases, particularly in programming, there are millions of lines of code that were written by a programmer who is no longer with the company. The code is almost impossible to understand, and even if a bug is found it is extremely difficult to fix. As a result the new programmer will need to start from scratch. Many issues can be easily overlooked in the rush to get a product to market. Unfortunately testing is often seen as an overhead, as organisations are too eager to reach the end product and cannot see the tangible return-on-investment testing can bring.

Communication is key in the product development process; many programme managers still struggle to articulate their ideas and plans, and very often find themselves under pressure from marketing and sales managers to deliver a product to market before it is adequately tested. The commercial reality of needing to get a new product to market must be balanced with a tester's typically risk-averse attitude. There needs to be a clear middle ground, a compromise, between these two business departments. Getting the balance right is crucial.

The importance of testing

An organisation needs to test the stability of a new product; simple questions such as 'does it do what it says on the tin?' and 'does it do what the marketing and sales department has asked for?' are often overlooked. The ease of use of a product is also vital; it needs to be aimed at the right audience. Even if it is a remarkable piece of code, if it's not something that can be used easily by the customer then the project objectives have not been met. One of the main reasons Apple has been a success is its focus in the early days on the 'human computer interaction' aspect of its products. Apple tested its user interfaces to ensure they could be used by anyone and that everything was where a user would expect to find it. Businesses must follow this example to keep customers satisfied.

To ascertain the quality of a product, bounds checking is vital. Situations where an application allows a user to enter a number between one and ten must be tested and validated. If a user can enter a number outside that range this can result in the product crashing; it must be able to cope with irregular input and trap exceptions reliably. I have seen examples where some of the leading firewall products allow users to enter syntactically incorrect data and give obscure error messages; when the product compiles the policy, in some cases it crashes because the data was incorrect. Organisations must ensure the information that users enter, no matter how random, does not break the product. The aim is to catch as many problems as possible before a product goes to market. Organisations today are realising the value in a carefully planned product development process and the value in testing all possibilities. Many of the exploits on banking sites that are bandied around the media stem from bad coding. Banks have realised this over the last few months and are now investing in accurate code checks.
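The one-to-ten example translates directly into a validation routine plus tests that probe the boundaries and the irregular input; a minimal sketch:

    import unittest

    def parse_rating(raw):
        """Parse user input, accepting only integers from 1 to 10."""
        try:
            value = int(raw)
        except ValueError:
            raise ValueError("not a number: %r" % raw)
        if not 1 <= value <= 10:
            raise ValueError("out of range 1-10: %d" % value)
        return value

    class RatingTests(unittest.TestCase):
        def test_accepts_bounds(self):
            self.assertEqual(parse_rating("1"), 1)
            self.assertEqual(parse_rating("10"), 10)

        def test_rejects_out_of_range_and_garbage(self):
            for bad in ["0", "11", "-3", "ten", "", "3.5"]:
                with self.assertRaises(ValueError):
                    parse_rating(bad)

    if __name__ == "__main__":
        unittest.main()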

Accurate implementation

In order to make the product development process as streamlined and efficient as possible, there are a number of tactics an organisation can use. With time being a very important factor in product engineering, it is not possible for businesses to stay competitive if products need to be constantly redeveloped and bugs removed. With so many products released way ahead of time without adequate testing, the only way to stay ahead of competitors and maintain customer satisfaction is to take the 'second-mover advantage'. A well-respected and recognised way to manage the software lifecycle is to follow the seven stages of the 'Waterfall Model', a sequential software development process:
1. Requirements specification;
2. Design;
3. Construction (AKA implementation or coding);
4. Integration;
5. Testing and debugging (AKA validation);
6. Installation;
7. Maintenance.
Each of these phases must be completed accurately and precisely before moving on to the next. The more time spent in the early stages of a software production cycle, the better the results and cost efficiencies at the later stages. It has been shown that a bug found in the early stages, such as requirements specification or design, is cheaper in terms of money, effort and time to fix than the same bug found later on in the testing phase. It is very difficult to ensure every phase of a software product's lifecycle is perfected; this is why testing is still a very important and necessary step in the product development process.

One way of ensuring a product is meeting the original objectives is to break the process down into smaller projects. This allows developers to clearly see whether they are on target to deliver on customer objectives at the end of each of the smaller projects. Some long-term software engineering projects can last years, and as new programmers come through the business, objectives can get misinterpreted and misunderstood; the code that was originally written is difficult for another programmer to translate and understand, and very often projects have to be started again from scratch. It makes sense to break large projects into smaller sections so there are clear benchmarks where objectives can be reviewed, re-set and monitored, and new programmers can be introduced with minimal disruption.

Another way of minimising time spent developing and writing new code is to use object-oriented techniques to ensure proven and robust code is reused, reducing overall development times. Over time a business can develop lots of objects for different tasks. This creates a pool of re-usable code, which can prove invaluable in future projects, as the business can reuse these pieces of code, giving it the building blocks for future products. There are no regulatory measures for writing code; it is generally understood, an unwritten rule, that programmers will annotate and document their coding so that, should another programmer need to edit and develop their code further, the annotations will allow them to do this with ease.

Targets and milestones

The issues of time and of understanding the ins and outs of product development will not be fixed overnight. The best way to ensure that a product is ready to go to market is by setting up targets and milestones to monitor its progress; this can be done using modular programming techniques. Over time, languages have supported these techniques, which have been improved with experience: Pascal, Modula-2, ADA (a defence language that promoted OOP), C++ and variants.

It is important to put the final product into perspective. If it's not going to be deployed into a high-risk environment then testing is not as essential, and perhaps the BETA technique can work well. In high-risk environments it is essential to spend more time testing, because the impact of a bug can be catastrophic. In a perfect world there would be more staff and time available to test and to ensure consistency. Consistency is important so that everyone can understand the objectives. Marketing and product engineering departments need to set realistic time allocations to ensure they manage customer expectations well.

Software testing should provide accurate information about the quality of a product or service with respect to the context it is intended to operate in. It should also provide an objective and independent view of the product, to allow an organisation to appreciate and understand the risks involved in implementing the software. It must validate and verify that a product meets the business and technical requirements that were agreed in the early stages, and that it works as expected. Testing can never completely highlight and eradicate all of the bugs and faults within a product. Instead, it identifies how well a product will work in a particular environment. Every software product has a target audience, and when an organisation develops or invests in a software product, it must ensure the product will address the needs of its end users, its target audience and its purchasers.

Simon Morris Founder and R&D director Pentura Limited www.pentura.com





Performance assured

Paul Caine, managing director of Trust IV, discusses the dangers facing ecommerce businesses that fail to include performance testing as an integral part of their online store-front development life-cycle, and argues that testing-as-a-service is the way forward to ensure better online user experiences.

It has been estimated that revenues from online shopping during this pre-Christmas period will reach around $40bn worldwide, with the majority of that spending occurring during the last week in November and the first week in December; the first Monday after the Thanksgiving holiday weekend in the US is now officially dubbed 'Cyber Monday' by the Internet community, due to the massive numbers of shoppers choosing that day to place their online orders in time for Christmas delivery. Akamai Technologies, which tracks the world's major ecommerce Web sites, recorded peak traffic on the top sites reaching over 3.5 million visitors per minute during the last Cyber Monday period, with over £600 million being spent on the day.

While this is clearly excellent news for the growing numbers of e-commerce based organisations that have embraced the power of the Internet as a central element of their business strategy, this massive increase also brings with it some significant technological challenges.

There can be serious negative business consequences for any website not optimised to cope with such high volumes of customer traffic. Last year it was reported that around 90 percent of all regular online shoppers in the UK experienced at least one problem in completing an online transaction due to a performance issue, with more than 50 percent of those people choosing to purchase from a competitor’s site rather than wait for the problem to be resolved.

Fit for purpose

The problem arises when companies focus their Internet investment on the look, feel and functionality of their Web sites but fail to put the same emphasis on performance at every stage of the development life-cycle. Performance bottlenecks need to be identified and corrected at key stages of the process, as well as before going live. For the beleaguered Web development team, under pressure to meet commercially sensitive deadlines, the temptation to cut corners is understandable. But ultimately it is their neck on the line if the site falls over at a critical trading time, and their responsibility to take every possible step to ensure the site is fit for purpose.

Studies carried out by the leading analysts Forrester, IDC and the Yankee Group suggest that the potential cost of a 24-hour outage for a large e-commerce company could be as much as £15 million. With that kind of money at stake, businesses cannot afford to take risks; getting this aspect of the site wrong can result not just in major losses in revenue but can also seriously damage long-term customer relationships, erode corporate reputations and even impact share price valuations. Customer loyalty online does not have the same resonance as it does in the high street, and the multi-million pound investment put into driving customers to a Web site can be lost forever by just one negative experience.

The problem lies in the fact that all Web sites have a finite limit on the number of connections they can handle at any given moment in time. This is determined by the site architecture, as well as being a function of a number of other variables including the specifications of the server and the capacity of the connection to the Internet. There is a lot that can be done at the development stage to identify and iron out any traffic bottlenecks and ensure that this limit is set as high as possible.

This fundamental weakness in Web technology has been ruthlessly exploited in recent years by the growing threat from professional cyber-crooks, with many online retailers and e-commerce businesses finding themselves victims of targeted distributed denial of service attacks. In 2008 it was estimated by the leading security vendor Symantec that DDOS attacks were occurring at a rate of around 5,000 per day on a global basis. With the added factor of exponential growth in legitimate flash-flood traffic making the problem worse, particularly for ticketing organisations, maintaining Web site availability is now a major headache for Web application developers as well as the data centre management team.

Maintaining performance

One of the major challenges facing Web application developers is to build a site that can maintain acceptable post-live operational performance levels, including during wild traffic fluctuations and multiple user behaviour patterns, and to accurately recreate these conditions in the test environment. Given that there are plenty of proven specialist testing tools available for the job – solutions such as QTest and QAP from Quotium Technologies, and Reflective Solutions' StressTester and Sentinel, all capable of simulating a range of user-journey scenarios and scaling to virtually unlimited traffic loads – the question is: why do so many Web applications still fail at critical times?

Clearly there is no one simple answer to this, but the cost and complexity of the testing tools are often cited as the primary reasons for opting to follow a high-risk development strategy, particularly by organisations that do not have a dedicated QA team at their disposal, which in these difficult economic times is far from being a commonplace resource. An additional problem often identified is deciding whether to opt for an in-house solution or outsource to one of the many third-party testing companies that have emerged in recent years. Both scenarios have their merits, and choosing the right path for a particular organisation can be a difficult process, one that can result in the company doing nothing or opting for an ineffective compromise solution.

The in-house option brings with it the high annual licence fees associated with some of the leading testing tools on the market, plus the cost of maintaining an internal team of dedicated experts for the chosen solution. Where the organisation has multiple software development projects as a core part of its business function, the ROI equation may well be in favour of the in-house option. However, for the vast majority of businesses that have irregular need for a sophisticated testing tool this is unlikely to be the case, and the benefits of employing third-party help start to make much more sense.

Outsourced testing

As well as making commercial sense, choosing the outsource option can also bring a range of additional technical benefits. Any testing consultancy worthy of the name will be able to offer a range of specialist skill sets covering the whole gamut of software testing requirements, together with the experience and knowledge to select the right tool for the job, whether that be for load and stress testing Web applications or for more in-depth monitoring and analysis of the whole user experience.

For those organisations that decide on the third-party route, choosing the right organisation to partner with is the next challenging step. Investing in a long-term partnership arrangement with the testing organisation can ultimately prevent costly, late-stage errors and ensure that the application is delivered on budget and on time. This means choosing an organisation that can help to build testing into the key stages of the development project from the outset, and that provides the resources to ensure rigorous standards are applied at every stage of the process.

A critical element of any testing process is not only the ability to simulate realistic user-journey paths through the application, from landing on the entry page to completing a transaction, under extreme load conditions; the testers must also be able to accurately recreate the same tests under the same conditions as often as is necessary, until the application is signed off ready to go live. Without this ability the tests are pretty much meaningless and cannot provide a reliable prognosis of how the application will stand up in the worst-case scenario. Finding out that the application is not robust enough to handle a Cyber Monday-level surge of traffic in a live environment can be both extremely costly for the company concerned and a career-limiting event for the development team responsible.
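Repeatability usually comes down to driving every simulated journey from fixed inputs. The sketch below uses a seeded random generator so that each run replays an identical workload; the journey steps and abandonment rate are hypothetical.

    import random

    STEPS = ["land_on_home", "search", "view_product", "add_to_basket", "checkout"]

    def generate_journeys(seed, count):
        rng = random.Random(seed)        # fixed seed: identical run every time
        journeys = []
        for _ in range(count):
            journey = []
            for step in STEPS:
                journey.append(step)
                if rng.random() < 0.3:   # 30% chance the user abandons here
                    break
            journeys.append(journey)
        return journeys

    # The same seed always reproduces the same workload to replay.
    assert generate_journeys(42, 100) == generate_journeys(42, 100)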

A mainstream part of development

As the e-commerce sector continues to grow and increasing numbers of organisations move their core IT operations online, performance testing is becoming a more mainstream part of the application development process. While there will always be a place for in-house QA teams, as well as a requirement for the in-depth analysis provided by third-party consultancies, there is also an emerging need for online 'utility'-based services that are simple to use, realistically priced on a per-use basis for better control of project costs, and able to adapt flexibly to the realities of the development life-cycle. This testing-as-a-service approach has to be the future for the performance testing sector, to ensure that more Web applications are fit for purpose and fewer e-commerce businesses lose customers as the result of a badly constructed online shop-front.

Paul Caine Managing director Trust IV www.trustiv.co.uk




Get smart

Is user acceptance testing (UAT) really your best testing option? ISEB chief examiner for software testing, Brian Hambling, says it could be, if you get Smart.

User acceptance testing (UAT): that is, testing done by amateurs pressed into service while they still have a 'day job' to do. Often scheduled too late to enable any problems found to be fixed, and expensive because the 'testers' are generally the revenue generators for the business. Imagine keeping bankers from earning their end-of-year bonus to test a new system. Is it worthwhile? Well, headlines continue to highlight disastrous failures in new systems, so it is not a massively successful practice. Shouldn't we just quietly drop the whole thing and save the money? Well, we certainly have to ask the question, don't we?

What would be the alternative? Leave it to the developers and the professional testers? Not an encouraging prospect, unfortunately. So, if we have to do UAT, let's get some value from it. We all know that if requirements were complete and well documented as a basis for testing, developers produced applications that faithfully implemented the requirements, professional testers were effective in systematically testing applications, and project managers ensured that systematic testing was completed even if development was overrunning, we would get better results. Is that a realistic aim? Not really. The history of the last 60 years of computing shows depressingly little progress in any of those areas. Better to look for a more pragmatic way forward, for the time being at least. We need to get smarter.

Introducing Smart

Smart testing could be one way forward. It is testing for the realistic user, interested in business results and not in methods or technology. Smart testing has just three principles.

Principle 1: Use scenarios to describe what the business needs. Teaching users to define requirements and tests (a skill set that many IT professionals never completely master) for a one-off testing activity is not practical. Worse, it encourages them to think like testers instead of like business users. The user community is uniquely competent to define what is needed, and to ensure that what is delivered matches the need, but that does not mean that they should have to write perfect requirements statements. If I ask a plumber to install a new bathroom in my house I do not expect to define the pipe diameters or the rate of flow of water; I simply explain how I would like it to look and what I intend to use it for. I run through a scenario in my mind of running a bath or a shower, imagine how it will work and look, and then transfer the idea to my plumber's imagination. Building realistic scenarios is the natural domain of the end user. Scenarios provide development teams with a much more effective picture of the required system behaviour than a bare set of requirements. In my bathroom scenarios there will be some development of the ideas (like when the taps I want are not available, or I realise that the corner bath will not actually fit), and the same applies to business scenarios. Eventually the scenarios will stabilise. By using scenarios we save users the time, effort, loss of revenue and loss of business perspective that results from trying to learn how to write requirements and test IT systems.

Principle 2: Test before you build. Scenarios enable users, developers and testers to work together as a single community to explore the implications of scenarios and decide what needs to be put in place to ensure each scenario can be successfully realised. While the scenarios are being elaborated on, and are naturally evolving, the project community will also be thinking about what the system will look like and how it will behave in each scenario. This is the first stage of UAT. Imagining an outcome is the first step to recognising what will not work (the bath I always wanted will not fit in my bathroom). This early testing will avoid many of the worst kinds of defects: the attempts to deliver desirable but impracticable characteristics of a system. It will also give the whole project community a shared vision of how the system will behave.

Scenarios can be converted into tests when they are stable. Defining a test as a way of realising the scenario can be a joint effort that helps developers, testers and users to understand what the outcome will be. It helps users, testers and developers to interact in the early stages of a project rather than at the end. The scenarios in test form will become the UAT, but they will already have made a huge contribution by aligning everyone's thinking about the outcome. Running the UAT scenarios will be more of a formality, meeting the real purpose of UAT, which is to provide confidence in the solution before the whole user community takes it on.
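As a sketch of what 'scenarios in test form' might look like, here is a small business scenario written as an executable acceptance test; the scenario, the order_total function and its behaviour are all hypothetical stand-ins for the real system under test.

    import unittest

    def order_total(prices_in_pence, card_accepted=True):
        """Stand-in for the system under test."""
        if not card_accepted:
            raise RuntimeError("payment declined")
        return sum(prices_in_pence)

    class CustomerOrdersTwoItems(unittest.TestCase):
        """Scenario: a customer orders two items and pays by card."""

        def test_total_covers_both_items(self):
            self.assertEqual(order_total([999, 501]), 1500)

        def test_declined_card_stops_the_order(self):
            with self.assertRaises(RuntimeError):
                order_total([999], card_accepted=False)

    if __name__ == "__main__":
        unittest.main()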




strategy for release to be agreed. We know that complete and fully effective systems are seldom achieved in the originally planned time scale; if we have some measure of the outstanding priorities we can better determine the best way forward to guarantee the quickest possible route to an acceptable level of capability. Completeness measures give us the opportunity to analyse the fitness of the system for release without needing to get involved in the technical details. None of these principles is new; they are all time-honoured, though Building confidence visible more in the breach than in the Scenarios build confidence that observance. Their implications are far the system does the right things. reaching. To achieve them we need Professional testers and developers to reshape project teams so that the will have been working to ensure user community, testing specialists that other aspects of the system are and developers are all engaged at the satisfactory, especially in areas such outset of the project, working together as reliability, usability and other nonfunctional aspects. Many of these need in an integrated team to achieve common goals. Smart UAT is the user specialist input and may also require contribution to this approach. the use of tools. This is clearly the domain of the professional testers and developers, but the user community Three obligations on the user needs confidence in what the Smart UAT places three obligations specialists have done. Completeness on the user community. First of all, measures can give us this confidence it will need the user community to by specifying required reliability, form a team with all levels and skills performance and other attributes represented as far as possible so that of the system. appropriate members can be assigned The completeness measures can be at appropriate stages. Communication combined into an overall measure within the user team and between of the risk of releasing a system as the user community and the rest of deadlines approach; they enable a the project team will then enable scenarios is essential? How many glitches could we tolerate at initial release? How reliable do we need the system to be? Which areas of the system need most testing? Can we risk releasing before testing is complete? The answers to questions such as these form a set of criteria that we can use to decide whether or not to let a system go live. Setting these criteria at the beginning of a project provides a solid and agreed basis for a decision on whether a system is fit for release.




Three obligations on the user
Smart UAT places three obligations on the user community. First of all, it will need the user community to form a team with all levels and skills represented as far as possible, so that appropriate members can be assigned at appropriate stages. Communication within the user team, and between the user community and the rest of the project team, will then enable continuous interaction with the developers without individuals being permanently assigned. It will not be necessary to involve the same representatives of the user community at every stage, nor will users need to concern themselves with the details of development activities. Senior users will be needed at some stages to ensure that the overall strategy and vision are clear; 'hands on' users will be needed to assess the practicality of solutions; and middle level users will be needed to validate the processes based on the proposed solution.

Secondly, the user community will need to collaborate on the construction of scenarios that define business needs and their testing, and to maintain contact with developers and testers to resolve questions as the scenarios are elaborated. Finally, the user community will need to construct completion criteria to drive the development project to a successful conclusion, and to contribute to decision making as the project nears completion and risk is being assessed.

The construction of tests to represent scenarios can be done by users with basic testing knowledge or by professional testers. As long as a scenario is faithfully represented by a test, the choice of author is a matter of convenience, and testers already have the necessary skills. The running of tests is best done by users, because experience of the functioning system will be valuable as release approaches, but testers could do the job if users are unavailable.
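To make this concrete, here is a minimal sketch, in Python, of a scenario expressed as an executable test. The order-handling system and every name in it are invented for illustration; the article prescribes no particular tool or notation, only that the test faithfully represents the scenario.

# Illustrative only: a business scenario written as an executable test.
# OrderSystem stands in for whatever interface the real system under
# test exposes; all names here are hypothetical.

class OrderSystem:
    def __init__(self):
        self.orders = []

    def place_order(self, item, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.orders.append((item, quantity))
        return len(self.orders)  # order reference number

def test_customer_places_a_simple_order():
    # Given a customer with an empty order book
    system = OrderSystem()
    # When they order two copies of one item
    ref = system.place_order("study guide", 2)
    # Then the order is recorded and a reference is returned
    assert ref == 1
    assert system.orders == [("study guide", 2)]

test_customer_places_a_simple_order()
print("scenario test passed")

Written this way, the given/when/then comments keep the test legible to the users who own the scenario, while the assertions make it precise enough for developers and testers.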

Get Smart
The overall impact of this on the user community is to require earlier and wider involvement, but the traditionally intensive testing activity at the end of the project becomes much lighter. Concentrating on testing early, at the scenario definition stage, and on interaction with the development team will reduce the defect count and make the final testing less stressful. Is UAT the smart option? Well, Smart UAT enables the user community to contribute their knowledge and experience without having to learn new skills and without having to become IT people. It enables them to spread the effort of collaboration between different groups of users and to ensure continuous involvement without permanently assigning reluctant users. It gives them the best chance of avoiding the expensive mistakes that UAT conventionally identifies at the end of the project. And it provides developers and testers with effective collaboration at the early stages of the project, when it is of most value to them. That's smart.

Brian Hambling
Chief examiner, software testing, ISEB
www.iseb-exams.com



T.E.S.T company profiles

ISEB
The Information Systems Examinations Board (ISEB) is part of BCS, The Chartered Institute for IT, and is an international examination body created to raise the standard of competence and performance of people working in IT. We're leading the way in qualifications for IT professionals, delivering more than 380,000 exams in over 200 countries. Our qualifications are internationally recognised and cover eight major subject areas: Software Testing, ITIL/IT Service Management, IT Assets and Infrastructure, Systems Development, Business Analysis, Project Management, IT Governance and Information Security, plus our new qualification, Sustainable IT. These are available at Foundation, Practitioner and Higher Level to suit each individual candidate; ISEB Professional Level is also available. For more information visit www.iseb-exams.com. These qualifications are delivered via a network of high quality accredited training and examination providers. The breadth and depth of our portfolio is one of its key strengths, as it encourages knowledge, understanding and application in specific business and IT areas. Candidates develop their competence, ability and aptitude – and therefore their professional potential – giving employers the edge they're looking for.

BCS BCS, The Chartered Institute for IT, promotes wider social and economic progress through the advancement of information technology science and practice. We bring together industry, academics, practitioners and government to share knowledge, promote new thinking, inform the design of new curricula, shape public policy and inform the public. As the professional membership and accreditation body for IT, we serve over 70,000 members including practitioners, academics and students, in the UK and internationally. A leading IT qualification body, we also offer a range of widely recognised professional and end-user qualifications.

BCS membership for software testers
BCS membership gives you an important edge; it shows you are serious about your career in IT and are committed to your own professional development, confirming your status as an IT practitioner of the highest integrity. Our growing range of services and benefits is designed to be directly relevant at every stage of your career.

Industry recognition
Post-nominals – AMBCS, MBCS, FBCS and CITP – are recognised worldwide, giving you industry status and setting you apart from your peers. BCS received its Royal Charter in 1984 and is currently the only awarding body for Chartered IT Professional (CITP) status, also offering a route to related Chartered registrations, CEng and CSci.

Membership grades
Professional membership (MBCS) is our main professional entry grade and the route to Chartered (CITP) status. Professional membership is for competent IT practitioners who typically have five or more years of IT work experience. Relevant qualifications, eg a computing-related degree, reduce this requirement to two or three years of experience. Associate membership (AMBCS) is available for those just beginning their career in IT, requiring just one year's experience. Joining is straightforward – for more information visit www.bcs.org/membership where you can apply online or download an application form.

Best practice
By signing up to our Code of Conduct and Code of Good Practice, you declare your concern for public interest and your commitment to keeping pace with the increasing expectations and requirements of your profession.

Networking opportunities
Our 44 branches, 16 international sections and over 40 specialist groups – including Software Testing (SIGIST) and Methods & Tools – provide access to a wealth of experience and expertise. These unrivalled networking opportunities help you to keep abreast of current developments, discuss topical issues and make useful contacts.

Specialist Group in Software Testing (SIGIST)
With over 2,500 members, SIGIST is the largest specialist group in the BCS. The group's objectives include promoting the importance of software testing, developing awareness of the industry's best practice, and promoting and developing high standards and professionalism in software testing. For more information please visit: www.sigist.org.uk.

Information services
The BCS online library is another invaluable resource for IT professionals, comprising over 200 e-books plus Forrester reports and EBSCO databases. BCS members also receive a 20 percent discount on all BCS book publications, including Software Testing, the ISEB Foundation and Intermediate title. As well as explaining the basic steps of the testing process and how to perform effective tests, this book provides an overview of different techniques, both dynamic and static, and how to apply them.

Career development
A host of career development tools are available through BCS, including full access to SFIA (the Skills Framework for the Information Age), which details the skills and training required to progress your career.

BCS, First Floor, Block D, North Star House, North Star Avenue, Swindon, SN2 1FA, United Kingdom Tel: +44 (0) 1793 417655 Fax: +44 (0) 1793 417559 Email: isebenq@hq.bcs.org.uk Web: www.iseb-exams.com




Parasoft
Improving productivity by delivering quality as a continuous process. For over 20 years Parasoft has been studying how to create quality computer code efficiently. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Oriented Architectures (SOA), evolving legacy systems, or improving quality processes, draw on our expertise and award-winning products to increase productivity and the quality of your business applications. Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components, as well as to reduce the complexity of testing in today's distributed, heterogeneous environments.

What we do
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.

End-to-end testing: Continuously validate all critical aspects of complex transactions which may extend through web interfaces, backend services, ESBs, databases, and everything in between.

Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly-dynamic browser-based applications.

Application behaviour virtualisation: Automatically emulate the behaviour of services, then deploy them across multiple environments, streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data.

Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.

Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).

Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.

Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.

Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.

Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.

Policy enforcement: Provide governance and policy validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers.

Please contact us to arrange either a one-to-one briefing session or a free evaluation.

Web: www.parasoft.com Email: sales@parasoft-uk.com Tel: +44 (0) 208 263 6005




Seapine Software

With over 8,500 customers worldwide, Seapine Software, Inc. is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. Built on flexible architectures using open standards, Seapine Software's cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software's integrated software development and testing tools streamline your development and QA processes, improving quality and saving you significant time and money.

TestTrack RM
TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine's integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.

TestTrack Pro
TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its time-saving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.

TestTrack TCM
TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.

QA Wizard Pro
QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies such as C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.

Surround SCM
Surround SCM, Seapine's cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM's change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.

www.seapine.com
United Kingdom, Ireland, and Benelux: Seapine Software Ltd., Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA, UK. Phone: +44 (0) 208 899 6775. Email: salesuk@seapine.com
Americas (Corporate Headquarters): Seapine Software, Inc., 5412 Courseview Drive, Suite 200, Mason, Ohio 45040, USA. Phone: 513-754-1655




TechExcel
TechExcel is the leader in unified Application Lifecycle Management as well as Support and Service solutions that bridge the divide between product development and service/support. This unification enables enterprises to focus on the strategic goals of product design, project planning, development and testing, while enabling transparent visibility across all customer-facing initiatives. TechExcel has over 1,500 customers in 45 countries and maintains offices in the UK, US, China and Japan.

Application Lifecycle Management
DevSuite is built around the best-practices insight that knowledge is central to any product development initiative. By eliminating the silos of knowledge that exist between different teams and in different locales, DevSuite helps enterprises transform their development processes, increasing efficiency and overall quality.

DevSpec
DevSpec is an integrated requirements management solution that is specifically designed to provide visibility, traceability and validation of your product or project requirements. DevSpec provides a framework to create new requirements, specifications and features that can be linked to development and testing implementation projects.

DevPlan
DevPlan is a project, resource, and task management tool. It allows users to plan high-level areas of work, assign team members to work in these areas, and then track the tasks needed to complete the activities.

DevTrack
DevTrack is the leading project issue and defect tracking tool, used by development teams of all sizes around the globe. Its configurable workflows allow DevTrack to meet the needs of any organisation's development processes.

DevTest
From test case creation, planning and execution through defect submission and resolution, DevTest tracks and manages the complete quality lifecycle. DevTest combines its test management features with DevTrack and with TestLink for test automation in one integrated solution.

KnowledgeWise
KnowledgeWise is the knowledge management solution at the core of the entire suite. It is the centralised knowledge base for all company documents, including contracts, processes, planning information and other important records, as well as customer-facing articles, FAQs, technical manuals and installation guides. More information at: www.techexcel.com/products/devsuite.

Service and Support Management
Service and Support Management solutions provide enterprises with total visibility and actionable intelligence for all service desk, asset management and CRM business processes.

ServiceWise
ServiceWise is a customisable and comprehensive ITSM- and ITIL-compliant internal helpdesk solution. Automate and streamline services and helpdesk activities with configurable workflows, process management, email notifications and a searchable knowledge base. The self-service portal includes online incident submission, status updates, online conversations and a knowledge base. ServiceWise includes modules such as incident management, problem escalation and analysis, change management and asset management.

CustomerWise
CustomerWise is an integrated CRM solution focused on customer service throughout the entire customer lifecycle. CustomerWise allows you to refine sales, customer service and support processes to increase cross-team communication and efficiency while reducing your overall costs. Combine sophisticated process automation, knowledge base management, workflow, and customer self-service to improve business processes that translate into better customer relationships.

AssetWise
AssetWise aids the process of monitoring, controlling and accounting for assets throughout their lifecycle. A single, centralised location enables businesses to monitor all assets, including company IT assets, to manage asset inventories, and to track customer-owned assets.

FormWise
FormWise is a web-based form management solution for ServiceWise and CustomerWise. Create fully customised online forms and integrate them directly with your workflow processes. Forms can even be routed automatically to the appropriate individuals for completion, approval, and processing, improving your team's efficiency. Web-based forms may be integrated into existing websites to improve customer interactions, including customer profiling, surveys, product registration, feedback, and more.

DownloadPlus
DownloadPlus is an easy-to-use website management application for monitoring file downloads and analysing website download activity. DownloadPlus does not require any programming or HTML, and provides controlled download management for all downloadable files, from software products and documentation to marketing materials and multimedia files. More information at: www.techexcel.com/products/itsm/

Training
Further your investment with TechExcel: effective training is essential to getting the most from an organisation's investment in products and people. We deliver professional instructor-led training courses on every aspect of the implementation and use of all TechExcel's software solutions, as well as both service management and industry training. We are also a Service Desk Institute accredited training partner and deliver their certification courses. More information at: www.techexcel.com/support/techexceluniversity/servicetraining.html

For more information, visit www.techexcel.com or call 0207 470 5650.




31 Media
31 Media is a business-to-business media company that publishes high quality magazines and organises dynamic events across various market sectors. As a young, vibrant, and forward-thinking company we are flexible, proactive, and responsive to our customers' needs.

www.31media.co.uk

T.E.S.T Online
Since its launch in 2008, T.E.S.T has rapidly established itself as the leading European magazine in the software testing market. T.E.S.T aims to give a true reflection of the issues affecting the software testing market, which means content that is challenging but informative, pragmatic yet inspirational, and includes, but is not limited to: in-depth thought leadership, customer case studies, news, cutting-edge opinion pieces, and best practice and strategy articles. The good news is that the T.E.S.T website, T.E.S.T Online, has had a root and branch overhaul and now contains a complete archive of previous issues as well as exclusive web-only content and testing and IT news. At T.E.S.T our mission is to show the importance of software testing in modern business and to capture the current state of the market for the reader.

www.testmagazine.co.uk

VitAL Magazine
VitAL is a journal for directors and senior managers who are concerned about the business issues surrounding the implementation of IT and the impact it has on their customers. Today senior management are starting to realise that implementing IT effectively has a positive impact on the internal and external customer, and that it also influences profitability. VitAL magazine was launched to help ease the process.


www.vital-mag.net

Customer Magazine
Customer Magazine was launched to address and assist with the various challenges senior professionals face when establishing a customer-centric business. Our editorial takes a pragmatic approach to what has become a series of complex issues and delivers dynamic, provocative, and insightful articles, case studies, opinion pieces, and news stories that not only challenge our readers but also bring clarity and vision to the many challenges they face.

www.customermagazine.co.uk

31 Media www.31media.co.uk info@31media.co.uk Crawley Business Centre, Stephenson Way, Crawley, West Sussex, RH10 1TN, United Kingdom Phone: +44 (0) 870 863 6930 Fax: +44 (0) 870 085 8837




iTrinegy
Network emulation & application testing tools
iTrinegy is Europe's leading producer of network emulator technology, which enables testers and QA specialists to conduct realistic pre-deployment testing in order to confirm that an application is going to behave satisfactorily when placed in the final production network.

Delivering more realistic testing
Increasingly, applications are being delivered over wide area networks (WANs), wireless LANs (WLANs), GPRS, 3G, satellite networks and the like, where network characteristics such as bandwidth, latency, jitter and packet error or loss can have a big impact on their performance. So there is a growing need to test software in these environments. iTrinegy network emulators enable you to quickly and easily recreate a wide range of network environments for testing applications, including VoIP, in the test lab or even at your desktop.
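To give a feel for the conditions an emulator recreates, the toy Python sketch below wraps an ordinary call in artificial latency and packet loss inside a test. It is purely illustrative of the behaviour under discussion, not of how iTrinegy's products work, and every name in it is invented.

import random
import time

def degraded(call, latency_s=0.3, loss_rate=0.05):
    """Invoke call(), adding a fixed delay and a chance of simulated loss."""
    time.sleep(latency_s)             # simulated round-trip delay
    if random.random() < loss_rate:   # simulated packet loss -> timeout
        raise TimeoutError("simulated packet loss")
    return call()

def fetch_status():
    return "OK"                       # stand-in for a real network call

random.seed(1)
for attempt in range(3):
    try:
        print(degraded(fetch_status))
        break
    except TimeoutError:
        print(f"attempt {attempt + 1} lost, retrying")

A real emulator applies this kind of shaping to live traffic across the whole application, which is what makes pre-deployment testing against WAN, wireless or satellite conditions realistic.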

Ease of use
Our network emulators have been developed for ease of use:
• No need to be a network expert in order to use them;
• Pre-supplied with an extensive range of predefined test network scenarios to get you started;
• Easy to create your own custom test scenarios;
• All test scenarios can be saved for subsequent reuse;
• Automated changes in network conditions can be applied to reflect the real world;
• Work seamlessly with load generation and performance tools to further enhance software testing.

A comprehensive range to suit your needs
iTrinegy's comprehensive range of network emulators is designed to suit your needs and budget. It includes:
• Software for installation on your own desktop or laptop (trial copies available);
• Small, portable inline emulators that sit silently on the desktop and can be shared amongst the test team;
• Larger portable units capable of easily recreating complex multi-path, multi-site, multi-user networks for full enterprise testing;
• High performance rack-mount units designed to be installed in dedicated test labs;
• Very high performance units capable of replicating high speed, high volume networks, making them ideal for testing applications in converged environments.
If you would like more information on how our technology can help you ensure the software you are testing is 'WAN-ready' and going to work in the field, please contact iTrinegy using the details below:

Email: info@itrinegy.com Tel: +44 (0)1799 543 345 Web: www.itrinegy.com/testmagazine

The Software Testing Club
The Software Testing Club is a relaxed yet professional place for software testers to hang out, find like-minded software testers and get involved in thoughtful and often fun conversations. Interesting things happen at The Software Testing Club. It started out as an experiment; two years on it has turned into a vibrant online community of software testing professionals. You'll find members dedicated to their profession, often in deep conversation within the forums. However, it's more than just forums and your standard niche social network. As the club grows, new things keep happening: a job board, a mentoring group, a collaborative software testing book and a crowdsourced testing initiative called Flash Mob Testing. The Software Testing Club is a grassroots effort; it's for the members and grows according to what we believe they want. Come join and let us know what you think.

Rosie Sherry – Founder & Community Manager Email: rosie@softwaretestingclub.com Tel: +44 (0)7730952537 Web: www.softwaretestingclub.com




the last word...
To pay or not to pay
Will 2010 see a move away from proprietary testing tools and towards open source equivalents, or is the interest in open source tools just a short-term reaction to the global credit crunch? Tom Millichamp, training director of Edgewords, questions whether open source testing tools are a credible alternative.

Not surprisingly, during the past 12 months many organisations have been trying to reduce costs and streamline their testing projects. There are many open source tools available, some of which appear to be reasonably well supported; but if you are a tester considering making the move, make sure you carefully consider the advantages and the disadvantages first.

The pros and cons of open source
The first obvious advantage is the cost; the initial investment is much lower than the proprietary equivalent. Secondly, the source code is easily accessible, and organisations can also develop and customise some of the products to meet their individual specifications. However, there are some major disadvantages to using open source products. Before making the move to open source testing tools, it is important that testers fully consider the long-term implications. Sometimes what appears to be more cost effective can actually end up costing substantially more, in downtime, product faults, or having to restart a project with another software tool.

Before opting for an open source tool it is also important to check whether it comes with full documentation and support. The documentation to look for includes online help systems, user guides and installation instructions. Users should also check in advance that the tools provide sufficient support. Make sure you know where you can go when things don't work properly, and where you will be able to find skilled contractor or consultancy support for these tools. Many open source tools are well supported by the community that developed them; however, be aware that there is no contract between the user and this community, so it is important to consider where you will go for help if the community stops developing or supporting the software.

The right environment
Many of the open source functional automated testing tools have been designed for specific environments, such as the Web. Although the tools may be useful if that is the only type of application to be tested, if you have to test applications developed in a number of different environments, the usefulness of the tool becomes greatly restricted. In comparison, most of the market-leading proprietary tools, such as HP's QuickTest Professional or IBM's Rational Robot, support multiple environments including Web, Java, .NET, standard Windows, and terminal-based applications. This offers the great advantage of only having to learn the tool once, yet having the ability to test the full range of different applications. It also provides the opportunity to develop full end-to-end automated tests spanning several systems.

Training
As a professional trainer, I am concerned that there is not adequate, high quality training available for the open source tools. Many of the automated testing tools are complicated to use; they have programming languages to learn, and should be set up and deployed using established industry best practices. To get the best results from test automation it is important to fully understand the product and its capabilities, therefore good quality, comprehensive training is a must. This is an area where proprietary tools are very well established, with professional training courses available to support them.

It is also important to consider that many of the vendors of proprietary tools are rapidly adding new and exciting functionality to their products, to enable them to compete against the growing number of open source and smaller vendor offerings. This can only be a good thing. Many of the proprietary tools offer much more functionality than their open source peers, but I would welcome some in-depth, side-by-side evaluations of open source versus proprietary tools.

So will 2010 see continued growth and interest in open source testing tools? That really is up to you, the testers in the industry; but after considering the advantages and disadvantages, we expect that as the economy continues to recover, interest and investment will return to the more mainstream products. I suggest that resources may be better utilised in up-skilling testers on the new features of the latest proprietary software through training, enabling them to make the most of their established and fully supported tools.
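As a concrete illustration of the web-focused open source tools discussed above, a minimal browser test using the open source Selenium WebDriver might look like the following Python sketch. It assumes the selenium package and a Firefox driver are installed; the URL and the expected title are placeholders.

from selenium import webdriver

def test_homepage_title():
    # Drive a real browser against the site under test.
    driver = webdriver.Firefox()
    try:
        driver.get("http://www.example.com/")
        assert "Example" in driver.title, driver.title
    finally:
        driver.quit()  # always release the browser

if __name__ == "__main__":
    test_homepage_title()
    print("homepage title check passed")

Sketches like this show both sides of the trade-off: the tool itself is free and scriptable, but the documentation, training and support around it are exactly what the article suggests you weigh before committing.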

Tom Millichamp
Training director, Edgewords
www.edgewordstraining.co.uk



