TEST – Innovation for Software Quality
Volume 4: Issue 5: October 2012

A bright future – What graduates can expect from a career in testing.

Inside: Test automation | Agile testing | Mobile app testing

Sponsoring TEST’s 20 Leading Testing Providers – pages 35-47

Visit TEST online at www.testmagazine.co.uk
A booming industry
Read through the following pages and it’s easy to come to the conclusion that, despite the prevailing economic gloom, testing is bucking the trend and actually booming. Many of the features herein address the challenges of testing one of the main driving factors in this boom – mobile apps or, to be more precise, applications for mobile devices.

The figures are astonishing. To quote from the Borland feature, The future of mobile testing: Fixing a moving landscape, on page 20: “This year, 100 million people will use smartphones. Maybe you’re reading this article on one of the 118.9 million consumer tablets that will be sold this year. There will soon be more mobile-connected devices on earth than there are people to use them and Bring Your Own Technology (BYOT) has raised the stakes higher still.” They reckon that with all the complexity in this market there are as many as 1,800 different device platforms, and the picture is changing every day with new releases of both hardware and operating system software. It is certainly an interesting time to be involved in testing.

The Apple iPhone is of course a very important part of this complicated picture. The Cupertino giant has recently launched the iPhone 5. As you would expect from Apple, it is a beautifully ergonomic and user-friendly (to use the quaint old term) piece of technology, but its launch hasn’t been without controversy. Apple has ditched the Google Maps-based app of previous models and created its own Apple Maps to take its place. Initial reports have suggested that Apple has, in the process of launching the app, managed to lose Stoke-on-Trent as well as placing a Chinese restaurant in the middle of a river – among other gaffes and glitches. I can just about imagine how difficult it is to check that everything is in its right place on a global map with the fractal detail needed by something like Apple Maps, but I’m tempted to wonder whether this sort of thing would have happened with Steve Jobs still in charge. To be fair, given the scope of the project, it probably would have.

Finally, I would urge you to check out our 20 Leading Testing Providers section at the back of this issue (starting on page 35) – an essential guide to partners and suppliers to help testers in this mind-bogglingly complicated era.
Until next time...
Matt Bailey, Editor
© 2012 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160
Editor: Matthew Bailey – matthew.bailey@31media.co.uk – Tel: +44 (0)203 056 4599
To advertise contact: Grant Farrell – grant.farrell@31media.co.uk – Tel: +44 (0)203 056 4598
Production & Design: Toni Barrington – toni.barrington@31media.co.uk; Dean Cook – dean.cook@31media.co.uk

Editorial & Advertising Enquiries: 31 Media Ltd, Unit 8a, Nice Business Park, Sylvan Grove, London, SE15 1PD. Tel: +44 (0)870 863 6930. Fax: +44 (0)870 085 8837. Email: info@31media.co.uk. Web: www.testmagazine.co.uk

Printed by Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
Contents – October 2012

1 Leader column – Mobile apps bring boom time to testing.

4 News

6 A bright future – We are told that there is a high demand for qualified testing personnel these days. Indeed, testing seems to be that rarest of industries – one that is expanding and flourishing. So what can it offer graduates from the wider, non-IT world? Matt Bailey spoke to test analyst Jenny Higgins, a recent alumna of the Sogeti Graduate Programme.

10 Test early, test often – As a prime mover in the lottery software sector, GeoSweep’s products need to be developed to the highest technological standards. Company CTO Matt Young explains why this means testing early and testing often, and adopting an Agile approach.

14 Automated testing for continuous integration – While the benefits of automated testing as part of a continuous integration process are accepted, there is an ongoing debate about what tests are needed for the build process. Nigel Wilson, head of development, IT consultancy division at BJSS, explores the advantages and disadvantages of the different categories of testing and offers some advice on which to use.
20 The future of mobile testing: Fixing a moving landscape – The challenges for developers and testers in the brave new world of the mobile device app are daunting. Their products need to work smoothly across as many as 1,800 different device platforms while predicting what device manufacturers will come up with next. A challenge for any organisation, but Borland has some advice.

24 So you want another test environment? – Explaining the benefits of service virtualisation, Parasoft offers an alternative test environment for those that need one.

26 Cloud in a box – Scott Murphy, HP divisional leader at Avnet Technology Solutions, offers his five steps to delivering Agile development and testing in the cloud.
27 Design for TEST – Testing Java programs the easy way – For those interested in testing Java code, in this issue Mike Holcombe looks at JWalk, a very useful free tool that is available for download.

28 Addressing mobile load testing challenges – The advent of mobile apps and mobile websites has brought a set of new load testing challenges that must be addressed by any load testing solution. Neotys reports.
32 Integrated test data management – the key to increased test coverage – Test data management should be a central part of any testing project; indeed, it is becoming the key to increasing test coverage. TEST editor Matt Bailey spoke to Wolfgang Platz, founder and CEO of TRICENTIS, about his company’s approach to this keystone of the testing process and its potential for optimising testing.
35 20 Leading Testing Providers

48 Last Word: I Told You So... – Desperately trying to resist the urge to gloat when a project fails in exactly the way he predicted, Dave Whalen is relishing being right.
News

Two-thirds inadequately test mobile applications
The fourth annual World Quality Report has revealed that organisations are struggling to manage the challenges of the mobile era, with only one-third of those surveyed currently formally testing their mobile applications. Where organisations are conducting mobile quality assurance (QA), they are primarily focused on performance (64 percent) and functionality (48 percent), with a mere 18 percent of organisations focusing on security. Even more worrying, it appears that testing organisations are unable to address the new challenges posed by the increasingly digital environment, due to a lack of the right resources, tools and methods. Yet the ability to effectively test the quality of software applications has never been more critical to an organisation’s reputation and operations.

The report revealed that most businesses (51 percent) still run testing as an in-house function. Only 13 percent have moved to a service fully managed by an external provider. Unfortunately, those organisations managing testing in-house have internal QA capabilities that are failing to keep pace with new technologies. The report reveals a lack of confidence in firms’ software QA capabilities, with more than half of respondents (59 percent) characterising their internal QA teams’ knowledge of the latest testing tools and technologies as merely ‘average’. Two-thirds of respondents admit they do not have the right tools to test mobile applications, and around one-third lack the appropriate testing methodologies and processes (34 percent) and specialist expertise (29 percent) necessary.

The report also highlights the impact of cloud computing on testing and QA. Almost a quarter of the applications of the firms surveyed are now being hosted in the cloud, a figure expected to rise to one-third by 2015. As a result, testing in the cloud is expanding significantly as QA professionals become more comfortable with the cloud as a testing platform. And while more than a quarter of testing now occurs in a cloud environment, the report shows this is forecast to increase to 39 percent by 2015. With just four percent of companies expecting not to use cloud technologies for QA over the next three years, compared with 31 percent just two years ago, the perceived benefits of cloud-based QA clearly outweigh the pitfalls.

The report also reveals that as companies seek to drive a reduction in costs and time to market, over 42 percent of QA budgets increased over the last year, while just 11 percent shrank. The majority of respondents also expect their QA budgets to rise by 2015, with just one-fifth forecast to fall. As organisations strive to centralise and consolidate QA for further cost reduction and efficiencies, the study highlights a significant trend towards investment in Testing Centres of Excellence (TCOE). While only six percent of firms currently have a fully operational TCOE, this is a 50 percent increase on 2011. Nearly two-thirds are currently in the process of building or planning a TCOE as they look to secure further competitive advantage.
The next generation of apps
Apple’s App Store celebrated its fourth birthday in July, and the number of apps for various smartphones and tablet computers is now said to be well over a million – a number that grows by the day. While most apps are free, industry analyst Juniper Research predicts that by 2016 mobile apps will generate revenues of $52bn.

Technology such as near field communication (NFC) is bringing a host of mundane equipment to life via phones. If you need to fill your bath, water the plants or turn on the heating in your car, it can be done remotely using a smartphone as a remote control. Augmented reality (AR) is set to change the way we look at things. Point a smartphone at a street, shop or even an exhibit at a museum and it will automatically detect your location to provide on-screen details that include reviews, background information and money-off vouchers. Juniper Research predicts that over two billion AR apps will be downloaded by 2017.

According to Kevin Galway, of app designer bss Digital: “Over the next few years, businesses will move away from using PCs and laptops to provide staff with tablet computers. Within a matter of a few clicks staff can access inventories, risk assessments, CRM data, competitor intelligence and so on. Such mobile computing power could revolutionise the office and drive productivity.”

Galway does sound a note of caution though: “Figures show that as the number of apps has increased, their lifespan has decreased. Research has found that only around 20 percent of users return to an app after the first day of downloading it, and the average app has a less than five percent chance of being used for more than 30 days.”
Half of companies suffer web application security breaches
A study by Forrester Consulting, conducted on behalf of Coverity to examine application security and testing practices, has shown that web application security incidents have become increasingly common and expensive: the majority of companies responding to the study experienced at least one breach in the last 18 months, and many lost hundreds of thousands, if not millions, of dollars as a result. At the same time, the study found that the majority of companies have yet to implement secure development practices, most often citing time-to-market pressures, funding and the lack of appropriate technologies suitable for use during development as their primary roadblocks.

Based on a survey of 240 North American and European software development influencers from companies that develop web applications, the report revealed that security incidents are prevalent and expensive. Just over half of respondents had at least one web application security incident in the prior 18 months. Eighteen percent put their losses at more than $500,000, while another eight percent saw losses in excess of $1 million. Two reported losses of more than $10 million.
4G iPhone ushers in new era of mobile working

As Apple releases the latest version of its pioneering iPhone, the iPhone 5, and the business world gears up to benefit from its high-speed 4G capabilities, early adopters are criticising its mapping app after the controversial switch from Google Maps. According to Apple, the combination of the A6 processor with 4G will render the iPhone as fast as any broadband internet connection, making it possible to carry out the same tasks whether its user is at home or on the move.

Dan Wagner, CEO of mPowa, which relies on mobile broadband to deliver its service, said: “It’s about time the UK progressed into the world of 4G. Our internet connection, be it on mobile or fixed location, has not been as fast as that of other international markets for quite some time now. It will dramatically enhance the competitiveness and efficiency of British companies in this ‘always on – always connected’ business landscape.”

Users of the new map app on the smartphone, however, have complained that it looks “rushed, half-baked and delivers a really poor user experience”. The app is also falling down on the detail: it reportedly placed a Chinese restaurant in the middle of a river and has misplaced many other towns and features. Many users are said to be resorting to using Google Maps in their web browsers.
Testing is maturing
This year’s Test Maturity Model integration (TMMi) survey results have shown a considerable improvement over the last survey across the ten industry sectors that took part, according to the organiser. The latest (2011) survey sees a significant reduction in the number of companies at Level 1; only 11 percent of companies are still working in a chaotic manner, a major improvement on 2010, when 63 percent of respondents were at TMMi Level 1. The 2011 survey also recorded growth in the average maturity of test process, with 89 percent now achieving Level 2 and heading for Level 3 – an overall improvement in excess of 100 percent on the 2010 results, where 37 percent were at TMMi Level 2 heading towards Level 3. A small number of the companies demonstrated strength across the board in the survey, meaning it is likely these companies are well on their way to Level 3. Three companies (MTP, Aricent and MSTB – see the TMMi Foundation website for details) have achieved Level 3 certification over the last 12 months, and the first company achieved TMMi Level 4 in 2012.

The organisers say that while it is encouraging to see a 33 percent increase in organisations having a well-established test policy, which in turn is reflected in a very large increase in the use of goal-oriented test process performance indicators, these increases are not being reflected in the actual product quality being monitored against plan and expectation. According to the organisers, this brings into question whether the KPIs implemented have been properly considered and are the right metrics, especially considering the value and impact which metrics focused on quality can deliver. In addition, it may be early days, but initial results could indicate that organisations are just going through the process of implementing controls without communicating these changes, and that there is a time lag to implementing improvements.

www.tmmifoundation.org
New testing jobs promised

Edge Testing Solutions, which claims to be Scotland’s fastest-growing software testing and risk-management company, says it has accelerated its growth strategy, targeting new markets and creating up to 20 new skilled jobs with the opening of a new office in Birmingham. “We’re delighted to be marking Edge’s fifth anniversary by expanding upon our work down south and creating new jobs,” says Susan Chadwick, co-founder of the company. “Despite the largely tough trading conditions for UK businesses we have delivered solid and sustained growth. It’s exciting to be able to extend our geographical reach to England. As always though, we remain fully committed to our client base in Scotland and will continue to expand here, as well as down south.”

Edge’s rapid expansion has been fuelled by the impact of technological risk and its increasing threat to the digital economy, particularly in the financial services sector. Software ‘glitches’ hit the headlines every day, with technical problems on the Nasdaq stock market marring Facebook’s debut as a public company, and embarrassment for US-based Knight Capital Group last month resulting in $440 million of capital losses and a devastating plunge in the company’s share price.
A bright future

We are told that there is a high demand for qualified testing personnel these days. Indeed, testing seems to be that rarest of industries – one that is expanding and flourishing. So what can it offer graduates from the wider, non-IT world? Matt Bailey spoke to test analyst Jenny Higgins, a recent alumna of the Sogeti Graduate Programme.
Is testing bucking the trend? It certainly appears that there is great demand for qualified software testers out in industry – perhaps this is no surprise when you consider the sheer volume of software being released these days and the critical nature of a significant proportion of it. What may be more surprising is that there is demand for graduates from a broad range of academic backgrounds, not just the IT or computing experts one would expect.

A passion for technology

So what sort of career could someone with, say, an arts degree expect in what could be considered a highly technical field like software testing? Jenny Higgins studied history at Brunel University. After graduation she was advised to check out the Sogeti graduate software testing programme by a friend of the family. Although – perhaps like everyone else in this day and age – she was no stranger to information technology, her knowledge didn’t really go any deeper than knowing her way around a PC and being able to get the best out of the software she used. Nevertheless, she was sufficiently impressed to pursue a career in testing and is now a test analyst for Sogeti, working in Fleet Street, London.

“It looked like a great opportunity, even though it was an area I hadn’t really considered before,” says Higgins. “Although it was outside of my comfort zone, it looked like an expanding industry with a lot going on and a lot to learn. It looked like a bit of a challenge and a career that I could hopefully progress in.”

The lack of in-depth IT knowledge didn’t put her off. “Having basic IT skills is common these days, but I wouldn’t say I had in-depth knowledge – maybe slightly more than average, but certainly no knowledge of software coding or anything like that. I am quite interested in solving things though. I’m quite into puzzles and things like that and it helps to have a forensic approach. I think I pick up on certain things quite quickly that other testers might miss. Your own personality comes into it; I think people with, for example, history degrees might spot different things and perhaps look at the software in a different way. My degree has given me a lot of analytical skills – you’re constantly analysing different bits of information to try and draw your own conclusions, and there is a definite comparison with analysing a piece of software to make sure it’s doing what it should.”

The scheme is comprised of compulsory ISEB software test training and certification, assessment days, mentoring, exams, and on-site shadowing. As it progressed, Higgins completed classroom training, certification in software testing, learnt new methods, worked on site and shadowed consultants. Many of the graduates on the scheme have a computer science background but, like Jenny Higgins, a number have degrees in other fields, including English, history or geography; the main qualification is that candidates have a passion for, and interest in, technology. Trainee consultants on the programme enjoy real roles delivering results for many highly regarded customers. They are expected to use initiative continually while being supported by a group of people that coach them through many of the tougher challenges. They discover how to work with customers and develop a solid understanding of the customer’s needs. They also get an understanding of how relationships with colleagues and customers provide the insight needed to produce well-structured and integrated solutions to complex business issues.

“I liked the look of the programme; it seemed to be all-encompassing and it didn’t look like you’d be thrown into the deep end without knowing what’s going on,” says Higgins. “It has provided me with a very good basis. We did six weeks of classroom training, plus the ISEB course, a quality centre course and a soft skills course. It was all-encompassing and gave us a lot of information to take on the two-week placement in the telecoms sector where I shadowed a tester on a project. We then had a six-month probation period where we were looked after a bit more and mentored by senior experienced testers who helped us to learn the ropes.”

With her training at an end, Jenny Higgins has now been with Sogeti for fifteen months. Since she started working as a tester she has been involved with two major projects.
“I worked on a project which was mainly end-to-end testing. It was right towards the end of the project, where we were getting ready for the software to go live. That was really adding the finishing touches, making sure all the serious defects were out of the system so it could go live. The project I’m working on at the moment is much more mid-development.”
Out of the comfort zone
“It has been a big change for me to go into a sector that is completely different from what I am used to as a history graduate,” says Jenny Higgins, “but I have learnt so much about the testing industry. I’ve gone from being someone who only knew software as an end user to someone who sees the whole process of how it’s made. I learnt how much effort goes into making a bit of software or a website; I didn’t realise how many stages they go through before they are seen by end users. I have a much more sympathetic view of the software development process now.”

As well as being out of the comfort zone of the average arts graduate, outside of IT, testing isn’t really a high-profile industry. “If you’re not in IT you’re probably not very aware of testing as an occupation,” says Higgins. “You might hear about games being tested before a major new release perhaps. I don’t think people outside of the software industry really know what it’s about. I only had a very vague knowledge before I started the graduate programme, but I soon realised how important it is. As an industry, when you get involved you realise it’s not quite as technical and complicated as it first appears. It’s not all about coding and programming; there are lots of other aspects that people need to work on as well. I have picked up a little bit since I have been testing professionally and I’m sure I’ll pick up more as I gain experience, but it’s not something that I have needed to get deeply into. I did learn a little bit of basic Java at one point, for example, but I’ve found you don’t need it as much as you may think. Quite often I’m doing the more end-to-end or user testing rather than delving too deeply into the code. I’ve never had a task where I have had to properly look at the code.”

As to whether her career will lead her into a more technical area, Higgins is pragmatic: “It’s difficult to say because it depends on what projects you work on and what you’re required to do, but if I’m honest it’s probably not the side of testing that I’d like to go into. It wouldn’t be something I’d seek out, but if it came along I’d rise to the challenge as best I could. I much prefer looking at the program itself rather than the programming, and getting to see the end result; the real technical bits aren’t really where my strengths lie. Many of the other graduates I trained with were more into that side though.” An important aspect of any job is where it leads. It is perhaps unsurprising, given her background, that Jenny Higgins sees herself more in a management role than a technical one in future. “One of the things I like about testing is that you’re dealing with people all the time and there’s a lot of sharing opinions and listening to other peoples’ views. I like that aspect of the job.”
Job satisfaction

So Jenny Higgins hasn’t yet transformed herself into a fully fledged techie, but she does get satisfaction from working in a technical field. “It’s great that you get to really dig into the software and give your opinion on what you think about it, depending on what the testing requirements were. You get a lot of satisfaction from using your brain. My pet peeve though is acronyms – they’re all over the place and they drive me mad! Some of the acronyms are actually longer than the things they’re meant to be short for.”

With the testing industry now crying out for new blood, it could be time to cast the net a little wider, although it seems there has always been a place in testing for a broader skills base than just the technical. “In testing, it’s not just about the technical side,” confirms Higgins. “Sometimes it’s good to have a different set of eyes and different skills. When you don’t come from an IT background you can usefully occupy the middle ground between the technical guys and the user, and this makes you a useful part of the team. You know how to test, but you can look at software more from a user’s perspective. You might pick up issues that a more technical tester may overlook. Of course there are many testers from a technical background that would notice those sorts of things, so it is a generalisation, but sometimes not knowing all the technical stuff can help.”
Test early, test often

As a prime mover in the lottery software sector, GeoSweep’s products need to be developed to the highest technological standards. Company CTO Matt Young explains why this means testing early and testing often, and adopting an Agile approach.
GeoSweep has a track record of delivering projects to tight deadlines – including creating our core platform from a standing start (ie, including having to hire the engineering team!) in just nine months. We credit this success to our testing and development processes, in which we use test-early/test-often approaches as well as Agile techniques.

GeoSweep is a technology company that makes innovative games for lotteries around the world. Using Google Maps technology, our flagship game, GeoSweep, divides the landscape into unique squares, termed ‘Geos’. Players can navigate the map and stake their claim on locations that appeal to them, such as their home or a local landmark. As with traditional games, each selection represents a chance to win in a random lottery draw. Considering we want to appeal to a technology-savvy audience, but also make the game as accessible as possible to all players, it is vital that the platform is developed to the highest technological standards. This starts with a talented team: we look at 100 CVs for every hire made to our team of just over a dozen top-tier engineers. As a result, we believe our technology is superior to most other games on the market.
The testing process

The process around testing starts before we even write the code. Part of our approach to defining and analysing what needs to be built is to understand how we will test that it works. It may sound surprising, but constantly asking the question “How will I know this does what I want?” is actually a very good way of checking that your requirements are fully thought-through and unambiguous. Similarly, by constantly asking ourselves about edge cases (eg, “What happens if two users try to buy the same Geo at the same time?”) we solve a lot of problems at the requirements stage that would otherwise manifest themselves as bugs in the software much further down the line.

When we move into design and development, we continue to think about ‘testability’ as we build the software. For instance, the automated tests we create at the same time as writing the code not only check that the code does what we wanted when it’s built, but continue to be run long afterwards, often several times a day, to check that future changes to the software do not introduce unintended changes in behaviour. Another thing that’s key to us is that our developers and QA engineers work closely together. Developers help QA folks by providing info about how to create edge-case scenarios; and in turn QA engineers often help developers in working out how to reproduce ‘intermittent’ bugs.

When the code is passed over to our QA team, they create more automated tests, as well as formal manual tests (for things that it does not make sense to automate).
Anything that doesn’t pass these tests goes straight back to the development team for a fix. Every time we do a release, we run anything from 30 to 100 percent of all the manual tests (depending on the size of the release and what has been changed in it) to check the software is still behaving as designed and to look for any other bugs. One of the benefits of doing at least some manual testing is that, with a good QA team such as ours, testers are not just mindlessly following a script but are also constantly on the lookout for other bugs. For example, if a drop-down menu looks a bit odd in a particular browser, a bug will be raised by a QA engineer even if that menu still ‘works’. We test the software in a variety of browsers and configurations to flush out issues that only occur in particular circumstances.

We do regular internal demos of pre-production releases of the software, partly to keep everyone in the company up to speed with upcoming new features and partly to get as many pairs of eyes on the product as possible before our customers see it. In addition to the famous ‘demo effect’ (whereby whenever you do a formal demonstration of a product you always seem to discover a new way to break it), questions from the floor in such sessions often either uncover usability issues or inspire further features.

As a B2B supplier, one of the advantages GeoSweep has is our UK website. While we do operate the game in the UK, our main customers are international lotteries. As such, the British version of the game can be used as a fully functional, live proof-of-concept platform on which we can perfect the development of new features for roll-out internationally. In this way the UK player base experiences new features first and, by judging player feedback and tackling any issues which arise on what is essentially a full-scale trial, we can be reactive and Agile as we refine the product.

Finally, when we do find bugs in our production environment, part of the fix process is to answer the question “How do I make sure we never encounter this problem (or similar ones) again?” Sometimes that involves writing more tests; other times it is feeding back into our development process or coding standards. But the common thread is seeing how we can learn from problems and constantly improving the quality of the software. It’s because of these efforts that our B2B customers find our software remarkably reliable.
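The “two users buy the same Geo at the same time” edge case mentioned above lends itself naturally to an automated check. The sketch below is purely illustrative – GeoStore and its claim logic are hypothetical stand-ins for the real purchase service, not GeoSweep’s actual code – but it shows the shape of such a test: release two buyers at exactly the same moment and assert that exactly one succeeds.

import java.util.Map;
import java.util.concurrent.*;

// Hypothetical purchase service: putIfAbsent is atomic, so at most one
// buyer can ever claim a given Geo.
class GeoStore {
    private final Map<String, String> owners = new ConcurrentHashMap<>();

    boolean buy(String geoId, String userId) {
        return owners.putIfAbsent(geoId, userId) == null;
    }
}

public class ConcurrentPurchaseCheck {
    public static void main(String[] args) throws Exception {
        final GeoStore store = new GeoStore();
        final CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Callable<Boolean> alice = () -> { start.await(); return store.buy("geo-42", "alice"); };
        Callable<Boolean> bob = () -> { start.await(); return store.buy("geo-42", "bob"); };

        Future<Boolean> a = pool.submit(alice);
        Future<Boolean> b = pool.submit(bob);
        start.countDown(); // release both buyers simultaneously

        // Exactly one purchase must succeed - never zero, never two.
        if (a.get() ^ b.get()) {
            System.out.println("OK: exactly one buyer won geo-42");
        } else {
            System.out.println("FAIL: expected exactly one successful purchase");
        }
        pool.shutdown();
    }
}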
Agile development

GeoSweep’s development methodology uses Agile techniques – because rapidly iterating the development of new features is the quickest way to optimise our products. What matters most with Agile is how you do it. Some companies seem to assume that being Agile is just a case of following a strict set of rules and procedures, or even just throwing away all their documentation and deadlines! For them, Agile is spelled C.H.A.O.S. The true Agile approach requires a lot of trust, pragmatism and self-discipline. For tech folk, external deadlines don’t disappear just because you chose a different development methodology; and for business folk, trade-off decisions between different features and fixes still need to be made.

Aside from having a great team, part of what makes it possible for us to deliver to tight deadlines is ensuring that everyone in the company has a clear understanding of what we’re trying to achieve and of what the priorities and risks are. This allows developers and QA engineers to make sensible day-to-day, and even hour-to-hour, decisions about what to work on next and how to build or test a particular feature. Equally, it means that the product management side of the business responds quickly to issues that come up during the development process – whether it’s unexpected interactions of different requirements (including external requirements such as regulatory ones) or technical problems uncovered in the course of implementation. Moreover, because everyone – from QA to development to design to product management – has a clear understanding of our non-functional requirements, we don’t get caught out trying to bolt them on as an afterthought at the end of the project. That may sound like obvious best practice, but our B2B customers tell us that it’s not common practice. For instance, when we recently did a major product delivery to a new customer, it was the first time ever that they had taken software through their security audit process and not found any issues. Needless to say, they were very pleased.
Matt Young, chief technology officer, GeoSweep – www.geosweep.com
Automated testing for continuous integration

While the benefits of automated testing as part of a continuous integration process are accepted, there is an ongoing debate about what tests are needed for the build process. Nigel Wilson, head of development, IT consultancy division at BJSS, explores the advantages and disadvantages of the different categories of testing and offers some advice on which to use.
Although the benefits of regular automated testing as part of a continuous integration process are widely accepted, there has been considerable debate (both within BJSS and the wider software development community) as to the nature of the tests that are executed as part of the build process. This article explores the advantages and disadvantages of the different categories of testing and provides a series of recommendations as to the adoption of these categories in future projects. Whilst it is expected that these recommendations will provide the default BJSS position for new projects, the specific approach will be tailored to the demands and circumstances of each project.
Automated test categories

We consider two different forms of automated testing:

End-to-end tests: ‘black box’ tests that exercise the system and assert its behaviour through external interfaces (both ‘public’, such as APIs and messaging interfaces, and ‘private’, such as databases and log files). These tests have no knowledge of the code structure or the programming language of the system under test and operate against a deployed system.

Unit tests: ‘white box’ tests that are tightly coupled to the source code of the system under test. These tests operate within an independent simulated environment and do not rely on a deployed system for execution. Unit tests can be further subdivided into ‘pure’ unit tests, where all dependencies are mocked out without exception, and ‘link’ unit tests, where some or all dependencies reflect the real-world interactions of the system.
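To make the ‘pure’ versus ‘link’ distinction concrete, here is a minimal JUnit 4 sketch of the two styles. The PriceCalculator and TariffRepository types are invented for illustration (they are not from the BJSS article): the pure test stubs out every dependency, while the link test wires in the real collaborator, so it also exercises the interaction between the two classes.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Illustrative production types: a calculator that depends on a repository.
interface TariffRepository {
    double ratePerUnit(String tariff);
}

class JdbcTariffRepository implements TariffRepository {
    public double ratePerUnit(String tariff) {
        // A real implementation would query a database; a flat rate keeps
        // this sketch self-contained and runnable.
        return 0.15;
    }
}

class PriceCalculator {
    private final TariffRepository repo;
    PriceCalculator(TariffRepository repo) { this.repo = repo; }
    double price(String tariff, int units) { return repo.ratePerUnit(tariff) * units; }
}

public class PriceCalculatorTest {

    // 'Pure' unit test: the dependency is mocked out, so only
    // PriceCalculator's own arithmetic is exercised.
    @Test
    public void pureUnitTestMocksTheRepository() {
        TariffRepository stub = tariff -> 0.10; // assumed behaviour of the collaborator
        assertEquals(1.0, new PriceCalculator(stub).price("standard", 10), 1e-9);
    }

    // 'Link' unit test: the real repository is wired in, so the test also
    // verifies the collaboration between the two classes.
    @Test
    public void linkUnitTestUsesTheRealRepository() {
        PriceCalculator calc = new PriceCalculator(new JdbcTariffRepository());
        assertEquals(1.5, calc.price("standard", 10), 1e-9);
    }
}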
End-to-end tests

End-to-end tests grant numerous advantages over unit tests:
• They most closely reflect the expectations of the system’s users. A user does not typically care whether a failure is down to a specific class, subsystem, etc.
• They provide a holistic approach to testing that exercises the entire lifecycle and all real-world dependencies. End-to-end tests identify issues not only in bespoke software, but also in configuration, deployment, data setup, and third-party software. This is particularly important in Enterprise-grade systems that usually involve a combination of proprietary and off-the-shelf components working together.
• They are typically more meaningful in terms of software acceptance, as they can reflect a complete feature rather than just a partial implementation of that feature.
• They can encourage better ‘design for supportability’, since a meaningful production environment is unlikely to provide the same level of access that a developer enjoys with unit tests. If end-to-end tests are consistently hard to diagnose, then production issues are likely to follow the same pattern.
End-to-end tests also have disadvantages:
• They can produce false positives where multiple underlying defects combine to provide, fortuitously, the expected external behaviour. They can’t detect if a system is doing the right thing in the wrong way.
• Since end-to-end tests only interact with a system through a limited range of entry points, it may not be possible to exercise certain aspects of the system through these interfaces (especially technical aspects around concurrency, fairness, temporal behaviour, etc).
• End-to-end tests can be difficult to produce during the early stages of development, when the external interfaces they must hook into are yet to be defined or implemented.
• They are typically slower than corresponding unit tests due to their reliance on prior system deployment and increased IPC and network requirements.
• End-to-end tests are more at risk of intermittent failures if poorly written.
• Significant diagnostic effort can be required to determine the cause of an end-to-end test failure, as it can often be due to a brittle test and not the system under test.
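For contrast with the unit tests discussed next, a black-box end-to-end check drives the deployed system purely through a public interface, with no knowledge of the code behind it. This sketch uses only the JDK’s HttpURLConnection; the endpoint URL and expected payload are invented for illustration.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Black-box end-to-end check: asserts only on externally observable
// behaviour (status code and response body) of a deployed system.
public class OrderApiEndToEndCheck {
    public static void main(String[] args) throws Exception {
        // Illustrative endpoint; a real suite would read this from test configuration.
        URL url = new URL("http://localhost:8080/api/orders/1234");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        if (conn.getResponseCode() != 200) {
            throw new AssertionError("Expected HTTP 200, got " + conn.getResponseCode());
        }
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        if (!body.toString().contains("\"orderId\":1234")) {
            throw new AssertionError("Order 1234 missing from response: " + body);
        }
        System.out.println("End-to-end check passed");
    }
}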
Unit tests

Unit tests have the following advantages over end-to-end tests:
• They are faster to execute, which can make saturation testing of multiple permutations more feasible than in end-to-end testing.
• They are more focused – a failure in a unit test will typically lead a developer straight to the problem.
• They facilitate testing of all aspects of the system, including data structures and algorithms that are not apparent or accessible from outside.
• Careful use of mocking allows the impact of very specific failures and race conditions to be tested in a way that may be difficult to exercise otherwise.
• Unit tests can encourage good design, as design for testability can improve consistency of abstraction and reduce coupling.
• Through careful manipulation of internal state, unit tests can provide deterministic testing of aspects of the system that would be at best non-deterministic as an end-to-end test.
• They can be a significant aid to refactoring where interfaces are static but the implementation is changing.
• Where unit tests are well commented, they can provide ‘live documentation’ for the expected usage of a particular API.

Some of the disadvantages of unit tests are:
• Unit tests can be a hindrance to refactoring when interfaces change, or when they have been written in a particularly brittle manner (eg, assert method X is called Y times).
• Unit tests can lead to false confidence in the quality of a system because they don’t exercise configuration, deployment, or other real-world dependencies, thus producing the ‘tests pass but system fails’ scenario.
• ‘Pure’ unit tests can lead to false positives where the author of mock objects makes incorrect or optimistic assumptions about the behaviour of the mocked-out classes. For mocking to be effective, both the syntax and semantics of the mock must be correct, not just the syntax.
• ‘Pure’ unit tests can fail to detect performance problems that might be detected through ‘link’ unit tests or end-to-end tests.

The ‘false confidence’ and ‘false positive’ issues are taken particularly seriously, since over-reliance on unit tests might lead to issues being discovered later in the development process than they otherwise would be. This can contravene the ‘risk-first’ mantra of the Enterprise Agile approach.
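The ‘assert method X is called Y times’ brittleness is easy to demonstrate. In the hypothetical sketch below, the first test pins the exact interaction count, so a harmless refactoring (say, batching or retrying sends) would break it without any real defect being present; the second test asserts the observable outcome instead and survives such changes.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

interface Mailer { void send(String to, String body); }

class Greeter {
    private final Mailer mailer;
    Greeter(Mailer mailer) { this.mailer = mailer; }
    void greet(String user) { mailer.send(user, "Hello " + user); }
}

public class BrittleMockExampleTest {

    // Hand-rolled spy recording how the collaborator was used.
    static class SpyMailer implements Mailer {
        int calls = 0;
        String lastBody = "";
        public void send(String to, String body) { calls++; lastBody = body; }
    }

    // Brittle style: pins the exact number of interactions.
    @Test
    public void assertsMailerCalledExactlyOnce() {
        SpyMailer spy = new SpyMailer();
        new Greeter(spy).greet("ada");
        assertEquals(1, spy.calls);
    }

    // More robust style: asserts the outcome the user actually cares about.
    @Test
    public void assertsOnTheMessageActuallySent() {
        SpyMailer spy = new SpyMailer();
        new Greeter(spy).greet("ada");
        assertEquals("Hello ada", spy.lastBody);
    }
}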
Some recommendations

End-to-end tests and unit tests each have a very different focus within the automated testing of a system. They have different advantages and disadvantages that make it impossible for one to completely eclipse or replace the other. They play mutually complementary roles, with end-to-end tests verifying ‘what the system does’ and unit tests verifying ‘how the system does it’ – both of which are equally important. As such, it is recommended that both unit tests and end-to-end tests are executed as part of a continuous integration build. End-to-end tests should cover all of the functional ‘happy paths’ through the system, and unit tests should cover all classes.

‘Link’ unit tests are typically preferred to ‘pure’ unit tests for business-as-usual paths, since they mitigate some of the risks associated with false assumptions and real-world dependencies. ‘Link’ unit tests also avoid the wasted effort of repeating or simulating the behaviour of real objects in mock objects. ‘Pure’ unit tests can still add significant value to testing of specific areas, such as algorithms, data structures, and otherwise non-deterministic or subtle behaviours.

Time pressures on the build might dictate that a proportion of end-to-end tests are relegated to an overnight build, but it is recommended that this is reserved for established, mature tests, and that immature tests that exercise newer areas of code should still be run with each build until they have ‘bedded in’.
Test coverage

Code coverage is considered a useful metric that can indicate gaps in test coverage. Additionally, for end-to-end tests, code coverage can identify areas of orphaned code that are unreachable from external interfaces. It is recommended that coverage for each type of test is monitored in isolation rather than combined, since a combined coverage figure might hide orphaned but unit-tested code.

Although an absolute test coverage figure can be useful as a target for estimation purposes, or even as a contractual obligation, it should always be considered pragmatically. Irrespective of the project target, there may be some areas that naturally demand higher unit testing (eg, algorithms) and some lower (eg, simple data carriers with getters and setters). We shouldn’t be writing trivial tests for the sake of coverage, nor should hitting the coverage figure alone be an indicator that testing is complete. The change in coverage over time can be a useful supplementary metric, since it can demonstrate the overall trend in the quality of a project at a point in time.
Test quality

It is important for technical leads to assess both the quality of the automated tests and the coverage. High-coverage testing does not necessarily indicate high-quality testing, as it is possible to write high-coverage tests with just one execution of each path and not a single assertion. High-quality testing can often be verified during code reviews by examination of the following:

The number and the strength of the assertions used:
- checking the contents of collections, not just the size;
- asserting both negative and positive outcomes;
- assertions that examine pre-conditions, post-conditions and intermediate state.

The richness of the test scenarios and combinations of test data used:
- positive tests – the system behaves correctly with valid inputs;
- negative tests – the system behaves correctly with invalid inputs;
- boundary tests – negative numbers, zeroes, decimal precision, etc.
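As a concrete illustration of those review criteria, consider the small JUnit 4 sketch below. The Basket class is invented for the example; the tests check exact collection contents rather than just size, assert pre- and post-conditions around the action, and cover positive, negative and boundary inputs.

import static org.junit.Assert.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Test;

// Illustrative class under test: keeps a running total of item prices.
class Basket {
    private final List<Double> prices = new ArrayList<>();

    void add(double price) {
        if (price < 0) throw new IllegalArgumentException("negative price");
        prices.add(price);
    }

    double total() {
        double sum = 0;
        for (double p : prices) sum += p;
        return sum;
    }

    List<Double> prices() { return Collections.unmodifiableList(prices); }
}

public class BasketTest {

    // Positive case with strong assertions: exact contents, not just size,
    // plus pre- and post-condition checks around the action.
    @Test
    public void addingItemsUpdatesContentsAndTotal() {
        Basket basket = new Basket();
        assertTrue(basket.prices().isEmpty()); // pre-condition

        basket.add(2.50);
        basket.add(0.99);

        assertEquals(Arrays.asList(2.50, 0.99), basket.prices()); // exact contents
        assertEquals(3.49, basket.total(), 1e-9); // post-condition
    }

    // Negative case: invalid input is rejected and state is left untouched.
    @Test
    public void negativePriceIsRejected() {
        Basket basket = new Basket();
        try {
            basket.add(-1.00);
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            assertTrue(basket.prices().isEmpty()); // state unchanged
        }
    }

    // Boundary cases: zero and small decimal precision.
    @Test
    public void boundaryValuesAreHandled() {
        Basket basket = new Basket();
        basket.add(0.0);
        basket.add(0.001);
        assertEquals(0.001, basket.total(), 1e-9);
    }
}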
Planning

It is important to recognise that the ‘definition of done’ includes unit testing and end-to-end testing, as well as implementation. Estimates should take account of all developer testing required so that the “there isn’t time to do the testing” excuse becomes the exception rather than the rule.
Nigel Wilson, head of development, IT consultancy division, BJSS – www.bjss.co.uk
The future of mobile testing: Fixing a moving landscape

The challenges for developers and testers in the brave new world of the mobile device app are daunting. Their products need to work smoothly across as many as 1,800 different device platforms while predicting what device manufacturers will come up with next. A challenge for any organisation. Borland has some advice...
As mobile demand continues to accelerate, the developers and testers who build mobile apps will struggle to keep pace without a shift in thinking when it comes to mobile quality, performance and development. This year, 100 million people will use smartphones. Maybe you’re reading this article on one of the 118.9 million consumer tablets that will be sold this year. There will soon be more mobile-connected devices on earth than there are people to use them, and Bring Your Own Technology (BYOT) has raised the stakes higher still.
The iPhone has, arguably, been overhauled by rival manufacturers, sparking a turf war where manufacturers launch potentially ruinous claims and counter-claims against each other in an aggressive battle for market share. These days, smartphones and other mobile devices don’t simply carry the news – they make it. So interchangeable are these devices – at least on the surface – that the technology press now focuses on what each new device doesn’t have and cannot do rather than wowing us with what it can. This competition between device vendors means more people than ever have inexpensive access to powerful mobile computing platforms. For consumers, this means choice. For organisations building mobile applications, this means added complexity.
Get with the program

The genie isn’t going back in the bottle: the demand for mobile devices is huge and isn’t going away. Companies must, and will, increasingly engage with their customers ‘on the go’, through mobile apps. A quick Google search for ‘best business apps’ delivers more than a billion results. And executed properly, a mobile app will work on many levels, including the bottom line – worldwide app revenue hit more than $15bn at the end of last year. With nearly half of smartphone users downloading at least one new app every week, that’s a lot of bottom line.
The key phrase here is ‘executed properly’. As an example, currently developers must write, develop, test and release apps that work for 130 different Android devices running on seven different platforms and two firmware sets – that’s more than 1,800 device combinations that need thorough testing to ensure business functionality. Some device functions vary on different carrier networks – and even the movement of the device itself can affect the application’s behaviour. Get it wrong and, well, the number of results turned up by a Google search for ‘worst business apps’ isn’t too far behind the positive ones. As quickly as a consumer can download one app, it is just as easy to switch to another app vendor if the user experience isn’t smooth. So you need to create an app that tops the first list, avoids the second and works across all the platforms, while predicting what device manufacturers will come up with next. A challenge for any organisation.

In this volatile environment it’s worth remembering the ground rules. Applications on mobile devices must match the functionality and performance of the operating systems and devices they’re going to be used on. At the very least, developers must tick a few boxes.
Mobile vs traditional automation

Because of the maturity of the platforms and the historical build-up of quality teams and infrastructures, traditional test automation typically leverages more ‘black box’ testing. Mobile test automation, while conceptually the same, requires different methods to accommodate the near-constant change to platforms and architectures. Mobile testers need more access and insight into the application architecture and structure, and how it works – also known as ‘white box’ testing.

To get truly flexible white box testing, a strong object recognition process must be in place. Object recognition provides insight into items like buttons, controls, images, containers, and menu items. Because of the differences in platforms and devices, not all objects are created equal. It’s imperative that mobile testing technologies have a variety of ways to recognise objects under test. If a tester doesn’t know or doesn’t have access to objects within the app, it makes testing that app a long and drawn-out process, which would defeat the purpose of test automation. Native object recognition is the most robust method for understanding the parameters of objects within apps, but isn’t always available on all platforms. Testers need to make sure that mobile test automation tools use a combination of object recognition technologies such as native, image-based and text-based recognition. This combination will ensure that tests can be conducted on the widest variety of platforms with the least amount of maintenance across testing scripts.

Apps are built with industry-standard tools, frameworks and languages such as Java, C#, Ruby on Rails, JUnit, or NUnit, and the process uses more of a ‘white box’ approach to test automation. On top of multi-modal recognition technologies, an instrumentation strategy – a way to enhance native object recognition – is beneficial. While frowned on in traditional testing, instrumentation is recognised as the norm within mobile testing and is driving future thinking. So the future of mobile testing is to embrace white box testing – and to be less prescriptive about object recognition to maximise the benefits.
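A rough sketch of what multi-modal recognition can look like in practice follows. The locator and strategy types here are hypothetical, not Borland’s API: the locator tries native identification first and falls back to image and then text matching, so the same script keeps working on platforms where native recognition isn’t available.

import java.util.Arrays;
import java.util.List;

// A recognised UI element; tap() stands in for any driver action.
class UiObject {
    final String name;
    UiObject(String name) { this.name = name; }
    void tap() { System.out.println("tap " + name); }
}

// One way of finding an object; returns null when the strategy fails.
interface RecognitionStrategy {
    UiObject locate(String key);
}

// Tries each strategy in order of robustness and uses the first match.
class MultiModalLocator {
    private final List<RecognitionStrategy> strategies;

    MultiModalLocator(RecognitionStrategy... strategies) {
        this.strategies = Arrays.asList(strategies);
    }

    UiObject find(String key) {
        for (RecognitionStrategy s : strategies) {
            UiObject found = s.locate(key);
            if (found != null) return found;
        }
        throw new IllegalStateException("No strategy recognised: " + key);
    }
}

public class LocatorDemo {
    public static void main(String[] args) {
        MultiModalLocator locator = new MultiModalLocator(
            key -> key.equals("loginButton") ? new UiObject("native:" + key) : null, // native IDs
            key -> null, // image matching, stubbed out in this sketch
            key -> new UiObject("text:" + key) // text matching as the last resort
        );
        locator.find("loginButton").tap(); // resolved natively
        locator.find("Help").tap(); // falls back to text recognition
    }
}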
Make it easy to maintain
A major challenge within all testing organisations is the reuse and maintenance of testing scripts. Testers are all about efficiency, and repeatedly recreating test scripts negates most productivity gains. In traditional software testing, the challenge for test automation tools was to ensure that scripts were portable, easily editable, shared, versioned, and could be rolled back either natively or via a file/source control system – a challenge met by the evolution of most software testing tools. As for mobile test automation, the immaturity of the tools in the market means many scripts aren’t reusable across multiple platforms, devices, and application types. Organisations looking for a mobile test automation platform should factor in the overall maintenance of the scripts and tests being created. The mobile test automation tool should be able to reuse as many scripts as possible across multiple devices – an important consideration
bearing in mind the proliferation of devices. A single script should handle recording, replaying, modifying, and creating error controls for all the test commands, with the testing tool acting as the conduit between the connected devices and the extension inside a chosen IDE, such as Visual Studio. Done properly, with the right tool and architecture, one test script will execute against multiple devices.
Performance is everything. In our instant gratification society, our devices must work – and work fast. Websites should load quickly, even on the go. The performance testers’ maxim, ‘Seven seconds can cost you $7 million,’ illustrates perfectly how a delay in performance can ultimately impact the bottom line. As mobile networks spread, so does the need for robust application performance – and the risk of high-profile failures increases exponentially. The rapid demand for accurate mobile application distribution across multiple devices and platforms raises the stakes for mobile development teams, to the point where performance testing is just as (if not more) critical to an application’s release as functional testing. While many organisations’ current infrastructure and performance testing tools are sufficient, some fail to prepare for multi-channel, multi-speed swarms on the network under load. Infrastructures can withstand hyper-fast networks, but unknowns quickly become apparent when traffic from slower networks hits the primary route. Open sockets left waiting after a cell tower switchover, database connection pooling sharing load between users on fibre and GPRS connections, and how a round-robin load balancer handles connections versus bandwidth: hitherto unseen issues, all requiring a fix.
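Returning to the ‘one script, many devices’ point above, a minimal sketch of the idea might look like the following – the DeviceConnection and TestScript types are hypothetical, invented for illustration:

// Hypothetical: replay the same recorded script against every connected device.
import java.util.List;

interface DeviceConnection { String name(); }
interface TestScript { void replayOn(DeviceConnection device); }

class DeviceTestRunner {
    void runOnAll(List<DeviceConnection> devices, TestScript script) {
        for (DeviceConnection device : devices) {
            try {
                script.replayOn(device);   // same commands, different target
                System.out.println(device.name() + ": PASS");
            } catch (AssertionError e) {
                // Error control lives in the one script, not in per-device copies.
                System.out.println(device.name() + ": FAIL - " + e.getMessage());
            }
        }
    }
}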
Loads better?
Another often-overlooked item in mobile testing is how the application responds under heavy load. Is the user seeing the right screen? Is the application showing the correct ads under load? What about HTML5 template-based applications – is the right style sheet showing up for the right device? Because there have been at least eight different versions of the Android operating system (OS) since 2009, performance and functional testing of different Android OS versions on multiple mobile devices is difficult. Add in iOS, bada, BlackBerry,
Windows Mobile and HTML5, and testing complexity reaches another level. Testers must, therefore, use tooling that has mobile performance simulation capabilities on top of robust object recognition technology for functional testing. Mobile testers should also prepare for the future by practising ‘edge testing’. Start with the popular, most common devices – typically the ones promoted by the major mobile phone providers (iPhone, iPad, Google Nexus, Galaxy Tab etc) – to narrow the gap when developing a strategic plan of attack for mobile test automation. Then test the smallest and largest screen sizes to ensure ample coverage. Edge testing with robust tools that allow for performance and functional testing is a strategy that still holds water – regardless of how often devices change.
Work better
Clearly, with developers under pressure to deliver more robust code that can meet all these challenges, they’re going to have to work better: find better, more productive working practices that create an improved product. Agile testing certainly ticks a lot of boxes. Instead of a labour-intensive process where developers work in isolated teams and test software to destruction to find the bugs that will affect it in a real-world situation, Agile testing is helping teams to work together to ensure the bugs don’t get in there in the first place. Testers are writing automated tests to show that the software works, rather than obsessing about its vulnerability. Of course, Agile testers will still – and should – find bugs, and pre-Agile testers must now write tests to show what works. But agile thinking has prompted a new way of working that, like the mobile technologies it supports, is more fluid and interactive. Using tools that enable teams to collaborate out of the box is imperative to working better. Having a mobile testing infrastructure that allows developers to write unit tests using open source frameworks such as JUnit, while working with tooling that takes care of the ‘heavy lifting’ across different devices, will shrink testing cycles. In this ecosystem, a business analyst can design the tests with visual tooling, export those as self-contained scripts that self-describe the given state of the application and any subsequent states, and then have that automatically generate JUnit code that
lets developers work as developers and testers work as testers. The future of mobile testing isn’t – and cannot be – about silos of developers working on niche projects that seek to find problems. It’s going to be teams of developers and testers working closer together and drawing on their creativity and expertise to create robust code that meets the challenges of mobile.
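As a concrete illustration of the ecosystem described above, a generated unit test might resemble the sketch below. Only JUnit itself is a real dependency; the LoginScreen and DashboardScreen page objects are invented for this example and stand in for whatever a visual design tool would emit:

// Hypothetical JUnit 4 test of the kind a visual design tool might generate.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LoginFlowTest {

    @Test
    public void validCredentialsReachDashboard() {
        LoginScreen login = new LoginScreen();          // given state: login screen shown
        DashboardScreen next = login.signIn("demo", "s3cret");
        assertEquals("Dashboard", next.title());        // subsequent state: dashboard
    }
}

// Invented page-object stubs; a real generated script would drive the app itself.
class LoginScreen {
    DashboardScreen signIn(String user, String pass) {
        return new DashboardScreen();
    }
}

class DashboardScreen {
    String title() { return "Dashboard"; }
}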
The future of mobile testing: A summary
An organisation’s future mobile testing solution should meet these criteria:
- Reusability: By definition, mobile app development is a moving environment. Systems must withstand frequent changes and keep pace with rapid developments.
- Maintainability: Mobile testing scripts need to be easily modified and portable across platforms to avoid rework, which results in delays and added costs.
- Real-world mobile use: While what we perceive as the ‘real world’ will change, mobile apps should be tested in the same way that the audience will use them.
- Industry standard languages: Some things are more stable. Java or C# will integrate into any continuous delivery system.
- Multiple devices: The testing solution should support multiple iterations of the mainstream mobile development platforms.
- Improved performance tests: Accuracy is paramount in performance testing. You want real traffic from native apps that mimic real devices.
- Use emulators and simulators: Take away the legwork by using software that mimics the actions of different devices.
- Simulate real mobile bandwidth: Test for the real world by covering and replicating real mobile bandwidth speeds such as GPRS, EDGE, UMTS, HSDPA, HSPA+, and LTE.
- Scale it up: While there are some variables, the future is definitely going to see more people using your app, so plan your mobile testing on a global scale.
- Identify the correct testing subset: It is unrealistic to test your app on every known released device and mobile OS platform, so identify your key players – and recalibrate the subset as required.
For further details please contact
borland.communications@borland.com
So you want another test environment?
Explaining the benefits of service virtualisation, Parasoft offers an alternative test environment.
Business systems today are complex environments. Gone are the days when a test system could be thrown together in a day or so. Duplicating a live infrastructure, even allowing for reduced capacity, can involve databases, ESBs, multiple servers, mainframes, even access to third-party systems. Is it any wonder, then, that putting together a new test environment can take a lot of both time and money?
As these complex systems become more interdependent, development and quality efforts are further complicated by constraints that limit developer and tester access to realistic test environments. These constraints often include:
- Missing/unstable components;
- Evolving development environments;
- Inaccessible third party/partner systems and services;
- Systems that are too complex for test labs (mainframes or large ERPs);
- Internal and external resources with multiple ‘owners’.
Although hardware and OS
virtualisation technology has provided some relief in terms of reduced infrastructure costs and increased access, significant gaps still exist for software development and testing. It is not feasible to leverage hardware or OS virtualisation for many large systems such as mainframes and ERPs. More pointedly, configuring and maintaining the environment and data needed to support development and test efforts still requires considerable time and resources. As a result, keeping a complex staged environment in sync with today’s constantly evolving Agile projects is a time-consuming, never-ending task. The majority of responses to a recent questionnaire revealed that companies were well aware that testing was suffering due to the cost of test environment provisioning, the delays introduced by waiting for those environments, and the cost and delays introduced by relying on access to third-party systems. On the upside, these companies are now serious about resolving the situation with a technology that is coming of age: service virtualisation (not to be
confused with OS virtualisation such as VMware). So what is service virtualisation? One thing it is not is a silver bullet! Test environment provisioning will still take time and money, just less of it. Once in place, though, your virtualised test environment will give you:
- The ability to deploy multiple versions of your test infrastructure with no time or financial penalty;
- Instant access to systems that previously had time constraints;
- Management rights over your own test data;
- Reduced costs for infrastructure licensing;
- Fewer headaches.
Figure 1: Typical Systems replaced by Service Virtualisation
What is involved and is there any risk?
The easiest way of creating a virtualised asset is to create it from existing message traffic. In simple terms this means recording the message flow between two systems; the virtualisation system uses this to create an asset that always sends the same response when a specific request is received. A unique key field (or fields) is used to correlate these request/response pairs. A more advanced system will interrogate an incoming request and then pull data from a datasource with which to populate the correct response message.
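As a rough illustration of that record-and-correlate idea (the class below is invented for this sketch and is not Parasoft’s API), a simple virtual asset boils down to a keyed lookup of recorded responses:

// Hypothetical record/replay virtual asset, keyed on a unique request field.
import java.util.HashMap;
import java.util.Map;

class VirtualAsset {
    // Recorded request/response pairs, correlated by a key field
    // (for example, a customer ID extracted from the request message).
    private final Map<String, String> recorded = new HashMap<>();

    void record(String keyField, String response) {
        recorded.put(keyField, response);
    }

    String respond(String incomingRequestKey) {
        // Always send the same response for a given key, as captured live.
        String response = recorded.get(incomingRequestKey);
        if (response == null) {
            throw new IllegalStateException("No recording for key: " + incomingRequestKey);
        }
        return response;
    }
}

The more advanced, datasource-driven variant described above would replace the map lookup with a query that assembles the correct response from stored data.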
The risk, of course, is that you find the virtualised environment does not exactly mimic your live one. To this end, yes, you will need to invest time and effort in designing these systems. Use the services of experts: by all means train up your staff in how to do it, but have the initial systems set up by specialists so that you have confidence in them from the start. The old saying of ‘a stitch in time’ really does apply.
And the benefits?
Some of the benefits are those already stated, ie the ability to deploy test environments at reduced cost. Others, though, are more subtle. How does your system under test cope when the response times of other pieces of infrastructure change? Can you deploy the exact test data you require on your existing test environments, and then reset it at will? How can you ‘break the system’ to apply negative tests? Testing a system that is only half built? No problem – virtualise the rest of it! Are you able to load test if some of your test environment is actually ‘live’ as well? Virtualise live assets, so that systems under test are not affected by inconsistent results from other systems.
What makes a workable system, though, is the ability to manage it easily in use. Systems being virtualised may be modern, built with adaptability in mind, or may be 20-year-old mainframe applications. In practice this means that pointing your application under test at your virtualised asset may be as simple as changing a field on a web page, or as awkward as changing a configuration file in some exotic location. Your virtualised environment needs a way to capture the information needed to switch between live and virtualised assets, and to allow testers to change over at will via a user interface. A tester should not need to know (or care) how this change is effected. What is required is an Environment Manager.
More about application virtualisation can be read at www.parasoft.com/VIRTUALIZE, where you can find a recorded presentation of Parasoft Virtualize, download our Virtualization eKit, and request an evaluation.
Cloud in a box
Scott Murphy, HP divisional leader at Avnet Technology Solutions, offers his five steps to delivering Agile development and testing in the cloud.
How agile is your Agile development? Software development cycles have become compressed over the years. This is true in terms of both pure software development and the implementation of packaged software applications. There has been a significant shift from historic waterfall development methodologies, which consisted of long planning cycles with a limited number of releases per year, to Agile development techniques with continuous rapid delivery of incremental improvements. As a result of the larger number of release cycles there is a need for increased testing and for the provisioning of controlled test environments, which brings a different set of challenges. Some would say the development bottleneck has simply moved to the complex task of configuring and administering the hardware and software stack required for testing new applications. Modern composite applications rely on a complex stack of interdependent software programs. Changes in any one of the applications that contribute to the overall solution can have unanticipated consequences.
Here are five steps to avoid those consequences, break the bottleneck and test the cloud.
1. Automated deployment in a private cloud
Software development and testing is an ideal environment in which to exploit cloud automation software. It solves the issue of provisioning delays, inaccuracy and system administration costs without adding risk to production systems. With initiatives such as Avnet’s Cloud-in-a-Box, organisations can implement a private cloud and populate it with advanced test management software to allow developers and testers to work together in a streamlined environment that fully supports agile methodologies without creating a testing bottleneck.
2. Beating the provisioning challenge
In order to keep up with Agile development testing there is a need to stand up complex software and hardware environments quickly, accurately and reliably. Precise environment descriptions are required to ensure that the exact version of every contributing element is consistent between development, testing and production. On average, each fresh install of components in a ‘sandbox’ can take 12 man-hours of system administration per server, followed by six hours to configure and verify the complete environment. Assuming 80 sandbox requests per year, with the average request requiring five servers to be built, this gives an annual cost in excess of £290K. The beauty of private cloud automation tools is that this cost can be reduced by as
much as 75 percent – and, even better, it can be paid for out of operational expense (OPEX) instead of capital expense (CAPEX).
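As a rough check on that £290K figure (the hourly rate is our assumption, not stated above): 80 requests x (5 servers x 12 man-hours + 6 hours of configuration) comes to 80 x 66 = 5,280 hours a year, which exceeds £290K at a blended cost of around £55 per man-hour.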
3. Saving time on test planning & execution
Applications go through a predictable lifecycle, and developers and testers need a systematic approach underpinned by tools that enforce the methodology in a productive manner. For example, there should be a Requirement Tree that displays the hierarchical relationship among requirements and ties them to tests and defects, and a Test Plan Tree with defect and requirements association, risk-based prioritisation and test execution. By managing the scheduling and running of tests, and organising them into test sets designed to achieve specific goals and business processes, time and expense can be saved by using the cloud.
4. Speeding production deployment
Server automation tools form the basis of production cloud deployments. By gaining familiarity with server automation tools during development and testing, IT departments become well placed to evaluate their production environments for migration to the cloud. The precise software stack identified by the development team and verified by the quality testing team can then be deployed to a production environment, for example one running HP’s Server Automation tools.
5. Risk-based quality management
Risk-based, automated quality testing controls IT costs by reducing the number and duration of business-critical application outages. This means less time and effort spent on problem identification, resolution and reworking. Centralised and rigorous risk-based testing should include three-way traceability between requirements, tests and defects to facilitate reduced outages and less time spent on resolving them. By following these five steps and taking advantage of a fully automated cloud environment spanning development, testing and production, organisations can benefit from faster time-to-market, the elimination of production outages arising from software deployment errors, and vast improvements in hardware and software licence utilisation. So just how agile are your Agile developments – and could testing in the cloud make all the difference?
www.avnet.com
Testing Java programs the easy way
For those interested in testing Java code, Mike Holcombe explores JWalk, a very useful free tool that is available for download (see the links below).
The way the JWalk tool works is that it performs bounded exhaustive testing of any compiled Java class. As soon as you start writing, and have a prototype class compiled, the tool supports feedback-directed testing, by first exploring the behaviour of the class and reporting on this. If this is not what was expected, the code can be changed and retested. Through a process of dynamic analysis and interaction with the user, the strategy leads to more sophisticated series of tests and greater confidence about the correctness of the code.
Mike Holcombe, founder and director, epiGenesys Ltd – www.epigenesys.co.uk
This tool represents a significant advance over tools such as JUnit, which requires the tester to devise hand-written tests which can then be applied automatically. With JWalk, however, the tool proposes all the significant test cases systematically and the tester merely has to confirm a subset of test outcomes. The JWalk tool predicts the test outcomes for many more test cases, after learning the state space of the test class, and generates new tests after a class’s behaviour has been changed by the programmer. Technically, the approach is called lazy, systematic unit testing. The lazy specification describes how the implicit specification of a class is allowed to develop gradually as the designer evolves the code. This is highly compatible with Agile development techniques, since the idea of
continuous feedback during the design process is a powerful mechanism for achieving what is required, compared to investing huge up-front effort in design documentation, which is troublesome to maintain anyway. Systematic testing describes a model-based testing method that exhaustively tests the whole state space of the object, with respect to its specification.
http://staffwww.dcs.shef.ac.uk/people/A.Simons/jwalk/
http://en.wikipedia.org/wiki/JWalk
http://staffwww.dcs.shef.ac.uk/people/A.Simons/research/papers/jwalkvsjunit.pdf
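To see what bounded exhaustive exploration means in practice, consider the hand-rolled sketch below. It is emphatically not the JWalk API – just a toy driver that enumerates every method sequence on a small class up to a fixed depth, the kind of state-space walk JWalk automates properly:

// Conceptual sketch of bounded exhaustive testing; NOT the JWalk API.
import java.util.ArrayList;
import java.util.List;

class BoundedExplorer {
    // The class under test: a tiny account with a guarded withdrawal.
    static class Account {
        private int balance = 0;
        void deposit() { balance += 10; }
        void withdraw() {
            if (balance < 10) throw new IllegalStateException("overdrawn");
            balance -= 10;
        }
        int balance() { return balance; }
    }

    // Explore every call sequence up to 'depth' further calls, reporting
    // the observed state so the programmer can confirm or correct it.
    static void explore(Account account, List<String> path, int depth) {
        System.out.println(path + " -> balance=" + account.balance());
        if (depth == 0) return;
        for (String method : new String[] {"deposit", "withdraw"}) {
            Account copy = replay(path);   // fresh object with the same history
            try {
                if (method.equals("deposit")) copy.deposit(); else copy.withdraw();
                List<String> next = new ArrayList<>(path);
                next.add(method);
                explore(copy, next, depth - 1);
            } catch (IllegalStateException e) {
                System.out.println(path + " + " + method + " -> refused: " + e.getMessage());
            }
        }
    }

    static Account replay(List<String> path) {
        Account a = new Account();
        for (String m : path) { if (m.equals("deposit")) a.deposit(); else a.withdraw(); }
        return a;
    }

    public static void main(String[] args) {
        explore(new Account(), new ArrayList<>(), 3);   // bound: three calls deep
    }
}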
Addressing mobile load testing challenges
The advent of mobile apps and mobile websites has brought a set of new load testing challenges that must be addressed by any load testing solution. Neotys reports.
Mobile applications and mobile websites have become a major channel for conducting business, improving employee efficiency, communicating, and reaching consumers. In the past, mobile played a smaller role in business applications, so performance issues and outages were less of a concern. This is no longer the case. Today, performance problems with mobile applications lead directly to revenue loss, brand damage, and diminished employee productivity.
Application developers have long understood the need for load testing conventional desktop web applications to ensure that the application will behave properly under load with the expected number of users. With the advent of mobile apps and mobile websites the principles of load testing have not changed. There are, however, challenges specific to mobile load testing that must be addressed by your load testing solution. Since mobile apps and applications for desktop web browsers use the same underlying technologies, the good news is that most load testing tasks and challenges are the same. You don’t necessarily need a brand new, mobile-specific load testing tool, but you do need a quality web load testing tool capable of handling the nuances of load testing mobile apps. Using a tool that enables testing of traditional
and mobile web applications enables you to leverage existing in-house skills for designing and parameterising your scripts, running your tests, and analysing the results. Alongside the similarities between traditional and mobile load testing, there are three key differences:
- Simulating bandwidth for wireless protocols: With 3G wireless protocols, mobile devices typically connect to the Internet using a slower, lower quality connection than desktops and laptops. This has an effect on response times on the client side and on the server itself, which you’ll need to account for as you define your tests and analyse your results.
- Recording on mobile devices: Obviously, mobile apps run on mobile devices, and this can make it difficult to record test scenarios, particularly for secured applications that use HTTPS.
- Supporting a wide range of devices: The many different kinds of mobile devices on the market have led web application designers to tailor content based on the capabilities of the client’s platform. This presents challenges for recording and playing back test scenarios.
Mobile load testing basics
A typical automated functional test for a mobile application emulates user actions (including tap, swipe, zoom, entering text, and so on) on a real device or an emulator. The objective of load testing, however, is not to test the functionality of the application for just a
single user. Rather, the goal is to see how the server infrastructure performs when handling requests from a large number of users, and to understand how response times are affected by other users interacting with the application. An effective load test simulates a high volume of simultaneous users accessing your server via your application. Using real devices or emulators for this task is impractical because it demands acquiring, configuring, and synchronising hundreds or thousands of real devices or machines running emulators. The solution, of course, is to use a load testing approach that is designed to scale as needed. With a client-based approach, user actions in the browser or the native application are recorded and played back. In contrast, a protocol-based approach involves recording and reproducing the network traffic between the device and the server. To verify performance under large loads, tools that enable protocol-based testing are superior to those that support only client-based testing because they can scale up to millions of users while checking errors and response times for each user. The basic process for protocol-based mobile load testing is:
1. Record the network traffic between the device and the server;
2. Replay the network requests for a large number of virtual users;
3. Analyse the results.
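A bare-bones sketch of step 2 might look like the following, assuming the recorded traffic has already been reduced to a list of URLs (a real tool replays complete requests, correlates sessions and gathers far richer metrics):

// Minimal protocol-level replay: N virtual users re-issue recorded requests.
// The URL list stands in for a real recorded scenario; a sketch only.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ProtocolReplay {
    public static void main(String[] args) throws Exception {
        List<String> recorded = List.of("http://example.com/", "http://example.com/login");
        int virtualUsers = 50;
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        for (int u = 0; u < virtualUsers; u++) {
            final int userId = u;
            pool.submit(() -> {
                for (String url : recorded) {
                    try {
                        long start = System.nanoTime();
                        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                        int status = c.getResponseCode();   // blocks until the response arrives
                        long ms = (System.nanoTime() - start) / 1_000_000;
                        System.out.printf("user %d %s -> %d in %d ms%n", userId, url, status, ms);
                    } catch (Exception e) {
                        System.out.printf("user %d %s -> error %s%n", userId, url, e);
                    }
                }
            });
        }
        pool.shutdown();
    }
}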
It appears straightforward, but there are challenges at every step. The good news is that these challenges can be addressed with an effective load testing approach.
Recording mobile load testing scenarios
To generate a mobile test scenario, you first need to identify the type of mobile application under test. The challenges associated with capturing the data exchanges between a mobile application and the server depend on the design of the application:
Native apps: These apps are coded using a programming language (Objective-C for iOS, Java for Android) and API that is specific to the device. As such, they are tied to a mobile platform and are installed from an online store or market.
Web apps: Built with web technologies (such as HTML and JavaScript), these applications can be accessed from any mobile browser. More sophisticated web apps may use advanced features like geolocation or web storage for data, or include customisations to better match the browser used. Two popular web apps are http://touch.linkedin.com and http://m.gmail.com.
Hybrid apps: A web app embedded in a native app is known as a hybrid app. The native part of the application is limited to a few user interface elements like the menu or navigation buttons, and functions such as automatic login. The main content is displayed in an embedded web browser component. The Facebook application, installed from an online store or market, is a typical example.
Recording tests for native apps
Because native apps run on your device or within an emulator, to record a test you need to intercept the network traffic coming from the real device or the emulator. To intercept this traffic, the equipment that records the traffic must be connected to the same network as the device. When the recording computer is on the intranet behind a firewall, it is not possible to record a mobile device connected via a 3G or 4G wireless network. The device and the computer running the recorder must be connected to the same Wi-Fi network. Most load testing tools provide a proxy-based recorder, which is the easiest way to record an application’s network traffic. To use this approach, you need to configure the mobile device’s Wi-Fi settings so that the traffic goes through the recording
proxy. Some mobile operating systems, such as iOS and Android 4, support making this change, but older versions of Android may not. Moreover, some applications connect directly to the server regardless of the proxy settings of the operating system. In either of these cases, you need a tool that provides an alternative to proxy-based recording, based on network capture or tunnelling. There is a simple test to check whether the application can be recorded using a proxy. First, configure the proxy settings on the device and record your interactions with any website in a mobile browser. Then, try to record interactions in the native application. If your testing tool successfully records the browser-generated traffic but does not record traffic generated by the native application, then you can conclude that the native application is bypassing the proxy settings and that an alternative recording method is required.
Recording tests for web apps and mobile versions of websites
Web apps use the same web technologies as modern desktop browsers. As a result, you can record the application or the mobile version of a website from a modern browser on your regular desktop computer, which is an easier and faster alternative to recording from the device. Many web apps check the browser and platform used to access them. This enables the application, when accessed from a mobile device, to redirect to a mobile version of the content that may contain less text or fewer images. To test such an app from the desktop, you need to modify the requests to make them appear to the server to be coming from a mobile device. Otherwise, you will not be testing the mobile version of the application, as the server may redirect to a desktop version. Some browser plug-ins provide the ability to alter the identity of the browser (by modifying the User-Agent header of requests). Support for this feature is also directly integrated in the recorder of advanced load testing tools. Modifying the browser’s identity is not always enough. You obviously cannot use this approach to transform Internet Explorer 6 into an HTML5-compatible browser. The browser you use on the desktop must be able to parse and render content created for mobile browsers, so it’s best to record with a modern browser like Internet
Explorer 9, Firefox 5, Chrome 15, or Safari 5 (or a more recent version of any of these if available). If the application includes WebKit-specific features, you should use a WebKit-based desktop browser, preferably either Chrome or Safari.
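Returning to the User-Agent point: overriding the header is a one-liner in most HTTP stacks. The sketch below uses Java’s built-in HttpClient, and the iPhone-style identity string is illustrative rather than an exact device signature:

// Requesting the mobile version of a site by spoofing the User-Agent header.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class MobileUserAgentDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://m.example.com/"))
                // Illustrative iPhone-style identity; servers use this header
                // to decide whether to serve mobile or desktop content.
                .header("User-Agent",
                        "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
                      + "AppleWebKit/534.46 (KHTML, like Gecko) Mobile/9A334")
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}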
Recording tests for hybrid apps
Obviously, tests for native apps cannot be recorded using a desktop browser. However, tests for many hybrid apps can. You may be able to directly access the URL used for the application, for example http://m.facebook.com for the Facebook application, and record your tests as you would for a classic web app.
Recording tests for secured native applications
There are additional challenges to consider when recording tests for a secured native application, that is, an application that uses HTTPS for the login procedure or any other processing. By default, all HTTPS recording methods, whether proxy- or tunnel-based, are seen as man-in-the-middle attacks by the device. This raises a non-blocking alert in a desktop or mobile browser, but it leads to an outright connection refusal in native applications, making it impossible to record the secured traffic. The only way to record tests for secured native applications is to provide a root certificate that authorises the connection with the proxy or tunnel. While this feature is currently supported by relatively few load testing solutions, it is essential for load testing any native application that relies on HTTPS. Note: the root certificate must be installed on the device. This operation is simple for iOS devices: you can simply send the certificate via email and open the attachment on the device. For other platforms, including Android, the procedure is not as straightforward and may depend on the version of the operating system and the manufacturer of the device.
Running realistic tests
Once you’ve recorded a test scenario, you need to parameterise it so that it can emulate users with different identities and behaviours as it is played back to produce a realistic load on the server. This step is required for traditional and mobile web applications alike, and the tools used to complete it are the same. When playing back the test scenarios, however, there are several challenges specific to mobile load testing.
Simulating network conditions: Today’s mobile devices generally access the server over networks that are slower than those used by desktop computers. Network conditions have a significant effect on the user experience, and the effect may be more or less pronounced depending on the application. Bandwidth and latency both affect response times but, of the two, bandwidth is more important in producing realistic test scenarios.
Bandwidth and response times: The bandwidth is directly correlated with how long it takes to download data from the server. The lower the bandwidth, the higher the response time. A server that provides acceptable response times for desktop users using DSL or another high-speed broadband service may deliver a poor end-user experience to mobile users with lower bandwidth. It is important to validate your service-level agreements (SLAs) and performance objectives with tests that use the same bandwidth limitations as your users, to avoid making decisions based on misleading test results. Such tests must incorporate bandwidth simulation, which is the process of artificially slowing down the traffic during a test to simulate a slower connection.
Bandwidth and server load: Clients using lower bandwidth connections also affect the server. The lower the bandwidth, the longer the connections. Longer connections, in turn, lead to more simultaneous connections on your web server and your application server. Thus, mobile users tend to consume more connections than their wired counterparts. Most servers have settings that limit the number of simultaneous connections that they can handle. Without a testing tool that realistically simulates bandwidth, these settings cannot be properly validated.
Simulating bandwidth limitations for individual virtual users: When load testing, effective bandwidth simulation requires the ability to limit the bandwidth for each user or group of users individually, independent of the others. Consider a situation in which you need to verify performance when 100 mobile users are accessing the server. In this scenario, you’d want to simulate 100 virtual users, with each user limited to a 1Mbps 3G connection. In this case, the total bandwidth for all users is 100Mbps (100 users x 1Mbps/user). Though it is possible to use WAN emulation software or a network appliance to globally limit the bandwidth for the load generation machine to 100Mbps (or any other
arbitrary limit), in practice this does not provide a realistic test because it does not impose a strict 1Mbps constraint on each user. Bandwidth simulation support must be integrated in the load testing tool itself to enable bandwidth limits to be applied to individual virtual users. To conduct an even more realistic test, you’ll want to simulate a mixed population of users accessing your application with a variety of bandwidths. With a tool capable of bandwidth simulation on a per virtual user basis, you can determine the response times for users at each bandwidth across a range of bandwidths in a single test. This saves time when you need to compare the response times of web applications and business transactions for clients who have different bandwidth limits.
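A toy illustration of per-user throttling follows – a real tool shapes traffic at the network layer, whereas this sketch merely paces each simulated user’s reads so that no user can exceed its own cap:

// Per-virtual-user bandwidth cap: pace each user's reads independently.
// A conceptual sketch, not production traffic shaping.
class ThrottledReader {
    private final long bytesPerSecond;

    ThrottledReader(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;   // eg 125_000 for a 1Mbps 3G user
    }

    void consume(byte[] chunk) throws InterruptedException {
        // Sleep long enough that this chunk takes the time it would take
        // at the capped rate, independent of every other virtual user.
        long millis = chunk.length * 1000L / bytesPerSecond;
        Thread.sleep(millis);
        // ... hand the chunk to response-time measurement here ...
    }

    public static void main(String[] args) throws InterruptedException {
        ThrottledReader user3g = new ThrottledReader(125_000);   // one 1Mbps user
        user3g.consume(new byte[25_000]);                        // paced at ~200ms
        System.out.println("25KB consumed at a 1Mbps pace");
    }
}

With 100 such users at 1Mbps each, aggregate traffic approaches 100Mbps, yet no single user can burst above its cap – exactly the guarantee a single global 100Mbps limit fails to give.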
Simulating browsers and browser capabilities
When a browser requests a resource from a web server, it identifies itself via the user-agent header sent with each request. This header contains information about the browser and the platform on which it is running. Servers use this information to deliver different versions of the content based on the client system. As noted earlier, many web applications deliver different content to mobile users and desktop users. Some further differentiate mobile users into subgroups based on information in the user-agent header, delivering less text and smaller images to devices with small screens. This can lead to bandwidth consumption and loading times that vary widely with the browser and platform being used. As a result, the ability to manipulate the user-agent header is essential not only for recording test scenarios, but also for playing them back. Tools that lack this capability will fail to retrieve the appropriate content from the server.
Simulating parallel connections: Mobile browsers, like desktop browsers, can generate the HTTP requests needed to retrieve the static resources of a web page in parallel. Rather than waiting for each image to finish loading before requesting the next, this approach requests multiple images at once to shorten the overall page load time. To measure response times accurately, load testing tools must replicate this behaviour by generating multiple requests in parallel. Moreover, they must simulate the appropriate number of parallel connections, as this number may differ from one mobile browser to another. Again, tools that
lack this capability are not performing realistic tests, placing the results they deliver into question.
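To mimic that behaviour in a load test, each virtual user must fetch a page’s static resources concurrently, with the pool size matched to the browser being simulated. In the sketch below, the value of six connections and the resource URLs are placeholders:

// Fetch a page's static resources over N parallel connections, as browsers do.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ParallelFetch {
    public static void main(String[] args) throws Exception {
        List<String> resources = List.of(
                "http://example.com/a.png", "http://example.com/b.png",
                "http://example.com/style.css", "http://example.com/app.js");
        int parallelConnections = 6;    // placeholder: depends on simulated browser
        ExecutorService pool = Executors.newFixedThreadPool(parallelConnections);
        long start = System.nanoTime();
        for (String r : resources) {
            pool.submit(() -> {
                try {
                    // Issue the request; in a real tool we'd also record per-resource timings.
                    ((HttpURLConnection) new URL(r).openConnection()).getResponseCode();
                } catch (Exception ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("page resources in " +
                (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}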
Identifying the most appropriate settings for realistic tests
Finding the appropriate values for key test settings – such as the user-agent, bandwidth, and number of simultaneous connections – can be a challenge. More advanced load testing tools can help testers set these values. For example, test scenario playback is greatly simplified by tools that can automatically inform the tester of which user-agent string and number of parallel connections to use based on the browser name, version, and platform. The process is further streamlined when the tools can suggest the most appropriate upload and download bandwidth settings based on the technology used (for example, Wi-Fi, 3G, 3G+, and so on) and the quality of the signal (for example, poor, average, or good).
Using the cloud
You can use load testing with the cloud after (or in conjunction with) on-premise testing in the lab to improve the realism of your tests by generating high loads and testing from different locations, while saving time and lowering costs.
Generating a high load: For consumer-facing apps and websites it is often difficult to predict the number of users your applications will have to handle. Traffic spikes that result from a promotion, marketing campaign, new product release, or even unexpected social network buzz can be substantial. To generate a similar load in-house, you would need a significant investment in hardware. Using the cloud, you can generate the same high load using on-demand resources at a much lower cost.
Testing from different geographies: Your web application’s real users likely access the server from many different geographical locations and use different networks. To properly validate the application and the server infrastructure, your virtual users should operate under similar real-world conditions.
Testing the entire application delivery chain: When your real users are located outside the firewall, you should run your virtual users from the cloud to validate the parts of the application delivery chain that are not tested when testing from the lab, including the
firewall itself, load balancers, and other network equipment.
Tools for testing with the cloud: While the cloud represents an opportunity to rapidly increase the scale and improve the realism of load testing at low cost, cloud testing is most effective when it is used to complement internal load testing. Note that the primary factor in the success of load testing with the cloud is not the move to the cloud itself; rather, it is the tool you select and how well it uses cloud technology. In particular, it’s best to select a solution that is integrated with multiple cloud platforms, enables in-house test assets to be reused in the cloud, and supports realistic, large-scale tests across multiple geographical regions.
Analysing results
The default results of a load test are frequently delivered as averages. For example, load testing tools will typically show what errors occurred and the average response times for a request, web page, or business transaction, regardless of the type of users being simulated or the bandwidth available to them. Because bandwidth may vary widely for the different kinds of users simulated, the errors and response times can also vary widely. Taking an average of results with significant variation does not provide an accurate picture of what is really happening. To gain meaningful insights, and to validate your SLAs and performance requirements for each network condition, it is important to go beyond the default results and analyse the results for each kind of user.
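A sketch of that per-group analysis, with invented sample numbers, shows how little a single global average would reveal:

// Average response times per bandwidth group instead of one global average.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class PerGroupResults {
    record Sample(String group, long responseMs) {}

    public static void main(String[] args) {
        List<Sample> samples = List.of(
                new Sample("3G", 1800), new Sample("3G", 2200),
                new Sample("WiFi", 350), new Sample("WiFi", 410));
        Map<String, Double> byGroup = samples.stream().collect(
                Collectors.groupingBy(Sample::group,
                        Collectors.averagingLong(Sample::responseMs)));
        System.out.println(byGroup);   // eg {3G=2000.0, WiFi=380.0}; order may vary
    }
}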
Addressing the challenges
In many ways, mobile load testing is similar to load testing classic web applications. As a result, testers can leverage much of their existing knowledge and reuse existing techniques – like using the cloud for realistic, large-scale tests. However, there are specific requirements for testing mobile applications that are not addressed by traditional load testing techniques. Recording mobile test scenarios, conducting realistic tests that simulate real-world bandwidth and browser characteristics, and properly analysing the results are some of the key areas that require special attention for mobile applications. Addressing challenges in these areas is essential to ensuring mobile web applications are sufficiently tested prior to release and that they will perform well under load in production.
www.neotys.com
Integrated test data management – the key to increased test coverage
Test data management should be a central part of any testing project – it is becoming the key to increasing test coverage. TEST editor Matt Bailey spoke to Wolfgang Platz, founder and CEO of TRICENTIS, about his company’s approach to this rarely considered but highly influential keystone of software testing and its potential for optimising the testing process.
As testing grows in importance, the role of test data management is also becoming more crucial. Wolfgang Platz, founder and CEO of TRICENTIS, explains the solution for test data management provided by his company’s TOSCA Testsuite.
TEST: What is test data management and why is it so important?
Wolfgang Platz: Test data management deals with providing appropriate test data exactly when it is needed. You need the right test data basis to conduct the best test case for specific
scenarios. Today, about 70 percent of the overall testing effort in a complex project is spent on manual testing, out of which up to 60 percent is spent on finding and preparing the right test data – in other words, up to roughly 40 percent of the entire testing effort can go on test data alone! This represents a huge cost block, and any possible reduction of this cost could be critical.
TEST: Where do you see the biggest potential for optimisation of software testing today?
WP: Actually, it isn’t just test data management. What we see is a compound picture consisting of test data management, test case design and test automation. These three
elements must come together to provide an optimised test. We call it the ‘magic triangle’ of test optimisation, and test data management is an integral part.
TEST: What about using production data as a test data basis?
WP: This is the most widely used method of providing test data – taking it from production – but there are significant disadvantages. Production data will rarely have specific data for new functionality, so this only applies to a limited amount of the total testing needed. As soon as it comes to complex situations with production data, it requires a lot of effort to find the right data. The amount of production data tends to be enormous, leading to high costs for processing, and there are constraints because in many instances companies are restricted from working with real data.
TEST: Would you recommend making production data anonymous?
WP: It is a possible solution, but most of our customers have complex system landscapes and you would need to synchronise the anonymisation across all of the different data sources, which is a really complicated process. We have seen a lot of attempts to do this fail. The more complex systems get, the harder it is to anonymise data, and very few of our clients have succeeded. We have seen clients spend millions of dollars trying to do it!
TEST: Why not use data extracts from production?
WP: If you extract data from production you need specific extraction logic. This could be a series of specific SQL queries that produce the right test data object. The problem is that this series of SQL queries will redundantly represent parts of the system’s business logic! Now we face another development project just to provide the extractor, and this project will consume additional resources. There has only been one example in our entire customer base where this has succeeded, and that was due to its simplicity.
TEST: We have seen plenty of attempts to make use of synthetic test data. Most of them failed. Why?
WP: Synthetic test data is produced via an automated process; however, bringing these synthetic data objects
(customers, accounts etc) into the desired state is complicated, because you would need to execute certain transactions – bookings and the like – against that test data. This is a very complex task that cannot be achieved manually, especially with large volumes of test data.
TEST: What is your offering to overcome these issues?
WP: TOSCA is an automated software testing solution with fully integrated test data management, which knows about every single test data object that is created. TOSCA also provides the automated procedures to create the perfect synthetic test data for the test cases, and has a full track record of the states of these test data objects. With SAP, for example, you would create an offer, then tie deliveries to the offer and, upon delivery, you would get a receipt. If you deliver only part of this order, the remaining number of goods to deliver will decrease, ultimately to zero. TOSCA is in full charge of every stage and condition of the test data object, so TOSCA would know ahead of time that there is no open quantity of goods left to be delivered, and you would instantly see the measure needed to correct this. TOSCA provides a test data management machine that keeps full track of the states of all test data objects. As a consequence, you will always have the right test data and you will never run out of test data.
TEST: How can test data management be integrated with test case development and execution?
WP: In projects we start with test case design, finding out which test cases are needed. Test data definition is actually a side result of test case design if you do it properly, and test case automation is a follow-on step from design, so these things can be technically bound together. This leads us back to the ‘magic triangle’ of test optimisation: TOSCA connects all three by using automated sequences to create test data. TOSCA leverages test case design to describe both the necessary test data and the test cases that are necessary.
TEST: How does test data contribute to the optimisation of test coverage and to testing excellence overall?
WP: When we go into customer projects today we see rather low test
coverage. As we deal with the creation of new objects, requiring only simple test cases, these have relatively high test coverage of up to 50 percent – sometimes even higher. As we move into the field of administration and manipulation of test objects, we depend on base data. Instantly the problem arises of not having the right test data, and therefore test coverage drops significantly. It gets even worse when end-to-end business procedures are involved. So again, there is high coverage in the simple cases and a sharp drop-off in the more complex test cases.
TEST: Is integrated test data management more important in a rapid-paced environment, for example with Agile development?
WP: There are a couple of things in testing that have always made sense, including using a risk-based approach, being in full control of test data, and early automation where possible. With the introduction of Agile it is not possible to ignore these things any longer, because agility dramatically increases the speed of evolution and revolution. It shortens the cycles of development, and in order to keep providing significant test coverage you now need to optimise and industrialise testing. This necessitates risk-based testing and controlled test data management. Early test automation and integrated test data management become more important the further you get into Agile development, where short sprints and fast time to market are crucial.
TEST: How do you think the role of test data management will develop in the near future?
WP: Testing procedures account for as much as 60 percent of the application lifecycle, and test data management has never been more important. As the demand for high quality testing increases, the demand for increased test coverage in all test projects will increase. The key to increasing your test coverage is to have appropriate test data management, and its role will grow significantly stronger than the rest of software quality assurance. Within IT, nothing is growing as quickly as the need for improved software quality assurance!
TEST: Wolfgang Platz, thank you very much.
Wolfgang Platz, founder and CEO, Tricentis – www.tricentis.com
20 Leading Testing Providers
The top suppliers to a booming industry
Welcome to TEST’s guide to some of the top suppliers to the software testing and QA sector. As is all too evident from the content of this and previous issues of TEST, there is something of a boom going on in the testing sector. With thousands of new mobile apps and an ever-widening range of platforms, operating systems and networks to run on, there has never been so much software to test. To help with this monumental challenge, what follows on the next 11 pages or so is TEST magazine’s essential guide to 20 of the top testing tools, products, software and service providers. It is designed to be a handy alphabetical guide to suppliers to the testing industry, to be used as reference material by any testing organisation seeking assistance, products and services. If you are considering the purchase of any products or services, I would urge you to check out the 20 Leading Testing Providers before you make your final decision. Our thanks go to Borland, a Micro Focus company, for being our headline sponsor, and to all the companies included in the section. We hope you find the 20 Leading Testing Providers both highly useful and highly useable. Happy reading...
Matt Bailey, Editor
A word from our sponsor...
Borland: meeting the brief for the future
Micro Focus can help you meet future testing challenges through the Borland suite of testing software products. Borland – a Micro Focus company – has watched the testing landscape change, and meets those needs both today and in the future. The Borland roadmap shows the way through a shifting landscape and offers a suite of products that embeds quality assurance throughout the entire development journey, from requirements definition to ‘go live’.
Borland: Define. Manage. Test
Borland software delivers world-class open and agile requirement, test and change management solutions that realise genuine cost savings for our clients. Borland software development products transform good software into great software right across the application development lifecycle. The future will be open and agile so Borland has developed powerful products that work the way you, your team and your users will need to work in the future.
• Gather, refine and organize requirements
• Align what you develop with what your users need
• Accelerate reliable, efficient and scalable testing to deliver higher quality software
• Continuously improve the software you deliver
• Track code changes, defects – everything important in collaborative software delivery
• Give your users the experience they expect with better design, control and delivery
A key plank of the agile strategy – and by extension a major factor in future mobile testing – is a new philosophy that requires team players to develop better code at the first time of asking, rather than asking testers to identify the bugs left in the product at the end of the development process. Borland Solutions put the focus on identifying and
eliminating defects at the beginning of the process, rather than removing them at the end of development. It provides capabilities across three key areas:
Requirements: Caliber®, the requirements definition and management tools, uniquely combine requirements definition, visualization, and management into a single ‘3-Dimensional’ solution. This gives managers, analysts and developers the right level of detail about how software should be engineered. By removing ambiguity in the requirements definition and management process, the direction of the development and QA teams is clear, dramatically reducing the risk of poor business outcomes. Borland testing products are structured with mobile in mind. They include the flexibility that testers need to incorporate the ‘gamechanger’ – the next must-have product or platform that shakes up the market in a way that no-one could have predicted.
• Streamlined requirements collaboration
• End-to-end traceability of requirements
• Fast and easy simulation to verify requirements
• Secure, centralized requirements repository.
Change: StarTeam® enables development teams to regain control in their constantly shifting world with a single ‘source of truth’ – a reference point to help testers prioritise and collaborate on defects, tasks, requirements, test plans, and other in-flux artefacts – even for software built by global teams with complex environments and methods. StarTeam provides:
• A single source of key information for distributed teams
• Streamlined collaboration through a unified view of code and change requests
• Industry-leading scalability combined with low total cost of ownership.
Quality: The Silk™ family automates the entire quality process from inception through to software delivery. Unlike solutions that emphasize ‘back end’ testing – the ‘old school’ testing regime highlighted above – Silk tests
are planned early and synchronised with business goals, even as requirements and realities change.
• Ensures that developed applications are reliable and meet the needs of business users;
• Automates the testing process, providing higher quality applications at a lower cost;
• Prevents or discovers quality issues early in the development cycle, reducing rework and speeding delivery.
Bringing the business and end-users into the process early makes business requirements the priority from the outset, as software under development and test is continually aligned with the needs of business users. Borland solutions provide an open framework which integrates diverse toolsets, teams and environments, giving managers continuous control and visibility over the development process to ensure that quality output is delivered on time. By ensuring correct deliverables, automating test processes, and encouraging reuse and integration, enterprise-critical software is continually and efficiently validated.
Meet the future challenge
As mobile platforms continue to grow and apps become the default mechanism for companies to engage with their customers, the demand for robust apps will grow. But these apps are only as good as the testing regimes that create them. Failure inevitably reflects on the company they represent and the people who tested them. Development roadmaps are great, but testing must be flexible enough to incorporate a ‘gamechanger’. Preparing now, by securing the right software support from the company that prides itself on gamechanging software, could make that roadmap easier to read – and the journey towards successful mobile development just a little more enjoyable for the passengers.
www.borland.com
AUTOMATION CONSULTANTS
Automation Consultants (AC) is a leading independent consultancy offering a comprehensive range of testing products. We also offer full end-to-end testing services, ranging from fully outsourced test management to the provision of both functional and non-functional test resources.
Products
AC markets a number of advanced testing tools, including TestWave, a test management tool hosted in the cloud, which allows users to set up test projects in minutes, dispense with servers, and pay by subscription. Another leading product is OpsWave, which automatically discovers all the applications in a data centre and the servers on which they reside. It can also predict the effect on performance of physically moving a data centre from one location to another.
Test Services
Automation Consultants offers a full software testing service spanning both functional and non-functional testing. The service is built on in-depth knowledge acquired from working in the finance, telecommunications, utilities and government sectors.
1) Test Management: AC provides a high quality, independent test management service. Customers value AC's ability to manage complex test programmes in a way that balances risk with the need to meet tight budgets and deadlines. Customers also appreciate AC's independence, which allows it to provide a true and unbiased view of the systems being tested.
2) Test Automation: Automation Consultants has in-depth expertise in functional test automation, using industry-standard tools such as HP UFT/QTP, Selenium and IBM Rational Functional Tester. The company delivers automation using a methodology designed to maximise return on investment and the maintainability of scripts.
3) Performance Testing and Optimisation: Ensuring application performance is increasingly important, especially with the shift of enterprise IT towards the cloud. With over a decade of experience in performance testing and optimisation, Automation Consultants offers an unparalleled service in this area. Tests are performed with industry-standard tools such as HP LoadRunner, and with AC's custom tools and harnesses. Testing is followed up with detailed analysis of results to identify performance bottlenecks and potential solutions.
4) Training: Automation Consultants provides training in leading HP Application Lifecycle Management (ALM) products, including Quality Center, Unified Functional Testing (QTP) and LoadRunner.
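As a flavour of the kind of scripted functional check such tools automate, here is a minimal sketch using the open-source Selenium WebDriver named above; the URL, locator IDs and expected title are invented for illustration.

```python
# A minimal, hypothetical functional test using Selenium WebDriver.
# The URL and element IDs are placeholders, not a real system under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")          # hypothetical application
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Assert on an observable outcome rather than an implementation detail,
    # which keeps the script maintainable as the UI evolves.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```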
Number of sites: United Kingdom: 2 Globally: 4
Merlin House, Brunel Road, Theale, Reading, Berkshire, UK RG7 4AB T: +44 (0)1189 323001 F: +44 (0)1189 323003 E: info@automation-consultants.com www.automation-consultants.com Contact: Jeff Cunliffe
BugFinders
Web and Mobile testing – pay only for results. Over 50,000 testers, 280+ web and mobile platforms, 67+ countries and the option to pay only for results. BugFinders is the fastest, most cost-effective and highest quality way to get software tested. Using our community of professional testers, we deliver testing to:
- Ecommerce
- Corporate/Enterprise, including regulated industries
- Gaming
- Mobile development
- Design and Digital Agencies
- Software Development Houses
Within 2 hours of calling us with a new project, you can have the project launched and be seeing valid bugs coming in. Every bug is reviewed and retested by one of our review team to ensure that it meets your expectations, and our easy-to-use interface can be accessed 24 hours a day.
Here are some examples of how we can deliver:
Time: An e-commerce site fully tested in just 12 hours by 67 testers from 16 countries. We work 24 hours a day, 7 days a week.
Quality/Coverage: A large m-commerce project delivered in 4 days with 261 testers on over 80 different Android and iOS platforms.
Cost: Choose pay-per-bug or unlimited testing projects. Projects can be started for £500 and you only pay for results.
BugFinders is a great way to get your site, mobile application or any other software that can be accessed remotely tested. BugFinders is software testing in the real world. Use coupon Bug15 to get 15% off your first order.
BugFinders is completely managed and you work with an in-house project manager as much or as little as you need.
T: 0844 870 8710 www.bugfinders.com
Codenomicon
Codenomicon is a leading vendor of proactive software security solutions. Top governments and leading software companies, operators, service providers and manufacturers use Codenomicon's solutions to secure critical networks and to provide robust products and services. Codenomicon enables its customers to build and offer reliable solutions, and critical infrastructure customers rely on Codenomicon's security solutions when taking pre-emptive security measures.
Security testing solutions
In all types of cyber-attack, the initial access is enabled by a software vulnerability in an open interface. Unknown vulnerabilities are the biggest threat to IT security, because there are no defenses for attacks against them. With Codenomicon’s Unknown Vulnerability Management solutions you can reveal critical interfaces and find and fix critical vulnerabilities before anyone has a chance to exploit them.
Defensics is the world-leading fuzzing solution. It provides fully automated security testing suites for over 200 different communication protocols, file formats, and web and XML applications. Defensics uses model-based, systematic fuzzing to provide the best testing coverage, with unparalleled efficiency in finding unknown vulnerabilities. http://www.codenomicon.com/defensics/
Fuzz-o-Matic is an automated fuzzing service in the cloud, and the ideal solution for Agile projects which require extensive and flexible fuzz testing. Accessible anywhere, Fuzz-o-Matic can handle several builds at a time so you can test build by build. Access and share the test results with your development team efficiently. Patch verification has never been this easy. http://www.codenomicon.com/fuzzomatic/
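Defensics itself is model-based and far more systematic, but the essence of fuzzing – feed a parser deliberately malformed input and treat any unhandled failure as a finding – can be sketched in a few lines. `parse_message` is a placeholder for whatever protocol or file-format parser is under test.

```python
# Illustration only: a naive random mutation fuzzer, not Codenomicon's
# model-based approach. Any unhandled exception counts as a finding.
import random

def mutate(data: bytes, flips: int = 4) -> bytes:
    buf = bytearray(data)
    for _ in range(flips):
        i = random.randrange(len(buf))
        buf[i] ^= random.randrange(1, 256)   # corrupt a random byte
    return bytes(buf)

def fuzz(parse_message, seed: bytes, iterations: int = 10_000):
    failures = []
    for n in range(iterations):
        sample = mutate(seed)
        try:
            parse_message(sample)
        except Exception as exc:             # crash or unhandled error
            failures.append((n, sample, exc))
    return failures
```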
Abuse Situation Awareness
Codenomicon's Abuse Situation Awareness solution creates interactive visualizations from real-time network traffic and abuse information. See the status of your network and critical resources at a glance. Share actionable incident information with your stakeholders and get cleaner networks in the process. Abuse SA is essential for the CERT and CSIRT organizations of governments, internet service providers and other major players in the field. Abuse SA is a botnet-inspired solution to internet abuse incident handling, with an automated process for collecting, processing and reporting network abuse. http://www.codenomicon.com/clarified/
Codenomicon Ltd. Tutkijantie 4E, FIN-90590 Oulu, Finland T: +358 424 7431 F: +358 8 340 141 E: sales@codenomicon.com
www.codenomicon.com
Coverity
Coverity, the development testing leader, is the trusted standard for companies that need to protect their brands and bottom lines from software failures. More than 1,100 Coverity customers use Coverity's development testing suite of products to automatically test source code for software defects that could lead to product crashes, unexpected behavior, security breaches, or catastrophic failure. Coverity is a privately held company headquartered in San Francisco, with offices in Boston, Calgary, London and Tokyo, and more than 200 employees worldwide. Coverity is funded by Foundation Capital and Benchmark Capital. Founded in the Computer Systems Laboratory at Stanford University in Palo Alto, California, Coverity invented a fundamentally different way to test complex source code, finding hard-to-spot yet critical software defects in a solution consumable for commercial use. Since its inception in 2003, Coverity has cemented its leadership position in the industry with its flagship offering, Coverity® Static Analysis, and has expanded its product portfolio to include a suite of additional capabilities as part of its development testing platform.
Over 1,100 of the world's largest brands, including Honeywell, NEC, BAE Systems, Juniper Networks, BMC Software, Samsung, France Telecom, Sega, and Schneider Electric, rely on Coverity to help ensure the quality, safety and security of their products and services. Industry-leading companies use Coverity to help them sustain competitive advantage, customer satisfaction, and innovation by delivering high quality products to market, including:
• 10 of the Top 10 Diversified Electronics Manufacturers;
• 10 of the Top 10 Mobile and Telecommunications Device Manufacturers;
• 7 of the Top 10 Aerospace and Defense Companies;
• 6 of the Top 10 Software Companies.
Coverity currently has over five billion lines of source code under management and is the development testing gate to over 11 billion devices shipping in the market today.
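For readers new to development testing, the fragment below illustrates the class of defect static analysis is designed to catch before runtime. It is a hand-written Python illustration, not Coverity output, and the names are invented.

```python
# Hypothetical example of a defect a static analyzer can flag: a code
# path on which `conn` may be None when it is dereferenced.
def fetch_user(db, user_id):
    conn = db.connect() if db.is_available() else None
    # A static analyzer can prove that when is_available() is False,
    # the next line dereferences None -- a crash no happy-path test hits.
    return conn.query("SELECT * FROM users WHERE id = ?", user_id)
```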
Number of sites: UK, Europe, rest of world: five in total
Coverity, Inc. 185 Berry St. Suite 6500, San Francisco, CA 94107 USA T: U.S. Toll Free: (800) 873-8193 T: International Sales: +1 (415) 321-5237 E: info@coverity.com
www.coverity.com
Facilita Software Development
Load testing solutions that deliver results
Facilita Testing Services
Are your applications and server infrastructure tuned for optimal performance?
Load testing is challenging. Facilita provides first-class professional services, from fully managed testing engagements through to product mentoring. Using cloud-based servers, Facilita can provide a cost-effective, tailored performance testing infrastructure.
Load and performance testing using Facilita Forecast™ shows in advance how your applications, IT systems and websites will perform before they go live. Forecast can be used to test a wide variety of both web and non-web applications.
Reduce risk, optimise performance
Comprehensive load testing minimises the risk of system failure, poor application performance and a damaging user experience. Facilita Forecast™ is a powerful load testing tool that genuinely meets the “real world” needs of testers, QA specialists and developers. Forecast™ is used by companies across many business sectors, as well as Government and Public Sector organisations, to ensure the performance and reliability of their business-critical applications and systems.
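Forecast is a commercial tool with its own recorder and analysis engine, but the basic mechanics it industrialises – many virtual users issuing concurrent requests while latency is measured – can be sketched as follows; the URL and user counts are placeholders.

```python
# A generic load-test sketch, not Forecast itself: N virtual users hit
# one URL concurrently and latency percentiles are reported.
import time
import statistics
import concurrent.futures
import urllib.request

URL = "https://example.com/"   # placeholder system under test

def one_request() -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load(virtual_users: int = 50, requests_each: int = 20):
    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        timings = sorted(pool.map(lambda _: one_request(),
                                  range(virtual_users * requests_each)))
    print(f"median {statistics.median(timings):.3f}s, "
          f"p95 {timings[int(len(timings) * 0.95)]:.3f}s")

if __name__ == "__main__":
    run_load()
```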
Number of sites: United Kingdom: Many hundreds of clients use Facilita Software and Services throughout the United Kingdom.
Globally: Facilita is increasing its global reach with many new international clients and partnerships.
Facilita Software Development, Somerford Business Court, Holmes Chapel Road, Congleton, Cheshire, CW12 4SN T: 01252 405468 F: 01260 298335 E: enquiries@facilita.co.uk www.facilita.com Contact: Gareth Heaton
Grid-Tools
Grid-Tools are the leading test data management vendor internationally, specialising in data generation, test data management and the provisioning of data for Agile environments. We have revolutionised the way test data is provisioned by developing innovative software solutions for synthetic test data creation, data masking, subsetting, design and archiving. Our innovative solutions, Datamaker™ and Enterprise Data Masking™, offer companies of any size and market sector the ability to provision high quality test data for their testing, development and training environments, as well as for outsourcing both off-shore and near-shore. Datamaker™ is a complete test data management suite for the provisioning of high-quality, compliant test data that is 'fit for purpose'. It offers total flexibility in choosing the right test data management methods to suit company strategy and objectives. Datamaker™ is Grid-Tools' flagship product and contains the following powerful functionality and components:
• Data Creation
• Data Profiling
• Data Subsetting
• Data Masking
• Data Coverage
Enterprise Data Masking™ is a complete data masking suite, powered by our flagship test data management suite, Datamaker™. Enterprise Data Masking™ is a component-based solution which can be implemented either as a stand-alone or an enterprise-wide data masking tool. It includes three powerful masking engines – Simple Data Masking, Fast Data Masking and Mainframe Data Masking – as well as a Data Subset module. Intelligent Virtual Services™ is the new virtualization solution from Grid-Tools, which allows users to generate fit-for-purpose, structured, compliant test data and service message responses to improve the quality of your SOA and MQ/JMS message testing and development. IVS uses a virtual service layer to replicate the behaviour and structure of an SOA message system. This virtual layer eliminates the constraints of cross-system dependencies on traditional SOA testing. By creating a virtual service layer, IVS enables testing teams to work in a stable, isolated environment, minimising disruption and the delays of waiting for data to flow downstream. The result is high quality, efficient SOA message testing and development. Grid-Tools have been working with some of the largest government agencies, financial institutions, telecoms, insurance providers and media and communications corporations around the globe. We strive to promote better processes and best practice methods for provisioning compliant test data that is fit for purpose.
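As a flavour of what data masking engines do, here is a minimal sketch of deterministic masking, assuming hypothetical field names and a per-project salt. It is not Grid-Tools' algorithm, merely an illustration of why masked data can stay referentially consistent across tables.

```python
# Deterministic masking sketch: the same input always maps to the same
# pseudonym, so joins across masked tables still work. Salt and field
# names are illustrative.
import hashlib

SALT = b"per-project-secret"   # placeholder; keep out of source control

def mask_value(value: str, length: int = 12) -> str:
    digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
    return digest[:length]

def mask_row(row: dict, sensitive_fields=("name", "email", "phone")) -> dict:
    return {k: mask_value(v) if k in sensitive_fields else v
            for k, v in row.items()}

# Referential consistency holds by construction:
assert mask_value("alice@example.com") == mask_value("alice@example.com")
```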
Number of sites: UK, USA, India, Global coverage.
Grid-Tools Headquarters: 11 Oasis Business Park, Eynsham, Oxford, England OX29 4TP T: UK +44 (0)1865 884600; USA 1-866-519-3751 E: sales@grid-tools.com www.grid-tools.com
IBM
IBM Rational quality management solutions help organizations deliver enduring quality throughout the product and application lifecycle – from concept through launch, maintenance, and retirement. IBM Rational's newest addition to its quality portfolio, integration testing and service virtualization technology from Green Hat, simulates the behaviour and performance of dependent services in complex application environments, saving companies millions of dollars in the hardware and staff costs of maintaining test labs. By bringing together functional, performance, and integration testing and service virtualization into a comprehensive offering, organizations are able to deliver their products and services to market rapidly with greater levels of efficiency, higher predictability, and reduced cost. This collaborative, integrated approach to managing quality helps make testing a shared responsibility instead of a siloed and disconnected function, enabling Agile development cycles and rapid product iterations. IBM Rational Test Workbench delivers end-to-end functional, regression, load, and integration testing to address the quality challenges of highly complex applications.
• Simplify the creation of functional and regression tests with Story Board testing, combining natural-language test narrative with visual editing through application screenshots. ScriptAssure® technology provides greater test script resiliency for changes in the application UI.
• Quickly develop complex performance test scenarios with scriptless, visual performance test and workload models. A powerful automatic data correlation engine significantly reduces the effort needed to develop and maintain large, multi-user load tests.
• Deliver earlier, end-to-end continuous integration testing through a scriptless, wizard-driven test authoring environment that currently supports over 70 technologies and protocols out of the box.
IBM Rational Test Virtualization Server enables companies to model and simulate real system behavior to eliminate application test dependencies and reduce infrastructure costs (a minimal illustration of the idea follows the list below).
• Reduce dependencies on third-party applications and systems that can delay your project and increase your cost of testing.
• Improve test coverage with support for a broad set of middleware and messaging technologies from IBM, TIBCO, Software AG, and other integration providers across financial, healthcare, business-to-business, and other application messaging formats.
• Quickly update, share, and deploy virtualized test environments through an easy-to-use web console to keep pace with changes in the underlying systems and data.
• Deploy virtualized services without having to reconfigure the original application environment, saving time and avoiding configuration errors that can affect test results.
• Validate application performance and scalability with all the features of Rational Performance Test Server included in the box.
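The sketch below is not the Rational product, only a toy illustration of the service virtualization idea: a lightweight local stub impersonates a dependent service so tests can proceed without the real system. The endpoint and payload are invented.

```python
# Minimal stub service standing in for an unavailable dependency.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubQuoteService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/quotes/latest":            # hypothetical endpoint
            body = json.dumps({"symbol": "TEST", "price": 42.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Tests point at localhost:8080 instead of the real service.
    HTTPServer(("localhost", 8080), StubQuoteService).serve_forever()
```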
Number of sites: Offices in over 170 countries worldwide
T: 1-800-728-1212 E: ratlinfo@us.ibm.com www.ibm.com
Neotys
Since 2005, Neotys has helped more than 1,000 customers in more than 60 countries enhance the reliability, performance, and quality of their web and mobile applications. The best-in-class NeoLoad load and performance testing solution enables teams to efficiently apply best practices for load testing mobile applications. It provides the features and functions needed to record test scenarios for mobile websites and mobile apps, including secured native applications that use HTTPS. It enables realistic playback of these scenarios with per-virtual-user bandwidth simulation, user-agent manipulation, simulation of parallel connections, and integration with the cloud. NeoLoad accelerates load testing of all web applications with advanced real-time analysis, agentless monitoring, scheduling, scripting, and reporting capabilities.
NeoLoad provides support for all mobile technologies including Android, iOS, and Windows Mobile. It also provides extensive support for a wide range of web technologies including HTML5, AJAX PUSH, Adobe Flex and AIR, Silverlight, RTMP, SAP, and Siebel among others. Neotys solutions are backed by a dedicated team of Neotys professional services and support engineers to ensure your success. Additionally, Neotys just announced the launch of Neotys Service Packages which provide an on-demand, turnkey load and performance testing service… the fastest way to actionable insight into the performance of your web and mobile applications.
Hampden House, Monument Park, Chalgrove, Oxfordshire, OX44 7RW T: 0845 299 7539 E: oxygen@e-warehouse.com www.oxygenhelpdesk.co.uk Contact: Andrew Hill
Parasoft
SOA Quality as a Continuous Process
Parasoft empowers organizations to deliver better business applications faster. We achieve this by enabling quality as a continuous process across the SDLC – not just in QA. Our solutions promote strong code foundations, solid functional components, and robust business processes. Parasoft's SOA solution provides an automated infrastructure that enables SOA quality as a continuous process, allowing you to reap the full benefits of your SOA initiative.
Error prevention
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.
Continuous regression testing
Parasoft's SOA solution assists you in managing the complex and distributed nature of SOA. Given that your SOA is more than likely to span multiple applications, departments, organizations and business partners, it requires a component-based testing strategy. With the Parasoft solution set, you can execute a component-based testing strategy that ultimately allows you to focus on the impact of change. Parasoft's continuous regression tests are applied to the multiple layers throughout your system. These tests will then immediately alert you when modifications impact application behavior, providing a safety net that reduces the risk of change and enables rapid and agile responses to business demands.
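The 'safety net' idea can be illustrated with a simple golden-baseline check – not Parasoft's implementation, just a sketch in which `call_service` stands in for any component under regression test.

```python
# Golden-baseline regression sketch: record a known-good response once,
# then flag any change that alters observable behaviour.
import json
import pathlib

BASELINE = pathlib.Path("baseline_response.json")

def check_regression(call_service, request: dict) -> bool:
    current = call_service(request)
    if not BASELINE.exists():                 # first run records the baseline
        BASELINE.write_text(json.dumps(current, sort_keys=True))
        return True
    expected = json.loads(BASELINE.read_text())
    if current != expected:
        print("Behaviour changed -- review before accepting the change.")
        return False
    return True
```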
Service virtualization
Parasoft Virtualize enables organizations to easily replicate infrastructure for the purpose of test environment provisioning. Have you ever had to wait for access to a CRM system, or been unable to test a new SOA service because it relies on a third-party service that was not available? Virtualize eliminates all these constraints.
Functional audit
Parasoft's continuous quality practices promote the reuse of test assets as building blocks to streamline the validation of end-to-end business scenarios impacted by changing business requirements. Functional test suites can be leveraged into load tests without the use of scripting, allowing you to track performance metrics throughout the SDLC. This enables your team to execute a more complete audit of your business application, reduces the risk of business downtime, and ensures business continuity.
Process visibility and control
Parasoft's SOA solution enforces established quality criteria and policies, such as security, interoperability, and maintainability, on the various business application artifacts within an SOA. Adherence to such policies is critical to achieving consistency as well as ensuring reuse and interoperability. As a result, you evolve a visible and sustainable quality process that delivers predictable outcomes. Please contact us to arrange either a one-to-one briefing session or a free evaluation.
T: 0208 263 6005 F: 0208 263 6100 E: sales@parasoft-uk.com www.parasoft.com
QualiSystems
About QualiSystems
QualiSystems is a leading provider of enterprise software solutions for test and lab automation, driving innovation, efficiency and ROI. QualiSystems' TestShell software framework offers complete lab management, device provisioning and test automation. Used in networking and storage environments to manage and drive large-scale testing labs, the framework enables engineers to optimize lab performance and increase testing coverage while expanding equipment utilization, reducing setup time and accelerating testing. TestShell has already proven itself as an industry-critical solution in North America, APAC, Europe and the Middle East, where it is used by market leaders from a wide spectrum of industries, including network equipment manufacturers, telecom operators, data center providers, enterprises and electronics device manufacturers.
Number of sites: Israel (HQ), US, UK, Germany, China, Japan, Taiwan, Korea
QualiSystems, Cattle Lane Farm, Cattle Lane, Abbotts Ann, Andover, SP11 7DS UK T: +44 8456 808715 M: +44 7785 268899 E: info@qualisystems.com www.qualisystems.com Contact: Robin Jackson, director of sales
Ranorex
Test automation for desktop, web and mobile applications
What kind of application do you need to test? Is it installed on the Windows desktop? Does it run in a browser? Is it used on smartphones or tablets? It does not matter which platform your software is developed for: Ranorex provides a cost-effective and comprehensive test automation tool to create reliable automated tests for any kind of application technology. Due to its ease of use, increased testing accuracy and low cost per seat, Ranorex is an excellent choice for software teams of virtually any size or level of sophistication, and is now used by hundreds of enterprise and commercial software companies around the world.
Reduce test maintenance and become Agile
It is not only software development teams that are moving to Agile methodologies; testing teams need to become agile too. Automated testing is one of the key factors in delivering on time and ensuring high software quality in today's fast-moving world. In such processes, robust and reliable test scripts are absolutely essential – there is no time left to spend days on test script 'reanimation'. Ranorex tools assist you in being prepared. A modern object-based test automation approach separates your logical test case structure from the technical identification layer. Addressing UI elements on your desktop, in the browser or on your mobile device is done by the powerful RanoreXPath, allowing your test scripts to remain robust against UI changes in the system under test.
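RanoreXPath is proprietary to Ranorex, but the design principle it embodies – keep element identification in one central repository, out of the test logic – can be sketched generically. The locators below are invented, and Selenium merely stands in for an identification layer.

```python
# A central repository maps logical names to locators, so a UI change
# means editing one entry here, not every test script.
REPOSITORY = {
    "login.username": "//input[@id='user']",
    "login.password": "//input[@id='pass']",
    "login.submit":   "//button[@name='signin']",
}

def element(driver, logical_name: str):
    """Resolve a logical name to a live UI element via the repository."""
    from selenium.webdriver.common.by import By
    return driver.find_element(By.XPATH, REPOSITORY[logical_name])

# Test scripts then read in domain terms and survive locator changes:
#   element(driver, "login.username").send_keys("test.user")
```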
Working in teams – not everyone has to be an expert!
Seriously, test automation is more than Capture & Replay, right? You are right! At the same time, we know that not everyone in a test team has the skills to implement automated test scripts. For this reason the Ranorex tool set offers different test automation approaches, making it easier to work in teams. Ranorex Studio assists testing teams working together on a test automation project: while domain testers might focus on which test cases have to be automated, automation experts concentrate on preparing reusable automation modules (keywords). You don't have any experts? No problem. Ranorex Recorder is the perfect way to get started with Ranorex. Integrated in Ranorex Studio, Ranorex Recorder is a lot more than classic Capture & Replay: it allows you to simply create flexible and reusable automation modules without hitting the 'Record' button.
Entry Level Price: € 1,480.00
Ranorex, Strassganger Strasse 289, 8053 Graz, Austria T: +43 316 281328 E: info@ranorex.com www.ranorex.com
Reflective Solutions Ltd
Reflective Solutions Ltd was formed in 1998 as a specialist application performance testing consultancy, and has been assuring application performance ever since. Reflective is dedicated to providing software and services that allow our clients to protect and grow their brands and revenues. We enable this by ensuring our clients' clients never experience performance problems or outages with the systems they provide.
Products
StressTester™ is an enterprise-class performance testing tool, capable of providing comprehensive, in-depth analysis of any web application's performance, scalability and load capacity. StressTester™ is designed to reduce the costs and timescales traditionally associated with load and performance testing. The innovative Graphical User Interface (GUI) ensures that StressTester™ is quick and simple to use, with no need for scripting expertise. Simple drop-down menus, intuitive wizards, a free Online Learning Centre, tutorial videos and world-class support all ensure that testers are able to configure complex scenarios in significantly reduced timescales.
StressTester™ is even suitable for performance testing within agile development projects – the speed at which tests can be configured and executed allow testing to be undertaken within sprints. StressTester™ also monitors every aspect of the system under test. This includes low level monitoring of the system’s operating systems, web servers, application servers and database systems. Poor performance results are automatically correlated with system monitoring data, allowing StressTester™ to instantly pinpoint the underlying causes of performance issues. Benefits of adopting StressTester™: • Reduced testing timescales • Reduced testing costs • Reliable results • Detailed analysis • Automatic problem diagnosis • Reduce the risk of failure Download a free trial by visiting our website www.reflective.com
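The automatic correlation described above can be reduced to a deliberately simple sketch – not StressTester's internals – in which response-time samples are aligned with server CPU samples by timestamp, and slow responses that coincide with saturation are flagged. The thresholds and data shapes are assumptions.

```python
# Toy correlation of load-test results with system monitoring data.
def correlate(response_times, cpu_samples, slow_s=2.0, busy_pct=90.0):
    """Both inputs are lists of (epoch_seconds, value) tuples."""
    cpu = dict(cpu_samples)
    findings = []
    for ts, elapsed in response_times:
        if elapsed >= slow_s and cpu.get(ts, 0.0) >= busy_pct:
            findings.append((ts, elapsed, cpu[ts]))
    return findings  # slow responses that coincide with CPU saturation
```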
UK: 44 Toynbee Street, London, E1 7NE; USA: 800 W Cummings Park, Suite 2000, Woburn, MA 01801 T: UK: +44 (0)20 8617 3012; USA: +1 617 502 2070 E: info@reflective.com www.reflective.com
Seapine Software
Founded in 1995, Seapine Software is the leading provider of quality-centric product lifecycle management solutions for development and IT organizations. Headquartered in Cincinnati, Ohio, with offices in Europe and Asia-Pacific, Seapine's development and QA tools help organizations of all sizes streamline communication, improve traceability, and deliver quality products on time and within budget. Seapine's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money. QA Wizard Pro automates the functional and regression testing of web, Windows, and Java applications, and load testing of web applications. Featuring a next-generation scripting engine, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, remote execution support, and built-in debugging. Surround SCM, a highly scalable cross-platform software configuration management solution, manages all changes to your product's digital assets and makes them available to your team anytime and anywhere. All data is stored in industry-standard relational
database management systems for greater security, scalability, data management, and reporting. Feature-rich Surround SCM includes changelists and atomic transactions, LDAP and Active Directory support, shadow folders, defect linking, hyperlink access to files and branches, integration with a variety of IDEs and build tools, and WebDAV support. Surround SCM's change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process. TestTrack manages all application development phases and artifacts – from requirements, user stories and release planning, through sprints, assignments and work items, to test cases, QA cycles, defect resolutions and releases. By configuring the workflow, renaming fields, creating custom fields, setting default values, and defining event-based triggers, you can fully customize TestTrack to fit your terminology, methodology, and industry regulations. Quality is the way of life at Seapine. From our expert consulting and services teams, to our award-winning product management solutions, to our world-class customer support, Seapine Software has helped thousands of companies worldwide build, test, and deploy quality software.
T: 0208 948 9460 E: salesuk@seapine.com www.seapine.com
Spirent
Is your mobile security solution really securing your network, devices and information?
The issue of mobile security has become a buzz-word in the industry today. Enterprise IT organizations have much to lose by not taking the initiative and enforcing a strict mobile security regimen of their own. Now is the time for them to take control of protecting their business and not just depend on software vendors' claims.
As more enterprises move towards supporting a mobile workforce, ensuring the security of the network, systems and information is critical. Furthermore, the trend of bring-your-own-device (BYOD), migration to cloud computing and increasing numbers of application malware, viruses, worms and denial-of-service (DoS) attacks are adding additional levels of security risk to the network.
Today's remedies, such as firewalls in the networks, vulnerability scanning, mobile device management and information security management, are fragmented and incoherent. Security is only as strong as the weakest link in the network defenses. Unfortunately, most enterprise IT departments depend only on their vendors' claims to assess the fortification of their network, and lack a comprehensive and consistent security policy that covers all corporate physical and virtual assets. A sound security policy dictates that enterprises must enforce best practices that cover mobile security testing, compliance, network security testing, web security testing and device and application management. The specifics of the validation and testing must be tailored to the organization's workflow, user profiles and types of connected devices. It needs to go beyond just looking for known risks and also track and test for abnormal traffic patterns that pose greater, yet currently unknown, risk to enterprise mobility. A comprehensive security policy has to protect the organization from threats, but it also has to remain transparent to the workforce, allowing employees to remain productive before, during, and after threat incidents occur. Only testing and validation will tell the true story.
Selecting the right tools
Spirent, a recognized leader in testing the performance, availability, security and scalability of networks and applications, offers enterprises the tools and solutions to effectively manage network and application security. Spirent Studio Security is the industry's only unified security testing solution designed to quickly and simply test today's cloud applications and infrastructure. Spirent Studio Security helps ensure that applications and network infrastructure are bulletproof. It helps measure the network's ability to detect and prevent thousands of known attacks, tests network resiliency by sending millions of unexpected or malicious inputs, measures security devices' ability to withstand targeted DDoS attacks, and tests their capability to inspect traffic for malware, unwanted URLs and spam and take appropriate action. With security and performance testing using Spirent Studio Security – including fuzzing, known vulnerability testing, mobile device emulation, DDoS emulation and application security testing – enterprises can minimize exposure to security breaches and ensure protection against the latest attacks. For more information visit: http://www.spirent.com/Networksand-Applications/App_Aware_Security
Spirent Communications plc, Northwood Park, Gatwick Road, Crawley, West Sussex, RH10 9XN, United Kingdom. T: +44 (0) 1293 767679 E: UKsales@spirent.com
TechExcel
TechExcel DevSuite provides an integrated platform for development and defect tracking, requirements management, project planning and quality management. With tools to effectively support both agile and traditional processes, DevSuite allows you to manage development your way. Achieve balanced and sustainable development with a solution that truly fits your needs.
Portfolio – Product Portfolio Management
The Product Portfolio Management tool provides a comprehensive and real-time view of all internal and external processes and resources to enhance the management practices for all of an organization's project and program portfolios. Integrating PPM across all DevSuite modules bridges the gaps between development teams communicating project status, managers reporting progress, and decision-makers quantifying projects with the greatest business value.
DevSpec – Requirements Management
DevSpec enables organization-wide collaboration for the management of requirements, specifications, product ideas or agile stories, with integration points that allow you to drive development and testing directly from completed requirements. Whether you wish to adopt a simple agile process for building a product backlog or need to implement a defined and regulated requirements process, you will quickly achieve full traceability throughout the project lifecycle.
DevPlan – Development Project Planning
A better alternative to traditional project management tools, DevPlan provides robust development project planning and resource management. By dynamically linking your project timeline to implementation tracking in DevTrack, you achieve real-time visibility into the current project status, risk areas, burndowns and expected delivery dates.
DevTrack – Implementation and Defect Tracking
DevTrack comprehensively tracks and manages all aspects of a development project, from feature stories and implementation tasks to product defects and change requests. Teams can effectively plan, organize and execute development work, and with tools to support both agile and traditional practices, DevTrack can easily be configured to fit your needs.
DevTest – QA Test Management
From test case creation, planning and execution through defect submission and resolution, DevTest manages the complete quality lifecycle. Implement quality processes earlier in the development lifecycle to manage shorter deadlines, address complex contemporary testing challenges, and improve your deliverable software.
KnowledgeWise – Knowledge Base
DevSuite includes KnowledgeWise, a central knowledge repository for managing documents, images, Wiki articles, Wiki books, and other digital assets and attachments. KnowledgeWise provides a fully configurable user interface and definable workflow process for tracking knowledge creation, review, publishing, and approval processes, and items in KnowledgeWise can be accessed from all areas of DevSuite.
DevTime – Timesheet Management
Adding the optional DevTime module to DevSuite creates a complete time sheet management system that is integrated with development time tracking in DevTrack.
Number of sites: UK – 60 Globally – 1,500 Entry level price: £1,500
Crown House, 72 Hammersmith Road, London, W14 8TH T: +44 207 470 5650 F: +44 207 470 5651 E: emeainfo@techexcel.com www.techexcel.com
Testing Technologies
Testing Technologies has been advising on and supporting successful software test automation projects for many years. We design and market an extensive portfolio of immediately available and approved test tools to significantly increase your system quality. Beyond that, we provide direct, reliable tool and service support that is backed by a strong community in your particular domain, worldwide. With the tools and services of Testing Technologies you are able to create and enhance highly customised yet off-the-shelf test environments quickly and easily – right on budget, right on time. During the test process, you only have to concentrate on what to test, not on how. Using our integrated test automation platform TTworkbench together with a variety of existing plug-ins or within complete test suites leads you to fast results in your test automation project. The extension capabilities of TTworkbench via open and standardized APIs enable the implementation of additional functionality. That way you can create your own solutions, enhancements, missing features…
All your investments in building your particular test infrastructure can be reused in the future as TTworkbench is based on the internationally standardised test technology TTCN-3 – well designed by testers for testers, with a solid test system architecture designed by software developers for software developers. Through cooperation with strong global and local partners, Testing Technologies is able to respond more effectively to your needs in testing. Numerous partner companies across Europe, USA and Asia spread the ideas and basic principles of systematic, automated testing worldwide. Testing Technologies was founded as a spin-off of the Fraunhofer Institute FOKUS in 2000. Our team of experts is continuously developing new products to meet the expectations of evolving markets.
Testing Technologies IST GmbH, Michaelkirchstrasse 17/18, 10179 Berlin, Germany T: +49 (0) 30 726 19 19 0 F: +49 (0) 30 726 19 19 20 E: info@testingtech.com www.testingtech.com
T-Plan
Since 1990, T-Plan has supplied best-of-breed solutions for testing. The T-Plan method and tools allow both the business unit manager and the IT manager to manage costs, reduce business risk and regulate the process. By providing order, structure and visibility throughout the development lifecycle, from planning to execution, they accelerate the time to market for business solutions. The T-Plan Product Suite allows you to manage every aspect of the testing process, providing a consistent and structured approach to testing at the project and corporate level.
What we do
Test Management: The T-Plan Professional product is modular in design, clearly differentiating between the analysis, design, management and monitoring of the test assets.
• What coverage back to requirements has been achieved in our testing so far?
• What requirement successes have we achieved so far?
• Can I prove that the system is really tested?
• If we go live now, what are the associated business risks?
Incident Management: Errors or queries found during test execution can also be logged and tracked throughout the testing lifecycle in the T-Plan Incident Manager.
Test Automation: Cross-platform (Java) test automation is also integrated into the test suite package via T-Plan Robot, creating a full testing solution. T-Plan Robot Enterprise is the most flexible and universal black box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail.
• Platform independence (Java). T-Plan Robot runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix, Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE and Symbian.
• Test almost ANY system. As automation runs at the GUI level, via the use of VNC, the tool can automate any application, e.g. Java, C++/C#, .NET, HTML (web/browser), mobile and command line interfaces – also applications usually considered impossible to automate, like Flash/Flex.
If you have an interest in improving the quality of your delivered systems, reducing the cost of testing those systems, or reducing the uncertainty and risk associated with releasing your systems, then a proven test process supported by the T-Plan suite of tools will be a welcome addition to your test team. For more information about how T-Plan is changing the face of testing, please contact us on +44 (0)1209 614 714.
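T-Plan Robot drives the GUI over VNC with its own scripting; as a rough stand-in for the image-based idea, here is a sketch using the open-source pyautogui library, where the screenshot file is something you would capture yourself.

```python
# Image-based GUI automation sketch (generic, not T-Plan Robot's API):
# find a control by its screenshot, then click and type as a user would.
import pyautogui

# Older pyautogui versions return None when the image is not found;
# newer ones raise ImageNotFoundException instead.
point = pyautogui.locateCenterOnScreen("login_button.png")
if point is None:
    raise SystemExit("Control not visible -- treat as a test failure")
pyautogui.click(point)
pyautogui.write("test.user", interval=0.05)   # type like a human
```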
T: +44 (0)1209 614 714 E: sales@t-plan.com www.t-plan.com
TRICENTIS
Since 1997, TRICENTIS has offered cross-industry expertise in all aspects of software testing and software quality assurance. With TOSCA Testsuite, TRICENTIS has developed an innovative and technically superior solution for software testing, test automation, and risk assessment. Over 300 customers worldwide rely on the expertise of our consultants and the performance of TOSCA Testsuite. TOSCA is ideal for industries with workflow business processes and many operating engines, browsers and smart devices, such as Banking, Insurance, Financial Services, High Tech, Telecom, Manufacturing, Retail, Healthcare, Utilities, the Public Sector and other leading industries requiring the highest quality of software testing.
Why we’re different
The TOSCA model-based approach marks a paradigm shift in automated testing, empowering non-technical users to quickly create automated business test cases in plain English and configure them for multiple operating systems, browsers, and smart devices. TOSCA's OneView technology provides leapfrog advances in software testing, software management and test automation technology, and delivers a robust, cost-effective, model-based solution that sets TOSCA apart from its competitors. TOSCA offers certified integration with SAP and with popular Application Lifecycle Management (ALM) solutions, popular defect tracking and other testing solutions for medium, large and global enterprises. TOSCA Testsuite was designed with best practices in mind.
– Winner of Best in Test Award, 2012;
– ASP Award “The Year‘s Ten Best Web Support Sites”, 2011;
– Winner of R.E.C.S.S. (Recognition of Excellence in Customer Support and Service), 2011.
T: +43 (1) 263 24 09 E: a.brouwers@tricentis.com www.tosca-testsuite.com www.tricentis.com
Testing Solutions Group Ltd (TSG)
Established in 2001, TSG are enablers who help ensure successful business outcomes for our clients' IT systems. We have a long history of helping to validate and test large-scale, complex systems. Our extensive experience in Software Quality Management (SQM) and Independent Verification and Validation, coupled with our international standing and a strong core competency within our own permanent consulting pool, allows us to deliver projects that meet the needs of the business cost-effectively. Specialising in the financial and legal sectors, we work with organisations to provide the right testing and management skills for projects and programmes.
Consultancy: Test strategies, the new ISO 29119 standard for software testing, and PTTM (Personal Test Maturity Matrix).
Managed services: Co-sourced or in-sourced test teams.
Skills gaps: Permanent and contract resource services to support your in-house team.
Learning and development: ISTQB and Agile certificated training, on site and public access; practical Test Management and Agile workshops for onsite teams.
The stakes are high for organisations today, with reputational and financial risk at the top of the agenda. Set against this is the requirement for CIOs to drive strategies that make the business more nimble and reduce costs. Software testing is critical to success. With four distinct divisions, TSG can provide your organisation with the direction and support it looks for in a testing partner.
Number of sites: Number of UK customers in excess of 200, growing base of global customers.
Testing Solutions Group Ltd (TSG), 117-119 Houndsditch, London EC3A 7BT T: +44(0) 20 7 469 1500 E: test@testing-solutions.com www.testing-solutions.com Contact: Chloe Foster
The last word...
I Told You So...
Desperately trying to resist the urge to gloat when a project fails in exactly the way he predicted, Dave Whalen is relishing being right.
Try as I might to resist, sometimes I revert to a six-year-old. I'm usually not one to gloat or give a hearty “I told you so!” when I'm right, even when it's completely justified. Sometimes I just can't resist. I'm having one of those times and I should totally exploit it. I know, I'm bad, but maybe they'll listen next time. Our story begins a little more than a year ago. We were starting off on a new project and adopting Agile as our methodology. For those of you that may have been reading me for a while, you know that I was hardly an Agile cheerleader back then. As we were talking about testing, it was painfully obvious that there was going to be a lot of pressure to only validate the new functionality from a ‘happy path’, or positive testing, perspective. As you can imagine, I was opposed to this plan. OK, that was an understatement. I was livid! But I'm a team player. As I learned in the Air Force, I may not agree but I will salute smartly and carry out the orders of my superiors. For the first few sprints, against my better judgement, I did as I was told: only positive, happy path tests. If I strayed off the path (as I tend to do) and accidentally ran a negative test and found a defect, I reported it and moved on. After a while I became uncomfortable with just the happy path. I once again approached management with my concerns about the lack of negative testing. Once again I was told to stick to the happy path. This time, though, I made sure I had the decision in writing – just in case. What we in the Air Force used to call CYA (covering your a--). With my a-- effectively covered, I saluted smartly and carried on. Well, sort of. I tried. I really did. But ultimately, I couldn't do it. So, how do I make management happy and still be able to sleep at night? What follows is, by far, not a
recommendation. I was bad. I broke the rules. Do as I say, not as I do. What did I do? I lied. I know – bad Dave. But in this case it was a good lie (if there is such a thing). My mother would kick my butt for saying that. I inflated all of my estimates to include the additional time that I needed to create and execute my suite of negative tests. I'm glad I did. I found a lot of defects from the suite of negative tests. Unfortunately, I never got the time to go back and create the negative tests missing from the previous sprints; more on that later. I created a number of negative-test-related defects. After a while someone caught on. I had some explaining to do. I said I wouldn't do it but I did it anyway. Bring on the inquisition. My guilt got the better of me. I broke under the pressure and admitted that I had indeed been doing negative testing. Bring on the punishment. What actually happened took me by surprise. Rather than branding me a nonconformist, they were actually pleased that I had taken the initiative to create the negative test suite. It uncovered a lot of issues that would probably have eventually been discovered by our customers. So instead of being tied to the mast and given 20 lashes, I received a pat on the back and a hearty ‘job well done’. Fast forward a few months. We released the application to the customer. Sure enough, they did something unexpected – a negative test scenario. It was one of the overlooked negative tests that we should have found early in the development cycle. We missed it. Sure, it was a negative test scenario. It was the result of the customer going astray and doing something off the happy path. Something they shouldn't have done. The result – an unhappy customer. I know, the customer was ultimately at fault. They didn't use the application correctly; good luck convincing them
of that! No – we messed up; we were the ones with egg on our faces; we looked bad! Point us to the naughty corner. We put on our big boy pants and admitted it was our fault. No finger pointing, no excuses. It was tough, but I think ultimately we earned the respect of the customer. And we learned a hard and valuable lesson. Do I relish being right? Of course I do. Do I gloat? Maybe a little – OK, a lot. Do I revert to age six? I could – it's been really hard not to – but there is no need for me to kick them any further, even if I want to! Six-year-old Dave would do it.
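For anyone who wants the happy-path/negative distinction in concrete terms, here is a minimal sketch assuming a hypothetical price-lookup function and the pytest framework: the first test validates expected use, while the parametrised ones exercise exactly the off-path inputs a real customer eventually supplies.

```python
# Happy-path versus negative testing, in miniature.
import pytest

def get_price(catalogue: dict, sku: str) -> float:
    if not sku or sku not in catalogue:
        raise ValueError(f"unknown SKU: {sku!r}")
    return catalogue[sku]

def test_happy_path():
    assert get_price({"A1": 9.99}, "A1") == 9.99

@pytest.mark.parametrize("bad_sku", ["", "NOPE", None])
def test_off_the_happy_path(bad_sku):
    # The inputs management never scheduled time for.
    with pytest.raises(ValueError):
        get_price({"A1": 9.99}, bad_sku)
```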
Dave Whalen
President and senior software entomologist Whalen Technologies softwareentomologist.wordpress.com
Inventor, writer, developer and maverick
FRANK BORLAND IS BACK
20 years ago, I was a maverick programmer working high in the Santa Cruz mountains. Now I’ve decided to shake off my reclusive past and get back to work for you, the developer. I’m ready to help deliver game-changing software once again, just as I did with Turbo Pascal and Sidekick. See my journey back to Borland at meetfrankborland.com and follow me on Twitter to tell me what I can do for you: @Frank_Borland