

No. 12 • June 2013 • www.todaysoftmag.com • www.todaysoftmag.ro

TSM

TODAY SOFTWARE MAGAZINE

Cover features: The Advocates of Romanian Startups • iOS application security • Data Grids systems

Gemini Solutions Foundry • TechHub Bucharest • Software Craftsman's Tools: Unit Testing • Clean code = money in the pocket • Eclipse Rich Client Platform • Big Data, Big Confusion • NEO4j Graph Database • Hadoop (II)

Haskell (II) • JQuery Europe 2013 • Our experience with Behavior Driven Development in Python • Planning for Performance Testing • Test Driven Development (TDD) • Performance Management (II) • Start Me Up - Akcees • Why waste our breath on Agile?



6 The Advocates of Romanian Startups Ovidiu Mățan

30 Test Driven Development (TDD) Ladislau Bogdan and Tudor Trișcă

9 TechHub Bucharest Irina Scarlat

10 Gemini Solutions Foundry Radu Popovici

11 Start Me Up Irina Scarlat

33 Big Data, Big Confusion Mihai Nadăș

35 Data Grids systems Attila-Mihaly Balazs and Dan Berindei

37 NEO4j Graph Database for highly connected data Iulian Moșneagu

12 NGO Connect NGO Connect team

39 Hadoop (II) Radu Vunvulea

13 The Cluj IT History (VI) in Numbers Marius Mornea

15 Developing iOS applications with Security in mind Cristian Roșa

18 JQuery Europe 2013 Andrei Otta

21 Performance Management (II) Andreea Pârvu

24 Software Craftsman’s Tools: Unit Testing Alexandru Bolboaca and Adrian Bolboacă

27 Our experience with Behavior Driven Development in Python Ramona Suciu and Dan Pop

41 Functional Programming in Haskell (II) Mihai Maruseac

44 Planning for Performance Testing Alexandru Cosma

46 Book review: Eclipse Rich Client Platform Silviu Dumitrescu

48 Why waste our breath on AGILE? Bogdan Nicule

50 Clean code = money in the pocket Dan Nicolici

50 Gogu on the road with milestones Simona Bonghez, Ph.D.


editorial

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief @ Today Software Magazine

I have recently taken part in the closing festivity of the entrepreneurship program in tandem teams organized by GRASP Cluj, where I watched the presentations of four teams which presented the projects they had carried out during the eight weeks of the program. It was interesting to notice that none of these projects was based on IT, but on traditional businesses in the domain of agriculture, such as plant cultivation or animal husbandry. The estimated development period for one of them spanned several years until it could reach maturity and be released on the market. From this point of view, namely the time necessary to carry out a project, the IT domain has a great advantage. We can create a prototype in a weekend or in a night, staying at home or taking part in a hackathon. It is so easy to materialize an idea! Even if one does not belong to the IT environment or does not yet have a world-changing idea, one can take part in a Startup Weekend or Startup Live and join one of the teams. At the end of the weekend you will already have a prototype available, and during the next week you can receive support from some local incubators. We dare you to try this experiment, and for those who have already done so and have a prototype, we have prepared the article entitled "The Advocates of Romanian Startups".

We are also launching a new service, www.programez.ro, a new way for specialists to communicate. Practically, the unconference panels that take place during the release events can be continued online on this discussion forum. Moreover, we will be delighted to answer the technical questions you might have concerning the entire IT area: programming, testing, management or HR. Since it is a TSM initiative, we can guarantee answers directly from specialists.

Another novelty is the introduction of a new communication channel with our readers, through the support provided by TXT Feedback. This local startup makes it possible for magazines and other publications to communicate with their readers by SMS. Thus, beginning with June, we offer you the possibility to give us feedback or address your questions directly, by sending an SMS to the number 0371700018.

An important event, ICT Spring Europe 2013, will take place in Luxembourg on the 19th and 20th of June. We will be there with TSM issue no. 12. We invite you to join us, and for this we offer you a promotional code which gives you free participation in the event: TSMS13.

The articles in this issue begin with a list of incubators and co-working spaces actively involved in supporting Romanian startups; you can rely on it as a list of contacts. Testing is the focus of a series of articles on Test Driven Development (TDD), Behavior Driven Development (BDD) and Performance Testing. Cloud application development is thoroughly discussed in Big Data, Big Confusion, High Availability Performance Systems Using Data Grids in Java, NEO4j Graph Database and Hadoop (II). Continuing with software development, you will find part II of Functional Programming in Haskell. jQuery Europe 2013 was an exceptional event, and we have an article dedicated to it.

In the end, I would like to remind you about the Timeline project which is in progress. In a few words, TSM has initiated a project which focuses on drawing a poster regarding the development of Romanian companies and their main achievements. The email address timeline@todaysoftmag.com is dedicated to those who wish to become part of this project.

Enjoy your reading!

Ovidiu Măţan

Editor-in-chief @Today Software Magazine



TODAY SOFTWARE MAGAZINE

Editorial Staff

Editor-in-chief: Ovidiu Mățan / ovidiu.matan@todaysoftmag.com
Editor (startups & interviews): Marius Mornea / marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău / dan.hadarau@todaysoftmag.com
Copyright/Proofreader: Emilia Toma / emilia.toma@todaysoftmag.com
Translator: Roxana Micu / roxana.elena@todaysoftmag.com
Reviewer: Tavi Bolog / tavi.bolog@todaysoftmag.com
Reviewer: Adrian Lupei / adrian.lupei@todaysoftmag.com

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com

Authors list

Alexandru Bolboacă / alex.bolboaca@mozaicworks.com
Agile Coach and Trainer, with a focus on technical practices @ Mozaic Works

Dan Berindei / dan@infinispan.org
Software Developer @ Infinispan

Irina Scarlat / irina.scarlat@howtoweb.co
Co-Founder of Akcees

Mihai Maruseac / mihai.maruseac@gmail.com
IxNovation @ IXIA, ROSEdu, ARIA member

Silviu Dumitrescu / silviu.dumitrescu@msg-systems.com
Java consultant @ .msg systems Romania

Radu Vunvulea / Radu.Vunvulea@iquestgroup.com
Senior Software Engineer @ iQuest

Mihai Nadăș / mihai.nadas@tss-yonder.com
CTO @ Yonder

Adrian Bolboacă / adrian.bolboaca@mozaicworks.com
Programmer. Organizational and Technical Trainer and Coach @ Mozaic Works

Radu Popovici / radu.popovici@geminisols.ro
Software Engineer @ Gemini Solutions

Bogdan Nicule / BNicule@neverfailgroup.com
IT manager with international projects experience

Iulian Moșneagu / iulian.mosneagu@geminisols.ro
Senior Software Engineer @ Gemini Solutions

Dan Pop / dan.pop@3pillarglobal.com
Senior Test Engineer @ 3Pillar Global

Andreea Pârvu / andreea.parvu@endava.com
Recruiter @ Endava

Attila-Mihaly Balazs / dify.ltd@gmail.com
Code Wrangler @ Udacity, Trainer @ Tora Trading

Andrei Otta / andrei.otta@accesa.eu
Software developer @ Accesa

Ramona Suciu / ramona.suciu@3pillarglobal.com
Test Lead @ 3Pillar Global

Simona Bonghez, Ph.D. / simona.bonghez@confucius.ro
Speaker, trainer and consultant in project management, Owner of Confucius Consulting

Cristian Roșa / cristian.rosa@isdc.eu
Mobile developer @ ISDC

Alexandru Cosma / alexandru.cosma@isdc.eu
Senior Tester @ ISDC

Tudor Trișcă / tudor.trisca@msg-systems.com
Software Developer @ .msg systems Romania

www.todaysoftmag.com
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag

ISSN 2285 – 3502
ISSN-L 2284 – 8207

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.

www.todaysoftmag.ro
www.todaysoftmag.com



startups

The Advocates of Romanian Startups

Though we find ourselves only at the beginning as far as Romanian startups are concerned, and we cannot yet boast a mature startup culture in the local IT environment, its advocates are making their presence felt more and more. Through their actions and the facilities they offer, they share a common goal, namely to support local businesses and their success. The present article is designed as a list of contacts for anyone who wishes to launch a startup.

Co-working spaces, or shared working areas

A new practice on the local market is renting an office for a certain number of hours or days. At the end of the day the space is cleared so that somebody else can use it the next day. The advantages of this approach are diversity, the possibility to quickly establish connections with other businesses and the ability to feel the local pulse. Since these spaces are not permanently dedicated to one tenant, the number of organized events is high, and through connections or affiliated organizations the visibility of the business is increased.

TechHub Bucharest

Services offered:
• co-working space,
• resident type offices with access 24/7,
• events organized by TechHub and the online community.

Affiliation to an international organization: TechHub, a global community dedicated exclusively to technology entrepreneurs, currently represented in London, Manchester, Riga and Bucharest.

Admission criteria: tech startups.

Contact:
• 39-41 Nicolae Filipescu Boulevard, 1st floor, sector 1, Bucharest
• bucharest.techhub.com

Cluj CoWork

An agreeable place in the center of Cluj, which makes you feel at home. There is internet support (WiFi), fruit, water and coffee. If the weather is nice you can work on the balcony, and you are surrounded by hardworking people who are willing to help you in case you have problems. Cluj CoWork makes you productive in a relaxing way. For startups they offer mentorship programs, design and support in the development of applications.

Investments: financial, time and services investments have been made for startups.

How do we attract investors? We use our own networking for this.

Monthly costs to work within the co-work? 140 euro/month, and in the first month a 20 euro discount is offered. For startups we are always willing to negotiate.

What are the criteria in order to be accepted? The same for everybody… if you are serious, motivated, hardworking and open minded, then you are accepted.

Which are the activities within the co-work? Every week there are interesting events, such as GeekMeet. Besides these there are also relaxing and socialization events such as Sangria nights, cocktail courses, sushi making & eating and barbecues.

Contact:
• 3 Emil Isac Street, Cluj-Napoca
• clujcowork.ro

Cluj HUB

Besides the opportunity to carry out your professional activity from one of the 4 floors (plus garden) provided with offices and collaborative desks, in ClujHub we also provide you with connected services that add value to your business:
• financial and accounting services specialized in start-ups;
• business development services and advice in attracting investors;
• matchmaking and networking services;
• services of hosting the registered office;
• printing and design services;
• renting rooms for workshops and conferences for up to 90 people.

The number of enrolled teams: currently more than 10 companies are part of our community, and we estimate that by the end of 2013 there will be 35 companies and more than 50 members.

Attracting investors: among the companies that we accommodate, there is one which received financing of approximately 30,000 euro within an acceleration program.



The incubation period: the investors we are in touch with belong to the local community and we meet them periodically, but we also have investors from the international community.

Affiliation to an international organization: we are not affiliated to an international organization, but we collaborate with many similar entities.

Charging: in order to carry out your activity within ClujHUB, in the co-work area, the fees for members start from 30 euro and can reach 140 euro for full access.

Admission criteria: the admission criteria are fairly subjective, as we are looking for an affinity with the other members of the community, but we begin from:
• freelancers – liberal professions and start-ups;
• foreigners delegated in Cluj; we have a long term partnership with Cluj International Club;
• companies that target local investments;
• entrepreneurs who develop new services and products and bring innovation to the market;
• social entrepreneurs and people who value community involvement;
• open minded people who are ready for collaboration and personal development.

Activities within the incubator: there are many events which should be treated separately, because beginning with June they will be even more numerous.

Contact:
• 9 Pitesti Street, Cluj-Napoca
• clujhub.ro

Crowd Funding Platforms

Community involvement in supporting project development is an innovation. The support comes directly from users who wish to become customers of the respective business before it actually exists. This issue's proposal, multifinantare.ro, represents an interesting approach.

multifinantare.ro

It provides crowdfunding services by publishing projects on its own platform. Each project that obtains 50% of the requested amount from the crowd will get the other 50% from the investor. The accepted projects must have an innovative component. They are provided with legal advice, financial and fiscal advice, and investment advice. We will publish more on this initiative in one of the following issues of the magazine.

Contact:
• multifinantare.ro
• 4 N. Titulescu Boulevard, Cluj-Napoca

Start-up accelerators/ incubators

Whereas in the solutions presented above the support offered was an additional service, the business driver being the rented space or a percentage of the amounts raised, accelerators/incubators are a category whose success is entirely tied to the start-ups. The main hurdle in this case is admission, but once accepted, one benefits from committed support for turning the start-up into a real business.

Gemini Solutions Foundry

It is the first incubator exclusively dedicated to Romanian IT start-ups, and it facilitates connections between Romanian entrepreneurs and investors from Silicon Valley. It provides the teams accepted into the incubation program with fully equipped offices and access to conference rooms and showrooms. Technical mentorship is provided by people with rich technical experience from the Gemini Solutions team, people who have worked for companies of all sizes, from start-ups to multinationals, in almost all existing domains, from back-end to front-end, from cloud computing to mobile applications. The financial and legal advice offered has as its goal easing the work required in these areas, in order to allow entrepreneurs to focus on what really matters in a start-up, namely the implementation of the idea. But maybe the most interesting facility is represented by the connections with the investment funds of Silicon Valley. These connections are made within an annual event called Demo Day, where the teams present their idea as well as the progress made within the incubation program. This type of financing is very important for the further growth of the team, since, besides the received money, the team also benefits from the connections, as well as from the investor's counseling; in other words: smart money.

Contact:
• gemsfoundry.com

NextPhase

NextPhase is interested in innovative projects which have commercial potential. By getting involved, NextPhase completes the resources necessary for the project in order to ensure its proper progress. Thus, in the projects that we consider to be of interest, we can participate with a combination of services, such as: the organization and management of the research-development process; the elaboration and marketing of the intellectual property strategy; the provision of facilities; financing; business management; elaboration and development of a growth strategy; access to additional sources of investment; training programs for inventor/entrepreneur teams. Rephrasing, we could say that NextPhase is a "one stop shop" type of technology incubator where ideas with commercial potential find what they need in order to become successful businesses.

The number of enrolled teams - On December 31st 2012, there were 2 projects in progress and 3 projects in preparation. The next project will be in the software area, more precisely simulation, and it will be launched in June. The other two projects will most probably be launched towards the end of this year's third trimester.

Manner of attracting investors - Up to a certain point, NextPhase is the investor. This point depends mainly on the project, on the resources needed in the planned phases and on the evolution of the project up to that moment. Further on, the strategy of attracting investors greatly depends, once more, on the domain of the project, on its indicators and its risks. After all, the initiation of a healthy collaboration with an investor can only be based on an honest and transparent business proposal.

Incubation period - For technical, but also commercial reasons, we aim at an incubation period of about 12-18 months. Of course, there can be exceptions, but they have to be well-grounded.

Affiliation to an international organization - NextPhase is a subsidiary of Flogistics AG Switzerland and it has its own network of consultants and potential investors. NextPhase is not part of any well-known network of technological incubators. This seems to be an advantage considering the speed of decision making and the adaptability to the particularities of the projects.

Offered support after the teams leave the HUB - By the nature of its involvement, NextPhase continues and adapts the support it offers after the incubation period, until the moment of exit from the business. The support and exit strategy depend on the project's degree of success and the evolution of its ability to function independently.

Charging for co-work - Generally, we get involved only in the projects in which we believe, and usually this involvement is done in exchange for a share of the resulting profit. It is fairly difficult to take part in the success of a project in its incubation phase by charging in advance for consultancy, which is, generally, one of the most expensive services. Therefore, we usually do not charge for our involvement, but we take on the project's mission and risks, subsequently benefiting from a part of its financial success.

Admission criteria - The main indicators which we use in project evaluation are: the innovative/differentiating nature of the concept on which the project is based, the commercial potential of the idea and, most important, the inventor/entrepreneur team. As action philosophy, we believe the following are required: a good idea, an adequate implementation, but also the passion and endurance to face the obstacles which are inevitably present along the way.

Activities within the incubator - Besides the main activity, namely project development, NextPhase organizes display workshops, networking events and training programs for inventor/entrepreneur teams, wishing to contribute to the development of the entrepreneurial community in Cluj.

Contact:
• nextphase.ro

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief @ Today Software Magazine



startups


TechHub Bucharest

The month of May started with good news for the Romanian community of technology entrepreneurs: TechHub Bucharest was officially launched, the first co-working space in Romania exclusively dedicated to them, and part of the international TechHub network.

In recent years Romania has followed the trends from the West, and the co-working phenomenon has started to spread. In our country's big cities, entrepreneurial hubs have developed – places where entrepreneurs and freelancers work together. Moreover, the members exchange ideas and experiences, share resources and teach one another the lessons they have learnt, which leads to the formation of a community. In addition, the events taking place in the co-working space are relevant for the community and help the members expand their companies more quickly and efficiently. The benefits of working in such a place account for the high degree of occupancy of TechHub Bucharest less than a month after its opening. The entrepreneurs who are at the beginning of their business are aware that belonging to such a community helps them evolve rapidly, and they appreciate the international gateway brought along with the membership.

Why TechHub and how was the project born? TechHub is a global community dedicated exclusively to entrepreneurs in technology. The concept was born in 2011 at the initiative of Mike Butcher (European Editor, TechCrunch) and Elizabeth Varley. In the last two years, TechHub has supported the innovation and development of technological start-ups in London, Riga and Manchester.

It all started last year, more precisely on the 5th of June 2012, with a simple blog article written by Bogdan Iordache (Co-Founder of How to Web, the most important event on entrepreneurship and technology in South-East Europe). The How to Web team had just received the news that Bucharest Hub was about to close, so Bogdan asked his readers, members of the community, whether or not they needed a hub. The positive answers were numerous, so Bogdan decided to take this project further. One day later he would publish a list of conditions which

The Next Hub (the first name of what today is Tech Hub) was supposed to meet. It took a lot of work, exceptional people, dedication and passion in order to turn the Next Hub project into today’s TechHub Bucharest. Bogdan Iordache worked with Daniel Dragomir and Stefan Szakal, Co-Founders of TechHub, and together they made things happen. All this was done with a lot of commitment and the support of some people who believed in the project right from the beginning and brought their significant contribution to its development: Victor Anastasiu, Dan Calugareanu, Vodafone and Adobe Romania. Today, TechHub Bucharest fulfils all the criteria initially established: it is situated in the center of Bucharest (no. 39-40, Nicolae Filipescu street, a 5 minutes’ walk from the University underground station), has a reasonable cost and it is an open space where members benefit from quality interaction and can participate in the events organized by TechHub and the online community. The members get so much more than access to an office/ desk and to the coworking area. The 420 square meters space is divided between 35 permanent offices for the resident members, 60 places for co-working, meeting rooms and a space for events with a capacity of up to 150 people. TechHub aims to become the center of entrepreneurial community of Bucharest and it brings together entrepreneurs, freelancers, investors, tech companies, developers, bloggers and journalists. On the 9th of May, the events hall could hardly accommodate all the community members who had gathered there from all the corners of the country in order to take part in the official inauguration. The event brought together important representatives of the Romanian tech scene and enjoyed the presence of special guests – Mike Butcher (European Editor TechCrunch) and James Knight (International Development Manager TechHub). The evening began with a message from

the founding members, Bogdan Iordache, Daniel Dragomir and Stefan Szakal, and continued with a panel dedicated to startups, in which Bobby Voicu (Mavenhut), George Lemnaru (Green Horse Games) and Vladimir Oane (UberVu) took part. The three recounted the history of their own startups and stressed once more the importance of TechHub as an aggregation factor for Romanian tech start-ups. Then it was time for the investors to take over the stage. Andrei Pitis, Marian Dusan and Radu Georgescu talked about the things that an investor looks for in a start-up. Paula Apreutesei (Romanian-American Foundation), Catalina Rusu (Geekcelerator), Mihai Sfintescu (3TS Capital Partners) and Dan Lupu (Earlybird) talked about the support programs for entrepreneurs and the venture capital funds. As usual, there was also some discussion on innovation with the representatives of the greatest Romanian research-development centers: Alex Marinescu (EA Games), Teodor Ceausu (IXIA) and Dragos Georgita (Adobe Romania). Finally, the guests of the event had the opportunity to socialize during an anniversary party. TechHub Bucharest places Romania once more on the global map of innovation and entrepreneurship, and the place in Nicolae Filipescu street will very soon become the heart of a very active national community and will host the most important Romanian technological events. TechHub Bucharest, engage!

Irina Scarlat

irina.scarlat@howtoweb.co Co-Founder of Akcees



startups

Gemini Solutions Foundry The first Romanian IT business incubator with connections to Silicon Valley

Lately, Romanian entrepreneurs have had an increasing number of options to help them develop their business. Be they non-refundable funds, entrepreneurial communities or business incubators, all these programs are an undeniable aid, especially during the first few steps of business development.

A special case is represented by IT startups, because of the extremely dynamic market which can lead either to a rapid ascent (accompanied by a large number of users) or to a lifetime of anonymity. Following in the footsteps of the United States, this business category is starting to enjoy extraordinary attention in Romania, attention backed by the inauguration of several hubs (also known as co-work spaces) and business incubators dedicated exclusively to the IT industry. If the hubs excel in creating a community and organizing networking events between their members, the incubators entail a higher level of involvement from the administrative staff, through technical mentorship, financial and legal services, and through the connections they create between the members and potential clients, partners and investors. Because of the different approaches chosen by the two, a hub will have a higher number of enrolled teams which pay a monthly fee, whilst a business incubator will have a more restricted number of select teams which sell a percentage of their company as means of payment. Gemini Solutions Foundry is soon to open in Bucharest – the first Romanian business incubator dedicated exclusively to IT startups, which creates connections between the Romanian entrepreneurial community and Silicon Valley investors.

Located in Victory Square (Piata Victoriei) in Bucharest, in the close vicinity of the subway station with the same name, Gemini Solutions Foundry offers its members free private office space with all the necessary amenities and free access to conference and presentation rooms. The technical mentorship is provided by highly trained developers from the Gemini Solutions team who have rich experience working with companies of all sizes, from startups to multi-nationals, in almost every domain there is, from back-end to front-end, from cloud computing to mobile applications and everything in between. The financial and legal consulting services are meant to ease the company's work in these areas, to allow greater focus on what really matters for a startup: the idea implementation. But perhaps the most interesting facility is represented by the connections between the incubator's members and Silicon Valley investment funds. These connections are created during an annual event called Demo Day, where the teams have the chance to pitch their idea and overall progress to a series of investors from the Valley. This type of financing is very important for the coming growth of the company because the money also brings the investors' connections and guidance; otherwise said: the teams earn smart money.

The teams that benefit from the investors' interest will be helped to restructure their company into a US-based entity, open an office in Silicon Valley and hire an executive team. As compensation for these services, the startups sell a percentage of their company to the incubator, a percentage which is negotiated with each team individually. This method of payment saves the startup an extra expense in its incipient stage, when cash flow is usually a problem, and, this way, the better the company does, the better the business incubator does. This leads to an alignment of interests on both sides. Through the offered services, Gemini Solutions Foundry comes to the aid of IT entrepreneurs and supports them in extending their business to an international level, especially the US market – one of the largest, most dynamic and competitive markets in the world. Those who are interested in applying to this program are welcome to send an email to info@gemsfoundry.com or to fill in the contact form on the site gemsfoundry.com.

Radu Popovici radu.popovici@geminisols.ro Software Engineer @ Gemini Solutions



startups


Start Me Up

a different kind of entrepreneurial school

25 highly trained participants selected from over 200 applications… an international trainer… 8 well-known Romanian entrepreneurs and investors… 5 new business ideas… 5 companies which offered grants for the future entrepreneurs… 45 media partners who brought the project to public attention and 176 mentions in the Romanian press. This is what it looks like in numbers. We are talking about the first edition of the entrepreneurial school Start Me Up, which took place in Predeal last year. During the event, the participants learnt about the steps they have to take in order to materialize their own ideas, they wrote business plans and finally they validated their business concept by presenting it to a professional jury.

Each day of the Start Me Up entrepreneurial school was dedicated to a certain aspect of the business plan. Thus, the participants started with an idea generation and selection process, they went on to draw up the marketing plan and the financial plan, and then they prepared the business plan and the project presentation. The participants had the occasion to talk directly with some of the most appreciated Romanian entrepreneurs and investors and find out their success stories. Among the exceptional people present at Start Me Up we mention Peter Barta (Chief Executive Officer of FPP), Dragos Anastasiu (President of Eurolines and TUI Travel Center), Bogdan Iordache (serial entrepreneur, co-founder of How to Web), George Lemnaru (serial entrepreneur, co-founder of eRepublika), Bogdan Grosu (Founder of Grosu and Associates), Niels Schnecker (businessman, economic analyst, retired colonel of the US Air Force and television man) and Tiberiu Mitrea (entrepreneur, founder of NaturaLact). The full-time trainer during the program was Daniel Ramamoorthy, CFO and director of the investment company Phoenix Group and serial entrepreneur.

On the last day of the event, the participating teams presented the projects developed during the program in front of an expert jury.

The winning team developed Boomerang, a professional counseling and orientation company for high school students, and received a 2000 euro initial investment for project implementation, together with free business establishment. A special reward for professionalism was granted to the Bartello confectionery, and Bogdan Cange and Vlad Tudose, participants in Start Me Up, were awarded special distinctions for courage and perseverance, respectively. "In the context in which the country is suffocated with problems and everybody expects only to receive things, you have proven that you do not sit and wait, but you wish to do something. Seeing you, I realize that Romania still has a chance!" This is what Dragos Anastasiu, President of Eurolines and TUI Travel Center, told us last year.

The results were not late to show up and, a year after the event, its impact is visible. The training of the participants continued in monthly mentorship sessions where the young people had the opportunity to talk to some of the most appreciated Romanian entrepreneurs and investors. Vlad Tudose, the Start Me Up alumnus who got the jury's special award for perseverance, changed his initial business idea and established a start-up: Puzzled.by, an online platform for questions and answers. Puzzled.by has already received initial financing of 50,000 euro from a group of American angel investors, and the start-up was enrolled in the competition organized within the Shift international conference which takes place in Croatia. With a passion for entrepreneurship, Sebastian Maraloiu is another Start Me Up alumnus with a remarkable story.

He was about to launch the alpha version of Bring the Band, a platform which allows the public to bring their favorite artists to their hometown, when, together with his partner, he identified a unique business opportunity: Formspring, a social platform with over 30 million active users, could not sustain its growth and was about to close down. Sebi and his partner worked day and night to take advantage of this opportunity, and two weeks later they succeeded in launching YouReply, a platform developed with an initial investment of 50 $ which now has over 14,000 users.

Success stories can continue. Start Me Up has marked new beginnings and has changed lives. The participants are either currently developing their own startups or accumulating experience in order to become entrepreneurs in the future. Consequently, the Akcees team is getting ready for the second edition of the project, which will take place between the 21st and the 27th of July at the Speranta (Hope) Boarding House in Predeal. The ingredients for success will also be found in this edition: an international trainer, successful Romanian entrepreneurs and investors, quality participants and an experience which should not be missed by any young person who wishes to start out on their own. The novelty of this edition, however, is the organization of a national entrepreneurship promotion campaign: the Start Me Up Caravan. From the 20th of June to the 4th of July, the Akcees team will organize workshops on entrepreneurship in 8 cities of Romania and will bring in front of the audience entrepreneurs who have succeeded. For Akcees, this summer is under the sign of entrepreneurship! For young people, it is the summer in which they get the opportunity to learn how to make their dreams come true!

Irina Scarlat

irina.scarlat@howtoweb.co Co-Founder of Akcees



startups

www.ngo-connect.com

You are here to make a difference!

The story of NGO CONNECT is a wonderful story that started on the night of the 24th of May. 7 young entrepreneurs who didn't know each other came from 3 different cities to Tirgu Mures. They traveled hundreds of kilometers because the idea of Startup Weekend (SW) seemed like one that would bring them a nice experience. It turned out to be one of the best experiences of their lives… one never to be forgotten and one that can bring joy to them but, most importantly, to others. Marius, the one with the idea, and whom we, his team, want to thank for bringing us together, took the microphone and said that he wanted to build a platform for NGOs where everyone can see different social projects and can actively get involved in any cause they believe in. This was the start of NGO CONNECT, and the magical word that brought us together is NGO. That night began the 54-hour marathon…


We were all extremely enthusiastic and came up with different ideas on how we, the team, saw this project. After half a day of brainstorming, from all the ideas one in particular seemed to be the one we all wanted: a web platform where different players can connect: NGOs, donors, volunteers and companies. Currently we are all working to bring this platform to life, because we strongly believe that we can make this a better world and, most of all, because we believe that we can bring something good into people's lives. Our main focus is how to help different NGOs and people get together and get involved in causes they actually believe in.

It is in human nature to have doubts regarding everything. We completely understand that people have trust issues, even more so when it comes to where their money is going. Here is where we step in. We want to eliminate all these doubts and make every donation crystal clear. And more than that, we want to take things to the next level. For example, if one of the projects on our platform regards a little poor girl and a person is interested in helping her, that person will receive feedback with a thank-you video, some pictures and some contact info for the social case.

We want to CONNECT all types of players on this particular market: NGOs, volunteers, donors and, last but not least, corporations.


Why corporations? Because they have a big CSR (Corporate Social Responsibility) budget. Social responsibility is an integrated part of any company, through which the corporation gets actively involved in the community and takes part in solving some of the community's problems.

The people behind all of this are 7 young people who come from different backgrounds: economics, IT, NGOs, blogging. The team was named the most united in the competition. They never got separated, never wanted to quit and always played fair. And the letter "I" never came up; everyone had one focus… to make the project succeed. And it did. The work was highly rewarded. We won 3rd place but felt the same as if we had won 1st place. And another thing: the team has great chemistry. Just look at the photo for this article and you will see it. They started as strangers and parted as true friends.

The conclusion is simple: this platform wants to help as many people as possible. Because it is not a myth that if you give, in the end you will receive, maybe double. This team was brought together by one goal, and I hope everyone who reads this will get involved in this purpose: making the world better.

NGO Connect team contact@ngo-connect.com


history


The Cluj IT History (VI) in Numbers

For this article I've picked a series of numbers: 1957, 7400, 196/17, 27/100000000. They describe the path from the inauguration of the first ITC institution to the self-organization of the current community into a guild. In short, I will refer to the historical context, the evolution and the current stage.

1957 the inception

The moment of birth for ITC in Cluj: the year when Tiberiu Popoviciu, a renowned mathematician, founded the Calculus Institute with its own Department of Computers – the first ITC institution in Cluj, an institution that set the tone for the development of research and education, the base of the current community.

1948 - Emil Petrovici founded a local branch of the Romanian Academy
1951 - T. Popoviciu founded the Mathematics Section in the local branch of the Academy
1959 - MARICA - electromagnetic relays based computer
1960 - Romania holds the eleventh spot in the world regarding computer infrastructure, with Cluj holding the third spot in the country
1963 - DACCIC-1 - relays and transistor based computer
1965 - the Automatics section is founded in the Polytechnics Institute (current UTCN) in partnership with the Atomic Physics Institute of Cluj (renamed ITIM in 1977 and INCDTIM in 1999), under the leadership of Marius Hăngănuț
1965 - the Computer Science department is founded (in UTCN), led by Ioan Dancea
1968 - DACCIC-200 - first fully transistor based computer
1971 - the first informatics high school is founded: The Tiberiu Popoviciu Informatics High School
1975 - Babes-Bolyai University (UBB) creates The Calculus Center
1994 - UBB renames the Mathematics Faculty to the Mathematics and Informatics Faculty

7400 the graduates

ITC graduates from 1970 until this day. They are the product of the above institutions and make up the larger part of the current generation of local ITC professionals. They represent:
60, 200, 400 - the average number of graduates per year in 1970..1990..2000..2012
86% - out of the 8600 ITC employees in Cluj, from over:
50+ - companies with more than 50 employees
2.75% - out of 310K Cluj inhabitants are ITC professionals

27 / 100M+ the guild

is a naturally emerged cluster in which 27 of Cluj's ITC companies, with a cumulated annual turnover of 100 million euros, are organized under the name of Cluj IT Cluster. Their plans for the future include:
300M - euro investment over the next 15 years
20K - IT specialists concentrated in Cluj Innovation City

196+ / 17+ the community

has organized around 200 events, ranging from small meetups to European level events, through the work of 17+ dedicated professional communities. They were established in:
2006 - GeekMeet Cluj
2008 - Transylvania Java User Group
2010 - Cluj.rb, The Cluj Napoca Agile Software Meetup Group, Cluj Semantic WEB Meetup
2011 - Romanian Testing Community, Romanian Association for Better Software, Google Developer Group Cluj-Napoca
2012 - Today Software Magazine, Testing Camp (Tabăra de testare)

Marius Mornea

marius.mornea@todaysoftmag.com Engineer interested and involved in different IT activities, from development to management and from education to journalism, as part of Epistemio, UTCN and TSM



communities

Cluj-Napoca IT communities

May was full of major events. We are sorry we couldn't reach/attend all of them, but we promise to come back with a series of interviews and impressions during the next issues of the magazine.

Transylvania Java User Group
Java technologies community.
Website: http://www.transylvania-jug.org/
Started on: 15.05.2008 / Members: 539 / Events: 42

TSM community
Community created around Today Software Magazine.
Website: www.facebook.com/todaysoftmag
Started on: 06.02.2012 / Members: 652 / Events: 9

Romanian Testing Community
Testing dedicated community.
Website: http://www.romaniatesting.ro
Started on: 10.05.2011 / Members: 607 / Events: 2

GeekMeet Cluj
Community dedicated to web technologies.
Website: http://geekmeet.ro/
Started on: 10.06.2006 / Members: 547 / Events: 17

Cluj.rb
Ruby community.
Website: http://www.meetup.com/cluj-rb/
Started on: 25.08.2010 / Members: 134 / Events: 34

The Cluj Napoca Agile Software Meetup Group
Community dedicated to Agile development.
Website: http://www.agileworks.ro
Started on: 04.10.2010 / Members: 324 / Events: 29

Cluj Semantic WEB Meetup
Community dedicated to semantic technologies.
Website: http://www.meetup.com/Cluj-Semantic-WEB/
Started on: 08.05.2010 / Members: 140 / Events: 22

Romanian Association for Better Software
Community dedicated to IT professionals with extensive experience in any technology.
Website: http://www.rabs.ro
Started on: 10.02.2011 / Members: 223 / Events: 12


Calendar

June 6 - Launch of issue 12 of TSM magazine
www.todaysoftmag.ro

June 7 - Project Managers Meetup
www.facebook.com/events/519200844808524

June 12 - Linked Data Technology Stack
www.meetup.com/Cluj-Semantic-WEB

June 12 - Functional Programming Retreat
it-events.ro/events/functional-programming-retreat

June 14 - Development of an e-business
it-events.ro/events/proiectarea-si-dezvoltarea-unui-e-business

June 18 - Business Analysis Community Group Meeting
it-events.ro/events/business-analysis-community-group-meeting/

June 19-20 - ICT Spring Europe, Luxembourg (recommended by TSM)
www.ictspring.com

Thursday/weekly - OpenConnect
www.facebook.com/groups/355893314491424/

Wednesday/bi-monthly - OpenCoffee
www.facebook.com/opencoffeecluj


programming


Developing iOS applications with Security in mind

Mobile security has become increasingly important in mobile application development due to the sensitive information stored on our smartphones. Expectations and predictions of usage are broken year after year because of the "flood" of smartphone users, to the disadvantage of those using laptops or desktops. Who can blame them? The handheld device has become the "wallet" of the modern era, filled with personally identifiable information (photos, videos, notes) and private data (from healthcare and medical data to diaries, passes or coupons).

Cristian Roșa

cristian.rosa@isdc.eu mobile developer @ ISDC

Furthermore, there is a trend that defines mobile as how businesses are connected, regardless of the category. Banking data and information (bank account numbers, credit cards, authentication information, security tokens) are stored on our mobile devices or sent over unsecured wireless networks, hence there is a greater security risk than with desktop applications. It is therefore vital to take security into consideration when developing mobile applications, because of the variety of attacks that can produce significant amounts of damage.

The App Sandbox

iOS places each application in its own sandbox at install time, which limits its access to files, network resources, hardware and private data. The path to the home directory of the application looks like /ApplicationRoot/ApplicationID. Even though the sandbox limits the damage that a potential attack can do, iOS cannot prevent it. That means a hacker can still access sensitive information within the sandbox, leaving the developer with no choice but to take extra countermeasures for security.
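A quick way to see the sandbox container in practice is to print the application's directories at runtime. The snippet below is a minimal illustration added for this article, not code from the original author; NSHomeDirectory() and NSSearchPathForDirectoriesInDomains() are standard Foundation calls.

// Log the sandbox home directory and the Documents directory of the app.
// Everything the app writes should live somewhere under these paths.
#import <Foundation/Foundation.h>

void LogSandboxPaths(void) {
    NSString *home = NSHomeDirectory();   // the /ApplicationRoot/ApplicationID container
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                           NSUserDomainMask,
                                                           YES) objectAtIndex:0];
    NSLog(@"Sandbox home: %@", home);
    NSLog(@"Documents:    %@", docs);
}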

Apple's Architecture

Apple has managed to find – until now – the best balance between usability and security, with its architecture tightly woven into hardware and software. This makes it easy for customers to encrypt data on their devices and very hard for hackers to steal and decrypt sensitive information. At the cornerstone of Apple's security is the Advanced Encryption Standard algorithm (AES), a data-scrambling system considered unbreakable. Its implementation in Apple's devices relies on an encryption key stored deep in flash memory, itself encrypted by the user's passcode. Turning on "data wipe" after 10 failed passcode attempts makes the device almost impenetrable.

Encrypt Data while device is locked

While the device is locked, certain files marked by the developer can be automatically encrypted, but doing this requires enabling and configuring the data protection capabilities. In the locked state, the private sandbox of the application is "shielded" from other applications accessing it; furthermore, even the application itself does not have access. The encryption is applied by setting the desired level of protection: no protection, complete, complete unless already open, and complete until first user authentication. APIs offering Data Protection are NSData, using writeToFile:options:error:; NSFileManager, for setting attributes on files with setAttributes:ofItemAtPath:error: and changing the value of NSFileProtectionKey; and the sqlite3_open_v2 option for sqlite3. After enabling file protection, the application must implement some delegate methods to be prepared to lose access to the protected files temporarily.
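As an illustration of the NSData and NSFileManager routes described above, the sketch below writes a file with the "complete" protection class. It is a minimal example added for clarity, not the author's code; the file name and payload are made up, while NSDataWritingFileProtectionComplete, NSFileProtectionKey and NSFileProtectionComplete are the actual Foundation constants.

// Write sensitive data so that it is encrypted whenever the device is locked.
#import <Foundation/Foundation.h>

BOOL SaveSecretData(NSData *secret) {
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                           NSUserDomainMask,
                                                           YES) objectAtIndex:0];
    NSString *path = [docs stringByAppendingPathComponent:@"secret.dat"]; // hypothetical file name
    NSError *error = nil;

    // Option 1: apply the protection class while writing the file.
    BOOL ok = [secret writeToFile:path
                          options:NSDataWritingAtomic | NSDataWritingFileProtectionComplete
                            error:&error];

    // Option 2: raise the protection class of an already existing file.
    if (ok) {
        ok = [[NSFileManager defaultManager]
                 setAttributes:@{NSFileProtectionKey : NSFileProtectionComplete}
                  ofItemAtPath:path
                         error:&error];
    }
    if (!ok) {
        NSLog(@"Data protection failed: %@", error);
    }
    return ok;
}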

Data protection with Keychain Access

“Keychain Services is a programming interface that enables you to find, add, modify, and delete keychain items.” The keychain is about the only place where you can safely store data, because it is encrypted in the application's own set of keychain items. These items are backed up when the user syncs the device via iTunes and are preserved across re-installation. All of this makes the keychain the main place to store sensitive data such as passwords, license keys, account numbers, etc. To make use of Keychain Services, the binary of the application must be linked with the Security.framework. Unlike most other iOS frameworks, this is a C framework, so all the method calls are C-style. As data encryption while the device is locked has levels of protection, so does data protection via keychain access: always protected, protected AfterFirstUnlock or protected WhenUnlocked. Also, each class exists in a migratable and a non-migratable variation (the ThisDeviceOnly suffix), which ties the encryption to a specific device. The best approach for using Keychain Access is to declare a basic search dictionary used for all calls to keychain services. This will contain the attributes of the keychain item to create, find, update or delete.
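Following the "basic search dictionary" idea mentioned above, here is a minimal, hedged sketch of saving and reading a password with the C-style Keychain Services API. The service and account names are placeholders invented for the example; SecItemAdd, SecItemCopyMatching and the kSec* constants are the real Security.framework symbols.

// Link the target against Security.framework.
#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Shared attributes identifying one keychain item.
static NSMutableDictionary *BaseQuery(void) {
    return [@{(__bridge id)kSecClass       : (__bridge id)kSecClassGenericPassword,
              (__bridge id)kSecAttrService : @"com.example.myapp",   // hypothetical service
              (__bridge id)kSecAttrAccount : @"user@example.com"}    // hypothetical account
            mutableCopy];
}

OSStatus SavePassword(NSString *password) {
    NSMutableDictionary *query = BaseQuery();
    query[(__bridge id)kSecValueData] = [password dataUsingEncoding:NSUTF8StringEncoding];
    // Readable only while the device is unlocked, never migrated to another device.
    query[(__bridge id)kSecAttrAccessible] =
        (__bridge id)kSecAttrAccessibleWhenUnlockedThisDeviceOnly;
    return SecItemAdd((__bridge CFDictionaryRef)query, NULL);
}

NSString *LoadPassword(void) {
    NSMutableDictionary *query = BaseQuery();
    query[(__bridge id)kSecReturnData] = (__bridge id)kCFBooleanTrue;
    query[(__bridge id)kSecMatchLimit] = (__bridge id)kSecMatchLimitOne;

    CFTypeRef result = NULL;
    if (SecItemCopyMatching((__bridge CFDictionaryRef)query, &result) != errSecSuccess) {
        return nil;
    }
    NSData *data = (__bridge_transfer NSData *)result;
    return [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
}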

Keyboard Cache

Have you ever noticed that when using browsers, autocomplete kicks in when you try to retype something? Did you notice it on iPhones as well? If not, you should know that all keystrokes entered on an iPhone potentially get cached unless measures for disabling this are taken. This is valid for all keystrokes entered which are saved, except password fields. The bad news is that almost every non-numeric word is cached, and all of the keyboard cache's content is beyond the administrative privileges of the application, meaning that it cannot be removed from within the application. That leaves the developer only the possibility to disable the auto-correct feature by setting the autocorrectionType property to UITextAutocorrectionTypeNo.

Automated Screenshots

In order to provide a rich user experience, iOS has many shrink, push, pop or disappearing effects. However, that comes with a downside: when the application is moved into the background, iOS takes an automatic screenshot of the iPhone's window while making the shrinking and disappearing effect. All these screenshots are stored in the system portion of the device's NAND flash and are presumably deleted after the application has closed. In most cases the deletion does not permanently remove files from the device, and these files can contain user and application data. As a workaround for this issue, the caching of application snapshots must be blocked through API configuration or code. An easy way to do this is in the applicationDidEnterBackground: delegate method, in which sensitive information is hidden or deleted; you can then show a splash screen for the application to display while it moves into the background.

Caching Application Data

Application data can be captured in a variety of artifacts like log/debug files, cookies, property list files or SQLite databases. While property list files, SQLite databases and common files/documents can be encrypted, offering some security, the log, debug or crash files are potentially accessible. If the application crashes, the result is logged, leaving the attacker with a potentially valuable workaround for retrieving sensitive data. Also consider disabling NSAssert for iOS, because the application will crash immediately after a failed assertion; a more elegant way is to handle failed assertions. You should also disable debug logs before shipping your application, as these can store sensitive data. Disabling them prevents potentially sensitive information from leaking, and a slight increase in performance can be seen.
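Returning to the automated screenshots issue above, here is a minimal sketch of the countermeasure: cover the window before the system snapshot is taken and remove the cover when the app comes back. This example is an illustration added for clarity, not the author's code; the tag value is arbitrary and the cover is a plain black view (a splash image would work the same way).

// Inside the UIApplicationDelegate implementation.
#import <UIKit/UIKit.h>

static const NSInteger kCoverViewTag = 9999; // arbitrary tag used to find the cover later

- (void)applicationDidEnterBackground:(UIApplication *)application {
    // Hide sensitive content so the automatic snapshot stores only the cover view.
    UIView *cover = [[UIView alloc] initWithFrame:self.window.bounds];
    cover.backgroundColor = [UIColor blackColor];
    cover.tag = kCoverViewTag;
    [self.window addSubview:cover];
}

- (void)applicationWillEnterForeground:(UIApplication *)application {
    // Remove the cover once the app is coming back to the foreground.
    [[self.window viewWithTag:kCoverViewTag] removeFromSuperview];
}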

PasteBoards

“The UIPasteboard class enables an application to share data within the application or with another application using system-wide or application-specific pasteboards.” Sounds familiar? Look at the picture below; I'm sure you've seen this in many iOS applications. When the user requests a copy or cut operation on a selection in the user interface, an object writes data to a pasteboard. If the correct level of security is not set, other applications can have access to previously saved data, creating big security risks. The persistency of that data must also be controlled, because if the attributes aren't correctly set, the copied information will be stored unencrypted on the phone's file system and kept alive even after the application is closed.
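For sensitive in-app copy/paste, one hedged option is to use a private, non-persistent pasteboard instead of the general one, as sketched below. The pasteboard name is a placeholder chosen for the example; pasteboardWithName:create:, persistent, string and items are existing UIPasteboard API.

#import <UIKit/UIKit.h>

// Copy a value through an application-specific pasteboard that is not
// persisted to the file system and is not shared with other applications
// unless they know (and are allowed to use) its name.
void CopySensitiveString(NSString *value) {
    UIPasteboard *board = [UIPasteboard pasteboardWithName:@"com.example.myapp.private" // hypothetical name
                                                    create:YES];
    board.persistent = NO;   // do not keep the contents across device restarts
    board.string = value;
}

// Clearing the general pasteboard when the app goes to the background keeps
// copied secrets from lingering where other applications can read them.
void ClearGeneralPasteboard(void) {
    [UIPasteboard generalPasteboard].items = @[];
}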

As a final remark, remember that security practices only make things harder for hackers. There is no way to be 100% sure that your measures cannot be bypassed; after all, all versions of iOS have been jailbroken. While it is getting tougher and tougher for hackers, you should always plan for contingencies. In the end you need to find a sensible mix between usability and deterrent practices that makes sense for you, your customer and your users. Special thanks to Mircea Botez and Andrei Adiaconitei.

Bibliography
1. Apple iOS Developer Library
2. "Top 10 iPhone Security Tips" by Kunjan Shah
3. "Penetration Testing for iPhone / iPad Applications" by Kunjan Shah
4. "iOS Keychain Weakness FAQ" by Jens Heider, Rachid El Khayari
5. "Lost iPhone? Lost passwords!" by Jens Heider, Matthias Boll
6. https://www.viaforensics.com
7. http://www.useyourloaf.com
8. http://www.technologyreview.com



conference

JQuery Europe 2013

A web developer, Haymo Meran and the ghost of Prince Johann Adam Andreas von Liechtenstein walk into a palace… Sounds like the beginning of a crazy joke, but you tell that to anyone you know and they won't have a clue what happens next. Well, unless they've been to or heard of jQuery Europe 2013. Truth be told, I've seen no actual sign that the prince's spirit was with us, and there were 300 developers (or better said, jQuery enthusiasts), not one. The rest however is quite factual and certainly not a joke, amazing as it may sound.

Andrei Otta

andrei.otta@accesa.eu Software developer @ Accesa

Yes, it was a jQuery conference that was held in an actual palace, the Palais Liechtenstein (Gartenpalais for the locals) in Vienna to be more precise, and what an exceptional event it was… I would like to take this opportunity to thank the organizers, Gentics Software, and in particular Mr. Haymo Meran (Gentics CTO and very friendly event host) for making this happen and doing a pretty good job on the 'maiden voyage'. Probably the only (funny) 'negative' thing worth mentioning is the statement that participants arriving for the conference at metro exit „Rossauer Lände" would be guided to the venue by someone in a conference T-shirt. Why is that funny, you ask? This is Vienna on the first day of the conference: hardly a time to go running around taking people to conferences in a T-shirt… But enough sidetracking, let's get to the actual juicy details!

The conference spanned two days, from the 22nd to the 23rd of February, and was packed full of Javascript, jQuery and CSS goodness delivered by the 16 speakers, sprinkled here and there with odd bits such as singing bananas and tropical smoothies.

The event kicked off with a bang! Relax, there was no shooting of any kind; security was very tight, with some very Bond-like characters keeping a close eye on everyone, as you can see below… No, the bang I'm talking about describes the impact of the very first session, whose speaker was none other than Richard D. Worth, the executive director of the jQuery Foundation.



He presented the current state of jQuery, now at (stable) version 1.9.1, while also giving a glimpse into the future and inviting people to join the community, join the team, help improve jQuery and drive it forward. He 'teased' the participants with little bits of information on the next major step for jQuery, version 2.0, of which by far the most acclaimed (cheered for and loudly applauded) was the announcement that as of version 2.0, jQuery will no longer support Internet Explorer 8 or older. I know, right? Hats off to them for shutting the door, locking it tight and leaving the annoying little brat who always likes to do things differently out in the cold… However, as they're not heartless people with little mercy for the poor souls that need to have their site running on IE also, the jQuery team will be maintaining both version 1.9.1 and 2.0 in parallel, porting over to 1.9.1 any development they make on 2.0 and beyond as long as it doesn't require insane amounts of adaptation. Mr. Worth also covered changes in jQuery-adjacent components such as jQuery UI or jQuery Mobile and brought to light several jQuery initiatives such as contribute.jquery.org (for anyone interested in contributing to jQuery), plugins.jquery.com (the new and improved plugin registry) or learn.jquery.com (if the URL didn't give you a hint, this is an initiative aimed at people wanting to learn jQuery). As going into all the details of what followed would turn this article into some pretty heavy reading, I'll try to summarize (and probably fail, eventually opening the mental flood gates and just letting thoughts loose) what made up the rest of a really fascinating day. Next up on the list was Corey

Next up on the list was Corey Frang, also a member of the jQuery team, who pretty much dissected a jQuery UI widget into its building blocks and showed how using the widget factory can yield some very flexible and extensible results. Doug Neiner presented some best practices for separating your javascript/jQuery from your CSS and HTML, which leads to much better maintainability and thus fewer headaches for you. He also delved into the world of CSS transitions and animations and the jQuery equivalent. Sebastian Kurfürst showed how, using RequireJS, you can break up and organize your javascript into components and drastically improve the clarity of your client-side code.

Jörn Zaefferer opened our ears and our minds to what the current web is like for those who can't experience the world through sight, a significant category of people whom we sadly often forget about. I have to admit it was very humbling, saddening even, to witness what some webpages 'sound like'. I haven't yet been put into a position to think much about accessibility, and it's hard to keep this aspect in mind when you're frantically aligning pixels and adjusting colour tones, but it's something that should always be taken into consideration. Oddly enough, making your design more accessible might actually improve it by giving it a more organized, coherent structure.

We then saw first-hand what jQuery can do on the server through a very insightful presentation by Golo Roden on Node.js, an exceptionally powerful tool for anyone interested in things like web scraping, creating lightweight high-performance web servers or data-intensive distributed applications.

Or rather, my feeling after seeing a small glimpse of what can be done was something in the area of "the sky's the limit…". That feeling was taken to new heights after watching the next speaker in action. Sascha Wolter is probably the closest to a mad genius I've seen so far, but in the most absolutely good way. The guy is amazing, and his keynote was nothing short of that. He did things with javascript I wouldn't think about even on the most creative of days. From controlling LEGO robots or quadcopters and sending SMS messages to coffee machines to downright making bananas sing, I thoroughly enjoyed and savored every fascinating little second of it (not to mention brewed up some crazy ideas of my own to try on unsuspecting people).

The day ended with the keynote talk by Christian Heilmann, who put forward an interesting question: are the tools (javascript libraries, frameworks, plugins, etc.) we use during development helping or hurting us? Sure, reinventing the wheel is always painstaking, laborious and often boring, but if you think about it, if people hadn't reinvented the wheel at some point, our cars would still move around on stones instead of rubber tires (insert witty Flintstones joke). It's good to use tools to speed up development time or bring more clarity to your code, but it's essential to understand what's actually behind these tools, what makes them tick, what makes them so good and what could make them even better.

I think back to some numbers Mr. Worth presented in the opening talk: 55.7% of the sites currently out there on the vast plains of the internet use jQuery. That's 90.7% of the sites that use javascript.


An overwhelming majority, no doubt about it, but I wonder how many of the developers who used jQuery in making those sites actually got a non-minified version of the jQuery js and took a peek inside to see what sort of wicked magic is behind the awesome framework that makes their lives so much easier… I think I'll end the day with this question.

The second day of jQuery Europe 2013 (and alas the last) is upon us. Well, technically, by the time you're reading this the present tense is highly inappropriate in the previous sentence, but my mind is often still back at the conference. I really wouldn't have minded if it had gone on for another two or three days after the first two. Even more saddening, I had to leave early to catch my flight and missed the last two talks, but let's not dwell on regrets and instead go through what I did manage to experience.

The day started with a great presentation on jQuery Mobile and Responsive Web Design by Todd Parker, design lead for jQuery UI and member of the jQuery Mobile team.


He walked us through the built-in RWD (Responsive Web Design) capabilities of the jQuery Mobile framework and showed some real-world examples of how to use media queries and mobile-first thinking in building responsive content, be it for sites or apps. He also outlined some performance-improving techniques regarding bandwidth minimization, high-res image handling and selective layering of content and features based on device capabilities.

Maximilian Knor, a developer evangelist at Microsoft, presented how ASP.NET MVC and ASP.NET SignalR (http://www.asp.net/signalr - check it out if you haven't already) build on top of jQuery and jQuery Mobile for client-side features and single-page applications. Mike West outlined the essentials of securing your client-side code against malicious intent, be it cross-site scripting or other forms of attack. Well, to only mitigate the effects of such an attack, to be honest, because though you may think 'I've written my code in such a way that it's impossible for someone to hack into it', things like JSFuck (http://www.jsfuck.com/) might make you go back and add a few layers on top of your 'brick wall', and even then realize it's not strong enough.

We'll skip detailing Theodore Biadala's bland keynote on Drupal using jQuery and jump straight to Patrick Lauke and "Web on TV", describing how jQuery can be used in interactions with your TV set. By now, any respectable company that produces TVs probably has at least one representative of the SmartTV generation, and in a few years we'll probably barely remember a time when you couldn't browse the web on your TV, watch Youtube or play Angry Birds.


What this means, in addition to a whole new can of worms regarding website and app support for multiple devices, is that with some javascript/jQuery knowledge you could be making interfaces or apps for people to enjoy on their TV as well (a little joke about putting someone's Facebook wall on their actual wall comes to mind).

The last keynote I was able to attend was a sort of double enlightenment, courtesy of Dyo Synodinos. The first step of the enlightenment was in relation to the murals on the ceiling of the large and masterfully adorned hall we were in, which depict scenes from the life of Hercules (not played by Kevin Sorbo). The second step was when I realized it was very much related to his presentation on State of the Art in Visualisations and not just introductory filler. Indeed, what was presented was very visually impressive and by all means state-of-the-art. All of the eye-candy tools such as CSS3, SVG, Canvas or WebGL were given a run-through, and frameworks that bring the power of these technologies into a more accessible format, such as Raphael.js, D3.js or Fabric.js, were also given a teasing peek. The show ended with some impressive examples of the above in some really creative and stunning displays.

And then I had to leave… And though I had a small measure of regret for missing a little, I was immensely satisfied with what I was able to experience over the course of the two days… And I left with a small giggle at the thought of being back for next year's jQuery Europe, which is definitely happening according to Haymo. Considering they've set the bar really high by holding it in such an impressive venue this year, who knows what 2014 will bring?



HR

Performance Management (II)


Andreea Pârvu

andreea.parvu@endava.com Recruiter at Endava

The second part of the article „Performance Management" will focus on the tools used in this process. At the same time, it will present a practical model that could easily be implemented in any company. In the following part of the article, I will describe a model of defining objectives in the 3 phases of the performance management process: (1) define the competencies that are necessary for living the company's values; (2) define team objectives: make a short description of them, establish a clear method of measuring them by setting Key Performance Indicators and set the expected results; (3) define individual objectives that are aligned with the ones set together with the team.

Every department should have a defined level of performance, established based on the results obtained in the previous years. At the same time, it is recommended to create monthly reports that define the level of performance in the team. The individual objectives are officially evaluated twice a year. These results are used in establishing the personal development plans, which contain development objectives, the actions that will be taken in order to achieve them and the competencies that need to be improved. The development activities are very diverse: from development programs for the employees to training on the job, virtual materials, etc.

Scale used in measuring performance
All the results should be evaluated on a 6-level scale. A model is presented in the table below. The evaluation should be built on two dimensions: the results obtained (What?) and the applicability of the company's values (How?).

Define the objectives
Professional objectives should be set on the long term and then split into short-term milestones, with constant feedback from the line manager.

Criteria in setting the objectives:
• Individual objectives should be aligned with the ones set by the team: minimum one team objective and 3 individual ones.
• The objectives should be SMART, with clear KPIs defined for every objective and company value.
• The manager is responsible for aligning the organizational objectives with the ones set by the team.

Key Performance Indicators (KPI)
It is necessary to define KPIs for every area. Their purpose is to give a clear overview of the performance that needs to be achieved and how. When setting the KPIs, 3 essential factors should be taken into consideration: the results, their quality and their quantity. For a better understanding of this model, every aspect will be explained in detail in the following rows:
1. Results: are set according to the stakeholders' needs. Their purpose is to meet the expectations as well as possible. Results should be according to:
• key business objectives
• client needs
• managerial and leadership objectives
• self-development plans



2. Quality: is a description of the results based on the excellence that should be achieved. Basically, it is the added value of the performance.
3. Indicators: have the purpose of measuring the What and the How of the performance.

Leadership and management competencies

For a sustainable performance management process it is recommended to create a catalogue of the competencies and skills needed by every employee to accomplish their daily responsibilities. There are 4 leadership competencies that should be taken into consideration:
• People leadership: the capacity of individuals to communicate, to coordinate the others and to have the leadership skills to develop a team.
• Personal leadership: the capacity to adapt to new situations and new environments; emotional intelligence.
• Results leadership: the capacity to achieve objectives, meet deadlines and make decisions.
• Thought leadership: the capacity to understand the business needs and the organisation, the capacity to identify opportunities and trends in the internal and external environment.
Besides these leadership competencies, persons holding management positions should also develop this kind of skills and abilities.

Performance Champions

Companies offer employees the possibility to benefit from the help of performance champions, who are split into 2 categories:


(1) the ones that have the knowledge on performance management and can offer support to all those who are involved in this process, and (2) the ones who have outstanding performance and are promoted in the company. Their purpose is to:
• identify, select and motivate a group of change agents in the company that will have an impact in implementing the process inside the company;
• educate teams and individuals in being involved in the process of performance management, for the benefit of developing themselves.
The FLAME model could be implemented in every company. For a clear understanding of the role of these performance champions, I will exemplify below the characteristics of this model:
F – facilitate
L – lead by example
A – act as a consequence
M – measure
E – educate
The key to success is to motivate and educate the team members in order to create a culture of performance, and everybody should be considered a change agent that can have a positive impact in the company. The question that should be brought into discussion is: how do we identify and select these performance champions? I will draw a profile of them:
• good understanding of the performance management process;
• highly skilled in different areas;
• good communicators and facilitators.


As a conclusion, the process of performance management should have 3 main consequences:
1. reward and recognition of employees;
2. talent management and career growth of employees;
3. individual development plans for employees.
Good luck in implementing a strong performance management system that will bring the best results for the employees and for the company.



Table 1.1. A model of setting the objectives (objective-setting form). The form covers, for both team objectives and individual objectives: a description of each objective and how to measure it, together with the results, score and level of performance from the previous year; the definition of expectations; a mid-term evaluation of the results (results, level of performance, score and total score); the final performance evaluation; the company values; the general level of performance; the total score of the evaluation (100% for validation); and the personal development plan (personal development objective, development activities, competencies that need to be developed and the results of the development plan).

Table 1.2. Scale used in measuring performance (each level describes the performance achieved and the behaviour of living the company's values)

Level 5 – Outstanding. Performance: the performance achieved is outstanding; the objectives are over-accomplished, exceeding the expectations for this position by accomplishing all the requirements within the established deadline. Values: is a role model for the others in living the company's values.

Level 4 – Exceeds expectations. Performance: the performance achieved exceeds expectations; the employee still needs some supervision, but at the same time shows a lot of initiative and independence in solving problems. Values: lives the company's values constantly.

Level 3 – Meets expectations. Performance: the performance is achieved with the supervision of the line manager. Values: lives the company's values constantly.

Level 2 – Needs improvement. Performance: the objectives are partially achieved and barely within the deadline set; there is a constant need of supervision from the line manager; needs improvement of competencies and skills. Values: some behaviours are not in accordance with the company's values.

Level 1 – Unsatisfactory. Performance: the performance obtained is below expectations; there is a need of direct involvement of the line manager in achieving the objectives. Values: most of the behaviours are not in accordance with the company's values.

Level 0 – Unmeasurable. New employees (with less than 6 months in the company).

Table 1.3. A model of setting the objectives: a simple form recording the objective and the team/name, and, for each objective, its Results, Quality and Indicators.



programming

Software Craftsman’s Tools: Unit Testing


Imagine the following situation: a team has developed for 6 months a great product that immediately sells. Users show their passion for the product by asking for new features. If the team does not deliver the new features fast enough, their happiness will decrease. They might even switch to a competing application. The team must deliver quickly.

Alexandru Bolboaca

alex.bolboaca@mozaicworks.com Agile Coach and Trainer, with a focus on technical practices @Mozaic Works

Unfortunately, it takes weeks or months to validate the product completely, not counting the time for bug-fixing. The team cannot deliver in time if the product is tested completely. What can they do? The most common strategies are:
• test only the new features, hoping that the old ones have not been affected
• analyze the impact of changes and test only the affected features
• have users test the application during a stabilization period.

Adrian Bolboaca

adrian.bolboaca@mozaicworks.com Programmer. Organizational and Technical Trainer and Coach @Mozaic Works


All these strategies translate into temporarily decreasing the care for the product in order to obtain faster delivery. Unfortunately, the technical teams in the scenarios above are taking business risks. Their major risk is missing the impact on certain application areas and delivering bugs. This can turn into:
• Decreased user happiness and creating detractors. In business and in life, winning a person's trust is hard, but it is very easy to lose.
• Increased support cost. Every bug reported by users means time spent to understand, solve, test and deploy the new version. Expenses accumulate in: call center, development, testing, operations etc.
• Opportunity cost: while the team solves bugs, the competition can produce new features that win over the users. From the business point of view, solving bugs is equivalent to running in place.

What if...

the team could validate the whole application in hours and not in weeks or months? What if each programmer could learn after each code change, in minutes, that nothing else was affected (with 80+% probability)? The article on Software Craftsmanship mentions the main idea of the movement: a software craftsman can deliver quality under pressure. Given the circumstances above, a software craftsman should deliver new features in the given time with as few bugs as possible. How few? Less than 10 per release. This is a conservative number. Agile methods (including Scrum) demand 0 known bugs at the end of each 2-3-week sprint, after validation by the testers.

But any software application has bugs!

The name “bug” is a euphemism for mistake. Mistakes are normal in any human activity, including software development. How can we reduce the mistakes and their impact? Unit testing is a tool that can help, but it is not the only one. Others are: code review, pair programming, reducing the number of lines of code, design by contract.

Unit testing

Unit testing means writing code, named testing code, that validates the production code. Testing most of the application becomes automatic.

History: programmers often believe that unit testing is a new practice.


The reality is that unit testing was used from the times of mainframe computers and punch cards. Back then, debugging was very difficult because it implied reading tens of meters of sheets of paper printed with the program results and execution information. Automated tests that ran at the same time as the program gave much more information about the source of the mistakes.

What about testers? They are often afraid that they'll lose their work once automated testing is introduced. The truth is that testers become much more important, because they're the only ones who can discover hidden problems, difficult or impossible to find with automated tests. They help increase the probability of correctness from 80+% to almost 100%.

Unit tests have a few important properties:
• each test validates one small behavior of the application
• they run really fast, under a few minutes
• they are short and easy to read
• they run on the push of a button, without additional configuration

To be fast, unit tests often use so-called "test doubles". Similar to plane pilots who learn in a simulator before flying a plane, unit tests use pieces of code that look like the production code but are only used for testing. Stubs and mocks are the most common test doubles; others exist but are less used.

A stub is a test double that returns values. A stub is similar to a very simple simulation: when a button is pressed, a value appears. A manually written stub looks like this:

class PaymentServiceStub implements PaymentService {
    // the value the stub will return when pay() is called
    public boolean valueToReturn;

    public boolean pay(Money amount) {
        return valueToReturn;
    }
}

class PaymentProcessorTest {
    @Test
    public void paymentDoneWhenPaymentServiceAcceptsPayment() {
        PaymentServiceStub paymentServiceStub = new PaymentServiceStub();
        paymentServiceStub.valueToReturn = true;
        PaymentProcessor paymentProcessor = new PaymentProcessor(paymentServiceStub);

        paymentProcessor.processPayment(Money.RON(100));

        assertPaymentWasCorrectlyPerformed(paymentProcessor);
    }
}

A mock is a test double that validates collaboration between objects. Mocks validate method calls with certain parameters, for a certain number of times. A mock can also be used to validate calls to methods that don't return values.

class PaymentServiceMock implements PaymentService {
    // records whether and how pay() was called
    public boolean payWasCalled;
    public Money actualAmount;

    public boolean pay(Money amount) {
        actualAmount = amount;
        payWasCalled = true;
        return true;
    }
}

class PaymentProcessorTest {
    @Test
    public void paymentServiceCalledOnPaymentProcessing() {
        PaymentServiceMock paymentServiceMock = new PaymentServiceMock();
        PaymentProcessor paymentProcessor = new PaymentProcessor(paymentServiceMock);
        Money expectedAmount = Money.RON(100);

        paymentProcessor.processPayment(expectedAmount);

        assertTrue(paymentServiceMock.payWasCalled);
        assertEquals(expectedAmount, paymentServiceMock.actualAmount);
    }
}

Test doubles can be created using special frameworks, like mockito for Java (ports exist for other languages) or moq for .NET.

class PaymentProcessorTest {
    @Test
    public void paymentDoneWhenPaymentServiceAcceptsPaymentWithMockitoStub() {
        Money amount = Money.RON(100);
        PaymentService paymentServiceStub = mock(PaymentService.class);
        when(paymentServiceStub.pay(amount)).thenReturn(true);
        PaymentProcessor paymentProcessor = new PaymentProcessor(paymentServiceStub);

        paymentProcessor.processPayment(amount);

        assertPaymentWasCorrectlyPerformed(paymentProcessor);
    }

    @Test
    public void paymentServiceCalledOnPaymentProcessingWithMockitoMock() {
        Money amount = Money.RON(100);
        PaymentService paymentServiceMock = mock(PaymentService.class);
        PaymentProcessor paymentProcessor = new PaymentProcessor(paymentServiceMock);

        paymentProcessor.processPayment(amount);

        verify(paymentServiceMock).pay(amount);
    }
}

Test doubles were initially used only in places where controlling the system was very hard or where tests were slow due to calls to external systems. In time, test doubles became more used in all unit tests, resulting in the "mockist" way of testing. The article „Mocks aren't stubs" by Martin Fowler [1] has many more details.

Unit tests are written by a programmer, during the implementation of a feature. Unfortunately, the most common way of writing tests is sometime after the implementation was finished. The usual result is that the tests are not written, or they are written based on the existing implementation and not on the correct application behavior.

Test First Programming is a way to write tests that has the following steps:
• design the solution
• write the minimum code (compilable, if the programming language is compiled) based on the design
• write one or more tests that show what the implementation should do; the tests will fail
• write the code to make the tests pass.

By applying Test First Programming, the developers make sure that they write unit tests and that they test the required behavior.

Test Driven Development (TDD) can be considered a third way of writing tests. Actually, TDD is a way of doing incremental design. A future article will detail what TDD is and why to use it.

[1] http://martinfowler.com/articles/mocksArentStubs.html

Writing tests takes a long time!

Case studies [2] and personal experience show that the time spent just to develop a feature does indeed increase when adopting unit testing. The same studies show that the time spent to maintain the product decreases considerably, showing that unit testing can be a net optimization of the total development time. However, no study can change the programmer's feeling that he or she writes more code. Often, programmers assume that the project is slower because of automated testing.

How do I start?

The best way to adopt unit testing is carefully and incrementally, following a few important points:
1. Clarify unit testing concepts. Programmers should handle fearlessly concepts like: stubs, mocks, state tests, collaboration tests, contract tests, dependency injection etc. Programmers should also understand which cases are worth testing. Mastering these concepts is possible in a few ways:



• specialized unit testing workshops: Mozaic Works (http://mozaicworks.com) offers such a workshop (http://bit.ly/unit-testing-workshop) that has constantly received feedback over 9.25/10 from the attendees;
• pair programming between a tester and a programmer;
• pair programming between an experienced unit tester and a beginner; the experienced unit tester can be an external technical coach;
• reading books (see the recommended books at the end), Internet resources or attending community events;
• attending conferences where concepts related to unit testing are discussed.
2. A technical coach can work with programmers, helping them transfer theoretical information into their everyday practice, such that productivity is least affected.
3. Write unit tests for the most important parts of the application and then for the features with the highest risk of mistake.
4. Use the "Test pyramid" testing strategy to eliminate as many mistakes as possible.
5. If lots of code without tests exists (legacy code), learning additional techniques to write tests on existing code is recommended. More details in a future article.

[2] The most well-known case study on unit testing took place at Microsoft: http://collaboration.csc.ncsu.edu/laurie/Papers/Unit_testing_cameraReady.pdf

Common mistakes

A few common mistakes related to unit testing are:
1. Writing too many integration tests [3] (that exercise more classes and modules), which are slow and fragile, instead of unit tests that are small, fast and easy to maintain.


2. Not using test doubles, or using them for purposes other than the ones they were created for. Test doubles help writing short, fast tests.
3. The test names don't express the tested behavior. The test name can provide a lot of information when the test fails.
4. Extensive usage of debuggers when tests fail. Well-written tests immediately tell where the problem is when they fail. Debugging is still useful in exotic situations.
5. Not caring for the testing code. The test code is at least as important as the production code.
More details on these issues are available in a blog post by the same author, „5 common unit testing problems", at http://mozaicworks.com/blog/5-common-unit-testing-problems/.

[3] http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam

Conclusion

Unit testing is one of the tools a programmer can use to significantly reduce the number of mistakes he or she makes when writing code. When correctly used, unit testing can significantly reduce the time spent on bug fixing, reducing the load on the colleagues who take care of support and testing and allowing new features to be introduced faster, resulting in higher competitiveness. Care must be taken when adopting unit testing, by following industry practices (the pyramid of tests – see http://watirmelon.com/2012/01/31/introducing-the-software-testing-ice-cream-cone/ for details; using test doubles, etc.). External help (training and technical coaching) can make the difference between a successful and a challenged adoption. Technical coaching is a fast way of adopting a new practice, with hands-on help, on the job and on production code, from a person specialized both in the technique and in teaching it.


A software craftsman masters unit testing and uses it to protect against his own and his colleagues' mistakes. He or she is certain they can deliver bug-free software even under time pressure. Of course, the condition is to know unit testing so well that it remains the default way of working even under pressure.

Recommended Books
„The Art of Unit Testing", Roy Osherove
„xUnit Test Patterns", Gerard Meszaros
„Growing Object-Oriented Software, Guided by Tests", Steve Freeman, Nat Pryce


QA


Our experience with Behavior Driven Development in Python


Today, testers are often viewed as the ones who execute the easier tasks and whose technical abilities are not as strong as the developers'. There are two divided teams - testers and developers. Instead of focusing on communication and collaboration, the two teams invest effort and energy in proving that their own team does a better job.

Ramona Suciu

ramona.suciu@3pillarglobal.com Test Lead @ 3Pillar Global

Dan Pop

dan.pop@3pillarglobal.com Senior Test Engineer @ 3Pillar Global

Situations like the one described above have led to the false premise that testers are second-class citizens. What does this mean exactly? Testers and developers don't work together from the start, when a new feature is ready for implementation; instead, the testers only start their work once a major part of that feature has been implemented. Since the collaboration between the two teams is not always a point of focus, several communication gaps appear, and a build is ping-ponged between devs and testers. A vast part of these issues could have been avoided had the testers and developers worked as a fully integrated team.

Our project

...is a complex one, where 50+ people are involved in implementing and testing each new feature. If communication and collaboration were not our permanent goals, we would have difficulties in successfully approaching the Kanban methodology we are practicing at this moment in our project. Kanban involves proactivity, initiative, a very large number of automated tests and the confidence that each team member brings his/her contribution to testing the product.

How do we manage to work successfully under Kanban, on a very complex project? By exercising communication and collaboration between all team members, on a daily basis. Testers don't have to wait for a new build to be deployed on an external environment so that they can start testing. They can work together with the developers from the moment the details of an epic/story are clarified. Testers can draw up a few basic scenarios that could be tested, and developers can point out other tests, while working together on establishing the architecture of the automated scripts to be implemented. We are proud to say that we work in a team where everyone follows the "Quality is a team effort" principle, and we have understood that testing is not solely the responsibility of the testing team. We have to keep in mind one thing: this approach would not be possible without a very large number of automated tests, which quickly reveal any regressions in the application. Without them, the communication aspect we strongly march upon throughout our process would become an overhead for everyone, since it would require time - time that would otherwise be dedicated to writing and maintaining automation scripts.



Behavior Driven Development definitions (as we applied them)

In order to further improve our process, we steered our attention towards Behavior Driven Development [1], a methodology which focuses on communication and collaboration as strong points. There are many definitions for BDD, but we are going to present some of them, as they were applied on our project:
1. Compared with Test Driven Development (where tests dictate the architecture), Behavior Driven Development represents an extension to TDD. As with TDD, the tests are the catalyst, but the tests written with BDD are written in a simpler form, easier to understand by everyone on the team. By easier to write/understand, I refer to the Gherkin language (Given-When-Then), which allows non-technical team members to also write and maintain test scenarios.
2. Communication and collaboration are the fundamental aspects of BDD. Without these two concepts, BDD cannot be successfully applied.
3. Different perspectives on the system are provided when applying BDD. And this is possible because BDD facilitates asking key questions, such as:
a. Why is this feature necessary?
b. What problem does it try to solve?
c. What is the target audience?
d. What are the alternatives?
e. ...and so on

[1] http://dannorth.net/introducing-bdd/

By getting answers to questions like the above, we gain a different perspective on the system, as seen through the eyes of the product management team. We can understand in this way the business value brought by each new feature.
4. If the above presented concepts are applied correctly, then another major advantage is almost instantly and effortlessly obtained - better communication with the clients/stakeholders.
5. Living Documentation - the documentation formed exclusively by the test scenarios written with the Gherkin language. Living Documentation represents the main benefit of Behavior Driven Development.

BDD in our team

The steps we follow when applying BDD concepts in the context of our team/project, from the moment when an idea for a new feature appears until the moment it materializes into working functionality, are the following:
1. The product management team has a new idea. The idea is shared with the rest of the team via the backlog of our bug tracker.
2. Ideally, the product management team and the business analyst are involved in all the discussions revolving around the new feature. Key questions are asked and answered, and any confusing aspect is clarified as soon as possible.

• However, since we don't have a business analyst in our team, we reached a compromise: the role of the business analyst can be substituted by the rest of the team - tech leads, developers, and testers.
3. Several discussions ensue around the idea, until everybody has a shared understanding of what needs to be done.
4. Finally, the idea is logged in our bug tracker, not as a solution coming solely from the product management team, but as a result of the collaborative effort put in by the entire team.

Challenges

There are several obstacles we have had to overcome in the context of BDD, and we want to share those, so that everyone is aware of what might become a challenge:
1. The client has a much bigger role in BDD than in other methodologies.
• Challenge: the product owner's role is very important in BDD, and their contribution to writing and maintaining test scenarios written with the Gherkin language has to exceed the contribution brought by the other team members. In our case, we haven't managed to reach this goal yet. Considering the limited amount of time our product owners can allot to this particular task, the developers and testers from our team are the ones who write the Gherkin scenarios. These scenarios are later reviewed by the product management team. It's not the ideal solution, but it's a compromise which works for us for the time being.
2. Using the Gherkin language at its best.
• Challenge: this represents one of the most complex BDD concepts, although it might seem simple in the beginning.



Writing tests so that they are easy to understand, easy to maintain, and which reveal the exact business value of the feature to be tested can prove to be a challenge in itself.
3. Living Documentation.
• Challenge: the documentation containing exclusively test scenarios written with the Gherkin language can become a challenge, since there are a lot of characteristics that this documentation must comply with:
• it represents the exact situation of the code at any given moment
• easy to access
• easy to understand, easy to maintain
• it could actually reveal possible risks when major changes are applied to a system (for example, when working with legacy components).

Having talked a lot about the Gherkin language, please see below an example of how a feature can be described with test scenarios written in this language. These tests are kept in the same location as the application's code, and since they represent the entry data needed by BDD-specific tools (Cucumber, Lettuce, Behave, etc.), we are confident that these tests will always be up to date. The tests are modified as soon as the code is changed, so that they continue to reflect the exact state of the code.

Conclusions

We are confident that we have successfully managed to apply the BDD concepts adapted to our project's needs, but we are aware of the fact that there is still much to learn. Here are the objectives we want to reach when it comes to permanently improving the BDD process:
1. Permanent communication and collaboration.
2. BDD is very context dependent, hence we always try to adapt our own approach to BDD so that it covers the project's needs and particularities.
3. The application code has to be testable. Without this, we won't be able to write scenarios that reflect the characteristics of a good Living Documentation system.
4. Living Documentation represents our permanent goal, as long as we're applying BDD on our project.

BDD is a complex process, but it has many advantages and it plays an important role in forming a real team, working on a successful project. We highly recommend applying this methodology, no matter the project's complexity. When applied so that it takes into account the project's individual characteristics, Behavior Driven Development can easily become a success story.

We also gave a presentation on the same subject at this year's Romania Testing Conference [2]. Our presentation slides can be viewed online at http://testers-lab.github.io/testerslab.github.com/. Also, this GitHub repository shows how to write and run test scenarios with Lettuce & Django: https://github.com/testers-lab/RTC-demo
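As a minimal illustration of such a scenario (the feature, amounts and step names below are hypothetical, not taken from our project), a Gherkin feature file readable by Lettuce, Behave or Cucumber could look like this:

Feature: Paying for an order
  As a registered customer
  I want my order to be paid with my saved card
  So that I receive a confirmation without contacting support

  Scenario: Successful payment with a valid card
    Given a registered customer with a valid credit card
    When the customer confirms an order of 100 RON
    Then the payment is accepted
    And a confirmation email is sent to the customer

Each Given/When/Then line is bound to a step definition - in the case of Lettuce or Behave, a small Python function - that drives the application, which is what keeps this documentation "living".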

Bibliography

More information on how to write good Gherkin scenarios, depending on the BDD-specific tool of choice, can be found at these locations:
• Lettuce tutorial - http://lettuce.it/tutorial/simple.html#id1
• Cucumber tutorial - http://cukes.info
• Freshen tutorial - https://github.com/rlisagor/freshen
• Behat tutorial - http://behat.org
• JBehave tutorial - http://jbehave.org/
• ...and the list could go on

[2] http://www.romaniatesting.ro/speakers2013/



QA

Test Driven Development (TDD)


Test Driven Development (TDD) is a software development technique that combines Test First Development (TFD) with refactoring. The goal of test driven development is split between multiple points of view: code specification, not validation; in other words, it is a way of thinking through the requirements or the design before writing functional code (TDD is both an agile requirements technique and an agile design technique). Another point of view is that TDD represents a programming technique for writing clean and functional code.

I will describe a situation most of us are familiar with: the developers write code, and that code arrives in testing. The testers discover certain abnormalities in the functionality of the code, called bugs, which they send back to the developers to fix. After the developer has corrected the bugs, he sends the code to testing again. Most of the time, this developer-tester "dialogue" is not clear and precise: one bug is sent from the developer to the tester as fixed, the tester realizes that in fact it wasn't resolved and sends it back to the developer, and so on. In order not to get into this kind of situation too often, one of the measures taken was that the developer must write tests before implementing a certain new functionality. This technique is called TFD (Test First Development).

The first step of TFD is writing a test that will fail, after which all the tests are run or, to save some time, only part of the tests, just to prove that the added test does in fact fail. The next step is to make a small modification to the code so that the test passes. In case we did not manage to do that, we modify the code again and rerun the tests until our new test passes. When our tests run successfully, we can start over and add a new test.


TDD = TFD + Refactoring

If, when applying TFD, we also clean and optimize our code before adding another test, then we are applying Test Driven Development, also called the RED – GREEN – REFACTOR workflow.

The rules of TDD:
1. It is not allowed to write production code unless the purpose of this code is to make a failing test pass.
2. It is not allowed to write more than one test at a time.
3. It is not allowed to write more production code than necessary to make the failing test pass.

The starting point is writing a test for the functionality we want to implement. According to the second rule, one cannot write more than one test, and as soon as the unit test fails, production code must be written to make the test pass. But pay attention to the third rule, which states that as soon as we have written enough production code to make the failing test pass, we must stop writing production code.

If we analyze the three rules of TDD, we will realize that we cannot write a lot of code without compiling and running something. This is actually our goal. Whatever we are doing - writing tests, writing production code or refactoring - we have to keep the system in execution mode. The time interval between test runs can be seconds or minutes. A duration of ten minutes can be considered too much.
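To make the cycle concrete, here is a minimal sketch of one RED – GREEN – REFACTOR pass, written in Java with JUnit; the ShoppingCart class is a hypothetical example, not taken from this article:

// RED: write a single failing test for the behavior we want
public class ShoppingCartTest {
    @Test
    public void totalIsZeroForAnEmptyCart() {
        assertEquals(0, new ShoppingCart().total());
    }
}

// GREEN: write just enough production code to make the failing test pass
public class ShoppingCart {
    public int total() {
        return 0;
    }
}

// REFACTOR: with the test still green, clean up names and duplication,
// then start over by adding the next failing test (e.g. a cart with one item).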


When most developers hear about this technique, their first thought is: "This is stupid! It is only going to slow me down; it is a waste of time and effort. It will not let me think, I will not be able to get the design right; it will only ruin the whole process." Perfect - let us think about what would happen if we entered a room in which all the developers are using TDD. Pick a person at random, at any time, and ask him or her what the state of the code is. One minute ago, all the code was running. I repeat: "One minute ago, all the code was running!" And it doesn't matter which person you pick, the answer is always the same: "One minute ago, all the code was running!"

If your code works at every minute, how often do you think the debugger will be used? The answer is: not that often. It is a lot easier just to press CTRL+Z a few times to go back to the state where your code was running, and then try to rewrite what you did wrong in the last minutes. How much time will you save if you do not debug that often? How much time are you spending now on debugging? What would you say if you could significantly reduce that time?

The true advantages of this technique are even bigger: if you use TDD, you write a few tests an hour. Tens a day. Hundreds a week. You will write thousands of tests a year. You can save all of these tests and run them anytime you want. When should we run them? All the time. Every time we make a modification to the code, no matter the size of the modification. Why do we not clean up the code even when we know it is not that clean? Because we are afraid to break it. But if we have all these tests, we can be sure that we won't break the code, and in case that happens, we will see it immediately. If we have tests, we can rest assured when we modify the code. If we see messy code, we can clean it up with no reservations. Because of our tests, the code becomes manageable.

We continue with the benefits of TDD: if you want to know how to call a certain API, there is a test that does that.


If you want to know how to create a certain object, there is a test that does that. Whatever you want to find out about the existing system, you will find that there is a test that contains the answer to your question. Tests are like little design documents, small examples of code that describe how the system works and how to use it. Have you integrated a third party library into your project? You get with this library a manual that contains the documentation and, at the end, a list with a few examples. Which of the two have you read? The examples, of course. These are the unit tests: the most useful part of the documentation, examples of the way you can use the code. They are detailed, unambiguous, clear design documents which cannot get out of sync with the production code.

If you have tried adding a unit test to an already existing system, you may have noticed that this is not a trivial thing to do. In order to do this, it is possible that you had to change some parts of the system's design, or that you tricked the tests, because the system was not designed to be testable. For example: let us assume you want to test a certain method "m". This method "m" calls another method that deletes a record from the database. In our test, we do not want this record to be deleted, but we cannot stop it. This means that the system was designed in such a manner that it is not testable.

When we follow the three rules of TDD, all of our code becomes testable by definition. Another word for testable is decoupled. To be able to test a certain module in isolation, it must be decoupled. TDD forces you to decouple the modules, and by applying the three TDD rules you will notice that you use decoupling more often than you did before. This will make you create a better, less coupled design.

The start of the Test-Driven Cycle

The TDD process suggests that we can enhance our system just by adding tests for new functionalities in an already existing infrastructure. But what happens with the first functionality, before we have created an infrastructure? An acceptance test must run end-to-end and provide the necessary feedback about external interfaces of our system, which means we must already have implemented an automated build, deploy and test system. It is a lot of work to be done before we can see our first test failing.

Deploying and testing the project right from the beginning forces the development team to understand how their system integrates into the world. It reveals all the necessary know-how and organizational risks that must be addressed while there is still time. Starting with an automated build, deploy and test cycle for a non-existing system may sound strange, but it is essential. There are all kinds of projects that were cancelled after a few weeks of implementation because a stable deployment of the system could not be made. Feedback is an important tool, and we want to know as soon as possible if we are headed in the right direction. After the first test passes, the other tests will be written much faster.

The problem in writing and passing the first acceptance test is that it is hard to automate a system build and test the newly implemented functionality at the same time. Modifications made to one of the two cause problems with any progress made on the other, and watching the failures is harder when the architecture, the tests and all the production code are constantly changing. One of the symptoms of an unstable development system is that there is no obvious first place to look when something fails.

This paradox of the first functionality can be split into two smaller problems. The first one consists in finding out how an automated build, deploy and test cycle can be made on this "walking skeleton"; after that, this newly created infrastructure is used to write acceptance tests for the first important functionality. After all that, we can begin the test-driven development of the entire system.

A "walking skeleton" is an implementation of the smallest part of functionality on which an automated build, deploy and end-to-end test can be made. It can contain only the major components and communication techniques that help in the implementation of the first functionality. The skeleton's functionality is so simple that it is obvious and uninteresting, letting us concentrate on the infrastructure of the system. For example: a web application with a database can have a skeleton that contains only a web page with fields from the database.

While constructing the skeleton, we must concentrate only on the structure and not on how to make the test cleaner. The skeleton and its infrastructure are made only to help us start the test-driven development. They are only the first step towards a complete solution of end-to-end acceptance tests. When we write the first acceptance test, it must be a test that we can read, to ensure that it is a clear representation of the system's behavior.

Developing the skeleton is the moment when the decisions about the high-level structure of the application begin. We cannot automate a build, deploy and test cycle without having an idea about the whole structure. There is no need for details, only for a broad-brush picture of the major components necessary for the first planned release and the communication mechanisms between these components. The basic rule is that the design can be drawn in only a few minutes. For the design of the initial structure we need a high-level vision of the client's requirements, both functional and non-functional, to guide our decisions. The image below shows how the TDD process integrates in this context:

But we do not always have the luxury to create a new system from scratch. Most of the projects we are working on have started from an already existing system that must be extended, adapted or replaced. In these conditions, we cannot start constructing a “walking skeleton”; we must adapt to the existing structure, no matter how hostile it is. The process of starting TDD on such a system is not different from applying it on a new system – although it can prove to be more difficult because of the technical baggage this system is already carrying. It is pretty risky to work on a system when there are no tests to detect regressions. The safest way to start the TDD process is to automate a build, deploy and test cycle and then add end-to-end acceptance tests that cover those regions of code that need changing. With this protection, we can start addressing internal quality problems with more confidence, refactoring the code and start introducing unit tests while adding functionality.



Maintaining the test-driven cycle

Once the TDD process has been started, it must be kept running smoothly. Working on a new functionality starts with adding an acceptance test that fails, thus demonstrating that the system does not yet have the functionality that we want to implement.

This test will be written using terminology from the application domain and not from the underlying technologies (e.g. databases or web servers). This way, it will help us understand what our system is supposed to do, and it also protects our tests against possible technical changes in our system's infrastructure. Writing these tests before writing the production code helps us clarify what exactly we want to obtain. The failing tests keep us concentrated on implementing the limited set of functionalities they describe, increasing our chances to deliver them. Unit tests are important in constructing the class design and showing us that the classes work, but they do not give us any information about whether they work together with the rest of the system.

Where do we start when we want to write a new class or functionality? It is tempting to start with the degenerate or error cases because these are usually easier. The degenerate cases do not bring more value to the system and, more importantly, they do not provide us with feedback about the validity of our ideas. Incidentally, focusing on error cases from the start of a new functionality is bad for morale: if we occupy ourselves only with error handling, we will feel like we haven't accomplished anything. The best way is to start with the simplest success case. Once this test passes, we have a better idea of the real structure of the system, and we can prioritize between working on possible error cases we observed in the meantime and other success cases.

We want every test we write to be as clear as possible a representation of the system's or object's behavior.


Only when the test is readable do we write the infrastructure behind the test. We know we have implemented enough testing code when the test fails in the way we want it to, with a clear error message that describes what needs to be done. Only then can we start writing the production code that will make the test pass. We always need to see the test fail before we start writing the code that will make it pass, and to check the diagnostic message. If the test fails in a way we did not expect, then we know there is something we did not understand correctly or the code is incomplete, and we can correct it. When we receive the failure we were expecting, we need to verify that the diagnostic message helps. If the description is not clear, someone will have to struggle to understand what needs to be done when the code breaks in the future. We adjust the testing code and we run the tests again until the error message guides us to the problem in the production code.

Only writing lots of tests, even when it results in high code coverage, does not guarantee a codebase that is easy to work with. Many developers that adopt TDD find their first tests hard to understand when they have to review them later, and this common mistake consists in the fact that they think only about testing the object's methods. For example: a test called testBidAccepted() tells us what the test does, but not what it is used for. The best way is to focus on the functionalities provided by the tested object; maybe one of them requires collaboration with another object. We need to know how to use the class to reach a goal, not just to exercise all the paths through its code. It is very helpful to choose test names that describe how the object behaves in the tested scenario.
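As an illustration, compare the two test names below; the Auction class and the amounts are hypothetical, not taken from the article, and both tests exercise the same code, but only the second name tells the reader which behavior is expected:

public class AuctionTest {
    // method-oriented name: says which method was called, not what we expect to happen
    @Test
    public void testBidAccepted() {
        Auction auction = new Auction(100);
        auction.bid(150);
        assertEquals(150, auction.currentPrice());
    }

    // behavior-oriented name: reads as a statement about the scenario and its outcome
    @Test
    public void aBidHigherThanTheCurrentPriceBecomesTheNewCurrentPrice() {
        Auction auction = new Auction(100);
        auction.bid(150);
        assertEquals(150, auction.currentPrice());
    }
}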


When we write unit and integration tests, we stay alert for those parts of the code that are hard to test. When we find such functionality, we must not ask ourselves just how to test it, but also why it is so hard to test. Most of the time, the design needs improvement. The same structure that makes the code hard to test now will make it harder to change in the future. The process of writing the tests first is a valuable warning of a possible maintenance problem, and it can give us some hints on solving the problem while it is still small.

Conclusions

Test-driven development does not replace traditional testing, but in fact defines a proven way of assuring efficient testing. A side effect of TDD is that the resulting tests are functional examples of code invocation, thus offering a specification of the written code. From our experience, TDD works incredibly well in practice, and it is something all software developers should adopt.

Bibliography:
Test Driven Development: By Example, Kent Beck, Addison-Wesley Longman, 2002
Growing Object-Oriented Software Guided by Tests, Steve Freeman, Nat Pryce, 2009
http://butunclebob.com
http://www.agiledata.org
http://cumulative-hypotheses.org

Ladislau Bogdan

ladislau.bogdan@msg-systems.com Senior Software Developer @ .msg systems Romania

Tudor Trișcă

tudor.trisca@msg-systems.com Software Developer @ .msg systems Romania


directions


Big Data, Big Confusion
In an era when storage and processing costs are ever smaller, the traditional view of the manner in which we operate with data is changing fundamentally.

In "Big Data: A Revolution That Will Transform How We Live, Work and Think", authors Viktor Mayer-Schonberger and Kenneth Cukier begin by presenting the situation of 2009, when the H1N1 virus was a major concern for the World Health Organisation and, in particular, for the American government. The rapid evolution of the epidemic created difficulties for the CDC (Centers for Disease Control and Prevention), a governmental agency, as it reported the situation with a delay of two weeks compared to the reality in the field, partly because the population did not come into contact with medical personnel after the first symptoms appeared. Real-time reporting would have allowed for a better understanding of the size of the epidemic and an optimisation of the prevention and treatment tactics, actions with the potential of saving lives in a disaster which ultimately amounted to 284,000 victims. Incidentally, a few weeks before H1N1 reached the front page of newspapers, Google published in Nature, a scientific journal, a paper presenting the results of a study that started from the question: "Is there a correlation between the spread of an epidemic and searches on Google?" The assumption from which Google started is that when someone feels the effects of a newly acquired disease, they will use the Internet to search for information about the symptoms (e.g. "medicine for flu and fever"). Thus, using the data published between 2003 and 2008 by the CDC and the top 50 million most frequent searches from the same period, Google managed to identify a mathematical model (iterating through over 400 million candidates) which demonstrated the correlation between the evolution of an epidemic and the manner in which people search on the Internet. With the help of this new technology, named Google Flu Trends, the CDC managed in 2009 to monitor the spread of H1N1 in a more efficient manner. The story of Google Flu Trends is in many ways the archetypal example both for the benefits and for the technology and the challenges involved in solving a problem from the Big Data space. Starting from a hypothesis that looks for a correlation and using large amounts of unstructured data together with modern processing technologies, one attempts to validate a correlation which will ultimately bring value through the transformation of data into new information.

Big Data: The New "Cloud Computing"
Big Data is at its starting point. A proof for this is the confusion we can observe on the market when it comes to defining the problem that Big Data addresses and the manner (or manners) in which it does this. When I was talking in 2009 about Cloud Computing, I was constantly amused that the question "What is Cloud Computing?", addressed to a room of 50 participants, had the potential of receiving 52 answers of which, go figure, many were correct. The situation is similar today in the case of Big Data, and this is because we are in a period close to what Gartner calls the "peak of inflated expectations". In other words, Big Data is discussed everywhere, and the entire industry is engaged in discovering benefits in a wide range of technologies and concepts, starting from those with an increased degree of maturity/applicability (e.g. Predictive Analytics, Web Analytics) and ending with Star Trek inspired scenarios (e.g. Internet of Things, Information Valuation, Semantic Web).
"Cloud Computing" has already passed its peak, according to the volume of searches on Google, while "Big Data" is still growing. The fundamental problem that determines the confusion, and implicitly the unrealistic expectations, is caused by the fact that Big Data consists, according to Gartner's "Hype Cycle" model, of over 45 concepts in various stages, from the pioneering one (i.e. "Technology Trigger") to the mature one (i.e. "Plateau of Productivity"). Thus, Big Data cannot be treated holistically at a tactical level, but rather only in principle, at a strategic level.

Figure 1 – The comparative volume of “Big Data” (blue) and “Cloud Computing” (red) searches (source: Google Trends)




Small Data Thinking, Small Data Results
Mayer-Schonberger and Cukier identify three fundamental principles that allow for a shift from the Small Data approach to a Big Data approach.

"More": keep and do not throw away
Data storage costs reached a historical minimum in 2013. At present, storing 1 gigabyte (GB) of data costs less than 9 cents/month using a cloud storage service (e.g. Windows Azure), and for archiving it reaches 1 cent/month (e.g. Amazon Glacier), reducing the storage cost of a petabyte (1,048,576 GB) to almost $10,000 (or $10 for a terabyte), 1,000,000 times cheaper than at the start of the 1990s, when the average storage cost per GB was approximately $10,000. In this context, erasing the digital data accumulated through informatics processes makes increasingly less sense. Google, Facebook and Twitter raise this principle to the level of a fundamental law, representing their ticket to new development and innovation dimensions, an opportunity open now to those who until now were limited by prohibitive costs.

"Messy": quantity precedes quality
Google Flu Trends worked because Google successfully introduced the 50,000,000 most frequent searches into the process of iterating the mathematical models. Many of these searches were irrelevant, but volume was required for determining the model which finally managed to demonstrate the correlation. Peter Norvig, the Google expert in artificial intelligence, stated in his paper "The Unreasonable Effectiveness of Data" that "simple models supplied with a big volume of data are going to eclipse more elaborate models based on less data", a principle used also in the building of Google Translate, an automated translation service based on a corpus of over 95 billion sentences formulated in English, capable of translating to and from 60 languages.

"Correlation": facts and not explanations
We have been taught and we got used to the fact that the effect is determined by a cause, a reason for which we are naturally tempted to find out "why?". In the Big Data world, the correlation becomes more important than the causality. In 1997 Amazon had on their payroll an entire department responsible for drawing up lists of reading recommendations for those who visited the online bookshop. It was a manual process, expensive and with a limited impact on generating sales. Today, thanks to an algorithm named "item-to-item collaborative filtering" developed by Amazon, the recommendations are made completely automatically, dynamically and with a massive impact on sales (a third of the income generated by the electronic commerce comes from the automated recommendations). Amazon does not want to know why customers buying "The Lord of the Rings" by J. R. R. Tolkien are also interested in buying "Friendship and the Moral Life" by Paul J. Wadell; what interests them is that there is a strong correlation between these two titles, and this fact is going to generate income three times as much as without such a system.


Conclusions

At this time, Big Data represents the most abused trend on the market, and as a result the degree of confusion generated by the plethora of opinions encountered at every step (a category from which this article is not excluded) is extremely high, leading to unrealistic expectations and similar disappointments. However, clarity comes from understanding the potential, from adopting the principles (i.e. more, messy, correlation) and from acting preventively to adapt current systems to the new manner of thinking, from the perspective of the computing infrastructure, of the architecture and of the technical competences of those operating them. The stake is identifying new addressable opportunities of transforming data into information which could increase the efficiency of a product or of a business, as Google did through Flu Trends or Amazon through their automated recommendation system. Yonder has been accumulating Big Data experience, investing strategically in applied research projects together with product companies that understood the vision we have outlined and the benefits that such an investment could generate both on short and on long term, this trend representing one of the four technological directions chosen as an innovation topic in 2013.

Mihai Nadăș mihai.nadas@tss-yonder.com CTO @ Yonder

Figure 2 - Big Data „Hype Cycle” (source: Gartner, 2012)




programming


High availability performance systems using data grids in Java

These days the challenge of building a successful product includes ensuring that the product uses the available hardware resources in an efficient manner. This usually means clustering for all but the most trivial problems, since our datasets have outgrown our individual computers considerably. However, clustering brings a set of new problems: splitting up the processing between nodes, orchestrating the process and, very importantly, ensuring that we don't lose any data / progress if a subset of nodes goes offline - a possibility which increases dramatically as we add more and more nodes to our cluster. Data grids are a category of middleware which helps in addressing the problems enumerated above. In this article I will present a mock implementation of a financial electronic exchange which - despite its simplicity - has the robustness and performance of real-life systems thanks to the underlying library.

Basics of data grids

Data grids provide mainly three features:
• One or multiple key-value stores (abstracted away behind an interface similar to Map<K,V> in Java). The data within these stores is replicated for high availability. Data grids also usually allow defining rules for the placement of the data (for example: keep N copies of each entry, of which K need to be in a different rack / datacenter) to anticipate the failure scenarios envisioned by the designers.
• An execution service which can run tasks on nodes. These services are usually parameterized using the keys from the key-value stores, to ensure that the code runs on the machine where the data resides, so as to avoid the overhead of transporting it over the network repeatedly.
• The possibility to be notified about events in the key-value store (when elements are added / removed / updated).
In addition, while strictly not a requirement for data grids, they usually implement a pluggable interface to persist the data to external data stores - like databases or flat files - so that it is retained during full restarts of the system (since data grids almost exclusively store all of the data in memory).

Also, they usually implement support for transactions, which makes operations over distinct stores (separate maps) atomic. The library presented in this article - Infinispan - uses a technique called consistent hashing to offer the following configuration to the developer: having a set of N nodes, we want to keep each piece of data on exactly K of them (K < N, and usually 2 or 3). If nodes are added to or removed from the cluster, data is reshuffled so that this property is preserved. This redistribution happens transparently from a functional point of view, but non-functional aspects (like latency) are impacted during redistribution. You can see an illustration of the concept in the following drawing:

Here we have each piece of data (D1 through D3) replicated on three nodes, meaning that any two nodes can fail at any time and the data will still be available. Another useful effect of this replication mechanism is optimal resource usage compared to simple mirroring. For example, let's suppose that we have N nodes, each with 12 GB of memory. If we were to mirror our dataset on all of them indiscriminately, it could have a maximum data size of 12 GB (but it would have N replicas). If we decide however that K instances of the data (one primary and K-1 replicas) are enough to satisfy our reliability requirements, we get a total possible data size of (N*12GB)/K. For example for

N=10 and K=3, we would get a maximum data size of 40GB (compared to the 12GB in the full mirroring case).
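As a rough illustration of the N/K setup described above, the sketch below configures a distributed Infinispan cache that keeps two owners for every entry. It is only a minimal example assuming the programmatic configuration API of Infinispan 5.x-era releases (not code from the article's project); the exact builder methods may differ between versions.

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class GridExample {
    public static void main(String[] args) {
        // K = 2: every entry is stored on exactly two nodes of the cluster.
        ConfigurationBuilder config = new ConfigurationBuilder();
        config.clustering()
              .cacheMode(CacheMode.DIST_SYNC)
              .hash().numOwners(2);

        DefaultCacheManager manager = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build(),
                config.build());

        Cache<String, String> orders = manager.getCache("orders");
        orders.put("order-1", "BUY 10 @ 100");   // replicated to another node automatically
        System.out.println(orders.get("order-1"));

        manager.stop();
    }
}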

A brief history of Infinispan

Infinispan[1] is a JBoss sub-project, part of RedHat. It is a feature-rich, highly configurable data grid solution. It is a continuation of the JBoss Cache product, with added features like:
• high scalability
• support for running as a dedicated server or embedded in the application
• support for transactions
• cross-rack and/or cross-datacenter replication
• the possibility of accessing the data through standard interfaces like REST, the memcache protocol, WebSockets or a cross-language binary protocol called HotRod
Infinispan is easy to get started with - just add the dependency to your Maven POM and you can start using it. It is based on JGroups, a pure Java reliable messaging solution, which means that there is no native code to compile/install before it can be run. Infinispan is available under the LGPL 2.1 license[2], which means that it can be used in any project. There is also commercial support available for it from RedHat, under the name "Red Hat JBoss Data Grid".

1 http://www.jboss.org/infinispan/
2 http://www.jboss.org/infinispan/license




Project description

This project models the "heart" of an electronic exchange - the matching engine. It does so in a highly performant manner (over 500 events per second, while the most popular Bitcoin exchange - MtGox - averages less than one trade every second[3]). Given that it is built on Infinispan, every operation is replicated to another node, which means that the loss of an arbitrary node can be tolerated without losing any data. The source code for the entire system is available on GitHub[4] under the Apache 2 open-source license. The structure of the system can be seen in the diagram below. The client uses data captured from the MtGox Bitcoin exchange to create orders (expressions of intention to trade - buy/sell - at a given price or better - so called limit or better orders). The orders are communicated through an HTTP/REST interface (implemented using Jersey) to one of the nodes (this is only to demonstrate the fact that third-party systems can be connected to such a setup using standard protocols - the performance of this setup is not great - for real performance a more specialized protocol is recommended).


3 http://bitcoincharts.com/markets/mtgoxUSD_trades.html

Once an order is communicated to the appropriate node, it is placed into the orderbook (the sorted list of all the orders), the matching algorithm is re-run and all the resulting trades are stored. All this is done in a transactional fashion, meaning that if the process fails somewhere in between, partial results are not written to the distributed cache. The client communicates with one node at any one time, but it will switch (fail over) to the next node if it observes an error (and will retry the order creation). While not implemented, we could easily imagine a downstream connection consuming the data about the resulting trades, over WebSockets for example. The orderbooks are efficiently marshalled between the primary and secondary nodes using the delta replication feature from Infinispan, where only the changes are sent over the network. This allows keeping larger objects for one key (where locally they belong to the same entity) without sacrificing efficiency. During the test the nodes are restarted randomly every 10 seconds - all this without having an impact on the final result.
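The transactional step described above could look roughly like the sketch below. The OrderBook, Order and Trade types and the method names are hypothetical placeholders (the real implementation is in the GitHub repository); the point is only that the order book update and the storage of the resulting trades are wrapped in one transaction, so a failure in between leaves no partial results in the caches.

import javax.transaction.TransactionManager;

import org.infinispan.Cache;

public class MatchingStep {

    public void placeAndMatch(Cache<String, OrderBook> orderBooks,
                              Cache<String, Trade> trades,
                              String instrument,
                              Order incoming) throws Exception {
        // Requires the caches to be configured as transactional.
        TransactionManager tm = orderBooks.getAdvancedCache().getTransactionManager();
        tm.begin();
        try {
            OrderBook book = orderBooks.get(instrument);
            book.add(incoming);                       // place the order into the order book
            for (Trade trade : book.match()) {        // re-run the matching algorithm
                trades.put(trade.getId(), trade);     // store every resulting trade
            }
            orderBooks.put(instrument, book);         // write back the updated book
            tm.commit();                              // all or nothing
        } catch (Exception e) {
            tm.rollback();                            // no partial results on failure
            throw e;
        }
    }
}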

4 https://github.com/cdman/infinispan-exchange


Conclusions

Data grids can be a great solution to high throughput / high availability problems. The accompanying implementation to this article is able to process an order of magnitude more events than required by a real-life system while being fully fault tolerant with a minimum need for code changes.

Attila-Mihaly Balazs dify.ltd@gmail.com

Code Wrangler @ Udacity Trainer @ Tora Trading

Dan Berindei

dan@infinispan.org Software Developer @ Infinispan



NEO4j Graph Database for highly connected data

Neo4j is an open-source database based on graph theory, an optimized solution to model and query great amounts of interconnected data, represented by graph type structures. The dynamics and the increase in the amount of data, as well as the continuous evolution of information processing, have entailed breaking out of the traditional database area and the orientation towards NOSQL solutions. Neo4j is part of the NOSQL phenomenon, belonging to the graph database category. Its unique feature is the high degree of adaptability to real data models.

NOSQL Stores

In the case of relational databases, as well as in the case of some NOSQL (non-graph) solutions, the design process undergoes two phases:
• the definition of concepts, entities and their interaction – the logical / real model;
• the concretization of the logical model into a physical / abstract model (e.g. in the case of a relational database).
In most of the cases the logical model is very different from the physical model. Within a software organization, any team can participate in the first phase – it doesn't have to be a technical team (management / sales) – for a better definition of requirements and concepts. The second phase, however, represents the mapping of the logical model onto the chosen storage option. Thus the degree of comprehension of the logical model decreases as data complexity increases. The great advantage of graph databases, and therefore of using Neo4j, is that the logical model is one and the same with the physical model. This consistent, "human readable" representation offers a great degree of flexibility, adaptability and expressivity in modelling real data.

This database type allows an iterative approach, thus it can be successfully employed in an Agile development strategy.

Data representation in Neo4j
In Neo4j the data is represented by nodes and relationships. Both nodes and relationships can have attributes. The relationships have a very important role within a graph database, since traversing the graph, and thus manipulating the data, is done through them. A relationship always involves two nodes; it has a direction and a unique type name.
In the example above, the relationship "KNOWS" connects the nodes "Author 1" and "Author 2", also mentioning the additional relationship property "since".
Depending on a node, the relationships can be divided into two groups:
• incoming;
• outgoing.
Both the attributes of a node and those of a relationship can be indexed in order to improve the traversal of the graph (like column indexing in traditional databases). Attempting a comparison to traditional databases, you can imagine a node as an entry in a table, and a relationship as an entry in a join table or a pair of columns in the same table in the case of a denormalized representation.

Cypher query language
Neo4j has its own language for querying data organized in graph structures. We use the notion of "Traversal", which mediates the navigation within the graph, route identification and implicitly node selection for the result of a query.
Cypher is a declarative query language, highly intuitive and "human readable", as it can be easily understood even by a non-technical person. Some keywords are inspired by SQL: where, order by, limit, skip (the equivalent of offset).
The language is made of the following clauses:
• START – the point of entering the graph. Any graph query has at least one starting node;
• MATCH – the pattern for searching nodes connected to the starting node;
• WHERE – node / relationship filtering conditions;
• RETURN – the result of the query;
• CREATE – creates nodes or relationships;
• DELETE – deletes nodes or relationships;
• SET – sets attributes on nodes or relationships;
• FOREACH – updates lists of nodes;
• WITH – divides a Cypher query into several distinct parts.
For illustration I propose the following graph, which represents the authors who publish in TSM, connected as follows:



The graph above can be created with the following Cypher command:

CREATE autor1 = { name: 'Autor1', worksAt: 'Company1' },
  articol1 = { title: 'Articol1' },
  articol2 = { title: 'Articol2' },
  (autor1)-[:Wrote]->(articol1),
  (autor1)-[:Wrote]->(articol2),
  revista = { name: 'TSM', domain: 'IT', poweredBy: 'Gemini Solutions' },
  (articol1)-[:Published_in { date: '2013' }]->(revista),
  autor2 = { name: 'Autor2', worksAt: 'Company2' },
  articol3 = { title: 'Articol3' },
  (autor2)-[:Wrote]->(articol3),
  autor3 = { name: 'Autor3', worksAt: 'Company3' },
  articol4 = { title: 'Articol4' },
  (autor3)-[:Wrote]->(articol4),
  (autor2)-[:Knows]->(autor3),
  subject1 = { subject: 'Spring Framework' },
  subject2 = { subject: 'NOSQL' },
  subject3 = { subject: 'Agile' },
  subject4 = { subject: 'Android' },
  (autor1)-[:Interested_in]->(subject1),
  (autor1)-[:Interested_in]->(subject3),
  (autor2)-[:Interested_in]->(subject2),
  (autor3)-[:Interested_in]->(subject4);

As one can notice, node and relationship creation is highly intuitive and flexible. Query examples on the graph above:

1. start n=node(*) match n-[:Wrote]->(a) return n, count(a)
(the result will display each author and the number of published articles)

2. start magazine=node(*) match magazine<-[:Published_in]-(article)<-[:Wrote]-(author)-[:Interested_in]->(subject) where magazine.name? = 'TSM' and subject.subject? = 'Java' return author.name, article
(the result will display all the articles dealing with Java, together with the name of the author)

3. start n=node(*) match n<-[:Published_in]-(article) return count(article)
(the result displays the number of articles published in TSM)

In the case in which we have millions of authors and there is a request to add a new relationship (eg. RATING between an author and an article), one just needs to add the relationship and the function of connecting authors to the articles can be performed. Within a relational database, the operations for changing the diagram are usually costly and not necessarily straightforward. Thus, we can say that the model evolves naturally with the real data and the requirements imposed by the business.

SpringData and Neo4j

Neo4j exposes a rich Java API which allows creating and controlling the graph. Another option is using the REST interface. The Spring Framework founders have created a module addressing NOSQL databases. Its name is Spring Data and it is based on the same abstraction for database integration through the "Templates" concept (e.g. JDBCTemplate). As an analogy, just as the SQL interaction is carried out through Hibernate, the Cypher interaction is carried out through the Spring Data Neo4j support.
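As a small illustration of the embedded Java API mentioned above (and not code from the article), the sketch below creates two of the authors from the example graph and the "Knows" relationship between them, using the Neo4j 1.x-era API that was current at the time.

import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class GraphExample {
    public static void main(String[] args) {
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase("data/tsm-graph");

        Transaction tx = db.beginTx();
        try {
            Node autor2 = db.createNode();
            autor2.setProperty("name", "Autor2");

            Node autor3 = db.createNode();
            autor3.setProperty("name", "Autor3");

            // (autor2)-[:Knows]->(autor3), the same relationship as in the Cypher example
            autor2.createRelationshipTo(autor3, DynamicRelationshipType.withName("Knows"));
            tx.success();
        } finally {
            tx.finish();
        }
        db.shutdown();
    }
}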



Iulian Moșneagu

iulian.mosneagu@geminisols.ro Senior Software Engineer @ Gemini Solutions


programming


Hadoop (II)

In the previous issue we discovered the secret of how Hadoop stores hundreds of TB of data. Based on a simple master-slave architecture, Hadoop is a system that can very easily store and manipulate a big amount of data.

Hadoop contains two types of nodes for storing data. The NameNode is the node that plays the role of master. It knows the name and locations of each file that Hadoop stores. It is the only node that can identify the location of a file based on the file name. Around this node we can have 1 to n nodes that store the files content. The name of this kind of nodes is DataNode.
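To make the NameNode / DataNode split more tangible, here is a minimal, hypothetical sketch of a client writing a file through the HDFS Java API: the client only works with a path, and it is the NameNode that decides on which DataNodes the blocks end up. The host name and the configuration property are examples only (the property name differs between Hadoop versions).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Points the client at the NameNode (example address).
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");

        FileSystem fs = FileSystem.get(conf);

        // The client names the file; block placement on DataNodes is the NameNode's job.
        Path file = new Path("/reports/accidents-2013.csv");
        FSDataOutputStream out = fs.create(file);
        try {
            out.writeBytes("London, 2013 January, 120\n");
        } finally {
            out.close();
        }
        System.out.println("Stored: " + fs.getFileStatus(file).getLen() + " bytes");
    }
}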

Data processing

Hadoop stores big data without any kind of problems. But it became known as the system that can process big data in a simple, fast and stable way. It is a system that can process and extract the information that we want from hundreds of TB of data. This is why Hadoop is the king of big data. In this article we will discover the secret of data processing – how Hadoop manages to do it. The secret that gives us the ability to process data is called MapReduce. This paradigm was not invented by Hadoop, but Hadoop managed to implement it very well. The first meeting with MapReduce will be hard for us. It will be pretty complicated to understand. Each person that wants to use MapReduce needs to understand the MapReduce paradigm first. Without understanding MapReduce we will not be able to know whether Hadoop is the solution for our problem and what kind of results we should expect from Hadoop.

MapReduce and Tuples

Don’t expect Hadoop to be a system that stores data on tables. This system doesn’t have the concept of tables. It only works with tuples that are formed by a key and a value. This is the only thing that Hadoop uses to extract data. Each task that is executed in this system will accept as input these tuples. Of course the output of a task will be formed by (key, values) pairs. Each pair can contain one or more values. Even if this tuple seems to be trivial,

we will see that it is all we need if we want to process data.

Map

The MapReduce process is formed of two different steps – Map and Reduce. Map is the process used to convert the input data into a new set of data. The data that is obtained after this step is only intermediate data that will be used in the next step. We have the option to preserve this data, but generally this information is not relevant for the end user. The Map action is not executed on only one node. This action is executed on 1 to m nodes of DataNode type. Each DataNode on which this action is executed will contain the input data – because of this, on each node we execute the Map over a part of the input data. The result size of this action is smaller than the input data. This data can be processed more easily. At this step we have the result in memory. The result is not written to disk. We can imagine that the output of this step is like a summary of our data. Based on the input and on how we want to map the input data, we will obtain different results. At this step, the output data doesn't need to have the same format as the input data. The result is partitioned based on a function that uses the key of the tuple. In general a hash function is applied, but we can define any kind of partitioning mechanism. The intermediate result can be used by Hadoop for different operations. At this step we can execute actions like sorting or shuffling. These small steps can prepare the data for the next step. These operations can be and are executed also after the Reduce step. From the parallel point of view, on each node where the Map step is executed we can have from 10 to 100-150 operations running at the same time.

The number of concurrent operations is dictated by the hardware performance and the complexity of the Map action.

Reduce

Once we have the intermediate results, we can start the next step of processing – Reduce. In comparison with the Map operation, the Reduce step is not executed on every node of Hadoop. This operation will be executed on only a small part of the nodes. This happens because the size of the data that we need to process was already reduced. The data is partitioned for each Reducer. While the Map phase was formed of only one step, the Reduce phase contains three main steps:
• Shuffle
• Sort
• Reduce
When the Shuffle step is executed, each DataNode that was involved in the Map operation starts to send the results to the nodes that will run the Reduce operation. The data is sent over an HTTP connection. Because Hadoop runs in a private network, we don't have any kind of security problems. All the key-value pairs are sent and sorted based on the key. This needs to be done because there are cases when we can have the same key coming from different nodes. In general, this step is done in parallel with the shuffle process. Once the shuffle step ends, the Hadoop system will start to make another sort. At this moment Hadoop can control how the data is sorted and how the result will be grouped. This sort step gives us the possibility to sort items not only by key, but also based on different parameters. This operation is executed on disk and also in memory.
The last step that needs to be executed is the Reduce. When this operation is executed, the final results are written to disk. At this step, each tuple is formed of a key and a collection of values. From this tuple, the Reduce operation will select a key and only one value – the value will represent the final result. Even if the Reduce step is very important, we can have cases when this step is not necessary. In these cases the intermediate data is the final result for the end user.

JobTracker, TaskTracker
The MapReduce operation requires two types of services - JobTracker and TaskTracker. These two types of services are in a master-slave relationship that is very similar to the one we saw earlier for how the data is stored - NameNode and DataNode. The main goal of the JobTracker is to schedule and monitor each action that is executed. If one of the operations fails, then the JobTracker is capable of rerunning the action. The JobTracker talks to the NameNode and schedules the actions in such a way that each job is executed on the DataNode that has the input data – in this way no input data is sent over the wire. The TaskTracker is a node that accepts Map, Reduce and Shuffle operations. Usually this is the DataNode where the input data can be found, but we can have exceptions from this rule. Each TaskTracker has a limited number of jobs that can be executed - slots. Because of this, the JobTracker will try to execute jobs on the TaskTrackers that have free slots. From the execution model, an interesting thing is the way in which jobs are executed. Each job is executed in a separate JVM process. Because of this, if something happens (an exception appears), only one job will be affected. The rest of the jobs will run without problems.

Example
Until now we have discovered how MapReduce works. Theory is very good, but we also need to practice. I propose a small example that will help us understand how MapReduce works. In this way we will be able to understand, in a simple way, how MapReduce does its magic. We will start from the following problem. We have hundreds of files that contain the number of accidents from each city of Europe that happened every month. Because the EU is formed of different countries that have different systems, we end up with a lot of files. Because of this, we have files that contain information from the cities of a whole country, others contain information for only one city and so on. Let's assume that the files have the following format:



London, 2013 January, 120
Berlin, 2013 January, 300
Roma, 2013 February, 110
Berlin, 2013 March, 200
…
Based on the input data, we need to calculate the maximum number of accidents that took place in each city during a month. This simple problem can become a pretty complicated one when we have 10 TB of input data. In this case Hadoop is the perfect solution for us. The first operation of the MapReduce process is the Map. Each file will be processed and a key-value collection will be obtained. In our case the key will be represented by the name of the city and the value will be the number of accidents. From each file we will extract the maximum number of accidents of each city during a month. This would represent the Map operation, and the output would be something like this:
(London, 120), (Berlin, 300), (Roma, 110), (London, 100), (Roma, 210), …
This intermediate result has no value for us (yet). We need to extract the maximum number of accidents for each city. The Reduce operation will be applied now. The final output would be:
(London, 120)
(Berlin, 300)
(Roma, 210)
A similar mechanism is used by Hadoop. The power of this mechanism is its simplicity. Being a simple mechanism, it can be duplicated and controlled very easily over the network.
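A sketch of how this example could be expressed with the Hadoop MapReduce Java API is shown below (this is illustrative code, not the article's): the mapper emits a (city, accidents) pair for every input line and the reducer keeps the maximum value per city.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxAccidents {

    public static class AccidentMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Input lines look like: "London, 2013 January, 120"
            String[] fields = line.toString().split(",");
            if (fields.length == 3) {
                String city = fields[0].trim();
                int accidents = Integer.parseInt(fields[2].trim());
                context.write(new Text(city), new IntWritable(accidents));
            }
        }
    }

    public static class MaxReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text city, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int max = Integer.MIN_VALUE;
            for (IntWritable value : values) {
                max = Math.max(max, value.get());   // keep only the maximum per city
            }
            context.write(city, new IntWritable(max));
        }
    }
}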

Conclusion

In this article we found out how Hadoop processes data using MapReduce. We discovered that the core of the mechanism is very simple, but it is duplicated on all the available nodes. On each node we can have more than one job running in parallel. In conclusion, we could say that all the tasks that are executed over the data are parallelized as much as possible. Hadoop tries to use all the resources that are available. As a remark, don't forget that the native language of Hadoop is Java, but it has support for other languages like Python, C# or PHP.

Radu Vunvulea

Radu.Vunvulea@iquestgroup.com Senior Software Engineer @iQuest


programming


Functional Programming in Haskell (II)


Mihai Maruseac

mihai.maruseac@gmail.com IxNovation @ IXIA membru ROSEdu, ARIA

The last article was a short introduction to the language, being more concerned with presenting the history of the language and the benefits of using it, and giving a lesser importance to syntax. This time, however, we will discuss the types of the Haskell language. The most defining attribute of Haskell is its static typing system: each expression has a type which is known at compile time. Moreover, there are no implicit conversions between similar types: the programmer is the one responsible for doing them in the places where they are needed. Let's start slowly, with the beginning. Unlike C, Java and other similar languages, in Haskell using types is not a cumbersome thing: declaring them is optional. One can write much code without declaring a single type. The compiler and the interpreter will use the type inference engine to determine the most general types for the written code. On the other hand, knowing the types of some expressions and using the types is useful. The simplest possible type is the boolean one, with the values True and False. As a note to the reader, remember that these two values are written with capital letters; this article will provide a motivation for this. The type of these two values is Bool. As a rule, Haskell types are always written with capital letters. It is a convention whose logic will be revealed soon. Turning our attention to the numerical types, we start with the integral numbers. We have an interesting aspect to consider. The value 7 can be either of type Int or of type Integer. The difference exists at the semantic/interpretation level: Int is limited by the computer's architecture, while Integer involves using a special library for handling big numbers – thus, we can represent high precision numbers, no matter how big they are. On the real side, we have Float

and Double, just like in C. It is possible that a question has arisen: how does one convert between Int and Integer? Let’s consider a function f which receives an Int argument and an expression x of Integer type. It is impossible to write f x because the two types don’t match and the compiler will throw an error. Luckily, we have toInteger to convert an argument to a result of type Integer. So, the proper call is f (toInteger x) or, since we can get rid of some parentheses by using $, f $ toInteger x. It is possible that at this point static typing looks cumbersome. However, after certain experience is acquired, the benefits of using it will outweigh the little inconveniences that we presented in the previous paragraph. Let’s see some complex types. As you already know from the previous article, Haskell has a type for lists of elements of type a, [a]. Observe that this time we have used small letters: we have a type variable, not a proper type. For example, [1,2,3] is of type [Integer] (we write [1,2,3] :: [Integer]). Also, observe from [a] the restriction that all elements of the list have the same type. Finally, we have a standard string type String, which is nothing more than a synonym for a list of chars (a char has type






Char): [Char]. Finally, the last basic type of the language is the type of tuples. A pair has type (a, b), a 5-element tuple (a, b, c, d, e), etc. It is easy to observe that each element can have a different type. For example, we can have (“Anna”, “apples”) :: (String, String) but also (“Anna”, 42) :: (String, Integer). As a particular type, we can have tuples with no elements. The type is () and there is one single value of this type: () :: (). Finally, since every expression in Haskell has a type, we must also speak of one when talking about functions. This type informs us about the types of input values and the type of the result. In some cases we also find out what the function does, types are used as documentation. For example, for a function f :: a -> a we know that it receives an argument of any type and returns back a result of the exact same type. If we exclude non-terminating code (f x = f x), no runtime exceptions (f x = undefined or f x = error ...) and no useless complications (f x = head [x, f x, f (f x)]) we arrive at the only valid expression for f: the identity function. Thus, starting from the type we can determine the semantics of the code. Of course, not everything can be deduced, at least not using only what was presented so far. An interesting aspect is represented by functions with multiple arguments, like the following one addBothWith2 x y = x + y + 2

Its type is Integer -> Integer -> Integer (modulo some reductions for simplicity of this article). Each argument is segregated from the next one by ->. This signature captures an interesting aspect of functions in Haskell: we can give a number of arguments which is less than the expected one and we receive back a new function. In theory, Haskell functions are called curry (after the name of Haskell Curry) and this is possible because functions are first-class values (there is no difference between sending a number or a function to the identity function, as an example). Going back to addBothWith2 above we observe that the type of addBothWith2 3 is Integer -> Integer. Thus, Integer -> Integer -> Integer and Integer -> (Integer -> Integer) are the same thing (the curry aspect). On the other hand, (Integer -> Integer) -> Integer represents the signature of a function receiving a function from integer to integer and returning an integer result. An example could be the following


function:

applyTo42 f = f 42

If we apply it to (+1) we will get back 43, and if we apply it to (addBothWith2 3) the result is 47.
Knowing all of these things we can write any Haskell program we desire. However, by limiting ourselves only to these types we won't get any of the benefits of Haskell's static typing. Even more, we will get into some problems, like the fact that we have predefined functions only for tuples with 2 values; for all others we need to roll our own.
Luckily, Haskell allows us to define our own types to create more expressive and declarative code. We will present all ways of doing this in this article.
To start with, remember that we said that type String is a synonym for [Char]. It is far easier to read and understand code using [String] than one using [[Char]]. Likewise, it is far more comfortable to work with Vector2D, Point2D, Size rather than only with (Integer, Integer) for all three possibilities. In Haskell, we declare new synonyms using type. For example, this is the definition of the String type:

type String = [Char]

Behind the scenes, the compiler still uses the original types. The benefit is that some type inference results use the synonyms and that the programmers can use them as well in their own declarations (when using explicit typing).
To build a new type we use data. This is done by defining the type constructors: list each constructor as well as the types of its arguments. For example, here is the exact definition of the Bool type:

data Bool = True | False

There are two constructors, named True and False. The astute reader will recognize the two values of the type. Now, the law regarding capitalization of values in Haskell can be stated: only the constructors of a type begin with a capital letter, and no other value. A more complex type is Maybe. It allows us to signal missing values or the possibility that a function has reached a failure state. Thus, we are saved from obtaining null-pointer exceptions at runtime: the type system enforces that the programmer checks both cases, and only when needed.

data Maybe a = Just a | Nothing

It is easy to observe that the type is generic: it receives a type variable as an argument: a. One of the constructors uses this variable. So, we can have the Maybe Int type or Maybe (Maybe String), each one with its own semantics.
There is a disadvantage in using Maybe: there is no possibility to report why the value is missing. For this, we use Either, defined as:

data Either a b = Right a | Left b

How to use them properly in code is left for another article. It is possible that the number of fields in a constructor is too high. Or, that we often need to access certain fields from there and it is cumbersome to write accessing methods. However, we can use the special record syntax for this:

data Person = P { name :: String, familyName :: String, age :: Int }

As a result, not only do we obtain the Person data type and its constructor P :: String -> String -> Int -> Person, but we also gain access to three functions: name :: Person -> String, familyName :: Person -> String and age :: Person -> Int.
Behind the scenes, types defined with data require memory areas to save the constructor tag and each parameter in turn. This can be inefficient in those cases when we only have one constructor and one single value, those cases when we could use a type synonym but we'd like to tap into the full power of type inference. These are the cases where we use newtype.


newtype State s a = S { runState :: s -> (s, a) }

Finally, the last thing we will mention about types is that we can find the type of any expression in ghci using :t expression:

Prelude> :t map
map :: (a -> b) -> [a] -> [b]

From here we immediately infer that map will apply the function received as first argument on the list received as the second argument, returning a list of the results. We can also find the type of expressions consulting lambdabot on the IRC #haskell channel or via Hoogle. In fact, Hoogle also allows in making the reverse


programare search: starting from the approximate type of a function, let’s consider String -> Int -> Char we can get to the proper function by skimming through the result page. In our case (!!) :: [a] -> Int -> a, is the needed function, the one returning the nth element from a list. For today’s exercise, we will simulate a database manipulation program. In fact, today we will only look at searching through a simple list of keyvalue pairs. We start by defining some type synonyms to help in reading and understanding the code:

TODAY SOFTWARE MAGAZINE generalize both arguments and search for namePhone Just 24828542 [(a, b)] -> a -> Maybe b the first result is the lookup function (ignore the Eq a => As you can see, using types gives a part for now). great boost but it is also cumbersome at Using this information we write the the beginning. It is the programmer who is search functions as follows: responsible in choosing the proper design. After doing this the compiler becomes searchNameAge name (NAgT l) = more or less of an ally, helping us having as lookup name l searchNameAddress name (NAdT l) = few as possible runtime bugs. lookup name l searchNamePhone name (NPT l) = lookup name l

Next time we will deal with types as well. But from a different and more inteOn the right hand side we have used resting perspective: polymorphism and exactly the same expression. The next capturing common patterns by using type type Name = String article will present how the proper call is level information. type Age = Int type Address = String made and how we can further reduce the type PhoneNumber = Integer code length by following the DRY principle Then, we define the types of the three (don’t repeat yourself). tables that we will use: For today, we are only concerned with testing the functions. First, let’s see newtype NameAgeTable = NAgT [(Name, Age)] deriving Show how the compiler behaves when calling a newtype NameAddressTable = NAdT function with a bad list (the power of type [(Name, Address)] deriving Show newtype NamePhoneTable = NPT inteference): [(Name, PhoneNumber)] deriving Show

We have used newtype and tuples to achieve an efficient representation while still having access to the full power of type inference. The deriving Show part is needed to display values of the new types and will be detailed in the next edition. Now, it is time to build some values for the three lists that we are using: nameAge = NAgT [(„Ana”, 24), („Gabriela”, 21), („Mihai”, 25), („Radu”, 24)] nameAddress = NAdT [(„Mihai”, „a random address”), („Ion”, „another address”)] namePhone = NPT [(„Ana”, 2472788), („Mihai”, 24828542)]

As you can see, we need a function to search in lists of key-value pairs. Of course, we can write our own recursive definition for this but it is an interesting exercise to use Hoogle. Were we to look for a function of type [(String, a)] -> String -> Maybe a no result will be returned. However, if we

*Main> searchNameAge „Ion” nameAddress <interactive>:21:21: Couldn’t match expected type `NameAgeTable’ with actual type `NameAddressTable’ In the second argument of `searchNameAge’, namely `nameAddress’ In the expression: searchNameAge „Ion” nameAddress In an equation for `it’: it = searchNameAge „Ion” nameAddress

Now, the actual searches: *Main> searchNameAge „Ion” nameAge Nothing *Main> searchNameAge „Mihai” nameAge Just 25 *Main> searchNameAddress „Mihai” nameAddress Just „a random address” *Main> searchNameAddress „Gabriela” nameAddress Nothing *Main> searchNamePhone „Gabriela” namePhone Nothing *Main> searchNamePhone „Mihai”




QA


Planning for Performance Testing

In this article I would like to present an introduction to Performance Testing planning, results collection and analysis, based on my experience with Performance Testing. I am presenting this information considering that the reader has experience in Performance Testing; I will refer to some metrics and non-functional requirements that I have used as examples for some of the concepts.

What is Performance Testing and why do we need it?
The ISTQB definition of Performance Testing is: "The process of testing to determine the performance of a software product." Our aim is to determine how good a product is, how fast, how many users can use the product and the response time for each one of them, what the product limits are and so on. The results of Performance Testing will help the business stakeholders make an informed decision when releasing the product. Performance Testing will reveal how the system behaves under real concurrent usage and will help us predict what hardware resources are needed in order to sustain a certain load. There are several Performance test categories, with particularities in goals, planning and execution:
• Load testing – where we want to see how the system behaves under a certain load
• Stress testing – where we want to find the system's limits and the robustness of a system under load
• Soak testing – where we want to see how the system behaves under load over a longer period of time

How do we plan the Performance Testing?

As any test planning, Performance Testing should be carefully planned. Some of the artifacts taken into consideration in the planning are: requirements, resources needed, scenarios design and run, results collection and analysis, reporting, tooling needs etc. A separate Performance Test plan can be created or added as a section describing Performance Testing planning as part of the Test plan of the project. I will detail those areas which I take into consideration when I plan the Performance Testing for a project:


Requirements
I consider this the most important aspect in Performance Testing, besides the technology used. The tools needed are determined based on the technology and on the requirements analysis. Non-functional requirements (NFR) will list figures such as the response time, the number of concurrent users, spikes, etc.

Resource needs
Identify the skills and training needs of the testers, so that they can design and run performance tests. I consider performance testing to be a team effort involving software architects and developers. The expertise of a software architect is valuable in designing effective tests which will unveil weaknesses in the system. The developers' expertise is valuable when development work is needed and speeds up the writing of the scripts. The input of the software architects and developers in interpreting the results is needed for a better understanding of the system's behavior and of future correction activities.

Scenarios design and run

A. Performance tests are designed in order to cover the NFR reflected in a real world scenario. The performance test scenarios I use include the following parts:
a. Ramp-up – the time needed for all users to be up and running (e.g. log in and start running)
b. Distribution of load/actions over time – a real life scenario is emulated, including the profiling and the types of actions users are doing
c. Ramp-down – the time needed for users to log out and for the load on the system under test to stop

B. The time when Performance tests are run is very important. A big part of the functionality of the product must be implemented and stable (no less than 80% of the functionality, the percentage might vary) in order to run Performance tests. Otherwise there is a big chance for the results to be irrelevant. Any major code change will impact the performance results, and a comparison between successive Performance test runs can't be done. Performance tests can be run separately on a certain functionality in isolation, or on a separate service, when that piece of functionality is independent. The results are meaningful and offer continuous and timely feedback on the system's performance. In an agile environment, when complete functionality is delivered after a Sprint or a small number of Sprints, it is opportune to plan for continuous performance testing. Usually there is a limited number of full performance test runs during a release cycle, due to the time needed to run them and to the system resource consumption. The results need to be analyzed and the code changes for performance might be done, also in an iterative cycle. Another way to get continuous feedback on the system's performance is to set up listeners in unit or API tests, which will give a continuous overview of the performance of the code under test in terms of method/flow execution times.

C. Results collection and reporting
The set of results collected should be carefully defined and be minimalistic in order to:
• collect the relevant results. Performance test tools offer a wide variety of results that can be captured, especially in terms of system response time. Hardware resources can be monitored with the built-in functionality of the performance test tools or using specific tools installed on the system under test
• not collect too many results. Too many collected results will have an impact on both the client running the performance tool and the resources of the system under test.



The client might not be able to sustain the load level, and the results of the system under test will be affected by the consumption of resources by listeners instead of test actions.
Here is a list of important results which I capture:
• Response time over time. You can analyze when peaks are present and correlate them with the hardware resource consumption of the system under test
• Minimum and maximum response time
• Average response time. Besides the average response time we want to know how many requests were handled quicker than the average time and how many took longer. It is also important how many requests are very slow. Percentile response times (90th, 95th, 99th) are used to give the number of requests that fall within 90%, 95% and respectively 99% on the scale from the shortest to the longest response time
• Throughput: how much load the system can handle. This result is important for the capacity planning of hardware needs
• Error rate: how many requests end up with an error, and their cause. The analysis of errors can reveal problems such as the system not being able to handle a certain load and giving timeouts, etc.
• System resource consumption (CPU, memory, disk usage, network bandwidth consumption). The load should aim at a hardware resource consumption of 85%, in order to have room for maneuver to recover the system under load.

D. Reporting: Performance test reports present the system's efficiency in relation to the NFR. The production hardware might be configured based on the Performance test results. The following details, but not limited to, are part of a Performance test report:
a. Details about the system under test, like IP address, hardware capacity, OS, network topology used, product build used, date;
b. The scenarios run and the aim of each scenario;
c. The Performance tools used;
d. The measurement results.

Tooling
The choice of the tools to be used for Performance tests should take into consideration the following criteria:
a. Built-in support for complex scenarios;
b. Load/profiling distribution;
c. Reporting capabilities.
The evaluation of free tools versus different vendors' tools needs to be done by taking into consideration the context of the project and the skills of the team. Scripting is needed for some tools in order to be able to run complex scenarios (like JMeter), while other tools offer built-in functionality for load/profiling distribution (like LoadRunner).

Conclusions
• Performance testing planning and execution should be done as a joint effort of testers, software architects and developers
• The importance of tooling and of the skills needed shouldn't be underestimated

Alexandru Cosma
alexandru.cosma@isdc.eu Senior Tester @ iSDC




programming

Book review: Eclipse Rich Client Platform by Jeff McAffer, Jean-Michel Lemieux and Chris Aniszczyk

T

he book review for this number of Today Software Magazine belongs to that kind of books category that have an immediate impact on the reader. The focus of the book is on the practical side. The theoretical concepts are gradually introduced, starting with general terms and continuing with specific concepts and best practices. The particularity of the book consists in the fact that they are integrated immediately in an application with numerous features, which is also gradually developed throughout the entire book material. Silviu Dumitrescu silviu.dumitrescu@msg-systems.com Java Consultant @ .msg systems Romania



Due to this reason, the time between the presentation of the theoretical concept, the implementing and the highlight of its utility shortens significantly. Thus, we are not facing a situation where we read about a concept, that we do or do not accurately understand, followed by a rather dummy application that implements it and after a long time, an application, if there is one, that integrates all the concepts addressed. Although very popular as a theme, the Hyperbola application presented in this book is, basically, an implementation of a messenger service using the Eclipse RCP, with classic features: establishing a group of friends, the actual communication between them, history and so on. The application is formed as a model application that uses RCP, but also as inspiration source for some other components of RCP applications. In addition to the actual implementation, the penultimate chapter of the book presents a method of structuring, packaging and delivery of the application, so that we can be able to create dynamic systems that run on a large variety of operating systems. My beginning of the review of the book Eclipse Rich Client Platform, by Jeff McAffer, Jean-Michel Lemieux and Chris Aniszczyk was quite steep, from the desire of attracting more readers. In addition to arousing interest in the technology presented, the book also brings the added value of immediate usage and understanding.

Rich (or fat) clients are quite similar to stand-alone applications that also access remote resources (e.g. databases). Rich clients differ a lot from thin clients (which use the browser as their client) because they make heavy use of the computer's hardware and software resources, in addition to whatever remote resources they access. Perhaps the biggest problem of rich clients is that installs and upgrades must take into account the hardware configuration of the computer on which the application runs. As we very well know, the number of such configurations is very large, and the difficulties were a challenge not easily overcome. There are, however, many advantages, among them the ability to work offline (without a permanent network connection) and enhanced multimedia performance. In my opinion these kinds of clients were very hard to develop with good architectural qualities before frameworks appeared. Frameworks support software developers by giving structure to development in pursuit of best practices. Technological progress then led to the emergence of platforms, which are collections of frameworks and the hardware architectures that allow the software to run. The first rich client platforms offered only the possibility of running the logic of an


application in an operating system. Today's platforms offer much more: a component model called plugin (easily managed, updated, integrated etc.), a middleware above all these components (allowing extensibility, flexibility, scalability, updates etc.), portability at runtime (from PCs to mobile devices and not only), assisted development tools, the possibility of smart installation and much more.

In this book, the authors present the Eclipse 3.5 (Galileo) version of the platform. This is not the latest version of Eclipse, but it includes a number of significant improvements over the previous versions. The latest version is Eclipse 4 (Juno), launched in June 2012. I want to make some remarks related to what's new in Eclipse 4 and to state that the entire technology presented in the book deserves your full attention. Eclipse 4 is based heavily on all earlier versions and a proper understanding will lead to better results in software development. So, in Eclipse 4:
• The application is described based on a structure called the application model
• This model can be modified at runtime or at development time, and can afterwards be extended
• There is support for annotations and dependency injection, with important implications for unit testing (see the sketch below)
• Eclipse widgets can be styled using CSS files, as with web pages
• The application model is decoupled from presentation. This allows us, at the user interface level, to use toolkits such as SWT or JavaFX for rendering the model
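As a rough illustration of the annotation and dependency injection support mentioned in the list above, here is a minimal sketch of an Eclipse 4 command handler, not taken from the book: the class name and the dialog text are made up, while @Execute, @Named and IServiceConstants.ACTIVE_SHELL come from the e4 programming model.

import javax.inject.Named;

import org.eclipse.e4.core.di.annotations.Execute;
import org.eclipse.e4.ui.services.IServiceConstants;
import org.eclipse.jface.dialogs.MessageDialog;
import org.eclipse.swt.widgets.Shell;

// Hypothetical Eclipse 4 command handler: the framework instantiates it,
// injects the values requested through annotations and calls the @Execute
// method when the command bound to it is triggered.
public class AboutHandler {

    @Execute
    public void execute(@Named(IServiceConstants.ACTIVE_SHELL) Shell shell) {
        // The currently active Shell is supplied by dependency injection.
        MessageDialog.openInformation(shell, "About", "A minimal e4 handler sketch.");
    }
}

Because the handler is a plain annotated class, it can also be instantiated directly in a unit test with mock arguments, which is the testing benefit the bullet above refers to.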

The book is structured in three main areas. The first area includes an introduction to basic RCP functionalities (login, key bindings, menus, etc.) but also complex elements (help, update). The second area describes the implementation details (about perspectives, views, editors, actions, commands, plug-ins, dynamic testing) needed to obtain a performant result, achieved by refining and refactoring the prototype previously obtained, customizing the user interface and also creating and delivering the product to the user. The third area is dedicated to references. In this area you can read about the OSGi framework specification, the Eclipse data binding framework and other plug-ins useful for developing RCP applications. The OSGi Framework is described in detail in a separate book - OSGi in Action, whose review can be found in number 9 of Today Software Magazine. Last, but not least, it is interesting to

show to whom this book is addressed. First of all, to all those programmers who develop RCP applications. Even for programmers already experienced in this technology, reading this book can be very useful: in addition to highlighting the architecture of a well developed, successful application, a great deal of theoretical concepts are detailed. Secondly, the book is addressed to those who want to apply rich client principles when developing RIA (Rich Internet Applications), since the experience gained at the rich client level is very useful for Rich Internet Applications. Of course, knowledge of Java technology on the standard platform is absolutely essential, and some Eclipse IDE skills are also welcome. I think that any person willing to learn and excel in RCP usage can find precious information in this book.

Personally, I think the evolution of the Eclipse RCP platform is sensational and still ascending. Introducing the MVC pattern, using OSGi, internationalization, annotations and dependency injection are just some examples of the remarkable progress made by this platform. Although the book contains many guidelines, working with such a platform requires extensive and well understood knowledge in order to really achieve performance. As always, I look forward to discussions related to the topic of the book.

Enjoy the reading,
Silviu Dumitrescu



management

Why waste our breath on AGILE?

This article presents some of the benefits of using the Agile methodology in everyday life in the IT domain. The point of comparison is the methodology that was most widely used around the turn of the millennium, the Waterfall methodology, in which many of us have worked and some still do.

Bogdan Nicule
BNicule@neverfailgroup.com
Bogdan Nicule is an IT manager with international projects experience

Theory (as good as ever)


When comparing two entities we usually use common evaluation criteria. In this case we shall see how the two working methods can be characterized according to the following: their way of adapting to the continuous changes of the initial requirements of a project, the capacity to meet deadlines, the quality of the delivered product and the predictability of the evolution of the project while it is unfolding, both synchronically and diachronically. Thus:

1. Adaptability – the main feature of the Agile methodology is that the team using it can easily deal with the constant changes along the project. Including the client's representative in the team allows easier adjustment to new requirements and rapid changes of priorities. In the case of the Waterfall methodology, changes along the project are more difficult to include.

2. Deadlines – for a team that uses the Agile methodology, delivery on time is not a concern. The product should be stable and ready to be delivered at the end of each completed sprint. In the case of using the Waterfall methodology, there is a risk of postponing delivery if major bugs are discovered during the last part of the testing period.

3. The quality of the delivered product – in the case of using the Agile methodology, the quality of the product is increased, since it has passed through several complete testing phases before the delivery. A team that uses the Waterfall methodology undertakes the risk that major bugs can be detected in an advanced testing phase. The cost of repairs in this case can be pretty high.

4. Predictability – this is one of the most important advantages of using the Agile methodology. Predictability is increased synchronically, since at any moment of the project one knows what features existed at the end of the preceding sprint and, with increased precision, the ones that will be added during the current sprint. In the case of the Waterfall methodology, predictability is increased diachronically. It increases as the project approaches the end of the testing period and it depends on the entire sequence of events within the project up to that point.

The table below shows the information presented in this first part.

Practice

In the next part I will present three working contexts in which I used the Agile methodology.

The discipline of North America
a. Location: Toronto;
b. Team size: 150;
c. Team members' experience: highly experienced, existing team;
d. Type of the project: project in progress;
e. Quality of the delivery: excellent.

Due to the efficient implementation of Agile methodology at company level and the high degree of collaboration between the members of the team, corroborated with a perfect discipline and the provision of a QA activity of the highest class, this can be the perfect example for the conference whose main theme is Agile, "Even mammoths can be agile". All specific aspects of the methodology (initial negotiation, daily meetings, pair programming) were functioning with a Swiss precision. Therefore, the Agile model can be used at corporation level and by large teams.

The Extraordinary Success of the Beginners
a. Location: Cluj;
b. Team size: 10;
c. Team members' experience: the majority were junior, new team;
d. Type of the project: existing project, previously developed in Waterfall manner;
e. Quality of the delivery: excellent.

In this case, I could say that the value of the delivered product was mostly due to the discipline that the team had proven, the enthusiasm characteristic of the age (an age average under 30) and the willingness of the members to study and put into practice new things. The confirmation of the value of the team and of the delivered product came in a moment of decision regarding the continuation of the collaboration. The client's representatives patently decided to continue the collaboration with this team, who then received more votes of confidence from outside the company than from within.

The Variable named Context
a. Location: Cluj and Germany;
b. Team size: 9;
c. Team members' experience: all levels, new team;
d. Type of the project: new project;
e. Quality of the delivery: good.

This is an example in which the transition from the previously used methodology towards Agile was done with difficulty. Because of this, the efficiency of the delivery process was harmed and the value of the final product was medium. Some of the team members found it difficult to adapt to the new manner of work and discipline was a weak point, not easily to manage.

The table below illustrates the information included in the second part of the article.

Conclusion

To the question whether Agile methodology is good or not, the answer would be the following: Agile methodology is very good, but it is allergic to context. It depends

a lot on the open-mindedness of the team using Agile, on its discipline and real willingness to learn and apply new things. Were we to think about the classical question whether or not Agile methodology is better than Waterfall, we could say that it brings some important advantages, and the results offered by each of the two depend a lot on the discipline of the team using them. Therefore, for a continuously changing world, where one wishes to have a view of things as clear as possible, we can say that Agile methodology is more suitable, thanks to its clarity and the degree of safety it provides.

            Adaptability        Deadlines                          Quality                              Predictability
Agile       Easily adaptable    Delivery on time                   Increased stability                  Increased synchronically
Waterfall   Hardly adaptable    Risk of postponing the delivery    Risk – major bugs discovered late    Increased diachronically

Table 1

Team size   Type of team   Type of project   Result      Comments
150         Old team       Old project       Excellent   Discipline; mature, experienced team.
10          New team       Old project       Excellent   Discipline; enthusiasm; willingness to listen, to understand and to learn new things.
9           New team       New project       Good        Unfavorable context to the implementation of Agile methodology.

Table 2




HR

Clean code = money in the pocket

I am writing this article because I have seen the "bad code = lost money" equation way too many times. The article is intended for both technical and non-technical audiences in the IT industry. I am going to give a short and to-the-point explanation of how writing clean code positively impacts the financial aspects of a product.

Dan Nicolici
dnicolici@neverfailgroup.com
Senior Java Developer @ Neverfail Group

What is code?


"In computer science, source code is any collection of computer instructions (possibly with comments) written using some human-readable computer language, usually as text. The source code of a program is specially designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code." - Wikipedia

Here are some of the possible ways of writing source code: shell scripting, SQL, C#, Java, C, Python, etc.

In the above paragraph, I highlighted the word "facilitate", because it is a key concept in producing clean code.

So what is "clean code"? Clean code is code that: reads well, is easy to change, makes the design decisions easily visible, is well protected by a good test harness, is very pleasant to work with, and the list can go on. Basically, clean code is very maintainable code.

The programmer writing clean code "facilitates" the work of other programmers that will change it.

How is code in general?

In the field, the situation is very different from what was described above. We see code that is a mess, code that makes it really difficult to add even the smallest feature to a system without having to spend countless hours of reading and debugging through unknown territory. There are countless systems out there running on millions of lines of code that nobody wants to touch, because they are either too scared or it's literally impossible to generate any business value from doing so. Some of the facts that make you realize you have to deal with bad code are: no clear separation of concerns (huge classes, methods or functions), poor modularity (everything is tightly coupled), lots of globals, bad/unreliable tests.

All these bad things generate an enormous technical debt.

"Technical debt (also known as design debt or code debt) is a neologistic metaphor referring to the eventual consequences of poor or evolving software architecture and software development within a codebase. The debt can be thought of as work that needs to be done before a particular job can be considered complete." - Wikipedia

In other words, leaving the codebase in a poor state (amongst other non-code related tasks) generates technical debt. Usually, the interest rate goes up with time, as the codebase "rots" (to quote Robert C. Martin). This ever growing technical debt results in a decrease in product quality and an increase in feature cost. One simple way of measuring a product's quality is the "defect removal efficiency" (DRE) formula:



DRE = Quantity of Bugs during Software Testing / (Quantity of Bugs during Software Testing + Quantity of Bugs found by User)

Example:
DRE = 1 / (1 + 9) = 0.1 (10%) => bad DRE
DRE = 9 / (9 + 1) = 0.9 (90%) => good DRE
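Worked in code, the same formula looks as follows; this is a minimal sketch, with the class and method names made up and the bug counts taken from the example above.

public class DefectRemovalEfficiency {

    // DRE = bugs found during testing / (bugs found during testing + bugs found by users)
    static double dre(int bugsFoundInTesting, int bugsFoundByUsers) {
        return (double) bugsFoundInTesting / (bugsFoundInTesting + bugsFoundByUsers);
    }

    public static void main(String[] args) {
        System.out.printf("DRE = %.0f%% => bad DRE%n", dre(1, 9) * 100);
        System.out.printf("DRE = %.0f%% => good DRE%n", dre(9, 1) * 100);
    }
}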

Capers Jones, an American specialist in software engineering methodologies, often associated with the function point model of cost estimation, says about quantifying the concepts related to technical debt: “There’s an older metric called ‘cost of quality’ that has quantified data that’s been around for quite a few years. One of the things I’ve been measuring at IBM, ITT and dozens of other companies is what it really costs to achieve various levels of quality. Let me just give you some industry numbers as an example. The average cost of building a piece of software in the United States is about $1000 a function point, and during development it’s about half of that – $500 per function point is the cost of finding and fixing bugs. And then once the software is released, the first year it goes out, companies spend about $200 per function point in finding and fixing bugs and scripts delivered and then they spend about $250 per function point in adding features and enhancements. After five years, if you’ve done it right, you’re down to about $50 per function point in bug

repairs but what you spend on enhancement stays pretty constant. So if you've done it right, you have a first year quantity of defect repair cost declines quickly over time. On the other hand, if you botched it up, if you didn't develop the software well and were careless, you're going to spend $1200 per function point to build it and $600 per function point fixing bugs, $300 per function point after release fixing bugs. And instead of that number going down after five years, it's going to either stay constant or go up. So after five years you can have a really bad project that you're spending $350 per function point finding and fixing bugs at a time when it should have been down to $50 per function point. Actually, that kind of data – cost of quality – is relatively widely known. [...] As applications get big, the percentage of projects that are cancelled and never released goes up – above 10,000 function points, it's about 35% of projects that are started and never released. The most common reason they don't get released is because they had so many bugs they never worked. There is a huge cost of quality connected with not doing things right and ending up with cancelled projects." - Interview excerpts from January 22, 2013

According to Jones, data shows that you're very likely to pay up to 7 times more, maintaining a badly written product. That is direct finance, right there! This should be an eye opener for both people investing in and people writing (coding) a product. This, of course, means that writing good software is cheaper, and the main building block of software is code.

Clean code = money in the pocket!

Sources
1. Capers Jones interview: http://www.ontechnicaldebt.com/blog/ward-cunningham-capers-jones-a-discussion-on-technical-debt/
2. DRE formula: http://qatestlab.com/knowledge-center/QA-Testing-Materials/what-is-defect-removal-efficiency-in-software-testing/
3. Wikipedia



management

Gogu on the road with milestones - I got that one too. It’s a simple GPS, it’s cheap and does his job, I really recommend it. Chief appeared surprisingly behind Gogu. Auch!... he caught me wandering on Google. He thought about answering with an excuse, but he soon realized how ridiculous it would have been. Plus he was too interested in the technical details of the GPS. Chief quickly gave him all the information. - But where do you want to go? - I promised my parents that I would go with the whole family to see them. We see each other rarely, 450 km are not easy to travel with a child. He changed his voice and mimic his child: „Daddy, are we there yet? When will we arrive? How far until Granny? Daaaddy, I’m bored...” And there’s also my mother - that is Granny - who wants to know exactly what time we’ll arrive. If we are late, she panics, she imagines all sorts of disasters and she gets sick; if we get there too soon, it’s also a problem, we probably went too fast. No way is better. I dread this road, Chief... - Wait a minute Gogu, the situation is not that tragic. It can be solved easily, you just need some milestones. Gogu felt a hole in his stomach. Sure it was not tragic, but not even in a thousand years he would have expected Chief to laugh at him when he was really upset. And now he honestly confided... He tried to say something, but a lump in his throat stopped him and could only look down, disarmed, sad and incredibly disappointed. I did not expect it... he thought. Suddenly all turned to apocalyptic dimensions, his problem wasn’t anymore a family one, but a largescale disaster; the lack of Chief ’s understanding became a clear sign of an intricate conspiracy against him... - Gogu, hello, are you with me? What happened? Chief ’s huge smile shocked Gogu. The conspiracy theory dissipated immediately leaving room for a single major misunderstanding: - I don’t understand, Chief, what’s with the milestones. It’s not like you want me to stick stakes all the way from Cluj to Bucharest... - Ha ha ha... Come on, Gogu, it’s about the milestones from the project management theory: an element that marks an important moment in a project... which gives an indication of your position in time and enables you to report how the project falls within the estimated durations. That is exactly what you need for your journey. Gogu’s face didn’t show any sign of intelligence –it’s actually quite dull, Chief thought amused - so he hastened to add: you set some reference points, important places on your way. Those are your milestones. You calculate how long it will take to get from one to another, so that you know – based on the time you leave what time you’ll get to each of them. Should there be any delays, you’ll know immediately how they will impact the time of arrival. You make a drawing for your child and you show him where


you are each time you reach a milestone. You give him the task to inform Granny and this way you'll have two stakeholders abreast with the evolution of your journey. What do you think?

Slowly, Gogu's face brightened and Chief felt relieved:

- The previous expression of your face was not a good looking one, Gogu. It seems that now things are a little clearer, huh?

Gogu ignored him totally and began to draw the route. It seemed that he liked the idea and, although the drawing was not exactly according to reality, it showed somewhat the route between Cluj and Bucharest. He estimated the duration of the journey based on the speed and the road... He then began imagining himself giving clever explanations to his son and he enjoyed the idea of the little one communicating "the position" to Granny. Ha! It might not be as bad as he had imagined...

- Chief, you are great! How can I thank you?

- I thought you'd never ask! At the beginning of next month we have to do a workshop on project management topics; will you hold a presentation on milestones?! Now that you're good at it... In fact, that's why I came to you in the first place, although I had no idea about what topic to address; so thanks for the idea...

Simona Bonghez, Ph.D.

simona.bonghez@confucius.ro
Speaker, trainer and consultant in project management, Owner of Confucius Consulting


LUXEMBOURG

Biz Stone Co-founder, Twitter

Trip HAWKINS Founder of Electronic Arts, CEO, Digital Chocolate

Laura Yecies CEO, SugarSync

Peter Sondergaard Senior Vice President, Research Gartner

JUNE

Ruppert Keeley CEO EMEA, Paypal

Koichiro Tsujino Founder of Alex Corporation, developed VAIO at Sony, former President of Google Japan

BRIAN STEVENS CTO, Redhat

Pepe MODER

Global Director fot the Digital Marketing & Communication, Pirelli

MORE SPEAKERS ON

www.ictspring.com

ReGISTER NOW www.ictspring.com


