No. 3 / 2012 • www.todaysoftmag.com
TSM
TODAY SOFTWARE MAGAZINE
Interview with Mihai Tătăran, Co-founder ITCamp • Native Code vs. Portable Code • Function Point Analysis • Quality Management • Big Data - Data Representation • Microsoft Kinect - Programming Guide • Trends in HR
Special Edition
CIVITAS – Archimedes Project Iași • Guice • Startup - Hack a Server • Think iCloud
6 Interview with Mihai Tătăran, Co-founder ITCamp (Marius Mornea)
12 Native Code vs. Portable Code (Ion Ionuț)
16 Function Point Analysis (Ionel Mihali)
21 Quality Management (Eugen Otavă)
24 Big Data (Cătălin Roman)
27 Microsoft Kinect (Simplex Team)
29 Trends in HR (Andreea Pârvu)
31 CIVITAS Archimedes Project - Iaşi (Sebastian Botiș)
33 Guice (Mădălin Ilie)
36 Modern Concurrency (Béla Tibor Bartha)
38 Startup - Hack a Server (Marius Corîci)
41 Think iCloud (Zoltán Pap-Dávid)
43 Gogu (Simona Bonghez)
editorial
Editorial Which is your favourite programming magazine? The same question stands on the roll-up banner at ITCamp, the biggest conference on Microsoft technologies in Transylvania. It's a sort of TIFF for programmers: over the two days, every participant can choose to attend one of the three sessions running in parallel. We are glad to attend this year's conference together with 250 other participants and we promise to get back to you with impressions. Besides, Issue #3 is an ITCamp Special Edition and was offered to all the participants. Many thanks to the organizers for their support and to the three sponsors of the magazine: ISDC, Small Footprint and Three Pillar Global.
Ovidiu Măţan, PMP
ovidiu.matan@todaysoftmag.com Coordinates the Gemini Solutions Cluj team; in the past worked as a Product Manager for Ovi Sync at Nokia; founder of Today Software Magazine.
This issue marks the beginning of a new, more technical stage, which intends to give more details on the subjects of the previous articles. We refer to the articles on programming the Kinect device and on Big Data. We are also going to discuss a fashionable subject, iCloud: saving and synchronizing data in Apple's ecosystem. In the mobile devices section, there is an interesting article on Native Code vs. Portable Code, with a strong emphasis on the options for developing applications according to the clients' needs: complexity, budget or development time. How do we estimate the necessary resources for a project? This is a question the article Function Point Analysis tries to answer. Quality Management is an important subject for the Product Testing part and it has been covered in this issue. Trends in HR is an article worth reading by both employees and employers in order to achieve a balanced growth policy. Coming back to local initiatives, we present a project 100% developed by a company from Cluj: the implementation of an innovative system of bus ticket vending machines. The system is operational in Iasi and you can find here details about the project implementation and the architecture used. Another Romanian project is the Hack a Server startup, which has already begun to gather system administrators and hackers for constructive purposes, for better data protection. Why is it good to get certified? is a question we are going to answer in two articles: The Advantages of an MBA and the problems of Gogu, the character of our magazine. The magazine has started to get the support of the local communities and companies. So, if you like reading our magazine, we are looking forward to your impressions at contact@todaysoftmag.com.
Ovidiu Măţan founder of Today Software Magazine
Editorial Staff
Founder / Editor in chief: Ovidiu Mățan / ovidiu.matan@todaysoftmag.com
Editor (startups and interviews): Marius Mornea / marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău / dan.hadarau@todaysoftmag.com
Marketing collaborator: Ioana Fane / ioana.fane@todaysoftmag.com
Reviewer: Romulus Pașca / romulus.pasca@todaysoftmag.com
Reviewer: Tavi Bolog / tavi.bolog@todaysoftmag.com
Made by Today Software Solutions
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.ro
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag
ISSN 2285 – 3502
ISSN-L 2284 – 8207
interview
Interview with Mihai Tătăran Co-founder of ITCamp
Today's interview differs from the previous ones in its main subject: the ITCamp conference, represented by Mihai Tătăran, who co-founded it along with Tudor Damian.
Mihai Tătăran
mihai.tataran@avaelgo.ro Microsoft MVP Co-founder ITCamp Avaelgo
an interview by:
Marius Mornea
marius.mornea@todaysoftmag.com Software Engineer and Founder of Mintaka Research
Let's start by introducing our guest: the ITCamp 2012 conference "is a premium conference for specialists in Microsoft technologies". In the hosts' vision, the conference gets its premium status from the speakers, "the best speakers we can bring [..] extremely well-known people, who at top world-class conferences like TechEd or PDC constantly rank among the Top 10 rated speakers, based on audience feedback grades, and we have four of them this year". The Microsoft technologies focus comes from this desire to invite only the best, Microsoft being the place where the two founders share the most connections and experience, allowing them optimum access: "If we were to spread out on several areas [technologies] it would prove more difficult to achieve this".
This year ITCamp reached its second edition; the debut experience can be described briefly as: "a pilot conference, which strives to redefine the Romanian conference landscape for Microsoft technologies, and become a major player in Central and East Europe. The objectives were met: over 200 attendees, over 20 speakers, with an audience distribution of over 50% seniors". The senior-grade attendees are one of the most important metrics for the overall conference quality, in the hosts' perspective.

Unlike the previous year, the current edition has known great growth, both in speaker number and diversity, totaling about 34, the lineup varying from well-known international names to local, first-time speakers. These changes are mainly due to the feedback from last year's participants: "Yes! We want ITCamp next year and Yes! We want it bigger", but also due to the mid-term strategy: "To become The Conference on Microsoft technologies in Central and East Europe. In order to achieve this we need to overcome certain quantity and quality metrics, because popular speakers don't attend relatively small conferences; there is a psychological border at around 500 attendees that we must exceed. And to attract this many people, obviously you need higher diversity content. The attendees need to be able to choose something meaningful for them in each time slot."

As content has been mentioned, let's stop and take a look at the main themes and schedule. The hosts have chosen three main tracks this year, adding one to last year's two-track agenda, a change motivated by: "only because we've got a lot more content". More precisely, a track is a form of organizing content, not necessarily a carefully selected theme, stating that: "in this room we will mainly talk about this", the attendees being free to choose any track at any given moment.

Unlike last year, when the track themes were selected from the audience's point of view, developers versus IT Pros, this year the content has been split based on the actual presentation theme. The change has been brought about by the big gap between developer and IT Pro numbers, as the developers dominated the audience (95%). "The first track is about Cloud: Public Cloud with Windows Azure, Office 365; Private Cloud with collocation, hosting, servers, security. The second track is about programming, effectively writing applications." Both tracks have in common mostly international speakers with a rich conference background, unlike the last edition, where: "we've tried to bring in people from the sponsors, people with deep knowledge in one specific thing, giving them the opportunity to talk about a subject on which they have a lot of hands-on, up-to-date experience."

This novelty has an ulterior hidden motive: "we must increase the local speakers' palette and somebody has to take the first step". The speakers will be: "people that are not really used to public speaking, but are very good programmers or architects in their company". Seeing that currently there is a very small group of experienced speakers, at least on Microsoft technologies, which despite their limited number are very united and with a high concentration of MVPs (Most Valuable Professionals) and Regional Directors, this openness to sharing their experience and growing their numbers is more than welcome. Even more so, seeing that their experience spans beyond the purely technical and also comprises many attendances at high-profile international conferences, both as speakers and attendees, but more importantly as founders and organizers of communities and conferences of their own.

Since we've started covering the speakers subject, let's look at this year's lineup of ITCamp guests. First of all, the number has grown from 20 to 34 and secondly, the diversity increased. If last year we had 2 Regional Directors and 10 MVPs, this year we have more of each, but also new people like a Community PM, Azure Insiders and local experts. Before switching to individual descriptions, it would be best to define some of these titles, starting with the largest crowd (approx. 2,500 worldwide), the specialists in one specific technology: MVPs (Most Valuable Professionals). They are followed by Regional Directors, more well-known and with a wider experience in several technologies, actually completely covering the Microsoft Application Platform, thus their number is smaller (approx. 150 worldwide); then a newer title, Azure Insider, which denotes a higher degree of specialization than MVP and is specific to Azure; and finally Community PM, a Microsoft employee who stands as an interface between the company and the communities, also coordinating programs like MVP, RD etc. More details on each position, what it means, how to become one and what the benefits are, will be given by Alessandro Teglia (Community PM) during the conference.

Passing to the individual speakers and acknowledging that the limited time will not let us talk about each and every one, we will pick a few of the most well-known ones, keeping in line with the hierarchy chosen by the hosts on the event website. The order is not random: "clearly we, as hosts, are placed last and, equally clearly, we've tried to place first the best known, not necessarily the best, but the most visible ones".

Let's start with Tim Huckaby, "a heavyweight from every point of view, [..] speaker in keynotes with Bill Gates and Steve Ballmer, not just once, [..] extremely well-known in the consultancy field, and if you watch Grey's Anatomy you've surely seen a Microsoft Surface screen application developed by Tim". He'll talk about Natural User Interface, a project developed in close partnership with Microsoft Research, and his presentation, during the keynote talk, will cover the same world-premiere demo he gave two months ago in Israel. Tim accepted the invitation to join ITCamp over a glass of wine and was sold once Mihai promised a trip to Dracula's castle.

Lino Tadros is "a very technical person, a bit more than all the rest [..] smart in the purest programmer sense [..] he has been, for years, Anders Hejlsberg's right hand (Turbo Pascal inventor, principal Borland Delphi and C# architect)". He'll cover the more practical aspects of Windows 8 Metro and Windows Phone development.

Even though he has no dedicated time slot in the agenda, Alessandro Teglia will host an open discussion about communities and the role played by events like ITCamp in their growth and development.

Finally, we were curious to discover who will host and what will be the topic of the Open Panel, both because last year it proved to be one of the most interactive and successful sessions, but also because it conflicts with Rialdi's WinRT presentation. The topic for the panel is yet to be decided, because the hosts would like to take the pulse and get some attendee feedback before settling on a specific theme. However, we were assured that in the event of any conflict in the agenda, all sessions will be recorded and made available on the event website: at first only to registered attendees, but after six months the recordings will be made publicly available.

Keeping to the themes and content, we've explored the covered technologies and possible recommendations from the hosts. Before getting into the specifics, we wanted to dwell upon the marketing strategy through which ITCamp insists on being known as a Microsoft technologies conference, considering that the agenda contains a much broader array of technologies and topics, some of them very generic. "It's true that there are less technical, project management, process-related sessions, but we consider that such knowledge is part of what an IT specialist needs to know". However, as a general rule, generic topics are to be avoided due to: "the risk they raise of being too generic, without diving deep enough into a specific problem". Still, seeing that the Cluj IT public has important components specialized in other technologies (Java, Ruby, PHP), such a focus on Microsoft can scare away a significant number of attendees. "We're clearly losing, big time, but it's what we want: focus. My wish is to have a conference at which I'm also learning. Practically we don't see it as losing, more like selecting our public", in order to keep the level of the presentations as high as possible.

Getting back to more practical topics: "the stars from a buzzword perspective are Windows 8, Windows Phone and HTML5. But for the public, comprised of at least half seniors (50% last year, reaching 60% this one), the buzzwords are not that important; it matters more what they are working on right now and the upcoming projects [..] We might find ourselves with a larger attendance at an ASP.NET heavy-load application development session, even if it's an old technology." Even so: "the highest attendance will be at the well-known speakers that we've already recommended."

Asked whether the audience can influence the conference agenda, for instance through an open discussion about topics that are not in the agenda but of great interest, Mihai wanted to specify that the speakers are the ones that bring in the content. This is meant to ensure the high level of technical detail for each session. "These are speakers that we want to have as a mandatory requirement, and it's a pretty long list, of which at least half we want at any cost. [..] We know beforehand the topics they will bring, because that's what they are working with now or lately [..] They choose to work on technologies based on what's more appealing, newer, and in the end this is what sets the whole industry trend." Even so, the generous 30-minute breaks and the Open Panel are adequate moments to talk about any topics, and the attendees are encouraged to interact and discuss with the speakers.
One last content-related curiosity was whether ITCamp has any chance of new technology launches or world-premiere announcements. It was a pleasant surprise to find out that, even though the major events and announcements are under the management of the Microsoft marketing department, some of the speakers are working on technologies about to be launched and have presentations already prepared. This means that, in the event of an official Microsoft announcement, they will be relieved of their NDA (Non-Disclosure Agreement) and will be able to hold short presentations on last-minute technologies.

Since we are getting closer to the end, we've decided to take a look at the organizational aspects, being interested in the inner workings of such an event and its recipe for success. First off, what counts is the experience gathered through attendance at high-profile international conferences, a very strong point of the Microsoft ecosystem, TechEd being a constant presence in Mihai's agenda starting with 2003 (initially with the support of the Microsoft Academic program, while in college, after that as a paying attendee, and since 2011 "I'm part of the conference staff [..] There are special Technical Learning Centers stands where I take questions on Azure"). He has also been an attendee and speaker at DevReach and CodeCamp Macedonia, and a simple attendee at the MVP Summit. Tudor Damian (ITCamp's second co-founder) has a very similar agenda. Besides their role as sources of organizational know-how, these conferences are very effective communication channels that allow the two to get in touch and invite high-profile speakers to ITCamp.

Secondly, the local experience, which allows the synchronization of the accumulated organizational know-how with the specific cultural and industry practices of the East European market. To support this, a major part is played by the experience gained organizing local events, CodeCamp (Mihai) and ITSpark (Tudor), but also by what one can learn from similar conferences in the region, the most important benchmark being: "DevReach, a premium conference (excellent quality of presentations, diversity, but a relatively high price for the regular East European attendee). At DevReach, and slowly also at ITCamp, one can benefit from sessions almost as good as the TechEd average, but at one tenth of the price." Another important aspect is that "both DevReach and ITCamp are community conferences, meaning that, among other things, there is no profit to be made. This gives us great flexibility in choosing what suits the audience best, without any pressure of profit." At the present moment the Bulgarian international conference "has the advantage of several more years and more attendees (600). The second advantage is that it has a private company behind it, Telerik, with clear business objectives and a dedicated staff (4 full-time employees)."

An important part of any such event is the budget, so we would like to know what it amounts to and what the main expenses are: "The budget is tens of thousands of Euro, we're talking about 30-40k, depending on the attendance number. A great deal of it goes on transportation, [..] another consistent part goes on food and coffee breaks, more precisely what we pay for the hotel/restaurant, and also for the speakers' accommodation." There are some additional activities, but they "are small costs, relative to the whole budget", more precisely "an event dedicated to the speakers, organizers and sponsors: a VIP dinner and a visit to Dracula's castle in Bran." Seeing that Mihai hadn't mentioned anything about speaker session costs, we wanted to know how much they cost: "Being a community event, all the speakers attend it free of charge - we provide transportation and accommodation and try to make them feel welcome."

The next logical question is where the money comes from to cover the above expenses and how it is managed. More precisely, we are interested in the balance between sponsorship funds and attendance fees. "We've defined the budget based on clear fixed costs (speaker transport and accommodation) and the other costs relative to the number of attendees, and this gave us the cost per person, a specific amount. We've considered it too high and decided we had to go lower than that, so we reached out for sponsors." Sponsor contribution can be split: "there are a few nuances: pure sponsorships account for about 33-40%, however the sponsors also send attendees, which leads to an actual sponsor amount of 85%." The remaining 15% is covered by attendance fees, which have a dual role of commitment and entry barrier, "while also covering a consistent part of individual costs, a bit over half of it." In other words, the entry fee (375 RON) is an amount resulting from actual expenses, without relating it to the market value of the benefits and the quality of the presentations.
Such an estimate would prove difficult in the local environment: "We've looked around, but there is no clear benchmark, seeing that DevReach is clearly more expensive, and locally everything is either free, our own events, or close to free, organized by Microsoft from time to time." Looking at international prices, ITCamp is a whole lot cheaper than TechEd (around 2,000+ Euro), which has a lot more content, 4 days of 12-14 parallel slots; a more appropriate comparison would be a level-two conference (500-1,000
attendees), which costs around 500-1,000 Euro and is comparable in terms of content and speaker caliber. In both cases we are not taking into account the travel costs.

Stretching it thin on the "money" subject and hoping our readers will bear with it just a little more, we wanted to dwell upon the attendance fee and its significance. We wanted to weigh the benefits one can get at ITCamp, expressed in the hosts' response to a frequent piece of community feedback: "why should I pay for ITCamp, when I can go online and watch most of the TechEd sessions for free?" "First off, <<watching the sessions online>> is just a nice story, it never happens. When you reserve time for a conference, that time slot is dedicated and you are there with your mind and body. Secondly, when you're in a session with physical interaction and you get the chance to talk with others that share your problems, you memorize better. The third benefit is networking, which is the real deal."

Following, we've discussed the declared objective of raising the standard in conference, content and speaker quality: "There is a mentality problem, which I want to change: people need to get used to paying for such events. Either the individual, or the company that employs him, it doesn't matter, but we must get used to the fact that nothing is for free in this world, and if you want to evolve as a specialist and not just go with the flow, you need to learn. And one way would be to attend conferences, and it's perfectly normal to have to pay for it. Of course there are free initiatives, even our own ITSpark and CodeCamp, and these are more than welcome and we still believe in them. It's just that there is no comparison between a Saturday CodeCamp and ITCamp. You can never have true quality for free."

One solution we came up with, inspired by the DevReach/Telerik partnership, would be reducing the fees by attracting a similar partnership with a private company. "We've taken into account a partnership, but we decided to do it this way, because we don't want to depend on anyone. In the business environment people and objectives change, and this is a risk." The hosts want to keep their financial independence, to allow them to choose their topics and speakers without compromise: "We don't want to turn it into a commercial thing."

Seeing that we've burned out the financial aspects, we turned to one last organizational question: content management and dosage. More precisely, we wanted to know how the attendance varies during the day. "Clearly in the evening we don't have as many people as in the morning. The same thing happens to me when I attend TechEd. I sit through four sessions a day, and in the meantime I chat with one person or another. This is why we've created the necessary time and space for people to be able to network. A pleasant surprise, last year, was to find a fair amount of attendees in the evening."

We've considered it necessary to stop and take a look at the public as well, and to begin with, we wanted to know why ITCamp is held in Cluj: "It would have been a lot easier for me to have it in Timisoara, but there is no market, there are not enough people, there's no way to get as many attendants as in Cluj". The next question would be what motivates the average attendee: "I think it's a little bit of everything; clearly he enjoys being part of a community, there's also the chance for networking and, not least, he might have a specific technical problem that needs solving. We mustn't ignore this aspect. There's also a tendency of comparing ourselves with others, and if you don't attend such events you can't position yourself among others; that's why I consider this event extremely useful for the cohesion of the workforce, friendships being born, chances for collaboration."

Beyond motivation, another specific Romanian problem arises: the lack of interactivity during events; attendees "sit and listen". "This happens in all of Eastern Europe [..] all the speakers know it and they try to joke and be as relaxed as possible in order to create a state of normality." Motivation and passive attitudes are both the kind of subjects that give birth to unending debates, so once on that road we've also covered topics like: the influence of an outsourcing-dominated IT environment on the technical interests of the attendees; cultural aspects related to education as a continuous process; the lack of motivation for personal development due to a job inflation in local IT; the industrial-era inheritance, when each had his own job and specific norm; the specific inertia of attending things just because your employer wants you to, and many others. In the end, all that matters for Mihai is: "someone wanted that person to attend, either the individual, or the company; for me the important thing is that in the end he attends. I would love people to become aware on their own, but it's hard, so we take it one step at a time."

A nice surprise was last year's request from the audience: "to grow in quantity, a very clearly mentioned feedback, and to raise the technical level, the second important feedback", a clear proof that the audience's "technical level is good" and the interest in practical things is high. As a follow-up, the organizers have raised the stakes and removed generic sessions, which gather little interest.

Before we close, we've taken a look into the future: "First we want to position ITCamp as a benchmark conference in the region. As for the rest of the little events, we want to extend their diversity in both speakers and attendance. It requires a lot of effort, but by bringing in new people, it's happening."

In the end we turn back to the recipe of success for an IT conference: "Obviously the food and coffee are very important, but clearly content comes first. If you run out of coffee, you can still save the day, but when the content is beneath expectations… it's bad, especially since people paid for it. [..] A lot of people asked us, either straightforwardly or not, how much money we make off this conference. Initially I was bothered by it, but in the meantime I came to realize it's a pertinent question. Even our sponsors often said that if there's something left for us it's OK, since we put so much effort into it. No, it's not OK, we don't want to make money out of this. It happened at a conference last fall that an amount of money (500 dollars) was spared and we kept it for ITCamp. It's very important that, even if it's a paid conference, the hosts don't make a profit; on the contrary, we invest time, and last year we even spent money. This year we hope not to, because it would be a sign of poor planning. I insist a lot on this: this conference is a community thing, we organize it because we can, we like to do it and because we ourselves learn a lot from it. Money is not the purpose, but it is part of life and you've got to pay sometimes."
programming
A DAY IN THE LIFE OF A SOFTWARE DEVELOPER
PHOTO COMPETITION FOR PEOPLE WITH IMAGINATION AND AN OBSESSIVE INTEREST IN PHOTOGRAPHY | MAY 21 – JUNE 17 Geek, nerd, techie, computer expert, IT specialist, software developer... Different names for smart people with an extreme passion for technology. What is a day in their life like? Capture it in a photo and the most inspired photographer wins:
A BRAND NEW
iPad3!
With compliments from ISDC techies!
POPULARITY AWARD: SURPRISE WORTH
300 EURO!
Send your photo(s) (according to contest rules) to ingoodcompany@isdc.eu
Competition is fully covered on Facebook/ISDCTeam
Partner & Judge: Photo Romania
Winner selection will take place between 18 – 22 June
ISDC IS A EUROPEAN IT SERVICES COMPANY WITH PASSION FOR CUSTOMERS, SOLUTIONS, AND TECHNOLOGY.
Local communities
The community section commits to keeping track of the relevant groups and communities in the local IT industry and to offering an upcoming events calendar. We start with a short presentation of the main local initiatives, and we intend to grow this list until it contains all relevant communities, both from the local landscape and the national or international ones with a solid presence in Cluj. The order is given by a function of the number of members and the number of activities relative to the lifespan; thus we are striving to achieve a hierarchy that reveals the involvement of both organizers and members.
Transylvania Java User Group
Java technologies community.
Website: http://www.transylvania-jug.org/
Started on: 15.05.2008 / Members: 472 / Events: 37

Romanian Testing Community
Community dedicated to QA.
Website: http://www.romaniatesting.ro
Started on: 10.05.2011 / Members: 453 / Events: 1

Cluj.rb
Community dedicated to Ruby technologies.
Website: http://www.meetup.com/cluj-rb/
Started on: 25.08.2010 / Members: 103 / Events: 24

The Cluj Napoca Agile Software Meetup Group
Community dedicated to Agile development.
Website: http://www.agileworks.ro
Started on: 04.10.2010 / Members: 187 / Events: 11

Cluj Semantic WEB Meetup
Community dedicated to semantic technologies.
Website: http://www.meetup.com/Cluj-Semantic-WEB/
Started on: 08.05.2010 / Members: 112 / Events: 16

Romanian Association for Better Software
Community dedicated to IT professionals with extensive experience in any technology.
Website: http://www.rabs.ro
Started on: 10.02.2011 / Members: 126 / Events: 3

Google Technology User Group Cluj-Napoca
Community dedicated to Google technologies.
Website: http://cluj-napoca.gtug.ro/
Started on: 10.12.2011 / Members: 25 / Events: 7

Cluj Mobile Developers
Community dedicated to mobile technologies.
Website: http://www.meetup.com/Cluj-Mobile-Developers/
Started on: 08.05.2011 / Members: 39 / Events: 2

Other: Cluj Perl Mongers (www.cluj.pm), GeekMeet (http://geekmeet.ro/), ITSpark (http://itspark.ro/default.aspx), CodeCamp (http://www.codecamp.ro/), CodExpert (http://www.codexpert.ro/), PHPRomania (http://www.phpromania.net/), ARIES (http://www.aries.ro/)

Calendar

May 24 – 27: AQTR2012 (2012 IEEE International Conference on Automation, Quality and Testing, Robotics)
Contact: http://www.aqtr.ro/

May 28 – 29: ITCamp 2012
Contact: http://itcamp.ro/

May 31: Workshop: Crosscutting Architectural Concerns
Contact: http://www.rabs.ro/events?ee=1

June 2: OSOM Event v3.0 – Back to the roots!
Contact: http://osom.ro

June 15: Semantic technologies applied to real world applications
Contact: http://www.meetup.com/Cluj-Semantic-WEB/

June 27: Eclipse DemoCamp
Contact: http://www.transylvania-jug.org/
programming
Native Code vs. Portable Code in mobile application development
Ion Ionuț
ionut.ion@threepillarglobal.com Enthusiastic software developer at Three Pillar Global, mainly interested in Web and Mobile development.
In the beginnings of the Internet, during the Web 1.0 age, the general trend was to build portals where users could find a wealth of useful information about a certain subject. Everybody was impressed that an enormous amount of information could be found in one place. The hunger for information was finally satisfied. Then the Web 2.0 generation arrived, which was more adapted to the demands of the new wave of users. These users already had the default right to have information one click away, but, more importantly, they needed to socialize, so everyone was connected in the virtual space. Starting with 2010 a new generation of users has arrived. Access to information and being connected with friends on the Internet were already very common things. This is the Mobile generation, for whom a computer connected to the Internet is not enough. They need to be connected everywhere, anytime. Mobility is the key, and mobility no longer means just having a laptop. More and more hardware devices become smart, which means that users no longer depend on a specific device. In this context, the company that I work for adopted a strategy named Mobile First, which means that the design of a new project begins with the mobile component in mind, the project architecture being influenced by this component.
Evaluation criteria

Given the dynamics of this segment and the multitude of devices and operating systems, we cannot find a universal winning solution for developing mobile applications. The correct decision regarding the choice of an optimal solution has to take into consideration a number of factors, such as:
• Pros and cons existing at that moment;
• Main players on the market;
• Clients' needs;
• Developers' needs and level of expertise.
At the end of the evaluation we can decide on the best possible solution. We discuss the most important approaches that currently exist, with their advantages and disadvantages, in detail in the following sections. We will also review the most important players on the market at the time of this writing.
Looking through the eyes of the clients, a proposed solution is judged by the following criteria:
• Reduced costs;
• Flexibility;
• Easy maintenance;
• Rapid development.
Customers are always interested in obtaining great value for their money. They also want to preserve a very good quality and to have the product in production as soon as possible, all of this keeping in mind that future improvements in operating systems or browsers, or the emergence of more efficient devices with new hardware capabilities, should affect the product as little as possible: this means easy maintenance. Moreover, flexibility is required for further improvement and expansion of the product.
On the other hand, the developers are interested in:
• Full control of the code;
• Elegant development;
• Professional growth.
Those responsible for developing the applications generally have a bohemian attitude. One of their main interests is to be in full control of the source code, and they are reluctant to adopt "mouse developing" tools which automatically generate code and are not flexible enough to permit easy tweaks on the generated code. Developers like to learn and experiment with new things, staying on the edge of technology. If the project they are working on doesn't offer this, they will tend to leave it sooner or later. Even if all of this seems less important to our discussion, it can determine the choice of a development path in the long term.
Approaches in developing mobile applications

1. Native code

The first and natural alternative is developing the product in native code: the source code is compiled and executed directly by the CPU of a particular device, in most cases developed and designed by the producer especially for that specific hardware combination. Figure 1 presents the steps from specifications to developing the applications for multiple devices.

Figure 1. Developing and installing applications which use native code

Platforms, languages and frameworks:
• Apple iOS - Objective-C, Cocoa Touch
• Android - Java, Android SDK/NDK
• Windows Phone - C#, .NET Framework
• BlackBerry - J2ME / BB Java NDK
The frameworks listed above are the most important, but they are not the only ones. Moreover, the market is quite dynamic in this area (for example, Mozilla recently announced its entry on the market with a new concept of mobile operating system, Boot to Gecko, developed with the HTML5 and CSS standards on top of a Linux kernel).
There are a number of arguments in favor of using native code for developing applications, such as:
• High level of performance;
• Framework built by the author of the platform;
• Versioning, simulators, unit testing and automated testing are all integrated;
• Direct debugging;
• Full access to the native API;
• Fastest reaction in case of OS/API bugs;
• Very large developer communities.
Of course, there are a number of disadvantages too:
• Increased effort to implement the same functionality on different platforms;
• The need for specialized developers for each platform;
• Higher costs;
• Poor maintenance of different versions;
• Extra effort in coordinating the development of new features;
• Every platform requires developing from scratch.

2. Portable interpreted code

We call this approach hybrid code because, in general, this solution has two parts:
• one is a web application (HTML5, CSS, JavaScript etc.) able to run in any browser;
• the second is a native application, which is practically a container with a modified browser that runs the first part (the application); this browser is modified to expose some hardware resources of the device which the browser normally cannot access.

Tools and languages:
• PhoneGap (HTML / JS - available on iOS, Android, WP7, BB, Bada, Symbian, WebOS)
• Adobe AIR (HTML / JS / ActionScript - iOS, Android, BB)
• WebWorks (HTML / JS - BlackBerry)
Pros:
• A single code base for all platforms;
• Allows the team to focus on the application logic;
• There are plenty of JavaScript frameworks on the market;
• Requires only web development knowledge.
Cons:
• Poor performance;
• Limited access to the platform's functionalities (access to only a subset of the API);
• Complex and difficult troubleshooting;
• New device features will be implemented by the container with some delay;
• Some predefined UI controls are missing and are hard to simulate on each platform;
• There is a dependency on a third party, which implies a slow reaction in resolving bugs.
3. Cross-compiled portable code

The third approach uses a common portable code base, plus a compiler which is able to interpret that code and compile it, generating native code for each platform.
Figure 2. Developing and installing applications based on hybrid code

Tools and programming languages:
• Corona (Lua → iOS, Android, Windows Phone)
• Marmalade (C++ → iOS, Android, Bada, Symbian)
• MonoTouch (C#, .NET → iOS, Android)
• MoSync (JavaScript, HTML5 → C++ → iOS, Android)
• Verivo (formerly Pyxis Mobile) (visual development tool on a middleware machine → iOS, Android, BlackBerry)
• Titanium (JavaScript, HTML5 → iOS, Android, BB)
The pros and cons of this approach are in fact a combination of those of the two approaches presented before.
Pros:
• High performance;
• A single code base for all platforms;
• Same behavior and installation process as native applications.
Cons:
• Difficult to detect the origin of a bug (own code or generated code);
• Developers have no access to the generated native code;
• Depends totally on a third party;
• Always a bit behind the newest technologies;
• Any bug in the cross-platform SDK will have an impact on the development and on all or some of the builds.

Figure 3. Developing & installing applications with cross-compiled portable code

Conclusions

There is no generally winning solution, for sure. However, when the moment of decision comes, the nature and specifics of each application will dictate the direction of development. Moreover, beyond the pros and cons presented so far, the criteria below highlight additional factors that may influence the decision.

When to choose the native code approach?
• The application is very complex and intensively accesses the operating system's resources (file access, image processing, etc.);
• Performance is critical;
• The customer wants the application to have the look & feel of a platform-specific one.

When to choose interpreted portable code?
• The application is simple enough;
• There is no need to access many device features;
• The functional specifications are dynamic (multiple features expected in the future);
• Price is a concern.

When to choose cross-compiled portable code?
• Meeting the deadline is critical;
• Release versions need to be in sync;
• There is a lack of developers specialized in the specific programming languages.
Tools

• Adobe AIR (www.adobe.com/products/air.html)
• Corona (www.anscamobile.com)
• Marmalade (www.madewithmarmalade.com/)
• MonoTouch (xamarin.com/monotouch)
• MoSync (http://www.mosync.com/)
• PhoneGap (phonegap.com)
• Rhomobile (rhomobile.com)
• Titanium (www.appcelerator.com/)
• Verivo (www.verivo.com)
• WebWorks (bdsc.webapps.blackberry.com/html5)
Bibliography

• ***, "Mobile Application Development Strategies", Uberity White Paper, 2012 (http://www.uberity.com/whitepapers/)
• David DeWolf, "Mobile First Strategy" (http://www.threepillarglobal.com/blog/post/what-mobile-first-strategy), May 2011
• Michael Enger, "The Great Cross-Platform UI Challenge: Titanium vs. PhoneGap vs. Native" (http://blog.thelonelycoder.com/2012/02/03/the-great-cross-platform-ui-challenge-titanium-vs-phonegap-vs-native/)
• Martin Fowler, "Cross Platform Mobile", 2011 (martinfowler.com/bliki/CrossPlatformMobile.html)
• Julio Franco, "Mozilla's Boot 2 Gecko mobile OS hands-on", Techspot, March 2012 (http://www.techspot.com/news/47747-mozillas-boot-2-gecko-mobile-os-hands-on.html)
• Perry Hoe, "Cross Compilation vs. Mobile Native Development Debate", 2011 (blogs.perficient.com/spark/2011/04/20/cross-compilation-vs-mobile-native-development-debate/)
• Ionuț Ion, "Aplicații mobile native versus aplicații compilate" ("Native mobile applications versus compiled applications"), TiMoDev edition 14, March 2012 (http://ctrl-d.ro/development/resurse-development/aplicatii-mobile-native-versus-aplicatii-compilate-ionut-ion-timo-editia-14/)
• Eric Jackson, "Here's Why Google and Facebook Might Completely Disappear in the Next 5 Years", Forbes, April 2012 (http://www.forbes.com/sites/ericjackson/2012/04/30/heres-why-google-and-facebook-might-completely-disappear-in-the-next-5-years/)
management
Function Point Analysis
a metric for determining the size of information systems
One would probably wonder why a metric with such a complicated name is needed, since the experts directly involved in the production of information systems (developers, architects, requirements engineers etc.) already have their own methods (e.g. Expert Judgment) to estimate, with a margin of error, how long it would take to develop a product or a module / component of a software product. The purpose of the method is to analyze and improve productivity, to estimate projects and to control projects, and it can be applied at different moments of the development phases.
Ionel Mihali
Ionel.Mihali@isdc.eu
QA Officer, ISDC

Applicability
In the following paragraphs, I will compare the method with an area that is quite familiar to all of us: real estate. Imagine you're a real estate appraiser. A customer asks you to do an assessment of an existing apartment building. What are the steps you would follow? You would probably:
• Analyze the scope / assessment object;
• Define a methodology for assessing;
• Define a method of assessing;
• Execute the assessment:
■■ you take into consideration a number of general features of the building (location, structural elements, roof etc.);
■■ then you break it up into a series of smaller parts (apartments) and look at each of them: surface, finishings, thermal and electrical installations, etc.
• Finally, depending on the features of each apartment, you assign it a score (a number of points). The sum of the scores is the total score for the whole building.
• Depending on the context of the housing market, this score will be multiplied by a market value per point, which
leads to the total value of the property. You might wonder what this has to do with productivity and estimations. The appraiser's client, the builder, now knows how many points have been assigned to the building, and he knows how long it took and how much money he needed to build it, so through a simple calculation he can tell how much a point costs in time and money. If you apply the same method to evaluate a project that you plan to develop, you know how many points you have, so you will know how much it costs to carry it out.
The purpose of the FPA method

Applying Function Point Analysis (FPA) results in the assignment of a number of function points (FP) either to software already developed, based on the product itself and/or its associated documentation, or to a product yet to be implemented, based on the functional and non-functional requirements specified by the customer. The first case is aimed at determining productivity, while the second is aimed at estimating the product. The power of this method is that it focuses on measuring the business value that the functionality of an information
system offers to the end user. So, it does not pursue the technical method chosen to implement the functionality.

History

The method has a long history. It was first defined in 1979 in "A New Way of Looking at Tools" by Allan Albrecht at IBM (see: http://wikipedia.org). As a result, a number of organizations, some national, others international, most of them non-profit, deal with optimizing the method and adapting it to the requirements of current information systems. In 2012, there are five ISO standards for this method (see: http://en.wikipedia.org/wiki/Function_point, http://www.iso.org/iso/iso_catalogue/catalogue_ics/catalogue_ics_browse.htm?ICS1=35&ICS2=80&published=on&withdrawn=on).
Conceptually speaking, there are no differences between the standards; there are still some differences in the technicalities of the method. Also, a function point resulting from applying one standard cannot be compared with one resulting from a different standard unless a conversion factor is applied, a conversion which I personally do not recommend except for scientific purposes. I will explain below why.

Figure 1: Evolution of the FPA methods over time

NESMA Method

I've chosen to present the "NESMA FPA Method: ISO/IEC 24570:2005" in more detail, for reasons of familiarity and experience with this standard. The method has five types of functions, each representing a part of the functionality or of the data model. According to the FPA rules, these functions represent a breakdown of the application's functionality, and they are the lowest level of granularity to which the method goes.
The main steps in determining the number of function points are:
• Collecting the documentation
• Determining the type of function point count
• Determining the boundary
• Identifying data functions and determining their complexity
• Identifying user transactions and determining their complexity
• Calculating the number of function points

Collecting required documentation

To determine the number of function points of an information system, we need information which indicates:
• Data structure. The structure is characterized by:
■■ a model describing the data structure (you can use the data model, domain model etc.) – it is used to determine the data functions;
■■ elements contained in a data model (the ability to store data, the tables themselves, the data elements contained in the tables) – they are used to determine the complexity of the data functions.
• The way the application behaves (Behavior). Behavior is characterized by:
■■ a model that describes the behavior of the application (e.g. functional design) – used to determine the user functions;
■■ the flow of data elements and the processing logic – used to determine the complexity of the user functions.
Examples: functional design documents, technical design, architecture, data model, the application itself.

Figure 2: Types of functions in the FPA method

Determining the type of function point count

Depending on the moment in the software development lifecycle and the level of detail of the input information, you can choose one of the following types of function point count:
• Indicative function point count
• Estimated function point count
• Detailed function point count
The indicative and estimated methods from the standard are based on the assumption that there is only some basic information about the system, from which a range of functions is extracted and over which a series of statistical calculations are applied (calculations which can be found in specialized literature and databases, free or paid). These types are used when we want to measure a project / software product in the early stage of development, when documentation is minimal. NESMA has a special whitepaper on this topic; you can find it in the references chapter. Note that the detailed function point count can also be applied early in the development lifecycle if the information is there.
Like any method, it has a margin of error, especially if you use a less detailed type like the two above (the margin of
error may vary up to 50% for the indicative method). If you are a beginner in FP counting, I personally do not recommend applying these two types. To fully understand this method, you must apply the detailed function point count; for these reasons, I chose to go into more detail on this practice.
Determining the boundary
The purpose of this step is identifying the interactions of the counted system with other systems or users. It can be done by analyzing various technical specifications, through interviews with users etc. It is an important step because:
• it defines the scope of the measurement;
• it has an important role in determining the future types of transactions.
Imagine that we have to measure a classic system, which has a database and a user interface. The measured system interacts with another system, in the sense that our system uses data stored and maintained by the other system. It is important to determine the boundary because, in this case for example, the retrieved data will be classified as a type of function called external interface file (EIF). For a better overview, see Figure 3.

Identifying data functions and determining their complexity

There are two types of data functions / data logical files:
• Internal Logical Files (ILF)
• External Interface Files (EIF)
Please see below the NESMA definitions ("Definitions and counting guidelines for the application of Function
Point Analysis. NESMA Functional Size Measurement method, compliant to ISO/IEC 24570, version 2.1 (English)"):
"A logical group of data, seen from the perspective of the user, is a group of data that an experienced user considers as a significant and useful unit or object. An internal logical file is a logical group of permanent data, seen from the perspective of the end user, that meets each of the following criteria: it is used by the application to be counted; it is maintained by the application to be counted. An external interface file is a logical group of permanent data, seen from the perspective of the end user, that meets each of the following criteria: it is used by the application to be counted; it is not maintained by the application to be counted; it is maintained by a different application; it is directly available to the application to be counted."

Example: imagine a system whose goal is to maintain invoices, individuals and legal entities. In addition, the system uses information from an external system that helps to determine the creditworthiness or risk level of a customer. Most likely the application will use a series of tables that model the invoicing module (InvoiceType, Invoices, InvoiceItems, PersonType, Persons, Addresses etc.). Because the determination of logical data files must be made from the end user's perspective, in this case we will have two ILFs, which I will call ILF_Persons and ILF_Invoices. The risk information "taken" from the external system will be an EIF. The way a series of tables is grouped into one or more ILFs or EIFs is based on the logical dependence between the tables. Example: table "Invoices" most likely depends on table "InvoiceType", so these two tables will be in the same ILF. For more information about the dependencies between tables, see the NESMA standard. So, in the example above, we identified three data logical files: ILF_Invoices (contains the invoice tables), ILF_Persons (contains the person tables) and EIF_RiskInfo (contains the data from the external system).
Determining the complexity of data logical files

Here we have to define three more concepts, namely Data Element Type (DET), Record Element Type (RET) and File Type Referenced (FTR):
"Data Element Type, or DET - a data element type is a unique, user-recognizable, non-repeated field. This definition applies to both data functions and transactional functions.
Record Element Type, or RET - a record element type is a user-recognizable subgroup of data elements within an Internal Logical File or External Interface File.
File Type Referenced, or FTR - an FTR is a file type referenced by a transaction. An FTR must also be an internal logical file or an external interface file; file types referenced are the logical groups of data that a transactional function reads or maintains."
Source: http://www.devdaily.com/FunctionPoints/node11.shtml, http://sourceforge.net/p/functionpoints/wiki/FPA/
Note: there are some exceptions in the method which are not part of the scope of this article. Examples: foreign keys are not counted as DETs, because this information is already stored in the primary key, and the same information should not be counted twice. Also, there are tables meant only to relate other tables, which are not counted, because the method's basic rule is to count only what has value for the end user (in FPA these are called "key-key entities").
In our example:
• ILF_Invoices: RET: 3 (InvoiceType, Invoices, InvoiceItems); DET: suppose that the sum of the columns in these tables, each counted once and taking into account the method's exceptions, is 30.
• ILF_Persons: RET: 3 (PersonType, Persons, Addresses); DET: suppose that the sum of the columns in these tables, each counted once and taking into account the method's exceptions, is 60.
• EIF_RiskInfo: the method for determining the RETs and DETs is identical (in practice, the description of the interfaces can be used, for example). Suppose it has RET: 1 and DET: 10.
The complexity of the data logical files is given in Figure 4: Low (L), Average (A), High (H). In our example, the complexity will be:
ILF_Invoices: A
ILF_Persons: A
EIF_RiskInfo: L
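As a sketch of how such a lookup can be automated, the complexity classification for data functions can be expressed as a small table-driven function. The RET/DET thresholds below are the ones commonly published for the NESMA/IFPUG data-function matrix, reproduced here from memory; verify them against the standard you actually use before relying on them.

```python
def data_function_complexity(ret: int, det: int) -> str:
    """Classify an ILF/EIF as Low/Average/High from its RET and DET counts.

    Thresholds follow the commonly published NESMA/IFPUG data-function
    matrix (an assumption here; check the standard you actually use).
    """
    ret_band = 0 if ret == 1 else 1 if ret <= 5 else 2
    det_band = 0 if det <= 19 else 1 if det <= 50 else 2
    # Rows: RET band; columns: DET band.
    matrix = [
        ["L", "L", "A"],
        ["L", "A", "H"],
        ["A", "H", "H"],
    ]
    return matrix[ret_band][det_band]

# The article's ILF_Invoices (RET 3, DET 30) and EIF_RiskInfo (RET 1, DET 10):
print(data_function_complexity(3, 30))  # A
print(data_function_complexity(1, 10))  # L
```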
Figure 3. FPA functions scheme (Source: http://portal-management.blogspot.com)
Identifying user transactions and determining their complexity

In theory there are three types of user transactions: External Inputs (EI), External Outputs (EO) and External Inquiries (EQ). Below I give the definitions and one example for each.
"External Input (EI): an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen, electronically, or from another application. The data can be either control information or business information. If the data is business information, it is used to maintain one or more internal logical files. If the data is control information, it does not have to update an internal logical file.
External Output (EO): an elementary process in which derived data passes across the boundary from inside to outside. The data creates reports or output files sent to other applications. These reports and files are created from one or more internal logical files and external interface files. Also, the volume of the returned data is variable.
External Inquiry (EQ): an elementary process with both input and output components that results in data retrieval from one or more internal logical files and external interface files. This information is sent outside the application boundary. The input process does not update any internal logical files and the output side does not contain derived data. In addition, a unique selection key must have been entered and the output must be fixed in scope."
http://www.softwaremetrics.com/files/Improved%20Function%20Point%20Definitions.pdf
In our example, the functionality of saving a person is an EI; an EO is the functionality which returns the list of people we have in the system. An EQ would be the search functionality based on the person's unique identifier: the user enters a unique identifier of the person (the input), and the system returns a single person (the output), which has a fixed size because it will always return a name, a phone number, an address etc.

Figure 4. Complexity of data logical files.
The complexity of user transactions

To determine complexity, we take into account two elements: DET and FTR. To better illustrate them, I will return to our example, describing the functionality it implements, which the functions are, and how to determine the type and complexity of the user transactions. Imagine that our system implements the following functionality (described in
Use Cases):
UC1. Returns a list of invoices (assuming the list has 20 attributes).
UC2. Gives the possibility to add a new invoice into the system (15 attributes); in the add screen, after selecting the person, the system informs us about the degree of risk (one attribute) related to that person. The information is saved in the counted system's database, in one of the ILF_Invoices tables. Suppose that our invoice will reference the person to whom it is linked.
UC3. Gives the possibility to search an invoice by a unique identifier (one input attribute and 16 output attributes).
UC4. Returns a list of people (30 attributes).
UC5. Gives the possibility to add a new person into the system (35 attributes).
UC6. Gives the possibility to search by Person_id (provides 35 attributes as output).
In the table below, I determine the user transactions and their characteristics. So, we have the following complexities for the user functions:
EO_InvoiceList - A
EI_AddInvoice - A
EQ_RiskInfo - L
EQ_SearchInvoiceById - L
EO_Persons - A
EI_AddPerson - A
EQ_SearchPersonById - A
The complexity of user transactions is given by the tables in Figure 5 and Figure 6: Low (L), Average (A), High (H).

Figure 5. EI complexity.
Figure 6. EO complexity.

Calculating the number of function points

Based on the transaction type and its complexity, NESMA proposes a number of function points for each transaction (see Figure 7 and Figure 8). So, our functions will be assigned function points as follows:
Data logical files:
ILF_Invoices - A: 10 FP
ILF_Persons - A: 10 FP
EIF_RiskInfo - L: 5 FP

Figure 7. FP number - data logical files.

User functions:
EO_InvoiceList - A: 5 FP
EI_AddInvoice - A: 4 FP
EQ_RiskInfo - L: 3 FP
EQ_SearchInvoiceById - L: 3 FP
EO_Persons - A: 5 FP
EI_AddPerson - A: 4 FP
EQ_SearchPersonById - A: 4 FP

Figure 8. FP number - user transactions.
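The complexities listed above can be reproduced with the commonly published EI/EO/EQ matrices, given plausible FTR counts for each transaction. Note that the FTR values used below are my assumptions for illustration (e.g. that the invoice list reads only ILF_Invoices), since the article's table of transaction characteristics did not survive; the thresholds themselves should be verified against the standard.

```python
def transaction_complexity(ttype: str, ftr: int, det: int) -> str:
    """Classify an EI/EO/EQ as Low/Average/High from its FTR and DET counts.

    Thresholds are the commonly published IFPUG/NESMA transaction matrices,
    reproduced from memory; verify against the standard you use.
    """
    if ttype == "EI":
        f = 0 if ftr <= 1 else 1 if ftr == 2 else 2
        d = 0 if det <= 4 else 1 if det <= 15 else 2
    else:  # EO and EQ share one matrix shape
        f = 0 if ftr <= 1 else 1 if ftr <= 3 else 2
        d = 0 if det <= 5 else 1 if det <= 19 else 2
    # Rows: FTR band; columns: DET band.
    return [["L", "L", "A"], ["L", "A", "H"], ["A", "H", "H"]][f][d]

# Assumed FTR counts; DETs taken from the use cases above:
print(transaction_complexity("EO", 1, 20))  # A  (EO_InvoiceList)
print(transaction_complexity("EQ", 1, 1))   # L  (EQ_RiskInfo)
print(transaction_complexity("EQ", 1, 17))  # L  (EQ_SearchInvoiceById)
```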
Their sum is the number of function points that we have in the system. Based on the above information, our
project will have a total of 53 function points. Note: The above example is for informational purposes only. Because this method of counting has a margin of error, like any method, the literature does not recommend applying it to systems that may have fewer than 100 function points.
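The whole count above can be reproduced with a short script. The weight table below (FP per function type and complexity) is the standard NESMA/IFPUG one, which matches the values used in the article's Figures 7 and 8.

```python
# FP weights per function type and complexity (Low/Average/High),
# as in the standard NESMA/IFPUG tables used by the article.
WEIGHTS = {
    "ILF": {"L": 7, "A": 10, "H": 15},
    "EIF": {"L": 5, "A": 7, "H": 10},
    "EI":  {"L": 3, "A": 4, "H": 6},
    "EO":  {"L": 4, "A": 5, "H": 7},
    "EQ":  {"L": 3, "A": 4, "H": 6},
}

# The functions identified in the example, with their complexities.
functions = [
    ("ILF_Invoices", "ILF", "A"),
    ("ILF_Persons", "ILF", "A"),
    ("EIF_RiskInfo", "EIF", "L"),
    ("EO_InvoiceList", "EO", "A"),
    ("EI_AddInvoice", "EI", "A"),
    ("EQ_RiskInfo", "EQ", "L"),
    ("EQ_SearchInvoiceById", "EQ", "L"),
    ("EO_Persons", "EO", "A"),
    ("EI_AddPerson", "EI", "A"),
    ("EQ_SearchPersonById", "EQ", "A"),
]

total_ufp = sum(WEIGHTS[ftype][cpx] for _, ftype, cpx in functions)
print(total_ufp)  # 53, the article's total
```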
Non-functional characteristics
As you may have noticed, the examples above have not touched the non-functional characteristics, which clearly have an important word to say in determining an estimate or a team's productivity. In specialized literature there are the concepts of unadjusted function points and adjusted function points. In the example above, the number we assigned to our application is the number of unadjusted function points. In FPA theory, there are 14 general system characteristics used to determine the number of FP that should be added on top of the unadjusted function points to capture the non-functional features, resulting in a total number of adjusted function points (Source: http://www.nesma.nl/section/fpa/howfpa.htm). I chose not to talk about them in detail in this article; in the following paragraphs, I will explain why.
It has been shown in specialized literature that if we have a very light system in terms of non-functional characteristics, then the number of adjusted function points (AFP) of the system will be, at minimum, equal to the number of unadjusted function points (UFP) minus 35% of the unadjusted function points (AFP = UFP - 35% UFP), with the assumption that all 14 system characteristics are at their minimum. Similarly, if the system is very heavy, then the number of adjusted function points will be, at maximum, equal to the number of unadjusted function points plus 35% of the unadjusted function points (AFP = UFP + 35% UFP), with the assumption that all 14 system characteristics are at their maximum.
Think practically. We have two systems. Both have a number of unadjusted function points equal to 1000. The first system has an Access database, it does not require concurrent access to data, it doesn't
have to be portable, and it will be used by a single user; so, a very simple system. According to the specialized literature, it will have a number of adjusted function points equal to 650 FP. The second system is very complex in terms of non-functionals: it must implement concurrent access to data, it must have good performance with a million simultaneous users, it must be portable, etc. So, a complex system, whose characteristics I dare say are at the upper limit. According to the specialized literature, this system would have a number of adjusted function points equal to 1350 FP. Therefore, do you think it would be realistic to say that the effort to implement the second system is just double the effort to implement the first? I think not. The concept of adjusted / unadjusted function points was removed from the ISO standard in the early 2000s, perhaps for this reason. Still, you might say, what do we do with the non-functionals? We have to take them into consideration. Correct. This is also where the next step comes into action: what do we do with the number of function points? How do we use them in the organization? I'll call this "benchmarking" and explain it in the next chapter.
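For reference, the adjustment described above is the classic IFPUG Value Adjustment Factor: each of the 14 general system characteristics is rated from 0 to 5, their sum (the Total Degree of Influence) is plugged into VAF = 0.65 + 0.01 × TDI, and AFP = UFP × VAF. A minimal sketch of the ±35% limits discussed above:

```python
def adjusted_fp(ufp: float, gsc_ratings: list[int]) -> float:
    """Classic IFPUG adjustment: 14 general system characteristics,
    each rated 0..5, give a VAF between 0.65 and 1.35."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)  # Total Degree of Influence, 0..70
    # VAF = 0.65 + 0.01 * TDI, computed as (65 + tdi)/100 to stay exact here.
    return ufp * (65 + tdi) / 100

# The two extreme systems from the article, both counted at 1000 UFP:
print(adjusted_fp(1000, [0] * 14))  # 650.0  (all characteristics at minimum)
print(adjusted_fp(1000, [5] * 14))  # 1350.0 (all characteristics at maximum)
```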
Benchmarking
Any organization wishing to use this method should apply it in different stages of the software development lifecycle: at least once at the beginning of a project (ending with an estimate) and once at the end (with the purpose of determining productivity at team / organization level). Productivity is calculated as the number of hours required to develop one function point. By applying the method at the end of development, the organization can build a database with the average productivity per type of system, development team, non-functional system features and technology used. With this database available, and using the method early in the development cycle, the organization will be able to estimate the number of hours of implementation for future projects. Therefore, transforming the number of function points into a number of hours
has to be seen as part of a continuous improvement of the productivity and estimation processes, in which historical data plays a crucial role. This is also why, as mentioned above, I do not recommend applying multiple standards simultaneously: to create a database of consistent historical data, you have to relate everything to the same method / standard.
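The benchmarking loop described above (measure finished projects, store hours per function point, reuse that productivity figure to turn a new count into an estimate) can be sketched as follows. The project names and numbers are invented for illustration.

```python
# Hypothetical historical records: (project, function points, actual hours).
history = [
    ("billing-v1", 420, 3360),
    ("crm-portal", 610, 5490),
    ("dwh-reports", 250, 2250),
]

# Productivity benchmark: hours needed to deliver one function point.
total_fp = sum(fp for _, fp, _ in history)
total_hours = sum(h for _, _, h in history)
hours_per_fp = total_hours / total_fp

# Estimating a new project counted at 53 FP (the article's example count).
estimate_hours = 53 * hours_per_fp
print(round(hours_per_fp, 2), round(estimate_hours))  # 8.67 460
```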
Sub-methods
To keep up to date with current system requirements, there are a number of "sub-methods" whose purpose is to treat certain special cases related to systems development methodology, technology, types of functional specifications etc. Some examples are:
• "N13 FPA for Software Enhancement (v2.2.1)" - approaches the measurement of enhancement function points (EFP) for projects with an agile development methodology. It sets the number of EFP for activities of deleting, modifying and adding functionality in the counted system.
• "N24 FPA Applied to UML and Use Cases (v1.0.1)" - approaches the measurement of function points (FP) using as input specifications described in UML (Unified Modeling Language) style and Use Cases.
• "N25 FPA for Datawarehousing (v1.1.0)" - approaches the measurement of function points for data warehouse systems.
• "N20 FPA in Early Phases (v2.0)" - approaches the measurement of function points for systems which are in early stages of development.
Conclusions
In conclusion, I believe this method is well suited, on the one hand, to determining productivity and potential fluctuations in productivity and, on the other, to making productivity-based estimates. Personally, I have applied the method successfully in a large number of projects.
QA
TODAY SOFTWARE MAGAZINE
Quality Management Recommended Practices for Quality Software Delivery
The current dynamics of the software development industry are a real challenge for quality assurance: the industry is oriented towards rapid and frequent delivery of applications (whose degree of complexity has increased), towards a large number of end users with growing expectations, and towards many different operating environments. Speaking of growing customer expectations, it is no secret that customers want increased quality in the products offered, rapid
Eugen Otavă
eotava@smallfootprint.com Release Manager Small Footprint
changes to them, personalized services and easy maintenance, integrated systems and, of course, delivery at the lowest possible prices, preferably immediately. Every time the quality of software has been brought into discussion, I have pointed out that software testing is not the only measure we should consider in order to deliver a quality software product, as many might think at first glance. Moreover, the terms QA and Testing do not overlap in meaning. Thus, the first recommended idea/practice is the migration from a mindset focused on quality control/testing to one focused on quality assurance. Testing is not quality assurance, but rather a method of quality control: quality assurance (QA) is process oriented, while quality control (QC) is product oriented. Testing helps us gain confidence that what we deliver is as it should be, that everything works according to specifications and to the proposed quality standards. But beyond that, and apart from technical issues related to the technology chosen for implementation, simple and efficient processes, good planning, effective and timely communication,
client’s involvement, clear requirements and the appropriate preparation/training of team members are also factors to be considered for the project’s success and for the delivered quality. QA engineers must assume the role of quality assurers in all phases of product development. They must observe the practices that lead to the introduction of defects, analyze the requirements and processes in time, make suggestions for product improvement, and notice mistakes or errors as quickly as possible, because a documentation defect discovered in time costs significantly less than the late re-implementation required if it were found only in the testing phase. Regarding processes, the trend is a migration towards Agile iterative and incremental development methodologies, as opposed to the older sequential ones, such as Waterfall. Standish Group International Inc. (www.standishgroup.com) is a well-known leader in IT industry analysis, collecting information from the real world and presenting it in a systematic and statistical manner. In the „Chaos Research” reports,
issued last year, it showed a significant increase in the project success rate, i.e. 37%, compared to the results of previous years’ reports. For example, in 2009 the percentage was 32%. The most dramatic report was in 2004, which showed that only 28% of all projects in the study were successfully completed. Basically, as Jim Johnson, chairman of the Standish Group, stated, the report for 2011 shows the highest rate of success in the history of the Chaos Research Reports. The Agile project success rate is three times higher than that of non-Agile projects, as shown in „2011 - Chaos Manifesto – The Standish Group International”. The Standish Group considers a project successfully completed if it was delivered on time, within the proposed budget and, of course, in full. For their statistics, based on projects developed from 2002 up to 2012, please see the picture above.
Considering the above statistics, we can say that the tendency to become Agile Friendly is a positive one. Still, we would be surprised to find out how many large companies continue to work by the old sequential methodologies, or claim they work with Agile while maintaining principles that are foreign to it. We are not going to describe the Agile methodologies or their benefits, because many are well known and because Scrum was already explained in detail in the previous issue. I would, however, like to highlight the challenge that adopting Agile methodologies poses to QA engineers trained in the spirit of the traditional ISTQB (International Software Testing Qualifications Board) courses, which support the benefits of independent testing teams, schedule test activities in sequential phases, use the V-model for managing the test cycle, and promote test scripts rather than checklists - things that do not quite apply in Agile.
The above-mentioned report of The Standish Group also analyzes the causes
of failed projects, and among the mentioned causes are: incomplete requirements, lack of client involvement, communication problems etc. Let us briefly review the issue of incomplete requirements. Without a careful analysis of requirements, we risk wasting time and money implementing incomplete or even erroneous solutions. There is a real discipline in software regarding requirements (Requirements Engineering) and hence training and certifications on this subject (e.g. http://www.certified-re.de/en/). Perhaps this topic will be developed in detail in a future article, as it is very important. Before presenting other best practices used to deliver quality software, I would like to emphasize that talking about quality is quite difficult as long as there is no common understanding of the concept. Quality can be measured by the level of customer satisfaction. Some might say that a quality product is one that meets the needs of the customer, and that testing ensures this. I agree, but let us not forget a nuance brought in since the early 80’s by the Kano model, a theory of product development versus customer satisfaction. Formulated by the Japanese professor Noriaki Kano (1984), this model shows that customer satisfaction increases significantly with the performance of the product’s attributes, and also that innovative features/components, which had not been identified by the requirements as explicit needs of the client, add value and contribute significantly to customer satisfaction once offered. The model provides an overview of the product attributes that are perceived as important for customer satisfaction and focuses on differentiating product characteristics. The first area, represented by the diagonal line in the graphic below, represents explicit requirements, written or verbal (linear attributes).
A second area represents innovation and refers to attributes that are not normally expected, and therefore are not missed when absent. However, these attributes delight when they occur. A third area, the most significant one, represents the undeclared needs, shown by the curve at the bottom. The customer may not be aware of these needs or may assume they will be covered automatically („must be” attributes). The Kano model can be used to prioritize requirements based on customer satisfaction, and to this purpose four categories have been identified: „surprising and delightful” (can differentiate a product from the competition), „the more the better” (increases user satisfaction), „must be” (does not increase user satisfaction, but causes dissatisfaction if lacking) and „better missing” (features that produce user dissatisfaction). This classification can also be used for distributing characteristics across releases; for example, the first release should contain especially the „must be” requirements. It can be concluded that innovation is a key step towards increasing customer satisfaction and quality. The secret lies in identifying those attributes that the customer has not named explicitly, but that make the difference: once offered, in addition to the ones that have been identified, they increase client satisfaction and help the product differentiate itself from others of the same kind.
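This four-category prioritization can be applied quite mechanically when planning a first release. The sketch below is a toy illustration only; the feature names and their classifications are invented:

```python
# Toy illustration of using Kano categories to order features for a
# first release: "must be" items come first, "better missing" is dropped.
# Feature names and classifications are invented for the example.

KANO_PRIORITY = {
    "must be": 0,                    # absence causes dissatisfaction
    "the more the better": 1,        # linear satisfaction attributes
    "surprising and delightful": 2,  # differentiators / innovation
}

features = [
    ("animated mascot", "better missing"),
    ("export to PDF", "the more the better"),
    ("user login", "must be"),
    ("one-click undo everywhere", "surprising and delightful"),
    ("data is never lost", "must be"),
]

def first_release_plan(features):
    # Drop "better missing" attributes, then sort by Kano priority;
    # Python's sort is stable, so ties keep their original order.
    kept = [(name, cat) for name, cat in features if cat in KANO_PRIORITY]
    kept.sort(key=lambda item: KANO_PRIORITY[item[1]])
    return [name for name, _ in kept]

print(first_release_plan(features))
# ['user login', 'data is never lost', 'export to PDF', 'one-click undo everywhere']
```

In practice the hard part is, of course, the classification itself, which comes from talking to users, not from code.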
Conclusions
The adoption of quality assurance principles throughout the lifetime of a product - not limited to testing, which is a measure of quality control - is the first and most important step. Furthermore, I have presented the current trend of using an iterative and incremental software development methodology of the Agile type, which, as shown in the reports published by The Standish Group Inc., has a success rate three times higher than the sequential Waterfall methodology. Moreover, we have attempted to highlight the role played by clear requirements in the final quality of a product, as well as the role of the performance of product characteristics and of their innovative character in the degree of client/user satisfaction, using the Kano model.
programming
Big Data Data Representation
The previous issue of the magazine talked about the Big Data trend in the software industry. The current article presents the fundamentals of the technology that enables the storage and querying of high volumes of data. To get a better picture of the issues around Big Data, let’s take a look at an example.
Cătălin Roman Software Architect at Nokia, Berlin catalin.roman@nokia.com
Works with SOA and likes e-commerce
Not long ago, a statistic showed up on the Internet: eBay collects about 20TB of user-generated data daily. Facebook collects 20TB of user-generated content and generates another 10TB from daily analytics; its Insights service is powered by processing about 15PB of data. Google processes 20PB of data daily. It is pretty clear that not only new hardware is needed, but also a new kind of software concept to deal with such a data volume. Traditional RDBMS, regardless of the replication topology, are becoming overwhelmed: scalability issues show up, write operations hang, JOINs become slower and slower and high synchronization delays occur. In addition, think about what would happen if a network adapter burned out or a hard disk failed. The higher the number of hardware nodes in a deployment, the higher the risk of hardware failure. Nor can we neglect the hardware costs, the electricity costs or the salaries of the DBA engineers who should be on call 24/7. In 2000, a professor named Eric Brewer, from the University of California, Berkeley,
introduced the C.A.P. theorem. While RDBMS rely on the ACID properties, this theorem talks about three other properties: Consistency, Availability and Partition tolerance. Consistency means that each client always has the same view of the data. Availability means that all clients can read and write at any moment. Partition tolerance means that the system keeps working even when nodes of the distributed system are down. The theorem states one important thing: it is impossible for a distributed system to guarantee all three properties at the same time. This means you can choose C-A, A-P or C-P. This theorem was the foundation of many technologies, some open source, some proprietary, that carry the name of NoSQL. NoSQL doesn’t reject SQL, but rather means „not only SQL”. These are systems with a distributed architecture: the data is partitioned and made redundant across several computers. Such a system can be scaled horizontally by adding more computers to the cluster. To achieve this horizontal scalability, strong tolerance for partitioning is required, which means giving up on consistency or on availability. This is achieved by relaxing the relational capabilities and weakening or even dropping the transactional capabilities. These systems are optimized for read and append operations. Compared to RDBMS, the NoSQL systems offer less functionality beyond storage, but excel at scalability and performance. Another important aspect is that the data does not follow a fixed schema; it is semi-structured. But before going into more detail, let’s take a look at how the NoSQL systems are classified. The above picture highlights the three properties of NoSQL systems; the RDBMS systems are there just for comparison. In RDBMS the data model is relational, while in the NoSQL world key-value, column-oriented or document-oriented data models are mentioned more often.
The key-value systems offer support for read, write and delete operations relying on a primary key. There is no range query or other type of „data search”. The column-oriented systems still use tables, but offer no joins. Here the data is stored per column, quite differently from the traditional systems where the data is stored per row. One of the main advantages is the access speed to the data. More details are available at: http://en.wikipedia.org/wiki/Column-oriented_DBMS. The document-oriented systems store the data using structured documents like JSON or XML, but still without JOINs. If JOINs are required, they will be handled at the application level.
Another type of classification is by the CAP properties. Consistent – Available (CA) systems have challenges with partitioning and try to solve them by relying on data replication. Such systems are:
• RDBMS (relational)
• Vertica (column-oriented)
• Aster Data (relational)
• Greenplum (relational)
Consistent – Partition Tolerant (CP) systems have issues with availability, but otherwise the data is consistent across all system nodes:
• BigTable (column-oriented/tabular)
• Hypertable (column-oriented/tabular)
• HBase (column-oriented/tabular)
• MongoDB (document-oriented)
• Terrastore (document-oriented)
• Redis (key-value)
• Scalaris (key-value)
• MemcacheDB (key-value)
• Berkeley DB (key-value)
Available – Partition Tolerant (AP) systems are always available and a failure of a node does not affect the system. However, write operations propagate more slowly across the nodes:
• Dynamo (key-value)
• Voldemort (key-value)
• Tokyo Cabinet (key-value)
• KAI (key-value)
• Cassandra (column-oriented/tabular)
• CouchDB (document-oriented)
• SimpleDB (document-oriented)
• Riak (document-oriented)
The slow propagation of write operations is also known as being „eventually consistent”. Under high write concurrency, we might not see the expected value when we read it back. So we need to be aware of the limitations of such systems and handle these special cases at the application level. Let’s pick one NoSQL system and exercise it. Consider a personal ads website, where users can register and post free ads. The visitors can filter the ads by category and location. For storing and querying the data a traditional system would do the job quite easily, but let’s say this particular website has 1.5 million new ads per day. Yes, it is craigslist.org. This being said, we need to look into the NoSQL space, so we’ll pick MongoDB.
With MongoDB the data is stored as documents in BSON format (binary JSON). The table below shows the mapping between MongoDB terminology and RDBMS terminology. In our exercise we have to deal with two types of entities, or in other words, documents: user and announcement. We can split them into collections, although it is not strictly necessary.

user:
{
  "_id": "alice",
  "email": "alice@hotmail.com",
  "createdOn": "2012-02-02 12:12:12",
  "country": "DEU",
  "city": "Berlin"
}

announcement:
{
  "_id": ObjectId("9fd9fe0cad2f-409b-a8af-fd39b2a414f2"),
  "author": "alice",
  "createdOn": "2012-02-02 13:12:12",
  "subject": "Bike for sale",
  "description": "Gorgeous bike for sale.",
  "price": "150",
  "contact": {
    "phone": "151-82310373",
    "email": "alice@hotmail.com"
  },
  "location": {
    "city": "Berlin",
    "postalCode": "13187",
    "country": "DEU"
  }
}
Using the above examples I have just defined a semi-structured schema; the main thing that can be observed is that the data is not normalized. In order to create a user document using MongoDB, we have to execute the following:

db.user.insert({_id: "alice", email: "alice@hotmail.com", createdOn: "2012-02-02 12:12:12", country: "DEU", city: "Berlin"})

To load the document using the ID:

db.user.findOne({_id: "alice"})

To extract all the ads created by alice, we’ll use:

db.announcement.find({author: "alice"})

or, if we want just the titles of the ads:

db.announcement.find({author: "alice"}, {subject: 1})

If this type of query is used often, then we’ll want it to be as fast as possible. To achieve this we’ll have to index the author field:

db.announcement.ensureIndex({author: 1})

It is also easy to load just the ads from Berlin:

db.announcement.find({"location.city": "Berlin"})

The result set will be a list of JSON objects. These are just a few examples of MongoDB usage. The queries have been executed in a shell console, but of course MongoDB has drivers for the most popular programming languages: Java, C++, PHP etc. If I have triggered your curiosity, www.mongodb.org is the main source for learning. Another central resource for NoSQL systems is http://nosql-database.org/. Before choosing a NoSQL system, we need to consider the whole context: the number of clients, the read-write ratio, which two CAP attributes are important to us, how complex the queries are and how often a new type of query shows up. Once we have an answer, we need to check it against the capabilities of these systems. But more about this subject in the following issues of the magazine...
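To make the filter semantics of the queries above concrete without a running server, here is a toy in-memory matcher in Python. It is purely illustrative: this is not MongoDB's API, and it ignores indexes, BSON and projections.

```python
# Toy in-memory version of the find({...}) filters used above.
# Illustrative only -- real MongoDB queries run server-side over BSON.

def get_path(doc, path):
    """Resolve a dotted path like 'location.city' inside a nested dict."""
    for key in path.split("."):
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

def find(collection, query):
    """Return documents whose fields match every (path, value) in query."""
    return [d for d in collection
            if all(get_path(d, p) == v for p, v in query.items())]

announcements = [
    {"_id": 1, "author": "alice", "subject": "Bike for sale",
     "location": {"city": "Berlin", "country": "DEU"}},
    {"_id": 2, "author": "bob", "subject": "Couch",
     "location": {"city": "Munich", "country": "DEU"}},
]

print([d["_id"] for d in find(announcements, {"author": "alice"})])         # [1]
print([d["_id"] for d in find(announcements, {"location.city": "Berlin"})]) # [1]
```

The dotted-path lookup mirrors how the shell query {"location.city": "Berlin"} reaches into the nested location sub-document.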
programming
Microsoft Kinect - Programming Guide Part I – A basic initialization In the previous issue we covered Kinect, a new technology from Microsoft that can monitor the whole body of its users in real time. After a brief introduction, we included a sequence of code aimed at initializing the device and suitable for a Hello World type application. Next, we will take a detailed look at the sequence of code and try to explain how it works. Simplex team
simplex@todaysoftmag.com
A basic Kinect initialization
kinectSensorChooser.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser_KinectSensorChanged);

The above statement creates an event handler for the KinectSensorChanged event, which is triggered when Kinect begins its initialization, is disconnected, or when its status changes in any other way. In fact, the Kinect sensor can be in one of the following ten predefined states:

1. Connected
2. DeviceNotGenuine
3. DeviceNotSupported
4. Disconnected
5. Error
6. Initializing
7. InsufficientBandwidth
8. NotPowered
9. NotReady
10. Undefined

Our aim now is for the sensor to be initialized and ready to transmit data to our application. Otherwise, KinectSensorChooser will display a message notifying the user of the sensor’s current state. Although not required, the notification is a useful feature that we would have had to implement manually, had we not used the KinectSensorChooser user control. KinectSensorChooser is a user control included in the official development resources installed along with the SDK and, although optional, its use greatly simplifies the development of certain applications.

KinectSensor old = (KinectSensor)e.OldValue;
StopKinect(old);
KinectSensor sensor = (KinectSensor)e.NewValue;

Once the sensor state has changed, e, the argument of the event handler, keeps both the old and the new sensor objects. Thus, we are able to stop the old sensor while continuing to work with the new one.

var parameters = new TransformSmoothParameters
{
    Smoothing = 0.3f,
    Correction = 0.0f,
    Prediction = 0.0f,
    JitterRadius = 1.0f,
    MaxDeviationRadius = 0.5f
};

Alongside the RGB and depth video data, the Kinect sensor (sensor) contains skeletal data that allows the monitoring of the user’s body. This skeletal data can go through a smoothing transformation that generally affects the accuracy with which the joints of the user’s body are tracked. The transformation depends on a set of five parameters, with values from 0 to 1:

1. Smoothing
2. Correction
3. Prediction
4. JitterRadius
5. MaxDeviationRadius

Normally, as the user’s frame is placed more precisely, the positioning takes more time, giving the impression of a delay (lag). So accuracy comes at a price, and the final decision is left to the developer: a fast, responsive application such as a video game would prefer a skeleton with an almost immediate reaction over very high accuracy, while a medical application would find accuracy more useful than a very short response time.

sensor.SkeletonStream.Enable(parameters);
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

Enabling the skeleton, depth and color streams of the Kinect sensor.

sensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(sensor_AllFramesReady);
The AllFramesReady event is triggered once three new frames (RGB, depth and skeletal) are finalized and ready for further processing. The sensor_AllFramesReady handler is attached to this event.

try
{
    sensor.Start();
}
catch (System.IO.IOException)
{
    kinectSensorChooser.AppConflictOccurred();
}
We attempt to start the sensor, making sure to handle any exceptions.

if (closing)
{
    return;
}
Once the sensor has sent three new frames and the application has entered the corresponding event handler, we check whether the sensor received the closing command a while earlier, in which case the received data is of no use and we ignore it.

Skeleton first = GetFirstSkeleton(e);
if (first == null)
{
    return;
}
Otherwise, we extract the first Skeleton from the argument e and then check if its value is null.

MainCanvas.Children.Clear();
Ellipse rightEllipse = new Ellipse();
rightEllipse.Fill = Brushes.Red;
rightEllipse.Width = 20;
rightEllipse.Height = 20;
MainCanvas.Children.Add(rightEllipse);
ScalePosition(rightEllipse, first.Joints[JointType.HandRight]);
The sequence above positions a red ellipse (in the MainCanvas canvas) above the user’s right hand, demonstrating a simple use of the skeletal data offered by the Kinect sensor.
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
    using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
    {
        if (skeletonFrameData == null)
        {
            return null;
        }
        skeletonFrameData.CopySkeletonDataTo(allSkeletons);
        Skeleton first = (from s in allSkeletons
                          where s.TrackingState == SkeletonTrackingState.Tracked
                          select s).FirstOrDefault();
        return first;
    }
}
The GetFirstSkeleton method receives an AllFramesReadyEventArgs argument and returns the first user skeleton found by the sensor (out of a maximum of six detected users).

private void ScalePosition(FrameworkElement element, Joint joint)
{
    Joint scaledJoint = joint.ScaleTo(1280, 720, .3f, .3f);
    Canvas.SetLeft(element, scaledJoint.Position.X);
    Canvas.SetTop(element, scaledJoint.Position.Y);
}
The ScalePosition method positions a graphical element (the element argument, of type FrameworkElement - in our case, the red ellipse used in the sensor_AllFramesReady event handler) according to the position of a certain joint of the user’s body (the joint argument, of type Joint - in our case, the right hand of the user). An object of the Joint type contains data describing a certain joint’s position in space. The position is stored in joint.Position, a vector of three elements (X, Y, Z), each containing a value in meters. To represent the joint on the screen, we must first convert these values into pixels. In our case, the conversion is done using the ScaleTo method included in the Coding4Fun DLL, available at http://c4fkinect.codeplex.com. This is the easiest way to make the conversion, perfect for a simple application such as ours. However, the process can also be done manually, allowing further control and configuration options. We will see such examples in the following issues of the magazine.

private void StopKinect(KinectSensor sensor)
{
    if (sensor != null)
    {
        if (sensor.IsRunning)
        {
            sensor.Stop();
            if (sensor.AudioSource != null)
            {
                sensor.AudioSource.Stop();
            }
        }
    }
}
Stopping the Kinect sensor.
private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    closing = true;
    StopKinect(kinectSensorChooser.Kinect);
}
Before closing the application, we make sure to safely stop the sensor.
Upcoming issues
In the following issue of the magazine, we will include a sequence of code that displays the user’s entire skeleton on the screen. Furthermore, over the upcoming issues, we will try to add other useful features, such as separating the tracked user’s body from the background (much like the presenter from a weather show), elements of gesture recognition and the manual processing of the depth data obtained by the Kinect sensor. In addition, we will include a link to the project containing the functionalities implemented so far.
HR
Trends in HR People are always talking about trends. Before starting to write this article, I did not have the curiosity to understand the deep meaning of the word. And as the best source is always the Dex , I started reading the simple and trivial explanations presented there. A trend represents a natural predisposition for something, a tendency, an attraction, a conscious action toward a defined purpose. Another definition describes the trend as an evolution of someone in a certain sense. Starting from this formulation I thought about what would be the connection between the trend and my work field
Andreea Pârvu
andreea.parvu@endava.com Recruiter for Endava and trainer for development of skills and leadership competencies, communication and teamwork
- HR. And I found it easiest to closely monitor what happens on the IT market. I have noticed a natural tendency of some HR processes and systems to evolve towards a defined purpose - so, a trend! In what follows, I will summarize a few trends which I consider to have a substantial impact on the market’s evolution.
Talent Management - a complex process consisting of four important stages. In theory, the process consists of attracting, motivating and developing talents in order to achieve high performance. In practice, even though recruitment and selection are done at a fast pace, the competencies that play an essential role in building the profile of the ideal candidate - the one who covers the business need, which is most of the time urgent - are not neglected. In a context where talents are expensive - and I am not referring strictly to the financial part, but to everything they look for in a company - it is difficult to position yourself as the first-choice company for the resources available on the market. A well-prepared candidate looks for a stable organizational environment, one that would harness his talent and, above all, offer financial stability in tight correlation with labour market trends. Motivating a talented employee, who has dozens of other options on the market, is a big challenge for each department or management level. The role of the HR department is nonexistent if the entire process is not supported by each head of department and each representative of the management team. The success stories from renowned IT companies show that it is possible. From my point of view, their success is linked not only to ensuring a consistent income, but also to a few other factors, sometimes forgotten. Any employee who has reached a high level of expertise is looking to develop his talent as much as possible. Give the talented employees the chance to bring extra value to the company! Offer them a career development plan which convinces them that the environment they are part of contributes to the development of their skills and competencies. How often have you considered that this way you could keep and motivate talents?
Developing competencies and soft skills. First of all, I would like to clarify the concept of soft skills in the context of this article. Personally, I have not identified a term in Romanian that carries the same meaning as the English one. For this reason I preferred to keep it as is, with the necessary explanations. In
my vision, soft skills refer to leadership, communication, team work, coaching, time management etc. I am glad to see that there is a stronger emphasis on developing these competencies and that people have moved on from the stereotype of IT specialists having only technical skills. Up to a certain level there is an accumulation and/or development of knowledge about a programming language, but from that point on awareness grows that a lack of soft skills can hinder career evolution. This HR process is tightly related to what we call a „career development path”, meaning the track in one’s career that offers an organized development process. What I have noticed so far is that after approximately five years of experience there are two development directions towards which IT specialists focus their efforts: (1) technical orientation - software architect, and (2) people orientation - team leading or project management. For the second case it is easy to understand why developing skills related to coordinating a team and a project is needed in advance. Nevertheless, many ask themselves what the role of those skills is in the first case. The answer is as simple as possible: communication and time management.
Unsubstantiated wage growth compared to the existing average. Generally, a few players entered the IT market with very high wages, to attract as many employees as possible and to cover the human resources needed for their projects. The main consequence was that these increases became too difficult to control. Although there are companies with a stable environment, which realize that this produces instability, most of the people coming to a job interview have very high financial expectations, wanting increases of up to 100% of their wage. The paradox is that some companies adapt to these expectations, while others, which I call „normal”, lose even people who would fit their organizational environment. I have asked myself many times: what is better? To adapt and head into times of crisis, or to maintain your growth pace and survive long term in an
imbalanced market?
Flexibility and adapting HR policies to business needs. This is one of the trends that began to take shape in 2011. From all points of view, the market, especially the IT market, is in continuous evolution. HR policies must respond to the business needs; they must develop and constantly adapt to the market. The only aspect which does not offer flexibility is the administrative side, which is based on a set of rules generally accepted through the voting of the labour code. When I was thinking about this trend, my thoughts flew first to the recruitment process. The traditional means of identifying candidates no longer have the results observed just a few years ago. If until three years ago the main source of discovering candidates was the well-known traditional sites, nowadays they occupy second place. Adapting to the business needs means searching for less conventional ways and identifying any instruments that allow access to the best talents. More than that, in my opinion, the business needs are strongly connected to accepting tighter and tighter deadlines. Even though the concept of Agile is not commonly used in HR, I consider that this kind of approach, through which demands and solutions are facilitated via self-organization and efficiency, is beneficial. This methodology encourages adaptation, planning, and a quick and flexible response to change. A few of its principles can be easily adapted by the HR department as well:
• Interaction between people above processes and instruments. In my recruiter language, this translates into building a long-term relationship with the candidates. I can also find these interactions relevant across the whole performance management process: defining and achieving goals using a correctly implemented coaching process, or in the career development process.
The more one interacts with an employee, the better one gets to know him, what motivates him and what he wants.
• The collaboration with clients, beyond what “a contract” or “a written understanding” means. As an HR department, your
client is the whole company. The client identifies a recruitment need and/or a development need, and the HR department, through its diverse areas (recruitment and selection, training, etc.), must satisfy these needs.
• Adaptation to change. There are numerous situations in which, if the wave of changes in business is missed, the work of the entire HR department will never bring the wanted results. A clear example: if until a few years ago the main ability of an IT recruiter was to identify a series of competencies during an interview, now he must have the best possible understanding of the business and of the technologies in order to recruit the most suitable people.

Investing in your people. The involvement of companies in various programs for students or new graduates encourages their development on a market with a trend of “disorganized exchange of senior employees”. Young people with potential are promoted who, in a determined period, bring a fresh perspective to a work environment. It is proven that people with vision can change the working direction of a company. It is interesting that there is a trend of adapting the already existing organizational culture for the integration of young people. More and more collaborations with technical universities and students’ organizations offer access to top students. Even more, if educated in an environment that encourages commitment, they can attach to the company and bring long term results. In Western Europe, numerous studies claim that employees who are given the possibility to develop their potential and whose merits are acknowledged increase the retention rate of a company.

Same as in fashion, trends come and go, but motivated employees will always remain. I wish that in the future more companies from the IT market in Cluj would consider at least a few of the trends mentioned above, in order to observe a healthy evolution of the work force in this domain.
solutions
TODAY SOFTWARE MAGAZINE
CIVITAS – Archimedes project IAȘI

The Archimedes project is made up of a mixture of innovative, integrated and ambitious measures for a clean environment, energy efficiency and lasting urban public transportation, and therefore has a meaningful impact on the policies for energy consumption, transport and environment protection. AROBS Transilvania Software has offered a complete solution – hardware and software – for streamlining and enhancing the comfort of the process
Sebastian Botiș
sebastian.botis@arobs.com Project Manager – CSM, CSPO Specialized in Agile Methods Arobs Transilvania Software
of ticketing for public transportation in the city of Iasi, the beneficiary of this European project. This solution means the development of physical “self service” kiosks. In the modern world, one cannot ignore the increasing use of such means of interaction between customers and the administrative institutions that offer various services to the society we live in.
Definition of kiosk
A kiosk can be defined as an electronic device, even a mini computer, with the help of which users can access various information in a very short time and in great comfort, or benefit from multiple services offered by local administration institutions or even private service providers.
Short history
The history of kiosks is neither long nor short. In 1977 the first interactive kiosk was developed at the University of Illinois by a student named Murray Lappe. The content was created on a PLATO computer and the user interaction was done using a plasma touch screen (also invented at the same university). The initial purpose of this interactive kiosk was to easily inform all students and visitors about the different services available from the local authorities. The first commercial kiosk was brought to market a bit later, in 1985, by a Californian company; it gave users the possibility to be informed about shoe stocks that weren’t available in the dedicated retail shops. Later, in 1991, the first kiosk with an internet connection was introduced at Comdex, offering an application to localize lost children.
Types of kiosks
Nowadays, kiosks of different shapes and with different scopes are being successfully developed. Some kiosks just provide information; others provide services. Here are some types of kiosks that we can find all over the world: public or touristic information, acquisition of products (movie tickets, public transport tickets), taking and buying photos, etc. These are just some of the existing kiosks; in this article we will focus on the kiosk that streamlines the selling of public transportation tickets.
Description of the solution
In the first phase, starting from the needs of the Iasi citizens and in close collaboration with the Iasi City Hall and RATP Iasi, we managed to shape a vision of the solution for modernizing and streamlining the selling of public transport tickets. Considering that we are continuously developing and researching ways of extending the existing solution, we have tried to build a mature system that is open to new ideas and added functionality. In order
to ensure a long online presence, a major factor was the maintainability of the solution, as well as its protection against vandalism, which has perhaps somewhat limited a more modern approach in some aspects of design and functionality.
Description of the solution – details
To be able to cover most, if not all, of these aspects, we decided to use the Microsoft platform for the development of this solution, using the client-server model. The server application exposes three main web services through which the communication with the database (SQL Server) is done using Entity Framework, as well as the communication with the client, by exposing different services using WCF Data Services (initially known as ADO.NET Data Services), a .NET Framework component. Among the services offered by the server there is also the possibility of managing the types of tickets available in the client application, as well as the restrictions imposed on the client application. This makes it really easy to manage the ticket types, which RATP Iasi can modify, and to introduce new types. Also, through the services offered by the server, using the back office functions, a close monitoring of the actions at the kiosks can be done, both for the actual selling of tickets and through permanent video surveillance meant to limit acts of vandalism, unfortunately still present. Having in place such a well designed notification system for vandalism, we hope to be able to limit those acts, as well as to keep the kiosks online for a longer time. Through the back office services offered by the server, a close monitoring of the hardware components installed in the ticketing kiosks can also be done. In this way, the hardware maintenance process can be optimized and we can manage the rapid repair of the special situations that may appear, in order to offer permanent availability. The client application is designed to offer users access to the ticket selling services, covering the whole range of ticket types currently existing at RATP Iasi. It was developed using Microsoft technologies like WCF (Windows Communication Foundation), WPF (Windows Presentation Foundation) and Entity Framework for database communication. Using a HAL (Hardware
Abstraction Layer), the client module maintains continuous communication with, and monitors, all the hardware components in the kiosk. We managed to develop the required drivers for communication with these components, so we were able to adapt them to all the special situations and to the citizens’ wishes, for a better experience in handling these ticket kiosks.
Offered types of tickets
As we mentioned before, the beneficiaries of this application can easily handle any type of ticket and, at the same time, introduce any new type of ticket through the server application at any given moment. The innovation of this system is the possibility to acquire monthly subscriptions for the public transport network, for residents as well as non-residents of Romania. The subscription type tickets are nominal, so we offered the possibility to “hide” this information by using QR codes. For the validation of such a subscription, transport inspectors will soon have devices to read and validate these QR codes. Such a subscription ticket can be seen in the next picture. Another detail worth noting is that subscription type tickets are addressed to all types of students, taking into account the start and end dates of the university year as well as of the school year.
Payment possibilities

Another aspect not to be neglected is the inclusion of all the existing means of payment in a single ticketing kiosk. Payment at the kiosk can be done using coins and bills, as well as chip credit cards (Visa and MasterCard). At the request of RATP, change is given in coins only. Besides coins, we have introduced a different way of giving change, in case there are not enough coins for the change or there was a problem at the kiosk: a RATP code that citizens can use in the RATP internal network for payments, or for which they can get a refund from RATP Iasi.
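The change-giving rule described above can be sketched as a small algorithm. This is only an illustrative sketch: the denominations, stock levels and method names are invented, and the real kiosk logic is surely more involved (greedy change-making with limited stock is not always optimal).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the rule: change is returned in coins only; if the coin
// hoppers cannot cover the amount, the kiosk falls back to issuing a
// refund code instead. All numbers are invented for illustration.
public class ChangeSketch {
    static final int[] COINS = {50, 10, 5, 1}; // denominations, largest first

    // Returns the coins to dispense, or null if the stock cannot cover the amount.
    static Map<Integer, Integer> makeChange(int amount, Map<Integer, Integer> stock) {
        Map<Integer, Integer> out = new LinkedHashMap<>();
        for (int coin : COINS) {
            int want = amount / coin;
            int take = Math.min(want, stock.getOrDefault(coin, 0));
            if (take > 0) { out.put(coin, take); amount -= take * coin; }
        }
        return amount == 0 ? out : null; // null => issue a refund code instead
    }

    public static void main(String[] args) {
        Map<Integer, Integer> stock = new LinkedHashMap<>();
        stock.put(50, 1); stock.put(10, 3); stock.put(5, 0); stock.put(1, 100);
        System.out.println(makeChange(73, stock)); // enough coins
        stock.put(1, 0);
        System.out.println(makeChange(73, stock)); // not enough -> fall back to code
    }
}
```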
programming
Guice
Dependency Injection (DI) is a specialized form of Inversion of Control (IoC) – a broader OOP concept where objects are coupled by an external source at runtime, usually a container, also referred to as an IoC container. Using IoC we can select a specific implementation of the dependencies at runtime, which is a major advantage when dealing with Unit Testing, for example: injecting mock dependencies becomes trivial, making it easy to test the application in isolation.
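A minimal plain-Java sketch of this idea (no framework involved; the class names are invented for illustration): the service receives its dependency from outside, so a test can swap in a mock without touching the service itself.

```java
// Hypothetical example: a service whose dependency is supplied from
// outside, so a test can substitute a fake implementation.
interface Greeter {
    String greet(String name);
}

class PoliteGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

class WelcomeService {
    private final Greeter greeter;

    // Constructor injection: the caller (or an IoC container) picks the implementation.
    WelcomeService(Greeter greeter) { this.greeter = greeter; }

    String welcome(String name) { return greeter.greet(name) + "!"; }
}

public class DiSketch {
    public static void main(String[] args) {
        // Production wiring
        System.out.println(new WelcomeService(new PoliteGreeter()).welcome("Ana"));
        // Test wiring: a mock dependency, no framework required
        Greeter mock = name -> "mock:" + name;
        System.out.println(new WelcomeService(mock).welcome("Ana"));
    }
}
```

An IoC container automates exactly this hand-wiring once the object graph grows.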
Mădălin Ilie
madalin.ilie@endava.com Cluj Java Discipline Lead Endava
The most common types of DI are:
• Type 1 – also known as interface injection
• Type 2 – also known as setter injection
• Type 3 – also known as constructor injection
What is Guice?
According to the official Guice page: “Guice alleviates the need for factories and the use of new in your Java code. Think of Guice’s @Inject as the new new. Guice embraces Java’s type safe nature, especially when it comes to features introduced in Java 5, such as generics and annotations. You might think of Guice as filling in missing features for core Java. Ideally, the language itself would provide most of the same features, but until such a language comes along, we have Guice.” If you’ve used Spring or other DI frameworks, you’re probably familiar with the above ideas. In this article I’ll present how to get started with Guice in a non-web
application. I’ll detail the web side in a future article. Guice offers all 3 types of DI (Spring, for example, offers types 2 and 3 and, partially, type 1).
Guice vs Spring?
Well, this is one of those debates that could last forever, like EJB vs Spring, Struts vs JSF and so on. It’s a matter of choice. I can say one thing though: if you need just DI in your application, go for Guice. It’s very lightweight, it’s type safe and it’s faster than Spring.
Using Guice for a Non-Web Application
Suppose you have an application and the class that contains the main method is called Application. When using Guice the approach changes a little: besides starting the application, you’ll also want a way to trigger the dependency injection. In order to do this you must have an “entry point” that “manually” triggers the creation of the graph of objects that will get injected,
and only after that starts the actual application logic. We’ll call the class responsible for these 2 things the Bootstrap class. We also need to create a Module by extending the AbstractModule class. Quoting from the Java Doc of the Module interface: “A module contributes configuration information, typically interface bindings, which will be used to create an Injector. A Guice-based application is ultimately composed of little more than a set of Modules and some bootstrapping code.” Let’s see the code:
Bootstrap class

    package com.insidecoding.guice;

    import com.google.inject.Guice;
    import com.google.inject.Injector;

    /**
     * Bootstrap class that creates the root object that triggers the graph of
     * objects creation and injection
     */
    public class Bootstrap {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new MyModule());
            Application app = injector.getInstance(Application.class);
            app.start();
        }
    }

Application interface

    package com.insidecoding.guice;

    public interface Application {
        /**
         * Starts the current application
         */
        void start();
    }

ApplicationImpl class

    package com.insidecoding.guice;

    import com.google.inject.Inject;

    public class ApplicationImpl implements Application {
        @Inject
        private PrintingService ip;

        public void start() {
            ip.print("Hello from guice");
        }
    }

PrintingService interface

    package com.insidecoding.guice;

    import com.google.inject.ImplementedBy;

    @ImplementedBy(PrintingServiceImpl.class)
    public interface PrintingService {
        /**
         * Prints a String
         * @param s
         */
        void print(String s);
    }

PrintingServiceImpl class

    package com.insidecoding.guice;

    import javax.inject.Inject;

    public class PrintingServiceImpl implements PrintingService {
        @Inject
        private CachingService cs;

        @Override
        public void print(String s) {
            System.out.println(s);
            cs.cacheIt(s);
        }
    }

CachingService interface

    package com.insidecoding.guice;

    import com.google.inject.ImplementedBy;

    @ImplementedBy(CachingServiceImpl.class)
    public interface CachingService {
        /**
         * Caches a String
         * @param s
         */
        void cacheIt(String s);
    }

CachingServiceImpl class

    package com.insidecoding.guice;

    public class CachingServiceImpl implements CachingService {
        @Override
        public void cacheIt(String s) {
            System.out.println("Caching: " + s);
        }
    }

Suppose the logic of your application is as follows: the application uses the PrintingService to print a String, and the printing service uses a CachingService to cache that String. The implementations of these services will be injected by Guice into ApplicationImpl and PrintingServiceImpl, respectively.

MyModule class

    package com.insidecoding.guice;

    import com.google.inject.AbstractModule;

    public class MyModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(Application.class).to(ApplicationImpl.class);
        }
    }
The code is pretty simple and straightforward. In order to make this code work, you need to add just the following JARs from the Guice distribution:
• aopalliance.jar
• guice-3.0.jar
• javax.inject.jar

If you take a closer look at the above code, you’ll see a small difference as far as injection is concerned: the ApplicationImpl class uses the com.google.inject.Inject annotation and the PrintingServiceImpl class uses the javax.inject.Inject annotation. This is because Guice 3.0 is compatible with JSR-330. Mixing JSR-330 and Guice annotations is discouraged though, so stick with the Guice @Inject for now.
Types of Injection
Guice has 3 types of injection:
• field injection
• constructor injection
• method injection
We’ve seen field injection in action in the above example. In order to use constructor injection we can modify the PrintingServiceImpl class to look like this:

    package com.insidecoding.guice;

    import javax.inject.Inject;

    public class PrintingServiceImpl implements PrintingService {
        private CachingService cs;

        @Inject
        public PrintingServiceImpl(CachingService cachingService) {
            this.cs = cachingService;
        }

        @Override
        public void print(String s) {
            System.out.println(s);
            cs.cacheIt(s);
        }
    }
Note that when using field injection Guice will call the no-arg constructor in order to instantiate the object. Method injection can be used to inject a method’s parameters. The Guice injector will resolve all dependencies before invoking the method. An injected method can have any number of parameters, and the method name has no impact on the injection. Method injection looks as follows:

    @Inject
    public void printMe(PrintingService ps) {
        ps.print("Printing with Guice method injection");
    }
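To see what the injector actually does for each style, the three mechanisms can be mimicked in plain Java. This is a hand-rolled sketch with invented class names, not Guice itself: the “injector” here is just a few lines of code and reflection doing what a container does automatically.

```java
import java.lang.reflect.Field;

// Dependency to be injected.
class Printer {
    void print(String s) { System.out.println("printed: " + s); }
}

// One target for each injection style.
class Client {
    Printer byField;                 // field injection target
    final Printer byCtor;            // constructor injection target
    Printer byMethod;                // method injection target

    Client(Printer p) { this.byCtor = p; }            // constructor injection
    void setPrinter(Printer p) { this.byMethod = p; } // method (setter) injection
}

public class InjectionStyles {
    public static void main(String[] args) throws Exception {
        Printer dep = new Printer();
        Client c = new Client(dep);                    // 1) constructor injection
        Field f = Client.class.getDeclaredField("byField");
        f.setAccessible(true);
        f.set(c, dep);                                 // 2) field injection, via reflection
        c.setPrinter(dep);                             // 3) method injection
        c.byField.print("field");
        c.byCtor.print("ctor");
        c.byMethod.print("method");
    }
}
```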
Injecting constants
Guice offers a very simple syntax for injecting constants:

    bindConstant().annotatedWith(name).to(value);
The to() method can receive a primitive value, as well as a String, Enum or Class. The above syntax does not offer type checking: Guice will automatically convert the value passed to the to() method to the actual type of the variables annotated with the given name. Let’s see some examples. Suppose we have the following constant defined in the PrintingServiceImpl class:

    ...
    public class PrintingServiceImpl implements PrintingService {
        @Inject
        @Named("myConstant")
        private String printersNumber;

        private CachingService cs;
        ...

In order to inject a concrete value for the printersNumber variable we’ll use the above syntax in the MyModule class:

    @Override
    protected void configure() {
        bind(Application.class).to(ApplicationImpl.class);
        bindConstant().annotatedWith(Names.named("myConstant")).to("345");
    }

In this case no conversion is needed. If printersNumber is Integer or int, Guice will automatically convert from String to Integer or int. What happens if printersNumber is Integer and we do the binding using a non-numeric String value?

    ...
    public class PrintingServiceImpl implements PrintingService {
        @Inject
        @Named("myConstant")
        private Integer printersNumber;

        private CachingService cs;
        ...

    @Override
    protected void configure() {
        bind(Application.class).to(ApplicationImpl.class);
        bindConstant().annotatedWith(Names.named("myConstant")).to("exception");
    }

You probably guessed that an exception will be thrown, more precisely a com.google.inject.ConfigurationException. You can force compile time type checking for binding constants using the following syntax:

    bind(Integer.class).annotatedWith(name).toInstance(value);

In this case value must be of type Integer at compile time – of course, autoboxing will work.

@ImplementedBy vs bind()

You can notice in the above code that there are 2 ways of declaring bindings between interfaces and implementations:
• annotate the interface with @ImplementedBy
• use the bind() method in a Module to specify the actual implementation: see the MyModule.configure() method.

For simple cases and interfaces with just one implementation you can go with the @ImplementedBy approach, but if you need more control over your bindings or if you deal with multiple implementations, definitely use the bind() method.

Scoping

One important thing to notice is that Guice returns a new instance by default each time it supplies a value. Guice offers the following scopes:
• prototype – implicit
• singleton – with a special case: eager singleton
• session (available as part of the servlet extension)
• request (available as part of the servlet extension)
• custom

Choosing the scope should be straightforward. If the object is stateful, the scope will be @Singleton per application, @RequestScoped per request and @SessionScoped per session. If the object is stateless and easy to create, scoping is unnecessary: Guice will return new instances every time.

In the above code you can switch to singleton by doing this:

    bind(CachingService.class).to(CachingServiceImpl.class).in(Scopes.SINGLETON);

I mentioned eager singleton as a special case. In this case the singleton objects are built when the application starts and not when they are first needed, which allows you to detect initialization problems sooner. Depending on the application stage (development or production, controlled using the values of the Stage enum when we create the injector), Guice has default initialization types for singleton objects. By default eager singleton is OFF in the development stage. In order to force it you can use the following:

    bind(CachingService.class).to(CachingServiceImpl.class).asEagerSingleton();

@RequestScoped and @SessionScoped will be detailed in the article on web applications. This is just a high level presentation of Guice. You can do a lot more with it and I hope I’ve opened your appetite to discover its capabilities. If you need a lightweight framework for DI, Guice beats Spring all the way.
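As a side note, the lazy vs. eager singleton distinction discussed above can be sketched in plain Java, without Guice. The class names are invented; this only shows why eager construction surfaces initialization problems earlier.

```java
// Sketch of lazy vs. eager singleton construction, as a container
// might manage them. "Expensive" stands in for any costly dependency.
class Expensive {
    Expensive() { System.out.println("Expensive created"); }
    int answer() { return 42; }
}

public class ScopesSketch {
    // Eager: built at startup (class initialization here), so a broken
    // constructor would fail immediately, before any request is served.
    private static final Expensive EAGER = new Expensive();

    // Lazy: built on first request, like a default singleton in a dev stage.
    private static Expensive lazy;
    static synchronized Expensive lazyInstance() {
        if (lazy == null) lazy = new Expensive();
        return lazy;
    }

    public static void main(String[] args) {
        System.out.println("app started");
        System.out.println(lazyInstance().answer());
        // A second request returns the same instance: singleton scope.
        System.out.println(lazyInstance() == lazyInstance());
    }
}
```

Note that the eager instance is created before “app started” is printed, while the lazy one only appears when first asked for.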
Modern Concurrency
There are new processors coming. Two cores seem to be the past, even on mobile; GPUs can be used for computational operations, and they have many cores, 1500+ these days. How do we use them all? What’s next, how do we keep the development pace with this shiny new hardware? Clouds are getting bigger today; we have processing clouds and data clouds, and they can handle more requests than the human mind can imagine. We have lots of concurrent users on a single server, which needs to be fed with data. How do we handle so many requests?

Béla Tibor Bartha
bartha@macadamian.com Software Engineer @ Macadamian

There are many programming languages implementing a solution for this, but before getting into some details, let’s start with an example: how do we move a pile of bricks from one location to another? With one truck this operation will take too long. How can we move all the bricks faster? We could add one more truck! This way, the transport is faster, but there will be bottlenecks at the initial location and the destination. Also, we need to synchronize the trucks. We could remove the bottlenecks by making the trucks really independent! With this solution, the input will be consumed twice as fast, and the trucks move independently. We can add as many lanes of this type as we want, but one truck’s operation is still taking too long. Let’s take another approach. What happens if we add two more trucks, but this time with a delay? This way each truck does the operation independently with some coordination – they’re communicating with each other at some level. We can add a fourth truck here, and get a better flow. Each truck is doing a simple task now, and with a little arrangement
we’re getting a four times faster flow than the original design.
So, we improved performance by adding two concurrent trucks to the existing design. The above is only one way to parallelize the process, but we can combine the two designs. This way eight trucks will fully work:
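The brick-moving picture can be sketched with worker threads draining a shared pile. This is a toy model with invented numbers, not the article’s diagram: several “trucks” run concurrently, and the shared pile is both the source of parallel speedup and the eventual bottleneck.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the bricks example: four "trucks" (threads) drain a
// shared pile concurrently until it is empty.
public class Trucks {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> pile = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100; i++) pile.add(i);      // 100 bricks
        AtomicInteger moved = new AtomicInteger();

        int trucks = 4;
        ExecutorService pool = Executors.newFixedThreadPool(trucks);
        for (int t = 0; t < trucks; t++) {
            pool.submit(() -> {
                // Each truck takes bricks until the pile is empty.
                while (pile.poll() != null) moved.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("moved " + moved.get() + " bricks");
    }
}
```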
This is a valid concurrent design, even if only one truck is moving at the moment – no parallelization at all. Let’s look at the differences between concurrency and parallelism, or at least at how we can define them:

Parallelism
• Simultaneous execution of processes
• Doing lots of things at once
• Deterministic behavior of a program

Concurrency
• Handling lots of things at once – through structuring the process
• Pieces can work independently, coordinated by communication
• Non-deterministic composition of programs

Concurrency provides a way to structure a solution in order to solve a problem that may (but not necessarily) be parallelizable. Let’s take another example:
• Concurrent design: mouse, keyboard, display, disk drives
• Parallel design: vector dot product

In this example, the devices can work independently, communicating with data streams; they may work in parallel, but not necessarily. On the other side, the vector dot product is parallel by design.

In “old” languages (like C/C++ and Java) we use lots of patterns for parallelization at some level, for example: the Double-checked locking pattern, the Thread pool pattern, Thread-Specific Storage, the Scheduler pattern, Active Object… Most of these have one thing in common: they use some form of synchronization. This is more like parallel design; concurrent design is a bit different.

In 1978 C.A.R. Hoare published his book “Communicating Sequential Processes”, in which he defines a language for describing patterns of interaction in concurrent systems. This is the base for modern concurrency.

Some programming languages that implement this design are: Go, Erlang, F#, occam, Clojure, SuperPascal, Ada. These languages have other things in common:
• they use communication instead of direct synchronization between threads
• threads are exchanged with lightweight routines, called goroutines/processes/agents
• these are not necessarily mapped to OS threads, which means the runtime can decide how it handles them
• millions can run at the same time; for example, Go has been seen running 1.3 million goroutines at a time.

Having lots of lightweight routines means that the runtime can map them to OS threads as needed, so the application can use as many cores as the host processor provides.

There are some other languages implementing concurrency as a subset:
• C# with the Task Parallel Library
• Objective-C with NSOperation and dispatch queues
• D with lightweight threads (simply named threads) and messaging
• Stackless Python with microthreads

In the near future the number of cores may keep increasing, because we have reached the maximum processor frequency with current technology. Are you ready?
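The communication-instead-of-synchronization idea can be sketched even in Java, using a blocking queue as a crude channel. This is a toy CSP-style example, not from the article: two “processes” coordinate purely by passing messages, with no shared mutable state and no explicit locks in user code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// CSP-flavoured sketch: a producer and a consumer communicate over a
// channel (a BlockingQueue) and agree on a sentinel value to stop.
public class CspSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(1);

        Thread producer = new Thread(() -> {
            try {
                channel.put("brick-1");
                channel.put("brick-2");
                channel.put("done");          // sentinel ends the conversation
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = channel.take()).equals("done"))
                    System.out.println("received " + msg);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
        System.out.println("finished");
    }
}
```

In Go or Erlang the channel and the lightweight routines are built into the language and runtime; here the queue merely imitates the style.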
startups
Startup - Hack a Server
“Choose a job you love, and you will never have to work a day in your life”, said Confucius. These are the words that describe Marius Corîci best. In 2003 he started doing business in the plumbing industry and co-founded ITS Group, a franchise of the Romstal company, the biggest plumbing installations retailer in South-Eastern Europe. In 2007 he moved into the Artificial Intelligence field and founded Intelligentics, a group for Natural Language Processing. Now he is very focused on infosec and has got involved in all the biggest independent security projects in Romania:
Marius Corîci
marius@intelligentics.ro Entrepreneur, in love with artificial intelligence
S3ntinel, Hack Me If you Can, Hack a Server, DefCamp. Marius considers himself a serial entrepreneur and is very passionate about Artificial Intelligence. Never a quitter, always a perfectionist, looking for challenges that will change the world we live in. He believes in people and the power of great teams, and he intends to start blogging in the near future.
What determined you to shift your attention towards the software development industry?
Besides the great opportunities, I am a guy who loves challenges. I started to like developing digital products and I believe that the online industry will keep growing in the near future.
Hacking Servers

What is Hack a Server?
HaS (Hack a Server) is a platform designed for conducting manual penetration tests using the power of crowdsourcing, covered by anonymity and confidentiality. It’s a fact that communities and individuals who love to discover and test security issues already exist. Whether they are called black, grey or white hat hackers, crackers, skiddies or PenTesters – you name it – they love to find flaws and vulnerabilities. They love challenges, and every flaw or vulnerability represents a challenge for them. This is the truth. When your system or production server gets hacked in real life, peaceful intentions are the least to expect. Trust me, we’ve been there, having our platform
“tested” and tested. Thank God we don’t keep any sensitive data about our users on the platform. HaS brings security skilled people to the same place and gets them paid for what they love doing most: hacking. Everybody can register on our platform, but only the best will have access to the “Playground Arena”, where all the hacking happens. In order to get access to the “Playground Arena”, they will have to pass a test first. We all know that the most important thing when someone finds holes in your system is not the penetration itself but the report that describes the security issues and the solutions. That report is the most important thing for a CTO, Sys Admin or web app developer. The test that a HaS user has to pass in order to get access for hacking is like any other test they would pass in order to get different security certificates (e.g. CPTC, OSCP, CEH, CEPT, CISSP etc.). The only difference is that we give this opportunity to all our users and we don’t charge for it. This test assures CTOs, Sys Administrators and web app developers that, whenever they pay for and receive a Penetration Test Report, it will comply with the penetration test standard reports.
How did you come up with the idea behind HaS platform?
I keep saying: solve a problem and then build a product. There were two ingredients that made me come up with this idea. Gaming: I hate gaming because, if you are not aware, it’s like a drug. Security: security is one big problem, believe me. One day, being with my little daughter at the doctor and waiting to get in, I was thinking: “how can you use gaming in such a way as to solve a big problem?” And it struck me. Online security gaming, but in a different way, a way in which it hasn’t been done before: using the power of crowdsourcing, and not for points (as it has been done until now), but for real money. After I figured out the outlines, I grabbed the phone, called a friend who is a Sys Admin and asked him if he would use such a platform and how much he would pay for this service. He said yes, he would use such a service and he would pay about 1,000 Euros. …And here we are. If you think deeper, we solve a few other complementary
problems: black hat hackers can become grey and start earning real money for what they love most, hacking servers. Moreover, we fill a niche between small and medium companies and the companies that perform penetration tests at rates those customers cannot afford. In fact we don’t even compete with those companies: we complete them. And I can add at least two or three more good things, such as: being a sys admin or a tester on our platform, you get the opportunity to become a consultant on InfoSec issues if you are in “The Hall of Fame”.
Building the product

Who is currently working to bring the HaS platform to the world?
Many have tried, few have remained. Marius Chis is currently the CFO and the first investor in this project. I tried to involve people who fell in love with the project, because I strongly believe money is a consequence of a job well done and not a purpose. Andrei Nistor is the CTO. He is the one who did most of the coding, based on relevant feedback from team members and testers. He worked day and night to get the project working flawlessly and made crowdsourced pentesting possible. Alexandru Constantinescu is the PR & Marketing Executive. He impressed me with his determination when he told me how much he loved the project and wanted to jump in on the marketing side with no initial financial interest, because he understood the development stages of a bootstrapped lean-startup company. Cosmin Strimbu is our frontend developer. Although I haven’t met him yet at the time of this interview, just like Alexandru, he simply asked me to take him on board. I love this kind of people, driven by passion for what they are doing and not by money. Am I lucky? Yes and no. I am lucky because it was them who found me (not the other way around) and it was them who found and fell in love with the project. I am not lucky because I worked hard to spread the word about me and my projects. No, this is not luck, this is hard work. I have
spent over 3 years in the online industry, and although I’ve met a lot of people, I would recommend just a few.
What is the business model that will bring you revenue from HaS?
We had a few business models in mind, but since we are dealing with a two-sided marketplace, we have decided to charge the testers a decent percentage. This means costs at a fraction of those of penetration test companies, and we are aiming towards a mass adoption price.
Who are your customers?
HaS customers are companies that want to solve their security issues fast and at low cost. CTOs, CIOs, CISOs, sys administrators, database administrators and web application developers are also the professionals within companies who can use our product. Other customers are individual specialists, whether pentesters or sys administrators, who want to verify the security of their innovative servers or applications, covered by what we value most: anonymity and confidentiality.
What are the current features of Hack a Server?
Hack a Server is a next-level solution for resolving critical security issues in a fun, war-game way. Cost effective: what can be better for your business than the power of crowdsourcing at a reasonable cost? It's fast, reliable and secure. Fast: within minutes you can set up your server with the most popular OS and start to configure it; I think we need about 7 clicks to have a machine up and running. Reliable: our pentesters must pass a test and complete a penetration test report, to prove they are really able to be pentesters, before they get access to hack in the Playground Arena. Secure: at Hack a Server, we encourage you not to disclose your real identity, whether you are a company representative or a pentester. This way, we don't keep sensitive data on our platform, which means it doesn't matter if someone tries to penetrate our system. They will find nothing.
What’s next?
www.todaysoftmag.com | nr. 3/2012
Are there new features to be implemented into the platform?

There are a lot of features that we want to implement. We have a Top 3 of features to be implemented, but it's better for us to let our customers decide what they want most. On second thought, we have one option that we believe will help CTOs, sys administrators, web application developers and the companies: finding the best way to automate the process of replicating a physical machine on our platform. Now this is a challenge, and we will start as soon as we close this iteration.

How do you intend to penetrate the market?

Hack a Server will become the official platform for gaming at DefCamp, a conference on InfoSecurity that will be held on September 6-8 at Hotel Napoca in Cluj. We made the virtualization module open source, so everybody who wants to deploy a pentest lab can do it fast and free of charge. We are going to bring the virtualization module into universities, so the students will have a fun way to learn security. These are only a few directions, a part of our marketing strategy.
programming
Think iCloud

Zoltán Pap-Dávid
zpap@macadamian.com
Software Engineer @ Macadamian

iCloud is a set of interfaces and services provided by Apple for sharing data among different instances of an application running on different devices, in an easy-to-use and safe manner. To be able to work with iCloud, you first need an App ID configured for iCloud. In the Xcode project the entitlements should be enabled, and iCloud also needs to be set up on the deployment device (an iCloud app cannot be tested in the simulator). Detailed configuration steps are not the subject of this article; details can be found at https://developer.apple.com. If everything is configured correctly, the app can access a special iCloud directory on the device. Everything put in this directory is synchronized automatically with iCloud by the iCloud daemon. You can enumerate the contents of this directory with the normal API, but you need to access these files carefully, because the iCloud daemon uses them for synchronization too.

From the developer's point of view, there are three main ways an app can store its data in the cloud:
• Key/Value Store – can be used if we want to store a small amount of data (NSUbiquitousKeyValueStore is very similar to NSUserDefaults)
• Store documents in the cloud (subclassing UIDocument)
• Set up Core Data to store its data in the cloud
Migrating to iCloud
In order to store documents in iCloud, we could manually move files into the iCloud directory using the new methods of NSFileManager or the new NSFilePresenter and NSFileCoordinator classes. However, this is fairly complex and unnecessary in most cases, because iOS 5 introduced new classes to achieve it.
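As a rough sketch of the manual route using NSFileManager, moving an existing local file into the ubiquity container might look like this; the file name and the nil container identifier are illustrative assumptions, not code from the article:

```objectivec
// Hypothetical sketch: move a local file into the iCloud (ubiquity) container.
// In production this should run off the main thread, as the call can block.
NSFileManager *fm = [NSFileManager defaultManager];
NSURL *docsURL = [[fm URLsForDirectory:NSDocumentDirectory
                             inDomains:NSUserDomainMask] lastObject];
NSURL *localURL = [docsURL URLByAppendingPathComponent:@"notes.txt"];

// nil selects the first container listed in the app's entitlements.
NSURL *containerURL = [fm URLForUbiquityContainerIdentifier:nil];
NSURL *cloudURL = [[containerURL URLByAppendingPathComponent:@"Documents"]
                   URLByAppendingPathComponent:@"notes.txt"];

NSError *error = nil;
// YES moves the item into iCloud; NO would evict it back to local storage.
BOOL ok = [fm setUbiquitous:YES
                  itemAtURL:localURL
             destinationURL:cloudURL
                      error:&error];
if (!ok) {
    NSLog(@"Failed to move file to iCloud: %@", error);
}
```

Even this "simple" case hides coordination details, which is why the UIDocument route below is usually preferable.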
iCloud Key/Value data storage

Its primary purpose is to share small amounts of data between instances of applications running on different devices. The data can be synchronized as long as it is encapsulated in NSString, NSDate, NSNumber, NSData, NSArray or NSDictionary. iCloud data synchronization is achieved by the NSUbiquitousKeyValueStore class, introduced as part of the iOS 5 SDK. The only restriction concerns the stored data size: for every key at most 64 KB can be stored, we can have at most 256 entries, and combined they must not exceed 64 KB in total. NSUbiquitousKeyValueStore synchronizes data to iCloud in a transparent manner, and it also provides a notification in order to detect data changed by another instance.

Document Based Application

Classes whose instances we want to store in iCloud need to have UIDocument as their base class. UIDocument is practically a model controller (MVC pattern); it implements NSFilePresenter and takes care of coordination with the iCloud daemon. All storage operations are done transparently. The only thing we need to do is to implement the following two methods:

- (BOOL)loadFromContents:(id)contents ofType:(NSString *)typeName error:(NSError **)outError
- (id)contentsForType:(NSString *)typeName error:(NSError **)outError

The first method receives the input object (an NSData or an NSFileWrapper – the first carries the data of a single file, while the second can represent a package containing several files) and we need to decode it into our internal model. The second method should perform the serialization and needs to return an NSData or NSFileWrapper. If the class implements NSCoding, implementing these methods is trivial.

Retrieving documents from iCloud

Initially the iCloud directory on the device doesn't contain the documents we want to access. The first step is to retrieve the path of the iCloud directory:

NSURL *storage = [[NSFileManager defaultManager] URLForUbiquityContainerIdentifier:nil];

If the returned value is not nil, iCloud is configured correctly and we can start fetching the objects.

NSMetadataQuery *query = [[NSMetadataQuery alloc] init];
[query setSearchScopes:[NSArray arrayWithObject:NSMetadataQueryUbiquitousDocumentsScope]];
NSPredicate *pred = [NSPredicate predicateWithFormat:@"%K like 'EventInfo_*'", NSMetadataItemFSNameKey];
[query setPredicate:pred];
[[NSNotificationCenter defaultCenter] addObserver:self
    selector:@selector(queryDidFinishGathering:)
    name:NSMetadataQueryDidFinishGatheringNotification
    object:query];
[query startQuery];

All we need to do is to initialize a query and specify a predicate to filter out the desired objects in the cloud storage. Once the model objects are loaded into memory, a notification is received. At this moment the documents are available in the local iCloud directory and the local copy is automatically synchronized with the cloud. We can subscribe to notifications of state change.

Creating a new document in the cloud

When creating a new instance of a UIDocument descendant class, we need to initialize it with a URL which points to a location inside the iCloud directory. When saving the UIDocument, the object will be stored locally and later synchronized with the iCloud storage automatically.

NSURL *ubiquitousPackage = [[self.ubiq URLByAppendingPathComponent:@"Documents"] URLByAppendingPathComponent:fileName];
UserInfo *doc = [[UserInfo alloc] initWithFileURL:ubiquitousPackage];
EntityInfo *newEntity = [[EntityInfo alloc] initWithFileURL:ubiquitousPackage];
[doc saveToURL:[doc fileURL]
    forSaveOperation:UIDocumentSaveForCreating
    completionHandler:^(BOOL success) { }];

Storing a document in iCloud means that multiple instances of the application can access and modify the same document. When multiple instances perform different changes on the same document, conflicts can occur. The simplest approach to resolving conflicts is to let the last save overwrite the previous instance. A more user-friendly manner would be to implement conflict-detection code. To inspect the state of the document, UIDocument contains a documentState property (there is also a notification for signaling state changes, UIDocumentStateChangedNotification).

Configuring Core Data for iCloud

In iOS 5, Core Data contains enhancements to support efficient integration of the SQLite store with iCloud. The main new feature is that each change you commit to a persistent store is recorded in discrete transaction log files that are stored in iCloud, rather than keeping the persistent store file in the cloud directly. This makes propagation to the cloud efficient: when the database is modified, we don't need to push the whole store to the cloud and then to every device. When we want to enable iCloud for Core Data, we need to consider the following aspects:
• You must tell Core Data where to store the transaction logs associated with changes to the store. To achieve this you must provide values for the NSPersistentStoreUbiquitousContentNameKey and NSPersistentStoreUbiquitousContentURLKey keys when setting up the store. Thus Core Data will know where to record the transactions related to the operations.
• When deleting a store from the cloud, we need to delete its logs too.
• The SQLite file itself should be located in a place where its modifications will not be synchronized with the cloud (this could be a directory in the local application sandbox or a directory with a .nosync suffix inside the ubiquity container).

iOS 5 contains integrated support for iCloud. The existing structures were updated in a clever way to provide straightforward access from a developer's point of view, whether we want to share simple data or even complex databases between devices using iCloud.
others
Gogu

Meet Gogu! Gogu is a funny character, cynical at times, an introvert to whom interior monologue is an alternative to real life. With Gogu's help, we explore different aspects of a project manager's life, trying to find and suggest solutions that are easy to understand and apply. As Gogu would say: "almost common sense". We invite you to follow Gogu and send him your comments and suggestions.

Simona Bonghez, Ph.D.
simona.bonghez@smartprojects.ro
Speaker, trainer and consultant in project management, partner of TSP (smartprojects.ro)
Gogu was barely visible behind the mountain of CVs he had in front of him; you wouldn't even have known he was there if you hadn't heard him pouting from time to time. "Problems, Gogu?! You've wasted around three trees to print all those CVs. Wouldn't it have been easier to read them on the computer?" Hmm, never mind, Mişu; if I catch you using the printer, you'll see only trees! thought Gogu, but he refused to argue out loud. He stood there unmoved by the remark and deepened himself in the study of the papers. Mişu wanted to add something, to twist the knife in the wound, but he stopped when he saw the Chief coming: "Are you done, Gogu? HR told me they gave you the selected CVs for you to choose the project managers you would like to interview. How many are they?" Hm, when I get my hands on HR... I wonder what's going on today... I run only into slicks! He answered the Chief: "I need more time. They sent me 32 CVs; I'll try to reduce the interviews to ten. Is that ok?" "Go ahead; it's better now than the last time, when we hadn't received any CVs. It's good that you have eliminated some of the criteria, but let's try not to reach the other extreme and interview candidates without studies or experience. Still, I don't worry, I know you'll verify them thoroughly. Come to me when you are ready."

Of course, "verify thoroughly", Gogu hummed to himself, mimicking the Chief. He read one more time the sentence from the CV in front of him: "significant experience in large projects with excellent results". As if someone could be that vague, he thought: how many years does "significant" mean, what budget does a "large" project have, and on what basis are the results called "excellent"? "Intensive training in project management", he read next. Yeah, sure... if this training of yours had been worth anything, you would have given me some clear details, not terms like "significant" and "large". "Mişu, they all say, each one louder than the other, that they have experience and knowledge. How can I 'verify thoroughly' this fact? I can't call them all in for an interview... although it would be worth it, just to see the Chief's face..." His eyes fell on "PMP Certification obtained in 2010". "This PMP is that international certification from PMI, isn't it?" He suddenly realized he had interrupted Mişu. "Well, that's exactly what I was saying. You did it again: you ask something and then don't pay attention to the answer. That annoys me... well, yes, that's exactly the certification you don't want to obtain; lucky for you with this Chief of ours." "Give me a break, will you?! I'm sure it might be fashionable to have a certification in the field, but this is hardly a guarantee of a project manager's skills. A multiple-choice exam cannot be conclusive for my skills. A project manager must be able to apply the theoretical knowledge gained, to coordinate the team, to manage the contingencies that arise during the project... where are these issues taken into consideration in the certification exam? Who verifies them?" "You are being grumpy as usual. Remember that the Chief told us there are some eligibility conditions you must fulfill – a certain number of training hours plus experience in project management? Actually, isn't this a proof that you have also acquired some skills in the field?"
"Nope! I wouldn't say so..." But the tiny wheels inside his head began to turn: Hold on, they might not have skills, but at least someone has already verified that they have knowledge and experience. It seems I can get away with it... He started to check the CVs again, putting aside those with international certification. He counted them. There were five. That meant five interviews. And at the interview you check for the skills, well, as far as possible... and the Chief really knows how to get what he wants from the candidates. He added three more CVs that offered specific details about projects and gained experience, and he felt satisfied. There were eight candidates for eight interviews. And then we'll see if something comes out. It turns out that this PMP actually has some utility!