No. 10 • April 2013 • www.todaysoftmag.ro • www.todaysoftmag.com
TSM
TODAY SOFTWARE MAGAZINE
NoSQL - Introduction • The Challenge of Leadership - Part 3 • The Cluj IT History (V) - Timeline Project • The Challenges of a Business Analyst • Book review: Going Agile by Gloria J. Miller • Aspect Oriented Programming
Interview with Tim Huckaby • NoSQL databases - a comparative analysis • Communities of practice, learning by doing and exploration • Enterprise Application Development • Superman Syndrome
6 The great clipboard in the sky (Mircea Mare)
8 Interview with Tim Huckaby (Mihai Tătăran and Tudor Damian)
10 Liberty Technology Park Cluj (Liberty Technology Park Cluj team)
12 The Cluj IT History (V) - Timeline Project (Marius Mornea)
13 HTML5: WebAudio API (Radu Olaru)
16 Enterprise Application Development (Cătălin Roman)
19 Aspect Oriented Programming (Knall Andreas)
22 The Challenges of a Business Analyst (Daniela Haliga)
25 Communities of practice, learning by doing and exploration (Cristina Donea)
28 NoSQL - Introduction (Radu Vunvulea)
31 NoSQL databases - a comparative analysis (Traian Frătean and Bogdan Flueraș)
35 Big Data: HBase X-Ray (Lucian Ciufudean)
37 Migrating MVC 3 website + DB to Azure (Dragoș Andronic)
39 The Challenge of Leadership - Part 3 (Martin Mackay)
41 Book review: Going Agile by Gloria J. Miller (Gloria J. Miller)
44 Superman Syndrome (Antonia Onaca)
46 WordPress and the Community Spirit (Cornelia Stan)
editorial
Ovidiu Măţan, PMP
ovidiu.matan@todaysoftmag.com
Founder & CEO @ Today Software Magazine

Issue No. 10 of TSM has arrived! We are very excited that there is a community that shares the passion for software development. March came with a special event, namely "… even mammoths can be Agile", where we joined the Colors in Projects team and the re-branding of Confucius Consulting led by Simona Bonghez. She is known as the author of the Gogu series and also as an experienced project management trainer. Gogu is on holiday this month, but we have his promise to return soon. The event was a success: in Cluj we had 230 participants and many local and international speakers. I would like to mention that Gloria Miller gave a technical presentation on Agile; in the current issue she presents her book, Going Agile. I would also like to congratulate Dan Suciu for the presentation he gave at the event, which leads on our YouTube channel with 1,300 cumulative minutes watched. A week later, in Bucharest, there was an exceptional presentation by David J. Anderson, known as the father of the Kanban methodology. I was impressed by the simplicity of this methodology and the effectiveness of its application in certain areas, such as the media.

Another special event in March was the first edition of Cluj IT Cluster - Innovation Days. The opening event brought together the worlds of business and politics. We talked about local development projects and collaboration with global companies like IBM. This is new for Romania, and the increased interest of local companies in taking part is a first step towards better understanding and shared growth. The goal of the project is to develop an ecosystem focused on innovation through collaboration with the IT community and various related fields, such as medicine.

When planning for May, consider ITCamp, a large-scale event on Microsoft's major themes and more. The organizers obtained an interview for TSM with Tim Huckaby, a specialist in NUI (Natural User Interface): touch, gesture and neural input. They promised we could see Tim again this year in Cluj during the event.

TSM is starting a new initiative, the Timeline project, in which we want to create infographics about the software development companies in Romania. We will represent the starting point of a company, the main projects carried out over time, and the achievement of certain milestones such as number of employees or turnover. We will thus be able to follow these developments graphically and better show the maturity of local companies and, why not, their contemporary history.

Finally, we would like to congratulate the Cloud Clipboard team, winner of the last edition of Startup Weekend. We are happy to promote Romanian initiatives by publishing their experience of the event, and we wish them further success. In fact, also in May, we will have a similar event dedicated to those who want to put an idea into practice, namely Startup Live. Moving on to the area of organisational management, we will join Dan Ionescu, www.danis.ro, in his initiative to establish a set of best practices for managing Romanian companies, which cannot always follow the American recipe for success.

In the pages of this issue you will find the NoSQL topic addressed in detail in three articles written by different authors: NoSQL - Introduction, NoSQL databases - a comparative analysis and Big Data: HBase X-Ray. You will also find other technical articles, such as HTML5: WebAudio API, Aspect Oriented Programming and Migrating MVC 3 website + DB to Azure. I would also like to mention the review of the book Thinking in Java by Bruce Eckel, a book well known and appreciated by Java developers. Communities in their various forms are covered in Communities of practice, learning by doing and exploration and WordPress and the Community Spirit.

We wish you a pleasant reading!
Ovidiu Măţan
Founder & CEO of Today Software Magazine
4
nr. 10/April, 2013 | www.todaysoftmag.com
TODAY SOFTWARE MAGAZINE

Editorial Staff

Founder / Editor in chief: Ovidiu Mățan / ovidiu.matan@todaysoftmag.com
Editor (startups and interviews): Marius Mornea / marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău / dan.hadarau@todaysoftmag.com
Marketing: Ioana Fane / ioana.fane@todaysoftmag.com
Translator: Cintia Damian / cintia.damian@todaysoftmag.com
Reviewer: Tavi Bolog / tavi.bolog@todaysoftmag.com
Reviewer: Adrian Lupei / adrian.lupei@todaysoftmag.com

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.com | www.facebook.com/todaysoftmag | twitter.com/todaysoftmag
ISSN 2285 – 3502 | ISSN-L 2284 – 8207

Authors

Cristina Donea / cristina.donea@isdc.eu / HR Specialist @ ISDC
Martin Mackay / mmackay@neverfailgroup.com / CEO @ Neverfail Group
Mircea Mare / mircea.mare@gmail.com / Software storyteller @ Cloud Clipboard
Traian Frătean / traian.fratean@3pillarglobal.com / Software Engineer @ 3Pillar Global
Bogdan Flueraș / bogdan.flueras@3pillarglobal.com / Software Engineer @ 3Pillar Global
Silviu Dumitrescu / silviu.dumitrescu@msg-systems.com / Java consultant @ .msg systems Romania
Radu Vunvulea / Radu.Vunvulea@iquestgroup.com / Senior Software Engineer @ iQuest
Marius Mornea / marius.mornea@todaysoftmag.com / Software Engineer and Founder of Mintaka Research
Marius Mocian / marius@mocian.com / Founder @ Transylvania Innovation Ventures, Organizer @ OpenCoffee Club Cluj-Napoca
Gloria J. Miller / gloria.miller@maxmetrics.com / Founder @ MaxMetrics
Mihai Tătăran / mihai@itcamp.ro / Microsoft MVP, CodeCamp, Co-founder ITCamp
Tudor Damian / tudy@itcamp.ro / Microsoft MVP, ITSpark, Co-founder ITCamp
Daniela Haliga / daniela.haliga@endava.com / Business Analyst @ Endava Iași
Radu Olaru / rolaru@smallfootprint.com / Senior Software Developer @ Small Footprint
Cornelia Stan / corneliastn@gmail.com / Copywriter and Organizer @ WordCamp Transylvania
Lucian Ciufudean / lucian.ciufudean@hp.com / Server Automation Functional Architect @ HP Software Cluj
Knall Andreas / knall.andreas@msg-systems.com / Java Team Lead @ .msg systems
Cătălin Roman / catalin.roman@nokia.com / Software Architect @ Nokia, Berlin
Dragoș Andronic / dragos@txtfeedback.net / CTO @ TXTFeedback
Antonia Onaca / anto@aha-ha.com / Psychologist and consultant with 10 years of experience, working as an entrepreneur

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.
www.todaysoftmag.ro | www.todaysoftmag.com
startups
The great clipboard in the sky
Friday, first day of spring, 7:30 PM. 150 people in a wedding venue. The atmosphere was pretty dull. After walking in, I knew I wasn't going to pitch. Not only that, but I was asking myself what I was doing there in the first place.
Mircea Mare
mircea.mare@fortech.ro Software storyteller @ Cloud Clipboard
I had an idea, and in the previous days I had made a short presentation in my head, which I polished until it got down to one minute. At first I didn't know whether I wanted to present my idea or not, but I came to terms with myself that I would make a final decision when I arrived at the event, depending on the atmosphere. "Pure cross-platform file system." It wasn't an idea for a startup. Well, at that moment I didn't even know what a startup was. I later found out... it's an idea which longs to become a product. It's not something exclusively tied to the IT world; it's just that in our environment it's relatively easy to come up with a prototype and do some market research. There's one more essential thing: huge growth potential - index, spirals, segment - and other terminology I'm bewildered about. The efforts of the host to invigorate and dominate the audience had the opposite of the desired outcome, but in the end, the pitch fire started. There were 36 pitches. Reactions ranged from "c'mon, get real" to "hmm, interesting" and "this might actually work." I noticed: Catwalk15, FridgeChef, DoItForMe, CloudClipboard. Next, some of the pitchers stationed themselves by their posters, in an attempt to further persuade people to vote for their idea. This shouldn't have happened; the pitch should have sufficed. But... it seemed that lots of people had already set their minds on winning. Is this the secret of success? ... I hope to answer this very question by the end of the article. 17 ideas made it past the voting and teams started to form. Piki (Istvan Hoka), whom I knew from a CodeRetreat, looks at me, points to one of the teams, and asks "CloudClipboard?" Călin's pitch could be summarized as: "Seamlessly copy/paste across devices. As simple as CTRL+C, CTRL+V". I don't think I'm overstating if I say he uttered "seamlessly" at least a hundred times during the event.
And thus it began...
Călin proved to be of contagious enthusiasm. He welcomed us into his team; we introduced ourselves and gave our backgrounds in brief, and as new members joined the team, Călin introduced every one of us. He quickly remembered our names and the things we're good at. He also complimented us: "you look like smart guys". That being said, it was time to get to work. We started with a brain dump. In the meantime, Kamillia (from the US) and Tudor joined us. Tudor is a developer but expressed his intention to be on the marketing team from the start. We found out that Tudor already has his own startup - KeenSkim - with which he took part in the "Eleven" business accelerator in Bulgaria. Tudor pressed us to focus on the business, on an actual product, not just a cool feature. That proved to be somewhat hard, with 7 developers around him, all wearing the same expression: "Forget that, let's hack something!". We agreed on the minimum viable product: for the final presentation we should be able to "copy" from one device followed by a "paste" on another. And the following teams were outlined: marketing, backend, Android, Windows, Linux and "the catalyst" (Călin). We each shared our ideas and debated until around 11:30 PM. To the delight of the majority, the plan was utterly simple: "tomorrow we code." Java, C#, Ruby, publish/subscribe, the PubNub API. The venue was about to close at midnight, and after being "kicked out" we stood in front of the entrance to continue our discussion on - what else - implementation details. Pairing, backend, piggybacking on Google Docs or Dropbox for file transfer, discovery between devices.

Saturday morning we started off with a lot of energy. By 1 PM we had a chat room on CampFire, a GitHub repository, and documents describing the mentors and organizational details. By lunch, our prototype was taking shape; the first messages started being sent over the PubNub channel. But leaving the technical enthusiasm aside for a bit, I'll mention some of the pertinent advice we got from the mentors. Ibrahim Evsan encouraged us to study the behaviour of users and what kind of data they have in their clipboards, and recommended we summarize what our product does into a clear use case. He suggested that we could even build a new type of search engine around our concept. David Zwelke was simply genius. He was pouring out ideas and, judging by how much time he spent with us, I'm convinced he enjoyed our company. He suggested packaging our product in "Goldfish" and "Elephant" editions, an idea which made it into our final business plan. He offered us recommendations about encryption, pairing (bump, QR code, Bluetooth-like), security, legal aspects and social features. He suggested a pain point for which our product could be a solution: the frustration of not being able to share information quickly. He also underlined the importance of setting ourselves apart from the competition.
One of his delicious comments was something along the lines of: "All those Linux neck beards will go: I could pipe my clipboard over SSH back in 1992." After which he turned to me and said, "Your neck beard looks better than most of them." "Forget everything you used to forget" was a suggestion for a motto, and as features he suggested adding autofill between devices, plugins, and the ability to pull predefined things.

Sebastian Presecan suggested some questions we should answer: who is our target group, how are people transferring information now, how much time are they spending doing it, why use copy/paste in the first place, how is the problem handled at the present moment. He also suggested avoiding overly technical terms and communicating in layman's language. Christoph Raethke suggested we direct our solution at corporations or bundle it with existing solutions. On the other hand, Simon Obstbaum suggested that this is a user issue, not a corporate issue. Around 3 PM, the marketing team was chatting with Bradley Kirkham. I barged in saying "Sooo, live demo!" At first, Călin laughed, thinking I was joking... "Really?!" Piki copies a text from his Mac, I paste it on my laptop running Linux, Călin's face lights up and Bradley goes "That's so cool!" After the conversations with the mentors, I, for one, felt discouraged. People were charmed by our idea, but did we really have a product? We spent the rest of the day chatting and... searching for new features to implement the next day.
Sunday.
Tudor: "Who's your target audience?" Călin: "Me."

was a conversation that happened around three times. After a while things started sounding better. "A market of young IT professionals, 90% male, programmers, journalists who wish to be productive and stay organized, in the context of having to switch often between ideas." CloudClipboard proposes a two-step process for data transfer, and the key combination is ingrained in the muscle memory of every user. An intelligent clipboard: global, data aware. There are other similar solutions out there, but their biggest shortcoming is that they synchronize data through an interface (e.g. Evernote) or are restricted to one OS (take iOS, with synchronization via iCloud). Things were looking good, so we decided to start working on the presentation and slides. We filmed a demo: "copy" a phone number from a smartphone and "paste" it into a Skype chat. We did a test drive of the presentation with Philip Kandall (Skobbler), who suggested we find more realistic numbers for our market (the total number of IT workers didn't seem credible). Thus, we studied the Evernote and Dropbox markets and included this in our presentation. We aced the final presentation, and Christoph (Berlin Startup Academy) came over even before the winners were announced to invite us to Berlin, regardless of whether we would be announced as winners or not. The guys at Today Software Magazine ran a popularity poll among the participants, and I noticed that we had about double the number of votes of the next team. I started sensing that things were happening, but I refrained from thinking we would win. Neither of us said anything about it. But, perhaps not surprisingly at all... CloudClipboard won! We complemented each other. This is the reason a great team was formed: we felt each other's intentions and resonated in approach. We organized quickly and, at first, we simply didn't care about anything. We also had the sincere desire just to have a good time. We didn't think about winning, but we did our job well because it's in our nature. And we followed the advice. One of Călin's conclusions was "always listen, don't just pretend." Plus, we had a great time. For the prototype we didn't have pairing; we were all connected to the same channel. This is how "paste roulette" was born: you just "paste" and you don't know for sure what you'll get, or from whose clipboard. It was the biggest startup evening in Romania so far, and Christoph stated on his blog that the potential he saw at SWCluj is similar to that in the German startup world. We had also intuited very well that "outsourcing companies in Romania [...] sweep the plate clean when it comes to hiring the best developers." Things are promising, we're not lacking talent, and we'll be hearing more and more often about startups.
As for the afterparty, Christoph summarized it as follows: "the participating geek girls had revamped into bombshells." For team Omnipaste (renamed after a name clash with another application), the real fun starts with 4 days in Berlin.
interview
Interview with Tim Huckaby
Tim Huckaby is focused on the Natural User Interface (NUI) - Touch, Gesture, and Neural - in rich client technologies like HTML5, Silverlight, WPF and iOS, on a broad spectrum of devices that include computers, tablets, the Surface, the Kinect, and mobile devices. Tim has been called a "Pioneer of the Smart Client Revolution" by the press. He has been awarded many times for the highest rated technical presentations and keynotes at Microsoft and many other technology conferences around the world, and is consistently rated in the top 10% of all speakers at these events. Tim has been on stage with, and done numerous keynote demos for, many Microsoft executives including Bill Gates and Steve Ballmer. He will be present at ITCamp: 2 days, 3 tracks, over 20 well-known national and international speakers, 30+ hours of content and open panels, pre-conference workshops and lots of networking opportunities.

When did you start your life as an IT professional?
[Tim]: It depends on how you define the word "professional". I most certainly helped pay my way through university by consulting and doing programming projects. Upon graduating I joined EDS… that was back when Ross Perot ran the company.

What made you choose this profession?
[Tim]: That is a pretty interesting story. I went to an all-boys Catholic high school. We didn't have wood-shop; we had Latin. We didn't have computers; we had calculus. I didn't touch programming until I took a Pascal programming class in college. I fell in love with programming because of that class. I ended up taking every programming class my university had; every one.

You authored a few books on Microsoft technologies. How did you manage to be so successful?
[Tim]: Success is relative. I still have so much to achieve professionally. As technologists we are so goal driven. And I most certainly am. But I did write 3 books and thousands of magazine articles.
I still write magazine articles and a monthly appdev editorial, but it would be hard to get me to write another book. I wrote those books because I worked on the product teams at Microsoft. I knew the products inside and out, so it was just a matter of explaining them. The expertise I have now is in user experience, NUI, user engagement, etc. It's hard to write it; it's much more effective to show it. Which, I guess, is why I do so many presentations.
You and your company InterKnowlogy are very specialized in Microsoft technologies. How do you manage to keep up with the speed they evolve with?
[Tim]: I don't. I used to know the Microsoft stack end to end. Now I just keep up with the parts of my world I am most interested in. I have so many brilliant people to lean on in the company that whenever I need an update I turn to them. The Microsoft stack is so big these days. No one can know it all.
How does Tim Huckaby spend a typical working day? Do you still have time for technical work?
[Tim]: I think that is why my life is so exciting. I never have a "typical day". I never have the same day twice. I tend to travel a lot, so that provides a lot of the excitement. And, of course, there is a crisis every week. As for technology, I still love it. But, as mentioned earlier, I tend to spend my technical days "above the hood" as opposed to in the plumbing, where my career began. And InterKnowlogy is a project company that writes software for all industries. Our clients go from cancer research to retail to NASA. So the work is really interesting. But as I get older I'm a lot more engaged in the complexities of business. And the challenges in business are just as challenging as in programming… if not more.

What are your hobbies? Do you have time for them?
[Tim]: There is never enough time in the day… It took me a decade to build InterKnowlogy into the company it is today, and at 15 years old, it is a great company run by great people. A couple of years ago I stepped back from my CEO job and promoted my number 1. That was always the plan. I missed my product roots and now, although I still own InterKnowlogy and serve as chairman, I am not involved in the day to day as much. I spend more time in my startup. It is called Actus and makes interactive digital signage products (gesture and voice driven, with Kinect). But I firmly believe in family and hobbies. I don't watch much TV (except for sports and comedy) and I don't play Xbox. But I do love the outdoors. I take a snowboard or a fly rod on every business trip. These days it is rare when I don't sneak in a couple of hours of fly fishing on a business trip. I would really love to fly fish in the mountains of Romania.

If you were 25 years old again, what would you do professionally?
[Tim]: Man, that is tough. I have always loved software, but sometimes I feel like I missed my calling in science. I would love to be a field biologist. Because of the fly fishing I know more about entomology (bugs) than most humans. Also, I always felt like I had a musical bent. I was never afforded the chance to play an instrument; I think had I done so, I might have been good at it. When I do retire, I think I might learn an instrument when I'm not fly fishing or snowboarding.

Tim Huckaby
Microsoft RD, MVP
InterKnowlogy

It is going to be your second visit to Romania. What did you know about our country before your first visit? What is your opinion now?
[Tim]: Well, I'm lucky. For the last 15 years Microsoft has sent me all over the world. That being said, I had been all over the Baltics, all over eastern Europe, and never had the opportunity to get to Romania. So, when I got the chance I jumped on it. The really interesting thing is that my wife Kelly gets asked to go on all these trips with me, and because of the kids and such she rarely comes. But she came to Romania last year and we had an absolute blast. I rented a car and travelled the country. What an adventure that was. Let's just say the GPS took us on roads that were worse than anything we have seen in Mexico. Additionally, the people drive crazy! And I have driven in Rome… I have never seen crazier drivers than in Romania. I'm bringing Kelly again this year. Cannot wait!

How do you see Romanian IT professionals? Both good and bad.
[Tim]: I really don't see bad. I see excited technologists. We have employees in eastern Europe. There is so much software engineering talent in eastern Europe. Granted, the infrastructure, economy, culture, etc. might have slowed a technical revolution a bit in a few places. But it's not like it's not coming… quickly. Hey, we have tons of problems in the US. Everywhere in the world has issues that prevent technology. Technology is rarely the problem.
Mihai Tătăran
mihai@itcamp.ro
Microsoft MVP, CodeCamp, Co-founder ITCamp

Tudor Damian
tudy@itcamp.ro
Microsoft MVP, ITSpark, Co-founder ITCamp
business
Liberty Technology Park Cluj
Liberty Technology Park Cluj is the first technological park in Romania, a park for creative ideas built in a revolutionary place, designed to offer an exceptional growth and quality environment for companies in the IT&C and R&D domains, all in one area unique both conceptually and architecturally.

Liberty Technology Park Cluj team
office@libertytechpark.ro

Silicon Valley, a model to look up to

Picture this: we're taking a step back in history, 60 years ago, in the U.S.A., near Stanford University, and we're watching how, right before our eyes, the community known today as Silicon Valley, the first and most renowned technological park, takes life. In order to properly address their financial and educational needs but also to provide local employment opportunities for graduating students, in the years after WW2 Stanford University proposed leasing their lands for use as an office park, initially named the "Stanford Industrial Park", later becoming the "Stanford Research Park". Leases were limited to high technology companies. The premises that founded the technological park as a concept are highly connected to the idea of bringing together people with similar interests in creatively developing their domains of activity. Exchanging ideas, communication and encouraging young, inspiring people to find and exploit the necessary resources for the development of new technologies are also very important.

A dynamic ecosystem in the heart of Transylvania

Back in the present, we're taking a trip on Gării Street in Cluj-Napoca, to the grounds of the former "Libertatea" furniture factory, the place where Liberty Technology Park is being built, a project designed on the same conceptual premises as Silicon Valley. Situated in Cluj-Napoca, the most important city in Transylvania, Liberty Technology Park Cluj wants to change the local landscape in the most profound way, offering a place where several companies from the IT&C and R&D domains can develop, connect, engage and foster inspiring people and creative ideas in a perpetual flux of innovative development. Liberty Technology Park Cluj is intended to function as a dynamic ecosystem for local and international companies driven by that particular vision that makes ideas come to life and businesses thrive. Designed on the grounds of the former Libertatea furniture factory, Liberty Technology Park Cluj reinvents the factory's identity by restating the industrial heritage that came along with the land into an environment created to meet the needs of every IT&C and R&D company. Architecturally, the central principle of Chapman Taylor's design was to restore the existing spaces into a technology park that would reflect the energy of the tech companies, fueled by Cluj-Napoca's human resources potential.

The Ideal Tech Habitat

This tech habitat will also host the most complex business accelerator in Romania: Spherik. Spherik will be the first Romanian platform of its kind, meant to help, grow and create thriving businesses, a platform designed for developing and implementing business both nationally and internationally. The most important goal of Liberty Technology Park Cluj is to create a sound and valuable environment for all the employees of the IT&C and R&D companies, an environment that concentrates from the very beginning on innovation and creativity, on finding the most efficient solutions and capitalizing on the human resources potential. To succeed, Liberty Technology Park Cluj offers several types of spaces designed to create the ideal working place. Starting with the office spaces and continuing with the time spent outside the office, this habitat meets the needs of every company, including a wide range of services and premium facilities. The park also includes an event area, conference rooms, wide green garden-like areas, a leisure area with a restaurant and a coffee shop, multifunctional sports grounds, and a retail and medical area, all designed to complete the ideal tech habitat.

Liberty Technology Park Cluj is a project developed by Fribourg Development and seeks to intensify connections and communication between companies active in similar domains, in order to generate innovation and progress by creating a revolutionary space.
history
The Cluj IT History (V) Timeline Project
The topic of this issue is straightforward. We want to build a common timeline for as many IT companies as possible in Cluj. We started with the idea of creating a list to populate the IT map (from the previous issue), using varied and unstructured sources, without attempting to comprehensively explore the business environment: for example, the IT Cluster members' list, ARIES, the companies present at JobShop, the BestJobs ads, LinkedIn contacts, the business cards stacked on the desk, etc. And because this project belongs to the community, TSM being just a catalyst, we decided to present this initial list, accompanied by a request to expand it and possibly correct it. Although most of the names used were validated online on the companies' websites, it is nevertheless likely that there will be some differences from the official brand or the one in popular culture, such as Betfair vs. TSE Development, Nethrom Software vs. Yonder, TORA vs. Tora Trading Services, etc. To facilitate feedback, the above wordlet was born.

In an attempt to combat the inertia typical of our readers and explore the historical theme of the series, we challenge you to attach a brief timeline of your company to each name in this wordlet. Examples of events that may populate the timeline: the founding of the company, exceeding a certain number of customers, employees or turnover, brand changes, partnerships, product launches, key contracts, etc. We promise to combine all incoming timelines into one, to provide a common and brief perspective on the development of the local IT community. We consider this exercise both useful and personal, through its self-exploratory character and a simple outcome that can replace the business card of any company. I hope you will enjoy the challenge and answer in large numbers!

Marius Mornea
marius.mornea@todaysoftmag.com
Software Engineer and Founder of Mintaka Research
programming
HTML5: WebAudio API
Radu Olaru
rolaru@smallfootprint.com
Senior Software Developer @ Small Footprint

These recent years have shown us that online applications keep pushing the limits of the web further and further. We might all have been edgy about web applications at some point. We hoped that at some point the world would calm down and see that the web cannot sustain complex applications, and the whole Universe would return to normal. Big applications are for the desktop, and web applications are just dreams of some Google enthusiasts. But then, almost out of nowhere, HTML5 arose. And HTML5 revolutionized not just the web of big business applications; it also opened wide the gates of multimedia applications on the web, especially games.
The multimedia innovation
HTML5 comes with a big surprise for multimedia application developers. We already know plenty about the audio and video tags; we won't talk about them in this article. Even a superficial look reveals that the audio tag is just not good enough for complex multimedia applications: it was designed to hide the complexity of audio playback. Yes, the tag can provide a gorgeous musical background for those feeling nostalgic for '90s sites. But for those who need to develop real-time sound processing applications, HTML5 comes with a brand new interface: WebAudio. WebAudio allows 3D sound positioning, multiple audio source mixing and powerful modular sound processing. The available audio effects include a convolution reverb engine, delays and compression. WebAudio also allows real-time audio data analysis. These features have a very specific target: games and audio processing applications. Software DAWs used to be strictly the realm of the desktop. No artist was ever heard of starting a browser in order to perform live on stage. Nor did it ever make sense for a MIDI keyboard to communicate with a web browser. All of this changes with WebAudio.
A sophisticated infrastructure
To raise the multimedia standard in web browsers so high, WebAudio requires a special foundation. To begin with, the audio data and processing reside in a separate thread from the rest of the application. This way, any processing is safe from interference by the rest of the application. The audio data, the sources and destinations and the processing nodes are all located in an independent context with a separate sync time, keeping track of many sets of audio sources. After such a context is instantiated, we can already specify audio sources, destinations and sound processors connected in different configurations:

var context = new AudioContext();
var source = context.createBufferSource();
var volume = context.createGainNode();
source.buffer = buffer;
volume.gain.value = 0.6;
source.connect(volume);
volume.connect(context.destination);
source.start(0);
WebAudio follows a plugin architecture. The sound is loaded in a buffer (we will see later how), the buffer is sent to an audio source, then to a sound processor and finally to a destination. The default destination for a context is the operating system’s current audio output: usually the line out jack of the sound board. Although using an intuitive syntax, the sound processing is done directly by the audio driver. If the system has a dedicated
audio card, the driver will run using the card's hardware. If the system has just a simple codec, the driver will run its processing on the CPU. In either case, the final decision is made by the browser. The way the browser compiles and runs the WebAudio instructions determines the fate of the audio data. But a good implementation will tend to use dedicated audio resources and avoid any blocking in the audio processing stream.
Nodes and oriented graphs
The initiation of an audio stream is done by defining an audio source. This may be a live input of the audio card (a microphone), an existing audio file or a sound synthesizer (an oscillator). Of course, the graph allows many sound sources of any kind. All the sound sources may be played at once or on demand (at the press of a key, for example, or when receiving a MIDI message). Then, the sound may be sent to different audio processing nodes. The processors include panning, Doppler effects, filters, reverbs, echoes and others. Each node allows multiple ins and outs and may be connected in series or in parallel. There is also an offline audio context, disconnected from a hardware playback source, which may be used to generate audio data ahead of time. The context generates the sounds as a data packet and delivers it in a callback function:

callback OfflineRenderSuccessCallback = void (AudioBuffer renderedData)

interface OfflineAudioContext : AudioContext {
    void startRendering();
    attribute OfflineRenderSuccessCallback onComplete;
}
Audio sources and graph links

The audio processing graph receives data from audio sources. The audio data passing through the graph consists of recorded sounds, accessible to the browser for processing and playback. There are several types of audio sources, based on the sound acquiring method: an existing file on disk, code-generated audio samples, oscillators and hardware audio card inputs. After the sound is acquired and formatted, it may be further sent to processing nodes or to the audio destination.

AudioBuffer

The AudioBuffer formats the sounds as samples, just like in audio files. The basic characteristic of an AudioBuffer is the sample rate – the frequency at which the sound was sampled to obtain useful processing information. These values are specific to the digital audio processing industry and are part of a vast knowledge base we won't cover here. In order to use an audio file, it first must be loaded into a buffer, which is then connected to the context destination:

var request = new XMLHttpRequest();
request.open("GET", "sound.mp3", true);
request.responseType = "arraybuffer";
request.onload = function() {
  audioContext.decodeAudioData(request.response, function(buffer) {
    // buffer contains an AudioBuffer object
  });
};
request.send();

The buffer allows direct sample access:

var buffer = audioContext.createBuffer(1, 5, 44100);
var bufferData = buffer.getChannelData(0);
bufferData[i] = 0.3;

The above code creates a buffer with a single audio channel (mono), which contains five samples at the 44100 Hz sample rate. The buffer data vector is accessible using the getChannelData method. The vector is writable; this way we may create sounds using equations or other generating methods. Finally, the playback of an AudioBuffer is done using the AudioBufferSourceNode:

var source = audioContext.createBufferSource();
source.buffer = buffer;
source.connect(audioContext.destination);
source.start(0);

Oscillator

Another audio source is the oscillator. The oscillator formats the sound as periodic functions, infinitely looped – its basic characteristic is the generator function. There are four types of predefined oscillators: sine, square, triangle and sawtooth. Oscillators may be combined to compose complex sounds.

var source = context.createOscillator();
source.type = 0; // sine oscillator
source.connect(context.destination);
source.start(0);
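Oscillators generate periodic functions for you, but as noted in the AudioBuffer section, the buffer data vector is writable, so a tone can also be computed by hand from an equation. A minimal sketch of computing one second of a 440 Hz sine tone as raw samples (plain JavaScript; the resulting values would be copied into an AudioBuffer channel obtained via getChannelData, as shown above):

```javascript
// One second of a 440 Hz sine tone at the standard 44100 Hz sample rate.
// Each sample is the sine function evaluated at that sample's point in time.
var sampleRate = 44100;
var frequency = 440;
var samples = new Float32Array(sampleRate); // one second of audio
for (var i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(2 * Math.PI * frequency * i / sampleRate);
}
```

The same loop, with a different equation, yields any of the other basic waveforms or arbitrary generated sounds.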
We can also create custom oscillators by composing mathematical functions in a single generator:

var first = new Float32Array(100);
var second = new Float32Array(100);
for (var i = 0; i < 100; i++) first[i] = Math.sin(Math.PI * i / 100);
for (var i = 0; i < 100; i++) second[i] = Math.cos(Math.PI * i / 100);
source.setWaveTable(context.createWaveTable(first, second));
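The wave table above is built from sampled function values; another common way to describe a custom oscillator is through the amplitudes of its harmonics. As a hedged sketch (squareWavePartials is an illustrative helper, not part of the API), the classic square-wave recipe uses only odd harmonics with amplitude 1/n; arrays of this shape are the kind of input the later createPeriodicWave revision of this API consumes:

```javascript
// Fourier coefficients for an n-partial approximation of a square wave:
// odd harmonics only, each with amplitude 1/k (index 0 is the DC offset).
function squareWavePartials(n) {
  var real = new Float32Array(n); // cosine terms: all zero for a square wave
  var imag = new Float32Array(n); // sine terms
  for (var k = 1; k < n; k++) {
    imag[k] = (k % 2 === 1) ? 1 / k : 0;
  }
  return { real: real, imag: imag };
}
```

Raising n sharpens the edges of the resulting waveform, at the cost of more partials to compute.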
Live
In order to use live audio sources like a microphone, the browser has to obtain an audio stream from the operating system. For this reason, these sources are accessible only from the navigator Javascript object:

navigator.getUserMedia({ audio: true }, function(stream) {
  var context = new AudioContext();
  var source = context.createMediaStreamSource(stream);
  source.connect(context.destination);
});
This audio stream cannot be directly altered like the other audio sources (buffers or oscillators) but, like any other source node, it may be linked to any other processing node defined in the graph. This type of audio source works very closely with the operating system and the installed drivers, and the acquisition quality depends on both. The Chrome browser optimizes live audio data streams in particular and allows the development of true real-time sound processors. The implementation is based on quick routing of all audio processing to the audio driver. If the system has a dedicated audio board, the audio processing will have minimal delay. The Resources section of this article contains some links which showcase the execution speed of the WebAudio platform in the Chrome browser.
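The real-time analysis mentioned earlier usually means pulling blocks of time-domain samples out of the graph (via an analyser node) and computing statistics over them; the per-block computation itself is ordinary JavaScript. A minimal sketch of an RMS level meter over one block of samples (the analyser wiring is assumed, not shown):

```javascript
// Root-mean-square level of one block of audio samples in [-1, 1].
// A value of 0 is silence; a full-scale square wave gives 1.
function rmsLevel(samples) {
  var sum = 0;
  for (var i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}
```

Calling this on every block delivered by the graph is enough to drive a simple input level indicator.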
Changing the audio volume
There are many audio processors, each with its specifics, but in this article we will take a look only at the gain processor. We may change the audio volume of a source by connecting it to a GainNode. The node source may be any audio input or any other processing node receiving audio data:

var gainNode = context.createGainNode();
Figure 1 - Audio context
source.connect(gainNode);
gainNode.connect(context.destination);
gainNode.gain.value = 0.8;

This way we may crossfade between two audio sources, for example:

var envelope = context.createGainNode();
var now = context.currentTime;
source.connect(envelope);
envelope.connect(context.destination);
envelope.gain.setValueAtTime(0, now);
envelope.gain.linearRampToValueAtTime(1.0, now + 2.0);
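A plain linear crossfade dips in perceived loudness halfway through; a common refinement is the equal-power curve, where the two gain values trace a quarter circle so the summed power stays constant. A hedged sketch of computing the pair of gain values (crossfadeGains is an illustrative helper; its results would be assigned to the gain.value of two GainNodes, or fed to setValueAtTime ramps):

```javascript
// Equal-power crossfade gains for a mix position x in [0, 1]:
// x = 0 plays only source A, x = 1 only source B, and
// a*a + b*b stays 1 throughout, keeping total power constant.
function crossfadeGains(x) {
  return {
    a: Math.cos(x * 0.5 * Math.PI), // source A fades out
    b: Math.sin(x * 0.5 * Math.PI)  // source B fades in
  };
}
```

Sweeping x from 0 to 1 over time produces a crossfade without the mid-point volume dip of a linear ramp.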
Tendencies
The current state of the WebAudio platform is not quite good. Basically, it is only completely supported by the Chrome browser – and only on the desktop right now. Only the latest Chrome versions on the Android platform support a minimal subset of the WebAudio platform, and the default Android browser is not included here. WebAudio is also supported by iOS 6, ChromeOS and ChromeFrame. But the platform is defined and accepted in the HTML5 standard, which assures a bright future for web multimedia application development. Already we may build audio synths and real-time audio processors, and soon Chrome will implement MIDI communication, allowing music composition using MIDI keyboards and other professional multimedia devices. And, as an innovation, WebAudio may work together with WebSockets, allowing true distributed, real-time audio performances on the web. Users may connect to a single web site and start creating high quality music together. The first demo of this is the Jam with Chrome site.
Resources and references

http://dashersw.github.com/pedalboard.js/demo
http://www.jamwithchrome.com
http://kevincennis.com/mix
http://www.tenthcirclesound.com/sympyrean
http://docs.webplatform.org/wiki/apis/webaudio
http://webaudio-io2012.appspot.com
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
technologies
Enterprise Application Development
Back towards the end of the nineties, most of the CDs that accompanied books were labeled as "multimedia" CDs: 3 images and an audio file were enough to promote a CD stuffed with 600 pages of text to a "multimedia" CD.
Lucian Ciufudean
lucian.ciufudean@hp.com Server Automation Functional Architect @ HP Software Cluj
Similarly, software companies that used to produce "applications" started to brand themselves as producing "enterprise applications". I won't follow the controversial road of defining what an enterprise application is; instead I will enumerate a few important characteristics of our development process and of an application we are developing in Cluj. Here we build applications with specialized teams: we let developers and testers work on the code, while Information Engineers write the official product documentation. We make sure that our visual interfaces are easy for our users to work with by collaborating with UI Designers. The Product Management team sits close to the customer and builds the vision of the next product versions. The Test Automation team is our supplementary line of defense against poor quality – they cover the product with automatic functional tests
(besides the programmer-written ones): "I recommend that developers write their own unit tests [...] I recommend that an independent tester write integration tests." – Kent Beck

We deploy the Functional Architect role so that we can work effectively in SCRUM without requiring the end user's availability: the Functional Architect is the SCRUM Product Owner. We work in short sprints so that we can react fast to changes, and because SCRUM is about continuous improvement, well-known consultants provide us with SCRUM trainings. Our Support team handles the post-delivery escalations: ultimately these can be translated into new features and hot fixes. The Support and the Current Product Engineering (CPE) teams are very active in the public product forums in order to help clients with their issues. We position these
Server Automation: performance graphs. (Note: Confidential information was edited out of the image)
two teams between the Client and RnD in order to protect the latter from having to deal with external incidents. We use a dedicated team we call Professional Services that helps customers deploy the application in complex environments (see deployment topologies for Server Automation below) or when a feature customization is requested. We are interested to know what clients think of our plans, so we run the Design Partner Program, where we present them potential new features and ask for feedback. We help the Sales team work more efficiently by delivering deep-dive demos of the features included in the next product version – this is our Early Program Release. Our customers are the big worldwide players in their industry. They have a large number and variety of feature requests, which we also see as ideas for new functionalities: after all, customers are the ones who use the product in a business environment, not just in the lab. As an example, Server Automation is used in the customers' datacenters to manage thousands of servers. It supports over 90 combinations of operating systems
on various architectures like x86, x64, IA64, IBM zSeries, Solaris SPARC and PowerPC. To account for such variety, R&D uses a large virtualized lab with about 1500 machines for development activities. Server Automation is an application that fully automates and manages data centers. Starting with the discovery of servers
in the network, it also offers installation and configuration of applications and patches, auditing, unified management of physical and virtual machines and integration with virtualization stacks from Oracle, VMware and Microsoft. We deploy Server Automation in diverse topologies to allow the management of
geographically dispersed data centers.

Multimaster topology: Application nodes (SA Cores) that replicate data

Satellite deployments are appropriate for smaller data centers where the bandwidth might be an issue. The next picture shows a key element that allows managing datacenters that span multiple networks: the gateway, one of which is deployed at each end of a communication channel.

The end users are mainly system administrators: we know they like the Unix world, so we offer a Unix-like console (SA Global Shell) they can use to browse the full application model as a virtual file system. SA Global Filesystem offers a unified view of the SA data model (servers, packages, customers, facilities, audit policies, etc.) and the managed server content (file system, etc.). The next commands list the servers and their OS, depending on the permissions of the current user:

$ ls /opsw/Server/@
abc.hp.com  m33.hp.com  gist.hp.com
$ cat /opsw/Server/@/abc.hp.com/attr/osVersion
Microsoft Windows 2000 Advanced Server Service Pack 4 Build 2195

And because we respect users irrespective of their OS preference, we can even list the registry entries of a Windows server from this Unix-like console:

$ ls /opsw/Server/@/abc.hp.com/registry/Administrator
HKEY_CLASSES_ROOT HKEY_CURRENT_CONFIG HKEY_CURRENT_USER HKEY_LOCAL_MACHINE HKEY_USERS

Application interoperability plays a major role in the enterprise world, so we offer a comprehensive API and client libraries for the following client types: Java RMI, Python, SOAP, C# and Global Shell scripts. The Javadoc is not our only standard way of documenting the API: Twister is an application that allows exploring and calling the API from the browser, built to look and feel like the Javadoc. We respect our API users, so a dedicated team of experienced developers has the task of ensuring that the binary and functional compatibility of the API is preserved across releases. We know that the SA performance is paramount, so we have a dedicated team to test its non-functional aspects. Here are a few performance graphs they produced:

Server Automation is one of the 4 enterprise applications that a team of 150 employees develop in HP Software Cluj. Here we support innovation through a global HP program (InnoStream) and locally through Vibrant Day – one day a month when you have the freedom to develop anything you like, be it a personal or an HP project, by yourself or as part of a team. If your idea is really that good, we support you in presenting it during the well-known technical HP event, TechCon… And we do all this so that our enterprise applications get better each day.
programming
Aspect Oriented Programming
Introduction & general ideas
While paradigms such as MDD (Model Driven Development) or TDD (Test Driven Development) play an important role in today's software development, a new paradigm, called AOP (Aspect Oriented Programming), has become increasingly popular lately. The role of AOP is to modularize some central aspects of an application, also known as cross-cutting concerns.
Knall Andreas
knall.andreas@msg-systems.com Java Team Lead @ .msg systems
The following examples illustrate in a simple manner several cases where the use of AOP in a software application can make a significant difference:
• We have taken over an existing Java application from another provider and the application doesn't have a logging mechanism implemented yet. This example will be addressed in a more simplified form later on.
• In my Java application I want to validate the parameters of all methods from the service layer of the application.
• I plan to "repack" some exceptions in the service layer before they are transferred to the client. Usually a Java-specific exception is converted into an application-specific exception.
Needless to say, we can encounter a series of situations similar to those mentioned above, but these three were chosen precisely because they all have at least two things in common. First, these situations fall into the same category, namely cross-cutting concerns. This expression covers all the features that are common to a set of classes, but which represent secondary functionality of an application. In addition to the examples previously stated (logging and error handling), other features, such as authorization or application security, can be considered. The role of these features is not minimized, but they are considered secondary features, due to the fact that they are not a critical part of the business logic. Secondly, assuming that we are dealing with a typical enterprise application, it is
obvious that implementing any of the ideas above usually involves introducing almost identical code in several parts of an application. In other words, we have a high potential for duplicate code. An elegant solution would be creating a single code block for each presented issue, and reusing this code block wherever needed in the application. For this type of problem, AOP can be the saving solution. Before presenting a specific example of AOP, let's first review some of the concepts underlying this paradigm:
• Advice. An 'Advice' is our 'cross-cutting concern' functionality that we would like to apply to some of our code.
• Join Points. A point in program execution at which an Advice can be invoked.
• Pointcut. A 'Pointcut' is a way to quantify and identify 'Join Points'.
• Aspect. An 'Aspect' is the combination of an 'Advice' and a 'Pointcut'.
This paradigm has different implementations in different programming languages. In the world of Java there are several frameworks that provide AOP facilities. The following programming languages and frameworks are some of the ones that offer AOP capabilities: .NET Framework, Java, Cobol, Javascript. If we think about Java in particular, the most popular frameworks that provide AOP features are AspectJ and Spring AOP. Please note that every AOP implementation may have a slightly different model, more or less evolved than that represented by the four elements
www.todaysoftmag.com | nr. 10/April, 2013
19
outlined above (advice, join points, pointcut and aspect). All implementations differ from one another, some offering complete AOP solutions, such as AspectJ, and others just basic functionalities. To exemplify this statement we will present the four types of Advice offered by Spring AOP:
• Before advice: Advice that is invoked before executing a method.
• After returning advice: Advice to be executed after a method completes normally.
• After throwing advice: Advice to be executed if a method exits by throwing an exception.
• Around advice: Advice that combines all three Advices mentioned above.
As you can see, Spring AOP only provides the possibility of applying an Advice to a method. Other frameworks, such as AspectJ, offer the possibility of adding an aspect to class members. Moreover, AspectJ provides a wide range of functionalities, such as an 'Around Advice' that applies only to methods with a certain type of parameter, for instance. We continue by presenting a real-life AOP example using the AOP implementation of the Spring framework. The choice is determined by Spring's simplicity and increasing popularity, which makes the following example easier to understand. The presented example uses all of the concepts presented above (advice, join points, pointcut and aspect) and provides a logging facility for DAO classes within a Java enterprise application. Further on, assume that we have the following DAO (data access object), BookDAO, whose role is to read books from the database and insert books into the database. The reading method throws an application-specific exception when the book is not found.

public class BookDAO {
    public BookEntity findBookById(Long id) throws ItemNotFoundException {
        // Finding Book code
    }

    public BookEntity saveBook(BookEntity book) {
        // Saving Book code
    }
}
The book entity, in its simplified form, looks like this:

public class BookEntity {
    private Long id;
    private String name;

    public void setName(String name) { this.name = name; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public Long getId() { return id; }
}

If the BookEntity class has a simple form, the implementation of our logging advice is a bit more complicated, implemented in the MyAroundMethod class. In order to offer a concrete example, I have decided to implement an Around Advice. To achieve this, our MyAroundMethod class must implement the MethodInterceptor interface.

public class MyAroundMethod implements MethodInterceptor {
    @Override
    public Object invoke(MethodInvocation methodInvocation) throws Throwable {
        System.out.println("Name of the intercepted method: "
            + methodInvocation.getMethod().getName());
        System.out.println("Parameters of the intercepted method: "
            + Arrays.toString(methodInvocation.getArguments()));
        System.out.println("MyAroundMethod: interception before method execution");
        try {
            Object result = methodInvocation.proceed();
            System.out.println("MyAroundMethod: interception after method execution");
            return result;
        } catch (ItemNotFoundException e) {
            System.out.println("MyAroundMethod: interception after an exception was thrown");
            throw e;
        }
    }
}

MainClass, as the name already suggests, is the class that drives our small application. The scenario is: we load a book from the database, modify its name and save it back. Then we try to load the book with the ID 2 from the database and, as we want the example to exercise the exception path as well, it is not found.

public class MainClass {
    public static void main(String[] args) {
        ApplicationContext appContext = new ClassPathXmlApplicationContext(
            new String[] { "Spring-DAO.xml" });
        BookDAO bookDAO = (BookDAO) appContext.getBean("BookDAOProxy");
        try {
            System.out.println("*******************");
            BookEntity bookEntity = bookDAO.findBookById(new Long(1));
            bookEntity.setName("New book name!");
            System.out.println("*******************");
            bookDAO.saveBook(bookEntity);
            System.out.println("*******************");
            bookEntity = bookDAO.findBookById(new Long(2));
        } catch (Exception e) {
            // catch the exception
        }
    }
}

The Spring configuration, presented in the Spring-DAO.xml file, is the file that defines all the AOP elements presented for our example above. The reader who is not familiar with specific Spring elements can easily understand the key points of this configuration file by following the elements' comments.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

    <!-- Here the BookDAO bean is defined -->
    <bean id="BookDAO" class="com.msg.BookDAO" />

    <!-- Here our Advice is defined as a bean; it is used when defining the aspect -->
    <bean id="MyAroundAdvice" class="com.msg.MyAroundMethod" />

    <!-- Here we define a proxy that attaches the aspect or aspects to our
         BookDAO bean. The target property indicates the bean targeted by the
         aspect, and the interceptorNames property indicates the aspect itself. -->
    <bean id="BookDAOProxy"
          class="org.springframework.aop.framework.ProxyFactoryBean">
        <property name="target" ref="BookDAO" />
        <property name="interceptorNames">
            <list>
                <value>myOwnAdvisor</value>
            </list>
        </property>
    </bean>

    <!-- Here the Aspect is defined, composed of the pointcut and the advice -->
    <bean id="myOwnAdvisor"
          class="org.springframework.aop.support.RegexpMethodPointcutAdvisor">
        <!-- Here our pointcut is implemented -->
        <property name="patterns">
            <list>
                <value>.*Book.*</value>
            </list>
        </property>
        <!-- Here we attach our advice to the aspect -->
        <property name="advice" ref="MyAroundAdvice" />
    </bean>
</beans>
What can be observed is the lack of a specific list of join points in the configuration file. These points are usually not defined in the configuration files, but are the result of defining one or more pointcuts. In this case we defined a pointcut using a Java regular expression. Pointcuts can be defined in several ways, obviously depending on the chosen AOP solution. We chose a regular expression to illustrate how powerful this join point identification feature is, as the power and flexibility of Java regular expressions is well known. In our example the join points are the two methods defined in BookDAO. What one can see on the console during code execution is the following sequence of messages:

*************************
Name of the intercepted method: findBookById
Parameters of the intercepted method: [1]
MyAroundMethod: interception before method execution
MyAroundMethod: interception after method execution
*************************
Name of the intercepted method: saveBook
Parameters of the intercepted method: [BookEntity]
MyAroundMethod: interception before method execution
MyAroundMethod: interception after method execution
*************************
Name of the intercepted method: findBookById
Parameters of the intercepted method: [2]
MyAroundMethod: interception before method execution
MyAroundMethod: interception after an exception was thrown
Thus, using AOP we managed to introduce a logging mechanism for all our DAO classes without writing a lot of code. Analogously, each cross-cutting concern in your application can be implemented in a transparent manner, in its own class (separated from the business logic) and used in all the places where it is needed. In the last part, we would like to expose some ideas that have crystallized over time around the concept of AOP, along with our team's point of view gained from using AOP. The most important thing to remember is that AOP is not a universal solution for the many problems that occur during software development. It is a relatively new approach that can help in some situations but not in others. Of course, there are also disadvantages, and they are mainly related to two important aspects. The first is performance, when in a large application we use many aspects that can possibly interact. Therefore, it is preferable that only some features of an application – the really important ones – be shaped and modularized with AOP. A second disadvantage is the organization and execution of the code, which, after introducing AOP, makes the debugging process difficult. We present a common scenario that seems to be frequently met when using such an approach. Imagine a medium to large enterprise application, in which a number of aspects cover most of the secondary functionality, and development team members who are not familiar with AOP are debugging a relatively complex functionality. In such cases, it sometimes happens that members of the development team are unable to fully explain the code flow, which can lead to effects at code execution that are bizarre, to say the least. The mystery is even harder to elucidate if the aspects are defined through well-hidden and unfamiliar configuration files instead of annotations.
Even when we overcome these initial inconveniences and use specialized software tools, debugging can be, cautiously put, uncomfortable, and the code flow difficult to follow. At this point it should be noted that there are different forms in which the elements of AOP can be defined. For example, an advice can be defined as in our example, the old-school way, through a configuration file, or in a more contemporary way, through annotations. But it is worth mentioning again that these elements vary from implementation to implementation.
Conclusion
We presented the use of AOP based on a short Spring AOP example. What should we remember? AOP is a new paradigm, with multiple implementations, which, if used properly, can bring major benefits to software development.
management
The Challenges of a Business Analyst
Daniela Haliga
daniela.haliga@endava.com Business Analyst @ Endava Iași

The Business Analyst's main responsibilities are understanding the business need and analyzing and modeling the business processes associated with software project development. Usually, these projects are complex, and therefore the Business Analyst must obtain information from all the available sources, from all the key stakeholders. And it is not an easy job to do. The whole process of developing and implementing software solutions puts the Business Analyst in the situation of dealing with certain challenges. Which one is the greatest? It would probably be difficult to establish, but we can say that the following carry weight and have a direct impact, positive or negative, on a Business Analyst's day-to-day activities.
Achieving the objectives
How many times did you have to organize different meetings with different objectives? And somehow you ended up in the following situation: "we've met, we've talked for a couple of hours and we haven't decided anything"? No matter the objective to be accomplished (identifying opportunities or key stakeholders, documenting the requirements, business analysis), it is recommended, first and foremost, that you have the big picture of the business need and its objectives. To be more specific, know where you are, or the phase your project is in. It is best practice to know, before you start the meeting, precisely what you want to obtain in the proposed time. You have to focus on what you can achieve (according to priorities), and the things that need a further, deeper analysis must be postponed to a future meeting.
Establishing a business relationship with the stakeholders
One of the greatest challenges that a Business Analyst faces is that of breaking the communication barriers (among the team or within the organization). And in order to succeed, the Business Analyst must make an extra effort to find out the answers to all the problems. What can you do? Identify all the key stakeholders which can define the business
need, which can hold valuable information or which can have a say regarding the implemented solution. The simple fact that they don't want to share information should raise a question mark. And it is the duty of the Business Analyst to understand the motives at the base of this behavior:
• Are they resistant to change? Are they used to the way they work and therefore don't accept or are not comfortable with the new system?
• Are we dealing with political issues or pride?
• Or do they just not understand why the change is happening?
After understanding the underlying motives, you can act accordingly! What you must do, since it works almost all the time, is to win the client's trust. This happens after a few working sessions. Once the contact with the client is initiated, there is better communication. In case you are unable to communicate directly, then some extra abilities and qualities are needed: a positive attitude, efficient communication, a problem-solving orientation, team spirit, trustworthiness, flexibility, and adaptability. In order to facilitate a connection with the client, you can try lines like: "what have you been doing?"; "how's the weather?" Get close to your client! Earn the client's trust and establish long-term relations. Involve the client in project development;
listen to your client!
Facilitating the meeting
Being a leader, the Business Analyst must sometimes fulfill multiple roles. For example, he facilitates the meeting while simultaneously taking notes. It is a common mistake to conflate the role of the scribe with that of the facilitator. Some tricks to help you be more efficient:
• Use an agenda with blank spaces between questions;
• Use templates;
• Use acronyms (NR – new requirements);
• Put down all the important ideas;
• Give yourself a few minutes to verify you've covered all the important things;
• Set a time limit and try to keep to it;
• Close the meeting by emphasizing the main ideas everybody agreed on – this proves that the objective was achieved. Many times what you thought you understood can be different from what the others understood.
Invest in good requirements
There is an old saying about six blind men who encounter an elephant for the first time in their lives. Although they couldn't see the elephant, they wanted to know what it looks like, so each of them touched a part of the animal. The first put his hand on the elephant's leg and said: "Oh, the elephant is like a tree!" "That is not true!" said the second one, who was holding the tail of the elephant. "It looks exactly like a rope!" The third one said: "No, the elephant is like a wall!" The fourth, with his hand on the trunk: "It's like a snake!" The fifth, with his hand on the tusks, said it was like a spear, and the last one said: "You are all wrong. The elephant looks like a fan" (while he was holding the elephant's ear)1. What is the lesson, and what is the connection with our theme? All six men were right. The elephant has all six characteristics they described. But none of the six descriptions gives the complete image of the elephant, because each man describes it from his own perspective. The same logic can be applied to documenting the requirements. It is best practice to develop and analyze requirements from different perspectives. Use Wh- questions (Why/Why now/Why not) and avoid can/could/should questions, which might suggest the answer. What is the purpose? The purpose is to compare the requirements from different stakeholders and to reveal duplications or misinterpretations. If these exist, then they are problems that should be resolved.
Align different perspectives
We have seen earlier that the same system functionality may have different meanings, and therefore different visions, from different perspectives. These differences come from the stakeholders' areas of activity (a marketing department employee sees the product differently from an IT department employee). Aligning all these perspectives in one single direction – that is
1 source: www.modernanalyst.com
the real challenge that a Business Analyst faces. This small thing can mean the failure or the success of the project. What can you do? The Business Analyst must have an overview of the whole business and must know the real need of the business that must be satisfied; he must take into consideration all the perspectives and know the organization (environment, product, market, competitors) and the stakeholders involved (who they are, what their level of interest/power is), as well as their attitude towards the implemented solution (whether they will be in favor or against it). The Business Analyst must be a true mediator (solving conflicts) and a good negotiator, so that he facilitates an agreement or a compromise.
The lack of stakeholder’s engagement
The participation of stakeholders is crucial, because they hold key information on the business, which has an impact on the progress of the project. If we don't have their approval and the collaboration is difficult, then the project may be at risk. Here are some of the most common problems between Business Analysts and stakeholders, along with solutions:
1. Stakeholders that can't get involved – those who were involved in planning/configuring a project that did not end up well, and who are now afraid to take responsibility. This also puts the Business Analyst in a difficult position, because he does not want to miss something important that would prejudice the software solution.
What can you do? If there is time, use discovery techniques (brainstorming, workshops) and fewer interviews and observations. These techniques facilitate new ideas and group creativity, and therefore give confidence to stakeholders. Assure the stakeholders that if something was missed, those things will be given higher priority and will be implemented in the next versions of the system.
2. Stakeholders that don't compromise – those with strong personalities and different perspectives. What can you do? Underline the fact that the lack of compromise can represent a risk, because time is wasted and the team can begin to feel pressure and frustration. Make sure you have taken into consideration all the perspectives, and underline that the compromise itself will not affect the performance and the safety of the job.
3. Stakeholders that are not interested – those who consider the project unimportant. What can you do? Try diplomacy. For example, if you talk directly to the person: "Andrew, no other employee knows the system better than you. And for that, we
really need you. You would have a significant contribution if you shared your knowledge, and then we could be sure that everything is in place."

A technical or non-technical Business Analyst
Must the Business Analyst hold knowledge of the latest technologies or all the technical details of the system? If yes, what is the required degree of understanding of all these things? How technical should a Business Analyst be? Well, the opinions on this subject are divided. But what is certain, and most agree with, is the fact that a Business Analyst should hold sufficient technical knowledge or competences to facilitate communication with the development team. Regarding the relation with the client, he would have to understand in what way the technology could improve the business. It is very important that "technology should never dictate the way business is going". For that reason, the answer to the question "Is it possible to be a non-technical person and still excel as a Business Analyst?" is "Yes". Of course, in this case you should compensate with abilities like analytical thinking, communication skills and leadership.

Remember
Specialists recommend us to always investigate the root causes, and not to treat the symptoms. Always choose agility and not perfection. Organizations should respond to external factors and recognize the importance of relevant and new solutions. But above all, it is best practice to always be flexible and adaptable to any type of situation and audience. We may have concepts, experience and knowledge, but, because it is unpredictable and can have a huge impact (positive and negative) on all levels, the human factor, from my point of view, remains the greatest challenge.

Sources
"Business Analysis", second edition, by Debra Paul, Donald Yeates and James Cadle;
"The Business Analyst's role" by Allan Kelly;
www.iiba.com;
www.businessanalyst.com;
http://www.bridging-the-gap.com
HR
Communities of practice, learning by doing and exploration Involving the brain and body in the learning process
Even if many aspects of human evolution are controversial, one thing is unanimously accepted by anthropologists: we moved! (Brain Rules, John Medina, 2008). Over several hundred thousand years, from Homo habilis to Homo erectus and later Homo sapiens, our brain developed by moving distances of "about 10 to 20 km a day for men and about half that distance for women", says the anthropologist Richard Wrangham.
Cristina Donea
cristina.donea@isdc.eu HR Specialist @ ISDC
In the history of humankind, education and learning have had an indisputable adaptive value. Since antiquity, the learning process has benefited from special attention, the models and principles used being extremely diverse and dynamic. These would stimulate the imagination, creativity and involvement of the student, having a playful, imagistic, practical style, in line with the natural learning process of the brain. The decline of holistic methods in education started with the invention of the printing press by Johannes Gutenberg around 1440, even if it is not the first thing we think of when talking about education. This innovation had various repercussions on education, some of which are:
• written text over images;
• individual over collaborative learning;
• abstract over practical and concrete concepts;
• separation of the mind and body in the learning process.
These effects are maintained until the present day and unfortunately form the basis of the traditional Romanian education system. For this reason, training programs have great popularity, and non-conventional learning principles and contexts have been very successful among trainees.

Teaching vs. authentic learning
Teaching is not learning. This common fact, which can be overlooked and neglected, is fundamental for each successful training and development activity. Trainers often focus a lot on the content and spend approximately 80% of the total time preparing the training materials (course support, handouts, slides). Authentic learning happens when the trainee is actively, dynamically and entirely engaged, and less when s/he simply listens to a presentation. In ISDC, the motto of the training process is "focus on learning, not training". In fact, the trainees are the ones who have to be on the catwalk, to master their own learning process. "The facilitator's role is to initiate the learning process and then to get out of the way" (John Warren). Often trainees are treated as consumers of information, knowledge and abilities, being overwhelmed with models and theories that they forget as soon as they leave the classroom. Or they never use them. The educational revolution implies a paradigm shift: treating the trainees as creators and stewards of their own learning process, of their new content of information, knowledge and abilities. The optimum ratio of teaching activity versus practice within a learning experience is 30% to 70%.

On average, people remember:
• 20% of what they read
• 30% of what they hear
• 40% of what they see
• 50% of what they say
• 60% of what they do
• 90% of what they see, hear, say and do
Source: Rose, C., & Nicholl, M.J., Accelerated Learning for the 21st Century.
Therefore, it is extremely important for trainers to create the most appropriate context, environment and situations for maximizing the effects of the training. Moreover, applying the principles of accelerated learning leads to positive effects such as a decrease in the time spent in the classroom (even by half) and in costs of any type, and an increase in transfer, satisfaction, motivation and engagement of the trainee, both at work and in applying the newly gained knowledge. Scientific discoveries referring to the way our brain works (Bransford, Brown & Cocking 2000; Damasio 1999; Medina 2008; Pinker 1997) call for a re-examination of the traditional design, teaching and delivery principles of educational programs.
How do we accelerate the effects of trainings?
The most successful training programs are based on the fact that learning involves both the brain and the body. It is a conscious, rational, verbal process, but it also involves emotions, senses and receptors. "We are powerful and natural explorers (...) The desire to explore never leaves us despite the classrooms and cubicles we are stuffed into" (Brain Rules, John Medina). During training, the trainees should be stimulated to:
• Work in teams;
• Create cognitive maps;
• Build a model (concept, process or procedure);
• Actually work on the computer, putting what they are taught into practice in real time and benefiting from the trainer's feedback;
• Discuss after each exercise, simulation or experience, drawing conclusions where required;
• Debate different topics;
• Carry out projects which require movement, exploration and active experience (one example would be field trips).
In ISDC, we organize training and knowledge transfer sessions, debates and certifications, and we attend conferences and external events. The number of structured and organised learning events increases each year; for instance, in the training calendar for 2012, 160 events were hosted (as compared to 89 in 2011) and 46 certifications were obtained (OCJP, Microsoft, ISTQB, Certified SCRUM Master, ITIL, Prince 2 Foundation and Prince 2 Practitioner). Some of the external events in which we participated in 2012 were QCON (London), NOSQL (Amsterdam), Devoxx (Antwerp), Essentials of Mule (London), SpringOne on the Road (London), Mule Summit (Amsterdam), EuroSTAR (Amsterdam), NEXT 2012 (Cluj-Napoca), SCRUM Gathering (Barcelona), Open Agile (Cluj-Napoca), Sakai Conference (Atlanta), Service Oriented Architecture Suite 11g Implementation (Bucharest), Liferay Symposium (Budapest), IT Camp (Cluj-Napoca) and Code Camp (Cluj-Napoca). After the events, the participants organise transfer sessions with their colleagues interested in the topic and/or with the communities of practice they are part of.
The methods listed above are just some suggestions for accelerating learning, and they grant the trainees the role of information creators. In ISDC, one of the most successful and popular learning and development activities, with an active focus on creating new knowledge, is the community of practice.
Communities of practice
As a concept, communities of practice are inspired by the organisation pattern of guilds. Seth Godin argues in his book Tribes for this natural means of organization and progress through sharing knowledge and debating in communities: "A tribe is a group of people connected to one another, connected to a leader, and connected to an idea. For millions of years, human beings have been part of one tribe or another. A group needs only two things to be a tribe: a shared interest and a way to communicate." (Seth Godin, Tribes: We Need You to Lead Us). Starting from people's desire and need to be part of a group, communities of practice introduce a new form of learning and transfer of knowledge within organisations. Moreover, communities may have a significant contribution to:
• Aligning successful practices, procedures and tools used within the company
• Tracking knowledge and identifying training gaps/needs
• Research based on interest areas and sharing results within the community and at company level
• Sharing information between the members of the community
• Identifying reusable components
• Technical support offered by the experts within the community.
Approaching this method of learning and sharing within a company implies:
1. Identifying knowledge/practice sharing and alignment needs at company level;
2. Putting together a short list of topics of interest (technology, discipline or role) which would lead to the highest added value within the company and have the greatest impact; this step must be in line with the organisation's vision and strategy;
3. Deciding which communities have the greatest impact; this step implies that the management and the other stakeholders agree to these. To start with, a small number of communities is recommended, in order to facilitate monitoring and continuous improvement;
4. Setting a framework in which the communities act;
5. Choosing a sponsor/accountable/Product Owner from the management team for each community;
6. Choosing a leader/SCRUM Master for the community;
7. Drafting a plan of approach for the community's activity (the community's current situation, mission and vision, general objectives, strategy, the manner in which benefits and progress will be measured, activities, learning methods, etc.);
8. Appointing the key members of the community;
9. Defining the selection criteria for new members;
10. Defining a set of norms according to which the community functions, in order to ensure equal opportunity for the members to contribute to meetings, increase efficiency and facilitate the achievement of the agreed goals. These norms should contain expectations referring to participation, decision making, contribution, confidentiality and code of conduct during the meetings.
As mentioned in the previous chapters, in order to transform training experiences into authentic learning, the creativity and active involvement of trainees in creating the informational content is essential. Learning is facilitated by exploration, and people are natural explorers. One of the most efficient methods of learning, and also of knowledge management within an organisation, is the community of practice: an active, motivating, creative method which is most certainly worth exploring.
programming
NoSQL Introduction
NoSQL – one of 2013's trends. If three or four years ago we rarely heard about a project using NoSQL, nowadays the number of projects using non-relational databases is extremely high.
Radu Vunvulea
Radu.Vunvulea@iquestgroup.com Senior Software Engineer @iQuest
In this article we will see the advantages and challenges we could face when using NoSQL. In the second part of the article we will analyze several non-relational solutions and emphasize their benefits.
What is NoSQL?
The easiest definition would be: NoSQL is a database that doesn't follow the rules of a relational database management system (RDBMS). A non-relational database is not based on a relational model. Data is not grouped in tables; therefore there is no mathematical relationship between records. These databases are built to run on large clusters. Data in such storage does not have a predefined schema. For this reason, any new field can be added without any problem. NoSQL appeared and developed around web applications; consequently, the vast majority of its functionalities are those that a web application needs.
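The schemaless idea can be sketched in a few lines of Python (an illustrative toy only, not a real NoSQL client; the field names and data are invented):

```python
# A "collection" of schemaless documents is just a list of dicts;
# no schema is declared or enforced.
articles = [
    {"title": "NoSQL Introduction", "author": "Radu"},
    # A later document can simply carry a new field - no migration needed:
    {"title": "HBase X-Ray", "author": "Mihai", "tags": ["bigdata"]},
]

# Queries must tolerate documents that lack the new field:
tagged = [a for a in articles if "bigdata" in a.get("tags", [])]
print(tagged[0]["title"])   # → HBase X-Ray
```

Adding the `tags` field required no schema change; the cost is that every query has to handle its absence.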
Benefits and risks
A non-relational database model is a flexible one. Depending on the solution we use, we can have a very loose model that can be changed with minimal cost. There are many NoSQL solutions that are not model-based. For example, even though Cassandra and HBase have a predefined model, adding a new field can be done easily. There are various solutions that can store any kind of data structure without defining a model. An example could be those storing key-value pairs or documents.
28
nr. 10/April, 2013 | www.todaysoftmag.com
In NoSQL taxonomy, a document is seen as a record from relational databases, and collections are seen as tables. The main difference is that in a table we have records with the same structure, while a collection can contain documents with different fields. Non-relational databases are much more scalable than the classical ones. If we want to scale a relational database, we need powerful servers instead of adding some machines with a normal configuration to the cluster. This is due to the way in which a relational database works, and adding a new node can be expensive. By contrast, the way in which a non-relational database is built easily allows horizontal scaling. Moreover, these databases are suitable for virtualization and the cloud. Taking into account the databases' dimensions and the growing number of transactions, a relational database is much more expensive than NoSQL. Solutions like Hadoop can process a lot of data. They are extremely horizontally scalable, which makes them very attractive. Concerning costs, a non-relational database is a lot cheaper. We do not need custom hardware or special features to create a very powerful cluster. Using some regular servers, we can have an efficient database. Certainly, NoSQL is not only milk and honey. Most of the solutions are rather new on the market compared to relational databases. For this reason some important functionalities, especially enterprise ones, may
be missing - data mining and business intelligence, for example. NoSQL has evolved to meet the requirements of web applications, which is the main cause of some missing features that are not necessary on the web. That does not mean they are missing entirely and cannot be found; rather, they are not quite mature enough, or they are specific to the problem that the NoSQL solution is trying to solve. Because they are so new to the market, many NoSQL solutions are pre-production versions, which cannot always be used in the enterprise world. The lack of official support for some products could be a show-stopper for medium and large projects. The syntax with which we can query a NoSQL database is different from a simple SQL query; we usually need to know some programming concepts. The number of experts in NoSQL databases is much lower than in SQL. Administration may be a nightmare, because support for administrators is presently weak. Moreover, ACID and transaction support is not common in NoSQL storage. The queries that can be written are pretty simple, and sometimes storages do not allow us to "join" the collections; therefore we have to write the code to do this ourselves. All these issues will be solved in time, and the question we must ask ourselves when we think about architecture and believe NoSQL could help is "Why not?"
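Writing the "join" in application code, as mentioned above, can look like this minimal sketch (collection names and data are invented for illustration):

```python
# Two "collections" with no join support in the store itself:
authors = {1: {"name": "Radu"}, 2: {"name": "Mihai"}}
posts = [
    {"title": "NoSQL Introduction", "author_id": 1},
    {"title": "HBase X-Ray", "author_id": 2},
]

# The application performs the join by looking up referenced documents:
joined = [{"title": p["title"], "author": authors[p["author_id"]]["name"]}
          for p in posts]
print(joined[0])   # → {'title': 'NoSQL Introduction', 'author': 'Radu'}
```

In practice this lookup happens over the network, one query per collection, which is why denormalizing (embedding the author's name in each post) is a common NoSQL design choice.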
The most widely used NoSQL solutions
On the market there are countless NoSQL solutions. There is no universal solution that solves all the problems we have. For this reason, when we want to integrate a NoSQL solution, we need to study
several types of storage. We may identify within our application several problems which require a NoSQL solution. We may need different solutions for each of these cases. This would add extra complexity, because we would have two storages that we need to integrate.
MongoDB
This is one of the most used types of storage. Here, all content is stored in the form of documents. Over these collections of documents we can perform any kind of dynamic queries to extract different data. In many ways MongoDB is the closest to a relational database. All the data we want to store is kept with a hash attached, facilitating information retrieval. Basic CRUD operations work quickly on MongoDB. It is a good solution when you need to store a lot of data that must be accessed in a very short time. MongoDB is a storage which can be used successfully if we do not perform many insert, update and delete operations and the information remains unchanged for a period of time. It can be successfully used when the stored properties are queried and/or indexed - for example, in a voting system, a CMS or a storage system for comments. Another case in which it can be used is to store lists of categories and products in an online store. Due to the fact that it is oriented towards queries, and the list of products does not change every two seconds, the queries made on it will be rapid. Another benefit is auto-sharding: a MongoDB database can very easily be spread across two or three servers. The mechanism for data synchronization is very well established
and documented.
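The dynamic queries mentioned above are themselves expressed as documents. The toy matcher below only sketches the exact-match case - the real query engine is far richer, and the field names and data here are invented:

```python
def matches(doc, query):
    """MongoDB-style exact-match filter: every queried field must be equal."""
    return all(doc.get(field) == value for field, value in query.items())

products = [
    {"name": "laptop", "category": "electronics", "price": 999},
    {"name": "mug", "category": "kitchen", "price": 7},
]

# The query is just a document describing the documents we want:
result = [p for p in products if matches(p, {"category": "kitchen"})]
print(result[0]["name"])   # → mug
```

No schema is consulted anywhere; the query can name any field that some documents happen to carry.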
Cassandra
It is the second most widely used NoSQL storage solution on our list. This storage can become our friend when we have data that changes frequently. If the problem we want to solve is dominated by insertions and modifications of stored data, then Cassandra is our solution. Compared to inserts and changes, any query we run on our data is much slower: this storage is oriented more towards writes than towards queries that retrieve data. If in MongoDB the data we work with is seen as documents with a hash attached to each of them, Cassandra stores all content in the form of columns. In MongoDB, the data we access may not be the latest version. Instead, Cassandra guarantees that the data we obtain through queries is the latest version. So if we access an email that is stored with the help of Cassandra, we get the latest version of the message. This solution can be installed in multiple data centers in different locations, providing support for failover or back-up - extremely high availability. Cassandra is a storage that can be successfully used as a tool for logging. In such a system we have many writes, and the queries are rare and quite simple. It is also an ideal solution when we have an eCommerce application and need a storage system for the shopping cart. Insert and update operations will be done quickly, and each query will bring the latest version of the shopping cart - this is very important when we perform check-out. Cassandra has also come to be used in the financial industry, being ideal due to the performance of insert operations; in this environment data changes very often, stock values being new at every moment.
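The write-oriented, column-per-row model described above can be sketched as follows. This is a toy illustration of "last write wins" per column, not Cassandra's actual storage engine, and the column names are invented:

```python
import itertools

clock = itertools.count()   # monotonic stand-in for write timestamps
cart = {}                   # one row: column name → (timestamp, value)

def write(column, value):
    """Writes are cheap: just record the value with a newer timestamp."""
    cart[column] = (next(clock), value)

def read(column):
    """Reads return the value carrying the newest timestamp."""
    return cart[column][1]

write("item:1", "1 book")
write("item:1", "2 books")   # the update simply dominates the old write
print(read("item:1"))        # → 2 books
```

Because an update is just another timestamped write, inserts and modifications stay fast, which matches the shopping-cart use case in the text.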
CouchDB
If most of the operations we perform are just insert and read, with no update, then CouchDB is a much better solution. This storage is optimized for read and write operations.
29
Besides this, we have efficient support for predefined queries and for controlling the different versions that stored data may have. Therefore, update operations are not so fast. Of all the storages presented so far, this is the first that guarantees ACID, through the versioning system it implements. Another feature of this storage is the support for replication. CouchDB is a good solution when we want to take the database offline - for example, on a mobile device that does not have an internet connection. Through this functionality, we have support for distributed architectures, with replication in both directions. It can be a solution for applications on mobile devices which do not have 24-hour internet connectivity. At the same time, it is very useful in the case of a CMS or CRM, where we need versioning and predefined queries.
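The versioning system mentioned above can be sketched with a toy revision check: every update must cite the current revision, otherwise it is rejected as a conflict. This only mirrors the idea of CouchDB's `_rev` mechanism; the names and behaviour are simplified for illustration:

```python
store = {}

def put(doc_id, doc, rev=None):
    """Store a document; updates must present the current revision."""
    current = store.get(doc_id)
    if current is not None and current["_rev"] != rev:
        raise ValueError("conflict: stale revision")
    new_rev = current["_rev"] + 1 if current else 1
    store[doc_id] = dict(doc, _rev=new_rev)
    return new_rev

rev1 = put("page", {"body": "v1"})          # creates revision 1
rev2 = put("page", {"body": "v2"}, rev1)    # update cites revision 1
print(rev2)                                 # → 2
```

A writer holding a stale revision gets a conflict instead of silently overwriting newer data, which is also what makes bidirectional offline replication workable: conflicts are detected rather than lost.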
HBase
This database is entirely integrated with Hadoop. It is aimed at cases where we need to perform data analysis. HBase is designed to store large amounts of data that could normally not be stored in a regular database. It can work in memory without any problem, and the data it stores can be compressed - it is one of the few NoSQL databases that support this feature. Due to this particularity, HBase is used together with Hadoop. In some cases, when working with tens or hundreds of millions of records, HBase is worth using.

Membase
As the name implies, this non-relational database can stay in memory. It is a perfect solution when very low latency is required and concurrency is high. Creating a cluster and replicating content become easy processes. It is very common in game backends, especially online ones. Many systems that work with real-time data they need to manipulate or display use Membase storage. In these cases Membase may not be the only storage level that the application uses.

Redis
This storage is perfect when the number of updates we need to perform on our data is very high; it is optimized for such operations. It is based on a very simple key-value model, therefore the queries that can be made are very limited. Although we have support for transactions, there is still not enough mature support for clustering. This can become a problem when the data we want to store does not fit in memory - the size of the database is related to the amount of internal memory. Redis is quite interesting when we have real-time systems that need to communicate; in these cases Redis is one of the best solutions. There are several stock-trading applications using this storage.

What does the future hold for us?
We see an increasing number of applications that use NoSQL. This does not mean that relational databases will disappear. The two types of storage will continue to exist and often coexist. Hybrid applications, which use both relational databases and NoSQL, are becoming more common. Also, an application does not need to use only a single database. There are solutions using two or more NoSQL databases. A good example is an eCommerce application that can use MongoDB to store the list of items and categories, and Cassandra to store the shopping cart of each of the clients.
Compared to relational databases we have many options, and each of these does one thing very well. In the NoSQL world we do not have a single storage that solves all the problems we may have; each type of storage solves different problems. The future belongs neither to non-relational databases nor to relational ones. The future belongs to applications that use both types of storage, depending on the needs.

Conclusion
In conclusion, we can say that NoSQL databases must be part of our area of knowledge.
programming
NoSQL databases a comparative analyse
BigData - a fashionable topic, confirmed by the current issue, in which we write about it. BigData concepts were introduced in issues 2, 3 and 4 of the magazine. To summarize, BigData means storing and analyzing large data volumes, of terabyte magnitude. Handling such data amounts raises problems in terms of volume, speed and variety.
Traian Frătean
traian.fratean@3pillarglobal.com Software Engineer @ 3Pillar Global
Bogdan Flueraș
bogdan.flueras@3pillarglobal.com Software Engineer @ 3Pillar Global
The CAP theorem defines the constraints of Big Data systems: Consistency, Availability and Partition tolerance. Consistency refers to data consistency from the point of view of the system's users; simply put, all clients see the same data at all times. Availability guarantees that every request will get an answer. Partition tolerance enables a system to continue working even when some of the system's components fail. According to the CAP theorem, Big Data systems cannot simultaneously satisfy all three constraints, but can excel in any two of them. To stay within the same trend, and to share with the community some practical experience gained on internal projects, we bring to your attention a case study with four of the most popular NoSQL solutions: Riak, Couchbase, Hypertable and Cassandra.
Context and Requirements
One of the company's major customers is facing a technical problem - it has a huge amount of data that it can no longer manage, therefore it needs a NoSQL solution. The data model already exists: a composite of POJOs1. Take for example the UML representation of the data model - it represents an Article:
Our requirements are to:
• insert data as quickly as possible
• search by any particular field: title, tags, country etc.
Moreover, the data needs to have an acceptable level of consistency and to be available.
Analysis and modelling
Mapping the data model mentioned above, which will be further refined, is specific to each NoSQL solution's storage strategy.
Storage strategies

Key-Value
Key-value databases, like Riak, are distributed dictionaries without a schema, thus being schemaless. The key can be synthetic or auto-generated, while the value can be a String, JSON, a BLOB etc. Another concept specific to key-value storages is the bucket: a logical group of keys - buckets don't physically group the data. There can be identical keys in different buckets. To read a value you need to know both the key and the bucket, because the real key is a hash(Bucket + Key). Thinking in CAP theorem terms, key-value stores excel at A and P while sacrificing C, providing eventual consistency only: "I will give you old data (on some nodes) but fast, while guaranteeing that inserted data will be consistent at some point in the future".
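The bucket-plus-key addressing described above can be sketched in Java. This is a toy in-memory store, not the Riak client API; the point is only that the effective key is derived from both the bucket and the key, so identical keys in different buckets never collide:

```java
import java.util.HashMap;
import java.util.Map;

// Toy key-value store illustrating bucket + key addressing.
// This is NOT the Riak client API - just a sketch of the concept.
public class BucketStore {
    private final Map<String, byte[]> data = new HashMap<>();

    // The real key is derived from both bucket and key, so
    // ("articles", "123") and ("tags", "123") are distinct entries.
    private String realKey(String bucket, String key) {
        return bucket + "/" + key;
    }

    public void put(String bucket, String key, byte[] value) {
        data.put(realKey(bucket, key), value);
    }

    public byte[] get(String bucket, String key) {
        return data.get(realKey(bucket, key));
    }

    public static void main(String[] args) {
        BucketStore store = new BucketStore();
        store.put("articles", "123", "Article title".getBytes());
        store.put("tags", "123", "Cool".getBytes());
        // Same key "123", different buckets, different values:
        System.out.println(new String(store.get("articles", "123")));
        System.out.println(new String(store.get("tags", "123")));
    }
}
```

A real store would hash the combined key onto a node ring rather than use it directly, but the addressing idea is the same.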
Document
Couchbase and MongoDB are the most popular document-based databases. Having no predefined schema, they are very flexible in terms of content. Conceptually they deal with different data formats: JSON, BSON, XML and blobs (PDF, XLS). They are a specialization of key-value databases: a document is written/read using a key. Besides regular key-value functionality, they add features to find documents based on their content. With respect to the CAP theorem, document-based databases excel in C and P.

1 http://en.wikipedia.org/wiki/Plain_Old_Java_Object
Columnar
Databases from the BigTable2 category, such as HBase3 and Hypertable4, are columnar - schemaful. Data is stored in cells grouped in columns. Columns are logically grouped into column families. These can contain a virtually unlimited (in practice limited by the specific implementation) number of columns, which can be created at runtime or at schema definition. You will probably ask: what is the benefit of storing data in columns rather than rows, as relational databases do? Short answer: fast search/access and data aggregation. Simplified long answer: relational databases store a single row as a continuous disk entry, and different rows are stored in different places on disk; columnar databases store all the cells corresponding to a column as a continuous disk entry. For a better understanding we propose the following simplified use case (without indexes, caches etc.):

2 http://research.google.com/archive/bigtable.html
3 http://hbase.apache.org/
4 http://hypertable.org/
32
nr. 10/April, 2013 | www.todaysoftmag.com
I want to query the title of one billion article items. Relational databases iterate over different disk locations to fetch the title of each article, resulting in one billion iterations and disk accesses.
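The contrast can be sketched with plain Java arrays. This is a hypothetical layout comparison, not a storage engine: row-major keeps each record's fields together, while column-major keeps all values of one field together, so scanning one field touches a single contiguous region:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of row-major vs column-major layout for articles with two
// fields (title, content). Illustrative only - not a storage engine.
public class LayoutDemo {
    // Row-major: each entry holds a full record; collecting one field
    // means touching every record.
    static List<String> titlesFromRows(String[][] rows) {
        List<String> titles = new ArrayList<>();
        for (String[] row : rows) {
            titles.add(row[0]); // one access per record
        }
        return titles;
    }

    public static void main(String[] args) {
        String[][] rows = {
            {"Title A", "Content A"},
            {"Title B", "Content B"},
            {"Title C", "Content C"},
        };
        // Column-major: all titles already sit in one contiguous array -
        // a single sequential read of one region yields the same result.
        String[] titleColumn = {"Title A", "Title B", "Title C"};

        System.out.println(titlesFromRows(rows));
        System.out.println(Arrays.asList(titleColumn));
    }
}
```

On disk the difference is between a billion scattered reads and one sequential scan of a column region.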
(Figure: Disk representation - RDBMS vs. Columnar)

Columnar databases require one disk access for fetching the titles of all articles, because they are in a continuous disk location. By extrapolating to real life conditions we get a dramatically reduced number of iterations/disk accesses for columnar databases. In terms of the CAP theorem, columnar databases satisfy C and P but sacrifice A.

NoSQL Solutions

Riak

Features
Riak5 is a distributed key-value store, with open-source enterprise support, developed by Basho Technologies. It is designed to scale horizontally and to be resilient. Riak considers all nodes in the cluster equal, without a master, so there is no single point of failure. It excels at availability and scalability, is tolerant to network partitions and ensures data replication in 3 locations. It brags as the most resilient key-value storage put into production. To ensure these characteristics, Riak provides eventual data consistency. Data conflicts are resolved using vector clocks (logical time). Riak comes with a pluggable storage mechanism: you can choose between Bitcask (all keys in RAM) and LevelDB (all keys and values on disk), but it also provides an API that allows implementing your own storage. Depending on the deployment environment you can make choices for optimizing production costs. All nodes in a cluster communicate through a gossip protocol, hence network traffic is not negligible. This type of protocol does viral replication and provides continuous availability.

Modelling
Data is stored in key-value tuples in a bucket. Values can be text, JSON, XML, BLOB etc. For the aforementioned data model (see the diagram in context and requirements) we have the JSON representation below:

{ article: {
    id: 123,
    title: "Article title",
    content: "This is the content",
    location: {
      country: "Romania",
      latitude: 123.45,
      longitude: 123.45 },
    tags: [
      { name: "tag1", date: "10 Mar 1981", author: "John B" },
      { name: "tag2", date: "21 Mar 1981", author: "John B" }
    ]
} }

5 http://basho.com/
Installation
Installation is easy both locally and in a cluster. In a cluster, nodes are added to the staging area and can be activated by moving them to the commit area. For monitoring you can install Riak Control.
Configuration
Two files, vm.args and app.config, include all the necessary configuration: security, cluster tuning etc.
Automatic sharding works by default. Pay attention while setting the ring size and the number of vnodes; otherwise you must reconfigure the cluster, which may even require re-importing all data.

Practical experience
For most languages there are libraries which communicate via Protobuf or REST. The REST implementation provides more features at a speed penalty compared to Protobuf. REST also offers the possibility of non-complex MapReduce queries. Using Bitcask, we had trouble when listing the keys of a bucket in the production environment. In the cloud, Riak communicates through gossip with other clusters - risking data leaks if the network is not configured properly or Riak is not secured.

Couchbase
Couchbase6 is a JSON document based storage and has its roots in CouchDB, a project under the Apache umbrella. The company CouchOne Inc., which offered commercial support for CouchDB, merged with Membase Inc. and formed Couchbase Inc. The new solution, Couchbase, adds performance and scalability to the data model, indexing and query features from CouchDB. Being developed as a commercial open-source project, Couchbase is offered in two licensing modes: open-source community and commercial.

Characteristics
The data model is simple: JSON documents, without any relationships between them, are stored in buckets, each document being limited to a maximum size of 20 MB. Cluster data is kept in memory, replicated, being consistent, and flushed to disk asynchronously. Between different clusters data is eventually-consistent. Data can be accessed directly based on the key, via Map-Reduce, via the custom query language UnQL7 or via Hadoop8 integration.

Modelling
Data modelling is the same as for Riak.

Installation
One of the chapters where Couchbase excels is installation. A fully functional cluster installation can take a couple of minutes, with the added benefit of monitoring tools integrated within the default installation.

Configuration
Configuration is very fast, even in distributed mode. Couchbase however gets a big minus because it is heterogeneous in terms of RAM used; due to this limitation it cannot exploit all the available memory on each node.

Practical experience
The Java API provides a simple and intuitive interface, so you can code in a few minutes. Documentation is also sufficient and without errors. Because the data is stored in-memory, Couchbase performance is very good.

6 http://www.couchbase.com/
7 http://www.unqlspec.org/display/UnQL/Home
8 http://hadoop.apache.org/

Hypertable

Characteristics
As previously mentioned, Hypertable is based on the Google BigTable philosophy and competes with HBase. Through a generic API, it supports various distributed file-systems, the most common being HDFS - the Hadoop Distributed File System. HDFS is a distributed file system provided by Hadoop and inspired by the Google File System. Hadoop is an open source project originally developed by Yahoo! for the distributed processing of large data volumes. The current version of the Hadoop ecosystem contains various modules, from ad hoc query support to data mining and machine learning. Another noteworthy aspect is that Hypertable has a single point of failure, the name-node, thus risking the unavailability of data.

Modelling
Specific columnar database terms with their relational database correspondents are:
• Namespace - analogous to Database
• Table - analogous to Table
• Column - analogous to Column
• Column family, Cells, RowKey - no analogues

The composite data model described in "Context and requirements" was modelled with a columnar approach like this: every POJO from the model (not the instance) has an associated column-family; every atomic member of a POJO that can be represented as text/string has an associated column within the same column-family; compound members were in turn modelled as a different column-family. Taking as input the following article, represented as JSON:

{ article: {
    id: 123,
    title: "Article title",
    content: "This is the content",
    location: {
      country: "Romania",
      latitude: 123.45,
      longitude: 123.45 },
    tags: [
      { name: "Cool", date: "10 Mar 1981", author: "John B" },
      { name: "TSM", date: "21 Mar 1981", author: "John B" }
    ]
} }

we generated the corresponding columnar model. We chose this structure of representation because it allows a quick search by any property of the object.

Installation
It installs easily locally; on a cluster much of the installation work can be automated, but it has a medium complexity for a distributed file system, because HDFS needs to be separately configured and started.

Configuration
Configuration is done on every node and can also be automated. There is a satisfactory set of configuration options, but the significance of these options is not sufficiently documented.

Usage
There are clients for the popular programming languages: Java, PHP, Python. Client communication goes through the Hypertable Thrift protocol.

Practical experience
Although the community is small, the support is good. The deep influence of C shows from the low-level client interface up to the API and the error messages. After testing, we noticed that heavy inserts perform slower than in Cassandra, Riak and Couchbase. In the current version, Hypertable is limited to a maximum of 255 column families, so you need to think twice when designing the schema to keep it flexible for further extensions. For practical reasons (stability, knowledge base, work-arounds), version 0.20 of Hadoop is mainly used to the detriment of the current version. For a painless experience use the Cloudera Hadoop distribution9, which offers easier installation and configuration and a far better documentation.
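The mapping rule described above - one column family per POJO type, one column per atomic member - can be sketched with nested maps. This is a conceptual illustration, not the Hypertable client API:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the mapping rule: row -> (column family -> (column -> value)).
// Conceptual only - this is not the Hypertable client API.
public class ColumnarModel {
    static Map<String, Map<String, String>> mapArticle() {
        Map<String, Map<String, String>> row = new LinkedHashMap<>();

        Map<String, String> article = new HashMap<>();   // POJO -> column family
        article.put("title", "Article title");           // atomic member -> column
        article.put("content", "This is the content");
        row.put("article", article);

        Map<String, String> location = new HashMap<>();  // compound member -> own family
        location.put("country", "Romania");
        location.put("latitude", "123.45");
        row.put("location", location);
        return row;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> row = mapArticle();
        // Any property is reachable by family + column:
        System.out.println(row.get("location").get("country"));
    }
}
```

This is exactly what makes "search by any property" quick: every property has a predictable family:column address.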
Cassandra
Cassandra10 started as an open source project at Facebook, and later came under the Apache umbrella. The implementation combines concepts from Amazon Dynamo11 and Google BigTable. It can be characterized as an eventually consistent key-value store.

Particularities
Like Riak, Cassandra starts from concepts published in the Amazon Dynamo paper. This paper presents the design and implementation of Amazon Dynamo, a highly available database that stores data in key-value format. Starting from Dynamo, Cassandra gave up vector-clocks for conflict resolution and implemented a different storage strategy that relies on ColumnFamily. It integrates with Hadoop and can distribute tasks on different machine groups. In this way we can make two partitions - one running real-time tasks, the other running analysis tasks that can consume more time.

Installation
The recommended distribution for installation is provided by DataStax. As a negative point, Cassandra requires Oracle Java 1.6, which reached end of support in February 2013. Installing on a Linux cluster raises no other special problems.

Configuration
Cassandra was designed to mould to the user's needs. It provides different levels of consistency for write operations (5 levels) and for reads (3 levels). Configuration should be considered carefully, because the default setting is not suitable for cluster usage - check the replication factor (RF) and the consistency level (CL). The only configuration difficulty encountered was related to the ring's creation. Machines are organized as a ring, each with a certain range of tokens allocated. In the documentation (version 1.2.0) the maximum token is incorrectly documented.

Data Model12
The data model used is identical to the one used for Hypertable. The storage strategy is based on columnar data. The concepts used are:
• keyspace - similar to the database in the relational model; groups a set of logically related column-families.
• column-family - similar to the tables in the relational model. A column-family contains rows and columns. Unlike in the relational model, rows do not necessarily have the same columns; moreover, you can always add new columns to a row.
• super columns - a group of columns.
• row - a row uniquely identified by a row key.
• secondary index - an index on a column. It is called a secondary index to distinguish it from each row's predefined index.

Practical experience
For communication with the cluster the Thrift protocol is used. There are a lot of client libraries13, for many programming languages; this variety makes the decision more difficult. There are 9 Java libraries for the current version. Hector and Astyanax are worth mentioning: Hector is the most commonly used, while Astyanax is a fork of Hector, made by Netflix, which offers a simplified interface. Cassandra has a dynamic schema and you can add column-families at runtime. When creating a keyspace, if not specified, the default RF is 1; thus data is not replicated, and in the case of a machine failure data becomes unavailable or lost. To avoid losing data you should also know that once inserted, data is kept in memory and written to the commitlog after a configurable time interval. Thus, if there is a hardware or software problem, you may lose data that was not yet written to the commitlog. Unlike a regular log, the commitlog has to be taken seriously because it contains data not yet written to disk.

9 http://www.cloudera.com/content/cloudera/en/products/cdh.html
10 https://cassandra.apache.org/
11 http://www.allthingsdistributed.com/files/amazondynamo-sosp2007.pdf
12 http://www.datastax.com/docs/1.2/ddl/index
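How the replication factor and the consistency levels interact can be sketched with the usual quorum rule of thumb for eventually consistent stores (a general rule, not Cassandra client code): a read is guaranteed to overlap the latest write when the read replicas plus the write replicas exceed RF:

```java
// Sketch of the quorum rule R + W > RF, a rule of thumb for
// eventually consistent stores; not Cassandra client code.
public class QuorumCheck {
    // true if every read overlaps at least one up-to-date replica
    static boolean stronglyConsistent(int rf, int writeReplicas, int readReplicas) {
        return readReplicas + writeReplicas > rf;
    }

    public static void main(String[] args) {
        // RF = 3, write to ONE, read from ONE: reads may miss the latest write.
        System.out.println(stronglyConsistent(3, 1, 1)); // false
        // RF = 3, write QUORUM (2), read QUORUM (2): overlap guaranteed.
        System.out.println(stronglyConsistent(3, 2, 2)); // true
    }
}
```

This also shows why the default RF of 1 is dangerous: with a single replica, the failure of one machine makes the data unavailable regardless of consistency level.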
13 Cassandra High Level Clients: https://wiki.apache.org/cassandra/ClientOptions
Conclusion
NoSQL is a natural database evolution, suited for modern/web applications. It does not replace mature relational systems, but complements them. There are several NoSQL solutions with different traits, each suited to a particular problem rather than serving as a general purpose solution. Before choosing the right NoSQL solution you should carefully address these items:
• Definition and analysis of functional and non-functional requirements, such as performance, consistency, availability, scalability, security etc.
• Detection and adaptation of the data model for the most common queries.
• Identifying solutions and possibly relaxing requirements.
• Migrating existing data, if applicable.
Big Data: HBase X-Ray
In issue #3 of the magazine, the article on Big Data makes a reference to a distinct NoSQL system: the column-oriented database. A concept at least curious, which deserves attention. The mission of this article is to take an x-ray of one of the systems that implement this concept.
Let's take a look at one of the public online map services, like Nokia Maps or Google Maps. To be successful, such a service needs additional content besides maps. Such content includes points of interest (POI). It is estimated that the world contains about 2 billion POIs. To publish these POIs, they first must be collected from various sources. It often happens that the same POI comes from different sources, and deduplication is necessary. Moreover, some attributes are specific to certain categories. For instance, restaurants have a cuisine type, a number of seats, etc., while parking areas have a price per hour, a number of spots and an infrastructure type (underground or multi-storey). HBase is a column-oriented database whose main advantages are consistency and scalability. It was designed based on BigTable, a proprietary database from Google whose characteristics were published in a 2006 paper named "Bigtable: A Distributed Storage System for Structured Data". It is actively used in companies like Facebook, Nokia, eBay and Yahoo, in various applications that require the storage and analysis of large data quantities. At first glance, HBase seems to be a relational database: it stores the data in tables that contain cells, which are formed at the intersection of rows and columns. But it's not really like that. The tables do not maintain relations between themselves, the rows are not records and the columns are completely variable. The schema is there, but only to guide, not to enforce. HBase has functionalities that other database systems lack, such as versioning, data compression and garbage collection. When a value is written in an existing cell, the old value stays indexed by its timestamp. If the stored values are too large, the data can be compressed using algorithms such as Gzip and LZO. From the CAP theorem point of view, HBase is a CP system (check out issue #3
about Big Data). It provides strong guarantees for data consistency: if a client does a successful write of a value, that value can be read by all clients. As a distributed system, if a cluster node fails, HBase stays available; but if only a single node of the cluster remains available, all writes are rejected. For didactic purposes, HBase can run in stand-alone mode; the recommendation for a production cluster is a minimum of 5 nodes. If a map is a set of key-value pairs, like a Java HashMap, a table in HBase is a map of maps. The keys are arbitrary strings that map rows to data. A row is a map, where the keys are the columns and the values are uninterpreted byte arrays. Columns are grouped in column families. A fully qualified column name is made up of the name of the family and the name of the column (family:qualifier) - in the example below: Identity:id, Identity:provider, etc. The table above is an example of data flexibility. In the Identity column family it may be seen that for a parking area the name attribute is
missing, and in the Amenities column family the attributes are completely different. HBase offers a shell based on JRuby that allows interaction with tables, changing the schema and other complex actions that may be automated using JRuby scripting. To create a table using the shell, one would run:

hbase> create 'place', 'identity', 'address', 'amenities'
This will create the table 'place' having 3 column families: 'identity', 'address' and 'amenities'. For a write operation we would type something like:

hbase> put 'place', '1234', 'identity:cuisine', 'french'
or to read a value:

hbase> get 'place', '1234'
or
hbase> get 'place', '1234', 'identity'
A complete shell command list is available here: http://wiki.apache.org/hadoop/Hbase/Shell
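The "map of maps" view described earlier can be sketched directly in Java: a plain nested HashMap standing in for a table. This is conceptual only; real HBase tables are distributed and store byte arrays, not Strings, and the cell values here simply mirror the shell examples above:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of the "map of maps" view: a table maps row keys to
// row maps whose keys are fully qualified column names (family:qualifier).
// Real HBase tables are distributed and store byte arrays, not Strings.
public class TableAsMap {
    static Map<String, Map<String, String>> placeTable() {
        Map<String, String> row = new HashMap<>();
        row.put("identity:id", "1234");
        row.put("identity:cuisine", "french"); // the value written by the shell put
        Map<String, Map<String, String>> table = new HashMap<>();
        table.put("1234", row); // row key -> row map
        return table;
    }

    public static void main(String[] args) {
        // Reading a cell: row key first, then family:qualifier.
        System.out.println(placeTable().get("1234").get("identity:cuisine"));
    }
}
```

Because each row is its own map, different rows are free to carry completely different columns, which is exactly the schema flexibility the POI example needs.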
Since HBase was developed in Java, it is expected that its main API is in Java. First, we'll need a configuration object, which is used by the client to connect to the server. When the instance is created it looks for its initialization properties in hbase-site.xml or hbase-default.xml, which have to be on the CLASSPATH.

private static Configuration conf = HBaseConfiguration.create();

public void createTable(String tableName, String[] familys) throws Exception {
  HBaseAdmin admin = new HBaseAdmin(conf);
  if (admin.tableExists(tableName)) {
    System.out.println("table already exists!");
  } else {
    HTableDescriptor tableDesc = new HTableDescriptor(tableName);
    for (int i = 0; i < familys.length; i++) {
      tableDesc.addFamily(new HColumnDescriptor(familys[i]));
    }
    admin.createTable(tableDesc);
    System.out.println("create table " + tableName + " ok.");
  }
}

HTable is used to establish the desired connection. To add a new row, Put is used. In HBase everything is stored as an array of bytes; the Bytes class converts any Java type to a byte array.

public void addRecord(String tableName, String rowKey, String family,
    String qualifier, String value) throws Exception {
  try {
    HTable table = new HTable(conf, tableName);
    Put put = new Put(Bytes.toBytes(rowKey));
    put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes(value));
    table.put(put);
    System.out.println("insert record " + rowKey + " to table " + tableName + " ok.");
  } catch (IOException e) {
    e.printStackTrace();
  }
}

Reading a record is done using the Get class, and the response is wrapped in a Result object.

public void getOneRecord(String tableName, String rowKey) throws IOException {
  HTable table = new HTable(conf, tableName);
  Get get = new Get(rowKey.getBytes());
  Result rs = table.get(get);
  for (KeyValue kv : rs.raw()) {
    System.out.print(new String(kv.getRow()) + " ");
    System.out.print(new String(kv.getFamily()) + ":");
    System.out.print(new String(kv.getQualifier()) + " ");
    System.out.print(kv.getTimestamp() + " ");
    System.out.println(new String(kv.getValue()));
  }
}

When the keys are not known, Scanner is used - a cursor-like interface.

public void getAllRecord(String tableName) {
  try {
    HTable table = new HTable(conf, tableName);
    Scan s = new Scan();
    ResultScanner ss = table.getScanner(s);
    for (Result r : ss) {
      for (KeyValue kv : r.raw()) {
        System.out.print(new String(kv.getRow()) + " ");
        System.out.print(new String(kv.getFamily()) + ":");
        System.out.print(new String(kv.getQualifier()) + " ");
        System.out.print(kv.getTimestamp() + " ");
        System.out.println(new String(kv.getValue()));
      }
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
}

The rows in the tables are stored sorted by their keys, but there is no other way of sorting or indexing by columns. Another disadvantage is the lack of data types: everything is a byte array, and there is no difference between a Date and a String.
HBase never comes alone. To work it requires Hadoop (a MapReduce platform), HDFS (a distributed file system) and ZooKeeper (a system that coordinates the nodes in a cluster). More about these technologies in a future issue. It is important to know that HBase is not the solution for all big data storage, and HBase is not a solution for small "problems".
We could end the x-ray of HBase here, but it is worth mentioning how Facebook found an ingenious use for HBase. Facebook uses HBase as a central component of its messaging system, both for storing users' messages and for keeping an inverted index used for message searching. In the index table, the keys are user IDs, the columns are words that appear in user messages and the timestamps are the message IDs. Since user messages are immutable, the entries in the index are static. Versioning doesn't make sense here, but Facebook uses it to store the message IDs - basically, they won another dimension for that data.

Bibliography:
http://hbase.apache.org/book/quickstart.html
Seven Databases in Seven Weeks, Eric Redmond
Cătălin Roman
catalin.roman@nokia.com Software Architect @ Nokia, Berlin
Migrating MVC 3 website + DB to Azure
Recently we had to take a decision - our hosting solution was becoming inadequate for the performance needs of our system and our clients. The decision to take was choosing between a better dedicated server (actually a pair of servers plus a load balancer) and migrating to a cloud based hosting solution.
I believe other businesses are also confronted with this choice (the number of comments on this article will indicate whether I'm right or not), so I will present:
• the decision making process that led to choosing Windows Azure as a hosting solution (why Azure and not another cloud hosting platform)
• the migration procedure itself - what was easy, what was hard, what tools we used
Picking the right scaling solution
Each project/architecture has its particularities, so don't look for a recipe that fits all situations, but for the solution that best fits your scenario. This is the only way you can maximize the ROI (return on investment).
The first step - define the context
What exactly are we migrating? Our architecture consisted of a web portal developed in ASP.NET MVC (3 and 4), running on a MS SQL database abstracted
by Entity Framework. There are already some constraints - SQL Server needs a machine running a Windows OS (Windows Server or anything else). For the web portal in ASP.NET MVC the easiest way is to use a Windows machine with IIS - you could also opt for a Linux machine and run the portal in Mono. Since the database asks for Windows as OS, to make it easier on ourselves we will pick Windows/IIS as the web portal hosting solution. Why is this aspect important? If we decide to build/rent a dedicated server we need to install Windows on this machine and - depending on the hosting provider - we might be required to pay for the Windows license (that would cost us about 10 EUR per month). In a cloud hosting solution (Amazon EC2, Windows Azure) you can rent a dedicated machine with a preinstalled Windows OS - so no additional costs are involved there. The same reasoning applies to SQL Server - on a dedicated server we will be required to install SQL Server ourselves (or pay someone to do this for us) and a license will be required - you can rent a MS SQL Server Web Edition license for around 15 EUR/month. On a cloud based hosting solution you might get this license for free (depending on the cloud solution). Summing up: for a dedicated hosting solution we will pay for hosting + licenses (bought or rented) + (optionally) paying someone to help us with setting up the machines. This all comes up to a fixed cost, easy to forecast. For a cloud based solution the cost varies, depending on what machines (how many machines) we use and how much we use these machines (hours of usage). If we go for SQL Azure instead of SQL Server we will also be charged for the generated outbound traffic.
If the pay-per-hour/per-machine model seems hard to forecast - that's because it is. Cloud hosting providers usually give us price calculators where you can input all the data required and get an estimated monthly cost. But usually you don't have the necessary input data for these price calculators - so the best thing to do is to run a test (where the test is as close to reality as possible). Amazon and Azure give developers trial programs, http://aws.amazon.com/free/ and http://www.windowsazure.com/en-us/pricing/free-trial/, that are usually sufficient to run the performance/load tests that will help us get a better idea regarding the expected costs. You can also test SQL Azure on a trial program: http://www.windowsazure.com/en-us/pricing/free-trial/. Once you have migrated your architecture (or a good enough mock of it) into the cloud, run the performance/load/stress tests until you reach the limits of the trial packages - by that time you should have enough data to accurately use the price calculators. Summing up: the context (the architecture to be migrated) directly influences our migration options and our costs - on a dedicated hosting solution the costs are fixed and easy to budget; on a cloud solution the costs are variable and harder to budget. The fact that you can test the Azure and Amazon clouds before actually committing to a platform is invaluable in the decision making process. Disclaimer: as cloud hosting solutions we have used only Amazon and Azure, mainly due to their notoriety and their startup kick-start programs. This doesn't mean that you cannot go for something else: Google App Engine (https://cloud.google.com/) or regional cloud hosting providers - a search on your favourite search engine will provide more information in this regard. Regional cloud hosting providers also have trial/startup kick-start programs - if you cannot find this information on their website, their sales department will provide all the data.
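The fixed-versus-variable cost trade-off discussed above can be put into a toy break-even calculation. All the figures below are made-up placeholders, not actual Azure or hosting quotes:

```java
// Toy break-even sketch for dedicated (fixed) vs cloud (usage-based)
// hosting. All figures are invented placeholders, not real prices.
public class HostingCost {
    static double dedicatedMonthly(double hosting, double licenses) {
        return hosting + licenses; // fixed, easy to forecast
    }

    static double cloudMonthly(double pricePerHour, double hoursUsed,
                               double outboundGb, double pricePerGb) {
        // varies with usage hours and generated outbound traffic
        return pricePerHour * hoursUsed + outboundGb * pricePerGb;
    }

    public static void main(String[] args) {
        double dedicated = dedicatedMonthly(100.0, 25.0);
        // A lightly used cloud machine may come in cheaper...
        double cloudLow  = cloudMonthly(0.25, 200, 40, 0.125);
        // ...while an always-on, traffic-heavy one may not.
        double cloudHigh = cloudMonthly(0.25, 744, 800, 0.125);
        System.out.println(dedicated);  // 125.0
        System.out.println(cloudLow);   // 55.0
        System.out.println(cloudHigh);  // 286.0
    }
}
```

This is exactly the kind of arithmetic the provider price calculators automate - the hard part is obtaining realistic values for the usage inputs, which is what the trial-based load tests are for.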
Second step: know-how
One of the advantages of a dedicated server is that you get to enjoy 100% of the performance of that machine - and, for the time being, you get a better price per processing power (or per megabyte of RAM) with a dedicated server than with a cloud based solution. One of the disadvantages of a dedicated server is that you are in charge of keeping that machine running and ensuring that you have a valid plan B (and C, and D) in case of a system crash. Our team did not possess the know-how of a system/database administrator - we could have externalized these tasks (you can check how much this would cost by looking at livehosting - a provider we have worked with and with whom we are happy: https://www.livehosting.ro/Datacenter/Servere-Dedicate/Administrare-servere.aspx) until we brought this know-how into the team, or trained a person in this field. None of these options worked for us - a strict SLA (service level agreement) obliges us to have a system/database administrator in our team at a certain point in time, but developing/bringing in the know-how of such a person was not a priority for our company at present. In other words: we preferred getting better in domains with which we were already familiar (web development, mobile development, LINQ) rather than developing ourselves horizontally and learning more about system and database administration. The promise made by cloud based hosting is that scaling and creating redundant systems will be easy - without having the skills of a system admin you can set up a system that conforms to a strict SLA and can be easily scaled. This promise was to our liking. Yet migrating to a cloud based solution also involves know-how - getting familiar with new terminology (each cloud hosting solution uses its own wording), new tools and a new way of working.
Developing a solution on the .NET stack and using Visual Studio as an IDE, Windows Azure became the most convenient cloud solution to use - from my point of view, Microsoft did an excellent job by
integrating the Windows Azure SDK in Visual Studio - you can (almost) transparently deploy your solution in the Azure cloud. The "almost" is because some changes will be required. Regarding the database, by using Entity Framework we could migrate from SQL Server to SQL Azure and the system wouldn't even know it. Note: when dealing with PHP based architectures we have found the Amazon cloud easy to use (as long as a Linux machine does not scare you) and updates to the migrated solution easy to perform. What did we do in order to see how easy it would be (and how much it would cost us) to administer our own dedicated servers versus administering a cloud based solution:
• We researched what it takes to configure an IIS server, install a load balancer, install and administer a SQL Server, and set up an automatic backup and restore mechanism for the database. You can find all the information you require via search engines.
• We researched what Windows Azure and SQL Azure are about, what it takes to set up a redundant system and how to scale it - in this regard the trainings from Pluralsight (www.pluralsight.com) proved invaluable and I highly recommend them.
• Most importantly: we discussed with professionals with experience in dedicated solutions/Windows Azure. When you are at the beginning you might not even know the correct questions to ask - and here someone who already went through this experience can offer advice and guidance (better than any search engine could).
Here I would like to thank:
• My ex-colleagues from Evoline, who helped me with advice and answers regarding IIS and SQL Server management
• Radu Vunvulea from iQuest, who helped me with advice and answers regarding what we could migrate to Azure and how

Note: the buzzword "cloud" was also an element in our choice. At least marketing-wise, it looks more efficient in the long term to add to your skill set the know-how of working with a cloud-based solution (and of migrating solutions to the cloud) than to learn to administer dedicated solutions.

To sum up: managing a dedicated server is not an easy job; even if you can externalize this for a period, you will have
to either train or hire someone to do this. With a cloud-based solution you will spend more time initially migrating the solution, but then redundancy, scalability, and backup & restore are handled by the cloud. Both approaches (dedicated vs. cloud) have costs and tradeoffs. We were relatively inexperienced in both of these fields, so the costs of acquiring the know-how would have been similar; in your situation, if you already have experience with one of these approaches, the choice could be easier. From our analysis, a dedicated solution offers a better performance-per-euro ratio, so if you have the know-how required to configure and manage a dedicated server yourself, this can be a very tempting option. The cloud makes things easier for a programmer: it transparently hides the complexity of configuring/managing a server and scaling the system, so you can focus on development and not on system administration.
Final words

We spent around 3 weeks analyzing different options, talking to experts, training ourselves on Azure and reading about server administration before deciding to choose Azure. The fact that we could test the Azure cloud through BizSpark before committing also helped. In the next article we will describe how we migrated the database to SQL Azure and the web solution to Azure, the challenges we met, and the tools we used.
Dragoș Andronic
dragos@txtfeedback.net CTO @ TXTFeedback
business
TODAY SOFTWARE MAGAZINE
The Challenge of Leadership - Part 3
This is the final article in the series on the challenge of leadership by Martin Mackay, recently appointed CEO of Neverfail. In the first article Mackay laid out the strategic framework he is employing at Neverfail to drive change. In the second he discussed in more detail the five key responsibilities a leader has to recognise. Here he addresses the seven key behaviors a leader has to demonstrate to be effective.

As we have previously discussed, there are more demands on a leader today than at arguably any time before. Not only are the needs of investors and stakeholders paramount (and often unreasonable, particularly in the current global economic situation), but employees also have incredibly high expectations of their leaders. The immediate availability of information allows leadership "best practices" to be disseminated and widely read across multiple media; and employees will then harshly judge those leaders who fail to measure up to their expected standards.

Sometimes the temptation for every leader is to "return to the ranks" and focus on his or her own wants. An early lesson I learned in my career is that the best sales people have to be selfish (or at least self-focused) but the best sales managers (and by extension leaders of all types) have to be selfless. Being selfish is not an acceptable course of action for the leader, but it certainly can be appealing on occasions! One of my industry mentors once said to me: "As CEO, I used to worry about whether I should be liked, respected or feared. Experience has taught me that actually I need to be all three!". Wise words, but again no easy task.

There is no formula for leadership success and every leader needs to find his or her own pattern of behavior. Ultimately, as long as authenticity is demonstrated, the leader should at least have the respect of his team. However, in this final article I will review seven behaviors which I endeavour to adopt and which in my mind epitomise good leadership.
The first behavior is to give, not take. Tales of corporate greed and lack of accountability have in recent years become
all too common. I am currently reading an incredible book, "Exposure", the story of the Olympus corporate scandal. The executives in the Japanese corporate headquarters thought only of taking, not giving. Here's how the "giving, not taking" thesis runs: employees who feel good about themselves and about where they work will perform better and work harder; as a result, customers will feel better about the products and services they are purchasing and will buy more; and as a result of that, there will be "more money in the bank".

To achieve this, employees need to feel engaged. To create engagement the leader needs to create an environment of open communication and elaborate a clear direction for the company, so that employees not only understand where they are going but also feel part of the journey to get there. This means the leader must spend all his time giving: direction, guidance, encouragement, communication, energy, empathy. In this time of austerity there is one other key point for the leader to remain credible: whilst the leader is almost certainly going to be better remunerated than the majority of employees, it is also important that the employees themselves feel fairly compensated (and ideally have the chance, through bonus schemes, to participate in the company's success).

The second behavior demanded of the best leaders is self-knowledge: be aware of what you are good at, what you are not good at, and be comfortable hiring people who are more talented and experienced than you! This is easier said than done, because it requires complete comfort on the part of the leader with his or her own self-worth. Think back to the best leaders you have
worked for: they will almost certainly have been willing to give you the reins to make a project a success; they will have encouraged your own ideas to flourish and to be made real; they will have challenged you, but only to stretch your abilities; they will have been comfortable sharing their own weaknesses and vulnerabilities with you; they will not have needed to be the smartest person in the room! These are the characteristics of self-knowledge which allow leaders to create incredibly powerful teams.

Third, leaders must ensure that goals are set and strategy is clear: this means they ensure everyone is clear on what needs to be done, but they do not prescribe how things should be done. Note the language here: leaders have to tap into all the talent available to create the strategy and the associated goals; they do not have to set the strategy and objectives themselves, but they must make sure these are clearly set. Coming back to the second behavior, self-knowledge, the best leaders have a team who are far more capable than they are in any given functional area of responsibility. "Hire people who are better than you and give them space to be successful" is the best advice I can offer. This key behavior means that the leader becomes goal-oriented but not task-oriented. Task-oriented leaders morph easily into micro-managers and believe that only they are actually capable of getting things done.

As an example, consider Margaret Thatcher: whatever your view of her politics, her courage as a leader, the vision she set out for the UK, the goals she set for her team and her determination were the driving force behind the transformation of the UK in the 1980s. Her demise was brought about when she started to believe that only she
was capable of delivering her legislative programme. She stopped working with her team and started to disenfranchise and denigrate them; she forgot about goals and managed tasks herself. Ultimately this led to her being forced from government not by the opposition but by her own political party.

The fourth behavior good leaders demonstrate is that they communicate to engage. They use multiple media and both formal and informal opportunities to share details of progress and success as well as the challenges the business faces. The most effective communication is where the leader tells stories and shares experiences. These stories will often be underpinned by authentic, self-deprecating humour (if they relate to the leader's own experience) but will use analogies – I am personally fond of analogies from the world of sport – or tales of similar experiences in other businesses to illustrate the key points. Perhaps one important aspect of this behavior is that the communication needs to be consistent: it could be called "over-communication", but if the leader has ensured that strategy and goals are clear and set, then repeatedly referencing the same strategy and progress towards the same goals demonstrates consistency and authenticity.

Communicating to engage around goals by definition means the goals must be inspiring and relevant to the employees. Too often the goals reflect the leader's self-interest or are limiting and not relevant to employees. For example, setting the objective for the business to be sold in 2 years may be relevant to investors and shareholders personally, but it is hardly inspiring to the workforce.
Painting a picture of the business in terms of strategic assets (products, partnerships), market presence (customers, reputation), company culture and ethos, as well as target financial metrics to measure progress objectively, is much more inspiring, because it allows the employees to engage in the aspiration to work for a successful business of which they can feel proud. If those strategic goals are delivered, then that may well result in a profitable exit for investors, but that is the consequence of achieving the goals, not the goal itself.

For any leader to be successful there can be no place for arrogance. I have learned this the hard way in my career where, in the past, having felt threatened by employees asking difficult questions, I have reacted in an aggressive, over-bearing way: my arrogance being: why should
I allow the challenge from someone who clearly does not respect my authority? It is said that self-confidence comes from having a good opinion of yourself which is shared by others, and arrogance comes from the same good opinion which is not! If the leader can demonstrate confidence with humility, put himself in the place of the employee and then respond to any challenge positively, then the leader will be perceived as truly authentic. Of course, at the same time leaders are only human: if you challenge and are blatantly disrespectful or rude, expect unpleasant consequences! However, the best leaders eschew arrogance and deal with challenge, adversity and success in a balanced fashion. As Kipling put it:

If you can keep your head when all about you
Are losing theirs and blaming it on you,
If you can trust yourself when all men doubt you,
But make allowance for their doubting too.

As a penultimate behavior, the best leaders inspire employees to be engaged in their work and the company's future and to take personal accountability for their own contribution; they also provide substantive evidence of progress, of the contribution the employee is making and of the "up and to the right" metrics of the business. In other words, they lead with the heart and follow with the head. Ultimately the best leaders work extremely hard to engage employees so that the employee experience is memorable; this means the customer experience will be memorable, and they recognise this is the only way to build a sustainable business. So, the leader needs to build trust and respect so that employees believe in the leader. The leader needs to motivate so that the employees are inspired to play as a team rather than as a disparate collection of individuals.
The leader needs to be sufficiently approachable that employees are willing to express their points of view without fear of reprisal, but the leader also needs to show "teeth": there needs to be both the feeling that employees do not want to let the leader down and the acceptance that those who do not want to be part of what the leader outlines will not be tolerated. By way of illustration, remember the great line from Arnold Schwarzenegger in the film "True Lies" where his wife, Jamie Lee Curtis, discovers he is a spy. She says: "You mean you've actually killed people?", to which he replies:
"Yes, but they were all bad!". Steve Jobs put it another way: "Get rid of the Bozos!". This does not mean the random dismissal of those who disagree or challenge; it means ultimately ensuring that all employees' personal values match those by which the company itself lives.

How can the leader achieve all this? By demonstrating the final behavior, which is in my view the most important and without which nothing else matters: integrity. Integrity is a combination of consistency, honesty, openness, authenticity and trust. Demonstrate that and everything good will follow!

Postscript: I hope these articles have proved at least interesting and thought-provoking. What I have attempted to do is to outline my leadership philosophy and the way in which we are attempting to transform Neverfail globally. Will it work? The only true test will be to ask our employees how they feel about working for the company. I cannot be certain we will pass this test; I can be really clear, however, that this is our overarching goal as a leadership team, and we will certainly have a lot of fun trying!
Martin Mackay
mmackay@neverfailgroup.com CEO @ Neverfail Group.
management
Book review: Going Agile by Gloria J Miller
Over the summer of 2012, I decided to go for an agile project management certification launched by the Project Management Institute (PMI): the PMI-Agile Certified Practitioner (PMI-ACP®). PMI recommended some 10 books to read in preparation for the certification. I found the books very helpful and informative. I learned some new terms and new methods of working. Even though I have several years of experience with different agile methods, it was difficult to align those new ideas with the project management practices I was already using. I kept thinking: what now? If I started a project tomorrow, what would I do differently from my last project? Therefore, I decided to jot down my ideas and thoughts in a short reference book. The book "Going Agile Project Management Practices" is the result.
What is the book about
"Going Agile Project Management Practices" is an extensive review of the literature, with the content selected and influenced by my 30-plus years of experience. Agile and project management field consultants helped to review and form the structure and content of the book. They also added their expertise in the form of case studies and personal experiences. The book is not specific to any one agile methodology. It synthesizes definitions, concepts, and practices that would be applicable to an executive, manager, or project manager who wants to be informed about agile.

The book is divided into four sections: a definition of agility, its benefits, and practices; a description of the agile practices as aligned with the PMI Project Management Body of Knowledge (PMBOK®) Guide—Fifth Edition; an overview of the considerations for selecting and implementing an agile methodology; and a reference guide of selected methodologies, terms, and people. The key topics included in the book are:
• The applicability of agile, its benefits, and failures
• Description of selected methodologies (Scrum, Kanban, Lean, XP) and
agile practices
• The agile techniques and skills needed for an agile coach
• Agile practices aligned to the Project Management Body of Knowledge (PMBOK®) Guide Fifth Edition knowledge areas
• Personal experiences from agile coaches and team members
• A glossary of terms used in the book
• Profiles of agile and management experts referenced in the book
The originality of the "Going Agile Project Management Practices" book lies in its structure. First, it is a reference book or pointer to guide people on getting started with agile practices. Second, it approaches the going-agile topic from two directions: adding agility to traditional projects or de-risking agile projects. Finally, it covers "agility" in projects as a topic and not one specific agile methodology.

For example, one of the key questions I had about agile practices related to cost management. If the agile teams are self-managing, deciding their own definition of done, and agreeing what scope they can complete, how does that relate to cost management? In my world, projects have budgets that are established and agreed at the executive level. The executive expectation is that the project will deliver "all the required functionality" for the agreed budget. It turns out, agile projects use "functional contingencies" and the review meetings to manage budget expenditure. A functional contingency is a reserve of features that may not be implemented should the project not be able to realize all product features due to budget or time constraints. The review meetings are used to evaluate progress against expectations and authorize or decline further investment in the project.

Book excerpts supporting cost management

The following excerpts from "Going Agile Project Management Practices" are related to managing project scope and costs.

Chapter 6 - Integration: Scope is managed by prioritizing features to deliver the most valuable features first, establishing business rules for addressing changes, and deciding on the iteration's scope at the last possible moment.

Chapter 7 - Scope: Functional contingency is defined based upon the feature prioritization. It is a reserve of features that may not be implemented should the project not be able to realize all product features due to budget or time constraints.
[Dynamic Systems Development Method] DSDM suggests that the contingency is the effort to implement any features beyond the "must have" features, and that the "must have" features should represent 60% of the total project effort. In this scenario, the contingency is 40%.

Chapter 8 - Time: Schedule contingency is a time buffer that is added to the schedule baseline to account for adjustments and fluctuations based upon risks or unknown events. The method for estimating the contingency will vary depending upon the estimation method used for sizing the project and a range for the velocity [the rate at which the team can deliver the features]. The simplest method is to calculate the expected project duration and add a percentage of time for contingency (e.g., 20%). For example, if the project was expected to complete in five iterations, a 20 percent contingency would have it completing in six iterations.

Chapter 9 - Cost: The cost baseline includes all the costs and the contingencies, including staff costs and other costs. Since the scope is delivered in iterations, the burn rate per iteration and the burn rate per size unit are perfect for evaluating cost performance. When the budget is at risk, the functional contingencies can be used as a tool to adjust the features delivered and keep the project within its costs (Chapter 7).

Chapter 10 - Quality: The test plan should consider using continuous integration, test automation, and an agile tester as part of the quality processes. These practices are necessary to maintain a high throughput of user stories, minimize the build-up of technical debt, and deliver a high-quality solution.

Chapter 11 - Human Resources: At least 10% of the team members should be experienced with agile practices if pair programming is being used; if not, at least 25% of the team members should be experienced. Splitting teams, training, and [external] coaching are effective methods for bringing experience into the project team.
Chapter 12 - Communication: Team space should be organized to facilitate free-flowing communication amongst co-located and distributed team members. This may require adapting the facilities, using collaborative technologies, or undertaking additional activities.

Chapter 13 - Risk: Risks are scheduled for resolution by placing them in the product backlog during release planning,
planning the risk actions during the iteration planning, and working on risks during the project iterations.

Chapter 14 - Procurement: The scope, change management, acceptance, termination, and deliverable sections of agreements should be adjusted to fit the operations of an agile project, including evolving scope, adapting to change, and early termination.

Chapter 15 - Stakeholders: The project stakeholders will have different interests in the project and will come from inside and outside the organization. Map the interests of stakeholders onto the project events, artifacts, and information radiators that will be used to engage them in the project or inform them of the project status.

Summarized altogether, the project budget should consider 100% of the functional scope, risk actions, quality infrastructure, facility changes (e.g., team space, collaboration tools), training and external coaching, and schedule contingencies. Change management involves establishing rules for when and how to use the functional or schedule contingencies. The project reviews become important milestones for stakeholder engagement and for monitoring and managing budget expenditure. The figure shows the budget considerations in agile projects.
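The schedule-contingency and burn-rate arithmetic from the Time and Cost excerpts can be sketched in a few lines. The five-iteration/20% figures are the book's own example; the euro amounts are made up for illustration:

```python
import math

def iterations_with_contingency(expected_iterations: int, contingency: float) -> int:
    """Add a percentage time buffer to the schedule baseline (Time excerpt)."""
    return math.ceil(expected_iterations * (1 + contingency))

def burn_rate_per_iteration(total_spent: float, iterations_done: int) -> float:
    """Cost-performance check (Cost excerpt): spend per completed iteration."""
    return total_spent / iterations_done

# The book's example: a five-iteration project with a 20% contingency
# is planned as six iterations.
print(iterations_with_contingency(5, 0.20))   # 6

# Hypothetical figures: 30,000 EUR spent over 3 completed iterations.
print(burn_rate_per_iteration(30_000, 3))     # 10000.0 EUR per iteration
```

Comparing this burn rate against the cost baseline per iteration is what signals whether the functional contingency needs to be used.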
Recommendations from the book for going agile
There is no one right way for organizations or projects to transition to being agile. Projects are unique undertakings, and organizations have different cultures for decision-making and management controls. Therefore, the book lays out the facts, describes the agile practices, and then gives the reader the choice of how to best approach agility depending upon their situation. The following are some things to do and to avoid for those considering adopting agile practices for the first time.

Select and customize the methodology based upon the environment. Do not assume that any one methodology will work out of the box. Spend some time to understand the method and how it would fit into your environment. Understanding it might mean applying it in a few iterations before customizing it.

Educate the leadership team. Management support is key and is needed to implement and sustain agility. Invest time in getting the leadership team on board. They need to understand what changes and how they can support the team.
Define success. Define what a successful agile project looks like and manage expectations along the way. From Scrum, we learned the term "definition of done". Define and agree what it means. Otherwise, the team may think they are successful while management may not.

Set up the infrastructure. Being agile requires an investment in infrastructure that supports quick changes. Define what infrastructure is needed and set it up.

Adapt management reporting. Projects need to be evaluated more on their functional scope coverage than on their on-time performance. Adapt the expected management reports and performance metrics.
Hold a retrospective and tune the processes. Review often what is working well and worth repeating, what could use optimizing, and what should never be repeated. There is no cookbook for going agile.

Avoid software-driven approaches. Avoid new project management software solutions until after the first two or three projects. Work without introducing new project management technologies until you learn about being agile. Introducing
technology too early makes the transition about the technology and not about the practices.

Avoid confusion. Avoid trying to keep old practices while adapting to the new ones. It is tempting to try to keep old practices in place and not make the radical changes required. For example, going from a command-and-control project structure, where there is a project manager in charge, to a self-directed team means that
the distributed authority can cause some uncertainty about who is in charge. Avoid trying to make the agile coach the project manager in order to sidestep this discomfort. Agile practices do work, but they have to be given a chance.

"Going Agile Project Management Practices" is 442 pages and is available at Amazon, Barnes & Noble, and other e-tail outlets for €18.10 (EPUB and Kindle) and €23.99 (paperback).
About Gloria J. Miller

Gloria J. Miller is the founder of MaxMetrics, a management and information technology consulting company specialized in providing experts and expertise for international projects and programs. She has more than 20 years' experience providing consulting in managing complex projects and realizing organizational change. Gloria has a master's degree in business administration and undergraduate degrees in computer science and electronic technology. She is a Certified ScrumMaster (CSM), a PMI-Agile Certified Practitioner (PMI-ACP®), and a Project Management Professional (PMP®). MaxMetrics has offices in Heidelberg, Germany and Atlanta, Georgia, USA.
HR
Superman Syndrome
When Time Management is the subject, I'm definitely the target audience. The problem with this is that I shouldn't be. I have been a trainer and a psychologist for almost 10 years. I have delivered trainings on TM and I have read everything from research to pop literature on this subject. If this is a serious need and I also have the know-how about it, why do I procrastinate, why are deadlines more like guidelines, why do I forget? Well, the issue is not that we don't know what we have to do or how to do it. If that were the problem, I and every other TM guru would do anything we set our mind to, when we set our mind to it. All this could be named the Superman Syndrome in Time Management.
How would we define this?
It is that terrible condition that gets us to plan our time as if someone else would come and make it happen. Someone else who is more motivated, always motivated, makes time for what matters, doesn't glorify busy, doesn't require breaks, has a constant rhythm… someone with any other characteristic that would enable him to make that plan reality.

In the previous article we discussed the first step: defining clearly what needs to be done. In this article we will address the second step (the essential one).

Cause

It seems that when we want to do something, we plan that thing so as to do it in spite of ourselves. When we plan, instead of seeing what our time profile is and how we function physically and psychologically, and planning according to that, we prefer to ignore all that and plan against ourselves.

My advice when it comes to TM is not to find or create some miraculous system – it does not exist – but to create your own system, one that makes sense for you and works for you. You can find inspiration, but do not expect to find a tool or a training that will provide you with the perfect system.

OK. Then, what to do? Create your own Time Management system. You can do that by identifying your TM profile and starting from there. How do you identify your TM profile?
1. Figure out the psychology behind DOING (starting and maintaining a behavior)
2. Monitor the way you tend to do things in order to draw up your profile
3. Plan using that profile as a rule, without being Superman.

The psychology behind behavior

Research from psychology states that in order to initiate a behavior you need certain premises/antecedents available. The relationship between antecedents and behavior is an "only if" type of relationship: only if all antecedents are present can the behavior be initiated. Exactly! I underline "can be initiated", as having all antecedents present doesn't mean that the behavior will be initiated; it only means that without them it can't. The antecedents are: 1) Context; 2) Procedural knowledge and competencies; 3) Self-efficacy and 4) Motivation.
1. Context: I have available, at hand, all the resources needed to do the behavior at the time it is planned. Here we discuss two types of resources: external and internal.
External: Do I have the manuals from which I need to study? Do I have the sneakers with me when I planned to run?
Internal: Do I have the physical, intellectual, psychological and any other internal resources that are required of me when doing the behavior? (Am I physically fit to exercise for 30 minutes? Do I have the emotional set-up needed to study?)
2. Procedural knowledge and competencies: Do I have the ability to do that behavior? Ability has two stages: the first is knowing how to do that behavior (the steps required to do it) and the second is having the competencies needed to do it. For example, when I plan to study I need to ask myself: Do I know how to study? Do I have the competency of studying? It is indeed surprising how many of us don't know how to study easily and efficiently, and that makes studying hard and un-fun.
3. Self-efficacy: Do I believe that I can do that behavior correctly/according to my expectations? This is different from believing in yourself or from your self-esteem. SE is behavior-specific. SE can be
inferred from experience in doing that behavior, but it's not necessarily an objective measure of that experience. Low SE usually sounds like: "I don't think it will work out"; "I'm not sure about it / that I can do it right".
4. Motivation: Am I motivated to do that thing? Do I have the drive needed to do it when I planned it? Do I have the drive to allocate time and effort? Is the drive for this specific behavior more powerful, at the moment I planned to do it, than the drive for any other behavior? The question, for example, when studying a new programming language is: when I planned to study, will I want to study? Will I want to study more than I would want to do any other thing?

What does all this mean? It means that when I plan to study "every Tuesday from 20:00 to 22:00" I need to check that all four antecedents are present at the moment the behavior is planned to happen. What happens is that we tend to plan against ourselves. We plan without taking into account that we need some things available for behaviors to happen. We forget to take into account our personal rhythm (physiological, psychological, etc.). I strongly recommend that when planning we do it around ourselves, not in spite of ourselves.

So how do we plan around ourselves? First, let's know ourselves.

Time management studies have found that TM tools alone don't work unless you have perceived control of time. This means the subjective feeling that you can control how you spend your time. The only way to increase the perceived control of time is to know how you tend to spend your time, so you would plan your things when there is the highest probability for them to happen. That is what the Personal Profile is about.

First I recommend a week of monitoring yourselves. You will be surprised by how many surprises you will have. Carry a piece of A4 paper with you at all times and write down all the things you do throughout that week. And I do mean all (make sure no one finds that paper). After you have monitored, look at the papers and try to draw a profile of how you spend your time by looking for patterns: anything from when you need a break, to when you "just don't feel like it", to when you are the most creative, and so on.

The second step is to see why the current plan doesn't work. Make a plan, the way you usually do it, and then track what is actually happening. Figure out why (which of the 4 antecedents are not present) when you reschedule something or don't do something you planned.

The purpose of time management is not to have some miraculous plan but to have a plan that represents reality. A plan that makes us feel guilty, irresponsible, un-ambitious, lazy, un-motivated, angry, scared or upset is not a plan, it is a torturing device.

Planning according to profile

Take everything you learned about yourself and use it. So when planning, ask yourself: "Where should I plan this activity, when does it have the highest chance of happening, what antecedents do I need to make available to make it happen?"

So let's build plans that work for us and not against us.

Antonia Onaca
anto@aha-ha.com
psychologist, consultant and entrepreneur with 10 years of experience
www.todaysoftmag.com | nr. 10/April, 2013
communities
WordPress and the Community Spirit
Having success, easy and fast, far from being a goal that could belong to a community, is still one of the most discussed editorial subjects in business publications and beyond. Of course, the innate ambitions of human beings nourish this subject, but at the same time an illusion is also nourished: that success is a purpose in itself or, simply put, an upper limit we must reach at any cost; that success is a destination, an absolute achievement. From a different perspective, though, what could it mean to be successful? It could very well mean that the project or the idea you are working on has reached the stage where it has the potential to evolve over the long term. Success materialises step by step, with each border crossed in the evolution of an idea. Put into real-life examples: success is when you notice that the team you work in keeps growing, that the projects you work on become ever more diverse and complex, or, simply, when your idea manages to change something. Finally, this perspective on success can belong to a community, to a group of people who share one or more purposes. WordPress is one of those organizations whose way of functioning proves that this perspective is feasible and, moreover, that it can bring people together to build a network (a community) with influence at a global level. WordPress explores the so-called "pavilion of curious people" in the world, defying, through its way of working and developing, the natural tendency people have of acquiring knowledge strictly for personal interest. In this way, WordPress counts among the first online organizations to carry the community spirit further, through the platform they have developed.
What is WordPress?
WordPress is PHP-based software (backed by MySQL), developed by and for the community. As a content management system (CMS), it is used mainly for publishing content. Access to the platform is free and the source code is also available to anyone, WordPress being open-source software.
What is “the sense of community”?
WordPress means "community without frontiers"; it means collaboration, interaction, and restating the importance of quality content on the Internet. WordPress encourages what we could call collaborative knowledge, having as its exponent the code, which is also an exponent of innovation in the tech world. Moreover, WordPress is "openness": the openness of programmers' communities which strive to make things better, but also the openness people can have towards the critical mass called the "blogosphere". WordPress inspires us and, through the over 60 million blogs they host, teaches us a lesson about the power and value of the community. It teaches us that beyond the code stand the people, who have that certain... sense of community. And still, what does the sense of community mean? A sense of community always starts from a thinking framework people set up the moment they decide to be part of, and involved in, a community. What could the attributes of such a thinking framework be? Let's see:
• the direct purpose of the community is evolution (of the community and of its projects)
• the projects can have a global or a local impact
• the higher the impact of the project, the higher the expectations, and also the level of satisfaction
• the pleasure of getting things done
• the pleasure of working with other people
• a community project is a project from which many people will benefit
• (in our case) passion for programming
These principles can contribute to the development of the community and to its cohesion, while keeping the community's purposes clear so that each member can pursue them. On the other hand, the process of evolution in a community can be much more rapid, thanks to the diversity of expertise and the different levels of knowledge. In a community you can do more, together, and maybe this is one of the most important aspects.
WordCamp Transylvania
In the same spirit, between the 1st and the 2nd of June, Cluj-Napoca will host the conference dedicated to WordPress: WordCamp Transylvania. For two days, WordCamp Transylvania will be a meeting place for the Romanian WordPress community, from experienced programmers to students and designers passionate about WordPress. The event focuses on the exchange of ideas between participants and special guests: the first day will be dedicated exclusively to the conference, while the second day will be dedicated to a HackDay. The conference aims to bring the collaborative spirit specific to WordPress closer to the programmers' community in Transylvania. For more details on the event, you can visit: http://2013.transylvania.wordcamp.org
Cornelia Stan
corneliastn@gmail.com
Copywriter / Organizer WordCamp Transylvania