No. 5 / 2012 • www.todaysoftmag.com
TSM
TODAY SOFTWARE MAGAZINE
Your own startup
Social networks
Internal SEO techniques - part I
Core Data – Under the hood
Google Guice
5 Java practices that I use
Agile & Testing & Mobile: Three converging Concepts
Microsoft Kinect – A programming guide
How to Grow An Agile Mentality in Software Development?
Testing - an exact science
Play „Hard Choices” in every sprint and pay your debts
User Experience Design and How to apply it in Product Development
Build dynamic JavaScript UIs with MVVM and ASP.NET
UseTogether. What? How? and Why?
What’s new in Windows Communication Foundation 4.5
Service Bus Topics in Windows Azure
Android Design for all platforms
Malware on Android: statistics, behavior, identification and neutralization
Gogu
6 Your own startup / Călin Biriș
10 UseTogether. What? How? and Why? / UseTogether team
10 Internal SEO techniques (I) / Radu Popescu
13 Google Guice / Mădălin Ilie
16 Agile & Testing & Mobile: 3 converging Concepts / Rareș Irimieș
20 Android Design for all platforms / Claudia Dumitraș
23 How to Grow An Agile Mentality in Software Development? / Andrei Chirilă
26 Play „Hard Choices” in every sprint and pay your debts / Adrian Lupei
28 User Experience Design and How to apply it in Product Development / Sveatoslav Vizitiu
32 Build dynamic JavaScript UIs with MVVM and ASP.NET / Csaba Porkoláb
34 Social networks / Andreea Pârvu
36 Core Data – Under the hood / Zoltán Pap-Dávid
39 5 Java practices that I use / Tavi Bolog
42 Microsoft Kinect: A programming guide / The Simplex team
44 Testing: an exact science / Andrei Conțan
45 What’s new in Windows Communication Foundation 4.5 / Leonard Abu-Saa
47 Service Bus Topics in Windows Azure / Radu Vunvulea
50 Malware on Android: statistics, behavior, identification and neutralization / Andrei Avădănei
52 Gogu / Simona Bonghez
Editorial
Ovidiu Măţan, PMP
ovidiu.matan@todaysoftmag.com
Founder & CEO, Today Software Magazine

Apple has recently released the iPhone 5, and this news is likely to be more popular than an entire football championship. I will not discuss whether the hardware is more interesting than the competition's, because the fact is that they are similar, and the major differentiating factors are the operating system and the diversity of the App Store. From this point of view, iOS 6 has changed the general rules of the game a bit by introducing Apple's own maps, by removing the YouTube application, through better integration with social networks and through improved, almost real-time data synchronization between devices. Paradoxically, this makes it similar to the Microsoft strategy, where we see the same things: their own map system, built by Nokia using Navteq, very good integration of social networks into the OS, and synchronization support. Android continues its disruptive business trend by being offered for free, but the multitude of devices that must be supported is sometimes quite challenging for application development: depending on the device, the user experience varies greatly. If all that mattered ten years ago was the hardware, and consequently the device manufacturer, such as Nokia or Sony Ericsson, today we pay attention to what the operating system itself provides. The ecosystem it gives access to is more valuable than the purchased hardware. From all this, a software developer should understand that a single product can only have short-term success; what matters in the long term is the creation of an ecosystem that offers a more complete experience to the target users. For programmers, this lesson translates into an emphasis on diversity and the desire to understand all levels of such an ecosystem and to be actively involved in its development, from the application running on the device down to the backend. The current issue of the magazine contains 40% more articles than the previous one and can be considered an expression of this diversity.
The emphasis on the Agile methodology, together with articles on application layout on Android devices, SEO optimization techniques, Google Guice, Core Data on iOS, Java best practices, the latest news in WCF 4.5, the Windows Azure Service Bus, user experience design, the statistics and behavior of Android malware and many other topics you can find in the magazine, offers readers a wide range of subjects. The importance of ecosystems extends locally as well, and that is where the TSM magazine supports the community, enabling better communication and the sharing of technical expertise among programmers and beyond. The involvement of academia in collaborating with the IT companies from Cluj, and the increased importance of research, will be discussed during a special event; we will come back with details. Collaboration with local researchers may also be a good opportunity for local startups that aim to be truly innovative. I conclude by thanking all the contributors, the sponsors and, last but not least, you, the readers. Thank you,
Ovidiu Măţan
Founder & CEO Today Software Magazine
Editorial Staff

Founder / Editor in chief: Ovidiu Mățan / ovidiu.matan@todaysoftmag.com
Editor (startups and interviews): Marius Mornea / marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău / dan.hadarau@todaysoftmag.com
Reviewer: Romulus Pașca / romulus.pasca@todaysoftmag.com
Reviewer: Tavi Bolog / tavi.bolog@todaysoftmag.com
Translator: Cintia Damian / cintia.damian@todaysoftmag.com
Marketing collaborator: Ioana Fane / ioana.fane@todaysoftmag.com

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.com
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag
ISSN 2285 – 3502
ISSN-L 2284 – 8207

Authors

Călin Biriș / calin.biris@gmail.com
Marketing crocodile of Trilulilu and President of IAA Young Professionals Cluj

Sveatoslav Vizitiu / info@sveatoslav.com
User Experience and User Interface Senior Designer

Mădălin Ilie / madalin.ilie@endava.com
Cluj Java Discipline Lead @ Endava

Andreea Pârvu / andreea.parvu@endava.com
Recruiter for Endava and trainer for the development of skills and leadership competencies, communication and teamwork

Radu Popescu / rpopescu@smallfootprint.com
QA and Web designer @ Small Footprint

Radu Vunvulea / Radu.Vunvulea@iquestgroup.com
Senior Software Engineer @ iQuest

Rareș Irimieș / rares.irimies@threepillarglobal.com
Senior QA @ 3Pillar Global Romania

Zoltán Pap-Dávid / zpap@macadamian.com
Software Engineer @ Macadamian

Claudia Dumitraș / claudia.dumitras@skobbler.ro
Android Developer @ Skobbler

Simona Bonghez, Ph.D. / simona.bonghez@confucius.ro
Speaker, trainer and consultant in project management, owner of Confucius Consulting

Tavi Bolog / tavi.bolog@nokia.com
Development lead @ Nokia

Andrei Chirilă / Andrei.Chirila@isdc.eu
Team Leader / Technical Architect @ ISDC

The Simplex team / simplex@todaysoftmag.com

Andrei Conțan / andreicontan@hotmail.com
Principal QA @ Betfair, Co-Founder of the Romanian Testing Community

Adrian Lupei / alupei@bitdefender.com
Project Manager and Software Engineering Manager @ Bitdefender

Leonard Abu-Saa / leonard.abu-saa@arobs.com
System Architect @ Arobs

Andrei Avădănei / andrei@worldit.info
Founder and CEO of DefCamp, CEO of worldit.info

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.
startups
Your own startup
Think of three local online brands which started in Cluj and came to succeed at national or international level. They are not so hard to find, are they? However, if you think about the major IT centers in Romania, Cluj is in the first three. We have many IT practitioners who are absorbed by outsourcing companies. Few are the companies that have developed their own products.
Călin Biriș
calin.biris@trilulilu.ro Călin Biriş is the marketing crocodile of Trilulilu and President of IAA Young Professionals Cluj.
I heard that the market in Cluj will change. Wages are rising and have reached a high level, comparable to wages in Western countries. The outsourcing companies will begin to suffer for this reason, and their customers will begin to move their projects to other, cheaper markets. A healthy way forward for any local outsourcing company would be to start building its own products, addressing multiple client markets. Thus, by diversifying the portfolio, they will not put all their eggs in one basket and the risks will be divided. But why wait for the company where you work to put you on a team for a new product, when you can found a startup outside office hours and work for yourself? Your advantage at this point is that you have a good salary, and in your free time you can focus on your own future projects. If you think of a startup, the most important things that you should take into
account are: team, money and idea.
The team
The most important resource in a start-up is the human capital, i.e. the specialists. Many who are thinking of starting a start-up fall into the money trap, believing that without investment money you can't achieve a successful product on the market, and therefore they never start anything. I think that if you build a good team that delivers a valuable product with a healthy business model, money will come. The catch is that this team should be composed of members with complementary skills. When Trilulilu started, the initial project had a team made up of the best specialists in design, programming, server administration, management, branding and legal: a handful of people who complemented each other and worked towards the same purpose, to launch a high-impact service on the market.
Think about the skills you need in your startup and find the best people to engage with you.

Money

If you start a startup that you think has potential, it's easier than you expect to reach people willing to provide feedback, or even to invest in the idea of your team. I recently wrote on my blog about the events dedicated to start-ups, where Angel-type investors meet teams who need money for a product which they develop. A handful of such events is already happening once a year. But not all investors want to advertise the fact that they have money to give. We found out about people who have money set aside and are waiting for the right team to invest in, even if they don't participate in such events. Also, you can think of other financing sources such as loans, bank grants or the 3F: friends, family and fools. But more important than what you do to get the investment for your start-up is to think about what business model you will follow and how you will make money. If you have a good answer to this, the investment will come along with an increasing number of paying customers.

The idea

I had several opportunities to talk to developers who wanted to be involved in their own project, but who didn't believe it would be a market success. In addition, due to the work on the projects of other customers, they had no time to think of their own. The best they could do was to try to find ideas through networking. By taking part in several start-up competitions I have realized that the best ideas come from people who are not so oriented towards IT but more towards the business side. Therefore, meetings should be organized more often between business experts and IT specialists who try to find those start-up ideas that would have a chance of success. If you don't have a startup idea, try meeting more and more people from different areas and discover the opportunities they saw in different markets. Maybe together you can find solutions that could be successful.

But how do you realize that an idea is good? The easiest way to test an idea is to try to sell the product before it is actually built. You might find customers willing to pay now for a solution which is to be delivered only after a period of time. With the same test you can discover needs that you hadn't originally thought would be covered by your service or product.

But still, why start working on a startup and not settle for what you have now? Because you can:
• work on your own ideas,
• earn more,
• be more appreciated for what you create,
• meet more people and travel more,
• avoid keeping all your eggs in one basket, in case something happens.

In vain is Cluj among the top IT centers in the country if we cannot boast about our achievements. It's up to you to try something new, to make a team and start something, anything. You have a lot more to gain than to lose. Think about it!
UseTogether. What? How? and Why?
It all started about 6 months ago at Startup Weekend Cluj, where a group of 12 newly acquainted people joined hands around a simple, yet powerful idea: why buy when you can borrow? The plan seemed simple enough: all we had to do was put together the best ideas we had and come up with a proof of concept for an online platform centered around the borrowing and lending of objects. Fortunately, the whole experience ended up being more than positive: we left the event with a ton of advice, new ideas, fresh perspectives and, most importantly, an all-round thumbs up for the idea on top of which the project was built.

As you might expect, a long series of meetings, plans, sketches and doodles followed, and working between jobs, university, family, friends and all sorts of other obligations, today we can happily say that UseTogether is finally up in beta and is heading (hopefully) in the right direction. This article will describe the dev adventures behind UseTogether and the stack of technologies that we use and that makes our life easier.

Although we won Startup Weekend Cluj using the classic PHP, MySQL and JavaScript formula, we soon realized that we needed something more powerful and more flexible if we wanted to give the idea behind UseTogether a proper implementation. After a long series of discussions, we finally decided to use Django as the cornerstone of our project. Django is an MVC web framework written in Python which emphasises fast development and simple, pragmatic design. It was first released publicly around 2005, out of its creators' frustration at having to maintain complex websites written in PHP, and it was originally intended to be a thin layer between mod_python and the actual Python code. Nonetheless, sick of having to constantly copy and paste code from project to project, the creators developed the initial thin layer enough that currently working in Django is pretty much a breeze. Still, why Django and not, say, Ruby on Rails? For us, the main reason was simple: all of us had previous experience with Python and, of course, Django was named after a guitar player, Django Reinhardt. As a note, the list of popular websites built with Django includes names such as Pinterest, Instagram, Mozilla, The Washington Times and the Public Broadcasting Service.

With the technology issue out of the way, the only thing left to figure out was a choice of project management / bug tracking software. We first settled for Trac. As described on http://trac.edgewall.org/, Trac is a minimal software system which combines a wiki and an issue tracker. It's ideal for projects where a roadmap is firmly in place and face-to-face communication takes place easily and frequently. Unfortunately, our team met none of those requirements. We did not have a clear idea of how we were going to tackle the project and, because we also had jobs and university to attend, we only managed to get together during weekends.

So we had to look for something a bit more complex feature-wise, something that would allow us to be firmly connected with the development process at all times. Fortunately, the guys at Facebook had similar problems long before us and, as an answer to those problems, Phabricator was born. Phabricator is a stack of web apps meant to make developers' lives a bit easier and, although it looks a lot like systems such as Trac or Redmine, a short visit to http://phabricator.org/ will reveal that Phabricator takes itself a lot less seriously: "Facebook engineers rave about Phabricator, describing it with glowing terms like 'okay' and 'mandatory'." From descriptions such as "Shows code so you can look at it" to closing a ticket "out of spite" or the submit button that has been renamed to "Clowncopterize" (because clowns and helicopters are awesome), Phabricator is like a long walk through Nerdtown down Geekstreet. A tool written by programmers for programmers.

Beyond the funny interface that resembles Facebook a lot, Phabricator provides everything you could ask from a modern project management tool: code review, issue tracking, source browsing, wiki,
CLI and even an API. Versioning-wise, Phabricator can work with Git, Subversion and Mercurial, and can run on Linux, Mac and even Windows. For us, code reviews have proven to be a real lifesaver. While you might be willing to give up on code quality and take the quick and dirty road when you're the one writing the code, fortunately we humans are a pretty unforgiving bunch: we're more than willing to see every little mistake others have made. And if at times you're blind to your own huge faults, I assure you that this workflow has brought to our attention every small and seemingly insignificant hack in our coding. I can't overstate the importance of code reviews in a project like UseTogether, with a really small team of developers, where each person has to be able to understand and modify any part of the codebase. And last but not least, our toolset had to include Git. Why Git? Because it's fun, fast and very easy to use. In the end, we wish you all as few critical bugs as possible and we leave you with a little piece of wisdom: "Python is a drop-in replacement for BASIC in the sense that Optimus Prime is a drop-in replacement for a truck." - Cory Dodt.
The UseTogether team From left to right, on the top row: Daniel Rusu, Mircea Vădan, Alex Țiff, Cătălin Pintea, Gabi Nagy, Paul Călin, Adriana Valendorfean, Victor Miron; and on the bottom row: Larisa Anghel, Ioana Hrițcu, Sorina Andreșan, Alina Borbely
Local communities
The community section commits to keeping track of the relevant groups and communities in the local IT industry and to offering a calendar of upcoming events. We start with a short presentation of the main local initiatives, and we intend to grow this list until it contains all relevant communities, both from the local landscape and from the national or international one with a solid presence in Cluj. The order is given by a function of the number of members and the number of activities relative to the lifespan; thus we strive for a hierarchy that reveals the involvement of both organizers and members.
Transylvania Java User Group
Java technologies community.
Website: http://www.transylvania-jug.org/
Started on: 15.05.2008 / Members: 493 / Events: 38

Romanian Testing Community
Community dedicated to QA.
Website: http://www.romaniatesting.ro
Started on: 10.05.2011 / Members: 520 / Events: 1

Cluj.rb
Ruby community.
Website: http://www.meetup.com/cluj-rb/
Started on: 25.08.2010 / Members: 112 / Events: 27

The Cluj Napoca Agile Software Meetup Group
Community dedicated to Agile development.
Website: http://www.agileworks.ro
Started on: 04.10.2010 / Members: 234 / Events: 13

Cluj Semantic WEB Meetup
Community dedicated to semantic technologies.
Website: http://www.meetup.com/Cluj-Semantic-WEB/
Started on: 08.05.2010 / Members: 125 / Events: 18

Romanian Association for Better Software
Community dedicated to IT professionals with extensive experience in any technology.
Website: http://www.rabs.ro
Started on: 10.02.2011 / Members: 173 / Events: 9

Google Technology User Group Cluj-Napoca
Community dedicated to Google technologies.
Website: http://cluj-napoca.gtug.ro/
Started on: 10.12.2011 / Members: 25 / Events: 7

Cluj Mobile Developers
Community dedicated to mobile technologies.
Website: http://www.meetup.com/Cluj-Mobile-Developers/
Started on: 08.05.2011 / Members: 45 / Events: 2

Calendar

September 25: Technical Days C++
Contact: http://blog.people-centric.ro/news/un-nou-eveniment-technical-days-c-la-cluj

September 26: Open Coffee Meetup - #HaSH - Hack a Server Hackathon
Contact: http://www.facebook.com/opencoffeecluj/events

September 29: Windows 8 Dev Camp
Contact: http://codecamp-cluj-sept2012.eventbrite.com

October 2: HTML5 and Problem solving in Web Design
Contact: http://www.meetup.com/Cluj-Semantic-WEB

October 17: Patterns for Parallel Programming
Contact: http://www.rabs.ro

November 9: Artificial Intelligence, Computational Game Theory, and Decision Theory - Unifying paths
Contact: workshop2012@rist.ro
programming
Internal SEO techniques - part I
In the previous issue we saw the most important changes in Google's search algorithms. In the current issue we decided to write an article presenting some of the most important search engine optimization techniques, applicable in the area of internal SEO. These techniques, although easy to apply, offer very good results in the long term concerning the increase of organic traffic.
Radu Popescu
rpopescu@smallfootprint.com
QA and Web designer @ Small Footprint
Meta tags
The purpose of the <title> tag is identical to that of the <meta description> tag, which is to increase the click rate on the results page. The use of interesting titles or keywords within the tag will increase this click rate. In case you want to use a brand name in the <title> tag, there are two approaches. If the brand is famous, it is recommended to place it at the beginning of the tag (e.g.: Brand Name|Page Name). In the case of small or new brands, we have to add the brand's name at the end of the tag (e.g.: Page Name|Brand Name). These approaches are based on the idea that an important brand name has a very powerful impact on the decision to click on that specific result, while unknown brands will not attract clicks. The length of the <title> tag has to be between 10 and 64 characters, keeping in mind that there are words with no descriptive value that occupy space, such as "of", "on", "in", "for", etc., so we have to avoid them.

There is a myth related to <meta keywords>: that you need to have as many keywords as possible in a page, because they are important in SERPs. This is not true. In September 2009, Matt Cutts (who at the time was the Search Quality Manager) announced on one of Google's blogs that the famous search engine would no longer take <meta keywords> into account. Adding and using this tag is not harmful, but it still takes time that we could use for other tasks.

Another myth is that the <meta description> tag would help a page appear higher in the Google results. In fact, this meta tag's purpose is to convince as many people as possible to click on our result. <meta description> is a tool which helps very much to increase the click rate in SERPs. All the people who see a results page (after a Google search) have to decide which of the links to click. A description that offers a very good or interesting summary will determine most people to click on it. A more controversial approach in this meta tag's case is the use of very interesting but incomplete descriptions (cut off in the middle of an idea, ending with suspension points). This will make the person faced with that result want to find out more details, so he will click on it. Concerning the length of the description, it should be between 50 and 150 characters in order to get the best results.
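As a rough illustration of the rules above, here is a small helper that assembles a <title> and trims a teaser-style <meta description> to the recommended length. The class and method names are ours, not from the article; this is a sketch, not a prescribed implementation.

```java
public class MetaTags {
    // Famous brands go at the beginning of the <title>, small or
    // new brands at the end, as described above.
    static String title(String brand, String page, boolean famousBrand) {
        return famousBrand ? brand + "|" + page : page + "|" + brand;
    }

    // Trim a description to at most `max` characters, cutting at a
    // word boundary and ending with suspension points, for the
    // deliberately incomplete teaser descriptions mentioned above.
    static String teaser(String text, int max) {
        if (text.length() <= max) {
            return text;
        }
        String cut = text.substring(0, max - 3); // leave room for "..."
        int lastSpace = cut.lastIndexOf(' ');
        if (lastSpace > 0) {
            cut = cut.substring(0, lastSpace);
        }
        return cut + "...";
    }
}
```

A description that already fits the 50-150 character window is left untouched; anything longer becomes an intriguing, unfinished teaser.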
The Sitemap.xml file and its role
The Sitemap is an XML file that helps search engines find more easily the pages they will index, by offering a list that contains all of the site's URLs. This file was used for the first time by Google in 2005, and in 2006 the MSN and Yahoo search engines announced that they would also use the sitemap for indexing. The file format is very simple and anyone can create it. There are, though, free online generators that can do this in a much shorter time. Among these, the best and the fastest is www.xml-sitemaps.com. Note that if this file contains over 50,000 URLs or is larger than 10 MB, it has to be compressed in gzip format.

    <?xml version="1.0" encoding="utf-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url> <!-- this section is repeated for each page -->
        <loc>http://example.com/</loc> <!-- the page address -->
        <lastmod>2006-11-18</lastmod> <!-- date of the last content change -->
        <changefreq>daily</changefreq> <!-- how often the content changes -->
        <priority>0.8</priority> <!-- the page priority (can range from 0.0 to 1.0) -->
      </url>
    </urlset>
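The same structure can also be produced programmatically; a minimal sketch (the class and method names are ours) that emits only the mandatory <loc> element for each page:

```java
import java.util.List;

public class SitemapWriter {
    // Build a minimal sitemap.xml for a list of page URLs, following
    // the structure shown above. Only the mandatory <loc> element is
    // emitted; lastmod, changefreq and priority are optional.
    static String build(List<String> urls) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\" encoding=\"utf-8\"?>\n");
        sb.append("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String url : urls) {
            sb.append("  <url><loc>").append(url).append("</loc></url>\n");
        }
        sb.append("</urlset>\n");
        return sb.toString();
    }
}
```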
Once created, the Sitemap.xml file needs to be added to the site root, and it will be accessible at the address www.example.com/sitemap.xml.

HTML code validation and its importance

One thing that is very often forgotten, or left behind because of short development time, is the validation of the HTML code. We all remember the days when most sites had small labels or logos saying that the website contains valid HTML and CSS code. Actually, they weren't just something to brag about; they really helped with SEO. The search engines have to parse the HTML code in order to find the content. If there are errors, it is possible that some parts of the website can't be taken into consideration, and all the content optimization work will be for nothing.

Among the most common HTML validation errors, we mention the following:
• Unclosed tags. The <div> tag must be closed with </div>, while the <img> tag is closed in a single, self-closing form: <img/>
• Using an incorrect DOCTYPE
• Lack of the ALT attribute inside the image tag
• Incorrect tag nesting. An incorrect way of nesting is <div><b>TEXT</div></b>; the correct way is <div><b>TEXT</b></div>
• Not converting special characters into entity symbols. For the "©" character we should use "&copy;" in the HTML code

Checking the validity of the HTML code can be done using a free tool, available to anyone at www.validator.w3.org. Although not all errors have a negative impact on SEO, it is recommended to have valid and clean code, at least for professionalism and quality reasons.
<b> vs. <strong> and <i> vs. <em>

We have to understand that when search engines look at a web page, they don't see the same thing as a real person. They see the source code and they analyze it. Because of this, although for us a text inside the <b> (bold) tag looks the same as a text inside the <strong> tag, search engines see two totally different things. The <b> tag is perceived as a design tag, giving the words a more pronounced style, while the <strong> tag carries semantic information which emphasizes the content. Generally, the <em> tag will be used for content generated by the users (testimonials or reviews), the <strong> tag to mark different keywords inside the content, and for pure text styling it is recommended to avoid as much as possible the <b> and <i> tags and to use instead the font-weight and font-style CSS properties.
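To make the distinction concrete, a hedged markup sketch (the class name and the sample text are our invention, not from the article):

```html
<style>
  /* purely visual styling via CSS, instead of <b> and <i> */
  .highlight { font-weight: bold; font-style: italic; }
</style>

<!-- semantic emphasis: <strong> signals important content to search engines -->
<p>Our shop sells <strong>handmade furniture</strong> in Cluj.</p>

<!-- visual emphasis only: no semantic weight intended -->
<p class="highlight">Limited offer this week only.</p>
```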
Redirecting the non-www address to the www address
A 301 redirect is a type of permanent redirection which allows transferring over 90% of the optimization and SEO benefits from the redirected domain to the new domain. Generally, this type of redirect is used when moving a website to another domain without losing a lot of the organic traffic coming from Google. Many
times it is overlooked that the addresses www.example.com and example.com are seen as different by search engines, and the SEO power of the external links may be cut in half. A 301 redirection of the address example.com to www.example.com will help us get a better rank. To better understand this, imagine that we have 50 links from some friends' blogs to example.com and 50 links from various other websites to www.example.com. Those 100 links are divided between two different addresses, but with a simple 301 redirect we can have 100 links to the same address, almost doubling the authority of our website. This type of redirection can be done through the hosting dashboard, by using an .htaccess file in the case of websites built with PHP, or by using the web.config file in the case of websites that use Microsoft technologies.
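For the .htaccess case, a common Apache mod_rewrite recipe looks like the sketch below; the domain is a placeholder and this exact snippet is our illustration, not taken from the article.

```apache
RewriteEngine On
# send example.com to www.example.com with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```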
Conclusions
A very important thing to know is that excessive optimization may have a negative impact on your website, now that the Penguin update has been introduced. Using exactly the same word, or the same group of words, in the meta tags, in the page title, in the <h1> tag and also inside the content may bring a penalty. It is recommended to use as many variations as possible of the same keyword, or different constructions of a certain group of words. Although most of these techniques seem simple at first glance, they bring a big advantage in search engine optimization. The internal branch of SEO is related mainly to the actions of the people who build or own a website, rather than to external factors, and this makes us pay special attention and gives us confidence that the results will be real. In the next issue, we will continue this article with other techniques for internal optimization.
Google Guice
As promised in the previous article, I'll continue presenting Google Guice, this time for web applications. In order to do this you'll need to get the servlet extension, part of the standard distribution, along with other extensions like JMX, JNDI, Persist, Struts or Spring. Using Guice, web.xml will be reduced to a minimum: just enough to make the Guice container start. The rest of the configuration will be easily done in Java, in the same type-safe manner presented in the previous article.
Mădălin Ilie
madalin.ilie@endava.com Cluj Java Discipline Lead @ Endava
The servlets will benefit from:
• Constructor injection
• Type-safe configuration
• Modularization
• AOP

In this article I'll present the following scenarios:
• Developing a web application from scratch
• Adding Guice to an existing web application
Starting a new web application using Guice
Besides the core Guice libraries presented in the previous article, we must also add guice-servlet.jar to the application's classpath (please check the end of the article for the Maven dependency). After the classpath is properly configured, we must first define the Guice filter in web.xml. This will actually be the only configuration present in this file. This is the web.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0">
      <display-name>Guice Web</display-name>
      <filter>
        <filter-name>guiceFilter</filter-name>
        <filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
      </filter>
      <filter-mapping>
        <filter-name>guiceFilter</filter-name>
        <url-pattern>/*</url-pattern>
      </filter-mapping>
    </web-app>
We are now sure that all requests will be processed by Guice. The next step is creating the Injector and defining the Modules. Besides the "normal" modules used by an application (presented in the previous article), in order to actually use the Guice Servlet module we must declare an instance of com.google.inject.servlet.ServletModule. This Module
13
programming Google Guice
is responsible for setting the Request and Sessions scopes and this is the place where we’ll configure the servlets and filters within the application. Considering that we write a web application, the most logical and intuitive place to create the Injector is within a Ser vletContextListener. A ServletContextListener is a component that fires just after the application is deployed and before any request received by the server. Guice comes with its own class that must be extended in order to create a valid injector. As we’ll use Servlet 3.0 API we’ll annotate this class with @WebListener – we won’t need to declare it in web.xml. I was saying that the ServletModule is the place where we configure our servlets and filters. This is the content of the class that it will extend this module. It will configure a servlet mapped to all the .html requests: package endava.guice.modules; import endava.guice.servlet.MyServlet; import com.google.inject.servlet.ServletModule; public class MyServletModule extends ServletModule { @Override protected void configureServlets() { serve(„*.html”).with(MyServlet.class); } }
The ServletContextListener:

package endava.guice.listener;

import javax.servlet.annotation.WebListener;

import endava.guice.modules.MyServletModule;

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceServletContextListener;

@WebListener
public class MyGuiceConfig extends GuiceServletContextListener {

    @Override
    protected Injector getInjector() {
        return Guice.createInjector(new MyServletModule());
    }
}

Please note inside the getInjector() method the creation of the Injector based on the servlet module defined before. If the application has many other modules, all of them must be declared here. Also, you can see how intuitive the declaration of the servlet mapping is.

This is the MyServlet class:

package endava.guice.servlet;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import endava.guice.service.MyService;

import com.google.inject.Inject;
import com.google.inject.Singleton;

@Singleton
public class MyServlet extends HttpServlet {

    private static final long serialVersionUID = 1861227452784320290L;

    @Inject
    private MyService myService;

    protected void service(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter().println("Service: " + myService.doStuff());
    }
}

Let's analyze this code:
1. A servlet must be a singleton – mark it using the @Singleton annotation – otherwise the application will throw an Exception.
2. We use field injection in order to get a MyService instance.
3. The class extends HttpServlet just like any other servlet.

The MyService interface:

package endava.guice.service;

import com.google.inject.ImplementedBy;

@ImplementedBy(MyServiceImpl.class)
public interface MyService {
    String doStuff();
}

And its implementation:

package endava.guice.service;

public class MyServiceImpl implements MyService {

    @Override
    public String doStuff() {
        return "doing stuff!";
    }
}

The application is ready to be deployed. This is the result of calling index.html: the servlet prints "Service: doing stuff!".

Integrating Guice into an existing web application

In order to integrate Google Guice into an existing web application we must make sure that everything is in place:
• the required jars are in the classpath;
• the Guice filter is defined in web.xml;
• we have a ServletContextListener that extends GuiceServletContextListener.

At this stage, all these configurations will not have any impact on the application – everything will work as before.

We can have 2 directions:
• use Guice only for new things – of course, this is not a best practice, but it is a normal scenario for big applications with legacy code;
• Guicefy the entire application – the ideal case.

For the second case you must follow the same path presented in the first part of the article. For the first case we'll end up using DI in servlet classes that are not instrumented by Guice. We can access the Injector instance using the ServletContext:

Injector injector = (Injector) request.getServletContext()
        .getAttribute(Injector.class.getName());

In order to get all the dependencies injected you can:
1. call injector.injectMembers(this) – this will inject all the dependencies;
2. call injector.getInstance(clazz) for each instance that needs to be injected.

Request and Session Scope

The servlet extension adds 2 new scopes: Request and Session. We'll see next an example of using the Session scope. We'll slightly modify some of the classes presented before. Considering that we'll need to mix scopes and we want to access an object with a narrower scope from an object with a wider scope (access a Session scoped object from a Singleton), we'll use Providers (see the Providers section at the end of the article for details).

The servlet module will look like this:

package endava.guice.modules;

import com.google.inject.servlet.ServletModule;
import com.google.inject.servlet.ServletScopes;

import endava.guice.provider.PojoProvider;
import endava.guice.servlet.MyServlet;
import endava.guice.servlet.PojoClass;

public class MyServletModule extends ServletModule {
    @Override
    protected void configureServlets() {
        serve("*.html").with(MyServlet.class);
        bind(PojoClass.class)
                .toProvider(PojoProvider.class)
                .in(ServletScopes.SESSION);
    }
}

Please note the ServletScopes.SESSION binding.

The PojoProvider class:

package endava.guice.provider;

import endava.guice.servlet.PojoClass;

import com.google.inject.Provider;

public class PojoProvider implements Provider<PojoClass> {
    public PojoClass get() {
        return new PojoClass();
    }
}

The PojoClass class:

package endava.guice.servlet;

public class PojoClass {

    private String name;

    public void setName(String s) {
        this.name = s;
    }

    public String getName() {
        return this.name;
    }
}

In order to prove that the application is actually working, we'll modify the MyServlet class to display additional information:

package endava.guice.servlet;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import endava.guice.service.MyService;

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.Singleton;

@Singleton
public class MyServlet extends HttpServlet {

    private static final long serialVersionUID = 1861227452784320290L;

    @Inject
    private Provider<PojoClass> pojoClass;

    @Inject
    private MyService myService;

    protected void service(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter().println("Service: " + myService.doStuff() + " with ");
        if (pojoClass.get().getName() == null) {
            pojoClass.get().setName("name");
        } else {
            pojoClass.get().setName("existing name");
        }
        response.getWriter().println(pojoClass.get().getName());
    }
}

In order to demo the functionality we'll access the application twice in the same session. The first time it will display the freshly set "name" value; the second time it will display "existing name".

Providers

Providers address the following situations:
• a client needs more instances of the same dependency per injection;
• a client wants to get the dependency when it will actually use it (lazy loading);
• you want to inject a narrower scoped object into a wider scoped object;
• additional logic is needed in order to create the object being injected;
• you want to control the process of creating instances per binding.

As you can see in the previous examples, it is very easy to write a Provider. You just need to implement the Provider<T> interface, where T is the concrete type of the object being injected.

This example can be easily changed in order to use the Request scope instead of the Session scope. This is how a simple Guice web application looks. I tried to touch the most important points of the Guice Servlet extension. As mentioned in the previous article, this is just a small intro. You can continue experimenting with different situations and you'll learn the most by actually using Guice in a real project.

Guice as a Maven dependency

<dependency>
    <groupId>com.google.inject</groupId>
    <artifactId>guice</artifactId>
    <version>3.0</version>
</dependency>
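As an aside (not from the original article), the provider indirection used above can be sketched framework-free in plain Java. The names below are ours; the nested interface simply mirrors the shape of com.google.inject.Provider, to show why a singleton holding a Provider can hand out per-call (e.g. per-session) instances:

```java
public class ProviderSketch {

    // The Provider contract: defer creation until get() is called.
    public interface Provider<T> {
        T get();
    }

    public static class Pojo {}

    // Counter lets us observe that nothing is created at wiring time.
    public static int created = 0;

    public static final Provider<Pojo> POJO_PROVIDER = new Provider<Pojo>() {
        public Pojo get() {
            created++;           // side effect makes the laziness visible
            return new Pojo();
        }
    };

    public static void main(String[] args) {
        System.out.println("created at wiring time: " + created);
        // Each get() produces a fresh instance; a real container would
        // decide here whether to reuse one per request/session instead.
        Pojo a = POJO_PROVIDER.get();
        Pojo b = POJO_PROVIDER.get();
        System.out.println("created after two get() calls: " + created);
        System.out.println("distinct instances: " + (a != b));
    }
}
```

The singleton servlet above does exactly this: it keeps the cheap Provider forever, while the session-scoped PojoClass instances come and go per call.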
testing
Agile & Testing & Mobile Three converging Concepts
A modern overview of the IT universe reveals mobile technology as a particularly dynamic domain. This market sector is presently disputed between three major competitors, namely Apple, Nokia and the extended family of Android devices (Samsung, Motorola, Sony Ericsson, etc.). Until recently, mobile devices only offered users access to basic applications (e-mail, browsers, calculator and rudimentary games), yet nowadays we are bombarded with financial, health and insurance applications, personal assistant applications and advanced graphics games.
rares.irimies@threepillarglobal.com Senior QA @ 3Pillar Global Romania
These applications have quickly led us to use our mobile phones or tablets throughout the day, even for the least important tasks. As customer needs evolved, mobile application developers have shifted their focus from the qualitative aspect, favoring the complexity of their product. Increasing customer demands have led to frequent problems related to the time allocated for the development and testing of applications.
A solution for this issue could be identified by discussing the development methodology best suited for customer needs, one which requires as little time and as few resources as possible. Standards shall be established at company level and will have to be adjusted and implemented so as to respect both customer demands and the product for delivery. By following the "Agile" methodology, the company that I work for has managed to define a set of guidelines that ensure the product will correspond to the customer's vision and market ideal.

These basic principles are:
• defining the testing strategy;
• defining the testing objective;
• constant adaptation to objective changes;
• identifying risks and establishing a prevention strategy;
• continuous feedback.

We shall further discuss each of these principles targeting the mobile application universe and analyzing their individual impact upon product development.

Defining the testing strategy
Even though we cannot predict the obstacles we could come across during testing, it is essential to establish the basic testing strategy from the very beginning. We shall therefore need to:
• establish the testing area;
• determine the testing environment;
• define the types of testing to be applied.

Testing area
The customer plays a very important role in establishing the area of testing (devices, operating systems, platform combinations, browsers etc.). The QA (Quality Assurance) team has to inform the customer about the risks presented by a reduced area of testing or about likely risks once the application is being developed. To gather this information, the team needs to prospect the market so as to establish which are the most commonly used devices and operating systems. During a customer-team discussion, the following aspects must be agreed upon:
• which platforms are targeted in development;
• which devices and operating system combinations testing should be focused on;
• how likely it is that mobile phone operators or device producers have implemented common
functionalities upon which the application would rely on. Due to limited time or budget, actual testing will not be able to cover sufficient devices and operating system versions to considerably reduce the incidence of bugs. In this event, it is recommended that the most frequently used device-operating system combinations be tested, as well as the devices with the highest number of reported bugs.

Another major step in establishing the area of testing is positioning the application on the market (games, social, banking, health etc.), since we will be able to decide which devices to use for testing according to the market segment the application is intended for. For example, we would need to cover as many devices as possible for a banking application to ensure easy access for users, whereas for games we would focus on devices mostly used by children (iPod, iPad, Android tablet).

Another problem that may occur during the development of a complex application is using APIs provided by existing applications (Facebook, Twitter, etc.) or by third-party libraries (TapJoy, iAd etc.). The customer needs to be aware that integrating these components may push the deadline back or may have a notable impact upon the application's stability (third-party bugs that cannot be fixed in due time or the avoidance of which could unpleasantly affect the application's functionality).

Testing environment
Once the area of testing has been established, the testing environment needs to be defined. We shall consider the following aspects:
• which testing types to use;
• the costs of obtaining the specific environment;
• the testing effectiveness on such an environment;
• whether the generated environment can be used for testing other applications.

We can thus make a comparison between the following testing environments: emulators, physical devices and shared infrastructure.

Emulators
The emulators are convenient to use, but a series of characteristics specific to emulator testing have to be taken into account if they are used as the main testing environment:
• effective in: Sanity testing, Compatibility testing, Functional testing, Smoke testing;
• free to download;
• 40% of testing can be executed on the emulator;
• device-operating system combinations can be obtained;
• not fully effective for day-to-day scenarios;
• relatively slow testing environments;
• intensely used in automation testing.
Shared infrastructure
This alternative has a few features worthy of mention:
• effective in: Exhaustive testing, Compatibility testing, Interruption testing, Functional testing, Regression testing;
• not free (DeviceAnywhere, PerfectoMobile);
• testing can cover a wide array of device-operating system combinations;
• slow, because an internet connection is required;
• effective for reproducing day-to-day scenarios.
Physical Devices
The approach of testing using real devices can be considered the closest one to the authentic experience, since user behavior can be perfectly simulated. The features important to remember are:
• effective in: Functional testing, Smoke testing, Acceptance testing, Regression testing, Bug fix testing;
• accurate for reproducing problems that occurred while using the device;
• 50% of testing has to be conducted on a physical device;
• effective in automation testing;
• offers quick and precise results.
Testing types
After defining the testing environment, the team has to decide which testing types to resort to. Be it manual or automated, the testing process must be adapted to
the project complexity, allocated resource budget as well as the amount of time available. Consequently, when comparing manual with automation testing, we must take the listed aspects into account:
Which is the best testing method?
Manual testing is 100% effective, since it is easily able to mimic user behavior, and should be used for at least 70% of the entire project. Automation testing is only 60-70% effective, because the scripts applied cannot be fully complete or correct. Presently, the question of Manual vs. Automation is highly debated, and most conclusions state that automation is not able to substitute for manual testing.
What are the costs of each method?
To provide a short answer, we need to consider the complexity of the application and the amount of time allocated for testing. Automation testing is more costly in the initial phase, in which time and resources are consumed by developing automated scripts, setting up and using a Continuous Integration environment. Still, this method will prove useful in the regression testing stage. On the other hand, manual testing is able to isolate bugs in the earliest stage of product development, but presupposes higher costs in terms of the physical devices required.
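The trade-off can be made concrete with a back-of-the-envelope estimate; the hour figures below are illustrative (ours, not from the article): automation pays off once its one-time scripting effort is amortized by the hours it saves on each regression pass.

```java
// Illustrative break-even estimate: automation vs. manual regression.
public class BreakEven {

    // Number of regression cycles after which the one-time automation
    // effort becomes cheaper than repeated manual passes.
    static long breakEvenCycles(double setupHours,
                                double manualHoursPerCycle,
                                double automatedHoursPerCycle) {
        double savedPerCycle = manualHoursPerCycle - automatedHoursPerCycle;
        return (long) Math.ceil(setupHours / savedPerCycle);
    }

    public static void main(String[] args) {
        // 120h to build the scripts; a pass costs 16h manually vs. 2h automated.
        System.out.println(breakEvenCycles(120, 16, 2)); // prints 9
    }
}
```

With these assumed numbers, a project expecting fewer than nine regression cycles would be better served by manual passes, which matches the article's point that automation shines mainly in the regression stage.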
What are the advantages of this testing method?
Automation testing is very useful in the regression testing stage by saving a great deal of test execution time, and plays a leading role in load testing or performance testing for the entire project. Certainly, as previously stated, the possibility of integrating automation testing into a Continuous Integration system (e.g. Bamboo, Jenkins/Hudson etc.) should be considered.

The main testing methods for manual execution are Functional and Bug-fix testing, and the objectives are:
a. User interface (fonts, colors, navigation from one screen to the next);
b. Basic application functionalities;
c. Usability testing – buttons such as "Home" or "Back" in each screen;
d. Download, installation or uninstallation of the application;
e. Interruption testing – application behavior when it runs in the
foreground and the user receives a text message or phone call;
f. Monkey testing – chaotically inserting characters or images in order to test the application's resistance to stress.

Aside from manual and automation testing, increasingly more companies resort to "crowdsourcing" services, which serve to establish if the product is correctly built by transferring it to a group of future users (alpha testing).

Testing Strategy
Having detailed the previous three testing components (area, environment and type), a four-step mobile application testing strategy can be conceived:
1. Plan
a. Understanding the documentation and requirements;
b. Establishing the devices and combinations with operating systems;
c. Describing the emulators or the physical devices to be used in testing;
d. Describing the testing type to be used (manual or automation).
2. Design
a. Finding adequate tools (for Automation, Test Case Management, Continuous Integration);
b. Procuring the devices;
c. Establishing test scenarios.
3. Development
a. Creating test data;
b. Creating test scenarios.
4. Execution
a. Configuring the test environment;
b. Actual testing;
c. Bug reports;
d. Execution matrix.

Defining the testing purpose
In the Agile-Scrum methodology, the QA process unfolds throughout the entire project, in each sprint. At the end of each sprint, all applications must have compilable code which can be tested. Given that the time generally allocated for QA is 2-4 days, the test plan needs to be very clear and has to define the degree of quality of the code when the Demo is ready.

During the first sprints (where new functionalities are generally implemented), the tester will only be able to validate the happy flow. As the code becomes more stable, automation testing, performance testing and negative testing scenarios will be possible. Depending on the complexity of the product, the manager needs to allocate 1-2 sprints for testing and bug-fixing before the product is installed on the production environment and becomes public.

Continuous alignment to scope change
This rule is rather uncomfortable for most testers. Since in "Agile" the priorities and functionalities of the application may change, the test plan has to be constantly updated to reflect the changes. For example, the decision is made to allow users access to the backend part of the application, in order for them to add or delete other user accounts. This, however, has not been mentioned in the initial documentation of the application. In this case, new methods of testing will be integrated into the test plan:
• Security testing (depending on the user's role, access to the database is granted or denied);
• Load testing (checking behavior in the event that multiple users access the database at the same time).

This is a mere example, but openness towards a constant shift of purpose and methodology is permanently welcome.

Risk assessment and mitigation strategy
It is highly important that the test plan foresee and include risks that may arise during the development and testing stage. For example, in order to test a more complex functionality, additional testing time is required. The approach and risk prevention strategy could consist in allocating more resources for a short period of time, to allow testing as many risk scenarios as may be executed by a user, or, alternatively, could mean reducing the number of tests and prioritizing them depending on how they affect the application.

Continuous feedback
Perhaps most importantly, in order to obtain feedback, a permanent open dialogue needs to be carried between the QA team and the customer. When striving for quality, it is essential that new
functionalities or major changes brought to the application be discussed with each individual involved in the project. These modifications have a great impact upon the test plan and the quality of the existing product. For this reason, the QA team has to be involved when these decisions are made by the customer or the development team. To support this rule, one could also bring into discussion the idea of testing in the early stages of the project and a high degree of involvement on behalf of the testers and the product manager in the quality assurance process. The quality of the product needs to be ensured collaboratively, and the development team must, in this sense, be aware of the quality it produces.

Conclusions
Testing mobile applications has in itself become a challenge, since the mobile environment is the most dynamic of its time. Testing mobile applications will become increasingly difficult with the passing of time, as products become more complex and customers grow more pretentious. Opting for the correct strategy regarding testing types and environments lightens the workload, cuts costs and may help identify possible problems in advance. Perhaps the most important ingredient is not being reluctant towards change and the ability to adapt to an ever-changing purpose.
programming
Android Design for all platforms
I would like to emphasize from the very beginning that this article does not bring anything new to Android programming; it is rather a synthesis of the information available about the Android system.
Claudia Dumitraș
claudia.dumitras@skobbler.ro Android Developer @ Skobbler
One of the problems that Android application developers face is launching an application that works correctly on all supported platforms. What works perfectly on some phones may not work at all on others, forcing some to give up the attempt of covering a category as large as that of Android phones, including tablets.
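For orientation, the range of platforms an application claims to support is pinned down in its manifest; a minimal sketch follows, with illustrative values of our choosing (not from the article):

```xml
<!-- AndroidManifest.xml fragment; illustrative values -->
<uses-sdk android:minSdkVersion="7"
          android:targetSdkVersion="15" />
<supports-screens android:smallScreens="true"
                  android:normalScreens="true"
                  android:largeScreens="true"
                  android:xlargeScreens="true" />
```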
The main constraints to be taken into account when writing an application are:
• the supported SDK version;
• runtime configuration options such as language, phone orientation (portrait or landscape), operators, etc.;
• different hardware versions, each with its own limits;
• phone or tablet size.

Graphic constraints
A special place in the design and development of an application is occupied by the graphic architecture, in order to cover as many types of phones as possible. A GUI should be dynamically built so it can change according to the expectations and needs of users; the visual language must be taken beyond static screens, for a flexible usage of the visual elements. Creating applications with an advanced UI – one that reacts quickly to user actions – is not enough. It should also be intuitive, so that all elements are clear and fully visible. Fortunately, the Android framework is continually growing in this direction, namely to support the development of applications for several types of phones. All the graphical models that each version of the SDK brings should not be considered restrictions, but rather ways of development in this direction.

From the outset, the UI framework was designed to be adjusted according to the available screen space. An example at hand might be the ListView component, which is able to change its height according to the screen size, which varies between QVGA, HVGA and WVGA.

Graphic design for mobile phones
The graphic is constructed using .xml files called layouts. A new concept of screen density was introduced in Android 1.6, making scaling on multiple
resolutions much easier, even when phones were about the same physical dimension (e.g. the option was very useful for Droid-style phones with high resolutions that appeared on the market). The screen density means the number of pixels on a given screen surface or, in other words, dots per inch. Thus, the graphics (layouts) can be classified into "small" for QVGA, "normal" for HVGA and "large" for WVGA, making it possible to use different resources in compliance with the screen's dimension.
Depending on the configuration qualifiers that are present in the application, the system is able to choose the appropriate resources according to the specific characteristics of the screen. A configuration qualifier is a string added to resource file names. The example below shows a list of resource file names for different layouts, as well as image files for "small", "normal" and "high" screen densities:

// Layout for screen size "normal"
res/layout/my_layout.xml
// Layout for screen size "small"
res/layout-small/my_layout.xml
// Layout for screen size "large"
res/layout-large/my_layout.xml
// Image file for low density ("ldpi")
res/drawable-ldpi/my_icon.png
// Image file for medium density ("mdpi")
res/drawable-mdpi/my_icon.png
// Image file for high density ("hdpi")
res/drawable-hdpi/my_icon.png

Even if your application has multiple resource directories, it is good to use certain standards in all existing layouts, namely it is recommended to use dps as the unit of measurement (density-independent pixels). A dp is a virtual unit, describing the sizes and positioning of graphical elements in a manner independent of density. A dp is equal to a physical pixel on a screen of 160 dpi (dots per inch), which is the density reference for a "medium" density screen. At run time, the platform handles any scaling of the dp units needed, based on the actual density of the screen in use.

Figure 1. Strategies for handling the layout design for tablets in portrait and landscape. Source: http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en//events/io/2011/static/presofiles/designing_and_implementing_android_uis_for_phones_and_tablets.pdf

Graphic design for tablets
For the first generation of tablets running Android 3.0, the proper way to declare tablet layouts was to put them in a directory with the xlarge configuration qualifier (for example, res/layout-xlarge/). In order to accommodate other types of tablets and screen sizes (in particular, 7" tablets), Android 3.2 introduces a new way to specify resources for more discrete screen sizes. The new technique is based on the amount of space your layout needs (such as 600dp of width), rather than trying to make your layout fit the generalized size groups (such as large or xlarge). Separating 7" tablets from 5" devices was required, because these two fell into the same "large" group as phones; while the two are seemingly close to each other in size, the amount of screen space they offer is significantly different, as is the style of user interaction.

To make it possible for you to provide different layouts for these two kinds of screens, Android now allows you to specify your layout resources based on the width and/or height that are actually available for your application's layout, specified in dp units. For example, after you've designed the layout you want to use for tablet-style devices, you might determine that the layout
stops working well when the screen is less than 600dp wide. This threshold thus becomes the minimum size that you require for your tablet layout. As such, you can now specify that these layout resources should be used only when there is at least 600dp of width available for your application's UI:

// For 7" tablets (600dp wide and bigger)
res/layout-sw600dp/main_activity.xml
// For 10" tablets (720dp wide and bigger)
res/layout-sw720dp/main_activity.xml

For tablets, it is usually desired to use all the available space, expanding the graphics across the whole length of the screen, which does not always have a pleasing result. A solution at hand is to divide the layout into smaller panels, a technique called multi-pane. This separation must be made in an ordered and clear way, so that the content becomes more detailed, but the division must be kept regardless of the tablet orientation. In Figure 1 there are some strategies proposed by the Android team in order to resolve any problems that might appear in portrait or landscape.

Fragments
The implementation of multiple panels is most easily achieved using fragments (the Fragments API). A fragment can be a special function or a part of a graphical interface. You can combine multiple fragments in a single activity to build a multi-pane UI. This is an advantage, because a fragment can be reused in multiple activities. Each sub-layout can be divided into fragments. A fragment can be regarded as a mini-Activity (Activity being the Android class that supports the graphics) which cannot function independently, but must be included in an Activity class. A fragment must always be embedded in an activity, and the fragment's lifecycle is directly affected by the host activity's lifecycle. For example, when the activity is destroyed, so are all of its fragments. However, fragments can be added or removed while the activity is running.

Figure 2. Example of defining graphic modules containing fragments, which can be combined into a single activity for tablets, but kept separate for phones. Source: http://developer.android.com/guide/components/fragments.html

The minimum SDK version
The minimum SDK version is an attribute which is declared in the Android application manifest file (android:targetSdkVersion). It is usually chosen according to the highest number of mobile phones existing on the market. This number represents the upper API limit and is very important, because it could influence certain functions. The lower the version number is, the more troubles can arise from API limitations. Also, many of the native widgets could have an outdated look.

Conclusion
It is important that the application design is compatible with all available platforms, thus increasing the number of users. This, however, is not enough. Each screen size provides different opportunities and challenges for user interaction, so to be really impressive, we need to take a step forward, namely to optimize the experience for each type of configuration. Even if you cannot buy all the latest devices on the market to test your application, Android provides multiple methods to test the final result on multiple platforms. But before this, each application must comply with all the standards required by the system.
References
http://developer.android.com/guide/practices/screens_support.html
http://android-developers.blogspot.ro/2011/02/android-30-fragments-api.html
http://developer.android.com/guide/components/fragments.html
http://www.google.com/events/io/2011/sessions/designing-and-implementing-android-uis-for-phones-and-tablets.html
http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en//events/io/2011/static/presofiles/designing_and_implementing_android_uis_for_phones_and_tablets.pdf
http://www.youtube.com/watch?v=2jCVmfCse1E&feature=relmfu
http://developer.android.com/guide/topics/manifest/uses-sdk-element.html
http://developer.android.com/training/multiscreen/index.html
nr. 5/2012 | www.todaysoftmag.com
management
TODAY SOFTWARE MAGAZINE
How to Grow An Agile Mentality in Software Development?
There is no better way of describing the essence of the Agile Mentality than starting from the principles of the Manifesto for Agile Software Development:
„We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Andrei Chirilă Andrei.Chirila@isdc.eu Team Leader Technical Architect @ ISDC
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.” agilemanifesto.org
Building a team
It is easy to understand from the principles of the Agile Manifesto that the agile movement models people as unique individuals rather than replaceable resources, and that the greatest value it brings is given not by individual performance (although this plays an important role) but by the power of interaction and collaboration between individuals. Agile best practices recommend building cohesive, self-organizing teams with clearly defined roles, with the end goal of delivering added value. This means, of course, that the team members are empowered to act as they think best, according to their own principles and without anybody from the outside pushing a particular way of working onto them. It also means that they are equally responsible for their results and performance.

Building an agile team or, in other words, the way people are assigned to a certain team (based on their competences, abilities, compatibility, etc.) is crucial for having a successful team. On a larger scale, building an agile team is very important for developing an agile environment and, hence, an agile company, which ultimately should result in creating an agile mentality. More details will follow later on. Regarding the team, its configuration is as important as the responsibilities of its members and their way of collaborating to achieve successful relationships. The compatibility of team members, the way they act and interact, is crucial for growing a successful team. Influencing and mimicking performance is obvious when the team has not reached a certain maturity level and when the most experienced individual is perceived as a model by the rest of his or her colleagues. The power of example is greater than we can imagine. Generally speaking, attitude can be easily copied. People have the natural tendency to behave like the people they empathize with, to copy or to reject the attitude of their superiors; because of that, the entourage, the team and the working context are very important, as they can set the ground for growing a team and a collective vision. An anonymous saying states: "To be inspired is great, but to inspire is incredible." A true leader will be recognized and appointed by the team and will know how to motivate the others to go beyond their limits. The role of this leader (or Scrum Master in the context of a Scrum set-up) is so important because (s)he not only needs to make people perform but also to protect them when they are vulnerable.
Empowerment and responsibility
Most often, the attitude of people within a company towards empowering teams and holding them responsible makes the difference between a mature company and a less experienced one. A team is more productive when its members feel responsible for their actions and empowered to decide what is best for their product and the context they work in.
In a classic set-up, the power of a manager and the insight and single-point control of projects are just an illusion; this is mainly because people feel tempted to see this as a way of deferring responsibility to higher levels. In this context, the manager is usually informed, while the team member is indirectly freed of any "charge" of being responsible, of acting. In an agile environment, the perspective is different, because the empowerment of individuals is meant not only to motivate but also to improve the management of decisions. The even spread of information and responsibility across a network (i.e. the team, in our case) is more efficient than isolating them in a single node of the network (i.e. an individual, in our case). A team that values code writing over adding business value, and that gets involved in decision making without taking responsibility, cannot be considered a mature team. Generally speaking, people should be encouraged and empowered to make decisions with the information they have access to; they should be motivated to build their own success stories without waiting for other people (i.e. their bosses) to lead them towards success.

Responsibility and empowerment don't naturally occur from one day to the next, but they can be encouraged. If initiating this process is the tough part, once started, things will evolve on their own until a certain maturity level is reached. Mature teams don't normally need too much maintenance. They own enough information and have a great deal of experience to fix most of the problems they have to deal with. Moreover, they should acknowledge the limits of their circle of influence and also know when to ask for help. That's why a true team leader has to encourage the team to strive for more by making its members get out of their comfort zone, while equally protecting them from outside forces and from the risk of overestimating their own strength. In short, a true team leader will know how to grow a team, to coach it and to protect it.
People over Processes
Although the Agile Manifesto talks about the "people over processes" paradigm, it is important to notice that this does not mean that processes do not add value to software development. On the contrary, they should complement the agile model. Even though some process improvement frameworks (such as CMMI) identify the areas to be improved, they do not come with an actual approach. This is where the agile methodology can help, because the need to change and perfect the way of working is a core pillar of the agile philosophy, if we think of the well-known principle of "inspecting and adapting".
The "inspect and adapt" principle promotes changing things that are not optimal for the team through the development of microprocesses, processes that are mostly relevant in the context of a team. In the long term, these microprocesses can turn into macroprocesses because, by being shared, they can help other teams and the organization learn from history. The more frequent the sharing, the more beneficial it becomes to the other teams and, in the end, to the organization, because it promotes learning from past mistakes, establishing best practices and building a collective identity.

How vs What vs Why

The members of an agile team will face challenges they have to cope with; they will learn and, in the end, they will be forced to evolve. Driven by their past experiences, they will develop new processes, rules and procedures that will help them be more efficient in the way they work, in the quality they deliver and in the way they learn from each other. By doing so, people will actually focus time, effort and attention on perfecting the "How". While some teams will do this fine, other teams will be "rockstars" at it. But here comes the interesting part. The more mature an agile team is, the more it will move its focus from defining the functionality it has to deliver (the "What") and, subsequently, from the way this is done efficiently (the "How") towards the reason behind building the functionality, the business value (the "Why"). To sum up, a mature team will want to hear the business needs more often; it will challenge invalid requests, give support and ask for support, and focus on adding value, not just on developing applications. But why should an overworked, firefighting team be interested in knowing the "Why" and its roots? The answer is relatively simple: the more you challenge the reason, the more you challenge yourself and your client, and the more you develop a stronger relationship with that client. Related to this topic, there is an interesting technique called the "5 Whys", which came into the spotlight in the IT field several years ago as a way of getting to the root cause of problems, but ended up being used for challenging needs as well. Through this change of perspective, the team is not considered to merely do software development per se; it is also seen as having a more strategic position, because it is proactively involved in the decision-making process. A similar model can be found in Simon Sinek's Golden Circle, a simple yet powerful model of inspirational leadership that stresses the role of the "Why" and the way answering this question can lead to a conventional or an exceptional resolution of problems.

An agile mentality means much more than being agile

As simple as it might sound at first, having an agile mentality means much more than being or behaving in an agile way. Being agile translates to having an agile behavior, which is determined by the characters of individuals, by their experiences in agile projects, by the way they act and interact within a team. An agile mentality is not born within individuals. On the contrary, it is determined by sharing experiences, opinions, best practices and behaviors across teams, and it equally contributes to the construction of a knowledge inventory. In other terms, an agile mentality cannot exist prior to already existing agile teams. An agile mentality has impact at higher levels. In the beginning, the set-up is given by an agile environment promoted by agile teams, which gradually leads to growing an agile mentality. An agile mentality then stimulates the growth of an agile company, which in turn leads to more agile teams, and so on and so forth. The agile mentality is the one that has visible impact at all levels of a company and the one that sustains scalability and growth in the long run. The more people and teams share a common agile vision and, hence, an agile mentality, the more the organization becomes capable of facing change and, ultimately, evolving. Nowadays, the IT world has turned into a complex, chaotic system. Companies are stuck, incapable of innovating and bringing new extra value without evolving and adapting to the needs of their market. Having an agile mentality means being open to change and innovation, and that is why the vision people have and share about innovation plays such an important role. An anonymous saying states that "innovation is the ability to see change as an opportunity and not as a threat", because through innovation we can make ourselves better and, still through innovation, we are empowered to change the world around us.
Play „Hard Choices” in every sprint and pay your debts
Welcome to the debt era! America owes about 16 trillion dollars, and if you haven't had the chance to go to New York's Times Square, you can see this debt directly online on sites like http://www.usdebtclock.org/. It is not a surprise to anyone that Europe is in a debt crisis, and the countries that excel in this chapter are Greece, Spain, Italy, Ireland and Portugal, with almost 120 billion euros borrowed. Romania, too, is affected by this crisis, or participates in it with various political or economic actions. Thus, debt is everywhere! Do we need to pay the debt? When?

Adrian Lupei alupei@bitdefender.com Project Manager and Software Engineering Manager Bitdefender
But what is the meaning of countries' debts in the story of software debt? Surely we can make an analogy: here, too, we are talking about large and small organizations, about their ability to be competitive or not, about the power to grow faster or slower, about the speed of reacting to customer needs.

Agile development methodologies tell us that you can quickly adapt to market needs if you are agile. But what if you know you're agile, yet in order to respond quickly to customer demands you need to sweep some dirt under the carpet? How many times can you do that? If you are a project manager, how often can you say that the technical team has not delivered on time? If you are a developer, how many times can you re-estimate and delay a task? Can you demand every time to re-implement everything from scratch? If you are a testing engineer, how many times can you delay the release because you received the binaries yesterday and didn't have time to test them? If you are a release manager, how many times can you say you should have released yesterday but did not have the OK from the QA team? Where do all these problems come from?

I think you got the idea: software debt sums up all the decisions by which you postponed or even ignored valuable activities in software development. There are certain expressions that point out the fact that the team is accumulating debt:
• We don't have time right now, but we'll do it later.
• We don't need this now; we will do it when it becomes necessary.
• Where I worked before, we did not do this and things went well.
• We don't know how to do this, or we have not done this kind of thing before.
• We don't have time to do this now, but if something goes wrong we will find a solution.

And if debts are extremely high, sometimes we hear expressions like "that worked fine on my computer", "we could not reproduce it" or "it should have been finished yesterday".

It is clear that in the short term the team has no problem. But what happens in the long term? You may encounter problems during a demo and then need one or more sprints in order to pay your technical debt. You may have big problems at customers and be unable to produce a quick fix without damaging other components. Wouldn't it be possible for the product or the service you are working on to lose its customers or, even worse, for the business to go bankrupt?

I am absolutely convinced that some readers of this article are expecting methodologies, recipes, tools or agile software development practices, but because everything depends on you and on the changes you make every day in your organization, we will talk a bit about a board game invented by the Software Engineering Institute. The Hard Choices game is a simulation of the software development cycle, meant to communicate the concepts of uncertainty, risk, options and technical debt. In the quest to become market leader, players race to release a quality product to the marketplace. By the end of a game, all players have experienced the implications of the effort invested in "doing a good job" in order to gain competitive advantage, and also the price paid for taking a shortcut. Players can choose or even combine the two strategies to cope with uncertainty, which is similar to a team choosing one methodology over another for software development. The game has a few simple rules, so it can be played quite easily, just as Scrum and Kanban have few rules. Likewise, the playing rounds can be compared to the sprints or iterations that agile teams are already familiar with. The rules point out that what matters is not only who finishes first but also the number of points collected during the game, because each collected tool represents a valuable point that helps determine the final ranking. The most interesting aspect of Hard Choices is that when you decide to take a shortcut, you will be penalized until you pay your debt, similar to the sprints spent paying "technical debt". How much delay will the lack of unit testing cause? What is the penalty for not having automated tests to validate the build? What is the cost of delaying the delivery of software or of bug fixing? I wonder what the cost is when the customer does not have the same experience after a software upgrade.

When players have a tool card and they land on a tool square, they may play it immediately by throwing the dice, or collect another tool card for points. This rule is similar to the decisions made inside a sprint, for example in a retrospective: decisions taken to improve a particular component or a particular process, whose improvements can be seen immediately, in the next sprint, or at the end, when the product is released to the customers and the game ends.

To better explain software debt, a few engineers have tested the game using the following three strategies:
1. The player always chooses the shortcut and is penalized throughout the whole game, losing one point every time he throws the dice.
2. The player chooses the shortcut but pays immediately, by sitting out a round without playing.
3. The player never chooses the shortcut.

Strategy 2 turns out to be the winning one, while strategies 1 and 3 are about equally effective under the simple rules of this game. But what if we had to sit out two rounds, or sprints, in order to pay the debt? What if we were penalized two points for each roll of the dice? Clearly, strategies 1 and 3 would then be differentiated in terms of efficiency.

However, the purpose of the game is not to reach the end as quickly as possible but to gain points; as they say in agile, "to deliver value". It matters who reaches the end first, because they win more points, but it also matters how many points you accumulate along the way from collecting tools.

In conclusion, debt accumulation is closely related to the postponement of decisions and to ignoring certain practices that give long-term results. Perhaps it is easier to say that you do not get involved in the team because you want to learn another technology, or that you want to change the project, but isn't it likely that you will find even higher debt elsewhere? And what if other teams or companies do not include you in the team because you are not open to pair programming or Test-Driven Development?
UIX
User Experience Design and How to apply it in Product Development
User Experience Design (UX) refers to a concept that places the end user at the focal point of design and development efforts, as opposed to the system, its applications or its aesthetic value alone. The requirement for an exemplary user experience is to meet the exact needs of the customer. One of the primary goals of any good designer is communicating the intended message in such a way that it leads to a positive user experience.

Sveatoslav Vizitiu info@sveatoslav.com User Experience and User Interface Senior Designer

Design is less and less about solving problems, and testing less and less about eliminating frustration. It is all becoming more and more about creating a good experience for users. Now it is not good enough to just be usable: the design has to fit into people's lives. It actually has to make people happy and anticipate their needs. [2]

The user experience includes all aspects of the end user's interaction with the company, with its services and with its products. A good user experience meets the exact needs of the customer. To achieve high quality, the user experience must be a seamless merging of the services of multiple disciplines, including interface design, industrial design, engineering and marketing. Every designer affects the user experience; every decision shapes the future of the people we design for. The colour of the text, the alignment of the page, the images, the design pattern: all are part of this communication, hints which often remain unnoticed or unexpressed but which ultimately make up the user's experience. This may include interactions with your software, your website, your call centre, an advertisement, a sticker on someone else's computer, a mobile application, your Twitter account, you over email, maybe even face to face. The sum total of these interactions over time is the user experience. [3]
PRINCIPLES OF USER EXPERIENCE
For most people, change evokes fear and stress; familiarity with a product, on the other hand, is comforting. It allows us to live and operate with a certain level of ease that doesn't require active thought at all times. Building successful products rarely happens by doing what everyone else is doing. Successful products happen by fundamentally changing people's perception of what will fulfil their need and by providing a painless transition to the "new" product.
Good Experiences are Simple
"Simplicity is the ultimate sophistication." - Leonardo da Vinci. A product is simple when it is so complete that there is truly nothing else to add and nothing to take away. When things are perfectly understood and represented by their state and appearance, to behold them is to understand and know them. If people can understand or use something with little difficulty, then we've made something simple. But simple is not always easy to design; it only seems that way.
Lifecycle
As users interact with your product or service, they proceed through a series of steps called the usage lifecycle [4]. Like other lifecycles, the usage lifecycle has a beginning, a middle and an end. Its stages are:
• First Contact: people become aware of the product.
• First Time Use: the first actual usage of the product, when the user seriously considers the long-term commitment. It is the first real impression of the product.
• Ongoing Use: regular use of your product.
• Passionate Use: users get into a state in which they are highly immersed.
• Death: people stop using the product.

Every user is at some point in this lifecycle, and that point defines much of their context. We need to "know our user": we need to know what they are doing.
Conversation
UX is really just good marketing. It's about knowing who your market is, knowing what is important to them, knowing why it is important to them, and designing accordingly. It's also about listening after you've designed and adjusting to the changing marketplace: improving the experience of those in your market. It's easy to recognize this when you consider that users = market. That's what users are: your users are the market you're designing for.
Invisibility
When people are having a great experience, they rarely notice the work that has been put in place to make it happen. Great UX is so successful that nobody talks about the designers.

Context

There is no right answer to a design problem; there are only bad, good and better answers for the current situation. Each of the potential solutions sits within a particular context. To find the better answers to a design problem, designers need to know the context it sits within: what they are trying to achieve, what a successful outcome is and what they have to do to get there. For many application and web design problems, designers need to know about the goals, what people want to do, what they already know and what's in the content.

Social

We think too much about what we are trying to achieve, about what we have designed or built, and thus in terms of what it does or should do. This leads us to think in terms of controlling outcomes, or tweaking features for new behaviours. Social is happening out there, and your users do not have you or your product in mind, but their own experiences. Change your frame: we don't deal with people anymore, we deal with social lives.

USER EXPERIENCE STRATEGY

Design strategy is about serving people. The real challenge is trying to solve the human problem: understanding people's needs and aspirations, and then meeting them in some way. But sometimes their need is to be surprised and delighted, and they can't tell us how to surprise and delight them. That has to come from us, as creative people in our profession.

A User Experience Designer who can accurately answer the following five questions can make a product that instantly resonates with the customer. In order to capture that vision and strategy, a User Experience professional should be included at the outset of the project, as they are uniquely positioned to help answer each of these questions:

What problem are we trying to solve?
Before we can begin building products, we must identify what problem the product is attempting to solve. Articulate the problem as clearly and concisely as possible. Once we are able to do this, we have a lens through which we can answer the remaining questions, further clarifying our purpose and informing our strategy going forward.

Who is the customer?
This is one of the most important questions to answer for any new product. Without a clear understanding of our target audience, we run the risk of building something that doesn't meet or fit our customers' expectations, use cases or mental model of what they came to us for. User research, ethnography and personas are components of the UX toolset that can help answer this question.

Where can we improve on existing patterns and solutions?
Understanding how others have attempted to solve the same or a similar problem is extremely valuable for understanding what pitfalls to avoid, identifying where an existing pattern failed or needs improvement, and revealing the moments in the customer's journey where we can surprise and delight them when others have failed. We will find ourselves asking this question over and over throughout the entire design process.

When should we begin to get user feedback?
Early and often is usually the answer to this question, but that may not always be true. If we introduce it too early, we run the risk of letting the user drive the product development. If we introduce it too late, we may miss out on valuable feedback that could have saved us from overbuilding. Understanding who our customers are and knowing where we can improve on existing solutions will help us know when to begin incorporating user testing and feedback into the product development cycle.

Why does our product solve the problem?
Having identified what problem we are trying to solve, who our target users are, and where we can improve on existing solutions, we should be able to articulate why our solution solves the problem. This should be the foundation for the short-term goals with a long-term strategy.

How can data help you understand what you are building?
There are countless analytics that can be used to validate assumptions, confirm design decisions and clarify our product/market fit. Techniques such as data mining, eye-tracking, A/B testing, user flows, ethnographic research and usability benchmarking can be used to gain better insights. No detail is too small to test: copy, layout, interaction, in all of these cases good data can help us understand the results we are seeing and adjust accordingly.

Companies that start by answering these questions and engage in a process of UX-driven design have a much greater chance of creating viable, longer-lasting products than those that start building without these answers. In the ever-expanding start-up world, those who value and apply UX from the start will be at the head of the pack. An objective tool for measurement and analysis helps us provide our clients with
fact-based recommendations, as opposed to simple conjecture and opinion. The methodology we'll explore in this article will help you to:
• Remove subjectivity from the equation as much as possible.
• Enable persons with different backgrounds (designers, developers, clients) to share a common understanding of the product.
• Provide our clients with a fact-based, visual representation of their product's benefits and limitations.

The user experience is primarily made up of four factors [9]:
• branding
• usability
• functionality
• content

Independently, none of these factors makes for a positive user experience; however, taken together, they constitute the main ingredients of a product's success.

Figure 1. The User Experience is made up of four interdependent elements.

Branding

Branding includes all the aesthetic and design-related items within a product. It entails the product's creative projection of the desired organizational image and message. Statements used to measure branding can include:
• The visual impact of the product is consistent with the brand identity.
• Graphics, collaterals and multimedia add value to the experience.
• The product delivers on the perceived promise of the brand.
• The product leverages the capabilities of the medium to enhance or extend the brand.

Functionality

Functionality includes all the technical and "behind the scenes" processes and applications. It entails the product's delivery of interactive services to all end users, and it's important to note that this sometimes means both the public and administrators. Statements used to measure a product's functionality can include:
• Users receive timely responses to their queries or submissions.
• Task progress is clearly communicated.

Usability

Usability entails the general ease of use of all product components and features. Statements used to measure usability can include:
• Effectiveness: a user's ability to successfully use the website to find information and accomplish tasks.
• Efficiency: a user's ability to quickly accomplish tasks with ease and without frustration.
• Satisfaction: how much a user enjoys using the website.
• Error frequency and severity: how often users make errors while using the system, how serious these errors are, and how users recover from them.
• Memorability: if a user has used the system before, can he or she remember enough to use it effectively the next time, or does the user have to start over again, learning everything?

Content

Content refers to the actual content of the product (text, multimedia, images, etc.) as well as its structure, or information architecture. We look to see how the information and content are structured in terms of defined user needs and client business requirements. Statements used to measure content can include:
• Content is structured in a way that facilitates the achievement of user goals.
• Content is appropriate to customer needs and business goals.
• Content across multiple languages is comprehensive.

Measuring the effectiveness of design is new for many designers and, indeed, we are still very early on in being able to do it well. There are several reasons why design isn't measured, including not having agreement on success metrics, not knowing how to measure a positive user experience with your product or service, and not being able to put measurement methods in place. None of this is easy: it takes a culture dedicated to gathering feedback and improving by it, the ability to access customers and website analytics data, as well as the scheduling ability to iterate and get things done when metrics aren't going in the right direction.

Every project may need some of the following tools, in various orders, depending on the challenges you are confronted with. Each of them can, at a given time, bring the most clarity and boost productivity [13]:

Sketches are usually hand-drawn graphics that contain screen ideas or explanatory graphics outlining the high-level problem or solution. They are most valuable when the idea hasn't been fully formed, explored or realized in any way. Sketches will help you understand what general pieces are needed to accomplish your goals.

Wire-frames tend to be computer-generated graphics illustrating the organization of content, features and functionality. Prioritizing the elements of a design and determining the general page layout can be a very messy part of any project. A well-built wire-frame will help you pull apart the individual pieces and make sure they are appropriate to the goal of the page.

Mock-ups are rich graphics intended to simulate the look and feel of a project so you can understand the impact visual elements have on the brand. A mock-up can set the right impression and communicate emotions and personality.
Without actually building the website (which a lot of people do), there really isn't any other way to concretely define what a website should look like.

Prototypes are partially complete versions of a website used to understand how pages interact with each other and flow from one area to another. More complicated interactions between sophisticated components might require a fully functional prototype to actually understand. When there are a lot of moving parts and goals have multiple steps involved, HTML prototypes can really help you find the gaps in your plans.

For most of us, the majority of our work involves refining, updating and improving existing systems. However, we must never forget that our job is fundamentally about shaping and creating the future. As designers, the very heart of what we do is to visualize in our mind what does not presently exist and then set about creating it. We spend our time categorizing, organizing, labelling and identifying patterns and components. New design frameworks are emerging with the goal of enabling reusable, highly extensible designs and
providing a roadmap to innovation. As our applications and products become increasingly complex, we certainly don't want to spend our time re-inventing the wheel. But these systems, frameworks and "best practices" can also prevent us from breaking out of present patterns and from making the space and time to envision something entirely new.
Usability testing is one of the best things we can do to understand whether or not people can use the product as you intended, and from there make informed iterative improvements. We have often heard people saying that usability testing is difficult or inconclusive or even a waste of time. We believe that many of these notions stem from misunderstanding the process and requirements of a formal usability test. Yes, quantitative tests, qualitative tests, and developing comparative tests can be overwhelming. The truth is that some testing is better than no testing at all. You may still be thinking that this is not going to work for you. The "do-it-yourself" style of usability testing has definitely begun to resonate with designers, engineers and product managers.

CONCLUSIONS
It can help you improve your product, which in turn will make for happy, satisfied customers.
www.todaysoftmag.com | nr. 5/2012
programming
Build dynamic JavaScript UIs with MVVM and ASP.NET
Knockout is a JavaScript library that helps us create desktop-like web pages. These pages have a clear data model, and Knockout keeps the UI perfectly synchronized with it. Knockout is an open-source project created by Steve Sanderson, a developer at Microsoft; it is his personal project and does not belong to Microsoft. The source code can be downloaded from GitHub, and relevant documentation and announcements can be found at http://www.knockoutjs.com. The source code is native JavaScript; no third-party JavaScript libraries are used. It works well with ASP.NET and with other server-side technologies. The rendered HTML should contain a reference to the Knockout library.

Csaba Porkoláb
cporkolab@macadamian.com Software developer @ Macadamian
Why Knockout?
• It introduces elegant dependency tracking – it updates the UI every time the Data Model changes.
• It uses declarative bindings – through a specific syntax, Knockout allows us to connect the UI to the Data Model.
• It is extensible – a lot of plugins have been released, and Knockout already has a well-formed, active community around it.

The DOM can be easily manipulated by jQuery, but jQuery doesn't perfectly separate the functionality from the UI. The result of using the Knockout library is much more structured code. Knockout uses the MVVM pattern. Using Knockout we can eliminate the complicated intersecting event handlers; everything is declarative in Knockout.

About the MVVM pattern (Model View ViewModel)
The Model-View-ViewModel (MVVM) pattern helps us separate the business level from the user interface (UI), eliminating a lot of development and design problems and making testing and support more accessible. The parent of MVVM is the MVP (Model-View-Presenter) pattern. In that pattern the Model represents the application data, the View represents the user interface and the Presenter links the Model to the View; the Presenter has references to the Models and to the Views.

Properties of MVVM:
• The Model stores the data that needs to be passed to the UI. Each business class (domain object) belongs to the Model.
• Between Views and ViewModels there is a one-to-one relation; each View has its own instance of a ViewModel. The functionality is implemented in the ViewModel, so it is clearly separated from the UI.
• The separation of the implementation logic from the UI means that it can be more easily unit tested and maintained. Unit tests are very important in the case of a complex application. The MSDN site shows the following diagram for MVVM (Figure 1).

If we use ASP.NET / MVC on the server side, then the Business Logic and Data will be a controller class with functions that return JSON. Using the MVC routing, each function of the controller will have its own URL, which can be called from Razor (the View). The Presentation Logic will be a JavaScript class having the same structure as the C# class, and the UI logic will be defined in the Razor template.
The ViewModel can be generated from JSON by using the Knockout mapping plugin:
var viewModel = ko.mapping.fromJSON(jsonString);
Figure 1. - MVVM pattern
<div id="PatientForm">
  <p><label>Firstname: </label>
     <span data-bind="text: firstName"></span></p>
  <p><label>Lastname: </label>
     <span data-bind="text: lastName"></span></p>
  <p><label>Fullname: </label>
     <span data-bind="text: fullName"></span></p>
</div>
<script type="text/javascript">
  var viewModel = {
    firstName: ko.observable('John'),
    lastName: ko.observable('Harlid')
  };
  viewModel.fullName = ko.computed(function() {
    return this.firstName() + " " + this.lastName();
  }, viewModel);
  ko.applyBindings(viewModel,
    document.getElementById('PatientForm'));
</script>
In the example above we have a viewModel and an HTML template. The applyBindings function binds the viewModel to a DOM element. If we call ko.applyBindings(viewModel) with only the viewModel parameter, then the binding is applied to the HTML body element. The properties firstName and lastName are observables: if their value changes, the UI is updated. fullName is a computed property, updated whenever we change the value of firstName or lastName. An observable is a function which returns the latest value when it is called without any parameter. We can use arrays as observables by using ko.observableArray. Knockout has a lot of predefined functions for array manipulation, such as push, remove and removeAll.

The predefined bindings can be divided into three categories (we are going to enumerate only a few of them):
1. Bindings controlling text and appearance:
a. Visible binding: causes the associated DOM element to become hidden or visible
<div data-bind="visible: isVisible">
(isVisible is an observable and returns a boolean value)
b. Text binding: sets the innerText property of a DOM element
<span data-bind="text: myMessage">
(myMessage is an observable and returns a string)
c. Html binding: sets the innerHTML property of a DOM element
<span data-bind="html: myHtml">
(myHtml is an observable and returns a string)
d. Css binding: adds or removes one or more named CSS classes on the associated DOM element according to one or more conditions.
<span data-bind="css: { errorWarning: errorCount() > 0, hide: errorCount() == 0 }">
e. Style binding: adds or removes one or more style values on the associated DOM element according to one or more conditions
<span data-bind="style: { color: errorCount() > 0 ? 'red' : 'black' }">
f. Attr binding: provides a generic way to set the value of any attribute of the associated DOM element
<a data-bind="attr: { href: url, title: details }">
2. Control flow bindings:
a. Foreach binding: duplicates a section of markup for each entry in an array, and binds each copy of that markup to the corresponding array item
<ul data-bind="foreach: students">
  <li><span data-bind="text: $index"></span>
      <span data-bind="text: name"></span></li>
</ul>
The example above displays the student names; $index displays the current index of the iteration, and $parent is the parent object.
b. If binding: allows us to include or skip a DOM element according to a boolean observable. If displayMessage returns false, the "div" element is skipped. Instead of a boolean observable we can use complex logical expressions:
<div data-bind="if: displayMessage"> Here is a message. </div>
3. Event bindings:
a. Click binding: executes a function defined in the ViewModel when the click event occurs
<button data-bind="click: doSomething">
b. Event binding: attaches a JavaScript event handler to a DOM element.
<div data-bind="event: { mouseover: enableDetails, mouseout: disableDetails }">
Virtual bindings
"Control flow" bindings can also be applied to virtual DOM elements:
<ul>
  <li class="heading">My heading</li>
  <!-- ko foreach: items -->
  <li data-bind="text: $data"></li>
  <!-- /ko -->
</ul>
Figure 2. - The KO classes
Custom bindings
We are not limited to the bindings defined in the Knockout library; we can define new ones. We have to register the binding as a sub-property of ko.bindingHandlers:
ko.bindingHandlers.legaturaNoua = {
    init: function(element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
    },
    update: function(element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
    }
};
The init function will be called when the binding is first applied to an element. The update function will be called once when the binding is first applied to an element, and again whenever the associated observable changes its value. Both functions have the same input parameters:
• element – the DOM element involved in this binding.
• valueAccessor – a JavaScript function that you can call to get the current model property involved in this binding. Call it without passing any parameters to get the current model property value.
• allBindingsAccessor – a JavaScript function that you can call to get all the model properties bound to this DOM element. Like valueAccessor, call it without any parameters.
• viewModel – the view model object that was passed to ko.applyBindings. Inside a nested binding context, this parameter will be set to the current data item.
• bindingContext – an object that holds the binding context available to this element's bindings. This object includes special properties such as $parent, $parents and $root that can be used to access data bound against ancestors of this context.
Conclusions
Knockout is a compact library. It is suggested for complex, rich web pages with a lot of Ajax calls. The library acts slowly when the page is overcrowded with bindings: the generation process for 30 sections bound to observableArrays can take one second, and the transition is visible.
HR
Social networks
For sure everybody has heard about „Social Networks”: lots of articles have been written on this topic, about the history of social networks, their advantages and disadvantages, or the confidentiality of the information. But the purpose of this article is to identify the impact that this kind of site can have on the recruitment process. There are a lot of social networks, but in the following pages I will present only two of them: Facebook and LinkedIn.
Andreea Pârvu
andreea.parvu@endava.com Recruiter for Endava and trainer for development of skills and leadership competencies, communication and teamwork
There is a saying: „If you're not on Facebook, you're not alive”. The first time I heard it I started to laugh, because honestly it is very funny. Facebook has reached more than 1 billion users, and only now can I say that the previous affirmation is very valuable. Everything happens on Facebook, from recruitment campaigns to actual candidate selection. Facebook has its pluses and minuses like every web site, but first of all, even though it is considered a personal space, let's be honest: every post on the internet becomes visible and public for the others. Lots of recruiters are using Facebook as a very good source for identifying candidates. It has happened to me to find more accurate information on Facebook compared with the classical recruitment web sites. On Facebook everybody has an updated profile: from the companies where they are hired to all the extra activities they do. The purpose of this article is not to be a user guide for Facebook, because I am sure that everybody knows how to use it; rather, very few of us take into consideration the impact that what we publish on our Facebook page has in recruiting and selecting the best candidate. This is why I am going to describe every big section of the web site through the eyes of a recruiter: „This is what I would like to discover on every Facebook profile”.

About you Section – I consider it extremely relevant to find information about the candidate with whom I am going to interact face to face during an interview. The more detailed it is, the bigger the impact it can have in a recruiter's eye. What kind of information do recruiters look for? I would answer with an example, because it is much easier. E.g.: a person who has as a quote
„Nothing is impossible” could be results oriented and a problem solver.

Work and Education Section – I recommend leaving it visible for everybody, because the biggest plus is brought by continuously updating and improving it and, if possible, attaching a link to the web sites of the companies where you currently work or have worked.

Contact Info Section – in this section it is totally up to you whether you want to make all your personal contact information public. For me as a recruiter, the ideal situation would be to find at least a phone number in order to be able to contact you, but as I previously said, it depends on you how confidential you want to keep it.
Events Section – if you are actively involved in different events and you are a good networker, use this section to increase your contact list with persons who share the same interests or hobbies, because the more people you know, the more visibility you have on the market.

In the end I will talk about a very disputed subject: „Pictures”. I have heard so many times the question „What kind of pictures should I post on Facebook?”. I have to confess that every time I have answered in a very personal way, because I consider that you can publish as many pictures as you want, as long as they do not put you in embarrassing situations.

I started with Facebook because it is the most used social network, but it still has to improve a lot on the professional side. Facebook will surely become the most important source of information about us, because in the near future companies will pay large amounts of money just to have access to very detailed information; this is why I recommend you to think twice about what kind of information you make official on Facebook.

Another social site that I will talk about is „LinkedIn”, a much more professional site which, used at its full potential, could bring you lots of benefits. The advantages of LinkedIn are bigger than those of Facebook, because this site has a professional approach: its sections offer you the possibility to create an online CV. Statistically, LinkedIn has reached almost 100 million users, and a big part of them are using it to find a job or to interact with business professionals.

Profile Section – offers the possibility to create an online CV:
• In the Summary section you can provide information that is relevant to describe your career evolution, your strong points or knowledge that could be an added value for a company, or the main achievements in your jobs – in a few words, what makes you unique compared with the other candidates.
• Specialties – say as many things as possible about what you're good at: „praise yourself, as much as possible”.
• Experience – describe clearly and concisely the responsibilities that you had at all your jobs, including volunteering experiences. The biggest plus that LinkedIn offers is the possibility to ask for recommendations. An excellent profile that has recommendations cannot fail to draw the recruiters' attention. My advice is to ask for as many and as varied recommendations as possible, in order to have valid feedback from the persons with whom you have interacted.
• The Education subsection is about formal education. Mention the last degrees that you have graduated: master's or bachelor's degree.
• The last part of this section refers to your interests and other personal information.

Groups Section – join all the groups of interest for you. The purpose of this section is better defined than on Facebook, because you can have access to the opinions and ideas of professionals in the domain, based on the discussions that you can start as a member of the group.

Jobs Section – with maximum utility for those interested in a new job, this is one of the most important resources that LinkedIn provides to its users.

Rereading the article, I realized that I did not make a very relevant observation: when you want to add a new person to your contact list, personalize the invitation and explain the reason why you are sending it.

The increased impact that these two social networks have started to have is demonstrated by the new users that are joining Facebook or LinkedIn every day. Compared with 2011, LinkedIn registered a growth of 4 million users and Facebook of more than 300,000,000 new users, because more and more people understand the benefits of these two web sites in providing new career or business opportunities. For those who don't have a LinkedIn account yet, I recommend not to procrastinate and to make one fast; for the others who already have one, update it and become more and more visible. Good luck!
programming
Core Data – Under the hood
Core Data is an object graph and persistence framework provided by Apple. It is a relatively small but very robust framework: it provides solutions for many general problems and fits perfectly with Cocoa and the other APIs provided by Apple. (In the MVC pattern, Core Data takes the place of the Model.)
Zoltán Pap-Dávid
zpap@macadamian.com Software Engineer @ Macadamian
The Core Data API, or stack, can be decomposed into the following three components: Persistent Store Coordinator (NSPersistentStoreCoordinator), Managed Object Model (NSManagedObjectModel) and Managed Object Context (NSManagedObjectContext). All three parts work together to allow storing and retrieving Managed Objects (NSManagedObject). Three types of repositories are included with the Core Data API: SQLite, XML and binary. All of them can be serialized to disk or kept in memory (the latter is only a temporary store). XML and binary are atomic stores: the entire data file is rewritten on every save. Although atomic stores have their advantages, they do not scale as well as the SQLite store. They are also fully loaded into memory and have a larger memory footprint than the SQLite store.
Managed Object Context (NSManagedObjectContext)
NSManagedObjectContext is the object which we access when we want to save to disk, when we want to read data into memory, and when we want to create new objects or delete them. It sits on top of the stack, and all managed objects can exist only within it. It is not thread safe; therefore every thread that wants to use it for data access needs a different context instance. It is important to keep in mind that, like the UI, an NSManagedObjectContext should be accessed only on the thread that created it.
Persistent Store Coordinator
Sits at the bottom of the Core Data stack and is responsible for persisting the data into its repository. The repository can be stored on disk or in memory. NSPersistentStoreCoordinator is not thread safe.
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. It is the most widely deployed SQL database engine in the world. By using a relational database as the persistent store, we don't need to load the entire data set into memory, and our database can scale to a very large size. SQLite itself has been tested with data sets measured in terabytes. The data on the disk is efficiently organized, and only the data we are using at the moment is loaded into memory. Because we have a database instead of a flat file, we have access to many performance-tuning options. We can also control how objects are loaded into memory. For most scenarios Core Data provides transparent object access for this store, but in order to write efficient, scalable applications based on Core Data we need to know how the framework works; otherwise we can face some unpleasant situations.
In the following sections we will pick some Core Data specific behaviors and properties and investigate their impact on application performance.
Faulting
Faulting is one of the most sensitive topics regarding Core Data. It happens transparently; therefore simple applications can be developed without knowing about it. To demonstrate the impact of faulting on Core Data application performance I'll use a simple example. (It is not a real-life Core Data example – using Core Data's power it could be implemented more efficiently; the idea behind it is to demonstrate the basic functionality.) Consider an empty model, and create a single entity (Person) with three attributes (age, gender, name) and no relation. Consider a store with 5000 instances of this entity. We want to iterate over all objects and check an attribute (of course we could make it more efficient by defining a predicate, but we want to ensure that all objects will be loaded in memory):

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person" inManagedObjectContext:context];
[fetchRequest setEntity:entity];
NSArray *result = [context executeFetchRequest:fetchRequest error:&error];
// execution time: 0.552 s

for (Person *person in result) {
    if ([person.age intValue] < 25)
        value++;
}
// execution time: 0.083 s
We can see that we can inspect all the objects in a decent time. Now we want to extend the model with another entity and a relation between them. We create the Car entity with three attributes (engine, manualTransmission, name) and a relation between the Person and Car entities. We use a store which contains 5000 Person objects, and every Person object has a Car (10000 objects in total). We also want to extend the previous query, but now instead of inspecting an attribute of the target object, we want to use the relation and inspect a property of the related object.

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person" inManagedObjectContext:context];
[fetchRequest setEntity:entity];
NSArray *result = [context executeFetchRequest:fetchRequest error:&error];
// execution time: 0.351 seconds

for (Person *person in result) {
    if (person.car.manualTransmission) {
        value += [person.age intValue];
    }
}
// execution time: 21.201 seconds

We can observe a drastic performance degradation in this case. But what happened? If we look closer at a person object in the debugger, we can see that it is only a fault. Let's check what's happening when we are fetching objects.

Fetching objects

To optimize memory usage, Core Data uses lazy loading when fetching data. "Fetching" is the term for resolving NSManagedObjects from the repository. When we are using a SQLite store, after executing a fetch request it is quite possible that an object we think is in memory is actually on disk and needs to be loaded into memory. Similarly, objects that we think we are done with may actually still sit in a cache. When an NSFetchRequest is executed, we retrieve an array of NSManagedObjects, but behind those objects, in memory, we have only placeholder objects (faults). The only attribute that is loaded in memory is the NSManagedObjectID. When we want to access a field, the fault is fired and the object is fully loaded into memory. Let's have a closer look:

A. After executing the NSFetchRequest, the result will be an array of faulted NSManagedObjects.
B. When we begin to iterate over the array and access a property of the first object, a fault is fired automatically to the lower levels of the stack.
C. Core Data looks in its internal cache for the specific object. Because it is not in the cache, it is loaded from the repository.
D. With a single access to the store, the cache is filled with data and the requested object is fully loaded in memory. The property value can be used.
E. Iterating through the other values will be quick, because when the fault is fired the object is loaded from the internal cache.

Because single database accesses are not efficient, data is loaded in chunks. This behavior can be fine-tuned by setting the NSFetchRequest fetchBatchSize property. The default value is 0, which corresponds to ∞ (all items will be loaded in the cache). For long result lists we could consider changing the batch size, which results in lower memory consumption at the beginning but, as the faults are fired, more database accesses. Therefore we need to consider every case when we want to change this default behavior. We could also turn off faulting by calling [fetchRequest setReturnsObjectsAsFaults:NO] before executing the request. In this way all properties are loaded in memory after executing the request, and fired faults cause cache hits.

By introducing faulting, Core Data tries to optimize memory consumption. Faults are fired only when properties are accessed. Here we need to notice that not all properties and methods of NSManagedObject cause faults to be fired. As we saw previously, the object id is loaded in memory; therefore no fault is fired when that property is accessed. There is also a long list of NSManagedObject methods which do not cause faults to be fired: isEqual:, hash, superclass, class, self, zone, isProxy, isKindOfClass:, isMemberOfClass:,
conformsToProtocol:, respondsToSelector:, retain, release, autorelease, retainCount, description, managedObjectContext, entity, objectID, isInserted, isUpdated, isDeleted, isFault
Fetching and relations

Previously we saw what happens when we work with objects without relations. The next step up in the scale of loading data is to fetch the relationships while loading the targeted entities. This does not fetch the related objects as fully formed objects, but as faults. This step up can have a significant impact on the performance of a Core Data application. We saw in the second example that the performance was ruined. The reason is that faulted relations are resolved one by one. While accessing the first faulted Person object results in loading all 5000 Person objects into the cache with a single disk access, accessing a faulted Car object (referred to by the relation) brings into the cache only the respective object. Therefore in the second case a total of 5001 disk accesses are used instead of the single disk access of the first case. 5001 disk accesses instead of 1 – that's why the performance was ruined.

The solution for this problem is prefetching. This is a special case of batch faulting, performed after another fetch: anticipating future needs in order to avoid individual fault firing. It can be achieved by simply specifying the relations which we will use on the fetched objects.

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person" inManagedObjectContext:context];
[fetchRequest setEntity:entity];
[fetchRequest setRelationshipKeyPathsForPrefetching:[NSArray arrayWithObject:@"car"]];
NSArray *result = [context executeFetchRequest:fetchRequest error:&error];
// execution time: 4.006 seconds

for (Person *person in result) {
    if (person.car.manualTransmission) {
        value += [person.age intValue];
    }
}
// execution time: 0.974 seconds

This trick solves our problem for the moment, but it is fragile: because we are hardcoding relations, we need to maintain the code as the schema changes.

Conclusions
As we saw, firing faults individually is one of the most common causes of poor performance in Core Data applications. Faults are a double-edged sword that can make great improvements to the speed and performance of our applications or can drag the performance down to the depths of the unusable. We need to know this behavior of Core Data, and we need to keep a balance between the number of disk accesses and the memory consumption, fetching only the data we need, when we need it. We also need to keep this in mind when we create the schema of our model, organizing data in an efficient way so that fewer storage accesses are needed to load it into memory. When fetching data, if we fetch too little, our application will feel unresponsive because of the big number of disk accesses. On the other hand, if we load a big amount of data into memory, we will have quick access to the loaded data, but we will bump into low-memory issues on devices (don't forget that the iPad 2 has 512 MB of internal memory). We also need to be aware that, because no memory virtualization is provided by iOS, Core Data does some clever housekeeping and frees up objects from the cache when they are not used. This can cause disk accesses for objects that were previously in memory. In order to optimize our Core Data applications we can use the following tricks:
• Warm up the cache. We can preload data into the cache to be sure it will be available when it is needed. When the application starts, in a background thread, we fetch data that we know will be used; in this way, when it is needed, it will already be in the cache. In this case it doesn't matter that we are using multiple threads, because all fetching operations are directed to the NSPersistentStoreCoordinator, which has a single cache.
• Use disk access efficiently. Considering that disk access is much slower than memory access, we can reduce the number of disk accesses to optimize the application. We saw how we can do this when fetching data; the same concern extends to saving data: it is more efficient to save in batches instead of saving every object separately. Deleting objects is also a sensitive topic which needs special attention: even if we are deleting a single object, it can imply cascaded object deletions based on the relations between objects.
programming
5 Java practices that I use
This article presents 5 Java practices that I use while coding. It's interesting how simple things can make your life (and your fellow colleagues' lives) easier as a developer. I have no intent to create some sort of top here, but just to illustrate things I consider helpful.
Implement equals, hashCode and toString methods inherited from java.lang.Object Tavi Bolog tavi.bolog@nokia.com Development lead @ Nokia
All 3 methods have been part of java.lang.Object since JDK 1.0. They can be really useful when implemented in your classes, as follows:

Object#equals(Object obj): determines if the current object is equal to the object supplied as a parameter. The implementation coming from Object#equals verifies only the equality of references (this == obj), but this covers only a subset of cases. Usually, a full-blown implementation of equals checks:
a. if (obj == null), return false;
b. if (this == obj), return true;
c. if (!getClass().equals(obj.getClass())), return false;
d. type-cast obj to the type of this;
e. check the equality of the attributes between this and obj and return true or false.

Overriding equals has benefits when the equality of objects' state is important for a program to run properly (e.g. objects are part of collections and they need to be found during a "get" operation). Another situation would be the writing of unit tests, where the equality of objects is pretty often verified by assertion statements.
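As a sketch, the checks a–e above can be implemented like this. The Car class and its fields are illustrative assumptions for this article, not the author's actual GitHub example; a paired hashCode built from the same fields is included, since equal objects must produce the same hash code.

```java
// Illustrative Car class (fields assumed) implementing the equals()
// checklist a-e from the article, plus a matching hashCode().
class Car {
    private final String type;
    private final int numberOfDoors;
    private final boolean allWheelDrive;

    Car(String type, int numberOfDoors, boolean allWheelDrive) {
        this.type = type;
        this.numberOfDoors = numberOfDoors;
        this.allWheelDrive = allWheelDrive;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) return false;                         // a
        if (this == obj) return true;                          // b
        if (!getClass().equals(obj.getClass())) return false;  // c
        Car other = (Car) obj;                                 // d
        return numberOfDoors == other.numberOfDoors            // e
                && allWheelDrive == other.allWheelDrive
                && (type == null ? other.type == null
                                 : type.equals(other.type));
    }

    @Override
    public int hashCode() {
        // Built from the same fields used by equals()
        int result = 17;
        result = 31 * result + (type == null ? 0 : type.hashCode());
        result = 31 * result + numberOfDoors;
        result = 31 * result + (allWheelDrive ? 1 : 0);
        return result;
    }
}
```

With this in place, two Car objects with the same state are interchangeable in collections: a Car can be found in a list via an equal but distinct instance.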
Along with Object#equals, Object#hashCode is another method that can be useful in similar conditions as equals. The general contract for hashCode is:
a. equals is overridden;
b. if two objects are equal, they generate the same hash code;
c. it should use the same fields as those used to check equality;
d. if the fields used to implement equals are not changed, subsequent calls of hashCode should return the same value; still, it is not mandatory for the same value to be returned across application executions.

Hash-based collections and maps are built using hash buckets, so operations like “get” may produce unexpected behavior if hashCode is not properly implemented. Because of statement d) above, the client code may not be able to find an object in a hash-based collection if the state of the object has changed in the meantime (and that state is used by equals). The best way to avoid this is to use immutable objects (the state of the object will not change during its life cycle). As well, if the object is immutable, the hashCode result can be cached to improve performance (because the state of the object is guaranteed not to change after initialization).

www.todaysoftmag.com | nr. 5/2012

How many times do we look at some log files and see something like “value is: Car@29edc073”? Yes, I know, very annoying and useless; and all because Object#toString was not overridden. This method is very handy when it comes to logging the state of objects. After all, seeing something like “value is: Car[type=BMW,number of doors=5,isAllWheelDrive=false]” makes much more sense. One hint on using toString(): it is preferable to write “value is “ + car instead of “value is “ + car.toString(), because “car” is evaluated using toString anyway, being concatenated to a string (“value is “), and this also avoids a NullPointerException in case “car” is null. For all 3 methods above, Apache Commons Lang 3 (http://commons.apache.org/lang/) provides utilities that make developers’ lives easier. See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/objectmethods/Car.java and the associated test.
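Both ideas, caching the hash of an immutable object and a log-friendly toString, can be sketched as follows (illustrative field names, modeled on the log sample above; not the repository’s Car class):

```java
// Immutable class: the cached hash is safe because the state never changes.
final class ImmutableCar {
    private final String type;
    private final int numberOfDoors;
    private final boolean isAllWheelDrive;
    private int cachedHash; // lazily computed; 0 means "not computed yet"

    ImmutableCar(String type, int numberOfDoors, boolean isAllWheelDrive) {
        this.type = type;
        this.numberOfDoors = numberOfDoors;
        this.isAllWheelDrive = isAllWheelDrive;
    }

    @Override
    public int hashCode() {
        int h = cachedHash;
        if (h == 0) { // recomputing when the real hash happens to be 0 is harmless
            h = 31 * (31 * (type == null ? 0 : type.hashCode()) + numberOfDoors)
                    + (isAllWheelDrive ? 1 : 0);
            cachedHash = h;
        }
        return h;
    }

    @Override
    public String toString() {
        return "Car[type=" + type + ",number of doors=" + numberOfDoors
                + ",isAllWheelDrive=" + isAllWheelDrive + "]";
    }
}
```

A complete class would also override equals using the same fields, per the hashCode contract above.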
Avoid deep nesting of statements
In general, I like to keep my code simple and easy to read. Of course, this doesn’t always happen. As a rule of thumb, I prefer to close statements as soon as possible. By doing this, you limit the complexity of reading through nested statements. Closing a statement can be achieved by calling return, break, continue or throw. In my opinion, using:

if (a == null) { return; }

is preferable to:

if (a != null) { // do something }

A way to enforce this practice could be limiting the number of characters per line in your IDE; I keep mine at 120. See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/nested/Utility.java for 2 versions of the same utility, one using nested statements and the other trying to avoid them. Then choose for yourself the one that looks cleaner.

Check validity of method’s parameters

Nobody likes an infamous NPE (NullPointerException) to show up in the stack trace of his code. Me neither. There are various techniques to avoid this:

a. Use JavaDoc to document your input parameters. The problem is most people don’t read JavaDocs.

/** @throws NullPointerException in case argument is null */

b. Use assertions. These are boolean statements that allow developers to verify assumptions about the code being executed. So, assertions are not for the users. When an assertion fails, most likely the library which contains the “assert” has a bug. I think the most common use of assertions is related to the design-by-contract paradigm:

• Pre-conditions: check what needs to be true when a method is called. This could be a solution for private methods, but not for public methods, since the application could run without assertions enabled, and also assertions can only throw AssertionError and not a specific exception type. So, the public methods should check the validity of their parameters; still, the private methods could use assertions before doing any operation.

private void divide(int a, int b) {
    assert (b != 0);
    …
}

• Post-conditions: check what needs to be true before a method returns. Here, assertions are allowed for public methods also, since the client code might be able to handle an eventual unexpected result (e.g. a null account below).

private Account createAccount() {
    Account account = null;
    // create the account
    …
    assert (account != null);
    return account;
}
• Class invariants: check what needs to be true at any point in time for an object. In this situation, any constructor or public method of a class should call the invariant before returning, to ensure the consistency of the state of the object.
public class Account {
    String name = null;

    private boolean checkNameForNull() { // class invariant
        return name != null;
    }

    public Account(String name) {
        …
        assert checkNameForNull();
    }
}
Mind that there is no mechanism to recover from a failed assertion, since the main purpose of assertions is to help build code that is testable and to protect against executions with a corrupted application state. By default, assertions are disabled. To enable them, run: java -ea MyClass or java -enableassertions MyClass.
c. Check the validity of input parameters and throw exceptions as needed. In case the code can recover from invalid input, you should use a checked exception; otherwise use an unchecked exception. This is according to the Oracle guidelines from http://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html. Usually the constructors of an object can’t properly instantiate it if the input parameters are invalid, so I prefer to throw IllegalArgumentException. Specifying this in the JavaDoc is also a good practice.
See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/check/Dog.java and the associated test.

NOTE: checking the input parameters before assigning them may be problematic because of the so-called “window of vulnerability”: in the period between a parameter check and the parameter assignment, another thread could modify the input parameter into an invalid state, making our initial check useless. To avoid this, the validity check can be done on the actual assignee (e.g. the left operand of the assignment) after the assignment is done.
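This note is essentially the defensive-copy idiom; a hedged sketch (Period and its fields are illustrative names, not taken from the article’s repository):

```java
import java.util.Date;

// Copy the mutable parameters first, then validate the copies (the assignees),
// so another thread mutating the caller's objects cannot bypass the check.
final class Period {
    private final Date start;
    private final Date end;

    Period(Date start, Date end) {
        this.start = new Date(start.getTime()); // defensive copies before validation
        this.end = new Date(end.getTime());
        if (this.start.after(this.end)) {       // validate what we actually stored
            throw new IllegalArgumentException("start is after end");
        }
    }

    Date getStart() { return new Date(start.getTime()); } // never leak internals
    Date getEnd() { return new Date(end.getTime()); }
}
```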
Avoid un-necessary noise

It’s always best to keep the code as clean as possible. Some developers may have good intentions to apply good practices, but the result does not really bring much benefit, rather unnecessary noise. Comments are helpful only if they say more than the code itself. This is a tricky statement. If the code is self-explanatory, then comments like the ones below don’t have much value:

a. /** set person */
   public void setPerson(Person p) {}

b. // read file
   readFile(new File(“…”));

As a rule of thumb, better name your methods and variables as meaningfully as possible, to avoid the noise coming from comments. Let’s not forget that comments need to be maintained as part of any refactoring, which adds even more work to do.

The next item to discuss is the usage of this. It makes sense, in my opinion, when:

a. it is used to call a constructor inside another constructor of the same class, to avoid duplicating object initialization code:

public Person(String name, int age) { this(name); }

b. the name of an attribute is identical to the name of a parameter:

public void setPerson(Person person) { this.person = person; }

Otherwise, the usage of this sounds like noise to me. Below are a few counter-examples:

System.out.println(this.name);
System.out.println(this.COUNT_OF_MESSAGES);

The last item I want to discuss here is the usage of final. It makes sense to signal that a variable could be used in an inner class, or if you want to clearly state, for later changes, that the variable should not be reassigned. I use the first situation more often, while the second makes more sense when defining a constant; it should not be abused in other circumstances unless really needed. See this example:

public int add(final int a, final int b) {
    final int c = a + b;
    return c;
}

This is correct, but wouldn’t it be simpler to write:

public int add(int a, int b) {
    return a + b;
}

Make sure you clean up your resources

Managing resources is key to ensuring that a system will run properly without restarts or unpredictable behavior. One of the most common mistakes in managing resources is forgetting to close streams once they are open. The sequence should be:

a. open the stream;
b. manipulate the stream;
c. close the stream.

The last step needs to be executed no matter if exceptions happen in the prior steps. Up to Java 6, this could be done using the finally statement. Here is how the code usually looks:

FileInputStream fis = null;
try {
    fis = new FileInputStream(new File(“README”));
    // do your file manipulation here
} catch (IOException ioexp) {
    // handle exception
} finally {
    // make sure the stream gets closed
    if (fis != null) {
        fis.close();
    }
}

What could happen here is that the finally block itself throws an exception, and the code eventually needs to catch that, which means more boilerplate for the developer to handle and yet another source of errors.

Starting with Java 7, the syntax of the try statement was extended to define a scope for resource assignments, called try-with-resources:

try (FileInputStream fis = new FileInputStream(new File(“README”))) {
    // do your file manipulation here
}

This means that at the end of the try block, the resources which implement the AutoCloseable interface will be automatically closed. You can even specify multiple resources to be closed at the end of the same block. To support this new concept, a lot of the existing JDK classes (including classes from java.io.*, java.nio.*, java.net.*, etc.) now implement the AutoCloseable interface, while still maintaining backward compatibility with existing code. As well, the compiler-generated code handles null resources.

Another interesting concept in Java 7 is the automatic handling of exception masking. Exception masking is the situation where one exception is masked by another. It is not related to the case when client code wraps an exception coming from a low-level API, but, for example, to the case where an exception is thrown in the try block and another exception is thrown in the finally block, so the client code mainly notices only the exception from the finally block. Java 7 handles this situation automatically (and also programmatically, through the Throwable.addSuppressed(…) method), by suppressing the second exception (the one from finally) in order to surface the exception from the try block. It is recommended to let the compiler generate this boilerplate, since it will add the minimal code to handle exception suppression, rather than writing it ourselves (a lot of ugly boilerplate code that could generate other problems).

Other situations that fall under the resource cleanup topic are:

a. failing to close database connections;
b. failing to close sockets after they are not needed anymore.

See https://github.com/tavibolog/TodaySoftMag/blob/master/src/main/java/com/todaysoftmag/examples/resources/ResourceCleaner.java and the associated test to illustrate both situations (properly closing streams, including a JDK 1.7 example, and failing to close streams). Especially on Linux/Unix systems, failing to close streams will generate a “FileNotFoundException” with a “too many open files” message.

NOTE: the code examples were tested using Ubuntu 12.04, running OpenJDK 7 and Maven 3.
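The suppression behavior can be observed directly through Throwable#getSuppressed; a small self-contained sketch (the class and messages below are made up for illustration):

```java
// With try-with-resources, the close() failure is attached as a suppressed
// exception instead of masking the exception thrown from the try block.
class SuppressDemo {
    static final class Failing implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("failure from close()");
        }
    }

    static Throwable[] run() {
        Throwable[] suppressed = new Throwable[0];
        try (Failing f = new Failing()) {
            throw new RuntimeException("failure from try block");
        } catch (RuntimeException e) {
            suppressed = e.getSuppressed(); // holds the close() exception
        }
        return suppressed;
    }

    public static void main(String[] args) {
        Throwable[] s = run();
        System.out.println(s.length);          // 1
        System.out.println(s[0].getMessage()); // failure from close()
    }
}
```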
Microsoft Kinect – A programming guide Part II – Displaying the entire skeleton of the user
In our previous issue, we introduced the code sequence involved in initializing Kinect and built a quick Hello World type application. In what follows, we will continue the development of the application with a new functionality: displaying the entire skeleton of the user.

Simplex team
simplex@todaysoftmag.com
void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e) {
    if (closing) {
        return;
    }
    Skeleton first = GetFirstSkeleton(e);
    if (first == null) {
        return;
    }
    ContainerSkeleton.Children.Clear();
    Dictionary<JointType, Joint> jointsDictionary = new Dictionary<JointType, Joint>();
    foreach (Joint joint in first.Joints)
        jointsDictionary.Add(joint.JointType, getScaledJoint(joint, e));
    drawJoints(jointsDictionary);
    drawBones(jointsDictionary);
}
Once we receive a frame from Kinect, we first check that it is not null, then extract the joints from the first detected skeleton and retain them in a dictionary.

void drawJoints(Dictionary<JointType, Joint> jointsDictionary) {
    foreach (KeyValuePair<JointType, Joint> joint in jointsDictionary) {
        drawJointEllipse(30, Brushes.LightGreen, joint.Value);
    }
}

private void drawJointEllipse(int diameter, Brush color, Joint joint) {
    Ellipse el = new Ellipse();
    el.Width = diameter;
    el.Height = diameter;
    el.Fill = color;
    Canvas.SetLeft(el, joint.Position.X - diameter / 2);
    Canvas.SetTop(el, joint.Position.Y - diameter / 2);
    main.ContainerSkeleton.Children.Add(el);
    el.Cursor = Cursors.Hand;
}
The drawJointEllipse method draws one joint of the user’s body in the Canvas-type element, ContainerSkeleton. Each joint is graphically represented by an ellipse.

void drawBones(Dictionary<JointType, Joint> jointsDictionary) {
    Brush brush = new SolidColorBrush(Colors.LightGray);
    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y),
        new Point(jointsDictionary[JointType.Head].Position.X, jointsDictionary[JointType.Head].Position.Y) }, brush, 9));
    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y),
        new Point(jointsDictionary[JointType.ShoulderLeft].Position.X, jointsDictionary[JointType.ShoulderLeft].Position.Y),
        new Point(jointsDictionary[JointType.ElbowLeft].Position.X, jointsDictionary[JointType.ElbowLeft].Position.Y),
        new Point(jointsDictionary[JointType.WristLeft].Position.X, jointsDictionary[JointType.WristLeft].Position.Y),
        new Point(jointsDictionary[JointType.HandLeft].Position.X, jointsDictionary[JointType.HandLeft].Position.Y) }, brush, 9));
    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y),
        new Point(jointsDictionary[JointType.ShoulderRight].Position.X, jointsDictionary[JointType.ShoulderRight].Position.Y),
        new Point(jointsDictionary[JointType.ElbowRight].Position.X, jointsDictionary[JointType.ElbowRight].Position.Y),
        new Point(jointsDictionary[JointType.WristRight].Position.X, jointsDictionary[JointType.WristRight].Position.Y),
        new Point(jointsDictionary[JointType.HandRight].Position.X, jointsDictionary[JointType.HandRight].Position.Y) }, brush, 9));
    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.ShoulderCenter].Position.X, jointsDictionary[JointType.ShoulderCenter].Position.Y),
        new Point(jointsDictionary[JointType.Spine].Position.X, jointsDictionary[JointType.Spine].Position.Y),
        new Point(jointsDictionary[JointType.HipCenter].Position.X, jointsDictionary[JointType.HipCenter].Position.Y) }, brush, 9));
    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.HipCenter].Position.X, jointsDictionary[JointType.HipCenter].Position.Y),
        new Point(jointsDictionary[JointType.HipLeft].Position.X, jointsDictionary[JointType.HipLeft].Position.Y),
        new Point(jointsDictionary[JointType.KneeLeft].Position.X, jointsDictionary[JointType.KneeLeft].Position.Y),
        new Point(jointsDictionary[JointType.AnkleLeft].Position.X, jointsDictionary[JointType.AnkleLeft].Position.Y),
        new Point(jointsDictionary[JointType.FootLeft].Position.X, jointsDictionary[JointType.FootLeft].Position.Y) }, brush, 9));
    ContainerSkeleton.Children.Add(getBodySegment(new PointCollection() {
        new Point(jointsDictionary[JointType.HipCenter].Position.X, jointsDictionary[JointType.HipCenter].Position.Y),
        new Point(jointsDictionary[JointType.HipRight].Position.X, jointsDictionary[JointType.HipRight].Position.Y),
        new Point(jointsDictionary[JointType.KneeRight].Position.X, jointsDictionary[JointType.KneeRight].Position.Y),
        new Point(jointsDictionary[JointType.AnkleRight].Position.X, jointsDictionary[JointType.AnkleRight].Position.Y),
        new Point(jointsDictionary[JointType.FootRight].Position.X, jointsDictionary[JointType.FootRight].Position.Y) }, brush, 9));
}
The drawBones method draws a segment for each of the major bones of the user’s skeleton. The segments are added to the same container that holds the ellipses representing the previously identified joints.

private Polyline getBodySegment(PointCollection points, Brush brush, int thickness) {
    Polyline polyline = new Polyline();
    polyline.Points = points;
    polyline.Stroke = brush;
    polyline.StrokeThickness = thickness;
    return polyline;
}

private Joint getScaledJoint(Joint joint, AllFramesReadyEventArgs e) {
    var posX = (float)GetCameraPoint(joint, e).X;
    var posY = (float)GetCameraPoint(joint, e).Y;
    joint.Position = new Microsoft.Kinect.SkeletonPoint {
        X = posX,
        Y = posY,
        Z = joint.Position.Z
    };
    return joint;
}
One way to display the joints on screen would be the ScaleTo method. However, ScaleTo tends to distort the user’s body so that it occupies all the available space on the display. To preserve anatomical proportions, we manually convert the coordinates of each joint from the metric system into pixels through the GetCameraPoint function.

Point GetCameraPoint(Joint joint, AllFramesReadyEventArgs e) {
    using (DepthImageFrame depth = e.OpenDepthImageFrame()) {
        if (depth == null || sensor == null) {
            return new Point(0, 0);
        }
        DepthImagePoint jointDepthPoint = depth.MapFromSkeletonPoint(joint.Position);
        ColorImagePoint jointColorPoint = depth.MapToColorImagePoint(jointDepthPoint.X, jointDepthPoint.Y, ColorImageFormat.RgbResolution640x480Fps30);
        return new Point((int)(ContainerSkeleton.ActualWidth * jointColorPoint.X / 640.0),
                         (int)(ContainerSkeleton.ActualHeight * jointColorPoint.Y / 480.0));
    }
}
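The proportional mapping in the last line of GetCameraPoint can be sketched on its own (illustrative Java, independent of the Kinect SDK; 640x480 is the color image resolution used above):

```java
// Maps a point from the 640x480 color-image space into canvas coordinates,
// preserving its relative position - the same arithmetic as GetCameraPoint.
final class JointScaler {
    static final double COLOR_WIDTH = 640.0;
    static final double COLOR_HEIGHT = 480.0;

    static int[] toCanvas(double canvasWidth, double canvasHeight, int colorX, int colorY) {
        int x = (int) (canvasWidth * colorX / COLOR_WIDTH);
        int y = (int) (canvasHeight * colorY / COLOR_HEIGHT);
        return new int[] { x, y };
    }
}
```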
Conclusions
Displaying the complete skeleton is especially useful for debugging (we can easily see how precisely the joints are positioned on the user’s body), as well as for some graphics applications and games.
testing
Testing - an exact science
The field of software testing has become increasingly dynamic: new test methods are introduced, and more concepts are refined or reinvented. The famous phrase „anyone can test” is becoming increasingly difficult to confirm, due to the high technical level of the technologies used in application development.
Consider a simple scenario: a student or recent graduate is hired in a position of “Software Testing Engineer”, generically called QA. In his first days he becomes familiar with the company, his colleagues, and the application he is about to test. Moreover, it is suggested that he study for the ISTQB Foundation Level certification, for a better understanding of the testing process. Our QA studies conscientiously and finally obtains the certification diploma for the first level of software testing. At some point certain concepts are forgotten, as we are only human, and he calls on his friend Google to refresh his memory. Inevitably he finds several definitions for the same concept, similar in explanation but with slightly different interpretations (e.g. “Test Automation”). The question that follows is: which is the best definition? Is there the „single source of truth” that we all dream of? There may be a few who choose documented studies or scientific articles published on this topic, but most will go for a quick search on the internet. In the following example I will try to show that automated testing is more than just running tests, and that structuring an automated process is laborious work involving many factors, not only what we find in a simple search on the net. ISTQB is a certification recognized all over the world, and yet it is not the only one, and this type of certification is not considered important by everybody. However, it helps a QA and a software testing researcher speak the same language. Software testing tends to be an exact, well-studied and well-defined science. It is an area where a concept receives a definition after a laborious process of evaluation and feedback, carried out by academics and based on real data. To give a concrete example: a simple Google search on „Test Automation” leads to a Wikipedia article [1] where you will find
the following definition: „In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.” Instead, if you search for the same concept in studies and research articles, you will stumble upon Parasuraman’s principle [2], which frames automation in the following way: “We propose that automation can be applied to four broad classes of functions: a) information acquisition; b) information analysis; c) decision and action selection; and d) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. […] We therefore use a definition that emphasizes human-machine comparison and define automation as a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator [8].”

It can be seen that the Wikipedia version refers only to the running of test scenarios, which, in terms of the academic principle, is just an intermediate automation level; full automation is defined by the absence of any human intervention. I learned in my early QA days that on a scale of 1 to 10 (1: the system does not offer any assistance, all decisions and actions are human; 10: the system decides everything and acts autonomously, ignoring human interference), “Test Automation” as commonly practiced sits at level 5, a level involving only test execution, with any other decision being taken by a human (test data definition, test selection, results analysis, etc.). There are many automation models, involving more or less human intervention, and the example offered is only a starting point for a better understanding of what each team wants when using automated testing. In the end, an efficient and successful software automation process comes down to factors like productivity, cost and quality. In conclusion, I would just like to mention that everyone chooses his documentation and sources of information from wherever he considers appropriate, and it is ultimately irrelevant whether they are online or offline. What is truly relevant is the information that remains and the accuracy of that information. Software testing, especially automated testing, is not a creative art where each of us can come up with his own definition of test automation, or testing, or any other term. And speaking of art, automation can become an art, to the extent that the processes used, scientifically defined and demonstrated, will receive the admiration of people in software and beyond.
Bibliography
1. Wikipedia, “Test automation”, http://en.wikipedia.org/wiki/Test_automation
2. Raja Parasuraman, Thomas B. Sheridan, and Christopher D. Wickens, “A Model for Types and Levels of Human Interaction with Automation”, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 30, no. 3, May 2000
Andrei Conțan
andrei.contan@betfair.com
Principal QA @ Betfair
Co-Founder, Romanian Testing Community
What’s new in Windows Communication Foundation 4.5
New features in WCF 4.5 tend to fall into two areas: simplicity and scalability. The biggest problem with WCF has always been its configuration, so what was wanted from the new version was an easier and simpler configuration. We all know that it is not easy to configure a WCF service; indeed, once set up and running, it becomes a great advantage in any distributed application system.
Leonard Abu-Saa
leonard.abu-saa@arobs.com
System Architect @ Arobs
Modernization and scalability have been achieved by implementing new features such as WebSockets, UDP protocol support, and support for the new „async/await” programming model. Let’s start with the configuration file. Does the scenario sound familiar where you create a service endpoint, expose it through HTTP or TCP, and when you add a reference in the client application you find a ton of settings for a binding? Fortunately this has changed and things will look different. This is what a default client binding configuration looks like in WCF 4.0:
And this is what it will look like in WCF 4.5!
Beautiful, no? Another aspect of simplicity is that some default settings and values have been changed so that services are „production ready”. In this regard, the reader quotas, throttles and timeouts were modified in order to avoid the many exceptions generated by the previous default settings.
A newly introduced feature is support for multiple authentication modes per endpoint. Previously this was possible only by creating multiple endpoints; now a single endpoint can be configured to support, for example, Windows and Basic authentication simultaneously. Defining an endpoint and a service is also easier in the configuration file thanks to the new IntelliSense support, which now gives us a list of the contracts and class names that can be used. If an item has not been configured, the warnings window will signal this fact. This feature is called „Validation over configuration”.

WCF 4.5 also comes with a new binding: basicHttpsBinding. It is similar to basicHttpBinding and has the following default settings: SecurityMode = Transport and ClientCredentialType = None.

Configuration can now also be done in service code, through a new type called ServiceConfiguration. If the service defines a static method void Configure(ServiceConfiguration config), it will be called by default. With it, new behaviors, new endpoints and other common settings can be added, and the configuration can even be loaded from a file in a different location on the disk. Before, this was possible only through workarounds that brought overhead and maintenance difficulties. This feature is called code-based configuration.

All of these features were introduced for simplicity and ease of working with WCF. Next, let’s talk about what brings modernization and scalability to this framework. I will start with the new way of writing asynchronous code: the task-based model. It allows you to write asynchronous code on both the server and the client, with all calls made asynchronously. As a principle, any method that returns Task or Task<T> is an asynchronous method on the server. These methods can be invoked using the new await and async keywords. Consider the following example:

public static async void CopyToAsync(Stream source, Stream destination) {
    byte[] buffer = new byte[0x1000];
    int numRead;
    while ((numRead = await source.ReadAsync(buffer, 0, buffer.Length)) > 0) {
        await destination.WriteAsync(buffer, 0, numRead);
    }
}

Basically, we have an asynchronous copy between streams. The method needs the async modifier because it uses the await operator inside; this way the compiler knows the method is asynchronous and, while an awaited operation completes, execution is free to continue with any other instructions that remain to be executed. If we use await without async in the method signature, we get a compilation error. For comparison, here is the same example written using APM (the Asynchronous Programming Model), the model we have been used to so far:

public static IAsyncResult BeginCopyTo(Stream source, Stream destination) {
    var tcs = new TaskCompletionSource<bool>();
    byte[] buffer = new byte[0x1000];
    Action<IAsyncResult> readWriteLoop = null;
    readWriteLoop = iar => {
        try {
            for (bool isRead = iar == null; ; isRead = !isRead) {
                switch (isRead) {
                    case true:
                        iar = source.BeginRead(buffer, 0, buffer.Length, readResult => {
                            if (readResult.CompletedSynchronously) return;
                            readWriteLoop(readResult);
                        }, null);
                        if (!iar.CompletedSynchronously) return;
                        break;
                    case false:
                        int numRead = source.EndRead(iar);
                        if (numRead == 0) {
                            tcs.TrySetResult(true);
                            return;
                        }
                        iar = destination.BeginWrite(buffer, 0, numRead, writeResult => {
                            try {
                                if (writeResult.CompletedSynchronously) return;
                                destination.EndWrite(writeResult);
                                readWriteLoop(null);
                            } catch (Exception e) { tcs.TrySetException(e); }
                        }, null);
                        if (!iar.CompletedSynchronously) return;
                        destination.EndWrite(iar);
                        break;
                }
            }
        } catch (Exception e) { tcs.TrySetException(e); }
    };
    readWriteLoop(null);
    return tcs.Task;
}

public static void EndCopyTo(IAsyncResult asyncResult) {
    ((Task)asyncResult).Wait();
}

It is clear that things have been greatly simplified. Using the new task-based model, writing asynchronous methods is almost as easy as writing synchronous ones: just make the return type of the method Task or Task<T> and add the await and async keywords.

Returning to bindings, UdpBinding was also introduced. It is useful when you want to transmit data frequently to many receivers, as in broadcast and multicast scenarios, where guaranteed delivery of every message is less important. As a limitation, it does not support WAS (Windows Process Activation Service).

Another very important feature is support for WebSockets, which allow bidirectional, full-duplex communication in an easier way than before.

In conclusion, WCF 4.5 attempts to simplify and modernize the development platform through „production ready” default settings, IntelliSense and validation for configuration files, support for WebSockets and UDP, the task-based model and other features. My opinion is that these features are a welcome improvement over the previous versions and I cannot wait to put them into practice.
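As a side note for readers coming from Java, the task-based model has a close analogue in CompletableFuture; the sketch below is illustrative only (it is not WCF code), showing the same composition style:

```java
import java.util.concurrent.CompletableFuture;

// A "task-returning" method: the caller gets a handle to the eventual result
// and composes continuations instead of blocking, similar to Task<T> with await.
class TaskStyle {
    static CompletableFuture<Integer> lengthAsync(String s) {
        return CompletableFuture.supplyAsync(s::length);
    }

    public static void main(String[] args) {
        int doubled = lengthAsync("hello")
                .thenApply(n -> n * 2) // continuation, like code after an await
                .join();               // block only at the outermost point
        System.out.println(doubled);   // 10
    }
}
```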
technologies
Service Bus Topics in Windows Azure
The first CTP of Windows Azure was announced in 2008 and, two years later, the commercial version was launched. Since then, each new version of Windows Azure has brought new functionality. If in 2010 the web role and the worker role were its main strengths, the Windows Azure of today is much more complex and allows us to do things we could not have imagined.
Radu Vunvulea
Radu.Vunvulea@iquestgroup.com
Senior Software Engineer @ iQuest
Windows Azure Service Bus provides the infrastructure to communicate in a distributed system without any problems. All issues related to scalability, security and load balancing are resolved by it. A very important property of this infrastructure is the assurance that no message will be lost: each one will reach its destination once it has been received by the Service Bus.
When we talk about Service Bus, we have to mention three very important components:
• Service Bus Queues,
• Service Bus Topics,
• Service Bus Relay.

Service Bus Relay allows us to integrate and expose, through Windows Azure, WCF services that are hosted on on-premises servers. Service Bus Queue, which we will talk about in this article, is a service that allows communication between two modules via message queues. This operation is completely asynchronous and the modules are not required to wait for each other. Any message that was successfully sent to the message queue will not be lost; this is guaranteed by Windows Azure. Even when there is no message consumer, the messages will be retained by the Service Bus Queue. Service Bus Topics is quite similar to Service Bus Queues; the main difference is that a message can reach more listeners (subscribers), not just one.

The number of messages in a queue is not limited. There is a limit only for the maximum capacity a queue can have, which is 5GB. Each message can have a maximum size of 256KB. Inside it we can add information in the form of (key, value) pairs or any serializable object. We can even add data streams, as long as their size does not exceed the maximum capacity of a message.
www.todaysoftmag.com | nr. 5/2012
technologies
Each message added to the Service Bus Queue has a TTL (time-to-live); the maximum accepted value is infinite. When the TTL expires, the message is automatically removed from the queue. Besides this, we also have a property through which we can specify when a message becomes active and can be consumed. In this way we can send a message that can be processed only after a date we have defined.

Service Bus Queue is a service that allows us to have an unlimited number of producers and consumers. This service also includes a mechanism to detect duplicates. In terms of security, access to a Service Bus message queue can be done through:
• ACS claims
• Role-based access
• Identity provider federation

Currently, Shared Access Signature cannot be used to access a Service Bus queue or any other resource in the Service Bus.

To use a queue or any other service available through the Service Bus, it is necessary to create a namespace. A namespace is a unique name through which we can identify the resources and services we have available in the Service Bus. It can be created on the Windows Azure portal, under "Service Bus, Access Control & Caching". All this information, and the location where it has to be added to a project, can be found on the Windows Azure portal, which can automatically generate the XML that has to be added; all we have to do is copy/paste it. The following examples are in C#, but this does not mean you cannot use Service Bus Queues from PHP, Node.js or any other programming language: the service is exposed over REST, and any library you use is actually a wrapper over it.

Before creating a queue, it is advisable to check whether it already exists; trying to create a queue with a name that already exists throws an exception. NamespaceManager allows us to access any service within the Service Bus.
Once you have a namespace, a queue can be created in various ways, from the portal to configuration files or code. We can give a queue any name, as long as it respects the following rules:
• it contains only numbers, uppercase and lowercase letters
• it may contain characters like '-', '_', '.'
• it does not exceed 256 characters

    NamespaceManager namespaceManager =
        NamespaceManager.CreateFromConnectionString(
            CloudConfigurationManager.GetSetting(
                "ServiceBusConnectionString"));

    QueueDescription queueDescription =
        new QueueDescription("FooQueue");
    queueDescription.DefaultMessageTimeToLive =
        new TimeSpan(0, 10, 30);

    if (!namespaceManager.QueueExists("FooQueue"))
    {
        namespaceManager.CreateQueue(queueDescription);
        // or
        namespaceManager.CreateQueue("FooQueue");
    }
As shown above, we can create a queue by specifying only its name. If we want to set parameter values different from the defaults, we can use a QueueDescription.

Applications that want to access Service Bus Queues need to know three pieces of information:
• Endpoint – the URL through which we can access services from the Service Bus; this address generally has the following form: sb://[namespace].servicebus.windows.net/
• SharedSecretIssuer – the name of the issuer (the namespace owner)
• SharedSecretValue – the secret key through which a namespace can be accessed; it can be regenerated at any time through the portal

Any message we add to the queue is a BrokeredMessage. In this object we put all the information that must reach the consumer. A message consists of two parts:
• Header – contains the message configuration and all the properties we want to add
• Body – the message content; it can be a serialized object or a stream
Operations on a queue can take place both synchronously and asynchronously. The mechanism for adding a message to the queue is quite simple. When we want to extract a message from the queue, we have two mechanisms:
• Receive and delete – a message is taken from the queue and immediately deleted
• Peek and lock – a message is taken from the queue and locked; it is deleted only when the consumer confirms this. If within a predefined time interval the consumer does not confirm that the message has been processed, Service Bus Queue makes the message available again on the queue (the default is 60 seconds).

All these operations can be performed through a QueueClient object:

    BrokeredMessage message = new BrokeredMessage();
    message.Properties["key"] = "value";

    QueueClient queueClient =
        QueueClient.CreateFromConnectionString(
            connectionString, "FooQueue");
    queueClient.Send(message);

    // receive-and-delete mode is selected when the client is created
    QueueClient receiveAndDeleteClient =
        QueueClient.CreateFromConnectionString(
            connectionString, "FooQueue",
            ReceiveMode.ReceiveAndDelete);
    BrokeredMessage messageReceiveAndDelete =
        receiveAndDeleteClient.Receive();

    // peek-and-lock (the default receive mode)
    BrokeredMessage messagePeekAndLock = null;
    try
    {
        messagePeekAndLock = queueClient.Receive();
        // ... process the message
        messagePeekAndLock.Complete();
    }
    catch (Exception)
    {
        if (messagePeekAndLock != null)
        {
            messagePeekAndLock.Abandon();
        }
    }
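The delayed activation mentioned earlier is exposed in the .NET SDK as the ScheduledEnqueueTimeUtc property of BrokeredMessage. A minimal sketch, assuming the queueClient from the example above (the one-day delay is only illustrative):

```
BrokeredMessage delayedMessage = new BrokeredMessage();
delayedMessage.Properties["key"] = "value";
// the message is stored immediately, but stays invisible to
// consumers until the scheduled time is reached
delayedMessage.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddDays(1);
queueClient.Send(delayedMessage);
```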
The message queue operations are transactional, and both the producer and the consumer may work with batches of messages. If you reach a point where you are putting a lot of data into a message, or want to add the contents of a stream to one, think twice about whether you really need that information in the message, and whether you could instead put it somewhere both the producer and the consumer can access it (e.g. Windows Azure Tables, Windows Azure Blobs).

    byte[] currentMessageContent = new byte[100];
    // ...
    BrokeredMessage messageToSend =
        new BrokeredMessage(
            new MemoryStream(currentMessageContent));
    queueClient.Send(messageToSend);

    BrokeredMessage messageToReceive =
        queueClient.Receive();
    Stream stream = messageToReceive.GetBody<Stream>();
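The batch support mentioned above can be sketched as follows. This is a minimal sketch, assuming the queueClient from the earlier examples; SendBatch and ReceiveBatch are the batch counterparts of Send and Receive in the Service Bus .NET SDK:

```
List<BrokeredMessage> batch = new List<BrokeredMessage>();
for (int i = 0; i < 10; i++)
{
    BrokeredMessage m = new BrokeredMessage();
    m.Properties["index"] = i;
    batch.Add(m);
}
// one round trip to the Service Bus instead of ten
queueClient.SendBatch(batch);

// the consumer can also pull several messages at once
foreach (BrokeredMessage received in queueClient.ReceiveBatch(10))
{
    // ... process the message
    received.Complete();
}
```

Batching reduces the number of billable requests and the per-message latency, at the cost of holding more messages in memory at once.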
After calling the Receive() method we should check whether the returned object is null; the thread remains blocked in the receive call only until a message from the queue is available or the timeout expires.

Each message added to the queue can have a unique session id (SessionId) attached. Through the session id we can configure the consumer to receive messages only from that session. This is a way to send data to a consumer that would normally not fit in a single message. The producer only has to set the SessionId property of the message, and the consumer must use a MessageSession, an instance that can be obtained by calling the AcceptMessageSession([sessionId]) method of the QueueClient:

    MemoryStream finalStream = new MemoryStream();
    MessageSession messageSession =
        queueClient.AcceptMessageSession(mySessionId);
    while (true)
    {
        BrokeredMessage message =
            messageSession.Receive(TimeSpan.FromSeconds(20));
        if (message != null)
        {
            // ... append the message body to finalStream
            message.Complete();
            continue;
        }
        break;
    }

A message queue from Service Bus Queues doesn't contain only one queue; we can say that a queue contains several queues. All messages that could not be processed (e.g. messages that have expired, or messages that arrived when the queue had reached its maximum capacity) end up in a "dead letter" queue. This queue can be accessed, and it is very useful when we are debugging or when we need to log the messages that could not be processed. It can also contain messages that were not added by the system: such messages may be added from code through the DeadLetter method of BrokeredMessage. A reference to the dead letter queue can be obtained as follows:

    QueueClient deadLetterQueue =
        QueueClient.CreateFromConnectionString(
            myFooConnectionString,
            QueueClient.FormatDeadLetterPath("FooQueue"));

There is a scenario where this functionality is very useful. Imagine that a consumer tries to consume a message that cannot be processed. The consumer will try forever to process it. This type of message is called a "poison message", and a poorly designed system can be blocked by it. In the following example we try to process a message up to three times; if we keep failing, it is marked as a poison message and dead-lettered:

    while (true)
    {
        BrokeredMessage message = queueClient.Receive();
        if (message == null)
        {
            Thread.Sleep(1000);
            continue;
        }
        try
        {
            // ... process the message
            message.Complete();
        }
        catch (Exception ex)
        {
            if (message.DeliveryCount > 3)
            {
                message.DeadLetter();
            }
            else
            {
                message.Abandon();
            }
        }
    }

A special feature is the integration of Service Bus Queues with WCF. A WCF service can be configured to communicate through message queues. In this way our service becomes flexible and scalable, and most load balancing problems solve themselves. If the service is down, the consumer can still send messages without errors and without losing them, which in some cases is very helpful. We can also expose WCF services from private networks in this way without security issues.

From some points of view, Service Bus Queues are quite similar to Windows Azure Queues; we can imagine Service Bus Queues as Windows Azure Queues on steroids. When we need features such as guaranteed message order, transactional operations, a TTL greater than 7 days or integration with WCF, Service Bus Queues are our solution. But there are cases when Windows Azure Queues are the much better choice. That is why you should compare the two types of message queues from Windows Azure and see which is best for your application.

Finally, Service Bus Queues are extremely powerful and have four strengths that should not be overlooked:
• Loose coupling (the consumer and producer do not have to know each other)
• Load leveling (the consumer doesn't have to run 24/7)
• Load balancing (we can register consumers and producers very easily)
• Temporal decoupling (the producer or consumer can be offline)
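The WCF integration mentioned above can be sketched in code. This is a minimal sketch, not a complete implementation: FooService and IFooContract are hypothetical names, while NetMessagingBinding, TransportClientEndpointBehavior and TokenProvider are the types the Service Bus SDK provides for this scenario:

```
NetMessagingBinding binding = new NetMessagingBinding();
ServiceHost host = new ServiceHost(typeof(FooService));
ServiceEndpoint endpoint = host.AddServiceEndpoint(
    typeof(IFooContract),
    binding,
    "sb://[namespace].servicebus.windows.net/FooQueue");
// the endpoint authenticates to the Service Bus with the
// shared secret of the namespace
endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
        "owner", "[SharedSecretValue]")
});
host.Open();
// from here on, messages sent to FooQueue are dispatched
// to FooService as service calls
```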
Malware on Android: statistics, behavior, identification and neutralization
Android, the operating system for smartphones, the most popular mobile OS in the U.S. and perhaps on the planet, enjoys increasing global attention due to the diversity of the gadgets on which it is installed.
Andrei Avădănei andrei@worldit.info Founder and CEO DefCamp CEO worldit.info
Unfortunately, Android doesn’t escape Service. The application infected 500,000 the attention of hackers and cyber crimi- devices and sent cash in the accounts of nals just waiting for new technologies to some online game profiles. become popular in order to get informational, technological and financial illegal What options does a public spy apearnings. However, what is the present plication have? situation of Android? For about $ 16 per month or less, there are spy applications of mobiles with According to a study performed by Android (note, not only)extremely comTrendLabs, during the first 3 months of plex and well-developed that publicly 2012, 5000 malware applications were address the parents who want to be clofound and then in April almost 15 000. It ser to their children, the managers of the is expected to reach nearly 130 000 mobile companies which suspected Industrial and malwares by the end of the year. It is government espionage. worrying that only 20% of mobile applications have a dedicated application security The application control is carried out and law evidence, 17 malicious applications through SMS and the Internet, when posfrom Google Play have been downloaded sible. You can record calls and view call 700,000 times until they were eliminated. history, you can read incoming and sent SMS, you can define the SMS keywords, Android malware behavioral classi- you can record the sound to over 10 meters fication shows that the top applications around, you can read emails from Gmail or are Premium Service Abuser with 48% any service explicitly defined in the default followed by Adware with 22% and at a email application from Android, you can short distance by data theft, with 21%. access all contacts stored on your phone, Intelligence tools currently occupy only 4% all calendar activities, you can view photos, which demonstrates that ,by now, they are videos, music; there is access to the browser used for targeted purpose. 
An example of ‘s history, location monitoring with or malicious application was SMSZombie, not without GPS and the list goes on. too popular in Europe and the U.S. because they addressed a service predominantly I repeat, this can only make a public used by Chinese - SMS Mobile Payment espionage application. Wifi network monitoring, obtaining control over networks, the installation of 0-day exploits on the computer network, bluethoot monitoring would complete the list.
What options could a botnet for Android have?
Although situations of this type have been quite rare until now, we can always expect a real botnet popularized through Google Play plus something "magic". Imagine the following scenario: a new app that apparently solves a general public need is uploaded to Google Play. After being downloaded, it automatically gives likes, +1s and a good rating to the application in the market, eventually leaving a random comment from a predefined list, even trying to fake a dialogue between two "clients". The malicious software exploits a vulnerability that allows it to gain root rights on the mobile phone. Among other things, it pings a hijacked website on which a communication service has been specially installed for the malware, and which communicates a list of decentralized hijacked websites. The malware then performs a complete fingerprinting of the phone (this operation can also be done before getting root access and installing the backdoor), deciding whether an additional tool from one of the servers is necessary. If not, information is gathered: email addresses, phone numbers, telephone conversations, history, cookies, accounts. Phones could be kept infected until the target is reached and the operation for which the botnet was created is performed, and then it dissolves. The rest is variable. Still, the real problem is spreading to other phones: through wifi networks, through personalized messages sent to friends sorted by country and city, through 0-day sites and through permanently infected applications.
An introduction to Android security
Android is a modern mobile platform created with further development in mind. Android applications use hardware and software technology to provide users with an ideal experience. To protect the value created by the operating system, the platform provides an environment that ensures the safety of users, data, applications and the device's network. Securing an open-source platform requires a robust security architecture and a rigorous security process. Android was designed so that security is divided into several layers, thus providing greater flexibility while protecting the platform's users.
The Security Architecture of the Android Platform

Android aims to be the safest operating system for mobile platforms, trying to ensure:
• user data protection,
• system resources protection (including the network),
• application isolation.

To reach this point, some key points in the platform architecture have to be achieved:
• robust OS-level security through the Linux kernel,
• an application sandbox,
• secure communication between processes,
• application signing,
• defining permissions for applications and users.

More details about the Android architecture are available on source.android.com.

The Android security program

The Android security process begins in the early stages of development with the creation of a complete and configurable security model. Each major facility of the platform is reviewed by engineers. During development, the components are reviewed by the Android Security Team, the Google Information Security Engineering Team and independent security consultants. The purpose of this analysis is to identify weaknesses and potential vulnerabilities long before the information is published as open source. At the time of publication, the Android Open Source Project allows maintenance and code review by an indefinite number of enthusiasts and volunteers. After the source code is published, the Android team reacts quickly to any report of a potential vulnerability, to ensure that the risk to users is minimized as soon as possible. These reactions include rapid updates, the removal of applications from Google Play or the removal of applications from devices.

What Android anti-virus applications are there?

According to the results published by AV-Comparatives in September 2012, the most effective mobile security applications are: avast! Mobile Security, BitDefender Mobile Security, ESET Mobile Security, F-Secure Mobile Protection, IKARUS mobile.security, Kaspersky Mobile Security, Lookout Premium, McAfee Mobile Security, Qihoo 360 Mobilesafe, SOPHOS Mobile Security and WEBROOT. Their efficiency is over 93%, in descending order.

Conclusions

Although the statistics seem worrying, the reality is that most malware applications so far have not behaved very dangerously. At the same time, modern anti-virus technologies are moving to mobile with surprising speed, most market players coming with a security solution already prepared for smartphones. Moreover, the Android development team claims that the latest version of the operating system brings additional security, a claim confirmed by some independent security specialists. From experience, I tend to think that the problem of viruses on mobile phones will soon be as sensitive as the one on computers, and we have to be more attentive than ever to what we are installing. Stay safe, stay open!
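The permission layer listed in the security architecture above is visible to every developer in the application manifest. A minimal sketch (the package name is illustrative; the permission names are real Android permissions):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app">
    <!-- shown to the user at install time; an application that
         does not declare a permission cannot call the APIs
         protected by it -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <application android:label="Example" />
</manifest>
```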
misc
Gogu
The Chief opened the door and, looking at him, Gogu soon realized that something was wrong. The truth was that if he had looked around, he would have noticed that the entire office was shocked: everyone found it hard to believe the way Mişu and Gogu were fighting and yelling at each other. Well, that's because he's so stubborn! Gogu said to himself. But wait a minute, we'll settle it right now…
- How good that you came, Chief! Can you help us solve an issue?
- Just one issue?! Judging by the scandal you made, I can hardly believe it's only about one issue! Let me hear it, what is the apple of discord?
- What apple, Chief, do you think we are in the mood for joking? Mişu interfered, with red cheeks, glassy eyes and scant breath, as if he had run a marathon. Enlighten Gogu yourself, because I'm out of arguments!
- You're out because there aren't any! That's why you get so nervous and raise your tone! Gogu could not refrain himself. The Chief realized that the discussion had already lasted for a while and it was obvious – for those who knew him, of course – that Gogu was striving to look calm, but he was in fact as "involved" as Mişu was (only not as "red" as him). Immediate measures were necessary, so the Chief stepped in and, in a low, steady voice, let the words come out loud and clear:
- I think it's time for both of you to calm down and explain – calmly, without raising your voices or talking at once – what this is all about.
Obviously, both of them started talking and explaining, so the Chief felt the need to repeat:
- Calmly, without raising your voices or talking at once, I said. Gogu, you tell me.
Gogu tried to refrain from showing a victory smile (it was clear that I would explain, I'm older in the company and certainly more skilled) and, after taking a slightly theatrical breath, began to explain:
- Chief, here's what this is all about: the pyramids of Egypt were built some five thousand years ago, right?
- That's right, Gogu!
- The Great Wall of China is more recent, around 2,200 years old, right?
- That's right, Gogu!
- The Mayan pyramids are even more recent, but they are still 1,500 years old.
- What's this, Gogu, a history lesson? The Chief lost his patience. Mişu wanted to interfere, but the Chief's look stopped him. Gogu continued relentlessly:
- The construction of the Taj Mahal lasted 22 years and ended in the 17th century. The Eiffel Tower was built even more recently, and it is also an architectural masterpiece.
- Are we getting anywhere?
- Could all of these extraordinary constructions be referred to as "projects"? Gogu continued.
- Yes, Gogu, yes…
- Then – he ended gloriously – isn't it right that project management has existed for almost 5,000 years?!
Mişu – who had turned from red to cherry – was about to have a heart attack, so he exploded:
- Project management – you may want to read a book or two, you megalithic mule! – appeared in the 1950s. You'll find the information on the internet with minimum effort… And therefore, if there was no project management, there were no projects either, you relic!
The Chief started laughing loudly and heartily, without any sign of wanting to stop. One by one, the entire office started laughing. Our two heroes looked at the Chief, stunned, but it was impossible not to get "infected" by his laughter, so they found themselves smiling at first and then laughing. Howling with laughter, Gogu said to himself: only the Chief can turn a big scandal into a comedy.
Once they calmed down, the Chief sat comfortably on a desk, legs dangling, preparing for the explanation. Clearly, he was enjoying it:
- Nice subject you raised, but still there was no need to get so agitated. Let me tell you how things are with projects and their management. By definition, projects are complex processes, relatively unique for the company that executes them. Do you remember the last product, the one we released two months ago? Making and releasing it was, for us, a project. Now that product is made in series, and the activity has therefore turned into a process.
Meaning that, once we have learned how it's made, we can produce everything through processes that we know, managed by applying the concepts of operational management. For projects, we apply project management. But now I have a question for you: what happens if I have no idea what a project is and what project management means? Could I still make and release a new product?
- Five years ago we knew nothing about project management – Zoli interfered – and we still launched a new product every year.
The others nodded silently. It looked like the Chief's "story" had caught everyone. He continued:
- That's right! Of course we did. Only that things didn't always turn out as we wanted, costs were not monitored, and deadlines were exceeded almost every time. We worked by our own methods, just as the Egyptians and the Chinese had done in their constructions. They weren't too passionate about savings and deadlines. Your examples, Gogu, are successful ones in the history of mankind, but there are also many other – hundreds, thousands of – initiatives that were started but never finished, precisely because of their complexity.
- Sagrada Família! Mişu whispered, and the Chief nodded approvingly…
- The concept of "project" appeared when people tried to apply new methods to ensure the success of complex, risky processes requiring huge investments. Actually, project management is about granting increased managerial attention, materialized in assigning a project manager and a team responsible for the execution of the complex and unique process – the one we now call a project. You fighters are both right and wrong at the same time: Gogu's examples are indeed projects as we understand them nowadays, but those who participated in their execution had no idea about project management and certainly were not applying it… And now the break is over, let's get back to work!

Simona Bonghez, Ph.D.
simona.bonghez@confucius.ro Speaker, trainer and consultant in project management, Owner of Confucius Consulting