Contents
Editor: Rahul Chopra
Editorial, Subscriptions & Advertising
DELHI (HQ): D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020; Phone: (011) 26810602, 26810603; Fax: 26817563; E-mail: info@efyindia.com
BENGALURU: Ms Jayashree; Ph: (080) 25260023; Fax: 25260394; E-mail: efyblr@efyindia.com
Customer Care E-mail: support@efyindia.com
Developers
48 A Career in the Cloud
51 “Intel involves a majority of developers with its initiatives”—Narendra Bhandari, director, Intel Software and Services Group, Intel South Asia
55 Get started with Android NDK
61 The Semester Project—Part VII: The File System in Action
71 Python, Behave and Mockito-Python
88 System Programming Using POSIX Threads
100 Creating OpenMRS Modules

Admin
92 Getting Started with Tcpdump
95 Cyber Attacks Explained: Device Evasions

69 Back Issues
79 CodeSport
81 The What, Why and How of Testing
83 The Joy of Programming: Static Analysis vs Dynamic Analysis Tools
85 Getting Started with Raspberry Pi
CloudBees: Deploying Applications
Exploring Software: Are We Safe Using Apps on Tablets?
Android with Arduino
Advertising
CHENNAI: Venkat CD; Mobile: 9742864199; E-mail: efychn@efyindia.com
HYDERABAD: D S Sunil; Mobile: 8977569691; E-mail: efyenq@efyindia.com
KOLKATA: Gaurav Agarwal; Ph: (033) 22294788; Telefax: (033) 22650094; Mobile: 9891741114; E-mail: efycal@efyindia.com
MUMBAI: Ms Flory D’Souza; Ph: (022) 24950047, 24928520; Fax: 24954278; E-mail: efymum@efyindia.com
PUNE: Sandeep Shandilya; Ph: (022) 24950047, 24928520; E-mail: efypune@efyindia.com
GUJARAT: Sandeep Roy; E-mail: efyahd@efyindia.com; Ph: (022) 24950047, 24928520
SINGAPORE: Ms Peggy Thay; Ph: +65-6836 2272; Fax: +65-6297 7302; E-mail: pthay@publicitas.com, singapore@publicitas.com
44 Open Gurus
Kits ‘n’ Spares New Delhi 110020 Phone: (011) 26371661-2 E-mail: info@kitsnspares.com Website: www.kitsnspares.com
104 An A-Z Listing of Android Solutions Providers
UNITED STATES: Ms Veronique Lamarque, E & Tech Media; Phone: +1 860 536 6677; E-mail: veroniquelamarque@gmail.com
CHINA: Ms Terry Qin, Power Pioneer Group Inc., Shenzhen-518031; Ph: (86 755) 83729797; Fax: (86 21) 6455 2379; Mobile: (86) 13923802595, 18603055818; E-mail: terryqin@powerpioneergroup.com, ppgterry@gmail.com
TAIWAN: Leon Chen, J.K. Media, Taipei City; Ph: 886-2-87726780 ext. 10; Fax: 886-2-87726787
Exclusive News-stand Distributor (India)
REGULAR FEATURES
08 You Said It...
11 Offers of the Month
12 Q&A Powered By LFY Facebook
14 New Products
18 Open Gadgets
22 FOSS Bytes
50 CodeChef
103 Events & Editorial Calendar
106 FOSS Jobs
108 Tips & Tricks
IBH BOOKS AND MAGAZINES DISTRIBUTORS PVT LTD: Arch No. 30, below Mahalaxmi Bridge, Mahalaxmi, Mumbai - 400034; Tel: 022-40497401, 40497402, 40497474, 40497479; Fax: 40497434; E-mail: info@ibhworld.com
Printed, published and owned by Ramesh Chopra. Printed at Tara Art Printers Pvt Ltd, A-47, Sec-5, Noida, on 28th of the previous month, and published from D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020. Copyright © 2011. All articles in this issue, except for interviews, verbatim quotes, or unless otherwise explicitly mentioned, will be released under the Creative Commons Attribution-Share Alike 3.0 Unported Licence a month after the date of publication. Refer to http://creativecommons.org/licenses/by-sa/3.0 for a copy of the licence. Although every effort is made to ensure accuracy, no responsibility whatsoever is taken for any loss due to publishing errors. Articles that cannot be used are returned to the authors if accompanied by a self-addressed and sufficiently stamped envelope. But no responsibility is taken for any loss or delay in returning the material. Disputes, if any, will be settled in a New Delhi court only.
SUBSCRIPTION RATES
Period        News-stand price (₹)   You pay (₹)   Overseas
Five years    6000                   3600          —
Three years   3600                   2520          —
One year      1200                   960           US$ 120
Kindly add ₹50 for cheques from outside Delhi. Please send payments only in favour of EFY Enterprises Pvt Ltd. Non-receipt of copies may be reported to support@efyindia.com; do mention your subscription number.
Trained participants from over 42 countries in 6 continents.
Linux OS Administration & Security Courses for Migration:
LLC102: Linux Desktop Essentials; LLC033: Linux Essentials for Programmers & Administrators; LLC103: Linux System & Network Administration; LLC203: Linux Advanced Administration; LLC303: Linux System & Network Monitoring Tools; LLC403: Qmail Server Administration; LLC404: Postfix Server Administration; LLC405: Linux Firewall Solutions; LLC406: OpenLDAP Server Administration; LLC408: Samba Server Administration; LLC409: DNS Administration; LLC410: Nagios - System & Network Monitoring Software; LLC412: Apache & Secure Web Server Administration; LLC414: Web Proxy Solutions
Courses for Developers:
LLC104: Linux Internals & Programming Essentials; LLC106: Device Driver Programming on Linux; LLC107: Network Programming on Linux; LLC108: Bash Shell Scripting Essentials; LLC109: CVS on Linux; LLC204: MySQL on Linux; LLC205: Programming with PHP; LLC206: Programming with Perl; LLC207: Programming with Python; LLC208: PostgreSQL on Linux; LLC504: Linux on Embedded Systems; LLC701: Android Internals; LLC702: Android Application Development
Advanced Administration Training on DNS, Samba, Nagios & Postfix. Apache Web Server Admn.: 10 Nov 2012. JBoss JB248: 19 Nov 2012.
RHCVA / RHCSS / RHCA Training & Exams: RH318: 10 & 24 Nov 2012; EX318: call; RHS333: 10 Nov; RH423: 17 Nov; RHS429: call; RH436: 27 Nov; EX436: call; RH442: 10 Dec; EX442: 14 Dec; RH401: 17 Dec; EX401: 21 Nov.
RH299 from 10, 19 & 24 November. RHCSA & RHCE exams: 9, 16, 23 & 30 Nov. LLC is an authorised Novell Practicum testing centre. NCLP training on Courses 3101, 3102 & 3103.
CompTIA Storage+ Training & Certification 19 November’12 Microsoft Training Co-venture: CertAspire
RHCE Certification Training RH124: Red Hat System Administration - I RH134: Red Hat System Administration - II RH254: Red Hat System Administration - III RH299: RHCE Rapid Track Course RHCVA / RHCSS / RHCDS / RHCA Certification Training RHS333: Red Hat Enterprise Security: Network Services RH423: Red Hat Enterprise Directory Services & Authentication RH401: Red Hat Enterprise Deployment & Systems Management RH436: Red Hat Enterprise Clustering & Storage Management RH442: Red Hat Enterprise System Monitoring & Performance Tuning RHS429: Red Hat Enterprise SELinux Policy Administration RH318: Red Hat Enterprise Virtualization
Microsoft Certified Learning Partner
For more info, log on to: www.certaspire.com
www.linuxlearningcentre.com Call: 9845057731 / 9449857731 Email: info@linuxlearningcentre.com
RHCSA, RHCE, RHCVA, RHCSS, RHCDS & RHCA Authorised Training & Exam Centre
NCLA / NCLP Certification Training Course 3101: SUSE Linux Enterprise 11 Fundamentals Course 3102: SUSE Linux Enterprise 11 Administration Course 3103: SUSE Linux Enterprise Server 11 Advanced Administration
Registered Office: # 635, 6th Main Road, Hanumanthnagar, Bangalore 560019
# 2, 1st E Cross, 20th Main Road, BTM 1st Stage, Bangalore 560029. Tel: +91.80.22428538, 26780762, 65680048 Mobile: 9845057731, 9449857731, 9343780054
Gold Training Partner; Practicum Testing Partner
YOU SAID IT
On changing the name of your magazine!
I was quite disappointed when I came to know that the magazine’s name had been changed from LINUX For You to Open Source For You. I understand that nothing else has changed except the name, but it still seems that something is missing. I agree that the magazine delivers a lot more than Linux, but I somehow feel that the original flavour of the magazine is missing now that you have removed the word ‘Linux’. Don't forget that LINUX For You has given lovers of open source a lot to cherish. I feel you should have taken your readers' feedback in the form of a poll on Facebook. I can bet that the percentage rooting for OSFY would be negligible compared to those rooting for LFY. All the best!
—Adarsh Singh, addi_adarsh@yahoo.co.in
ED: Thank you for your candid feedback. We appreciate your sentiments too. This decision has not been easy for us either, but based on reader surveys (of current readers and those who do not read LFY), we realised that we could make an impact on a much larger population of readers by switching the name. We did quite a few surveys on our Facebook page and repeatedly found more ‘likes’ than ‘dislikes’. Having said that, we do not want to ignore the sentiments of loyal readers who ‘disliked’ the change in name, but would like to retain their continued support for OSFY, which will have an increased number of pages (and soon, perhaps, an additional CD too) to reach out to the growing population that uses Linux’s cousin, Android. We look forward to your continuing support.
Accessing archives of the CodeSport and JoP columns I am a regular reader of OSFY and I find columns like CodeSport and The Joy of Programming interesting. I would like to have access to the archives of these columns. Please guide me on how I could go about it. —S Kannan Kannan.s3@gmail.com ED: It’s great to hear that you like these columns. To read our previous issues, you can access our ezine, which I’m sure you will enjoy. For OSFY subscribers, the ezine service is free. You can check the details on http://ezines.efyindia.com/. Please do write to us at osfy@efyindia.com if you have any further queries.
Make Linux popular I am not a regular reader of your magazine, though I tend to pick it up from the stands whenever I find any article of interest. After reading an issue of OSFY, I came to know that the Zorin distro is
the OS for those who want to make a transition from Windows to Linux. I would like to request the OSFY team to provide the said OS image on the DVD for all those who want to try it out. The only reason Linux is not popular among the general populace is that it is not user-friendly. Even installing an application is a tedious task in Linux, whereas it requires just double-clicking to install applications in Windows. To popularise Linux, I would suggest that keyboard shortcuts in Linux should be similar to those in Windows. Of course, there are some shortcuts that are common to both Windows and Linux, like Ctrl+C, Ctrl+X, Ctrl+V, etc. But there should be more similarities to make the learning easier. Could you also feature a list of software that works on Windows and its equivalents in Linux, on your website? I am sure that would go a long way in popularising Linux. Keep up the good work!
—Pritam Haobam, pritamhaobam@yahoo.com
ED: We really appreciate you sharing your views on how to popularise Linux. Our readers will definitely find your suggestions helpful. And we will certainly try to incorporate them in our future editions; just give us a little time to work on them. Till then, please feel free to send in more ideas, suggestions and comments to osfyedit@efyindia.com
Include articles on DNS configuration
I have been reading LFY (now OSFY) for the last couple of months. Being a Linux administrator, I must say that your magazine has played a key role in shaping my career, and it still continues to do so. I am sure that OSFY will continue to enthral its readers in the future too. I would sincerely request you to include articles on mail servers and DNS configuration in Red Hat Enterprise Linux 5 in one of the upcoming editions of OSFY.
—N B Riaz Ahmed, nbriaz@rediffmail.com
ED: We are delighted to get such wonderful feedback from you and to discover that our magazine has been a great help in your career. Your suggestions have been taken into consideration, and we will certainly try to include articles on mail servers and DNS configuration in our forthcoming editions. With your experience as a Linux admin, you too could consider writing for us. To do so, you need to send us a detailed table of contents (ToC) on the topic you wish to cover, at osfyedit@efyindia.com. Our editorial team will review the ToC, and once they give you a thumbs up, you can go ahead with the article.
Powered By
www.facebook.com/linuxforyou
Sarfaraz Alam:
Can anybody tell me how I can download TeamViewer? If anybody has a link for RHEL 6, please send it to me.
Thiyagarajan Varadharaj: You can use ‘ShowMyPC’ as an alternative. Try it out.
Mankala ShravanKumar:
I bought a Lenovo G580 laptop and installed Ubuntu 12.04, but the wired connection is not working. Can someone help me?
Daniel Ribeiro:
I need help. I have a multi-boot Samsung netbook with Mint Xfce, Xubuntu and Windows. I can't change the brightness of the monitor in the Linux distros. I'm only able to change the brightness when I boot into Windows. Please help me.
Raghava Karthik Reddy: Change the BIOS settings to default, then use these commands in your terminal: run 'xbacklight -inc' a few times to step the brightness up, or 'xbacklight -set 50' to set half-brightness. I hope this works for you!
Daniel Ribeiro: Thanks!
Muziwakhe Mzk Nhlapo: Is your wireless working?
Dux Brandon: MZK guru
Mankala ShravanKumar: Wireless is working, but the wired connection is not.
Magimai Prakash:
Hi. Usually, I can use the internet from my mobile using PC suites in Windows. In Linux, is there any solution for using the internet via mobile?
Praveen JR Mohadeb: Just connect the phone like you do in Windows, go to the network settings and set up a new mobile broadband connection. You don't need to install any software, as it works out of the box in Linux.
Vamsi Indana:
How can I upgrade to Ubuntu 12.04 64-bit from 32-bit?
Raghava Karthik Reddy: Get a 64 bit computer!
Aboobacker Mk: Sorry, it is impossible.
Vamsi Indana: It is a 64-bit system; I installed Ubuntu 12.04 32-bit on it, and now I need to upgrade to 64-bit.
Vineesh Valsalan: In any case, it is not possible. Back up your files and reinstall with the 64-bit OS.
Gerald Hickman:
I need some help. I have a Samsung NP305E5A running multiple boot: Windows (which will not boot), Chakra and Ubuntu. Everything was working fine until I added some home-school programs, and my Wi-Fi quit working. I've run some commands given to me by others, and apparently the Wi-Fi address is no longer assigned, and I have no idea how to assign it. Any help is greatly appreciated.
Workhorse Black: Putting in the code for your wireless router should help. It gives the computer the information that runs your Wi-Fi.
Gerald Hickman: Thanks, but I'm talking about the assignment inside the computer itself.
Tushar B Kute:
I want to install software in Ubuntu from the Software Centre. I get search results for a particular software package, but none of the results shows an "Install" button in front of it. Instead, it shows a "Get the source" button. What kind of problem is this?
Raghava Karthik Reddy: It's not a problem! Some software is only available in source format! Users can modify it and use it for their particular requirements! For example, if a program has two capabilities, users can omit the unwanted part and include the useful part! It provides more flexibility to the user!
Praveen Singh Yadav:
How do I cancel a print job in Linux?
Aboobacker Mk: Refer to this link.
http://www.linuxforums.org/forum/mandrivalinux/24221-killing-print-jobs.html
Kousik Mandal:
I have a small computer network (10 computers connected). I want to set up a server on one computer; its configuration is AMD64 with 1 GB DDR3 RAM. How can I install and configure the server on my network? I have the Ubuntu Server DVD, but it does not display the graphical mode at the time of installation. I installed it, but the graphical mode does not come up. Please help me.
Aboobacker Mk: Ubuntu Server does not have a GUI by default. To install a GUI, type 'sudo apt-get install ubuntu-desktop' in a shell.
Ross James Rowlinson:
I've got Ubuntu on my netbook. I hardly use the netbook, but I want to get rid of Windows; it only gives me the option of dual boot. How do I get rid of Windows totally?
Pranav Garg: Install GParted ('sudo apt-get install gparted'), then delete the Windows partition and extend the Ubuntu one.
Ankur Agrawal:
I have configured the IP settings on Ubuntu 10.04 LTS, but I am unable to access the internet. Can somebody help me?
Raghava Karthik Reddy: Did you configure the DHCP settings?
Ankur Agrawal: I am unaware of them. Please guide me, and thanks for the reply. Earlier, I was using the same internet connection on Windows and it was working fine. I never entered DHCP settings there.
Raghava Karthik Reddy: Can you access the internet through Windows (if you are using it)? Does your Windows system connect directly to the internet, or do you have to configure it manually?
Princess Soumya Singh Sisodia:
How should I install Ubuntu on Windows 7? Please tell me the whole process. I have Ubuntu 11.04.
Shivam Gupta: Soumya, you might have the DVD of Ubuntu. Restart your system, insert the Ubuntu DVD and it will start. Else, you need to select the option from the 'boot device priority' menu in the BIOS. It will show two options: 1. 'Try Ubuntu without installing', which lets you use or simply taste Ubuntu without installing it on your hard disk; 2. 'Install Ubuntu', which is the one we would probably choose. Next, you need to select your keyboard and language; by default, these are detected automatically. Next comes the most important part: select the partition on which you need to install Ubuntu. Caution: the partition you select will get formatted. If you want to run alongside Windows 7 and you are not sure about the partition, select the first option, stating 'Install Ubuntu side by side with Windows'. Next, it will simply ask for your password, date, network, time zone, etc. Select the appropriate details, and Ubuntu will start installing. At last, it will reboot and ask for the DVD's removal. Remove the DVD and restart. Enter your username and password, and you might love to have full access control of Ubuntu. Right-click on the desktop, open the terminal and type 'sudo -i'; you will get the root account. Type the 'passwd' command and enter a new password. Restart the machine again. You will be asked to choose the operating system. Select Ubuntu 11.04 for desktop Ubuntu; select Ubuntu recovery when you need to do troubleshooting.
Shivam Rohilla: You can also extract the ISO and double-click on Wubi. It will automatically install Ubuntu on Windows 7. You can also remove that Ubuntu, if you want, via the add/remove programs option.
K Kt Kittu: Yes, Shivam Gupta, you are right. I would like to tell you that we can also go for a slice partition!
Saravana Kumar: You can install any virtual machine on Windows (VMware, Oracle) and then install Ubuntu inside the virtual machine.
Shivam Gupta: Thanks Kittu, we could use that point.
NEW PRODUCTS
With Android Smart Stick, convert your TV into a smart TV
Hearing a lot about smart TVs these days and getting tempted? While the price may keep you away from making the investment, what if you were told that you could convert your regular TV into a smart TV? Micromax has launched the Micromax Smart Stick, which is basically a dongle that runs the Android Ice Cream Sandwich OS. Mukesh Gupta, director, Micromax Informatics Ltd, said, “The Smart Stick brings the Android experience to your television screen. With this stick, you can enjoy the Internet on any TV set that has an HDMI port.” The stick has a standard HDMI interface, along with a USB host. Micromax’s Smart Stick has 4 GB of flash memory, as well as Wi-Fi Direct connectivity. You can also connect a wireless mouse and keyboard to it, which can operate on the 2.4 GHz wireless frequency. The Android Smart Stick has been priced at ₹4,990. The company is also offering a Blu-ray player supporting both 2D and 3D audio/video formats. Price: ₹6,490. Address: Micromax House, 697, Udyog Vihar, Phase-V, Gurgaon. Email: care@micromaxinfo.com; Ph: 0124-4811000; Website: www.micromaxinfo.com/index.php
Samsung brings out an Android-powered Galaxy camera The Samsung Galaxy camera is the latest and most unexpected device from Samsung’s kitty. The camera runs Android’s latest 4.1 a.k.a. Jelly Bean OS. It has a 16 MP sensor with a 21x zoom lens, and features a 12.1-cm (4.8-inch) HD LCD screen. The camera is wireless-enabled, which makes loading applications for photo-editing and sharing easier. The Galaxy camera also includes a set of 35 photo editing features through the ‘Photo Wizard’, which allows professional quality edits, on the go. The camera’s ‘Auto Cloud Backup’ feature saves photos onto the cloud via Samsung’s AllShare automatically, the moment they are captured. The Galaxy camera comes with two connectivity options: a 3G version with Wi-Fi and a 4G version with Wi-Fi. The company has not shared any information on the pricing of the Galaxy camera as of now. The camera has been unveiled by the company in a recent press meet and will be available in the market in November. Price: Yet to be revealed Address: Samsung India, 2nd, 3rd and 4th Floors, Tower C, Vipul Tech Square, Sector 43, Golf Course Road, Gurgaon 122002 Email: supportindia@samsung.com Ph: 0124-4881234 Website: www.samsung.com
Micromax adds ‘Funbook Talk’ to its tablet range Micromax has added another device to its Funbook collection of tablets with its latest budget Android offering, the Funbook Talk. This device comes preloaded with Android 4.0 a.k.a. Ice Cream Sandwich and sports a 17.7-cm (7-inch) capacitive TFT LCD display with an 800 x 480 pixel resolution. The tablet is powered by a 1 GHz processor with 512 MB of RAM. As the name suggests, the tablet doubles up as a phone, thanks to built-in voice calling and 2G connectivity support. This is the first tablet from the company to do so, with earlier Funbook tablets supporting 3G via USB dongles. It also has a mini HDMI port. The tablet features a VGA front camera and comes with a 2800 mAh battery, which the company claims has up to 5 hours of talk time. The Funbook Talk has 4 GB of internal storage with expansion options up to 32 GB via a microSD card. Connectivity options include Wi-Fi and 3G dongle support via USB 2.0. Funbook Talk features a host of apps for media, education and social networking. The tablet supports a variety of audio and video formats, and also has a TV mode to stream live television. Educational apps on the device offer study material for primary classes as well as papers for medical, engineering and MBA exams. There are video tutorials too. Rajesh Agarwal, managing director, Micromax Informatics Ltd, said, “This launch has been timed to attract the buyers during the festive season. We are expecting a major increase in the sales of tablets during that period.” Price: ₹7,249. Address: Micromax House, 697, Udyog Vihar, Phase-V, Gurgaon. Email: care@micromaxinfo.com; Ph: 0124-4811000; Website: www.micromaxinfo.com/index.php
SMARTPHONES

Karbonn A11: OS: Android 4.0; Launch: October 2012; MRP: ₹9,990; ESP: ₹8,499; Specification: 10.2-cm (4-inch) capacitive touch screen, 480 x 800 pixels screen resolution, 1500 mAh battery, 5 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.ebay.com

HTC Desire X: OS: Android 4.0; Launch: October 2012; MRP: ₹19,799; ESP: ₹19,799; Specification: 10.16-cm (4-inch) Super LCD WVGA display, 1 GHz Qualcomm MSM8225 Snapdragon processor, 1650 mAh battery, 5 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.saholic.com

iBall Andi 4.3j: OS: Android 4.0; Specification: 4.3-inch capacitive touch display, 1 GHz processor, 1630 mAh and 900 mAh dual batteries, 5 MP camera, 2 GB internal memory, 3G, Wi-Fi

Spice Stellar Horizon Mi 500: OS: Android 4.0; Launch: October 2012; Specification: 12.7-cm (5-inch) multi-touch capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz dual-core processor, 2400 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi

Karbonn A21: OS: Android 4.0; Launch: October 2012; MRP: ₹10,490; ESP: ₹10,490; Specification: 11.4-cm (4.5-inch) capacitive touchscreen, 1.2 GHz processor, 1800 mAh battery, 5 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.flipkart.com

Samsung Galaxy Note 2: Launch: September 2012; MRP: ₹39,990; ESP: ₹38,990; Specification: 5.5-inch HD Super AMOLED screen, 1280 x 720 pixels screen resolution, 1.6 GHz processor, 3100 mAh battery, 8 MP camera, 3G, Wi-Fi

Sony Xperia Miro: OS: Android 4.0; Launch: September 2012; Specification: 8.89-cm (3.5-inch) capacitive screen, 1 GHz processor, 1500 mAh battery, 3 MP rear camera, 3G, Wi-Fi; Retailer/Website: www.saholic.com

Sony Xperia Tipo: OS: Android 4.0; Launch: September 2012; ESP: ₹9,399; Specification: 8.1-cm (3.2-inch) TFT capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 3.2 MP camera, 2.5 GB internal storage, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.flipkart.com

Sony Xperia Tipo Dual: OS: Android 4.0; Launch: September 2012; MRP: ₹10,499; ESP: ₹10,299; Specification: 8.1-cm (3.2-inch) TFT capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 3.2 MP camera, 2.5 GB internal storage, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.naaptol.com

Micromax A87 Ninja 4: Specification: 4-inch capacitive LCD touchscreen, 1 GHz processor, 1400 mAh battery, 2 MP camera, memory expandable up to 32 GB

Micromax A25: OS: Android 2.3; Specification: 2.8-inch capacitive touchscreen, 320 x 240 pixels screen resolution, 1 GHz processor, 1280 mAh battery, 1.3 MP rear camera, 256 MB RAM, 512 MB ROM, 120 MB internal storage

Karbonn A1+: OS: Android 2.3; Specification: 7.6-cm (3-inch) TFT LCD display, 320 x 240 pixels screen resolution, 850 MHz processor, 2 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.saholic.com

Karbonn A7+: Launch: September 2012

Karbonn A9+: Launch: September 2012; MRP: ₹5,990; ESP: ₹5,990

Samsung Galaxy Chat GT-B5330: OS: Android 4.0; Launch: October 2012; MRP: ₹9,499; ESP: ₹9,290; Retailer/Website: www.snapdeal.com

Idea Aurus: OS: Android 2.3; Launch: October 2012; MRP: ₹6,990; ESP: ₹6,990; Specification: 3.5-inch capacitive touchscreen, 320 x 480 pixels screen resolution, 1 GHz processor, 1420 mAh battery, 5 MP camera, 157 MB internal memory, expandable to 32 GB, 3G, Wi-Fi

Samsung Galaxy Y Duos Lite: OS: Android 2.3; Launch: September 2012; MRP: ₹6,990; ESP: ₹6,990; Specification: 7.1-cm (2.8-inch) QVGA display, 832 MHz processor, 1200 mAh battery, 2 MP camera, 4 GB internal memory, expandable up to 32 GB

Micromax A57 Superfone Ninja 3: OS: Android 2.3; Launch: September 2012; MRP: ₹4,999; ESP: ₹4,890; Specification: 3.5-inch capacitive touch display, 480 x 320 pixels screen resolution, 1 GHz processor, 1400 mAh battery, 3 MP rear camera, 512 MB of built-in storage, expandable up to 32 GB, 3G, Wi-Fi

Map My India Car Pad 5: OS: Android 2.3; Launch: August 2012; MRP: ₹19,990; ESP: ₹17,990; Specification: 12.7-cm (5-inch) capacitive touchscreen display, 800 x 480 pixels screen resolution, 1 GHz processor, 384 MB RAM, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Intex Aqua 4.0: OS: Android 2.3; Launch: August 2012; MRP: ₹5,490; ESP: ₹5,490; Specification: 3.5-inch display screen, 480 x 320 pixels screen resolution, 800 MHz processor, 1400 mAh battery, 512 MB of RAM, 3 MP rear camera, Wi-Fi; Retailer/Website: www.maniacstore.com

Wicked Leak Wammy Note: OS: Android 4.0; Launch: August 2012; MRP: ₹11,000; ESP: ₹11,000; Specification: 12.7-cm (5-inch) touch screen, 1 GHz processor, 2500 mAh battery, 8 MP camera, 512 MB of RAM, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.wickedleak.com

MTS MTag 351: OS: Android 2.3; Launch: August 2012; MRP: ₹7,499; ESP: ₹7,499; Specification: 3.5-inch capacitive touchscreen, 320 x 480 pixels screen resolution, 800 MHz processor, 1300 mAh battery, 3 MP camera, 128 MB internal memory, expandable up to 32 GB; Retailer: at your nearest MTS store

Micromax Superfone Pixel A90: OS: Android 4.0; Launch: August 2012; MRP: ₹12,990; ESP: ₹12,990; Specification: 4.3-inch AMOLED multi-touchscreen display, 480 x 800 pixels resolution, 1 GHz processor, 1600 mAh battery, 8 MP rear camera, 512 MB of internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Karbonn A18: OS: Android 4.0; Launch: August 2012; MRP: ₹12,990; ESP: ₹9,849; Specification: 4.3-inch WVGA touch screen, 480 x 800 pixels screen resolution, 1 GHz processor, 1500 mAh battery, 5 MP rear camera, 1 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.infibeam.com

myOpenSourceStore.com, one stop solution for all open source software: free download, installation and maintenance. Your window to free professional software. Contact us: 080-4242-5042; E-mail: contact@myOpenSourceStore.com; www.myOpenSourceStore.com
TABLETS

Mercury mTAB7: OS: Android 4.0; Launch: October 2012; MRP: ₹6,499; ESP: ₹6,499; Specification: 17.78-cm (7-inch) capacitive touch screen, 3500 mAh battery, 3.2 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.naaptol.com

Swipe Tab All in One: OS: Android 4.0; Launch: October 2012; MRP: ₹12,899; ESP: ₹9,250; Specification: 10.1-inch TFT LED multi-touch capacitive touch screen, 1.2 GHz processor, 5500 mAh battery, 2 MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Datawind UbiSlate 7R+: OS: Android 4.0; Launch: September 2012; MRP: ₹3,499; ESP: ₹3,499; Specification: 7-inch resistive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 3200 mAh battery, front camera, 4 GB internal memory, expandable up to 32 GB, 2G, Wi-Fi; Retailer/Website: the company's online store

Datawind UbiSlate 7C+: OS: Android 4.0; Launch: September 2012; MRP: ₹2,999-₹4,999; ESP: ₹2,999-₹4,999; Specification: 7-inch capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 3200 mAh battery, front camera, 4 GB internal memory, expandable up to 32 GB, 2G, Wi-Fi; Retailer/Website: www.flipkart.com

Datawind UbiSlate 7Ci: OS: Android 4.0; Launch: September 2012; MRP: ₹2,999-₹4,999; ESP: ₹2,999-₹4,999; Specification: 7-inch capacitive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 3200 mAh battery, front camera, 4 GB internal memory, expandable up to 32 GB, 2G, Wi-Fi; Retailer/Website: www.flipkart.com

Winknet Ultimate (TWY300): OS: Android 4.0; Launch: September 2012; MRP: ₹15,995; ESP: ₹14,999; Specification: 9.7-inch TFT LCD (4:3) capacitive multi-touch screen, 1024 x 768 pixels screen resolution, 1.5 GHz processor, 8000 mAh battery, 2 MP rear camera, 16 GB internal memory, expandable up to 32 GB; Retailer/Website: www.snapdeal.com

Winknet Wonder (TWY100): OS: Android 4.0; Launch: September 2012; MRP: ₹7,499; Specification: 7-inch capacitive touchscreen, 800 x 480 pixels screen resolution, 1.5 GHz processor, 3000 mAh battery, 0.3 MP camera, 8 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Celkon CELTAB: OS: Android 4.0; Launch: September 2012; MRP: ₹6,999; ESP: ₹6,899; Specification: 7-inch multi-touch capacitive touchscreen, 800 x 480 pixels screen resolution, 0.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Micromax Funbook Infinity: OS: Android 4.0; Launch: September 2012; MRP: ₹6,699; ESP: ₹6,814; Specification: 7-inch capacitive touch display, 800 x 480 pixels screen resolution, 1.2 GHz processor, 2800 mAh battery, 2 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.infibeam.com

Micromax Funbook Alpha: OS: Android 4.0; Launch: September 2012; MRP: ₹5,999; Specification: 17.7-cm (7-inch) touchscreen, 1 GHz processor, 512 MB RAM, and a provision for a 3G dongle; Retailer/Website: www.flipkart.com

MTNL Lofty TZ 200: OS: Android 4.0; Launch: September 2012; MRP: ₹3,999; ESP: ₹3,999; Specification: 7-inch LCD resistive touch screen, 800 x 400 pixels screen resolution, 1.2 GHz Cortex A8 processor, 3000 mAh battery, 0.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.teracomstore.in

MTNL Lofty TZ300: OS: Android 4.0; Launch: September 2012; MRP: ₹6,499; ESP: ₹6,499; Specification: 7-inch LCD resistive touch screen, 800 x 400 pixels screen resolution, 1.2 GHz Cortex A8 processor, 3000 mAh battery, 0.3 MP front camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.teracomstore.in

Penta T-Pad WS703C: OS: Android 4.0; MRP: ₹7,499; Specification: 7-inch capacitive TFT multi-touch LCD screen, 800 x 480 pixels screen resolution, 1 GHz processor, 2800 mAh battery, internal memory up to 4 GB, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.infibeam.com

Asus PadFone: OS: Android 4.0; Launch: October 2012; MRP: ₹64,999; ESP: ₹64,999; Specification: Super AMOLED capacitive touchscreen, 960 x 540 pixels screen resolution with multi-touch panel, dual-core 1.5 GHz processor, 1520 mAh battery, front VGA camera (640 x 480), 8 MP rear camera; Retailer/Website: www.shopping.indiatimes.com

Adcom Tablet PC APad 721C: OS: Android 4.0; Launch: October 2012; Specification: 17.8-cm (7-inch) capacitive touch display, 480 x 800 pixels screen resolution, 1.2 GHz Cortex A8 processor, 2300 mAh battery, 0.3 MP front-facing camera for video calling, 4 GB internal storage, expandable up to 32 GB, 3G, Wi-Fi

Mercury magiQ: OS: Android 4.0; Launch: August 2012; MRP: ₹12,700; ESP: ₹12,700; Specification: 5-inch capacitive multi-touchscreen, 1 GHz processor, 12 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.loginmobile.com
EduBridge EduTab: OS: Android 4.0; Launch: August 2012; MRP: ₹8,500; Specification: 17.8-cm (7-inch) capacitive touch screen, 1 GHz processor, 2800 mAh battery, front camera, storage expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Lava E-Tab Z7H: OS: Android 4.0; MRP: ₹5,899; ESP: ₹5,899; Specification: 7-inch capacitive multi-touchscreen display, 800 x 480 pixels screen resolution, 1.2 GHz Cortex-A8 processor, 2800 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.flipkart.com

INTEX i-Buddy: OS: Android 4.0; Launch: July 2012; MRP: ₹6,490; ESP: ₹6,490; Specification: 7-inch capacitive touch screen display, 800 x 480 pixels screen resolution, 1 GHz processor, front-facing camera for video chatting, 2350 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

HCL ME Y2: OS: Android 4.0; Launch: July 2012; MRP: ₹14,999; ESP: ₹14,999; Specification: 7-inch IPS LCD capacitive multi-touch screen, 1024 x 600 pixels screen resolution, 1 GHz processor, 4000 mAh battery, 2 MP rear camera, 8 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Karbonn Smart Tab 1: OS: Android 4.0; Launch: May 2012; MRP: ₹7,999; ESP: ₹7,290

Penta Tablet IS701C: OS: Android 4.0; Launch: August 2012; MRP: ₹4,999; ESP: ₹3,699; Specification: 17.78-cm (7-inch) TFT LCD capacitive multi-touch screen, 800 x 480 pixels screen resolution, 1 GHz processor, 0.3 MP front camera, 3000 mAh battery, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.naaptol.com

Reliance 3G Tab: OS: Android 2.3; Launch: July 2012; MRP: ₹14,499; ESP: ₹12,699; Specification: 7-inch capacitive multi-touch touchscreen, 480 x 800 pixels resolution, 800 MHz processor, 3 MP rear camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi

BSNL Penta T-Pad WS704C: OS: Android 4.0; Launch: August 2012; MRP: ₹14,699; ESP: ₹14,699; Specification: 8-inch capacitive multi-touchscreen, 800 x 552 pixels screen resolution, 1 GHz processor, 2 MP rear camera, 0.3 MP front camera, Wi-Fi; Retailer/Website: www.snapdeal.com

Wishtel IRA Comet HD: OS: Android 4.0; Launch: August 2012; MRP: ₹9,999; ESP: ₹8,500; Specification: 17.8-cm (7-inch) capacitive five-point multi-touch screen, 1.2 GHz processor, external memory expandable up to 32 GB, 2 MP front camera, 3D G-sensor for gaming, 3G, Wi-Fi; Retailer/Website: www.infibeam.com

Samsung GALAXY Tab 2 510 P5100: OS: Android 4.0; Launch: July 2012; MRP: ₹32,990; ESP: ₹32,990; Specification: 10.1-inch capacitive multi-touchscreen, 1280 x 800 pixels screen resolution, 1 GHz processor, 7000 mAh battery, 3 MP rear camera, 0.3 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com, www.ebay.in

Micromax Funbook Pro: OS: Android 4.0; Launch: July 2012; MRP: ₹9,999; ESP: ₹9,999; Specification: 23.6-cm (10.1-inch) display, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 5600 mAh battery, VGA front camera, 8 GB of internal memory, 1 GB of RAM, 3G, Wi-Fi; Retailer/Website: www.snapdeal.com

Zync Z-909: OS: Android 2.3; Launch: July 2012; MRP: ₹3,699; ESP: ₹3,699; Specification: 17.8-cm (7-inch) resistive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 0.3 MP front-facing camera, 256 MB RAM, 4 GB internal storage, expandable up to 32 GB, 3G, Wi-Fi; Retailer/Website: www.homeshop18.com, www.futurebazaar.com

Samsung Galaxy SIII 32GB: OS: Android 4.0; Launch: July 2012; MRP: ₹41,500; ESP: ₹37,990; Specification: 12.2-cm (4.8-inch) HD Super AMOLED display, 1280 x 720 pixels resolution, 1.4 GHz Exynos 4 processor, 8 MP rear camera, 1.9 MP front camera, 3G, Wi-Fi; Retailer/Website: www.flipkart.com

NETBOOKS

Samsung N100: OS: MeeGo; Launch: August 2011; MRP: ₹12,290; ESP: ₹11,840; Specification: 25.7-cm WSVGA anti-reflective LED display, 1024 x 600 pixels screen resolution, 1.33 GHz Intel Atom processor, 1 GB DDR3 memory, Intel GMA 3150 graphics, 250 GB HDD, 3-cell (40 W) battery, 4-in-1 card reader, 1.03 kg; Retailer: Croma Store, Saket, New Delhi, +91 64643610

ASUS EeePC X101: OS: MeeGo; Launch: August 2011; MRP: ₹12,499; ESP: ₹12,000; Specification: 25.7-cm LED-backlit screen, Intel Atom N455 CPU, 1 GB DDR3 RAM expandable up to 2 GB, 220 GB storage, Bluetooth 3.0, Wi-Fi 802.11 b/g/n, 17.6 mm thick, 920 g; Retailer: Eurotech Infosys, Nehru Place, Delhi, 9873679321

Acer Aspire One Happy: OS: Android; Launch: March 2011; MRP: ₹17,999; ESP: ₹15,490; Specification: 25.7-cm WSVGA high-brightness display with a 16:9 aspect ratio, dual-core Intel Atom N455, 1 GB RAM, Intel graphics media accelerator 3150, 320 GB internal hard disk, Bluetooth 3.0+HS support, Wi-Fi, built-in multi-in-one card reader; Retailer: Vijay Sales, Mumbai, 022-24216010
Linux-based smartphone OS, Tizen, gets a new lease of life
If reports are to be believed, the Linux Foundation has breathed new life into the Linux-based smartphone OS, Tizen. According to a blog post, the Linux Foundation has released the source code and SDKs for the first alpha version of Tizen 2.0. This move has given rise to speculation that Samsung may make a phone based on this platform. Reportedly, Samsung is showing interest in Tizen as a possible alternative to Android. According to sources, Samsung donated $50,000 to the Linux Foundation, making it one of the seven corporate platinum members; the reason behind this move was unclear back then. Earlier this month, the Wi-Fi Alliance published a document certifying that a Samsung handset named the 'GT-i9300_TIZEN' had passed its interoperability tests. The GT-i9300 is Samsung's internal model number for the Galaxy S III, prompting speculation that the company may be readying a version of that phone for Tizen. The blog post that announced the release spoke about the new version of Tizen, which includes an improved Web framework offering better support for HTML5 and other Web standards, plus device APIs for handling file transfers, power control and notifications. Accompanying the release is an IDE and SDK for developing Web applications to run on Tizen, plus a new SDK for native platform development. This alpha release suggests that the platform, far from dying out, may actually ship on Samsung phones.
Samsung devices to get Jelly Bean update
Samsung smartphone users now have reason to cheer! The company has confirmed the list of devices that will get the Android Jelly Bean (a.k.a. Android 4.1) update. Check out the list of devices:
Samsung Galaxy S II / Samsung Galaxy S II LTE
Samsung Galaxy Note
Samsung Galaxy S Advance
Samsung Galaxy Chat
Samsung Galaxy Ace 2
Samsung Galaxy Beam
Samsung Galaxy Ace Plus
Samsung Galaxy Mini 2
Samsung Galaxy S Duos
With Samsung curbing prices on its devices, it remains to be seen whether this news will create a frenzy among gadget lovers.
Vellamo benchmarking app gets a revamp
As Qualcomm's Vellamo benchmarking tool gets updated to version 2.0, testing your Android device will now be more fun. Giving its Vellamo benchmark a thorough update, Qualcomm has added a number of new tests to provide results that represent real-life scenarios. This also marks the first time that the company has added a video streaming test and built a version for Android on x86, which means users can also test Intel's Medfield-based smartphones. And did we forget to mention its user interface? Well, it has been given a 2012-esque styling, and it is surely one of the prettiest benchmarking tools available on Android. According to sources, Vellamo 2.0 also features an 'inline video' test; it installs a Web server on the device to stream a file, with artificial network latency imposed to simulate Wi-Fi, 3G and 4G connections.
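Vellamo's own test harness is not public here, but the idea of serving a file while injecting artificial latency is easy to picture. Here is a rough, hypothetical Python sketch of that technique; the handler class, port and delay value below are our own assumptions, not Qualcomm's implementation:

    # Sketch: serve local files with an artificial delay per request,
    # loosely mimicking Wi-Fi/3G/4G-like round-trip times.
    import http.server
    import time

    SIMULATED_LATENCY_S = 0.120  # assumed value, roughly a 3G round trip

    class DelayedHandler(http.server.SimpleHTTPRequestHandler):
        def do_GET(self):
            time.sleep(SIMULATED_LATENCY_S)  # inject the artificial delay
            super().do_GET()                 # then serve the file normally

    if __name__ == '__main__':
        # Serves files from the current directory on localhost:8000.
        http.server.HTTPServer(('127.0.0.1', 8000), DelayedHandler).serve_forever()

A real benchmark would shape bandwidth and jitter as well as a fixed delay, but even this toy server shows how streaming behaviour changes once every request pays a network-like penalty.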
Google's Code-In contest for students begins this November
In an effort to give a boost to open source software development, Google has announced its popular Code-In contest for pre-university students. Students in the age group of 13-17 years can participate in the contest, which begins on November 26 and will continue till mid-January next year. Twenty grand prize winners of the contest will make a trip to Google's campus in Mountain View and meet the company's employees. Each winner will be allowed to take one of their parents or legal guardians along on the trip. According to the blog post from Google, students will be able to work with 10 open source organisations on a variety of tasks. All of these organisations have successfully served as mentoring organisations working with university students in Google's Summer of Code programme. The tasks students will work on fall into the following categories:
Code: tasks related to writing or refactoring code.
Documentation/training: tasks related to creating/editing documents and helping others learn more.
Outreach/research: tasks related to community management, outreach/marketing, or studying problems and recommending solutions.
Quality assurance: tasks related to testing and ensuring that code is of high quality.
User interface: tasks related to user experience research or user interface design and interaction.
The first alpha of Mandriva Linux 2012 is out
The first alpha of Mandriva Linux 2012 is finally available for testing. Code-named 'Tenacious Underdog', this version upgrades the KDE desktop to version 4.9.0, released in August, and includes a faster distribution installer. The text mode of the installer is also working fine. The HAL (hardware abstraction layer) has been completely removed, several bugs have been fixed, and other packages have been updated. Mandriva Linux project leader Per Oyvind Karlsen stated that the alpha was delayed due to some issues in the current build system.
Python 3.3 released
Python lovers now have a reason to smile: version 3.3 of the Python language has been released. This brings new syntax to the language in the form of 'yield from', allowing developers to delegate work to a sub-generator (PEP 380). Among several other changes, Python 2-style Unicode literal syntax for strings is back, making it easier to port code from Python 2. Among the library changes, faulthandler, which helps in debugging low-level crashes, has been added, along with ipaddress, which provides high-level objects representing IP addresses and masks. Other additions are lzma, to compress data using the XZ/LZMA algorithm; unittest.mock, to replace parts of a system under test with mock objects; and venv for Python virtual environments. Improvements have been made to namespace packaging (PEP 420); there is a new memoryview implementation that improves reliability, a new Python launcher for Windows (PEP 397, documentation), and qualified name support for functions and classes, ensuring easier management of nested classes (PEP 3155). There is also a C accelerator for faster decimal arithmetic.
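To make the headline feature concrete, here is a minimal sketch of PEP 380's 'yield from' delegation and of the new ipaddress module, run on a Python 3.3 interpreter; the function name flatten() is our own illustration, not something from the release notes:

    # 'yield from' (PEP 380): a generator delegates work to a sub-generator.
    def flatten(list_of_lists):
        for sublist in list_of_lists:
            yield from sublist  # forwards each value (plus send() and exceptions)

    print(list(flatten([[1, 2], [3, 4, 5]])))  # prints [1, 2, 3, 4, 5]

    # ipaddress: high-level objects representing IP addresses and masks.
    import ipaddress
    net = ipaddress.ip_network('192.0.2.0/29')
    print(net.num_addresses)                         # 8 addresses in the /29
    print(ipaddress.ip_address('192.0.2.5') in net)  # True

Under Python 2, the first generator would have needed an explicit inner loop ('for item in sublist: yield item'); 'yield from' also forwards values sent into the generator and propagates exceptions, which the manual loop does not.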
Now, LPI's Linux Essentials in North America
During LinuxFest 2012 in Columbus, Ohio, the Linux Professional Institute (LPI), the world's premier Linux certification organisation, announced that LPI's Linux Essentials programme, which measures foundational knowledge in Linux and open source software, is now available in North America. Linux Essentials was initially released as a pilot programme in Europe, the Middle East and Africa, but due to popular demand has now been launched in North America. Targeted at new technology users, the Linux Essentials programme has been adopted by schools, educational authorities, training centres and others across the regions mentioned. The programme has been under development by LPI for approximately two years, and includes the participation of qualification authorities, academic partners, private trainers, publishers, government organisations, volunteer IT professionals, and Linux and open source experts. The single Linux Essentials exam leads to a 'Certificate of Achievement' recognising knowledge of the following subjects:
The Linux Community and a Career in Open Source
Popular Operating Systems and Major Open Source Applications
Understanding Open Source Software and Licensing
Linux Command Line Basics, Files and Scripts
An online exam will also be conducted, either through LPI or partner Internet-based testing (IBT) institutions.

At ADVANTAGE PRO, we do not make tall claims but produce 99% results month after month: Tamil Nadu's No. 1 performing Red Hat partner for RHCSS, RHCVA and RHCE. The Red Hat career programme from the expert, only @ Advantage Pro. Also get expert training on MySQL-CMDBA, MySQL-CMDEV, PHP, Perl, Python, Ruby, Ajax... New RHEL 6.2 exam dates (RHCSA/RHCE) @ ADVANTAGE PRO for the Nov-Dec-Jan quarter 2012-13: Nov 5, 26, 30; Dec 10, 24; Jan 21, 28. “Do Not Wait! Be a Part of the Winning Team.” Regd Off: Wing 1 & 2, IV Floor, Jhaver Plaza, 1A, N.H. Road, Nungambakkam, Chennai - 34. Ph: 98409 82185/84; Telefax: 28263527; E-mail: enquiry@vectratech.in; www.vectratech.in
Linux 3.7 gets 64-bit ARM support
While the release of Linux 3.6 is making waves in the open source circuit, Linus Torvalds has merged support for the 64-bit ARM architecture into the main Linux kernel development tree for version 3.7. The architecture is officially known as AArch64, and following several requests from kernel developers, the code for it has been placed in a separate arch/arm64/ directory in the kernel source code. Previously, it had been reported that a set of 36 patches extending the Linux kernel to support ARM's AArch64 64-bit architecture had been released by Catalin Marinas, an employee at ARM. This 64-bit ARM support is provided by the ARMv8 instruction set, which was announced in the autumn of 2011 and is expected to be first used in processors in 2014. By late 2012, Applied Micro Circuits Corporation (AMCC) is expected to come up with its first sample chips with 64-bit ARM cores. Alongside this, a patch collection that boots on a range of different 32-bit ARM platforms is to be integrated into the main Linux development tree.
HP introduces Open webOS 1.0
When webOS from HP did not go down well in the market, the company decided to go the open source way. HP has officially released version 1.0 of its Open webOS, which is exclusively optimised for touchscreens. The company had introduced the beta version in August. The source code is licensed under the Apache Licence 2.0. Open webOS 1.0 carries the key components of webOS, along with the Enyo 2 JavaScript application framework that was developed as part of the webOS project. It also includes build scripts for Linux and OpenEmbedded.
Nokia's open source OS, MeeGo, gets new lease of life
Jolla, the Finnish mobile phone start-up, recently announced that it has secured funds worth $260 million to resurrect the open source MeeGo mobile operating system. The company has also announced, via its official Facebook page, that it will showcase the OS's new user interface, along with details about the app ecosystem and SDK, at the Slush start-up conference to be held in Helsinki, Finland, on November 21-22 this year.
The MeeGo mobile OS was abandoned by Nokia, leading Jolla to adopt it in July. Jolla plans to design and develop new smartphones based on it. MeeGo's popularity grew with Nokia's N9, which is so far the only MeeGo smartphone to have been released. According to recent reports, Jolla has announced that it has raised $260 million in funding, and that its first smartphone is almost ready for release. According to Digital Trends, “Jolla plans to aggressively target the Asian and Chinese markets, calling China a game changer in the technology industry, adding how it wants to create the third smartphone ecosystem in China, after Android and iOS.” Jolla's MeeGo OS has been codenamed Sailfish, and is expected to make its first appearance very soon. Company sources claim it is more open than Google's Android with respect to apps and service development, and have revealed in a tweet that, with the ecosystem support system in place, the company is very close to announcing the date for the device's launch.
look for ways to deliver platforms that address the evolving needs of the growing M2M space." The move is likely to help the company to get Java developers to choose its chipset over others. However, there is no information on exactly when Java ME Embedded 3.2 would be ready for production on Qualcomm's chips.
Here’s an online open source store for you
myOpenSourceStore.com has announced the launch of its e-commerce store, which will be a first-of-its-kind online open source store to accelerate the adoption of OSS. The online store exclusively specialises in offering the best available OSS under a wide range of categories. Apart from that, it also showcases a detailed overview (with recommendations) of top rated products available for download, absolutely free.
You can run Linux on almost all Windows 8 PCs
The solution is finally out—thanks to the Linux Foundation. Previously, we reported that Microsoft had incorporated a Secure Boot technology in the Unified Extensible Firmware Interface (UEFI), which prevented users from installing Linux on Windows 8 PCs. Now, the Linux Foundation has come up with a solution to enable users to install an open source operating system in computers that ship with Windows 8.
The solution from the Linux Foundation will allow users to install most open source OSs on computers with Secure Boot enabled. The Foundation will obtain a licence key from Microsoft and use it to sign a pre-bootloader tool, which in turn will allow users to chainload additional software without running another signature check. Microsoft's plans for introducing Secure Boot in Windows 8 did not go down too well with the Linux community. Installation of Ubuntu, Debian, Fedora or other OSs on Windows computers used to be easy, but Microsoft made things difficult in Windows 8 by including UEFI instead of a traditional BIOS. The Secure Boot technology acts like a gatekeeper: it only allows the booting of an operating system with a security key. So, no gate pass means no entry! Secure Boot has the advantage of preventing viruses or other malware from infecting your PC while it boots. But the problem is that, with such a strict security feature, unsigned software, like a program to install a Linux-based OS, will be blocked too.
Android users, beware of battery apps!
Beware, Android users! Hackers are on the prowl again! This time, they have added apps to the Android market that claim to help replenish your smartphone's battery. IT security solutions firm Symantec has found that while such apps may actually help revive your battery's life, they also steal important information. The company released an 'Intelligence Report' for September 2012, which stated, “They steal critical information from your device.” The report further added, “The high processing power of embedded CPUs and large, bright LCD screens, coupled with frequent usage of apps, make the battery's life a perennial problem for device users. This has spawned a whole genre of applications aimed at addressing this problem.” These apps let you track the battery status, notify you when the battery is running down, and automatically turn off features and apps that are not necessary. “We found a bunch of apps that promise to charge the battery using solar energy. They claim that they can turn your phone screen into a solar charger,” the Symantec report said. The report advises users to keep away from such apps till manufacturers actually install them on phones. Till then, it is advisable to continue charging the phone using regular chargers. “If an app requests permissions that seem out of the ordinary for what it is supposed to do, then don't install it,” it recommended.
Zenwalk Linux 7.2 released
Version 7.2 of Zenwalk Linux is now available. This release mainly aims at improving the overall performance of the distribution, which is based on Slackware, the oldest Linux distribution. “We are happy to release Zenwalk 7.2. After several months of rescheduling, we think it's time to let this new jet fly,” said an official blog post. “Zenwalk 7.2 is loyal to its design: providing one application per task, and everything needed to work, play, code and create, in a single 700 MB ISO image, through a 10-minute automatic install process on any recent computer. Zenwalk aims to be really fast among the modern Linux desktops, due to many optimisations at the kernel, application and desktop levels. The challenge that we faced, and which caused a delay in the 7.2 release date, was to achieve 100 per cent Slackware-Linux compatibility while keeping most of the optimisations that were introduced during the past few years of development (from 2004). Zenwalk 7.2 runs on kernel 3.4.8 with a BFS scheduler. The Zenwalk desktop is based on XFCE 4.10 and GTK 2.24.10. It has a unique look and feel, and perfect ergonomic integration of the application set comprising LibreOffice 3.6.2, Firefox/Thunderbird 15.0.1, The GIMP 2.8.2 and much more. The Netpkg package manager has been improved, with support for multiple mirrors and better performance,” the blog post reported. Zenwalk is a GNU/Linux operating system designed to provide the following characteristics:
Modern and user-friendly (latest stable software, selected applications)
Very fast (optimised for performance)
Rational (one mainstream application for each task)
Complete (full development, desktop and multimedia environment)
Evolutionary (advanced network package management tool, Netpkg)
For U & Me
Open Source India 2012 Takes FOSS to a New Level
The techie crowd at Asia’s largest open source convention shows how FOSS has evolved from being a niche concept to becoming mainstream.
The future of technology lies in it being free and open source. This was the unanimous call raised by the participants and community members present at the event. FOSS is not just about making the source code of the software freely available; it’s also about opening up of the mind. Open Source India (OSI) 2012 concluded on this positive note, promising to come back next year with much more power and increased involvement from the community. The three-day affair proved to be an apt platform for a reunion of the FOSS community and for the sharing of FOSS concepts. The informative sessions and quality audience at Asia’s largest open source convention proved how FOSS has evolved from being a niche concept to becoming mainstream. The event witnessed a turnout of industry professionals, the community folk and a lot of newbies who came to get a closer look and to experience the FOSS environment. Open Source India 2012, which concluded at the NIMHANS Convention Centre, Bengaluru, featured a lot of enlightening content. Sharing his insight on OSI 2012, Ramesh Chopra, vice chairman, EFY Enterprises Pvt Ltd, said, “We are glad that we have been able to bring out a much bigger, better and improved OSI this year. We aimed at offering something for all techies, including software developers, IT managers and heads of IT departments, project managers, delivery experts, academia, and the open source community.” Praising the event, Dr Pramod K Varma, chief architect and technology advisor to the Unique Identification Authority of India, said, “India has become a huge consumer of open source technology. It’s about time that we also became a dominant contributor to the open source projects. I think OSI, being a leading open source event in India, is proving to be a great platform for encouraging the community and increasing participation in open source projects.” The three-day convention witnessed the presence of over 70 speakers from the corporate world and the community. Companies like Oracle, Sify Technologies Limited, Intel, Acquia, HP, Dell and Microsoft came forward to show their support for open source technology at this event. Mandar Naik, director, Platform Strategy, Microsoft, said, “It is always exciting to be a part of OSI, and this year was no exception. The technology sessions were highly informative with some of the experts sharing their insights on a range of topics including mobile application development, the kernel and the cloud. I feel the conference proved to be a great opportunity to engage with technology enthusiasts from all over the country, ranging from app developers and open source contributors to students. Nothing beats a great technology debate over a coffee with like-minded friends from the community.”
The stage is set for the three-day 'open' convention
The event's registration desk is all set to welcome open source fans
The event witnessed the participation of many serious open source fans over the three days
(Left to Right) Varad Gupta, Dibya Prakash, Divyanshu, Prashanth Ranjalkar and Sanjay Manwani engaged in an exciting discussion on 'FOSS-fuelled Innovation' on Day 3
FOSS for developers
OSI has always been known as a platform where developers get a lot of opportunities to learn about futuristic technologies. This year’s convention had a special focus on Google’s Android operating system, which was especially beneficial to mobile app developers. Dushyantsinh Jadeja, software business manager, Intel-APAC, talked about ‘Building next generation applications for Android on Intel architecture’, a session that was much appreciated by the attendees. He said, “Android is the most preferred platform these days. The key behind making successful Android applications is to run them seamlessly on various hardware.” His talk focused on how Intel’s Overlay allows OEMs to customise applications on Android. Yet another interesting discussion was initiated by Anantharaman P N, director, Engineering, Adobe Systems, Bengaluru. He talked about ‘Building compelling mobile applications, the open source way’. Anantharaman threw light on topics like compelling mobile applications, the relationship between open source technology and mobiles, the open source landscape and how Adobe’s EdgeCode is helpful in building such apps. Other sessions that proved helpful to developers included ‘Kernel performance tuning’ by Varad Gupta, Keen & Able Computers Pvt Ltd, New Delhi; ‘Developing offline Web applications using HTML5 local storage’ by Janardan Revuru, project manager, Hewlett Packard, Bengaluru; ‘Test driven development and automation’ by Mahesh Salaria, technology evangelist, Kayako Support Systems, Gurgaon; and ‘Testing Web services’ by Dibya Prakash, technology consultant.
FOSS for IT managers
OSI normally witnesses a great turnout of IT managers, who attend the event to find out about the latest technology solutions that can be implemented in their organisations. This year was no different, as IT managers and CXOs from across the nation attended the convention. A track called ‘Cloud Day’ was a major hit amongst one and all. Gaurav Agarwal of Sify Technologies Limited shared his insight on the topic ‘Opening the doors of the cloud’. He talked about how the cloud and open source come together to create innovation for customers, who can leverage the combined offering. The key takeaway was how a cloud service provider and the open source community can together create business propositions. He touched upon how the cloud can help the open source community at every stage, right from the incubation stage, where users want to experiment with an idea and explore flexible models, to when they reach the inflexion mode with services stabilised, running and growing. Lux Rao, country lead, Cloud Consulting Solutions, HP India, Bengaluru, touched upon a very interesting topic amongst cloud users: ‘Public, private or hybrid: What is your cloud?’ He talked about how an organisation ought to choose the best cloud solution. The third day of the event saw an interesting track on
‘OpenStack’. Atul Jha of CSS Corp, who managed the session, ensured that there was something for all OpenStack users, be it the enterprises or the developers. Kavit Munshi from Aptira spoke about OpenStack’s identity and access management component, Keystone. Munshi emphasised that, “The community in India needs to focus on getting SMEs and students interested and involved with OpenStack. The community needs to assist developers and administrators to familiarise themselves with the technology with the help of the meetup group by organising demos and workshops.”
FOSS for everyone
Gone are the days when open source technology was only for the techies. It has now become equally popular amongst regular technology users, and how! FOSS is now being used by everyone, be it enterprises or educational institutions. OSI 2012 did not miss out on touching upon the FOSS needs of the common user. Right from discussions on topics like ‘The past, present and future of open source’ to ‘How to contribute to FOSS without programming’, this jam-packed track covered some thought-provoking issues. Any discussion on the Indian open source community is incomplete without a mention of the Indian Linux User Groups. OSI had representatives from the popular Chennai and Kolkata LUGs, apart from legendary contributors like Raj Mathur and Kingsley John. The group discussion also covered the policy issues that need to be dealt with to improve the adoption of FOSS in public bodies. ‘FOSS for academia’ was yet another eagerly awaited session. Charles Jayawardena, business consultant, Virtusa Corp, who came all the way from Colombo, Sri Lanka, delivered a talk on the ‘Akura open source school management system’, which is being deployed by his company in Sri Lanka. His talk demonstrated how open source technology can easily find a place in the education space. K Prabhakaran, engineer, NRCFOSS, AU-KBC Research Centre, Anna University, Chennai, talked about a very successful model for deploying open source in academia, entitled ‘Automation and Networking of 33 public libraries using FOSS in Tamil Nadu’.
Hands-on experience
Open Source India believes in sharing practical knowledge as much as in sharing ideologies. Apart from knowledge-packed technology sessions, there were hands-on workshops held by industry experts on HTML5, SQL, OpenLDAP and JBoss, among others. An interesting workshop on ‘App development for Android’ gained a lot of traction on Day 1 of the event. Dibya Prakash, who conducted the workshop, talked about why developers should choose Android, the basics of the Android platform, the development environment set-up, Android building blocks, the Android user interface, the resources framework, storage options, accessing phone components and publishing an application. A live stream of updates regarding the conference was provided on Twitter and Facebook. This helped everyone stay updated on the discussions and sessions happening. Celebrating the spirit of FOSS, OSI aims to be back with greater enthusiasm and more information-packed sessions next year.
(L) NIMHANS Convention Centre, the venue for the three-day 'open-source' convention, ready to welcome FOSS enthusiasts. (R) A big ‘Thank you’ to the partners of the event for their support. (L) Pramod K Varma, chief architect and technology advisor to the UIDAI Project, delivered the keynote address on ‘Open Source and Commodity Computing in Government’. (R) Ujjwal Kumar, technical evangelist at Microsoft Corporation, India, talked about ‘Design Principles for Building Great Apps’. (L) Charles Jayawardena, business consultant, Virtusa Corp, Colombo, spoke about the ‘Akura Open Source School Management System’; on his left is Mr Thushera Kawadwatta. (R) Piyush S Newe from EnterpriseDB Software India Pvt Ltd spoke about ‘Database Sharding’. (L) The exhibitors benefited from the quality audience that turned up at the event. (R) Ryusuke Kajiyama, MySQL pre-sales consulting manager, Asia Pacific & Japan, shared his experience during a hands-on workshop on MySQL performance tuning.
(L) The EFY stall at OSI witnessed a good turnout. (R) Gaurav Agarwal from Sify Technologies spoke about ‘Opening the Doors of the Cloud’.
(L to R) Dushyantsinh Jadeja, software business manager–APAC, Intel, gave his insights on ‘Building Next Generation Applications for Android on Intel Architecture’; Jacob Singh, director, India, Acquia, spoke about ‘Learning by Accident: The Three Secrets to Nurturing Technical Talent’; Lux Rao, country lead - Cloud Consulting Solutions, HP India, Bengaluru, threw light on ‘Public, Private or Hybrid - What is your Cloud?’ (L to R) S Sreehari, managing director, Novell India (The Attachmate Group), interacted with the audience on 'Changing Trends in Technology and How Developers Should Adapt to Them’ in his keynote address; Varad Gupta, founder of Keen & Able Computers Pvt Ltd, New Delhi, spoke on 'To Certify or Not To Certify'; Dibya Prakash, technology consultant, gave a hands-on workshop on 'App development for Android'. (L to R) Megha Singhvi, senior presales technical consultant, MySQL, Oracle, shared her knowledge on MySQL cluster in a jam-packed session; Abhishek Datt, CTO, Taashee Linux Services, India, shared his views on 'Disaster Recovery Made Easy Using DRBD'; Ramesh Rajagopalan, engineering director, Dell India R&D Center, Bengaluru, shared his insights on ‘Combination of Cloud and High Performance Computing Clusters’ during the keynote address on Day 2. (L to R) Rajasekharan Vengalil, technical evangelist at Microsoft India, spoke about ‘Introducing TypeScript’; Rafiq Ahamed K from Hewlett-Packard, Bengaluru, talked about 'High Availability for Enterprise'; Chandra Balani, MySQL India sales manager, Oracle India Pvt Ltd, talked about ‘The State Of The Dolphin: Oracle’s MySQL Strategy and the Latest Key Developments, Including Product Releases, the Roadmap, and the Community’.
(L to R) Sanjay Manwani, senior engineering manager, MySQL, Oracle, talked of how 'MySQL is Exciting Again'; Raj Mathur, Kingsley John, T Shrinivasan and A Mani participated in a discussion on ‘The Role of ILUGs in promoting FOSS’.
(L to R) Amandeep Khurana, solution architect, Cloudera Inc, gave an impressive ‘Introduction to HBase’; Attendees eager for knowledge got some time to talk to the speakers; Mahesh Salaria, technology evangelist, Kayako Support Systems, Gurgaon, spoke about ‘Test Driven Development And Automation’. (L to R) Joel Divekar, general manager-information systems, People Interactive (I) Pvt Ltd, Mumbai, spoke about ‘Deploying Linux in an Enterprise’; Janardan Revuru, project manager, HP, Bengaluru, spoke about ‘Developing Offline Web Applications Using HTML5 Local Storage’; Yogesh Girikumar from CSS Corp spoke about ‘Swift’ during the OpenStack Day at OSI 2012. (L to R) Chaithra M G, senior member technical staff, Oracle India Pvt Ltd, spoke about ‘MySQL 5.6 Optimizer Improvements’; Kunal Deo from VMware spoke about ‘Inside The Open Source XNU Kernel’; Syed Armani talked about 'Ceilometer' on the OpenStack Day at OSI 2012.
For U & Me
Overview
Some Tips on How to Use Open Source Technology for a Successful Business

While adopting OSS makes a lot of sense for both start-ups and enterprises, firms are warned to tread with caution. Choosing the right business model and working around the various types of licences are some of the challenging parts of the OSS journey.
Today, it would be hard to find software companies refraining from using open source technology. OSS has grown and evolved to a stage where it is almost omnipresent. Even tech start-ups in India are venturing into the market with open source-based solutions. Diksha P Gupta from OpenSource For You spoke to several established firms that are doing business using open source technology and compiled some tips based on their experiences in the industry. Read on:
Keep constant track of the latest technologies
Technology changes rapidly and when it comes to open source, it changes even more rapidly. OSS is supported by communities across the globe, which ensures that it changes at a much faster pace than proprietary software. So awareness is the key to sustaining oneself in this market. Raj Mathur, founder
of Kandalaya, a consulting firm in the GNU/Linux, network application integration and network security domains, highlights the importance of keeping abreast of all the latest developments. Mathur says, “You should be aware of what technologies are available in different fields even if you are not very conversant with them. Those whose job is basically to architect and implement solutions for their clients should be aware of the latest and emerging technologies. For instance, when you are architecting a solution in the proprietary software domain for a mail server, you would use one base technology and all the other technologies come along with it. But when you are dealing with open source technology, you have to choose each component individually and make all the components work together. You should be aware of what components are available, as well as their strengths, weaknesses and special features. You should also
know which technologies will fit together for a solution that’s appropriate for your client and you have to know how to join all of them together. These are the things that you need to be aware of when you are dealing with open source. “It is very easy to get stuck with one technology in open source also, which is fine, as one can become an expert in that technology. But eventually, what happens is that clients end up losing out on new technologies, or rather, better technologies, which may be more suitable for their requirements because the service provider is not aware of them. And the awareness comes from interactions with the community—whether it is through RSS feeds, forums or mailing lists,” he adds.
Be open to adopting new technologies
Mathur asserts, “The most important thing while consulting in open source is that you should be more open to new technologies than you would be in the proprietary software world, because technology doesn’t change so fast in the case of the latter. In the open source world, the technologies get enhanced and mutate really fast.”
Be innovative
Over a period of time, it has been noticed that people offering standardised open source solutions do not do as well in the market when compared to those who offer customised solutions. Abhishek Datt, chief technology officer, Taashee Linux Services, says, “I think one should not offer open source solutions that are available off-the-shelf or those that can be downloaded. Instead, one should try to build expertise on particular platforms, customise them and either sell them as solutions, or perhaps implement them and support them. Ideally, it should be something that one has built upon and it should bring some value to the customer. What I have seen is that merely telling the customers that you can support a particular solution doesn’t work anymore. One has to add value to the solutions. For example, if you are using Alfresco, you can use it as a platform and build an application on top of it. It impresses customers more and it also gives you an opportunity to expand your business, as satisfied clients will call you often for support. I think start-ups should look for solutions and actually build on top of them rather than telling customers that, ‘We are champs in this particular technology and we will support it.’ One has to always come up with a value addition.” Making premium modules of open source applications also helps in better revenue generation. Charles Sudarshan Jayawardhane, business consultant, Virtusa Corp, Sri Lanka, explains, “Since open source applications are given free of charge, the revenue margins are very low or next to nothing for the companies. Often, it may even get difficult to cover the costs incurred in the business. Hence, building premium modules on open source applications always helps in revenue generation. If you have built an application which is well
thought of and is the exclusive offering of your company, it will definitely be worth a buy for the clients.”
Stay connected with the community
The best way to spread the word about your business is via the community, which is also the best resource when it’s time to learn the latest in the technology world. Baskar Selvaraj, chief executive, LinuXpert Systems, says, “As an entrepreneur, it is good to be associated with and participate in all the FOSS events, activities and LUG meetings. It gives your business visibility. I interact with the community often on various platforms to share my experience and knowledge. This helps me in making more contacts both within and outside the community, and it further helps in the growth of my business. I believe that promoting the ideology of GNU/Linux and the value of FOSS always helps in promoting the open source business.”
Be ready with reference architecture
Buyers of OSS solutions like to see some implementations before they actually invest in them. Datt emphasises, “I think a lot of people are interested in open source but they are unsure of how to use it; for instance, which of the many available options they should use and which are best avoided. If you are planning to offer a solution, keep certain reference architectures of solutions that have been proven and that work all the time. For example, if you are coming up with a service for a hospital, you should offer something on a proven architecture, and make clear and precise suggestions rather than giving your client hundreds of options. If you are going to build an open source travel and ticketing system, you should factor in the needs of the clients and offer them a customised solution suited to their needs.”
Understand licences
Licences are complicated, yet it is extremely important to be on the legal side of things for open source businesses as well. Unfortunately, people in the FOSS world tend to take this important bit for granted. If you have plans to do business with OSS, you should master the finer points of licensing, or hire someone who is a master in this domain. Mathur explains, “Ideally, your tryst with open source technology can be of three types. You can deploy open source solutions in your organisation, consult people deploying open source solutions in their organisations, or develop open source solutions. Familiarity with the licensing procedures is extremely important in all three cases. If not, you may end up working with OSS platforms that have incompatible licences and encounter legal problems at any stage.”

By: Diksha P Gupta
The author is assistant editor at EFY. When she is not exercising her journalistic skills, she spends time in travelling, reading fiction and biographies.
Insight
Developers
Integrating Android with JQuery Mobile

The Android market is growing steadily. Every day, new technologies and new frameworks are being introduced. Currently, an application’s beauty, creativity and interactivity are as important as its functionality in order for it to provide the best user experience. Among the several Android development platforms, JQuery Mobile can make your application amazingly beautiful. With a user interface system that works seamlessly across all mobile device platforms, it provides a very feature-rich development environment. This article demonstrates the use of JQuery Mobile with PhoneGap.
My system has the following software: the Linux distro is the Ubuntu 10.10 32-bit desktop version; I use Eclipse Indigo, and my Android version is 2.2 (though this can be applied to 2.3 also). The PhoneGap version is 0.9.3 and JQuery Mobile is version 1.2.0. The information and images in this article are with respect to this configuration.
Getting started
The framework provides a set of touch-friendly UI widgets and includes list-views, buttons, form elements, transitions, themes, pages, dialogue boxes, toolbars and an AJAX-powered navigation system to support animated page transitions. To get started with JQuery Mobile, let’s try a simple HTML page. In your text editor, enter the page template (index.html) given below, save it and open it in a browser:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>My Page</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="jquery.mobile-1.2.0.min.css"/>
<script src="jquery-1.8.2.min.js"></script>
<script src="jquery.mobile-1.2.0.min.js"></script>
</head>
<body>
<div data-role="page">
<div data-role="header">
<h1>My Title</h1>
</div><!-- /header -->
<div data-role="content">
<p>Hello Phonegap</p>
</div><!-- /content -->
</div><!-- /page -->
</body>
</html>
To help newbies understand the template, let me explain a little more in detail. In the head, a meta viewport tag sets the screen width to the pixel width of the device. The stylesheet is added since it is needed to implement JQuery effects. We reference the JQuery and JQuery Mobile scripts to add their functionality. As a normal page, it contains a header, body and footer. Here, the data-role="page" wrapper signifies that a page is going to be delineated. The data-role="header" signifies a header tag, while data-role="content" signifies a body for the whole content. Inside this content container, you can add any standard HTML elements—headings, lists, paragraphs, etc. Let’s proceed to build, implement and run this JQuery-enabled HTML page in the Android emulator.
Building and configuring
The first thing is to download and install the Android SDK—the software development kit that provides the necessary tools for development, and comes in different versions for Windows, Linux and Mac OS. Then we need to install an IDE—I use Eclipse. Download and install Eclipse, then launch it and open its workbench. Download and install the ADT plug-in for Eclipse. Next, download the PhoneGap package—the cross-platform framework used to create package files to install your application on different mobile OSs like Android, iOS, Blackberry, etc. Also download the JQuery Mobile pre-defined min.js and min.css files used in our index.html page. Now that we have all the ingredients, we can start with integration. Start Eclipse and choose your workspace. Go to File > New > Android Project. Specify a Project Name (e.g., My First Project), and a Build Target—you can choose from 2.2, 3.2, 4.0, etc. I chose 2.2. Specify an Application Name—I used MyFirstApplication; then a Package Name—the name must start with com (e.g., com.firstProject), and an Activity Name—the Java file name (I used MyFirstActivity). Click Finish. Now the new project should be shown in the Project Explorer, as in Figure 1.

Figure 1: Project structure

To integrate the downloaded PhoneGap with this project, create a new directory named libs. Open your PhoneGap archive and from the Android folder, copy the .jar file. Paste it in the newly-created libs folder of your project. Under the project’s assets folder, create a new www folder. Copy the .js file from PhoneGap’s Android folder into the new www folder. We also need to set the build path. Right-click on the project > Build Path > Configure Build Path > Libraries > Add Jar and add the PhoneGap .jar file. Now paste all the downloaded JQuery Mobile files in the www folder. Let’s move on to the implementation. We can embed our code in a simple HTML page in the www folder; just copy and paste the index.html file that we created earlier into the www project folder. Next, edit MyFirstActivity.java and add the following:

import com.phonegap.DroidGap;

public class MyFirstActivity extends DroidGap
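Putting the pieces together, the finished activity looks roughly like the following. This is a minimal sketch based on the PhoneGap 0.9.3 DroidGap API used in this article (the loadUrl() call is explained next); treat the sample bundled with PhoneGap as the authoritative reference:

import android.os.Bundle;
import com.phonegap.DroidGap;

public class MyFirstActivity extends DroidGap {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Load the JQuery Mobile page from assets/www instead of
        // calling setContentView() on a native layout
        super.loadUrl("file:///android_asset/www/index.html");
    }
}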
Replace setContentView() with super.loadUrl("file:///android_asset/www/index.html"); and save. To run it in the emulator, right-click on Project and Run As > Android Application. One more thing is needed to add the emulator: Windows > AVD Manager > specify any name > Ok. The running project in the emulator looks like what’s shown in Figure 2. Congratulations! You are now a mobile developer.

Figure 2: Hello World

JQuery Mobile special features

I’m sure that you’re anxious to see the special features in action; so let’s try each one out in the following sections.
List-views: Let’s just add some code to our template inside the data-role="content" div:

<ul data-role="listview" data-inset="true" data-filter="true">
<li><a href="#">Agriculture</a></li>
<li><a href="#">Animal</a></li>
<li><a href="#">Astronomy</a></li>
<li><a href="#">Building</a></li>
<li><a href="#">Business</a></li>
</ul>

In this code, the three attributes used for creating the list view are:
1. data-role="listview", which will create the view as a list, with the <li> items;
2. data-inset="true", which will provide a bit of margin and rounded corners within the content area; and
3. data-filter="true", which will create a search bar.
Running the project now will give results like in Figure 3. This was a basic list-view; other kinds will customise your list accordingly. You can create various list-views using the following:
• Numbered list: Uses the <ol> ordered list tag (sketched after this list).
• List dividers: These can be implemented using data-role="list-divider" or using data-autodividers="true" (also sketched after this list).
• List with count bubbles: Wraps the item with a class ui-li-count:

<li><a href="#">Agriculture
<span class="ui-li-count">86</span>
</a></li>

• List with thumbnails:

<li><a href="#">Agriculture
<img src="icons/picture.png" alt="Australia">
</a></li>

• List with icons: For an icon, the item should be bound with class ui-li-icon:

<li><a href="#">Agriculture
<img src="icons/picture.png" class="ui-li-icon">
</a></li>
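The numbered list and divider variants mentioned in the list above follow the same pattern. Here is a short illustrative sketch (the item names are placeholders of my own, not from the demo):

<ol data-role="listview" data-inset="true">
<li>First step</li>
<li>Second step</li>
</ol>

<ul data-role="listview" data-inset="true">
<li data-role="list-divider">A</li>
<li><a href="#">Agriculture</a></li>
<li><a href="#">Animal</a></li>
<li data-role="list-divider">B</li>
<li><a href="#">Building</a></li>
</ul>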
Buttons and widgets: Just as we created various lists using different attributes, in order to create basic link-based buttons, the attribute used within <a href=""> is data-role="button". For a disabled button, add a class="ui-disabled" attribute.
Form buttons: The framework automatically converts any button or input element with a type of submit, reset, etc; there is no need to add a data-role attribute for this—you can directly call the Button plug-in just like any JQuery plug-in, as follows:

$('[type="submit"]').button();

By calling the plug-in, you will get the enhanced buttons. To prevent a form button from being converted into an enhanced button, add the data-role="none" attribute, and the native control will be rendered.
Button plug-in based buttons: We can create buttons using normal HTML also, such as:

<button>Click Me</button>
<input type="submit" value="Submit button">
<input type="reset" value="Reset button">

Creating buttons with icons: We can add buttons with some pre-defined icons, which helps to specify the use of the button in a better way. For example:

<a href="index.html" data-role="button" data-icon="delete">Delete</a>

Various types of icons can be embedded, like the left and right arrows. You can check the various types at http://jquerymobile.com/demos/1.2.0/#/demos/1.2.0/docs/buttons/buttons-icons.html. Some buttons can be seen in Figure 4.

Figure 3: List and checkboxes

Figure 4: Buttons, the slider, searchbox and radiobutton

Form elements
Input types: The framework supports various input types. Some of them are given below:

<label for="basic">Text Input:</label>
<input type="text" name="name" id="basic" value="" />

Text-areas: This element should be wrapped inside data-role="fieldcontain":

<label for="textarea-a">Textarea:</label>
<textarea name="textarea" id="textarea-a"></textarea>

Search input: For creating the search bar, input type="search" can be used:

<input type="search" name="search" id="search-basic" value="" />

The slider:

<form>
<label for="slider-0">Input slider:</label>
<input type="range" name="slider" id="slider-0" value="25" min="0" max="100" />
</form>

Radio buttons:

<fieldset data-role="controlgroup">
<input type="radio" name="radiobutton" id="button1" value="1" />
<label for="button1">Male</label>
<input type="radio" name="radiobutton" id="button2" value="2" />
<label for="button2">Female</label>
</fieldset>

Checkboxes:

<label>Subscribe</label>
<label><input type="checkbox" name="checkbox1" /> Linux For You</label>
<input type="checkbox" name="checkbox2" id="checkbox2" class="custom" />
<label for="checkbox2">Electronics For You</label>
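To see how these pieces fit together, you can drop several widgets into the content container of the page template from the beginning of this article. The following is an illustrative sketch of my own (the form's action URL is a placeholder):

<div data-role="page">
<div data-role="header"><h1>Feedback</h1></div>
<div data-role="content">
<form action="#" method="post">
<label for="name">Name:</label>
<input type="text" name="name" id="name" value="" />
<fieldset data-role="controlgroup">
<input type="radio" name="gender" id="male" value="1" />
<label for="male">Male</label>
<input type="radio" name="gender" id="female" value="2" />
<label for="female">Female</label>
</fieldset>
<input type="submit" value="Submit" />
</form>
</div><!-- /content -->
</div><!-- /page -->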
By practising with these widgets, you can build a basic yet attractive UI. Lots of things like themes, transitions, toolbars, dialogue boxes and pop-ups are still left; I will cover them in the next part of this article. I am also exploring the JQuery Mobile framework in depth these days, and will share my findings with you. Suggestions and queries are always welcome.

By: Anupriya Sharma
The author has just graduated and is currently working in the Android department of a reputed MNC. She loves Android and iOS development, yet still manages some time for cooking, dancing and her all-time favourite, shopping. You can contact her at anupriyasharma2512@gmail.com.
For U & Me
Let's Try
Use Fedena to Manage Your School
Fedena is an open source tool developed by Bengaluru-based Foradian Technologies that can automate the tasks of managing a school. This article covers its features, with detailed installation instructions.
A number of open source software and tools are available for many of our day-to-day tasks, and they are on par with their commercial counterparts in terms of features and efficiency. If smartly harnessed, the open source revolution can quite brilliantly serve almost every need of yours. I encountered this awesome open source school management software when I was looking for some open source automation tools—I couldn't resist a test drive. At first, it seems a lot of work to find and evaluate a tool, but if the tool is as impressive as Fedena, it’s not a bad deal to invest a few hours investigating it, even if it means skipping a few cups of coffee. If you run or manage a school, then this tool can do a lot for you in easing the burden of administration. So now it’s time to see Fedena in action. I installed it on a Windows 7 machine. You can find installation instructions at Fedena’s website, http://projectfedena.org/, but a few steps are missing, so I’ll suggest an installation procedure to make your life a bit easier. It would be unfair if I didn’t thank the dedicated Fedena community, which assisted me in figuring out these missing steps. Fedena is based on Ruby on Rails, so you’ll have to install Ruby. Download it from http://rubyforge.org/frs/download.php/72085/rubyinstaller-1.8.7-p302.exe and run the installer. Now click the Windows icon and click Start Command Prompt with Ruby. In this prompt, issue the command gem install bundler --remote. Next, run gem install win32-open3. Next, install MySQL—get the required package from http://downloads.mysql.com/archives/mysql-5.0/mysql-essential-5.0.90-win32.msi and run the installer. Copy C:\Program Files\MySQL\MySQL Server 5.0\bin\libmySQL to C:\Ruby187\bin and restart the MySQL service. Download Fedena from http://projectfedena.org/download/fedena-bundle-win and extract it. (I extracted it to the C drive.) You’ll get a directory called release. Run cd c:\release to go to the Fedena source directory (modify it if it's different on your system). Run bundle install --local. Open C:\release\config\database and under the development line, just update your MySQL username and password. Then run bundle exec rake db:create to create the database; bundle exec rake db:migrate to populate the database with data; and bundle exec mongrel_rails start to start the mongrel server. Now Fedena should be successfully installed, and it’s time to launch your browser and visit 127.0.0.1:3000.
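For quick reference, the whole set-up described above boils down to this session at the Ruby command prompt (the paths and credentials are the ones used in this article; the rem lines are comments):

gem install bundler --remote
gem install win32-open3
cd c:\release
bundle install --local
rem Edit config\database with your MySQL username and password first
bundle exec rake db:create
bundle exec rake db:migrate
bundle exec mongrel_rails start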
Figure 1: Fedena main screen
You will see the login screen—log in with the username admin and password admin123. You’ll see the main screen (Figure 1) with modules like Admission, Examinations, Attendance, and more. All functionality has been categorised into modules, and you can perform your desired operation with just a few clicks. Fedena comes in both free and commercial versions; obviously, the commercial version has more features. The Fedena website has a link to a demo Fedena site where you can try it out before installing it locally. Here, you’ll find three additional modules (Hostel, Library and Transport) that are only in the paid-for version (see Figure 2). Fedena is highly customisable to your needs, with its plug-in friendly architecture. You can install various plug-ins to extend its functionality, and you can also write your own plug-ins. You can also integrate it with Moodle, another good open source learning management tool, and BigBlueButton, an open source video conferencing solution. Fedena is so feature-rich that I suggest you have a look at its complete feature list on the site—bet you’ll be impressed. Fedena can dramatically reduce the time and effort required to maintain a school. It is used by the Education Department of the Government of Kerala to automate about 15,000 schools in the state, and I am sure that what Fedena can do for those 15,000 schools, it can also do for you. So grab this cool tool, test-deploy it, and feel the power of open source. What I like most about Fedena is that it has been developed by an Indian firm. Recently, I read an article stating that India contributes very little to the open source community in comparison to what it consumes. Products like Fedena show that we can develop some high-quality open source products, and also that the future of open source is quite bright in our country. With more and more developers joining the open source movement, we can surely turn into big contributors to the open source ecosystem.
Figure 2: Fedena demo
By: Vinayak Pandey
The author is a Red Hat Certified Engineer on RHEL6. He spends most of his time exploring open source tools and technologies, and fine-tuning his Linux machine. Always keen to master new technologies, he's currently engaged in getting a grip on Python.
Open Gurus
Insight
There must be a lot of people out there who love trying out things with Android, and who spend a lot of time experimenting with it. Here’s something for those who love creating things with FOSS and electronic prototyping boards.
Arduino is an open source, single microcontroller electronics prototyping board with easy-to-use hardware and software. It was developed in 2005 by Massimo Banzi and David Cuartielles. Arduino is capable of interacting with the environment by receiving inputs from a broad range of sensors and responding by sending outputs to various actuators. The Arduino board consists of 8-bit Atmel AVR microcontrollers, in addition to which, the boards have a standard way of connecting the CPU with various other complementary components to increase its functionality through a number of add-ons called shields. An Arduino board can be built by hand or can be purchased (pre-assembled) from http://arduino.cc/en/Main/Buy, depending on your needs.
Installing and working with Arduino

The open source Arduino environment can be downloaded for Windows, Linux or Mac OS X from http://arduino.cc/en/Main/Software; as the environment is written in Java, make sure you have Java installed. When you are finished with the installation, here's a program to start off with Arduino programming: the 'Hello World' of physical computing. For microcontrollers that don't have a display device, an LED is added as the output. So just start the Arduino software, select your board model and enter the following code:

int ledPin = 13;                 // LED connected to digital pin 13

void setup() {
  pinMode(ledPin, OUTPUT);       // sets the digital pin as output
}

void loop() {
  digitalWrite(ledPin, HIGH);    // sets the LED on
  delay(1000);                   // waits for a second
  digitalWrite(ledPin, LOW);     // sets the LED off
  delay(1000);                   // waits for a second
}
Once you’ve typed in the code, connect your board via USB, and upload the program to it. As the LED has polarity, you need to fix it onto the board carefully. The long leg, typically positive, should be connected to pin 13, and the short leg to GND (i.e., ground). The LED starts going ON and OFF at intervals of one second, as shown in Figure 1.
Connecting Arduino to Android
To connect your Arduino board to an Android device, you need to have an ‘Amarino toolkit’. Amarino is a project developed at MIT to connect the Arduino and Android via
Bluetooth, and it has been released under GNU GPL v3. The Amarino toolkit consists of three main components:
• an Android application called Amarino;
• the Arduino library called MeetAndroid; and
• the Amarino plug-in bundle (optional).
You can download these toolkit components from http://code.google.com/p/amarino/downloads/list. Moving on, if you want to work with Amarino, you need to have an Android-powered device running version 2.x, though it supports version 1.6 too. Moving on to the Arduino board, you can have a Lilypad or Duemilanove, with a Bluetooth shield such as BlueSMiRF Gold and Bluetooth Mate, or an Arduino BT, which comes with Bluetooth attached to it already. Installing Amarino is simple. After you’ve downloaded the MeetAndroid library, extract and copy it to the Arduino libraries directory. Install the Amarino.apk package to your Android device by downloading directly to it, or by calling adb install Amarino.apk. Make sure your device is connected to the computer via USB, and the PATH is set correctly for the Android SDK Tools directory. Let’s now follow the steps to get the Arduino board connected to Android:
Figure 1: The Arduino LED is blinking
Figure 2: Amarino app homescreen
1. Authentication
Open the installed Amarino application and click Add BT Device to search for your Arduino BT device. Make sure that it is turned ON. But before the two can talk to each other, they must be authenticated. Select the device and confirm pairing with it from the notification bar. Typically, the pin number is 1234, 12345 or 0000. Once your Arduino BT device is authenticated, it’s ready to go. See Figure 2.
2. Creating events
The next thing is to install the plug-in bundle from http://code.google.com/p/amarino/downloads/list (AmarinoPluginBundle.apk), after which you can head to event creation. Start the Amarino application and launch the Event Manager of your Arduino BT device. Click Add Event > Test Event. This is a test event—a demo that sends a random number (0-255) every 3 seconds. Now your Android device is ready to communicate with the Arduino board, so let’s set up the latter.
3. Setting up Arduino
Open your Arduino software and select File > Examples > MeetAndroid > Test. When the project opens, make a small edit: change the baud rate of your Bluetooth module from 57600 to 9600, as highlighted in Figure 3. Upload the sketch to the board. If there isn’t an LED on the board, you can fix one over pin 13.
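If you are curious about what the Test sketch does, its receiving side looks roughly like the code below. This is a sketch written from memory of the MeetAndroid API (registerFunction(), receive() and getInt()), with the flag assumed to be 'A' as in the bundled example, so treat the example shipped with the library as the authoritative version:

#include <MeetAndroid.h>

MeetAndroid meetAndroid;
int ledPin = 13;

void setup() {
  Serial.begin(9600);  // must match your Bluetooth module's baud rate
  pinMode(ledPin, OUTPUT);
  // call testEvent() whenever Android sends data flagged 'A'
  meetAndroid.registerFunction(testEvent, 'A');
}

void loop() {
  meetAndroid.receive();  // process incoming Bluetooth data
}

void testEvent(byte flag, byte numOfValues) {
  meetAndroid.getInt();        // read the random number Amarino sent
  digitalWrite(ledPin, HIGH);  // flash the LED on receipt
  delay(1000);
  digitalWrite(ledPin, LOW);
}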
Running the test program
Now, your Android device and Arduino board are ready to talk to each other. In the Amarino application on your Android device,
Figure 3: Arduino ‘test’ program code
click ‘Connect’ to communicate with the Arduino board. As soon as Android connects to the Arduino, it starts sending a random number every 3 seconds, and that lights the LED for 1 second. You can monitor the process by pressing ‘Monitoring’ on the main screen of the application (see Figure 4). Congratulations! You’ve made Android communicate with the Arduino board.
Applications
This concept of connecting Android to Arduino can be very handy in making electronics projects more useful, by increasing their flexibility. It could easily be used in home automation controls, power consumption meters, Bluetooth-controlled robots, managing devices from the computer and much more. There is little doubt that Amarino brings us more power by helping us connect Arduino with Android.

By: Yatharth A. Khatri
The author is a FOSS fan who loves to work on all types of FOSS projects. He is currently doing research on human-computer interaction and is an Android developer too. You can reach him regarding any software issues at yatharth01@gmail.com.
For U & Me
OpenBiz
Fuelled by OSS, redBus Gets Onto the Expressway
For companies contemplating OSS to run their operations, here’s a case study that should convince them about the merits of the open source-based business model. redBus relies heavily on OSS for its ticketing business and all its systems deploy Linux.

Satish Gidugu, chief technology officer, redBus

Today, in India, small- and medium-sized businesses, too, are increasingly opting for open source technology. redBus is a company that deploys open source technology extensively. Right from the software for its tour and travel agents to the operations within its campus, the company prefers to use open source software (OSS) for most of its work. redBus is in the business of ticketing. And one of the most visible channels through which it offers tickets to its customers is its website, redbus.in. This is for customers who want to avail the online payment option and have Internet access. Customers can also book their services via mobile websites that are configured for the iPhone, BlackBerry, as well as Nokia and Android phones. The company is also building mobile apps to facilitate easier transactions for its mobile customers. Apart from these B2C channels, the company also has some B2B channels to expand its business. Satish Gidugu, chief technology officer, redBus, elaborates, “We have an aggregate inventory of about 800 tour and travel operators, as on date. We have developed a platform called the ‘Seat Seller’ that allows agents to buy tickets from us on behalf of their customers. Also, we have around 100 partners, like travel websites, who build their own user interface on top of the Web services provided by Seat Seller.” Gidugu adds, “Apart from this, an important element of our business is the bus operator ecosystem. These operators also need software to run their operations, host their inventory, share their inventory with multiple travel sites and so on. For this, we have built software called BOSS, which stands for ‘Bus Operators Software System’ – a hosted ERP-cum-inventory management solution. A good number of operators that we work with today use BOSS to run their operations. With redBus.in, Seat Seller and BOSS, we cater to all the constituents of the bus industry.”

Most of redBus’ business runs on OSS

Gidugu considers Linux as the backbone of his business. The transaction software and financial software used by redBus are entirely based on Java. He says, “Spring and Hibernate are the technologies that we use. These systems are built on the Java stack because Java has proven credibility in building high-class enterprise products.” The two software products that have helped redBus grow are BOSS and Seat Seller, and both of them are based on OSS. Gidugu explains, “We rely on open source technology extensively to run our business smoothly. BOSS and Seat Seller are completely built on Java, and so are our backend transaction systems and those that interact with our financials. We deploy these solutions on Ubuntu machines on Amazon Cloud. That’s where the biggest use of open source comes in. We also use Pentaho, a business intelligence platform, for our internal business analytics.” Gidugu adds, “Google BigQuery is a Web service that lets you do interactive analysis of massive datasets—up to billions of rows. Though Google BigQuery is a priced service, we use technologies like RabbitMQ (Rabbit Message Queue) fairly efficiently to push almost 4-5 million events a day into Google BigQuery across all the products we have. We then use this data to analyse how the inventory is moving, and so on.” The company employs around 500 people across the country. Needless to say, its employees work on Linux-based machines. Gidugu says, “We have a strong engineering team of over 60 employees, apart from those dealing with operations. We build all our products in-house. We manage the cloud on our own, as well as the development, deployment and even scaling. All our machines are typically Linux-based. Even within the company, we largely resort to Linux deployment. All the call centre PCs and the laptops that most of our developers use, are Linux-based.” Gidugu finds open source technology a boon for developers. He claims, “One thing that I have found is that as an engineer, my ability to understand a product is significantly enhanced when I look at the source code. With the source code within access, I know what is happening to the software, which is a great thing for any organisation that wants to do things its own way.”
But open source requires a meticulous approach
of engineering effort, it’s the best choice. At the end of the day, my job is to find optimal engineering solutions to business problems. For example, on the database side, we use MySQL. The databases for all our products are based on pure MySQL deployments.” Companies remain undecided about deploying open source technology in their set-ups largely because they face a shortage of appropriate talent. redBus seems to be an exception. When asked about this, Gidugu says, “We do get the right kind of talent that we require. We are not running a pure services company and we are trying to stay ahead of the industry in terms of the technology curve as well. So what we do is, look for smart engineers. Typically, that means people who are able to work across technologies and have a general knowledge of the complex engineering that is needed. Even otherwise, we have never faced a situation in which we could not hire people. There is enough talent available. You just need to pick and choose the best. Another thing people should keep in mind is the licensing aspects of OSS. That is something we are very careful about when we select a particular open source technology. It depends on whether you want to use the technology for a commercial or non-commercial use. It is not easy to understand all the licensing issues around the open source model. OSS licensing has been simplified to a certain extent in recent times but I would say that one still needs to pay attention to this before you make an OSS choice.” By: Diksha P Gupta The author is assistant editor at EFY.
For U & Me
Career
A Career in the Cloud
With cloud computing promising to transform India’s business landscape, it’s time to give your career a head start in this fast-evolving domain.
Refashioning their conventional business models, enterprises both big and small are increasingly embracing cloud-based solutions to accelerate their business’ growth, cost-effectively. No wonder the fast-evolving domain of cloud computing is opening up new vistas in the employment market. If a study by Microsoft is anything to go by, cloud computing will create some 14 million new jobs across the globe by 2015, and India alone will generate over 2 million. As the trend of cloud computing sweeps across India, building a career in this area can be a good decision.
Why cloud computing is hot in India
“Cloud computing is all set to revolutionise the Indian IT job market. The need for new talent to help these companies take on cloud technology will grow, thus pushing the demand for cloud experts in the next two years. People are gradually waking up to its potential and getting themselves enrolled in training institutes for a course in cloud computing. The curriculum in cloud computing generally deals with the basics, cloud computing mechanisms, cloud security threats and ways to combat it, and more,” quips Niraj Agrawal, director, Mindscripts Technologies Training Solutions, Pune.
Of emerging roles and the required skillsets
Cloud computing is not only changing the business dynamics of the globe, but also altering the nature of jobs. In the process, new roles in the cloud domain are slowly making their presence felt, says Venkat Ramavath, director, Data Point Infotech, Bengaluru. “Interesting roles like that of a cloud architect, cloud security specialist, cloud developer, etc, are emerging in this terrain. A cloud architect will work in sync with the requirements of the client, delivering and managing the cloud infrastructure for them. A cloud security specialist will need to understand the security models and ensure the safety of the business’ data in the cloud. A cloud developer designs applications and deploys them on various platforms,” says Ramavath. The emerging roles in cloud computing also call for redefined skillsets, says Kunal Kumar, cloud computing specialist and VMware-certified professional. “The need for a strong foundation in programming languages, like Java, .Net and C++, and a basic understanding of product development and software applications remains the same. But IT professionals must now upgrade their skills to also focus on business development and consumer-oriented requirements, as this will help the companies improve their cash flows,” elaborates Kunal.
Developers
Contest
The CodeChef.com Challenge—your monthly dose of coding puzzles from India’s biggest coding contest, now in print!
Solve this month’s puzzle, and you stand the chance of being one of the three lucky people who could win a cash prize of Rs 1,000 each!
The October edition’s puzzle:
Here are the conditions of the magic number:
1. If the magic number was a multiple of 2, then it was a number from 50 through 59.
2. If it was not a multiple of 3, then it was a number from 60 through 69.
3. If the magic number was not a multiple of 4, then it was a number from 70 through 79.
What was the magic number?

The solution:
1. Condition 1 eliminates all multiples of 2 except those from 50 to 59. Therefore, the numbers eliminated are 60, 62, 64, 66, 68, 70, 72, 74, 76 and 78.
2. Condition 2 indicates that if the number was not a multiple of 3, it was a number from 60 through 69. This eliminates 50, 52, 53, 55, 56, 58, 59, 71, 73, 77 and 79.
3. Condition 3 indicates that if the number was not a multiple of 4, then it was a number from 70 through 79. This eliminates 51, 54, 57, 61, 63, 65, 67 and 69.
4. The remaining number, 75, satisfies all three conditions:
• It is not a multiple of 2, and so it does not have to be a number from 50 through 59.
• It is a multiple of 3, and so it does not have to be a number from 60 through 69.
• It is not a multiple of 4, and so it is, necessarily, a number from 70 through 79.
Hence, the answer is 75.

And the winners this month are:
• Saba Chaudhary
• Paranthaman
• Wasim Mohammad
Here’s your CodeChef ‘Puzzle of the Month’: Our chef is fond of collecting international and Indian postage stamps, and only he knows how many stamps he has. Once, when asked by one of his friends about the number of stamps he owned, he replied as follows: “If I divide the stamps into two sets, viz., international and Indian, then 5 times the difference between the number of stamps in each set equals half of the total number of stamps. Also, I have more Indian stamps than international ones. And half of the difference between the squares of the number of stamps in the Indian and international sets is 1620.” Can you help the friend find out how many international and Indian stamps the chef has collected?
Looking for a little extra cash? Solve this month’s CodeChef Challenge puzzle and send in your answer to codechef@efyindia.com or lfy@codechef.com by November 15, 2012. Three lucky winners get to win some awesome prizes, both in cash and other merchandise! You can also participate in the contest by visiting the online space for the article at https://www.facebook.com/LinuxForYou.
About CodeChef: CodeChef.com is India’s first non-commercial, online programming competition, featuring monthly contests in more than 35 different programming languages. Log on to CodeChef.com for the November Challenge that takes place from the 1st to the 11th, to win cash prizes of up to Rs 20,000. Also, keep visiting CodeChef and participate in the multiple programming contests that take place on the website throughout the month.
Interview For U & Me
“Intel involves a majority of developers with its initiatives”
—Narendra Bhandari, director, Intel Software and Services Group, Intel South Asia
Intel is one of the biggest contributors to open source technology. It is an important member of The Linux Foundation, and is one of the most significant contributors to the Linux kernel as well. But when the company designed its latest platform, Clover Trail, it chose Windows over Android or Linux. This move generated a broad range of reactions within the community, and people have begun to think that Intel is opting out of Linux and open source technology. Diksha P Gupta from OpenSource For You caught up with Narendra Bhandari, director, Intel Software and Services Group, Intel South Asia, to understand the company's strategy on open source technology, its initiatives for developers in India, and the idea behind leaving Linux out while designing Clover Trail. Excerpts:
Q: What is Intel's strategy around open source technology?
The world knows that Intel is one of the major contributors to the Linux kernel. We contribute in a major way to Android as well. In fact, we are putting in a lot of energy into Android, given the fact that the platform is getting increasingly popular these days. I think our contribution to open source projects speaks volumes about our ideology around open source technology.
Q: Does Intel do Android-related development in the Indian R&D facility as well?
Not for the base platform, but in cases where we take the platform and make it available for devices, like the Lava Xolo phone that was released recently, we work to customise it. The development is basically split between multiple sites. The Indian facility also has an important role to play in that.

Q: How does Intel view the development of open source technology in India?
Overall, our strategy with open source has been pretty clear for almost 10-15 years. We are continuously investing in the base of open source like the kernel and all the other layers around it. We are a significant contributor to it. We try to push as many tools as we can on all environments, as much as possible. The community, from an application perspective, also has access to a lot of our technologies online. We have given the community SDKs and tools, and we continuously update these with all the platform changes that happen. We drive a strategy where the developers write for multiple OS platforms, including the open source distributions. Our objective is to get them as much benefit from the hardware angle as well. For example, if they are writing for a particular OS with Intel architecture, there are multiple libraries and multiple layers available, which are optimised for our architecture, and the developers can take advantage of that. More specifically in India, I think, from a developer programme perspective, we go across to the developers' hot-spots in the country, provide them the access and get them the information. As you are aware, the community is based on a self-driven model. People decide what they want to do and how they want to contribute as individuals and organisations. We provide the productivity tools so that they can go and contribute. I don't track how much contribution has gone from one country or the other. The community doesn't necessarily record any such details anywhere in the world. It is primarily focused on contribution.
Q: What is your opinion about the Indian market, and how does Intel plan to grow in this open source technology market?
We broadly look at the developers' base in the country. There are certain very clear trends in the developers' community of the country. If you look at the last three-four years, the contributions from India in the form of applications for a variety of platforms and a variety of OSs have definitely increased manifold. Three to four years ago, the interest in our developer programmes came mainly from people probably doing services or a small set of applications. We are now seeing a shift where a lot more people are trying to build intellectual property, which can be in the form of a game, a business application, something that makes it easy for consumers to educate themselves, something which could increase their productivity or something which could be just pure fun. What we are seeing is a pretty significant contribution from the developers based in India. Many of them use different components of technologies from across the ecosystem and the community. So, if you bring those two elements together, I think it is relatively safe to assume that there is a lot of usage and interest in what is available in open source for the developers. The other factor that you think about in today's environment is the turn-around time for applications or the cycle time for applications—to be conceptualised, built and made available in the market—this has shrunk in a big way. Hence, the developers are thinking along the lines of how they can get their app or piece of code in the store or onto any other platform they choose to deploy it, as fast as they can. Then, they look at all possible tools in the environment.
Getting involved with Intel's initiatives for developers
Q: Does Intel have any programme that involves developers with its platforms?
Basically, all the way from Atom to Xeon, we use these processors to build a variety of platforms. There are Atom-based platforms going into a variety of clamshell form factors, into embedded systems, and into phones and tablets. Then, there are Core-based platforms, which are used in the ultrabook category, and then there are the server platforms. For each of these platform levels, we have SDKs, tools, communities, events, contests, hackathons, mentoring incentives for start-ups, etc—pretty much helping developers in the engineering space and, as much as possible, in the business space too. There is a very strong level of participation from Intel Capital, where we invest in companies from India. So, we have a spectrum of offerings for developers from India, from trying to get them a basic compiler or the SDK, to getting them to our business channel and potentially getting them access to small capital to get them started, and eventually get equity investments. We have been doing this consistently and we have grown in this domain over the last few years. To answer your question, depending upon what their target market is, developers get various benefits from Intel, right from the platform to the SDKs, tools, etc. How they choose to sell depends upon whom they are trying to sell to. If a company is building an Android application, its predominant choice would be Google Play when it comes to selling the app. But we give developers tools to make the applications a lot more productive on our architecture.
Q: What are your initiatives for developers in India, particularly for open source developers?
As far as our developer programmes are concerned, we work with the top high performance computing (HPC) installations in the country, where most of them work with a variety of open source projects. This is beyond supporting the community initiatives, which include making sure that online communities are available. We also work with academic institutions to provide them the tools. SDKs are provided to academia in a very different model. If you move higher, you find all the HPC installations. You know, HPC is typically a customised open source workload. We practically work with all the large HPC installations across the country to make sure that their open source workloads are adapted to our platforms. At a worldwide level, if you pick the top 5-7 most popular workloads, most of them are constantly getting optimised for our architecture. For instance, Hadoop is very well optimised for our architecture and the developers have access to it. Practically, anyone and everyone who takes Hadoop and works in it, deploys it or optimises and builds applications on top of it, gets the benefit of our open source investments. If you notice, the level of abstraction in the open source world is moving higher and higher; at the same time, all our experts are constantly investing in making sure that the kernel is more robust, and that new features come in the mobile space. We have some of the best emulators in the Android space as well.
Q: If I plan on a start-up and want to reach out to Intel, how should I go about it?
You can call a bunch of people at Intel Capital and they can help you. There are some Intel Capital representatives based in India who are really active within the industry. They are Skyping, they are listening, they are connected to the industry in every way—they work with co-investors, constantly trying to provide good opportunities. There is a constant flow of ideas and, depending upon the state of the company, the appropriate resources are provided. If yours is a three-person start-up, which is coming up with an application for ultrabooks around education, you may not be ready for Intel Capital, but you are definitely in for my developers’ programme where you will be helped to turn your idea into a concept and data. Start-ups have access to hardware from me, they have access to tools from me, they can attend my developers' forums and join the community to reach a stage where they can design a prototype, work with some potential customers and get the product to the revenue-earning stage. When they reach the stage at which they need capital to grow further, that is the appropriate time for them to talk to Intel Capital. This does not mean Intel Capital doesn't talk to small start-ups. They help a lot of people and guide them. Their value to the community is not just the investments but also the guidance or strategic value that they provide.
Q: Does that mean Intel has something to offer to all developers, be it a small start-up or a large set-up?
I certainly hope so. I am working with companies that comprise just five people, all the way to the equity investments stage. I hope this is covering a majority of developers. In the process, maybe we will learn that we are missing out on a few and find out what else can be done. Our fundamental approach and strategy is to drive new usages which are relevant to our environment on a variety of hardware platforms. So if there is something that is helpful for a teacher to help students learn the variables of an equation faster, we would love to bring that into the market place. If there is some application that helps businesses to balance their books faster, especially relevant to the Indian environment, we would love to support that. Our objective is to get more and more folks to build applications, not necessarily for Indian environments but preferably for Indian users.
Q: What are your strategies to involve students from colleges?
We have had a relatively robust programme in academia, fundamentally to build awareness of what is happening in the industry and what students can draw from that – for instance, our science contests are pretty famous. We work with curriculum bodies and colleges directly. We provide them hardware tips. Some years ago, we went to colleges and spoke to students about multi-core or parallelism in technology, and wanted to get these concepts introduced into their curriculum. It was too early and people were not aware about something like this at all. Today, we can see multi-core smartphones and devices, and they are being advertised for parallelism in computing. We went around and gave curriculum we developed to over 200 colleges across the country at that time. We are doing a lot with the embedded community on how to build embedded applications. There are some planned efforts and then there are some impromptu initiatives as well, to make the student community benefit from our technology and programmes.
Q: How does a developer contact you to participate in your initiatives?
There are several mechanisms for that. First of all, get online. That is where Intel's developers' zone comes into the scene. This is something which we have re-launched, simplified and tried to make into a simple, focused office. If you want to know what Intel is doing for developers, go to software.intel.com. You can be a part of Intel's developer zone depending upon your interests, technology, etc. If you feel the need to get more benefits, you can contact our partner programme. We have contests available for different parts of the country. My team was a part of around 20-30 events in the past three months; many of them are hosted by the industry and a few of them are hosted by Intel, and we invite the developers. There is a constant flow of developers coming to us and seeking help in various ways. People use the social media like Facebook pages, Twitter, emails, and catch up at events. We also make a conscious effort to go out and talk to people doing incubation for developers. There are many companies that host events for developers and we are present on such occasions.

Clover Trail: Intel's controversial child!

Q: Let's touch upon the most controversial Intel product being talked about these days, Clover Trail. Does it support Linux and Android?
To begin with, we are working to bring Windows 8. From the tablet platform perspective, our focus was to bring out the platform with the Windows 8 experience. But more and more evolution is scheduled to happen in the coming days. We will bring support for other platforms depending upon the need and demand. For over 15 years, we have always first come out with platforms, and then, over time, added Linux drivers, Linux interfaces and so on. The important thing now, from a consumption perspective, is to look at what kind of trends show up. We have to see whether people are choosing the Windows 8 offering or going down the path of the Android eco-system. We have got the expertise. We have more open source talent than probably any other company. We have got the platforms. I think it is going to be market driven as well. But, for now, Clover Trail is coming with Windows 8.
Q: Some reports claim that Intel has ruled out the possibility of any open source technology on Clover Trail.
Clover Trail is an Intel Atom SoC platform in the tablet space. Like other products at Intel, there is a roadmap for the Atom SoC platform, where we will see support for open source technology. As to when that will happen, I cannot comment right now, but it is surely coming. We will have to see how the market dynamics change. At the same time, the form factor also plays a major role in how the platform will evolve.
Developers
How To
package com.ndk;

public class fibb {
	public static long fibbJava(int n) {
		if (n == 0 || n == 1) {
			return 1;
		} else {
			return (fibbJava(n-1) + fibbJava(n-2));
		}
	}

	//Load the Native code module
	static {
		System.loadLibrary("fib");
	}

	//Prototype for the Native implementation
	public static native long fibbNative(int n);
}

I have now written a simple Java program (fibb.java) that computes Fibonacci numbers recursively. The next step is to generate a C header file based on this Java file; you can do that quickly using Java's standard javah tool, with the command:

javah -cp src -d jni com.ndk.fibb

Note: To use javah, you need to install the Java JDK.

If you look in the jni folder, you will have a header file there, named com_ndk_fibb.h, which is generated by the javah tool. It will contain standard JNI signatures that are generated using a naming convention that indicates there is a part of the code defined in Java—for example:

JNIEXPORT jlong JNICALL Java_com_ndk_fibb_fibbNative
(JNIEnv* env, jclass obj, jint n);

The above code tells you that a method called fibbNative must be exported, so that it can be called by Java. Notice that the return value is jlong, which stands for Java Long, a standard JNI integer value. The variable env in the parameter list is a pointer to the virtual machine environment in which the code is executing. The variable obj of type jclass gives you the class in which this function is defined, while the jint parameter n is the argument to the fibbNative function. Let us write the native C version of the above, and call it fib.c. Place it in the jni folder:

#include "com_ndk_fibb.h"

long fibbNative(int n) {
	if (n == 0 || n == 1)
		return 1;
	else
		return fibbNative(n-1) + fibbNative(n-2);
}

// Signature as generated by the header file.
JNIEXPORT jlong JNICALL Java_com_ndk_fibb_fibbNative
(JNIEnv* env, jclass obj, jint n) {
	return fibbNative(n); //Signifies that fibbNative should return the value
}

Now that I have written the C version of Fibonacci, let's create an appropriate Makefile for building the required shared library—to build the native code into a module so that Java can call the native function. The Makefile (which must be placed in the jni directory) should be named Android.mk:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := fib
LOCAL_SRC_FILES := fib.c
include $(BUILD_SHARED_LIBRARY)

The Makefile is a part of the standard Android make system. All it does is take the source file fib.c and build it into the module called fib. Now that your Makefile is ready, you are ready to build your NDK app. Navigate to the project directory and run the command ndk-build (if you added the NDK directory to your system path—else give the absolute path to the extracted NDK's ndk-build binary). If the build is successful, you will now have a libs folder with the shared library stored in it. Run a project clean, so that the files generated by ndk-build are recognised by your development environment. Now, finally, you need a simple Android application to try out the library we just built—say, with a button that, when pressed, calls the shared module to compute the 15th Fibonacci number:

package com.ndk;

import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;

public class MainActivity extends Activity implements OnClickListener {
	TextView t1;
	Button b1;

	@Override
	public void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.activity_main);
		b1 = (Button)findViewById(R.id.button1);
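		// The listing breaks off at this point in this copy of the article; the
		// lines below are a minimal, assumed completion so that the activity
		// compiles and demonstrates the call into the native library. The widget
		// id R.id.textView1 is an illustrative assumption, not from the original.
		t1 = (TextView)findViewById(R.id.textView1);
		b1.setOnClickListener(this);
	}

	@Override
	public void onClick(View v) {
		// Invoke the native method exposed by the 'fib' shared module.
		t1.setText("Fibonacci(15) = " + fibb.fibbNative(15));
	}
}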
For U & Me
Let's Try
Blogging the Way Hackers Do, with Octopress
There are a plethora of popular blogging platforms available to choose from—Blogger, WordPress and Posterous, to name a few. These have achieved considerable mainstream success, and host millions of blogs online. Lots of people adore these options, but for a hacker who finds fancy GUIs and bloated features distracting, Octopress is a good choice.
With the innumerable existing platforms, you might be tempted to consider Octopress as ‘yet another’ blogging framework. This is definitely not the case, since Octopress is fundamentally different from other frameworks in its philosophy and approach towards blogging.
Octopress is built with Jekyll, a static site generator. The idea behind Jekyll is very simple and elegant. It takes a template directory as an input, runs a parser on it, and generates a simple static HTML website. This provides Octopress with several advantages, but restricts it to a website with static content. However, this is not much of a restriction for blogging, as there is hardly any dynamic content associated with the task. With Octopress, there is no code and database involved, and no caching and scaling issues to deal with—and, as you will discover as you explore further, with GitHub Pages, no hosting fee to pay as well! You get a powerful, easy to customise, easy to migrate, mobile-responsive blog. Last, but not the least, it gives you the satisfaction of looking at your blog posts as text files, and not as table attributes in a database. Octopress allows you to blog from the comfort of your favourite text editor and shell. If the CLI is your bread and butter, the experience will be amazing. However, if you are not comfortable with the Git workflow or with working on the command line, Octopress might not be the best choice for you.

Getting started with the installation

To install Octopress, you need Git and Ruby 1.9.3 (installed using rbenv or RVM) on your machine. Installing the packages is a no-brainer; you can find lots of tutorials on the Web to assist you. The first step now is to clone the repository from GitHub. Fire up your terminal and run the following commands:

git clone git://github.com/imathis/octopress.git octopress
cd octopress # If using RVM, you will be asked whether to trust the .rvmrc file; simply say Yes.
ruby --version # Should say Ruby 1.9.3

Now you need to install the dependencies. Grab bundler, which helps in managing package (known as gems in Ruby) dependencies for a Ruby application:

gem install bundler

Once bundler is installed, you can install all dependencies for Octopress, with the following commands:

rbenv rehash # Only if using rbenv and not RVM
bundle install

Moving on to installing the default Octopress theme, you need to run an included rake task. Rake is a build tool that, in the Ruby world, substitutes for the likes of make and ant:

rake install

This task copies the classic Octopress theme into the source and the sass folders of your installation. Octopress is now successfully installed.
Blog set-up and third-party configuration

The only file you need to touch for configuration is the _config.yml file in the root of your Octopress directory. The structure of the file is self-explanatory. Here is a partial snippet:

title: Nitish Upreti
subtitle: “Musings on Code, Technology and the Entrepreneurial world.”
author: Nitish Upreti
github_user: Myth17
twitter_user: nitish
googleplus_user: 110654724815061438236
disqus_short_name: nitishsweblog

In addition to the basic settings, you can edit third-party settings for integration with services like GitHub, Twitter, Google+ and other social platforms. To enable blog comments, Octopress also supports the Disqus platform, out of the box. That’s pretty much it, to get you started! You can experiment with other values based on your choices and preferences for your blog.
Blogging basics: Getting to know ‘Markdown’

Octopress uses Markdown as the default format for creating content. So before you can get started with writing your first blog post, you should spend a minute getting comfortable with Markdown. Quoting John Gruber, the author, “Markdown is a text-to-HTML conversion tool for Web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format; then convert it to structurally valid XHTML (or HTML).” In simple words, Markdown allows you to write formatted text without worrying about cryptic mark-up tags. The focus thus shifts entirely to the content. A complete Markdown reference can be found at http://daringfireball.net/projects/markdown/.
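For a quick taste of the syntax, here are a few illustrative lines of Markdown (the post content is invented for this example):

# My first post

Some *emphasised* text, some **strong** text, and a
[link to LinuxForYou](http://www.linuxforu.com/).

* A bullet point
* Another bullet point

When the site is generated, these few lines become the corresponding h1, em, strong, a and ul/li HTML elements.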
Continuing with creating a new post, rake tasks help you here as well:

rake new_post[“My first Post”]

A new post file is created in the source/_posts folder; it has some YAML Front Matter (for Jekyll processing), and you can append the content of your blog post after it:

---
layout: post
title: “My first Post”
date: 2012-09-02 10:07
comments: true
categories: [HelloWorld, Beginning]
---

An About page is very common in all blogs. You can add it as follows:

rake new_page[About]

This creates a new directory About inside the source directory, with a file named index.markdown. This file can be edited for inserting content in the page. To add the About link in your home page, simply append the following code snippet to source/_includes/custom/navigation.html:

<li><a href=”/about”>About</a></li>

To generate and preview the blog, run the following commands:

rake generate
rake preview

This starts a local Webrick server on your machine. Point your browser to http://localhost:4000/ for a quick preview.

Deploying your blog on the Web with GitHub pages

Your blog is ready for the world to see. The good news is that GitHub (social code hosting that hackers love) allows you to host your Jekyll-based site for free. I assume you already have a GitHub account; if not, sign up at https://github.com/. You will be making use of GitHub User/Organization Pages to host your blog. Create a new GitHub repository as username.github.com; in our case, it was myth17.github.com. Make sure to follow this naming convention, else you will end up with an HTTP 404 Not Found error. In this new repository, the master branch hosts the _deploy folder and serves your blog. The source for the blog is based on the source branch. To set up the instance with GitHub, run the following rake task:

rake setup_github_pages

You will be asked to enter your GitHub pages’ repository URL, which is git@github.com:Myth17/myth17.github.com.git, in our case. You might even be willing to point your own domain to GitHub pages. This is easy to achieve. Under your source directory, create a new file named CNAME, and type in your custom domain. In this case, I will have it as niti.sh. Depending on whatever registrar your domain is with, point an A record to the IP address 207.97.227.245. Finally, to deploy the blog, issue the following commands:

rake generate
rake deploy

At this moment, you will receive an email from GitHub, with the subject Page build successful. In this particular instance, the blog has been successfully set up on myth17.github.com and niti.sh.
Don’t forget to commit your blog changes to the source branch with Git. This is as simple as using the following code:

git add .
git commit -m ‘First blog Post’
git push origin source

Themes and plug-ins for Octopress

Octopress is easy to customise with themes. A list of the popular options can be found at the project’s GitHub wiki: https://github.com/imathis/octopress/wiki/List-Of-Octopress-Themes. Let’s say you like one of the available themes called DarkStripes, and want to install it. The installation is as simple as issuing the following code:

cd octopress
git clone git://github.com/amelandri/darkstripes.git .themes/darkstripes
rake install[‘darkstripes’]
rake generate

Octopress also has lots of plug-ins, and ships with the ones that are most commonly used. The entire list can be found at http://octopress.org/docs/plugins. The Gist Tag plug-in is quite commonly used. With this plug-in, sharing a GitHub gist is as simple as:

{% gist 3549411 %} #where 3549411 is the gist id

What lies ahead

Octopress is powerful and customisable to a great extent. In this introduction, we have barely scratched the surface. The entire reference documentation for Octopress can be found at http://octopress.org/. Brandon Mathis, the creator of Octopress, has his own blog at http://brandonmathis.com/, which is a great example of a completely customised Octopress instance. Hope to see more and more hackers blogging with Octopress and having a lot of fun with it!
By Nitish Upreti The author is a Rubyist at heart. A technology start-up enthusiast and lover of classic rock, he tweets as @nitish and blogs with Octopress at http://niti.sh/.
Let's Try
Developers
The Semester Project—Part VII
The File System in Action
This article, which is part of the series on Linux device drivers, gets the complete SIMULA file system module in action, with a real hardware partition on your pen drive.
Let’s first take a look at real SFS. The code, available from http://linuxforu.com/article_source_code/nov12/rfs_code.tar.bz2, gets to the final tested implementation of Pugs’ and Shweta's final semester project. It contains the following:
• real_sfs.c—the code of the earlier real_sfs_minimal.c, plus the remaining real SIMULA file system functionalities.
• real_sfs_ops.c & real_sfs_ops.h—the earlier code, plus the additional operations needed for the enhanced real_sfs.c implementation.
• real_sfs_ds.h—almost the same file as in the previous article, plus a spin lock added into the real SFS info structure, to be used for preventing race conditions in accessing the used_blocks array in the same structure.
• format_real_sfs.c—the same file as in previous articles; the real_sfs formatter application.
• Makefile—contains the rules for building the driver real_sfs_final.ko using the real_sfs_*.* files, and the format_real_sfs application, using format_real_sfs.c.
With all these and the details provided earlier, Shweta completed the project’s documentation. And so, finally, Shweta and Pugs were all set for their final semester
project’s demo, presentation and viva. The highlights of their demo (in a root shell) were as follows:
• Loading the real_sfs_final driver: insmod real_sfs_final.ko
• Using the previously formatted pen drive partition /dev/sdb1, or re-formatting it using the format_real_sfs application: ./format_real_sfs /dev/sdb1. Caution: Please check out the detailed steps for these procedures from the previous article, before you actually format it.
• Mounting the real_sfs formatted partition: mount -t real_sfs /dev/sdb1 /mnt
• Browsing the mounted file system, using the usual shell commands like ls, cd, touch, vi, rm, chmod, etc.
Figure 1 shows the real SIMULA file system in action.
Behind the scenes
And if you really want to know what enhancements Pugs added to the previous article's code to get to this level, it is basically the following core system calls, as part of the remaining 4 out of 5 sets of structures of function pointers (in real_sfs.c):
1) write_inode (under struct super_operations)—sfs_write_inode() basically gets a pointer to an inode in the VFS'
inode cache, and is expected to sync that with the inode in the physical hardware-space file system. That is achieved by calling the appropriately modified sfs_update() (defined in real_sfs_ops.c, adapted from the earlier browse_real_sfs application). The key parameter changes are passing the inode number instead of the filename, and the actual timestamp instead of the flag for its update status. Accordingly, sfs_lookup() is being replaced by sfs_get_file_entry(), and additionally, the data blocks are now being freed (using sfs_put_data_block()), if the file size has reduced. Note that sfs_put_data_block() (defined in real_sfs_ops.c) is a transformation of the put_data_block() from the browse_real_sfs application. And finally, the underlying write is achieved using write_to_real_sfs(), a function added in real_sfs_ops.c, which is very similar to the read counterpart, read_from_real_sfs() (already there in real_sfs_ops.c), except for the direction reversal of the data transfer, and marking the buffer dirty to be synced up with the physical content. sfs_write_inode() and sfs_update() are shown in the code below. For the others, check out the file real_sfs_ops.c.

Figure 1: The real SIMULA file system module in action

#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,34))
static int sfs_write_inode(struct inode *inode, int do_sync)
#else
static int sfs_write_inode(struct inode *inode, struct writeback_control *wbc)
#endif
{
	sfs_info_t *info = (sfs_info_t *)(inode->i_sb->s_fs_info);
	int size, timestamp, perms;

	if (!(S_ISREG(inode->i_mode))) // Real SFS deals only with regular files
		return 0;

	size = i_size_read(inode);
	timestamp = (inode->i_mtime.tv_sec > inode->i_ctime.tv_sec) ?
		inode->i_mtime.tv_sec : inode->i_ctime.tv_sec;
	perms = 0;
	perms |= (inode->i_mode & (S_IRUSR | S_IRGRP | S_IROTH)) ? 4 : 0;
	perms |= (inode->i_mode & (S_IWUSR | S_IWGRP | S_IWOTH)) ? 2 : 0;
	perms |= (inode->i_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) ? 1 : 0;

	printk(KERN_INFO "sfs: sfs_write_inode (i_ino = %ld): %d bytes @ %d secs w/ %o\n",
		inode->i_ino, size, timestamp, perms);

	return sfs_update(info, inode->i_ino, &size, &timestamp, &perms);
}

int sfs_update(sfs_info_t *info, int vfs_ino, int *size, int *timestamp, int *perms)
{
	sfs_file_entry_t fe;
	int i;

	if (sfs_get_file_entry(info, vfs_ino, &fe) == -1)
	{
		return -1;
	}
	if (size) fe.size = *size;
	if (timestamp) fe.timestamp = *timestamp;
	if (perms && (*perms <= 07)) fe.perms = *perms;

	for (i = (fe.size + info->sb.block_size - 1) / info->sb.block_size;
		i < SIMULA_FS_DATA_BLOCK_CNT; i++)
	{
		if (fe.blocks[i])
		{
			sfs_put_data_block(info, fe.blocks[i]);
			fe.blocks[i] = 0;
		}
	}

	return write_to_real_sfs(info, info->sb.entry_table_block_start,
		V2S_INODE_NUM(vfs_ino) * sizeof(sfs_file_entry_t),
		&fe, sizeof(sfs_file_entry_t));
}
2) create, unlink, lookup (under struct inode_operations)—All the three functions sfs_inode_create(), sfs_inode_unlink() and sfs_inode_lookup() have the two common parameters (the parent's inode pointer and the directory entry for the file in consideration), and these respectively create, delete and look up an inode corresponding to a directory entry.
sfs_inode_lookup() basically searches for the existence of the filename underneath, using the already coded sfs_lookup() (in real_sfs_ops.c). If it is not found, it then invokes the generic kernel function d_splice_alias() to create a new inode entry in the underlying file system for the same, and then attaches it to the directory entry dentry. Otherwise, it just attaches the inode from the VFS' inode cache (using the generic kernel function d_add()). This inode, if obtained fresh (I_NEW), needs to be filled in with the sfs_lookup()-obtained file attributes.
In all the above implementations and in those to come, a few basic assumptions have been made, namely:
a) Real SFS maintains mode only for the user, and that is mapped to all three of the user, group and other of the VFS inode.
b) Real SFS maintains only one timestamp, and that is mapped to all three—the created, modified and accessed times—of the VFS inode.
sfs_inode_create() and sfs_inode_unlink() correspondingly invoke the transformed sfs_create() and sfs_remove() (defined in real_sfs_ops.c and adapted from the earlier browse_real_sfs application), for respectively creating and clearing the inode entries in the underlying hardware-space file system, apart from the usual inode cache operations, using new_inode() + insert_inode_locked(), d_instantiate() and inode_dec_link_count(), instead of the earlier learnt iget_locked() and d_add(). Apart from the permissions and file entry parameters, sfs_create() has an interesting transformation from user space to kernel space: time(NULL) to get_seconds(). And in both sfs_create() and sfs_remove(), the obvious user space lseek() + write() combo has again been replaced by write_to_real_sfs(). Check out all the mentioned code pieces in the files real_sfs.c and real_sfs_ops.c.
3) The address-space operations readpage, write_begin, writepage and write_end (under struct address_space_operations) are basically to read and write blocks on the underlying file system, and are achieved using the respective generic kernel functions mpage_readpage(), block_write_begin(), block_write_full_page() and generic_write_end(). The first one is prototyped in <linux/mpage.h> and the remaining three in <linux/buffer_head.h>. Now, though these functions are generic enough, a little thought will show that the first three of these would ultimately have to do a real SFS-specific transaction with the underlying block device (the hardware partition), using the corresponding block layer APIs. And that is exactly what is achieved by the real SFS-specific function sfs_get_block(), which is being passed into and used by the first three functions mentioned above.
Defined in real_sfs.c, this function is invoked to read a particular block number (iblock) of a file (denoted by an inode) into a buffer head (bh_result), optionally fetching (allocating) a new block. So for that, the block array of the corresponding real SFS inode is looked up, and then the corresponding block of the physical partition is fetched using the kernel API map_bh(). Note that to fetch a new block, we invoke sfs_get_data_block() (defined in real_sfs_ops.c), which is again a transformation of the get_data_block() from the browse_real_sfs application. Also, in case of a new block allocation, the real SFS inode is also updated underneath, using sfs_update_file_entry(), a one-line implementation in real_sfs_ops.c. The code below shows the sfs_get_block() implementation.
static int sfs_get_block(struct inode *inode, sector_t iblock,
	struct buffer_head *bh_result, int create)
{
	struct super_block *sb = inode->i_sb;
	sfs_info_t *info = (sfs_info_t *)(sb->s_fs_info);
	sfs_file_entry_t fe;
	sector_t phys;

	printk(KERN_INFO "sfs: sfs_get_block called for I: %ld, B: %lld, C: %d\n",
		inode->i_ino, iblock, create);

	if (iblock >= SIMULA_FS_DATA_BLOCK_CNT)
	{
		return -ENOSPC;
	}
	if (sfs_get_file_entry(info, inode->i_ino, &fe) == -1)
	{
		return -EIO;
	}
	if (!fe.blocks[iblock])
	{
		if (!create)
		{
			return -EIO;
		}
		else
		{
			if ((fe.blocks[iblock] = sfs_get_data_block(info)) == -1)
			{
				return -ENOSPC;
			}
			if (sfs_update_file_entry(info, inode->i_ino, &fe) == -1)
			{
Overview
For U & Me
Welcome to the Qurified World
QR codes are those two-dimensional grids of square pixels, featured prominently in some car advertisements. They look like tiles from some board game the Martians left behind. Today, they are being used to point mobile browsers to URLs, snap up contacts for address books, record geolocation tags, or designate Wi-Fi hot spots. Creating your own QR code, or decoding one with your webcam, is very easy with no-fuss FOSS tools.
The QR or Quick Response code is a two-dimensional matrix code that is read with an image capture device rather than a linear scanner. Originally invented by the Toyota subsidiary Denso Wave about 20 years ago, to track the progress of automobiles in the manufacturing process, it found its way into everyday use thanks to its ability to store more information, and virtually any kind of it, compared to a linear bar code. The QR code is an ISO standard today. It is also licence-free. Though Denso Wave holds patent rights, it has chosen not to exercise them.
Creating and decoding QR codes
All of the following code creators work on Ubuntu 12.04 LTS with the latest updates applied. Your mileage may vary depending on your distribution. Qreator: While Qreator can create QR codes as you type in your content, it cannot decode QR codes. It's an excellent first application to familiarise yourself with QR codes, and is available in the Ubuntu repositories. To get going, click the New button and then double-click one of the four options of URL, Text, Geolocation, or WiFi Network. Type in your URL or text, and see the square tiles of the QR code dance into place. Click the Save button on the bottom left to save the QR code as a PNG file. Note a couple of things here: the more content you want the QR code to hold, the busier it gets. The busier the QR code, the less immune it is to mutation or distortion. We
have generated the 'http://www.linuxforu.com/' QR code using Qreator, and taken the liberty to annotate it with the content string (using GIMP), but still managed to keep the code readable. The block-level error correction, even at its lowest level, makes the QR code resilient to some degree of distortion or erasure. QtQr: If Qreator can create the code, QtQr can decode it, too. Just drag and drop the QR code image on the application window, and the pop-up window gives the decoded content of the QR code. If you have a printed QR code, you can get QtQr to activate your webcam to capture and decode the QR code. Hit Ctrl+W and a window identifying your webcam will appear, with brief instructions. After you confirm webcam activation, hold the printed code close to the camera, until a green square flashes around the code in the camera image. You will find that a window has already popped up with the content decoded.
At the command-line
The Linux experience is never complete without the command line, so... 'qrencode' is a command-line tool to encode content to QR code. Read the man page, or use the '--help' option to figure out how to use it. The utility has quite a few capabilities, and some die-hard command line jockeys would say that it is more convenient. The only downside is that you have to provide your own encoding of the content. However, some might love the flexibility this offers. The
following command gets you the QR code for the LFY Web page into a PNG file, lfy.png:

qrencode -o lfy.png 'http://www.linuxforu.com'
Decoding QR codes on the command line is a breeze with the Zbar tool suite. It has two tools: 'zbarcam' and 'zbarimg'. The first activates the webcam to capture your printed QR code (hold the printed QR code steady until you see the green square flash around the code in the camera view), and the second reads PNG images to output the content as text. The following command decodes the QR code in the PNG file lfy.png:

zbarimg -q lfy.png
All three command-line tools enable batch-mode 'qurification' of content or decoding of QR codes.
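The same batch idea carries over to Python, if you prefer scripting to the shell. The snippet below is a sketch that assumes the third-party qrcode package (not covered in this article, and not part of the standard library); the URLs are just examples:

import qrcode  # third-party package; an assumption, not covered in the article

urls = {
    'lfy.png': 'http://www.linuxforu.com',
    'efy.png': 'http://www.efyindia.com',
}

for filename, url in urls.items():
    qrcode.make(url).save(filename)  # make() returns an image object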
De facto encoding standards
While the ISO standard mandates the physical encoding standard for the QR Code, there is no formal encoding standard for the various genres of information that need to be encoded: typically biz cards, URLs, geo-locations, email IDs or calendar events. The Japanese company NTT Docomo has devised de facto standards to encode various types of information, leading to the SPARQ Code, which is nothing but the QR code that enforces a content encoding standard. Both SPARQ Code and Zxing projects host standards-compliant encoders on their respective websites (see References).
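To give a flavour of what such a de facto convention looks like, a business card encoded in the widely used MECARD format is just a structured string (the personal details below are, of course, made up):

MECARD:N:Doe,John;TEL:+911234567890;EMAIL:john@example.com;URL:http://example.com;;

Any standards-compliant generator turns such a string into a QR code that address-book applications know how to parse.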
Novel uses
While advertisers have been using QR codes for quite some time to increase 'conversion' rates among target audiences, other novel uses have also created an impact. Many museums show QR codes along with their exhibits, making it easy for visitors to dig up more information on them. QRPedia is another project, where a QR code points to a link on the QRPedia website, which in turn sources information from Wikipedia and customises it for your media consumption device in a choice of languages.
QR codes are a nice way to embellish your resume or business card with. Remember to keep it simple, remain standards-compliant and test your QR code with at least a couple of scanning applications before you deploy it. And lastly, Mom's sage advice: “Don't talk to strangers,” and “Look both ways before you cross the road,” work well when playing around with QR codes too. Malicious QR codes designed to cause harm are very real, so be cautious while snapping up QR codes that appear in dubious places, from sources you do not trust, or those that prompt you to download apps, send SMSs, or make calls.
References
[1] http://en.wikipedia.org/wiki/QR_code
[2] http://en.wikipedia.org/wiki/SPARQCode
[3] https://launchpad.net/qreator: the Qreator page on Launchpad.
[4] https://launchpad.net/qr-tools/+download: the QtQr download page.
[5] http://fukuchi.org/works/qrencode/index.html.en: the qrencode page.
[6] http://zbar.sourceforge.net/: the zbar tool suite.
[7] http://www.sparqcode.com/static/maestro/: SPARQ Code generator.
[8] http://zxing.appspot.com/generator/: the Zxing (zebra crossing) project's standards-compliant QR code generator.
[9] http://qrworld.wordpress.com/2011/10/21/qr-codes-viruses-should-we-panic/: this, and the following link, talk about security threats from QR codes.
[10] http://mashable.com/2011/10/20/qr-code-security-threat/
By: Gurudutt Talgery The author marvels at the liberating ideology of GNU/Linux. He loves to learn and write about open source software on Linux. He has over 20 years of experience in the software technology industry building products and managing teams.
Localisation
Localising User Documentation Previous articles in this series have covered how to localise applications, using desktop and Web-based software. Most software have user documentation, which needs to be localised. This article describes how to use OmegaT, a free computer-aided Translation Memory (TM) tool, to translate documentation.
User manuals are more difficult to localise than help files or the user interface, as they have a lot more text and visuals to help people learn. While it is possible to copy the English version and create a local-language version in a simple editor, making modifications and generating newer versions is not easy. A specialised computer-based translation aid like OmegaT is a good tool for this purpose. Other proprietary but free-to-use tools, like the Google Translator toolkit, can also be used, with certain limitations.
OmegaT is a free TM application written in Java, suitable for professional translators. It supports multiple-file projects, multiple translation memories, glossaries and fuzzy matching. It supports more than 30 document formats, including plain text, .po files, Microsoft, OpenOffice and various mark-up languages. It has a built-in spell checker, and an interface to Google Translate. It supports exporting to the various formats used in localisation tools. Figure 1 shows a sample screenshot of OmegaT. It consists of three sub-windows; the first, on the left, is called Editor—it shows the source components, and also allows updating the corresponding target language translation. The second one, at top right (the match viewer), shows matches from translation memory. The third one, at bottom right, is a glossary viewer that shows the glossary matches.

Figure 1: OmegaT for translating documents

OmegaT operation is based on projects. For each project, the user needs to specify the source language, target language, basis for segmentation (either sentence or paragraph), and the directories for source files, target files, translation memory and glossary files. Once these are specified, all the source files are imported into the project. The user can select a file and do the translation. The translation memory is updated with each change made to the translation. To verify the translation, select Create Translated Documents from the File menu. The target directory will be updated with the translated files. Those can be processed further if required and viewed with an appropriate tool. In order to get a high-quality translation, it is important that the translation memories generated from the user interface and help file localisation are used in OmegaT.
Gedit manual translation
The use of this tool can be demonstrated by localising the Gedit manual from English to Telugu. The Gedit manual was originally written in HTML. More recently, it has been rewritten using Mallard, as I explained in a previous article. For the purpose of this article, let's use the HTML version. The manual is available for download from the GNOME documentation website (http://library.gnome.org/users/gedit/). Download version 2.30.4 to get the HTML version. To populate the translation memory, the Telugu .po file can be downloaded from the GNOME localisation site (http://l10n.gnome.org/vertimus/gedit/master/po/te). With the use of localisation tools like Virtaal, the translation can be exported to the TMX format for use in OmegaT. A glossary can be built by extracting all the one- and two-word strings from the user interface, using Translate Toolkit's utilities, and stored in a CSV format suitable for OmegaT. Create a new project (Figure 2), with segmentation set to sentence boundaries; place the relevant source, translation memory and glossary files in the directories specified in the project—and the set-up is complete. Now, you can select specific files from ‘Project files’ (Figure 3) for translation. Each source segment, and its place-holder for the translation, is shown in the
left sub-window (Figure 1). Potential matches are displayed in the top right window, and glossary matches for words in the source string are shown in the bottom right. Using keyboard shortcuts or the mouse, the closest match can be selected and used as the translation. New target translation can be entered as well. As OmegaT is a Java utility, it has been noticed that the IBus-based input method does not work under Ubuntu; the X keyboard map can be enabled to type the target language text in OmegaT. OmegaT shows the progress statistics of the translation. All source strings except file names used for hyperlinking need to be translated. OmegaT does recognise mark-up tags. Tag verification needs to be done prior to producing the translated documents, to ensure that each opening tag has a corresponding closing tag. Figure 4 shows a sample page of the English Gedit manual and Figure 5 the same in Telugu.

Figure 2: OmegaT new project properties
Figure 3: OmegaT project files
Figure 4: Gedit manual English sample page
Figure 5: Gedit manual Telugu sample page
In this article, we have looked at how a free computer TM tool like OmegaT can be used to translate user documentation. We will explore aspects of localisation style and other related issues in future articles.
For more information
[1] OmegaT website: https://www.omegat.org/en/omegat.html
By: Arjuna Rao Chavala The author is an independent consultant in the areas of IT, program/ engineering management and open source. He was a cofounder and first president of Wikimedia India. He initiated the IEEE-SA standardisation initiative in the software space in India and currently serves as the WG Chair for the global IEEE-SA project P1908.1, ‘Virtual keyboard standard for Indic languages’. He can be reached through his website arjunaraoc.blogspot.in and Twitter ID @arjunaraoc.
Guest Column Exploring Software
Anil Seth
Are We Safe Using Apps on Tablets?
How do various open source mobile platforms protect users from malicious applications?
Recently, I learnt of the availability of Android on EeePC. I dusted off my old EeePC 701 and installed Android 4 from http://www.android-x86.org. It was straightforward—Wi-Fi and the function keys I tried, worked. I was actually curious to see how usable Android would be with a mouse and keyboard. I expected that the absence of a touch screen would make it irritating, but I was pleasantly surprised. I found myself absent-mindedly clicking and dragging a Web page to scroll even on the desktop! It may indeed be better to find an alternate way to select text and use click-and-drag for scrolling the screen's visible area. Multi-touch on the touch-pad would make even more touch-screen gestures, like pinch to shrink or zoom, available without a touch-screen.
However, my exploration of Android and similar environments changed once I visited the applications marketplace. Searching for a Web browser did not show Firefox (it's not compatible with EeePC, and thus did not show up), but lots of other applications. The options for narrowing down the search were only 'free' and 'paid'. There was no separate grouping for open source. The reason for my concern was that before installing an application, the marketplace cautions us about the resources and facilities an application would use, and prompts us to think about whether we wish to allow that. My anxiety was about how I would know whether I could trust an application. In desktop environments, rogue applications use system flaws to access information. However, if a user is consciously running an application, nothing prevents the application from accessing the contents of files owned by the user—unless, of course, the user is using SELinux and has configured it suitably for resources used by each application. Many of us would avoid running a commercial or closed-source application unless we trusted the source, or have no easy alternative—for example, Flash. I am inclined to trust an open source application, believing that it is not
likely to be misleading, whereas I would be concerned about the motivation of the group offering a free, but not open source, application. On the smartphone and tablet platforms, the number of applications available has given them bragging rights, as if the absurd number of applications for a platform makes it more usable! (I would urge people to see the talks by Barry Schwartz and Sheena Iyengar on ted.com!) Instead, I was struck by indecision (more like paralysis) because I was not sure if, by allowing an application access to the disk and network, I was risking exposing my personal information to dubious application makers. Could the application read the user names and passwords stored by the Web browser for various sites? And this was on a system I was using to just test and explore Android! Android is based on Linux, which is secure—but not against the ignorance of users (or perhaps, the politically correct term is 'social engineering'). So, I wondered how Android was handling these concerns, which led me to http://source.android.com/tech/security/
Android—each application is a user
The Android platform takes advantage of the security inherent in Linux by assigning a unique user ID to each Android application. Linux ensures that the resources used by each application are isolated from the others. Hence, permission to access the disk allows it to use the disk, but does not permit it to read any other file of any other application, as that is owned by a different user—i.e., another application, in this context! So, an application is running in a sandbox using the standard capabilities of the Linux kernel. It is a remarkably simple method, and has been known to work based on years of experience on desktops and servers. Applications may need to share data, and may do so using the standard Linux mechanisms, subject to the security policies. However, Android includes a new IPC mechanism, which is managed by the Android
environment. In particular, the ContentProviders mechanism provides access to the data stored on the device. An application can use data provided by another application using the ContentProvider mechanism, or expose its own data using this mechanism. When a user installs an application, the permissions dialogue will ask the user to agree to grant the permissions needed by the application. It is done once only, so as not to irritate the user. Overall, Android seems to provide a reasonable environment so that running applications from unknown sources is not as risky as I had first feared.
MeeGo—extended access control
I wasn't looking for MeeGo. I was actually interested in KDE Plasma Active (http://plasma-active.org/). I like the Plasma netbook environment, and have been looking forward to the Plasma Active environment on tablets. I was surprised to find that a distribution available for testing, provided by http://plasma-active.basyskom.com/, is based on MeeGo for the Intel platform. It is still a work in progress, but I was curious about how security issues are handled by MeeGo. MeeGo also uses the standard Linux environment. However, user-level security is not enough to isolate resources used by an application. The MeeGo access control framework introduced two new types of credentials— a Resource Token and Application Identifier. The D-Bus interfaces for a real system object are protected, and an application needing a resource needs to have the credentials to access those interfaces. Access control via D-Bus is used on desktops as well. For example, on modern Linux desktops, the decisions regarding who may update a system, who may use NetworkManager to configure Wi-Fi, etc, are managed using the PolicyKit framework. An application can create new resource tokens and control access to its sensitive resources. The Application Identifier is generated by the package manager, and remains unchanged during the life of the application. The additional access control mechanisms are implemented using the SMACK (Simplified Mandatory Access Control Kernel) module in the mainline kernel. Any system object, e.g., a file, will be automatically protected with additional authentication by the kernel if it has a SMACK label.
Firefox OS—HTML5
A draft of the Firefox OS security model is available on developer.mozilla.org. The key difference is that it will allow HTML5 applications to integrate with the device's hardware using JavaScript.
A side effect is that the need for numerous applications created specifically for a mobile platform would disappear. The design of Firefox OS assumes that data transfer is slow, expensive, and has a monthly limit (a common scenario in many countries, including India). Users are likely to keep data services disabled, except when they need to carry out some transaction.

Even though (as Mark Zuckerberg stated) Facebook made a strategic error in betting heavily on HTML5 rather than native applications, the time for HTML5 will come. Mozilla hopes to implement the needed HTML5 APIs in Firefox OS, so that the Facebook experience becomes a thing of the past. Firefox OS may indeed be the ideal platform for India. It would certainly make it easy to select open source applications rather than those that are merely 'free'. A good place to check the current status of HTML5 support in your browser is http://html5test.com/.

The Firefox OS security model references the documentation of system hardening by Chromium OS, which is also an environment for running Web applications. As in the case of MeeGo, the key concept is the principle of least privilege, and it is implemented using mandatory and role-based access controls. A number of alternatives are available in the kernel, including SMACK, SELinux and TOMOYO. The core functionality needed by Firefox OS is available.

We may conclude that securing a user's or each application's data against accidents or the malicious intent of other applications is a primary design consideration for mobile platforms. Each environment tries to ensure that an application runs effectively in some type of a sandbox. Sharing of data or objects between applications has to be specifically requested and authorised. While security policies are more complex in these environments than on traditional PCs, mobile OSs try to avoid presenting users with unnecessary technical details. As long as a user does not grant unnecessary access rights to an application, the various mobile platforms provide a pretty safe environment for a user to install and experiment with new applications.
By: Anil Seth
The author is currently a visiting faculty member at IIT-Ropar, prior to which he was a professor at Padre Conceicao College of Engineering (PCCE) in Goa. He has managed IT and imaging solutions for Phil Corporation (Goa), and worked for Tata Burroughs/TIL. You can find him online at http://sethanil.com/ and reach him via email at anil@sethanil.com.
Python, Behave and Mockito-Python

Here's an article that explores behaviour-driven development with Python for software developers interested in automated testing. Behaviour-driven development approaches the designing and writing of software using executable specifications that can be written by developers, business analysts and quality assurance together, to create a common language that helps different people communicate with each other.
This article has been written using Python 2.6.1, but any sufficiently new version will do. Behave is a tool written by Benno Rice and Richard Jones, and it allows users to write executable specifications. Behave has been released under a BSD licence and can be downloaded from http://pypi.python.org/pypi/behave. This article assumes that version 1.2.2 is being used. Mockito-Python is a framework for creating test doubles for Python. It has been released under an MIT licence, and can be downloaded from https://bitbucket.org/szczepiq/mockito-python/. This article assumes the use of version 0.5.0, but almost any version will do.
Our project
For our project, we are going to write a simple program that simulates an automated breakfast machine. The machine knows how to prepare different types of breakfast, and has a safety mechanism to stop it when something goes wrong. Our customer has sent us the following requirements for it: "The automated breakfast machine needs to know how to make breakfast. It can boil eggs, both hard and soft. It can fry eggs and bacon. It also knows how to toast bread to two different specifications (light and dark). The breakfast machine also knows how to squeeze juice from oranges. If something goes wrong, the machine will stop and announce an error. You don't have to write a user interface; we will create that. Just make easy-to-understand commands and queries that can be used to control the machine."
Note: Example code in this article is available as a zip archive at https://github.com/tuturto/breakfast/zipball/master.
Layout of the project
Let's start by creating a directory structure for our project:

mkdir -p breakfast/features/steps
The breakfast directory will contain our code for the breakfast machine. The features directory is used for storing specifications for features that the customer listed. The steps directory is where we specify how different steps in the specifications are done. If this sounds confusing, don't worry, it will soon become clear.
The first specification
In the case of Behave, the specification is a file containing structured natural language text that specifies how a given feature should behave. A feature is an aspect of a program—for example, boiling eggs. Each feature can have one or more scenarios that can be thought of as use cases for the given feature. In our example, the scenario is hard-boiling a single egg. First, we need to tackle boiling the eggs. That sounds easy enough. Let’s start by creating a new file in the features directory, called boiling_eggs.feature, and enter the following specification:
Feature: Boiling eggs
    as a user
    in order to have breakfast
    I want machine to boil eggs

    Background:
        Given machine is standing by

    Scenario: boil hard egg
        Given machine has 10 eggs
        And egg setting is hard
        And amount of eggs to boil is set to 1
        When machine boils eggs
        Then there should be 1 boiled egg
        And eggs should be hard
It does not look like much, but we are just getting started. Notice how the specification reads almost like a story. Instead of talking about integers, objects and method calls, the specification talks about machines, eggs and boiling. It is much easier for non-coders to read, understand and comment on this kind of text. At this point, we can already try running our tests, by moving into the breakfast directory and issuing the command behave. Behave will try to run our first specification, and will fail because we have neither written steps nor implemented the actual machine. Because Behave is really helpful, it will tell us how to proceed:

You can implement step definitions for undefined steps with these snippets:

@given(u'machine is standing by')
def impl(context):
    assert False

Now we need to define what is to be done when the breakfast machine is standing by and has 10 eggs ready to be boiled. Create the file eggs.py in the steps directory and add the following code in it:

from breakfast import BreakfastMachine
from behave import *

@given(u'machine is standing by')
def impl(context):
    context.machine = BreakfastMachine()

@given(u'machine has {egg_amount} eggs')
def impl(context, egg_amount):
    context.machine.eggs = int(egg_amount)

Steps are the bridge between the specification and the system being tested. They map the natural-language sentences into function calls that the computer can understand. The first function defines what will happen when there should be a breakfast machine standing by. Let's create a new instance of BreakfastMachine and store it in context, which is a special object that Behave keeps track of. It is passed from step to step, and can be used to relay information between them. Eventually, we will use it to assert that the specification has been executed correctly. The second function defines the code that is executed when there is a step 'machine has x eggs', where x can be anything (in our example, it is 10). {egg_amount} is automatically parsed and passed as a parameter to the function, which has to have an identically named parameter. Note that the parameters are Unicode strings, and thus need to be converted to integers in our example.

If we were to run Behave at this point, we would get an error message that BreakfastMachine cannot be imported. This, of course, is because we have not yet written it. It might feel strange to start coding from this end of the problem (specifications and tests), instead of diving headlong into coding BreakfastMachine. The advantage of approaching the task from this direction is that we can first think about how we would like our new object to behave and interact with other objects, and write tests or specifications that capture this. Only after we know how we would like to use the new object do we start writing it. In order to continue, let's create BreakfastMachine in the file breakfast.py and save it in the breakfast directory. This is the class our client asked us to write and which we want to test:

class BreakfastMachine(object):
    def __init__(self):
        super(BreakfastMachine, self).__init__()
        self.eggs = 0
        self.egg_hardness = 'soft'
        self.eggs_to_boil = 0
        self.boiled_eggs = 0
        self.boiled_egg_hardness = None

    def boil_eggs(self):
        pass

Steps to implement a hardness setting for eggs and the amount of eggs to boil are quite similar to setting the total amount of eggs available. The difference is that we are not creating a new BreakfastMachine, but using the one that has been stored in context earlier. This way, we configure the machine step by step, according to the specification. You can keep running Behave after each addition to eggs.py to see what kind of reports it will output. This is a good way of working, because Behave guides you regarding what needs to be done next, in order to fulfil the specification. In eggs.py, add the following:

@given(u'egg setting is {egg_hardness}')
def impl(context, egg_hardness):
    context.machine.egg_hardness = egg_hardness

@given(u'amount of eggs to boil is set to {amount}')
def impl(context, amount):
    context.machine.eggs_to_boil = int(amount)

Up to this point, we have been configuring the breakfast machine. Now it is time to get serious and actually instruct the machine to boil our eggs, and verify afterwards that we got what we wanted. Add the following piece of code to eggs.py:

@when(u'machine boils eggs')
def impl(context):
    context.machine.boil_eggs()

@then(u'there should be {amount} boiled egg')
def impl(context, amount):
    assert context.machine.boiled_eggs == int(amount)

@then(u'eggs should be {hardness}')
def impl(context, hardness):
    assert context.machine.boiled_egg_hardness == hardness

The first step is to instruct our breakfast machine to boil eggs. The next two are to ascertain that the results of boiling are what we wanted. If you run Behave at this point, an error will be displayed, because the machine does not yet know how to boil eggs. Change breakfast.py to the following final version, and the tests should pass:

class BreakfastMachine(object):
    def __init__(self):
        super(BreakfastMachine, self).__init__()
        self.eggs = 0
        self.egg_hardness = 'soft'
        self.eggs_to_boil = 0
        self.boiled_eggs = 0
        self.boiled_egg_hardness = None

    def boil_eggs(self):
        if self.eggs_to_boil <= self.eggs:
            self.boiled_eggs = self.eggs_to_boil
            self.boiled_egg_hardness = self.egg_hardness
            self.eggs = self.eggs - self.eggs_to_boil
        else:
            self.boiled_eggs = 0
            self.boiled_egg_hardness = None

Now we have a passing scenario and some code; so what's next? You could experiment with what we have now, and see if you can write another scenario and try to soft-boil an egg, or boil multiple eggs. Just add a new scenario in boiling_eggs.feature after the first one, leaving an empty line in between. Do not repeat the feature or background sections—those are needed only once; an example scenario follows below.
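For instance, a scenario for soft-boiling a couple of eggs, reusing only the steps we have already implemented, could look like this (the step pattern reads 'boiled egg', so the wording stays singular even for two eggs):

    Scenario: boil soft eggs
        Given machine has 10 eggs
        And egg setting is soft
        And amount of eggs to boil is set to 2
        When machine boils eggs
        Then there should be 2 boiled egg
        And eggs should be soft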
After experimenting for a bit, continue to the second specification.

The second specification
Let's start by reviewing the customer's requirements listed at the beginning of this article. Boiling varying numbers of eggs to different specifications, i.e., hard or soft, is taken care of. Frying and toasting should be easy to implement, since it is similar to boiling eggs. "Just make easy-to-understand commands and queries that can be used to control the machine," catches our attention, and we ask for a clarification from our customer, who sends us an API specification that tells us how their user interface is going to interact with our machine. Since they are still working on it, we've got only the part that deals with eggs. The following is for ui.py in the breakfast directory:

class BreakfastUI(object):
    def __init__(self):
        super(BreakfastUI, self).__init__()

    def eggs_boiled(self, amount, hardness):
        pass

    def error(self, message):
        pass
Their user interface is not ready yet, and their API specification is not completely ready either. But we got some of it, and it is good enough for us to start working with. We can first tackle sending error messages to the user interface. Let's add the following code at the end of boiling_eggs.feature:

    Scenario: boiling too many eggs should give an error
        Given machine has 1 eggs
        And amount of eggs to boil is set to 5
        When machine boils eggs
        Then there should be error message "not enough eggs"

The next step, like before, is to implement new steps in eggs.py:

from breakfast import BreakfastMachine
from ui import BreakfastUI
from mockito import verify, mock
from behave import *

@given(u'machine is standing by')
def impl(context):
    context.ui = mock(BreakfastUI)
    context.machine = BreakfastMachine(context.ui)

...

@then(u'there should be error message "{message}"')
def impl(context, message):
    verify(context.machine.ui).error(message)
We also need to modify our BreakfastMachine to connect to the user interface when it starts up. We do this by modifying the __init__ method in breakfast.py as per the following:

def __init__(self, ui):
    super(BreakfastMachine, self).__init__()
    self.ui = ui
    self.eggs = 0
    self.egg_hardness = 'soft'
    self.eggs_to_boil = 0
    self.boiled_eggs = 0
    self.boiled_egg_hardness = None

There are two interesting bits in eggs.py. The first is in the method where we set the machine to stand by. Instead of using a real BreakfastUI object (which wouldn't have any implementation anyway), we create a test double that looks like BreakfastUI, but does not do anything when called. However, it can record all the calls, their parameters and their order. The second interesting part is the function where we verify that an error message has been delivered to the UI. We call verify, pass the UI object as a parameter to it, and then specify which method and parameters should be checked. Both verify and mock are part of Mockito, and offer us tools to check the interactions or the behaviour of objects. If we run Behave after these modifications, we are going to get a new error message, as shown below:

Then there should be error message "not enough eggs" # features\steps\eggs.py:35
Assertion Failed: Wanted but not invoked: error(u'not enough eggs')

This tells us that the specification expected a call to the method error, with the parameter 'not enough eggs'. However, our code does not currently do that, so the specification fails. Let's fix that and modify how the machine boils eggs (breakfast.py):

def boil_eggs(self):
    if self.eggs_to_boil <= self.eggs:
        self.boiled_eggs = self.eggs_to_boil
        self.boiled_egg_hardness = self.egg_hardness
        self.eggs = self.eggs - self.eggs_to_boil
    else:
        self.boiled_eggs = 0
        self.boiled_egg_hardness = None
        self.ui.error('not enough eggs')

Here we add a call to the UI object's error method, in order to let the user interface know that there was an error, and that the user should be notified about it. After this modification, Behave should run again without errors and give us a summary:

1 feature passed, 0 failed, 0 skipped
2 scenarios passed, 0 failed, 0 skipped
12 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.0s

There is a distinction between these two approaches. In the first one, we wanted to verify the state of the system after it boiled the eggs. We checked that the amount of eggs boiled matched our specification, and that they were of the correct hardness. In the second case, we did not even have a real user interface to work with. Instead of writing one from scratch in order to be able to run tests, we created a test double: an object that looks like a real user interface, but isn't one. With this double, we could verify that the breakfast machine calls the correct methods with the correct parameters.

Now we have the basics of boiling eggs. If you want, you can continue from here and add more features to the breakfast machine, like frying eggs and bacon, and squeezing juice out of oranges. Try and add a call to the user interface after the eggs have been boiled, and verify that you are sending the correct number of eggs and the specified hardness. After getting comfortable with the approaches and techniques shown here, you can start learning more about Behave and Mockito-Python. Good places to start are their download pages, as both offer short tutorials.

In this brief example, we had a glimpse of several different ways of writing executable specifications. We started our project from a natural-language file that specified how we wanted our machine to boil eggs. After this specification, we wrote an implementation for each step, and used that to help us write our actual breakfast machine code. After learning how to verify the state of a system, we switched our focus to verifying how objects behave and communicate with each other. We finished with a breakfast machine that can boil a given number of soft or hard-boiled eggs, and that will issue a notification to the user interface in case there are not enough eggs in the machine.
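One more Mockito-Python facility is worth knowing about as you explore further: besides verifying calls, it can stub return values with when(...).thenReturn(...). A minimal sketch, independent of our breakfast project (the greet method here is hypothetical):

from mockito import mock, when, verify

ui = mock()
# Teach the double what to return for a specific call:
when(ui).greet('world').thenReturn('hello, world')

assert ui.greet('world') == 'hello, world'
verify(ui).greet('world')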
By: Tuukka Turto
The author is a software aficionado.
all threads will die, even if that is not what you intended. Also, you will notice the type of the fourth argument in the pthread_create call, and the return type of the thread's entry routine, whose prototype is void *(*start_routine)(void *). These are void *, which means you can pass any type of data, and also return any type of data when using these functions. I also highly recommend that you always check the return value of the functions. The return value on success for most pthread calls is 0, and a non-zero return value means some error has occurred; in which case, you should take appropriate action. I would suggest you read the manual page (man pthreads) for general information and more details on the functions.
Mutex and condition variables
A common problem arises when you are accessing (say, reading and writing) global data shared between multiple threads. This can cause incorrect output. This is called a race condition, and must be avoided. A classic way to solve this is shown in the pseudo-code fragment below. You can use a mutex, which is a basic form of synchronisation, as follows:

lock_the_mutex(&mymutex)
    critical region
unlock_the_mutex(&mymutex)

The critical region is one where you are accessing or updating the shared data. The POSIX data type for a mutex is pthread_mutex_t. It can be statically or dynamically allocated. I have used '&' since the function requires that we pass the address of the mutex. In actual code, you need to use functions like int pthread_mutex_lock(pthread_mutex_t *mutex) and int pthread_mutex_unlock(pthread_mutex_t *mutex). The mutex here allows you to ensure that at any instant, only one thread will be able to get the lock on the mutex atomically; the other threads get blocked, since the mutex is now locked. Once the first thread releases the mutex, only then can another thread get the lock and proceed. While you are using mutexes, please be cautious of deadlocks, a thread relocking a mutex (without unlocking), and similar problems. Let us look at a code fragment to illustrate one of the problems mentioned:

Thread1 {
    lock_the_mutex(&mutexa);
    lock_the_mutex(&mutexb);
    ...
}

Thread2 {
    lock_the_mutex(&mutexb);
    lock_the_mutex(&mutexa);
    ...
}

Note the reversed order of locking the mutexes. Suppose both the threads complete the first step at almost the same time; what would happen? A deadlock—no thread will be able to proceed, as each is waiting to acquire a lock that the other holds. To solve this type of problem, there is the standard locking hierarchy—i.e., all programs that need mutexa and mutexb must always lock mutexa first and then mutexb. You can also first try to lock all the mutexes, and if any one of the set fails, release all previously locked mutexes; a sketch of this approach follows below. For this, you need to use a function like int pthread_mutex_trylock(pthread_mutex_t *mutex). (You may download the file http://www.linuxforu.com/article source_code/oct12/system programming.zip to experiment with the mutex functions.)
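Here is a minimal C sketch of that back-off idea (the function name lock_both is mine; error handling is omitted for brevity):

#include <pthread.h>
#include <sched.h>

pthread_mutex_t mutexa = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutexb = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both mutexes without risking a deadlock: if the second
 * lock is busy, release the first and start again, so we never
 * hold one lock while blocking on the other. */
void lock_both(void)
{
    for (;;) {
        pthread_mutex_lock(&mutexa);
        if (pthread_mutex_trylock(&mutexb) == 0)
            return;                     /* got both locks */
        pthread_mutex_unlock(&mutexa);  /* back off... */
        sched_yield();                  /* ...and let other threads run */
    }
}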
Let me quickly introduce you to condition variables, which are also quite useful in certain cases. Now let's assume a thread needs to perform some action when a condition becomes true, such as when it is waiting for some data to arrive in a queue, so that it can consume it. This queue is shared among multiple threads. Now you need a solution that is efficient and will reduce waiting or unnecessary polling. There is a mechanism by which one thread can signal another, which is waiting for some condition to be true, through a condition variable. Some simple pseudo-code below will help you understand the benefit:

thread1()
{
    pthread_mutex_lock(&mutex);
    done++;
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&mutex);
}

thread2()
{
    pthread_mutex_lock(&mutex);
    while (done == 0)
        pthread_cond_wait(&done_cond, &mutex);
    /* take some action */
    pthread_mutex_unlock(&mutex);
}
In the above example, there are two threads, thread1 and thread2. There is a condition variable, denoted by done_cond, associated with a mutex variable. (A condition variable is always used in conjunction with a mutex.) thread2 needs to wait till the condition becomes true (here, the condition is the change of the variable done to non-zero). This variable is shared between the threads; in actual cases, this can be a queue, a list or anything. If, suppose, thread2 is scheduled first, it gets a lock on the mutex, checks the condition done == 0 (the value of done has not changed, since thread1 has not been scheduled), and then executes pthread_cond_wait(&done_cond, &mutex). The function pthread_cond_wait puts thread2 to sleep, while unlocking the mutex. So you do avoid polling here. Now, when thread1 gets a chance to run, it locks the mutex, changes the variable done, calls pthread_cond_signal(&done_cond) and unlocks the mutex. Since thread2 was in conditional wait, it can now wake up automatically (the mutex will be locked again) and proceed. You may wonder why the done variable needs to be checked again? This is a recommended practice, since there can be spurious wake-ups too. (You may download the file http://www.linuxforu.com/article source_code/oct12/system programming.zip to experiment with the condition variables.)
Reentrancy and thread safety
When writing code, say a function, for a multi-threaded application, the function must be thread-safe. This means that the function should be logically correct, so that you get proper output when it is executed simultaneously by multiple threads. You need to be careful that your code does not lead to a sudden runtime error, or incorrect results due to threading. These types of errors may not get detected while you are compiling your code, and are usually due to an incorrect design for the problem. Observe the code below:

char *strconvert(char *string)
{
    static char buffer[MAX_STRING];
    int index;

    for (index = 0; string[index]; index++)
        buffer[index] = tolower(string[index]);
    buffer[index] = '\0';

    return buffer;
}
The function does some conversion (here, converting to lowercase) on an input string, and the resultant string is passed back to the caller. This is fine when you are using this in a single-threaded program. But can you use this in a multi-threaded program, where it can be called simultaneously by multiple threads? No, you cannot, which is quite interesting. The function uses a static array, so it can happen that while one thread is running this function, it gets pre-empted, and another thread gets scheduled. The second thread also calls the function. Now, the array buffer is a static one, and hence its state gets changed. So when the earlier thread gets scheduled again, it will access the buffer, which has now changed—and hence yield a wrong output. If you experiment with this code, you may see that you get correct results (for small strings), since the thread running this function did not get pre-empted. In spite of this, you cannot assume that this will work correctly for all cases. Such functions are called non-reentrant functions, and are unsuitable for multi-threaded programs. This is quite a big problem. To solve it, you can change the interface of the function as follows:

char *strconvert_r(char *input, char *output)
{
    int index;

    for (index = 0; input[index]; index++)
        output[index] = tolower(input[index]);
    output[index] = '\0';

    return output;
}
You will notice that the caller allocates the memory here, in the variable output. Even if multiple threads are calling this function, all the variables used are local—so the context is preserved for each call, and no non-deterministic or incorrect result will occur. Also, in production code, you need more checks and appropriate error handling, like checking whether the output string is big enough to hold the (transformed) input string. Doing all this for a very large program is quite a task, and we need a proper design to make robust multi-threaded applications. You will be surprised to know that quite a few of the traditional C library functions are unsuitable for use in multi-threaded programs. In the standard library, you should watch out for non-reentrant functions—such as strtok, which is commonly used for string tokenising. If you use the strtok_r function instead, it is reentrant. The interfaces of strtok and strtok_r differ. In the manual page, I see the warning, "The strtok() function uses a static buffer while parsing, so it's not thread safe. Use strtok_r() if this matters to you."
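As a quick illustration of the reentrant interface, here is a small sketch of strtok_r in action; the parsing state lives in the caller-supplied saveptr instead of a hidden static buffer, so every thread can safely parse its own string:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[] = "one two three";
    char *saveptr;   /* per-caller parsing state */
    char *tok;

    for (tok = strtok_r(line, " ", &saveptr);
         tok != NULL;
         tok = strtok_r(NULL, " ", &saveptr))
        printf("%s\n", tok);

    return 0;
}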
Debugging multi-threaded programs using GDB
Take note of some of the debugger commands that can help you debug multi-threaded programs. I have used GDB release 7.2 on Fedora 15. From the output snippets below, you can see that one breakpoint has been set in the function thread_entry, and then execution has stopped. The info threads command shows the current state of the threads (here, three threads), with an identifier associated with each thread. The * indicates the current thread.

Fedora$ gcc lfy_thread_2.c -o mythread -Wall -g -lpthread
Fedora$ gdb ./mythread
(gdb) set target-async on
(gdb) set non-stop on
The above two GDB settings need to be turned on for debugging, manipulating or examining individual threads under GDB. In non-stop mode, when a thread stops, only that thread is stopped; the debugger does not stop the other threads as well. The implication of target-async on is that instead of waiting for a command to be completed, the debugger gives you the prompt back, and runs the command in the background. In the snippets that follow, I have used 'c&', which means that the current thread can continue executing in the background while GDB can accept subsequent commands.
(gdb) b thread_entry
Breakpoint 1 at 0x804858c: file lfy_thread_2.c, line 33.
(gdb) run
(gdb) info threads
  Id   Target Id                                 Frame
  4    Thread 0xb6feab70 (LWP 2507) "mythread"   thread_entry (arg=0x80487b7) at lfy_thread_2.c:33
  3    Thread 0xb77ebb70 (LWP 2506) "mythread"   thread_entry (arg=0x80487a9) at lfy_thread_2.c:33
  2    Thread 0xb7fecb70 (LWP 2505) "mythread"   thread_entry (arg=0x80487a0) at lfy_thread_2.c:33
* 1    Thread 0xb7fed6c0 (LWP 2502) "mythread"   (running)
(gdb)

In the above snippet, you see that there are four threads in total. (In my code, there are three calls to pthread_create, and one main thread.) GDB associates its own integer numbers 1-4 with the threads. The current thread is marked with a * and is in the running state. The 'arg' mentioned at the right is the argument given to the thread's starting function. Now, you can switch your focus to some other thread by using the GDB command thread threadnumber:

(gdb) thread 2
[Switching to thread 2 (Thread 0xb7fecb70 (LWP 2505))]
#0  thread_entry (arg=0x80487a0) at lfy_thread_2.c:33
33          retstr = strconvert((char*)arg);
(gdb) info threads
  Id   Target Id                                 Frame
  4    Thread 0xb6feab70 (LWP 2507) "mythread"   thread_entry (arg=0x80487b7) at lfy_thread_2.c:33
  3    Thread 0xb77ebb70 (LWP 2506) "mythread"   thread_entry (arg=0x80487a9) at lfy_thread_2.c:33
* 2    Thread 0xb7fecb70 (LWP 2505) "mythread"   thread_entry (arg=0x80487a0) at lfy_thread_2.c:33
  1    Thread 0xb7fed6c0 (LWP 2502) "mythread"   (running)
(gdb)

Notice that Thread 2 has become the current thread. Let's assume you print some value in the current thread, and then continue (c&) the execution of the thread:

(gdb) printf "%s", arg
ABCDEFFG
(gdb) c&
Continuing.
Abcdeffg
(gdb) [Thread 0xb7fecb70 (LWP 2505) exited]

The current thread has exited, and you can now switch focus to the other threads, which are possibly waiting at breakpoints. That Thread 2 has exited can also be seen from the info threads command below:

(gdb) info threads
  Id   Target Id                                 Frame
  4    Thread 0xb6feab70 (LWP 2507) "mythread"   thread_entry (arg=0x80487b7) at lfy_thread_2.c:33
  3    Thread 0xb77ebb70 (LWP 2506) "mythread"   thread_entry (arg=0x80487a9) at lfy_thread_2.c:33
  1    Thread 0xb7fed6c0 (LWP 2502) "mythread"   (running)
(gdb) The current thread <Thread ID 2> has terminated. See `help thread'.
(gdb)
So, in this way, you can walk through a multi-threaded program using GDB. This is quite a good way to learn. I would suggest you download the GDB manual, and read the relevant section too. Also, some of the reference links suggested here are quite good for getting a better understanding of the topic. I hope you grasped the basics of writing and debugging multi-threaded applications. Now you can consider applying multi-threading to some of your existing or future applications.

References
[1] Stevens, W. Richard and Rago, Stephen A, 'Advanced Programming in the Unix Environment', 2nd Edition, Pearson Education
[2] Butenhof, David R, 'Programming with POSIX® Threads', Addison Wesley
[3] Linux manual pages
[4] https://computing.llnl.gov/tutorials/pthreads/
[5] http://red.ht/RLbrcA
[6] http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
Acknowledgement
I would like to thank Tanmoy Bandyopadhyay and Vikas Nagpal for their help in reviewing this article.
By: Swati Mukhopadhyay
The author has more than 12 years of experience in academics and corporate training. Her interests are digital logic, computer architecture, computer networking, data structures, Linux and programming languages like C and C++. You can contact her at swati.mukerjee@gmail.com or http://swatimukhopadhyay.wordpress.com/.
CodeSport
Sandya Mannarswamy

This month's column looks at how software bloat has become increasingly common and is taking a toll on performance.

For the last couple of months, we have been discussing dynamic languages such as JavaScript, and how they differ from traditional statically compiled languages like C or C++. One of the main differences of programming in the Web-centric world of Java, JavaScript, etc, is the increasing use of frameworks and libraries. While this facilitates rapid development, one of the concerns levelled against framework-based development is the excessive software bloat it inadvertently brings in. In this month's column, we take a look at this phenomenon.
Myhrvold’s Laws
All of us have heard of Moore's Law, which states that the number of transistors on a chip doubles approximately every 18 months. This in turn implies that hardware becomes twice as fast and powerful every two years. However, not many of us are familiar with Myhrvold's Laws, which deal with the evolution of software. Nathan Myhrvold, a former CTO of Microsoft, proposed the following:
1. Software is a gas. This essentially means that software, like gas, expands to fill or utilise its container (namely, the hardware). No matter how fast the hardware you get, you can be sure that the next version of your favourite software will use it all up and will demand faster hardware. For instance, the number of lines of code of the Windows Vista operating system in 2008 was almost 150 times as large as that of Windows 3.1 in 1992.
2. Software grows until it becomes limited by Moore's Law. Essentially, as you get faster hardware, software developers keep adding functionality to take advantage of the new hardware, till the Moore's Law induced improvements in the hardware are stretched to their limit.
3. It is the growth of software that makes Moore's Law possible (by making it economically viable). This follows from point (2)—software developers keep adding functionality till the existing hardware cannot meet the performance requirements; the software vendors then start clamouring for faster hardware, which in turn provides the economic motivation for hardware vendors to double up hardware speed.
4. It is impossible to have enough of software. This essentially means that there will always be demand for new functionality, new algorithms and new software paradigms.

In essence, Myhrvold's Laws portray the ever-growing complexity and size of software in terms of greater functionality and newer algorithms, which in turn typically means more code. This is all the more so in the Web-centric decentralised software world of today, wherein programs are no longer monolithic, but consist of modules imported from multiple frameworks and libraries, with a small amount of glue code. This in turn means that the software application you are using is not written by a single developer, but by multiple developers who are probably not aware of the exact use to which their library or framework will be put. This results in software libraries with generic functionality, usable with multiple types of data. Besides, there is great emphasis on software re-use, so that the pressure is on the developer to write minimal custom code and use library or framework code as much as possible.
Software bloat
While the ability to re-use framework or library code improves programmer productivity, it also has an impact on performance. A framework needs to be generic enough such that its API can be used in multiple use-case scenarios. The flip-side of this is that a generic routine may not be the most efficient piece of code for the specific use-case scenario you have in your application. Let us consider a simple example.
You are trying to convert a string representation to an integer. In your use-case scenario, you know that the string will never contain any special characters. However, the library function you are using cannot make this assumption, since it needs to be generic enough to handle any string, including those that contain special characters. So it performs a redundant check for special characters, which is not needed in your use-case scenario. This is a simple example of the library function being bloated (a code sketch follows at the end of this section).

While this is an example of bloat in operations that are executed, let us consider another example, which results in extra memory being used. Your code needs a date object, but you know that you don't need the day of the week corresponding to the date. However, your 'Date' class from the library allocates an additional field for storing the 'day of the week' corresponding to the date. This results in additional memory consumption, which is not needed by your use-case.

Both these forms of inefficiency are examples of 'software bloat', which impacts performance adversely. It is a well-known fact that, unfortunately, many of today's large commercial software applications suffer from software bloat to a considerable extent. This leads to poor scalability and excessive use of hardware resources to meet the performance needs of the bloated software, compared to a lean and efficient version of the same software.

There is a trade-off programmers make in using libraries and frameworks extensively, as opposed to writing custom code. Frameworks and libraries contribute to software bloat (unless they have been written very carefully, providing multiple specialised APIs—which, unfortunately, is not the case today). But they aid rapid application development. Software developers need to understand the performance penalties associated with the various libraries and frameworks they leverage in their application.
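Here is that sketch, in C (the function names are mine); the generic version pays for checks that a digits-only caller never needs:

#include <ctype.h>

/* Library-style conversion: must skip whitespace and handle an
 * optional sign, because it cannot assume anything about its input. */
int to_int_generic(const char *s)
{
    int sign = 1, value = 0;
    while (isspace((unsigned char)*s))
        s++;
    if (*s == '+' || *s == '-')
        sign = (*s++ == '-') ? -1 : 1;
    while (isdigit((unsigned char)*s))
        value = value * 10 + (*s++ - '0');
    return sign * value;
}

/* Specialised conversion for a caller that knows the string holds
 * only digits: the redundant checks disappear. */
int to_int_digits_only(const char *s)
{
    int value = 0;
    while (*s)
        value = value * 10 + (*s++ - '0');
    return value;
}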
Types of software bloat
The term 'bloat' refers to a condition wherein a system is not functioning in an efficient manner. The inefficiency may be due to redundant operations being performed (execution bloat), or redundant memory being allocated (memory bloat). The latter also includes memory that is allocated and not used, or memory not reclaimed even after it is no longer live, etc. Consider a Java application, which has automatic memory management. A memory leak in a Java application is a typical example of memory bloat, since an object is not freed up even though it is no longer needed. If this happens excessively, it can cause the garbage collector to run out of heap space (since it cannot collect the leaked objects, it cannot free up memory and make it available to the application) and thus cause the application to crash.

Execution bloat is typically associated with redundant operations being performed when there is a more efficient way to achieve the same end result. Consider a function that uses 'bubble sort' to generate sorted output. This obviously suffers from execution inefficiency, since 'quick-sort' would have been more efficient. This is an example of execution bloat. The earlier example we looked at was of a framework's string-conversion function that checks for special characters.
By now, you may wonder, "Isn't software bloat supposed to be removed by compilers, runtimes and garbage collectors?" Well, the answer is, "Yes, to some extent." However, since today's software applications are very complex, it becomes increasingly difficult for compilers, JIT and the runtime to detect and eliminate bloat, since bloat is not caused by operations in one single function, method or class. Bloat occurs as the application state propagates from custom code written specifically for your application across many layers of libraries or frameworks, and it becomes difficult to detect and eliminate such cross-layer bloat automatically.

Currently, increasing attention is being given to understanding the causes of software bloat. A summary of the nature and causes of bloat can be found in an article in IEEE Software titled 'Four Trends Leading to Java Runtime Bloat' by Nick Mitchell and others. While this article focuses on Java, many of the causes of bloat that it describes are equally applicable to all modern 'framework-intensive' languages like JavaScript, .NET, Python, etc. The question, "Is bloat inevitable because of the object-oriented paradigms and the focus on re-use in modern programming languages?" will be discussed in the next column.
My ‘must-read book’ for this month
This month's 'must-read book' suggestion comes from our reader, Keshavan. He suggests the book, 'The Design of Design—Essays from a computer scientist' by Frederick P Brooks. According to Keshavan, "This is a great book if you ever wanted to know what makes an efficient design, and is a worthy successor to the software developer classic, 'The Mythical Man-Month' by the same author. The book discusses the traditional waterfall model of design, goes on to examine its flaws, and the alternatives to it. It then discusses collaborative design, which is increasingly important in today's 'flat' world, and concludes with a number of case studies in design." Thank you, Keshavan, for the recommendation.

If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book's name, and a short write-up on why you think it is useful, so I can mention it in this column. This would help many readers who want to improve their coding skills. If you have any favourite programming puzzles that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming and here's wishing you the very best!
By: Sandya Mannarswamy
The author is an expert in systems software and is currently working as a researcher in IBM India Research Labs. Her interests include compilers, multi-core technologies and software development tools. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group 'Computer Science Interview Training India' at http://www.linkedin.com/groups?home=&gid=2339182.
The What, Why and How of Testing
This article is for developers and those who are interested in testing. It covers the need for testing, as well as the basic concepts and the various techniques used in the process. It also explores some open source tools for testing.
Here's a quick quiz to see how many of us understand what testing is all about and why we do it—mentally note 'Yes/True' or 'No/False' regarding each of these statements:
- Testing is the same as quality control.
- We do testing to ensure that the product is 100 per cent defect-free.
- Testing is very easy, but a boring job.
- Testing does not involve any creativity.

Needless to say, most would respond with a 'Yes' to all the above questions. However, that would be incorrect; and I will explain why, soon. Many (in fact, most) of us tend to feel testing is a low-grade and boring task. But it is not so—it is perhaps the most challenging task anybody can take up. No development activity will teach you as much as testing will, provided it's done correctly, and with the right attitude. This is my personal opinion based on my experience so far; other people may have different opinions, though.

We know that a defect is an error in a program that prevents it from producing the expected results. Is this definition complete? We will learn more about that as we proceed. For now, here is a preliminary understanding: Testing is done to show that there are no errors in the program, and that the program is performing its intended functions correctly—i.e., doing what it was designed to do. However, while that understanding is correct, it is not complete. What about when a program does something it is not supposed to do? So, the point is, do not test to show that your program contains no defects. Instead, assume that your program contains defects, and then start testing to dig them out.
How do you determine whether a test was successful or not? You would probably say that if no defects are found, then the test is successful—and if some defects are found, then the test is unsuccessful. This is completely contrary to the idea behind testing! To understand it better, let us take a simple analogy. Suppose a patient is very sick and is suffering from a high fever. He visits the doctor for a diagnosis. The doctor recommends some tests, but they don't detect any problem. So, we say that the tests were unsuccessful in detecting the problem. However, if the tests were able to detect the problem, the doctor can begin treatment of the patient—that's a successful test. The same is true for software testing. Assume that your program is like a sick patient. Now, it is up to you to design the tests to detect the problems. So, one can define a successful test as one that is able to detect defects, and an unsuccessful test as one that can’t find any. As should be evident by now, it is all about having the right attitude to testing. As is rightly said, “You will see what you want to see.” If your intention is to show that your program is defect-free, for sure, you are unlikely to find any defects.
Test cases
What is a test case, and what should it contain? In simple words, a test case is a set of variables or conditions that are used to test a program. A typical test case is expected to contain the following information:
- Test case ID
- Description of the test case
- Expected input
- Expected output
- Actual result (PASS/FAIL)
- Configuration (hardware/software)
- Steps to run the test case
- Author
- Date

Recalling what we've discussed earlier, a good test case is one that has a high probability of finding defects. So how do you write test cases? That depends on how large and complex the program or functionality to be tested is. If the program/functionality is too small (say, a program with 8-10 lines of code), it might make sense to write test cases manually. However, if the program is big, use scripting languages to write the test cases. For example, the open source CPython, one of the most widely used implementations of the Python language, can be used to write test scripts. Other popular scripting languages include JavaScript, Perl, PHP and Ruby (a short sketch of such a scripted test case follows below). Now, once you have written the test cases, how would you ensure that you have enough? How many test cases would you need to write to ensure that there are no defects?
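Here is that sketch (the divide function is a stand-in for real application code, not anything from a specific project); a scripted test case can capture both the expected behaviour and the behaviour that ought not to happen:

import unittest

def divide(a, b):
    """A stand-in for the code under test."""
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_division(self):
        # Positive pattern: valid input, expected output.
        self.assertEqual(divide(10, 2), 5)

    def test_divide_by_zero(self):
        # Negative pattern: invalid input must fail loudly,
        # not silently produce something wrong.
        self.assertRaises(ZeroDivisionError, divide, 10, 0)

if __name__ == '__main__':
    unittest.main()

Writing such cases is the easy part; knowing when you have written enough of them is the harder question.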
It has been often observed (and my own experience corroborates this finding) that people write tests only as long as they can find defects. Most of them get tired or bored if they keep testing and no defects are found. In such cases, developers assume that their code works fine, and keep on adding new code without any tests. I am sure most project managers would agree with this observation. If you aren't able to produce any effective results in testing, you probably need to re-check your testing process and approach. Often, the testing involves adding a large number of tests, but the tests are written randomly, and the testing tends to become aimless.

While testing, it is extremely important to clearly classify the areas to be tested. You need to check not only the positive patterns but the negative patterns as well. (I will share more details on this in my next article.) It is also very important to check boundary conditions. As I mentioned earlier, you must not only check whether the program is working correctly, but also check whether it is doing something it ought not to—there might be some hidden behaviour that you might not be aware of, and which might only get uncovered after the product has been shipped to the customer.

This reminds me of an interesting article by S G Ganesh in the LFY April 2012 issue: 'A Bug or a Feature?'. As he rightly puts it, "... the customers don't know that it is a bug, but think it is a feature, and start using it. Now, even when you realise that it is a bug, you cannot fix it." So, if you ship the product with a bug, and the customer gets used to it as a feature, you will not be able to undo it, and will have to support the bug in future releases as well.

Most of the time, developers test in and around their code, keeping the design in mind. However, rather than the design, it is important to keep the customer or the end-user in mind, while testing. After all, they are the ones who will be using the product, not you!

In my next article, I will explain the various kinds of testing techniques that are available. Till then, here are some suggestions for further reading.

References and suggested reading
[1] http://en.wikipedia.org/wiki/Scripting_language
[2] Article by S G Ganesh in the April 2012 issue of LFY, 'A Bug or a Feature?'
[3] The Art of Software Testing, by Glenford J Myers, Tom Badgett, Todd M Thomas
By: Deepti Sharma
The author is an open source enthusiast with almost a decade of experience in the design, development and quality validation of systems software tools (compilers, assemblers and device drivers) for embedded systems. She loves writing for OSFY. She can be reached at deepti.acmet@gmail.com.
Joy of Programming
S.G.Ganesh

Static Analysis Vs Dynamic Analysis Tools

Both static analysers and dynamic analysers are widely used to find and fix bugs in programs. In this column, the author compares these analysers, and lists their advantages and disadvantages.

Both open source and enterprise software projects depend heavily on testing to find bugs. However, testing is costly in terms of the time and resources required. For this reason, using automatic analysers to find bugs has become a popular practice.
The types of tools
Static analysis tools check a program for potential bugs without actually executing the program. For most programmers, the lint program immediately comes to mind when static analysers are mentioned. However, static analysis has come a long way from the good old days of the 1970s, when lint emerged. Though lint-like programs are widely used even today, static analysis has become very sophisticated, with very expensive tools available in the market today that can detect hard-to-find bugs with ease. Static analysis tools can typically be used at any phase in the software development life-cycle. However, they are typically used once some source code is available for analysis. For example, there are static analysers that can analyse (structured or formalised) requirements specifications, UML diagrams, etc. So let's look at the advantages of static analysers:
- They find possible bugs in the code without actually executing the program—so you don't have to wait for an executable program to find bugs.
- When you apply static analysers on the code early in the software development life-cycle, you find and fix bugs early—and the earlier you find and fix bugs, the less money and effort it costs.
- Integrating and running static analysers is easy, and most tools are quick in finding problems and reporting them.

Dynamic analysers, on the other hand, check the program for bugs by actually executing the program. So, dynamic analysis tools are used once executable code is available. These tools are extensively used by developers for the following reasons:
- Dynamic analysers find real issues (no false warnings).
- They find defects that actually result in software failure/problems.
- They aid in program debugging, tuning, tracing, etc—all of which are essential for the development and maintenance of complex software.
Desirable characteristics
Before we go ahead and compare these two kinds of tools, let's first get familiar with some terminology used in program analysis. In terms of finding errors in programs, there are four possibilities:
- False positives: false warnings or alarms that are raised when there is no real error in the code.
- True positives: an alarm that is raised for a real error in the code.
- False negatives: a missed error—i.e., no alarm was raised when there was a real error in the code.
- True negatives: no alarm is raised when there is no real error in the code.

If a tool warns of a real error in the code (a true positive), it is good, because that's why we're running the tool. If the code does not have a bug, and the tool does not report any warning (a true negative), that's also good. However, if the tool generates a warning when there is no error in the code (a false positive), it's annoying, and wastes developers' time, because they have to spend hours figuring out that the violation was not actually a problem in the code. Also, if the tool does not give any warning when there is an error in the code (a false negative), then that's also a problem, since we want to find real bugs in code, and that's why the tool is being run. If the tool misses most of the bugs, then it's not much use applying the tool. So, both false positives and false negatives are undesirable for program analysers.
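As a small illustration, here is the kind of code that often draws a false positive from a shallow analyser (a sketch of mine; whether a particular tool flags it depends on the tool):

#include <stdlib.h>

/* Returns a buffer of n bytes, or exits. A tool that does not know
 * that exit() never returns may warn that this function can return
 * NULL -- a false positive, since p is provably non-NULL here. */
char *alloc_or_die(size_t n)
{
    char *p = malloc(n);
    if (p == NULL)
        exit(1);
    return p;
}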
Static analysers
The main problem with these is that they generate lots of warnings, and most of them are false positives (wrong warnings)! For example, if you run tools such as PC-Lint (C/C++), PMD (Java), or FxCop (C#) on software with a million lines of code, you're likely to get close to a million violations!
Figure 1: An OpenSSL change that caused a security vulnerability in Debian
That's HUGE—it's one warning per line of code! In my experience in running such tools on large code bases, I've seen violation logs from such tools exceeding 2 GB in size. That's a humongous amount of data, and we cannot humanly process such a large number of violations. Again, it depends on the kind of static analyser that you use. For example, all the three tools I've mentioned are 'shallow-analysers' or 'pattern-matchers'. Nevertheless, many widely used static analysers are still such 'pattern-matchers', and hence this discussion is still useful. However, if you run 'deep-analysers' (that use model-checking, abstract interpretation, etc), you'll get a fraction of the number of violations.

That brings in another related problem: the cost of the tools. Deep static analysers are mostly commercial tools, and they can find important hard-to-detect problems; but they work out very expensive for regular use in projects.

Another problem with static analysers is that most violations reported by them are not of immediate concern to the projects. For example, assume that you're working on a C project on Lintel (Linux + Intel) and that project is never going to be ported to any other machine or OS. So, when the static analysers report violations related to portability, they are not going to be relevant to you. You will need to 'weed out' irrelevant violations, and look out for pertinent violations—that's irritating, as well as time-consuming.
Dynamic analysers
Now, all is not well with dynamic analysers either. A disadvantage with these is that they need executable code for running the tools, unlike static analysers that can work on partial source code. However, the main disadvantage with dynamic analysers is that they don't find latent defects in code. In other words, the tools report bugs only if they lie in the exercised code path with the given data values; or else, they will not report the errors. What this means is that you need a great testing infrastructure and high code coverage to find bugs using dynamic analysers. But look at the contradiction here—if you have a great testing infrastructure, and have high code coverage, it means you would have anyway found a considerable number of bugs with testing!

Also, running dynamic analysers is time-consuming and resource-intensive. Further, depending on the kind of dynamic analyser you use, it can change the behaviour of the program! In other words, even if the program has a bug, if you try using a dynamic analyser, the bug may disappear and may not get reported by the dynamic analyser! This may seem like strange behaviour to you, but if you've used dynamic analysers or debuggers extensively, you'll know that this is a common phenomenon. Why does this happen? Consider the example of concurrency bugs, which depend on thread scheduling. Acts such as monitoring the program alter thread scheduling, such that the exact situations required to reproduce the concurrency bug no longer exist. Hence, dynamic analysers may not find bugs, even when you know that a bug exists in your program!

With these advantages and disadvantages of static and dynamic analysers in mind, you need to take a holistic view in selecting and using these tools for your project. You may even need to change your development process to introduce such tools as part of your development life-cycle. All tools have problems, so you need to figure out smart ways to overcome them. However, if you altogether give up on using program analysers, and depend on testing alone, you can be assured that the software you deliver will be bug-infested!

Static analysers detect most problems that are usually found using manual analysis. Manual analysis is resource-intensive, but very effective; so it makes sense to use the available time of the experts for manual reviews. By mandating that you need to run static analysers and fix all the major problems before code review, manual reviews can focus on finding deeper problems in the code.

Selecting a suitable static analysis tool for your project is very important. For example, if yours is a development project, it's likely that the code is not stable, and numerous bugs will be introduced as the development progresses. So, by choosing a shallow-analyser, and by enabling selective rules (i.e., disabling 'noisy' rules), you can make the best use of the tool. However, if yours is a legacy code-base, it's likely that changes to the code are very controlled. In this situation, you can make the best use of deep static analysers, and find and fix critical bugs in the code. Similarly, using dynamic analysers as part of the development life-cycle needs considerable thought and
(Continued on page 99)
Getting Started with Raspberry Pi Raspberry Pi is an inexpensive ARM processor-based single-board computer that runs the GNU/Linux operating system. For only $35, you get a system that can play games, stream video, function as a network server, control devices through input/output (I/O) pins and do a lot of other cool things. This article covers some experiments done with Raspberry Pi that will help you learn how to set up Arch Linux, configure streaming video and write Python code to control LEDs!
Raspberry Pi is powered by a BCM2835 system-on-chip from Broadcom that contains an ARM processor (running at 700 MHz) and a powerful graphics processing unit capable of 3D operations. The peripherals include two USB master ports, 10/100 Ethernet, HDMI and composite video outputs, and an SD card slot (more details are available at www.raspberrypi.org/faqs). System memory is 256 MB of RAM. A few general-purpose input/output (GPIO) pins are available for low-level interfacing with external electronic circuitry. There are two models of the device. Model A has 256 MB of RAM, one USB port and no Ethernet (network connection). Model B has 256 MB of RAM, two USB ports and an Ethernet port. Only Model B is currently in production.
Things you need to get started
Raspberry Pi is a complete system—connect a USB keyboard/ mouse and a DVI/HDMI monitor to it and you are ready to go! The Linux operating system kernel and the root file system have to be present on the SD card. Note that the device does not support VGA monitors— you have to use a monitor with DVI/HDMI inputs. In case you have a monitor that accepts DVI inputs, you need an HDMI-to-DVI converter cable. An old TV set can also be used as a display by connecting it to the composite video output of Raspberry Pi. A standard micro USB mobile charger (capable of handling at least 700 mA of current) can be used to supply power to the board.
Preparing the SD card using a GNU/Linux system
The easiest way to get started with Raspberry Pi is to use an SD card to store the file system image. The Raspberry Pi website (www.raspberrypi.org/downloads) provides prebuilt images for both Debian GNU/Linux and Arch Linux. I prefer Arch because of its simplicity and the ease with which you can keep packages updated. Download the Arch Linux image and unzip it using the following command:

unzip archlinuxarm-13-06-2012.zip

The Arch Linux image files are continuously updated, so the name of the file that you have downloaded might be different from what you see above. You will now see a directory called ‘archlinuxarm-13-06-2012’. Change into this directory, verify the checksum of the file with the ‘img’ extension present in the directory, and write it to your SD card using the ‘dd’ command, as follows:

cd archlinuxarm-13-06-2012
sha1sum -c archlinuxarm-13-06-2012.img.sha1
dd if=archlinuxarm-13-06-2012.img of=/dev/sdb bs=1M
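A word of caution: dd writes to whatever device you name, so a wrong ‘of=’ argument can wipe your hard disk. One quick way to confirm which device node the card was assigned (device names vary from system to system):

dmesg | tail            # look for the newly detected card, e.g., 'sdb'
cat /proc/partitions    # list block devices and their sizes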
I am assuming that your SD card is detected as the device ‘/dev/sdb’ on the GNU/Linux system you are using to write the image file. You can now connect the keyboard, mouse and monitor (and optionally the network cable) to Raspberry Pi, insert the SD card into its slot and power on the board. You should see boot messages scrolling by on the screen, and you will finally be presented with a login prompt. Log in with the user name ‘root’ and password ‘root’. Detailed instructions for setting up the SD card are available on the Raspberry Pi wiki page: http://elinux.org/RPi_Easy_SD_Card_Setup
Setting up networking on Raspberry Pi
If the Ethernet cable is plugged in, and if you have a DHCP server active on your network, Raspberry Pi’s network interface will be configured automatically at boot. You can verify this by pinging a well-known host, as follows:

ping google.com
For some reason, the network interface on my Raspberry Pi was not getting configured automatically. I had to execute the following commands to get things ready:

mii-tool -A 10baseT-FD eth0
dhcpcd eth0
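If your network has no DHCP server at all, a static fallback is also possible; here is a minimal sketch, with addresses that are only examples for a typical home network:

ip addr add 192.168.1.20/24 dev eth0
ip link set eth0 up
ip route add default via 192.168.1.1
echo "nameserver 192.168.1.1" > /etc/resolv.conf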
Figure 1: Raspberry Pi
Performing an Arch Linux system update
The packages in an Arch Linux distribution get updated continuously—you can always bring the system up to date by executing the following command:

pacman -Syu
‘pacman’ is the Arch Linux package installer (similar to ‘apt’ on Debian/Ubuntu). It is recommended that you perform this step and reboot Raspberry Pi.
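For readers coming from Debian or Ubuntu, the everyday operations map quite directly; the package name below is just an example:

pacman -Ss leafpad    # search the repositories (like 'apt-cache search')
pacman -S leafpad     # install a package (like 'apt-get install')
pacman -R leafpad     # remove a package (like 'apt-get remove')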
Setting up X Windows
Setting up a basic graphical user interface (with X Windows) is simple. Just use ‘pacman’ to install some packages, as follows:

pacman -S xorg-server xorg-apps xorg-xinit xorg-twm xterm xf86-video-fbdev

You can run X by typing ‘startx’ at the command prompt.

Installing Python
Let’s use Python to write our demo app; install it using ‘pacman’, as shown below:

pacman -S python2

You may additionally install the package ‘python2-pyserial’ if you plan to do some serial communication.
How to write a ‘blinking LED’ program
WARNING: Interfacing electronic circuits to Raspberry Pi might damage the board if not done correctly.
The GPIO pins of a processor are useful for controlling external electronic circuitry. One of the simplest things you can do with these pins is write code that will output a logic high (3.3 V) or low (0 V) on them. Raspberry Pi has a set of 26 pins (arranged as a 2x13 strip), which bring out a few of the GPIO pins of the processor on board; you can identify these pins by looking for the label ‘P1’ on the board. The pins on this connector are numbered P1-01, P1-02, P1-03 and so on. Check out http://elinux.org/RPi_Low-level_peripherals for a diagram of this connector and details of the I/O pins. (You will note that certain pins are labelled ‘do not connect’. These pins should not be used for interfacing.)
There are many ways in which you can output voltages on these pins. One of the easiest is to use a Python library. The Raspberry Pi Python GPIO access library can be downloaded from http://pypi.python.org/pypi/RPi.GPIO. It is available as a tar file, RPi.GPIO-0.2.0.tar.gz. (Don’t worry if the version number you see on the website is different from this; you might be getting a newer version.) Installing the module is simple: untar it and run the set-up script (as the super user), as follows:

tar xvf RPi.GPIO-0.2.0.tar.gz
cd RPi.GPIO-0.2.0
python2 setup.py install

Connect a red LED in series with a 1-kilo-ohm resistor between pins P1-08 (GPIO pin) and P1-06 (GND) and run the following program:

import RPi.GPIO as GPIO
import time

GPIO.setup(8, GPIO.OUT)
while True:
    GPIO.output(8, True)   # LED ON
    time.sleep(1)
    GPIO.output(8, False)  # LED OFF
    time.sleep(1)

Configure pin P1-08 as an output pin using GPIO.setup, and write high/low values to it using the function GPIO.output. Internally, the Python GPIO library uses special files under the ‘/sys’ directory to access the I/O pins. The disadvantage of this method is that you really can’t perform timing-sensitive operations this way.
So what role can Raspberry Pi play in the design of your embedded system? If your application needs fast media processing and networking, you can replace your PC with Raspberry Pi, which consumes considerably less power (and comes in a much smaller form factor—the size of a credit card). If you need to do timing-sensitive operations (like measuring the time delay between low-to-high transitions of an I/O pin to microsecond-level precision), you can use something like an Arduino and interface it with Raspberry Pi.

Streaming video from a webcam
Let us, for example, assume that you wish to capture the video stream from your webcam and make it available over the network. The first step is to connect the webcam and see whether Raspberry Pi recognises it—if you see a file called ‘video0’ under ‘/dev’, everything should work well! Next, you can install the popular ‘ffmpeg’ package, which does a whole lot of video manipulation, including streaming, using the following command:

pacman -S ffmpeg
Let’s now create a small configuration file called ‘ffserv.conf’. This file supplies information like the network port at which the stream is available, the name of the stream, etc. It should contain the following:

Port 8010
BindAddress 0.0.0.0
MaxClients 10
MaxBandwidth 1000
<Feed feed1.ffm>
File /tmp/webcam.ffm
FileMaxSize 5M
</Feed>
<Stream feed1.mjpeg>
Feed feed1.ffm
Format mpjpeg
VideoSize 160x128
VideoFrameRate 3
VideoIntraOnly
NoAudio
Strict -1
</Stream>
Check out http://ffmpeg.org/sample.html for more information on the parameters in this configuration file. Streaming of the video is done by a program called ‘ffserver’. Invoke it with the following command:

ffserver -f ./ffserv.conf
‘ffserver’ automatically goes into the background. Now you have to run ‘ffmpeg’ to capture data from the webcam and supply it to ‘ffserver’, as follows:

ffmpeg -v 2 -r 5 -s 160x128 -f video4linux2 -i /dev/video0 http://localhost:8010/feed1.ffm
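If you want the two commands to start together, say from a startup script, you could wrap them in a small shell script; the config file path here is an assumption:

#!/bin/sh
# ffserver reads the stream definition and backgrounds itself
ffserver -f /root/ffserv.conf
# feed it frames from the webcam
ffmpeg -v 2 -r 5 -s 160x128 -f video4linux2 -i /dev/video0 \
    http://localhost:8010/feed1.ffm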
You can open the browser on a remote machine and enter the following URL to watch a streaming video: http://192.168.1.5:8010/feed1.mjpeg. (You have to replace the address 192.168.1.5 with the IP address assigned to Raspberry Pi’s Ethernet interface.)
Figure 2: Boot messages scrolling by on the screen
How to move your webcam with a servo motor
You can mount your webcam on the shaft of a hobby servo motor and control it using Raspberry Pi! The easiest way to do this is to run the servo control code on Arduino and send commands to Arduino from Raspberry Pi. When Arduino is connected to the USB port of Raspberry Pi, the serial port on Arduino is accessible as a special file (usually ‘/dev/ttyACM0’) on Raspberry Pi. The Python module ‘pyserial’ can be used to write the code, which sends and receives commands over this serial channel. (In our case, the ‘command’ may be simply a number that indicates the angle at which the servo shaft should be positioned.) The servo control code, as well as the Python code running on Raspberry Pi, are available for download at: https://github.com/rlabs/rpi-article.
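The actual sketch and script are available at the URL above; purely as an illustrative sketch of the idea (the baud rate and the one-byte ‘angle’ protocol below are assumptions, not the repository’s actual protocol), the Raspberry Pi side could look like this:

import serial
import time

# open the serial link to the Arduino; the device name may differ
link = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
time.sleep(2)               # the Arduino resets when the port opens

def set_angle(angle):
    """Send a servo angle (0-180 degrees) as a single byte."""
    link.write(chr(angle))  # Python 2: chr() gives a one-byte string

set_angle(90)               # point the webcam straight ahead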
The start of a revolution!
The Raspberry Pi revolution has just begun. This tiny credit-card sized GNU/Linux system has generated tremendous enthusiasm among hobbyists and ‘do it yourself’ fans all over the world. You can already see people coming up with amazing ideas like the ‘fishpi’ (http://fishpi.org/)—an autonomous Raspberry Pi controlled marine vehicle that its designers plan to launch on a journey across the Atlantic, unaided! By: Pramode C E The author is founder of recursive-labs.com, an online learning start-up focused on free software-based technologies.
[Figure: the Web application life cycle—purpose, innovation, expectations, objectives, specifications, analysis, development, deployment and promotion]
CloudBees: Deploying Applications The article published in the October issue of OSFY covered the concept of Platform as a Service, the Java PaaS, an introduction to CloudBees, the ecosystem, and other features. With that as a background, readers are now invited to move on and deploy a simple Web application on CloudBees.
To deploy a Web application, you need to create the database, use the CloudBees Eclipse plug-in to create a CloudBees project, run the application in the local and cloud environments, and understand the Sauce Labs and SonarSource services offered by CloudBees.
Registration
Registration for CloudBees is very easy. You need to provide details such as:
- Username—used to log in to Maven, SVN and Git
- Account Name—used to construct the service URL for DEV@Cloud (https://app_name.account_name.cloudbees.com/)
Services
Once registration is successfully completed, click Add Services on the Services menu in the Dashboard (Figure 1) to add services for your account. All information related to a specific service like Database or Run@Cloud is made available, including the types of service offerings (Figure 2). Click the Learn More link for any service and then click Active to activate that service, after comparing the features available in the free and on-demand versions.
Using the application service
The on-demand option includes all the configuration features required for enterprise cloud applications. In the free plan, a maximum of five Tomcat or Java EE applications is allowed. Tomcat and Java EE apps are free, and an instance will have 128 MB of RAM; besides this, user forums are available for support. To know more about Run@Cloud pricing, visit https://grandcentral.cloudbees.com/services/plans/application.
In the subscribed services list (Figure 3), click Applications to deploy an existing WAR file of an application. Click Add New Application, give the application name (e.g., timenexpense) and select JVM Web Application as the Application Core Runtime. Once the application is ready, you will be notified; your new application will be available at http://timenexpense.cleanclouds.cloudbees.net
Next, click Configure. Go to the Deployments section (Figure 4), where you will see an Initial Create label with a timestamp. Click the Upload New Version button; it will open an upload form to deploy a new application version. Click Browse, select the appropriate WAR file from your system, provide a description and click OK.
Figure 1: Dashboard
Figure 2: CloudBees services
Figure 3: Subscribed services
Figure 4: Deployments

Using the database service
CloudBees supports the MySQL database. Database management is typically done using a MySQL client like the ‘mysql’ command-line program distributed with MySQL. CloudBees provides an Apache DBCP connection pool implementation, and supports external databases as well. To connect your application to a non-CloudBees database, define the database as a resource-ref in the app’s web.xml file, bind it to your database as a resource in the cloudbees-web.xml file, and include the standard JDBC connection settings to connect to the database.
In Subscribed Services, click Database, then Add New Database; this opens the Create a MySQL Database dialogue box. Enter a database name, user name and password. Once the database is ready, you can view its type, status, size, server, port, schema name, username and data source configuration tips on the dashboard. Users can create snapshots of the database, and also restore data as per their needs. Snapshot versions are also maintained (Rollback to Snapshot support), as seen in Figure 5.
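Purely as an illustrative sketch of this pairing (the resource name, credentials, JDBC URL and the exact cloudbees-web.xml element names here are assumptions; check the CloudBees database guide in the references for the authoritative format), the two files could look like this:

<!-- web.xml: the datasource name the application code looks up -->
<resource-ref>
  <res-ref-name>jdbc/expensedb</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>

<!-- cloudbees-web.xml: bind that name to the real database (illustrative only) -->
<resource name="jdbc/expensedb" auth="Container" type="javax.sql.DataSource">
  <param name="username" value="dbuser"/>
  <param name="password" value="secret"/>
  <param name="url" value="jdbc:mysql://dbhost:3306/expensedb"/>
</resource>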
CloudBees Eclipse plug-in (Run@Local, Run@Cloud)
Install the CloudBees Eclipse plug-in from http://cloudbees.eclipse.com/ (Subclipse needs to be installed). Validate your CloudBees account in Eclipse: via Windows > Preferences, select CloudBees from the left sidebar and click OK. Once validation is successful, it will try to synchronise the Forge repository. Open the CloudBees perspective (Windows > Open Perspective > CloudBees), as seen in Figure 6.
Create a new repository: in the CloudBees dashboard, click the Services menu > Repositories > Create new Code Repository. Give the name, permission scheme, repository type (Subversion, CVS, etc) and description. You can provide email IDs for email notifications for repository actions (commit or push notifications in HTML format).
Create a new CloudBees project in the Eclipse CloudBees perspective. Give the project name and click Next. Provide a Jenkins instance (repository), and select the newly created repository. If it gives an error, enable hosting in Forge to configure the Jenkins job SCM automatically.
If the SVN repository is not configured in Eclipse (go to Windows > Preferences > Verify to check if SVN is available), install SVN 1.8, select SVNKit, and restart Eclipse. Install Forge GIT integration and Forge SVN integration (Subclipse, Subversive). Again use Windows > Preferences > Verify to check that SVN is available; then again select the Jenkins instance and the SVN Forge repository.
A simple project will be created with a single servlet. Enter a simple print statement in the servlet to check when it is running. To deploy the app, right-click the CloudBees project, and select Run as > CloudBees Application (local). If you’re asked to unblock the program, do so. Access http://localhost:8335/ to check that it runs locally.
Figure 5: Database snapshots
Figure 6: CloudBees Eclipse plug-in/perspective
Figure 7: Run@Local and Run@Cloud

Next, select Run as > CloudBees Application (Run@Cloud) from the project context menu. Select the account. Access http://timenexpense.cleanclouds.cloudbees.net/hello. Commit the revision (an SVN commit will be done). Once the code is committed, you can compare versions using the Java Source Compare functionality.

Application configuration and operations
Users can enable application-specific New Relic RPM application monitoring, Papertrail application logging and AppDynamics monitoring. New Relic RPM provides monitoring, troubleshooting, root-cause diagnosis, capacity planning, scalability analysis, database usage analysis, Apdex scoring and deployment history. Papertrail provides consolidated logging for your applications to help detect, resolve and avoid infrastructure problems. AppDynamics is the leading provider of application performance management for modern application architectures in the cloud.
All CloudBees Run@Cloud applications have an Operations page in the Web-based Console that provides insight into the health and activity of the application. You can get to these charts by logging into the Console, selecting one of your applications, and then clicking the Operations tab. There, memory graphs (memory, classes and thread statistics), request graphs (requests, errors, request time and request byte statistics) and overview graphs (requests, memory, sessions and system load) are available. Server and access logs are available in case they are needed for troubleshooting.

Scaling
In normal scenarios, either the user experience suffers due to a lack of resources, or running too many servers costs a lot of money. It is all about perfect capacity planning—but how do you get that perfection in today’s dynamic IT environment? That is where autoscale, one of the most important features of PaaS, comes to your rescue—it increases and decreases server capacity based on real-time demand. It requires the platform provider to load-balance requests across a number of servers, monitor the load on each server, and spin up new resources as needed. It tries to keep application resources at a ‘sweet spot’ that saves money, yet ensures the user experience doesn’t suffer. All PaaS providers support auto-scaling to some extent. Java EE applications must be configured to access a centralised external database, as opposed to a database server co-hosted on the same server.
With CloudBees, scaling is easy; it is available as a paid subscription. Once enabled, your application management screen will show new options for scaling (Figure 8). Vertical scaling for the application is achieved through app-cells; choose the appropriate size. One app-cell offers dedicated RAM and compute processing power for your application to use. The more app-cells, the more memory and processing power will be available to improve the performance and scalability of the application. For horizontal scaling, the drop-down lets you choose the number of machines to add. Applications can choose a specific number of instances (up to 10) and set this up statically.
The auto-scale service knows about your application’s preferences, which are stored as a list of scale rules. It then listens to the app.stat feed (see the diagram in Figure 9) and does stream processing over it. Once it detects that your application has exceeded the set parameters, it will send a message to the deployment API service to add more capacity or reduce it (as required), within the range that you specified (settings in Figure 10). Thus, users need not worry about variations in adding or removing servers. Tools like New Relic can provide an insight to help you identify what the triggers should be for your application.

Figure 8: Vertical and horizontal scaling in CloudBees
Figure 9: How scaling works in CloudBees
Figure 10: Scaling configuration in CloudBees
Sauce Labs
Sauce Labs is the company behind the very popular Selenium open source project. Selenium is a tool for automated in-browser testing of Web applications. You can develop a Selenium script that operates the website step-by-step within a browser you specify. After a Jenkins server builds the application, you can use Maven to run automated unit tests, and then run Selenium scripts to test the application from the user’s perspective, ensuring the build is valid. The entire process can be automated within CloudBees as your application is continuously built and integrated. Sauce OnDemand is a cloud-based service that allows you to run automated cross-browser functional tests at high speeds, in parallel, so you don't need to maintain testing infrastructure but still get the quality outcome with the cost benefits. Easy-to-use tools let you build both manual and automated tests that produce media-rich results, including screen shots and videos of bugs that can be added to any bug-tracking system.
SonarSource
Sonar is an all-in-one platform to manage source-code quality that provides support to fight the seven mortal sins of the developer: duplicated code, lack of coverage, complexity, no documentation, no standards, potential bugs and a spaghetti design. Sonar offers numerous services to uncover these sins: customisable dashboards, portfolio management, quality profile management, clouds, hotspots and the ability to drill-down to the source code. Sonar is multi-language and integrates well with Eclipse.
The CloudBees SDK
The CloudBees SDK provides command-line tools for your workstation that help make the use of the CloudBees platform fast and light. Install it so you can build and test your application from your local workstation. Java 6 or higher must be available in the PATH. Download the CloudBees SDK and unzip it into a directory on your machine. Set the BEES_HOME environment variable and the PATH variable. On first use, enter your CloudBees email address and password.
The SDK includes a set of commands to manage the Run@Cloud application service, and can be used to perform most operations that you can do via the Web console. For example, to create a new MySQL database, you’d run the bees db command, whose usage is:

bees db:create [options] DATABASE_NAME
  -p,--password <password>   Database password
  -u,--username <username>   Database username (must be unique)
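For instance, creating a database for the expense-tracking app used earlier might look like the following, where the user and database names are only placeholders:

bees db:create -u expenseadmin -p secret expensedb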
References
CloudBees white papers: http://www.cloudbees.com/resources/white-papers.cb
CloudBees—How it Works: http://www.cloudbees.com/platform/how-it-works.cb
CloudBees documentation:
[1] CloudBees Platform end-to-end tutorial: http://wiki.cloudbees.com/bin/view/Main/CloudBees+Platform
[2] Sauce OnDemand service: http://wiki.cloudbees.com/bin/view/DEV
[3] Database guide: http://wiki.cloudbees.com/bin/view/RUN/DatabaseGuide
By: Mitesh Soni The author is a technical lead at iGATE. He is in the cloud services practice (Research and Innovation) and loves to write about new technologies.
Getting Started with Tcpdump
This article is an introduction to a popular UNIX tool called tcpdump—a very capable command-line utility that allows you to capture network data.

Tcpdump is based on libpcap, an open source C/C++ library for network traffic capture. libpcap allows you to write portable code by providing a common high-level API for network packet capturing, because almost every operating system offers its own semantics on how to approach low-level network capture functionality.
Running tcpdump
You usually have to run tcpdump with root privileges—with the help of the sudo command—as network capturing is not allowed to everyone, for security reasons. If you try to capture network data using tcpdump without root privileges, you will get the following error message: tcpdump: no suitable device found. If you use the sudo command, you will have no problem. You can view the man page with the command man tcpdump. The first thing to do is find out which network interfaces are available on your Linux system; run tcpdump with the -D option, as shown below:
$ sudo tcpdump -D
1.eth0 2.wlan0 3.nflog (Linux netfilter log (NFLOG) interface) 4.eth1 5.any (Pseudo-device that captures on all interfaces) 6.lo
On my Mac system running Mac OS X 10.8.2, the command produces the following output: 1.en0 2.fw0 3.en1 4.lo0
Tcpdump output format
The following is the output from a single packet that was captured using tcpdump:

20:21:18.347280 IP v-4-kp07-d2084-56.xyzHost.com.http > 192.168.1.10.54706: Flags [.], seq 50400:51840, ack 1, win 33, options [nop,nop,TS val 1109618191 ecr 1386382236], length 1440

Now, let’s understand the different parts of the captured packet:
- 20:21:18.347280: This is the time the capture took place. As you can easily understand, this is unique to every packet captured by tcpdump.
- IP: This indicates the use of the IP protocol.
- v-4-kp07-d2084-56.xyzHost.com.http: These are the source address and the port used (HTTP) found in the network packet.
- >: This symbol separates the source part from the destination part.
- 192.168.1.10.54706: This is the destination IP address and the port number (54706) used.
- Flags [.]: Various flags.
- seq 50400:51840: This is the beginning TCP sequence number (50400) and the ending TCP sequence number (51840). TCP uses sequence numbers to order the received data.
- ack 1: This is the ACK flag that acknowledges receiving the data from the sender.
- win 33: This is the amount of data that will be sent before requiring an ACK packet back from the server.
- options [nop,nop,TS val 1109618191 ecr 1386382236]: These are several other options.
- length 1440: This is the length of the network packet.

Tcpdump basics
Tcpdump does not capture the entire network packet by default. This is because, usually, the interest lies in the header parts of the packet, which are normally captured with the default packet length. Here are some of the most useful parameters of tcpdump:
- The -v parameter produces slightly more output than the default; -vv produces even more verbose output; for truly verbose output, use -vvv.
- By default, tcpdump shows the captured network data on screen. While this is useful sometimes, usually you save the captured data to process it later—using the -w parameter followed by the desired filename. To read a data file saved by tcpdump, use the -r option followed by the filename.
- Tcpdump uses DNS name resolution by default. If you want to turn that off, use the -n parameter. Use -nn to turn off both DNS name resolution and port number resolution.
- To capture a given number of packets, use the -c parameter.
- The -tttt parameter produces a more readable timestamp output, as you can see in the following example:

2011-12-26 20:02:12.465078 IP google-public-dns-a.google.com.domain > 192.168.1.10.53028: 24315 1/0/0 PTR google-public-dns-a.google.com. (82)

- The -A option prints the packet in ASCII format. To print output in both the ASCII and HEX format, use the -XX parameter.
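To tie a few of these flags together, here is a hypothetical session that captures ten packets without name resolution, saves them to a file, and then reads the file back; the interface and file names are just examples:

$ sudo tcpdump -i eth0 -nn -c 10 -w sample.pcap
$ sudo tcpdump -nn -r sample.pcap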
Tcpdump scenarios
That’s enough of the theory—now for some practical examples. The following example captures two packets of network traffic from TCP port 110 in ASCII format:

$ sudo tcpdump -c 2 -A port 110
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
21:01:58.750072 IP 192.168.1.10.56836 > pop.someHost.gr.pop3: Flags [S], seq 563784957, win 65535, options [mss 1460,nop,wscale 1,nop,nop,TS val 1430216395 ecr 0,sackOK,eol], length 0
E..@..@.@. .... Q.h6...n!...........{. ............ U?^. .......
21:01:58.751523 IP 192.168.1.10.56837 > pop.someHost.gr.pop3: Flags [S], seq 3877282998, win 65535, options [mss 1460,nop,wscale 1,nop,nop,TS val 1430216396 ecr 0,sackOK,eol], length 0
E..@K.@.@. .... Q.h6...n............{. ............ U?^. .......
2 packets captured
498 packets received by filter
0 packets dropped by kernel

To capture two packets using both the ASCII and HEX format, use sudo tcpdump -i eth0 -c 2 -XX. To capture 100 packets to a file named out (-w out) and then stop, use sudo tcpdump -c 100 -w out. To capture the network traffic of the entire 10.10.10.0/24 network, use sudo tcpdump net 10.10.10.0/24.
To capture incoming traffic to <some_host> that is also going to port 80 (usually HTTP traffic, but you should not always trust the port number for characterising the traffic), use sudo tcpdump dst host <some_host> and port 80. You can get common port numbers and service names from /etc/services. To capture all packet types except ARP and ICMP network packets with the more readable timestamp format, use sudo tcpdump -tttt not arp and not icmp.
To capture traffic with the HTTP port from IP address 192.168.1.1 going to 192.168.1.10, or from 192.168.1.10 to 192.168.1.1 (sample output shown):

$ sudo tcpdump \(src 192.168.1.1 and port 80 and dst 192.168.1.10\) or \(src 192.168.1.10 and port 80 and dst 192.168.1.1 and port 80\)
21:38:11.492218 IP 192.168.1.10.57018 > 192.168.1.1.http: Flags [F.], seq 383, ack 72468, win 65535, length 0
21:38:11.492271 IP 192.168.1.1.http > 192.168.1.10.57017: Flags [.], ack 385, win 6000, length 0

To capture traffic from the 192.168.1.0/24 network to the 10.10.10.0/24 network:

$ sudo tcpdump src net 192.168.1.0/24 and dst net 10.10.10.0/24

To check traffic of host 10.10.10.1 using UDP port 514 (usually the syslog server):

$ sudo tcpdump host 10.10.10.1 and 'udp port 514'
16:49:52.681405 IP 10.10.10.1.51787 > 10.10.10.2.syslog: SYSLOG local7.notice, length: 156

To capture packets below a certain size, use sudo tcpdump less 1024. To capture broadcast or multicast traffic, use sudo tcpdump ‘broadcast or multicast’. To capture and show IPv6 traffic only, use sudo tcpdump ip6.

Tcpdump tips
When capturing network data using tcpdump, keep the following points in mind:
- If a parameter you are trying to use is not working as expected, check the man page. Always check the tcpdump man page when using it on a new system.
- When in doubt, capture everything! Remember—you can always filter later, but you cannot find a network packet that you did not capture.
- You can analyse captured data using Wireshark. The main advantage of tcpdump is that, as a command-line utility, you can use it over an SSH connection. Wireshark, as a GUI application, has an overhead and can lose network data on a busy network—tcpdump needs fewer system resources than Wireshark.
- You can run tcpdump using the cron utility to capture data without being logged in at the machine.
- To capture full Ethernet frames, you should run tcpdump with the -s 1514 parameter; 1514 is the maximum length of Ethernet network packets. Most of the time, you do not need the full packet.

Summary
You should now be aware of the endless possibilities when capturing network data using tcpdump. The best way to learn is to practise and experiment. Remember that you do not have to process and analyse the captured data using tcpdump; other tools are more capable of doing that. I suggest you learn Wireshark. I think that every network or Linux systems administrator should add tcpdump to his arsenal of tools.

Web links and bibliography
[1] tcpdump and libpcap site: http://www.tcpdump.org/
[2] Internetworking with TCP/IP, Volume I, Douglas E Comer, Prentice Hall
[3] Wireshark: http://www.wireshark.org/
By: Mihalis Tsoukalos
The author enjoys photography, writing articles, programming iOS devices and creating websites. You can reach him at tsoukalos@sch.gr or @mactsouk.
Cyber Attacks Explained
Device Evasions
This is the last installment of the year-long series on cyber attacks. As we know, network infrastructure is protected by security devices such as routers and firewalls. So far, it was believed that such devices were enough to secure the perimeter. This belief has been proved wrong by a technique called device evasion, which is a new trend in attacks in the cyber security world. This article explores the technique in detail, and also covers methods to protect IT infrastructure from such attacks.
By definition, evasion is the process of avoiding or bypassing an object or a situation. In technical terms, evasion is a technique by which an attacker bypasses a security system in the cyber security space. The system may typically consist of routers, firewalls, network switches and intrusion detection devices. As we all know, routers segregate networks; firewalls block unwanted IP address and TCP port communications; and intrusion detection devices add a layer of intelligence based on anomaly detection techniques. A few years back, when these techniques were implemented, network administrators could safely say that their infrastructure was secure. However, the laws of human nature tell us that a thief is always a step ahead of the cops! While these devices seem to thwart attacks for some time, they have made cyber criminals more aggressive and have prompted them to come up with ways to penetrate and break down the security perimeter. Attackers use evasion methods in order to steal data, disrupt IT networks or plant software exploits.
As shown in Figure 1, a typical secure infrastructure contains at least a router, with a switch and a firewall also incorporated. To learn more about device evasions, let’s concentrate on these three devices, and cover a bit about IDS systems too. Let’s also learn how each of these devices can be broken into and, finally, how to secure them to protect the infrastructure. Device evasion is a highly technical and systematic approach to penetrating a network.
Router evasion
Often, routers are the first devices accessible from outside a firm’s network. Routers maintain their own routing tables, which store the paths to destination IP ranges, with a cost metric. The routing table is checked every time a TCP/IP packet is processed, prior to being sent to the desired destination. Besides this, routers implement intelligent algorithms to speed up the routing process. The router thus acts as the first line of defence from the cyber security perimeter standpoint. In earlier articles in this series, we learnt about denial of service (DoS) attacks, IP spoofing attacks, man-in-the-middle attacks and packet-crafting attacks. All these attack vectors can either fool a router into routing malicious packets into the target network, or can simply render it functionally useless, thus disrupting the whole network. Besides this, there are a few router-specific techniques too, which are described below.
Route hijacking: In this method, the attacker first sniffs the traffic originating from a router. Based on the information gathered, the router is then supplied with bogus source and destination IP addresses, which are spoofed purposely to trick the router. As the router tracks these and updates its routing table, this process can overwhelm the router, causing its routing table to overflow and get corrupted. This is similar to a DoS attack, but in practice can take much less time. An expert attacker can further exploit this situation by feeding the vulnerable router with their own IP segment, and waiting for the routing table to get built up again. The very first routing protocol, called RIP, didn’t have a built-in authentication mechanism to verify the authenticity of the routes being updated. Hence, a forged RIP packet can easily disrupt the routing table, and this scenario becomes seriously dangerous if multiple routers are connected together using the RIP protocol. Modern routers usually don’t fall prey to this attack; however, an improperly configured router can still be vulnerable.
OS penetration: Like any other network device, a router runs its own operating system, which can be vulnerable to attacks. For example, most wireless routers and early-generation industry-grade routers ran on OS kernels with known vulnerabilities. Attackers with a thorough knowledge of such vulnerabilities can write scripts and programmatically subject the device to DoS or other dangerous attacks, such as remote code execution. Once the OS has been penetrated, attackers can remotely run commands to change critical configurations and settings. Attackers can route traffic to malicious servers to steal and corrupt data, and cause even more damage.
Firewall evasion
It is a common practice to host a firewall behind a router; however, many mid-sized firms running only one office may completely remove the router, turning the firewall into the first line of defence. Compared to the router, a firewall puts up strong opposition to attacks because its kernel functionality is designed for this, posing serious challenges to attackers wanting to get past a well-configured firewall.

Figure 1: Typical network configuration

As we know, a firewall is a rule-based device that allows or disallows traffic entering or exiting a security perimeter. When a source address initiates a connection with a host behind a firewall, the firewall rules intercept the connection, interpret it and take appropriate action. This also tells us that the TCP handshake actually happens between the source and the firewall. In case the connection being established is not allowed, based on the rules configured, the firewall drops the connection by sending a TCP RST signal to the source. If the connection is allowed, it initiates a connection with the destination and performs the packet transfer. This also shows us that the source and destination IP addresses, as well as the ports, are very important from the firewall’s standpoint. With this fundamental theory in mind, let’s look at a few firewall-specific evasion attacks.
Firewall request spoofing: If attackers can spoof their packets to make them look like they are coming from the internal network segment of the firewall, an improperly configured firewall may allow those through. Similar effects can be achieved by spoofing the MAC address, in case the firewall is keeping track of all internal MAC addresses.
Firewall DoS: Modern firewalls intercept each packet and apply more intelligent checks on them than their predecessors, before letting them through to the network. Typical checks by anti-virus and anti-spyware algorithms, attack anomaly detection, etc, are performed on each packet. This feature, though tremendously useful, is exploited by attackers in some cases. Apart from sending an overwhelming number of requests, there are a few other tricks used by attackers. In one case, a malicious request to a known destination host listening on a known port is sent multiple times, but with the source IP and port of that packet spoofed to a non-existent host. There is no way for the firewall to know this, and hence this results in an internal connection to the destination that gets updated in the firewall’s own tracking table. Since the source does not exist, such requests keep piling up, thus exhausting the firewall’s resources. In another case, the source address is spoofed to be one of the internal network IP range, and the MAC address is spoofed. This causes the destination host to issue a RARP request for the MAC address, causing turbulence on the internal network. It is important to note that a firewall DoS attack can disrupt internal-to-external network traffic, and can even take down the internal network.
Packet forging: In one of the previous articles in this series, we learnt about packet crafting. The same technique can be used against firewalls. In one case, a packet can be crafted to have a bad TCP checksum, which the firewall has to calculate every time before taking a decision on the packet, thus causing sluggishness. In another case, the data-length parameter in the TCP packet can be filled with a very big number, which tricks the firewall into waiting for the entire data chunk to arrive. This can eventually exhaust the firewall’s internal memory.
Rule exploitation: Attackers know by experience that, in many cases, a firewall is not configured correctly. For example, it is commonly found that TCP packet rules are set up, whereas the administrator simply forgets to deal with UDP packets, letting those get through the firewall. It has also been found that many firewalls are configured with port 80 open bi-directionally, whereas it should be open only from the external network to the internal network. Modern black-hats (attackers) often write scripts to detect such misconfigurations—and on finding one, they gather enough data to exploit it further.
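To illustrate the port-80 point, a stateful Linux packet filter can accept new HTTP connections only from the external side and allow just the replies back out; this sketch assumes iptables with the external interface on eth0:

# accept inbound HTTP connections arriving on the external interface
iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
# allow only replies (no new outbound connections) from port 80 back out
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT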
IDS and switch evasion
Today’s IT infrastructures always try to go beyond firewalls, by implementing Layer-3 switches as well as intrusion detection systems. A Layer-3 switch contains some great features, such as compartmentalised virtual networks, network bandwidth quality of service, MAC registration, etc. An IDS works by applying attack anomaly algorithms to the packet traffic in a network. While each of these has its advantages, the techniques used to invade a router and firewall apply to them too. At the heart of all these techniques is packet crafting and packet spoofing, so that these devices are fooled into treating those packets as legitimate—and thus the attack is not sensed, suspected or detected.
One common technique to evade an IDS is forceful signature embossing. An IDS learns as time goes by, and updates its own database of anomaly signatures, which further helps in deciding which request is legitimate and which is not. Attackers send multiple spoofed packets, over a long period of time, to a destination host running a known TCP service. The packets are sent in well-formed as well as malformed patterns. The IDS drops the malformed ones, but eventually ‘learns’ them as acceptable behaviour based on historic patterns. Once this state is attained, the attacker storms the IDS with malformed packets to the host, thus achieving the goal of disrupting the system. This also shows us that an IDS is a great idea, but a misconfigured IDS can be equivalent to having almost no security at all.
For switches and IDS systems, DoS attacks planted at Layer 2 or Layer 3 are possible. Switches are configured to deal with such situations; however, attackers scan networks to find the weakest link, which is usually the misconfigured device, and use it as a target. The operating system of switches, too, can be compromised, as attackers can take control of these switches remotely. This is, however, not so easy in the case of IDS systems, making these an important network component from the cyber security perspective.
Protecting FOSS systems
There are many flavours of open source routers running on Ubuntu and other distros. The same is true for firewalls and intrusion detection systems. While we have discussed the devices, it is important to note here that the FOSS systems behind the perimeter of these devices should be properly configured and monitored for network attack anomalies. At the network layer, Linux FOSS systems come with a built-in feature called source address verification. This is a kernel feature which, when turned on, starts dropping packets that appear to be arriving from the internal network, but in reality are not. Most of the latest kernels in distros, such as Ubuntu and CentOS, support it. This feature helps to reduce the chances of packet spoofing.
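On such kernels, this corresponds to the rp_filter (reverse-path filter) sysctl keys; a minimal sketch of enabling it, assuming a kernel built with this support:

# enable source address verification (reverse-path filtering) on all interfaces
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1
# to persist across reboots, add the same keys to /etc/sysctl.conf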
Summary
This article concludes this series on cyber security. We all love FOSS, and we can enjoy and benefit from it even more only if it is cyber-secured. Thanks for all the great feedback received so far, and the encouraging words via email and social networks. Device evasion is a new trend in network attacks and is being increasingly used to break into corporate IT infrastructure for malicious reasons; it should be taken very seriously by network administrators and senior IT management.
By: Prashant Phatak The author has over 20 years of experience in IT hardware, networking, Web technologies and IT security. Prashant is MCSE and MCDBA certified, and also an F5 load balancer expert. In the IT security world, he is an ethical hacker and a Net forensic specialist. Prashant runs his own firm, named Valency Networks, in India (http://www.valencynetworks.com) providing consultancy in IT security design, security audits, infrastructure technology and business process management. He can be reached at prashant@valencynetworks.com.
Open source professionals should be self-driven and objective about criticism

If you are willing to get into a career in open source technology, there are some ‘additional’ attributes that you require apart from your technical expertise. Diksha P Gupta spoke to Sreehari S, managing director, Novell India (The Attachmate Group), about the qualities required of a successful open source professional and the company’s plans in India. Read on...

Sreehari S, managing director, Novell India (The Attachmate Group)
Q: Do you feel that India is rich in open source skills?
If you talk about the market, it surely is growing and so is the interest of people in making a career with open source technology. With the cost advantage, and more and more customers not wanting to get locked down by specific vendors, people are increasingly evaluating open source solutions. Along with that, the interest amongst professionals is growing. We see a lot of traction amongst the students as well, when we go for campus interviews. More and more professionals now want to work in this area.
Q: Is that one of the reasons why Attachmate is banking big on the Indian market?
India is definitely a big market for Attachmate. We have a lot of commitment to the India development centre because of the value it provides. Our investments, of course, keep shifting from location to location, depending on the projects, like in any other company. But broadly, there is a greater commitment to the Indian development centre and it is growing significantly.
Q: Can you throw some light on what kind of growth you have had in the India development centre over the last year?
We have grown about 15-20 per cent in headcount and we see ourselves growing at the same pace next year too. This hiring includes candidates from colleges as well. Beyond the growth in numbers, our objective is also to grow in terms of value we provide. So we do not focus too much on numbers alone.
Q: Do you think the education provided in Indian colleges is good enough to churn out professionals in the open source technology domain?
I think industry is plugging into the education system so that we get better and much more employable talent. That is the general phenomenon, and open source companies are not far behind. Industry is investing in and influencing the education system on what kind of courses they ought to cover, so that the talent being churned out is good enough to merge into the workplace. We pick employees from colleges across India. We target some of the premier institutes across the country. Over the last several years, the trend is an increased penetration of open source talent in the institutes. There has been a lot more awareness over the past 10 years. Ten years back, most of the colleges didn’t even know what open source was. They did not have Linux in their labs. Today, it is a very different scenario. Having said that, I would add that there are two types of colleges—the premium ones that have a much higher penetration of open source in the labs and amongst their students, and some other colleges which have to catch up a bit more. So I would say that, overall, the colleges are catching up with open source technology. Moreover, colleges also understand that they have to work with the industry, so they are actually finding ways in which they can collaborate. We have worked with some of the colleges that wanted to understand what kind of curriculum may be good for their students to get jobs… so they wanted to work with us for their projects. We feel that this partnership is really helping in bridging the gap between what academia teaches and what the industry needs.
Q: Can the students do internships with you?
Yes, of course! There are different channels from which we pick students for internships. We pick up the passionate students who have the capability to join us as employees at a later stage. We plug them into our live projects so that they learn our technology better. If students whose college we have not visited want to do an internship with us, they apply to our HR department. We get lots of such requests but we pick and choose from the applications depending upon our need and the students’ calibre. Since these internships are on live projects, it requires a lot of investment from the team to bring students on par with the rest and only then can they start contributing. So, we cannot afford to have a large number of them but, yes, quality people are always welcome.
Q: Are your clients comfortable with open source solutions? Has the outlook in enterprises changed over the years, regarding open source technology?
Yes, like I said earlier, the traction is continuously increasing for various reasons, including the lower cost structure and people not wanting vendor lock-in. But once you say Linux, people immediately think it is freeware and that its security is compromised. In a company like Novell or SUSE, it’s not just open source software that we download and give; it comes with added services as well. What the customers buy is the support of enterprise-class open source software and the technical support of SUSE behind it. So, it is different from what everybody else can download from the Internet, and there is no reason for these security concerns. I think what matters is which stack we offer to them. Open source is a lot more customisable than its proprietary counterparts, so the customers see the advantage.
Q: How can any organisation gain by switching to open source tools and platforms?
If you ask about the scenario today, enterprises would want to have a mix of open source and proprietary software, depending upon the application. But more and more people want to choose open source. Customers do not want to live with the pain of a vendor lock-in and the fear that comes with it. Also, the cost matters a lot. The total cost of a Linux-based solution is, any day, a whole lot cheaper as compared to a proprietary solution. Open source allows a lot of customisation, so you can go for whatever features you want. Of course, being open source software, you can add on the applications that the clients want.
Q: How do FOSS platforms or tools add value to a project’s development?
In the teams doing a lot of development, we have seen that engineers tend to use a lot of FOSS tools like Bugzilla. Some of those tools are very basic to the project development. FOSS tools not only bring down costs but some of these tools have really matured over a period of time. They are being worked on by thousands of developers around the world as against one organisation, which is the case with proprietary software tools. For us, it is always good to see the developments happening from the market’s point of view and to bring down the overall cost. A lot of open source tools are being used even for projects that are not open source.
Q: What are the key tools that you use for product development, and which products have they been used for?
The tools we use are very particular to the platform we work for. If a product is meant for the Windows market, we will obviously use tools that will help us to develop for Windows. But then we also have tools like KDE, iFolder, Open Office, etc. Open Office is used extensively in our organisation. I use it pretty much everyday.
Q: How does Attachmate find the right kind of talent to work in its open source projects? And what skill sets do you look for while recruiting your employees?
The hiring at Attachmate is not different from other open source projects. Having said that, there are some distinct capabilities we look for, especially when we are hiring for open source. We do not have teams in one single location and open source projects
typically operate as a ‘community’, with people working from different locations across the globe and contributing. We look for people with the capability to work with their remote counterparts, and that requires a different set of skills than working in a team sitting across the cubicles. It means that people should be able to work with different cultures, understand what their counterparts are saying and also be able to communicate effectively. The candidates should be able to talk about their work openly, because people don’t see each other day in and day out, so they will need to know the developer as a person. There will be appreciation and criticism at the same time. If they contribute some code which is not good, people may trash it, so they should be able to handle that kind of pressure. They don’t have managers who can shield them from such things on an everyday basis. People should have the maturity to represent themselves and to communicate effectively with counterparts whom they have not met and do not know; they just know their counterparts as email addresses or by name. These things require slightly different skills. So when we hire for open source, we look at some of those skills, apart from the regular programming and technical skills that we expect.
In projects with SUSE and Attachmate, we have different ways of hiring people. We have ‘Bootcamps’, where people can contact us and participate in a contest. The candidates are given programming challenges, and those who clear the contest can become a part of the open source project. The contests for different projects have different formats. So, hiring for open source is a slightly different ball game. The technical skills won’t be drastically different, but the people should have a high level of passion for their work, and should be self-driven and objective
about criticism. That, we see, is the differentiating factor. As far as the technical skills are concerned, if they have the right attitude, they can pick up technical skills easily, from project to project.
Q: Do you also provide training to the people you hire?
Yes, of course. The training is very specific to the kind of projects they do and the skill sets that are required for the project. Once the candidates join us, we expect them to have some technical skills, since we do intense training. We believe in on-the-job learning, so that people can pick up on the technologies that we use.
Q: We know that colleges do not provide the kind of education that makes students industry-ready, and they need some extra certifications. What kind of certifications should one go for, and does having them add to a professional’s profile?
Certifications do make a lot of difference, especially for open source projects because they help students pick up the specific skills required for open source jobs. Last year, we tied up with some colleges to encourage certifications, and we helped them out with material. We do this on a regular basis.
Q: Do you interact with the online open source community as well?
Yes, we do a lot of that. In fact, this is something very fundamental to open source development. We interact with a lot of open source communities across the globe—some of them are related to the GNOME project, the iFolder project, the KDE project and the Linux kernel project. There are many projects of which we are either the owners or the contributors.
(Continued from page 84...) ...planning. In real-world projects, dynamic analysers are used on a 'need basis'. For example, if there are many memory leaks in a C/C++ program, projects usually try applying tools such as Rational Purify or Valgrind, which are useful. However, a better approach is to integrate such tools into the testing infrastructure, and run them as part of the regular tests. You may be surprised to see dynamic analysers identify new bugs from the same testing infrastructure as the development progresses! Most programmers are surprised by this simple suggestion: they never consider using dynamic analysers regularly, and think of them only for 'need-based' use.

A final warning: don't blindly trust the results of program analysis tools! There have been cases in which trying to fix the warnings from static or dynamic analysers has itself introduced bugs. Here is an example. In 2008, a security vulnerability was found in Debian's OpenSSL package; the root cause was traced to a change made two years earlier. In 2006, two lines were removed from crypto/rand/md_rand.c with the comment, "Don't add uninitialised data" (see Figure 1). This change was made because the Valgrind tool complained about the use of an uninitialised variable. However, the code deliberately used the uninitialised value to add 'entropy' to the random-number generator. Because of this change, the randomness in Debian-generated keys (SSL and SSH) was reduced to 15 bits: the only 'randomness' left was the process ID, which can take at most 2^15 = 32,768 values, so only 32,768 unique keys were possible. The result was, obviously, a security vulnerability! The fix was to 'undo' the change and retain the uninitialised variable. This is a good example of how intentional 'errors' are used in practice, and why we should not blindly trust the results of program analysis tools.
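To make the mechanism concrete, here is a simplified, illustrative C sketch of the pattern involved. It is a stand-in of my own, not the actual OpenSSL source:

/* Illustrative sketch only -- NOT the real crypto/rand/md_rand.c.
 * An uninitialised buffer is deliberately mixed into the entropy
 * pool; deleting the marked call silences Valgrind's warning but
 * leaves the process ID as almost the only entropy source. */
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

#define POOL_SIZE 1024
static unsigned char pool[POOL_SIZE];

static void mix(const unsigned char *p, size_t n)
{
    static size_t pos;
    while (n--)
        pool[pos++ % POOL_SIZE] ^= *p++;   /* stir bytes into the pool */
}

void seed_prng(void)
{
    unsigned char junk[256];                 /* intentionally uninitialised */
    mix(junk, sizeof junk);                  /* <-- the kind of call removed in 2006 */
    pid_t pid = getpid();
    mix((unsigned char *)&pid, sizeof pid);  /* ~15 bits of 'entropy' remain
                                                if the line above is deleted */
}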
By: S G Ganesh. The author works for Siemens (Corporate Research & Technologies), Bengaluru. You can reach him at sgganesh at gmail dot com.
Creating OpenMRS Modules

This article demonstrates how to build a basic OpenMRS 'hello world' module using the OpenMRS module framework as the base: a simple page that will only be shown to users if they are logged in. The software prerequisites are the NetBeans IDE and SVN. Experience with Java development, especially Spring and Hibernate, is required.
First, create a directory for the development work (I called mine development) and change to it using the cd command. Do an SVN checkout of the OpenMRS module framework with the following command:

svn checkout http://svn.openmrs.org/openmrs-modules/helloworld/
Now launch NetBeans. Use File > Open Project, browse to the development directory, select the pom.xml file, and click Open. Once the project is open, the sub-projects can be opened by expanding the project and opening the API and OMOD sub-projects (right-click each and select Open). Let's assume your module name is Hello World and its ID is helloworld.

Note: The module ID is a string that is unique among modules and distinguishes the module from others. The module name can be anything that best describes it.

Once you have decided on the module ID and name, it's time to change some of the project properties. If you want to skip this step, you can apply a diff to the checked-out version of the code by downloading it from https://gist.github.com/3067580. Otherwise, modify the properties in the following files, replacing @module_id@ with the module ID and @module_package@ with the module package name:

omod/src/main/resources/HelloWorld.hbm.xml
omod/src/main/resources/messages.properties
omod/src/main/resources/config.xml
omod/src/main/resources/messages_es.properties
omod/src/main/resources/messages_fr.properties
omod/src/main/resources/moduleApplicationContext.xml
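If you would rather script these substitutions than edit each file by hand, a shell loop does the job. This is a sketch of my own (assuming GNU sed and the helloworld ID and package used in this article), not a step from the official instructions:

for f in omod/src/main/resources/HelloWorld.hbm.xml \
         omod/src/main/resources/messages.properties \
         omod/src/main/resources/config.xml \
         omod/src/main/resources/messages_es.properties \
         omod/src/main/resources/messages_fr.properties \
         omod/src/main/resources/moduleApplicationContext.xml
do
    # replace both placeholder tokens in place
    sed -i 's/@module_id@/helloworld/g; s/@module_package@/org.openmrs.module.helloworld/g' "$f"
done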
Other metadata that needs to be modified in the file omod/src/main/resources/config.xml is shown below:
<id>helloworld</id>
<name>helloworld</name>
<version>1.0</version>
<package>org.openmrs.module.helloworld</package>
<author>Ben Wolfe</author>
<description>Basic OpenMRS Sample Module</description>
Basic functionality
Now, let's add some more basic functionality: when a user with the 'View Hello World' privilege views a specific page, a 'hello world' string will be printed a specific number of times (the string itself comes from messages.properties; the number of repetitions is a global property). Now edit the config file at helloworld/omod/src/main/resources/config.xml and add the following privilege under the privilege tag:
<privilege>
  <name>View Hello World Phrase</name>
  <description>Able to view the "Hello World" greeting</description>
</privilege>

Then, add the global property (the number of times to print 'hello world') in the same config.xml:

<globalProperty>
  <property>helloworld.repeatNumber</property>
  <defaultValue>3</defaultValue>
  <description>Number of repetitions of the phrase "Hello World"</description>
</globalProperty>

After this, create viewHelloWorld.jsp in helloworld/omod/src/main/webapp with the following content:

<%@ include file="/WEB-INF/template/include.jsp" %>
<openmrs:require privilege="View Hello World" otherwise="/login.htm" redirect="/module/helloworld/viewHelloWorld.htm" />
<%@ include file="/WEB-INF/template/header.jsp" %>
<br/>
<openmrs:globalProperty key="helloworld.repeatNumber" var="repeatNum" />
<c:forEach begin="1" end="${repeatNum}">
  <spring:message code="helloworld.helloWorld"/> <br/>
</c:forEach>
<br/>
<%@ include file="/WEB-INF/template/footer.jsp" %>

You can access this page at the following location after module installation:

<Web address of the OpenMRS installation>/module/helloworld/viewHelloWorld.htm

Adding to the user dashboard

In order to make the page visible to users on their homepage after logging in (i.e., on the dashboard), add the following to <API>/helloworld/omod/src/main/resources/config.xml. AdminList is a class that provides the functionality to extend the dashboard view when module developers want to add content to it. There are other extension points like this too, which let one add content to specific parts of the screen whenever the module developer requires it:

<extension>
  <point>org.openmrs.admin.list</point>
  <class>org.openmrs.module.helloworld.extension.html.AdminList</class>
</extension>

Once this is done, it's time to edit AdminList.java, the class registered at the admin.list extension point above. The following should be its content:

package org.openmrs.module.helloworld.extension.html;

import java.util.HashMap;
import java.util.Map;

import org.openmrs.module.Extension;
import org.openmrs.module.web.extension.AdministrationSectionExt;

public class AdminList extends AdministrationSectionExt {

	public Extension.MEDIA_TYPE getMediaType() {
		return Extension.MEDIA_TYPE.html;
	}

	public String getTitle() {
		return "helloworld.title";
	}
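The printed listing breaks off after getTitle(). Given the HashMap and Map imports above, the class almost certainly goes on to override getLinks(), which AdministrationSectionExt uses to map page URLs to the message codes for their link labels. The following completion is my assumption, not part of the printed article (the URL and message code are the ones used earlier):

	public Map<String, String> getLinks() {
		// key: URL relative to the OpenMRS web root; value: message code for the link label
		Map<String, String> map = new HashMap<String, String>();
		map.put("module/helloworld/viewHelloWorld.htm", "helloworld.title");
		return map;
	}
}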
Figure 1: OMOD file
Calendar for 2012-13: Events to Look Out For in 2012

Reseller Club Hosting Summit
http://www.rchostingsummit.com/
India's largest conference for the Web hosting community
1–2 November 2012, Renaissance Mumbai Convention Centre Hotel, Mumbai

LinuxCon Europe 2012
https://events.linuxfoundation.org/events/linuxcon-europe/
An event for the Linux community
5–9 November 2012, Hotel Fira Palace, Barcelona, Spain

KVM Forum/oVirt Workshop 2012
http://events.linuxfoundation.org/events/kvm-forum/
An event for the open source community
7–9 November 2012, Hotel Fira Palace, Barcelona, Spain

Broadband Tech India 2012
http://www.bharatexhibitions.com/english/BBTI2012/index.php
Offers insights into the very latest technologies and applications that are driving the communication market forward
23 November 2012, Hotel Shangri-La, New Delhi

Gartner Data Center Conference
http://www.gartner.com/technology/summits/na/data-center/
An event for data centre professionals managing and advancing their enterprise's evolving IT infrastructure requirements
3–6 December 2012, The Venetian Resort Hotel and Casino, Las Vegas

The Big Data, Analytics, Insights Conference
http://www.bigdatainsights.co.in/
An event to explore data analytics solutions, and the latest skills, tools and technologies needed to make Big Data work for an organisation
18–19 December 2012, The Westin Mumbai Garden City, Mumbai
LFY magazine attractions during 2012-13

Month | Theme | Featured List
May 2012 | Virtualisation | Certification & Training Solution Providers
June 2012 | Android | Virtualisation Solution Providers
July 2012 | Open Source in Medicine | Web Hosting Providers
August 2012 | Open Source on Windows | Top Tablets
September 2012 | Open Source on Mac | Top Smart Phones
October 2012 | Kernel Development | Cloud Solution Providers
November 2012 | Open Source Businesses | Android Solution Providers
December 2012 | Linux & Open Source Powered Network Security | Network Security Solutions Providers
January 2013 | Linux & Open Source Powered Data Storage | Network Storage Solutions Providers
February 2013 | Top 10 of Everything on Open Source | IPv6 Solution Providers
An A-Z listing of Android Solutions Providers

Adodis Technologies | Bengaluru, India
Adodis is an established leader in open source Web and mobile applications development. The company develops world-class websites, extensions and iOS/Android mobile apps in an ISO 9001-certified operating environment.
Leading clients: Reputed IT companies and innovative entrepreneurs.
USP: Armed with a team of skilled developers who have expertise in open source-based development, and robust quality processes.
Special mention: The company has been featured in leading industry journals and portals.
Website: http://www.adodis.com/
CDN Software Solutions | Indore, India
The company offers customised Android application development and interface designing for most of the platforms, including Cupcake, Donut, Éclair, Froyo, Gingerbread, Honeycomb and Ice Cream Sandwich, using the Android SDK and framework APIs. It also works with the Android Media APIs, touchscreens, 2D and 3D graphics, Bluetooth, the Android security architecture, OpenGL, Wi-Fi, magnetometers and various other technologies.
USP: CDN Software Solutions has developed 46 Android apps for multiple domains including utilities, business, entertainment, sports, etc, all of which have met the clients' needs. Most of the apps are available on Google Play. The company believes in staying abreast of changing trends, and its team members keep themselves updated on the latest and best technologies through a process of continuous learning. These proactive approaches have enabled the company to stay ahead and develop various skillsets.
Special mention: The company was ranked the 'Top Android Application Developer' and the 'Top Mobile Application Development Company' by Sourcingline, a Washington DC-based research firm focused on IT firms.
Website: http://www.cdnsol.com/
Fusion Informatics Ltd | Ahmedabad, India
Fusion Informatics offers a variety of Android services and solutions like business application development, enterprise application development, the porting of applications to different platforms, hybrid mobile application development, banking and financial application development, e-learning application development, e-POS application development, sales management/fleet management/tracking application development, e-commerce and social networking application development, GPS/LBS/Bluetooth/Google Map integration and Android digital eBook development, as well as a lot more.
USP: Focused on business and enterprise applications, with exposure to over 4,000 projects in different verticals and 12 years of experience in global deliveries, providing end-to-end solutions for different cultures and domains.
Website: http://69.162.114.154/fusioncms4/Index
Openxcell Technolabs | Ahmedabad, India
The company develops Android applications and games using Unity 3D. It also provides dedicated developers on hire to IT companies in the US, UK, Europe and Australia.
Leading clients: Mars International Pvt Ltd, which has products like Pedigree and Mars chocolates, is a major client. The company has also developed an Android app for the Government of Gujarat, for an event named 'Vibrant Gujarat' that took place in 2011.
USP: Armed with a technically sound and experienced team of Android developers, the company has been able to develop over 75 apps for international companies.
Website: http://www.openxcell.com
QBurst | Trivandrum, India
QBurst offers a diverse set of Android services to its clients. These include location-based services, Near Field Communication (NFC), augmented reality, core animation, OpenGL, and audio/video recording and streaming.
Leading clients: A few of the company's premium global clients include Peugeot (the major car manufacturer); the Department of Tourism and Commerce Marketing, Dubai; Primera Hora (the popular Spanish-language newspaper); and Ingogo (the leading taxi booking system in Australia).
USP: QBurst uses agile, iterative development methodologies, such as Scrum, for application development. The company uses Web-based project management and collaboration tools such as Trac and Redmine to ensure projects stay on track. Clients are given access to these tools so that they remain updated on the project status as the company's experts design, develop, test and deliver applications. Best practices such as continuous integration, unit testing and frequent releases are followed.
Special mention: QBurst has been ranked fourth among the best Android development companies by Best Web Design Agencies, and is featured in the top Android & iPhone Developers Matrix by the US-based Sourcingline.
Website: http://www.qburst.com

Tekriti Software | Gurgaon, India
Tekriti is an end-to-end solutions provider that offers services like conceptualisation, design, development (including integration with external systems like CRM, CMS and ERP solutions), location-based services, building Web interfaces, testing, automated testing, systems administration, maintenance and enhancement.
Leading clients: Tekriti's clientele comes primarily from two segments, enterprises and start-ups. Some prominent Indian clients are Indiatimes, Max Bupa, Ixigo, Kuoni, Sony India, Nokia India and NDTV.
USP: Tekriti takes pride in understanding user behaviour and the way Web and mobile businesses run. Its USP is its consultative approach towards product development: Tekriti is not just a technology implementation partner but a complete product development partner, providing end-to-end services.
Special mention: Tekriti has been recognised as one of the '10 most promising IT & ITES companies in India' by SmartTechie, and was featured on the cover of Business World magazine. Ashish Kumar, founder and CEO of Tekriti Software, was ranked among the top 100 entrepreneurs from Asia Pacific and granted a fellowship by FYSE (Foundation for Youth Social Entrepreneurship), China.
Website: http://www.tekritisoftware.com
Read more stories on security and surveillance on www.electronicsb2b.com

Top security stories:
• CCTV camera manufacturers can look forward to a bright future
• Is the industry ready for CCTV cameras?
• Surveillance on mobile: the latest in the security market
• Sectors that contribute to the growth of the surveillance vertical in India
• Challenges that hinder the growth of the CCTV market in India
• Surveillance scenario: IP cameras outsmart analogue cameras
• Demand for CCTV cameras in India soars

Log on to www.electronicsb2b.com and be in touch with the electronics B2B fraternity 24x7.
TIPS & TRICKS
Create custom commands to play RAW files

A RAW file is a sound file with no header, so to play it you must supply the format details that a header would normally carry:

play -c 1 -s -r 8000 -t raw -2 <file_name>

(Here, -c 1 selects one channel, -s signed samples, -r 8000 an 8 kHz sample rate, -t raw a headerless stream, and -2 two bytes per sample.)

If you don't have the play command, please install sox using the command given below:

# apt-get install sox

Or use your distro's package manager to install it. Now create a command alias as follows.
Step 1: Open the .bashrc file located in your home directory:

$ vi /home/<username>/.bashrc

Add this line at the end of the file and save it:

alias playraw="play -c 1 -s -r 8000 -t raw -2"

Step 2: Reload the .bashrc file:

$ source /home/<username>/.bashrc

Step 3: Use the alias:

$ playraw <file_name>

—Ishtiyaq Husain, ishtiyaq.husain@gmail.com
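As an aside, you can also give the RAW data a proper header once and for all by converting it to WAV with sox itself. A sketch, assuming a newer sox that takes -e/-b in place of the older -s/-2 flags:

# same format as above: mono, 8 kHz, signed 16-bit raw input
sox -t raw -r 8000 -e signed -b 16 -c 1 input.raw output.wav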
Convert all files to MP3 format in two simple steps

This tip converts your FLV, WMA, MP4 and other audio files into MP3 files. I tried this on Ubuntu 10.10 and expect it to work on most of the popular Linux distros. You need to install libavcodec-extra-52 and ffmpeg if they are not already installed. Now open your terminal and go to the folder with the music files in different formats. Run the following command to rename all files in the folder:

$ for n in *; do mv "$n" `echo $n | tr ' ' '_'`; done

This will replace blank spaces in file names with '_'. For example, 'file name' will be renamed 'file_name'. Then run:

$ for filename in *.*; do ffmpeg -i $filename $filename.mp3; done;

The above command will create MP3 copies of all the audio files.
—Nikhil Ikhar, siover2001@gmail.com
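A small refinement, offered as a sketch rather than part of the original tip: quoting the variable makes the renaming pass unnecessary, and stripping the old extension avoids output names like song.wma.mp3:

$ for f in *.*; do ffmpeg -i "$f" "${f%.*}.mp3"; done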
SSH access on a different port

To change the default SSH port, you need to edit the SSH configuration file, /etc/ssh/sshd_config. Open the configuration file in any text editor and search for the following line:

#Port 22

Uncomment it and replace 22 with the port number you'd like to run SSH on; in my case, I am using port 2087. Now, restart sshd by issuing the following command:

/etc/rc.d/init.d/sshd restart

To test the changes, you first need to check whether the given port is open. The following command will let you know if sshd is listening on the new port:

netstat -ltn | grep 2087

In the output of the above command, the state should be LISTEN for port 2087. Finally, test SSH on the new port as follows:

ssh USERNAME@localhost -p 2087

—Vijith, vijith.pa@gmail.com
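If you script your servers, the same edit can be done non-interactively. A sketch, assuming the stock '#Port 22' line is present and GNU sed is available; remember to open the new port in your firewall before restarting sshd:

# sed -i 's/^#Port 22/Port 2087/' /etc/ssh/sshd_config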
Get the directory size from the terminal

Getting the size of a directory is easy with the du command, but if you want the size of all the files in a directory, excluding its sub-directories, you can use the command given below:

$ ls -hs | head -1

This will give you the desired size.
—Kousik Maiti, kousikster@gmail.com
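For comparison, du can produce much the same number. A sketch: the -S (--separate-dirs) flag keeps each directory's total free of its sub-directories' contents, and the top-level directory is printed last:

$ du -Sh . | tail -n 1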
Adding the 'Eject' command to your menu

Sometimes it gets difficult to eject the CD-ROM on your Linux-based computer. Here is a tip that allows you to add an Eject command to your menu. Create a file called 'Eject CD' in the $HOME/.gnome2/nautilus-scripts/ directory, open it in any text editor and type the following code:

#!/bin/bash
eject /dev/sr0

Here, sr0 is the device name of the CD-ROM drive. Save and close the file. Next, give 'execute' permission to the newly created script:

$ chmod +x $HOME/.gnome2/nautilus-scripts/Eject\ CD

Now you can eject the CD from the pop-up menu displayed on right-clicking in the file manager.
Note: Before ejecting the CD, be sure that no program is using the CD-ROM drive, because the drive does not give any error if it fails to eject the CD.
—Vivek Marakana, vivek.marakana@gmail.com
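To guard against the silent-failure case mentioned in the note, the script can first check that nothing is holding the drive open. A sketch, relying on lsof exiting non-zero when no process is using the device:

#!/bin/bash
# eject only if no process is currently using the drive
lsof /dev/sr0 > /dev/null || eject /dev/sr0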
Execute your jobs without a hurdle

Here is a tip that will help you keep jobs executing even after exiting the shell. You can use the 'disown' command to keep a job running after you've logged out; it releases the running jobs from the shell's ownership. You can also use the 'nohup' command, but for that, you have to plan in advance, as shown below:

$ nohup nmap -vv -sP 10.13.37.1/24 &

You can now log out from the terminal. Your command will keep executing, and the output will be in nohup.out. The same can also be achieved by using 'disown', even for a command that is already running:

$ nmap -vv -sP 10.13.37.1/24 -oN VizOutput.txt

Now press Ctrl+Z. This will suspend the running job. After suspending the job, resume it in the background with 'bg' and then release it:

$ bg
$ disown

Now you can log out from the terminal without killing the job.
—Vizay Soni, Vs4vijay@gmail.com

Linux monitoring tools

You can use various Linux tools to collect system statistics like CPU usage, memory usage, I/O for hard disks, etc. A few of these are listed below; some typical invocations follow the list.
1. top – A well-known command for listing the running processes, with CPU and memory usage.
2. vmstat – Provides all virtual memory-related information.
3. iostat – Provides I/O statistics for devices, and other CPU statistics.
4. free – Displays the free, used, buffered or cached memory.
5. dstat – A good command for collecting all the statistics in one place; it is a combination of the vmstat, iostat and netstat commands.
6. netstat – Displays the routing table, network interface statistics and the ports opened by processes.
—Prasanna, prasanna.mohanasundaram@gmail.com
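Some typical invocations of these tools (standard options, but treat the exact flags as a sketch):

$ vmstat 5 3      # memory/CPU summary every 5 seconds, three samples
$ iostat -x 5     # extended per-device I/O statistics every 5 seconds
$ dstat -tcmdn 5  # timestamped CPU, memory, disk and network stats every 5 seconds
$ free -m         # memory usage in megabytes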
Share Your Linux Recipes! The joy of using Linux is in finding ways to get around problems—take them head on, defeat them! We invite you to share your tips and tricks with us for publication in LFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at www.linuxforu.com. The sender of each published tip will get an LFY T-shirt.
EnterpriseDB Postgres Plus Subscription for your successful PostgreSQL/Postgres Plus deployments. Includes:
• Predictable cost for your support, via remote, e-mail and telephonic support for your production systems
• Unlimited number of incidents supported
• Software updates, upgrades, patches and technical alerts service
• Web portal access and knowledge base access, with PDF documentation
The Postgres Plus Solution Pack provides high value add-on tools to PostgreSQL for Administrative Monitoring, Data Integration across multiple servers, Availability, Security, Performance and Software Maintenance.
Postgres Enterprise Manager (PEM): The only solution that allows you to intelligently manage, monitor and tune large numbers of Postgres database servers enterprise-wide, from a single console.

SQL Protect: Protects your PostgreSQL and Advanced Server data against multiple SQL injection vectors by automatically learning safe data access patterns and collecting attack data.

PL/Secure for PL/pgSQL: Protects your server-side database code and intellectual property from prying eyes, for both internal and packaged applications, without any special work on the part of the developer!
Updates Monitor: Eases your installation maintenance burden by notifying you when updates to any components are available, and assists you in downloading and installing them.

Migration Toolkit: Fast, flexible and customised database migration from Oracle, SQL Server, Sybase and MySQL to PostgreSQL and Postgres Plus Advanced Server.

SQL Profiler: A developer's friend to find, troubleshoot and optimise slow-running SQL, fast! Provides on-demand or scheduled traces that can be sorted, filtered and saved, by user and database.
xDB Replication Server: Provides easy data integration between PostgreSQL-based servers, and between Oracle and PostgreSQL, allowing Oracle users to dramatically reduce their Oracle licence fees.
You can find more details on the following links: http://www.enterprisedb.com/products-services-training/subscriptions http://www.enterprisedb.com/postgresql-products/premium
For inquiries contact at: sales@enterprisedb.com EnterpriseDB Software India Private Limited Unit # 3, Ground Floor, Godrej Castlemaine, Sassoon Road Pune – 411001 T +91 20 3058 9500 F +91 20 3058 9502 www.enterprisedb.com