Open Source For You | THE COMPLETE MAGAZINE ON OPEN SOURCE
Volume: 01 | Issue: 01 | Pages: 112 | October 2012 (LINUX For You numbering: Volume: 10 | Issue: 08)
Price: India ₹ 100 | US $ 12 | Singapore S$ 9.5 | Malaysia MYR 19
Free DVD: OpenSUSE 12.2

Cover highlights:
LINUX For You is now Open Source For You
The March of the Hardware: Kernel Programming
Exploring The CloudBees PaaS
Android Push Notifications With Google Cloud Messaging
Track Time In Your Driver With Kernel Timers
Get Started With Kernel Module Programming
A Simple Guide To Building Your Own Linux Kernel
Leading Providers Of Cloud Solutions
OSI 2012, A Curtain Raiser: Celebrating The Spirit Of Open Source
"To developers who question if Microsoft is really serious about open source, my answer would be, 'Absolutely'"
Mandar Naik, Director, Platform Strategy, Microsoft


Contents

MUST READ
32  Celebrate the Spirit of FOSS with Open Source India 2012!
46  Web-based Platforms for Localisation

FOR YOU & ME

Will Now Be

34

GNOME Extensions: Spicing Up the Desktop Experience

39

"For developers who really question if Microsoft is serious about open source, my answer would be 'absolutely"— Mandar Naik, director, Platform Strategy at Microsoft

48

Web-based Platforms for Localisation

51

Linux at Work

56

PHP Development: A Smart Career Move

ON THE DVD

OpenSUSE 12.2 Installation DVD

A Linux-based complete operating system that can be used across a range of desktops and servers. The new version features a faster storage layer via Linux 3.4 and accelerated functions in glibc and Qt, providing users with a more fluid and responsive desktop.




Contents

DEVELOPERS

58

Track Time in Your Driver with Kernel Timers

92

63

Get started with Kernel Module Programming

67

A Simple Guide to Building Your Own Linux Kernel

69

Kernel Ticks and Task Scheduling

72

Kernel Uevent: How Information is Passed from Kernel to User Space

78

The Semester Project-VI: File System on Block Device

81

Git Version Control Explained: Advanced Options

85

Android Push Notifications with GCM

87

Ctags and Cscope

96

Using OpenBSD for the Server Infrastructure

99

Cyber Attacks Explained: Cryptographic Attacks

urus 89

Delhi (hQ) D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020 Phone: (011) 26810602, 26810603; Fax: 26817563 E-mail: info@efyindia.com BeNGAlURU Ms Jayashree Ph: (080) 25260023; Fax: 25260394 E-mail: efyblr@efyindia.com e-mAil: support@efyindia.com

"OpenStack has emerged as a really important component for cloud services"— Rajesh Awasthi, director, Cloud Service Providers, NetApp India, Marketing & Services An Introduction to CloudBees


Back Issues Kits ‘n’ Spares New Delhi 110020 Phone: (011) 26371661-2 E-mail: info@kitsnspares.com Website: www.kitsnspares.com

Advertising CheNNAi Venkat CD Mobile: 9742864199 E-mail: efychn@efyindia.com hYDeRABAD D S Sunil Mobile: 8977569691 E-mail: efyenq@efyindia.com KolKAtA Gaurav Agarwal Ph: (033) 22294788; Telefax: (033) 22650094 Mobile: 9891741114 E-mail: efycal@efyindia.com

50

Exploring Software: ReviewBoard

61

CodeSport

75

The Joy of Programming: Auto-generating Code

mUmBAi Ms Flory D’Souza Ph: (022) 24950047, 24928520; Fax: 24954278 E-mail: efymum@efyindia.com PUNe Sandeep Shandilya; Ph: (022) 24950047, 24928520 E-mail: efypune@efyindia.com GUJARAt Sandeep Roy E-mail: efyahd@efyindia.com Ph: (022) 24950047, 24928520 SiNGAPoRe Ms Peggy Thay Ph: +65-6836 2272; Fax: +65-6297 7302 E-mail: pthay@publicitas.com, singapore@publicitas.com

A List of Leading Cloud Solution Providers

102

An Introduction to the Yocto Project

UNiteD StAteS Ms Veronique Lamarque, E & Tech Media Phone: +1 860 536 6677 E-mail: veroniquelamarque@gmail.com ChiNA Ms Terry Qin, Power Pioneer Group Inc. Shenzhen-518031 Ph: (86 755) 83729797; Fax: (86 21) 6455 2379 Mobile: (86) 13923802595, 18603055818 E-mail: terryqin@powerpioneergroup.com, ppgterry@gmail.com tAiwAN Leon Chen, J.K. Media Taipei City Ph: 886-2-87726780 ext.10; Fax: 886-2-87726787

Exclusive News-stand Distributor (India)

REGULAR FEATURES
08  You Said It...
10  Q&A Powered By LFY Facebook
12  Offers of the Month
14  New Products
18  Open Gadgets
24  FOSS Bytes
31  CodeChef
103 Events & Editorial Calendar
106 FOSS Jobs
108 Tips & Tricks

ADMIN  43

Editor: Rahul Chopra
Editorial, Subscriptions & Advertising
Customer Care

iBh BooKS AND mAGAziNeS DiStRiBUtoRS Pvt ltD Arch No, 30, below Mahalaxmi Bridge, Mahalaxmi, Mumbai - 400034 Tel: 022- 40497401, 40497402, 40497474, 40497479, Fax: 40497434 E-mail: info@ibhworld.com Printed, published and owned by Ramesh Chopra. Printed at Tara Art Printers Pvt Ltd, A-47, Sec-5, Noida, on 28th of the previous month, and published from D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020. Copyright © 2011. All articles in this issue, except for interviews, verbatim quotes, or unless otherwise explicitly mentioned, will be released under Creative Commons Attribution-Share Alike 3.0 Unported License a month after the date of publication. Refer to http://creativecommons.org/licenses/by-sa/3.0 for a copy of the licence. Although every effort is made to ensure accuracy, no responsibility whatsoever is taken for any loss due to publishing errors. Articles that cannot be used are returned to the authors if accompanied by a self-addressed and sufficiently stamped envelope. But no responsibility is taken for any loss or delay in returning the material. Disputes, if any, will be settled in a New Delhi court only.

SUBSCRIPTION RATES
Period        News-stand price (₹)   You Pay (₹)   Overseas
Five years    6000                   3600          —
Three years   3600                   2520          —
One year      1200                   960           US$ 120

Kindly add ` 50/- for outside Delhi cheques. Please send payments only in favour of EFY Enterprises Pvt Ltd. Non-receipt of copies may be reported to support@efyindia.com—do mention your subscription number.



Trained participants from over 42 Countries in 6 Continents Linux OS Administration & Security Courses for Migration LLC102: Linux Desktop Essentials LLC033: Linux Essentials for Programmers & Administrators LLC103: Linux System & Network Administration LLC203: Linux Advanced Administration LLC303: Linux System & Network Monitoring Tools LLC403: Qmail Server Administration LLC404: Postfix Server Administration LLC405: Linux Firewall Solutions LLC406: OpenLDAP Server Administration LLC408: Samba Server Administration LLC409: DNS Administration LLC410: Nagios - System & Network Monitoring Software LLC412: Apache & Secure Web Server Administration LLC414: Web Proxy Solutions Courses for Developers LLC104: Linux Internals & Programming Essentials LLC106: Device Driver Programming on Linux LLC107: Network Programming on Linux LLC108: Bash Shell Scripting Essentials LLC109: CVS on Linux LLC204: MySQL on Linux LLC205: Programming with PHP LLC206: Programming with Perl LLC207: Programming with Python LLC208: PostgreSQL on Linux LLC504: Linux on Embedded Systems LLC701: Android Internals LLC702: Android Application Development

Advanced Administration Training on DNS, Samba, Nagios & Postfix Python Programming - 13 October Android Application Development - 20 Oct

RHCVA / RHCSS / RHCA Training - Exams RH318: 13 & 27 Oct 2012; EX318: Call; RHS333: 13 Oct; RH423:20 Oct; RHS429: Call; RH436: 15 Oct, EX436: 19 Oct; RH442: 29 Oct; EX442: 02 Nov RH401: 8 Oct; EX401: 12 Oct

RH299 from 6, 13, 20 & 27 October RHCSA & RHCE Exam 12, 19 & 25 Oct LLC - Authorised Novell Practicum Testing Centre NCLP Training on Courses 3101, 3102 & 3103

Postfix Server Administration 15 October 2012 Microsoft Training Co-venture: CertAspire

RHCE Certification Training RH124: Red Hat System Administration - I RH134: Red Hat System Administration - II RH254: Red Hat System Administration - III RH299: RHCE Rapid Track Course RHCVA / RHCSS / RHCDS / RHCA Certification Training RHS333: Red Hat Enterprise Security: Network Services RH423: Red Hat Enterprise Directory Services & Authentication RH401: Red Hat Enterprise Deployment & Systems Management RH436: Red Hat Enterprise Clustering & Storage Management RH442: Red Hat Enterprise System Monitoring & Performance Tuning RHS429: Red Hat Enterprise SELinux Policy Administration RH318: Red Hat Enterprise Virtualization

Microsoft Certified Learning Partner

www.certaspire.com For more info log on to:

www.linuxlearningcentre.com Call: 9845057731 / 9449857731 Email: info@linuxlearningcentre.com

RHCSA, RHCE, RHCVA, RHCSS, RHCDS & RHCA Authorised Training & Exam Centre

NCLA / NCLP Certification Training Course 3101: SUSE Linux Enterprise 11 Fundamentals Course 3102: SUSE Linux Enterprise 11 Administration Course 3103: SUSE Linux Enterprise Server 11 Advanced Administration

Registered Office: # 635, 6th Main Road, Hanumanthnagar, Bangalore 560019

# 2, 1st E Cross, 20th Main Road, BTM 1st Stage, Bangalore 560029. Tel: +91.80.22428538, 26780762, 65680048 Mobile: 9845057731, 9449857731, 9343780054

Gold

Practicum

TRAINING PARTNER

TESTING PARTNER


YOU SAID IT

On changing the name of your magazine!
I have always been an avid reader of LINUX For You and it now appears that the magazine is to be renamed Open Source For You. Can you please update me about this?
—Deepshika Kejriwal, deepshika.kejriwal@gmail.com

ED: Thanks for writing to us. In an effort to meet the ever-broadening scope of open source technology, we decided to rename our publication. While the core concept and content of the magazine will continue to be as relevant and useful as ever, this change means exploring open source technology beyond Linux. Linux is a major part of the open source world, but the latter has now evolved to represent a whole lot more. The changed name will reflect the constantly changing times in the world of open source technology.

ED: Hearty congratulations from our team and we hope to publish more such tips of yours in future issues. And thanks a lot for your inspiring words of appreciation. It really helps us do much better. We make a sincere effort to keep publishing more and more interesting articles to match our readers' interests.

CORNER On subscribing to your magazine

Saurabh Garg: Please let me know the details on how to subscribe to your magazine. I would appreciate it if you could send me the respective link for subscriptions.

The death of Kenneth Gonsalves came as a shock
I am a regular reader of your magazine and have contributed a few articles over the years. Picking up the September issue, I was startled to read the tribute to Kenneth Gonsalves—one of the main reasons for my buying the copy. It is really sad that we have lost one of our finest and most active workers in the open source community. I used to go through his column 'Foss Is Fun' in LFY and it still seems unbelievable that he is no more. Everyone in the community is definitely going to miss him.
—Saurav Sengupta, sauravsengupta17@gmail.com

ED: Thanks for writing in and sharing your feelings about what we all feel is a loss to the open source community. The sudden demise of Mr Gonsalves came as a shock to our team as well, and the tribute to him was a way of expressing our sense of loss. Yes, he definitely will be missed. May his soul rest in peace.

LINUX For You: To subscribe to the print edition, you can log on to http://electronicsforu.com/electronicsforu/subscription/subsc2scheme.asp. We also have an e-zine that allows you to enjoy the digital version of the magazine. For our subscribers, the e-zine service is free. Log on to http://ezines.efyindia.com/ for details.

Where does one find LFY in Chennai?

Visvanath Sv: Hello, I'm from Chennai. I have tried searching for your magazine in shops that sell other software related magazines. But I couldn't find a copy. Can anyone please tell me about the shops in Chennai that sell your magazine? Tulsi Raj: Go to Landmark bookshop in EA or INOX. You can also try at Anna Library.

Kudos to the team I was really excited when I got the news that a tip I’d shared was published in the September 2012 issue of your magazine. The team is doing a remarkable job in promoting open source software. LFY has been a great resource for learners—the tips and articles that you publish are really valuable and helpful for those wishing to continue on their journey through open source. All the very best for your future endeavours and I wish you all the success in the years to come. —Subeesh Mohan V, subee_vmd@yahoo.com

Please send your comments or suggestions to:

The Editor D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020 Phone: 011-26810601/02/03, Fax: 011-26817563 Email: osfyedit@efyindia.com



Powered By

www.facebook.com/linuxforyou

Akash Chauhan:

I run Ubuntu 11.04 in VirtualBox, which has a fixed-size disk of 20 GB that is now full. Is it possible to expand it?

Aman Singhal: There is an option to change that at the time of installation.

Ahmed Zaki:

Is kernel 3.0.1 stable enough?

Nilesh Govindrajan: 3.0.1? The current branch is 3.5!

Aboobacker Mk: Ahmed, upto 3.4 is ultra stable.

Akash Chauhan: Thanks, but I want to know what has to be done after installation?

Manik Jindal: Take an HDD and copy your data from the virtual machine onto it, then remove that machine and add a new one with larger disk space.

Akash Chauhan: It is not showing even HDD and pen drive, not even a space to add additional VBox tools.

Manik Jindal: Before connecting your machine, connect your USB device and after that open VBox; under one of its tabs there is an option to show USB drives after booting the machine. I do not remember exactly now, but there is a way to connect them. Search on the Internet.

Manik Jindal: This might help. Try this link: http://www.howtogeek.com/howto/31726/mountusb-devices-in-virtualbox-with-ubuntu/

Shekhar Singh:

It depends on how you created your VB Ubuntu Image. It asks for the size to be variable or fixed.
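For readers following this thread, here is a minimal command-line sketch of one way to grow a VirtualBox disk. It assumes VirtualBox 4.0 or later and a VDI/VHD image; the file, VM and controller names below are made-up examples. VBoxManage modifyhd --resize only works on dynamically allocated images, so a fixed-size disk is usually cloned to a dynamic one first, resized, and then attached in place of the old disk:

# Clone the fixed-size disk to a dynamically allocated copy (file names are examples)
VBoxManage clonehd "Ubuntu-fixed.vdi" "Ubuntu-dynamic.vdi" --format VDI

# Grow the clone to 30 GB (the new size is given in MB)
VBoxManage modifyhd "Ubuntu-dynamic.vdi" --resize 30720

# Attach the resized clone to the VM in place of the old disk
# (check the actual controller name with: VBoxManage showvminfo "Ubuntu")
VBoxManage storageattach "Ubuntu" --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium "Ubuntu-dynamic.vdi"

After booting the guest, the extra space still has to be claimed by growing the partition and filesystem inside the VM, for example with a GParted live CD.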

Abhilash Thekken Thottathil:

Prince Abo:

Hello everyone, I wanted to know if anyone can help me out with step-by-step Linux installation and configuration.

Aboobacker Mk: Download a Linux distro and try to install it. If you get stuck, post here; we are ready to help you.
Prince Abo: Thank you.

Arun Kumar:

What is kernel programming? What language should I learn to write code?

Balaji Sathyanarayanan: The "kernel" is the heart of the operating system. It deals with all the core operating system functions, like memory management, process management, and giving the highest priority to certain hardware functions by sending signals. So learning kernel programming means learning the C language. And make sure of one thing: learn the Linux commands.
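To make that answer concrete, here is a minimal "hello world" loadable kernel module in C, the kind of code the kernel articles in this issue build on. It is only a sketch; the file name hello.c and the messages are arbitrary, and building it assumes the headers for the running kernel are installed.

/* hello.c: a minimal loadable kernel module */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
        printk(KERN_INFO "hello: module loaded\n");
        return 0;               /* 0 means successful initialisation */
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

With a one-line Makefile containing 'obj-m += hello.o' in the same directory, it can be built and tested with:

make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
sudo insmod hello.ko
dmesg | tail          # shows the 'module loaded' message
sudo rmmod hello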

Hi friends, can you all suggest any good URL for configuring Nagios?

Anoop Manoj: Try these links: http://nagios.sourceforge.net/docs/3_0/quickstart-fedora.html and http://nagios.sourceforge.net/docs/3_0/monitoring-linux.html

Abhilash Thekken Thottathil: Thanks, but will this help?

Anoop Manoj: This is pretty straightforward; what specifically are you looking to configure?
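For readers who want a concrete starting point beyond those links, the core of Nagios 3 configuration is defining host and service objects in a .cfg file that nagios.cfg includes. A minimal sketch follows; the host name, alias and address are made-up examples, while 'linux-server', 'generic-service' and check_ping come from the sample configuration shipped with Nagios:

define host {
    use                  linux-server       ; inherit defaults from the stock template
    host_name            webserver01
    alias                Example web server
    address              192.168.1.10
}

define service {
    use                  generic-service
    host_name            webserver01
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}

Validate the configuration with 'nagios -v /path/to/nagios.cfg' (for example, /usr/local/nagios/etc/nagios.cfg on a source install) before restarting the service.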

Image quality is poor as the photos have been directly taken from www.facebook.com

Jake Charles:

Fully off of Windows 7, and now using Ubuntu XD.

Gokulakrishna Sudharsan: Jake, welcome to the brave world of Free software!! Here we can breathe free and pure air.
Harry Kaiser: Congrats buddy!
Jake Charles: Ahh, this is so much better!



Viren Mahajan:

Hello guys, I started using Ubuntu only yesterday and I am not able to use my Wi-Fi connection; it says "no device found, firmware missing". Can you suggest what to do?

Rajat Khandelwal: I think the Wi-Fi driver is not installed, or the Wi-Fi on your laptop does not support Ubuntu.

Viren Mahajan: In that case, what can be done?
Gokulakrishna Sudharsan: Your post gives us absolutely no info except that you have an Ubuntu installation. Post details about your laptop.
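A hedged pointer for threads like this one: the "firmware missing" message usually means the kernel driver loaded but could not find the binary firmware blob for the wireless chip. The commands below assume an Ubuntu system with a temporary wired (or USB-tethered) Internet connection, and the two Broadcom packages are only examples, so identify the chipset first and install the package that matches it:

# Identify the wireless chipset and look for firmware-related errors
lspci | grep -i -E "network|wireless"
dmesg | grep -i firmware

# For many Broadcom chips, one of these packages supplies the missing firmware/driver
sudo apt-get install firmware-b43-installer      # firmware for the b43 driver
sudo apt-get install bcmwl-kernel-source         # Broadcom STA (wl) driver

A reboot (or reloading the driver module) is usually needed after the firmware is installed.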

Rajat Khandelwal:

Hello, I am a BCA student and I am interested in open source stuff. Can anyone suggest a book or magazine to start with? Thanks.

Sathis Kumar: LFY magazine is a very good option. Else, you can refer to the web too.
Rajat Khandelwal: Do I need perfection in Java or PHP before starting an open source project, or should I go with my basics?

Mankala ShravanKumar:

I bought a Lenovo G580 laptop and installed Ubuntu 12.04; the wired connection is not working. Can someone help me?

Muziwakhe Mzk Nhlapo: Is your wireless working?

Dux Brandon: MZK guru
Mankala ShravanKumar: Wireless is working but the wired connection is not working.
Crestha Chucheel: Is it possible to install Android Eclair on the Micromax A60? I am having a problem with the Micromax A60 after the OS crashed.

Harry Kaiser: I think it comes pre-installed with Eclair!

Mani Kiran:

Hi friends, can somebody tell me how to configure my own yum client on my desktop running RHEL 5 x86_64? While configuring and using it, I am getting the following error:

[root@kgroup05 repodata]# yum list
Loaded plugins: refresh-packagekit, rhnplugin
This system is not registered with RHN. RHN support will be disabled.
ftp://192.168.1.5/opt/ftp/pub/Server/repodata/repodata/repomd.xml: [Errno 14] FTP Error 500 : ftp://192.168.1.5/opt/ftp/pub/Server/repodata/repodata/repomd.xml
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-debuginfo. Please verify its path and try again.

Manas Pradhan: To make your own local repository for yum, try the steps below. Mount your RHEL 5 DVD and copy the entire Server directory to any location, e.g.:

cp -rf /media/RHEL5.4/Server/* /var/ftp/pub

Now install createrepo:

rpm -ivh createrepo-0.9.8-4.el5.noarch.rpm

Now index the directory to which you copied the files from the DVD:

createrepo /var/ftp/pub

Now create a yum file with the extension ".repo" inside /etc/yum.repos.d and append the lines below:

vi /etc/yum.repos.d/manas.repo
[manas]
name=localyum
baseurl=file:///var/ftp/pub
gpgcheck=0
enabled=1

Hurray!!! Your yum is ready!

Mani Kiran: I've tried the very same method before, and the output is what I posted above.
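A closing observation on the error pasted above, offered as a sketch rather than a definitive fix: the failing URL ends in .../repodata/repodata/repomd.xml, which suggests that the baseurl of the repository yum is complaining about (named rhel-debuginfo in the error) points at the repodata/ directory itself. baseurl should point at the directory that contains repodata/, i.e. the one passed to createrepo. Using the address from the error message, the entry would look something like this (the file and section names are arbitrary):

# /etc/yum.repos.d/local-server.repo
[local-server]
name=Local RHEL 5 Server repository
# point at the directory that CONTAINS repodata/, not at repodata/ itself
baseurl=ftp://192.168.1.5/opt/ftp/pub/Server
gpgcheck=0
enabled=1

After fixing (or disabling) the offending repository, run 'yum clean all' and then 'yum list' again to retest.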

Jai Krishna:

How do I configure a DHCP server in Linux? Can anybody help me with this?

Jatin Khatri: Have a look at this: http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch08_:_Configuring_the_DHCP_Server
Jai Krishna: Thank you. It really helped me to find a sample file. Thanks a lot!
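For readers who want a concrete starting point alongside that HOWTO, here is a minimal sketch of /etc/dhcpd.conf for the ISC DHCP server as shipped with older Red Hat style systems (newer packages read /etc/dhcp/dhcpd.conf instead). All the addresses are placeholders to be replaced with your own LAN's values:

ddns-update-style none;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;          # pool of addresses handed out to clients
    option routers 192.168.1.1;                 # default gateway
    option domain-name-servers 192.168.1.1;     # DNS server(s) offered to clients
    default-lease-time 600;                     # seconds
    max-lease-time 7200;
}

Start the daemon with 'service dhcpd start' and watch /var/log/messages for DHCPDISCOVER/DHCPOFFER lines to confirm that leases are being handed out.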



OFFERS OF THE MONTH


Get the best real-world Android developer training anywhere! Attend

December 4-7, 2012

San Francisco Bay Area

Choose from more than 80 classes and workshops!

• Learn from the top Android experts, including speakers straight from !
• Attend sessions that cover app development, deployment, management, design and more
• Network and connect with hundreds of experienced developers and engineers like yourself

"AnDevCon is a fantastic conference! There is no better place to experience the latest and greatest technologies and techniques in the field of Android development. If you attend one conference this year, this one should be it!"

Register Early and SAVE BIG!

www.AnDevCon.com Follow us: twitter.com/AnDevCon

—Jay Dellinger, Senior Software Engineer, Manheim

A BZ Media Event AnDevCon™ is a trademark of BZ Media LLC. Android™ is a trademark of Google Inc. Google’s Android Robot is used under terms of the Creative Commons 3.0 Attribution License.




NEW PRODUCTS

Asus unveils its S Series of ultrabooks in India

Asus Technology has launched Intel’s third generation Core processor-based mainstream ultrabooks—the ‘S series’. It offers users an audio-visual experience with Asus’ SonicMaster Lite technology for a powerful audio experience, and a 39.6-cm (15.6-inch) HD LED backlight glare display. Alex Huang, managing director, System Business Group, Asus India, said, “All the ultrabooks are compatible with Linux-based operating systems.” Price: Asus S56CA-XX030R: ` 46,999, Asus S56CA-XX056R: ` 52,999 Address: Asus Technology Pvt Ltd, 4C, Gundecha Enclave, Kherani Road, Near Sakinaka Police Chowki, Andheri E, Mumbai Email: info_india@asus.com Ph: 022-67668800 Website: http://in.asus.com/

Zebronics introduces multimedia headphones Top Notch Infotronix has introduced the Zebronics Brio, a bass multimedia headphone with an inbuilt microphone. The 2.2 metre long cable has an attached remote control (volume control) for even more relaxed use. It has been designed to provide excellent audio quality both while listening to music and making calls. Price: ` 475 Address: Top Notch Infotronix (I) Pvt Ltd, 1st Floor, No. 6C, Valliammal Road, Vepery, Chennai 600007 Email: service.zone@zebronics.com Ph: 044-4000 0007 Website: http://www.zebronics.net


The new Galaxy Note 800 tablet from Samsung Samsung Electronics has rolled out a new tablet—the Galaxy Note 800. The device comes with the functionality and precision of a pen and paper, as it features the S Pen with a 25.6-cm (10.1-inch) large display. The tablet is based on a 1.4 GHz quad-core processor along with 2 GB RAM. Asim Warsi, vice president, Marketing, Mobile Business, Samsung Electronics, said, “The tablet offers some unique features. Its multiscreen feature is one of the biggest talking points about the device. The feature allows users to utilise two different applications side-by-side, enabling them to multi-task. Users can view Web pages or videos or launch other applications while writing or sketching ideas with the S Pen on the other half of the screen. So one doesn't need to toggle back and forth between apps.” The device comes with Adobe's Photoshop Touch preloaded. The creative application is optimised for Samsung's S Pen. In addition, the Galaxy Note 800 offers a mini-apps tray that can launch a selection of mini-apps on top of others. The apps include an alarm, S Note, a music player, e-mail, calculator and a world clock. The device also features the 'My Education' app, which has been developed to help school students take advantage of the digital environment. Price: ` 39,990 Address: Samsung India, 2nd, 3rd and 4th Floors, Tower C, Vipul Tech Square, Sector 43, Golf Course Road, Gurgaon 122002 Email: supportindia@samsung.com Ph: 0124-4881234 Website: www.samsung.com

HCL introduces a 3G Android tablet – the HCL ME Y2 HCL Infosystems Ltd has launched its latest 3G enabled tablet – the HCL ME Y2. This third generation tablet from HCL runs Google's Android 4.0. The company claims that the tablet is designed specially to suit the needs of Indian consumers. It offers 3G connectivity with its in-built SIM slot. Based on Android OS 4.0.3, the device is equipped with a 2 mega-pixel rear camera and a 0.3 mega-pixel front camera, which enables video chatting with an HD display. The 17.8-cm (7-inch) tablet sports a multi-touch capacitive screen with a 1024 x 600 pixel display. The tablet runs on a Cortex A9 1 GHz processor. It comes with 1 GB RAM and 8 GB internal memory, along with a memory card slot that can add another 32 GB. The HCL ME Y2 also has a mini USB, mini HDMI and a microSD card slot. It has a robust 4000 mAH battery to deliver long hours of high definition audio and video playback. The Y2 is enabled with Bluetooth and also provides for Wi-Fi connectivity. Commenting on the device, Gautam Advani, EVP and head, Mobility, HCL Infosystems Ltd, said, "The HCL ME Y2 is the third generation tablet from the company. The tablet promises to deliver an enhanced experience to customers through advanced connectivity and innovative features.” The HCL ME Y2 will be supported by an integrated back-up service—HCL Touch, a 24X7 one-touch service facility. This latest device also offers the Hungama Application at a special rate for its customers. The subscription for the app is available for free for the first three months. It can be downloaded to play unlimited music and videos as well as access ringtones. Price: `14,999 Address: HCL Infosystems Ltd, E-4, Sector 11, Noida Email: tech_support@kobian.com Ph: +(91) 120 2520977 Website: www.hclinfosystems.com



SMARTPHONES Micromax A87 Ninja 4

MicroMax A 25 OS:

Android 2.3

OS:

Android 2.3

Launch Date:

September 2012

Launch Date:

September 2012

MRP:

` 3,999

MRP:

` 5,990 ESP:

` 5,990 Specification:

NEW

4-inch capacitative LCD touchscreen, 1400 mAh battery, 1GHz processor, 2 MP camera, memory expandable up to 32GB Retailer/Website: www.saholic.com

Samsung Galaxy S Duos

ESP:

` 3,999 Specification: 2.8 inch capacitive touchscreen, 320X240 pixels screen resolution, 1GHz processor, 1280mAh battery, 1.3 MP rear camera, 256MB RAM, 512MB ROM, 120MB internal storage

Samsung Galaxy Note 800 OS:

Launch Date:

Launch Date:

Android 4.0

August 2012 ESP:

` 17,900 Specification:

NEW

` 38,500 Specification:

Android 2.3

Launch Date:

Launch Date:

September 2012

September 2012

MRP:

MRP:

` 4,999

` 6,990 ESP:

` 6,990 Specification:

NEW

Retailer/Website: www.snapdeal.com

Map My India Car Pad 5

ESP:

` 17,990

Specification:

NEW

3.5-inch capacitive touch display,480 x 320 pixel screen resolution, 1GHz processor, 1,400 mAh battery, 3MP rear camera, 512MB of built-in storage, expandable up to 32GB,3G, Wifi Retailer/Website: www.maniacstore.com

Intex Aqua 4.0 Launch Date:

August 2012 MRP:

` 5,490

MRP:

NEW

` 4,890

Android 2.3

August 2012 ` 19,990

ESP:

OS:

Launch Date:

` 39,990 ESP:

OS:

Android 2.3

Android 2.3

MRP:

` 17,900

OS:

OS:

August 2012

MRP:

Micromax A57 Superfone Ninja 3

7.1 cm (2.8") QVGA Display, 832Mhz processor, 1200 mAh battery, 2MP Camera, 4 GB Internal memory, expandable up to 32GB

Retailer/Website: www.saholic.com

OS:

Android 4.0

NEW

Samsung Galaxy Y Duos Lite

NEW

Specification:

10.2-cm (4-inch) WVGA display screen, 480 x 800 pixels screen resolution, 1GHz processor, 5MP camera, 512MB RAM, 4GB of internal storage, 1500 mAh battery.

10.1" inch WXGA LCD display, 1280x800 pixels screen resolution, 1.4GHz Exynos Quad-Core Processor , 5 MP rear camera, 7000 mAh battery, 16GB User memory + 2GB (RAM), WiFi,

Retailer/Website: www.snapdeal.com

Retailer/Website: www.snapdeal.com

Retailer/Website: www.snapdeal.com

ESP:

` 5,490 Specification:

NEW

3.5-inch display screen, 480 x 320 pixels screen resolution, 800MHz processor, 1400mAh battery, 512MB of RAM, 3 MP rear camera, WiFi

12.7-cm (5-inch) capacitive touchscreen display, 800 X 480 pixels screen resolution, 1 GHz processor, 384 MB RAM,3G, WiFi

Retailer/Website: www.maniacstore.com

Micromax A-84 Elite

Wicked Leak Wammy Note

MTS MTag 351

OS:

OS:

OS:

OS:

Launch Date:

Launch Date:

Launch Date:

Launch Date:

MRP:

MRP:

MRP:

MRP:

ESP:

ESP:

ESP:

ESP:

iball Andi 5C Android 4.0 August 2012

` 12,999 Specification:

NEW

Specification:

NEW

` 11,000 Specification:

NEW

12.7-cm (5-inch) touch screen, 1 GHz processor, 2,500 mAh battery, 8 MP camera, 512 MB of RAM, expandable up to 32 GB, 3G, WiFi

4" IPS capacitive multitouch display,800×480 pixels, and a 1 GHz processor, 5 MP camera, 1630 mAh battery, 3G, WiFi

12.7-cm (5-inch) IPS capacitive touch screen, 1GHz Cortex A9 Processor, 2300 mAh battery, 5 MP rear camera, 4GB internal storage, expandable up to 32 GB, 3G, WiFi

` 7,499

` 11,000

` 9,999 ` 9,999

August 2012

August 2012

August 2012

` 15,999

Android 2.3

Android 4.0

Android 2.3

Retailer/Website: www. snapdeal.com

Retailer/Website: www.wickedleak.com

` 7,499 Specification:

NEW

3.5 inch capacitive touchscreen, 320x480 pixels screen resolution, 800MHz processor, 1300mAh battery, 3 MP camera, 128MB internal memory, expandable up to 32 GB Retailer/Website: At your nearest MTS Store

Retailer/Website: www.flipkart.com

Micromax Superfone Pixel A90

MTS MTag 281

MTS MTag 352

OS:

Android 2.3

OS:

Android 2.3

Launch Date:

August 2012

Launch Date:

August 2012

MRP:

` 5,499

MRP:

` 6,499 ESP:

` 6,499 Specification:

NEW

ESP:

` 5,499 Specification:

NEW

3.5'' Capacitive Touch Screen, 320X480 pixels screen resolution, 800 MHz processor,1400 mAh battery, 3 MP camera, 128MB internal memory, expandable up to 32 GB, Wifi

7.11 Cm (2.8”) capacitive touchscreen, 240X320 pixels screen resolution,800 MHz processor, 1200 mAh battery, 3 MP camera, internal memory expandable up to 32 GB

Retailer/Website: At your nearest MTS Store

Retailer/Website: At your nearest MTS store

Karbonn A18

OS:

OS:

Android 4.0

Android 4.0

Launch Date:

Launch Date:

August 2012

August 2012

MRP:

MRP:

` 12,990

` 12,990 ESP:

` 12,990 Specification:

NEW

4.3-inch AMOLED multi-touchscreen display, 480x800 pixel resolution, 1GHz processor, 1600mAh battery, 8 MP rear camera, 512 MB of internal memory, expandable up to 32 GB, 3G, WiFi Retailer/Website: www.snapdeal.com

ESP:

` 9,849 Specification:

NEW

4.3 Inch WVGA touch screen, 480 x 800 pixels screen resolution, 1 GHz processor, 1500 mAh battery, 5 MP rear camera, internal memory 1GB, expandable up to 32 GB, 3G, Wifi Retailer/Website: www.infibeam.com


SMARTPHONES

Micromax A44 Superfone Punk
OS: Android 2.3 | Launch Date: August 2012 | MRP: ₹ 4,500 | ESP: ₹ 4,500
Specification: 3.14-inch full touchscreen, 800 MHz processor, 2 MP rear camera, 140 MB internal memory, up to 32 GB expandable memory, 1200 mAh battery
Retailer/Website: www.snapdeal.com

Huawei Ascend G300
OS: Android 2.3 | Launch Date: July 2012 | MRP: ₹ 13,490 | ESP: ₹ 11,698
Specification: 10.2-cm (4.0-inch) WVGA IPS screen, 480 x 800 pixels screen resolution, 1 GHz Cortex-A5 processor, Li-Ion 1500 mAh battery, 4 GB RAM, expandable up to 32 GB, 5 MP rear camera
Retailer/Website: www.ebay.in

LG Optimus L5
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 12,499 | ESP: ₹ 12,499
Specification: 4-inch touchscreen display, 320 x 480 pixels screen resolution, 800 MHz Qualcomm processor, 1500 mAh Li-Ion battery, 4 GB internal memory, 32 GB expandable memory, 5 MP rear camera, 3G, WiFi

LG Optimus 4X HD
OS: Android 2.3 | Launch Date: July 2012 | MRP: ₹ 34,990 | ESP: ₹ 34,000
Specification: 4.7-inch True HD IPS display touchscreen, 1280 x 720 pixels screen resolution, 1.5 GHz quad-core Tegra 3 processor, 2150 mAh battery, 8 MP rear camera, 16 GB internal memory, expandable up to 64 GB, 3G, WiFi
Retailer/Website: www.indiaplaza.com, www.themobilestore.in, www.buytheprice.com, www.naaptol.com

Huawei Ascend Y200

LG Optimus L3

OS:

OS:

Android 2.3

Android 2.3

Launch Date:

Launch Date:

July 2012

July 2012

MRP:

MRP:

` 8,190

` 8,000

ESP:

ESP:

` 7,699

` 7,949

Specification:

Specification:

TFT LCD 3.5 Inches touchscreen, 320×480 pixels screen resolution, 800 MHz processor, 1250 mAh battery, 512MB internal memory, expandable memory up to 32GB, 3G, WiFi

3.2 inches TFT capacitive touchscreen, 240 x 320 pixels screen resolution, 800 MHz processor, Li-Ion 1500 mAh, 1 GB RAM, expandable up to 32 GB, 3G

Retailer/Website:www.flipkart.com

Retailer/Website:www.flipkart.com

G'Five A79
OS: Android 2.3 | Launch Date: July 2012 | MRP: ₹ 6,999 | ESP: ₹ 6,699
Specification: 10.2-cm (4.0-inch) HVGA capacitive multi-touch screen, 320 x 480 pixels screen resolution, 5 MP camera, BCM21552 832 MHz processor, 1850 mAh battery, GPS/AGPS, G-Sensor, 3G, WiFi
Retailer/Website: www.buytheprice.com

Retailer/Website: www.buytheprice.com

G'Five A86
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 7,999 | ESP: ₹ 7,999
Specification: 10.2-cm (4.0-inch) WVGA capacitive multi-touch screen, 480 x 480 pixels resolution, 1 GHz processor, 2800 mAh battery, 8 MP rear camera, 0.3 MP front camera, 3G, WiFi
Retailer/Website: www.naaptol.com

G'Five G95

G'Five G3D

Airtyme Picasso DG50

OS:

OS:

OS:

OS:

Launch Date:

Launch Date:

Launch Date:

Launch Date:

MRP:

MRP:

MRP:

MRP:

ESP:

ESP:

ESP:

ESP:

` 9,499

` 11,990

` 6,999

Specification:

Specification:

Specification:

Specification:

10.9-cm (4.3-inch) WVGA capacitive multi-touch screen, 480X480 pixels screen resolution, MTK6575 1 GHz processor, 2800 mAh battery, 8 MP camera, ROM 4GB EMM + RAM 512 DDR2, 3G, WiFi

13.5-cm (5.3-inch) WVGA capacitive multi-touch screen, 480X800 pixels screen resolution, 1 GHz processor, 2800m Ah battery, 8 MP camera, ROM 4GB EMM + RAM 512 MB DDR2, 3G, WiFi

10.9-cm (4.3-inch) glass-free 3D multi-touch screen, 1GHz processor, 2800mAh battery, 8MP camera, ROM 4 GB EMM+RAM 512 MB DDR2, 3G, Wifi

3.5 inch capacitive touch screen, 320X480 pixels screen resolution, 1450 mAH battery, 5 MP rear camera, 512 MB RAM, extendable up to 32 GB, 3G

Retailer/Website: www. naaptol.com

Retailer/Website: www. infibeam.com, www.homeshop18.com

Retailer/Website: www.homeshop18

Retailer/Website: www.flipkart.com

Samsung Galaxy Ace Duos (GT- S6802)

HTC Desire C

OS:

OS:

Android 4.0

Android 4.0

Launch Date:

Launch Date:

June 2012

June 2012

MRP:

MRP:

` 14,299

` 14,299

ESP:

ESP:

G'Five I88+ Android 4.0 July 2012 ` 8,899 ` 8,899

Micromax Suferfone A80 Infinity OS:

Android 2.3 Launch Date:

July 2012 MRP:

` 8,490 ESP:

` 8,490 Specification:

Android 4.0 July 2012 ` 9,499

Samsung Galaxy SIII OS:

Android 4.0 Launch Date:

June 2012 MRP:

` 43,180 ESP:

` 37,990 Specification:

3.6-inch fulltouchscreen,800MHz processor, 5 MP rear camera, front camera 0.3 MP, 2,500 mAh LiON battery, expandable memory upto 32 GB

12.19 cm capacitive touchscreen, 720x1280 pixels screen resolution, 1.4GHz quad core processor, 2100mAh battery, 16GB RAM, external memory expandable upto 64GB, 8MP rear camera, 3G, WiFi, A-GPS.

www.Infibeam.com, www.flipkart.com

Retailer/Website: www.adexmart.com

Android 4.0 July 2012 ` 11,799

` 14,299

Android 2.3 July 2012 ` 6,999

` 14,299

Specification:

Specification:

8.9 cm HVGA display, 320x480 pixels screen resolution, 600MHz processor, 1230mAh battery, 512MB RAM, 25GB of Dropbox space, 5MP camera with LED flash, Beats Audio, GPRS, Bluetooth, WiFi, A-GPS.

8.9 cm HVGA display, 320x480 pixels screen resolution, 600MHz processor, 1230mAh battery, 512MB RAM, 25GB of Dropbox space, 5MP camera with LED flash, Beats Audio, GPRS, Bluetooth, WiFi, A-GPS.

Retailer/Website:www.buytheprice.com

Retailer/Website:www.buytheprice.com



Tablets Micromax Funbook Alpha

MTNL Lofty TZ300

OS:

Android 4.0

OS:

Launch Date:

September 2012

Launch Date:

MRP:

MRP:

` 6,499 ESP:

` 5,999 Specification:

7 Inch capacitive TFT multi-touch LCD screen, 800 x 480 pixels screen resolution, 1 GHz processor, 2800 mAh battery, internal memory up to 4 GB, expandable up to 32GB, 3G, WiFi Retailer/Website: www.infibeam.com

EduBridge EduTab tablet OS:

7”LCD Resistive Touch Screen, 800x 400 pixels screen resolution, 1.2 GHz Cortex A8 Processor,3000 mAh battery,0.3MP front camera, 4 GB internal memory, expandable up to 32GB, 3G, WiFi

NEW

17.8-cm (7-inch) Capacitive Touch Screen, 1 GHz Processor, 2800 mAH battery, Front camera, Expandable Storage memory up to 32 GB, 3G, Wifi Retailer/Website: www.snapdeal.com

INTEX i-Buddy
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 6,490 | ESP: ₹ 6,490
Specification: 7-inch capacitive touchscreen display, 800 x 480 pixels screen resolution, 1 GHz processor, front-facing camera for video chatting, 2350 mAh battery, 4 GB internal memory, expandable memory up to 32 GB, 3G, WiFi
Retailer/Website: www.snapdeal.com

HCL ME Y2
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 14,999 | ESP: ₹ 14,999
Specification: 7-inch IPS LCD capacitive multi-touch screen, 1024 x 600 pixels screen resolution, 1 GHz processor, 4000 mAh battery, 2 MP rear camera, 8 GB internal memory, expandable memory up to 32 GB, 3G, WiFi
Retailer/Website: www.snapdeal.com

August 2012

Launch Date: MRP:

` 12,700

ESP:

` 6,499 Specification: 7”LCD Resistive Touch Screen, 800x 400 pixels screen resolution, 1.2 GHz Cortex A8 Processor, 3000 mAh battery,0.3MP front camera, 4 GB internal memory, expandable up to 32GB, 3G, WiFi

NEW

Wishtel IRA Comet HD

` 5,899 ESP:

` 5,899 Specification:

Penta Tablet IS701C OS:

Android 4.0

MRP:

August 2012

ESP:

` 9,999 Specification:

OS:

Launch Date:

Specification: 17.78-cm (7-inch) TFT LCD capacitive multi-touch screen, 800X480 pixels screen resolution, 1GHz processor, 0.3 MP front camera, 3000 mAH battery, 4 GB internal memory, Expandable memory up to 32 GB, 3G, Wifi

NEW

Reliance 3G Tab OS:

July 2012

` 3,699

` 14,699 ESP:

` 14,699 Specification: 8-inch capacitive multi-touchscreen, 800 x 552 pixels screen resolution, 1 Ghz processor, 2 MP rear camera,0.3 MP front camera, WiFi Retailer/Website: www.snapdeal.com

Retailer/Website: www.naaptol.com

MRP: ESP:

MRP:

10.1” TFT LED Multi-touch Capacitive Touch Screen, 1.2 GHz processor, 5500 mAh battery,2 MP rear camera, 8GB internal memory, expandable upto 32GB, 3G, WiFi

Android 2.3

` 4,999

Retailer/Website: www.loginmobile.com

Android 4.0

Launch Date:

July 2012

NEW

Launch Date:

` 9,999

Retailer/Website: www.flipkart.com

Specification:

OS:

August 2012

7-inch capacitive multi-touchscreen display, 800x480 pixels screen resolution, 1.2GHz Cortex-A8 processor, 2,800 mAh battery, internal memory 4GB, expandable up to 32 GB, 3G, wiFi

` 12,700

BSNL Penta T-Pad WS704C

Android 4.0

NEW

ESP:

5 inch capacitive multi touchscreen, 1 GHz processor, 12 MP camera, 4 GB internal memory, expandable up to 32 GB, 3G, Wifi

Lava E-Tab Z7H

MRP:

` 8,500

MRP:

Retailer/Website: www.teracomstore.in

August 2012

MRP:

Android 4.0

Retailer/Website: www.teracomstore.in

Launch Date:

August 2012

Specification:

Specification:

Android 4.0

Launch Date:

` 8,500

` 3,999

OS:

Android 4.0

ESP:

ESP:

OS:

` 6,499

NEW

Mercury magiQ

Launch Date:

September 2012

September 2012

NEW

OS:

Android 4.0

Android 4.0 ` 3,999

MTNL Lofty TZ 200

Launch Date: MRP:

` 14,499 ESP:

` 12,699 Specification: 7inch capacitive multitouch touchscreen, 480 x 800 pixels resolution, 800 Mhz processor, 3 MP rear camera, 4 GB internal memory, Expandable memory up to 32 GB, 3G, Wifi

Samsung GALAXY Tab 2 510 P5100 OS:

Android 4.0 Launch Date:

July 2012 MRP:

` 32,990 ESP:

` 32,990 Specification: 10.1-inch capacitive multi-touchscreen, 1280×800 pixels screen resolution, 1 Ghz processor, 7000mAh battery, 3 MP rear camera, 0.3 MP front camera, 16 GB internal memory, expandable up to 32 GB, 3G, WiFi

Retailer/Website: www.snapdeal.com, www.ebay.in

Retailer/Website: www.infibeam.com

Retailer/Website: www.snapdeal.com

Micromax Funbook Pro
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 9,999 | ESP: ₹ 9,999
Specification: 23.6-cm (10.1-inch) display, 1024 x 600 pixels screen resolution, 1.2 GHz processor, 5600 mAh battery, VGA front camera, 8 GB of internal memory, 1 GB of RAM, 3G, WiFi
Retailer/Website: www.snapdeal.com

Zync Z-909
OS: Android 2.3 | Launch Date: July 2012 | MRP: ₹ 3,699 | ESP: ₹ 3,699
Specification: 17.8-cm (7-inch) resistive touchscreen, 800 x 480 pixels screen resolution, 1 GHz processor, 0.3 MP front-facing camera, 256 MB RAM, 4 GB internal storage, expandable up to 32 GB, 3G, WiFi
Retailer/Website: www.homeShop18.com, www.futurebazaar.com

Samsung Galaxy SIII 32GB
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 41,500 | ESP: ₹ 37,990
Specification: 12.2-cm (4.8-inch) HD Super AMOLED display, 1280 x 720 pixel resolution, 1.4 GHz Exynos 4 processor, 8 MP rear camera, 1.9 MP front camera, 3G, WiFi
Retailer/Website: www.flipkart.com


Tablets Mercury mTab Rio

ICS Karbonn Smart Tab 1

Wammy Ethos

Go Tech Funtab Fit

OS:

OS:

Android 4.0

OS:

OS:

Android 4.0

Launch Date:

Android 4.0

July 2012

Launch Date:

Launch Date:

July 2012

MRP:

July 2012

` 8,999

MRP:

MRP:

` 5,999

` 11,999

ESP:

ESP:

ESP:

Android 4.0 Launch Date:

July 2012 MRP:

` 6,999 ESP:

` 6,999

` 8,999

Specification:

Specification:

17.8 cm (7 inch) capacitive 5-point touch screen, 1.2 GHz X-Burst processor, 3700 mAh battery, 2 MP front camera, 1 GB internal memory, expandable up to 32 GB, 3G, Wifi

7 inch capactive touchscreen, 1.2 Ghz processor, 4 GB internal memory, expandable up to 32 GB, 3G, Wifi Retailer/Website: www.flipkart.com

` 5,999 Specification: 17.8 cm capacitive touchscreen, 800 x 480 pixels screen resolution, 1 Ghz processor, 0.3 MP front camera, 4GB internal memory, expandable up to 32 GB, 3G, Wifi Retailer/Website: www.snapdeal.com

Retailer/Website: www.flipkart.com

Zync Z-999 Plus
OS: Android 4.0 | Launch Date: July 2012 | MRP: ₹ 11,990 | ESP: ₹ 11,990
Specification: 7-inch capacitive multi-touch LED screen, 800 x 480 pixels resolution, 1.5 GHz processor, 2 MP rear camera, front VGA camera, 512 MB DDR3 RAM, 8 GB internal storage, expandable up to 32 GB, 3G, WiFi
Retailer/Website: www.snapdeal.com, www.naaptol.com

iBerry AUXUS AX03G
OS: Android 4.0 | Launch Date: June 2012 | MRP: ₹ 9,990 | ESP: ₹ 9,890
Specification: 7-inch WVGA LCD capacitive touchscreen, 800 x 400 pixels resolution, 2 MP rear camera, 1.3 MP front camera, 24 GB storage, expandable up to 32 GB, 3G, WiFi

iBerry Auxus AX01 OS:

Android 4.0

Specification: 7 inch captive touchscreen, 800x400 pixels resolution, 1 Arm Cortex A8 processor, 1 GB of DDR3 RAM, 0.3 MP front camera, 4GB storage, expandable upto 16GB, 3G, Wifi. Retailer/Website: ADdo Technology, Bangalore, Contact: 080-40916780

OS:

Android 4.0 Launch Date:

May 2012 MRP:

` 7,999 ESP:

` 7,290

Launch Date:

MRP:

MRP:

ESP:

ESP:

` 6,000

` 6,199

` 6,500 Specification: 7-inch LCD multi-touch display, 1.5GHz processor, 512MB RAM, 1.3 MP front camera, 4GB internal storage, expandable memory up to 32GB, WiFi, 3G. Retailer/Website: www.buytheprice.com

iBall Slide 3G-7307

OS:

OS:

Launch Date:

Launch Date:

MRP:

MRP:

ESP:

ESP:

` 6,685

` 15,490

Specification:

Specification:

8.1 cm Bright LCD display, 650 MHz processor, 1400mAh battery, 3.2MP camera, VGA front camera, 3G, WiFi.

17.8 cm WVGA display, 800x480 pixels screen resolution, 1GHz processor, 4400mAh battery, 8GB built-in memory, external memory expandable upto 32GB, 2MP front camera, WiFi, Bluetooth.

Android 2.3 June 2012 ` 6,685

Retailer/Website: www.flipkart.com

Android 2.3 June 2012 ` 16,499

Retailer/Website: www.adexmart.com

Netbooks Samsung N100

ASUS EeePC X101

OS:

MeeGo

OS:

Launch Date:

August 2011

Launch Date:

MRP:

` 12,290

MRP:

ESP:

ESP:

Specification:

` 11,840

17.8 cm capacitive 5 point multi touch screen, 1.2GHz processor, external memory expandable upto 32GB, 2MP front camera, 3D G-Sensor for Gaming Experience, 3G, WiFi.

Specification:

Retailer/Website: www.infibeam.com

Launch Date:

iBall Andi 3e

Retailer/Website: ebay India

Karbonn Smart Tab 1

OS:

July 2012

` 5,990

25.7 cm WSVGA anti-reflective LED,1024×600 pixel screen resolution,1.33GHz Intel ATOM processor, 1GB DDR3 memory, Intel GMA 3150 graphics, 250GB HDD, 3 cell (40 W) battery, 4-in-1 card reader, 1.03kg. Retailer/Website: Croma Store, Saket, New Delhi, +91 64643610

Retailer/Website: Technocrat Infotech Pvt. Ltd., M-17 , Hemkunt Chamber, 89, Nehru Place, New Delhi-19

OS:

MRP: ESP:

9.7 inch display LCD, 1.2GHZ multi-core processor, 2MP rear & VGA front camera, 1 GB RAM, 8 GB Internal memory and expandable upto 32 GB, 3G, WiFi.

Zen Ultratab A100

Android 4.0

` 5,990

Specification:

Wishtel IRA thing 2

Launch Date:

July 2012

` 11,999

MeeGo August 2011 ` 12,499 ` 12,000 Specification: 25.7 cm LED-backlit screen, Intel Atom processor N455 CPU, 1GB DDR3 RAM expandable upto 2GB, 220GB storage, Bluetooth 3.0, Wi-Fi 802.11 b/g/n, 17.6mm thick, 920g. Retailer/Website: Eurotech Infosys, Nehru Place, Delhi, 9873679321

Android 4.0 July 2012 ` 6,199 Specification: 7-inch capacitive touchscreen, 800×480 pixels, 1.2GHz processor, 512 MB RAM 1.3MP (front) camera, 4GB Storage, expandable up to 32GB. Retailer/Website: www.homeshop18.com

Samsung Galaxy Tab 2 310
OS: Android 4.0 | Launch Date: May 2012 | MRP: ₹ 23,250 | ESP: ₹ 19,300
Specification: 17.8-cm WSVGA TFT capacitive touchscreen, 1024 x 600 pixels screen resolution, 1 GHz dual-core processor, 4000 mAh battery, 1 GB RAM, external memory expandable up to 32 GB, 3 MP rear camera, 3G, WiFi, A-GPS
Retailer/Website: www.flipkart.com

Acer Aspire One Happy
OS: Android | Launch Date: March 2011 | MRP: ₹ 17,999 | ESP: ₹ 15,490
Specification: 25.7-cm WSVGA high-brightness display with a 16:9 aspect ratio, dual-core Intel Atom N455, 1 GB RAM, Intel Graphics Media Accelerator 3150, 320 GB internal hard disk, Bluetooth 3.0+HS support, Wi-Fi, built-in multi-in-one card reader
Retailer/Website: Vijay Sales, Mumbai, 022-24216010





Facebook wants employees to replace iPhones with Android devices!

Facebook has reportedly asked its employees to switch from using iPhones to Android phones as it wants to improve its own 'horri-bad' Android app. According to a Business Insider report, banning iPhone is an effort to keep Facebook’s Android app updated. The report says, “The Facebook management realises its Android app is sub-par – and believes that the only way employees will take fixing it seriously is if they have to deal with its issues day in and day out. It's a practice called 'dogfooding’, from the phrase 'eating your own dog food’.” The report is attributed to some ex-Facebookers and others familiar with the development. However, Facebook is tight-lipped about the matter.

NASA's nano-satellite is powered by Android phones

NASA has disclosed details about a project called PhoneSat, as part of which it built nano-satellites by using off-the-shelf consumer smartphones. The project was started at NASA's Ames Research Center at Moffett Field, California. Developed at a cost of $3500 each, the three prototype satellites have been built in the shape of a cube that measures approximately 10 cm (4 inch). PhoneSat 1.0 is NASA's first prototype smartphone satellite. It uses the Nexus One phone from HTC, which runs on the Android OS. The smartphone acts as the computing unit, and its camera is used for observing the earth while the sensors orient the satellite. The NASA website reports that, “NASA engineers kept the total cost of the components to build each of the three prototype satellites in the PhoneSat project to $3,500, by using only commercial off-the-shelf hardware and keeping the design and mission objectives to a minimum, for the first flight.” The post added: “NASA PhoneSat engineers also are changing the way missions are designed by rapidly prototyping and incorporating existing commercial technologies and hardware. This approach allows engineers to see what capabilities commercial technologies can provide, rather than trying to custom-design technology solutions to meet set requirements. Engineers can rapidly upgrade the entire satellite's capabilities and add new features for each future generation of PhoneSats.”

Linux 4.0 to arrive by 2016?

The numbers in the Linux version 2.6 series had stretched on to version 2.6.39. Then came Linux 3.0. Now for those wondering if the 3 series will also keep churning out huge complex numbers, here is some good news. According to a blog posting by Sean Michael Kerner, Linus Torvalds has declared his intention to jump the version number of the Linux kernel up to 4.0 when the second version number of the current branch gets close to reaching ‘the 30s’. At the current kernel development speed, Linux 3.29 is expected to be released in the autumn of 2016. Torvalds had shared his decision in a Q&A session at the Linux Foundation's 2012 North American LinuxCon conference at San Diego. The developer said that larger version numbers are harder to keep track of, and that he prefers to jump to the higher major version when the numbers get too unwieldy for his taste. This is what kernel developers did when they jumped to Linux 3.0 in July, last year. The current version of the kernel in development is Linux 3.6, which is expected to be released in the second half of September, H Online reported.



Raspberry Pi converts a keyboard into a computer

This tiny computing device is finding applications almost everywhere. The Raspberry Pi can be customised to fit in almost all spheres of computing, despite the fact that the device was built to provide an affordable yet functional computer for students to learn the basics of programming and software development. But there's obviously more to it than what meets the eye! Now, a Cherry G80-3000 keyboard has been converted into a Raspberry Pi computer. Preamp is behind this Raspberry Pi project where, except for the HDMI port, just about every plug was moved to the back of the keyboard with the help of an Ethernet jack, a USB hub, and RCA jack. However, audio is missing in this portable combination. You can try and convert your Raspberry Pi into an Internet radio, use it as a media centre and also to build your home's security system.

Mozilla releases Thunderbird 15 with live chat

The Mozilla Project has released version 15 of its open source Thunderbird e-mail client, and has added instant messaging support along with an updated user interface and security improvements. Previously, developers had added support for instant messaging in Thunderbird 13 and 14, but had decided to disable it by default, to improve it. In Thunderbird 15, the supported chat network includes Facebook Chat, Google Talk, IRC, Twitter and XMPP/Jabber. One of the unique features of Thunderbird 15 is its full support for the Do Not Track (DNT) header. The DNT privacy setting is a developing standard being used to instruct websites that the browser user doesn't want them to track their online behaviour. The new version also takes care of 12 security vulnerabilities. Of these, five are rated as critical by Mozilla and could be exploited by a remote attacker to execute arbitrary code on a victim's system, for instance. Coming to the looks, developers have added a new menu and toolbar design based on the Australis theme in Thunderbird 15 to make it look more like upcoming versions of Firefox. The Filelink feature, used for sending large attachments, has also been updated and it now supports Canonical's Ubuntu One (U1) cloud storage platform. Thunderbird 15 is available for download for Windows, Mac OS X and Linux users. Also, you can use the built-in update tool or wait for the automatic update notification. The source code and binaries for Thunderbird 15 have been released under the Mozilla Public Licence 2.0.



Microsoft brings the SkyDrive cloud app to Android

After updating Nokia Belle phones with Microsoft's cloud service, SkyDrive, the company is now eyeing other smartphones and it has released an app for SkyDrive on Android, considering the gigantic share of Google's mobile OS. Android users can store multimedia files on the cloud-based SkyDrive app and access them from virtually any device, while also sharing them on the go. Users can access SkyDrive content, including files shared with them, and will be able to view recently used documents, as well as choose multiple photos or videos to upload from the phone. SkyDrive also offers users the ability to share files and photos through email or a link in another app, and open SkyDrive files in other Android apps and manage, edit or delete folders. The SkyDrive application for Windows desktop and OS X recently got updated and will now feature a new, modern design for desktop and tablet browsers, along with instant search, a contextual toolbar, thumbnail multiselect, drag-and-drop organisation and content sorting. A Microsoft blog post made the following announcement, “As many of you know, Windows Phone users can have all the photos they take on their phones automatically sent to the SkyDrive camera roll folder. I have 1,411 photos in this folder, and it used to be a pain to get to the most recent photo. We now default to sorting by newest to oldest, which should make viewing your camera roll much better.”

Java Enterprise Edition 7 will not feature the cloud

Oracle engineer Linda DeMichiel has confirmed that Java Enterprise Edition (Java EE) version 7 will not have standardisation for Platform-as-a-Service (PaaS) and multi-tenancy support until Java EE 8 comes out, which is expected by the spring of 2015. Java EE 7 is already running late as it was previously expected to come out by the fourth quarter of this year. It has now been delayed to the spring of 2013 in order to permit the inclusion of new features, such as Web Sockets and JSONP (JSON with padding). Java EE 6 was launched way back in December 2009 and it brought in “a division between Web and full profiles. The Web profile version, a slimmed down version of the full profile, included only features typically used in Java Web applications. Thirteen application servers are currently compatible with Java EE 6,” says a report in The H Open. It is worth mentioning here that Java EE applications can run under Oracle, Red Hat, IBM and CloudBees cloud platforms. The reason for the non-inclusion of the cloud, according to Linda DeMichiel, is that Platform-as-a-Service (PaaS) and multi-tenancy support technologies are not yet mature enough for full standardisation. According to the report, “She believes that fully standardising cloud features could mean delaying Java EE 7 until spring 2014. A proposal has therefore now been submitted to the Java EE 7 Expert Group to delay standardisation of PaaS and multi-tenancy support until Java EE 8, which is scheduled for release in the spring of 2015.”

Test water quality with an Android app

While we use various purifiers to ensure the water we consume is as safe as possible, many of these solutions come at a high cost and with a lot of inconvenience. Well, John Feighery used to work for NASA, providing safe water to people in space. After the Columbia Space Shuttle accident in 2003 in which seven crew members died, he began focusing on water and sanitation issues for those on earth. In an exclusive interview, Feighery told AlertNet, "I'd been working on supplying clean water to three or four people in space, and meanwhile there are a billion here on earth that don't have it. The world that my kids are going to grow up in has this huge problem that I felt I could work on." Feighery realised that the reliance on heavy equipment, charting notes and mapping locations by hand, and transporting samples in incubators to a distant laboratory needed to be simplified and made less expensive. This led to the idea of using inexpensive testing equipment available online, along with mWater, an Android app that records the results of water quality tests and maps them. The app is available in the Google Play store and allows people to track water quality tests at any given water source over time, thus offering instant results. It lets users leave notes for other users about the appearance of the water, its smell and how it is flowing from the source, building up an archive of information over time.

RHCE / RHCVA / RHCSS Exam Centre

What is new in OpenSUSE Linux 12.2?

OpenSUSE Linux 12.2 finally made its debut in the open source world. Here is a rundown on what is new in this latest version:

Superfast speed: This distribution comes with the Linux 3.4 kernel. According to a blog by Jos Poortvliet, openSUSE community manager for SUSE Linux: “It includes a faster storage layer to prevent blocking during large transfers.” He added that KDE 4.8.4 makes the desktop more responsive. Meanwhile, glibc 2.15 boosts the performance of many functions, particularly on 64-bit systems, and systemd 44 enables faster booting, reports PC World.

Advanced infrastructure: It comes with a more advanced infrastructure, with the GRUB2 bootloader as the default. Poortvliet wrote in his blog post, “We’ve begun the process of revising and simplifying the UNIX filesystem hierarchy to improve compatibility across distributions, and during start-up and shutdown, Plymouth 0.8.6.1 provides flicker-free transitions and attractive animations.”

A sophisticated desktop: GNOME 3.4 adds smoother scrolling, a reworked System Settings app and an improved Contacts manager, Poortvliet noted. He also added that Xfce 4.10 offers an improved application finder, while the Dolphin file manager is “both prettier and faster.”

A range of apps: Included in openSUSE 12.2's software line-up are not just X.org 1.12, with its support for multi-touch input devices and multi-seat deployments, but also Mozilla Firefox, GIMP 2.8, LibreOffice 3.5, Krita 2.4 for painting and illustration, and Tomahawk Player for music, notes a PC World report.

Cooler apps: OpenSUSE 12.2 comes with a host of scientific tools providing maths applications, the Stellarium astronomical simulator and more. Poortvliet explained that “programmers will enjoy version 1.0.2 of Google’s Go language as well as the latest C++ language standards implemented in GCC 4.7.1 and Qt Creator 2.5.”



LynuxWorks demonstrates end-to-end multi-level secure thin client system

LynuxWorks Inc, a world leader in the secure virtualisation market, recently said that it is demonstrating a security solution at the Information Assurance Expo, which provides concurrent user access to three isolated thin client sessions running on a single SINA Multi-level Workstation. This collaborative high security product was first announced at the RSA conference in February this year, and is the first public demonstration of a full system, including the back-end infrastructure and configuration management components. This solution can be used by government agencies and Department of Defence (DoD) programmes that require the secure separation of multiple networks on a single workstation, or by any enterprise with highly sensitive information that needs isolation from malicious computing environments.

The SINA Multi-level Workstation (MLW) is a multi-domain thin client access solution running on a standard off-the-shelf laptop. It offers unprecedented security on a low-cost platform, providing access to multiple security domains from a single user workstation over a single network infrastructure. The SINA MLW takes full advantage of the security benefits offered by LynxSecure, which provides a foundation to host a minimal component-based architecture with formally verified security components. The SINA MLW features multiple, isolated, bare-metal cryptographic engines for each security domain to maintain the confidentiality, integrity and authenticity of information processed in each security domain. The SINA crypto engines utilise LynxSecure's platform resource control capabilities to mitigate some of the most advanced threats posed on today's multi-level solutions running on shared computing resources, such as crypto side-channel attacks and covert user data spill channels. The demonstration at the IA Expo is a debut for the back-end SINA network gateways. The SINA gateway is a single level cryptographic network gateway that connects the SINA MLW user sessions to clear-text security domain infrastructures. The SINA gateways support redundant link protocols to provide hardware failover protection and link teaming that combines Ethernet ports to increase link throughput. The SINA product line utilises an ingenious smart card management system, where all SINA configuration parameters and security credentials reside on smart cards instead of the hardware platforms. This provides a transparent hardware configuration model, allowing SINA products to be easily replaced and upgraded by simply swapping out the hardware components and reinserting the existing smart cards.

Disabled students at DU to be given tablets!

There is a tablet for everyone! The use of these multifunctional devices is picking up fast in the educational sector and, recently, many tablet makers have forayed into this segment with special applications designed for students. It is being reported that Delhi University (DU) will be giving out tablets to its disabled students. Every year, 3 per cent of the seats in DU are reserved for disabled students; the university will now be providing them with tablets to aid them in their studies. As per the TOI report, the tablets will be pre-installed with software that will help them with their subjects and lectures. For blind students, the tablet will be well equipped with audio books and lectures, and for deaf students the tablet will have speech-to-text translation incorporated. Vipin Tewari, deputy dean, Students' Welfare, said, “This is the first time that the varsity has taken such pro-active measures." About 1,600 students fall in the disabled students’ category in DU, and all of them are expected to be given the tablet by the end of this year. It has not been revealed which tablet maker DU will tie up with or whether the tablets will run Android or not.


Contest

Developers

Here's your monthly dose of coding puzzles from India's biggest online coding contest, now in print!

Solve this month's puzzle, and you stand the chance of being one of the three lucky people who could win a cash prize of Rs 1,000 each!

The September edition puzzle: There is a 50 cm long line of ants moving ahead. The last ant in the line had an urgent message to give to the queen ant, which is moving at first position. So while the line is moving, the last ant runs ahead, reaches the first ant (the queen) and passes on the message. And without stopping, it runs back to its original position. (Assume that no time is lost in giving the message.) During this time, the whole line has moved ahead by 50 cm. The question is: how much distance did the last ant cover in that time? Assume that it ran the whole distance at uniform speed.

The Solution: Let x be the distance travelled by the line of ants in the time the last ant takes to reach the queen ant. Let u be the speed of the ant line and v be the speed of the last ant. Let t1 be the time taken by the ant to reach the queen ant, and t2 the time taken by the ant to return to its original position in the line after delivering the message to the queen ant. Then, for the forward journey:

50 + x = v * t1 (for the last ant) ... (1)
x = u * t1 (for the ant line) ... (2)

Also, during the return journey:

50 - x = u * t2 (for the ant line) ... (3)
x = v * t2 (for the last ant) ... (4)

Dividing Equation 1 by Equation 2, you get:

(50 + x) / x = v/u ... (5)

Similarly, dividing Equation 4 by Equation 3, you get:

x / (50 - x) = v/u ... (6)

Comparing Equations 5 and 6 gives (50 + x)(50 - x) = x², that is, 50² - x² = x², so 2x² = 2500 and x = √1250 = 35.36. (As a quick check, Equation 5 then gives v/u = (50 + 35.36)/35.36 ≈ 2.41, and Equation 6 gives 35.36/(50 - 35.36) ≈ 2.41, so the two ratios agree.)

The total distance travelled by the last ant is therefore 50 + 2x = 50 + 2(35.36) = 120.72 cm.

And the winners this month are: • Shakti Rath • Mitanshu Bakshi • Kshitij Rastogi

Here's your CodeChef 'Puzzle of the Month'. These are the conditions the magic number satisfies:
1. If the magic number was a multiple of 2, then it was a number from 50 through 59.
2. If it was not a multiple of 3, then it was a number from 60 through 69.
3. If the magic number was not a multiple of 4, then it was a number from 70 through 79.
What was the magic number?

Looking for a little extra cash? Solve this month’s CodeChef Challenge puzzle and send in your answer to codechef@efyindia.com or lfy@codechef.com by October 15, 2012. Three lucky winners get to win some awesome prizes both in cash and other merchandise! You can also participate in the contest by visiting the online space for the article at https://www.facebook.com/LinuxForYou.

About CodeChef: CodeChef.com is India's first, non-commercial, online programming competition, featuring monthly contests in more than 35 different programming languages. Log on to CodeChef.com for the October Challenge that takes place from the 1st to the 11th, to win cash prizes of up to Rs 20,000. Also keep visiting CodeChef and participate in the multiple programming contests that take place on the website throughout the month.



For U & Me OSI Curtain Raiser

Celebrate the Spirit of FOSS with Open Source India 2012!

It's that time of the year when we come together to celebrate the spirit of freedom and openness—when we come together to talk technology, and how to make it 'free'. It's time, once again, for Open Source India, a.k.a. OSI, Asia's leading conference on open source!

Open Source India is the premier open source conference in Asia, targeted at nurturing and promoting the open source ecosystem in the sub-continent. Started as LinuxAsia in 2004, OSI has been at the forefront of bringing together the open source industry and the community over the last eight years. The 9th edition of OSI, which is to be held this year, aims to take this event a notch higher by focusing on the open source ecosystem in Asia, and more specifically, in India. OSI 2012 will comprise various events that will run in parallel on all three days of the show. The aim is to focus on different user segments within the open source arena, and ensure that there is something for everyone.

An overview of OSI 2011

OSI 2011 was recognised as an event that celebrated the power of open source. Enthusiasts and industry professionals hailed the event as an ideal platform for interaction and the exchange of knowledge related to Linux and open source. Over 3,000 people attended the event last year, across the three days, which was a testimony to the fact that FOSS had strengthened its roots within the country. The agenda was to bridge the gap between the industry and the community, and to promote the adoption and use of open source software among Indian businesses and developers. Over 70 talks were delivered by eminent national and international experts at OSI 2011. The much-talked about event was not only a platform to hear renowned OS luminaries share their thoughts, but also provided the attendees with an excellent opportunity to get hands-on training with many open source tools. As many as 12 workshops were conducted during OSI 2011, which were a great attraction amongst the attendees.

The road ahead

OSI 2012 promises to be as big! Android is the newest and most happening phenomenon in the open source world. Google's open source operating system is scaling new heights not only in the global market, but has also caught on in a big way in the Indian market. Android will be one of the biggest attractions at OSI this year. While the speakers will touch upon topics like 'Building Next Generation Applications for Android on Intel Architecture', 'Android Platform Customisation', 'Ruboto - Ruby on Android' and 'Mobile Application Testing', experts will also conduct workshops on 'App Development for Android'.

A special track on the kernel called 'Kernel Days' will be another attraction for developers. Topics like 'Debugging the Linux Kernel' and 'Applications using Lauterbach' will be covered by experts in this track. You can also acquire a fairly good knowledge of IT infrastructure, Web apps and the cloud. In fact, open source on the cloud will form a major part of the discussions at OSI. Talks on 'OpenStack Cloud Services', 'The Future of the Cloud', and 'How to Choose the Best Cloud Solution' will help CXOs decide on the kind of technology they should opt for, depending upon their needs.

OSI 2012 holds a lot for those in academia and for students as well. Special tracks will be organised for them, informing them of the importance of certifications and of open source in the educational sector. Sharing his insights on OSI 2012, Ramesh Chopra, vice chairman, EFY Enterprises Pvt Ltd, said, "OSI 2012 is likely to be bigger and better as compared to the previous editions. We want it to offer something for everyone, including software developers, IT managers and heads, CXOs, policy makers, systems administrators, project managers, delivery experts, as well as those in academia, the government and the open source community."

Why you should attend OSI 2012
1. To get an update on the latest in IT management
2. To learn how to develop world-class applications
3. To participate in hands-on workshops
4. To evaluate open source solutions for your business needs
5. To interact and network with the leading figures in the industry
6. To find solutions for problems related to technology

By: Diksha P Gupta The author is assistant editor at EFY. When she is not exercising her journalistic skills, she spends time in travelling, reading fiction and biographies.




For U & Me Overview

GNOME Extensions: Spicing Up the Desktop Experience

GNOME's latest desktop avatar hasn't really excited most users. The GNOME team has, therefore, put together a number of extensions to make the desktop experience more user friendly. We take a look at some of the best extensions for GNOME 3.

GNOME 3 debuted with much hoopla, a new UI, as well as a complete overhaul of the toolkit and codebase. It is aimed at bridging the gap between the 'medieval' PC industry and the growing tablet market with a multi-pronged approach. However, responses have been lukewarm, making it a tough job to entice users to adopt the new interface. Besides, the lack of many pre-requisite features makes for an unsatisfactory user experience. Another major let-down was the deprecated APIs, which made many applications, widgets and fancy add-ons obsolete. To tackle these issues, the GNOME team presented us with GNOME extensions—small add-ons that let users get an experience similar (if not better) to competing desktops, and even its older avatar. So let's jump in for a quick look at these extensions and evaluate their performance compared to the bells and whistles offered by the competition.

Compatibility with GNOME 3.x forks: Ever since the GNOME 3.x release, there's been a huge uproar in the distro community about the revamped desktop. With missing features and a performance hit, most distros are either not including GNOME 3.x, or forking the release and re-crafting it. Cinnamon (of Linux Mint fame) is one of the most popular forks of GNOME Shell, and tries to bridge the gap with the ever-loved GNOME 2.x while adding many new enhancements to the shell. The main disadvantage of using forks like Cinnamon or even Ubuntu Unity is that they prevent you from using the extensions from the GNOME extensions site. However, if you are lucky, you can still find the extension in your package manager or a third-party PPA. Most popular extensions are available and should be good to go even if you use unsupported versions.

Cinnamon extensions: The Mint developers have gone one step ahead by not only forking the usual GNOME Shell, but by providing users with arrays of extensions that will augment your user experience. To install a Cinnamon extension, simply download it from https://extensions.gnome.org/ and copy it to ~/.local/share/cinnamon/extensions.

Forks aside, the major problem with GNOME extensions is that to install most of them, the system must be running the latest iteration of GNOME, else the GNOME extensions website will simply not let you install the extension. This is a serious let-down and the developers should come up with a solution. Even a new point release will bar you from installing the extensions.

Figure 1: The GNOME extensions website
Figure 2: The Linux Mint Cinnamon tweak tool

Installing extensions: Since GNOME Shell and Cinnamon are not at all compatible, their extensions will not work without certain changes being made. If you are using Linux Mint, you can switch between GNOME Shell and Cinnamon with gnome-shell --replace and cinnamon --replace, respectively. Make sure you kill the associated process and shell before switching, unless you want to leak system memory.

The best way to install extensions in GNOME is to use gnome-tweak-tool, a.k.a. Advanced Settings, a one-stop-shop for editing GNOME Shell properties and installing or changing themes, shell extensions and various other eye-candy. Make sure you have the gnome-shell-extension package installed; else you will not be able to install extensions using gnome-tweak-tool.

Figure 3: The GNOME Tweak Tool Shell Extensions screen

Manual installation: There are many ways to install an extension manually. If you are using the GNOME extensions website, then just click the On/Off slider; installation will begin automatically. If you have cloned the source code for the extension, you can simply copy the extension to /usr/share/gnome-shell/extensions for a system-wide (for all users) installation, or to ~/.local/share/gnome-shell/extensions for a single user. Make sure to restart the shell after this; press ALT+F2, type r and hit Enter. GNOME Shell will restart, and all installed extensions should be active.
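As a rough illustration, a manual installation of an extension fetched from Git usually boils down to something like the following. The repository URL and the UUID-style folder name below are made-up placeholders; what matters is that the target folder is named after the UUID declared in the extension's metadata.json:

git clone https://github.com/someuser/example-extension.git
cp -r example-extension ~/.local/share/gnome-shell/extensions/example@someuser.github.com

Then press ALT+F2, type r and hit Enter to restart the shell; if the new entry shows up in gnome-tweak-tool's Shell Extensions screen, the copy landed in the right place.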

Must-have GNOME extensions

If you head to the GNOME extensions website, you will find a not-so-user-friendly interface, and not many options to find the best of the bunch. Most extensions are either too basic or too fragile to be used in a stable desktop. So I have selected some of the best for you.

Note: It's possible that the extension will not work, courtesy the broken version control at the GNOME end. If you are using a somewhat older revision, you may have to update it first.

Weather: For those who love to know about the weather, there's a cool extension that shows the latest updates in a beautiful graphical pattern, along with basic pressure, humidity and wind speed data, as well as forecasts for two days. This extension is not available from the GNOME extensions website; you can grab and compile source code from Git, or use the PPA for Ubuntu-based distros. To get the Git source, run git clone git://github.com/simon04/gnome-shell-extension-weather.git. For the PPA, use the following commands:

sudo add-apt-repository ppa:webupd8team/gnome3
sudo apt-get update
sudo apt-get install gnome-shell-extensions-weather

Figure 4: Weather extension in action

The GPaste clipboard: If you're missing trusty old clipboard tools such as Klipper and GNOME clipboard, do not worry. GPaste fills the gap. It works pretty well, and lets you track your old clips and even back them up. Unfortunately, this nifty tool hasn't made it to the extensions website either. Git users can use the following command:

git clone git://github.com/Keruspe/GPaste.git

…while for the PPA, add the ppa:webupd8team/gnome3 repository just like you did for the weather extension earlier, update it, and then run sudo apt-get install gnome-shell-extensions-gpaste.

Figure 5: The GPaste clipboard showing recent clips

The media player extension: Ever since Ubuntu integrated the music player in the GNOME panel, everyone seems to be jumping to do it too. The media player extension lets you enjoy your music collection right from the GNOME panel (you need a compatible GTK-based media player installed for this to work). The extension is available from the GNOME extensions website (https://extensions.gnome.org/extension/55/media-player-indicator/). For Git: git clone git://github.com/eonpatapon/gnome-shell-extensions-mediaplayer.git, and from the webupd8team/gnome3 PPA: sudo apt-get install gnome-shell-extensions-mediaplayer.

Figure 6: Tweaked version of GNOME Media player extension

Docks: Deprecated APIs and a new Clutter desktop took away all the bells and whistles GNOME 2 used to offer, including docks. However, there is a gimmicky dock for dock fans. The let-down: the dock mimics the favourite apps panel in the activity pane, and doesn't allow any of its own effects and options. You can't even add custom entries—and in case you want to, you have to first add the application to the dock in the activities pane, and then it will automatically display shortcuts or icons on the desktop. Get it from the GNOME extensions website (https://extensions.gnome.org/extension/17/dock/), or from the webupd8team/gnome3 PPA with sudo apt-get install gnome-shell-extensions-dock.

Figure 7: Dock extension showing docked icons

Mounters: These are handy extensions that let you see mounted devices – you can unmount them or open a file manager for one. Extension page: https://extensions.gnome.org/extension/7/removable-drive-menu/.

Figure 8: Mounter extension showing mounted devices

Sensors - CPU temperature: Paranoid about your CPU's temperature? An extension that uses the fickle lm_sensors does the job, if your CPU/APU is supported. Extension page: https://extensions.gnome.org/extension/82/cpu-temperatureindicator/.

Advanced volume mixer: Tired of the flimsy PulseAudio, yet want more control over your music playback and pipelines? The advanced volume mixer will be just right for you. It sits on top of the existing volume rocker, adding much-needed abilities such as advanced device control and the ability to pause the sound output from apps. Get it at https://extensions.gnome.org/extension/212/advanced-volume-mixer/.

Removing accessibility: The only downside of GNOME extensions is that the majority of them sit on the GNOME panel. Adding too many will create unnecessary clutter there. You can remove unused extensions to save space. Accessibility is one that is hardly used, and so you can get rid of it to reclaim space for other valuable extensions. The Remove Accessibility extension does just that. Extension page: https://extensions.gnome.org/extension/112/remove-accesibility/.

To-do/Note taking: This is a no-nonsense note-taking application. You can add notes by entering the contents, and they can be removed by clicking on them. No fancy editing options, back-ups, dates, time or alarm. It's a simple, clutter-free extension. If you need a simple notes extension, then this is for you. Extension page: https://extensions.gnome.org/extension/162/todo-list/.

Advanced Settings Center: This extension embeds an Advanced Settings option under the user menu, letting you access many system settings with just a few clicks—a very handy time-saver. Extension page: https://extensions.gnome.org/extension/341/settingscenter/.



Figure 9: Extension showing current cpu temps

That sums up my must-have GNOME extensions. There are many other useful extensions like the workspace switcher, GMail notifier and more. I’ll leave the rest for you to discover. The GNOME extension website may not be the best, but it offers a boatload of extensions that might be handy for you.

Figure 10: To-do extension—list of tasks

Creating your own extensions

You can create your own extensions, though the dearth of documentation makes the job a little tough, and the changes in settings and APIs can even sour the experience. Anyway, that shouldn't stop us from trying. You need some knowledge of JavaScript and GNOME APIs. To begin, in a terminal run the following command:

gnome-shell-extension-tool --create-extension

You'll be asked a few questions like the name of the extension, its description and a unique ID (UUID). After these steps, the tool will create the files needed for your extension, under ~/.local/share/gnome-shell/extensions/<your_extension>. For example, I created an 'LFY' extension with the UUID LFY@shashwat-desktop, and a folder was created with this name in place of your_extension in the above path. In the directory, you will notice three files by default (a small illustrative sketch of the first two follows below):
• extension.js: The main JavaScript extension file that holds all the code for an extension to work.
• metadata.json: A JavaScript Object Notation file that holds the metadata that you entered while creating the extension.
• stylesheet.css: Provides the look and feel for your extension. All additional styling is in this file.

Figure 11: Advanced Settings in the user menu
Figure 12: Test extension created with the GNOME extension tool

Once you are done, simply restart GNOME Shell and an icon will appear at the top; clicking it may display several messages. If you want to develop an extension, there aren't many live examples, though GNOME has compiled a decent getting-started page: https://live.gnome.org/GnomeShell/Extensions. You may also want to read up on GObject, which GNOME developers use to pass data back and forth.
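To give a feel for what goes into these files, here is a minimal, illustrative sketch of an extension that simply puts a clickable label on the top panel. It is written against the GNOME Shell 3.4-era JavaScript API; the 'LFY' name is just the example UUID from above, and the exact panel calls can differ between Shell versions, so treat it as a starting point rather than a drop-in file.

extension.js:

const Main = imports.ui.main;
const St = imports.gi.St;

let button;
let label;

// init() runs once, when the Shell loads the extension; build the widgets here
function init() {
    button = new St.Bin({ style_class: 'panel-button',
                          reactive: true,
                          can_focus: true,
                          track_hover: true });
    label = new St.Label({ text: 'LFY' });
    button.set_child(label);
    // change the label text when the button is clicked (a stand-in for real work)
    button.connect('button-press-event', function () {
        label.set_text('Hello from LFY!');
    });
}

// enable() is called when the extension is switched on; add the button to the panel
function enable() {
    Main.panel._rightBox.insert_child_at_index(button, 0);
}

// disable() is called when the extension is switched off; undo what enable() did
function disable() {
    Main.panel._rightBox.remove_child(button);
}

…and the matching metadata.json would look roughly like this (the shell-version list simply names the Shell releases the extension claims to support):

{
  "uuid": "LFY@shashwat-desktop",
  "name": "LFY",
  "description": "A test extension that shows a label on the panel",
  "shell-version": ["3.4"]
}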

Removing or managing extensions

You can disable extensions using gnome-tweak-tool or the GNOME extensions website (visit https://extensions.gnome.org/local/). You can use the extension preferences located under Settings Center (provided you have installed the Advanced Settings Center mentioned earlier) or simply launch it by typing gnome-shell-extension-prefs. You can remove extensions either by removing their folder from ~/.local/share/gnome-shell/extensions/<extension> or simply clicking the Remove button on the GNOME extensions site.

Some closing words...

The GNOME desktop provides a stable experience, but the missing features are a setback for both developers and mainstream users. GNOME has come up with extensions to try and remedy the problem, but real extension development has not yet caught up to that extent. Still, things are not too bad—the ever-growing development community is churning out extensions fast, and hopefully more useful extensions will soon find their way to us.

Developing extensions may be painful at times, given the ever-changing APIs and missing documentation. The biggest hurdle is the panel, which limits the number of extensions you can install and use at any point in time. This may make developers wary of creating new extensions for GNOME's new desktop.

Overall, the GNOME extension experience is a very mixed bag, which will hurt the desktop. The ever-changing API has let developers down. The install experience wasn't great either, but thankfully many developers have provided multiple sources for installing their extensions—so even if the GNOME portal fails to meet your needs, you can always resort to other options. Finally, if you use GNOME, you may want to try some of the listed (and other) extensions to enhance your desktop experience, though the currently available extensions may not really rock your world just yet.

Figure 13: The GNOME extension control panel

References

• Home page: https://extensions.gnome.org/
• Developer resources: https://live.gnome.org/GnomeShell/Extensions

By: Shashwat Pant The author is a FOSS/hardware enthusiast who likes to review software and tweak his hardware for optimum performance. He is interested in Python and Qt programming and is fond of benchmarking the latest FOSS distros.


Interview For U & Me

For developers who really question if Microsoft is serious about open source, my answer would be 'absolutely' In a freewheeling interview, Microsoft’s Mandar Naik gets candid on a range of topics like releasing the source code for Windows, open source technology, and much more...

Mandar Naik, director, Platform Strategy at Microsoft

Gone are the days when Microsoft used to be an enemy of open source technology. Times have changed and so has Microsoft, to the extent that the company has started its own open source subsidiary. And now, Microsoft has become one of the major contributors to Linux, and is working to build an ecosystem where proprietary and open source technologies go hand-in-hand. Mandar Naik, Director—Platform Strategy at Microsoft and the man behind such efforts in India, reveals how the company has changed over the years.

Q

Mandar, what is your role at Microsoft?

My role is probably different from most folks at Microsoft, who sell Microsoft products and compete with open source product vendors. I do compete with open source 'products' as well, but my primary focus is to drive strong partnerships with open source because one of the core things that has happened over the past six to seven years at Microsoft is a vast change in how we look at things. If you go back 10 years, you could have

said that Microsoft competed with open source. It was all driven by what customers wanted. Ten years ago, there was this huge debate on what was good—open source or proprietary software. Today, if you look at the mature markets, the conversation has really changed tremendously.

Q

In what way?

One of the fundamental things that customers have realised is that it's not about whether the source is open or closed. At the end of the day, it is about getting a job done. So, for CIOs, the objective is to get the business solution going. If there is an open source solution to it, that's great; and if there is a proprietary solution, that's fine too. Most IT environments nowadays are mixed source. You can no longer say that a company uses only open source solutions or proprietary solutions. Customers want compatibility of both the open as well as proprietary, irrespective of their operating systems. So customers have driven that change, based on how



their needs have changed. That is the reason why Microsoft has changed. And it’s not just Microsoft that has changed. You can see a lot of change happening across the board even in the open source community. In recent times, we have seen tremendous opportunities where we can partner with open source and vice versa. There are a huge number of developers who use technologies varying from Dot Net and Java to PHP. So, what Microsoft has done is focus on the fact that we don't want to tell PHP developers to give away their expertise in PHP and start developing on Dot Net. We want them to continue with PHP and leverage their expertise in PHP. What we ensure now is to work with the PHP community so that PHP runs as well on our platform (Windows) as it does on any other. So that's the kind of partnership that we are looking at driving. Microsoft is putting in humongous efforts within the company towards driving strategic partnerships. Today, we are committed to ensuring that every product we come up with inter-operates well with all other products. That's the biggest change that has happened. If you look back two years, we had started seeing these changes coming into our main product. So, if you are a Java developer or a PHP developer, and want to develop on Windows, don't worry about it; we will provide you the tools required for developing on Windows and these tools are open source. That was the good starting point in India. That's where the partnership with the open source community began. We are learning to work with the open source community and the open source community is learning to work with us. If you look at mature markets like the US, there has been tremendous progress. We are in a mixed source environment today.

Q

So, will it be correct to say that Microsoft's attitude has changed over six to seven years because customers have demanded it? Look at it in this way—we’ve got to be very realistic on why we exist as an industry. It’s not about Microsoft or open souce companies. Individual developers exist for the purpose of serving the customers’ need. So, for us, it is very natural to always do what customers need. That is reflected in every product that we have come up with and, obviously, there is a whole lot of innovation there. There are times when we have shown customers the right path, when we have seen them struggling in a certain area and have come up with solutions to help them. There have also been times when customers have come to us and said, “These are the changing business needs that I have, so now you go out and figure how you are going to help.” So, I think that this change in Microsoft has occurred a long time ago. Six to seven years ago is when it has started to show, but the change actually began a long time ago and it was really driven by our customers.

Q

Steve Balmer called Linux a cancer back in 2000, and the Linux Foundation recently released a report, which


shows that Microsoft is among the top 20 contributors to the kernel. Do you feel that Microsoft is still struggling with the image makeover? What Steve Balmer said over 10 years ago is not denied, but it also reflected the reality of the software market at that time. It is just like, 10 years ago we couldn't have imagined that enterprises would be willing to give the up control of their onpremise software and put it on the cloud, because the options did not exist back then. It is also natural for the open source community and developers to ask this question. Today, I personally think there are no proprietary software developers or open source developers. A developer is a developer. Look, if we were really not keen and committed to work with the open source community or industry, there would have been no point for all these investments. We always knew that this journey to becoming more open as a company was not going to be a short one and hence we were always in it for the long haul. It has not been merely an image makeover, but a strategic shift in how we see ourselves working with others. If



Interview For U & Me about the cloud, for example, we compete with Amazon. It’s a proprietary platform. Similarly, we would compete with other open source cloud platforms. We see this competing as distinct from partnering with the open source ecosystem. The open source ecosystem is separate from the open source vendors. Red Hat is our competitor, and we compete with it.

I would urge those developers to go to microsoft.com/openness to take a feel of what we are actually up to. There are open source enthusiasts who are as actively participating in projects as Microsoft is. That's what people in India need to see and that's my job. I think what people in India need to see is that Microsoft is not competing with the open source ecosystem or the ideology. I think this has been proven enough by the fact that more than 80 per cent of all open source projects that exist in the world run well on Windows as well. If we were not serious about really partnering with the open source ecosystem, we would have never built something that makes open source work on our system. For developers who really question if Microsoft is serious about this, my answer would be 'absolutely'. So, I would urge those developers to go to www.microsoft.com/ openness to get a feel of what we are actually up to. There are open source enthusiasts who are as actively participating in projects as Microsoft is. That's what people in India need to see and that's my job. Given that India has the second largest developer base in the world, there need to be a whole lot of positive messages going out from our side. And we really want people to converse with us as much as possible. Microsoft is investing in open source technologies like NODE.JS to make them mainstream on our platform. With the new IaaS offerings on Windows Azure, we now support various flavours of Linux on our popular cloud platform. So, it’s up to Microsoft and the leaders of the open source community to start seeing this as an opportunity. If you develop an app that is not compatible with Windows, then you are simply cutting yourself out from almost 90 per cent of an opportunity. We don't want developers to do that. If people choose to be Java or PHP developers, we are not stopping them. We are, in fact, here to provide an extended opportunity to such developers to take their solutions to market. That's why it is important that the open source community and Microsoft work closely. Competition will remain, just like Red Hat competes with Canonical and SUSE. We are just a vendor when it comes to competing. So the ‘compete space’ is very different. What unfortunately tends to happen is that people bring that compete situation into the ideological conversations. For Microsoft, a developer is a developer. For the developer,

there are just two things that are important —to really drive the technology forward with passion and enthusiasm, and two, to be financially successful, and known for their good work. I don't see a reason why developers should be bound to a platform—be it open source or proprietary. They should have access to everything. As interest in interoperability and open source evolved to the cloud and devices, we brought in Microsoft Open Technologies Inc as a wholly owned subsidiary. We believe it is the best way to serve developers, customers and partners better. Microsoft is committed to openness. Microsoft is becoming more open in the way that we work with and collaborate with others in the industry, in how we listen to customers, and in our approach to the cloud. While this subsidiary has been formed in the US, the existing openness initiatives and those going forward through this subsidiary will definitely benefit customers, developers and the open source ecosystem globally, including those in India.

Q

So, if I were to read between the lines, is Microsoft giving open source developers a chance to commercialise their work? It’s not just about commercialisation. It is about innovation and ensuring that they are successful, both commercially and in terms of popularity. Why would a developer or a company invest a huge amount of time and resources on something that would not give them a great growth opportunity. That's what the fundamentals of any business are. Our view on the whole thing is that the developers should choose. If they think proprietary technology can give them growth, it’s their choice. If they think open source can do it for them, it’s great. What is even better is when they use opportunities involving both technologies.Let’s be honest, there is nothing called 'free' in the world today. If users get something free and are stuck with it, and then have to pay somebody to help them use or maintain it, it is not ‘free’ ultimately. So at the end of the day, there is nothing that is totally free. Ultimately, it’s about the customers and developers. And beyond all that, it is about innovation.

Q

You just mentioned the interoperability factor. Windows 8 is being heavily criticised for not being friendly with open source because of the secure-boot feature. It is being said that users will not be able to boot Linux operating systems on Windows 8 PCs. How do you defend that? Secure boot attempts to protect the PC against boot loader attacks, which can compromise a system before the OS even loads. Secure boot is actually a feature of Unified Extensible Firmware Interface (UEFI), a new type of boot environment that has gradually been replacing the standard BIOS process. Windows 8 taps into UEFI's secure boot to ensure that the pre-OS environment is safe and secure. The secure-boot feature is an innovation that has come through to ensure that there is security. I think, today, security is a major concern, OCTOber 2012 | 41



whether it is a Windows PC or a non-Windows PC. What we are leveraging in Windows 8 is just features and innovations that are coming up at the hardware level. I don't see a problem there because I know for a fact that there will be coexistence eventually, because this is a hardware innovation. It is just like how every product has to adapt to something or the other. There will always be features that may seem like blocking initially, but technology doesn't remain stationary and adaptation is required of everybody—whether it is Windows, Linux or different distributions of Linux. Infact we’ve already seen SUSE and other distributions adapt to the secure boot feature in UEFI.

Q

Microsoft has joined hands with the state government in Tamil Nadu. As soon as Microsoft signed the agreement, the Tamil Nadu government changed its stance of using 'only' open source in the free laptops it gave students as soon as it inked a pact with the company. The students had to switch over to Windows, abandoning BOSS Linux, which generated ire from the community. The government finally settled for providing the dual-boot option rather than just BOSS or only Windows. Don't you think such efforts are a threat for the community? See, I cannot comment on the political aspect of things. But I see it as a positive, because when the student graduates and is looking for employment, the student would have both options. Let's be honest—if they had gone for just Linux, it would have been a loss of opportunity for Microsoft, just like when we win something, it is a loss of opportunity for Linux. Every vendor, whether it is for Windows or for Linux, is going to capitalise on the growth and opportunities that India offers. So, at the end of the day, there will be times when X will win and times when Y will win. Regarding the particular case you mentioned, I think it is a perfect situation because as a student, I will get access to both. There is no cost issue involved here. Going back to the fundamental issue, I would re-assert that there is nothing that is free (monetarily). Whether a government adopts a proprietary platform or an open source platform, there is a cost involved. That might be the upfront purchase cost or the cost of implementation, maintenance, etc. But there will be a cost. There are enough research studies today that mention companies saying that you cannot look at acquisition as the only cost. When you think about software and technology, you have to think about the longer term and the total cost of ownership, which includes not just the acquisition cost but things like implementation, deployment, maintenance, downtime, the security impact and all of that. So I think your point that proprietary software is more expensive and open source is less expensive, or free, is not true. Otherwise, there wouldn't be a commercial distribution of any open source software. Take any open source software company. They will have two versions of their software—one is the open source version and the other is the commercial version. Have you seen any company


providing implementation, consulting and deployment support for free? Not really. Everybody has to make money. For Microsoft, we openly say that we are a proprietary software company in a majority of the things that we do and here is how much our software will cost. Open source software is slightly different. Those companies say that if people use their commercial software, they need to buy a subscription and that comes at a cost. I think what has happened in Tamil Nadu is great actually. Today, the student gets both Windows as well as Linux. What could be better for a student! Ultimately I think the Tamil Nadu government would have taken a decision based on what it thought was right and what was in the best interests of the students. I think it’s time that we moved ahead of the debate about which one is better—open source or proprietary technologies. I think the need of the hour is that both proprietary and open source technologies join hands. So, if students get laptops with both software, they get exposure and choice and we should let them have it. This will help in increasing their employability. The focus should be on building capabilities as opposed to discussions on technology.


Q

What about Codeplex? It has over 28,000 projects. Are there projects that open source developers can make money from? Absolutely! It would be very difficult for me to say who is making the money, or how much. I would like to use the example of Sugar CRM. The way they are making money is phenomenal. So I think it’s not about Source Forge or CodePlex, it’s about the developer having an idea, and willing to invest that amount of time and effort in turning that idea into a viable business proposition. By: Diksha P Gupta The author is assistant editor at EFY. When she is not exercising her journalistic skills, she spends time in travelling, reading fictions and biographies.


How To

Admin

Using OpenBSD for the Server Infrastructure

Some regular readers of this magazine have probably used Linux as a desktop, while others may have even done programming on it. This article gives readers an overview of using OpenBSD, a UNIX operating system. Read on to find out how this amazing OS can be put to good use.

You can download this OS for free from www.openbsd.org and install it on a laptop or desktop, using the CD ISO image. Installing OpenBSD is not hard, and may involve some trial and error. OpenBSD is a very clean OS, with a lot of great ideas and discipline having gone into its design and coding conventions. It is slim, neat, clean and elegant in every way. It lacks support for a few of the fancy and frivolous things that some end users may think necessary, but appears to meet the goals of most scientists and engineers. OpenBSD, though a general-purpose OS as UNIX has always been, is an out-and-out developer's OS or engineer's friend.

The differences between OpenBSD and Linux

Linux comprises the kernel (begun by Linus Torvalds) plus the GNU userland, Bash shell, GNU compiler suite, applications and glibc. The projects involved are different, maintained separately and follow different timelines and release schedules, making it hard for end users to do a combination of these themselves, so various distributions and vendors like Debian, Red Hat, Slackware or Ubuntu blend these elements to make end-user-viable operating systems. Also, obtaining the source code for every part of the OS is difficult, since it is not a single project. In contrast, for OpenBSD, the entire OS, the base system, the

kernel, the userland, the packages and the glue code -- all go into the same project. They are developed and maintained by the same team. A particular release of OpenBSD (say, 5.1) is unique, and depending on the hardware architecture you can get the exact set of binaries and source. (This is also true of NetBSD and FreeBSD.)

My tryst with OpenBSD

I have been using OpenBSD for my personal and commercial activities since 2003. I am a cryptographer, and I had to develop the IPsec kernel crypto code for my former employer's router device. I looked at the IPsec implementation in FreeSWAN, the KAME project in FreeBSD, and also looked at NetBSD and OpenBSD. I finally settled on OpenBSD, and ended up working in the kernel C code. That is how I started my journey—and I have never looked back since. For this article, I will only cover IT infrastructure usage, using OpenBSD as a server and desktop OS, and as a developer platform. I will not be too concerned with kernel or C coding and development (other than once in a while). By and large, I will cover real-world personal and business use of this platform. I have created my own USB installer, and I have my own USB release of OpenBSD, which I maintain in eight separate projects. You can access it at http://liveusb-openbsd.sf.net.



Package management

Linux has distribution-specific package management tools and formats for installing additional packages from the Internet or from a CD—like Red Hat RPM, while Debian and Ubuntu use DEB, and so on. The package management system in OpenBSD uses what is known as the pkg_add toolkit. Like everything else in OpenBSD, this is tightly integrated with the OS itself, and all the tools necessary for installing, maintaining, upgrading and removing packages are installed as part of the OS install. The package management system is written in Perl, which is preinstalled for OpenBSD. You can install packages from an install CD (which needs to be purchased) or off SSH, FTP or HTTP mirrors. It’s simple to install a package. For example, pkg_add socat (as the super-user) will install the package socat. Certain packages come with ‘flavours’, which determine whether they are built with Python or Lua support for X. For example, if you wish to install the multimedia application mplayer, then you could try pkg_add -i mplayer. The extra ‘-i’ switch is important—it turns the installation into an interactive session, which will prompt you to choose a specific flavour of mplayer. I normally install a certain minimum set of packages after a base install, which takes about 10 minutes, by issuing the following command: # pkg_add -i mplayer vim socat colorls qemu windowmaker pidgin firefox

Your choices may differ. In fact, if you end up using my LiveUSB project images, they come pre-built with a certain configuration, packages, etc. Now I guess you are fairly familiar with OpenBSD and are ready to explore its potential to address basic and slightly advanced needs like running your own mail server, NAS backup system, and a very simple Web server. Doing this obviously requires domain knowledge and experience, which you can slowly acquire—or if you already have it, you can now do things in a new way using OpenBSD. First examine what problem you are trying to solve and then go about the implementation.
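Incidentally, the same pkg_* family also covers routine maintenance. The following are standard OpenBSD tools, shown here purely as an illustration, with socat as the example package:

# pkg_info
# pkg_info socat
# pkg_delete socat

The first lists every installed package, the second shows the details of one of them, and the third removes a package you no longer need.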

Mail server

A mail server sends and receives e-mail from various domains with multiple users, catering to the needs of local users and other e-mail servers. My company is in the business of creating OpenBSD-based products for email solutions, so I have personal experience of developing and maintaining a mail server for more than four years. If you are exposed to systems administration in your day job, then you probably already know that Internet email is very different from the simple local mail service that runs without any contact with the outside world. However, there is no need to worry. OpenBSD makes some things easy for us; while other things have to be learnt and only when you learn about them can you enjoy the benefits of the power of OpenBSD. So you’re now faced with the choice of which MTA

(Mail Transfer Agent) to use. There are several popular options available, and though OpenBSD does not yet have its own implementation, it will very soon—Gilles Chehade and friends are hard at work on one called OpenSMTPD. I settled on Wietse Venema’s Postfix MTA over alternatives like Sendmail, Qmail and Exim. Also, certain sites use Microsoft Exchange, which so far has never given me any pain. It is very odd—Windows machines often get virus-infected, sending out spam that can get your company’s mail server’s public IP blacklisted as a spam source. Other people on the Internet may abuse your mail server for purveying mail to unsuspecting third-party users. Yet, Microsoft Exchange, as an MTA, has not given me any trouble so far. First, you obviously have to install Postfix, which is as simple as pkg_add -i postfix. It will throw up about eight choices, and you can select what you like. No big deal. It is also very important to know that a mail server doesn’t just send and receive mail using SMTP; it also has ancillary functions like distributing mail using IMAP and/or POP3, and acting as a webmail server. All this is part of the bargain. You will end up using the very brilliant Dovecot open source package for IMAP and POP3, which can also interact nicely with MS Outlook and Roundcube, a PHP webmail package. One of my customers tells me that using Roundcube is similar to using Outlook. It is that great and user-friendly. So to install, issue a simple… pkg_add roundcube dovecot and you’re ready.

Roundcube

Easy going so far, right? Now, configuring Dovecot and Postfix is easy, but Roundcube presents certain difficulties, which we will look into. The first thing to do is identify the database Roundcube should use for storing mail—I normally use SQLite. Initialise a database with a script like what follows:

# sqlite -INIT SQL/sqlite.initial.sql sqlite.db
sqlite> .exit
# chmod o+rw sqlite.db

Then you have to configure Roundcube (edit db.inc.php) with the path to the initialised database file. You also have to configure the timezone, the domain, the default IMAP and SMTP server, the address-book (if using LDAP), enable the preview pane and so on, in the configuration file main.inc.php. Following this, the kind souls who developed Roundcube have given an installer that tests the installation in many ways. Before you get to that, however, you have to enable PHP in Apache, and also add a section with directives for allowing the .htaccess magic within the webmail directory. Another very important thing to remember is that Roundcube will only run in a non-chrooted Apache server. Be very careful about that, since by default OpenBSD runs Apache in a chroot environment, jailed within /var/ www. Once all this is done, and the Roundcube installer is moved out of the way, you can access and use webmail. But you have to do some configuration tweaks to get there, which we tackle later.


Postfix

Now, let's spend some time on the basics of Postfix configuration, though this is itself a fairly vast topic requiring a lot of knowledge about Internet mail, RFC, etc, which obviously cannot be explored in a single article. In short, Postfix uses two files, /etc/postfix/main.cf and /etc/postfix/master.cf, for configuring its e-mail service. The commonly edited file is main.cf. The default installation of Postfix on OpenBSD will run out-of-the-box, and you can send and receive mail. In case you wish to enable the submission port TCP 587 for sending mail, you can do so by uncommenting (remove the leading hash) the following line in /etc/postfix/master.cf:

#submission inet n    -    -    -    -    smtpd

After you modify the configuration, to activate it, run a postfix reload command. You still have to do some more work in /etc/postfix/main.cf to set the local network for relaying, selecting the relay domains and so on. Postfix has built-in controls to guard against becoming an open relay that can be abused by spammers on the Internet, requiring you to set the mynetworks parameter to specify from which networks it will accept mail to relay (for example, mynetworks = 127.0.0.1, 192.168.0.0/16). You also need to set the mydomain, relay_domains and a few other minimal parameters for an excellent standards-compliant full-blown Internet mail server. So the complete configuration looks somewhat like what's shown below:

mynetworks = 127.0.0.1, 192.168.0.0/16
myhostname = lfy.com
relay_domains = $mydestination, lfy.com
home_mailbox = Maildir/

You can make the most of these things if you already know and handle mail in your day job. You can opt for a 10/8 or 172.16/12 network style in the mynetworks parameter, if you have a lot of hosts on your LAN. Always remember that local users can send mail to any destination domain, but outside users can send mails only to those domains specified in the relay_domains parameter; be very careful about that. If you test for an open relay configuration, always test from an Internet location; Postfix should reply with a 554 relay access denied error when you try relaying mail using your mail server, but 'as an outsider (from a network other than the local network)'. I emphasise this since this has tripped me up in the past, and I have suffered many hours of agony when spammers hit us. Anyway, only experience can teach you how to use OpenBSD and Postfix to your advantage—no article can do that for you. Let us now move on to the other topics this article intends to cover.

Web servers

To enable the built-in OpenBSD Apache (there's no need to install any package), one usually just issues the following line in /etc/rc.conf.local:

httpd_flags=

…before rebooting (though do not do so yet). That would run a Web server chroot-ed to /var/www. You can modify files under /var/www/htdocs, which is the document root of the Web server. However, for Roundcube, you need to run a non-chroot Web server, so use the modified flags line given below:

httpd_flags="-u"

Now your Web server is insecure, so be careful about exposing it to the Net.

NAS

Next, you may wish to serve files and sometimes also support Windows file-sharing (what is usually meant by NAS), plus also back up and restore in some cases. For running an NFS-mountable server, just modify /etc/exports as shown below:

/home -mapall=root -network=192.168.1 -mask=255.255.255.0

Then, you need the following lines in /etc/rc.conf.local:

portmap=YES
nfsd_flags=""

After a reboot, run showmount -e to see if your machine can serve files via NFS. There are excellent backup and restore utilities that can back up the entire file system. Read the man pages of dump(8) and restore(8). The typical usage for dump is as shown below:

# dump af file.dump /dev/rwd0a

For restore, use the following command:

# restore rf file.dump

OctOber 2012 | 45


For U & Me

Overview

LINUX For You Will Now Be Open Source For You

Before making a formal announcement about changing the name of the magazine, we checked with the industry and the community as to what they felt about the new title. The responses were overwhelming. Most industry experts and community members welcomed this change, which they felt was for the better, and would broaden the horizons of the magazine. Here are a few responses from the industry to the name of LINUX For You being changed to Open Source For You, with effect from its October 2012 issue:

Mandar Naik, director, Platform Strategy, Microsoft

I welcome this change in the name from LINUX For You to Open Source For You as it represents the increasing popularity of the open source development model on multiple platforms, both on-premise and on the cloud. It is important to note that the open source model has been very popular on platforms like Windows Server on premise and Windows Azure in the cloud. At Microsoft, our increased commitment to working with open source has sparked tremendous momentum, and contributed to the rapid growth of open source software on Windows with the number of open source apps that run on Windows growing 400 per cent to more than 350,000— with 23 of the top 25 OSS projects running on Windows. Codeplex, Microsoft’s open source project community, has grown to more than 28,000 open source projects and more than 300,000 registered users. In Windows Azure today, the open source community has a very strong cloud platform that offers seamless support for open source development using technologies like Node.js and improved Linux support. With the advent of the cloud and the changing landscape of information technology, I see the new and evolved Open Source For You expanding its horizon to provide even more impactful content on mixed source environments, both on premise and in the cloud. I wish Open Source For You the very best and look forward to the exciting content in the upcoming issues.

Rajiv Sodhi, managing director, Go Daddy, India

I think this title is much more apt for the community. I think open source, in general, is a broader term than just Linux. I think it will be well received by the community.

46 | october 2012

Mayank Prasad, senior software engineer, Oracle

The new name looks okay to me. I don't think any change in the title matters as long as the content is the same and good enough for me to pick up. Even if the name of the magazine changes to Open Source For You while it continues to offer a similar kind of content, I will continue to pick it up from the stands.

Vinod Panicker, senior product architect, Wipro

I am a regular reader of the magazine and I don't think the change will make any difference to me. I guess the change will broaden the scope of the magazine, so it will no longer be just Linux-centric. It may change from being a one-point magazine for Linux to becoming more broad-based, covering all open source technologies, like applications that are on the open source framework. So the content will not be restricted to the Linux kernel but will cover open source technologies, across the board. It will not be a niche magazine any more, and will connect to more people. Those who want to read about open source technology will see a wider area being covered in the magazine.

Yogesh Girikumar, open source evangelist

This is a welcome change. I think it is a nice move. Something that I would like to regularly see in the magazine is content for newbies. That's a major reader segment, which is ignored by magazines and journals in this domain. It will definitely increase the scope of the magazine. You can now add a lot of content that is not directly related to Linux and which is useful to readers.

Syed Anwaarullah, associate developer, Convergys

I think changing the name of the magazine to Open Source For You is better because it is more generic and will promote the magazine to a broader audience. After the name change, the magazine can cover a wide range of topics like Android, which is open source yet not directly linked to Linux. Android developers like me will be glad if you present more content on this growing section of open source technology.

Prajod Vettiyattil, lead architect, Wipro

I am a regular reader of the magazine and have actually wanted to see this change happen for quite some time, because 'LINUX For You' did not reflect the kind of articles published in recent issues. Initially, the magazine was only about Linux, but now it is about a lot more. The changed name is more representative of the content you are publishing. It is also good for the target audience. The new title has a much wider scope as compared to the previous title.

Arun Tomar, open source evangelist

I think this will be a nice change as it will give readers a lot of technologies to mull over, ranging from the desktop and servers to mobiles, etc.

Joel Divekar, general manager, Information Systems with People Interactive

Open Source For You is an initiative that gives a fresh and broader perspective to your organisation, helping it to spread the word on open source. This also helps explain that open source is not specific to any operating system or distribution.

And here’s what our Facebook community says: Anant Shrivastava:

Reminds me of the RMS interviews you published in your initial editions when he pointed out some issues with the magazine’s name and its impact. I, for one, support this change as this is what this magazine is about. It’s not only about Linux, but the whole open source ecosystem.

Pradeep Prakhar:

Yes, this name is more representative of the content we expect in this magazine— which leads open source enthusiasts towards overall growth. I am okay with the new name of the magazine.

Rakesh Mallick:

This is the correct name for the magazine.


Mohammed Shameem:

Good move. Rather than focusing on one project, i.e., Linux, the magazine can focus on all aspects of open source and hence the name change is apt.

Raghava Karthik:

Change the name to anything! We just need the magazine’s content to remain as it is (of course, some new topics are welcome)!

Harshad Joshi:

Congratulations for expanding the focus of the magazine from just Linux to all open source projects. I hope to see a lot of content on FreeBSD, OpenSolaris (SmartOS) and other related technologies like the cloud. All the best. And do make the magazine thicker.


october 2012 | 47


For U & Me Overview

Web-based

Platforms for Localisation

Previous articles in this series have explored how to localise the application user interface using desktop software. This article covers localising using Web-based platforms, which are easier and more convenient, as contributors do not have to worry about file formats, translation memories, version management systems, etc. These are expressly designed as Translation Management Systems, allowing multiple team members to collaborate on a project.

The features of Web-based platforms include display of localisation status and statistics of all the files of a project, support for translation memories and workflow management. Glossaries are also supported in some of the tools. Let us look at Launchpad.net, Pootle for LibreOffice, and Translatewiki.net as three examples.

Launchpad.net

This is one of the well-known Web-based localisation platforms. On visiting the site, you will learn that it supports

Figure 1: Using Launchpad.net to localise WUBI

48 | october 2012

334 languages and has 64,938 translators in 43 translation groups. It was started by Canonical in 2005, and released as open source in 2009. Its translation tool is called Rosetta. It is much more than a localisation platform, as it supports code hosting with version control, release management, bug tracking, mailing lists and wikis. The translation feature provides a summary view of the status of localisation, and features suggestions from the vast translation memory. It can be used for localising operating systems and independent projects. If a package uses Launchpad.net as the upstream (the root place for managing the project), users can just do the localisation. If the upstream is different, then, after completing the localisation, users have to download what they have localised and submit it for updating the upstream, separately.

Figure 2: LibreOffice localisation using Pootle


Overview For U & Me

Figure 3: Localisation of MediaWiki using Translatewiki.net

Figure 5: A page in Telugu Wikipedia with some strings that are not localised

Figure 4: Message localisation overlay in Translatewiki.net

Users have to create a launchpad.net account to contribute to localisation. It is advisable for users to become members of the localisation teams to collaborate actively with other members. All the localisation suggestions submitted by users are reviewed by the translation coordinator. The localisation pages offer different options to filter the messages for improved productivity. Figure 1 shows the Launchpad screenshot for the localisation of the Windows Ubuntu Installer (WUBI) into Telugu. You can see the different menus available on a translation page. The messages are presented as one set, even though they may be present in different source files.

Pootle

LibreOffice uses Pootle (PO-based Online Translation / Localisation Engine tool) for Web-based translation. Pootle is developed by Translate.org.za, a South Africa-based non-profit organisation focused on the localisation of open source software. The Web-based tool leverages the Translate Toolkit software from the same organisation, and complements a standalone localisation tool called Virtaal. Many open source projects are hosted on the Locamotion website (http://pootle.locamotion.org). To localise a project, an account is needed on the server. After logging in, languages and projects of interest need to be configured. If required, the leaders of language projects need to be contacted to get the necessary permissions.

Figure 6: Displaying message_ids corresponding to the screen in Figure 5

Figure 7: Translating the specific message of MediaWiki in Translatewiki.net

Figure 2 shows a sample screenshot of the Telugu localisation for LibreOffice, hosted on an independent instance of Pootle by the Document Foundation. The glossary suggestions, drawn from the terminology files of the project on the server, are given on the left. The existing translation is checked for localisation defects, and the status is shown on the right side. This feature is unique to Pootle. Localisation text can be updated making use of the glossary suggestions. There is a provision to submit the update for review, as well as to mark a localised message as 'fuzzy' and add appropriate comments for review by others. Messages are organised as per the directory and file structure of the source.

Continued on Page no 55 october 2012 | 49


Exploring Software Anil Seth

Guest Column

ReviewBoard

Here's a development tool that puts in place a system for single or multiple reviews of the code developed. It even offers the opportunity for self-reviews. Read on…

When I look back, it isn't that we were unaware of some of the recommended development processes, and neither did we believe they were not useful. It is just that the development infrastructure was so poor that we could not effectively implement them. It boiled down to too much effort for very small returns. These days, it is a shame if developers do not use some of these development tools. Institutionalising some of them makes it easier, as each developer does not need to think about which ones to use. For example, these days, the default repository has become Git. For a common, centralised development repository, in contrast to a distributed environment, Subversion is a perfectly good option. However, Git can work in a centralised environment as well, and is obviously an excellent tool; so, why make an effort to select between alternatives? Repositories can tie in with automated build systems, which will run the affected test cases and raise alerts in case an update causes regression errors. The build automation still misses issues like potential problems, security concerns or that there may be a better way to implement the code. How do we create an environment so that at least one person looks at the code?

ReviewBoard

It is great to see that the use of ReviewBoard is increasing, including in a number of open source projects (http://www.reviewboard.org/users/). The original workflow of ReviewBoard was pre-commit—i.e., the code is reviewed before committing it to the repository. However, it is possible to use a post-commit workflow, where the code is submitted to the repository and then reviewed. The other very nice capability of ReviewBoard is that it can work with a fairly large number of repositories, including Git and Subversion. Most people will find the pre-commit workflow to be the easier option. There is no ambiguity about the state of code in a repository. However, programmers cannot work on files that they have modified and are pending review. This may block them from working. In the post-commit workflow, there is the additional effort of keeping track of code in the repository that is still pending review. For repositories like Git, it is fairly easy to create a branch and merge it after the review. You may want to look at how KDE does it, at http://techbase.kde.org/Development/Review_Board. As a simple illustration, consider a centralised repository like Subversion. In addition to the normal development environment, a programmer will need to install RBTools, which contains the command post-review. In a simple pre-commit scenario, a programmer will:
• Check out the code.
• Make changes.
• Use the post-review command to submit the patch to be reviewed.
• Sign into the Web interface of ReviewBoard, associate a reviewer or a group of reviewers with the code patch, and change the state from Draft to Publish.
• Modify code as per the reviewer's comments, and update the patch for review.
• Once the code is reviewed and approved, commit the changes to the repository.
• Finally, close the review request.
To a reviewer, the Web interface will show the entries needing review. If a group of people are to review the code, the review request will be shown to each member of the group. Each reviewer will continue to see the item that needs to be reviewed even if someone in the group has already reviewed it, so multiple people can continue to review and offer their comments. However, if a review request needs to be reviewed by any one person from the group, the 'Number of Reviews' field in the Web interface may be used to ignore requests that have already been reviewed but not yet been closed.
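To make that flow concrete, here is a rough sketch of what the commands might look like against a Subversion repository. The repository URL, file names and review request number are invented, and the post-review flags vary between RBTools releases, so check post-review --help on your installation:

# .reviewboardrc in the checkout tells RBTools where the server is
# (REVIEWBOARD_URL is an RBTools setting; the URL is a placeholder)
#   REVIEWBOARD_URL = "https://reviews.example.com/"

svn checkout https://svn.example.com/project/trunk project
cd project
# ... edit files ...

# Create a review request from the uncommitted changes
post-review --summary="Fix race in cache invalidation" --target-people=alice,bob

# After addressing reviewer comments, update the same request
# (42 is the review request id shown in the Web interface)
post-review -r 42

# Once approved, commit and close the request from the Web UI
svn commit -m "Fix race in cache invalidation (reviewed in request 42)"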

Administering ReviewBoard

ReviewBoard is versatile, so you will need to set it up for your needs. The basic requirements will be:
• Adding users. Authentication can easily be done using LDAP, instead of a separate password for the application.
• Creating permissions for different types of users, e.g., submitters only, submitters and reviewers, reviewers with some administrative privileges, etc.
• Adding a repository.
• Optionally, creating default reviewers for various files.
ReviewBoard comes with pretty good documentation for both users and administrators. You can easily set it up and use it, even if you are a small group. It may not be a bad idea to use ReviewBoard even if you are the only developer. It gives you a chance to examine the changes you have made, and to ask yourself, 'Why did I do that?'.

By: Anil Seth The author is currently a visiting faculty member at IIT-Ropar, prior to which he was a professor at Padre Conceicao College of Engineering (PCCE) in Goa. He has managed IT and imaging solutions for Phil Corporation (Goa) and worked for Tata Burroughs/TIL. You can find him online at http://sethanil.com/ and reach him via email at anil@sethanil.com.


Overview

For U & Me

Linux at Work

Open source software applications for the office that allow inter-operation and conversion, for those interested in using Linux at work.

Microsoft platforms (XP or one of the more recent releases) are ubiquitous in the workplace. Most organisations make it their Standard Operating Environment (SOE). Generally, the applications installed are also from Microsoft. Much to the dismay of diehard Microsoft fans, some users have products from Apple, too, on their desktop. Typically, Apple provides the platform from which applications are launched, while they often also use Microsoft Office products like Word and Excel. This article is not directed towards either of these types of users—unless they are considering a conversion. It is often claimed that Linux cannot survive in the office environment because there are no compatible (or sufficiently compatible) applications. I beg to differ. What I am about to describe may not suit every class of user in every office, but

I have found that I’m doing fine with Linux on my desktop at work, and no one (so far) is complaining. So, what do I use? Before I answer that question, I’ll provide a brief overview of what sorts of applications are necessary.

Applications in a Microsoft environment

I think these fall into two classes: applications that allow you to interoperate with your co-workers, and applications that only affect you. In the first category, for instance, is the ability to create or update a document. In the second, you run the application for yourself. Let me be more concrete. If someone sends me a Word document and expects me to make changes and send it back, the result of my efforts must be a Word document which my co-worker can read and modify. The modifications I make

OCTOBER 2012 | 51


For U & Me

Overview

should not gratuitously deform the document. On the other hand, which browser I use does not affect anyone else. In the following sections I will discuss the Linux applications I use that correspond to Word, Excel, Outlook and Lync. I’ll also discuss browsers.


The desktop


You need to understand a little about what my desktop runs to make sense of the sections that follow. I’m not recommending my choices, just providing context. For most readers, Ubuntu might provide a more familiar experience. I always find that the Microsoft environment messes with my head. There is nothing I want less, nothing that would put me off more, than a Linux option which offered the Microsoft look and feel. But that’s just me. I’m not trying to convert anyone. Currently, my machine at work runs CentOS 5.7. It also runs Fedora 16 under VMware Player (with Fedora as a virtual machine). CentOS gives me a conservative and stable platform for most of my work, but I also have access to reasonably leading-edge software under Fedora. To me, it feels like I’m running two computers, with some of my sessions in CentOS and others in Fedora. It is unlikely that any of my readers use the windowing system I do, or have even heard of it: OLVWM (OPEN LOOK Virtual Window Manager). It has a look and feel that I have grown comfortable with over 10 years, and I see no point in changing. Fortunately, my choice of windowing system does not prevent you from running your preference.

Office products

For a near equivalent to Word or Excel, I use LibreOffice, a descendant of OpenOffice. "How near?" I hear you ask. Well, I have had no complaints. I don't know how well LibreOffice handles Excel macros, but for the sort of spreadsheets I encounter, LibreOffice's Calc module is excellent. For Word documents, LibreOffice's Writer suits me, but I am more interested in content than presentation. Importantly, LibreOffice lets you open Word and Excel documents, and to view them the way they were intended to look, or pretty close to it. If your work requires you to produce or work with Word documents where the presentation must be exactly right, Writer may not be for you. It's possible that things have changed, but over many years, OpenOffice has often disappointed users when it came to the fine details. In which case, you might not be able to get away from Microsoft altogether. You might prefer to run XP as a virtual machine, and use Microsoft Word and Excel. LibreOffice is readily available on Fedora. It is not available in standard YUM repositories in CentOS 5.7. It might be possible to get it to run, but I fear that it would jeopardise stability, so I'm not prepared to try. The URL is www.documentfoundation.org/develop. Install the LibreOffice spreadsheet application with yum install libreoffice-calc. The total download size is 142 M, and the installed size is 417 M. Install the LibreOffice word processor application with yum install libreoffice-writer. The total download size is 86 M.

52 | OCTOBER 2012

Email

Many sites use Microsoft Exchange as the mail server, with users running Outlook as their mail client. I've not tried it myself, but I believe Evolution provides compatibility with Exchange. You could also use OWA (a.k.a. Outlook Web Access), the Web interface to Outlook. For this, all you need is a browser. My solution is a combination. I use NMH (new MH message system) as my main email client. From Firefox, I use OWA to preview mail and to access calendars. NMH (savannah.nongnu.org/projects/nmh) is an email system based on MH and is intended to be a (mostly) compatible drop-in replacement for MH. NMH isn't a single comprehensive program. Instead, it consists of a number of fairly simple single-purpose programs for sending, receiving, saving, retrieving and otherwise manipulating email messages. You can freely intersperse nmh commands with other shell commands, or write custom scripts that use nmh commands. If you want to use nmh as a true email user agent, you'll need to also install exmh to provide a user interface for it—nmh only has a command-line interface. Install it with yum install nmh. The total download size is 862 k, and installed size is 4.2 M. In the unlikely event that you want to use NMH, you'll need getmail to retrieve messages from Exchange using IMAP. Getmail (pyropus.ca/software/getmail/) is intended as a simple replacement for Fetchmail for those who do not need its sundry configuration options, complexities and bugs. It retrieves mail from one or more POP3 servers for one or more email accounts, and reliably delivers into a Maildir specified on a per-account basis. It can also deliver into mbox files, although this should not be attempted over NFS. Getmail is written entirely in Python. Install it with yum install getmail. The total download size is 174 k, while the installed size is 838 k. Don't let the description of Getmail put you off. I don't think the maintainers have changed it in years. In that time, IMAP4 and, presumably, other features have been added to it.
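If you do go down the NMH/getmail route, a minimal getmail rc file (normally kept under ~/.getmail/) might look roughly like the one below. The server, user name, password and Maildir path are placeholders, and the option names should be checked against the getmail documentation shipped with your version:

[retriever]
type = SimpleIMAPSSLRetriever
server = mail.example.com
username = henry
password = secret
mailboxes = ("INBOX",)

[destination]
type = Maildir
path = ~/Maildir/

[options]
# Leave mail on the server; only fetch messages not seen before
delete = false
read_all = false

Running getmail then pulls new messages into the Maildir, ready for NMH's inc or for a script run from cron.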

Browsers

This one is easy. Some rather annoying websites seem to go out of their way to only work perfectly with Internet Explorer, but even dedicated Microsoft users sometimes prefer Firefox or Chrome. Mozilla Firefox (www.mozilla.org/ projects/firefox/) is an open source Web browser, designed for standards compliance, performance and portability. I have run Firefox for a long time, so I usually run at least one Firefox at work--in my CentOS environment. That means it’s getting a bit long in the tooth, but it’s more than adequate for the bulk


Overview of my browsing. Install it with yum -y install firefox. For lightweight browsing, I use Dillo (www.dillo. org/), which is a very small and fast Web browser using GTK. Dillo is very basic. It’s graphical, but has no frames, HTTPS, or JavaScript support. I’ve used Dillo for a while now, and am satisfied that it is not going to destabilise my system. However, it is not available in standard Yum repositories in CentOS, so it just seemed easier to install it in my Fedora VM. Install with yum -y install dillo. Total download size is 3.5 M. For even more lightweight browsing, there are Lynx (lynx.isc.org/) and ELinks (elinks.or.cz/). Lynx is a textbased Web browser that does not display any images, but does support frames, tables and most other HTML tags. One advantage it has over graphical browsers is speed; Lynx starts and exits quickly, and displays pages swiftly. Install it with yum install lynx. ELinks is another text-based Web browser that’s very similar to Lynx in all respects mentioned. Install it with yum install elinks. These text-mode browsers can be very handy when you are interested in content rather than presentation. But why two browsers, you may ask? I have used Lynx for over 15 years. It’s reliable, solid and basic. However, its rendering of tables is unattractive and it lacks any sort of JavaScript support. Surprisingly, ELinks does have JavaScript support, so there are sites that can be viewed in Elinks but not in Lynx or Dillo. I haven’t used ELinks for a long time. I have found it hard to get used to its user interface where it differs from Lynx (like having to hold down the Shift key while using the mouse, to select or paste text)—but I’m coming around. Lately, I have found some irritants with Firefox, so I have started experimenting with Google Chrome (chrome.google. com). I run Google Chrome in my Fedora VM. There’s a nice how-to at http://bit.ly/M8BhFZ/. Install it with: cd /tmp wget https://dl-ssl.google.com/linux/linux_signing_key.pub rpm --import linux_signing_key.pub echo “[google] name=Google Chrome 32-bit \ baseurl=dl.google.com/linux/chrome/rpm/stable/i386” >> /etc/yum. repos.d/google.repo yum install google-chrome-stable

I had been vaguely considering Google Chrome for some time, but my intention turned to action when I found that one application (Remedy) was causing Firefox to abort several times a day. I found this behaviour particularly galling because I typically have four or five Firefox windows running, each with up to ten tabs open. Every time Firefox crashed, there was a period during which my machine was unusable till all my sessions were restored. I now run Remedy in Google Chrome, and have not had a problem. It’s too early

For U & Me

for me to ditch Firefox altogether, but the two browsers are sufficiently similar so it’s not onerous for me to run both.

Instant messaging

Microsoft Lync is an instant messaging client used with Microsoft Lync Server. In its place, I run Pidgin (pidgin. im) under Fedora. It is also available in CentOS, but not pidgin-sipe, which is needed to talk to Lync. Pidgin allows you to talk to anyone using a variety of messaging protocols including AIM, MSN, Yahoo!, Jabber, Bonjour, Gadu-Gadu, ICQ, IRC, Novell Groupwise, QQ, Lotus Sametime, SILC, Simple and Zephyr. These protocols are implemented using a modular, easy-to-use design. To use a protocol, just add an account using the account editor. Pidgin supports many common features of other clients, as well as many unique features, such as Perl scripting, TCL scripting and C plug-ins. Pidgin is not affiliated with or endorsed by America Online, Microsoft Corporation, Yahoo! Inc, or ICQ Inc. Install it with yum install pidgin pidgin-sipe.

Software for conversion

In the previous sections, my focus was inter-operation. To my mind, this section is at least as important. In the Microsoft world, your co-workers tend to reach for Word almost by reflex. Quite often, I am sent Word documents that contain a few lines of text, something that I think would be more appropriate if created in Notepad or WordPad, or better yet, simply entered directly in an email. If they don’t send me attachments, then their emails are sent as HTML. Most of the time, it is overkill. When it comes to lists, or any sort of tabular data, the choice is often Excel. There may be no intention to perform any sort of calculations--and yet, strangely, Excel seems to be the application of choice. For most of the operations I want to perform on this data, both Word and Excel documents are completely inappropriate. In the past, I tried to ask for documents in a more useful format, but often my requests were met with blank looks. I have since gathered a number of tools to allow me to convert such data to something more satisfactory.

Word documents

For a long time, my tool of choice was Antiword (www. winfield.demon.nl/). Antiword is a free MS-Word reader. It converts the documents from Word 2, 6, 7, 97, 2000, 2002, and 2003, to text, Postscript, and XML/DocBook. Antiword tries to keep the layout of the document intact. Install it with yum -y install antiword. Total download size is 175 k. I use Antiword from NMH to ‘render’ MS Word attachments. My resume (CV) is in Word format (.doc). I found that all placement agencies accepted this format and some accepted nothing else. It was originally created for me in MS Word. Since then, I have used OpenOffice to make

OCTOBER 2012 | 53


For U & Me

Overview

modifications. Whenever I complete a set of changes, I always use Antiword to produce a plain .txt file. I then copy the .doc and the .txt, appending a sequential number to the name, in order to have a backup. So, for example, cv.txt.13 was produced from cv.doc.13 using Antiword. If I want to see what I changed between, say, cv.doc.13 and cv.doc.16, I use the following command:

diff cv.txt.13 cv.txt.16

It's quick, lightweight and easy to read. The format of my resume may not be perfect, but the words are correct. Most agencies simply toss the document into some sort of database, so that they can search for certain capabilities. I'm sure the database is not the least bit interested in the format in which the document displays. Resumes are usually emailed, not printed. It's not that the format is so dreadful; it's just that things like tables may cross pages. But it's always going to be difficult to guarantee that sort of layout. A different printer can cause the layout to change. If I want it to be perfect, I create a PDF. There is one issue with Antiword: it does not handle .docx. In the past, this was not a problem, but as more users migrate to more recent products from Microsoft, .docx is becoming more common. I have seen nothing to indicate that the author of Antiword is planning modifications to handle .docx. And that's not altogether surprising. The difference between .doc and .docx is not just a few extra features; it's a completely different format. A .docx file is a zip file containing several other files, many of them .xml. If you are interested in converting .docx, you might investigate docx2txt at docx2txt.sourceforge.net. An alternative to Antiword is catdoc from www.wagner.pp.ru/~vitus/software/catdoc/ but I can't offer an opinion, because I haven't tried it. The reason I have not looked at docx2txt or catdoc is because I now use LibreOffice (Writer—as detailed earlier) for this sort of conversion. And, yes, LibreOffice handles .doc and .docx seamlessly. Here's an example of how to run it from the command line (or a script):

libreoffice --headless --convert-to txt:Text AttributeReview.docx
convert /tmp/AttributeXReview.docx -> /tmp/AttributeXReview.txt using Text

It’s just some extra typing compared to Antiword, but you could avoid that with an alias.

Spreadsheets

For a while, I used xls2csv (which comes from the same stable as catdoc). It worked fine until it came to dates. There are several date formats used internally by Microsoft Excel, and xls2csv was getting them wrong in the documents I was trying to convert. In a state of confusion, I stumbled on a different program called xls2csv at search.cpan.org/~ken/xls2csv-1.07/script/xls2csv. I haven't tried this one because of what I read at vinayhacks.blogspot.com.au/2010/04/converting-xls-to-csv-on-linux.html. What I use instead is LibreOffice (Calc, as detailed above). It has no problems with dates; and it also handles xlsx. Here's an example of how to run it from the command line (or a script):

libreoffice --headless --convert-to csv Access.xls
convert /tmp/Access.xls -> /tmp/Access.csv using Text - txt csv (StarCalc)

54 | OCTOBER 2012
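If a whole directory of Word attachments needs converting, the same command can be wrapped in a small loop. The directory name below is made up, and note that the conversion may fail if another LibreOffice instance is already running with the same user profile:

cd ~/attachments
for f in *.doc *.docx; do
    libreoffice --headless --convert-to txt:Text "$f"
done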

Possibly the main drawback with LibreOffice is that it has a heavy footprint compared to the other applications I have mentioned. But, these days, if your alternative is to run XP or a later offering from Microsoft, you need a machine that can carry the load. Further, if you plan to follow my general approach and run some of these tools in a VM, you are going to need a fairly beefy machine. My experience has been that performance is brilliant. At work, I have an Intel i7 (which I didn’t pay for). That’s the whole point: this is about Linux at work—software for the office.

Oracle databases

For interactive work, SQLDeveloper provides an environment that's consistent across Microsoft and Linux platforms. It has a helpful GUI that makes it very easy to use. Once you've mastered the arcane syntax required to connect to the database, the SQL needed to extract data is not too hard. The GUI provides a lot of help. To run, this tool requires IcedTea (icedtea.classpath.org), the OpenJDK development tools. Install these with yum install java-1.6.0-openjdk-devel. The URL for SQLDeveloper is http://bit.ly/99P9Zv. The total download size is 148 M. Next, install SQLDeveloper with yum --nogpgcheck localinstall sqldeveloper-3.0.04.34-1.noarch.rpm. However, for unattended operation, I choose SQLPlus. Although there are a number of irritating things about it, I can ignore them all once I have extracted the data I need out of an Oracle database into a CSV. The URL is www.oracle.com/technetwork/topics/linuxsoft-082809.html for Instant Client Downloads for Linux x86. You can download oracle-instantclient11.2-basic-11.2.0.3.0-1.i386.rpm and oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.i386.rpm. Install these with yum --nogpgcheck localinstall oracle-instantclient11.2-basic-11.2.0.3.0-1.i386.rpm (total size: 168 M) and yum --nogpgcheck localinstall oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.i386 (total size: 2.6 M).
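As a rough sketch of that unattended use (the connection string, table and columns are all invented for illustration), a script like the one below can be fed to SQLPlus to spool a result set out as a CSV:

-- extract.sql
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
SET TRIMSPOOL ON
SPOOL /tmp/assets.csv
SELECT hostname || ',' || owner || ',' || location FROM assets ORDER BY hostname;
SPOOL OFF
EXIT

It can then be run non-interactively with something like sqlplus -S scott/tiger@//dbhost:1521/ORCL @extract.sql, which slots neatly into the kind of scripts described in the next section.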

Putting it all together

I’m going to outline the sort of tasks that I use these tools for. Typically, people maintain or generate some data, and then


want that data used to generate some other data or a report. So I am told to use an Excel spreadsheet on a network share as part of a process. Typically, I write a shell script. Part of the process might involve using LibreOffice as mentioned above to convert the Excel spreadsheet to a CSV. Perhaps I extract some of the fields from the CSV, use them to interrogate a database, match the output against some other criteria, and finally generate a new CSV. I can send the output CSV to my customer because Excel accepts CSV as input. I suppose if it were requested, I could use LibreOffice to convert the .csv to .xls or .xlsx, but I have never been asked for that. I find that scripting is much more powerful than many of the things one might try to do in Excel. The process can be made more automatic, which is particularly advantageous if the process has to be run frequently or regularly. In the latter case, I can invoke my script from cron. But do bear in mind that I am talking about tools. It's always best to find the most appropriate tool for the task. I haven't covered everything, but with this arsenal I can go a long way.
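A stripped-down sketch of that kind of glue script is shown below. Every file name, field position and data source in it is made up purely for illustration:

#!/bin/sh
# Convert the shared spreadsheet to CSV (LibreOffice writes Access.csv
# into the current directory).
libreoffice --headless --convert-to csv /share/reports/Access.xls

# Pull the second field (say, a host name) out of each data row.
awk -F, 'NR > 1 { print $2 }' Access.csv > hosts.txt

# Look each value up somewhere else and build the CSV for the requester.
while read host; do
    count=$(grep -c "$host" /var/log/inventory.log)
    printf '%s,%s\n' "$host" "$count"
done < hosts.txt > report.csv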

For U & Me

My previous article (Funny Mythness) discussed ways in which you could have Linux in the workplace using virtual machines. In this article, I've suggested ways in which you can move more towards Linux. I've talked about applications that allow you to inter-operate with others who are probably restricted to Microsoft's office applications; and I've discussed tools that allow you to convert from inconvenient formats to those more amenable to manipulation using Linux tools. In the next article, I will look at UNIX tools and how to unleash their power on files converted with the tools in this article.

By: Henry Grebler

The author has spent his days working with computers, mostly for computer manufacturers or software developers. His early computer experiences include relics such as punch cards, paper tape and mag tape. His darkest secret is that he has been paid to do the sorts of things he would have paid money to be allowed to do. Just don't tell any of his employers. He has used Linux as his personal home desktop since the family got its first PC in 1996. Back then, when the family shared the one PC, it was a dual-boot Windows/Slackware set-up. Now that each member has his/her own computer, Henry somehow survives in a purely Linux world. He lives in a suburb of Melbourne, Australia.

(Continued from Page 49....)

Translatewiki.net

This is a clever project that is in the process of localising MediaWiki. The Translate MediaWiki extension was started in 2006 by translatewiki.net/wiki/User:Nike and translatewiki.net/ wiki/User:Gangleri, and has become very popular for localising MediaWiki, its extensions and a host of other software. MediaWiki itself has been designed with internationalisation and localisation in mind. So it’s no wonder that it is available in over 280 languages. Like other Web-based projects, this too requires a contributor to create an account and select preferred languages. Then a project can be selected for localisation, and its various messages can be localised. Figures 3 and 4 show a typical localisation screenshot. The suggestions from translation memory are shown, along with the percentage of matches, and the blue dots following the number are hyperlinks to the actual translation. The information about the context of the string is shown next, followed by the source, and then an edit box to enter the localised text. This update is saved just like a Wikipedia page. Additional menu buttons allow easy navigation across the messages. Another advantage with this tool is that most updates will go live on the respective language wiki projects of Wikimedia Foundation in 24 hours. Most Indian-language MediaWiki projects would have been already localised. But occasionally, users may come across pages for which some messages are still in English (Figure 5). Usually, when a project is partially localised, trying to search for the specific message is time-consuming

for typical free software. In MediaWiki, it becomes very simple to locate the message_id by appending ‘?uselang=qqx’ to the URL of the page. In such a case, MediaWiki software displays message_ids rather than actual English strings (Figure 6). Then you can use the message_id to search Translatewiki.net to directly locate the untranslated message string and translate it (Figure 7). In this article, we have seen three different Web-based platforms for localisation, which make it easy for anybody to contribute to localisation in their preferred languages. So why not try your hand at it, and share your experience with fellow readers? I will be happy to devote an article exclusively to your feedback and questions. For more information [1] Launchpad home page: https://translations.launchpad.net/ [2] Pootle features: translate.sourceforge.net/wiki/pootle/features [3] LibreOffice translations using Pootle instance: https:// translations.documentfoundation.org/ [4] Translatewiki.net home page: translatewiki.net/wiki/Main_Page

By: Arjuna Rao Chavala The author is an independent consultant in the areas of IT, program/ engineering management and open source. He is also the president of Wikimedia India and the WG Chair for the IEEE-SA project P1908.1, ‘Virtual keyboard standard for Indic languages’. He can be reached through his website arjunaraoc.blogspot.in and Twitter ID @arjunaraoc.

OCTOBER 2012 | 55


For U & Me

Career

Development

A Smart Career Move

We bring you a ringside view of the enterprising and creative careers in PHP development.

Companies are now expanding their business realms and increasingly moving online to build a strong customer base as well as augment their cash flows. In an effort to garner more website traffic, which means boosting revenues, organisations are increasingly hiring PHP professionals to design customised and eye-catching Web portals. Backed by an open source ecosystem, PHP (PHP: Hypertext Preprocessor) development has turned out to be not just cost-effective, but also the most preferred choice in the business circuits. Statistics reveal that almost 40 per cent of the portals online (like Yahoo, Wikipedia, Facebook, Flickr, etc) use PHP as a server side scripting language. As the market in PHP development is expected to offer a wide range of opportunities in the years to come, a career in this domain can be extremely fulfilling.

PHP development: A popular career option!

While IT companies are increasingly adopting policies to recruit and retain the best PHP development talent, IT

56 | october 2012

training institutes across the country too are witnessing a healthy rise in the number of queries received for a course in this domain. The training institutes are making the most of this trend and offering relevant courses to meet the growing requirements. Isha Malhotra, director, Tech Altum, Noida, says, “The increased career opportunities in PHP development have evoked a certain amount of interest in the aspiring IT students, and the influx of aspirants joining this course in our institute bears testimony to the fact. Not just that, if you take a cursory glance at the online job postings, you will discover vacancies on PHP development appearing every two days.” So what makes PHP Web development one of the most sought after career options? Any programming language that is easy to learn, execute and operate naturally appeals to a novice programmer and that explains why PHP is a hit in the recruitment landscape, feels Ram Kumar, manager, Operations, Beeflex Education, Bengaluru. “Compared to ASP.net, Java.net, C, C++ or any other software


Career For U & Me SOME PHP DEVELOPMENT COURSE PROVIDERS UGS Academy A-164 Sector-63, Noida Ph: 8800549993, 8800549991, Email: hrd@ugsacademy. com, Website: http:// ugsacademy.com

Beeflex Education No 32, 2nd Floor, CMH Road,Indiranagar, Bengaluru Ph: 080-65781111, 080-65785555, Email: beeflexedu@gmail. com, Website: www. beeflexeducation.com

TGC Multimedia and Animation Academy H-85A, 2nd Floor, South Extension Part-I, New Delhi-110049 Ph: 011-46026939, 41680790, 9810031162, Email: info@tgcindia.com, Website: www.tgcindia.com

language, it is easier to develop Web apps on a PHP-backed environment. That apart, PHP scores over other languages in aspects like requiring less time to create scripts, its versatility, and the efficiency with which it copes with the challenges of Web development. Besides, it is cost effective. Moreover, success stories of enterprising Web developers minting money by designing peppy websites drive prospective programmers to make a career in PHP Web development,” says Kumar.

Recruitment trends

According to industry experts, the last few years have seen a swell in the demand for PHP professionals, as more small and medium-based businesses that hired Web developers have switched to open source platforms to cut costs. Rahul Mehra, CEO, UGS Academy, Noida, says, “Open source technology, which was not considered a hot option till a few years ago, is now the driving force in the IT marketplace. That explains why around 20 million Web applications, including popular content management systems like Drupal and Wordpress, are built on PHP. The wide scope in the PHP terrain automatically boosts one’s chances of employability.” Mehra feels a common myth that is doing the rounds in the community is that big companies don’t hire PHP professionals, but according to him, that is not quite true. “Though there is minimal requirement for PHP developers for upmarket brands, companies like Accenture and Capgemini do hire them. But smaller companies recruit them on a larger scale and pay them reasonably well. Small and medium-sized enterprises in our sector come to us and tell us about their requirement in PHP development. We provide training to our students and help them get a job in these companies,” says Mehra.

Minimum input, maximum benefit

Another major reason why students make a beeline to the various training institutes to learn this course is that a mere degree along with training in PHP development can help them get a job quickly. Sivakumar, CEO, Siva Soft Training and Development Institute, Hyderabad,

Tech Altum 501, Om Complex, 5th Floor, Naya Bans, Sec-15, Noida-201301 Ph: 0120-4280181/ 9911640097 Email: info@ techaltum.com, Website: http://www. techaltum.com/

SivaSoft Training and Development Block No: 402, Annapurna Block, Aditya Enclave, Near Maitrivanam, Ameerpet, Hyderabad 500016, Ph: 092481 53330, Email: sivasoft@ymail.com Website: http://www. sivasoft.in

says, “When it comes to the job market, one can reap the maximum benefit after undertaking a course in PHP development. Of course, knowing more programming languages apart from PHP has its own advantages, but it is definitely not a prerequisite. There is a huge demand for skilled PHP professionals in the market and companies do hire them. In our institute, we have students with four years of Java experience, but they undertake an advanced course to specialise in PHP. Even if one is looking to freelance, PHP is the right option.” So, what can one expect in terms of remuneration? Ravi Ranjan, director, TGC Multimedia and Animation Institute, New Delhi, shares his views: “The starting salaries may not be that great for a novice PHP programmer, but if one has the right skill sets, salaries do touch record highs within two years. A good PHP programmer can earn up to Rs 200,000 to 300,000 per month. There are people who hesitate to embark on a career in PHP because of lower starting pay packages. The myth needs to be busted and spreading awareness among these new aspirants will definitely help. Joining a good training institute will serve the purpose.”

Do certifications help?

The next big question is whether Zend PHP certifications boost employability? Namita Misra, a Zend-certified senior software engineer at OSSCube, Noida, elaborates, “It’s not only certification that helps, but also the way you do coding and the way you apply logic while writing a program—even the interviewer views you in a different way. Certifications add credibility to your resume and your package gets a boost. Certifications also convey a message that the candidates have a strong inclination towards this domain and are serious about their careers.”

By Priyanka Sarkar The author is a member of the editorial team. She loves to weave in and out the little nuances of life and scribble her thoughts and experiences in her personal blog.

october 2012 | 57


Developers

Insight

Track Time in Your Driver

with Kernel Timers

This article shows how, with the help of kernel timers, resource allocation and the time-bound operation of various devices occurs without any conflicts.

Kernel timers offer a great way to keep track of time in device drivers. Instead of implementing a loop to wait for some time to expire, a timer function can provide a one-step solution. Timers are also used to poll a device at a constant interval, when the hardware can't fire an interrupt. The kernel measures time by counting the number of ticks since the time the system has booted, and stores this count in a global jiffies variable. The define HZ controls the rate of ticks and can be modified by the user in <asm/param.h>. A kernel timer is a data structure that allows the kernel to invoke a user-defined function, with user-defined arguments, at a user-defined time. This is defined in linux/timer.h and kernel/timer.h. In recent kernels, there are two ways to implement timers. The first is the timer API, which is used in most places, but it is less accurate and quite simple. The other is the high-resolution timer API, which allows us to define time in nanoseconds.

58 | october 2012

A simple timer API

The timers are implemented as a doubly linked list; the structure is shown below:

struct timer_list {
	struct timer_list *next;
	struct timer_list *prev;
	unsigned long expires;
	void (*function)(unsigned long);
	unsigned long data;
	int slack;
};

The kernel provides a driver with helper functions to declare, register and remove kernel timers. You will never


have to touch (and you must not) the *next and *prev pointers. The kernel handles this. Thus, from a user point of view, the timer_list provides an expiration duration, a callback function and user-provided context. The expires field represents the jiffies value when the timer is expected to run. The user can initialise the timer by simply calling setup_timer, which sets up the callback and user-provided context. Also, the user can set the values of data and function in the timer and call init_timer (which is called internally by setup_timer). The prototypes are given below:

Developers

Figure 1: Compiling the timer_example.c

void init_timer(struct timer_list *timer);
void setup_timer(struct timer_list *timer, void (*function)(unsigned long), unsigned long data);

The next step is to set the expiration time. This can be done by calling mod_timer. Timers are automatically deleted upon expiration, but to leave it hanging after the module has been unloaded may cause the kernel to hang. This also means that you can add the timer again from the handler itself. The cleanup_module function calls timer_pending to detect whether the timer has expired or not. It returns true if the timer is pending. If you want to extend the timeout to acquire the following behaviour, issue the following commands:

Figure 2: Timer module running

Figure 3: Timer module removed

del_timer(&timer1);
timer1.expires = jiffies + new_value;
add_timer(&timer1);

You can do this by using mod_timer(&timer1, new_value); if the timer has expired by the time mod_timer is called, it acts like add_timer and adds the timer again.

Implementing the timer as an LKM

An LKM (Loadable Kernel Module) is supported by many UNIX-like operating systems and Microsoft Windows; it is an object file that is used for extending the kernel to support new hardware. Let's try to make a module that implements timer functions, to show you how to use timers in practice. For you to try this, you must have the kernel headers package installed. I have Fedora 17 installed, so I used su -c 'yum install kernel-devel' to install the specific headers. You may need to install 'kernel-PAE-devel' if you are using the PAE kernel. Debian and Ubuntu users may use apt-get install module-assistant to install the packages necessary to build an LKM. After you have finished installing the packages, consider the following program:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/timer.h>

/* create a timer named timer1 */
static struct timer_list timer1;

/* called when the timer expires */
void timer1_expires(unsigned long data)
{
	printk("timer1 called back (%ld)\n", jiffies);
}

int init_module(void)
{
	int retval;

	printk("Setting up timer for initialisation\n");
	setup_timer(&timer1, timer1_expires, 0);
	printk("Starting timer to fire in 200ms (%ld)\n", jiffies);
	retval = mod_timer(&timer1, jiffies + msecs_to_jiffies(200));
	if (retval)
		printk("Error in mod_timer\n");
	return 0;
}

void cleanup_module(void)
{
	int retval = del_timer(&timer1);

	if (retval)
		printk("Timer still in use\n");

october 2012 | 59


Developers

Insight

printk("Timer removed");

enum hrtimer_restart (*function)(struct hrtimer *);

return;

struct hrtimer_clock_base *base;

}

unsigned long state; #ifdef CONFIG_TIMER_STATS

Within init_module, setup_timer initialises the timer and sets it off by calling mod_timer. When the timer expires, timer1_expires is called. When the module is removed, the timer is deleted by invoking del_timer. Save this in a directory of your choice and name the file timer_example.c. The next step will be to create a Makefile in the same directory, as shown below: obj-m += timer_example.o KDIR := /lib/modules/$(shell uname -r)/build

int start_pid; void *start_site; char start_comm[16]; #endif };

The hrtimer structure must be initialised by hrtimer_ init(), and then it can be started with hrtimer_start. This call includes the expiration time (in ktime_t) and the mode of the time value (absolute or relative value). The prototypes are:

PWD := $(shell pwd) void hrtimer_init( struct hrtimer *time, clockid_t which_clock, all: $(MAKE) -C $(KDIR) M=$(PWD) modules

enum hrtimer_mode mode ); int hrtimer_start(struct hrtimer *timer, ktime_t time, const enum hrtimer_mode mode);

clean: $(MAKE) -C $(KDIR) M=$(PWD) clean

Now, in your terminal, as the root, go to the source code directory and run make to compile the code (see Figure 1). After compiling, you will find the output timer_example.ko file, which has to be inserted into the kernel, with insmod timer_example.ko. Since there are no error messages, how does one verify that the module has been inserted? Check the dmesg output with dmesg | tail (Figure 2). Now remove the module (rmmod timer_example) and the timer will also be removed, as you can see with another dmesg | tail (Figure 3). To get rid of the messages with warnings, include a macro MODULE_LICENSE() with the string ‘GPL’ included in brackets as a parameter.

High-resolution timer API

The simple API discussed above is easy to implement, but is not accurate enough to be used for real-time applications. The high-resolution timer API (‘hrtimer’) has almost the same structure (below) as the simple API, but some changes make it work with great accuracy. The global jiffies is not used, but a special data type ktime is used to represent time. This structure defines timers from a user perspective. This is implied by _softexpires (the expiry time given when the timer was armed), which gives the absolute earliest expiry time of the hrtimer. The function member is the same as before; start_pid is an important field—it stores the PID of the task that started the timer, and start_comm stores the name of the process that started the timer. struct hrtimer { struct timerqueue_node node; ktime_t _softexpires;

60 | october 2012

To cancel a timer, use hrtimer_cancel or hrtimer_ try_to_cancel. The first will cancel the timer, but if the timer has already fired, it'll wait for the callback function to finish, and then cancel the timer. The latter will return false if the timer has already fired. Use this as per your needs. extern int hrtimer_cancel(struct hrtimer *timer); extern int hrtimer_try_to_cancel(struct hrtimer *timer);

There are even more timer operations, which can be found in include/linux/hrtimer.h.

Some points to take care of

Kernel timers are run as a result of software interrupts. The timer function must be atomic in all cases. Kernel timers can cause serious race conditions, even on uni-processor systems, since they are asynchronous with other code. So, any piece of code used by the timer callback function must be protected from concurrent access, either from atomic types, or by using spin-locks. Last, but not least, a timer should run on the same CPU that registers it. References [1] http://en.wikipedia.org/wiki/Kernel_(computing) [2] http://fedoraproject.org/wiki/Building_a_custom_kernel [3] http://www.kernel.org/doc/Documentation/timers/hrtimers.txt

By: Rahul Gupta The author graduated from BVCOE, New Delhi, and is fascinated with Web 2.0. He loves to play around with Linux and will be happy to receive any queries or suggestions at rahulgupta172@yahoo.com.


CodeSport

Sandya Mannarswamy

Over the last couple of months, this column has covered dynamic languages such as JavaScript, and how they differ from traditional statically compiled languages like C or C++. Since many readers have requested a discussion on programming interview questions, this month’s column takes a break from JavaScript and features a medley of interview questions. Let’s first list this month’s interview questions: 1. You have been asked to design a price comparison website for online gold jewellery sellers. One of the ways of getting the data from online stores is to crawl their websites and get their product price lists. Hence, you need to design a code snippet that, given a URL, will crawl all URLs reachable from the page at that URL, and store the page contents of each URL as separate files in a directory. Write a C code snippet to do this. (Clue: Try to consider this a graph traversal problem.) 2. We are all familiar with the spanning trees of a graph. A spanning tree of a graph G(V,E) is a subset of the graph’s edges, such that it forms a tree spanning all vertices of the graph. The Minimum Spanning Tree (MST) of a graph with weighted edges is a spanning tree whose sum of edge weights is the minimum among all the spanning trees possible for a graph. Questions on spanning trees and MSTs are quite popular in software interviews, so do make sure you read up on algorithms to construct MSTs. Kruskal’s algorithm and Prim’s algorithms are popular for constructing MSTs. Both are greedy algorithms, which we have discussed in one of the earlier columns. Here is a variant of the standard MST question: If you are given a weighted graph, how can you find the maximum spanning tree? (The maximum spanning tree is the spanning tree whose sum of edge weights is the maximum.) Is it possible to use a greedy algorithm to

determine the maximum spanning tree?

3. Given a sequence of ‘n’ integers a1, a2, a3, ..., an, what is the maximum number of inversions that are possible in any permutation of the sequence? An inversion is a pair of elements that are out of sorted order. For instance, an array of integers sorted in ascending order has zero inversions. Can you write a code snippet to determine the number of inversions, given a permutation of ‘n’ integers? What is the expected time complexity of your algorithm?

4. Many of the popular programming languages like Java, Ruby, Scala, JavaScript, etc, support automatic memory management, which means that dynamically allocated memory can be freed automatically by the language runtime once it is no longer needed. This is popularly known as garbage collection, and a number of complex algorithms have been developed for this purpose. However, C/C++ requires the programmer to perform explicit memory management by freeing the memory through free/delete calls. What makes it difficult to implement an automatic memory reclamation facility for C? How would you design a garbage collector for C? What language features would you ask your developers to forego if you wanted to design an efficient garbage collector for C?

5. We are all familiar with the concept of adjacency in a graph. Two vertices are said to be adjacent if there is an edge connecting


them. A set of vertices, S, such that no two vertices in S have an edge between them, is known as an ‘independent set’ of the graph G. Given a graph G(V,E), can you find the maximum independent set of G? The maximum independent set is that independent set containing the maximum number of vertices among all independent sets. Given an arbitrary graph G, can you design a polynomial-time algorithm to find the maximum independent set of G? If so, give the algorithm. If not, explain why no polynomial-time algorithm is possible. Would your answer change if the graph was determined to be a tree? If you are asked to colour a graph, and you are given the different independent sets present in the graph, how would you use that information to colour the graph with a minimum number of colours?

6. Given strings A, B and C, find out if string C is an interleaved version of strings A and B. Basically, the same order of letters, but mixed together, is an interleaving. For example, A = abcd; B = xyz; C = abxcydz is an interleaving—but not if C is baxcdyz. Write an algorithm to determine if string C is an interleaving of strings A and B. How would you handle the case if a character in A is also present in B? What is the complexity of your algorithm: (a) when there are no duplicate characters in strings A and B, and (b) when there are duplicate characters in strings A and B?

7. We all know that eggs break when dropped from a height. If you are present in a skyscraper with N floors, you are told that if you drop an egg from floor K or any floor above it, to the ground, then the egg will break. If you drop the egg from a floor below K to the ground, then it will not break. You are given a supply of eggs, and asked to determine the value of K. What would be your strategy of determining the value of K if you are given an infinite supply of eggs? How would your solution change if you are given only three eggs?

8. You are given two sets of integers—Set A and Set B, both containing n integers each. You are given a value ‘k’. You are asked to find out whether there exists a pair of integers (a,b) such that a ε A, b ε B and a + b = k. A brute-force method would be to test each pair of integers (a,b). However, this would require a complexity of O(n^2). Can you design an algorithm which can be more efficient than the brute-force algorithm? Can you design a solution which has O(nlogn) complexity?

9. We are familiar with different synchronisation mechanisms like mutex, semaphore, etc. Is it possible to implement these different synchronisation mechanisms if the underlying architecture does not support an atomic way of testing a memory location for the presence of a specific value, and changing it to a new value? (By atomic, I mean that both the events—the testing of the memory location and the writing of a new value to it—either happen successfully, or neither of them happens.)

10. We are all familiar with algorithms for finding the median in linear time. A popular one is the variant of quick-sort based on the partition procedure of quick-sort, so that we limit our search for the median continuously to one side of the given array. (For a detailed discussion of the implementation, see http://en.wikipedia.org/wiki/Selection_algorithm) However, consider a large stream of integers that are continuously getting generated and passed on to you. You are asked to return the median for the stream of integers already received. (Clue: Remember that you are asked for the median, multiple times. So spending a lot of effort on setting up a data structure such that repeated queries are faster, would amortise over the multiple queries you would receive.) Design an algorithm that returns the median for the stream of integers that have been received till that point in time. What is the complexity of your algorithm?
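As a small illustration of the two-sets question (finding a pair with a + b = k), here is one way to reach O(n log n): sort one set and binary-search the complement of each element of the other. This sketch is my own, not part of the column, and the sample values in main() are arbitrary:

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *x, const void *y)
{
	int a = *(const int *)x, b = *(const int *)y;
	return (a > b) - (a < b);
}

/* Returns 1 if some a in A and b in B satisfy a + b == k, else 0.
 * Sorting B costs O(n log n); each of the n binary searches costs O(log n). */
static int has_pair_sum(const int *A, int *B, int n, int k)
{
	int i;

	qsort(B, n, sizeof(int), cmp_int);
	for (i = 0; i < n; i++) {
		int want = k - A[i];
		int lo = 0, hi = n - 1;

		while (lo <= hi) {
			int mid = lo + (hi - lo) / 2;

			if (B[mid] == want)
				return 1;
			if (B[mid] < want)
				lo = mid + 1;
			else
				hi = mid - 1;
		}
	}
	return 0;
}

int main(void)
{
	int A[] = { 3, 9, 14, 20 };
	int B[] = { 2, 5, 11, 30 };

	/* 14 + 11 = 25, so a pair exists */
	printf("%s\n", has_pair_sum(A, B, 4, 25) ? "pair found" : "no pair");
	return 0;
}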

My ‘must-read book’ for this month

This month’s suggestion for the must-read book/article comes from me, though it actually is neither a single book, nor an article. I wanted to suggest that you look through the online algorithms course on the COURSERA website (http://www.coursera.org/courses). The algorithms course is taught by Robert Sedgewick, author of the popular book on data structures. Another online course I would like to recommend to our readers is the ‘Design of Computer Programs (CS212) Programming Principles’, which is available at Udacity (http://www.udacity.com/). This is an excellent course to improve your problem solving skills. If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book’s name, and a short write-up on why you think it is useful, so I can mention it in this column. It would help many readers who want to improve their coding skills. If you have any favourite programming puzzles that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_ yahoo_DOT_com. Till we meet again next month, happy programming and here’s wishing you the very best!

By: Sandya Mannarswamy The author is an expert in systems software, and is currently working as a researcher in IBM India Research Labs. Her interests include compilers, multi-core technologies and software development tools. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group, ‘Computer Science Interview Training India’ at http://www. linkedin.com/groups?home=&gid=2339182.


How To

Developers

Get Started with Kernel Module Programming

Kernel module programming helps create drivers that are missing in the current kernel's list. This article takes readers through the basics of creating a kernel module, navigating through the output, loading and removing a module. Knowledge of 'C' language (most of the kernel code is in C) and the GNU C extensions is required.

I use the 10.04 32-bit desktop edition, with kernel 2.6.34.12. New distros carry the kernel 3.x series. There is a very slight difference between the architecture of the 2.x and 3.x series, so what I discuss in this article will work on both. The source code for this article can be downloaded from http://www.linuxforu.com/article source_code/oct12/ kernel module.zip

So let's get started with the traditional ‘hello world’ module; the filename is hello.c.

#include <linux/init.h>		/* #1 */
#include <linux/module.h>	/* #2 */
MODULE_LICENSE("GPL");		/* #3 */

static int hello_init(void)	/* #4 */
{
	printk(KERN_ALERT "Hello world");	/* #5 */
	return 0;
}

static void hello_exit(void)	/* #6 */
{
	printk(KERN_ALERT "GoodBye");
}

module_init(hello_init);	/* #7 */
module_exit(hello_exit);	/* #8 */

Note: Most libraries are in the include folder of the kernel root directory. In module programming, as well as in core kernel programming, <linux/headerfile> is short for /root kernel directory/include/linux/headerfile. So you can easily say that include is the base directory for simple libraries. For architecture-related libraries, arch is the base directory.

Now let me explain the code, line by line.
1. init.h is included because it will get executed as soon as you load the module in the kernel space. It is mandatory to include this header file in almost every program.
2. All the module functions and recognisers are present in the module.h header file. It is also mandatory if you are writing a module for the kernel.


3. MODULE_LICENSE() will specify the licence type under which this module is written. Get into the habit of writing it, otherwise you will get unnecessary ‘taint’ warnings in the kernel log. I have used the GPL.
4. A static int function is created to execute on loading of the module. This function is also known as an initialise function. It must not be of the void type, because the function must return false if initialisation failed.
5. There is no printf() in kernel programming; instead, there is printk(), which copies the formatted string into the kernel log buffer, which is normally read by the syslog program. It works similar to printf(); the only difference is that printk() enables us to specify a priority flag that is used by syslogd to decide where to display kernel messages. In this line, KERN_ALERT is the flag. KERN_ERR, KERN_WARNING, etc, are some other flags. It also works well without the flags.
6. Like the initialise function, you made a function which will be executed when you remove the module from the kernel space. The function usually has a void return type.
7. The module_init(function name) call will tell the kernel which function to execute on the module’s initialisation.
8. The module_exit(function name) call will tell the kernel which function to execute on module removal.

Makefile

obj-m += hello.o	#1

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules	#2

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Let’s compile the above module using a Makefile with the above code, which should be in the same folder as hello.c; otherwise, you need to specify an absolute path to the file.
1. obj-m is a list of what kernel modules to build. The .o and other objects will be automatically built from the corresponding .c file (no need to list the source files explicitly). So just give the name of your program file against obj-m (without the .c extension).
2. Describing the base build path: it will compile and build your module with respect to the mentioned path. Please note that the above path is generic for most Linux distributions. It will act as a base for the module build.

Compile, load and remove

Compile the module using make. A lot of files will be generated, such as hello.ko, hello.mod.o, hello.o, etc. Load the module into kernel space using the insmod utility; you need root permissions to do that. So run sudo insmod hello.ko in the terminal. Your module gets loaded. To check the output of the module, since printk() doesn't write to the console (though this can be done by a kernel hack), check the kernel log with the dmesg command. Run it and you will see ‘Hello World’ in the output. Now let’s remove the module from the kernel space using sudo rmmod hello.ko in the terminal. Verify this using dmesg again; you will see GoodBye printed at the end of the syslog stack. See the screen-shot (Figure 1).

Figure 1: Build, load, remove and verify output

Note: You can't load a module twice successively. If you have just loaded a module, and you want to load it again, you need to unload it first; else you will get a console error message.

Passing parameters

If you want to pass a single or a series of parameters along with the module itself, which is quite useful when your module depends on other modules' output, follow the steps shown below.

Passing a single parameter

Module filename: parameter.c

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>	/* #1 */

MODULE_LICENSE("GPL");

int paramTest;
module_param(paramTest, int, S_IRUSR|S_IWUSR);	/* #2 */

static int param_init(void)
{
	printk(KERN_ALERT "Showing the parameter Demo");
	printk(KERN_ALERT "Value of paramTest is: %d", paramTest);	/* #3 */
	return 0;
}


static void param_exit(void)
{
	printk(KERN_ALERT "Exiting the parameter demo");
}

module_init(param_init);
module_exit(param_exit);

Here’s an explanation of the code that's different from the Hello World module:
1. All module parameter-related functions are in the moduleparam header file.
2. Here, module_param(parameter name, data type, permission) is used to tell the kernel the name of the parameter, its variable data type, and the permissions for it. Permissions are generally of five types: S_IWUSR, S_IRUSR, S_IXUSR, S_IRGRP and S_IWGRP. S_I is a common prefix; R corresponds to read, W to write and X to execute. USR means the user, and GRP, the group. Permission in module_param can be a single permission, or multiple ones using the | (OR) operator in between, as shown in the code.
3. Displaying the value of the parameter in the initialise function.

Change hello.o to parameter.o in the Makefile; compile using make. Pass the parameter to the module during its insertion into the kernel space: run sudo insmod parameter.ko paramTest=2. Please note that the name is case-sensitive. After inserting, check the kernel log. You will see the output ‘Value of paramTest is: 2’. If you don't specify a value for the parameter during insertion, it will take the data type's default value—0 for int. See Figure 2.

Figure 2: Passing a single parameter with the module

Passing a series of parameters

You can pass more than one parameter to the module using the concept of arrays. Here's the code (parameterArray.c):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

MODULE_LICENSE("GPL");

int paramArray[3];	/* #1 */
module_param_array(paramArray, int, NULL, S_IWUSR|S_IRUSR);	/* #2 */

static int array_init(void)
{
	printk("Into the Parameter Array Demo");
	printk("Array Elements are: %d\t%d\t%d", paramArray[0],
	       paramArray[1], paramArray[2]);
	return 0;
}

static void array_exit(void)
{
	printk("Exiting the Array Parameter Demo");
}

module_init(array_init);
module_exit(array_exit);

1. Declaring an int array of size 3.
2. To make the array variable act as parameters, use the module_param_array() function instead of module_param(). Arguments to the function are the variable name, data type, counter and permissions, respectively. What is a counter? It is used to track the number of parameters passed to the module. Optionally, you could also ignore the count and pass NULL, as I did.

Compile the module. To insert, run sudo insmod parameterArray.ko paramArray=1,2,3. To verify, check the kernel log and you will see the paramArray values as 1,2,3. If parameters are missing, the kernel again takes the default value of the data type, 0 for int.

Playing with processes

Each process in Linux has certain properties associated with it, such as process ID, runnable state, flags, etc, in a structure present in the sched.h header file in /include/linux. The structure is named 'task_struct'. You can retrieve lots of real-time information by accessing this structure; all you need is to create a pointer to the structure, and then start using it. I want to retrieve the process ID, process name and its state in a real-time environment; here's the code (process.c):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/sched.h>

MODULE_LICENSE("GPL");


static int test_init(void)
{
	struct task_struct *task;	/* #1 */

	for_each_process(task) {	/* #2 */
		printk("Process Name: %s\t PID:%d\t Process State:%ld\n",
		       task->comm, task->pid, task->state);	/* #3 */
	}
	return 0;
}

static void test_exit(void)
{
	printk(KERN_INFO "Cleaning Up.\n");
}

module_init(test_init);
module_exit(test_exit);

Explanations:
1. A pointer named task was created to the structure task_struct.
2. You iterate over the processes handled by the kernel (whether they are in the executing or waiting state).
3. You access the structure through the pointer and display its properties.

Figure 3: Retrieving process information

Make the necessary changes in the old Makefile, compile, and insert it. You should see something like what's shown in Figure 3. You will see process names along with their process IDs and the process state (an integer value; Google for what those values mean). There are over 50 properties associated with processes defined in that structure. So have a read of the beautifully documented structure and start playing with processes.

Linux kernel module programming is the best thing I have come across so far. I am learning it in depth, and will share my experiences in the future too. A large number of e-books are also available online. The power of Google is tremendous. So explore them. Queries and suggestions are always welcome.
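As a further illustration, and not part of the article's own listing, the same structure can be inspected for the task that is currently running through the 'current' macro. A minimal sketch, assuming a 2.6/3.x kernel; the module and function names are arbitrary:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/sched.h>	/* task_struct, current */

MODULE_LICENSE("GPL");

static int __init who_init(void)
{
	/* 'current' points to the task_struct of the process executing
	 * this code (here, the insmod process). */
	printk(KERN_INFO "Loaded by %s [PID %d], parent %s [PID %d]\n",
	       current->comm, current->pid,
	       current->parent->comm, current->parent->pid);
	return 0;
}

static void __exit who_exit(void)
{
	printk(KERN_INFO "Unloaded by %s [PID %d]\n",
	       current->comm, current->pid);
}

module_init(who_init);
module_exit(who_exit);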

By: Ankur Aggarwal
The author loves to explore open source technologies and enjoys programming. He has learned Python, PHP, C and C++. He loves rock and metal music, and plays the guitar too. He runs a blog related to Linux at www.flossstuff.wordpress.com and can be contacted via ankur.aggarwal2390@gmail.com or on Twitter as @ankurtwi.


How To

Developers

A Simple Guide to Building Your Own Linux Kernel

This article is for all the newbies who want to learn how to build their own Linux kernel. Following the steps mentioned will demonstrate how easy the task is!

Please note that the steps mentioned were done on Ubuntu 10.04, but would remain the same for any Linux distribution.

Pre-requisites

Git is the utility for version control on Linux. Install it with a simple sudo apt-get install git-core. You also need the curses library development files—install the package with sudo apt-get install libncurses5-dev. Next, verify the current Linux kernel version with the uname -a command so you can download the relevant kernel source to build using the git clone command. I've used Linux 2.6 in this example, as you can see in Figure 1.

Configuring the kernel

Now that you have the source, you need to configure it. You can use the command make menuconfig if you are experienced in what configuration parameters to set. If you are new to this, I would suggest you use the default configuration—copy the existing config file to the kernel source directory with, for example, cp /boot/config-2.6.32-38-generic .config

Now, run make oldconfig to start the configure process. Please note that here you will be prompted to answer ‘Yes’ or ‘No’. If you are unsure, just keep on hitting ‘Enter’ (that is a default ‘Yes’).

Naming the kernel

Next, it might be good to give your kernel a name. For this, open the Makefile, and edit the lines below:

VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 32
EXTRAVERSION = -dips
NAME = Building My Kernel

Now, issue the make command. This should take a couple of hours to complete.

Installing the kernel

Now, issue the following command to install all the kernel modules:



make INSTALL_MOD_STRIP=1 modules_install

Next, run a make install.

Figure 1: Download the kernel source

Updating with initramfs

Run the command:

sudo update-initramfs -c -k 2.6.32-dips+

For readers who are keen to know what initramfs is, it is a root filesystem that is loaded at an early stage of the boot process. It is the successor of the old initrd.

Verify the installation

Now, we are almost done! To verify, check your /boot directory. Are you able to find the new kernel image and the config file for your build? If yes, then congratulations! You have succeeded in building your kernel.

Modify the GRUB file and reboot

GRUB stands for Grand Unified Bootloader. It is a bootloader package from GNU. GRUB provides a user the choice to boot any one of multiple operating systems installed on a computer, or to select a specific kernel configuration available on a particular operating system's partitions. You can find the GRUB file at /boot/grub/grub.cfg, as shown in Figure 2. Issue the command sudo update-grub and your GRUB file will be modified. You should be able to see an entry for your kernel, which looks like what's shown below:

Figure 2: GRUB file

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
### END /etc/grub.d/05_debian_theme ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.32-dips+' --class ubuntu --class gnu-linux --class gnu --class os {
	recordfail
	insmod ext2
	set root='(hd0,5)'
	search --no-floppy --fs-uuid --set 0b0ebe63-24f6-45c0-a3df-5d9c447b5ad9
	linux /vmlinuz-2.6.32-dips+ root=UUID=325f1d1d-2070-45f3-94ed-2027c5ffcbf3 ro quiet splash
	initrd /initrd.img-2.6.32-dips+
}

Now, just reboot the system. Your system should now boot up with your just-built kernel. You can verify it by using the uname -a command again. That's all you need to build a kernel. Now wasn't that simple? So, have fun building your kernel!

By: Deepti Sharma The author is an open source enthusiast with almost a decade of experience in the design, development and quality validation of systems software tools (compilers/assemblers/device drivers) for embedded systems. She loves writing for LFY. She can be reached at deepti.acmet@gmail.com.



Insight

Developers

Kernel Ticks and Task Scheduling

This article explores how kernel functions and task scheduling are closely linked to time.

Time is vital for kernel programming, since many kernel functions are time-driven. Some are periodic, such as push and pull migration for load-balancing, balancing the scheduler run queues, or refreshing the screen. Their frequencies are fixed (such as 100 times per second). The kernel schedules other functions, such as delayed disk I/O, at a relative time in the future. For example, the kernel might schedule a floppy device driver to shut off after the floppy driver motor becomes inactive; this can be 50 milliseconds (say) from now, or after completion of a certain task. So the kernel horology is relative. The kernel must also manage the system uptime, and the current date and time. Events that occur periodically every 10 milliseconds are driven by the system timer. This is a programmable piece of hardware that issues an interrupt at a fixed frequency. The interrupt handler for this timer is called the timer interrupt. The hardware provides a system timer that the kernel uses to gauge the passing of time, which works off an electronic time source, such as a digital clock or the frequency of the processor. The system timer goes off (often called hitting or popping) at a pre-programmed frequency, called the tick rate. When the system timer goes off, it issues an interrupt that the kernel handles via a special interrupt handler. Because the kernel knows the pre-programmed tick rate, it knows the time between any two successive timer interrupts. This period is

called a tick. This is how the kernel keeps track of both ‘wall time’ and system uptime. ‘Wall time’, which is the actual time of day, is important to user-space applications. The kernel keeps track of it simply because the kernel controls the timer interrupt. The kernel defines the value in <asm/param.h>. For example, a microprocessor with an x86 architecture has 100 Hz, whereas one with the Intel Itanium architecture (earlier IA-64) has a 1024 Hz rate.

Timer interrupts

Interrupts are asynchronous events that are usually fired by external hardware; the CPU is interrupted in its current activity and executes special code—the ISR (Interrupt Service Routine)—to service the interrupt. Besides programming for hardware interrupts, the kernel services other interrupts. A module is expected to request an interrupt (or IRQ, for an interrupt request) channel before using it, and to release it when it's done. The following functions, declared in <linux/interrupt.h>, implement the interface:

int request_irq(
	unsigned int irq,	/* interrupt number being requested */
	void (*handler)(int, void *, struct pt_regs *),	/* function pointer to the handler */
	unsigned long flags,
	const char *dev_name,	/* the owner of the interrupt */
	void *dev_id		/* identifies which device is interrupting */
);

void free_irq(unsigned int irq, void *dev_id);

Here, request_irq returns either 0 to indicate success, or a negative error code. Every time a timer interrupt occurs, the value of an internal kernel counter is incremented. The counter is initialised to 0 at system boot, so it represents the number of clock ticks since the last boot. The counter is a 64-bit variable (even on 32-bit architectures) and is called jiffies_64.

Jiffies

The global variable, jiffies, holds the number of ticks that have occurred since the system booted. On boot, the kernel initialises the variable to zero, and it is incremented by one during each timer interrupt. Thus, because there are Hz timer interrupts in a second, there are Hz jiffies in a second. The system uptime is therefore jiffies/Hz seconds. What actually happens is slightly more complicated. The kernel initialises jiffies to a special initial value, causing the variable to overflow more often, catching bugs. When the actual value of jiffies is sought, this ‘offset’ is first subtracted. The jiffies variable is declared in <linux/jiffies.h> as extern unsigned long volatile jiffies; one should generally use <linux/sched. h>, which automatically pulls in <jiffies.h> to use the counter and the utility functions.
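A common pattern built on jiffies is a bounded poll of some hardware status. Here is a hedged sketch of the idea; device_ready() is a hypothetical helper standing in for whatever register check a real driver would make, the one-second bound is arbitrary, and process context is assumed since the code sleeps between polls:

#include <linux/jiffies.h>
#include <linux/delay.h>

/* Poll a (hypothetical) device_ready() flag for at most one second. */
static int wait_for_device(void)
{
	unsigned long timeout = jiffies + HZ;	/* HZ ticks == one second */

	while (!device_ready()) {
		if (time_after(jiffies, timeout))	/* handles jiffies wrap-around */
			return -1;			/* gave up */
		msleep(1);
	}
	return 0;
}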

Calculating system date

The current time of day (the wall time) is defined in kernel/time/timekeeping.c. The structures responsible for fetching the system data are as follows:

struct timespec {
	__kernel_time_t tv_sec;		/* seconds */
	long tv_nsec;			/* nanoseconds */
};	/* __kernel_time_t is a long type, mentioned in posix_types.h */

struct timeval {
	__kernel_time_t tv_sec;		/* seconds */
	__kernel_suseconds_t tv_usec;	/* microseconds */
};

struct timezone {
	int tz_minuteswest;	/* minutes west of Greenwich */
	int tz_dsttime;		/* type of dst correction */
};

The timeval, timespec and timezone data structures are defined in <linux/time.h>. The xtime.tv_sec value stores the number of seconds that have elapsed since January 1, 1970 (UTC). This date is called the epoch. In most UNIX systems, the current wall time is set relative to this epoch. The xtime.tv_nsec value stores the number of nanoseconds that have elapsed in the last second. The date is fetched into the structure by calling the routine getnstimeofday(); do_gettimeofday(), defined in kernel/time/timekeeping.c, wraps it as follows:

/* Returns the time of day in a timeval */
void do_gettimeofday(struct timeval *tv)
{
	struct timespec now;

	getnstimeofday(&now);	/* returns the time of day in a timespec */
	tv->tv_sec = now.tv_sec;
	tv->tv_usec = now.tv_nsec/1000;
}

Kernel code (especially a driver) often needs a way to delay execution for some time without using timers. This is usually to allow the hardware time to complete a given task. The time is typically quite short. For example, the specifications for a network card might list the time to change Ethernet modes as two microseconds. After setting the desired speed, the driver should wait at least for two microseconds before continuing.

Long and short delays

Long delays: Occasionally, a driver needs to delay execution for relatively long periods—more than one clock tick. Some solutions hog the processor while retarding real work, while other solutions do not do so, but offer no guarantee that your code will resume in exactly the required time. There are certain approaches regarding a long delay, which are listed below:

The brain-dead approach: The simplest and easiest implementation is busy waiting or busy looping. It is applied as follows:

unsigned long j = jiffies + jit_delay * HZ;

while (jiffies < j)
	/* void */;

However, this technique precludes the CPU from performing any other tasks, since the loop continues till jiffies reaches the computed value. The scheduling approach: This explicitly releases the CPU by using the schedule function declared in <linux/sched.h>:

while (time_before(jiffies, j1)) {
	schedule();
}

The timeout approach: The best way to implement a long delay is to use the kernel's intelligence. If, say, a driver uses a wait queue for some event, and simultaneously wants the event to be completed within a particular time span, then one should use wait_event_timeout, declared in <linux/wait.h> as follows:


long wait_event_timeout(wait_queue_head_t q, condition, long timeout);

Small delays: Sometimes, kernel code, like a real driver, needs to calculate very short delays in order to synchronise with hardware. Therefore, one cannot use jiffies for the delay. The kernel functions udelay and mdelay are used for short waits. Their prototypes are:

#include <linux/delay.h>
void udelay(unsigned long usecs);
void mdelay(unsigned long msecs);

The udelay function delays execution by busy looping for the specified number of microseconds. The mdelay function delays execution for the specified number of milliseconds. 1 second = 1,000 milliseconds = 1,000,000 microseconds, and udelay(150) is a delay for 150 μs. The udelay() function is implemented as a loop that knows how many iterations can be executed in a given period of time. The mdelay() function is then implemented in terms of udelay(). Because the kernel knows how many loops the processor can complete in a second, the udelay() function simply scales that value to the correct number of loop iterations for the given delay.

Task scheduling and the use of the kernel timer

In multi-tasking OSs, many tasks run at the same time. Providing the appropriate resource to the task is called task scheduling. The tool that distributes the available resources to the tasks is called a task scheduler. This is also called the process scheduler, and is the part of the kernel that decides which task to run next. It is one of the essential factors of a multi-tasking OS. One feature many drivers need is the ability to schedule the execution of some tasks at a later time without resorting to interrupts. Linux offers three different interfaces for this purpose: task queues, tasklets, and kernel timers. Task queues and tasklets provide a flexible utility for scheduling execution at a later time, and are triggered by the kernel itself. They are only used to manage hardware that cannot generate interrupts. They always run at interrupt time, and run only once, even if scheduled to run multiple times. Kernel timers are used to schedule a task to run at a specific time in the future. Moreover, they are easy to use, and also do not re-register themselves, unlike task queues that do. At times, one needs to execute operations detached from any process context, like finishing a lengthy shutdown operation. In that case, delaying the return from close() (the function for closing the file descriptor; it returns 0 on success and -1 on failure) wouldn't be fair to the application program. Using a task queue would be wasteful, because a queued task must continually re-register itself until the requisite time has passed. The kernel timers are organised as a doubly linked list. add_timer and del_timer are the functions used to add and remove a timer from the list. Once the timer expires, it is automatically removed from the list. The list being doubly linked is an advantage, since a timer can be deleted without first locating the immediately previous timer. A timer is marked by its timeout value (in jiffies) and the function that is to be called when the time value expires. The timer handler receives an argument, which is stored in the data structure, together with a pointer to the handler itself. The data structure of the timer is in <linux/timer.h>:

struct timer_list {
	struct timer_list *next;	/* holds the address of the next node */
	struct timer_list *prev;	/* holds the address of the previous node */
	unsigned long expires;		/* the timeout, in jiffies */
	unsigned long data;		/* argument to the handler */
	void (*function)(unsigned long);	/* handler of the timeout */
	volatile int running;
};

Here, expires denotes jiffies. The execution of timer->function starts when jiffies becomes equal to or greater than timer->expires. The timeout is an absolute value: the current value of jiffies plus the amount of the desired delay. The first step in creating a timer is to declare it:

struct timer_list k_timer;

Now it should be initialised:

init_timer(&k_timer);

The timer_list structure is initialised once; the function add_timer then inserts it into a sorted list, which is polled roughly 100 times per second. add_timer is declared in <linux/timer.h> and defined in kernel/timer.c. Its prototype is:

extern void add_timer(struct timer_list *timer);

Now the timer should be activated:

add_timer(&k_timer);
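Putting the above pieces together, here is a minimal sketch of a one-shot kernel timer in the older init_timer() style discussed in this article. It is my own illustration rather than code from a particular driver; k_timer, k_timer_handler and the two-second delay are arbitrary:

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list k_timer;

/* Runs in software-interrupt context when 'expires' is reached. */
static void k_timer_handler(unsigned long data)
{
	printk(KERN_INFO "timer fired, data=%lu\n", data);
}

static void start_one_shot(void)
{
	init_timer(&k_timer);
	k_timer.function = k_timer_handler;
	k_timer.data = 42;			/* passed to the handler */
	k_timer.expires = jiffies + 2 * HZ;	/* two seconds from now */
	add_timer(&k_timer);
}

/* On module unload (or when no longer needed): del_timer(&k_timer); */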

Even systems (such as the Alpha) that run with a higher clock interrupt frequency do not check the timer list more often than that—the added timer resolution would not justify the cost of the extra passes through the list.

References
[1] 'Linux Kernel Development' by Robert Love

By: Debasree Panda The author is an open source developer. His area of expertise is teradata databases.



Developers

Insight

Kernel Uevent: How Information is Passed from Kernel to User Space

This article explores the kernel uevent framework and looks at how drivers use it to send device-specific information to user space. It also covers how to receive and process kernel uevents in user space.

The Linux kernel version 2.6.10 (and later versions) introduced the uevent notification mechanism for kernel and user-space communication. These user events (uevents) generated from the kernel are used by user-space daemons to either create/remove device files, to run programs, or to load/remove a driver in user land. Inside the kernel, these uevents are linked to the kernel data structure called kobject. This data structure's life cycle (linked with a device) is what is notified as uevents to user space.

Netlink

The Linux kernel uses netlink to send kernel uevents to the user space. Netlink is a socket-like mechanism used in Linux to pass information between kernel and user processes. Netlink, similar to a generic BSD socket infrastructure, supports primitive APIs like socket(), bind(), sendmsg() and recvmsg().

Uevent actions

As discussed above, a uevent message communicates the device's or subsystem's state inside the kernel to user space. The kobject_action data structure indicates the kernel object's state, and is defined in include/linux/kobject.h as enum kobject_action:

enum kobject_action {
	KOBJ_ADD,
	KOBJ_REMOVE,
	KOBJ_CHANGE,
	KOBJ_MOVE,
	KOBJ_ONLINE,
	KOBJ_OFFLINE,
	KOBJ_MAX
};

Each uevent message sent to user space is tagged with one of these kobject actions. For example, actions like KOBJ_ADD or KOBJ_REMOVE are used to notify the addition or deletion of a kernel object—which happen when a device is added or deleted inside the kernel, using device_add or device_del. KOBJ_CHANGE is the action most used by drivers to notify changes in the device, like the configuration or state. Other actions like KOBJ_ONLINE/KOBJ_OFFLINE are generally


used to indicate when a CPU's state changed; KOBJ_MOVE is used only by network interfaces when renaming a kobject. You can find the kernel uevent framework implemented in lib/kobject_uevent.c. The uevent framework exports the APIs kobject_uevent and kobject_uevent_env, which implement the user-space communication. The difference between the two APIs is that the latter allows a driver to send extra information to user space. This extra information is termed ‘environmental data’ by the framework. Internal to the framework, the kobject_uevent API is just a call of the kobject_uevent_env API with ‘environmental data’ set to NULL. The basic message passed to user space from the kernel uevent framework is as follows:

Figure 1: A simple representation of the uevent mechanism

ACTION=
DEVPATH=
SUBSYSTEM=
SEQNUM=

The DEVPATH string holds the path of the kobject and SUBSYSTEM indicates the subsystem the uevent originated from. Any extra data other than the above four is termed environment data. Information in this environment data is specific to the subsystem or the kernel object. The environment data, which is basically an array of strings, holds state changes or other information useful to user-space code. The environment data is packed in a kobj_uevent_env data structure, defined in include/linux/kobject.h:

struct kobj_uevent_env {
	char *envp[UEVENT_NUM_ENVP];
	int envp_idx;
	char buf[UEVENT_BUFFER_SIZE];
	int buflen;
};

The uevent framework provides the add_uevent_var API to add more environment data to a uevent message.

Figure 2: The flow of uevents from kernel to user space

How it is used by a driver

Let us now see how a driver uses this framework to send information to user space, with an example—a simple scenario of an Android device connected to a host PC as a USB device. When an Android device is connected as a USB device, the Android framework notifies the user with a USB icon in the notification bar. To do this, the Android USB framework needs certain information from the kernel when the Android device is configured as a USB device. This information is passed on through uevents by the kernel USB driver. Let us see how USB-specific state information is passed on to the Android USB framework. The following code is from the android_work function of the drivers/usb/gadget/android.c file; you can browse the code at https://android.googlesource.com/. The USB gadget driver forms three different state strings with the keyword USB_STATE, in a format similar to kobj_uevent_env environment data, as shown below:

char *disconnected[2] = { "USB_STATE=DISCONNECTED", NULL };
char *connected[2] = { "USB_STATE=CONNECTED", NULL };
char *configured[2] = { "USB_STATE=CONFIGURED", NULL };
char **uevent_envp = NULL;

When an Android device is connected as a USB device, the state of the device is checked from the gadget driver's flags and appropriate environment data is assigned to the uevent_envp variable. For example, when the device is connected to the PC, USB_STATE=CONNECTED is set, and when the drivers are installed successfully and the device is functional, USB_STATE=CONFIGURED is set.

if (cdev->config)
	uevent_envp = configured;
else if (dev->connected != dev->sw_connected)
	uevent_envp = dev->connected ? connected : disconnected;

This state information is then propagated to the user space using kobject_uevent_env with the action KOBJ_CHANGE, as shown below:

if (uevent_envp) {




	kobject_uevent_env(&dev->dev->kobj, KOBJ_CHANGE, uevent_envp);
	pr_info("%s: sent uevent %s\n", __func__, uevent_envp[0]);
}
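Outside the Android gadget code, any driver that owns a struct device can emit a similar notification with its own environment strings. The following is a hedged sketch of the idea, not code from the article or from android.c; the MYDRV_STATE key, the my_dev pointer and the function name are all illustrative:

#include <linux/device.h>
#include <linux/kobject.h>

/* Send a KOBJ_CHANGE uevent carrying one extra key=value pair. */
static void mydrv_notify_state(struct device *my_dev, const char *state)
{
	char buf[32];
	char *envp[2];

	snprintf(buf, sizeof(buf), "MYDRV_STATE=%s", state);
	envp[0] = buf;
	envp[1] = NULL;	/* the array must be NULL-terminated */

	kobject_uevent_env(&my_dev->kobj, KOBJ_CHANGE, envp);
}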

In the user space, Android's UsbDeviceManager collects this USB_STATE information and propagates it to the listeners. You can check out the UsbDeviceManager.java code that handles this in the repository: http://bit.ly/R2fCLP. After that example, let us write a simple C program to see how to receive uevents in user space.

Receiving uevents in user space

Since the kernel uevent uses a netlink socket, it requires a simple socket program in user space to receive uevent messages. Figure 2 illustrates a simple uevent flow through kernel space to the user space. The best example of how to receive and decode uevents in user space is available as open source (uevent_listen.c) and we will use it to explain things. The first step to receiving a uevent in user space is to connect to or open the netlink socket. In this case, it is NETLINK_KOBJECT_UEVENT. This allows us to receive the kernel messages.

sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
if (sock == -1) {
	printf("error getting socket, exit\n");
	exit(1);
}

retval = bind(sock, (struct sockaddr *) &snl, sizeof(struct sockaddr_nl));
if (retval < 0) {
	printf("bind failed, exit\n");
	goto exit;
}
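The snl address passed to bind() above is not shown in these excerpts; it is a struct sockaddr_nl that would typically be prepared along the following lines. This is a sketch of the usual setup rather than a verbatim copy from uevent_listen.c (it assumes <linux/netlink.h>, <sys/socket.h>, <string.h> and <unistd.h> are included):

struct sockaddr_nl snl;

memset(&snl, 0, sizeof(struct sockaddr_nl));
snl.nl_family = AF_NETLINK;
snl.nl_pid = getpid();	/* unique netlink port id for this process */
snl.nl_groups = 1;	/* multicast group used for kernel uevents */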

After successfully binding with the NETLINK_KOBJECT_UEVENT socket, the program should wait, listening to the uevent socket for messages from the kernel. The maximum buffer size of the uevent message is 2048, as defined in include/linux/kobject.h, and the receive buffer will be of the same size.

buflen = recv(sock, &buffer, sizeof(buffer), 0);
if (buflen < 0) {
	printf("error receiving message\n");
	continue;
}

The received message holds multiple string data created by the kernel uevent framework, and parsing can be based on the length returned by the socket API.

/* hotplug events have the environment attached - reconstruct envp[] */
for (i = 0; (bufpos < (size_t)buflen) && (i < HOTPLUG_NUM_ENVP-1); i++) {
	int keylen;
	char *key;

	key = &buffer[bufpos];
	keylen = strlen(key);
	envp[i] = key;
	bufpos += keylen + 1;
}
envp[i] = NULL;

printf("[%i] received '%s' from '%s'\n", time(NULL), action, devpath);

/* print payload environment */
for (i = 0; envp[i] != NULL; i++)
	printf("%s\n", envp[i]);

Having seen important blocks of uevent_listen.c, let us build and run the program on the terminal. The following snip shows one of the uevents received when you connect a USB pen drive to your PC while running uevent_listen.out:

[1345998474] received 'add' from '/devices/pci0000:00/0000:00:1d.7/usb2/2-1'
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:1d.7/usb2/2-1
SUBSYSTEM=usb
MAJOR=189
MINOR=149
DEVNAME=bus/usb/002/022
DEVTYPE=usb_device
PRODUCT=cf2/6230/100
TYPE=0/0/0
BUSNUM=002
DEVNUM=022
SEQNUM=2548

To understand how the above message was constructed, you can refer to the usb_dev_uevent (drivers/usb/core/usb.c) and usb_uevent (drivers/usb/core/driver.c) functions. This article provided a very brief introduction to the Linux uevent notification mechanism, followed by a simple program that receives uevents in user space. There are many more interesting features you can explore, like uevent filters, uevent files for cold-plugged devices, udev rules, etc, to improve your understanding of the uevent mechanism.

By: Rajaram Regupathy
The author works as a Linux specialist with ST Ericsson India Pvt Ltd, and is the author of the book ‘Bootstrap Yourself with Linux-USB Stack’.


Guest Column Joy of Programming

S.G.Ganesh

Auto-generating Code Using tools to auto-generate code from high-level models or specifications looks like a cool idea. Though generating code instead of ‘hand-writing’ it from scratch has many advantages, one needs to also be aware of the disadvantages of this approach. This article takes a closer look at auto-generating code.

When I first got exposed to auto-code generators, I was quite impressed with what they could achieve. For example, YACC (Yet Another Compiler Compiler) is a compiler generator. To explain what a ‘compiler generator’ means in layman terms, it’s when you provide the specification (i.e., language grammar) for parsing interspersed with hand-coded actions, and the YACC tool generates code corresponding to that grammar. YACC is a ‘compiler compiler’; in other words, it’s a compiler that takes the grammar and generates as output the code for a compiler that recognises that grammar. So, instead of writing a compiler (the parser component of a compiler, to be specific), which can be a really complex task, you can specify the grammar in the format required by YACC, and you get parser code ‘auto-generated’. You can use that parser to parse the input source files. Similarly, you can generate a ‘lexical analyser’ using the Lex tool. A lexical analyser takes source code, which is a stream of characters, and generates a stream of tokens (for this reason, a lexical analyser is also known as a tokeniser); these tokens are then fed to the parser. Coming back to Lex, you can specify what tokens you need to generate in the format required by Lex, and the Lex tool will generate a lexical analyser code for you. Looks cool, doesn’t it? Tools like lex and yacc can be considered to be ‘domain-specific languages’ (DSLs). These are designed for a specific domain, and are specialised languages for performing tasks in those domains. You can contrast DSLs with general-purpose languages such as C++ and Java, which are pretty much domain-neutral, and you can use them for a vast range of programming tasks. Not all DSLs generate compilable code. For example, SQL (Structured Query Language) is a DSL and is meant for interacting with a database. These DSLs can, in turn, be considered as Application Oriented Languages (AOLs). AOLs are specialised languages that are designed to program for a specific application or problem domain. AOLs allow us to worry about the problem domain, and help us focus on specifying the problem instead of the implementation details. AOLs are a good target for

auto-generating code. Well-known examples are code generators from UML diagrams—where you specify your high-level design in UML diagramming tools (such as ArgoUML, Enterprise Architect and Rational Rose), and you can get code automatically generated from those diagrams. Other examples are MatLab and Simulink, used for mathematical analysis and modelling. Another well-known kind of auto-code generator includes GUI builders, which most of you may already be familiar with. For example, you can use tools such as NetBeans (for Java) or Visual Studio (for C#/VB) for UI design, and these tools will automatically generate compilable source code. Of course, you need to add things such as event-handling code and business-logic code to the generated code, but they already allow you to focus on the UI design by just dragging and dropping UI controls instead of writing all the template code in a text editor. At some point you may have used ‘wizards’ that generate code for you for your specific needs. These are quite popular, especially for simplifying programming for repetitive tasks. They are widely used, especially in enterprise projects, since they help software companies make their programmers more productive. Also, product companies that sell software packages targeting programmers, attract potential developers by showing how software developers can reduce their programming efforts if they buy their software package. As a programmer, you can now simply click or drag and drop to create programs instead of typing at the keyboard for long hours. Looks quite attractive, doesn’t it? So, what are the advantages of ‘generating code’? 1. Improved productivity. Automated code generators allow you to work at the level of specifications or models, and communicate in the terms used in the application domain. With general-purpose languages, you need to work in terms of low-level constructs such as for and while loops, which is quite tedious and time consuming. 2. Reduced complexity. When programmers work at higher levels of abstraction, they can perform complex tasks with ease, compared to low-level constructs in general-purpose languages.




3. Less buggy code. When you hand-code something complex, it’s likely to have numerous bugs. Worse, with high complexity, you probably won’t know what bugs there are in the hand-written code. The best way to overcome this problem is not to write code at all ! The code is auto-generated from the specification, so if the specification is correct, you can be certain that the generated code is correct. These advantages are well-known. That’s why such auto-code generators appear to be a great idea. But that’s not the end of the story. If you have considerable working experience with using such auto-code generators, you are probably aware of the numerous problems and disadvantages of using them. We’ll discuss them in greater detail now. Developers understand the code they have written, and will be able to understand code written by fellow programmers who are human beings. The generated code is auto-generated by a program, is often quite complex, and often unreadable—and it’s not an exaggeration if I say it is sometimes a nightmare to read auto-generated code! So, maintainability and understanding suffers with autogenerated code. The auto-generated code often needs to be customised to make it usable for your requirements or to get it working. For example, if you create screens using GUI builders, the generated code will contain empty event handlers with TODO entries for you to enter code for specific events such as mouse clicks and mouse moves. You’ll have to figure out what these methods are, and understand what to write in there. They also contain hook methods for inserting your own logic or adding business logic. Often, developers add the bare minimum code necessary to get the program working. If the code segments with default behaviour are not exercised during the testing, the tests will pass and the software will get released. But bugs will be found in actual usage, and get filed as such. Forgetting to customise code is the cause of numerous bugs related to auto-generated code. The major difficulty with auto-generated code is that it’s difficult to integrate programmer code into the generated code. Consider the following scenario. You create large UML diagrams in a tool that can generate Java code. In the generated code, you have added lots of business logic, and made modifications to the parts of the generated code. For the next release of the software, you get new requirements and need to change the UML diagrams. However, you cannot make changes to the UML diagrams and generate code because the existing modified code is not in sync with the newly generated code! This occurs, assuming that the UML design tool does not support syncing the changes

back to the diagrams from the code, which is the case with most of the tools available today. Hence, you need to make all the changes in the actual code from the earlier version, without using the UML design tool, which defeats the very purpose of auto-generating code—this is the most significant problem with it. One practical way to integrate generated code with the programmer’s code is to maintain strict separation between them by keeping them in physically separate files. In this approach, programmers don’t directly make changes to the file containing generated code. Rather, they make changes in a separate file, for example, by overriding the hook methods. So, an argument to this approach is that if the higher-level specification or model changes, then the code can be regenerated, and that will not impact the programmer-written code. Yes, this solves some of the problems with mixing generated code and programmer written code. However, it still does not address the problem of when the generated code needs to be modified, particularly in the context of supporting Non-Functional Requirements (NFRs). Consider, for example, that you want to improve the performance of your software. Generated code, because it is generic and has numerous hook methods, is often inefficient. Similarly, consider that you want to make your code thread-safe, and assume that the auto-generated code is not. In both these cases, you cannot directly make modifications to the generated code, because if you regenerate the code from the generator tool, those changes will be lost. So, separating generated code from programmers’ code solves some problems, but not all. In practice, the evolution of the auto-generator tool can also cause problems. Assume that you’re working on a maintenance project that made use of auto-generated code. Now you want to regenerate the code with new modifications, but you have access only to the new version of the auto-generator tool. This new version generates code that is incompatible with the code generated by the older version of the tool; so if you use the code generated from the new version of the tool, your existing code (written to be compatible with the code in the older format) will not work. You’ll then be forced to modify the generated code by hand, and not use the newer version of the tool. There are other problems as well that I don’t delve into too deeply, but I leave it to you to think about. Assuming that you use auto generators in your project, and face these problems, how will you handle the following situations: There are many cases where the generated code has bugs, which means that the generator tool generates a buggy program (this is not an uncommon problem). You’ve coding guidelines for your project that need to



be strictly adhered to, and the auto-generated code does not conform to your coding guidelines. Yours is an embedded systems project, which does not allow the use of recursion or allocation of dynamic memory, but the auto-generated code makes liberal use of them. Yours is a complex project, and you make use of auto-generators to specify the models. But the autogenerator does not support modelling a specific feature that is required for your project. If you discover some of the limitations of the autogenerators late in the software development life-cycle, it can cause serious problems. Now, after looking at the disadvantages and potential problems with auto-generating code, let’s reflect on its advantages. Are there any? For example, I mentioned improved productivity as an advantage—is it really so? If you think about it, things like GUI builders help reduce the tedium involved in writing UI code. But we still take care of the harder part, which is design. For example, we need to separate the UI logic from business logic, and again maintain clean separation from the underlying database (or any data source). Design is hard work and takes time. In a large software development project, the time required for creating UI is negligible— it's the design during development, and re-factoring during maintenance that takes most of the time. So, the extent of ‘improved productivity’ may not be as significant as you would expect. As you can see, here is a neutral view on auto-generating code, which boils down to a few fundamentally simple ideas: Auto-generating code helps improve productivity by simplifying the implementation of functionality, but it does not deal with the problem of meeting non-functional requirements. Auto-generating code, as a computer science problem, works well; but from a software

engineering perspective of not just developing, but also maintaining the software, it has its limitations. Though it solves many problems, it also introduces new problems to solve. Auto-generators help programmers like us to work on high-level abstractions; but any auto-generator tool will have specific limitations, and working around them will expose you to implementation details and may force you to work on the actual generated code. So, if you’re given a choice to either auto-generate the code or write it yourself from scratch, take a holistic look at auto-generating code, instead of blindly autogenerating the code. Before I end this column, one final thought. Autogeneration is not just for source code, it could be used for generating test code as well. For example, with ModelBased Testing (MBT), you can specify the conditions to check in a diagrammatic form (i.e., a model), and the MBT tools would generate test cases for those conditions. It's an effective way to reduce unit testing effort and create effective test cases. Why? During testing, you don’t systematically cover all the possibilities—and even for a few possibilities, the number of tests to be written to cover all those could be huge. But by auto-generating test code from models, this problem can be solved. However, this isn’t used widely because it takes time to learn how to build models to generate test cases. Also, often the models generate test cases that will check for conditions that cannot occur in practice. So it takes time to find out which tests are legitimate and which tests are invalid. Even with these disadvantages, MBT is a good approach, and we should try to use it.

By: S G Ganesh The author works for Siemens (Corporate Research & Technologies), Bengaluru. You can reach him at sgganesh at gmail dot com.



Developers

Let's Try

The Semester Project-VI

File System on Block Device This article, which is part of the series on Linux device drivers, enhances the previously written bare-bones file system module, to connect with a real hardware partition.

After writing the bare-bones file system, the first thing Pugs figured out was how to read from the underlying block device. The following is a typical way of doing it:

struct buffer_head *bh;

bh = sb_bread(sb, block); /* sb is the struct super_block pointer */
// bh->b_data contains the data
// Once done, bh should be released using:
brelse(bh);

To do the above, and various other real SFS (Simula File System) operations, Pugs felt he needed his own handle as a key parameter, which he added as follows (in the previous real_sfs_ds.h):

typedef struct sfs_info
{
	struct super_block *vfs_sb; /* Super block structure from VFS for this fs */
	sfs_super_block_t sb; /* Our fs super block */
	byte1_t *used_blocks; /* Used blocks tracker */
	byte1_t block[SIMULA_FS_BLOCK_SIZE]; /* A block-size scratch space */
} sfs_info_t;

The main idea behind this was to put all required static global variables in a single structure, and point to that by the private data pointer of the file system, which is the s_fs_info pointer in the struct super_block structure. With that, the key changes in the fill_super_block (in the previous real_sfs_bb.c file) become:
• Allocate the structure for the handle, using kzalloc()
• Initialise the structure for the handle (through init_browsing())
• Read the physical super block, verify and translate the information from it into the VFS super block (through init_browsing())
• Point to it by s_fs_info (through init_browsing())
• Update the VFS super_block based on these changes
Accordingly, the error-handling code would need to do the shut_browsing(info) and kfree(info). And that would additionally need to go along with the function corresponding to the kill_sb function pointer, defined in the previous real_sfs_bb.c by kill_block_super, called during umount. Here are the various code pieces:

static int sfs_fill_super(struct super_block *sb, void *data, int silent)
{
	sfs_info_t *info;

	if (!(info = (sfs_info_t *)(kzalloc(sizeof(sfs_info_t), GFP_KERNEL))))
		return -ENOMEM;
	info->vfs_sb = sb;
	if (init_browsing(info) < 0)
	{
		kfree(info);
		return -EIO;
	}
	/* Updating the VFS super_block */
	sb->s_magic = info->sb.type;
	sb->s_blocksize = info->sb.block_size;
	sb->s_blocksize_bits = get_bit_pos(info->sb.block_size);

	return 0;
}

static void sfs_kill_sb(struct super_block *sb)
{
	sfs_info_t *info = (sfs_info_t *)(sb->s_fs_info);

	if (info)
	{
		shut_browsing(info);
		kfree(info);
	}
	kill_block_super(sb);
}

Note that kzalloc(), in contrast to kmalloc(), also zeroes out the allocated location. The get_bit_pos() is Pugs' simple way to compute logarithm base 2, as follows:

static int get_bit_pos(unsigned int val)
{
	int i;

	for (i = 0; val; i++)
	{
		val >>= 1;
	}
	return (i - 1);
}

And init_browsing() and shut_browsing() are basically the transformations of the earlier user-space functions of browse_real_sfs.c into kernel-space code real_sfs_ops.c, prototyped in real_sfs_ops.h. This basically involves the following transformations:
• 'int sfs_handle' into 'sfs_info_t *info'
• lseek() and read() into a read from the block device using sb_bread()
• calloc() into vmalloc(), followed by appropriate initialisation with zeros
• free() into vfree()
Here's the transformed init_browsing() and shut_browsing() in real_sfs_ops.c:

#include <linux/fs.h> /* For struct super_block */
#include <linux/errno.h> /* For error codes */
#include <linux/vmalloc.h> /* For vmalloc, ... */

#include "real_sfs_ds.h"
#include "real_sfs_ops.h"

int init_browsing(sfs_info_t *info)
{
	byte1_t *used_blocks;
	int i, j;
	sfs_file_entry_t fe;

	if (read_sb_from_real_sfs(info, &info->sb) < 0)
	{
		return -EIO;
	}
	if (info->sb.type != SIMULA_FS_TYPE)
	{
		printk(KERN_ERR "Invalid SFS detected. Giving up.\n");
		return -EINVAL;
	}

	/* Mark used blocks */
	used_blocks = (byte1_t *)(vmalloc(info->sb.partition_size));
	if (!used_blocks)
	{
		return -ENOMEM;
	}
	for (i = 0; i < info->sb.data_block_start; i++)
	{
		used_blocks[i] = 1;
	}
	for (; i < info->sb.partition_size; i++)
	{
		used_blocks[i] = 0;
	}
	for (i = 0; i < info->sb.entry_count; i++)
	{
		if (read_from_real_sfs(info, info->sb.entry_table_block_start,
			i * sizeof(sfs_file_entry_t), &fe, sizeof(sfs_file_entry_t)) < 0)
		{
			vfree(used_blocks);
			return -EIO;
		}
		if (!fe.name[0]) continue;
		for (j = 0; j < SIMULA_FS_DATA_BLOCK_CNT; j++)
		{
			if (fe.blocks[j] == 0) break;
			used_blocks[fe.blocks[j]] = 1;
		}
	}

	info->used_blocks = used_blocks;
	info->vfs_sb->s_fs_info = info;
	return 0;
}

void shut_browsing(sfs_info_t *info)
{
	if (info->used_blocks)
		vfree(info->used_blocks);
}

Similarly, all other functions in browse_real_sfs.c would also have to be transformed, one by one. Also, note that the read from the underlying block device is captured by two functions, namely read_sb_from_real_sfs() and read_from_real_sfs(), which are coded as follows:

#include <linux/buffer_head.h> /* struct buffer_head, sb_bread, ... */
#include <linux/string.h> /* For memcpy */

#include "real_sfs_ds.h"

static int read_sb_from_real_sfs(sfs_info_t *info, sfs_super_block_t *sb)
{
	struct buffer_head *bh;

	if (!(bh = sb_bread(info->vfs_sb, 0 /* Super block is the 0th block */)))
	{
		return -EIO;
	}
	memcpy(sb, bh->b_data, SIMULA_FS_BLOCK_SIZE);
	brelse(bh);
	return 0;
}

static int read_from_real_sfs(sfs_info_t *info, byte4_t block, byte4_t offset,
	void *buf, byte4_t len)
{
	byte4_t block_size = info->sb.block_size;
	struct buffer_head *bh;

	if (offset >= block_size)
	{
		block += (offset / block_size);
		offset %= block_size;
	}
	if (offset + len > block_size) // Should never happen
	{
		return -EINVAL;
	}
	if (!(bh = sb_bread(info->vfs_sb, block)))
	{
		return -EIO;
	}
	memcpy(buf, bh->b_data + offset, len);
	brelse(bh);
	return 0;
}

All the above code pieces put together as real_sfs_minimal.c (based on the file real_sfs_bb.c created earlier), real_sfs_ops.c, real_sfs_ds.h (based on the same file created earlier), real_sfs_ops.h, and a supporting Makefile, along with the previously created format_real_sfs.c application, are available from http://www.linuxforu.com/article source_code/oct12/file system block drive.zip

Real SFS on block device

Once compiled using make, Pugs didn't expect the real_sfs_first.ko driver to be way different from the previous real_sfs_bb.ko driver, but hoped it would now read and verify the underlying partition. For that, he first tried mounting the usual partition of a pen drive, to get an 'Invalid SFS detected' message in dmesg; and then tried again after formatting it. Note that the same error of 'Not a directory', etc, as in the previous article, still exists, as this driver is still very similar to the previous bare-bones driver. The core functionality is yet to be implemented; it's just that it is now on a real block device partition. Figure 1 shows the exact commands for all these steps. Note that the ./format_real_sfs and mount commands may take a lot of time (maybe minutes), depending on the partition size—so preferably use a partition that's less than, say, 1 MB.

Figure 1: Connecting the SFS module with the pen drive partition

With this important step of getting the file system module interacting with the underlying block device, the last step for Pugs would be to do the other transformations from browse_real_sfs.c and, accordingly, use them in the SFS module.

By: Anil Kumar Pugalia
The author is a freelance trainer in Linux internals, Linux device drivers, embedded Linux and related topics. Prior to this, he was at Intel and Nvidia. He has been working with Linux since 1994. A gold medallist from the Indian Institute of Science, Linux and knowledge sharing are two of his many passions. Creating and playing with open source hardware is one of his hobbies. Learn more about him at http://profession.sarika-pugs.com/, including information about his open source hardware freak-outs, and his various Linux related training sessions and workshops. He can be reached at email@sarika-pugs.com.



Insight

Developers

Git Version Control Explained

Advanced Options The article published in the September issue covered the fundamentals of Git, including the basic data structures and commands to get started with a local repository and work with a remote repository. This article will focus on the advanced features and commands available, using an example to demonstrate how to collaboratively integrate with GitHub for coding.

Let's take a look at the advanced features and commands available in Git.

Merging branches

In Git, you create short-lived branches to develop some feature. Once the feature is complete, you merge the code to the main branch, and delete the feature branch or topic-branch. To merge a branch to the current branch, the command git merge branch_name is used.
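For instance, a minimal sketch of that short-lived topic-branch flow might look like the following (the branch name feature-x here is just an illustrative placeholder):

$ git checkout -b feature-x    # create and switch to the topic branch
# ...develop and commit the feature here...
$ git checkout master          # go back to the main branch
$ git merge feature-x          # merge the topic branch into master
$ git branch -d feature-x      # delete the topic branch once it is merged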

Pulling changes from a remote branch

When you want to sync changes from a remote repository, use the git pull command—git pull branch_name, for example; though, if you are pulling from the origin repo, just git pull is enough. This will fetch changes from the remote repository and will try to merge them to the current local repository. Thus,

git pull is effectively git fetch followed by git merge. If multiple people are committing to a repository, git pull is not recommended—you should use git fetch + rebase (see the next section).

Rebasing changes

The merge command tries to put the commits from the branch to be merged on top of the HEAD of the current local branch checkout. The git rebase branch_name command is a type of merge; it tries to find out the common ancestor between the current local branch and the branch you are trying to merge. It then puts the commits from the branch you are trying to merge to the current local branch by modifying the order of commits in the current local branch. For example, if the current local branch has the commits A-B-C-D, and the merge branch has the commits A-B-X-Y, then the merge command will try to turn the local branch into something like A-B-C-D-X-Y. The rebase command, however, tries to formulate the repo as A-B-X-Y-C-D. october 2012 | 81
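As a rough sketch of the difference (the branch name 'other' is hypothetical; both commands are run from the local branch described above):

$ git merge other     # history becomes A-B-C-D plus X-Y, joined by a merge commit
$ git rebase other    # C and D are replayed (as new commits) on top of X-Y, giving A-B-X-Y-C-D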



When multiple developers work on a single remote repository, you cannot modify the order of commits in the remote repository. Hence, you should try to formulate your local commits on top of the remote repository commits. So use rebase to put your local commits on top of remote repository commits and push the changes. To add your local commits on top of remote repo commits, use the following command: $ git fetch $ git rebase -i origin/master

This command will interactively modify the local commits (master branch) on top of commits from the origin of the remote repo’s master branch.

Tracking the remote branch on a local branch It is possible to create a local branch that tracks a remote branch with the following command:

git checkout -b localbranch remote_repo/newbranch

Fetching changes

The git fetch command is used to download commits from remote repositories, but it does not attempt to merge the code. An optional parameter is the repository name, as in git fetch repo_name.

Stashing changes

Stash is a very interesting feature. When you make local changes, you may need to switch to some other branch to check something and come back, but may not want to lose your local changes that are not yet committed. The git stash command preserves the current repository state for the current branch (with every local change) in its memory. The Git stash memory is a stack -- a LIFO list. To get the stashed state back, you can first use git stash list to view the contents of the stash memory; to apply the last stashed item back, use git stash apply.

Displaying objects

The git show command is used to display any object in the Git repo—commits, blobs, the tree, etc.

Tagging

Marking release versions of code helps you track code with respect to features or based on different criteria. Git has the built-in capability to tag different commits with tag descriptions. To list available tags, use git tag -l. To tag the current HEAD, use the following command:

$ git tag -a 'tagname' -m 'tag message' HEAD

To tag a particular commit, use COMMIT_ID instead of HEAD. To display details about a tag, just use git show tagname. To push tags to a remote repo, use git push --tags. To delete a tag from local and remote repos, use the commands given below:

$ git tag -d tag_name // delete local tag
$ git push origin :tag_name // delete tag from remote repository

Creating and using patches

Commits in Git can be represented as a series of diffs or patches applied one after the other. You can create patches from commits, email them, and the recipients can apply them to their repository. To create a patch from COMMIT_ID to the current HEAD, use:

$ git format-patch COMMIT_ID

This will create a series of .patch files. To apply and create commits on a repo from these .patch files, use git am patch_file.patch. To apply the patch's changes to only modify files locally, but not create the commits, use git apply patch_file.patch.

Cherry-picking changes

Using Git, it is possible to pick an individual commit from any branch in the current repo, and try to apply it on top of the current branch HEAD. This feature is known as cherry-picking. To cherry-pick a commit, use git cherry-pick COMMIT_ID.

Creating archives from the repository

It is possible to create a source-code archive from the Git repository. To create file.tar with all the latest code in the HEAD, run git archive -o file.tar HEAD.

Cleaning a repository

While working with code, you create a lot of temporary files in the working directory. At some point, you may need to clean out all the files except those tracked by the Git repository. For this, use the git clean command.

Migrating an SVN repository to Git

Migrating all code from a Subversion repository without losing any history is super easy with the following steps. First, create a users.txt file in the following format, listing the Subversion users and Git usernames:

svn_user1 = Git User1 <email>
svn_user2 = Git User2 <email>

Next, initialise the Git repo and do a fetch, as shown below:

$ git svn init svn_repo_url
$ git config svn.authorsfile users.txt
$ git svn fetch


You just successfully migrated all your code to a Git repository. You can now add a remote Git repo, and sync your code to the remote repository as follows:

$ git remote add origin git@github.com:user/my_remote_repo.git

Grepping through files

Searching through source code is a big part of a programmer's life, and grep is used extensively for this. Git has an inbuilt grep command to search through files tracked by the repository—git grep text.

Git describe

The git describe command describes the current commit based on the last tagged version. Try git describe and you may get something like:

1.0-62-g4e975db

This auto description can be interpreted as follows:
• 1.0 – The recent tag
• 62 – The HEAD is the 62nd commit after the tagged commit 1.0
• g4e975db – The first eight characters of the HEAD commit ID.

Integrating with Github

Github is a great Git repository hosting website, which has lots of facilities to collaborate with many developers on a project, including features that help to code, integrate, merge and perform code reviews. Let us go through an example Git workflow along with GitHub integration.

Example workflow

The following is the most common Git workflow that everyone uses daily.

$ mkdir myproject # Decided to start a new project
$ git init # Add the project as a git repo
# Created a few files: README, main.c. Now I want to add those files to the repo
$ git add README main.c
$ git commit -m "Initial commit"
$ git log # View the log for the commit
# Later I want to change the commit message to a more appropriate one
$ git commit --amend
$ git diff # I modified main.c. I would like to see the diff
# Now I would like to add the changes into two commits
$ git add -p main.c # Interactively select a few code chunks and add them
# Commit the selected changes
$ git commit # vim opens up and we add a commit message such as:

Fix bug-0234 – limit the size of the array
<Blank line>
Description about bug-0234 and fix

# Add the rest of the changes to another commit
$ git add main.c
$ git commit
$ git log
# Now to sync my repo to a remote (GitHub) repository I created:
# Add a remote repo to the current repo
$ git remote add origin https://github.com/t3rm1n4l/projectname.git
$ git push origin master # Push the master branch (default) to the remote repo
$ git branch -a # List branches

Now, if you want to work on a feature that takes a lot of time, while some other development goes on in parallel, create a branch for the feature development as shown below:

$ git branch feature-x
$ git checkout feature-x

To add some changes related to feature-x (such as adding a TODO.txt):

$ git add TODO.txt
$ git commit -m "Added TODO for feature-x"

To go back to the master branch and make a few commits (like adding and changing a few files), issue the following commands:

$ git checkout master
$ git add core.c
$ git commit -m "Move core functions to core.c"
$ git add main.c
$ git commit -m "Refactor main.c"

If you want to work on feature-x again, do a git checkout feature-x, then add a bunch of files to build the feature, and then use the following code:

$ git add feature.c main.c core.c
$ git commit (message as follows)

Add feature-x – logger module for xxx
<Blank line>
More description about the feature

Switch back to the master branch to make some more changes:

$ git checkout master
$ git add README (commit some changes from README)
$ git commit -m "Improve README"

If you then want to merge the feature-x I developed into



the main branch, use git merge feature-x. Some merge conflicts may occur; fix them by editing the listed files, and commit them as follows:

$ git add core.c
$ git commit -m "Merge feature-x into master"

Now, to sync the code with the remote repository, use git push origin master. Then, to delete the feature branch that is now merged into main, use git branch -D feature-x. By this time, another developer may also have joined this project and committed to the remote Git repository; so to sync the remote repo back to your repo, in order to get those changes applied to your local repo, use git pull. After making some commits in the local branch, you may find that git push origin master fails, because the other developer added a few extra commits to the remote repo. So, to rebase your code on top of the latest commits from the remote repo, use the following commands:

$ git fetch
$ git rebase origin/master


This lists some conflicts. Resolve those conflicts by editing the listed files manually, and continue the rebase operation with git rebase --continue. After a successful rebase, try to push to the remote repo again, by using git push origin master once again, which will now succeed. In the meantime, if you have created another branch, branch-x, which you may want to push to the remote repo, use git push origin branch-x. Git is a unique and amazing version control system. In this article, I have gone through most of the essential commands. Using Git for hobby projects is the best method to master good practices. You can learn more of Git by joining Github.com and exploring it through a social coding experience. You can use the reference learn.github.com. Happy hacking, till we meet again.

By: Sarath Lakshman
The author is a Kerala-based hacktivist of Free and Open Source Software. He loves working on the GNU/Linux environment and contributes to the Pardus Linux distro project. He has recently authored the Linux Shell Scripting Cookbook, which gives insights on shell scripting through 119 recipes. He can be reached via his website http://www.sarathlakshman.com.


Overview

cloud messaging

Developers

Android Push Notifications with GCM

This article explores the use of the Google Cloud Messaging service to send push notifications to applications running on Android phones.

Mobile applications are all about delivering instant information on the move, but who would like to have an application where you keep pressing the Refresh button to fetch the latest updates? An auto-refresh after every minute to check for updates sounds like a solution, but that would involve some major challenges. A mobile device has limited resources in terms of battery life and data transfer limits—not to mention the server load that millions of devices would generate by contacting it every minute. Wouldn't it be great if the server could automatically tell all the devices that there is new information available for them, as soon as the data arrives? Here, push notifications come to the rescue! Most major mobile OSs offer pretty similar push notification functionality to developers. Apple Push Notifications Service (APNS) for the iPhone, Google Cloud Messaging (previously C2DM) for Android, and Microsoft Push Notifications (MPNS) are the popular services. A typical push notification system follows the journey depicted in Figure 1. On mobile app installation, the device connects to the push notifications' servers (owned by Google, Apple or Microsoft) and requests a unique device ID. This is then sent to your application server (see Figure 2) and is used for sending notifications to this device. To send a message, your application server sends the notification content (along with the device ID to which the message is to be sent) to the push server, which takes the responsibility of sending this message to the device, and returns a success/error response to the application server. The Google Cloud Messaging service was launched at Google's annual I/O event in June 2012, and it replaced C2DM (cloud to device messaging), which was the beta version. With GCM, the limitation on the number of messages per day was removed, and a multi-cast feature was introduced. It is possible to implement the server side and client side of GCM by making individual service requests, but Google has made things simpler by providing two easy-to-use libraries. To get started, download the GCM library using the Android SDK, and get the jar files from 'ANDROID_HOME\extras\google\gcm'. First, you need to register an application with Google. Go to https://code.google.com/apis/console and create a new project. Navigate to the Services tab and enable the Google Cloud Messaging service for this project. Navigate to the API Access tab and create a new server key, which would be used on the server end to send messages. Also note down the project ID from the URL; this would be used in the Android application to get the messages. Next, you need to develop an Android application.

Note: You can download the sample server side code and Android application from http://goo.gl/9SDB4.

Create a new Android application, and add gcm.jar from the Android SDK to the CLASSPATH. Modify the manifest file to have the following:
• Permissions for Internet, the ability to access accounts and receive GCM messages, as well as a wake lock
• A service
• A broadcast receiver
• A custom permission to prevent other applications from receiving the notifications
• SDK version 8+
Given below is the manifest for my application:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="in.amolgupta.android.gcm"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk android:minSdkVersion="8" />

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.GET_ACCOUNTS" />
    <uses-permission android:name="android.permission.WAKE_LOCK" />

    <permission
        android:name="in.amolgupta.android.gcm.permission.C2D_MESSAGE"
        android:protectionLevel="signature" />

    <uses-permission android:name="com.google.android.c2dm.permission.RECEIVE" />
    <uses-permission android:name="in.amolgupta.android.gcm.permission.C2D_MESSAGE" />
    <uses-permission android:name="android.permission.USE_CREDENTIALS" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >
        <activity
            android:name=".SamplePushActivity"

            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <receiver
            android:name="com.google.android.gcm.GCMBroadcastReceiver"
            android:permission="com.google.android.c2dm.permission.SEND" >
            <intent-filter>
                <action android:name="com.google.android.c2dm.intent.RECEIVE" />
                <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
                <category android:name="in.amolgupta.android.gcm" />
            </intent-filter>
        </receiver>

        <service android:name=".GCMIntentService" />

        <activity
            android:name=".NoteList"
            android:label="@string/title_activity_note_list" >
        </activity>
    </application>
</manifest>

Write the service that extends GCMBaseIntentService and implements the functions onRegistered(), onUnregistered(), onMessage(), onError(), and onRecoverableError(). Next, you need to write the activity for this application. In the activity, you can call the functions from the jar file to check if the device is GCM enabled or not, and also to check if your manifest file has all the required configurations:

GCMRegistrar.checkDevice(this);
GCMRegistrar.checkManifest(this);

To register the device, call the function GCMRegistrar.register(this, SENDER_ID); here, SENDER_ID is the 12-digit project ID from the API console. Running this application would register it on the Google server, and return a device ID, which can be sent to the server using a Web service. For simplicity, here let us just copy it to the server-side code.

Last, you need to write the server-side code for the system. To send the push notifications, a Google Web service is called, which can be done with any programming language. Using the gcm-server.jar from the Android SDK, the implementation becomes very simple, and can be integrated into any Java-based system such as your Web services, desktop clients, server-side schedulers, etc. The following lines of code can be used once the JAR has been added to the class path:

Sender sender = new Sender("<Server key>");
ArrayList<String> devicesList = new ArrayList<String>();
// add the device ids from the Android devices
devicesList.add("<android device id>");
Message message = new Message.Builder().collapseKey("1")
        .timeToLive(3).delayWhileIdle(true)
        .addData("message", "message text!!")
        .build();
MulticastResult result = sender.send(message, devicesList, 1);

Executing this code would push a notification to the Android device. By implementing proper handling of the intents, you can show this message in the notification bar as an alert, or can initiate any other action in the application. There is a limit on the amount of data that can be sent via push notifications; therefore, it is recommended that instead of sending the complete message content, you should only notify the device that there is a new message available, and let it retrieve the actual message with another service call.

Figure 1: The typical push notification system
Figure 2: Registration

By: Amol Gupta
The author works for Infosys Mobility, and has a keen interest in mobile and cloud computing. He likes to explore the various possibilities in the two fields, and how they complement each other. He can be reached at amolgupta1989@gmail.com and http://twitter.com/amolgupta.


Let's Try

Developers

Ctags and Cscope In the last article on Ctags, I discussed how to install and use Ctags to browse source code. In this article, I do the same for Cscope. This article also includes a comparison between Cscope and Ctags.

Cscope is an interactive tool to browse C source code. It examines .c and .h files, yacc (.y) files and lex (.l) files in the current directory. It maintains a symbol cross-reference (cscope.out), which is used for finding language objects including C pre-processor symbols, function calls, function declarations, etc. The symbol table for cross-reference is created only once, when Cscope is used on the source files for the first time; it is rebuilt when the source is changed.

Features

Cscope creates its own database to allow faster searching. It provides a number of options for searching source code; it allows you to search for global definitions, calling functions, called functions, any C symbol, and any text string. It also enables you to find files, and to find files that #include a particular file. It even lets you change a text string.

Installing Cscope

To install Cscope, you need to compile from the source. Get the latest version from http://sourceforge.net/projects/cscope/files/ and follow the instructions given below.

Note: The steps to build and configure Cscope on Linux and Cygwin are the same. I built and installed Cscope under Cygwin on Windows.

Go to the Downloads folder and extract the source with tar -xvf cscope-15.7a.tar.bz2. Make a separate directory, build_scope, with mkdir build_scope. Change into the build_scope directory, and then configure the source by running the configure utility in the extracted source. For example: /cygdrive/c/Users/root/Downloads/cscope-15.7a/configure. To build Cscope, next run 'make', followed by (as the root user) 'make install' to install it. Check if the cscope command is available (on Windows+Cygwin, look for cscope.exe in the folder '/c/cygwin/usr/local/bin'). If it is, the build was successful.

Using Cscope

Figure 1: Definition of symbol or function in a particular file

Figure 2: Calling function pointing to a function call

Change to the directory containing the files you want to index for search. In this example, let's go to the GCC source code with cd /cygdrive/d/source_gcc/gcc-4.7.0/gcc/. To generate the Cscope symbol table for cross-reference, run cscope -R * and wait (it takes time to build the database). Then press the Return key to continue with Cscope. Your terminal will show a list of various functions that can be performed by Cscope. The following are some examples that show the various functions of Cscope. To find a global definition, enter the name of the symbol or function against the relevant option. I tried to find the function 'simplify_subreg'. The results are listed, and to choose from them, go to the particular option and press Enter. This will redirect you to the particular file containing that symbol or function definition, as shown in Figure 1. Now, :q is used to quit from the file containing the function or symbol definition; CTRL+D is used to quit



Figure 3: Definition of the called function
Figure 4: Call to the specified function
Figure 5: Global definition of the called function
Figure 6: Location of the text string

from the Cscope terminal. To find the functions called by a particular function, again, enter the name of that function against the option. (I tried to find the functions called by the function ‘assemble_integer’.) The results get listed as before; navigate them as was done earlier to go to the particular file containing the global definition of the calling function, at the point where the chosen function is called (shown in Figure 2). Then go to the called function name and use CTRL+] to jump to the definition of the called function, as shown in Figure 3. To find functions calling a function, enter the name of the function; I searched for ‘expand_mult’. As before, choose one of the results to open the file containing that function definition, as in Figure 4. Then go to the called function name and use CTRL+] to jump to the definition of the called function, as shown in Figure 5. CTRL+T is used to trace back to where you started your search in the particular directory. To find a particular text string (for example, ‘Print an error’), follow the now-familiar procedure to open the file containing that text string, as shown in Figure 6.
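If you prefer scripted queries to the full-screen interface, Cscope also has a line-oriented mode; the following is a small sketch using the same symbols as the walkthrough above (the numeric options correspond to the menu fields):

cscope -Rb                        # build cscope.out recursively, without starting the UI
cscope -dL -1 simplify_subreg     # find the global definition of this symbol
cscope -dL -3 expand_mult         # find the functions calling expand_mult
cscope -dL -4 "Print an error"    # find this text string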

Ctags vs Cscope

From the previous article and this, you can see that both tools are helpful in browsing through source code. You might be wondering which one to use. There are some basic differences. You might have noticed that Ctags takes you to the function where it is defined,

whereas Cscope will show you the place where the function is defined, as well as the places where it is called (or referenced). Another important difference is that Ctags offers auto-completion of code. For example, if my_structure is a structure object, and you type my_structure. or my_structure->, it will pop up the list of elements of the structure. So, whether to use Ctags or Cscope is your choice; I generally use a combination of both, depending on the requirement! References and further reading [1] http://vim.wikia.com/wiki/Browsing_programs_with_tags [2] http://www.fsl.cs.sunysb.edu/~rick/cscope.html [3] http://cscope.sourceforge.net/cscope_vim_tutorial.html

By: Gaganpreet Kaur The author is a software engineer at RF-Silicon Technologies Pvt Ltd. She is currently working as a part of the GCC Compiler Team on machine descriptions specific to targets. She can be reached at kang_276@yahoo.in. Edited and mentored by: Deepti Sharma, project manager, Software Group, RF-Silicon Pvt Ltd. Deepti has considerable experience in design, development and quality validation of system software tools (compilers, assemblers, device drivers, etc) for embedded systems. She has also been a mentor and trainer on compiler development, following good practices and maintaining quality. She loves writing for and can be reached at deepti.acmet@gmail.com.


Insight

Open Gurus

The Yocto Project is an open source project initiated by the Linux Foundation, which makes development on embedded devices easier and portable. It's an end-to-end embedded Linux development environment with various tools, configuration files and documentation, and pretty much has everything one needs. This article attempts to introduce the project, and is by no means a complete tutorial. It assumes that the reader is familiar with the nuances of embedded development environments.

The industry has seen a proliferation of embedded devices and processors. As these have grown more powerful and feature-rich, the use and popularity of the Linux operating system has skyrocketed. One of the driving factors for its popularity is that it remains open source, offers free licensing and extensive usage, which in turn has led to an abundance of applications. However, embedded developers cannot simply pick up a standard Linux distribution or application for use in their tiny environments. They face challenging tasks like handpicking boot-loaders, the kernel, libraries, applications, and development tools specific to their custom hardware environment, before cross-compiling and optimising them to fit in their minuscule embedded environments. The solutions to these challenges are not easy, and a developer's group will need considerable effort, experience and exposure to address them. Moreover, spending time on these problems will leave developers with little time and energy to really concentrate on the core task of building excellent features for their boards. Non-commercial and commercial embedded Linux distributors offer well-tested solutions for specific embedded processors, but for an embedded developer, this is a nightmare. Every new project demands the learning of a new set of tools and development environments prior to even getting started.

Evolution

It all started with the OpenZaurus distribution, which had a tool called buildroot to solve some of the problems faced by embedded developers. The constraint with buildroot was that it could not scale to targets other than Sharp's Zaurus. Its users eventually began discussing the possibilities of a new generic build, which led to the beginning of OpenEmbedded (OE). This is an embedded build system, and provides a 'build from source' methodology, to reliably bootstrap customised Linux distributions that cater to embedded devices and beyond. It has a robust structure for metadata, and vast coverage for open source software. Feature-wise, it came out as the most advanced system in the market, and being specifically designed for the customisation needs of the embedded market space, soon became the industry standard. A large number of organisations were using OE, which in turn increased support and enhanced the internals of OE. While being a very good first-of-its-kind base, OE suffered due to its complexity, steep learning curve and lack of documentation. Apart from being a build system, OE started to collate various other tools as well—while it was inherently not designed to do so. In November 2010, industry leaders joined together to form the Yocto Project under the umbrella of the Linux Foundation, and announced that the work would continue under this new name. Despite Yocto being a fork of OpenEmbedded, the common areas are now down to OE-core. This is mutually beneficial for both OE and Yocto, where the resultant output is a high-quality OE-core with fewer fully featured software tools in a single framework. Yocto got a head start with a project built on commercial learnings and community acceptance.

The Yocto Project

According to the Yocto Project website: "It's not an embedded Linux distribution—it creates a custom one for you." With its vibrant community, the Yocto Project has rapidly gained traction among silicon companies, embedded developers, operating system providers and tools providers, thus capturing the entire gamut of embedded development. Some of the famous names involved are Cavium Networks, Dell, FreeScale Semiconductor, Intel, Texas Instruments, etc. Despite being funded by several major organisations, the Yocto Project remains independently governed by the Linux Foundation, based on the open source principles of collaboration and transparency. Yocto gives developers a head start by providing hardware BSPs (Board Support Packages) in a standardised format across architectures, the latest and well-tested metadata for the kernel, an 'automagically' created Application Developer Kit (ADK), and great flexibility for embedded device requirements. As per Yocto's documentation page (yoctoproject.org/



documentation), currently four target architectures—ARM, MIPS, PowerPC and x86 (32 and 64 bit)—are supported. For each of these, there is support for a QEMU virtual machine, where images can be booted under emulation. For each architecture, there is sample hardware for reference implementation. These are:
• ARM: BeagleBoard
• PowerPC: FreeScale mpc8315e-rdb
• MIPS: Ubiquity Networks Router Station
• x86: Atom Black Sand, WEBS-2120
Some of the highlights of Yocto are:
→ A new release every six months with the latest software components.
→ A repository for open industry specific BSPs.
→ Thorough documentation.


Figure 1: A generic embedded environment transformed to Yocto Project

The essential tools and components of Yocto

Poky

One of the fundamental components is the reference build system called Poky (originally derived from Poky Linux). Created by Richard Purdie, Poky is a large and complex system, which combines various components such as BitBake (the core build engine), metadata (the configuration, present in the form of layers), the common generic OE-core, BSPs (interface layers between the hardware and the OS), and documentation for its build system. Poky's de facto tool-chain is based on the GNU Compiler Collection (GCC), but other tool-chains can be used as well.

Figure 2: Yocto build process overview


Figure 3: Screenshot of dependencies on an Ubuntu machine

ADT

Found at http://www.yoctoproject.org/projects/applicationdevelopment-toolkit-adt, this is a toolkit to enable software developers to provide an SDK that can be used by end user developers. ADT has Qemu, build tool-chains, profilers and debuggers as a part of it.

Hob

A new entrant into the project, this tool provides a graphical interface to BitBake. Most tasks that are done on the commandline can be achieved by the Hob GUI. Hob runs as a standalone tool. A new Web-based tool called WebHob is on its way. Note: There are more tools in the Yocto Project; explore them at http://www.yoctoproject.org/projects.
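Since Hob is just a front end to BitBake, it is typically started from the same environment. A small sketch (this assumes you have already sourced the build environment, as described in the next section):

$ cd poky
$ . ./oe-init-build-env
$ hob            # launches the graphical interface on top of BitBake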

The Yocto build process

The build process consists of fetching the source from various defined sources, patching the code if required, compiling the big bunch, and throwing out image or ROM files. See Figure 2.
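Each of those stages roughly corresponds to a BitBake task, and the tasks can also be run individually. A small sketch (busybox here is just an example recipe name; for a full image, you would build an image recipe such as core-image-minimal instead):

$ bitbake -c fetch busybox      # fetch the sources
$ bitbake -c patch busybox      # apply any patches
$ bitbake -c compile busybox    # cross-compile
$ bitbake busybox               # run all remaining tasks, through packaging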

Download and try Yocto

Let's get our hands dirty setting up Yocto. Visit http://yoctoproject.org and click Documentation to keep it handy. Next, bring up a Linux system and install all the necessary dependencies. The dependencies for an Ubuntu system are shown in Figure 3. Refer to your Quick Start Guide if you are working on a different distro. Download the latest Yocto release with Git, using the command git clone git://git.yoctoproject.org/poky.git. Tarballs are also available. Explore the main config in the directory conf/local.conf. For a machine with more than one CPU, you can have multiple compilation threads:

BB_NUMBER_THREADS="2"
PARALLEL_MAKE="-j 2"

Figure 4: Sourcing the open embedded environment

Set the environment by sourcing the 'oe-init-build-env' file in the main directory. This file should be sourced (. ./oe-init-build-env) and not executed in a sub-shell. See Figure 4. Once this is done, you are automatically moved to a different directory called poky/build. Execute bitbake -k core-image-minimal. This creates the minimalist image, of the x86 type. By changing options, you can compile for different architectures and different types of images.
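Putting the steps above together, a typical first session looks roughly like this (a sketch; the paths and image name are the ones already mentioned):

$ git clone git://git.yoctoproject.org/poky.git
$ cd poky
$ . ./oe-init-build-env               # sets up the environment and drops you into poky/build
$ vi conf/local.conf                  # adjust BB_NUMBER_THREADS, PARALLEL_MAKE, etc.
$ bitbake -k core-image-minimal       # -k: keep going as far as possible after errors
$ runqemu qemux86                     # boot the freshly built image under QEMU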



Figure 5: Initiating the hob tool
Figure 6: Hob screenshot while it is parsing various recipes
Figure 7: Hob selection page where you can select base image type and customise layers
Figure 8: Hob build process which will produce the final image

Let's try Hob here—see Figure 5 (terminal) and Figure 6 (Hob GUI start-up). The next step is shown in Figure 7. As you can see, the target qemux86 has been selected. Next, select 'core-image-minimal' and kick off the build. Hob starts building (Figure 8). This is going to take some time, depending on the config and parallelism options you have set. After the build, you can execute runqemu qemux86 to run the Qemu emulator with the newly compiled image.
Some of the other configuration files in the Poky system are:
1) Configuration (*.conf)—global variables:
   build/conf/local.conf (local user-defined variables)
   ./meta-yocto/conf/distro/poky.conf (Yocto policy config variables)
   ./meta/conf/machine (machine-specific variables)
2) conf/local.conf—additional features:
   EXTRA_IMAGE_FEATURES adds features (groups of packages)
   INCOMPATIBLE_LICENSE = 'GPLv3' eliminates packages using this licence
3) Recipes for building packages:
   For instance, meta/recipes-core/glib-2.0/glib-2.0_2.32.4.bb builds GLib and installs it.
The documentation provides excellent explanations for many other files. You must, however, watch out for the following:
• If custom patches, which are pulled from upstream, are incorporated in the source, Yocto needs to carry them within, and apply and fix these patches for every future update.
• Kickstarting a project in Yocto involves a fair bit of set-up work initially.
• Yocto does not mandate any version control while pulling code from upstream.

Getting involved

Like every other open source project out there, Yocto depends on contributions from members like you and me. Refer to the contribution pages at yoctoproject.org, and be a part of this excellent project. Mastering Yocto tools effectively can be a little daunting initially, but the time spent on it is well worth it. The flexible customisation and configurable options give versatile projectspecific control, which will help developers of embedded systems to concentrate on their core software development.

References [1] http://www.yoctoproject.org [2] #yocto on irc freenode.irc.net (#yocto channel)

Acknowledgements I am profoundly thankful to Divyanshu Verma and Tandava Popuri for reviewing this article and providing constructive feedback.

By: Surya Prabhakar The author works as a software development engineer for the Linux Engineering Group of Dell India R&D Centre, Bengaluru. He has close to 10 years of industry experience and has worked on Linux, embedded systems, virtualisation, cloud computing, etc. He can be reached at surya_prabhakar@dell.com.



Cloud Corner

Interview

OpenStack has emerged as a really important component for cloud services To ensure growth while conducting their businesses in a smart and cost-effective way, SMEs have no choice but to resort to the cloud, which provides them the perfect growth path, says Rajesh Awasthi, director, Cloud Service Providers, NetApp India, Marketing & Services.

The cloud is the most fascinating emerging trend in the technology world. The Indian market for cloud services in 2012 amounts to about $183 million as per the Asia Cloud Forum. India is one of the major markets for cloud services providers across the globe, not merely because of the pace at which this technology is being adopted but also because of the number of SMEs operating in the country. Yes, small and medium enterprises have a great role to play in the cloud revolution in India. To ensure growth while conducting their businesses in a smart and cost-effective way, SMEs have no choice but to adopt the cloud. It provides them the perfect growth path, says Rajesh Awasthi. And, of course, this revolution would not have been possible without open source technology. Cloud services providers across the world overwhelmingly depend on open source technologies like OpenStack. Diksha P Gupta from the editorial team spoke with Awasthi about the major trends driving the cloud revolution in India.

Trends driving the cloud revolution in India

Awasthi has watched cloud computing evolve in India. Throwing light on the subject, he says, "You have to look at the adoption of cloud technology from two to three different angles. First is how people are utilising the existing technology within their data centres, which is also known


as the private cloud. There are companies building their own private clouds, which cater to their in-house or group requirements. Second is the Public Cloud Services. These are cloud services providers like Tata Communications, BSNL, Reliance Communications, Bharti Airtel, HCL Infosystems, etc. These companies deliver IT as a service to the end user. Going a step ahead, what we started witnessing was that there were a large number of companies looking at the areas in which they could utilise the public cloud services from service providers, and the areas where it made more sense to use the technologies within their own data centres—to bring about more efficiency within their own data centres for creating their own private clouds. The third area is where people are talking of creating a hybrid cloud, in which their production system is on their private cloud. And as the data ages, they push it to the public cloud services. This may include archival data that they may need to keep for compliance purposes over a longer period."

The hybrid cloud is one of the fastest emerging models today, where, depending on the requirements of the application, the data resides within the data centre in a private cloud environment and, when it ages, gets moved onto a public cloud service. Also, large enterprises in India don't place their sales force automation applications within their own data centre. They use salesforce.com or SFDC, as it is called. A large number of companies in India are deploying it for their own sales forces. Ramco Systems, which is dominant in the ERP systems deployment space today, is also a cloud-enabled company. A mid-sized company looking at an ERP solution need not set up its own data centre and deploy an application within it. The solution can be hosted with Ramco and the latter will deliver it as a service to the end customer. So ERP as a service is being delivered too.

The cloud revolution in India is more need-based. Awasthi highlights what drives this trend:
1. With respect to the storage area or IT infrastructure, one of the trends is for cloud services providers to deploy a shared IT-as-a-service environment, which would be delivered in a secured multi-tenant way to its end customers. So, they are not placing physically independent systems at the disposal of each customer but deploying a single environment in which they put in technologies that help to create a secure multi-tenancy environment. The cloud services companies are working on creating those environments. Indian companies are ready to look at this option when they deploy their business applications within their data centre or cloud services arena.
2. The second area where there is a lot of enthusiasm is in disaster recovery or business continuity. Today, businesses are heavily dependent on IT. Instead of setting up their own data centre in a different seismic zone, companies are adopting Disaster Recovery as

a Service (DRaaS), with the service provider offering different SLAs (service-level agreements).
3. The third area gaining traction is businesses going in for off-site back-up for long-term archival solutions. For example, telecom companies have to keep CDRs (call detail records) for several years as per the regulations of the Telecom Regulatory Authority of India. For this, telcos are looking towards cloud services providers rather than keeping the data in their own data centres. Similar solutions are needed in the banking sector. Banks are supposed to keep the details of transactions of each customer for a period of three to four years—so an archival solution with the cloud services providers would be ideal.




4. Businesses are also going to cloud services providers for their big data solution requirements. This requirement falls into three broad categories, which are termed the 'A-B-C' of big data. 'Analytical' solutions and high 'Bandwidth' solutions (required in the case of high-performance computing) comprise the first two requirements. In the high 'Bandwidth' area, NetApp has joined hands with open source technologies like Lustre to create a distributed file system, to be deployed in a high-performance computing environment. The 'C' in the 'A-B-C' stands for large 'Content' repositories. Projects like the UID or NPR need to deploy huge solutions that manage the data of one billion people, including their fingerprints and iris scans, which has to be stored in a single location. This data will be utilised by banks, security agencies and other branches of the government. So, managing and securing that kind of a content repository, which will continue to grow over a period of time, is extremely important. That's where Hadoop-based solutions play a crucial role.

Open source technology has an important role to play

Since data is growing at an exponential rate, the next stage would be where customers require high bandwidth solutions, both from the compute side as well as from the storage perspective. Those requirements may be for a specific period of time. Awasthi says, “One has to see whether one wants to fulfill those requirements from within one's own data centre or look at cloud services providers. That's where we are trying to work and provide solutions with the help of open source technology."

Hadoop will be the winner!

Hadoop has emerged as a really important component. As the size of the content repository increases, the 94 | OCTOber 2012

requirement would be to scale the data storage across multiple systems and create a single distributed file system across all those storage systems. That is where the beauty of the Hadoop solution comes in, which brings in the HDFS file system over and above the different storage systems that are deployed by the customer. Awasthi says, "We have an example of how a Hadoop-based solution has helped NetApp in customer services. NetApp offers continuous monitoring of the NetApp storage systems deployed at customer sites using 'Auto Support', where we take direct feeds from all the systems that are enabled with 'Auto Support'. This is done to give proactive support to customers by doing predictive analysis on near-online data received from various customers' systems. That's a huge volume of data, because we have multiple systems deployed across the world and each one of them is generating some alert or the other at any given point in time. When that information comes to our 'Auto Support' system, it creates a huge repository. Earlier, it used to take months to create a report from a system on the huge data repository. Recently, we replaced our storage solution with our own Hadoop-based E-series solution at the back-end. The same work that used to take months is now done in a couple of hours or days. That is the benefit of open source technology—something we see within our own internal IT system. I am sure it wouldn't be different in any other business. For example, if a retail chain wants information about customer behaviour from various branches spread across the country, it's a major task to correlate the vast data that is received from different sources. If the correlation has to be done in a specified time to make a quick business decision, Hadoop plays a major role." There are no security-related fears in adopting open source technology either. In fact, cloud services providers are happily adopting open source technologies because they help cut costs. Awasthi says, "I have seen a large number of telecom service providers as well as cloud services providers that look at open source based stacks to deploy cloud solutions. There is an alliance in the industry called 'OpenStack', which does joint work in creating cloud environments using open source technologies. So, within India too, we have seen cloud service providers deploying solutions based on the open stack. People are more receptive to these kinds of solutions, provided they add value to their end customers. They are ready to implement these solutions in their infrastructure. Because, in the end, the customer looks at the service level agreement (SLA) that the cloud service provider is ready to sign. They are least bothered about what technology is being used. Cloud services providers are looking at the way open source is spreading across data centres, and they find this to be a financially beneficial solution that can be deployed, while giving the customers the desired SLAs."


“Converting Big Data to Business Growth”

20th & 21st November - Taj Lands End, Mumbai First Time in India - A Networking Opportunity With Professionals Who Relate to Big Data / Consumer Insights & Business Analytics Highlights : • 200+ Senior Professionals • 35+ Speakers • Panel Discussions / Case Studies / Individual Presentations • Exhibition Booths - latest technologies related to data • Professionals from IT, Marketing, Analytics, Risk & Fraud background under one roof • And much more Who will you meet : • CTO • Heads of Technology • Data Scientist • Predictive Analyst • Business & Market Intelligence • Consumer & Market Insights • Risk & Fraud • Data Architect From : Media Partner

• Financial Institutions • Telecommunication • E-Commerce • Automobile • Pharmaceuticals & Bio-Chem • Utilities • IT Companies / Data Solution Providers / Data Consultants / Data centres • FMCG / Retail

Conceived & Managed by B2B Media

Registration Cost - R 15,000 + Service Tax For Speaker Opportunities / Sponsorships / Registrations - Call Neha +91 9819414516 Janice +91 9820104787 or email: neha@kamikaze.co.in / janice@kamikaze.co.in KamiKaze B2B Media : Kshitij, 103 - 1st Floor, Veera Desai Road, Opposite Andheri Sports Complex, Andheri (W), Mumbai 400 058 Tel.: +91 22 613 81800 / 26780476


Cloud Corner

Insight

An Introduction to CloudBees

This article introduces the CloudBees PaaS (Platform as a Service) offering.

According to Gartner, the application infrastructure is the mid-tier platform technology layer of the software stack (a.k.a. middleware), where architecture standards, best practices, prevailing protocols and programming models for business applications are defined. Platform as a Service (PaaS) enables organisations to use the application infrastructure as a software service to create, run and integrate cloud-based business applications.

PaaS sits in the space between Software as a Service (SaaS) and Infrastructure as a Service (IaaS). The latter delivers basic network, storage and compute/processing capabilities as standardised, scalable service offerings, while SaaS delivers software capabilities as online Web applications and Web services. PaaS offerings often support DevOps practices, which include self-service, automated provisioning, continuous integration and continuous delivery. With PaaS, the team can see whether an application is working, broken or staged, and so on—across the entire application lifecycle. Apart from that, PaaS provides:
• On-demand self-service (service catalogues, incremental DevTest)
• Rapid elasticity (provisioning, de-provisioning, flexibility and scalability)
• Resource pooling (platform environments commonly pool memory, code libraries and database connections; multi-tenancy, resource utilisation)
• Measured services (monitoring, metering, billing, etc). Usage is monitored, and the system generates bills based on the charging model.
In the next three to five years, PaaS providers will be capable of providing comprehensive offerings that are not dedicated to a single language.

The Java PaaS

PaaS for Java has come a long way in the past 24 months. PaaS product offerings are rapidly evolving, which is great news for the Java community – there are now low-cost, scalable and hassle-free hosted solutions for Java environments. The Java platform is well suited for PaaS since the JVM, the application server and deployment archives provide natural isolation for Java applications, allowing multiple developers to deploy applications on the same infrastructure. Google App Engine was the lone PaaS provider for Java developers. Fortunately, the scenario is changing; it makes sense, since Java developers represent one of the biggest developer communities in the world. In the past two years or so, several commercial providers have entered the Java PaaS space:
• CloudBees
• Amazon Elastic Beanstalk
• CloudSwing
• Cloud Foundry
• Google App Engine
• Heroku
• Red Hat OpenShift

An introduction to CloudBees

CloudBees is run by JBoss and Sun veterans, and its weight is growing in the Java PaaS space. CloudBees brings several unique features to the Java PaaS panorama, in particular continuous integration—complete development, testing and deployment management in the cloud.

The CloudBees ecosystem

For JEE developers, I consider CloudBees and OpenShift to offer the 'best of breed' services so far. And between the two, CloudBees is the winner in this highly competitive landscape. While all other Java PaaS vendors focus on providing a hosted runtime environment for Java applications, CloudBees takes the platform concept further to support the complete development, testing and deployment lifecycle of Java applications.

Features

Supported technology platforms and stacks

One of the most important attributes of CloudBees is the technology platform and stack it supports. CloudBees supports Tomcat, Java SE, Java EE, standard Java libraries, MySQL, commercial relational databases and Big Data. It allows file system and thread access. No special framework is needed for application deployment, and it is easy to migrate applications to and from CloudBees.

Figure 1: CloudBees EcoSystem (the diagram shows Dev services such as Jenkins-based continuous development, Git/SVN/Maven repositories, Sonar code quality and Selenium testing; Run-time services such as Java/JEE production with auto-scaling, MySQL, MongoHQ and Cloudant data services, SendGrid messaging, New Relic monitoring and Papertrail logging/auditing; plus identity management, provisioning, management and metering/billing layers over cloud management/virtualisation. The platform is accessed via the Web UI, CLI, HTTP APIs, Eclipse, Maven and Ant, and is offered as a public edition on IaaS such as EC2 and Rackspace, or as a private on-premise edition for enterprises.)

Support for the development processes

CloudBees makes life easier for application developers, as it removes the overhead of application and resource management. It is not only a runtime environment, but also an integrated build, development and test environment. Developers can use the Jenkins service to have CloudBees automatically and continuously check out, build, test and report on the code in the repository. CloudBees supports IDE tools, command-line tools, Web-based consoles, Web access to logs, third-party development and testing services, and API access. On top of all these, good documentation is like chocolate topping on vanilla ice-cream.

Performance and scalability

One of the most important features of CloudBees, from the business perspective, is the platform's ability to auto-scale. CloudBees supports a built-in load balancer and custom domains for it. Users can define performance criteria to measure load-balancing requirements, and the auto-scaling of application servers is handled accordingly. Database scaling is not supported. The platform provides horizontal, vertical and auto-scaling features, with Web-based monitoring dashboards.

Jenkins as a service

Jenkins is an open source continuous integration tool written in Java. It supports SCM tools including CVS, Subversion, Git, Mercurial, Perforce and ClearCase. In CloudBees, users can execute Apache Ant and Apache Maven-based projects. Builds can be started by a commit in a version control system, scheduled via a cron-like mechanism, triggered when other builds have completed, or started by requesting a specific build URL. Cron is a time-based job scheduler in UNIX-like operating systems that enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates (see the table below). Using the CloudBees dashboard, users can manage and keep track of the various builds.

Entry      Description                                   Equivalent to
@yearly    Run once a year, midnight, Jan. 1st           0 0 1 1 *
@monthly   Run once a month, midnight, first of month    0 0 1 * *
@weekly    Run once a week, midnight on Sunday           0 0 * * 0
@daily     Run once a day, midnight                      0 0 * * *
@hourly    Run once an hour, beginning of hour           0 * * * *
@reboot    Run at start-up                               (no equivalent)

A cron expression has five fields: minute (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12) and day of week (0 [Sun] - 6).
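As a quick illustration of the table above, the two crontab-style schedule lines below are equivalent ways of asking a cron-like scheduler (including build triggers that accept cron syntax) to run a job every night at midnight; the script path is only a placeholder.

# Shortcut form: run the nightly build script once a day, at midnight
@daily /home/user/jobs/nightly-build.sh

# Equivalent five-field form: minute hour day-of-month month day-of-week
0 0 * * * /home/user/jobs/nightly-build.sh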

Figure 2: The CloudBees Dashboard—Jenkins



Papertrail—logging

CloudBees' management console provides access to the server's unrefined log file. Papertrail provides aggregation and analysis of log files—it can be used to aggregate, tail and search log messages from applications deployed on the CloudBees platform. Papertrail automatically collects and reports the log data from running CloudBees applications; it needs to be enabled from the CloudBees dashboard for specific applications. It combines logs from different sources, keeps them for longer periods of time, and provides summary analysis based on the logs. It saves useful searches, enables team-wide visibility, notifies external services of important events, automatically archives logs, and gives access to log details via the Web, the command line and the API.

New Relic—deep monitoring and management of Web apps

New Relic is an application performance monitoring service. It embeds its software agent into your application to collect detailed performance information in real time. New Relic monitoring is part of the CloudBees ecosystem, so you can turn it on for any of your applications with a few clicks. In CloudBees, the New Relic software agent is automatically instrumented into your application at the time of deployment, without the developer needing to change any source code. New Relic monitors application performance at the component level, and spots potential bottlenecks. It determines if performance issues are in the front end, the app, the database, or at the network level. It is the tool developers need to monitor real user experiences, proactively identify application bottlenecks, and ensure superior online service delivery.

Figure 3: New Relic application monitoring and management

XWiki

XWiki Cloud is a cloud-based wiki that is secure, simple to use and easy to organise. It is a light and powerful development platform. Using structured data and in-page scripting, you can create macros and applications that extend the capabilities of your wiki and customise it to match your specific needs. With XWiki, CloudBees provides easy content creation for applications, importing of Office documents (.doc, .xls, .ppt, etc) into your wiki, and a powerful search engine that searches every element of your wiki, including attachments.

Figure 4: XWiki in CloudBees

The SendGrid service

CloudBees now supports sending mail from applications with the SendGrid service for RUN@cloud. SendGrid offers a hosted SMTP service to send emails to either a mailing list or to individual users. Once it is enabled, it is available as a JNDI resource in your CloudBees application. SendGrid's cloud-based email infrastructure relieves businesses of the cost and complexity of developing and maintaining custom email systems. It provides reliable delivery, scalability and real-time analytics, along with flexible APIs that make custom integration a breeze. SendGrid solves the technical challenges and eliminates your email headaches, so that you can focus on your core product and meet the email demands of your business.

In the next article in this series, we will look at how to deploy a JEE application on CloudBees, how to run it in the local and cloud environments, and how to configure the CloudBees plug-in in Eclipse.

References
[1] http://wiki.cloudbees.com/bin/view/Main/CloudBees+Platform End to EndTutorial
[2] http://wiki.cloudbees.com/bin/view/DEV/Sauce+OnDemandService
[3] http://wiki.cloudbees.com/bin/view/RUN/Papertrail
[4] http://wiki.cloudbees.com/bin/view/RUN/NewRelic
[5] http://wiki.cloudbees.com/bin/view/RUN/DatabaseGuide
[6] http://www.infoq.com/articles/paas_comparison
[7] http://wso2.com/download/Selecting-a-Cloud-Platform.pdf
[8] http://en.wikipedia.org/wiki/Cron
Gartner Insight: COMPLIMENTARY RESEARCH—'Platform as a Service in the Cloud Services Value Chain'

By: Mitesh Soni The author is a Technical Lead at iGATE. He is in the Cloud Services Practice (Research & Innovation) and loves to write about new technologies.


Overview

Admin

Cyber Attacks Explained

Cryptographic Attacks

Cryptographic solutions are used to encrypt data transmission over wireless or wired networks. Unfortunately, these techniques have proved to be vulnerable to attacks, and data can be stolen. This article explores the various means to strengthen encryption techniques in order to protect network infrastructure, including methods using FOSS-based solutions.

As we all know, the heart of cryptographic network communication is the public key infrastructure (PKI), which is used to encrypt the TCP/IP communication between two network end-points. PKI uses various encryption algorithms to ensure data security. The whole idea behind encryption is to make it a very difficult, time-consuming task to try out all possible keys. For example, if a message is encrypted using an 8-bit key, it means that 256 different combinations of the key need to be tried to decrypt the data. Any computer can perform this task in less than a second. However, if the key length is extended to 32 bits, over four billion combinations would need to be tried, which takes noticeably longer. Extending this, trying all the combinations for a 256-bit key would take literally many years, even for a powerful computer. While the key length is an important factor, the mathematical algorithm used for encryption and decryption is equally important. The algorithm is supposed to perform the operation quickly, while maintaining the necessary data and key security; there are many such algorithms, for example SHA1 and 3DES. There are two types of keys—symmetric and asymmetric. Symmetric schemes use only one key for both encryption and decryption, while asymmetric schemes use two different keys, which are complementary to each other. Please refer to Figure 1, which shows basic cryptography functionality, designed with the objectives of data confidentiality, integrity and authentication in mind.

It is important to understand what cryptography means to the Internet. The Internet is blessed with the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols, which perform the job of encrypting and decrypting the data sent, so as to enable users to securely exchange personal information, credit card numbers, etc. SSL and TLS are based on asymmetric key exchange. There are two demands here—that the data must be encrypted for security reasons, and that the website must be known to be legitimate. The latter is important because attackers may host websites to steal personal information. To ensure legitimacy, the Web server has an SSL certificate, which enables traffic via the HTTPS protocol, using TCP port 443 for communication. This SSL certificate is provided or signed by a trusted certification authority such as VeriSign or Thawte, which ensures that the SSL certificate holder is a genuine party, will adhere to security standards, and is eligible to obtain a certificate and install it on their servers. The SSL certificate is tied to the domain name, such as abcd.com.

To understand all the security concerns, we need to first study how certificates work. Digital certificates using asymmetric PKI have two keys, a public key and a private key. The private key is installed on the Web server where the website URL is to be secured using SSL. The public key is shipped with all browsers that support SSL. To support multiple certificate authority vendors, browsers are equipped with their public keys, as well as various ciphers (the encryption and decryption algorithms). Each public key has an expiration date and needs to be updated once it nears expiration. When we install a digital certificate on a Web server, we are essentially installing the private key, which is specifically created by the trusted certification authority.

Now let's see how this mechanism works at a higher level, for a browser. As shown in Figure 2, when a person tries to access an SSL-secured website, the browser first challenges the server by sending its own cipher strength. In response, the server does the same, and also sends a copy
of its own installed SSL certificate (for the hosted website URL). At this point, the browser checks the validity and authenticity of the certificate, using the set of public keys it carries. On finding it acceptable, the browser sends back a digitally signed response to the server to initiate further secure communication. If the server certificate cannot be verified for authenticity, the browser alerts the user about the situation. It is important to note that while SSL helps achieve security, there is an overhead: TCP/IP by default does not provide any security, and adding the encryption layer onto the existing protocol frame results in bigger TCP packets.

Figure 1: Basic cryptography functionality (a message such as "Hello World" is sent by the sender as ciphertext, for example "ka$56s#0NpD", and decrypted back to "Hello World" at the receiver; symmetric key crypto uses a single identical key to encrypt and decrypt, while asymmetric key crypto uses a public + private key pair)

Figure 2: A secure browsing mechanism (the Web browser and Web server negotiate supported ciphers, the server identifies itself with its certificate, the browser validates it, sample data is encrypted and decrypted successfully, and an SSL session is established)
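To see this exchange from the command line, the OpenSSL client can be pointed at any HTTPS site and the certificate the server presents can then be examined; the host name below is only an example.

# Connect to an HTTPS server and save the certificate it presents during the handshake
openssl s_client -connect www.example.com:443 < /dev/null 2> /dev/null > server-cert.pem

# Print whom the certificate identifies, who signed it, and its validity dates
openssl x509 -in server-cert.pem -noout -subject -issuer -dates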

Cryptographic attacks

Network administrators commonly invest time and money in designing security around applications, servers and other infrastructure components, but tend to take cryptographic security less seriously. Before going into the various attacks, let's first understand that cryptography is all about keys, the data, and the encryption/decryption of the data using those keys. A few cryptographic attacks try to decipher the key, while others try to steal data on the wire by performing some advanced decryption. Let's take a look at a few common attacks on cryptography.

The SSL MITM attack: In this case, the attackers intrude into the network and establish a successful man-in-the-middle connection. The attackers silently watch the HTTPS traffic on the wire, and wait for the targeted website to respond to some browser's HTTPS request. As we learnt earlier, the server is supposed to send its digital certificate to the browser as a part of the SSL handshake process. The attackers grab this certificate, and note down various details such as the domain name, expiration date, cipher strength, etc. The attackers then create their own certificate (also called a self-signed certificate), containing the same information as the captured certificate. From this point on, the man-in-the-middle attackers intercept each browser request and respond with the fake certificate. As a normal response to such a situation, the Web browser pops up a warning to the user, which in most cases is ignored, and thus the attackers are successful. Further, on the server side, the attackers establish a separate HTTPS connection to complete the request, and the result of the response is fed back into the browser on the connection already established. This gives the attackers complete control on the
SSL traffic, and helps them steal personal information. Since this attack involves a real intrusion into the network, it is less likely to happen, but it can result in serious data loss. Also, since the attackers are not breaking the request-and-response chain, it becomes tough to detect the data theft.

The SSL MITB attack: Similar to the attack mentioned above, in this case the attackers inject a JavaScript code snippet into the browser to create a man-in-the-browser situation. This snippet monitors all SSL activities and records the session. While this is happening, the attackers also record the encrypted version of the same session, and programmatically try to find out the cipher strength and the key, besides stealing data. This attack is becoming more popular of late, due to the multiple open source browsers in use, and the various security vulnerabilities in each of them.

Key hijacking: This is another intrusive type of attack, whereby the attackers gain access to the Web server hosting the website (by using one of the many intrusion techniques already discussed in previous articles of this series). Once the server is compromised, the attackers use an elevated-privilege attack to gain access to the certificate store, from where the private key can be obtained. The attackers then use packet sniffing to download an entire HTTPS session, and store it for offline decryption. The decryption process needs the private key, which has already been stolen, and the public key, which is available in the browser's trusted authority key store. The data set so deciphered might reveal vital personal information such as user IDs, addresses, credit card numbers, etc, assuming that the targeted website sells goods online using e-commerce technology.

The birthday SSL attack: This attack relies on a mathematical theory called the birthday problem, which says that statistically, in a set of randomly selected people, some pairs will have the same birthday; the probability increases as the number of people grows. In cryptography, data integrity is established using a hash or checksum, which is calculated at both ends of the transmission to ensure that the data is not tampered with. Birthday attacks target the hash, and need multiple attackers coming together, who individually capture chunks of data and share them among themselves. Each chunk is then programmatically analysed to create an additional set of data in such a way that its hash matches that of the data chunk. In other words, for a given chunk of data and hash combination, the mathematical algorithm creates a clone data set. Further processing of the original data chunk and the resultant data set helps derive the encryption key. This attack is very time-consuming and technically complex, but can be accomplished by using multiple powerful computing machines and software programs.

Chosen dataset attacks: As discussed earlier, attackers always aim for the data as well as the key, in order to completely compromise a cryptographic system. The chosen dataset method comes in two different types. In the first, called chosen plaintext, it is assumed that the attackers have access to the original data and the encrypted version of it. The attackers then apply multiple encryption keys to the original data, and each time the output is compared with the already encrypted version. If the result matches, the key has been derived. In the second type, called chosen ciphertext, the attackers have the cipher text and also the decrypted version of it. Again, the attackers try multiple keys until the output matches the decrypted version obtained already. These attacks are a bit less time-consuming, but need the attackers to gather an enormous amount of data and computational power for the desired results.

The SSL brute-force attack: Here, the attackers send very small data sets to be encrypted by the SSL protocol. The attackers capture the result and store it against the transmitted dataset. By doing this for lots of data chunks, a key can eventually be derived. This process is very slow, and can take days to decipher the key, and it has been found that such attacks often originate from within the firm's network. To speed up the process, this method is usually combined with the group key decipher attack.

Group key deciphering: As learnt earlier, key-based encryption is dependent on the length of the key; a bigger key requires more time to decipher. In a group key deciphering attack, multiple attackers come together—each one with a powerful machine. Unlike brute-force attacks, where a lot of data is captured, in the group method only a given set of data is captured and used. This data is subjected to all possible permutations of keys to try decrypting the data. Since 256-bit encryption can take many years to decipher, using multiple powerful computing machines can bring that time down. Attackers also use statistical grouping of the keys to be tried on different machines, to further quicken the process. In the past, a few such experiments showed that cracking a 128-bit key required only a few days. With improving CPU speeds and throughputs, unfortunately, quickly cracking a 1024-bit key could become a reality soon.

Compromised key attacks: Cryptography is all about trust, whereby a trusted certificate-providing authority signs a certificate. The provider itself is supposed to be extremely secure. Unfortunately, in the past, the certificate-
provider's own private key has either been exposed or stolen by attackers, who have then used it to sign certificates created for a domain name of their own site. Any browser lured to this website will not suspect it, because the certificate will pass the authenticity test. This happens because the public key of such certificates will already be present in the browser's certificate store. This can, and in the past has, resulted in the loss of personal information.

SSL DoS attacks: An attacker's main aim is usually to steal data. Since that is a troublesome and highly technical process where cryptography is involved, a few attackers fall back on legacy methods, such as denial of service attacks. SSL negotiation adds an overhead to the TCP protocol, slowing down the communication to achieve security. In an SSL denial of service attack, the attackers establish SSL communication through a browser, and then send multiple bogus packets with varying lengths on that channel. Each packet is decrypted and processed on the server side, eventually exhausting CPU power and resulting in a service outage. In another form, which takes place at OSI Layer 3, TCP port 443 is bombarded with bogus fragmented packets, creating a similar effect.
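To put the key-length argument in numbers, each extra bit doubles the number of keys an attacker must try in the worst case; the quick calculation below, using the standard bc calculator, shows why exhaustive search stops being practical long before 256-bit keys.

# Worst-case number of keys to try for different key lengths
echo "2^8" | bc     # 8-bit key: 256 combinations
echo "2^32" | bc    # 32-bit key: 4294967296 combinations
echo "2^128" | bc   # 128-bit key: a 39-digit number of combinations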

Protecting FOSS systems

In the FOSS world, cryptography is mainly used on Web servers by implementing the SSL protocol. Besides, open source developers can digitally sign the code before sending it to a trusted party, to prevent wire-tapping. On a Web server, the very first step is to use a digital certificate from a trusted authority. It should also have the latest and strongest cipher algorithm, and the key length should at least be 256 bits. The second step is to protect the certificate store—that crucial area on the Web server where the website’s private key is stored. Only administrators and network managers should have access to it. To protect FOSS networks from brute-force attacks, other network security protection should be in place (this has already been discussed in previous articles in this series). While most critical infrastructures implement a firewall, a UTM device and powerful antivirus or anti-Trojan software, it is also imperative to have an intrusion detection system (IDS) in place. IDS systems are capable of intercepting denial of service and brute-force attacks, and also help stop other critical anomalies. In case of Linux workstations, cryptography can be used to encrypt a file or entire disk too.
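As a concrete illustration of that last point, the commands below show one common way to encrypt a file and to sign a release on a Linux workstation using the standard GnuPG and OpenSSL tools; the file names are placeholders, signing assumes a GnuPG key pair has already been created, and full-disk encryption would instead be set up with something like dm-crypt/LUKS.

# Symmetric encryption of a single file with GnuPG (prompts for a passphrase)
gpg --symmetric --cipher-algo AES256 secret-report.txt     # writes secret-report.txt.gpg

# The same idea using OpenSSL's enc utility
openssl enc -aes-256-cbc -salt -in secret-report.txt -out secret-report.txt.enc

# Detached, ASCII-armoured signature over a source tarball, so recipients can verify it
gpg --detach-sign --armor mymodule-1.0.tar.gz              # writes mymodule-1.0.tar.gz.asc
gpg --verify mymodule-1.0.tar.gz.asc mymodule-1.0.tar.gz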

By: Prashant Phatak The author has over 18 years of experience in the field of IT hardware, networking, Web technologies and IT security. Prashant is MCSE and MCDBA certified and is also an F5 load balancer expert. In the IT security world, he is an ethical hacker and Net-forensic specialist. Prashant runs his own firm called Valency Networks in India (http://www.valencynetworks.com) providing consultancy in IT security design, security audits, infrastructure technology and business process management. He can be reached at prashant@valencynetworks.com.




A List of Leading Cloud Solution Providers

IBM | Bengaluru, India
IBM Services offers a bouquet of cloud offerings across the foundational components of IaaS, SaaS, PaaS and BPaaS. IBM's cloud-based technology offerings, like SmartCloud Enterprise, help clients and software developers (ISVs or independent software vendors) in India excel in cloud computing, providing secure and reliable Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) solutions. It also provides ready-to-use business solutions in the CRM and HRMS space—solutions that can be enabled and deployed within a matter of weeks, if not days. Large ISVs and value-added resellers or distributors that aspire to become cloud service providers now have the option to contract with IBM to start up their cloud services by paying a nominal royalty and launching their own branded SmartCloud Enterprise-equivalent solution.

The offerings currently available are IBM Smart Cloud Entry, IBM SmartCloud Enterprise, the ISV-led cloud offering, IBM Smart Business Desktop, SmartCloud managed back-ups, LotusLive Portfolio and the Cloud Service Provider framework. Major open source-based offerings: Some of the company's cloud offerings support Linux and open source hypervisors like KVM, which provide the foundation for public and private cloud deployments. For instance, IBM’s public cloud offering SmartCloud Enterprise supports the SUSE and RHEL flavours of Linux. Leading clients: Confianz Information Technologies Pvt Ltd, Sree Charan Cooperative Bank Ltd, Jeppiaar Institute of Technology, WinHire Technologies, EmployWise, True Value Homes (TVH), etc. USP: On-demand self service, ubiquitous network access, location independent resource pooling, rapid elasticity and provisioning, and a pay-per-use model. Special mention: The company was recently awarded for its cloud security solutions by SC Magazine in the 'Best Cloud Computing Security' category. EMA (Enterprise Management Associates) also awarded the IBM Tivoli Application Performance Management solution with its 'Best Cloud Vision and Design’ award. Website: http://www.ibm.com/in/en

Cisco Systems | Bengaluru, India The company has a comprehensive portfolio of technologies that can be applied by enterprises and service providers to build clouds. These components span compute, storage access and networking, virtualisation, orchestration and self-service capabilities, as well as security, collaboration and video services. The company also provides solutions for customers to deliver cloud services like Collaboration Cloud, Video Delivery, IaaS, VXI, security-as-a-service, etc. The solutions portfolio is driven by a three-pronged cloud strategy: • Deliver products, solutions and services to organisations so that they can build their own secure clouds that are capable of supporting enterprise-class SLAs. • Enable service providers to deliver secure cloud solutions and services to their customers. • Drive technology innovation, open standards and ecosystem development. Major open source-based offerings: The company is committed to building an open cloud ecosystem with a broad array of partners. Cisco participates actively in OpenStack, an open source platform that facilitates both private and public cloud computing environments. Dynamic provisioning of the network and network-based services is an essential element of cloud computing. In pursuit of this goal, Cisco is helping shape the development of Quantum, Nova, Horizon and other parts of OpenStack. For example, Cisco delivered Quantum reference implementations to



CALENDAR FOR 2012-2013: Events to Look Out For

Gartner Symposium/ITxpo 2012 (http://www.gartner.com/technology/symposium/india/)
An event that focuses on IT and its benefits in an enterprise
10 – 12 October 2012, Grand Hyatt, Goa

Interop (http://www.interop.in/)
An event that will help you make smart business decisions using IT and open source
10 – 12 October 2012, Bombay Exhibition Center, Mumbai

Open Source India (http://www.osidays.com/)
An event targeted at nurturing and promoting the open source ecosystem
12 – 14 October 2012, NIMHANS Convention Center, Bengaluru

Cloud Computing World Forum (http://www.cloudcomputinglive.com/india/)
An event on cloud computing
16 – 17 October 2012, JW Marriott, Mumbai

The Big Data, Analytics, Insights Conference (http://www.salessummit.co.in/bigdata.html)
An event that will explore data analytics solutions, and the latest skills, tools and technologies needed to make Big Data work for an organisation
20 – 21 November 2012, Taj Lands End, Mumbai

Reseller Club Hosting Summit (http://www.rchostingsummit.com/)
India's largest conference for the Web hosting community
1 – 2 November 2012, Renaissance Mumbai Convention Centre Hotel, Mumbai

LinuxCon Europe 2012 (https://events.linuxfoundation.org/events/linuxcon-europe/)
An event for the Linux community
5 – 7 November 2012, Hotel Fira Palace, Barcelona, Spain

KVM Forum/oVirt Workshop 2012 (http://events.linuxfoundation.org/events/kvm-forum/)
An event for the open source community
7 – 9 November 2012, Hotel Fira Palace, Barcelona, Spain

Gartner Data Center Conference (http://www.gartner.com/technology/summits/na/data-center/)
An event for data centre professionals managing and advancing their enterprise's evolving IT infrastructure requirements
3 – 6 December 2012, The Venetian Resort Hotel and Casino, Las Vegas

LFY magazine attractions during 2012-13

Month           Theme                                          Featured List
May 2012        Virtualisation                                 Certification & Training Solution Providers
June 2012       Android                                        Virtualisation Solution Providers
July 2012       Open Source in Medicine                        Web Hosting Providers
August 2012     Open Source on Windows                         Top Tablets
September 2012  Open Source on Mac                             Top Smart Phones
October 2012    Kernel Development                             Cloud Solution Providers
November 2012   Open Source Businesses                         Android Solution Providers
December 2012   Linux & Open Source Powered Network Security   Network Security Solutions Providers
January 2013    Linux & Open Source Powered Data Storage       Network Storage Solutions Providers
February 2013   Top 10 of Everything on Open Source            IPv6 Solution Providers




support L2 Networks that span multiple switches. Cisco will offer an OpenStack Quantum Plug-in and REST APIs for its virtual switches to orchestrate multi-tenant cloud infrastructures. Cisco’s Cloud CTO and VP, Lew Tucker, was recently elected vice chairman of the OpenStack Foundation. Cisco is an active participant in the Open Networking Foundation (ONF) as well. Cisco Open Network Environment (ONE), a comprehensive initiative targeted at network programmability complementing an overarching cloud platform, aims to deliver amongst other features: (a) open APIs across Cisco’s operating systems (Cisco IOS, IOS-XR and NX-OS); and (b) proof-of-concept controller software and OpenFlow agents for software-defined networking research.


Leading clients: The key industry verticals in which Cisco has seen maximum traction are ITeS, FSI, manufacturing and government. In addition to this, the company has partnered with several service providers to provide cloud computing services to SMEs. USP: Because of its networking heritage and the strength of its core networking business, the company is in the best position to help customers realise the benefits of the cloud. This market position allows Cisco to highlight its innovation and leadership – organically and with partners – as part of its strategy. Website: http://www.cisco.com/web/IN/index.html

VMware Software India Pvt Ltd | Bengaluru, India


The company provides solutions across the three layers of IT environment: infrastructure, application and end-user computing, and thus helps its customers adopt a complete cloud environment. VMware cloud solutions improve IT efficiency, agility and reliability, while helping IT drive innovation. VMware delivers everything IT needs to build, operate, staff and manage the cloud, while continuously quantifying its impact. VMware helps customers evolve technical foundations, organisational models, operational processes and financial measures to establish both a cloud infrastructure and cloud operations model that delivers the greatest benefit from cloud computing. VMware cloud solutions maximise the potential of cloud computing to deliver new IT services that fuel business growth. It also helps create and rapidly deploy services that differentiate the business and transform IT into a source of innovation. VMware Cloud Operations Services provide organisations with insight, prioritised recommendations, and expert guidance. Major open source-based offerings: VMware has a long history of support for open source software in its products. In addition to collaborating with the open source community, VMware works closely with major Linux vendors to ensure high quality support for Linux guest operating systems running on VMware hypervisors. As an active participant in the open source community, VMware has open sourced the VMware Tools as the Open Virtual Machine Tools project, contributed the VMI (Virtual Machine Interface) para-virtualisation code under the GPL, collaborated with the Linux kernel community and others in the development of paravirt-ops, and sponsored OSDL's DCL F2F. Leading clients: The company has more than 1800 customers across verticals in India, which include HPCL Mittal Energy, Dhanalaxmi Bank, CRISIL, HDFC Bank, Axis Bank, Tata Communication and KPIT Cummins. Website: http://www.vmware.com/in

Sify | Chennai, India The company's Cloudinfinit offers enterprise grade IaaS and PaaS services on a pay-per-use basis through its online cloud portal, www.cloudinfinit.com. Cloudinfinit offers ready-to-use compute instances on a multi-tenant, robust and fully scalable infrastructure to host the most demanding e-business applications in an enterprise grade, secure, highly available and self-controlled environment backed with stringent service level guarantees. The services include wide choices across computing, storage, networking, security, analytics and the protection product stack, to provide end-toend IT infrastructure on a pay-per-use basis. The cloud services are highly scalable and resilient, while ensuring rapid deployment. Major open source-based offerings: The company provides PaaS services like Ubuntu, Linux and over 80 operating system variants. In SaaS, the company offers Sify Mail, Sify forum software that are available for the open source community, etc. Leading clients: The company has provided cloud services to over 125 leading companies from the BFSI, retail, IT, textiles, manufacturing and government sectors. USP: Sify has successfully leveraged its strengths (the network, hosting, security, professional services and

software expertise) to deliver its cloud stack, and is able to deliver an end-to-end converged cloud services model to its enterprise customers of all sizes. The Indian government, too, trusts the company's cloud to run its mission critical business applications.

Website: http://corporate.sify.com

ESDS Software Solution Pvt Ltd | Nashik, India

The company's cloud services solution, the eNlight Cloud Platform, offers IaaS on a pay-per-consume model on the public and private cloud, on a fixed pricing model. It also offers disaster recovery, desktop-as-a-service and ERP hosting on a pay-per-user model.

Major open source-based offerings: eNlight Cloud is entirely based on the Xen kernel, which is open source. The company has customised the kernel to provide instant auto scaling of the CPU and RAM, which results in 70 per cent savings on the capex and opex required for hosting websites or applications. Leading clients: Podar International, Kafilla Tours and Travels, Ambuja Cements Limited, MPSC.gov.in, the A2Z group, Gujarat Infomatics Ltd, WebTel TechnoLogies, Access Automation Pvt Ltd, Insignia Limited, IndoUS Capital Advisors Pvt Ltd, eXperio Communication Pvt Ltd, Tara Nutricare, BanyanTree Consulting, DCP Mumbai Police, Mahila Nagri Sahkari Patasansha, Prism Informatics Ltd, Ignify Inc, Virmati, Vayam Technologies Ltd, and many more. USP: Intelligent auto-scaling, pay-per-consume, per-minute billing, prepaid model and instant provisioning (less than 3 minutes) of virtual machines with all popular Linux distros. Migrating a Linux dedicated server on eNlight Cloud has resulted in saving a minimum of 80 per cent of power. Special mention: The company has received prestigious awards from the Government of Maharashtra like the ‘IT Enterprises 2009’ (Special Award), the 'Green IT Infrastructure 2010’, 'IT Enabled Services', ‘Business & Service Excellence Awards 2012’, and many more.

Website: http://www.esds.co.in/

CtrlS | Hyderabad, India
CtrlS offers both public and private cloud services. The public cloud offering has three variants, differentiated by the virtualisation technology deployed; the company operates with Parallels Virtuozzo Containers, Xen and VMware.
Major open source-based offerings: The company's product, 'Real Cloud', is based on a customised Xen hypervisor. The company has more than 100 open source images on the cloud, which include CMS, HRMS and CRM applications. All of these can be provisioned in a matter of 15 minutes from the cloud, ready to be used.
Leading clients: These include one of the largest telecom companies, the largest media house in the country and some of the leading portals.
USP: A choice of over 142 OS images, instant provisioning, an easy-to-use Virtual Datacenter Control Console for instant deployment, scaling and multiple-VM management, high availability on clustered environments for all cloud users, and enhanced levels of security with a dedicated firewall for each cloud user.
Special mention: Gartner has listed CtrlS amongst the top cloud providers in India.
Website: http://www.ctrls.in





TIPS & TRICKS

Record whatever you do in the terminal

Have you ever felt that you should record everything you do in the terminal in a file? Then try out the following tip. There is a command named script which can be used with option –a to append the output to a file. Given below is an example that will show how it works:

Mandriva~:$ script -a lfy
Script started, file is lfy
Mandriva~:$ uname -a
Linux localhost.localdomain 2.6.33.5-desktop-2mnb #1 SMP Thu Jun 17 21:30:10 UTC 2010 i686 i686 i386 GNU/Linux
Mandriva~:$ uname
Linux
Mandriva~:$ exit
exit
Script done, file is lfy

Here, the name of the file is lfy. You can verify it later by using the command given below:

Mandriva~:$ cat lfy
Script started on Mon 16 May 2011 02:09:47 AM EDT
Mandriva~:$ uname -a
Linux localhost.localdomain 2.6.33.5-desktop-2mnb #1 SMP Thu Jun 17 21:30:10 UTC 2010 i686 i686 i386 GNU/Linux
Mandriva~:$ uname
Linux
Mandriva~:$ exit
exit
Script done on Mon 16 May 2011 02:10:32 AM EDT

—Sibi, psibi2000@gmail.com

Wonders of VIM

VIM has a very useful command set. Here are a few commands that can be used to increase your productivity.
VIM as a file comparator: Use the '-d' switch to compare two files in VIM. This command splits the VIM screen vertically and shows the differences:

vim -d file1 file2

To load new files in separate windows: If you have a file named 'first.txt' loaded already in VIM, then use ':split second.txt' to load another file named 'second.txt' in a separate window—VIM will split the screen horizontally and load the second file. You can use ':vsplit' to split the screen vertically. 'Ctrl+w' can be used to switch between the windows.
VIM as a command: Normally, we use VIM as an editor; however, it can also be used as a command, executing VIM commands passed with the '-c' switch. Here is a command to replace all '>' characters with '>>' in a file FILE.TXT without opening VIM:

vim -c ":s/>/>>/g" -c ":wq" FILE.TXT

To open a file in read-only mode: Use the '-R' switch to open a file in read-only mode; later on, '!' can be used to forcefully write to the file.

—satya prakash, satya.comnet@gmail.com

Check your processor and OS architecture

You might want to install a 64-bit OS on your machine, but the processor might just be 32-bit compatible. Sometimes it happens the other way too, i.e., you install a 32-bit OS on a machine that has a 64-bit processor. Here is how to find out whether the installed OS as well as the CPU are 64-bit or 32-bit. Given below is the command that will output the details of the OS installed:

$ uname -m

The result for a 64-bit OS installation (for x86_64 architecture):

x86_64

The result for a non-64-bit OS installation (for i686 architecture):

i686

To know about the processor, run the following command:

$ lshw -class processor | grep width

Shown below is the result for a 64-bit installation:

width: 64 bits

The result for a 32-bit installation:

width: 32 bits

Note: Please install lshw if it is not already installed on your system.

—Sachin P, iclcoolster@gmail.com

Find your OS and distribution name

Here is a tip that will let you know the name of the OS, along with other details:

[root@vl-pun-blg-qa27]# lsb_release -a
LSB Version:    :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: CentOS
Description:    CentOS release 5.5 (Final)
Release:        5.5
Codename:       Final

—Narendra Kangralkar, narendrakangralkar@gmail.com

Let your Linux system welcome you

Create the following script and name it welcome.sh:

echo "Hi zades you are welcome today is " | festival --tts
date | cut -d" " -f 1-3 | festival --tts

Now put the command sh welcome.sh at start-up. This will allow the script to run every time you log in to your system. Once done, restart your system to hear the message that is written in the echo command. The festival command is used to convert the text to voice. You can use this command in many ways according to your creativity. Do remember to check that you have festival installed before trying this tip.

—Vinay Jhedu, vinay.komal100@gmail.com

Ignoring the case during TAB-completion

By default, TAB-completion is not useful if the name of the file or directory starts with an uppercase letter. You can make your shell totally ignore the case for the name by adding the following entry in /etc/inputrc:

set completion-ignore-case on

Then restart your shell. From now onwards, TAB-completion will complete your file or directory name, and completely ignore the case. Do remember to make changes to inputrc only as the root user. You can read more about this in the manual pages of readline:

man readline

—Srikanth Vittal, vi.srikanth@gmail.com

Sudoing with Fedora

Ever felt tired of entering the super-user password after typing 'su -c' again and again? Type 'su -c visudo' just once and uncomment the following line:

# %wheel ALL=(ALL) ALL

Replace 'wheel' with your sudo username. So if the username is egghead, the line becomes:

%egghead ALL=(ALL) ALL

Save and quit. You're good to use egghead as the sudo user.

—A. Datta, webmaster@aucklandwhich.org

Share Your Linux Recipes! The joy of using Linux is in finding ways to get around problems—take them head on, defeat them! We invite you to share your tips and tricks with us for publication in LFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at www.linuxforu.com. The sender of each published tip will get an LFY T-shirt.




EnterpriseDB Postgres Plus Subscription for your successful

PostgreSQL/Postgres Plus deployments Includes... Predictable cost for your support via Remote, Email and Telephonic support for your production systems

Unlimited number of incidents supported

Software updates upgrades, patches and technical alerts service Web portal access and knowledge base access with PDF documentation

The Postgres Plus Solution Pack provides high value add-on tools to PostgreSQL for Administrative Monitoring, Data Integration across multiple servers, Availability, Security, Performance and Software Maintenance.

Postgres Enterprise Manager (PEM) The only solution that allows you to intelligently manage, monitor, and tune large numbers of Postgres database servers enterprise-wide from a single console.

SQL Protect Protects your PostgreSQL and Advanced Server data against multiple SQL virus injection vectors by automatically learning safe data access patterns and collecting attack data.

PL/Secure for PL/PgSQL Protects your server side database code and intellectual property from prying eyes for both internal and packaged applications without any special work on the part of the developer!

Updates Monitor

Migration Toolkit

SQL Profiler

Eases your installation maintenance burden by notifying you when updates to any components are available and assists you in downloading and installing them.

Fast, flexible and customized database migration from Oracle, SQL Server, Sybase, and MySQL to PostgreSQL and Postgres Plus Advanced Server.

A developer's friend to find, troubleshoot, and optimize slow running SQL fast! Provides on-demand or scheduled traces that can be sorted, filtered and saved by users and database.

xDB Replication Server Provides easy data integration between PostgreSQL based servers and between Oracle and PostgreSQL allowing Oracle users to dramatically reduce their Oracle license fees

You can find more details on the following links: http://www.enterprisedb.com/products-services-training/subscriptions http://www.enterprisedb.com/postgresql-products/premium

For inquiries contact at: sales@enterprisedb.com EnterpriseDB Software India Private Limited Unit # 3, Ground Floor, Godrej Castlemaine, Sassoon Road Pune – 411001 T +91 20 3058 9500 F +91 20 3058 9502 www.enterprisedb.com


Hurry! Offer expires October 31, 2012

