THE COMPLETE MAGAZINE ON OPEN SOURCE
Volume: 10 | Issue: 10 | ₹100
Network And Cloud Monitoring With Nagios
LINUX For You is now Open Source For You

THE COMPLETE MAGAZINE ON OPEN SOURCE
Volume: 01 | Issue: 03 | Pages: 112 | December 2012 | ₹100 | US$ 12 | S$ 9.5 | MYR 19
COMBAT NETWORK INTRUSION WITH OSS (FREE DVD)
Getting Started With Wireshark | Use The Built-in Security Features In Your FOSS Distro | A Look At OpenSSH, Netcat And PF | A List Of Network Security Solutions Providers (India, US, Singapore, Malaysia)

Android Corner: Get Fit With Android | Intents In Android | Integrating Android With jQuery Mobile

Queues In Web Applications

“Mozilla Measures Success By How Much We Are Doing To Improve The Overall Health Of The Open Web” —Mozilla Foundation



OSFY Classifieds: Classifieds for Linux & Open Source IT Training Institutes

WESTERN REGION

Linux Training & Certification Courses Offered: RHCSA, RHCE, RHCVA, RHCSS, NCLA, NCLP, Linux Basics, Shell Scripting, (Coming soon) MySQL Address (HQ): 104B Instant Plaza, Behind Nagrik Stores, Near Ashok Cinema, Thane Station West - 400601, Maharashtra, India Contact Person: Ms. Swati Farde Contact No.: +91-22-25379116 / +91-9869502832 Email: mail@ltcert.com Website: www.ltcert.com

NORTHERN REGION

GRRAS Linux Training and Development Center Courses Offered: RHCE, RHCSS, RHCVA, CCNA, PHP, Shell Scripting (online training is also available) Address (HQ): GRRAS Linux Training and Development Center, 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur, Rajasthan, India Contact Person: Mr. Akhilesh Jain Contact No.: +91-141-3136868 / +91-9983340133, 9785598711, 9887789124 Email: info@grras.com Branch(es): Nagpur, Pune Website(s): www.grras.org, www.grras.com

SOUTHERN REGION Veda Solutions Courses Offered: Linux Programming and Device Drivers, Linux Debugging Expert, Adv. C Programming Address (HQ): 301, Prashanthi Ram Towers, Sarathi Studio Lane, Ameerpet, Hyderabad - 500 073, India Contact Person: Mr. Sajith Contact No.: +91-40-66100265 / +91-9885808505 Email: info@techveda.org Website: www.techveda.org

Advantage Pro Courses Offered: RHCSS, RHCVA, RHCE, PHP, Perl, Python, Ruby, Ajax; a prominent player in open source technology Address (HQ): 1 & 2, 4th Floor, Jhaver Plaza, 1A Nungambakkam High Road, Chennai - 600 034, India Contact Person: Ms. Rema Contact No.: +91-9840982185 Email: enquiry@vectratech.in Website(s): www.vectratech.in

Linux Learning Centre Courses Offered: Linux OS Admin & Security, Courses for Migration, Courses for Developers, RHCE, RHCVA, RHCSS, NCLP Address (HQ): 635, 6th Main Road, Hanumanthnagar, Bangalore - 560 019, India Contact Person: Mr. Ramesh Kumar Contact No.: +91-80-22428538, 26780762, 65680048 / +91-9845057731, 9449857731 Email: info@linuxlearningcentre.com Branch(es): Bangalore Website: www.linuxlearningcentre.com

IPSR Solutions Ltd. Courses Offered: RHCE, RHCVA, RHCSS, RHCDS, RHCA; produced the highest number of Red Hat professionals in the world Address (HQ): Merchant's Association Building, M.L. Road, Kottayam - 686001, Kerala, India Contact Person: Benila Mendus Contact No.: +91-9447294635 Email: training@carnaticindia.com Branch(es): Kochi, Kozhikode, Thrissur, Trivandrum Website: www.ipsr.org

*astTECS Academy Courses Offered: Basic Asterisk Course, Advanced Asterisk Course, Call Centre Administrator Course, Free PBX Course Address (HQ): 1176, 12th B Main, HAL II Stage, Indiranagar, Bangalore - 560008, India Contact Person: Lt. Col. T. Shaju Contact No.:+91-9611192237 Email: t.shaju@asttecs.com Website: www.asterisk-training.com

DOES YOUR INSTITUTE PROVIDE TRAINING ON LINUX & OPEN SOURCE?

Get Your Institute listed Here! Call Priyanka on 011-26810601 or Email: efyenq@efyindia.com


www.linuxforu.com

The Only IT Magazine That’s Read By Software Developers & IT Administrators


Open Source For You (Formerly LINUX For You)

World’s #1 Open Source Magazine Get Noticed! Advertise Now! Contact Priyanka @ +91 11 4059 6614 EFY Enterprises Pvt Ltd D-87/1, Okhla Industrial Area, Phase 1, New Delhi 110 020


OFFERS OF THE MONTH

Free cloud credit worth ₹25,000! Sify offers FREE cloud credit worth Rs 25,000!* 1-2 vCPU, 1-2 GB RAM, 75-100 GB SAN storage, 50-100 GB Internet data transfer, DNS, OS: Windows/Linux/open source. All for unlimited usage for the entire 1 month! Visit our website to avail the offer. Hurry! Limited period offer! www.cloudinfinit.com

Get a free trial! eNlight Cloud Computing Services by ESDS. Test how our cloud works: http://demo.enlightcloud.com/ (Login details: Username: demo@demo.com; Password: demo). For more information, call us on 1800-209-3006 / +91-253-6636500 or email us at sales@esds.co.in. Hurry! Limited period offer! www.esds.co.in

Early bird offer! Flat 20% discount on the on-demand course! Get trained & certified on PostgreSQL today! EnterpriseDB, in partnership with GT Enterprises, is offering "Advanced PostgreSQL Administration" training in Bangalore from 12th to 14th Dec 2012. Register now! Inquiries can be sent to: trainingsindia@enterprisedb.com. Hurry! Limited period offer! www.enterprisedb.com

JNR Management Resources Pvt Ltd offers!* Flat 20% discount on all Symantec SSL Certificates and Symantec Safe Site with free technical support! Contact Rajneesh at +91-9311029414 or write to ssl@jnrmangement.com. *Platinum Partner for Symantec SSLs. Hurry! Offer valid till 31st December only! www.mysslonline.com

Year-end offers! *astTECS offers a flat 10% discount on IP PBX & Call Centre Dialer. Contact us at +91-9886879412 / +91-80-42425000. E-mail: info@astTECS.com. Hurry! Offer valid till 31st December! www.astTECS.com

2012 Farewell Offer! Direct 15% discount on all courses! Use coupon code: 2012 Farewell. Contact us at +91-9887789124 or write to info@grras.com. Catch us on facebook.com/grras. Hurry! Offer valid till December 2012! www.grras.org, www.grras.com

To advertise here, contact Priyanka at 011-26810601/02/03 or write to efyenq@efyindia.com. www.linuxforu.com


Powered By

www.facebook.com/linuxforyou

Askar Asku:

I have a folder A and its duplicate, folder B. I want to add some files to folder A and modify some others. I also want to update folder B with the latest modifications made in folder A. In a nutshell, I want all the new and modified files of A to be copied to folder B. Any suggestions? Is there a Bash command for this?

Subin M Thevalakkattu:

Hi friend, please help me. How do I install Picasa 3.9 on Fedora 16 without using Wine?

omps: You have to wait till Google releases it for Linux; else you can use other alternatives such as the Shotwell photo manager, which is as good as Picasa!
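For readers who want to try that, a minimal sketch of installing Shotwell on Fedora 16 from the standard repositories (assuming the package is named shotwell, as it is in Fedora's repos):

# install the Shotwell photo manager from the Fedora repositories
sudo yum install shotwell
# then launch it from the terminal or the applications menu
shotwell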


Ravi Ranjan Singh: You have to add a hard link between folder A and folder B. Use the command $ ln A B at the Bash prompt. It will create the hard link.

Askar Asku: This is what I should have done, but I didn't do that. The files already exist, so giving a hard link wouldn't work here.
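For anyone facing the same problem, a minimal sketch using rsync (assuming folders A and B are in the current directory) copies all new and modified files from A into B:

# -a preserves permissions and timestamps, -v lists what gets copied;
# the trailing slash on A/ copies the contents of A into B, not A itself
rsync -av A/ B/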

Kaarthick Raju:

Hi friends, can you give me the link to download the 32-bit version of Ubuntu for free?

Dinesh Raja: http://www.ubuntu.com/pre-download?distro=desktop&bits=32&release=latest

Kaarthick Raju: Thank you, Dinesh!

Flamuri Shqiptar:

Hello. I have a dual-boot setup: Windows 7 and Ubuntu. I deleted the Ubuntu partition. When I restarted my PC, it wouldn't boot: ERROR: NO SUCH PARTITION. Can you help me?

omps: Reinstall Windows or reinstall Ubuntu. Your partition table will be fixed and you will see both the OSes.

Flamuri Shqiptar: I wanted to reinstall Windows 7 but I couldn't. I tried with XP too, but it did not work. I want to install only Windows 7.

omps: So please install Windows 7 only from the installation disc.

Flamuri Shqiptar: I can't install because there is an error which says: Cannot boot from CD, code: 5.


Aniruddha P Tekade:

I have Ubuntu 12.04 installed and it worked nicely for more than six months. But now I am facing a problem: the Software Centre is not opening. The error that pops up says something about opening the cache. Please help me out.

Daniel Ribeiro: I had the same problem with the Software Centre and the Update Manager. I solved it by updating the old-school way (with the console). After that, my Software Centre worked fine again. Hope I have been able to help you.

Vipin Balakrishnan: Close your Software Centre and Update Manager if they are open. Then execute the commands below in the terminal:


sudo apt-get clean
sudo mv /var/lib/apt/lists /tmp
sudo mkdir -p /var/lib/apt/lists/partial
sudo apt-get clean
sudo apt-get update

Ahmed Bhd:

Can anyone tell me where I can find the system menu in Ubuntu 12.10?

Jatin Khatri: Do you mean the system settings menu? If yes, then go to the Unity lens and type "settings".

Ahmed Bhd: My system language is in French, so where can I find it?


Tulsi Raj: Please install classic menu editor.


Ahmed Bhd: Thanks a lot!

Panckaz D Samay:

How do I install .exe files in Ubuntu?

Suresh Dharavath: Just install the software called wine. After that you can install the .exe files.
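A minimal sketch of that route on Ubuntu (assuming the Windows installer you want to run is called setup.exe):

# install Wine from the Ubuntu repositories
sudo apt-get install wine
# run the Windows installer through Wine
wine setup.exe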



Ajay Songare:

Which is the best open source network analysis tool?

Irfan Naseef P: What do you mean by analysis? To explore your network, NMAP will help. To monitor, NAGIOS will help. To sniff, try Wireshark.

Jatin Khatri: NMAP.

omps: NMAP.
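A minimal sketch of exploring a network with Nmap (192.168.1.0/24 and 192.168.1.10 below are placeholders; substitute your own subnet and host):

# ping-scan the subnet to list live hosts
nmap -sn 192.168.1.0/24
# probe one host for open ports and service versions
nmap -sV 192.168.1.10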

SA Suneesh:

When I tried to install Ubuntu Ultimate Edition 3.4 on my Lenovo laptop, it failed, and it seems that the GRUB installation failed. What should I do?

Shivam Gupta: Either your setup is corrupted or you are installing the OS in a logical partition rather than a primary partition.

Liesa Chiun Chun:

How do I set up security in Linux? Please help.

Karthigeyan Kith:

Is Ubuntu a virus-free OS?

Daniel Ribeiro: Depends on your distro. If you are using an Ubuntu-based distro, here is what I do:
1st - Configure your ufw: https://help.ubuntu.com/community/UFW
2nd - Install rootkit hunter and use it regularly if you install packages from outside the official repositories.

Shubham Verlekar: Yes, Kith, it is completely virus free.

Irfan Naseef P: All Linux flavours are virus free. It's the power of the Linux kernel.
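A minimal sketch of Daniel's two steps on an Ubuntu-based system (the package names below are the standard Ubuntu ones):

# turn on the Uncomplicated Firewall and block unsolicited incoming traffic
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw enable
# install rootkit hunter and run a scan
sudo apt-get install rkhunter
sudo rkhunter --check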

Abdul Shukoor:

Hi, please help. If I install Windows on a system that already has a Linux OS, then Linux no longer boots. Can anyone tell me the reason for and solution to this problem?

Shivam Gupta: Hi friend! First you need to understand why this has happened. If you are working on a Linux-based distro, then your boot loader is (mainly) GRUB, and if you are using Windows, NTLDR is the boot loader. GRUB supports multiple kernels, both Windows and Linux, while NTLDR supports only a single kernel. So, if a machine already has Linux on it, it has the GRUB loader, and when you install Windows, NTLDR replaces GRUB and then only one kernel can be supported, that of Windows. Coming to the solution: first, you need to understand how to add the entry in GRUB. There are many software tools available that let you do this; one of them is GRUB Enhancer. Or, after installing Windows, you can use the EasyBCD software to bring back the Linux distro. Hope I have been able to help you.

Abdul Shukoor: Thanks a lot for the help!
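Another route, without extra software, is to reinstall GRUB from an Ubuntu live CD/USB; a minimal sketch, assuming the Linux root partition is /dev/sda1 and the boot disk is /dev/sda (check with sudo fdisk -l first):

# mount the installed Linux root partition
sudo mount /dev/sda1 /mnt
# reinstall GRUB to the master boot record of the first disk
sudo grub-install --boot-directory=/mnt/boot /dev/sda
# reboot; once back in Linux, run 'sudo update-grub' so Windows is added to the menu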

Isah Salim:

Hey Ubuntu fans! I lost my root password and now I'm trying to use another account to access root privileges, but I can't use sudo because this account does not belong to any group. Any suggestions?

Damanjeet Singh: Change the run level and reset the password.

Isah Salim: How? Can you please show me step by step?

Swapnil Jain: For the step-by-step process, check www.techpage3.com/2012/08/recover-lostroot-password-in-ubuntu.html.

Jatin Khatri: It's here: http://askubuntu.com/questions/121698/how-do-i-reset-a-lost-password-using-recovery-mode-requires-me-to-type-the-pass

Stv Spidér Sikahüa: -root //get passwd directory- *isah zee* Hope this will help you retrieve your lost password.

Tulsi Raj: Reboot the system, hold Shift to enter the GRUB menu, go to recovery mode, enter the root shell and change the password.
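The recovery-mode route suggested above looks roughly like this (assuming the locked-out account is named isah; substitute your own username):

# hold Shift at boot to get the GRUB menu, choose 'recovery mode',
# then 'root - Drop to root shell prompt'
# remount the root filesystem read-write (recovery mode mounts it read-only)
mount -o remount,rw /
# set a new password for the account (or for root)
passwd isah
reboot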




NEW PRODUCTS

Now, revel in Jabra's new headset—CLEAR

Jabra, a GN Netcom brand, has introduced two new Bluetooth 3.0 mono headsets in India, called Jabra CLEAR and Jabra TALK. Jabra CLEAR allows users to free up their hands to browse, play, swipe and text, besides talking wirelessly and listening to music. Ralph Ede, managing director (South Asia), GN Netcom (S) Pte Ltd, said, “Multi-tasking and traffic police penalties are a major issue for the youth in India. Jabra CLEAR is a good option that helps them manage their lifestyle easily.” Jabra’s multi-use technology allows people to pair up the headsets with two Bluetooth devices. Users can pair up these headsets with two phones or with one phone and one music player. Jabra CLEAR has a sleek and shiny exterior that comes in a choice of two colours, black or white. The lightweight design, combined with Ultimate Comfort Eargels, makes for a secure fit. Jabra CLEAR is equipped with a portable car charger that allows you to charge your smartphone as well. The headset is HD voice ready. This feature ensures that your conversations sound as natural as if you were standing right next to the person you are talking to. As and when the mobile operators start offering high-definition (HD) voice on their networks, the headset will be able to deliver rich, crisp and clear HD voice with just a simple software upgrade, whereas Bluetooth products from other brands would require hardware changes to achieve the same results. Jabra CLEAR offers 6 hours of talk time and 8 days of standby time. Price: ₹2,699 Address: Jabra, 7 Northeastern Blvd, Nashua, NH 03062, USA Email: techsupp@jabra.com Ph: 1-800-327-2230 Website: www.jabra.com


Zync introduces 24.6 cm HD Android ICS tablet

Zync has added the Z1000, a 24.6-cm tablet, to its kitty. The tablet runs on Android 4.0 (ICS), which is upgradable to Android 4.1, a.k.a. Jelly Bean. It offers a 5-point multi-touch capacitive HD display with a screen resolution of 1024 x 768. The tablet comes with freebies worth Rs 1350 inside the box. The other features include in-built 3G support with a mini HDMI and a micro USB port, a TF card, a 3.5 mm earphone jack and a DC port. “This is our first attempt at launching a tablet with a multi-touch HD display with a higher screen resolution with the ‘face-unlocking’ feature. This new product will enhance the viewing experience,” said Ashish Garg, director, Zync Global Private Limited.

Price: ` 10,990 Address: Zync Pvt Ltd, Apple Group Of Companies, B-16, Sector-2, Noida, Email: info@zync.in Ph: 0120-4821999; Website: http://www.zync.in/

LG's Optimus Vu to fight Galaxy Note II LG has launched its Optimus Vu phablet in India to compete with Samsung’s Galaxy Note II. The Optimus Vu features a 12.7-cm (5-inch) True HD display and comes with a dual core Qualcomm processor. The phablet is powered by a 1.5 GHz quad core Nvidia Tegra 3 processor and runs on Android 4.0 (ICS). On the storage front, the phablet has 32 GB of internal memory. It has an 8 MP rear camera with an LED flash and a 1.3 MP front camera for video chats and calling. Amit Gujral, assistant general manager, head, Mobile Device Engineering and VAS Group, said, “LG Optimus Vu targets the business class as well as youngsters who want to keep upgrading technology." Price: ` 34,500 Address: LG Electronics India Pvt Ltd, Plot Number 51, Udyog Vihar, SurajpurKasna Road, Greater Noida-201306; Email: lgservice@lgindia.com Ph: 0120- 2560900; Website: http://www.lg.com/in

Zync debuts in the smartphone space Zync has forayed into the smartphone segment with its Z5 Android-based device. The smartphone sports a 12.7-cm (5-inch) TFT display and runs Android 4.0 (ICS). The device is powered by a 1 GHz processor. It has 512 MB of RAM to support multi-tasking. The phone comes with an 8 MP rear camera with a flash and a 0.3 MP front shooter for video calls. The phone is backed by a 2,500 mAh battery. The Z5 is an affordable large-screen dualSIM smartphone, which has an internal storage of 4 GB, expandable up to 32 GB via SD card. Price: ` 9,490 Address: Zync Pvt Ltd, Apple Group of Companies, B-16, Sector-2, Noida Email: info@zync.in Ph: 0120-4821999 Website: http://www.zync.in/


Exhibition & Knowledge Partner

DISCOVER THE FUTURE OF ELECTRONICS IT’S EXCITING. PRESENTS OPPORTUNITIES. February 21 to 23, 2013, Halls 7, 8, 9 & 10 Pragati Maidan, New Delhi

AN INDIAN EXPO FOR THE GLOBAL ELECTRONICS INDUSTRY Realising Opportunities in the Fastest Growing Market

HURRY! TO AVAIL PREMIUM LOCATIONS FOR BOOTHS, CONTACT: Sunil Singh 088000 94210, Arun 088000 94213, 011-40596600, Email: efyenq@efyindia.com


Tablets

Price and specification listings in this section cover the Zync Z1000, HCL ME G1, Micromax A101, Penta T-Pad WS702C, iBall Slide 3G 7334, Karbonn Smart Tab 3 Blade, Croma CRXT1075, ZenFocus myZenTAB 708BH, Zen Ultratab A900, Penta T-Pad WS703C, Adcom Tablet PC APad 721C, Mercury mTAB7, Zync Z930, Asus PadFone, Swipe Tab All in One and EKEN Leopard C70: Android 4.0-class tablets launched in October 2012, with prices ranging from about ₹4,500 to ₹65,000.

Tablets (continued)

Entry-level tablets listed here include the Datawind UbiSlate 7R+, UbiSlate 7C+ and UbiSlate 7Ci (priced between ₹2,999 and ₹4,999), the Winknet Ultimate (TWY300), Winknet Wonder (TWY100), Celkon CELTAB, Micromax Funbook Infinity, Lava E-Tab Z7H and Micromax Funbook Alpha, all Android 4.0 devices launched in August and September 2012.

Netbooks

The netbooks listed are the Samsung N100 and ASUS EeePC X101 (both MeeGo, launched in August 2011, priced around ₹12,000) and the Acer Aspire One Happy (Android, launched in March 2011, ESP ₹15,490).


SMARTPHONES

Price and specification listings in this section cover the Intex Aqua 3.2, Lava XOLO X700, Zync Z5, LG Optimus Vu, HTC Desire X, iBall Andi 4.3j, Karbonn A21, Karbonn A9+, Karbonn A11, Karbonn A7+, Karbonn A1+, Spice Stellar Horizon Mi 500, Micromax A90S Superfone Pixel, Micromax A110 Superfone Canvas 2, Sony Xperia J and Reliance Smart V6700: Android 2.3 and 4.0 handsets launched in October 2012, priced from about ₹3,790 to ₹34,500.

SMARTPHONES (continued)

Also listed: the Idea Aurus, Samsung Galaxy Chat GT-B5330, Samsung Galaxy Note 2, Sony Xperia Miro, Sony Xperia Tipo, Sony Xperia Tipo Dual, Micromax A87 Ninja 4, Micromax A25, Samsung Galaxy Y Duos Lite, Micromax A57 Superfone Ninja 3, Wicked Leak Wammy Note, MTS MTag 351, Micromax Superfone Pixel A90, Map My India Car Pad 5, Intex Aqua 4.0 and Karbonn A18: Android 2.3 and 4.0 handsets launched in August and September 2012, with the Samsung Galaxy Note 2 (₹38,990) at the top of the price range.


Now, Android 4.2 will help combat malicious apps

In an effort to bring down malware attacks on the Android platform, Google has introduced a new security system in Android 4.2. The new feature works on the device and scans any new apps that you install from Google Play or from third-party app stores. The feature is completely opt-in, so the first time you install an app that is not from Google Play, your device running Android 4.2 will ask you if you want the app to be scanned. That apart, the new Android 4.2, a minor update of Android 4.1, comes packed with an array of new features. Here are some of them:

Photo Sphere camera: Users can click pictures in every direction. According to Google, “With Android 4.2, you can snap pictures in every direction that come together into incredible, immersive photo spheres that put you right inside the scene.”

Gesture typing: Users can just glide their fingers over the letters in order to type, and lift them off after each word. Users don’t have to worry about spaces because they get added automatically. The keyboard can anticipate and predict the next word, so you can finish entire sentences just by selecting suggested words. You can now power through your messages like never before. Android's dictionaries are now more accurate and relevant. With improved text-to-speech capabilities, voice typing on Android is even better.

Multiple users: The owner of the device can give users their own space through user accounts. According to sources at Google, “Everyone can have their own homescreen, background, widgets, apps and games–even individual high scores and levels! And since Android is built with multi-tasking at its core, it’s a snap to switch between users–no need to log in and out.” This feature is available only on tablets.

Wireless display: Android 4.2 allows devices to enable wireless display. Users can share movies, YouTube videos, and anything that’s on their screen, on an HDTV.

Fast and smooth: Google claims that the new Android 4.2 operating system is fast, fluid and smooth.

Raspberry Pi-based box makes rooting Android phones easy

Rooting Android phones with a Raspberry Pi-based box will now be simpler and more interesting. According to Wikipedia, “Android rooting is the process of allowing users of smartphones, tablets and other devices running the Android mobile operating system to attain privileged control (known as ‘root access’) within Android's subsystem.” All you need to do is connect your device to the ingenious box named CASUAL, an abbreviation for 'Cross-platform ADB Scripting Unified Android Loader'. CASUAL, developed by long-time hardware hacker Adam Outler, lets the user plug in the device and feeds its exploits over ADB. Theoretically, you don't need a screen, mouse or keyboard to operate CASUAL. If it is deployed correctly, users can simply connect an Android smartphone with a known exploit to a system running CASUAL. The exploits will then run on their own. Outler has made a headless box, which allows users to simply plug in their devices for rooting. The box he has created is based on a Raspberry Pi computer with an Arduino battery. This makes the device extremely portable. The box comes fitted with five LEDs and a toggle switch, which are used to communicate with the user. As soon as you connect a phone to the device, the LEDs indicate this, and finally a ‘pass’ or ‘fail’ LED glows to tell you how the process went. According to Outler, the success rate of the device is around 70 per cent. The exploits loaded in the box can handle almost all the standard Android firmware versions. Since it is open, anyone can contribute to CASUAL if they find a new exploit. Outler said that anyone can build such a device simply by using a headless mode designed for embedded devices, like the Raspberry Pi.

Apache introduces CloudStack 4.0.0-incubating

The Apache CloudStack project has announced the 4.0.0-incubating release of the CloudStack Infrastructure-as-a-Service (IaaS) cloud orchestration platform. This is the first release from within the Apache Incubator, the entry path into the Apache Software Foundation (ASF). Apache CloudStack is an integrated software platform that allows users to build a feature-rich IaaS. CloudStack includes an intuitive user interface and a rich API for managing the compute, networking, accounting and storage for private, hybrid or public clouds. The project entered the Apache Incubator in April 2012. The 4.0.0-incubating release represents the culmination of more than six months' work by the CloudStack community. The release includes more than a dozen new features, many bug fixes, security fixes, and a fully audited code base that complies with ASF guidelines. The new features include: inter-VLAN routing (VPC); site-to-site VPN; local storage support for data volumes; virtual resource tagging; secure console access on XenServer; the capability to upload an existing volume to a virtual machine; dedicated high-availability hosts; support for the Amazon Web Services API (formerly a separate package); AWS API extensions to include tagging; support for Nicira NVP (L2); Ceph RBD support for KVM; support for Caringo as secondary storage; and KVM hypervisor support upgraded to work with Ubuntu 12.04 and RHEL 6.3. In addition to the official source code release, individual contributors have also made convenience binaries available on the Apache CloudStack download page.


Adieu Ubuntu Linux 11.04!

If you are a user of Ubuntu Linux 11.04 (Natty Narwhal), it's time to switch over to something more exciting! Canonical has released Ubuntu Linux 12.10, or Quantal Quetzal, packed with new features, and has announced the end of life of Natty Narwhal. The company has asked users of Ubuntu 11.04 to upgrade in a stepwise manner by first moving to Ubuntu 11.10 (Oneiric Ocelot). The instructions are available on the Ubuntu site. Kate Stewart, Ubuntu release manager, said that Ubuntu 11.10 and Ubuntu 12.04 (Precise Pangolin), the next step along the way, are still actively supported with security updates and select high-impact bug fixes.
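On a stock install, each hop of that upgrade path can be driven from the terminal; a minimal sketch (run the release upgrade again from 11.10 to reach 12.04, and so on):

# bring the current release fully up to date first
sudo apt-get update && sudo apt-get dist-upgrade
# then start the upgrade to the next release (11.04 to 11.10)
sudo do-release-upgrade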

Raspberry Pi gets more teeth with RISC OS

Now you can run your Linux-based $25 Raspberry Pi minicomputer with RISC OS. The operating system has been named RISC OS Pi. This is the first ‘official’ release of RISC OS exclusively meant for the Raspberry Pi. The operating system is meant to be written onto an SD card with a storage capacity of 2 GB or more, and can be downloaded free from the Raspberry Pi download site, as an SD card image or as a torrent. Interested users can also buy a specially-branded pre-programmed SD card from RISC OS Open. RISC OS Open Limited (ROOL) manages the source code for RISC OS. RISC OS is a computer operating system designed in Cambridge, England


by Acorn. First released in 1987, its origins can be traced back to the original team that developed the ARM microprocessor. RISC OS is owned by Castle Technology Limited. “I have spent a lot of time while I was young with Acorn Archimedes and RiscPC products. This is a great moment for me personally to see an evolved version of the original ARM operating system to be featured in the Raspberry Pi. From the Foundation’s point of view, we welcome the arrival of an alternative desktop environment, offering a rich suite of applications, and with BBC BASIC only a few keystrokes away,” said Eben Upton, founder, Raspberry Pi Foundation.
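Writing the downloaded image to an SD card from a Linux PC takes one command; a minimal sketch, assuming the image file is named riscos.img and the card appears as /dev/sdX (double-check the device name with lsblk, since dd will overwrite whatever it is pointed at):

# unmount any auto-mounted partitions on the card, then write the image
sudo dd if=riscos.img of=/dev/sdX bs=4M
# flush write caches before removing the card
sync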

Windows 8 may fail to woo consumers: Linux Foundation

Microsoft may be going gaga over its latest Windows 8 operating system, especially designed to function on traditional desktops as well as on touch devices. But the Linux Foundation feels that this strategy of Microsoft's may not work, as consumers are used to Linux-based software. Jim Zemlin, executive director of the Linux Foundation, feels that consumers will use Windows 8 with a keyboard and mouse instead of a touch screen. “Microsoft is stuck in the space between the desktop-driven, cost per software licence world they dominated and the era we are just now entering: a world driven by open source software and services,” said Zemlin, in his official blog posting. Zemlin further explained that computers are slowly becoming synonymous with portability, and that Microsoft's hybrid operating system approach for Windows 8 is a good move but “…perhaps bad to apply a ‘Jack of all trades’ approach that may not work so well for their existing bread and butter desktop users.” In another instance, Microsoft said that it has disabled USB charging in its recently launched Surface tablets as it wants people to carry either a laptop or a Surface, or even better, only a Surface to meet all their computing requirements. But a Gartner analyst has something different to say: “I like Windows 8 on a Tablet, but not on a desktop, which is unfortunately where I will use it.”

New to Linux? Try Linux Lite 1.0.0

If you are a newbie in the world of Linux, the newly launched open source distro Linux Lite 1.0.0 is just for you. The official announcement about the availability of the distro was made by Jerry Bezencon. According to a post, Linux Lite is free for everyone to use and share, and suitable for people who are new to Linux or for people who want a lightweight environment that is also fully functional. The makers of the distro claim that it was created for three reasons. They explain, “One, to show people just how easy it can be to use a Linux-based operating system—to dispel myths about how scary Linux operating systems are; two, to help create awareness about Linux-based operating systems; and three, to help promote this community.”

LibreOffice 3.6.3 released

This piece of news is sure to bring a smile to the faces of open source fans. Version 3.6.3 of the LibreOffice open source productivity suite has been released by The Document Foundation. The new release comes with more than 90 bug fixes, including solutions to layout problems and overflowing margins, as well as issues that arise while importing and exporting ODF documents. LibreOffice 3.6.3 is available for Linux, Windows and Mac OS platforms and, according to official sources, “This new release is another step forward in the process of improving the overall quality and stability for any kind of deployment, on personal desktops or inside organisations and across companies of any size.”

Florian Effenberger, chairman of the board, The Document Foundation, has requested LibreOffice users to support the foundation financially by generous donations: “If you enjoy LibreOffice, please donate and help us with our budget for 2013,” said Effenberger in his Google+ page. After the Foundation's Hard Hacks initiative, several fixes in the third maintenance update to the 3.6.x group were introduced. The Hard Hacks programme was initiated to tackle complicated bugs in the software's code.


RHCE RHCSS

ADVANCED LINUX MODULES

SHELL SCRIPT | PHP & MYSQL | CCNA

ONLINE TRAINING ONE TO ONE, INSTRUCTOR LED TRAINING

India’s first network security education provider now available at Four different locations

SAVE MONEY & TIME Get yourself CERTIFIED ONLINE for Details, Call us.

100% result in RHCE/RHCSS exam. Special Offer Only For Online Training

www.grrasspace.com VPS Servers Email Marketing Solutions Java Hosting

Shared Hosting Server Management Domain Registration

JAIPUR : GRRAS Linux Training and Development Center 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur(Raj.) Tel: +91-141-3136868, +91- 9887789124, +91- 9785598711, Email: info@grras.com

Free original courseware. Hurry: take a demo (totally free). For more queries, call 09887789124

PUNE: GRRAS Linux Training and Development Center 18, Sarvadarshan, Nal-stop, karve Road, Opposite Sarswat-co-op Bank, Pune 411004 M: +91-9975998226, +91-7798814786 Email: info.pune@grras.com

NAGPUR: GRRAS Linux Training and Development Center 53 Gokulpeth, Suvrna Building, Opp. Ram Nagar Bus Stand and Karnatka sangh Building, Ram Nagar Square, Nagpur- 440010, Phone: 0712-3224935, M: +91-9975998226, Email: info.nagpur@grras.com

www.grras.org


Giving more details about the distro, Bezencon said, “You get a Web browser, email, a music and movie player, a CD/DVD burner, office software, voice chat, a photo editor, network access tools, printing and the Linux Lite Help Manual. All the tools you need are available out-of-the-box—to get you up and running straight away and using Linux Lite productively as soon as you've installed it.” Linux Lite is based on Ubuntu 12.04 LTS with five years’ support. The following software is included: GParted, LibreOffice Writer, LibreOffice Calc, the XFBurn CD/DVD burner, the VLC Media Player, the Firefox Web browser with Flash, OpenJDK Java v6, Mumble Voice Chat, Thunderbird Email, the XChat IRC client, the GIMP image editor, the Leafpad text editor and Xarchiver.

Attachmate launches Reflection 2011 R3

Attachmate Corporation has announced the availability of Attachmate Reflection 2011 R3, a family of terminal emulation and PC X server software that securely connects desktop and mobile users to mission-critical business applications. Compatible with Windows 7 and Windows 8 Certified, Attachmate’s new terminal emulator software connects users to enterprise applications running on IBM mainframes (including the new IBM zEnterprise EC12), IBM AS/400 (IBM System i), UNIX/Linux servers and HP NonStop. “With the impending launch of Windows 8 and excitement surrounding the new operating system, it’s important to support customers regardless of their operating system choice in this evolving landscape,” said Tom Bice, vice president, Marketing and Product Management at Attachmate. “As the leader in the market, our customers expect Attachmate to assist them with reliable access to critical business information while adopting new technologies like Windows 7, yet also looking forward to the next version of Windows, whether on a PC or mobile device.” In addition to assisting customers in easing migration pains, Reflection 2011 R3 provides organisations with an opportunity to leverage the latest productivity and security enhancements whether they are upgrading from previous Attachmate versions or replacing other solutions in an effort to standardise. Reflection 2011 R3 is available immediately for purchase. For more information on pricing, please contact sales@attachmate.com.

An Android smartphone for the visually impaired

Now even the visually impaired can enjoy the power-packed features of an Android smartphone. Qualcomm has introduced its Ray mobile device for the visually impaired. According to the company, the Ray device is “…an always-on, easy-to-use, multi-function smartphone synchronised with audio books from Israel's Central Library for the Blind, Visually Impaired and Handicapped.” Till now, the blind and visually impaired were forced to use simple 2G mobile phones catering only to voice telephony. The World Health Organisation has highlighted the shocking truth that about 285 million people are visually impaired worldwide: 39 million are blind and 246 million have low vision. Most of the visually impaired are over 50 years old, an age group that forms 20 per cent of the world's population. The less fortunate visually impaired have to depend on devices like audio book-readers, colour readers, navigation tools, raised Braille labels, special


bar-code scanners, and large-buttoned, voiceenabled MP3 players. Most of these devices are costly. The Ray smartphone is based on the Android operating system and is powered by Qualcomm's Snapdragon processor. According to Qualcomm, the Project Ray device combines the goodies of smartphone technology along with multiple specialty devices for the blind into one single, cost-effective handset with continuous mobile broadband connectivity. The user interface (UI) is designed for eye-free interaction. The project is being tested on around 100 participants across Israel.


Here comes NimbRo-OP, an open source robot!

Professor Sven Behnke and his team from the University of Bonn have come up with an open source robot named NimbRo-OP. Professor Behnke's team previously won the Louis Vuitton Cup for 'Best Humanoid' at RoboCup 2012. This model is also based on the same NimbRo design but, this time, NimbRo-OP has got an attractive white head. NimbRo-OP is 95 cm tall (37.4 inches) and weighs 6.6 kg (14.55 pounds). NimbRo-OP is powered by a dual-core AMD E-450 processor along with 2 GB of RAM and a 64 GB solid-state drive. The robot is armed with Wi-Fi, a wide-angled video camera (Logitech C905) and a Robotis CM-730 controller with a 3-axis accelerometer and a 3-axis gyro. The software is released under free and open licences. The GNU/Linux software is based on DARwIn-OP. Those who are interested in the project can own a fully assembled robot from the University of Bonn at an introductory price of 20,000 Euros plus taxes and shipping. The main reason for open sourcing this project is to enable other groups to use the robot, further improve it and take it to upcoming RoboCup events. The next RoboCup will happen in June 2013 in the Netherlands.

Meet Another Rival of Raspberry Pi

With the invention of the Raspberry Pi, the stakes in the battle for supremacy in the small-sized computer space have gone up. One of the latest entrants in this area is Olimex, the Bulgarian device maker. The company has introduced the OLinuXino, a small single-board computer, which is totally open source. Olimex claims that this computer has a better processor and more built-in I/O ports than the Raspberry Pi. The developers of the project say that both the hardware and the software of the device are open source. This means that one can not only modify the software of the device but can also download the source files for the hardware. This can help in developing and selling your own products based on the design of the Olimex PC, which is available in two variants, both of which sell for less than $60.

At ADVANTAGE PRO, we do not make tall claims but produce 99% results month after month – TAMIL NADU'S NO. #1 PERFORMING REDHAT PARTNER RHCSS RHCVA RHCE

Only @ Advantage Pro

Redhat Career Program from THE EXPERT

Also get expert training on MySQL-CMDBA, MySQL-CMDEV, PHP, Perl, Python, Ruby, Ajax...

New RHEL 6.2 Exam Dates (RHCSA/RHCE) @ ADVANTAGE PRO for Dec - Jan - Feb (4th Quarter 2012-13): Dec. 10, 24; Jan. 21, 28; Feb. 4, 18, 25, 28

“Do Not Wait! Be a Part of the Winning Team”

Regd. Off: Wing 1 & 2, IV Floor, Jhaver Plaza, 1A, N.H. Road, Nungambakkam, Chennai - 34. Ph: 98409 82185 / 84 Telefax: 28263527 Email: enquiry@vectratech.in www.vectratech.in


The A13 OLinuXino features a 1 GHz Allwinner A13 ARM Cortex-A8 processor and Mali 400 graphics, and the iMX233-OLinuXino-MAXI has a 454 MHz Freescale i.MX233 processor. The latter has been specifically designed for embedded computing applications, while the A13 model can be used as the basis for an Android or Linux computer.

Android users: Beware of SMiShing!

It seems that hackers are leaving no stone unturned in attacking Android devices. Recently, security researcher Xuxian Jiang and his team at North Carolina State University uncovered a new vulnerability in the Android 4.0.4 (ICS) OS—a new kind of phishing named 'SMiShing'. This vulnerability is also present in the latest Jelly Bean flavour of Android. Jiang has previously discovered several other Android vulnerabilities, and he has posted a YouTube video demonstrating the new phishing style. “While continuing our efforts on various smartphone-related research projects, we came across a smishing (SMS-Phishing) vulnerability in popular Android platforms. This vulnerability allows a running app on an Android phone to fake arbitrary SMS text messages, which will then be received by phone users. We believe such a vulnerability can be readily exploited to launch various phishing attacks,” wrote Jiang in an official blog post. According to Jiang, one serious aspect of the vulnerability is that it does not require the (exploiting) app to request any permission to launch the attack. (In other words, this can be characterised as a WRITE_SMS capability leak.) Another serious aspect is that the vulnerability appears to be present in multiple Android platforms. To quote Jiang, “… in fact, because the vulnerability is contained in the Android Open Source Project (or AOSP), we suspect it exists in all recent Android platforms, though we have so far only confirmed its presence in a number of phones, including Google Galaxy Nexus, Google Nexus S, Samsung Galaxy SIII, HTC One X, HTC Inspire, and Xiaomi MI-One. The affected platforms that have been confirmed range from Froyo (2.2.x), Gingerbread (2.3.x), Ice Cream Sandwich (4.0.x) and Jelly Bean (4.1).”

Google releases Chrome 23

If you love using the much admired Google Chrome, here is an update that will interest you. Google has released the latest version of the Chrome Web browser for Windows, Mac and Linux—23.0.1271.64. If you already use Chrome, this update will reach you automatically, but those who want to give Chrome a try for the first time can get hold of the latest release by visiting google.com/chrome. So if you’re wondering what’s new in Chrome 23: according to Google’s release notes, this release comes with a slew of new features that “…include GPU accelerated video decoding on Windows and easier website permissions.” This basically means you can expect better battery life. The search giant also said that “…dedicated graphics chips draw far less power than a computer’s CPU, so using GPU-accelerated video decoding while watching videos can increase battery life significantly.” Besides, it is now “…easier to view and control any website’s permissions for capabilities such as geolocation, pop-ups, and camera or microphone access.” Users can also send a ‘Do Not Track’ request to websites.

SUSE Linux Enterprise high availability extension gets SAP NetWeaver integration

SUSE recently announced that its SUSE Linux Enterprise High Availability Extension 11 Service Pack 2 is certified for integration with the SAP NetWeaver technology platform. The SAP certification scenario, which is called ‘SAP NetWeaver High Availability Cluster (NW-HA-CLU) 730’, establishes a reference architecture to help ensure compliant implementation of high-availability scenarios for running SAP solutions.

The SAP Integration and Certification Center (SAP ICC) has certified SUSE Linux Enterprise High Availability Extension 11 SP2 via the NW-HA-CLU 730 integration scenario for the Linux x86_64 platform. SUSE Linux Enterprise High Availability Extension is included with SUSE Linux Enterprise Server 11 SP2 for SAP Applications. The high-availability extension helps customers reduce unplanned downtime and further ensures continuous access to mission-critical applications and data. This reference combination offers customers improved speed, reliability and flexibility. First, it helps customers reduce time to revenue by accelerating integration projects and easing implementation. Second, the technical alignment of SUSE with SAP solutions through certification also improves reliability and supportability. Finally, improved integration of SUSE Linux Enterprise High Availability Extension with the control framework of SAP solutions gives customers the flexibility to add new functionality.


Insight For U & Me

Beware of Malicious Shortened URLs

Attackers use shortened links, which may or may not be legitimate, to lead unsuspecting users to malicious websites that are designed to attack any system using a vulnerable browser.

The Internet is now a minefield of malware. Every year, hundreds of millions of new threats appear and cyber criminals are constantly changing tactics, hoping to catch users off-guard. Shortened URLs have become popular in recent years as a means of conserving space in character-limited text fields, such as those used for micro-blogging. URL shortening services allow people to submit a URL and receive a second, specially coded shortened URL that redirects to the original URL. Attackers are taking advantage of this type of service because it helps to hide the actual destination URL.

Social networks are a security concern for organisations because they provide an effective platform for attackers to launch this type of attack. Users who see a link posted by a friend on a social networking site may be more likely to trust (and click on) it, with little fear of danger. Currently, most malicious URLs on social networking sites lead to websites that are hosting attack toolkits. Using malicious shortened URLs can be a very successful method of attack. As more people join and frequent social networking sites and the sophistication of these sites grows, it is likely that more complex attacks will be perpetrated through them, including the use of malicious shortened URLs.

Small businesses are nimble, and that can provide them with a competitive edge in today’s Internet-based market. And with more and more business being conducted online, keeping your sensitive information safe is more critical than ever. Hackers do not care what the size of your business is. What hackers do like about small businesses is that they tend to have more money in the bank than the typical end-user and fewer cyber defences compared to a larger company. Using easily available attack toolkits, even a relative novice can infect your computers and extract all the information needed to steal your bank accounts' login and password details, or steal a list of your customers' credit card numbers.

We all use social networks and so do cyber criminals. The viral nature of these social networking services means that the right messages can be spread with little expense. If that wasn’t bad enough, cyber criminals are preparing to get you on your smartphones and tablets. Many businesses now have employees using smartphones and tablets to access corporate data, but have not yet implemented security policies for these devices. A sharp increase in destructive software developed specifically for these devices is anticipated. Hackers are already taking note of this opportunity to exploit a new market.

Everyone loves clicking on links!

This just goes to show why social engineering is as effective in spreading malware today as it was exactly ten years ago, when the Anna Kournikova virus sped across the Internet almost as fast as the tennis star’s serve. The virus was so successful because, well, let’s face it, everyone wanted to check out the athletic beauty’s latest picture. In the end, though, all they got was a malware infection and a hard life lesson: Curiosity killed the cat!

Building trust and securing the weakest links

Always-on SSL: Companies like Facebook, Google, PayPal and Twitter are offering users the option of persistent SSL encryption and authentication across all the pages of their services (not just login pages). Not only does this mitigate man-in-the-middle attacks, but it also offers end-to-end security that can help secure every Web page that visitors to the site use, not just the pages used for logging in and for financial transactions.

Extended validation SSL certificates: EV SSL certificates offer the highest level of authentication and trigger browsers to give users a very visible indicator that the user is on a secured site by turning the address bar green. This is valuable protection against a range of online attacks.

You need to avoid compromising your trusted relationship with your customers by securing websites against MITM attacks and malware infection in the following ways:
- Implementing always-on SSL.
- Scanning your website daily for malware.
- Regularly assessing your website for vulnerabilities.
- Choosing SSL certificates with extended validation to display the green browser address bar to website users.
- Displaying recognised trust marks in highly visible locations on your website to inspire trust and show customers your commitment to their security.
- Getting your digital certificates from an established, trustworthy certificate authority who demonstrates excellent security practices, and by protecting your private keys.

By: Prashant Jain
The author is the CEO and founder of JNR Management Resources Pvt Ltd, with 10 years of experience in website security solutions. He is a consultant in PKI, authentication and encryption solutions. He is also a chartered accountant with 25 years of professional experience.



For U & Me

Overview

Sports and technology have always been tied together in one way or another. With the miniaturisation of technology and increased functionality, it is now possible to turn every portable device into a sports hub.

It all started in 1985, when the first cyclo-computer was invented. Till recently, such devices and tracking techniques were limited to the elite. Since then, these gadgets have shrunk, making it possible for you to hold a cyclo-computer, a heart rate monitor and a complete personal trainer in your hand.

Sports computers or tracking devices made their debut as proprietary computers, courtesy of Garmin, Sigma and many others. Later on, sports tracking apps like Nike+ made their entry. However, due to their high prices, these apps weren't adopted on a wide scale. The arrival of smartphones has resulted in a paradigm shift, creating arrays of opportunity, opening new doors to an untapped world and bringing sports apps to the masses. In 2010, Android 2.3 was released—a platform cheaply available to all. With Android, developers started churning out sports applications without the need for costly hardware or show-off shoes. Thanks to some of these, we can now track and maintain our fitness regimes easily.

Let's look at the best Android tracking applications that let you monitor your workout intuitively and smartly. Let us also test some of the best diet and training apps that guide you.

Track!

The very purpose of sports is to improve and compete. Tracking apps are designed to let you track your score globally, and to compare your workouts to some of the best athletes in the world. Pair these apps with a heart-rate belt, and you have a complete working ‘trainer’ with you.

Sports-Tracker.com: For outdoor exercise

Outdoor exercise is a great way to warm up; it is fun and refreshing, more intense, and requires more stamina than typical indoor exercise. Sports-Tracker packages a complete tracking device into an intuitive app that is available for almost all smartphone platforms—whether the old Symbian, MeeGo, iOS, Android or Windows Phone. The app lets you track your workout easily, offering more than 12 pre-loaded profiles for different workouts such as running, cycling, hiking, etc. All you need is an active Net connection; and if you live in a country with open GPS satellites, you can even use the application without an Internet connection.

The app provides precise details, with a map showing your location and speed across the different routes that you have tracked. The app's beautifully skinned interface adds to its intuitiveness. The home screen lists your user profile with the total number of workouts, distance tracked and other details. The centre of the screen shows the details of your latest workout, notifications from your friends (yes, this is a social tracking app) and information on dusk/dawn timings. From there, you can create a new workout, view your diary and track your complete workout statistics. You can even check out your friends' achievements, and much more.

One really striking feature is the Explore wizard, with which you can track the workouts of people near you who use the app (provided they have made their workouts public). The developers have leveraged maps and various other features to track your route during or after the workout, which helps you discover new and unknown routes with the help of



Figure 1: Sports Tracker app home screen
Figure 2: Sports Tracker showing a workout summary

Google Maps. Sports Tracker also splits the whole tracking session into laps, listing the fastest and the slowest, with details such as lap time, speed and distance. The app has a cool chart section that shows your workout statistics, such as speed/altitude vs distance, and lets you monitor your highs and lows during certain points in the workout. If you have an additional heart-rate belt, you can pair it up with the app, and it will monitor and list details in an elegant manner, allowing you to improve your heart rate by varying your workout. Though Sports-Tracker offers its own recommended heart belt (available from the website) that works with almost all platforms, you can pair up any supported Bluetooth heart belt.

Once you have finished your workout, you can, if you want, share it with a mere press of a button—even with popular social sites such as Twitter and Facebook. The icing on the cake is the website, which shows all your hard work, stats and routes for a more detailed and precise look at your workout. The website is currently built using Flash; however, the developers will soon roll out a spanking new HTML5-based website, along with a revamped version for Android. The best part of Sports-Tracker is that it's available for free, with no hidden costs or features being cut. All the mentioned features are free, and can be used without even creating an account… Awesome!

Figure 3: The Sports Tracker website showing a detailed view of a workout

Figure 4: The upcoming revamped Sports Tracker website

VirtuaGym—A virtual indoor/gym class trainer

Exercise without following the proper methods and training can lead to disproportionate muscle gain and even injury. To reduce such risks, you can join gym classes or even hire trainers—but not all of us can afford these luxuries. VirtuaGym is an Android app that lets you track most indoor exercises and see that they are done properly. The app comes preloaded with boatloads of workouts; if you're not satisfied with these, download some from the Net, or upgrade to a Pro account for full access.

Figure 5: VirtuaGym showing a gym machine workout
Figure 6: VirtuaGym teaching indoor/floor exercises

Unlike the Sports-Tracker interface, which is outstanding, the VirtuaGym interface is simple, hassle-free, and with nothing fancy. The home screen has only two options, Exercise and Progress. The first opens up the calendar, where you can add workout sessions, or the specific exercise you

want to do. Once you add a workout, you are presented with choices—not only basic floor and free-hand exercises, but even full-fledged gym exercises. On selecting an exercise, you are greeted with an image of it. Here, you can choose the text-based version (showing you the proper steps) by clicking the ‘i’ icon on the top right—or you can play the animated clip for a clear view of the exercise sequence. The animation is very smooth and displays all the necessary positions for the exercise. The software even guides you through the number of sets and repetitions needed for that particular exercise. Once you complete an exercise set, a timer tells you to get ready for the next set. The app even teaches you the breaks you need to take during each exercise session, which is very nifty indeed. However, during my testing, it failed to alert me with an audible alarm, even though the option was enabled. Upon completing the exercise, the timer appears again, showing you the rest time you should take before starting another exercise. VirtuaGym is a complete personal training app, and I am thrilled to be able to use it for my training schedule. Other than a few glitches here and there, the app does the job right.

Calorie counter-MyFitnessPal

A proper diet is a must for a healthy body and mind. Appropriate diet control can lead to balanced and quick weight loss or gain. This requires a balanced intake of calories, nutrients and vitamins, and MyFitnessPal helps with that. The well-integrated diary that is synced online, the weight-measurement tool and the calculator provide an estimate of your progress. It works well, and is a boon for the diet-conscious. The interface may not be the best, but it isn't gaudy, and provides a plethora of options to manage your diet and daily routine. The bonus is that the app provides a brief overview of your calorie stats and gives you a glimpse of how you will look a few weeks later, if you

Figure 7: The Myfitnesspal home screen

Figure 8: Calorie chart showing total calorie consumption

continue using the app effectively. The app has an efficient bar-code reader to read the calories of what you are eating (of course, this is irrelevant if you are eating home-cooked food; the feature makes more sense in Western countries, where people tend to eat more packaged foods). This is a major drawback for Indian users of the app; it's tough for a layman to calculate the calorie intake of home-cooked meals. To compensate for this, the app has a great online food database; if you ate three chapatis, you can search for chapati and add up your calorific intake from there. Items that are not available can be added manually. Overall, the app is a must-have for the health-conscious.

Limbering down

Apps like the above have a few missing bits that will soon get sorted out with future revisions, so don't get disheartened if you find some features missing. Using the apps properly can give you an A-class training experience without even hitting a gym. Productive and helpful apps like these exercise your smartphone and you!

References:
[1] Sports Tracker: https://play.google.com/store/apps/details?id=com.stt.android | sports-tracker.com
[2] VirtuaGym: https://play.google.com/store/apps/details?id=digifit.virtuagym.client.android
[3] Calorie Counter: https://play.google.com/store/apps/details?id=com.myfitnesspal.android

By: Shashwat Pant The author is a FOSS/hardware enthusiast who likes to review software and tweak his hardware for optimum performance. He is interested in QT programming and fond of benchmarking the latest FOSS distros and software. You can follow him at @shashpant.




No single device or method is foolproof; hence, security designers need to explore all possible options prior to putting appropriate controls in place, leading to a robust design. This is where understanding and using the in-built security features of FOSS distros becomes important—to introduce robustness.

FOSS security features

While there are so many distros available with various built-in features, I will concentrate on those features that are found in almost all versions. Some of the features mentioned below are actually open source projects that became integral parts of distros over time. A minimal sketch combining these features appears at the end of this section.

Iptables: All Linux distros support iptables, which is essentially a truth-table sort of database containing information that lets the net-filter algorithm decide how to treat a packet. It is a kernel module, requiring elevated privileges to configure. The working operation of iptables is very simple. Each packet is stripped into various fields, and the rules from the table are applied to make a decision in terms of letting it go ahead, blocking it, or dropping it. For a given server role, iptables rules can be written only once, by taking into account all the packet acceptance and rejection scenarios, and would rarely need to be changed. While many production farms use iptables to introduce an additional layer of security, it is important to note that it puts an additional burden on the server's resources. Since every packet is stored temporarily and checked against a set of rules, it needs a considerable amount of computational power. Hence, iptables rules should not be very elaborate, but just adequate for the given network or application scenario. You can learn how to set up iptables on Ubuntu Linux at https://help.ubuntu.com/community/IptablesHowTo.

ConnTrack: This is another kernel-based module that falls under the net-filter framework. As an extension to iptables, ConnTrack essentially tracks the connection for all network sessions. It further tries to relate packets that formed a sensible and successful connection. ConnTrack operates at Layers 3 and 4, and creates useful information about each packet by reading its various fields. This can optionally be used by iptables to improve its effectiveness. For example, if the high-level protocol is HTTP, the packets are found to contain HTTP headers, as well as the session-based source and destination IP addresses and service port information. If this data is made available by ConnTrack, it becomes easy for iptables to allow those packets without delving deep into them, thus saving precious (server) computational resources. The right approach is to have iptables and ConnTrack together.

Source address verification: One of the more serious security attacks is packet spoofing, whereby attackers modify the source IP address to fool the destination host. As a result, it is rather difficult to detect and stop a spoofing attack.


Most Linux systems come with a built-in, but usually less known, feature called source address verification. It is a kernel feature that, when turned on, starts dropping packets that appear to be arriving from the internal network but in reality are not. Most of the latest kernels on distros such as Ubuntu and CentOS support it; if your Linux distro does not, it is time to upgrade or migrate to a new distro. Modifying the /etc/host.conf file to add “nospoof on” is another level of defence to try. For smaller Linux networks, a nice utility called arpwatch is very useful for detection. Arpwatch keeps track of MAC and IP addresses, and records all changes—and can be scripted to alert administrators about a possible attack. Scripting can also be done to go through network interface logs and look for anomalies with respect to source address forging.

Anti-sniff: Another serious type of attack is packet sniffing, wherein the network cards are put into promiscuous mode and packets are dumped for analysis to create an attack vector. All the famous distros such as Ubuntu and CentOS support anti-sniffing utilities, which monitor the network interface settings and ensure that promiscuous mode is not enabled. This effectively stops sniffers from working, thwarting further security attacks.

SniffIt: While an anti-sniffer is deployed in a FOSS network, it is important to check that it is functioning properly. For that, you need to simulate sniffing, and the SniffIt or DSniff utilities do that. Wireshark is another good example. The idea behind a sniffer is also to capture packet patterns that can eventually be fed into an intrusion detection system. Snort is a famous FOSS IDS. DSniff is very effective in capturing SSL traffic.
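As a rough illustration of the features discussed above, the fragment below shows what a very small set-up might look like. The interface name, ports and default policy here are assumptions for the sketch, not recommendations for a production server.

# Allow packets belonging to connections ConnTrack already knows about
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow new SSH and HTTP connections, and drop everything else inbound
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -P INPUT DROP

# Turn on source address verification (reverse-path filtering)
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1

# Check whether an interface has been left in promiscuous mode
ip link show eth0 | grep -i promisc

The conntrack/state rule is usually placed first, so that the bulk of legitimate traffic is accepted without being matched against the rest of the rule set.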

Beyond FOSS built-in security

As explained earlier, no single device or method can help you achieve 100 per cent security. Also, it is important to note that for some attacks, such as packet sniffing and packet crafting, there are no built-in security features available in an open source distro. All the methods explained here surely strengthen security, but they must be complemented with commercial-grade appliances and devices, to design a robust perimeter defence system.

By: Prashant Phatak
The author has over 20 years of experience in the field of IT hardware, networking, Web technologies and IT security. Prashant is MCSE and MCDBA certified, and is also an F5 load balancer expert. In the IT security world, he is an ethical hacker and net-forensic specialist. Prashant runs his own firm called Valency Networks in India (http://www.valencynetworks.com), providing consultancy in IT security design, security audits, infrastructure technology and business process management. He can be reached at prashant@valencynetworks.com.



Developers

Overview

Intents in Android

One factor contributing to Android's success in the smartphone market is the large number of third-party application developers working in Java. The other not-so-well-known yet important factor is the ease with which developers can write complex applications with Android's very good inter-application communication mechanism (IACM), facilitating component re-use and faster turn-around while writing new applications. This article explores ‘intents’—the heart of Android's IACM.

First, let's get familiar with the Android terminology used in this article:
Activity: The user interface part of an application. An application can have one or more activities.
Service: Code that runs in the background (typically to perform a long operation) and does not interact with the user.
Broadcast Receiver: Code that registers (‘listens’) for a particular broadcast message.

What are intents?

In Android, ‘intent’ describes ‘an operation to be performed’. For example, it can say ‘open a browser’, ‘send an SMS’, ‘make a phone call’, etc. In some cases, an intent can also describe something that has happened in the past, and is now being reported to the Android system. Examples of this include reporting that the battery status has changed, a head-set has been plugged in, a camera button was pressed, etc. This category is mostly used by the Android OS to send ‘system-wide’ notifications about a certain event, and is usually referred to as ‘System Broadcast Intents’. These can only be sent by the system, not by an application. Intents can be sent between Activities, Services and Broadcast Receivers. Applications use intents for both inter- and intra-application communication.

The components of an intent

An intent itself is an object of the Intent class, which meets the descriptions we talked about earlier. Broadly, the descriptions can be classified as given below:
Component name (optional): Describes what component should handle the intent. It is specified by the fully qualified class name of the component and the package name, with the project name, e.g., “com.example.project.lfy.SampleActivity” with “com.example.project”.
Action: Describes what action is to be taken; e.g., ACTION_DIAL to specify ‘make a phone call’.
Data: Contains data required for the action; e.g., the phone number to call.
Category: Provides additional information about the action to perform; e.g., CATEGORY_LAUNCHER means it should appear in the Launcher as a top-level application.
Extras (optional): Key-value pairs holding additional information, if any. Mostly used in broadcast intents; for example, while reporting battery status, the intent describes the battery ‘capacity’, ‘temperature’, etc, as extras.
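One quick way to see an action and its data in use, without writing any code, is the am tool that ships with Android and can be run through adb; the phone number below is only a placeholder.

adb shell am start -a android.intent.action.DIAL -d tel:0123456789

This fires an intent whose action is ACTION_DIAL and whose data is the tel: URI carrying the number to dial, exactly as described above.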


Types of intents

Intents are divided into two groups:
a) Explicit intents: These explicitly specify the name of the target component to handle the intent; in these, the optional ‘component name’ field (above) is set to a particular value, through the setComponent() or setClass() methods.
b) Implicit intents: These do not specify a target component, but include enough information for the system to determine which of the available components is best to run for that intent.
Consider an app that lists the available restaurants near you. When you click a particular restaurant option, the application has to ask another application to display the route to that restaurant. To achieve this, it could either send an explicit intent directly to the ‘Google Maps’ application, or send an implicit intent, which would be delivered to any application that provides the ‘Maps’ functionality (e.g., Yahoo Maps).
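To see the difference from a shell, you can fire both kinds of intents with adb and the am tool. The component name reuses the example package mentioned earlier; the URL and the custom broadcast action are illustrative assumptions only.

# Implicit intent: any app that can VIEW this URL may be chosen to handle it
adb shell am start -a android.intent.action.VIEW -d http://www.opensourceforu.com

# Explicit intent: the exact component is named, so no resolution is needed
adb shell am start -n com.example.project/.SampleActivity

# Broadcast intent carrying a string extra (hypothetical action name)
adb shell am broadcast -a com.example.project.SOME_ACTION --es note "hello"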

Intent Resolution

For implicit intents, Android ‘figures out’ the right component that can handle them. This process is called ‘Intent Resolution’, which maps an Intent to an Activity, Broadcast Receiver or Service (or sometimes two or more activities/receivers) that can handle it. If an activity wants to receive a particular intent, it can do so in the two ways mentioned below:
a) It should specify the same in the AndroidManifest.xml, in the <intent-filter> tags, as shown below:

<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>

b) It can explicitly register a Broadcast Receiver using registerReceiver(BroadcastReceiver, IntentFilter).

Types of intent broadcasts

There are three different types of broadcasts used by the Android system to broadcast intents:
a) Normal Broadcast: Sent asynchronously; all registered Broadcast Receivers will receive it. The order in which receivers receive it is undefined. These intents are sent with the context.sendBroadcast() method.
b) Ordered Broadcast: This is delivered sequentially to each eligible Broadcast Receiver; the order is defined by the priority of the associated Intent Filters, using the android:priority attribute in the AndroidManifest.xml. Any Broadcast Receiver receiving an ordered broadcast can stop further sending by the abortBroadcast() method. The sender of an ordered broadcast can choose to be notified when the broadcast has completed. The isOrderedBroadcast() method returns true if the current receiver processes an ordered broadcast. These intents are sent using the context.sendOrderedBroadcast() method.
c) Sticky Broadcast: This is retained by the system even after it has been sent; this means that whenever you register a receiver for these intents, you will get a ‘cached’ copy of the intent that was recently sent. ACTION_BATTERY_CHANGED, an intent that describes the changes in battery status, is a good example of a sticky broadcast.

Sample code

The sample code available at http://linuxforu.com/article_source_code/dec12/intents_android.zip creates a simple intent to launch the contacts list; it also listens to the battery status change intent and provides the battery capacity. This sample code can be tested with Eclipse, running an Android emulator. Having introduced readers to Android intents, in the next article, we will look at how Android prevents applications from sending system-level intents.

References
[1] http://developer.android.com/reference/android/content/Intent.html

By: Durgadoss R
The author is a kernel programmer by profession, who spends his spare time hacking Android. You can reach him at r.durgadoss@gmail.com.


Admin

Let's Try

Getting Started with WireShark

This article presents WireShark, a very capable and popular graphical network protocol analyser.

Gerald Combs created Ethereal, the ancestor of WireShark, back in 1998. When he changed his job in 2006, he could not use the name Ethereal any more, so he renamed his software WireShark. Nowadays, most people only know WireShark! The main advantage of WireShark is that it is a graphical application. There is also a command-line version of WireShark, but I have never used it. You can get WireShark either from its website—by compiling its source code—or directly from your Linux distribution. I personally prefer the second option.

Running WireShark

If you try to run WireShark as a normal user, you may not be able to use network interfaces for capturing network traffic, due to reasons related to UNIX permissions. Run WireShark as the root user (sudo wireshark) when capturing data, and as a normal user when analysing network data. Figure 1 shows WireShark run by a user without root privileges.

Before going into more details about WireShark, I have to talk about network traffic in Ethernet networks that use the TCP/IP family of protocols. When we say TCP/IP, we not only mean the TCP and IP protocols, but many other protocols including ARP, BOOTP, UDP, ICMP, FTP, etc. Information is transferred using packets. Each packet has a header and a body part. The header part contains information that is needed by the protocol, whereas the body part contains the data. Some protocols are reliable, whereas others are not, which means they do not guarantee packet delivery—this is not always a problem, but the application must deal with it, if required.

WireShark captures packets, and analyses and displays them in a human-readable format. WireShark allows you to follow a TCP/IP ‘conversation’ between two machines, view packet data, etc. Before you start capturing, it is better to have in mind a particular issue that you want to solve or examine. What follows are the initial steps towards successful network traffic analysis.

Figure 1: WireShark running

Basic usage

After running WireShark with the required user privileges, you will be able to see the list of available network interfaces. The easiest way to start capturing network packets is to click your preferred interface, which will make WireShark immediately start capturing network packets (Figure 2). If you know nothing about TCP, IP or UDP, you may find the output confusing. To stop capturing, select Capture and then Stop from the WireShark menu. Alternatively, press the fourth icon from the left, with the white x on a red background, which is only active during captures.

Figure 2: WireShark capturing data

Using the method described for capturing, you cannot change any of the default WireShark capture options (viewable by selecting Capture and Options from the menu, as shown in Figure 3). There you can select the Interface (en0), see your IP address (192.168.1.10), apply any Capture Filter (in the image, there is none), put your network card in promiscuous mode, and save captured data to one or more files. When capturing lots of data, it is considered good practice to first save and then examine the captured network traffic. When you put your network card in promiscuous mode, you allow the network device to catch and read every network packet that arrives, even if the receiver is another device on the network. You can also choose to stop packet capturing after a given number of network packets, a given period of time, or a given amount of data (in bytes).

Figure 3: WireShark Capture options

WireShark allows you to read and analyse captured network data from a large number of file formats including tcpdump, libpcap, Sun’s snoop, HP’s nettl, K12 text file, etc. This means that you can read almost every kind of captured network data with WireShark, while new file formats are frequently added. It is more likely that WireShark cannot read a file due to invalid packet types than an incompatible format! Similarly, WireShark allows saving captured network data in a variety of formats, visible in Figure 4. You can even use WireShark to convert a file from one format to another.

Figure 4: Supported file formats

WireShark display filters

The amount of network data that WireShark may display can be too much for a human to watch and understand, especially on very busy networks. Usually, when using WireShark, we want to examine a given problem or situation, or even watch for unusual network activity. WireShark allows us to filter network data for specific types of traffic during capture, avoiding creating huge capture files—but there are also display filters, which tell WireShark to display packets that really matter, while capturing everything, so that you look at fewer packets and easily find what you want. Generally speaking, display filters are considered more practical and versatile than capture filters, because most of the time you don't know in advance what you want to examine. Nevertheless, using capture filters can save time and disk space; that is the main reason for using them.

WireShark tells you when a Display Filter is syntactically correct: when the background turns light green, the filter is syntactically correct; when the syntax is erroneous, the background is pink. You can see both cases in Figure 5. The result of a logically inaccurate (yet syntactically correct) filter at capture time is no captured data—so you may recognise this error the hard way.



Figure 5: Syntactically right (up) and wrong (down) display filters

Figure 6: A DHCP transaction

What you can also notice in Figure 5 is that WireShark is smart enough to understand erroneous IP addresses such as 192.168.257.10. The presented display filter shows only traffic that originates from, or goes to, the 192.168.1.10 IP address. Note: Display filters do not magically solve problems by themselves. They are extremely useful tools when used the right way, but you still have to interpret the results, find the problem and think about the solutions.
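To give a flavour of the syntax, here are a few display filter expressions of the kind you would type into the Filter bar. The addresses and ports are placeholders, and the text after the # on each line is just an annotation, not part of the filter itself.

ip.addr == 192.168.1.10                    # traffic to or from this host
ip.src == 192.168.1.10 && tcp.port == 80   # web traffic originating from it
bootp                                      # only BOOTP/DHCP packets
dns || http                                # DNS or HTTP traffic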

Analysing DHCP traffic

DHCP (Dynamic Host Configuration Protocol) provides configuration information to hosts on TCP/IP networks. It is based on BOOTP (the Bootstrap Protocol) and extends it by adding more capabilities. DHCP and BOOTP both use UDP (ports 67 and 68). DHCP supports, among others, the following basic messages:
DHCPDISCOVER: Clients searching for a DHCP server send this broadcast message to the LAN, using only the client MAC address, because the client does not have an IP address yet.
DHCPOFFER: When a DHCP server receives a DHCPDISCOVER message, it responds with a DHCPOFFER message.
DHCPREQUEST: The DHCPREQUEST message provides information to the chosen DHCP server, even if there is only one available.
DHCPACK: This message is the response of the chosen DHCP server. It includes all the required configuration information.

Figure 6 shows a DHCP transaction that involves the airport wireless card of an iMac and an ADSL router that also acts as a DHCP server. Please notice how much easier it is to see the DHCP protocol when using the right Display Filter (bootp). The first packet is the DHCPDISCOVER message from the iMac searching for a DHCP server. Since the iMac does not have an IP address yet, the source IP of the packet is 0.0.0.0 and the destination IP is the broadcast IP (255.255.255.255). What distinguishes the airport card of the iMac from the other network devices found in the same LAN is the MAC address of the airport card, which is unique. Therefore, the DHCPDISCOVER message should include the MAC address of the device requesting a DHCP server. The next message is the DHCPOFFER from the DHCP server with IP 192.168.1.1, and is a broadcast message, since the iMac still has no IP address. Then the iMac requests the offered configuration parameters from the DHCP server with the DHCPREQUEST message. Next, the DHCP server sends a DHCPACK message back to the iMac that includes the configuration parameters. From now on, the iMac can use the offered configuration information, and any parameter that is unique to the iMac, like the IP address, is reserved by the DHCP server and is not offered to any other device.

WireShark is a powerful tool that every Linux or network administrator should know. I suggest that you first start by learning display filters before going any further. As always, before trying to solve a complex problem, first try to address simpler or smaller issues.

Web links and bibliography
[1] WireShark: http://www.wireshark.org/
[2] Tcpdump and libpcap site: http://www.tcpdump.org/
[3] Internetworking with TCP/IP, Volume I, Douglas E. Comer, Prentice Hall
[4] DHCP RFC: http://www.ietf.org/rfc/rfc2131.txt
[5] Display Filters Reference: http://www.wireshark.org/docs/dfref/
[6] The TCP protocol RFC: http://www.ietf.org/rfc/rfc793.txt
[7] The UDP protocol RFC: http://www.ietf.org/rfc/rfc768.txt

By: Mihalis Tsoukalos
The author enjoys photography, writing articles, programming iOS devices and administering UNIX machines. You can reach him at tsoukalos@sch.gr or @mactsouk.


Let's Try

Developers

Integrating Android with jQuery Mobile

The jQuery Mobile article in the OSFY November issue began with ‘Hello PhoneGap’, and went on to cover various types of list-views and form elements. This article covers some amazing features that make jQuery Mobile a unique platform—pop-ups, dialogue boxes, tool-bars, transitions and themes.

First, let us continue with form elements and then look at theming our applications.

Form elements—the next batch

Flip or toggle switch: This is a UI element used on various mobile devices. It is also called an On/Off switch and is used for true/false data input. Users can either drag the slider, or tap one side to toggle. To get it, use data-role=”slider” under the select attribute, and two options appear—one option is ‘On’, and the other ‘Off’. Set the for attribute of the label to match the id of the select attribute: <label for=”toggle1”>Toggle Flip switch:</label>

Longer labels: To use longer labels like Switch On/ Switch Off, we need to use a predefined class .ui-slider-switch {width: 10em}. You can add this class in a style tag as CSS, and add the class name within the div tag as follows: <style> .toggle-container .ui-slider-switch { width: 10em } </style>

Add the class name as toggle-container in the div tag, as shown below: <div class=”toggle-container”>

<select name="Toggle" id="toggle1" data-role="slider"> <option value="on">On</option> <option value="off">Off</option> </select>

The result of this can be seen in Figure 1. Mini version: This version is useful for tool-bars, and can be included using data-mini="true" inside the select tag.



Figure 1: Toggle switch

Select menus: The select menu is based on a native select element, with a custom-styled select button that matches the look and feel of the jQuery Mobile framework. It is keyboardaccessible on the desktop as well. When you click the button, the native OS menu will open, and after selecting the value when the menu closes, the custom button’s text is updated to match the selected value. To add a select menu to your page, there should be a select element with a set of option elements. Similarly, you need to set the for attribute of the label tag to match the id of the select element, as we did earlier with the toggle switch. There is no need to add the data-role attribute here; we can make a drop-down menu as easily as in simple HTML:

Figure 2: Select menu

Figure 3: Horizontal select menu

<label for="choice" class="select">Select your Country</label>
<select name="choice here" id="choice">
<option value="america">America</option>
<option value="australia">Australia</option>
<option value="brazil">Brazil</option>
<option value="canada">Canada</option>
<option value="denmark">Denmark</option>
<option value="france">France</option>
<option value="germany">Germany</option>
<option value="india">India</option>
</select>

The result of the above code will look similar to what's shown in Figure 2. In the above template, to group the list of countries under a list divider, you need to use the optgroup element, which will categorise the list accordingly:

<select name="choice here" id="choice">
<optgroup label="A">
<option value="america">America</option>
<option value="australia">Australia</option>
</optgroup>
<optgroup label="B">
<option value="brazil">Brazil</option>
</optgroup>
</select>

Vertically grouped select menus: For the grouped select input, add data-role="controlgroup" in the <fieldset> tag, and a <legend> tag, which will specify the label for the whole group:

<div data-role="fieldcontain">
<fieldset data-role="controlgroup">
<legend>Date of Birth:</legend> <!-- this will signify the label for the whole group -->
<label for="month">Month</label>
<select name="select month" id="month">
<option>Month</option>
<option value="jan">January</option>
<!-- etc. -->
</select>
<label for="day">Day</label>
<select name="select day" id="day">
<option>Day</option>
<option value="1">1</option>
<!-- etc. -->
</select>
<label for="year">Year</label>
<select name="select year" id="year">
<option>Year</option>
<option value="2011">1990</option>
<!-- etc. -->
</select>
</fieldset>
</div>

Horizontally grouped select menus:

<fieldset data-role="controlgroup" data-type="horizontal">

Use the above code, and you will be able to see a horizontal menu like what’s shown in Figure 3. Dialogue boxes: These are like pop-ups that appear over the previous screen; they need some user action or response— e.g., a confirm dialogue box asks the user a question and



needs a ‘Yes’ or ‘No’ response. You can create a dialogue box as follows. First, create a button that will display a dialogue box when clicked: <div data-role=”page”> <a href=”#confirm” data-role=”button” data-rel=”dialog”> Open Dialog</a> </div>

Figure 4: Dialogue

Next, create the dialogue page that will be displayed on clicking the Open Dialog button: <div data-role=”page” id=”confirm” class=”ui-dialog-contain” > <div data-role=”header”> <h1>Confirm</h1> </div> <div data-role=”content”> Do you want to continue?</div>

Figure 5: Pop-up

<a href=”index.html” data-role=”button” data-inline=”true”>Yes </a> <a href=”#” data-role=”button” data-inline=”true” data-theme=”a” > No </a> </div>

Note: The href element in the button page, and the id in the dialogue page must be the same—i.e., in the above template, I have given <a href=”#confirm”> and in the dialogue page id=”confirm”. To close the dialogue or go back, you can use “# “ as an href attribute, and can use data-rel=”back”. To set the dialogue box’s width, margins and padding, use a class named ui-dialog-contain and give the specifications in a style tag as follows: <div data-role=”page” id=”confirm” class=”ui-dialog-contain” >

Figure 6: Menu pop-up
Hi, I am Tooltip. </div>

There are many more pop-up types like Menu, Image, etc. We will discuss some of them in the following part of this article. I personally like this part, so please read it carefully! Menu pop-up: For this, add the code below in a script tag:

.ui-dialog-contain { width: 50%;

$.mobile.changePage('#dialogpage', 'pop', true, true);

max-width: 500px; margin: 10% auto 15px auto;

And this code is needed in the page:

padding: 0; position: relative;

<div data-role=”dialog” id=”dialog1”>

top: -15px;

<div data-role=”header” data-position=”fixed”>

}// You can change the width and margins accordingly.

<h2>Subscribe</h2> </div>

A dialogue box demonstrating all the above is presented in Figure 4. Pop-ups: There are a lot of pop-ups that you can include to make your application more interactive. Let us look at a very basic pop-up tool-tip (the following code will show a very basic pop-up, as in Figure 5):

id=”open” value=”Open Source” checked=”checked” />

<a href=”#popup” data-rel=”popup”>Tooltip</a>

label>

<div data-role=”content” data-theme=”c”> <div data-role=”fieldcontain”> <fieldset data-role=”controlgroup”> <input type=”radio” name=”radio1” <label for=”open”>Open Source For You</

<div data-role=”popup” id=”popup”>




<a href="android image.jpg" class="thickbox"> <img alt="Single" src="android image.jpg" border="0"></a> </center>

Note: In <a href> , specify the path of the image— i.e., folder name/image.jpg. A screenshot of the image popup is shown in Figure 7. Tool-bars: A tool-bar is an important part of an application, and gives an idea about what type of content the page has. We call them header bars, footer bars or navigation bars. To include a simple header bar in your page requires the following single line of code: <div data-role=”header”> <h2>Title</h2></div>

Tool-bar with buttons: A tool-bar contains buttons with standard actions, with some pre-defined icons like delete, check, etc. To add this to your page, use the following commands: Figure 7: Image pop-up

<div data-role=”header”> <a href=”index.html” data-role=”button”data<input type=”radio” name=”radio2”

icon=”delete”>Cancel</a>

id=”electronics” value=”Electronics” />

<h1>Edit Contact</h1>

<label for=”electronics”>Electronics For You</label>

<a href=”index.html” data-role=”button” dataicon=”check”>Save</a>

</fieldset></div></div></div>

Image pop-up: There are various jQuery Mobile plugins that can make your image gallery jaw-dropping, like Thickbox, Lightbox, Fancybox, Facebox, Slimbox, etc. Using these image pop-ups gives you a closer look at your picture. They are very simple to use; you just need to add the plug-in files. A demo for the Thickbox plug-in is given below; the rest are similar, so give them a try. How to use a plug-in: 1. Download thickbox.css and thickbox.js from http:// thickbox.net/. 2. Add the two files in the respective <script> and <link> tags. 3. Add class=”thickbox” within your <a href> tag and add <img> tag under the parent <a href>. In the head tag:

</div>

Footer bars: Footer bars can be added in the same way you add the header, with <div data-role=”footer”>. To add padding between the buttons, add class=”ui-bar” in the div tag. To group buttons together into a button set, wrap the links in a wrapper with data-role=”controlgroup” and data-type=”horizontal” attributes: <div data-role=”controlgroup” data-type=”horizontal”>

Nav-bars: jQuery Mobile has a very basic navigation bar widget, in which up to five buttons can be provided, with optional icons. To include a very simple nav-bar with two buttons, use the following commands: <div data-role=”navbar”> <ul><li><a href=”a.html”>Page One</a></li>

<link rel=”stylesheet” href=”thickbox.css” type=”text/css” media=”screen” />

<li><a href=”b.html”>Page Two</a></li> </ul></div>

<script language=”javascript” type=”text/javascript” src=”thickbox.js”></script>

In the body tag: <center>


Whenever you click the nav-bar, it will get activated. If you want to set an item to the active state, use class="ui-btn-active". To restore the active item each time the page is rendered, use class="ui-state-persist". You can also use both the classes together in the following way:



Figure 8: Basic nav-bar <div data-role=”navbar”> <ul><li><a href=”index1.html” class=”ui-btn-active uistate-persist”>Tab 1</a></li>

Developers

Icons in nav-bars: You can also use icons with the navbar items—e.g., different icons for Home and for Contact Us. Similarly, data-role=”icon” can be used to include icons; and to give the icon a specific position, use data-iconpos=”top” . Fixed footbars: For a tool-bar that remains fixed as a footer, use the “fixedtoolbar” plug-in. It can be fixed to the top or bottom of the viewport, even when the page content is being scrolled in between. To enable this, use dataposition=”fixed” as shown:

<li><a href=”index2.html”>Tab 2/a></li></ul> </div>

<div data-role=”Footer” data-position=”fixed”>

Nav-bars with headers: To use nav-bars with a title or header bar, use the commands shown below:

</div>

<h1>Fixed Footer/h1>

<div data-role=”header”> <h1>Welcome Guest</h1> <div data-role=”navbar”> <ul> <li><a href=”#”> Home</a></li> <li><a href=”#”>Products</a></li> <li><a href=”#”>Services</a></li> <li><a href=”#”>Contact Us</a></li> </ul> </div> </div>

Full-screen tool-bar: These tool-bars fill the entire screen with content—such as a photo viewer, when you want to fill the entire screen with a picture. To enable this, use the option data-fullscreen=”true” in the div tag. Page transitions: As of now, we have worked on page navigation using tool-bars—but in today’s creative world, users want to navigate pages with beautiful transition effects, which can result in an amazing user experience. jQuery Mobile has the perfect solution -- page transitions. The framework includes a set of CSS-based transition effects like flip, pop, fade, turn, slide, etc, that can be applied to any




<a href="index.html" data-role="button" data-icon="delete" data-theme="c">Cancel</a> <h1>Edit Contact</h1> <a href="index.html" data-role="button" data-icon="check" data-theme="b">Save</a> </div>

Figure 10: Theming lists

Theming content: The following code will theme the whole content of the page, and will colour the background of the page accordingly: <div data-role=”page” data-theme=”a” data-content-theme=”a”>

Figure 9: Tool-bar themes

page. To set a transition effect, add data-transition=”flip” (or instead of flip, add slide, pop, etc):

Collapsible blocks: The following code will create a collapsible content block, which will only be shown on clicking the collapsible button:

<div><a href=”test.html” data-transition=”flip”><button name=”b1”

<div data-role=”collapsible” data-collapsed=”true” data-theme=”b”>

value=”Click Me”></button></a></div>.

<h4>Different themes in a header </h4> <div data-role=”header” data-theme=”e”>

To learn how various transition effects can beautify your app, try the effects demos at http://jquerymobile.com/ demos/1.0a4.1/docs/pages/docs-transitions.html. Note: Many Web browsers and mobile devices don’t support this feature yet; Google Chrome does, so use it if your browser doesn’t show any transitions.

Get started with themes

Theming is the way to display the same content in different styles, formats and colours to give your application a different look across all the pages in your application. jQuery Mobile provides various very easy-to-use themes. Let’s dive in! Playing around with theme swatches: JQM has a healthy theming framework, called a swatch, which supports 26 sets of tool-bars, buttons, colours and content. To give your application a specific theme, you just need to add the data-theme attribute, which can be applied to any widget— or if applied to a page, then all widgets will automatically inherit the theme. The framework provides five default theme swatches that are assigned letters a, b, c, d, e... For example: <a href=”#” data-role=”button” data-theme=”a”>Button</a>

Theming tool-bars: To set the header or footer bars to a different colour theme, use the data-theme attribute with datarole=”header/footer”. Buttons in tool-bars can also use the same themes with this same attribute. Using distinct themes in a tool-bar: You can provide different themes to the tool-bar and to the widgets (buttons) in the tool-bar: <h4>Different themes in a header </h4> <div data-role=”header” data-theme=”a”>


<a href=”index.html” data-role=”button” data-icon=”delete” data-theme=”c”>Cancel</a> <h1>Edit Contact</h1> <a href=”index.html” data-role=”button” data-icon=”check” data-theme=”b”>Save</a> </div></div>

Theming forms: You can implement themes with forms in a similar manner; use the data-theme attribute in the page containing the form elements to make all elements inherit the theme. Theming lists: List-views can be themed in the following way: <ul data-role=”listview” data-inset=”true” data-theme=”b” datafilter=”true”> <li><a href=”#”>Agriculture</a></li> <li><a href=”#”>Animal</a></li> <li><a href=”#”>Astronomy</a></li> <li><a href=”#”>Building</a></li> <li><a href=”#”>Business</a></li> </ul>

I have covered many important aspects of jQuery Mobile. This framework is the best thing I have come across. It has completely changed my views on the mobile UI. Many more features like page loading and widgets are still to be explored; you can Google and get familiar with them. You will love them. Suggestions and queries are always welcome.

By: Anupriya Sharma
The author has just graduated and is currently working in the Android department of a reputed MNC. She loves Android and iOS development. Apart from that, she manages some time for cooking, dancing and her all-time favourite, shopping. You can contact her at anupriyasharma2512@gmail.com.


How To

Admin

A Look at OpenSSH, Netcat and PF

This article deals with some of the systems administrator's most familiar tools.

OpenSSH was a project started by the OpenBSD team, based on the work done by Tatu Ylonen, a Finn, who wrote something called SSH (Secure Shell) back in 1995. It came as a welcome replacement to telnet, which has been a security nightmare for several decades. Today, SSH and its companion, SCP (Secure Copy), can work wonders, not only from a security standpoint, but also from a convenience angle. Plenty of new features are being added with each release to enhance the security with strong crypto, more user conveniences like printing the fingerprint graphics, and so on. But for the purpose of this article, I will focus only on certain specific use cases.

I have found that SSH is an incredibly powerful, versatile and capable tool that lends itself easily to automation and quickly getting a job done. You can even set up remote or local port forwarding, which is normally only done by firewalls. Let's look at some of the huge variety of things SSH can do.

At the simplest level, it is used for logging into a (usually remote) system with ssh user@host, after which you're prompted for a password and are logged in. If you set up public key authentication (described below) and load

your private key using the ssh-agent daemon, then you can log in without any password—very convenient for non-interactive scripts that run remotely. This works well for SCP too, which uses the same network protocol as SSH, but does file transfer instead. Another program, SFTP, uses FTP-like semantics, but differs since it uses secure connections, has no multiple connections like data and command connections, or active and passive modes like conventional FTP. I mostly use only SSH or SCP. Running a simple remote command using SSH is easy: ssh foo.bar ls, for example, which connects to the foo.bar machine and lists the files in the default folder. If you don’t supply a username with an @ before the hostname (or with the -l switch), then SSH uses the username under which you are logged in to the local machine. The default folder is then the home folder for that user account on the remote host. However, for editing and other interactive commands, this approach will not work. For example, you have to run SSH as follows: ssh -t girish@yahoo.com vi /etc/ntp.conf, in which -t switch sets up the terminal mode necessary for working transparently on the network. Once you exit vi, you are back at the local machine prompt, since the SSH session is torn down on completing the editing command. december 2012 | 47



Sometimes this can confuse you, but it is very powerful. If you combine SSH with the tmux terminal multiplexor, then you have even more possibilities. With the -t switch, SSH can do a lot of remote operations resourcefully.
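As a small illustration of this combination, the one-liner below opens a persistent tmux session on a remote host; the host name and session name are only examples, and it assumes tmux is installed on the remote machine:

# -t allocates a terminal, so the full-screen tmux UI works over SSH;
# attach to the 'work' session if it exists, otherwise create it
$ ssh -t girish@foo.bar 'tmux attach -t work || tmux new -s work'

Detach with Ctrl-b d; running the same command later re-attaches to the session with everything still running.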

Key-based authentication

Now, let's find out how to set up SSH public key authentication to log in without passwords. This involves several steps. First, run these on host A, from which you want to log in to host B:

$ ssh-keygen -t dsa        <keep the private key password empty>
$ cat ~/.ssh/id_dsa.pub | ssh hostB "cat >> ~/.ssh/authorized_keys"

With this, the public key on A should be appended to the ~/.ssh/authorized_keys file on host B. Next, set up the ssh-agent daemon and use the ssh-add command to load the key. On host A, use the following commands:

$ eval `ssh-agent -s`
$ ssh-add
$ ssh-add -l        (should list your key fingerprint)

Now you are all set. When you connect from host A, you can talk to the remote machine without passwords; instead, your private key authenticates you to the other host. Rather than laboriously typing the remote password for every connection, you enter the key's passphrase (if you set one) just once when loading it into ssh-agent, and then use the key for any number of remote logins. This is a bit complex, so please try it a few times to get it right.
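Once this works, non-interactive automation follows naturally. The sketch below is only an illustration (the host name, commands and file paths are assumptions); on many systems, ssh-copy-id hostB is also a one-step alternative to the manual append shown above:

#!/bin/sh
# Runs without any password prompt once the key is loaded into ssh-agent
ssh hostB 'df -h / && uptime'                       # run a couple of remote commands
scp hostB:/var/log/messages /tmp/hostB-messages     # pull a file over SCP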

SSH for simple port forwarding

There are basically two variants of port forwarding: local and remote. In local port forwarding, you set up a local port to be forwarded to some other machine; whenever a connection is made to this port, it is carried over the secure channel, and a connection is made to the given host and port from the remote machine. In remote port forwarding, you set up a port on the remote machine to be forwarded to a local machine whenever someone connects to that remote port. This requires the SSH connection to be kept up between the hosts—a little bit of jugglery, since the forwarding is session-based, and not permanent like on a firewall. In brief, the commands to set up local and remote port forwarding, respectively, are:

$ ssh -L 1234:remote:2345 remote

$ ssh -R 1234:remote:2345 remote
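As a concrete illustration of the first (local) form, suppose an intranet web server is reachable only from a gateway machine that you can SSH into; the host names here are purely hypothetical:

# Forward local port 8080 to port 80 on 'intranet', via the 'gateway' host
$ ssh -L 8080:intranet:80 user@gateway
# While this session is open, http://localhost:8080/ on your machine
# reaches the intranet web server through the encrypted tunnel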

You also have to enable remote forwarding in /etc/ssh/sshd_config on the remote host. You will often get a clear message telling you whether port forwarding succeeded or not. The uses of this vary. In general, you can arrange to log in to your local machine from any point on the Internet if you set up remote port forwarding to a public IP; this is why remote port forwarding seems more useful to me. Port forwarding is a vast topic requiring a lot of testing and thinking. It is not easy to set up; you have to know a lot of background information to get things working. Later in this article, we will also look at how to do this with a pf firewall rule.

SSH also allows you to set up a simple VPN between two networks. This requires the VPN option to be enabled in sshd_config on both sides. A VPN tunnel helps you encrypt traffic and also access both networks seamlessly, but SSH VPNs are costly from a networking angle, since they add packet header overhead and carry the complexity of tunnelling TCP over TCP. You can use them in emergency situations when other VPN options are hard to set up, but IPsec VPNs are best for long-term set-ups. Or you could go for OpenVPN-type SSL VPNs running over UDP instead of TCP. SSH allows many different functions; the man pages and the commented configuration files tell you a lot. You could also turn off root logins in SSH in case you want to restrict root access from remote systems. But for the purpose of this article, I will stop right here, since I also have to cover other topics. Now let us move over to netcat, the Swiss army knife of networking.

Netcat

Netcat, also known as nc, is a very powerful tool for debugging issues in port forwarding, for setting up a simple TCP or UDP server or client, or even for testing UNIX domain sockets. It is also useful for port-scanning and sniffing, but let's look at only a few general uses of netcat here.

From a security point of view

The amazing netcat helps you figure out which ports are open, and which can be used to mount a security attack. Netcat does not always have to be run manually; it can be easily scripted, and you can do plenty of amazing things with it. Netcat's simplicity is easily misunderstood, but its power lies in that simplicity. Let us look at some examples:

$ nc -l -p 1235

The previous simple command line sets up a TCP server at port 1235. You can then connect to it using the following command:

$ nc -v localhost 1235

Now you can chat between the two windows. In fact, netcat can help you write a simple chat program, or even figure out the NAT traversal techniques used by programs like Skype to initiate incoming calls. Netcat can thus work across simple NAT firewalls, and you can do a lot of very advanced networking with it, though netcat by itself does not break or add to security. You can run a UDP server, instead of TCP, with nc -u -l -p 1235, and you can run a UNIX domain socket server with nc -U -l /tmp/unixsock, which shows that many really great things can be done with it. With experience working on UNIX, and an understanding of networking, you can unlock netcat's potential. You can do a simple file transfer between two machines: on machine A, run nc -l -p 1234 > /tmp/newfile and on machine B, run cat newfile | nc -v remoteip 1234, which sends newfile from B to A. Interesting, right? As I said, you can easily script netcat, since it does not have the interactive password requirements of SSH. In fact, you can also marry netcat and SSH and obtain a simple TCP proxy. But all that requires another, more comprehensive article. For now, we are done with this great tool. Let us now talk about the pf firewall found in OpenBSD.

The OpenBSD pf firewall

OpenBSD is not the only OS to have the pf firewall. Though originally designed and written for that OS, today we find pf in FreeBSD and NetBSD too. In fact, the popular pfSense distribution uses OpenBSD's pf, but runs on FreeBSD. FreeBSD's pf is not the real thing: OpenBSD is where pf is natively developed and maintained, and FreeBSD ports it later, so you are better off running it natively. Besides, pf, being firewall software, uses kernel networking code extensively, and OpenBSD's pf is tightly integrated with its own kernel networking sub-system. Consequently, pf works best on OpenBSD. Now, like Linux iptables, the pf firewall can do a lot of cool stuff like port forwarding. However, iptables is very complex, and does too many things. Rewriting payload data in protocols like FTP, SIP, RTP, etc, is something that user-land proxy applications do; that is not the job of a kernel firewall. Content inspection is not its job either. A kernel firewall like pf limits itself to packet headers up to the TCP layer (Layer 4); beyond that, user-land applications have to handle the data. However, pf is very powerful, since it can do things like direct server return and divert-to sockets, which can be used for very advanced packet forwarding, routing, and so on. All pf rules are normally written in /etc/pf.conf, but can be added from the command line or included from other files as well. In general, though, keeping all firewall rules simple, short and sweet is the preferred convention. In OpenBSD, everything is kept very simple; that is why it is so secure. And pf rules read just like plain English – they are so easy to read that you will forget you are writing a computer language. Still, firewall rules take a lot of learning and it's not possible to cover them in one article. I have worked very hard to learn pf rules, and despite using them extensively in all our products, even today I have difficulty with certain rules. It takes a lot of time and experience to master them. Now, let us look at a simple NAT set-up rule, in /etc/pf.conf:

pass out quick on egress from 192.168.0.0/16 to any nat-to 2.3.4.5

The public interface IP is assumed to be 2.3.4.5, and the local network is 192.168.0.0/16. For new or changed pf rules, you load the rules (as root) with the pfctl -f /etc/pf.conf command. If the firewall is not enabled, you enable it with pfctl -e, and you can view the active rules with pfctl -sr (the '-sr' stands for show rules). Pf can do many things like simple TCP and UDP port forwarding, for instance. A simple TCP port forwarding rule to forward the SMTP port on the public interface of the firewall to a local machine, 192.168.1.5, is set up as follows:

pass in inet proto tcp from any to egress port 25 rdr-to 192.168.1.5
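Putting those pfctl invocations together, a minimal edit-check-load cycle (run as root) looks like the sketch below; the -n flag only parses the file without loading it, which catches syntax errors before they affect a running firewall:

# Check the syntax of the rules without loading them
pfctl -nf /etc/pf.conf
# Load the rules for real, then confirm what is active
pfctl -f /etc/pf.conf
pfctl -sr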

Remember, port forwarding is messy to get right. Being the reverse of NAT, it involves state maintenance and tracking. So unless symmetric routing is enforced, where the upstream and downstream paths of the packets go through the same firewall, things don't work as expected. The TCP handshake itself will fail, and you can easily check with netcat that the port forwarding does not work as intended. You can also forward a range of ports easily. For example, to forward ports 5000 to 6000, both inclusive, use the following rule:

pass in inet proto tcp from any to egress port 5000:6000 rdr-to 192.168.1.5

You can make that a UDP rule by simply changing proto tcp to proto udp in that line. Now, if you found that easy, this is what pf is all about: simple to use, yet powerful enough for sophisticated tasks.

By: Girish Venkatachalam
The author runs a company named Gayatri Hitech (http://gayatri-hitech.com) that creates computer networking products like firewalls, mail servers, VPNs, etc. He may be contacted at girish@gayatri-hitech.com. He is on Twitter, Skype and GMail as girish1729.



Exploring Software

Anil Seth

Guest Column

Choosing Between Replacing and Renewing a Desktop

"Renew the desktop using the advice of Linus Torvalds—boot from a Solid-State Disk."

My desktop seemed slow. It is four years old and has faced a fair amount of criticism from my family. I toyed with the idea of buying a new one. Moore's Law is supposed to still hold good, but I found that the extra performance comes via more cores; each core is not much faster. Hence, the new desktop may not be any faster, at least for booting and signing in. And what would I do with the older one? I had installed Fedora when I bought the system. Since then, I have been upgrading it. Could something have gone wrong? I freed a partition and installed a fresh copy of Fedora 17 on it. The new installation was much snappier. The boot time, as shown by systemd-analyze, came down from 72 to 27 seconds! However, the comparison wasn't fair. The original installation had a lot of services installed on it. Removing the additional services reduced the boot time to 45 seconds. The kernel and the initramfs timings were about the same. So what was the difference? There may be some issues related to repeated upgrades of the distribution, which are not the supported methods. While searching for differences, I did find an explanation for why the Ethernet interface was still named eth0 and not em0, as per the default Fedora installation. During the upgrades, an additional package called biosdevname was introduced, yet it was ignored: no package required it, so its use was optional. Besides, the device name difference was not relevant as far as performance was concerned. Disk fragmentation can have an impact on performance. While de-fragmentation is not supposed to be a necessity on Linux, various upgrades and experiments may have seriously fragmented the root partition. Backing up the root partition, reformatting and restoring it reduced the boot time to 34 seconds.
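For readers who want to repeat the measurement, a rough sketch of the commands involved on a systemd-based distribution such as Fedora 17 is shown below:

# Total boot time, split into kernel, initramfs and user space
$ systemd-analyze
# Which units contribute most to the user-space figure
$ systemd-analyze blame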

An SSD as root

There was a parallel approach. I had been fascinated by articles about dropping Solid-State Disk prices. Unfortunately, the local vendors had no idea, or quoted a price that was still high. Recently, I found an online store offering 60 GB at prices not too far above the US prices. In a remarkable coincidence, the SSD was delivered to me the same day as I read the "Get thee behind me, Satan!" comment by Linus Torvalds on slashdot.org. The SSD behaves just like a SATA disk, so installing and using it as an additional disk was not a problem. However, the system would not boot from the SSD; it was obviously a hardware limitation of my old desktop. The workaround was to create a boot partition on a conventional disk, and have the root partition on the SSD. The result was a remarkable improvement in the

responsiveness of the system. Systemd-analyze showed the new boot time as 14 seconds. This figure is even more remarkable when you notice that the kernel and initramfs timing is still the same as in the original set-up, about 7 seconds. So, the user-space boot time has come down from 38 to 27 seconds after de-fragmentation, and to 7 seconds with the SSD!

Bootchart

A very useful utility for identifying boot performance is the bootchart package. It collects the boot data and prepares a beautiful, detailed chart, bootchart.png, in the /var/log directory. Unlike systemd-analyze, this includes the start of the display manager, which is the time that matters to the user. With the SSD, bootchart gives the time as 26 seconds when the login screen is ready. The corresponding timing for a SATA root disk is 56 seconds originally, and 41 seconds after de-fragmentation. The difference between the two types of disks can be seen from the following table:

Cumulative time    SSD root    SATA root    SATA root restored
CPU time           24 sec      29 sec       28 sec
IO time            44 sec      343 sec      177 sec
Elapsed time       26 sec      56 sec       41 sec

The conclusion is obvious: I/O time is the most significant variable in reducing boot time. The benefit of no rotational delay and minimal, uniform access times in an SSD gives a new life to an old system. A 60 GB SSD is a cost-effective option in a desktop, as it can be used for root and home. Multimedia and other large files can stay on the existing SATA or IDE hard disk. This investment is likely to result in better performance than replacing the desktop—unless, of course, your brand-new desktop relies exclusively on SSD. My next project is to refresh my netbook. That may happen soon, as more options for 120 GB SSDs are becoming available in India, and local prices are likely to become comparable to US prices.

By: Anil Seth The author is currently a visiting faculty member at IIT-Ropar, prior to which he was a professor at Padre Conceicao College of Engineering (PCCE) in Goa. He has managed IT and imaging solutions for Phil Corporation (Goa), and worked for Tata Burroughs/ TIL. You can find him online at http://sethanil.com/ and reach him via email at anil@sethanil.com.


Interview For U & Me

Mozilla measures success by how much we are doing to improve the overall health of the open Web

Mozilla's rapid release cycle for the Firefox browser has had corporations a bit worried for a long time. Many feel that the Mozilla Foundation should slow down a bit for users to get accustomed to the changes offered with each release. Mozilla, however, has a different take on this. While a lot has been happening at Mozilla, the foundation is also eyeing the mobile space with Boot To Gecko. But with a super successful open source software like Android around, is Boot To Gecko a cool move from Mozilla? Diksha P Gupta from Open Source For You touched upon all these issues in an exclusive conversation with the Mozilla Foundation. Here are a few excerpts:

Q

What are you doing to ensure that even companies adopt Firefox like individuals do?

Many organisations have two primary concerns: 1) The Firefox release schedule doesn’t allow sufficient time for the organisations and their vendors to certify new releases of the products; 2) The associated end-of-life policy exposes them to considerable security risks if they remain on a non-current version past Firefox 3.6. To help address these issues, Mozilla offers an Extended Support Release (ESR) based on an official release of Firefox for the desktop. This can be used by organisations including schools, universities and businesses, as well as by others who need extended support for mass deployments. To find out more information about this programme, please see Mozilla’s ESR Overview.

Q

Mozilla had to pull back Firefox 16 after its launch, amidst security concerns. What were they and how have you fixed them?

The security vulnerability in Firefox 16 could have allowed a malicious site to potentially determine which website’s users had visited it and get access to the URL or URL parameters. There was no indication that this vulnerability was being exploited in the wild. Mozilla worked quickly to fix the vulnerability, releasing updates to Firefox for Android on October 10 and Firefox for Windows, Mac and Linux on October 11.

Q

The beta of Firefox 17 offers a new social API. There have been Mozilla APIs that did not get very far. Why did you feel the need to introduce one now?

Many people use social sites throughout the day—checking back for updates, chatting with friends and sharing. When we started to integrate social sites into Firefox, this was our guiding principle: make it easy to stay connected, and stop treating social like ‘just another tab’. We see potential for Social API integration beyond traditional social sites, too – imagine using the sidebar as an easy way to keep up with group projects, email, or new music. Mozilla is a non-profit organisation; we build Firefox only for users, and features like the Social API exist solely to give users a more integrated, human, awesome Web experience.

Q

According to Statscounter, Chrome is the No 1 browser, followed by IE. What do you think Firefox lacks?

Mozilla measures success by how much we are doing to improve the overall health of the open Web. Mozilla achieves success by helping more people make choices about what software they want to use, what level of participation they would like to have online, and how to take part in building a better Internet. When we see growth in community contributors, software localisation, and a competitive browser market, for example, we know we are moving toward our goals.

Q

Secure surfing is increasingly becoming a concern for one and all. How is Mozilla working to ensure security for its users?

Mozilla has incorporated some features in the Firefox browser, which help users ensure their online security. Here are a couple of them: Warn me when sites try to install add-ons: Firefox will



always ask you to confirm installations of add-ons (those little pieces of software that enhance your Firefox experience). To prevent unrequested installation prompts, Firefox warns you when a website tries to install an add-on and blocks it. You can add exceptions to this rule for sites you trust—just click Exceptions, enter the site name and click Allow.

Block reported attack sites: Check this if you want Firefox to check whether the site you are visiting may be an attempt to interfere with normal computer functions or to send personal data about you to unauthorised parties over the Internet (note that the absence of a warning does not guarantee that a site is trustworthy).

Block reported Web forgeries: Check this if you want Firefox to actively check whether the site you are visiting may be an attempt to mislead you into providing personal information (this is often referred to as phishing). Note that the absence of a warning does not guarantee that a site is trustworthy. For more information, see 'How the phishing and malware protection in Firefox works'.

Remember passwords for sites: Firefox can securely save passwords you enter in Web forms to make it easier to log in to websites. Clear this checkbox to prevent Firefox from remembering your passwords (for example, if you're on a public computer). Even with this checked, however, you'll still be asked whether to save passwords for a site when you first visit it. If you select 'Never for This Site', that site will be added to an Exceptions list. Use Exceptions to access that list.

Use a master password: Firefox can protect sensitive information such as saved passwords and certificates by encrypting them with a master password. If you create a master password, each time you start Firefox, it will ask you to enter the password the first time it needs to access a certificate or stored password. You can set, change or remove the master password by checking or unchecking this preference or by clicking the 'Change Master Password' button. If a master password is already set, you will need to enter it in order to change or remove the master password.

Mozilla for mobile devices

Q

When do we see Boot To Gecko being launched and who will be your preferred OEM partner?

Telefónica has committed to bringing commercial Firefox OS devices to market in Latin America in early 2013. The first devices featuring the Firefox OS will be manufactured by TCL Communication Technology (Alcatel) and ZTE, using Snapdragon processors from Qualcomm.

Q

How different will Boot To Gecko be from Android or Ubuntu for Android?

The Firefox OS will be totally open (just like the Firefox Web browser) and available to any network operator or OEM. However, it will be managed and maintained by Mozilla, so it will not need every operator to agree to every aspect or

decision which has slowed down some other initiatives. Although Google makes some source code of Android available, Android is essentially not open: all the APIs are designed by Google, and Google controls the direction of the technology. The source is available, but often only after that particular version of Android has been shipped. Firefox OS will be more open because the governing rules for the ecosystem will be looser, and being HTML5-based, it extends the openness of the Web to the mobile.

Q

Which segment amongst tablets or smartphones are you targeting for Boot To Gecko and why?

There is a much lighter software footprint on Boot to Gecko mobile devices. The operating system and apps are one layer closer to the hardware, so less memory and CPU is needed to deliver the same performance compared to more advanced handsets. This cost-effectiveness will allow Mozilla to offer Firefox OS smartphones to consumers who would otherwise have purchased a feature phone. This aligns well with the Mozilla project's overall goal of strengthening the open Web and expanding its reach.

Mozilla and the Indian community

Q

How do you interact with the community in India and what do you plan to do to increase the involvement of developers from this part of the world?

There is a vibrant community in India, including more than 30 Mozilla Reps who work to grow and strengthen our community, and promote Mozilla, our products and contribution opportunities. Vineel Reddy of Hyderabad serves on Mozilla's global Reps Council. Mozilla Reps have been organising numerous workshops, events, interactive sessions and contests around the country to get people engaged and excited with building and expanding the open Web. There have been 13 events in India in the past two months. Right from organising community events to spreading Mozilla at schools, we are doing it all in India. We organise MozParty events and MozCamps, MozClubs and Moz Interactions for the community to come up and discuss Mozilla. We have also planned to conduct a website-making session in government-run schools in Chennai, and help in spreading the word about Mozilla.



Admin

How To

<VirtualHost 192.168.0.7:80>
DocumentRoot /var/www/html/site1
ServerName www.site1.com
</VirtualHost>

<VirtualHost 192.168.0.8:80>
DocumentRoot /var/www/html/site2
ServerName www.site2.com
</VirtualHost>

Start your Web server with service httpd start; access both websites and make sure they are working. Here, notice that in httpd.conf I used two different names for ServerName—one is www.site1.com and the other www.site2.com. Also remember that the contents of our two index.html files differ. I'll explain the reason later in this article. Now get back to the first machine, where we'll configure Pound—which is the easiest part of the whole process. Edit /etc/pound/pound.cfg and on line 6 you'll see the main listening ports setting, ListenHTTP. Here, give the port number and IP address of the machine on which you're running Pound. You can see my settings in Figure 1.

Figure 1: Pound configuration—I

Now, you'll need to provide the addresses of the Web servers you're running. On pound.cfg line 35, you'll find Catch-all server(s); in two BackEnd blocks below it, give the IP addresses of the Web servers, port numbers and priorities. For an example, see Figure 2. I have set the priority to 7 for the 192.168.0.7 Web server, and for 192.168.0.8, it's 3. This means that when a request is received to access the website, there is a 70 per cent chance that the request will be sent to the first server. You can omit the Priority field to distribute the traffic equally between both servers. Now start the service with service pound start and, if everything is done right, the service will start smoothly. Well, now it's time to check by accessing the site. Prior to that, you may need to adjust your firewall settings; either make the necessary changes, or temporarily stop the firewall service using service iptables stop on both the machines. Open your browser and type the IP address of the machine on which you're running Pound (for me, it's 192.168.0.6) and you'll see the content "SITE1"; now just click the Refresh button a few times and you'll see the contents changing.

Figure 2: Pound configuration—II
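Since Figures 1 and 2 are screenshots, here is a rough sketch of what the relevant parts of /etc/pound/pound.cfg would look like with the addresses used in this article; treat it as an illustration of standard Pound syntax rather than a verbatim copy of the figures:

ListenHTTP
    Address 192.168.0.6
    Port    80
End

Service
    BackEnd
        Address  192.168.0.7
        Port     80
        Priority 7
    End
    BackEnd
        Address  192.168.0.8
        Port     80
        Priority 3
    End
End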

Observations and explanations


First of all, I'm running my Web servers on 192.168.0.7 and 192.168.0.8 as visible above—yet, I access them at the address 192.168.0.6. This is because your (client/browser) request(s) must be made to Pound, which fetches the response from the Web servers mentioned in pound.cfg and returns it. Sure, in our test set-up, you can directly access the websites at 192.168.0.7 or .8, but in real-world situations, you'd firewall these back-end servers to only accept requests from the Pound server, and ensure that all client requests must be made to the server running Pound. Second, why do the responses differ (SITE1 first, and when you Refresh a few times, SITE2)? Pound distributes incoming requests to either of the configured back-end servers, based on their priority. As noted earlier, we deliberately entered different content in each index.html file so we'd know which site the response came from. However, in an actual working environment, all Web servers should be serving the same content (if it's static content, that is). Remember our www.site1.com and www.site2.com virtual host site names? They are obsolete or unused; all access being through Pound, the only address that matters is that of the machine running the Pound service.
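Rather than clicking Refresh by hand, a quick loop from any client machine (assuming curl is installed) makes the distribution visible; with priorities 7 and 3, roughly seven of every ten responses should come from the first back-end:

# Fetch the page ten times through Pound and print each response body
$ for i in 1 2 3 4 5 6 7 8 9 10; do curl -s http://192.168.0.6/; echo; done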

That's it!

This is just a beginning; you can further extend this exercise by implementing HTTPS or deploying a Squid proxy server for your load-balanced Web servers. I hope you're feeling a bit more comfortable with the term ‘load balancing...’

By: Vinayak Pandey A computer graduate, a Red Hat Certified Engineer on RHEL6 and an open source enthusiast, the author has also worked as a freelance Linux trainer for a while. On the professional front, he will start his journey with Tata Consultancy Services soon. You can catch him at https://hackstips.wordpress.com.


Contest

Developers

The CodeChef.com Challenge—your monthly dose of coding puzzles from India’s biggest coding contest!

Solve this month's puzzle, and you stand the chance of being one of the three lucky people to win a cash prize of Rs 1,000 each!

The November edition's puzzle: Our chef is fond of collecting international and national postage stamps, but only he knew how many stamps he owned. Once, when asked by one of his friends about the number of stamps he had, he replied with the following sentence: "If I divide the stamps into two sets—international and national—then five times the difference between the number of stamps in each set equals half of the total number of stamps. Also, I've more national stamps than international ones. And half of the difference between the squares of the number of stamps in the national and international sets is 1620." This answer left his friend rather puzzled. Can you help the chef's friend to find out how many international and national stamps he has collected?

The solution
Let the number of national and international stamps be x and y, respectively.
Condition 1: 5(x - y) = (x + y)/2, so (x + y)/(x - y) = 10
Condition 2: (x^2 - y^2)/2 = 1620, so (x + y)(x - y) = 3240
Let (x + y) = A and (x - y) = B; then A = 10B and AB = 3240.
Solving these equations gives A = 180 and B = 18, that is, (x + y) = 180 and (x - y) = 18, which yields x = 99 and y = 81.

ANSWER: The number of national stamps is 99 and the number of international stamps is 81.

And the winners this month are:
• Santosh Pathak
• Vishal Gawai
• Vijay Kumar

Here's your 'CodeChef Puzzle of the Month': Our chef is fond of playing games. He called a few children to his restaurant to play a 'smart guessing game'. In this game, the player will use N coins numbered from 1 to N, and all the coins will be showing heads. The player needs to play N rounds. In the j-th round, the player will flip the face of all the coins whose number is less than or equal to j (i.e., the face of coin i will be reversed, from Heads to Tails, or from Tails to Heads, for i ≤ j). Each of the children has to guess the total number of coins showing Heads and Tails, respectively, after playing N rounds. Consider N to be 500 and help these children in finding out the answer to the chef's question.

Looking for a little extra cash? Solve this month's CodeChef Challenge and send in your answer to codechef@efyindia.com or lfy@codechef.com by December 15, 2012. Three lucky winners get to win some awesome prizes, both in cash and other merchandise! You can also participate in the contest by visiting the online space for the article at https://www.facebook.com/LinuxForYou

About CodeChef: CodeChef.com is India’s first, non-commercial, online programming competition, featuring monthly contests in more than 35 different programming languages. Log on to CodeChef.com for the December Challenge that takes place from the 1st to the 11th, to win cash prizes up to Rs 20,000. Do keep visiting CodeChef to participate in the multiple programming contests that take place on the website throughout the month!



For U & Me Overview

Manual of Localisation: Ensuring Consistency in Localisation Through Style Guides

This article in the series deals with the topic of style guides for localisation.

According to Wikipedia, "A style guide or style manual is a set of standards for the writing and design of documents, either for general use or for a specific publication, organisation or field. [It helps] provide uniformity in style and formatting across multiple documents." I first learnt of style guides when my proposal for a conference research paper was accepted, and I was asked to send the complete paper as per their style guide. It consisted primarily of how to number sections, figures and references; the font sizes for headings of sections and subsections; as well as guidelines on other elements typical of a scientific publication. In the context of localisation, we try to make a software application (or its documentation) developed for a source language appear equally natural to speakers of a target language. Languages differ in attributes, usage for communication, and cultural elements, such as date formats, names of calendar elements and their short forms, and currency symbols. These need to be localised consistently. As the source language's elements and practices may have a different structure from the target language—or even be entirely absent in it—a style guide helps make the localisation


consistent, even if many people participate in the localisation. The style guide deals with language and cultural attributes, terminology and quality assessment aspects. In the following sections, I will focus on the difficulties common to Indian language localisers. I encourage localisers to use the references for more information on specific languages. I have used the FUEL and Microsoft style guides for Telugu for illustration and highlight the improvements needed. Let's look at some language and cultural attributes.

Product and feature names

As product and feature names are trademarked, some style guides advise against localising them. For one, the Microsoft Telugu style guide states that product names should not be localised. Here is an example:

The Microsoft Feedback Tool is unable to send feedback.
(+) Translation: Microsoft

From the viewpoint of user expectations, this is inappropriate. By transliterating Microsoft in Telugu as ,we do not violate any trademark rights. Actually, local


laws demand that office signboards use the local language in addition to English. Using transliteration, the user interface looks more natural and friendly for a typical user.

Technical acronyms

Technical acronyms such as OLE (Object Linking and Embedding) or RAM (Random Access Memory) are difficult to localise. When they are transliterated in Indian languages, they may be difficult to read—so for localising contexts where the user is supposed to have a bit of technical knowledge (like those who install operating systems), these acronyms can be left as is. However, if they are to be presented to a lay user, then it is better that they are transliterated as individual characters or as a pronounceable word.

Culture-specific words

When we translate words like India, we prefer the more common Indic equivalent of Bharat, or its variations (variation 2), rather than a transliteration of India (variation 1). Similarly, when country names include direction names, as in North America, it is better to translate the direction name into the local language. The FUEL Telugu examples list the English entries 'India' and 'North America' alongside the two Telugu variations.

Foreign words with multiple conjuncts for certain syllables

The FUEL Telugu example here is the word 'Software', shown in three Telugu renderings. The first form is closer to the English pronunciation, but is difficult for the user to understand. It is better to use ZWNJ (the second case) to simplify the conjunct formation and improve readability, and also to make it end with the 'u' sound (the third option) to make it natural for Telugu people.

Short-cut keys

Different software use different ways (preceding the letter with '_', '&' or '~') to highlight short-cut keys. In English, these keys flow in the usual text. For Indic languages, as there are several input methods and some keys for vowel matras need the support of dummy glyphs, it is not possible to use local language short-cut keys. So, usually, the English short-cut keys are put at the end of the string, in parentheses. But these are still difficult to use, as this requires changing the input method, so I suggest that we dispense with short-cut keys, as mouse and touch-based interaction is becoming the norm. However, a short-cut key is an accessibility aid, so this may require a longer discussion.

Keynames

Some keys have specific abbreviations like Ctrl and Alt, and most style guides suggest they be retained. I would suggest that, as these keys are essential for everyone, transliterating the key name is more helpful. The FUEL Telugu example gives the string 'Both Ctrl keys together change layout.', with 'Ctrl' shown in two Telugu variations.

Foreign words ending with halanth

Examples here are words such as 'Meter-' and 'Road-'. While the first form (ending in a halanth) is correct from a phonetic perspective, the form with a 'u' sound appended is more natural for Telugu speakers. However, if these are used in a context meant for professionals, the first form will be more appropriate.

Terminology

There is no unified terminology for most Indian language localisation needs. FUEL is a project by open source software teams, and Microsoft has its own terminology. It is essential for a specialised language centre to build a common terminology that can be used by all.

Quality assessment

Currently, it requires other team members to spot deviations from the guidelines. Some localisation tools are able to flag elements like extra spaces. It would be very useful to have localisation tools check for conformance to style guides.

For more information
[1] FUEL style guides, http://www.fuelproject.org/styleguide/index
[2] Microsoft style guides, http://www.microsoft.com/Language/en-US/StyleGuides.aspx

By: Arjuna Rao Chavala
The author is chief consultant of Arc Alternatives. He serves as the WG Chair for IEEE-SA project P1908.1, "Virtual keyboard standard for Indic languages". He co-founded and served as the first president of Wikimedia India. He can be reached through his website http://arcalter.com or by email to arjunaraoc@arcalter.com.



For U & Me

Career

Secure Your Career with 'Ethical Hacking!'

The deteriorating cyber security situation in the country has led to more opportunities for ethical hackers. OSFY finds out what it takes to build a career in this adventurous domain!

In the dark alleys of virtual space, cyber crooks, a.k.a. hackers, are persistently coming up with novel attacks to knock out even the latest anti-fraud defences. If a report released by software security services provider, Norton, is anything to go by, over 42 million people have become casualties of cyber crimes in India in 2012. With growing instances of sophisticated cyber crimes becoming a virtual reality, cyber sleuths are now much in demand to safeguard valued information systems. According to reports, India faces a dearth of 470,000 cyber security experts and an upcoming government-private sector initiative hopes to train 500,000 cyber warriors or ethical hackers in the near future. Before we delve deeper to understand what it takes to make a career in ethical hacking, let us get to know what this term implies. "Ethical hacking is basically a term used by the media or the general public. Experts know this process as penetration

testing, intrusion testing or red teaming. And these terms certainly do not sound that cool, whereas ethical hacking does,” says Tamaghna Basu, who works as a cyber security expert at an online shopping portal. Basu, who is also a core member of the null community, continues, “Ethical hackers, who are also known as white hats, hack a security system on behalf of their organisation and identify the security loopholes that a hacker could exploit, and fix them. In a nutshell, they basically do what the hackers do, but the process is ethical since this is done to enhance the safety of the computer systems.” According to Basu, the advent of mobile technology has given rise to a new breed of professionals who work as Android penetration testing experts, iOS intrusion testing experts and more. The increasing reliance on IT sources has given way to more threats, thus increasing the vulnerability factor. So much so that both government departments and private firms




Some ethical hacking course providers Name of Training Institute

Contact Details

Innobuzz Knowledge Solutions (P) Limited

10, PVR Plaza, Connaught Circus, New Delhi 110001 Ph: (0)9953749101 Email: info@innobuzz.in Website: http://www.innobuzz.in

CyberCure Technologies

20, II Floor, Hudson Lane, New Delhi 110009 Ph: 011- 41557113 Email: info@cybercure.in Website: http://www.cybercure.in

Institute of Information Security

201 and 204, Ecospace IT Park, Off Old Nagardas Road, Andheri(E), Mumbai 400069; Ph: +91 22 4005 2628/ +91 22 4295 3158 Email: info@iisecurity.in; Website: http://www.iisecurity.in

Indian School of Ethical Hacking

ISOEH Saltlake Branch: Infinity Benchmark Building, 18th Floor, Room #3, Saltlake Electronics Complex, Sector - V, Kolkata 700091; Ph: 9830310550; Website: http://www.isoeh.com/

are now hiring ethical hackers to safeguard their networked computer systems. Ankit Oberoi, director, Innobuzz Knowledge Solutions, Delhi, explains, “With the awareness on privacy laws escalating in India, the demand for ethical hackers has increased manifold in both private and government sectors like banks and the armed forces.” Oberoi, who pursued a career in ethical hacking before opening up his own training institute, comes up with an interesting fact. “I thought of becoming a serious ethical hacker only after I became a victim of hacking. My data was stolen from my computer and I failed to find a good ethical hacker to sort out the problem. Then, I decided to take it up as a career option,” says Oberoi. When Oberoi started his institute three years back with 40-odd aspiring ethical hackers, little did he know that his school would end up training more than 1000 students across the country. He explains the reasons. “The sudden rush of people enrolling for ethical hacking courses is mainly because of security concerns. Everyone does not want to become an ethical hacker, but they want to educate themselves so that they can combat the threats. Training institutes offer both basic and advanced courses, and for those who understand ethical hacking, it can be a lucrative career option, so they go for the latter option,” says Oberoi. While aspiring ethical hackers are making a beeline to training institutes, Rajat Garg, managing director, CyberCure Technologies, says it is important to know the responsibilities of an ethical hacker. “These professionals devise security policies for a company, get involved in remote management of security products, don the hat of a security auditor and clinically probe cyber crimes,” says Garg. And what is the curriculum at institutes that train people for such careers? “Our first module deals with whether ethical hacking is legal or illegal. As training

institutes, it is important to tell students about the varied risks associated with unlawful hacking. Then, they are taught about coding, cyber forensics, network security and case studies of the latest virus attacks,” elaborates Garg. With ethical hacking offering lucrative job opportunities, it is important to be armed with the right skill sets to ensure a successful stint in this domain. Says K K Mookhey, founder of the Institute of Information Security, Mumbai, “The first and foremost thing is keeping abreast with the latest technology and being flexible enough to keep updating your skills to tackle the new challenges that come your way. Second, it’s essential to keep your integrity intact while working in this terrain. That apart, knowledge in programming and a basic knowhow of TCP/IP protocols such as SMTP, ICMP and HTTP are required. Certification courses such as ‘Certified Ethical Hacker’ play an important role in shaping one’s career.” It’s not just that the careers in this arena promise to be fruitful—the remuneration that one gets is equally good, says Sandeep Sengupta, co-founder of the Indian School of Ethical Hacking, Kolkata. Sandeep, who himself is a security expert by profession, helped Calcutta University combat a virus attack in 2003 that almost paralysed the functioning of the whole system. “If you have the right skill sets accompanied with the right aptitude, you can earn Rs 100,000 to Rs 200,000 per month easily. Organisations don’t mind hiring good ethical hackers on a higher pay,” says Sengupta.

By Priyanka Sarkar The author is a member of the OSFY Bureau. She loves to weave in and out the little nuances of life and scribble her thoughts and experiences in her personal blog.



Cloud Corner

Insight

How to Choose the Best Cloud Solution

Cloud usage in Indian companies is on the rise. Large, mid- and small-sized companies are all taking to this trend. Here are some tips on how to go about adopting the best cloud services.

India is becoming more cloud-ready with every passing day. Large enterprises and SMEs are warming up to this phenomenon called the cloud. According to the recently released third annual VMware Cloud Index, a commissioned study conducted in 11 Asia Pacific countries by Forrester Consulting and ITR, cloud usage in India has surged. Fifty per cent of the respondents in the country stated that they have already adopted cloud solutions or approaches, which is a growth of about 25 per cent over last year. An additional 30 per cent of the respondents declared that they are currently planning to deploy cloud solutions within the next 18 months, highlighting the growing cloud opportunities in India. However, 40 per cent of respondents state that there is internal resistance to change that is hindering the adoption of the cloud, suggesting that faster cloud adoption is possible for Indian organisations if these hindrances can be overcome. "The survey demonstrates the potential for cloud computing in the country and reflects a double digit rate of adoption," said T Srinivasan, managing director, VMware India and SAARC. The study also reveals that 54 per cent


of senior IT professionals surveyed in India consider cloud computing a top business priority. There has also been an improvement in the understanding of cloud computing, with 72 per cent of respondents claiming a good understanding, compared to 59 per cent last year. Knowledge about cloud computing is higher in large organisations (10,000+ employees: 82 per cent) compared to small organisations (<500 employees: 63 per cent). It is the lack of knowledge about the cloud that is deterring the small and medium enterprises in the country from migrating to it. Open Source For You spoke to a few industry giants to gather tips on how to choose the best cloud solution.

Determine the profile of the application that one wants to host on the cloud

Before a firm begins looking for a cloud solution, the decision makers should be sure of the profile of the application that they want to host on the cloud. Arvind Mehrotra, president, Asia Pacific and Middle East, NIIT Technologies Ltd, says, "First, one has to understand the load of an application. The applications that require high-peak loads and larger scale




infrastructure are considered suitable for the cloud. Apart from these, there are applications that have an aspect of seasonality to them. There are certain points in time where the application infrastructure required has to be much larger and one needs to work out how to support those peak loads. There are event-based applications. Whether it is an SME or an enterprise, one has to first determine the application profile.” Mehrotra suggests that one can decide on which applications are appropriate for the cloud in two ways. He said, “First is the load pattern of the application. If the infrastructure demand is very large for an application that’s being used, the in-house cost of it will be higher. If that cost will get reduced by using a cloud-based solution, and you are getting scalability by the service-provider—that is the cloud solution for you. One has to always make a loadanalysis and understand the demand of the application. “Yet another determinant of the profile has to be whether the application is core or non-core. If the applications are not core and their consumption is by only a small set of people rather than being used enterprisewide, you can resort to the cloud. For example, if the procurement department in your organisation has 10 people working in it, you will not want to create a procurement platform for that small set of people. You may want to take a procurement application as a service from a cloud service provider. Similarly, consider a Contract Management System which is used by only a few people in the legal department, or a Risk Management System that is also used by a very few. These applications may be critical but since they are not served to a major proportion of the employees, they can be well served by the cloud. “The third set of applications which can move to the cloud are the consumer-based applications, where there is no differentiation for the organisation. For example, with the email, it doesn't matter whose email you are using, and whether it is based on internal or external infrastructure, as long as it can be made secure. For such applications, there is no distinction to your brand, to your buying process or to your selling process. These applications are best suited for the cloud. Whereas when it comes to core applications like an ERP, we do not see very large usage of such applications on the cloud. A Risk Management System or a Contract Management System is much more successful on the cloud as compared to an ERP.”

SLA commitments

Gaurav Agrawal, Sify Technologies Limited, says, "The Service Level Agreement is one of the key things that you have to look at. This is important because you are connected to an application on the Web. There is no infrastructure nearby. You will have to call the service desk of your cloud service provider to get the support. So you need a commitment from your service provider in the form of an SLA."

Support

If you are migrating to the cloud for the first time, you have to understand how the migration of your applications will happen. You have to find out how much of the cloud offering can be configured to best suit your kind of set-up. This can either be through a cloud service provider or a channel partner, but configuration can play a prominent role in the business.

Security

This is one major fear when an organisation is migrating to cloud-based applications. It is not just about the security of the environment but the security involved in the processes that are being conducted in the cloud. You need to have access to these processes so that you can validate that the security environment is to the level your organisation is comfortable with, and you should be able to audit these using a third party. You should be able to visit a data centre and come back with the confidence that your information, data and your business processes are in a safe, secure and auditable position. For security audits, a majority of enterprises resort to third parties, while SMEs ask for third party certificates for security. You must also find out what kind of audit is allowed and what is not.

Degree of customisation

If you come across a cloud solution that requires too much of customisation, experts recommend that you do not adopt such a solution. All cloud applications provide a certain set of services and processes. The applications can be customised to a certain degree. But if your demand involves a high level of customisation and changes, cloud apps are not for you, because you are expecting that application to be changed to suit your business processes. This means that you are not ready to adopt the cloud yet. What you are looking for is someone to manage your apps environment in a ‘managed services’ way.

The ability of the cloud solution to interact with your existing apps

In today's world, more and more applications are talking to each other. So, for example, if you have a portal or a workflow, it will be talking to your email system, OCR system or to some other application. If you go for a cloud-based application, you have to see whether those inter-connected needs of your system are being supported or served, and only if your service provider gives you the assurance of compatibility with your existing applications, can you go ahead with the solution. By: Diksha P Gupta The author is assistant editor at EFY.


How To

Open Gurus

Mastering the PandaBoard

This article deals with image processing on PandaBoard using Kinect and OpenCV. It covers installing OpenCV with Kinect support on PandaBoard, along with a few examples.

I just finished conducting a tutorial on this topic at LinuxCon North America 2012. Let me first explain the motivation behind choosing this topic. Recently, there have been a lot of cool, mind-blowing hacks using Kinect, such as the Minority Report replica, gesture-detection systems, etc. But none of these hacks or systems is portable at all! It would be very hard to even commercialise them. So, why not start developing your idea on a mobile platform such as PandaBoard?

Why are we doing this?

This might be your first question. Here are a few of my answers: Using the combined potential of the PandaBoard, OpenCV and Kinect, we can possibly solve a lot of computer vision problems. Since PandaBoard is very small, we could possibly end up developing a product that can be carried around by users wherever they go. It sounds cool!

A quick recap of PandaBoard and OpenCV

I have covered these topics in previous articles, so I'll request you to refer to earlier issues for details. All I'll say here is that PandaBoard is an open source embedded development board based on the Texas Instruments OMAP4 platform, which supports various Linux distributions. Incidentally, its price has now dropped to $153, while the PandaBoard ES version is available for $162. OpenCV is one of the world's most popular computer vision libraries, free for commercial and research use. The latest version (2.4.2) also has support for Kinect.

Microsoft Kinect

Kinect is a motion-sensing input device made by Microsoft. If you buy Kinect for Windows, you also get a commercial licence with it. The price is $250; for students, it's $150. Read more about it at Wikipedia if you need to.

Installing OpenNI and Kinect drivers (Ubuntu on PandaBoard)

Note: These instructions will only work for Ubuntu running on PandaBoard, and not on a desktop machine or a laptop.

OpenNI (Open Natural Interaction) is a multi-language, cross-platform framework that defines APIs for writing applications utilising Natural Interaction. The first part, OpenNI installation, begins with installing the dependencies, with the following commands:

sudo apt-get update
sudo apt-get install gcc-multilib libusb-1.0-0-dev git-core build-essential
sudo apt-get install doxygen graphviz default-jdk freeglut3-dev libopencv-dev

Next, create a folder to hold the download and the installation, and download the latest unstable version of the OpenNI software from GitHub:

mkdir kinect



cd kinect git clone git://github.com/OpenNI/OpenNI.git cd OpenNI

Note: The version number in the path may need to be changed to reflect whatever is the current version.

git checkout unstable

Note: The version mentioned in the README file should at least be 1.5.4.0.

For the second part (installing the Kinect driver), first download the source code, as follows: cd ~/kinect git clone git://github.com/avin2/SensorKinect.git

The software is set up for software floating point, but Ubuntu has been compiled for hardware floating point—so the compiler flags need to be modified. To do this, issue the following commands: cd Platform/Linux/Build

Next, configure the compiler flags; the software floating point option has to be turned off. In the ~/kinect/SensorKinect/Platform/ Linux/Build/Common directory, edit the Platform.Arm file to remove the “-mfloat-abi ” option, as we did earlier for OpenNI. Now, build the driver, which will create a Redist folder:

sudo gedit Common/Platform.Arm cd ~/kinect/SensorKinect/Platform/Linux/CreateRedist

Remove the “-mfloat-abi” option in the file, so that it finally looks like what’s shown below: ifeq "$(CFG)" "Release" # Hardware specifying flags CFLAGS += -march=armv7-a -mtune=cortex-a8 -mfpu=neon

./RedistMaker

To configure the USB port, an edit is required to the config file, or else it will not select the correct USB port. Edit Config/ GlobalDefaultsKinect.ini in ~/kinect/SensorKinect/Platform/Linux/ Redist/Sensor-Bin-Linux-Arm-v5.1.2.1 and find the line that reads:

#-mcpu=cortex-a8 # Optimisation level, minus currently buggy optimising methods

;UsbInterface=2

(which break bit-exact) CFLAGS += -O3 -fno-tree-pre -fno-strict-aliasing # More optimisation flags

Change this line (un-comment and change the interface from 2 to 1) to:

CFLAGS += -ftree-vectorize -ffast-math -funsafe-mathoptimizations -fsingle-precision-constant

UsbInterface=1

endif

To build OpenNI, run the following:

Finally, install the driver and test it: cd ~/kinect/SensorKinect/Platform/Linux/Redist/Sensor-Bin-Linux-

cd ~/kinect/OpenNI/Platform/Linux/CreateRedist

Arm-v5.1.2.1

./RedistMaker.Arm

sudo ./install.sh

Note: It’s possible that this will result in an error because MAKE_ARGS includes “-j0 ″, which is illegal. If this occurs, edit Redist_OpenNI.py and find the line that looks like the one below: MAKE_ARGS += ' -j' + calc_jobs_number()

…and change it to… MAKE_ARGS += ' -j1'

This will build the OpenNI binaries and create a folder called Redist. Then install OpenNI with the following command: cd ~/kinect/OpenNI/Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-

cd ~/kinect/OpenNI/Platform/Linux/Redist/OpenNI-Bin-Dev-LinuxArm-v1.5.4.0 cd Samples/Bin/Arm-Release ./Sample-NiSimpleRead

This should result in a series of values being displayed, which vary if something is waved in front of Kinect.

Install OpenCV, with Kinect support, on the same platform

Note: These instructions would also work for Ubuntu running on a desktop machine or a laptop. For PandaBoard, it took me about 2.5 hours, whereas on my i7 machine, it took 15 minutes!



Approach 1: If you are not interested in installing the latest version of OpenCV, then you can install it from the Ubuntu software repositories with a simple sudo apt-get install libopencv-dev.
Approach 2: If you want the latest version of OpenCV, you need to build from source; you will have to install the dependencies prior to that. First, to make sure your system is up to date, run the following commands:

sudo apt-get update
sudo apt-get upgrade

Proceed to installing the dependencies, which are segregated as follows.
Essentials: These are libraries and tools required by OpenCV:

sudo apt-get install build-essential checkinstall cmake pkg-config yasm

Image I/O: These are libraries for reading and writing various image types. If you do not install them, then the versions supplied by OpenCV will be used:

sudo apt-get install libtiff4-dev libjpeg-dev libjasper-dev

Video I/O: You need some or all of these packages to add video capturing/encoding/decoding capabilities to the highgui module:

sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev

Other dependencies:

sudo apt-get install checkinstall gir1.2-gst-plugins-base-0.10 gir1.2-gstreamer-0.10 libgstreamer-plugins-base0.10-dev libgstreamer0.10-dev libslang2-dev libxine-dev libxine1-bin libxml2-dev

Python: Packages needed to build the Python wrappers:

sudo apt-get install python-dev python-numpy

Other third-party libraries: Install Intel TBB to enable parallel code in OpenCV:

sudo apt-get install libtbb-dev

GUI: The default GUI for highgui in Linux is GTK. You can optionally install QT instead of GTK, and later enable it in the configuration:

sudo apt-get install libqt4-dev libgtk2.0-dev

Now download OpenCV from http://bit.ly/WcVc5K and extract the downloaded file to your home folder. Navigate to the extracted folder, make a sub-folder 'build' and cd to it; then run the following commands:

sudo apt-get install cmake-gui
cmake-gui

Now provide the source folder, and in the binary folder option, provide the 'build' folder path. Click Configure and select the 'OPENNI' boxes to include the Kinect support for OpenCV, and click Configure again to update. Once you are sure, click Generate. Next, compile OpenCV with the usual make, and install it with sudo make install. Edit the /etc/ld.so.conf.d/opencv.conf file and add /usr/local/lib to it, then run sudo ldconfig. Edit the ~/.bashrc file to add the following:

PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH

Now, log out and log back in, or restart the system. To configure OpenCV with CodeBlocks, go to Project > Build Options; then go to Compiler > Other Settings and add `pkg-config --cflags opencv`. Next, go to the Linker settings and in other settings, add `pkg-config --libs opencv`.

Note: To run your code from the terminal, use the following command-line pattern:

g++ `pkg-config --cflags --libs opencv` -o <name for executable file> <name of program.cpp>

Uninstalling OpenCV completely

If you need to remove your OpenCV installation and return to the pre-installation state, first go to the build folder (in the OpenCV folder), and run sudo make uninstall. Then delete the entire OpenCV folder. Next, run the following command:

sudo find / -name "*opencv*" -exec rm -i {} \;

Note: The above command will delete every file with 'opencv' in its name! Edit the /etc/ld.so.conf.d/opencv.conf file and remove /usr/local/lib from it, then run sudo ldconfig. Edit the ~/.bashrc file and remove the lines we added to it (see the install section).

A few examples

Example 1: Display RGB data from Kinect in OpenCV (see Figure 1):

//Program to display rgb data in OpenCV 2.4.2 using Kinect
//jayneil.dalal@gmail.com, 18-6-12
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture capture(CV_CAP_OPENNI);
    for(;;)
    {
        //variable declaration
        Mat bgrImage;
        capture.grab();
        capture.retrieve( bgrImage, CV_CAP_OPENNI_BGR_IMAGE );
        //Display rgb image
        imshow( "rgb", bgrImage);
        if( waitKey( 30 ) >= 0 )
            break;
    }
    return 0;
}

Figure 1: Running example code

Example 2: Display depth data from Kinect in OpenCV (see Figure 2):

//Program to display the depth map in OpenCV 2.4.2 using Kinect
//jayneil.dalal@gmail.com, 18-6-12
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture capture( CV_CAP_OPENNI );
    for(;;)
    {
        //variable declaration
        Mat depthMap;
        Mat show;
        const float scaleFactor = 0.05f;
        capture >> depthMap;
        depthMap.convertTo( show, CV_8UC1, scaleFactor );
        imshow( "depth map", show );
        if( waitKey( 30 ) >= 0 )
            break;
    }
    return 0;
}

Figure 2: Another example

Increasing PandaBoard performance

Here are a few tricks to further increase PandaBoard's performance:
Use a lightweight desktop environment such as XFCE, instead of Unity.
Use an external Solid State Drive (SSD) for faster file access.

References
[1] http://openni.org/docs2/ProgrammerGuide.html
[2] http://docs.opencv.org/doc/user_guide/ug_highgui.html

Acknowledgement
I would like to thank Nipuna Gunasekera from Texas Instruments for his guidance, advice as well as technical suggestions for improving PandaBoard's performance.

By: Jayneil Dalal The author is a FOSS advocate, and loves to explore different open source technologies. His areas of interest include OpenCV, Python, Android, Linux, Human Computer Interaction, BeagleBoard, PandaBoard, Arduino and other open source hardware platforms. In his spare time, he likes to make video tutorials on various open source technologies, which can be found on YouTube. He was a speaker at LinuxCon 2012, Droidcon 2012, and various other international conferences. He is a big-time Arsenal fan. He can be reached at jayneil.dalal@gmail.com.


CodeSport
Sandya Mannarswamy

This month, as is our practice for every year-end column, we will discuss a bunch of programming questions.

Last month, we started discussing software bloat and its impact on the efficiency of our software. Every year, in the CodeSport column for December, I feature a bunch of programming questions that I have discussed in the column over the past 12 months. So let's do the same this year and continue our discussion of software bloat next month.

(1) You are given two sorted lists of size X and Y. How would you determine the kth smallest element in the union of the two lists? What would be the complexity of your algorithm?

(2) You are given an array X of N integers. You are asked to write an algorithm which returns the ‘hot element’ (one that appears more than N/3 times), if one exists. If no hot element exists, your algorithm should be able to determine that. The other constraint is that you can only compare the elements of the array for equality, i.e., you can only use comparisons of the form ‘Is X[i] = X[j]’, which take constant time. What is the complexity of your algorithm?

(3) You are given three containers of capacities 10 litres, 7 litres and 4 litres each. The 7 and 4 litre containers are full of water. The 10 litre container is empty. The only type of action allowed is pouring the contents of one container into another, and you can stop pouring only when either (a) the source container is completely empty; or (b) the receiving container is entirely full. Can you write an algorithm to determine whether there exists a sequence of actions that leaves exactly 2 litres in the 7 litre container?

(4) Given a binary tree T = (V, E) represented in adjacency list format, along with a root node ‘r’, you are asked to design an algorithm such that it can pre-process the tree, and answer a bunch of queries of the form, “Is vertex ‘u’ an ancestor of vertex ‘v’?” in constant time. Note that only the query response needs to be in constant time, whereas you can spend time pre-processing the tree to build sufficient information to answer the queries in constant time. Note that a vertex ‘u’ is said to be an ancestor of vertex ‘v’ if the path from root node ‘r’ to vertex ‘v’ passes through vertex ‘u’.

(5) You are given a directed graph G(V, E) in the form of an adjacency list. Give a linear-time algorithm for determining whether the graph contains an odd-length cycle. If you can show that it does not contain an odd-length cycle, can you infer anything about whether it is a bi-partite graph or not?

(6) You are given a directed acyclic graph G(V, E). A Hamiltonian path is a path in a directed graph that touches each vertex exactly once. Write an algorithm to determine whether the directed graph contains a Hamiltonian path. What is the complexity of your algorithm? Instead of being given a DAG, if you are asked to determine whether a general directed graph contains a Hamiltonian path, what would be your solution?

(7) We all know what a minimum spanning tree is. Now you are asked to find the maximum spanning tree of an undirected graph. The maximum spanning tree is the spanning tree with the largest total weight. What is the complexity of your algorithm? Can this be solved in polynomial time?

(8) Your friend is organising a party for her classmates. There are N people in her class, excluding your friend. She also knows which pairs of her classmates know each other. She wants to invite as many people to the party as possible, subject to the following two constraints: (a) every person invited should know at least 5 other people in the party; and (b) there should be 5 other people in the party whom a person invited to the party does not know. Write an algorithm that takes the list of classmates and the list of pairs who know each other, and outputs the best choice for the party's list of invitees. What is the complexity of your algorithm?

(9) You are given a sequence of ‘n’ numbers a1, a2, …, aN. A subsequence is a subset of these numbers taken in order, of the form ai1, ai2, …, aiK, where 1 ≤ i1 < i2 < … < iK ≤ n. An increasing subsequence is one whose elements are strictly in increasing order, such that ai1 < ai2 < … < aiK. You are asked to write a program to find the longest increasing subsequence, given a sequence of N integers. For example, if you are given 5, 2, 8, 6, 3, 6, 9, 7, then the longest increasing subsequence is 2, 3, 6, 9. What is the complexity of your algorithm?

(10) You are given a string ‘s’ of N characters, which you believe to be the garbled text of a document. However, the garbling has resulted in all punctuation of the original text getting removed. You are also given a dictionary, which can tell whether a particular substring is a word. You can check whether a string is a valid word in the dictionary by calling the function ‘dict’ with a string; it returns true if the given string is a valid word in the dictionary, else it returns false. Write an algorithm to check whether the given sentence can be reconstructed into a valid sequence of words in the dictionary. Is it possible to write a greedy algorithm for solving this problem?

(11) Given a graph G(V, E), where V is the set of vertices and E is the set of edges in the form of an adjacency matrix, a subset S of V is an independent set if there are no edges among the vertices in S. Is it possible to write a polynomial time algorithm to determine the largest independent set in a given directed graph? Now, instead of a directed graph, you are given a tree. Can you write an algorithm for determining the largest independent set in the tree?

(12) A vertex cover of a graph G(V, E) is a subset of vertices V’ ⊆ V such that each edge E is incident on at least one vertex in V’. That is, each edge E is covered by at least one vertex in V’, hence the name ‘vertex cover’. If G is a tree instead of an arbitrary graph, write an algorithm to determine the minimum-size vertex cover of G.

(13) You are given a directed graph G(V, E) with positive edge weights. You are asked to write an algorithm that returns the length of the shortest cycle in the graph. Your algorithm should return 0 if the graph is acyclic. What is the complexity of your algorithm?

(14) You are given a directed acyclic graph G(V, E) and two vertices in the DAG, namely x and y. Write an algorithm to count the number of distinct paths in the DAG between x and y.

(15) You are given a string ‘s’ of length N. You are asked to write an algorithm to output all the permutations of the given string. What is the complexity of your algorithm?
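To give newer readers a concrete starting point, here is one possible sketch (not an official solution from the column) for question (9)—the straightforward O(n^2) dynamic-programming approach, written in Python purely as an illustration:

def longest_increasing_subsequence(a):
    n = len(a)
    if n == 0:
        return []
    best = [1] * n     # best[i]: length of the longest increasing subsequence ending at a[i]
    prev = [-1] * n    # prev[i]: index of the previous element in that subsequence
    for i in range(n):
        for j in range(i):
            if a[j] < a[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
                prev[i] = j
    k = max(range(n), key=lambda i: best[i])
    seq = []
    while k != -1:           # walk back through the prev[] links
        seq.append(a[k])
        k = prev[k]
    return seq[::-1]

print longest_increasing_subsequence([5, 2, 8, 6, 3, 6, 9, 7])   # prints [2, 3, 6, 9]

An O(n log n) variant is also possible, by keeping the smallest tail element seen so far for each subsequence length and extending it with binary search.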

My ‘must-read book’ for this month

This month’s must-read book suggestion comes from our reader Nethravathi. She suggests a classic computer science book: ‘A Discipline of Programming’ by Edsger W Dijkstra. Nethravathi says that “…this is an essential book to understand programming from the mathematical formalism perspective. It builds a framework for building ‘correct’ programs and how to reason about ‘correct’ programs. While this is not an easy book to read, I would suggest it to anyone who is seriously interested in programming formally”. Thank you, Nethravathi, for your suggestion! If you have a favourite programming book/article that you think is a must-read for every programmer, please do send me a note with the book’s name, and a short writeup on why you think it is useful, so I can mention it in the column. This would help many readers who want to improve their coding skills. If you have any favourite programming puzzles that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_ AT_yahoo_DOT_com. Till we meet again next month, happy programming—and here’s wishing you a very happy new year in advance!

By: Sandya Mannarswamy
The author is an expert in systems software and is currently working as a researcher in IBM India Research Labs. Her interests include compilers, multi-core technologies and software development tools. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group ‘Computer Science Interview Training India’ at http://www.linkedin.com/groups?home=&gid=2339182.


Overview

Developers

Queues in Web Applications This article explores message queues—a technique that can be used to improve the user-perceived responsiveness of Web applications.

If you are building a Web application, it is obviously to be used by people—and you want them to be happy when they use it. When I am talking about Web applications, I don’t only mean websites, but also mobile and desktop apps that make use of the Internet to fetch some data from your server, or another Web service. Your users’ happiness depends on a lot of factors. Usability is one of them, of which responsiveness is an integral part. Your users unknowingly want to use responsive apps that don’t make them wait while they complete some task, but allow users to carry on with whatever they want while the app is busy working in the background.

The problem

The problem with an app’s responsiveness arises because, generally, our first attempt at building our application and handling requests has unrealistic aims. We often want to do everything in one request-response cycle (Figure 1). For example, a user confirms her order on an e-commerce website; the server receives this request, and creates relevant entries in the database. It then sends a notification email to the user about the order’s confirmation, and then sends the response back to the client. Now, how important is it to send the email before giving the user feedback? The email step would usually take longer, compared to other things your application does while processing this particular request. Wouldn’t it greatly reduce your response time if you could somehow complete the email-sending task after the user response is sent? This way, your users don’t have to wait while the server is busy doing a less important task. They can continue with whatever they wish to do, while your application takes care of the less important work at the back.



Figure 1: Doing everything at once?

Figure 2: Break the request-response cycle

The general solution

A straightforward solution to the ‘everything in one request-response cycle’ problem is don’t do everything at once; break the request-response cycle (see Figure 2). If you can identify which tasks are less important, you can offload these onto another process, and reduce the work that your Web server threads do. By doing this, you reduce the response-to-user time. Please note, what I mean by less important work is not work that can be skipped or omitted, but which need not be done immediately, as it does not affect the user response. For example, sending an email is an important part of your application—but it does not directly change the response your server sends to the user. Therefore, whether it’s sent before or after sending the user response does not really matter. So, your Web server threads take care of receiving the request and doing essential things first—and everything that does not directly affect the user response should be offloaded to another process, so that those things are worked upon in the background, without making the user wait. Also, the reduction in tasks improves the performance of your Web application; the server takes less time to process the request. Thus, your server will also be able to handle a lot more requests.


Queues

A simple way to offload work to other processes is to use queues. Yes, these are the same old queues you studied in your college days. As defined on Wikipedia, a queue is a particular kind of abstract data type or collection in which the entities in the collection are kept in order, and the principal (or only) operations on the collection are the addition of entities to the rear terminal position, and the removal of entities from the front terminal position. Queues are essentially a FirstIn-First-Out (FIFO) data structure. In our everyday life, queues are used at numerous places to maintain order and efficiency. For example, queues at a ticket counter outside a multiplex are formed to buy tickets on a firstcome-first-serve basis, and to also avoid chaos. As we have seen, you can reduce the load on Web servers by offloading work to other processes. For example, every request that needs to send an email can add a task in the email queue at one end; another process (called the worker process) can pick up tasks one by one from the other end of the email queue and do the required work (in our case, sending an email). If you’re wondering what queues can help you with, the possibilities are endless. A few general cases where they will prove useful are: Data processing that does not directly affect user response. Media processing—e.g., conversion/compression of videos, images, etc; SlideShare or Speakerdeck converting your presentation files into their own format, which the application understands. Updating caches—this is an important example. You use caches to quickly send responses for requests that have been previously processed, and by some calculations, are not different from the previously processed requests. But after some time, the responses in your caches get stale, if the data used gets changed. You generally wait for a cache miss to happen to regenerate the response and update the cache. Queues can help you with updating caches after certain intervals, so that your cache miss rate reduces. As a general rule of thumb, you can use queues for offloading off your server just about anything that does not affect the user response. And if you are doing this, then you are working in the direction of improving the overall user experience.

Your first queue

If you understand what a queue is, you can easily implement one within 15 minutes. All you need is a data store (an RDBMS, a key-value store like Redis, or even the main memory) to hold all the tasks (or queue entries) and a script/program written in any programming language that will handle the logic. I am going to use Python in this article, but you can use any language that you like. So, in theory, your program adds tasks to your queue from one end, and another program takes out tasks from the other end of the queue and runs them.
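As a bare-bones illustration of that idea (this is not from the article; it simply assumes the redis-py package and a queue named 'myqueue'), the producer and consumer sides could be as small as this:

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# producer: push a task onto the rear of a Redis list
r.rpush('myqueue', 'send-email-to: alice@example.com')

# consumer: block until a task appears at the front, then process it
while True:
    _, task = r.blpop('myqueue')
    print task    # do the actual work here instead of printing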

Redis + HotQueue

HotQueue is a small Python library that implements queues with Redis, and is really easy to get started with. Let’s directly dive into the code:

# Producer script (or the script that adds tasks
# in the queue)
from hotqueue import HotQueue

queue = HotQueue("myqueue", host="localhost", port=6379, db=0)
queue.put('some message/work')

# Consumer script (or the script that picks up
# tasks from the queue)
from hotqueue import HotQueue

queue = HotQueue("myqueue", host="localhost", port=6379, db=0)
while True:
    message = queue.get()
    # do something awesome with message

The above example is very simple. There is a producer (a script that adds tasks in the queue) and a consumer or worker (a script that picks up tasks from the other end of the queue, and does the work accordingly). In the producer script, we first connect with the Redis server, and then add the task (or message) using the put method of the HotQueue class. So whenever we run the producer script, a task gets added. The consumer script starts by connecting to the Redis server, and then goes into an infinite loop. It first tries to get a message from the queue by using the get method of the HotQueue class. This is a blocking call, which means that nothing will execute unless you have a task in the queue. As soon as a message is picked up from the queue, execution proceeds, and you may do anything with the picked up task. So in our email example, you may use the put method in, say, your Flask views. The actual task may be a JSON-encoded message with the recipient’s email address, the message and the subject; and our worker script will pick that up and send the email.

What is Flask?
Flask is a lightweight web micro-framework written in Python. It does pretty much what any micro-framework (or framework), like Sinatra (written in Ruby) and Silex (written in PHP), does. In the context of this article, we will be using Flask to write request handlers that will add tasks to queues. In the next code snippet, submit_order() is a request handler for all incoming POST requests at the /order/add URI. So whenever a POST request is made to /order/add, submit_order() is called.

In your Flask app, connect with the Redis server using the HotQueue class whenever you need to queue anything, then JSON-encode the task, and queue it using the put method from the HotQueue class:

import json
from hotqueue import HotQueue

queue = HotQueue("myqueue", host="localhost", port=6379, db=0)

@app.route('/order/add', methods=['POST'])
def submit_order():
    # create entries in your database
    message = dict(recipient=g.user['email'],
                   subject='Order confirmation.',
                   body='<Message in HTML or text>')
    queue.put(json.dumps(message))
    return render_template('thanks_order.html')

In your worker process, pick up a task/message from the queue, decode the JSON-encoded task into a Python native dictionary, and send the email using any Python library that works for you:

# Consumer script (or the script that picks up
# tasks from the queue)
import json
from hotqueue import HotQueue

queue = HotQueue("myqueue", host="localhost", port=6379, db=0)
while True:
    message = queue.get()
    message = json.loads(message)
    '''
    send email to message['recipient'] with message['subject']
    as subject and message['body'] as message body.
    '''

Now all you have to do is fire up your worker processes on your machine, and let requests come in! Using HotQueue is that simple! As I said, you can implement your own queue in about 15 minutes—and if you do so, the end result will be something pretty close to HotQueue.

Redis + PyRes

HotQueue is simple, and it just works. PyRes, on the other hand, is more robust. It is a Python clone of GitHub’s Resque (written in Ruby) and offers more features like displaying the status of queues, failure handling, etc. Since PyRes is a clone of Resque, it honours the naming conventions Resque makes use of and, therefore, it makes it possible to use Resque’s Web monitoring system, god and monit. These monitoring systems can be used not only to check the status of your queues, but also to fire up new workers, or kill stale workers, as and when necessary. Here is a sample code of our email task example:
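The actual code listing for this example did not survive in this copy of the article. Based on the description that follows, it would have looked roughly like the sketch below; the class name, the queue name and the use of ResQ’s enqueue method are taken from that description, while the rest is an assumption based on how PyRes is normally used:

from pyres import ResQ

class Email(object):
    queue = "email"           # the queue in which our messages will be put

    @staticmethod
    def perform(message):
        # decode the JSON-encoded message and send the email here,
        # using any Python library that works for you
        pass

# ...and in the Flask view, instead of queue.put(), enqueue the job:
# ResQ().enqueue(Email, json.dumps(message))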


In the above example, we created a class by the name Email, which is the job class. It has one attribute called queue, which specifies the name of the queue in which our messages will be put. Lastly, it implements a static method called perform. This method gets the task/message as the only argument, and does what we want our worker to do. So, we are not going to write a separate worker script in case of PyRes. Our Flask view is pretty much the same as in our previous example, except it uses the enqueue method of the ResQ class to put tasks in the queue. The first argument is the job class, i.e., Email, and the second argument is a JSON encoded message. ./pyres_worker email

Since we do not have a separate script for our workers, how do we fire them? After installing PyRes, you get a pyres_worker executable script. So to run your workers, all you have to do is pass this script a commaseparated list of queues, which this worker will listen to. In the above example, we are binding our worker to the email queue. So, PyRes is not difficult to implement, and has a lot of advantages. A good reason for you to use it is that GitHub has used Resque in production to do millions of task per day. Read the case study on Resque at https://github.com/ blog/542-introducing-resque to get a better idea.

Message Queues

So far, we have seen some ways of implementing queues using a program or script that handles the queue logic, and makes use of an existing data store to store messages. Let’s take a closer look at Message Queues (MQs). They are dedicated systems for the purpose of queuing to enable asynchronous processing of tasks. In simple terms, MQs are systems that can be your queues, and take care of themselves. So instead of a data store, you have the MQ solution or a Message Queue Broker, and task-management scripts, which add tasks and take them out to pass them to your workers. MQs can help you in all the use-cases we discussed earlier and in every scenario where queues are used. The biggest advantage of using MQs is that they are reliable. And that is something that can be important for your application, depending on how important the tasks you want to queue are. Another great use of MQs (more specifically, with a task-management system called Celery) is that you can use it to replace cron jobs. I have never been able to use cron jobs very smoothly. Using Celery for these will bring all your scheduled and repeated tasks into your code-base, and it will be a lot easier and more intuitive to create them and maintain them. When your server crashes, you will not have to take care of your crontab separately. Just do a git clone and have your Celery workers and your Message Queue Broker running, and you are good to go!

RabbitMQ + Celery

RabbitMQ is an MQ solution or a Message Queue Broker very common in the Python community. Celery, as the project website states, is an asynchronous task queue or job queue based on distributed message passing. In simple words, Celery is a queue or a task manager. The good thing about Celery is that it has lots of exciting features, amazing documentation, awesome community support and it not only works with RabbitMQ, but with a range of data stores like Redis, MongoDB, and RDBMSs like MySQL. To get started, you must have RabbitMQ installed. On Ubuntu/Debian, run apt-get install rabbitmq-server; on Fedora, run yum install rabbitmq-server; after which you can install Celery with pip install celery and you’re done. To define your tasks, Celery provides you with some really simple APIs like the task decorator:

# file name -> tasks.py
# module name -> tasks
from celery import Celery

celery = Celery('tasks', broker='amqp://guest@localhost//')

@celery.task
def email(recipient, subject, message):
    '''
    send email using whichever library you like
    '''

We have just defined our tasks in the above code. The first argument to Celery is the name of the module (in our case, tasks). We also need to add tasks in the queue wherever necessary, and have workers running to do the work.



from tasks import email

@app.route('/order/add', methods=['POST'])
def submit_order():
    # create entries in your database
    email.delay(g.user['email'], 'Order confirmation.', '<some message>')
    return render_template('submit_form.html')

In the above code, all we are doing is importing the task, which is essentially a method (the email method) in the tasks module, and we queue it up using the delay method on this task, passing the required parameters.

celery -A tasks worker --loglevel=info

We use the celery command to start a worker by passing the worker argument and the name of the module that contains our tasks—in this case, tasks. You may also run the worker in the background, using a celery option. That’s it! Celery is that simple to use! This also shows that if you already have isolated independent code in the form of methods, you can easily convert them into tasks using the @task decorator. As I discussed earlier, Celery can also be used to schedule tasks and repeating jobs like cron. To do something like that, this is what you have got to do:

from datetime import timedelta
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'email-every-ten-minutes': {
        'task': 'tasks.email',
        'schedule': timedelta(minutes=10),
        'args': ('admin@mysite.com', 'Site Health', '<some awesome stats>'),
    },
}

Figure 3: In production (components: Web App, Task Manager, Database, Broker, Worker Server(s))

In the above example, we are basically adding a configuration for Celery; the name of the configuration is CELERYBEAT_SCHEDULE. Celery Beat is the scheduler that comes with Celery. The CELERYBEAT_SCHEDULE configuration is just a Python dictionary, with the key as its name, and the value is another Python dictionary, with various fields for defining the scheduled task. The task field requires the name of the task to execute. In our case, it’s tasks.email because the email method is in tasks.py (the tasks module). The schedule field requires a value defining the frequency of execution. Here, we have defined it to be 10 minutes (as a simple example), but this can be as complex as executing a task on every first month of every quarter of the year. To have your scheduled tasks running, just run your workers as we saw earlier, and then run the command celery beat. With this, you will have your repeated tasks running in no time. Celery is amazingly useful. We saw some of its out-of-the-box features, but there is a lot more that it has to offer. Check out http://celeryproject.org/, as it has the most amazing documentation for Celery, and is the one-stop shop for it.
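As an aside, the ‘first month of every quarter’ schedule mentioned above could be expressed with the crontab class that the snippet already imports. The following is only an illustration (it assumes Celery 3.0 or later, where crontab accepts day_of_month and month_of_year):

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'quarterly-health-report': {
        'task': 'tasks.email',
        # midnight on the 1st of January, April, July and October
        'schedule': crontab(minute=0, hour=0, day_of_month='1',
                            month_of_year='1,4,7,10'),
        'args': ('admin@mysite.com', 'Site Health', '<some awesome stats>'),
    },
}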

MQ solutions and protocols

There are a lot of MQ solutions or MQ Brokers out there—RabbitMQ, Amazon Simple Queue Service, Apache ActiveMQ, Gearman, Starling, OpenAMQ, Sun Java Message Queue System… and this list is definitely not exhaustive. To my knowledge, RabbitMQ is a pretty famous solution, backed by VMware, and is used a lot (at least in the Python community). You can definitely choose any one you want. There are also some competing protocols for message queuing:
Advanced Message Queuing Protocol (AMQP): RabbitMQ implements this protocol.
Java Message Service (JMS)
Streaming Text Oriented Messaging Protocol (STOMP)
We are not going to discuss protocols any further, because that requires going very deep into the topic. But you normally won’t need to go too much into the subject—and when you need to, you can do your research.

Criteria for broker selection

Since there is a wide variety of brokers available, it might get difficult for you to zero in on one broker to include in your stack. Also, your selection process is vital, as you are adding a new system in your stack, which means that it will require some amount of work to get it in. Therefore, here are a few parameters that you ought to consider while selecting a broker: Difficulty in recovery: How difficult is it to recover your MQ system if something breaks or crashes?

Continued on Page 75...


For U & Me

Case Study

For Shaadi.com, Ubuntu Scores Over Windows

For all CTOs and IT managers, bringing costs down and deploying easy-to-use technology is the biggest challenge. Shaadi.com has addressed this issue by relying on the open source model. Over a year, more than 50 per cent of the users in the company have migrated to Ubuntu from proprietary software.

Joel Divekar, GM, Information Systems, People Interactive (I) Pvt Ltd, is responsible for managing IT infrastructure at over 50 offices across India. He has to ensure that capex and opex are kept at favourable levels, and his experiments with open source (to enable this) have been successful. The shift was primarily driven by Joel's love for open source technology. He said, “Basically I come from an open source background. I have been using Linux for almost 20 years. My organisation did not have any monetary constraints, so there was no financial need to bring in Ubuntu. The key factors driving this decision were more related to user-management and infrastructure management. We have remote offices across India in places like Kochi, Bhubaneshwar, Guwahati, etc. They were all using proprietary software. User management was a difficult task–it involved ensuring anti-virus software was in place and patches were upgraded, and figuring out why systems were running slow. I felt that Ubuntu and Linux could give me better control over the systems; also, open source software can run smoothly on older devices as well. It enables good memory and application management. And it is also secure. So, this motivated me to use Ubuntu. I started off with a small team.” Yet another reason that prompted Joel to choose Ubuntu was that he had the responsibility of preparing the Annual Operation Plan (AOP) or Budget for each financial year. Joel explains, “To reduce TCO and to improvise ROI, last year, in the month of November, I initiated a project to migrate Windows XP users to Ubuntu. Till date, we have successfully migrated over 65 per cent of the users (around 800) to Ubuntu, and work is going on to migrate the rest.”

… And the migration began

Joel started 'Operation Open Source' in November last year. He recalls, “Around OctoberNovember last year, I got to know that the company was planning to set up a new office in Thane. It was to be a 75-seater. And there were plans to subsequently set up more such offices. I saw that as an opportunity. I was getting a fresh set of machines and fresh users so I could go ahead and implement my plans. This was going to a back office, or a CRM team. The team members had to use the Web browser as their interface. I saw an opportunity there. I trained my guys for the installation. I took two machines and worked on those machines for almost three days.” He evaluated multiple Linux distributions and Ubuntu topped his list for its ease of use and extensive online documentation. He said, “The good part is that you get hundreds of Ubuntu installation guides, but I needed something that could match my requirements. So installing the required applications, fonts and tools needed by the workforce, finding alternate options and alternative software for the existing tools needed a lot of work.” The next step was to list down open source alternative software for all applications and tools used by the backoffice team. Joel started with Ubuntu 10.04 and did a lot of ground work before he went on to the migration. He installed various applications, including Libre Office, VLC, the Flash plug-in, Java JRE, the PDF reader, 7Zip, the GIMP, Microsoft compatible fonts, hplip printer drivers, etc. After installing the applications, he prepared the installation guide and sent it to one of his staff members. He


Case Study For U & Me said, “I asked him to go through it, format the machine and start installing. They started installing but a major issue that came up was that a few things that were obvious to me were not as obvious to them. In the beginning, I may have taken a few things for granted, but when the team actually started installing, I found gaps in my documentation. I improved on it and only when my documentation was thorough, I started distributing it across my team. I asked team members to install Ubuntu on at least three-four machines and report their experiences to me on a regular basis. After that, we encouraged a small team that operates from our Tardeo office in Mumbai. I told them that I would give them different software and ensured support to them in case they faced any issues. Initially I gave them their existing machines as well but asked them to refrain from touching them.” Joel used the findings of this sample set-up to improvise on his documentation for setting up further models. The activity of setting up the new office began in January. Since all the 75 desktops Joel had to set up required a similar configuration, he got one system ready and then used Clonezilla to clone this system onto the remaining 74 systems.

Key benefits to the organisation ● Reduced TCO, which is crucial for a growing organisation ● Easy integration of the Ubuntu system in an existing environment ● Secured environment ● Better systems management ● Reduced downtime ● Vast online community support

Joel replicated the same system in other units of his company and continues to do so even today. He said, “I did not have to put in much effort to convince the management because of the value that open source offers. Not only did it ensure cost benefits, but also technology benefits.” By: Diksha P Gupta The author is assistant editor at EFY. When she is not exercising her journalistic skills, she spends time in travelling, reading fiction and biographies.

Continued from Page 73...

Low level of maintenance: An MQ system should not demand too much maintenance, especially if your team is small, because your actual work is not to maintain queues—it is to develop your app. Ease of deployment: Deployment should be easy, and it should not alter your existing deployment process too much. It should be easy for you to add the MQ in your stack. Durability and persistence: Message queues generally store messages in the main memory, and that means that your messages are vulnerable to getting lost if something goes wrong. According to your application, and the importance of your messages, you might want to consider how this is handled. If your broker goes down, is it able to reconstruct queues with minimum loss when it comes up? When the complete system goes down, since the messages are stored in main memory, what happens when the system comes up? Does it have any level of persistence? What level of reliability does your broker offer? Depending on your application, these questions take on different levels of importance. Community support: This can be an important factor. Community support can be the life line when you get stuck. An active community will always be helpful while dealing with different kinds of problems. Cluster support: Can multiple broker installations on a network be clustered together? Again, this is very use-case specific.

How to put it all together

So, we have taken a good look at how to make use of message queues. In production, a general set-up would look something like Figure 3. The Web server catering to your application’s traffic can be on one server and your database on another server. Your task manager (like Celery) can be on the same server as the Web application. Your broker should be on a different server, so that if something goes wrong with it, your Web server remains safe, and vice-versa. Your workers can be either on the server where the broker is, on a different server, or even on multiple different servers, depending on (for example) the number of tasks or the nature of tasks. Building usable and responsive Web applications is important. We saw how queues can help us with improving responsiveness and looked at a couple of use-cases where they can be helpful. But the possibilities are endless, not only in Web applications, but almost anywhere. We also saw that it’s not all that difficult to set up queues. You can make use of queues with simple libraries such as PyRes, or something much more robust like Celery, in just about no time. I hope you found this tutorial useful, and I hope you are already on your way to building some amazing apps. By: Vaidik Kapoor The author is interested in distributed computing, Web applications, scalable systems and in building anything that solves real problems. He is a FOSS enthusiast and a foodie. He blogs at http://vaidikkapoor.info.



Open Gurus

How To

Control Raspberry Pi’s GPIO Pins with PHP

Ever since the tiny credit-card-sized Raspberry Pi appeared in the market, it has caught the imagination of every electronics and computer hobbyist around the world. The powerful Linux OS, combined with its 26 I/O pins, can do many amazing things out-of-the-box. This article is about controlling the GPIO pins.

Besides GPIO (General Purpose Input/Output) pins, the tiny Raspberry Pi has an I2C (Inter-Integrated Circuit) bus, a UART (Universal Asynchronous Receiver/Transmitter serial) bus and an SPI (Serial Peripheral Interface) bus for connecting several kinds of devices. For the OS installed on the tiny SD card, I prefer Debian Wheezy Linux, for it’s a Debian type and easy to install. To control Raspberry Pi's (or ‘RasPi’) seven GPIO pins, there are simple Python programs. With a few lines in Python, one can control these pins, making them high or low. Combined with other programs, these pins can do many amazing things. However, here we will do some simple programs like switching some LEDs with these GPIO pins.


Many interactive games can be designed with the help of these GPIO pins. The possibilities are endless, depending on your imagination. However, since I love PHP, I always try to do the equivalent of most code in PHP—and when I search the Net, I find there is usually a simple way to do this.

First, the Python way...

Python is already installed on the RasPi; for those who don't know, 'Pi' refers to the Python language. To interact with the GPIO through Python, you have to install the GPIO Python libraries. The latest available is RPi.GPIO-0.3.1a.tar.gz. Download it from http://raspberry-gpio-python.googlecode.com/files/ and extract the content with tar zxvf RPi.GPIO-0.3.1a.tar.gz, then cd to the RPi.GPIO-0.3.1a directory, before issuing the following command:

pi@raspberrypi /home/pi/RPi.GPIO-0.3.1a $ sudo python setup.py install

This will install the Raspberry GPIO library into Python and you are ready. Write this small script in any text editor, and proceed as follows to connect a red LED across pins 6 and 11 of the RasPi. Pin 11 is GPIO No 17 (see Figure 1), while Pin 6 is the ground (0 volt). Insert two small sleeves on pins 6 and 11, and then insert the LED directly on to the pins—positive on Pin 11, negative on Pin 6.

Figure 1: Pi I/O pins

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)
while True:
    GPIO.output(11, True)
    time.sleep(2)
    GPIO.output(11, False)
    time.sleep(2)

Save it and run this program as the super user:

pi@raspberrypi ~ $ sudo python testgpio.py

The LED should be blinking—2 seconds ‘on’ and 2 seconds ‘off’. Terminate the program with Ctrl+C.
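One optional refinement, not part of the original script: trap Ctrl+C so the LED is always left switched off when you stop the program. This sketch only uses the RPi.GPIO calls already shown above:

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)
try:
    while True:
        GPIO.output(11, True)
        time.sleep(2)
        GPIO.output(11, False)
        time.sleep(2)
except KeyboardInterrupt:
    # make sure the LED is off before the script exits
    GPIO.output(11, False)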

Now, the PHP way

First install Apache and PHP on your RasPi, with the following command:

pi@raspberrypi ~ $ sudo apt-get install apache2 php5

Time to grab a cappuccino and relax while the RasPi installs the packages. To check PHP5, try a simple info.php page in /var/www with the following code:

<?php
phpinfo();
?>

Now, in the terminal, you can run php -f /var/www/info.php or, in graphics mode, open the RasPi browser, Midori, and open http://localhost/info.php to see if you get the expected pageful of information, which means PHP is working. So far, so good. To interact with the GPIOs, there is a PHP-GPIO library, which you can download from https://github.com/pickley/PHP-GPIO/blob/master/GPIO.php and copy into the /var/www directory. It's a simple PHP file where all the libraries are included. Next, create a PHP script to control the GPIOs with the following code:

pi@raspberrypi ~ $ sudo nano /var/www/test.php

<?php
require_once('GPIO.php');

$gpio = new GPIO();
$gpio->setup(17, "out");    //on board connect the LED to pin no 11

while(true) {
    $gpio->output(17, 1);
    usleep(2 * 1000000);    // 2000000 micro seconds or 2 seconds the LED will be on
    $gpio->output(17, 0);
    usleep(2 * 1000000);    // 2000000 micro seconds off
}
?>

Time for testing

At the Pi terminal, run sudo php -f /var/www/test.php and if everything goes well, the LED will be blinking with the same frequency as it did for the Python code. If not, go back and check all the connections, and the program.

By: S Bera The author is a mechanical engineer and presently works as additional general manager at NTPC Limited. An avid lover of open source software, he has deployed OSS both at his home and office, making good use of the LAMP (Linux-ApacheMysql-PHP) model. He likes to read and travel, and blogs at http://www.scribd.com/berasomnath. He can be contacted at berasomnath@gmail.com.



Developers

Let's Try

Improve the Performance of Python with Profiling

Here are some key insights for Python users who want to speed up their code.

In most aspects of our life, speed is important—whether it is while driving or executing code. Since the dawn of computer science, we have argued about whether our code should use less memory or take less time to execute. Since the price of memory has been steadily decreasing, developers now focus on making their code run faster and spend precious hours trying to shave those few seconds off their execution time. But how does one go about trying to speed up working code? Well, you could just randomly tweak areas of code, but this method has proved to be messy and very inefficient. Usually, I prefer writing code in Python, but sadly when I get into an argument with someone about the performance of Python, the usual responses I get are “Python is too slow,” or “The global interpreter lock restricts Python’s code performance,” or something along those lines. So my goals for writing this article are as follows: getting faster code using pure Python, and listing some techniques and tools you can use without resorting to writing code in C or assembly. Here is the list of libraries that the required tools depend on, so download and install them, if you haven’t already done so: CPython (the default interpreter), line_profiler and Runsnake.

Let's get started by looking at some code! This is the same code we will try to improve in subsequent portions of the article, and it's the boring old matrix multiplication code (matrix_mult.py):

import random
from time import *
import cProfile

class matrix(object):
    def __init__(self, m, n):
        # Create random matrix
        self.m = m
        self.n = n
        self.mat = [[random.random() for row in range(n)] for col in range(m)]

    def zero(self):
        self.mat = [[0 for row in range(self.n)] for col in range(self.m)]

def mult(matrix1, matrix2):
    # Matrix multiplication
    if len(matrix1[0]) != len(matrix2):
        print 'Matrices must be m*n and n*p to multiply!'
    else:
        new_matrix = matrix(len(matrix1), len(matrix2[0]))
        new_matrix.zero()
        for i in range(len(matrix1)):
            for j in range(len(matrix2[0])):
                for k in range(len(matrix2)):
                    new_matrix.mat[i][j] += matrix1[i][k]*matrix2[k][j]
        return new_matrix

The above code simply defines a class matrix, which generates a matrix of M rows and N columns, and initialises the matrix with random values. The function zero initialises the matrix to zero. The function mult takes two matrix objects, multiplies them and returns a matrix object with the result. The function mult takes O(n^3) time to multiply the matrices.

Profiling

Profiling is the process by which you analyse how a program is using system resources, and the time taken to execute portions of your code. By analysing your code, you can target potential ‘hot spots’, which are portions of your code that tend to run slowly. By optimising these areas of code, you will end up with a significantly faster execution time. In this article, let’s focus on profiling your code with the following: cProfile, Runsnake and line_profiler.

cProfile

cProfile is the standard way to profile your Python code, by importing the cProfile module. To profile the above code using cProfile, add the following code:

import cProfile

def profile_mult(matrix1, matrix2):
    cProfile.run('mult(' + matrix1 + '.mat,' + matrix2 + '.mat)')

if __name__ == '__main__':
    x = matrix(500, 500)
    y = matrix(500, 500)
    profile_mult('x', 'y')

Figure 1: cProfile in the CPython interpreter

Figure 1 shows how you can use cProfile from the CPython interpreter. Note that the variables x and y should be passed as strings to the function profile_mult, because you need to pass a string that represents the statement to be profiled, to the cProfile.run() function. x and y are instances of the matrix class, each initialised with 500 rows and 500 columns. All values in the matrices are random values. Take a look at the matrix multiplication loop, mult(matrix1, matrix2). It’s iterating over i, j, and k—each of which is a column or row of a 500×500 matrix. 500^3 = 125 million cycles through Python for loops. As you can see, cProfile gives you a lot of numbers, which represent the following: 1. ncalls: represents the number of calls made to the corresponding script or function. 2. tottime: represents the total time spent executing the script or function. 3. percall: represents the time taken per call to the function or script. 4. cumtime: is the time spent on the function, including calls to other functions. 5. percall: this is cumtime divided by tottime. It also shows you the total number of function calls, and the amount of time taken to execute the statement.

Optimisations

Well, as we can see from the cProfile output, the functions len, range, and random are called the most number of times—due to which I can make the following optimisations in my mult function:

def mult(matrix1, matrix2):
    # Matrix multiplication
    if len(matrix1[0]) != len(matrix2):
        print 'Matrices must be m*n and n*p to multiply!'
    else:
        l1 = len(matrix1)
        l2 = len(matrix2[0])
        l3 = len(matrix2)
        new_matrix = matrix(l1, l2)
        new_matrix.zero()
        for i in xrange(l1):
            for j in xrange(l2):
                for k in xrange(l3):
                    new_matrix.mat[i][j] += matrix1[i][k]*matrix2[k][j]
        return new_matrix

Figure 2: Optimised

As you can see, rather than calling the len function each time in the loop, I am calling len on matrix1, matrix2[0] and matrix2, and storing the values in variables l1, l2 and l3, before using them in place of the len function calls. Another optimisation I used in my code is replacing all range function calls with the xrange function, a generator, which is more efficient. You can see the results of the above optimisation in Figure 2. As you can clearly see, I have reduced the execution time by roughly 2 seconds. Not much, but it is a minor improvement. Also, notice how the total number of calls has dropped from 752013 to 251014! The problem with cProfile is that for simple code, it gives you a reasonable idea of what is going on within your code, but for more complex problems, the output becomes harder to understand. This is where Runsnake comes in.

Runsnake

Runsnake is another profiling tool, which allows you to visualise the profiled results obtained from the cProfile module. This can be useful to get the bigger picture. Use Runsnake by storing the output of cProfile in a file (e.g., context.profile) using the following command in the terminal:

python -m cProfile -o context.profile matrix_mult.py

Figure 3: Runsnake

Then visualise it using the command runsnake context.profile, which generates a window like what’s shown in Figure 3. The Runsnake module is simple, and tells you how much time is spent in a given region of your code by scaling that region appropriately.
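As a small aside (not part of the original article), if you are working over SSH and cannot open the Runsnake window, the same context.profile file can also be inspected with the standard pstats module:

import pstats

p = pstats.Stats('context.profile')
# show the ten most expensive entries, sorted by cumulative time
p.sort_stats('cumulative').print_stats(10)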

Optimisations

As you can see, we spend a lot of time in the __init__ function as compared to the others, so let's try to get rid of the class matrix and replace it with just a simple function matrix and zero, which effectively does the same thing. The following modifications were made to the code:

#instead of a class matrix we define a function matrix and a function for zero
def matrix(m, n):
    # Create random matrix
    mat = [[random.random() for row in range(n)] for col in range(m)]
    return mat

def zero(m, n):
    mat = [[0 for row in range(n)] for col in range(m)]
    return mat

Figure 4: After Runsnake-revealed optimisation

As you see in Figure 4, the execution time improvement is nearly 6 seconds, and the number of calls went from 251014 to 512! A simple optimisation like this can speed up code immensely; even though you can’t tell from the cProfile output, you can get a clear picture using Runsnake, and act accordingly.

Line_profiler

Now let us look at the line_profiler library. cProfile may be fine and dandy for projects where you need a brief overview of which functions are slowing your code down, but what if you want to know which precise line in the function is responsible for the slowdown? Well, here’s a neat tool, known as line_profiler. When you install the line_profiler library, you get a file called kernprof.py, which is responsible for profiling your code on a line-by-line basis, rather than as an entire function. To use line_profiler, first decorate with @profile:

@profile
def mult(matrix1, matrix2):
    #rest of the code same as the unoptimised code

Note: After finishing profiling with kernprof.py, please remember to remove the @profile, because the Python interpreter will not understand the decorator. Next, run kernprof.py with the -l -v flags to perform line-by-line profiling and to view the output, using the following command:

kernprof.py -l -v matrix_mult.py

e: 09326087210. 1, Vikas Permises, info@technoinfotech.com 11 Bank Street, Fort, Mumbai, India - 400 001. Mobile: 09326087210. info@technoinfotech.com

december 2012 | 81
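One small convenience, which is my own note rather than something from the article: instead of deleting the @profile decorator after every profiling run, you can give the script a no-op fallback, so the same file also runs under the plain interpreter.

# kernprof.py injects a 'profile' builtin; fall back to a pass-through
# decorator when the script is run without it.
try:
    profile
except NameError:
    def profile(func):
        return func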



Figure 5: Line_profiler

The output you get is shown in Figure 5. Since line_profiler lets you see which line is causing the biggest problems, you can now focus on fixing things by directly improving the slow code. Let's analyse the output of the line profiler:
Line #: the line number in the file.
Hits: the number of times that line was executed.
Time: the total amount of time spent executing the line.
Per Hit: the average amount of time spent executing the line.
% Time: the percentage of time spent on that line.
Line Contents: the actual source code.

Optimisations

Notice how the innermost line 'new_matrix[i][j] += matrix1[i][k]*matrix2[k][j]' accounts for 61 per cent of the total time. This means that optimising this one line can immensely boost the execution speed. Hence, I tried the following optimisation:

def mult(matrix1, matrix2):
    # Matrix multiplication
    if len(matrix1[0]) != len(matrix2):
        print 'Matrices must be m*n and n*p to multiply!'
    else:
        new_matrix = matrix(len(matrix1), len(matrix2[0]))
        new_matrix.zero()
        for i in range(len(matrix1)):
            for j in range(len(matrix2[0])):
                c = 0
                for k in range(len(matrix2)):
                    c += matrix1[i][k]*matrix2[k][j]
                new_matrix.mat[i][j] = c
        return new_matrix

Figure 6: Lineopt output

I tried this small change because i and j remain constant throughout the execution of the innermost loop, and most of the time on that line was spent repeatedly accessing the memory location that stores the array element. I therefore accumulate the sum in a local variable c, and assign c to the corresponding array element only after the innermost loop finishes. The array element is now written once rather than k times; those repeated writes were what slowed the code down, as could be seen in the profiler output. As you can see in Figure 6, code execution time has come down to 39 seconds with just this small change!

Arriving at optimum performance involves a lot of trial and error. All the changes you make in your code may or may not work in your favour. Keep experimenting and try to understand what is slowing your code down, and why. Some changes end up slowing down your code, so use profiling tools to get a clear picture of which areas of your code need your attention for tuning. In the next article in this series, I will cover Cython and PyPy as tools to improve Python's performance.
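As a quick way of checking whether a micro-optimisation of this kind really pays off, the standard timeit module can time a call directly. The module and function names below follow this article's matrix_mult.py example, but the matrix sizes and the number of repetitions are placeholders:

import timeit

setup = ("from matrix_mult import matrix, mult; "
         "a = matrix(200, 200); b = matrix(200, 200)")

# Compare this figure before and after a change
print timeit.timeit("mult(a, b)", setup=setup, number=3)

If you are still using the class-based version of matrix, pass the .mat members to mult instead.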

By: Sahil Chelaramani The author is an open source activist, who loves Linux, Python and Android. A third year student at the Goa Engineering College (GEC), he loves to code beyond anything else.


Guest Column Joy of Programming

S.G.Ganesh

Over-reliance on Testing is Considered Harmful

In the quest to create high-quality software, most software companies spend enormous amounts of money and resources on testing. Is this a good strategy?

A few years back, I bought a costly smartphone, attracted by its 'cool' features. Once I started using it, I realised that it was way too buggy. Let me give you two examples. The mobile came with supporting software on a CD. After I installed the software and started it, it crashed, throwing a Visual C++ runtime error: "R6025: Pure virtual function call"! As a C++ expert, I could easily understand the bug: it is illegal to call a pure virtual function from a constructor, and when we make such a call, it will crash the application. But the question was: why hadn't the testing team for that mobile phone software caught it before releasing it?

Another strange problem with my mobile is that it 'freezes' or 'hangs' if I talk for 'too long'. After a few freezes, I figured out that 'too long' means approximately half an hour, which is not really that long! By freeze or hang I mean the screen would go blank, and it wouldn't respond to any key presses. It meant I couldn't restart the mobile (which required pressing keys). The only thing I could do was to remove the battery and put it back! Later, I also figured out a work-around: if I plugged the mobile into its charger, it would immediately spring back to life! This discovery showed me an important aspect of the problem: since it recovered when an event occurred (plugging in the charger, in this case), it is likely to be a software bug!

When I talked to my friends about this problem, most of them said they were not surprised; they said that bugs are very common in mobiles. Reading about the strategies of mobile phone companies and talking to my friends working in those companies was enlightening. Mobile phone companies face cut-throat competition, and only those who first deliver the latest and coolest functionality to the market survive! In this scenario, the main quality strategy is: if the tested features work, the software goes to market! Maybe there is a bit of exaggeration in that statement, but it more or less captures the essence of how these companies focus on functionality, and how they use testing as the primary means to check quality.

The situation is not so different in the software industry, where testing is the main approach for improving software quality. According to Boris Beizer (a well-known researcher

in testing), testing accounts for approximately half of the total software development costs! Depending on the size and type of the software company, the ratio of developers to testers ranges from 5:1 to 1:1! It is safe to say that software companies rely too much on testing. Let us take a holistic view of testing, to understand why the focus on software testing is not the way to create high-quality software.

E W Dijkstra, the well-known Dutch computer scientist, once said, "Testing can show the presence of bugs, not their absence." This statement appears to be a clever play on words, but if you think about it, it is insightful. What he means is that with testing, you can only check if the software has bugs. If you encounter no bugs while testing, it just means that testing did not uncover any bugs; you cannot say there are no bugs in the software. For this reason, by doing testing alone, you cannot say that the software will work fine, because it is simply not feasible to test all the possibilities. This meaning is reflected in the US Food and Drug Administration's guidance statement on validation for medical software: "Software testing by itself is not sufficient to establish confidence that the software is fit for its intended use." Yes, there has been considerable progress in software testing research, and today there are sophisticated testing tools available, but the statement that Dijkstra made, and the position of the FDA on testing, still hold.

To get better clarity on the limitations of testing, let us discuss testing from two different perspectives. Real-world software is complex, and its complexity is rising every year. For example, the size of Windows 3.1 was approximately 4 million LOC in 1990; in 2002, it was 40 million LOC for Windows XP. I don't know the code-base sizes of the latest releases of Linux or Windows, but you can make easy guesses. Many applications I know are more than a million LOC, and their size is increasing every year. The humongous sizes of real-world code-bases make testing extremely difficult.

Let me give you a simple example. A few years back, when I was writing code in Eclipse 3.1, it crashed. I checked its stack trace, which revealed



that the bug was a NullPointerException in the org.eclipse.jface.dialogs.DialogSettings.load method in DialogSettings.java at line 278. The stack trace had 56 method calls in it, starting from the org.eclipse.equinox.launcher.Main.run method in Main.java at line 1236. If I were to fix this bug as a developer, it would be very difficult just to reproduce the problem by creating a test case on my own. Further, it would be close to impossible to ensure that whatever fix I made was correct... and that I had not broken any existing functionality. Obviously, Eclipse is a huge code-base, and there is a large testing infrastructure for Eclipse. Still, any developer who has worked on such large code-bases will understand what I said: the complexity of testing in such code-bases is overwhelming. In other words, the enormous complexity of the software renders testing ineffective, no matter what testing technique, method or process you use.

From another perspective, the bugs found during testing are only the tip of the iceberg: for every bug found in testing, there are 10 or more bugs lurking beneath, yet to be uncovered! In this way, software bugs are akin to icebergs: only about 1/8th to 1/10th of an iceberg is visible above the water, while the rest is submerged beneath it. In other words, these latent bugs lurk in the darkness, only to be exposed later when changes are made to the software, or when a user attempts to use some unusual functionality. For certain kinds of bugs, such as those that are data-flow-based or functionality-based, testing is effective. For other kinds, such as design bugs, testing is ineffective.

To summarise, focusing extensively on testing is not the right strategy to ensure high software quality. The alternative has been well known for many decades. But what is it? There is no silver bullet for creating high-quality software, but there is one solution that comes close. It is the lowly and humble manual review! For a long time, it has been well acknowledged that manual review techniques such as peer reviews, inspections, and code and design walkthroughs are effective in creating high-quality software.

There are many reasons why manual reviews are more effective than testing. First, manual reviews not only find actual bugs, they also find potential and latent bugs. They can also unearth hard-to-find bugs early in the software development life-cycle, such as design bugs (this fact is important because luminaries such as Capers Jones found that 25 to 64 per cent of bugs in software are design bugs). Further, reviews don't just find bugs; they also identify weak spots in the software where improvements are required, and hence help improve software quality. Though manual reviews are used in the software industry, they haven't received the focus and attention they deserve. If manual reviews are so useful, why aren't they used extensively? The main reason is that they are effort-intensive.

And because of this, most development projects, hurrying to meet deadlines, skip manual reviews and depend on testing to catch the bugs. So what can we do about this? Fortunately, static analysers can address this limitation of manual reviews. Starting from the days of simple 'lint'-like pattern matchers, static analysers have come a long way. Today, there are sophisticated static analysers that use modern and advanced techniques such as model checking, abstract interpretation and program querying. These tools are able to find bugs that are hard to detect with testing, or even with manual reviews. They also scale well to large software implementations, and find bugs in millions of lines of code with ease. Though many of these tools are costly, since they spot the hard-to-find bugs, the ROI (return on investment) is very good.

Research shows that a considerable percentage of the bugs that are detectable by manual reviews can be automatically detected using static analysers. By integrating static analysers into the software development life-cycle, bugs can be found earlier. After that, if extensive manual review is performed on the code, reviewers can focus on harder-to-find bugs. With extensive static analysis and manual reviews, testing will find only a few bugs (mostly those that relate to interface and context-related problems, since these are not well addressed by manual reviews and static analysers).

Consider the two examples I mentioned at the beginning of this article. The call to a pure virtual function inside a constructor is a bug that can easily be caught using both static analysis tools and manual reviews. Testing would find it only if we created the right test cases and the bug manifested in the given context; otherwise, it wouldn't. For the problem in the event-handling code, manual reviews can help find it, since it is likely to be a design problem. Static analysers are not (yet) effective in finding such problems. Advanced testing techniques, such as model-based testing (MBT), can help find such bugs, but MBT is not widely used in practice.

When manual reviews are combined with static analysis tools, and are made an integral part of the development life-cycle, they can help create high-quality software. Of course, testing has an important role to play, for example, in acceptance testing. However, the right strategy for creating high-quality software is to focus extensively on manual reviews and static analysis, in addition to testing the software.

By: S G Ganesh The author works for Siemens (Corporate Research & Technologies), Bangalore. You can reach him at sgganesh at gmail dot com.


Let's Try

Rapid Application Development with QCubed


This is the first in a series of articles on rapid Web application development using the QCubed PHP development framework. The series is aimed at new to intermediate-level PHP developers, to guide them on how to develop Web applications faster with QCubed.

It goes without saying that Web application development is a skill that is in great demand, as well as a hot passion among those enchanted by the magic they see on the Web! However, it takes a lot of time and effort. For instance, if you want to build a blog from scratch, let's look at what you require:
1. A database—let's say PostgreSQL or MySQL
2. The LAMP or LAPP stack
3. Tables in the database to hold posts, comments and your login system, so that no one but you can add blog posts.

The first couple of requirements are not hard; all they need are the downloading and installing of packages. The tough part is writing SQL queries, designing data entry forms and so on. You may say it shouldn't take more than six hours—but what if I said that you could do it in just one hour? I know, that might sound a bit far-fetched—but it's reality! And what about adding more functionality to let others make blog posts? What about allowing them to register? And while building websites, we often strive to improve them. If you think the improvements mentioned could take a day: creating new columns, modifying queries and testing them—you could be right. However, you can also get it done in about half the time. Allow me to show you precisely how...

Meet QCubed

QCubed is a PHP development framework like CodeIgniter, CakePHP, Symfony, etc. It facilitates rapid Web development. However, there are a few good things about QCubed that give it an edge over other frameworks; things that make it a true RAD (Rapid Application Development) tool. Each feature is unique, and I will cover them one at a time.

Easy to set up

Installing QCubed is simple. You extract files, copy them to a location under your webroot directory, modify the configuration file, and you are ready to go!

Database support

QCubed has support for MySQL, PostgreSQL, Oracle, Microsoft SQL Server, Informix and DB2. This means you can use the framework no matter which database you want to use. Apart from this, you can easily use more than one database in your application.

Code generation

Code generation (or codegen) is the most distinguishing feature of QCubed. It is this one feature that makes your development process so very fast, by removing a large part of the tedious, mundane tasks you have to do when developing. You should be excited—there are multiple things that codegen brings to the table. One of its major advantages is that it minimises the number of SQL queries you have to write, and all the routine tasks that come with each (parsing the results of SELECT queries, writing queries to insert or update data, and so on). QCubed achieves this by creating individual classes for each table. You can access the rows of the database as objects of that class. To get the columns of a row, you access the properties of the row object. It also generates functions for selecting data from the table, which create SQL queries automatically; you get results as an array of objects, which is a lot easier to play around with.

In real-life databases, handling relationships is a pain when there are too many of them! The codegen process will create functions for handling relationships for each of your tables. It will consider your foreign keys and will allow you to query tables using those methods; they too are generated automatically! So loading the blog post of a comment is just one arrow-operator access away, that's it!

Indexes have a reason to exist—they make queries run faster at the database level. QCubed's code generator acknowledges this, and creates special functions for columns on which indexes are defined. This makes sure that you have all the functions that you would usually need, ready during the Web development process. It also generates other functions, which can be used to query tables using conditions on any column.

Let us look at a simple example—you want to create a blog, and you have two tables 'post' and 'comment' (in MySQL) containing columns defined as below:

CREATE TABLE `comment` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `post_id` int(11) NOT NULL,
  `comment_body` varchar(1000) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `post_id` (`post_id`)
) ENGINE=InnoDB;

CREATE TABLE `post` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `title` varchar(128) NOT NULL,
  `body` varchar(10000) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

ALTER TABLE `comment` ADD CONSTRAINT `comment_ibfk_1` FOREIGN KEY (`post_id`) REFERENCES `post` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

When you run the codegen on the above schema, you get the classes 'Post' and 'Comment' created and available to you. You would be able to use them as shown below, to create a new blog post, for instance:

// Create a new blog post
$objPost = new Post();
// Initialise the database defaults to the object
$objPost->Initialize();
// Now change the values of the columns you want
$objPost->Title = "Trying QCubed";
$objPost->Body = "This is a post created using a PHP script with the help of QCubed.";
// We do not set the 'id': it is an auto-incrementing value
// Time to save
$objPost->Save();

Wasn't that easy? Now, what if you wanted to retrieve the post? This is how it would work, using the Comment class:

$intCommentId = 1;
$objComment = Comment::Load($intCommentId);

But that is not how you would want to use comments on a blog post. You would actually want to get all the comments on a particular blog post, which can be done as shown below:

$intBlogPostId = 20;
// Get all the comments of blog post number 20 in an array
$arrComments = Comment::LoadArrayByPostId($intBlogPostId);
// Iterate over the array to print the comments one by one
foreach ($arrComments as $objComment) {
    echo "<br />" . $objComment->CommentBody;
}

In the above code sample, the function LoadArrayByPostId was created during the code-generation process, as the column 'comment.post_id' was pointing to 'post.id' using a foreign key, and 'comment.post_id' did not enforce uniqueness on the column. I hope you begin to get a feel of how QCubed eases your entire query-writing process.

But that is not the end of codegen's list of advantages; it also relieves you from creating data-entry forms—a great aid to the development process! Some may instantly argue that one can use phpMyAdmin (or phpPgAdmin for PostgreSQL), but that is not the optimum solution in most cases. Allow me to explain:
• phpMyAdmin can alter the DB schema. If you happen to mistakenly alter the schema while 'entering data', you may have to begin from scratch again (depending on what you changed).
• You will not be able to use phpMyAdmin to create forms for your real website.
• phpMyAdmin would help you get things done for only MySQL. QCubed will create forms for all supported databases.
• You cannot control the execution of phpMyAdmin as long as the correct password is supplied. It will run no matter who is accessing it.

When you use codegen, you get form draft pages made for you, so you can use them to enter data into your databases straightaway! You also get ready-made Panels, which do the same job, except that you can include them (with or without modifications) in your Web pages.

Figure 1: A sample auto-created draft data entry form

Form creation, then, is not something you need to spend your time on in QCubed, which allows you to focus on the things that matter—like creating styles for the Web page, and how to add features—rather than writing SQL queries over and over again. You also get QDataGrids created for your tables, so searching your tables for values gets much easier. Just head to http://examples.qcu.be/assets/_core/php/examples/datagrid/filtering.php and explore! There is a whole lot more to codegen; we will highlight more features in the articles to come.
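To give a feel for how these generated pieces combine, here is a short sketch that attaches a new comment to an existing post using only the kinds of calls shown above. Note that the property names Id and PostId are assumptions based on the id and post_id columns; the code generator may name them slightly differently.

// Load an existing post by its primary key (sketch)
$objPost = Post::Load(20);

// Create and save a comment that points at it
$objComment = new Comment();
$objComment->Initialize();
$objComment->PostId = $objPost->Id;   // assumed property names for post_id and id
$objComment->CommentBody = 'Nice post!';
$objComment->Save();

// Re-read all comments for that post
$arrComments = Comment::LoadArrayByPostId($objPost->Id);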

HTML templates

QCubed gives you room to keep your ‘control’ and ‘presentation’ issues separate. You define how your page 'behaves' in one file, and how it looks in another. There's little need to have HTML mixed with PHP code. Will it allow you to work faster? You can change the look (styles and structure) of a page easily by using a different 'template' file for each condition. Developing a mobile site is not going to take a lot of extra work—there will be no need to write the logic twice.
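QCubed's template conventions will be covered properly later in this series; purely as an illustration of the behaviour/presentation split described above, a template file might look like the sketch below. The file name, control names and Render methods here are assumptions for illustration, not a definitive QCubed listing.

<!-- blog_post.tpl.php: presentation only; the behaviour lives in the PHP class -->
<?php $this->RenderBegin(); ?>
    <h1>My blog</h1>
    <?php $this->lblTitle->Render(); ?>
    <?php $this->txtComment->Render(); ?>
    <?php $this->btnSave->Render(); ?>
<?php $this->RenderEnd(); ?>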

jQuery integration

Too well known to require an introduction, jQuery is the most famous JavaScript library out there, and you have tons of plug-ins. jQuery comes built into QCubed. In fact, QCubed even allows you to create jQuery UI objects using PHP. The JavaScript code is automatically created for you when you visit (render) the page. QCubed uses jQuery to get AJAX done so easily, yet it produces jaw-dropping effects.

AJAX and event-driven programming

With QCubed, you retain full control over defining how your page will behave on different occasions. Define an action on any control (individual HTML elements such as buttons, textboxes, divs, spans, text, images, etc) in QCubed (using PHP, of course) for events like a click, mouse hover, keypress, etc—in short, for all the events JavaScript can detect. Then write an event-handler for the action (in PHP, again) and let the rest be lost in oblivion—QCubed will take over. Remember that when you create an event and its handler, you have to specify the type of action it is. There are three at your disposal:
• QJavaScriptAction: You write a function in a JavaScript file, include that in your page, and tell QCubed that you want to execute it when the event occurs. You are not even writing jQuery code. QCubed—at your service!
• QServerAction: The page will reload itself with all the changes that you commanded in the event-handler function. There's no need to worry about POST data, creating a long URL string or handling the GET parameters.
• QAjaxAction: An AJAX call is sent to the server, the event-handler function is executed and the server sends back the changes. Yes, AJAX can be activated just by changing the action type! Forget about writing separate functions to test what works with AJAX and what doesn't. And you do not need to create a new PHP page to handle a complicated query-string; you just need to write functions.

A small sketch of wiring up such an action follows below. The event-driven architecture of QCubed is so flexible that it requires another article for me to share all the details.
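The following sketch shows what such wiring can look like. The form, control and handler names are invented, and the exact signatures should be checked against the QCubed examples:

class SampleForm extends QForm {
    protected $btnSave;
    protected $lblStatus;

    protected function Form_Create() {
        $this->lblStatus = new QLabel($this);
        $this->btnSave = new QButton($this);
        $this->btnSave->Text = 'Save';
        // Run btnSave_Click on the server via an AJAX call when the button is clicked
        $this->btnSave->AddAction(new QClickEvent(), new QAjaxAction('btnSave_Click'));
    }

    protected function btnSave_Click($strFormId, $strControlId, $strParameter) {
        // Only the resulting changes are sent back to the browser
        $this->lblStatus->Text = 'Saved at ' . date('H:i:s');
    }
}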


Caching

If you have memcached running on your server, mention that to QCubed in the configuration file, and enable caching on the database you want. All your 'Load' function calls start utilising memcached to store the PHP objects of your DB table rows. Once again, there is no need to write code to handle caching. You only need to activate the functionality.

Security

Security is an essential concern for any Web application, more so in PHP, since it's easier to remain 'insecure' while using PHP than in other languages—but it is still fast and easy enough to lure a huge number of developers from all over the world. QCubed closes some holes that are left open by PHP. When you are using QCubed's query system, the SQL statements generated are properly escaped before being sent to the server, to prevent SQL injection attacks. QCubed also comes with the HTMLPurifier library, which can be used to filter HTML code coming from text-boxes as per your requirements. The default values prevent XSS attacks well enough, though you can customise the filtering mechanism. It is a little dangerous to keep your includes directory in your DocumentRoot. QCubed allows you to move the includes directory elsewhere by editing just one line in each of two files; you will not need to modify your entire codebase. In addition, QCubed allows you to take control of the execution of sensitive scripts. Let's say you have a script that updates your database or generates reports, which you want accessible to you alone. You can use QCubed to ensure that only IP addresses defined in its configuration file can access those pages. This functionality too is just a function call away.

Extensibility

QCubed is fully object-oriented, and thus very extensible. There are many good plug-ins, and you can always enhance the functionality of your project. The best thing about plug-ins is the 'plug-in installer'. You can actually divide your full application into smaller plug-ins and make installable packages. Welcome to modularity! So that was just the tip of the iceberg. There is a whole lot more you can do with QCubed... but wait, one of the best things about it is its licence. QCubed is available under the MIT licence, which means you can use it in another OSS project, or even a commercial one. No one will come after you with lawyers or a court summons. And before we call it a day, let me tell you that it's free of cost as well. We shall explore more about QCubed in the articles to come.

By: Vaibhav Kaushal
The author is a 25-year-old college dropout from Bengaluru, who also happens to be a core contributor to QCubed. He loves writing for technology magazines and his blog when he is not busy fiddling around with QCubed or developing his website (http://www.c-integration.com/).



Open Gurus

Overview

MTP: The New Way to Exchange Data

This article explores the differences between the older Mass Storage Class (MSC) and the newer Media Transfer Protocol (MTP) methods of sharing embedded-device memory when connected to a PC.

Storage is a key component of any embedded handheld device like a mobile phone or other convergence devices. To transfer the data stored in these devices, technologies such as USB or Bluetooth are used, the former being more predominant. Generally, when USB shares the memory of a mobile device, it is shared as a drive on the host PC. The USB class used is known as Mass Storage Class (MSC), and it allows you to manage the drive completely, like a local hard drive connected to the PC. But in recent Android versions (Ice Cream Sandwich and later), a newer class named MTP (Media Transfer Protocol) is the default for sharing mobile memory over USB with a host PC. So what necessitated this change? Let's take a brief look at the two USB classes to understand why MTP is preferred over MSC. Before going into the advantages of MTP, let us first explore how these two protocols work, with the help of Linux.

An overview of USB MSC

The MSC protocol defines how data exchange can take place between a USB host and a USB-compliant device. Whenever you use the ubiquitous pen drive for all your data-transfer activities, MSC is the underlying protocol. USB devices using the MSC class are equivalent to an external hard drive connected to a host. These devices can be used just like in-PC drives, to drag and drop files. Thus, the USB host can interact with different flash/hard drives without having any knowledge about the underlying storage system.


Linux USB MSC architecture

Figure 1 shows the block-level architecture when an MSC device is connected to a standard Linux host. The Linux device's MSC architecture consists of two parts: the storage subsystem and the USB subsystem. In the Linux framework, the Virtual File System (VFS) layer is used to abstract the storage and the USB layers. The gadget driver, being USB controller-specific, collects the USB transfers and identifies which function a transfer belongs to (e.g., an Ethernet packet, an MSC packet, etc). If it is an MSC packet, the gadget driver sends the packet to the mass storage driver. Inside the class driver, the SCSI commands are decoded, and the appropriate storage operations are performed using the VFS layer. SCSI (Small Computer System Interface) is a standard for the transfer of data between two devices. All these blocks are implemented in kernel space. On the host side, the USB core drivers detect the USB device and expose it as a block device. The data is transferred to the block layer as SCSI commands, with the help of the storage driver.

An overview of the Media Transfer Protocol (MTP)

MTP was introduced by Microsoft to enable data exchange between a USB host (initiator) and devices (responders) with 'intelligent' storage capabilities (those with some sort of user interactivity). These include smartphones, tablets and portable audio players.


Figure 1: Block-level architecture—MSC device to Linux host

Figure 2: Block-level architecture—the MTP device to host

'Media' in Media Transfer Protocol does not just mean audio/video; it encompasses all binary data, including text files. Devices supporting MTP come up as a 'portable device', and not as a drive as in USB MSC. All the content (songs, images, videos) on the device is represented by objects, and object handles are used to reference a logical object on the device. Once the device is connected to the USB host, enumeration takes place, and all associated metadata, such as the file creation time, the time of modification, size, folder details, etc, are passed to the host PC. After this, data exchange can begin.

Android implementation of MTP

In Linux, sysfs is an interface used for communication between user space and kernel space. Figure 2 illustrates the block-level architecture when an MTP device is connected to a standard host. The MTP implementation on the device side is divided across user space and kernel space.

User space components: The MTP daemon (mtpd) runs in the JVM (Java Virtual Machine) of Android. It loads and calls the USB Device MTP Library via JNI (Java Native Interface). The USB Device MTP Library takes care of decoding and responding to MTP commands.

Kernel space components: The USB Device MTP Function Driver handles the MTP class-specific control requests, like opening or closing the session as per the user's instructions. It also controls the reading/writing of files from the storage media through the Virtual File System.

On the host side, the USB core drivers communicate with the USB device. Applications like Windows Media Player on Windows use the libmtp library to communicate with the device. Based on the requests from the applications, mtpd on the device responds using the USB drivers.
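On a Linux host, the equivalent role is typically filled by tools built on the open source libmtp library. The sketch below is illustrative rather than part of this article: it lists the objects on the first MTP device found, and the calls follow libmtp's own example programs, so check them against the libmtp version you have.

#include <stdio.h>
#include <libmtp.h>

int main(void)
{
    LIBMTP_Init();

    /* The initiator (host) opens a session with the responder (device) */
    LIBMTP_mtpdevice_t *device = LIBMTP_Get_First_Device();
    if (device == NULL) {
        fprintf(stderr, "No MTP device found\n");
        return 1;
    }

    /* Every file on the device is an object referenced by an object handle */
    LIBMTP_file_t *files = LIBMTP_Get_Filelisting_With_Callback(device, NULL, NULL);
    for (LIBMTP_file_t *f = files; f != NULL; f = f->next) {
        printf("object %u: %s (%llu bytes)\n",
               f->item_id, f->filename, (unsigned long long) f->filesize);
    }

    LIBMTP_Release_Device(device);
    return 0;
}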

The advantages of MTP

From both the architectures, you can see that MSC works on SCSI commands, and the host has unrestricted access to the device. The device cannot modify the data without releasing its connection from the host, if mounted. If the device were allowed to modify the data while the host held the connection, there would be a risk of corrupting the data, or even the file system. That is why, when you mount the SD card of an Android device on a PC, you cannot access the SD card from the Android device until you unmount it from the PC.

MTP has a solution to this problem. It manages files, unlike MSC, which manages storage. The media player on the host reads and writes entire files on the device. With MTP, Windows Media Player has an option to sync media content between the PC and the device. You can even customise the content for the sync options based on ratings, etc. With MSC, this can be achieved only with special applications.

Unlike MSC, MTP enables the monitoring of device-initiated events, and of changes in device properties. MTP does not allow the transfer of unsupported file formats, unlike mass storage, where the user needs to check for compatibility after the file transfer. Lastly, MSC devices are more prone to virus attacks than MTP devices.

USB MSC and MTP represent two efficient data exchange methods. While MSC has been the de-facto standard followed until the recent past, it fails to address the needs posed by modern-day intelligent storage devices. While flash-based devices will continue to use MSC, we will see more and more portable devices switching over to MTP. However, it is always the user's prerogative to choose between the free way or the safe way to manage data, i.e., whether to use MSC or MTP.

By: Sakethram Bommisetti & Megha Dey
Sakethram and Megha work with ST Ericsson, Bengaluru.



Developers

Overview

Hidden Features of GWT

This article gives readers an overview of the hidden features of the Google Web Toolkit (GWT), which is used for developing AJAX-based Web applications in Java. It is aimed at those familiar with the basics of GWT.

Asynchronous JavaScript and XML (AJAX) is not a language or a new technology, but a combination of known technologies such as HTML, XML and JavaScript, combined to serve users in a better and faster way. Being asynchronous is the key: all user actions are handled in parallel, whereas in traditional Web applications, all user actions were handled sequentially. AJAX requires HTML for creating and loading pages, JavaScript for altering page content, XML/JSON for data communication, and server-side business logic. AJAX-based Web applications reduce unnecessary server calls and the reloading of the entire page for small user requests or actions.

The Google Web Toolkit (GWT)

GWT is an open source Java-based framework for developing AJAX-based Web applications. A Java-based framework like Swing, it enables developers to develop rich Web applications without any knowledge of HTML/JavaScript. GWT provides solutions for known drawbacks of AJAX, which are listed below:

• GWT provides a compiler, which converts all code written in Java into JavaScript. Because of this, developing applications with cross-browser compatibility becomes easier.
• It has a way of maintaining a history of accessed application URLs, letting you use the browser's Forward/Back button.
• Debugging of client-side code is possible. Since we are developing in Java, debugging the AJAX application is possible during development, using a native IDE.

The internals of GWT

The simple client-side architecture of a GWT application is shown in Figure 1. The Java source code in the client source package will be compiled to JavaScript by the GWT compiler, so the client side of the application is restricted to using classes from the GWT widgets library, and other modules and classes supported by the GWT JRE emulation library. This library supports part of the classes from java.lang, java.io and java.util.

Figure 1: GWT client architecture

GWT provides two modes of execution during client-side UI development:
• Host/Dev mode: Written Java code is executed directly in Web browsers through embedded DOM elements. Java byte-code interacts directly with the JVM for rendering UI elements, without converting it into JavaScript. This provides an easy way for the developer to debug applications using IDEs.
• Web/Prod mode: The Java-to-JavaScript compiler is the backbone of the GWT framework. The generated JavaScript runs and creates the UI elements, and this JavaScript runs well in all browsers.

The core GWT functionality and UI elements are provided by gwt-user.jar and gwt-dev.jar. The overall view of AJAX applications developed using the GWT framework is shown in Figure 2. UI elements can interact with server-side business logic through any approach, such as the GWT-RPC mechanism, traditional servlets, or by building custom HTTP requests to retrieve information from the server using GWT's HTTP client class in the com.google.gwt.http.client package. If the server-side implementation is written in Java, then GWT-RPC is the best choice for client-server communication. GWT-RPC is a mechanism for passing serialised Java objects to and from a server over standard HTTP. The server-side implementation will not be converted into JavaScript, so it can contain any Java classes or external libraries.

Figure 2: An overview of the GWT application

Hidden features of GWT

The points mentioned above are some of the well-known, important features of GWT. There are many less-known features, which help developers build applications with better performance and quality. These features also improve modularity while developing large Web applications. In this article, I will highlight these not-so-common, yet important features of GWT, and provide some insights into them.

Client-side application unit testing

Testing applications and understanding their internals reveals bugs in the early stages of development and helps developers deliver quality products. GWT provides testing for the client-side part of the application, to simulate the actions after firing certain events. It can also be used to test the performance of asynchronous function calls, to get the server response or execution time. JUnit is the most commonly used open source unit testing framework for Java applications. The GWT framework has extended JUnit and customised it for testing both Web and hosted modes of GWT applications. To implement a test case, a class has to extend the GWTTestCase class, and override and implement the getModuleName() function, which gives the entry point (class name) of the application.

public class GWTExampleTestCase extends GWTTestCase {

    public String getModuleName() {
        return "com.example.gwt.test.client.TestCase";
    }

    public void testCase1() {
        // implement test case
        assert(true);
    }

    public void rpcTestCase2() {
        // implement RPC object creation & function calls
        onSuccess() {
            finishTest()  // Successful execution, if flow reaches this function
                          // before the maximum allowed time expires
        }
        onFailure() {
            assert(false);
        }
        delayTestFinish(maximumAllowedTimeMilliSecs);  // For testing execution time of server calls
    }
}

Multiple test case classes can be executed together by writing a test suite (a class that extends the GWTTestSuite class, and defines all test case classes together). GWT provides a batch script, junitCreator, which creates a skeleton GWT test unit class, along with Ant scripts for testing both Dev and Web modes.
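A suite of the kind described above might look like the following sketch; the class names are invented, and the static suite() idiom is the usual JUnit 3 pattern that GWT's test tooling expects:

import junit.framework.Test;
import junit.framework.TestSuite;
import com.google.gwt.junit.tools.GWTTestSuite;

public class ExampleGWTTestSuite extends GWTTestSuite {
    // Group individual GWTTestCase classes so they can be compiled and run together
    public static Test suite() {
        TestSuite suite = new TestSuite("Client-side tests for the example module");
        suite.addTestSuite(GWTExampleTestCase.class);
        // suite.addTestSuite(AnotherGWTTestCase.class);
        return suite;
    }
}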

Client-side logging

Since we can't use external logging libraries like log4j on the client side, GWT provides a separate module for this, which emulates java.util.logging, so the method of configuring and accessing it is the same as traditional logging in Java applications. The GWT logging module has to be inherited in the project module descriptor file:

// Module descriptor file (*.gwt.xml)
<inherits name="com.google.gwt.logging.Logging"/>

// Property configuration in *.gwt.xml
<set-property name="gwt.logging.enabled" value="TRUE"/>
<set-property name="gwt.logging.consoleHandler" value="ENABLED"/>

// Handler for the root logger
Logger rootLogger = Logger.getLogger("");
// Handler for a child of the root logger
Logger childLogger = Logger.getLogger("childLogger");

Enabling or disabling logging, changing log levels and the logging handler configuration can be done programmatically in Java code, or by setting properties in the project module file (*.gwt.xml). All the log messages can be sent to the Development Shell of GWT, an IDE console, or the FireBug console, by configuring the corresponding logger handler.

JavaScript Native Interface (JSNI)

Eventually, we might need to integrate handwritten JavaScript functions or third-party JavaScript libraries/widgets. GWT allows developers to integrate these JavaScript libraries into the client side of applications through the JSNI interface. During GWT compilation, Java source code is converted into JavaScript, and that is when these JSNI methods are used directly in the final output. The concept of JSNI is inspired by the Java Native Interface (JNI). The key features of GWT JSNI are its ability to implement a Java method as a JavaScript snippet, and the ability to call JavaScript from Java, and vice-versa. JSNI methods have to be declared native (the examples here use static methods), and all JavaScript code has to be embedded in comment blocks inside the method, as follows:

// JSNI function
public static native void alert(String msg) /*-{
    $wnd.alert(msg);
}-*/;

// Calling the JSNI function from a Java function
public void onClick() {
    alert("calling JSNI function");
}

// Sample invocation signature
[instance-name]@qualified-class-name::method-name(input-param-signature)(input-param-values)

In order to invoke Java code from a JSNI function, it has to use a detailed method signature approach to differentiate between overloaded functions.

Internationalisation

Due to the globalisation of IT applications, it has become mandatory to develop application support for specific countries and regions. The GWT framework provides a separate module enabling you to develop one Web application that can display text in different languages. The GWT module for internationalisation is named i18n, and has to be inherited in the module descriptor file. The default locale used will be en_US. Static string i18n is implemented by the Constants/Messages interfaces.

// Module descriptor file (*.gwt.xml)
<inherits name="com.google.gwt.i18n.I18N"/>

// Define the I18NSamplesConstants.properties and I18NSamplesConstants_el_fr.properties files
// Constants
welcome=welcome
welcome=accueil
// Messages
welcome=welcome {0}
welcome=accueil {0}

// Create an interface with the same name as the properties file
public interface I18NSamplesConstants extends Constants {
    String welcome();
}

// Dynamically create an object for the constants interface
I18NSamplesConstants myConstants = (I18NSamplesConstants) GWT.create(I18NSamplesConstants.class);
gwtUIItem.setTitle(myConstants.welcome());

The Constants interface can be used to display a set of constant values of different types for elements. The Messages interface can be used to format and display values, along with parameters to accept input dynamically. We can define the type of locale to be loaded either in the module descriptor file, or dynamically in the application URL. Dynamic string i18n is a little slower than the static approach, but it can be used when integrating GWT applications with already existing server-side localisation systems. The module's host page can be configured to hold the localised strings, which are then displayed appropriately.

HTML5 support in GWT

GWT supports the HTML5 canvas tag, using which you can build pages with dynamic image/audio/video support. Full support for HTML5 in classes of GWT is still under development, though a lot of open source projects have implemented this.

Local or client-side storage

HTML5 defines a standardised way to store large amounts of information at the client side. Earlier, you could store only 4 KB of information, with a maximum of 20 KB per domain, using cookies. Now, using HTML5 local storage, you can store a maximum of 5 MB of information, persistently cached at the client side. This reduces the number of hits to the server for small queries. GWT supports local storage through the Storage module.

<inherits name='com.google.gwt.storage.client.Storage'/>

Storage storage = Storage.getLocalStorageIfSupported();
if (storage != null) {
    storage.setItem("html5", "local storage");
    String value = storage.getItem("html5");
}

GWT provides a series of classes and events, such as LocalStorage, SessionStorage, StorageMap and so on, for easy access to client storage.

HTML5 visualisation via Canvas

GWT provides classes supporting the addition of 2D shapes to the application. Initialise a GWT canvas, calculate coordinates for a shape, and draw on the canvas. The assigned coordinates are scaled to their equivalent DOM size. Small coordinate shapes will give higher performance, but lag in quality. Similarly, large coordinate shapes give higher quality, but take time to initialise. GWT provides a series of classes for adding gradients, patterns and text styles, defined in the com.google.gwt.canvas.dom.client package. All the GWT widgets or Canvas elements added in the applications, support drag-and-drop functionality.
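As a small illustration of the Canvas support described above, the following sketch draws a rectangle and a line of text; the sizes and colours are arbitrary:

import com.google.gwt.canvas.client.Canvas;
import com.google.gwt.canvas.dom.client.Context2d;
import com.google.gwt.user.client.ui.RootPanel;

Canvas canvas = Canvas.createIfSupported();
if (canvas != null) {
    // Coordinate space (drawing resolution) versus on-screen CSS size
    canvas.setCoordinateSpaceWidth(200);
    canvas.setCoordinateSpaceHeight(100);
    canvas.setPixelSize(200, 100);

    Context2d ctx = canvas.getContext2d();
    ctx.setFillStyle("#3366cc");
    ctx.fillRect(10, 10, 120, 60);          // a simple 2D shape
    ctx.setFillStyle("black");
    ctx.fillText("Hello, canvas", 15, 90);  // text drawn on the same canvas

    RootPanel.get().add(canvas);
}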

Embedding audio/video in the application

GWT provides classes for embedding static audio/video files, or playing them while loading the application. The Audio class in the com.google.gwt.media.client package provides support for adding audio files; video functionality is provided by the com.google.gwt.media.client.Video class. The file formats supported by these classes vary depending on the browser.

Audio audio = Audio.createIfSupported();
if (audio == null) {
    return;
}
audio.addSource("path/file.mp3", AudioElement.TYPE_MP3);
audio.setControls(true);

// for preloading
audio.setSrc("path/file.mp3");
audio.setPreload(MediaElement.PRELOAD_AUTO);

Performance and compile-time improvement

GWT not only defines the way to develop an application, but also provides guidance on improving the performance of an application. The deployed application's initial loading or start-up time increases proportionally with the size of the app. Similarly, for the development of large-scale applications, GWT compilation with the default setting will take a lot of time and processor cycles for the compilation of Java code to JavaScript. A few approaches to reduce this time are discussed below.


Code splitting

The size of an AJAX application's final JavaScript output increases with the complexity of the application. This, in turn, increases the application's initial load time, since the application tries to download the entire JavaScript content during initial loading. The code splitting methodology provides a way to define the part of the application to be downloaded initially; other parts of the application are downloaded on demand. For instance, during the initial loading of an application, only the JavaScript required for the login page is needed. Only if the user successfully logs into the application will the next page's JavaScript content be downloaded. Until then, the JavaScript content of the home page or other internal modules is not required or used.

// for manual loading by configuring in the *.gwt.xml file
<extend-configuration-property name="compiler.splitpoint.initial.sequence" value="com.name.of.the.class"/>

// for programmatic configuration
GWT.runAsync(new RunAsyncCallback() {
    @Override
    public void onSuccess() {
        // Load the corresponding module or segment
    }

    @Override
    public void onFailure(Throwable reason) {
        // Can occur due to network failure or server unavailability.
    }
});

Using the code-splitting approach, we can define the modules that need to be downloaded initially by default, and all other modules can be configured to download on demand, during the first visit. This reduces start-up time, and optimises the performance of the application. To split the code, insert a GWT.runAsync() block into the required page/view initialisation of the application. The runAsync call takes a RunAsyncCallback implementation, which tells the GWT compiler to create a separate module for the part covered in the block. Hence, the initial download will not include the content for the module defined in the async block. But if the initial load requires some part defined in another async block, it will be downloaded automatically during the initial load; hence, care has to be taken while defining split points not to mix different modules. We can even define the load sequence manually in the module descriptor file.

Browser-specific build and compile

By default, the GWT compiler produces JavaScript output for six specific browsers: IE6, IE8, IE9, Opera, Safari and Firefox. Different combinations of output are generated for these browsers, with respect to the default locales. If the application size increases, the compile time will be high. The list of supported browsers is given in UserAgent.gwt.xml of com.google.gwt.user in gwt-user.jar.

<set-property name="user.agent" value="ie6" />  // only IE6 is supported

We can define the required browser support in the module descriptor file, as above. In that case, only output compatible with the defined browser will be generated and supported.

Client bundles

All the resources used in the application (image/CSS/text files) are handled in a better way through the ClientBundle interface. In order to use client bundle functionality, you have to inherit the corresponding module in the application:

<inherits name="com.google.gwt.resources.Resources"/>

public interface SampleResources extends ClientBundle {

    public static final SampleResources RES_INSTANCE = GWT.create(SampleResources.class);

    @Source("application.css")
    public CssResource cssResource();

    @Source("logConfig.xml")
    public TextResource initialConfigurationFile();

    @Source("helpManual.pdf")
    public DataResource ownersManualDocument();
}

// Inject the contents of the CSS file
SampleResources.RES_INSTANCE.cssResource().ensureInjected();
// Display the manual file in an iframe
new IFrame(SampleResources.RES_INSTANCE.ownersManualDocument().getURL());

We have to define an interface by extending the ClientBundle interface, and declare all the resources used in the application, categorising them as CssResource, TextResource, DataResource, ImageResource and so on. By defining the resources this way, the entire application's resources will be downloaded as a bundle, not as individual files. This increases the performance of the application by reducing the number of server hits for downloading resources.


By: Karthikeyan R The author is a Java developer and technophile, and loves to keep in touch with emerging technologies.


How To

Cloud Corner

Network and Cloud Monitoring with Nagios

This article focuses on the open source network- and infrastructure-monitoring tool, Nagios.

Computers on a network share resources and information with each other. Network monitoring involves the supervision of network operations using various products to ensure performance and the availability of network services by reporting failures. Network management refers to the actions, procedures and tools that are related to keeping network services up and running, tracking resources, performing upgrades, and configuring the resources of networked systems. Network management is a superset of network monitoring. It provides monitoring and reporting for network services as well as for host resources such as logs, storage and processor usage. It monitors IT infrastructure, detects problems before they occur, and alerts humans. The network- and infrastructure-monitoring tool, Nagios, is used to monitor publicly available services and more. Nagios is very useful for SMBs. Effective usage of Nagios ensures issue tracking in a timely manner. SLAs, which are critical in today's IT world, can be met effectively, ensuring that outages have minimal effect on the organisation's IT infrastructure.

Nagios Core

Nagios Core is the open source base for Nagios XI—a commercial monitoring solution. To run Nagios Core, you need a machine running Linux or a UNIX variant, and network accessibility, as basic pre-requisites. Nagios Core allows you to monitor your entire IT infrastructure to ensure that resources, applications, services and business processes function appropriately. In the event of a failure, it can alert technical staff of the problem in a timely manner, which allows them to commence remedial action before outages affect business processes, end users or customers. Nagios Core is licensed under the GNU General Public License. There are multiple online resources related to it. The Nagios support forums, support portal and community mailing lists are very useful.
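Once the installation walked through below is complete, monitoring a machine comes down to object definitions in Nagios' configuration files. The following minimal host/service pair is illustrative and not taken from this article; the host name and address are placeholders, and the templates referred to come from the sample configuration shipped with Nagios Core:

define host {
    use        linux-server           ; inherit sane defaults from the sample template
    host_name  webserver01
    alias      Example web server
    address    192.0.2.10
}

define service {
    use                  generic-service
    host_name            webserver01
    service_description  HTTP
    check_command        check_http    ; provided by the Nagios plugins installed below
}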

Features

Nagios' deep insight into infrastructure, and its detailed reports on the status of resources, services and other components—availability reports, historical reports and third-party add-ons—help users gain valuable understanding.

Figure 1: Features of Nagios (network service monitoring: HTTP, SSH, SMTP, SNMP; host monitoring: processor, disk, logs; plug-in design: C++, PHP, Python, Ruby; remote monitoring: SSL-encrypted tunnels, SSH)

Use cases


Nagios can be effectively used for network monitoring at ISPs, in government, health care, manufacturing, banking and finance, telecommunications, etc. As an example of getting Nagios to monitor a system, let’s install it in an OpenSUSE Virtual Machine. We can use VMware Workstation, Player or any other product to create the VM. Install OpenSUSE 12.2 in it. When that's done, install some pre-requisite packages using the Zypper command-line package manager, with the following commands: zypper install kernel-source make gcc gcc-c++ zypper install rrdtool php5 php5-gd php5-zlib apache2-mod_php5 perl-SNMP net-snmp-32bit nmap ncpfs libwavpack1 apache2

Download Nagios Core into the VM, from www.nagios.org/download/. Log in as root, create a user and groups for Nagios, and change the user password:

# /usr/sbin/useradd -m nagios
# passwd nagios
Changing password for nagios.
New Password:
Password changed.

Figure 2: Nagios Checker settings

# groupadd nagcmd
# groupadd nagios

Add the nagios user to the nagcmd and nagios groups, as follows: # usermod -G nagcmd nagios

Figure 3: Status script URL

Extract the Nagios Core tar file; cd to the extracted folder, and run the following commands:

# ./configure --with-command-group=nagcmd
# make all
##If successful, ends with an "Enjoy!" message.

Install init scripts and sample config files in /usr/local/nagios/etc:

# sudo make install-init
/usr/bin/install -c -m 755 -d -o root -g root /etc/rc.d
/usr/bin/install -c -m 755 -o root -g root daemon-init /etc/rc.d/nagios
*** Init script installed ***
# make install-config

Figure 4: Nagios Checker status


Install and configure permissions on the directory:


# make install-commandmode
/usr/bin/install -c -m 775 -o nagios -g nagcmd -d /usr/local/nagios/var/rw
chmod g+s /usr/local/nagios/var/rw
*** External command directory configured ***

Configure Nagios for Apache:

# sudo make install-webconf
/usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/conf.d/nagios.conf
*** Nagios/Apache conf file installed ***

Set a Web admin password for Nagios:

# htpasswd2 -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
New password: (admin)
Re-type new password:
Adding password for user nagiosadmin

Next, restart Apache and check its status with /etc/init.d/apache2 restart and service apache2 status, respectively. Now download the Nagios plugins from http://www.nagios.org/download/plugins/ and extract the file; then issue the following commands:

# cd nagios-plugins-1.4.15
# ./configure --with-user=nagios --with-group=nagcmd
# make

Wait approximately 10-15 minutes for the build to complete, then finish with a sudo make install command. Add Nagios to the list of system services to ensure it is activated automatically on system start:

chkconfig --add nagios
chkconfig nagios on

Verify the sample Nagios configuration files as follows:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

If there are no errors, start Nagios with service nagios start, then open a browser, access http://localhost/nagios and log in as nagiosadmin with the password you set.
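The sample configuration only monitors the local host. As a rough sketch of how an additional machine is brought under monitoring (the host name, alias and IP address below are placeholders, and the object directory and the linux-server/generic-service templates assume the stock sample configuration created by make install-config), you can append host and service definitions to a new object file, reference it from nagios.cfg, verify and reload:

cat >> /usr/local/nagios/etc/objects/dbserver.cfg <<'EOF'
define host{
        use         linux-server       ; inherit defaults from the sample template
        host_name   dbserver
        alias       Database Server
        address     192.168.1.50
        }

define service{
        use                  generic-service
        host_name            dbserver
        service_description  PING
        check_command        check_ping!100.0,20%!500.0,60%
        }
EOF

echo 'cfg_file=/usr/local/nagios/etc/objects/dbserver.cfg' >> /usr/local/nagios/etc/nagios.cfg
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
service nagios reload

The arguments to check_ping are the warning and critical thresholds (round-trip time and packet loss) passed to the standard plugin.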

Nagios Checker (Firefox add-on)

The Nagios Checker Mozilla Firefox add-on displays the status of resources from Nagios in the browser's status bar; the Nagios Web interface is parsed to make the information available to the indicator. The add-on has been verified with Nagios 3, 2.5+, 2.0b4, 1.5, 1.3 and 1.2. You can add it from https://addons.mozilla.org/en-US/firefox/addon/nagios-checker/. After restarting Firefox, right-click the 'N' sign in the status bar and click Settings; provide the Nagios Web URL, the username and password (Figure 2). Provide the Status Script URL, as seen in Figure 3. Click OK, and within seconds, you will see the Nagios status on the status bar. Click on it and a detailed list will pop up (Figure 4).

Cloud monitoring with Nagios

As per NIST’s definition, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources such as networks, servers and storage, which can be rapidly provisioned and de-provisioned with minimal management effort. Virtualisation is the core of cloud computing, since it provides cost savings by reducing up-front investment in infrastructure. Cloud monitoring refers to monitoring resources provided by cloud service providers. In the virtual world, this is not just necessary but crucial for high availability and fault tolerance, and to avoid single points of failure, since resources are shared. Monitoring of virtual systems is dynamic, since the resources are virtual and can be treated like files. Resource monitoring for scaling up and scaling down is also critical considering the huge capacity offered by cloud service providers.

Public cloud monitoring

Cloud monitoring refers to the monitoring of the performance of physical or virtual servers, storage, networks and the applications running on them. Cloud monitoring tools are used to collect data and illustrate patterns that might otherwise be difficult to spot in dynamic infrastructure. Nagios provides monitoring for cloud resources—compute, storage and network services. Nagios is proficient at monitoring a variety of operating systems to detect cloud environment issues, network outages and application availability.

Amazon Web Services (AWS) monitoring

Nagios solutions that provide cloud monitoring capabilities (for Amazon EC2, Amazon Simple Storage Services, etc) are Nagios XI and Nagios Core.
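For a public cloud instance that exposes ordinary network services, the simplest starting point is the standard plugins installed earlier. The commands below are an illustrative sketch only (the EC2 host name is a placeholder); once they behave as expected, they can be wrapped into host and service definitions like those shown earlier:

/usr/local/nagios/libexec/check_http -H ec2-203-0-113-10.compute-1.amazonaws.com -p 80
/usr/local/nagios/libexec/check_ping -H ec2-203-0-113-10.compute-1.amazonaws.com -w 100.0,20% -c 500.0,60%

The first command probes a cloud-hosted Web server by hand with the HTTP plugin; the second checks reachability and latency, warning at 100 ms/20 per cent loss and going critical at 500 ms/60 per cent loss.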


References
[1] Nagios Core documentation: http://nagios.sourceforge.net/docs/nagioscore/3/en/toc.html
[2] http://compnetworking.about.com/od/itinformationtechnology/f/net_monitoring.htm
[3] http://library.nagios.com/library/products/nagiosxi/manuals
[4] https://addons.mozilla.org/en-US/firefox/addon/nagios-checker/
[5] http://searchenterpriselinux.techtarget.com/definition/Nagios

By: Mitesh Soni The author is a technical lead at iGATE. He is in the cloud services practice (Research and Innovation) and loves to write about new technologies.



For U & Me

Recruitment Trends

"The demand for open source professionals has definitely increased in outsourced projects over the past few years."

India has enough open source talent but, unfortunately, there are not many takers in the market. Enterprises still think twice before getting on to a Linux-based system, which is why India's open source talent is not getting as many opportunities, opines Piyush Somani, MD/CEO, ESDS Software Solution Pvt Ltd. Diksha P Gupta from Open Source For You had an exclusive conversation with Somani about the difficulties those qualified in open source technologies face in the Indian market, and how his company is interested in promoting both the technology and the talent. Excerpts:

Q: You are present in other parts of the world apart from India. How much are you banking on the Indian market when it comes to selling technology products based on open source technology?

Our Indian business model is mainly focused on hosting and other services. These are related to hosting software-as-a-service, platforms-as-a-service, infrastructure-as-a-service, etc. We do the maximum number of hostings based on open source technology. I feel India is still not ready to accept open source technology the way the West has. While in the UK and US, most of our servers are based on Linux, in India, the number of Windows and Linux servers is almost the same.

Q: Do you try to promote open source technology when you offer solutions to clients who are not particularly concerned whether the technology is open source or proprietary?

Yes, we offer our clients open source-based solutions first. Only in the cases where clients want Windows-based solutions do we implement them. Else, we prefer offering open source solutions—only because of the kind of value they come with.

Q: Do FOSS platforms or tools add value to the project's development?

Of course, they do. There are many add-ons that one can build with open source tools at one's disposal. In Windows, it becomes very difficult to find a solution. I have been a systems administrator in the past, and I always had trouble understanding Windows and finding solutions to problems. There are so many bugs, and one always has virus-related issues with the Windows platform. You have limitations on Windows and you cannot work out solutions for that platform easily. But with open source platforms, one has thousands of solutions at one's disposal; in the open source domain, one just needs to do a Google search to find a solution to a problem. Open source technology allows users to be flexible and enjoy the luxury of inventing. This, in a way, helps in promoting open source technology. Because of contributions from the community, open source technology is more robust and popular. Developers from across the globe contribute to a project, which strengthens it and helps in bringing out some really good code. The only pre-requisite is that those in the community should have proper communication amongst themselves. The open source community abroad meets on various forums, including a lot of online forums and blogs, on which they discuss various technologies, but this is not a common scenario in India. Open source professionals in India are not linked to each other, so a proper communication channel needs to be established. Such a channel helps in involving more and more people and in building a stronger community.

Q: Do you feel India is rich in open source skills?

India has enough talent when it comes to open source skills but companies here are not ready to deploy open source technology solutions. In fact, the number of Windows systems administrators we get is less than the Linux sysadmins. So the talent pool is definitely there but the problem is that the software solutions providers or the companies into core development are more inclined towards development on proprietary platforms. Unless they do their developments on platforms like Perl, PHP or Java, they cannot move onto the Linux platform. Companies hosting their solutions on servers are not ready to accept Linux-based hosting platforms. They prefer proprietary hosting platforms. So the talent is not getting the right kind of boost, as the acceptance is not there from the corporate side.

Q: What kind of hiring do you do when it comes to open source professionals?

We work on a lot of open source technologies, so the maximum number of people we have in our support staff work on open source technologies. Around 80 per cent of our employees work on Linux and open source platforms.

Q: Do you think the availability of adequate open source talent is courtesy of the changing education pattern in India?

No, unfortunately, that is not the case. There is not much inclusion of open source technologies in the syllabus, except for a few colleges. A few colleges across India have included some part of Linux in their curriculum but that is not enough to prepare an entire work force. The credit goes to the students who are self-driven and they manage to learn a lot. The Internet certainly has a major role to play in this learning. I am glad the awareness about open source has increased enough to interest youngsters, which is why they are thinking of venturing into this domain. Their interest, which is supported by some courses available like Red Hat certifications, helps in promoting open source technology amongst the youth.

Q: Do you see an increase in the demand for open source professionals with this increasing awareness?

Globally, I see a major increase in the demand for open source professionals but India is still to catch up. The demand for open source professionals has definitely increased in the outsourced projects over the past few years, but when it comes to domestic projects, I do not see much of an increase.

Q: What are the key open source tools that you use in your products?

We have many products and add-ons that we have developed on open source technologies. Our data centre management system is based on open source technology. It takes care of monitoring, managing and troubleshooting all the parameters for a data centre. So, right from rebooting or installing a server to assigning reverse DNS, forward DNS, monitoring, managing and troubleshooting, everything can be done online. We have also developed a billing system for hosting. This has been developed in PHP to work on the Linux platform only. We have developed many such models which complement open source technology, and some free models as well, which are available online. Our eNlight platform is being positioned as a 'friend' of Linux. It comes with auto-scaling features like scaling of CPU and RAM, which is possible only in eNlight because it is based on the Linux platform. So if you have a website with seasonal traffic, like that of a university where results are declared only once a year, the website may slow down or crash during peak times. But if the website is hosted on eNlight, it will never crash: the number of CPUs and the RAM of your virtual machine will increase, helping your machine grow in size and serve the incoming traffic. All these add-ons developed by our company complement open source technology. We use languages like PHP, Java and Perl to power our products.

Q: How open are your clients to adopting open source solutions and has their attitude changed towards FOSS, of late?

Our end customers are happy adopting open source technology-based solutions. Most of the customers say that they want to host on Linux platforms. But companies that have developed their ERPs, or any other software, are not inclined to open source technologies. That is the saddest part of the open source story in India.


Developers

Insight

Segmentation Faults

A huge amount of application code is developed every moment—hence, program failures are very common. Often a program ends abruptly, leaving no clue as to why it failed. When it is accompanied by the famous ‘segmentation fault’ message, users are lost. This article helps readers understand what happens behind the scenes, and exactly where the application fails.

Let's begin by understanding what the term 'core dump' means and discovering how it can help you. A core dump is the memory snapshot of a process at a specific time, especially when the process crashes (exits abnormally). The major pieces of the program state are dumped, including registers and memory management information. Core dumps assist in debugging errors in the program. The memory segments of a process have text, data, heap and stack.

In order to get core dumps, you need to enable the kernel configuration CONFIG_ELF_CORE in the running kernel. If the running kernel has this configuration disabled, enable it, rebuild the kernel, and boot the system with the new kernel. Second, ensure that the core dump is turned on—the core of a crashed process will be dumped only if the core file size limit is set. To do this, use the ulimit command. The -c option gives the current size setting for core files; ulimit -c 0 turns off core dumps, while ulimit -c x sets the maximum size of the core file to x blocks (e.g., 1024). And ulimit -c unlimited turns on the core dump with an unlimited size for the core file.

When an illegal statement is executed, the process receives a signal. If the default action of the signal is to dump the core before exit, it checks the user limit set for the core file; if the set value turns on the core dump, it dumps the core file. When the program receives signals, it performs the default action of the signal, unless the user has registered other functions as signal handlers in the program. Some signals on Linux (whose default action is to dump the core before exiting) are SIGQUIT, SIGILL, SIGABRT, SIGFPE, SIGSEGV, SIGBUS, SIGSYS, SIGTRAP, SIGXCPU and SIGXFSZ.
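The ulimit settings described above can be tried directly in a shell; the following is a minimal sketch (test stands for any program that crashes):

ulimit -c                  # show the current core file size limit; 0 means core dumps are off
ulimit -c 0                # disable core dumps for this shell and its children
ulimit -c 1024             # allow core files of up to 1024 blocks
ulimit -c unlimited        # remove the size limit altogether
./test                     # a crashing program now leaves a core file behind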

Triggering a seg-fault deliberately

You can deliberately include an illegal statement in a program—dereferencing a NULL pointer, as shown in the following code (call it test.c):

#include <stdio.h>

int main()
{
	int *ptr = NULL;
	printf("Dereferencing the NULL pointer: %d\n", *ptr);
	return 0;
}

The value of ulimit -c has been set to unlimited, so when you compile the code with gcc -g -o test test.c and then run the compiled program, you will receive the message 'Segmentation fault (core dumped)'. By default, the core dump file name will be core. You can also format the dump file name by defining a template in /proc/sys/kernel/core_pattern, using % specifiers that are substituted as follows:

%%   a single % character in the core file name
%p   PID of the dumped process (numeric)
%u   real UID of the dumped process (numeric)
%g   real GID of the dumped process
%s   number of the signal causing the dump
%t   time of the dump, expressed as seconds since the Epoch (00:00h, 1 Jan 1970, UTC)
%h   hostname
%e   executable filename
%c   core file size soft resource limit of the crashing process

[Figure 1: Set the template for the core file name]

Figure 1 shows how I set the template for the file name, ran our test program, and got a core file with a name as per the specified pattern. If /proc/sys/kernel/core_pattern does not include %p but /proc/sys/kernel/core_uses_pid is set to 1, then the core file name will be appended with .PID (the PID of the crashing process). If %p is in core_pattern then the core_uses_pid value is ignored. You can also dump the core of a running process (that has not crashed) using the gcore utility, specifying the PID of the process. For example, if a program is started under the PID 5004, then gcore 5004 will dump a core file for the running process, without a crash—it continues running.
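As a quick illustration (run as root; the template, and the PID 5004 used for gcore, are only examples):

echo 'core.%e.%p.%t' > /proc/sys/kernel/core_pattern   # name cores after executable, PID and time
./test                                                 # crash the test program again
ls core.test.*                                         # a file such as core.test.2816.1354012345 appears

gcore 5004                                             # dump the core of a running process; it keeps running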

Reducing the core file size

There might be space restrictions per user on a multi-user system; large core files, especially when multi-threaded applications crash (since they would use more memory), might just eat up available storage. Linux supports the reduction of the core file size using the filter option (/proc/<pid>/coredump_filter), which can be used to dump only specific memory to the core file when the process crashes. Table 1 shows what each bit signifies—what memory to dump:

Table 1
Bit	What to dump
0	Anonymous private mappings
1	Anonymous shared mappings
2	File-backed private mappings
3	File-backed shared mappings
4	ELF headers
5	Huge TLB private
6	Huge TLB shared

The default value of coredump_filter is 0x3, which dumps only the anonymous private and anonymous shared mappings of the crashed process. The file accepts hex, decimal and octal values. A sample use of coredump_filter can be viewed at http://linuxforu.com/article_source_code/dec12/coredump.zip. You can obtain the memory records of a crashed process, which can help in debugging the program, and you have the flexibility to specify the core file name pattern and to optimise size using core_pattern and coredump_filter.
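A brief sketch of adjusting the filter for a running process (again using the example PID 5004):

cat /proc/5004/coredump_filter          # default is 0x3: anonymous private + anonymous shared mappings
echo 0x17 > /proc/5004/coredump_filter  # also dump file-backed private mappings (bit 2) and ELF headers (bit 4)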

Debugging using core dumps

Core dumps can be debugged using GDB. When gdb is run with the binary that created the core file, as well as the core file, it reads the contents of the core dump and reports the following:
● The name of the process that dumped the core file
● The signal sent to the process to dump the core file
● Any illegal statement in the program, with the line number
● Values of variables
● Values of registers, etc.
Figure 2 shows a sample run of GDB in this manner. This can be viewed at http://linuxforu.com/article_source_code/dec12/coredump.zip.
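A typical session looks like the sketch below (the core file name depends on your core_pattern setting; test is the program built earlier):

gdb ./test core.test.2816.1354012345
(gdb) bt                # backtrace pointing at the faulting line in main()
(gdb) frame 0           # select the crashing stack frame
(gdb) print ptr         # inspect variable values
(gdb) info registers    # register contents at the time of the crash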

Useful links
[1] http://en.wikipedia.org/wiki/Core_dump
[2] http://linux.die.net/man/5/core
[3] http://www.akadia.com/services/ora_enable_core.html
[4] http://www.cyberciti.biz/tips/linux-core-dumps.html
[5] http://stackoverflow.com/questions/17965/generate-a-core-dump-in-linux

By: Nayana Mariyappa The author works as a technical lead with the Linux Platforms Group in the R&D Centre (Bengaluru) of Huawei India. She has around six years of experience in the Linux domain, in the areas of memory management, process management, diagnostic tools, and Linux platform releases for telecom products.




An A-Z listing of

Network Security Solutions Providers Cisco Systems | Bengaluru, India The company has an end-to-end portfolio of security solutions that can be customised as per an organisation’s requirements. The solutions provide a seamless integrated experience for the end user. The company offers a range of solutions based on Adaptive Security Appliances (ASA) firewalls, ASA CX next-gen firewalls, the Identity Services Engine (ISE), and Intrusion Prevention Systems (IPS). Its approach towards network security involves offering a combination of both software and hardware solutions to customers. The Cisco ASA firewall solutions enable businesses to meet all their security needs with Cisco’s multi-scale performance and comprehensive suite of highly integrated, context-aware security services. The Cisco context-aware (CX) firewall goes beyond the typical next-gen firewalls, which are technically capable of recognising only a portion of the context of a flow and are ineffective against emerging threats. The Identity Services Engine (ISE) is a core component of the Cisco TrustSec solution and Cisco SecureX architecture, essential for network security and is ideal for implementing BYOD. Cisco IOS Intrusion Prevention System (IPS) provides the organisation’s network with the intelligence to accurately identify, classify, and stop or block malicious traffic in real time.


The company also offers integrated router or switch security, and integrated threat control solutions to enterprises for network security. Some of Cisco’s network security products include: Router Security, the Small Business SA500 Series Security Appliances, Cisco Automated IOS Protection Solution and Cisco IOS Content Filtering. Leading clients: The company is the leading network security vendor in India, according to data shared by Frost & Sullivan for (CY Q2 2012). The company also sees significant traction from traditional enterprises, as well as the retail, health and education verticals, as organisations in these domains realise the importance of securing their data against internal and external threats. Major open source offerings: Cisco is a formidable player in the network security space and has a host of end-to-end solutions that aim at safeguarding organisations. In 2007, Cisco acquired IronPort and, since then, has contributed to the open source community, offering bug fixes, feature patches, funding for developers and for core projects. Cisco has also won recognition from the community for its Apache Spam Assassin solution, which is used for email-spam filtering. USP: The company's network security solutions are based on the Borderless Network approach that was founded on Cisco’s best-in-class wireless networking and security solutions. Using the network as the platform, businesses can use integrated network security products, standalone appliances, fully hosted or hybrid-hosted offerings, or security SaaS to build a wide range of security solutions. To gain the greatest value for their security investments, Cisco builds ecosystem partnerships and offers professional services, thus creating one of the most complete offerings in the marketplace. Cisco's solutions are based on the Borderless Network Architecture, allowing employees and their organisations to connect and communicate anytime, anywhere and on any device— in a secure and reliable manner. Special mention: Cisco recently won an award for network security appliances at the CRN Annual Report Card Awards, held on August 21, 2012 at the XChange 2012 conference in Dallas, Texas. Website: http://www.cisco.com/en/US/products/hw/vpndevc/solutions.html

Digital Track Solutions | Chennai, India The company offers a myriad of network security solutions like firewalls, VPN, SSL VPN, anti-virus, anti-spam, data leak prevention, two factor authentication, Web filtering, IPS, application controls, Web application firewalls, etc. Leading clients: Applabs, CSC, IMI Mobile, Tata Projects, ADP, Beam Telecom, ACT, L&T, MAA TV, TV9, Penna Cements, KCP Cements, Hetero Pharma, Waterhealth, etc.




USP: Provides end-to-end network security solutions, training on a free-of-cost basis to customers on new technologies for upcoming security solutions. Special mention: Has been awarded the ‘No.1 Medallion Partner in India’ and the ‘Best Technical Architect’ from Sonicwall, the ‘Best Partner for Fortinet’ in the southern region, the Silver Partner for ‘Checkpoint Solutions’ in India, the ‘Best Partner’ and the ‘Best Technical Manager' from Websense for the year 2008-2009, the ‘Best Value Added Reseller’ and ‘Sales Representative (Srikkantan Venkatesh)’ from NetApp, the ‘Gold Partner’ for the year 2011 from NetApp, the ‘Preferred Partner in south India’ from Solar Winds, the ‘Best Deal for the Year’ and the ‘Star Performer’ for 2009 from Symantec, the ‘Most Preferred Partner’ for McAfee in south India, and the ‘Best Partner 2011 Enterprise Security’ for Fortinet. Website: http://www.digitaltrack.in


Eon Networks | Gurgaon, India

The company has been in the network security business since 2007 and so far it has done satisfactory installations for more than 100 clients. The company works closely with all leading OEMs and has done major business for brands like Cyberoam, CheckPoint and Sonicwall.

Leading clients: PeopleStrong, Nucleus Data Recovery, Honda Trading, Clairvolex, Unitech Nirvana, Motoman Robotics, as well as government organisations and many leading schools at Gurgaon.

USP: 24x7 sales support with the help of a good number of highly technical staff on the company's pay-roll. Eon Networks has a strong in-house R&D team that keeps evaluating new technologies, and helps its customers stay up-to-date on cutting-edge solutions and what's new in a wide range of areas in the IT security space.

Special mention: The company has recently received Symantec's SMB certification, becoming a Symantec Specialist Partner. The firm's security team was also awarded the 'Cyberoam Certified Network and Security Expert' certificates from EliteCore.

Website: http://www.eonconnects.net

Netland Solutions | New Delhi, India

The company offers managed end-point security from Kaspersky to its clients. Kaspersky Open Space Security covers all levels of networks, right from workstations, servers and mail servers, to proxy servers and smartphones. It supports various operating systems like Linux, Windows, Android, etc. Kaspersky provides the following features: support for the virtualised environment, automated lifecycle processing, vulnerability scanning, Web interfaces, mixed environment support, pre-defined policies, automated mobile policies, and support for any size of network. Powerful control tools that it offers include: device control, application start-up control, application privilege control, Web control and content filtering.


Leading clients: DRDO, ITBP, BSF, JBM Auto Ltd, Vishal Megamart, Swaraj Mazda Ltd, Crowne Plaza Chain of Hotels, United India Insurance Co Ltd, Fairwealth and many more. USP: Ninety-nine per cent reduction in enterprise-users’ calls to the helpdesk. It makes networks stable. Website: http://www.netlandsolutions.in

Nevales Networks | Mumbai, India The company offers emerging enterprises a secure environment to conduct their business on a ‘pay as you use’ basis. Nevales manages security, connectivity and enables businesses with 1 to 400 employees to access cloud applications. The Nevales platform is a combination of an 'On-premise Security Gateway Device' and a 'Cloud Services' platform that secures businesses and enables cloud applications. The Nevales security gateway, N300-X, integrates several crucial security features such as firewalls, VPNs, intrusion detection, anti-virus and antispam software, surf detection, spyware guarding, access management, bandwidth management, traffic monitoring and efficient reporting in a single management platform, and is available on a subscription basis.




The architecture of the N300-X enables businesses to access applications and services safely and securely from the Nevales Cloud. The solution is ISP-agnostic and is compatible with all current and future Internet services. Major open source-based offerings: Nevales Networks provides SNORT for intrusion prevention and ClamAV as an open source alternative to the commercial Trend Micro, the gateway anti-virus solution. Leading clients: Shaman Motors, Empire Hotels, M.S. Engineering College, Fredun Pharmaceuticals, and more. USP: The company offers services like reduced costs and no hidden expenses, no CAPEX, no licensing and maintenance hassles, the ‘pay as you go’ service model, a simple-to-use single security solution, remote manageability and administration from anywhere, improved administration of resources, latest security updates, upgrades and automated monitoring, localised free telephone and online support, etc. Special mention: Featured in ‘NASSCOM’s Emerge 50 start-ups for 2011’ list and was one among Techcircle’s ‘Top 10 Emerging Indian SaaS Companies for 2012’. Website: http://www.nevales.com


Payatu Technologies | Pune, India The company offers a variety of security services including security assessments, managed security services, consulting and training. Payatu delivers high-end services focusing on organisations' actual needs when securing themselves and their assets. The security assessments are based on Payatu's own threat modelling methodology called the BRT model that defines the ‘Business, Risk and Threats’ for an organisation. The prime services offered include: • Product security testing including software and hardware. • Application security assessment including client, server, desktop and mobile applications. • Infrastructure security assessment including IP and telecom networks. • High-end technical security training including Android systems security, Linux systems security, Web hacking and network hacking. Apart from security services, Payatu also manages and organises Nullcon, India's largest security conference. Major open source-based offerings: A firm believer in the power of open source, most of the company’s security assessments and training are done using open source security software. Its dependence on and passion for open source is not limited to just using it, but the company is one of the few security organisations that has gone ahead and contributed to the open source community by writing open source security software and publishing it online for free. Some of its notable projects are ‘Game Over’–a Web security learning platform, and ‘Jugaad’–a framework to protect against malicious code injection in Linux processes. Leading clients: The company has various clients across different verticals around the world, including banks, finance firms, government, telecom operators, healthcare providers, security software vendors, etc. USP: Payatu's assessments are based on years of experience and research in software development issues. It applies its ingenious threat modelling technique–BRT—to cover the otherwise ignored aspects of the target in a conventional security assessment. Special mention: The company's team of experts has responsibly disclosed security vulnerabilities to various large players such as LinkedIn, Apple, Cisco, etc, out of a passion for security research. The research team's dedication and motivation in finding flaws in anything and everything has vividly demonstrated Payatu’s worldclass expertise in the security domain. Website: http://www.payatu.com

Quick Heal Technologies | Pune, India

The company offers Quick Heal Terminator, a high-performance, easy-to-use unified threat management solution catering to the network security needs of small and mid-sized enterprises. The power-packed security solution is tailored to suit the complexity of emerging threat scenarios. This fully integrated product is a simpler and smarter way of replacing multiple security programs with one comprehensive solution. It has well-integrated features along with a robust firewall, VPN and bandwidth management solution.

Major open source-based offerings: Quick Heal's Network Intrusion Detection and Prevention System is based on SNORT, and its Web proxy is based on SQUID. The company also uses open source technologies like MySQL and PHP.

USP: Quick Heal solutions are designed to make security enforcement flexible, and deployment simple and effective, with the sole aim of augmenting business efficiency and empowering its client's workforce. The solution helps SMBs achieve their network security goals.

Website: http://www.quickheal.com


SISA Information Security | Bengaluru, India


The company's services are spread across three verticals—consulting, training and software products, and are currently being offered to over 300 organisations spread across 30 countries. SISA is India's first PCI QSA organisation, and continues to pioneer and lead the security industry within this sector. SISA's services within the PCI sector include PCI DSS validation, PA QSA validation, PCI ASV scanning, etc. SISA also provides technical security services such as the application pen test and code review, network VA, penetration testing and forensics.


Major open source-based offerings: SMART-RA is a free formal risk assessment tool, wholly developed by SISA. The tool helps organisations reduce expenditure on information security risk assessments by up to 60 per cent. The tool can be accessed at www.smart-ra.com. Leading clients: SISA has clients across various sectors such as airlines, banking, retail, IT and ITES, e-commerce, etc. Some big names include IBM, Wipro, Infosys, Vodafone, Bookmyshow, Flipkart, Snapdeal, Yatra.com, Bank of India, HDFC Bank, Bank of Baroda, Qatar Airways, Abu Dhabi Commercial Bank, etc. USP: SISA is known amongst its clients for efficient solutions that surpass compliance criteria, and reach for the larger goal of security. This service philosophy is entrenched in SISA's motto, 'Security and not just Compliance'.

Special mention: SISA Information Security has recently featured in 'NASSCOM's Emerge 50 Companies for 2012' under the growth category.

Website: http://www.sisainfosec.com

Unmukti Technology | Gurgaon, India The company offers Hopbox (http://hopbox.in), a pay-for-use network security service, managed and monitored by experts with a mission to provide enterprise-grade, affordable network security to companies, irrespective of their size. The company doesn't sell appliances; instead, it aims for an all-inclusive approach to provide services to customers at the middle and lower end of the spectrum. Major open source-based offerings: Being an open source shop, the company has been able to use existing robust tools to create a scalable platform that quickly responds to customer requirements even as it keeps adding new features. The entire platform has been built exclusively using open source components. The onpremise appliance has a FreeBSD kernel core with PF, Snort, OpenVPN, Racoon, etc. The cloud component utilises Debian GNU/Linux with Squid, ClamAV, C-ICAP, MySQL and MongoDB.


Leading clients: V-Mart Retail Ltd (at more than 60 locations), Resurgent India Ltd (a boutique investment bank), Ginni Systems Ltd (software), DesFab Engineers & Builders (manufacturing), Mukesh Raj & Co Chartered Accountants (accounting), Stones2Milestones Education, etc. USP: As a pay-for-use, completely managed service offering, Hopbox reduces the TCO (Total Cost of Ownership) for enterprise grade network security. Proactive monitoring by experts and real-time responses to emergent threats create a continuously evolving network security environment. A cloud-based ‘Policy & Malware Filtering Engine’ ensures that data leak and Web usage policies are applied and Web-based threats are thwarted. The ‘Real Time Analytics Engine’ helps clients understand what is happening in their networks and provides actionable insights on network status, emerging threats, employee Web usage and policy effectiveness. Special mention: Invited exhibitors at Nullcon Delhi 2012, an international security conference and exhibition. Website: www.unmukti.in




Calendar for 2012-2013: Events to Look Out For

Gartner Data Center Conference | http://www.gartner.com/technology/summits/na/data-center/
An event for data center professionals managing their enterprises through IT
3 – 6 Dec 2012 | The Venetian Resort Hotel and Casino, Las Vegas

The Big Data, Analytics, Insights Conference | http://www.bigdatainsights.co.in/
An event to explore data analytics solutions, and the latest skills, tools and technologies needed to make Big Data work for an organisation
18 – 19 Dec 2012 | The Westin Mumbai Garden City, Mumbai

Cybermania | http://www.shaastra.org/2013/main/#events/cybermania
A national-level ethical hacking championship
5 – 8 Jan 2013 | IIT, Chennai

Nullcon Goa – International Security Conference | http://www.nullcon.net/website/
A security conference series; an initiative by null, the open security community, a registered not-for-profit society
27 Feb – 2 Mar 2013 | The Bogmallo Beach Resort, Goa

Gartner Symposium/ITxpo 2013 | http://www.gartner.com/technology/symposium/dubai/
The industry's only event to deliver the insights, tools and relationships necessary to create, validate and execute transformative business technology strategies
5 – 7 Mar 2013 | Madinat Jumeirah Hotel, Dubai, U.A.E.

LFY magazine attractions during 2012-13

Month | Theme | Featured List
May 2012 | Virtualisation | Certification & Training Solution Providers
June 2012 | Android | Virtualisation Solution Providers
July 2012 | Open Source in Medicine | Web Hosting Providers
August 2012 | Open Source on Windows | Top Tablets
September 2012 | Open Source on Mac | Top Smart Phones
October 2012 | Kernel Development | CLOUD Solution Providers
November 2012 | Open Source Businesses | Android Solution Providers
December 2012 | Linux & Open Source Powered Network Security | Network Security Solutions Providers
January 2013 | Linux & Open Source Powered Data Storage | Network Storage Solutions Providers
February 2013 | Top 10 of Everything on Open Source | IPv6 Solution Providers



TIPS & TRICKS

Using ‘vi’ commands on your terminal

Using 'vi' commands while working on the terminal is a good work enabler. To set your terminal to 'vi' mode, you need to use the following command:

set -o vi

Now you can use the command mode and the insert mode of 'vi' while working on the terminal.
—Dipjyoti Ghosh, dipjyoti.ghosh@gmail.com

Get your IP address

Here is a one line command to fetch all the IP addresses (except localhost) of your computer:

# ifconfig | grep “inet addr:” | awk ‘{print $2}’ | grep -v ‘127.0.0.1’ | cut -f2 -d:

Note: Use the above command as the root user. —Balkaran Brar, balkaran.brar@gmail.com

Make your system speak for you!

You can make your system speak for you by using the Speech Synthesizer command normally available in Ubuntu and many other distributions of Linux. To do so, issue the following command:

# espeak "hello how are you"

You will hear a voice speaking for you. To change the pitch of the voice (the default being 50), you can issue the following command in the format shown:

# espeak -p 80 "hello how are you"

Issuing the following form of the command will control the speed of the speech, in terms of words per minute:

# espeak -s 80 "hello how are you"

There are more interesting options available in the man pages.
—Sanjay Goswami, sanjaygoswamee@gmail.com

Measuring the network throughput between two Linux systems

Iperf is a tool that measures the bandwidth and the quality of a network link. It can be installed very easily on any Linux system. One host must be set as the client and the other one as the server. Make sure that iperf is installed on both systems. If it is not installed, then use your package manager to install it before trying this tip. Now run iperf on one of the Linux systems as the server, as shown below:

linux-erv3:/home/test/Desktop # iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

Go to the second Linux system and run iperf -c <host_name or server_ip> as the client:

linux-6bg3:~ # iperf -c 192.168.1.100

------------------------------------------------------------
Client connecting to 192.168.1.100, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.109 port 39572 connected with 192.168.1.100 port 5001
^C[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0- 6.3 sec  6.38 MBytes  8.51 Mbits/sec

By default, the iperf client connects to the iperf server on the TCP port 5001, and the bandwidth displayed by iperf is the bandwidth from the client to the server. In the above example, it is 8.51 Mbits/sec between two Linux test systems connected over a wireless network.

—Aarsh S Talati, aarshstalati1989@gmail.com

Securing files

Here is a simple tip to password protect your files:

vi -x test

This command will ask for an encryption key. You have to type the key twice. Then save and quit the opened file. Now, whenever you open this file, it will ask for that password first.
—Sumit Chauhan, sumit1203@gmail.com

Power yourself with Netstat

Here are a few uses of the netstat command that can help you. To display the kernel interface table:

netstat -i

To display the kernel routing table: netstat -rn

To display all open network sockets: netstat -uta

To display network statistics:

netstat -s

—Prasanna, prasanna.mohanasundaram@gmail.com


Uninstalling a package

To completely uninstall a package, first check the exact name of the package to be uninstalled by using the following command:

sudo dpkg --get-selections | grep package_name

The output of the above command will display the name of the package. Once you know the package name, you can remove it by using the command shown below:

sudo apt-get remove --purge package_name

—Prasanna, prasanna.mohanasundaram@gmail.com

Finding the full path of the shell command

There is a command named which that takes one or more arguments as input. It prints to standard output the full path of the shell command. It does this by searching for an executable or script in the directories listed in the environment variable PATH:

[aarsh@localhost ~]$ which poweroff
/usr/bin/poweroff

If the command is not found, it gives the output shown below:

[aarsh@localhost ~]$ which moodule
/usr/bin/which: no moodule in (/usr/lib/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/lib/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/aarsh/bin)

—Neeraj Joshi, neeraj88joshi@gmail.com

Share Your Linux Recipes! The joy of using Linux is in finding ways to get around problems—take them head on, defeat them! We invite you to share your tips and tricks with us for publication in OSFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at www.linuxforu.com. The sender of each published tip will get a T-shirt.



FOSS Events

OpenMRS 2012: A Meet that Rocked!

"All of us in Google's Open Source Programs Office are very pleased to see how many exceptional contributors to OpenMRS have started working with this outstanding humanitarian FLOSS project through their work with Google Summer of Code."
—Cat Allman (Co-organiser, Science Foo Camp; Open Source Programs at Google)

This year's OpenMRS Implementers Meeting was indeed a rocking affair. It was held at the International Institute of Rural Reconstruction in Silang, Philippines from October 9-12, 2012. More than 100 people from over 21 countries participated in the event, including attendees from the United States, Kenya, India, Sri Lanka, Pakistan and other countries. Google and ThoughtWorks were the sponsors for this year's event.

The annual Implementers Meeting began in 2006 as a way to bring members of the community together to collaborate, share implementation experiences, and find ways to improve OpenMRS. This meeting provides an opportunity for developers to collaborate and improve their technical skills in OpenMRS, for implementers to share their best practices from implementations, and for users to propose and prioritise their top features in future releases of the software. As was introduced last year, the 2012 Implementers Meeting also included visits to OpenMRS implementation sites.

The meeting started with opening remarks from OpenMRS co-founder Paul Biondich, followed by welcome addresses from the Hon Juanito Victor C Remulla, Governor of Cavite, and Dr Portia Fernandez-Marcelo, director of the National Telehealth Centre. The un-conference format allows people to follow the talks and discussions of their own choice. A lot of parallel talks were organised on Day 1. First, Paul Biondich presented the 'State of OpenMRS', and Bill Lober discussed the architecture and interoperability of OpenMRS. Later, Roger Friedman discussed requirements for hospital systems. Afterwards, a group of implementers with varied experiences did a short brainstorming session led by Dawn Smith, Lauren Stanisic and James Arbaugh. This was followed by a session by Tobin Greensweig, who explained issues around point-of-care OpenMRS use. Then Ellen Ball and Viet Long Pham discussed upgrades and

data exchanges. Day 1 ended with several 'birds of a feather' sessions that included discussions about things like promoting OpenMRS service providers, and a Google Summer of Code (GSoC) discussion session.

Day 2 started with a discussion on biometrics in OpenMRS, presented by Shaun Grannis, followed by data migration from .NET systems into OpenMRS, which was presented by Titi Tsholofelo. Afterwards, Darius Jazayeri led an interactive session on how to use OpenMRS to build an EMR application. The session on the 'OpenMRS Reporting module' was led by Mike Seaton. In Day 2's 'birds of a feather' sessions, there were discussions about large-scale OpenMRS deployments, pharmacy modules, Kenya OpenMRS distributions, MDR-TB module experiences and future needs, inventory management with the Open Boxes system, and the OpenMRS learning curve.

On Day 3, there were visits to the various clinics in which the OpenMRS-powered distribution CHITS was in production. That was followed by a remarkable session by OpenMRS co-founder Burke Mamlin on the future roadmap for OpenMRS. Day 4, the last day of the meet, started with a 'Site Visit Q&A Forum' presented by Burke Mamlin and local implementers. This was followed by sessions on mobile technology in OpenMRS by Nicholas Wilkie, multiple applications at a single facility by Mwatha Bwanali, and registration modules with Tobin Greensweig. The meet ended with a review of the 2012 meeting and a discussion on the future prospects of the community.

References
[1] OpenMRS: http://openmrs.org/
[2] Event: http://events.openmrs.org/
[3] Event coverage: http://lanyrd.com/2012/omrs12/
[4] Notes: http://lanyrd.com/2012/omrs12/notes/
[5] CHITS: http://chits.ph/


EnterpriseDB Postgres Plus Subscription for your successful

PostgreSQL/Postgres Plus deployments Includes... Predictable cost for your support via Remote, Email and Telephonic support for your production systems

Unlimited number of incidents supported

Software updates upgrades, patches and technical alerts service Web portal access and knowledge base access with PDF documentation

The Postgres Plus Solution Pack provides high value add-on tools to PostgreSQL for Administrative Monitoring, Data Integration across multiple servers, Availability, Security, Performance and Software Maintenance.

Postgres Enterprise Manager (PEM) The only solution that allows you to intelligently manage, monitor, and tune large numbers of Postgres database servers enterprise-wide from a single console.

SQL Protect Protects your PostgreSQL and Advanced Server data against multiple SQL virus injection vectors by automatically learning safe data access patterns and collecting attack data.

PL/Secure for PL/PgSQL Protects your server side database code and intellectual property from prying eyes for both internal and packaged applications without any special work on the part of the developer!

Updates Monitor

Migration Toolkit

SQL Profiler

Eases your installation maintenance burden by notifying you when updates to any components are available and assists you in downloading and installing them.

Fast, flexible and customized database migration from Oracle, SQL Server, Sybase, and MySQL to PostgreSQL and Postgres Plus Advanced Server.

A developer's friend to find, troubleshoot, and optimize slow running SQL fast! Provides on-demand or scheduled traces that can be sorted, filtered and saved by users and database.

xDB Replication Server Provides easy data integration between PostgreSQL based servers and between Oracle and PostgreSQL allowing Oracle users to dramatically reduce their Oracle license fees

You can find more details on the following links: http://www.enterprisedb.com/products-services-training/subscriptions http://www.enterprisedb.com/postgresql-products/premium

For inquiries contact at: sales@enterprisedb.com EnterpriseDB Software India Private Limited Unit # 3, Ground Floor, Godrej Castlemaine, Sassoon Road Pune – 411001 T +91 20 3058 9500 F +91 20 3058 9502 www.enterprisedb.com


Hurry! Offer expires September 30, 2012

*

Test, develop and deploy your application on VMware vCloud powered cloud Avail free cloud credit worth ` 25,000*, visit www.cloudinfinit.com for more details







