Adios Steve ‘genio’ Jobs
Steve Paul Jobs, who died of pancreatic cancer on 5th October 2011 aged 56, was one of the few people who made an everlasting impact within their lifetime. The co-founder and later CEO of Apple Inc., he gave the world a taste of excellence through products like the iPhone and the iPod. A visionary and a widely recognized charismatic pioneer of the personal computer revolution, he understood the needs of the modern era and fulfilled them to perfection. His gadgets redefined people's view of music, mobile telephony and personal computing.
As the legend goes, in 1976 Steve Wozniak (the other founder of Apple) showed Jobs a computer he had built for his personal use. Jobs was impressed and suggested marketing it. They had no capital, but Jobs had a brilliant idea. By persuading a local store to order 50 of the computers, then asking an electronics store for 30 days' credit on the parts to build them, they set up the business without a single investor. They called it Apple Computers and launched their first product, the Apple I. A year later the more sophisticated Apple II hit the jackpot, and by 1980, when the company went public, the pair were multimillionaires.
Along with Apple, he also owned 7% of Disney's shares, the largest individual stake, acquired when he sold Pixar to Disney in 2006 for $7.4 billion. He had bought Pixar (formerly The Graphics Group) from Lucasfilm for $10 million in 1986; he turned a $10 million company into a $7.4 billion one in just 20 years. He was the key to the success of Apple and of the other companies he bought or started. Described as a control freak by many, he aimed for nothing less than perfection and was known to have rejected thousands of ideas in his quest for it. He once said, "Focusing is about saying no. You've got to say no, no, no … and the result of that focus is going to be some really great products where the total is much greater than the sum of the parts."
Steve Jobs lived the timeless American dream. His story, from garage tinkerer to multi-billionaire, has been told so many times that it is now a legend. Jobs was one of those exceptional personalities who could build an empire out of nothing. He will remain a memory in the minds of people that is not going to fade anytime soon.
Branch Counsellor Prof. S.N. Sinha
Executive Editor Rose Ketty Tete
President Mayank Garg
Design Head
News and Editorial Team
Shubham Jaiswal
Abhay Gupta Siddharth Bathla Mukul Kumar Jain Ranjan Kumar Amish Bedi Rishabh Sharma Soumitr Pandey Vikram Rathore Ankush Pumba Mohit Garg Pallav Anand Nikita Duseja Swapnil Digant Ray Richa Jain Abhishek Nayak Anshul Agarwal Akshay Agarwal Alankrita Gautam Shashank Saurabh Ranjan
Finance Heads
Design Team
Rajat Gupta Shagun Akarsh
Tushar Mehndiratta Rahul Kumar Modi Lokesh Chandra Basu Chirag Kothari Sumit Kumar Shubham Mittal Muskan Rajoria Aditi Gupta Palak Jajoo Manisha Varshney Prithvi Mihawak Vishal Singh Tushar Gupta Kritagya Bawal Aditya Kumar Vijay Jain
Finance Team Prakul Gupta Nitin Nandwani Krati Verma Abhilash Jajoo Amey Sahasrabuddhe Devang Roongta Navneet Goel Pravesh Jain Priyanshi Goyal Rishabh Jain Sameer Rastogi Saurabh Wagh Anmol Anand Divyen Jain Karan Kumar Virendra Pratap Sahil Agrawal Sonia Barrolia
(c) 2011 by IEEE Student Chapter, IIT Roorkee. All rights reserved. This magazine is meant for free distribution and no part thereof should be sold at any price without the prior permission of IEEE Student Chapter, IIT Roorkee.
“Let there be light” are believed to be the first words spoken by God before life began on Earth. Through the ages, light in the form of knowledge has shown us the path of progress. Knowledge has frequently personified itself for us humans to develop and move ahead of other beings. From Galileo to Newton to Einstein, their path-breaking work has enlightened us all equally. That's how the Big Guy up above wanted it to be. Times change and so do trends; nothing is free, and almost everything comes with a price.
The past few months were eventful. Einstein's followers suffered a setback; his theory of relativity was questioned and the beliefs of many were shattered. We also witnessed the deaths of Steve Jobs, one of the greatest inventors of our time, and Dennis Ritchie, who created the C language and co-developed UNIX. As we lose such great “lights”, we live in the hope of soon being led by others. Befittingly, we dedicate this issue of Geek Gazette to Steve Paul Jobs.
In this issue we bring you “Open Source”: easy and open access to software, electronics, beverages, health and science, and art and recreation. With nearly unlimited access to information through the internet, we sometimes stumble upon incorrect and inappropriate answers. One such subject is cloud computing; here we try to clear the “cloud” over Cloud Computing. Through the timeline we witness the evolution of books. We bring you a wide range of topics, from cars to the future of 3D to QR codes. From around the campus we have the much-in-news NPTEL and IITR's Team SeiGer. The patent war between Samsung and Apple is big news these days; only time will tell who benefits from it.
With the motive of bringing you the latest news and giving insights into some intriguing technologies, we at Geek Gazette always strive to give you the best of knowledge and to spark curiosity about everything new that we encounter. We thank our mentor Dr. S. N. Sinha for his continued patronage, and acknowledge the consistent support of our readers. We also encourage the latter to direct their valuable feedback, suggestions and comments to ieeegeekgazette@gmail.com.
“Better to be a geek than an idiot.” With these words we sign off for this semester… Geeks (keep) reading! Cheers! The Geek Gazette Team
TESLA’S BLACK BOX
The concept of the electric car may seem like a very novel idea, but it is not. The idea was floated by engineers around the turn of the 20th century, but since electric cars did not provide the thrills and chills associated with gasoline-driven cars, they were given a thumbs down by car enthusiasts. Had the idea been realised, the face of the world would probably have been different from what it is today. The idea was dead for all practical purposes. But for some enthusiasts it remained a dream they just couldn't let go. One such person was Nikola Tesla. Yes, the same guy you are thinking about. Weird as it may seem, he too was obsessed with building an electric car.
In 1931, Tesla performed a very mysterious experiment on his Pierce-Arrow car with assistance from his nephew. He had the gasoline engine replaced with an 80 kW electric motor. The mystery of the experiment lay in a small black box designed by Tesla himself. He had been very particular about its dimensions, making sure they were precisely 24 in × 12 in × 6 in. The circuitry contained twelve tubes, wires and assorted resistors in a strange arrangement. The box had two rods protruding from its rear end along with two antennas. This strange-looking, mysterious black box was the heart of Tesla's experiment. For the experiment, he placed the box on the front seat of the car and pushed in the protruding rods. He mysteriously claimed that the box had given him power.
An average electric car of that period ran at a maximum speed of 40-45 mph and needed a recharge every hour or so. But this mysteriously modified car was different. Within a week of testing it achieved top speeds of 90 mph and, surprisingly, did not need to be recharged! People were flummoxed. They never expected an electric car to achieve such speeds, and that too without repeated charging. Moreover, there was no exhaust! The car was virtually running on nothing! When people asked Tesla the secret behind this "magic" car of his, he coolly replied that it derived power from the "ether" all around us. Everyone was bewildered and shocked. They branded Tesla's ideas as unscientific and called him a lunatic. Tesla was predictably furious and removed the box from the
car. He even refused to tell anyone the secret behind his "magic" black box. Eventually the secret died with him and the world lost a brilliant invention.
For many years now, scientists have been trying to decode the secret of Tesla's black box, and speculators have offered various theories. The box was supposed to be an energy receiver that derived its energy from the magnetic field of the earth. It may have been able to cut through the Earth's magnetic field lines or may have amplified their strength, but for lack of proof the theory fell on deaf ears and faced wide rejection. Its supporters, in its defence, have offered an explanation: the box itself may have behaved like the earth. Confused, eh? Well, the earth can be seen as an AC motor with a frequency of its own, powered by the Sun's electromagnetic waves via its poles. The sun throws a tremendous amount of energy at the earth, which absorbs it and releases it in the form of heat and EM waves. It is this energy that brings about sudden weather changes in the atmosphere. What Tesla must have done was create a circuit that ran at an operating frequency similar to that of the earth; with the two antennas acting as poles, he "wirelessly" extracted energy from the earth. This is supposed to be the secret behind Tesla's magic electric car.
Some people believe that such a thing never existed and that the whole story of the mysterious no-energy-consuming electric car is nothing but a well-crafted hoax, although at some point even these detractors were startled by the idea of an infinite source of energy simply waiting to be harnessed. Whatever people might say, it remains one of the mysteries of modern science. Intriguing as the concept is, nobody has yet been able to uncover the secret behind possibly one of mankind's biggest scientific losses, if it existed, that is.
Patent Wars
A patent is a set of exclusive rights granted by a government to an inventor for a limited period of time in exchange for the public disclosure of an invention. The exclusive right granted to a patentee in most countries is the right to prevent others from making, using, selling or distributing the patented invention without permission. The US patent system was created to encourage inventors to share their brilliance for the benefit of everyone, while at the same time assuring them protection of their intellectual property for a reasonable time in return. Originally the system applied to machines, with design patents and, grudgingly, software patents added over time.
In the past, patent wars never gathered such momentum, nor were they featured in mainstream media. So why have they suddenly come into the spotlight? Tech companies such as Microsoft and Apple have long sparred over patent rights. According to Dennis Crouch, a professor of patent law, "The right to shut down or limit the operation of a competitor is especially valuable when parties are competing over platforms." That sentence sheds much light on the subject. With the advent of smartphones, three major platforms – iOS, Android and Windows Phone – are competing for market domination. Patent infringement lawsuits can severely pull down distribution of a product, with the immediate result of removing the targeted product from the market.
The mobile patent wars broke out in March 2010, with Apple filing a suit against HTC alleging that the company was infringing on 20 patents related to the iPhone's user interface, underlying architecture and hardware. Microsoft and Apple coming together to fight Google with patents only stepped up the game. Ever since Google launched Android, it has been on the competitive radar of Microsoft and Apple, as the search giant's OS directly rivals Apple's iOS and Microsoft's Windows Phone OS. When Android was launched in October 2008, very few people thought it would be a roaring success. Several companies have sued Android partners such as HTC and Samsung for patent infringement. A company being sued by a patent holder today can buy a set of patents tomorrow that allows it to counter-sue immediately. The patent wars reached a new level when Nortel's patents were purchased for $4.5 billion, more than Nortel's net worth, by a consortium led by Apple and Microsoft. Google acquired Motorola Mobility, with its array of 17,000 patents, for $12 billion.
The role of patents in the growth of innovation needs to be restored. The America Invents Act of 2011, which grants patents on a first-to-file basis instead of a first-to-invent basis, makes some improvements, but it does little to address the underlying causes of the patent litigation flood. There may be a way out of this legal morass. Once firms perceive that no one has any short-term patent advantage anymore, competitors may simply cross-license intellectual property with each other. Until patent law effectively restricts patents on abstract ideas, it will not promote job creation and economic growth as it has in the past. This would help pave the way for future collaborative, cross-licensed technological innovation.
RENDEZVOUS
NPTEL: Towards e-IITs
NPTEL is an acronym for the National Programme on Technology Enhanced Learning, an initiative by seven Indian Institutes of Technology (IIT Bombay, Delhi, Guwahati, Kanpur, Kharagpur, Madras and Roorkee) and the Indian Institute of Science (IISc) for creating course content in engineering and science.
NPTEL as a project originated from many deliberations between the IITs, the Indian Institutes of Management (IIMs) and Carnegie Mellon University (CMU) during the years 1999-2003. A proposal was jointly put forward by five IITs (Bombay, Delhi, Kanpur, Kharagpur and Madras) and IISc for creating content for 100 courses as web-based supplements and 100 complete video courses of forty hours' duration each. Web supplements were expected to cover material that could be delivered in approximately forty hours. Five engineering branches (Civil, Computer Science, Electrical, Electronics and Communication, and Mechanical) and the core science programmes that all engineering students are required to take in their undergraduate engineering programmes in India were chosen initially. Content for these courses was based on the model curriculum suggested by the All India Council for Technical Education (AICTE) and the syllabi of major affiliating universities in India.
OBJECTIVES behind NPTEL
The basic objective of science and engineering education in India is to devise and guide reforms that will transform India into a strong and vibrant knowledge economy. In this context, the focus areas for the NPTEL project have been higher education, professional education, distance education and continuous and open learning, roughly in that order of preference. The manpower requirement for trained engineers and technologists is far greater than the number of qualified graduates that Indian technical institutions can currently produce. Among these institutions, only a small fraction have fully qualified and trained teachers in all the disciplines being taught. A majority of teachers are young and inexperienced and are
undergraduate degree holders. It is therefore important for institutions like the IITs, IISc, the NITs and other leading universities in India to disseminate teaching and learning content of high quality through all available media. To tackle the scarcity of trained teachers for higher education, methods for training young teachers to carry out their academic responsibilities effectively are a must, and NPTEL content can be used as core curriculum content for such training. Moreover, a large number of students who don't have access to quality education can use NPTEL as a source for the same.
IMPLEMENTATION at the national level
There are two committees: the National Programme Committee (NPC), headed by the Joint Secretary, Higher Education, MHRD, and the Programme Implementation Committee (PIC), headed by Professor M. S. Ananth, Director of IIT Madras and Professor of Chemical Engineering. The NPC oversees implementation of the programme and offers policy guidelines and the financial structure. The PIC enables the smooth functioning of the project in several phases and takes care of content creation and technology implementation. Members of the PIC meet periodically (about once every three months) to study the progress and issues related to coursework development. In each IIT and at IISc, faculty are nominated as TEL coordinators to interact with their colleagues, encourage them to prepare course materials, and offer technical and financial assistance using funds sanctioned for that purpose. In addition, two national coordinators, one for web-based development and one for video lectures, offer assistance and oversee the national programme. Groups are formed to solve specific technology- or pedagogy-related issues and to arrive at general guidelines for faculty preparing course materials. In the first phase of the programme, about 350 faculty members across all partner institutions worked together to deliver lecture content. In the next phase this is likely to increase to well over 1000 faculty members, and faculty from other institutions such as the NITs and major universities are also likely to participate.
In the wake of recent NPTEL operations and advancements, the Geek Gazette team interviewed Dr. B. Mohanty, head of NPTEL operations at IIT Roorkee. Here are some excerpts from the interview.
GG: Would you please give a brief overview of the ongoing NPTEL project?
The NPTEL project comes under the MHRD and all the funds are allocated by the ministry itself. Earlier, many institutes didn't have adept and experienced teachers and worked only for their own profit. NASSCOM's 2006 report stated that only 20-25% of the engineers graduating every year are employable. The IITs came up with the solution that e-learning was the only way; through this, we could deliver quality education at the doorstep. Initially, when IIT Madras didn't have its own server, we uploaded the lectures on YouTube. Now that IIT Madras has its own website and satisfactory bandwidth, the lectures are available on that website as well as on YouTube. Moreover, all the IITs are planning to have their own well-established NPTEL sites.
GG: When did the project arrive at IIT Roorkee?
The project arrived at IITR some 8 years ago and is currently in phase 2.
GG: Are all the departments enrolled in the project, and how many faculty members are involved in it?
The departments associated with the project are Chemical, Civil, Metallurgy, Electrical, Electronics and Communication, Biotechnology, Mechanical, Mathematics, Nanotechnology and Chemistry. In total, 72 faculty members are involved in the project.
GG: What is the upload frequency of these video lectures? Does IITR lack any resources for the timely completion of this project? What are its future prospects?
There is no specific upload frequency for the video lectures. It's just that we have to follow a deadline for the completion of phase 2, which is somewhere near March 2012, by which all faculty members should upload the lectures on their topics. When it comes to the lack of resources, it's not always funds that we lack but the fast-evolving technology. For example, when we started the project we didn't have HD-lens cameras; as the project evolved and we felt the need, we ordered HD cameras and now have 3 of them. We are still trying to get more.
GG: Who decides the breakup of topics among the different IITs and IISc, and within each IIT, who allocates topics to the various faculty members?
For the implementation of this project, a meeting was held at IIT Madras. The coordinators from the various departments of the 7 IITs and IISc Bangalore assembled there and decided the breakup of topics among themselves. These coordinators then allocate specific topics to faculty members of the relevant specialization.
GG: We have recently heard that IIT Roorkee has also opened its own NPTEL site. Are all the IITs planning to open separate sites? If so, will there be any central database of all the lectures?
Yes, all the IITs are planning to open their separate NPTEL sites. There would then be no central database as such; all the video lectures would be available on the different IIT servers, and the sites would be mirrors of the same content. This is only being done to take the load off IIT Madras's site, since it has limited bandwidth and people face problems while downloading the lectures.
GG: Are the new IITs also planning to be a part of this project?
At present the new IITs somewhat lack the required amenities for the project. But with time, they too plan to join in and get up to pace with it.
GG: Is there anything in store for industry from this project?
Industry is also among the biggest beneficiaries of the project. The web courses and video lectures are as useful to them as they are to students, since these very students later become the industry's workforce.
GG: Please shed some light on the web courses as well, since many people are not well acquainted with them.
The web courses contain knowledge similar to the video lectures, the only difference being that they contain theory, examples and other illustrations. It has been observed that the video lectures are more popular than the web courses, although both are equally good.
GG: Anything you would like to add, sir?
Finally, I would like to say that initiatives like the NPTEL project and the virtual labs are all being taken up to create "Virtual IITs". Students from other colleges will gain heavily from such initiatives.
Thank you, sir! We wish you luck in your future pursuits!
EFFICYCLE
The Society of Automotive Engineers (SAE) Effi-cycle is a competition for the design, development and racing of an electrically assisted human-powered vehicle (HPV), contested by engineering students from all over the country. The motive of the competition is to build a hybrid HPV, designed within practical and economic constraints, that could someday be used for day-to-day activities. Such a vehicle is of great need in a scenario where fuel prices and pollution levels are rising manifold. It encourages the use of human-powered technology with mechanics that give maximum power output without unduly tiring the driver.
Bicycles, the ancestors of the Effi-cycle, have provided humans with a cheap and environment-friendly means of transport for a long time, but they aren't the best of their kind. Bicycles have major drawbacks, like exposing the rider to external conditions that can become really harsh, an aerodynamically poor chassis that experiences high wind resistance, and inefficient transmission of human power to the machine. These are some of the things that need to be kept in mind while preparing the design of the HPV. The vehicle can take many forms and rider positions, such as upright, recumbent or prone, although in all cases it must be a three-wheeler.
Team SeiGeR, formed in the year 2010 under the aegis of Dr. V. N. Saran, is the official team of IIT Roorkee that competes in SAE Effi-cycle. The team competed in SAE Efficycle 2010 and designed an electrically assisted recumbent-frame HPV. The driving forces behind their design were to eliminate rider discomfort, increase speed through effective power generation, provide power assistance in case of driver exhaustion, reduce effective drag through aerodynamic improvements and design for maximum stability. The team has performed very well in past competitions: in Efficycle 2010 it stood at an overall 6th position and also won the "Utility & Manoeuverability Award" and prize money of Rs. 20,000. Geek Gazette wishes Team SeiGeR all the very best for the future. May they continue their legacy and bring fame to the institute.
CAR COMMUNICATION TECHNOLOGY
Ever thought how amazing it would be to know the whereabouts of every car running on the road? How great would it be to prevent crashes and accidents just by knowing what's coming around the next corner? By 2012, you might not, but your car will be able to, so to speak, talk with fellow cars on the road. Using a technology developed by Cohda Wireless and the University of South Australia, cars will be able to communicate among themselves. This technology, known as DSRC (Dedicated Short-Range Communication), combines the abilities of Wi-Fi and GPS to make a large network, with each car as a node connected to the others and to satellites, to relay important information about itself such as position, speed, etc. The project, underway since 2006, is being carried out in South Australia and is expected to be used in Ford cars by the end of 2012.
The technology uses satellite support from GPS networks to position each car on the network and makes intra-network 'communication' possible using Wi-Fi. There will be external Wi-Fi spots which will be used to share the information with a centralized location and with other cars. This new technology will make car tracking much more accurate and precise because data will be transmitted every 1/10th of a second. The same fact will also help reduce the chances of collisions, traffic jams and congestion, thus preventing accidents and also increasing fuel efficiency. Cars will have on-board processing units which will appropriately advise the driver to slow down, change lanes, take alternate routes, etc., and warn of emergency situations by means of loud noises and flashing lights. The cars will also be embedded with safety measures such as automatic emergency braking for unavoidable situations in places like a blind turn. These are a few of the already implemented advantages of vehicular communication. Possible future uses include use by law enforcement for speed limiting, restricting entry and pull-over commands to be executed by the car directly. And if driving is ever fully automated, vehicular communication will find its greatest use.
A major area of improvement in this technology is the ping, the time lag between a signal being sent and it being received by other cars. A lot of improvement has already been made, and constant research is underway to cope with possible future advances in car speed and acceleration. But ping is currently not the problem; the major problem is monetary. The upkeep of the network is something that customers won't be paying for, even if they agree to pay for the installation done in the car. Many suggestions have been put forward to help keep the cost to a bare minimum. Until this becomes a "government thing", the cost would have to be taken care of by the manufacturer or the customer. But one thing is for sure: this technology will prove to be one of the biggest breakthroughs of the decade and holds a lot of potential for the coming future.
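To make the "each car is a node broadcasting its state every 1/10th of a second" idea concrete, here is a minimal, purely illustrative sketch in Python. The message fields, the JSON encoding and the UDP broadcast transport are assumptions chosen for readability; the real DSRC stack uses its own radio layer and message formats.

```python
# Illustrative sketch of a DSRC-style beacon: broadcast the vehicle's state
# ten times a second so nearby cars (nodes on the same network) can react.
# Field names, JSON encoding and UDP transport are assumptions, not the real standard.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 9000)  # assumed local broadcast address/port


def read_vehicle_state():
    # Placeholder values: a real on-board unit would read the GPS receiver and CAN bus.
    return {
        "vehicle_id": "CAR-42",
        "lat": 29.8649,
        "lon": 77.8965,
        "speed_kmh": 54.0,
        "heading_deg": 180.0,
    }


def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        packet = json.dumps(read_vehicle_state()).encode("utf-8")
        sock.sendto(packet, BROADCAST_ADDR)
        time.sleep(0.1)  # the article's 1/10th-of-a-second update interval


if __name__ == "__main__":
    main()
```

A receiving car would listen on the same port, fuse neighbours' positions with its own GPS fix, and raise the slow-down or lane-change advisories described above.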
Secret Projects of Google
Since its inception in a garage, Google has continued to launch amazing new products and services over the years. Some of them, such as Gmail, have redefined their areas of service, whereas some, like Google Buzz, have failed to live up to their hype. We bring you a collection of the zanier Google products released over the years, annually on a special day.
Google MentalPlex™
In 2000, Google unveiled a prototype search engine that could work on sheer mental power, eliminating the need to type out search terms or use other input devices. The instructions included:
- Remove hat and glasses.
- Peer into MentalPlex circle. DO NOT MOVE YOUR HEAD.
- Project mental image of what you want to find.
- Click or visualize clicking within the MentalPlex circle.
However amazing it sounds, MentalPlex™ was not such a success, owing to various technical flaws such as: "Brain waves received in analog. Please re-think in digital." and "Interference detected. Remove aluminum foil and remote control devices.”
Google PigeonRank™
By the year 2002, Google had grown into a search giant with maximal market domination. It came as quite a surprise when it revealed its trademarked page ranking system, PigeonRank™. PigeonRank™ was based on B. F. Skinner's work on training pigeons to solve complex computational problems, and Google had developed functional pigeon clusters (PCs) for its page ranking system. However, a lot of companies tried to cheat by uploading images of bread crumbs or parrots in resplendent plumage. Page and Brin had experimented with a lot of avian motivators before settling on a combination of linseed and flax. The pigeons were free to alternate between page ranking and visits to specially designed break rooms to chew on lin/ax kernels and seeds. The Google heads believe there is much promise in PCs and are moving forward to train them all over the world, in the hope of some day accomplishing a pigeon cloud computing platform.
Google Gulp™
In 2005, Google released the apex of its search engine enhancement add-ons, the Google Gulp™ with Auto-Drink™. The Gulp™ acted on your neurons, enriching them for more efficient Google searches. The active ingredient depended on the flavor:
- Beta Carroty – beta carotene
- Glutamate Grape – glutamic acid
- Sugar-free Radical – free radicals
- Sero-tonic Water – serotonins
It acted by recognizing your unique DNA and modifying your neurotransmitters for optimum results in increasing your cognitive abilities and hence enhancing your search potential. To get access to a Google Gulp™, you had to present a Gulp™ bottle cap at any Google-authorized Google Gulp™ store.
Google Romance™
Move over, OKCupid and other dating sites: Google Romance™ is here. According to Google, dating sites are just a specialized form of search, and now Google is here to assert its dominance. Google Romance™ is a place where you can post all types of romantic information and, using its Soulmate Search™, get back search results that could, in theory, include the love of your life. Google will then send you both on a Contextual Date™, paid for by Google, while delivering relevant ads that Google and its advertising partners think will help produce the dating results you're looking for. With Google Romance™, you can:
- Upload your profile – tell the world who you are, or, more to the point, who you'd like to think you are, or, even more to the point, who you want others to think you are.
- Search for love in all (or at least a statistically significant majority of) the right places with Soulmate Search™, Google's eerily effective psychographic matchmaking software.
- Endure, via Google's Contextual Date™ option, thematically appropriate multimedia advertising throughout the entirety of your free date.
Google Romance™ ensures quality results and relevant advertisements for all. For legal and moral reasons, the Google Romance™ service was forced to discontinue (by law).
Google TiSP™
At the beginning of 2007, Google shocked the world with its revolutionary TiSP™ technology, promising ultra-fast wireless broadband internet for free for home use. TiSP™, short for Toilet Internet Service Provider, worked on the principle of dropping a fibre-optic cable into your commode to get a direct connection to fast internet. Google had set up TiSP™ Access Nodes at major sewage stations and deployed Plumbing Hardware Dispatchers (PhDs) to take care of the incoming fibre-optic cables. The Google TiSP™ self-installation kit included a 20 m fibre-optic cable, a wi-fi router and even gloves and soap. Sadly, Google TiSP™ encountered a major snag, things went very, very wrong from there, and it is best not to talk about it.
GMail Motion™
2011 brought a revolutionary change to GMail and GDocs, allowing complete control through the webcam and Google's patented spatial tracking technology. Motion™ brings with it movement-controlled features such as reply, delete, back, forward and all the basic tasks. It even includes various motion gestures for framing text replies to an e-mail. Google documented Motion™ to have increased e-mail productivity by up to 12% and the general white-collar job holder's health by up to 200%. GMail Motion™ is still in closed beta, but it will soon open to the general public and bring loads of advantages.
Google Virgle™
Google, in association with Virgin, has finally breached the final frontier. Google has announced Google Virgle™, a plan to inhabit Mars by 2014. Applications are openly invited from anyone with special abilities, such as exceptional talent in the field of physics, medicine, engineering or Guitar Hero. After applying, an automated certifying system would tell you whether your application was up to the standards of a startup civilization; if so, you would be asked to post a video response to the Google Virgle™ video. Google Virgle™ is still in the beta stage, and the applicants chosen for creating the New World will be notified soon.
DISCLAIMER: None of the above products are real. Their concepts were created for the sole purpose of eliciting humor, and no harm was intended to either Google or its Products™.
Google Wallet is an Android app that makes your phone your wallet. It stores virtual versions of your existing plastic cards on your phone. Simply tap your phone to pay and redeem offers using near field communication, or NFC. Google Wallet has been designed for an open commerce ecosystem. It will eventually hold many if not all of the cards you keep in your leather wallet today. And because Google Wallet is a mobile app, it will be able to do more than a regular wallet ever could, like storing thousands of payment cards and Google Offers but without the bulk. Eventually
your loyalty cards, gift cards, receipts, boarding passes, tickets and even your keys will be seamlessly synced to your Google Wallet, and every offer and loyalty point will be redeemed automatically with a single tap via NFC. Payment via the Google Wallet app is quite secure: it stores encrypted user information on a computer chip called the Secure Element. The Android device itself can be locked with a personal identification number (PIN), and the Google Wallet app requires an additional PIN to activate the antenna of the NFC chip. The device must touch or be in close proximity to a MasterCard PayPass reader. Once the transaction is completed, the antenna is turned off; additional transactions require the PIN to be entered again. Google demonstrated the app at a press conference on May 26, 2011 and released it on September 19, 2011. But while Google Wallet may have launched, don't expect it to kick your actual wallet out of your pocket any time soon. This is just the beginning.
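The tap-to-pay flow described above, where an app-level PIN gates the NFC antenna and the antenna is switched off after each transaction, can be summarised in a short sketch. This is an illustrative state machine only; the class and method names are invented for the example and are not the real Google Wallet or Android API.

```python
# Illustrative sketch of the PIN-gated NFC flow described above.
# WalletSession and its methods are invented names, not a real API.
class WalletSession:
    def __init__(self, wallet_pin: str):
        self._wallet_pin = wallet_pin
        self._antenna_on = False

    def unlock(self, entered_pin: str) -> bool:
        # The app PIN (separate from the device lock PIN) activates the NFC antenna.
        self._antenna_on = entered_pin == self._wallet_pin
        return self._antenna_on

    def tap_to_pay(self, reader_in_range: bool, amount: float) -> str:
        if not self._antenna_on:
            return "DECLINED: enter wallet PIN first"
        if not reader_in_range:
            return "DECLINED: no PayPass reader in range"
        # Card credentials would be read from the Secure Element here, never from app storage.
        self._antenna_on = False  # antenna switched off after every transaction
        return f"APPROVED: {amount:.2f} charged"


session = WalletSession(wallet_pin="4321")
session.unlock("4321")
print(session.tap_to_pay(reader_in_range=True, amount=12.50))  # APPROVED
print(session.tap_to_pay(reader_in_range=True, amount=5.00))   # DECLINED: PIN needed again
```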
Measuring Beauty
Helen of Troy (from the Iliad) is widely known as "the face that launched a thousand ships". Thus, 1 millihelen is the amount of beauty needed to launch a single ship. The Catalogue of Ships from Book II of the Iliad, which describes in detail the commanders who came to fight for Helen and the ships they brought with them, lists a total of 1,186 ships which came to fight the Trojan War. As such, Helen herself has a beauty rating of 1.186 Helens, capable of launching more than a thousand ships. Negative values have also been observed; these, of course, are measured by the number of ships sunk or the number of clocks stopped. David Goines has written a humorous article describing various Helen-units, with a chart of the fire-lighting and ship-launching capability for different powers of "Helens". For example, a picohelen (ph, 10^-12 helens) indicates the amount of beauty that can "Barbecue a couple of Steaks & Toss an Inner Tube Into the Pool". Thomas Fink, in The Man's Book, defines beauty both in terms of ships launched and in terms of the number of women that one woman will, on average, be more beautiful than. One Helen (H) is the quantity of beauty needed to be more beautiful than 50 million women, the number of women estimated to have been alive in the 12th century BC. Ten Helena (Ha) is the beauty sufficient for one oarsman (of which 50 are on a ship) to risk his life, or to be the most beautiful of a thousand women. Beauty is logarithmic on a base of 2: for beauty to increase by 1 Ha, a woman must be the most beautiful of twice as many women. One Helen is 25.6 Ha. The most beautiful woman who ever lived would score 34.2 Ha, or 1.34 H; the pick of a dozen women would be 3.6 Ha, or 0.14 H.
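Fink's logarithmic definition makes the numbers in the last few sentences easy to verify. A quick sketch in Python (the 50-million figure is the article's own estimate):

```python
import math

WOMEN_ALIVE_12TH_C_BC = 50_000_000  # the article's estimate for the 12th century BC


def helena(women_surpassed: int) -> float:
    """Beauty in Helena (Ha): base-2 log of the number of women one is more beautiful than."""
    return math.log2(women_surpassed)


HA_PER_HELEN = helena(WOMEN_ALIVE_12TH_C_BC)       # ~25.6 Ha in one Helen

print(round(HA_PER_HELEN, 1))                      # 25.6
print(round(helena(1_000), 1))                     # ~10 Ha: most beautiful of a thousand women
pick_of_a_dozen = helena(12)
print(round(pick_of_a_dozen, 1),                   # 3.6 Ha
      round(pick_of_a_dozen / HA_PER_HELEN, 2))    # 0.14 H
```

The 34.2 Ha quoted for the most beautiful woman who ever lived corresponds to being the most beautiful of roughly 2^34.2, about 20 billion, women, and 34.2 / 25.6 gives the 1.34 H figure.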
BREAKING THE THIRD WALL
Computers, tablets, mobiles, iPods and whatever else may come in the near or far-off future, television is definitely here to stay. You can imagine 2050 without IITs, but definitely not without TVs. Is anything new going to come out of the idiot box? Let's delve deeper into the development of display technologies to find an answer.
Display screen technology had a very humble beginning with the Cathode Ray Tube, popularly known as the CRT, a narrow-necked vacuum tube fitted with an electron gun that emits electrons, which in turn are deflected and accelerated to produce images on a screen. Tried, true, dependable and economical, this technology ruled for decades till it was dethroned by LCDs. LCD technology sparked off a contest to reduce power consumption. LCD screens are made up of a huge number of segments filled with liquid crystals and arrayed in front of a light source (backlight) to produce images in color or monochrome. Doesn't that sound geeky? Well, LCDs have been paired with LED backlights too, which results in a thinner panel, lower power consumption, better heat dissipation, and a brighter display with better contrast levels. The display tech revolution caught pace once again, spearheaded by plasma screens, which cleverly minimized the flicker problem through the use of ionized gases.
But the future of television lies in adding another dimension to the viewing experience. Projecting 3D images was conceived long ago by writers like Jules Verne and popularized by George Lucas in Star Wars. Holography, or lensless photography, is the key to capturing 3D
images and storing them on a 2D surface, all without the use of a lens. The trick behind this is wave interference. The object is illuminated by a beam of coherent light (from a laser), a portion of which is reflected by a mirror or prism and directed toward the photographic plate; this beam is called the reference beam. The two beams interfere and produce an intricate pattern of dark and light structures on the plate that bears no apparent relationship to the original object. When the hologram is viewed in coherent light, however, the recorded object becomes visible!
Beyond being a milestone for the television viewing experience, holography is also expected to find major applications in medical and military displays. As you may have guessed, it would also revolutionize the gaming market, as 3D imagery could be rendered to display the elements of games from first-person shooters to auto racing. Today's digital displays are far from ideal: they are heavy, somewhat fragile, and they lack the resolution, flexibility and portability of paper. Researchers at HP Labs are developing electronic displays that could one day match paper, and possibly even replace it. Portable display surfaces that can be instantly updated, from foldable computer screens to wallpapers: all of this would have seemed enchanting 100 years ago. What started as a bulky invention by J. L. Baird that could display mere pictures has today magnified our reach within this world.
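For the curious, the wave-interference recording described above can be written compactly. This is the textbook idealisation, assuming perfectly coherent, monochromatic light, and is not specific to any product mentioned here:

```latex
% Intensity recorded on the plate when the object wave O and the reference wave R overlap:
I \;=\; |O + R|^{2} \;=\; |O|^{2} + |R|^{2} + O R^{*} + O^{*} R
```

The two cross terms store the phase of the object wave, which an ordinary photograph throws away. Re-illuminating the developed plate with the reference beam R gives a transmitted field proportional to R times I, which contains a term |R|^2 O, a scaled copy of the original object wave, and hence the full 3D image.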
Unique Identification Card
PROJECT NAME AADHAAR: a name that has gained hype in recent times, it aims at giving a "Unique Identity" (UID) to every Indian through a 12-digit code, a gargantuan task considering our humongous population. The information stored will include personal details, biometrics, contact details, financial details and medical history.
The key components of the UID system include the UID server, the biometric sub-system, the enrolment client application, the network, the security design and the administrative system. The UID server and biometric sub-system are expected to take care of enrolment and authentication services and storage. The enrolment client application will analyse and check the biometric data; it also needs to work offline for regions with no internet access, with the data uploaded to the main server later. The network is a critical aspect of the system as well, since all UID enrolment and authentication services will be available online.
The UID number that will be allotted to citizens has a very interesting design. The first 11 digits are totally random, generated by a non-repeating algorithm, while the 12th digit is logically derived. The actual ID is the first 11 digits; the 12th digit is a check digit which will help catch any typo. The generation of the 12th digit is based on the Verhoeff checksum scheme, used to prevent data-entry errors. These errors range from the most common ones, like single-digit and phonetic errors, to rarer ones like transpositions.
A major victory for this project would be a high TAR (true acceptance rate) for the biometric scanning; TAR is a measure of the correctness of the scanner. To achieve this, the database needs very high quality images of the biometrics, and technical groups have been assigned to collect fingerprints and analyse their quality to help eliminate the causes of possible errors. The vast size of the data poses two interesting problems. The first is storing all the data and keeping it available for editing at all times. The second is creating a UID number that is easy enough to memorize and yet random enough to prevent guessing.
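The Verhoeff scheme mentioned above is a published dihedral-group checksum that catches all single-digit errors and adjacent transpositions. Below is a standard, minimal implementation in Python; the 11-digit value used in the demo is made up purely for illustration and is not a real UID.

```python
# Standard Verhoeff check-digit tables: D is the dihedral-group D5 multiplication
# table, P the position-dependent permutation, INV the inverse map.
D = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 2, 3, 4, 0, 6, 7, 8, 9, 5],
    [2, 3, 4, 0, 1, 7, 8, 9, 5, 6],
    [3, 4, 0, 1, 2, 8, 9, 5, 6, 7],
    [4, 0, 1, 2, 3, 9, 5, 6, 7, 8],
    [5, 9, 8, 7, 6, 0, 4, 3, 2, 1],
    [6, 5, 9, 8, 7, 1, 0, 4, 3, 2],
    [7, 6, 5, 9, 8, 2, 1, 0, 4, 3],
    [8, 7, 6, 5, 9, 3, 2, 1, 0, 4],
    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0],
]
P = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 5, 7, 6, 2, 8, 3, 0, 9, 4],
    [5, 8, 0, 3, 7, 9, 6, 1, 4, 2],
    [8, 9, 1, 6, 0, 4, 3, 5, 2, 7],
    [9, 4, 5, 3, 1, 2, 6, 8, 7, 0],
    [4, 2, 8, 6, 5, 7, 3, 9, 0, 1],
    [2, 7, 9, 3, 8, 0, 6, 4, 1, 5],
    [7, 0, 4, 6, 9, 1, 3, 2, 5, 8],
]
INV = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]


def verhoeff_check_digit(number: str) -> int:
    """Compute the check digit to append to a string of digits."""
    c = 0
    for i, digit in enumerate(reversed(number)):
        c = D[c][P[(i + 1) % 8][int(digit)]]
    return INV[c]


def verhoeff_validate(number_with_check: str) -> bool:
    """Return True if the trailing digit is a valid Verhoeff check digit."""
    c = 0
    for i, digit in enumerate(reversed(number_with_check)):
        c = D[c][P[i % 8][int(digit)]]
    return c == 0


base = "12345678901"                      # made-up 11-digit example, not a real UID
full = base + str(verhoeff_check_digit(base))
print(full, verhoeff_validate(full))      # the completed 12-digit number validates as True
```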
AADHAAR will be home to all the credentials of individuals, including bank account numbers, asset records, and so on. This information needs to be protected from physical as well as cyber threats which, if they materialize, would effectively mean the person's identity is stolen. To avoid this scenario, the UIDAI has charted out a robust security design that will secure all the technology components from logical or physical attack. First comes server security, which includes a firewall along with intrusion prevention and detection systems (IPS, IDS). On top of this, there will be network and client security, which includes encryption of information transmitted over the network or stored in the database.
The government plans to give UIDs to 50% of citizens by 2014. This means data for 600 million citizens must be stored and be ready for access at all times. Along with this, the system will have to compare the information given by a person with that of the other 599,999,999 to check whether it is actually unique, a major challenge. To tackle this, the government will join hands with private IT firms. The Unique Identification project is undoubtedly among the biggest e-governance projects that the Government of India has taken up. The whole AADHAAR project rests on a lot of assumptions on which its success depends. It is only a matter of time before we see whether it turns out to be the biggest success of our government or a disastrous failure.
What began as a rebellion is now a philosophy, a culture... Cooking recipes have been shared since the beginning of human culture, a perfect example of the Open Source concept, which is, by and large, the free sharing of found and created content. The term was baptised in 1998, as a reaction to Netscape's release of the Navigator source code. But the roots of Open Source date back to the 1960s, when users received the source code of their operating systems and other programs for free and had full freedom to customize, correct and modify it. The scenario changed completely when IBM started to charge separately for software and services and ceased to supply the source code. This ignited the spark of the Open Source revolution, spearheaded by Richard Stallman.
Open Source became a development methodology in which source code is developed and debugged not by one company or even one group of individuals but rather by a fragmented workforce striving for a common purpose. Stallman started off with the GNU project, which later amalgamated with the Linux kernel, giving us a powerful alternative to proprietary OSs. Proprietary software doesn't allow or encourage visibility into core processes, which makes it difficult to integrate with other technologies. Vendors, not users, maintain control over the application, in spite of any customisation done by users. Open source hands control to the customers, providing a new level of IT freedom: multiple technology platforms to choose from, numerous support options, a collaborative development community, and no dependence on software versions.
Open-source development can produce business-quality code. Competing with open source is a bit like fighting an invisible swordsman. In the case of Apache, for instance, the code is maintained by a not-for-profit organization. In addition, the software is available for free, which eliminates price as a competitive weapon; the pricing tricks used by Microsoft to attack Netscape are less effective against an already free solution. The philosophy of Open Source gave birth to Ubuntu, now the third most widely used operating system, besides powerful applications like MediaWiki and Firefox and programming languages like PHP and Python. There is more to open source than meets the eye: open source is development, sharing, cooperation and learning all rolled into one. As the variants of the open-source model proliferate, more companies will be forced to adapt to this faceless and distributed competitive force. Ubuntu epitomises this philosophy in its meaning: “I am what I am because of who we all are."
Ubuntu Cola is a soft drink certified by The Fairtrade Foundation. Made with Fairtrade sugar from Malawi and Zambia, Ubuntu Cola is the first UK cola to be Fairtrade certified. It is available for sale in the United Kingdom, Sweden, Norway, Finland, Ireland, The Netherlands, Belgium, France, Italy, Switzerland and online.
Operating System
Comparison of Open Source v/s Proprietary Software
An operating system forms the very core of the computer. It interacts with the user and the hardware to make the use of computers feasible for the general masses. There are three major desktop OS families: Microsoft Windows, Linux and Mac OS X.
Microsoft Windows – The current market leader in the field of OS. Even though it came out a year later than the first Mac OS, it still dominates the market. Its GUI is very easy to use for people new to computers. However, it is laden with bugs and tends to crash a lot. Due to its high market share, technical support is very easily available for it, and a major chunk of software is aimed at the Windows platform. Market dominance also attracts negative attention in the form of malicious viruses and other malware native to MS Windows. Security is indeed a matter of concern on Windows computers, and hence secure computers usually run Linux or custom-built OSs. Most computer vendors sell their computers with a Windows license.
Linux – Linux is not really an operating system; it is just a kernel on which an operating system is built. Various organizations compile such packages, called 'distributions', for general public use. Linux isn't as widely used as Windows; in fact, it has a very small desktop user base. It is usually preferred by more experienced users despite its harder-to-use interface. Linux has almost no known viruses or malware and is the most secure of the three, since automatic script execution is impossible and everything is user-controlled. It is the preferred environment for developers (except for MS/Apple-specific product development). Linux is completely free and open source.
Apple Mac OS X – The operating system supplied by Apple on their computers. It has the major drawback of (legally) running only on Apple-supplied computers. OS X has a very easy to use and pretty interface and supports a large host of applications for general use. Mac OS X is also secure and has a minimal number of viruses or malware. It is a feature-rich OS designed for common use.
In conclusion, no OS can be said to be the best. Each person will have their own preference based on their needs and experience with computers.
Media Players
Many companies have ventured into media players. Most of these are proprietary, and a few are open source. The comparison of media player apps is done on the basis of format support, player features, launch time and CPU/memory utilization. VLC, an open source media player by the VideoLAN organization, supports the largest number of video and audio formats; in this department, a majority of the proprietary players, such as Windows Media Player and RealPlayer, are lacking. VLC also has the shortest launch time of all the players and uses the least CPU while playing a 1080p movie. Even VLC's RAM usage was equal to, if not less than, that of its major competitors. Judging from this, VLC is definitely the best media player available to us. However, its music-playing features aren't really up to the mark, as it does not fully support intelligent playlist creation or organized music libraries. At best, VLC is a great audio/video player, but it isn't a real music player. As can be seen, open source media players have trumped the reigning proprietary counterparts and taken their seat on the throne.
WEB BROWSERS
Change is inevitable and rapid in the world of computers. As soon as it seems that one browser has an unassailable hold on the browser market, the other browsers modernize themselves so as not to fall behind in the race to become number one. Now, one might think: if all the browsers do the same thing, where does the difference lie?
Start-up time: For single-tab start-up time, Safari leads all the browsers, whereas Firefox stands third. For multiple-tab start-up, Chrome is the winner, finishing in about 2 seconds, whereas IE takes just over 3 seconds and Safari rounds out the line-up far behind the rest at nearly nine seconds.
Memory efficiency: In single-tab memory usage, Firefox loses to Microsoft's IE, but when subjected to heavy loads, Firefox is clearly the champion, leaving IE way behind. In terms of releasing memory back to the OS, Chrome is the fastest, followed by IE, whereas our memory-usage leader Firefox does not return memory to the OS right away.
Reliability: Chrome and IE8+ have process isolation for each tab, which prevents a complete application crash. Firefox runs plug-ins like Flash in an isolated process for stability and is less vulnerable to crashes during regular browsing, but a single stuck tab can still bring down the whole application.
The open source browsers, Chrome and Firefox, are far better than IE or Safari in terms of speed, reliability and compatibility, as can be seen from the results. However, this is platform dependent: Safari is still the best browser for use on OS X, but the same is not true for IE on Windows.
It's a website that embraces the open source philosophy and carries a collection of over 36,000 books on which the copyright has expired and which are thus in the public domain, ensuring that anyone can use that content for any purpose whatsoever.
ABDUL KALAM’S TAKE ON OPEN SOURCE: The concept of Free Software, wherein knowledge is created by the community for the community, without being driven by commercial interests, must be extended to research to solve problems in health care, agriculture, energy and safe drinking water, the former President A.P.J. Abdul Kalam said. Mr. Kalam implored scientists, researchers and academics to embrace the 'open source philosophy' in their respective fields, and work towards building 'open source networks' that can help pool talent, research and know-how from around the world. Such a platform can help evolve scientific solutions to problems, particularly those relevant to developing countries.
OPEN SOURCE IN HEALTH SECTOR: A valuable open source movement initiated in the Indian health care sector is the Open Source Drug Discovery (OSDD), a consortium led by the Council of Scientific and Industrial Research. In the wake of the failure of market forces in this sector, the OSDD is exploring new models of drug discovery and looking at innovative patent regimes. OSDD is currently working on optimizing a patented molecule as a drug. It plans to use this patent to ensure the drugs are affordable in the market, by ensuring non-exclusive licensing. This is an innovative way of using patents to the benefit of poor patients.
Heralded by the IITR coder junta as the ultimate form of internship, with perks ranging from bragging rights to a ticket to the hallowed halls of Silicon Valley, Geek Gazette delves deeper into Google Summer of Code (GSoC), a global programme that provides a golden opportunity for aspiring programmers all over the world to kick-start their careers. The event is the brainchild of Google's founders Sergey Brin and Larry Page and draws its name from the iconic rebellion of the '60s, the 1967 "Summer of Love". Here's what GSoC qualifier Vivek Prakash (CSI, 4th year), who humbly describes himself as "Just Another Geek", has to say:
GG: Enlighten us more on the Open Source community.
VP: The Open Source community is a boon for today's software industry. It's a union of programmers from all over the world who strongly believe in and preach the Open Source philosophy. The users of open source software (VLC, Eclipse, OpenOffice) have the freedom and the right to change the code of the program without paying any royalty, empowering them to manoeuvre the code in their own way, manipulate it and thus innovate.
GG: When did you get to know of GSoC and how did you plan to get into it?
VP: In my 2nd year, but there were no plans for it at that time. My motive was to contribute to the open source community. GSoC is just a stepping stone from which there are links to the further stages of open source development. GSoC gives you recognition and a platform to reach out for support, besides a nice paycheck of course. You get write access to the source code of a very large repository.
GG: How do we become a member?
VP: There are no hard and fast initiation rules or protocols in the open source community. The moment you start interacting with other members of an open source community, brainstorming on new ideas, getting their mails and reading their blogs, you become a member of that community. You have to pick one out of a thousand open source communities: databases, operating systems, programming languages, media players; take up what suits your interests and start interacting with its code developers. There is a mailing list and an Internet Relay Chat channel for every open source community, and detailed documentation of each community is available in the form of wiki pages. Gather information from these sources, seek out the field you want to pursue and contact the developers. When you apply to an organisation that is part of GSoC, a mentor gives you a list of projects out of which you have to choose. The mentor then conducts a small interview in which he asks you questions related to your project, and after his approval, the participating organisation conducts the final interview.
GG: Before GSoC, were you involved in any open source project?
VP: I first started participating in open source community projects in my 2nd year. I initially worked for a year on Boost C++, which is the official library for the development of the next generation of C++, i.e. C++0x. In November 2010, I joined the open source community for MINIX (an operating system which existed before Linux), which for a long time was limited to use as a teaching tool in many US universities. Its goal is to become a fully fledged, reliable OS targeted at low-cost laptops and embedded systems like cameras and cell phones, and to remain a tool for education.
GG: Any tips or suggestions for students in general in this regard?
VP: We definitely have wide scope for improving the culture of programming on our campus. People should start participating in open source projects as early as possible. Also, one's programming level gets enhanced by being in regular contact with the developers.
The next time you are handed a business card with an unusual square drawing on it, or spot such an image in an advertisement in a newspaper or magazine, do not assume it is part of the company logo. It is actually a QR (Quick Response) code! And as the card owner may tell you, if you just point your phone camera at the picture, your phone can be directed to the company's website. Invented in Japan by the company Denso Wave in 1994, QR codes were originally used for conventional bar code applications like vehicle tracking, but they have become extremely popular with the advent of high-resolution cameras in mobile phones. They became an ISO standard in 2000 and offer several levels of error correction, position and alignment patterns, variable size (and hence variable data capacity), and other settings. QR codes can be tiny, in which case they are called micro QR codes, or they can be quite large, and they sometimes have designer art or a logo embedded into them.

Much More than Bar Codes: Just like bar codes, QR codes store digital data: your company website, mailing address, product details and so on. Bar codes can only store a small amount of information, a few dozen characters at most. But being 2D, QR codes can store a lot more, up to a few kilobytes of data under the current ISO standard. QR codes pack in a large amount of information by making more efficient use of display space, apart from offering the convenience of being read and stored automatically.

The Power of Error Correction: Error correction in a QR code matters because you may take a picture of a QR code on a mutilated piece of paper or under poor light. With error correction built in (which basically adds redundancy to the code), it is possible to recover most of the data even if some of it is read incorrectly.

Potential Uses: If promoted properly, QR codes have the potential to be very useful and popular. Say you are driving by and suddenly spot a real estate project coming up in the city. There are contact details you would like to note down but cannot, because you are driving. If the board carries a QR code, you can quickly point your phone at it and store the information, to be read when you have time.

Given the rate at which they are gaining popularity, it is not difficult to foresee a future in which mobile phones, QR code tags, RFID (radio-frequency identification) tags and GPS coordinates all interact with each other constantly, delivering relevant and important information to users irrespective of their background and geographical location. No wonder QR codes are being described as the hyperlinks of the physical world.
DO IT YOURSELF: Try scanning the QR code given above.
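For the DIY-minded reader, generating a QR code of your own is only a few lines of code. The sketch below is a minimal example, assuming the third-party Python package qrcode (installed with Pillow support, e.g. via "pip install qrcode[pil]") is available; the URL is just a placeholder.

    import qrcode  # third-party package, not part of the standard library

    # Build a QR code with the highest error-correction level (H),
    # which lets roughly 30% of the symbol be damaged and still decode.
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H)
    qr.add_data("http://www.example.com")  # placeholder data
    qr.make(fit=True)                      # pick the smallest symbol size that fits

    img = qr.make_image()                  # a Pillow image object
    img.save("geek_gazette_qr.png")

Point any QR scanner app at the saved image and it should take you straight to the encoded URL, damaged corners and all.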
2012: Waiting for the end
Where will you be on December 22, 2012? Do you seriously consider the possibility of the complete obliteration of the human race, and of the Earth as well? Sceptics or believers alike, we have all come across such hearsay. Most of these conspiracy theories are seemingly backed by complex scientific explanations; debunking them are NASA's scientists. So here is why you should forgo the gossip about the DOOMSDAY.
1. Solar Flares
Solar experts across the world monitoring the sun have made a startling discovery: our sun is acting up. Fresh solar storms have been bombarding the Earth with high-energy radiation, knocking out power grids and destroying satellites. Calculations suggest it will reach its deadly peak sometime around December 21st, 2012.
NASA: Solar activity has a regular cycle, with peaks approximately every 11 years. Near these activity peaks, solar flares can cause some interruption of satellite communications. But there is no special risk associated with 2012. The next solar maximum will occur in the 2012-2014 time frame and is predicted to be an average solar cycle, no different from previous cycles throughout history.
2. Pole Shift
The magnetic poles we call north and south have a nasty habit of swapping places every 750,000 years or so, and by that reckoning we are about 30,000 years overdue. Scientists have spotted the poles drifting by approximately 20-30 km each year, much faster than ever before, which supposedly points toward a pole shift being right around the corner. While the pole shift is underway, the magnetic field is disrupted and will eventually vanish, sometimes for up to 100 years. The result is enough UV outdoors to crisp your skin in seconds, killing everything it touches.
NASA: A reversal in the rotation of Earth is impossible. Many of the disaster websites pull a bait-and-switch to fool people. They claim a relationship between the rotation and the magnetic polarity of Earth, which does change irregularly, with a magnetic reversal taking place every 400,000 years on average. Such a magnetic reversal does not cause any harm to life on Earth, and one is very unlikely to happen in the next few millennia anyway.
3. Supervolcano Threats
Yellowstone National Park in the US is famed for its thermal springs and the Old Faithful geyser. The reason is simple: it is sitting on top of the world's biggest volcano, and geologists are beginning to break into a cold sweat. The Yellowstone volcano has a pattern of erupting every 650,000 years or so, and we are many years overdue for an explosion that could fill the atmosphere with ash, blocking the sun and plunging the Earth into a chilly winter that could last 15,000 years. The pressure under Yellowstone is said to be building steadily, and doomsayers have set 2012 as a likely date for the massive bang.
4. Planet X
Nibiru Fly-By Scenario: According to ancient Sumerian texts, the Earth ("Tiamat") was struck by a large planet, "Nibiru", which moved it into its present orbit and created the Moon and the asteroid belt. If the calculations are correct, then Nibiru (sometimes called Planet X) will be on a collision course with the Earth on December 21, 2012.
NASA: The Nibiru theory and other stories about wayward planets are an Internet hoax. If Nibiru or Planet X were real
and headed for an encounter with the Earth in 2012, astronomers would have been tracking it for at least the past decade, and it would be visible by now to the naked eye.
5. Mayan Calendar Declares End of the World on December 21st, 2012
The first to mark December 21st, 2012 as the end of the planet were the Mayans, a civilization remarkably good at building highly accurate astronomical instruments out of stone. Thousands of years ago they managed to calculate the length of the lunar month as 29.53020 days, only 34 seconds off. Given that they were so close to the mark with the lunar cycle, the argument goes, they probably got the end of the planet right as well.
NASA: Just as the calendar on your kitchen wall does not cease to exist after December 31, the Mayan calendar does not cease to exist on December 21, 2012. This date is simply the end of a Mayan long-count period; just as your calendar begins again on January 1, another long-count period begins for the Mayan calendar.
So if you thought it was going to be a slugfest to get onto the elite VIP list of people saved by the US rescue team, you can rest easy. Hard luck for those of us who had joyfully renounced all plans of appearing in the exams. Rest assured, 3rd yearites, you will finally feel the elation of the campus drive; Facebook and Google are waiting for you! Don't fret, 4th yearites, a memorable convocation beckons you in 2012. At long last you'll be a 'BTech from IIT'. And all you people with deep hidden stashes in Swiss banks, don't go about trying to buy the world's treasures, because there isn't any end in sight for the world, though your money surely has!
Solid State Drives
Is the fight for data storage supremacy over? Will HDDs hold their own in the byte-eat-byte world of data storage, or will SSDs dethrone our trusty spinning platters?
An SSD is a data storage device that uses solid state memory to store persistent data, with the aim of providing access in the same way as traditional HDD storage. SSDs differ from HDDs in that they use non-volatile memory chips instead of the electromechanical spinning disks and read/write heads found in HDDs. While many people are only now becoming aware of SSDs, their origin dates back to the 1950s: two technologies, card capacitor read-only store (CCROS) and magnetic core memory, served as the foundation for these storage drives. Today most of the electronics around us are built from semiconductors and chips, and in an SSD too the primary storage medium is semiconductor memory. Some of us might believe that this type of storage already exists in the market in the form of the ubiquitous flash drive that plugs into a USB port. That is partly true: SSDs and USB flash drives use the same type of non-volatile memory chips, which retain their information even without power. The difference lies in how they attach to the computer. A USB flash drive acts as an external device, whereas an SSD is designed to reside inside the computer, replacing the traditional HDD. Performance, however, is where SSDs reign supreme. The difference is barely noticeable for regular users, but for power users the gap in access time for games and bulk data transfers is huge.

SSDs are in general faster than conventional HDDs. Where an HDD needs about 5-10 ms for random data access, an SSD can do it in only 0.1 ms. This is because in an HDD the read head has to move to the exact spot on the spinning platter, whereas data in an SSD is accessed directly. For the same reason, latency in fetching data is higher for HDDs, and defragmentation is a non-issue for SSDs. In terms of read/write speed, SSDs are far superior, delivering around 250/200-250 MBps read/write while HDDs crawl along at about 100/60-70 MBps. Besides the performance gains, SSDs are also much lighter than HDDs and consume only about half the power.

With all their blazing speeds and flashy specs, SSDs are still quite far from mainstream due to their extremely high cost and somewhat unreliable nature compared to HDDs. Per gigabyte, an SSD will set you back around $1.2-$2, whereas an HDD can cost as little as $0.05-$0.10. The maximum number of write cycles in an SSD is far lower than in an HDD, and once burnt out, the SSD has to be replaced. SSD-based laptops are slowly entering the mainstream with the advent of the MacBook Air, the Vaio Z and the Samsung Series 9. Benchmarks put the boot times of these laptops at under 15 seconds, much lower than the usual 50-second boot times of HDD-based laptops. Thanks to the small form factor, laptops incorporating an SSD are usually very thin: the MacBook Air is 0.68 inches thick, the Samsung Series 9 is 0.62-0.64 inches and the Vaio Z is 0.65 inches, while the thinnest HDD-based laptop, the XPS Z-series, is 0.97 inches. SSDs do not drain the battery as much as HDDs either; an improvement of 10-15% shows up in benchmark results. We hope for a day when SSDs will make HDDs obsolete.
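The random-access gap described above is easy to see for yourself. The sketch below is a rough, hypothetical micro-benchmark that times random 4 KB reads from a scratch file on whichever drive holds it; the file name and sizes are placeholders, and operating-system caching means the numbers are only indicative.

    import os
    import random
    import time

    PATH = "testfile.bin"          # placeholder: put this on the drive you want to test
    SIZE = 256 * 1024 * 1024       # 256 MB scratch file
    BLOCK = 4096                   # 4 KB per read
    N = 200                        # number of random reads to time

    # Create the scratch file once, 1 MB at a time.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            for _ in range(SIZE // (1024 * 1024)):
                f.write(os.urandom(1024 * 1024))

    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for _ in range(N):
            f.seek(random.randrange(0, SIZE - BLOCK))  # jump to a random offset
            f.read(BLOCK)                              # force a small read there
        elapsed = time.perf_counter() - start

    print("average random read: %.3f ms" % (elapsed / N * 1000))

Run it once against a file on an HDD and once against a file on an SSD and the millisecond-versus-microsecond difference in seek behaviour shows up immediately.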
UNIVERSAL SERIAL BRAIN
Do you face trouble learning those guitar chords? Sick of mugging up those trigonometric formulae? Keep forgetting those spellings? Or simply tired of the rote nature of learning?

Wouldn't it be nice if your brain worked like a flash drive? So that you could learn everything from driving a car, to playing the guitar, to quantum electrodynamics, to Shakespeare's plays in a matter of seconds, just by downloading it to your head? Or better still, jack your brain into the internet and download the whole of Amazon.com? Doesn't it evoke the scene from The Matrix where the protagonist learns to fly a chopper by downloading the instructions through a cell phone?

In the past year, researchers have developed technology that makes it possible to use thoughts to operate a computer, manoeuvre a wheelchair or even use Twitter, all without lifting a finger. The question is, if we can tap brain signals to run electronic devices, can we use electric pulses to feed information into the brain? The complexity of our brain makes the learning process really slow. For example, when a child first learns the alphabet, some unique neural interconnections are made in his brain, and they depend on how he felt while learning, his level of curiosity, and even the level of noise and light around him at the time. (All of you reading this article have a different set of neural networks dedicated to deciphering the text on this page.) The child then has to practise writing the alphabet over and over. Why? Because it is only then that the neurons encoding the alphabet in his brain grow bigger, their tails (the axons) become bulkier and get covered in a thick sheath of fat, and the information becomes permanent. That is how incredibly slow our learning process is; so slow that we spend most of our time learning what others have already done, invented or discovered, leaving very little scope for us to come up with something of our own.

The Blue Brain Project, headed by Henry Markram, director of neuroscience and technology at the Swiss Federal Institute of Technology in Lausanne, is an attempt that began in 2005 to use supercomputer-based simulations to reverse-engineer the brain at the molecular and cellular levels and unravel the underlying function of neural processes. The prime objective of this research is to understand how a bunch of photons striking your retina becomes your favourite movie, or how sound waves striking your eardrums become your favourite song. It will involve mapping the areas of the brain that get excited while learning a specific word or skill, then targeting those areas with the tiniest electrodes possible to excite them once again. It is like deciphering the brain's storage pattern first and then creating new information by exciting that pattern at various points.

Sounds cool; well, so does the theory of relativity. The concept is in its infancy today. Its inception traces back to a U.S. Department of Defence project that aimed to fly an F-15 with a rat's brain. By placing electrodes into a solution containing cells from the cerebellum (the part of the brain that maintains balance) of a rat, and then connecting it to a flight simulator, researchers were able to test nature's way of balancing (note: no gyroscopes or accelerometers involved). As part of preliminary research at the University of Pittsburgh, nerve fibres from the biceps of a monkey were coupled with nanometre-scale conductors to relay signals from the brain to a microcontroller, which amplified them and sent them to a robotic arm. To the researchers' amazement, the monkey was able to use the robotic arm to grab a banana. A more striking observation was that it could feel any object the arm was touching, proof that the communication channel ran both ways.

Dumbstruck? Heady? Feeling nauseous? We have more. Advancements in solid state physics and state-of-the-art production methods let us pack a greater number of transistors onto a single chip every day, and the number keeps growing. Hopefully, someday we might be able to pack as much information as our brain holds into the same amount of space, create an artificial brain, and mount it on our head like an external hard disk.

Till such good times arrive, I would suggest you start working on that maths tut, because there is no way you are going to download Numerical Methods to your brain right now.
INVISIBLE WEB
Even Google cannot find me
Invisible web? No, we are not talking about the latest spidey invention. The invisible web, a term coined by Mike Bergman, is that part of the WWW (World Wide Web) that has not been indexed by popular search engines like Google, Yahoo and Bing. Simply put, there are pages and websites which exist on the internet but do not show up on search engines! Yes, Google-nuts out there, you got it right: Google simply CANNOT 'see' the invisible web.

How many of us have revered Google as the great Oracle, the all-seeing eye which scoops up every byte of information on the web? The truth, however, is in striking contrast with the common perception. Popular search engines can only search a fraction of the data available on the internet, termed the "searchable web" or "surface web". Here are the cold, hard facts. Google indexes about 8 billion pages, while the surface web comprises about 250 billion pages, which means Google indexes only about 3 percent of it. Find your faith in the Google-God wavering? Well then, brace for impact! The 'Invisible Web' or 'Deep Web', which is veiled from the eye, is estimated to be about 500 times larger than the surface web and is an ever expanding repository of information. As per Wikipedia estimates, the surface web consists of 167 terabytes, which pales in comparison to the invisible web's humongous 91,000 terabytes. The web is like an ocean of information: search engines touch only its surface, while a multitude of information lies hidden in the chasms, unruffled and untapped.

What cloaks the Invisible Web? The power of divination through which search engines find pages is not so divine after all. They merely use robot 'spiders' which crawl the web, indexing information and jumping from one hyperlink to the next, thereby covering various web pages. Although these crawlers are able to index a large number of relevant pages, there are places that are simply not accessible to search-engine spiders. Think of a webpage that is not linked to from any other page on the web, i.e. a page with no backlinks or inlinks; traditional crawling cannot index such pages, making them invisible. There are many private webpages that require registration or a password for data retrieval; search spiders can reach such doors but cannot enter, as they do not have the required key. Also, some webpage creators do not want their pages crawled, so they add 'meta-tags' that tell crawlers to avoid the page. And there are technical hurdles these crawlers cannot leap: scripted content, pages served over the Gopher and FTP protocols (Google crawls over HTTP), dynamic pages returned in response to queries or accessed through forms, and contextual
web pages are some of these. In a nutshell, there remain huge chunks of information that these spiders cannot wrap their software-y arms around (despite having eight of them), and thus the term Invisible Web, or Cloaked Web, lingers.
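To see why some pages stay invisible, here is a minimal, hypothetical crawler sketch using only Python's standard library: it can only ever discover pages reachable by hyperlinks from its seed URL, and it skips anything the site's robots.txt disallows, which are exactly the two blind spots described above. The seed URL is a placeholder.

    from html.parser import HTMLParser
    from urllib import robotparser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, limit=20):
        rp = robotparser.RobotFileParser()
        rp.set_url(urljoin(seed, "/robots.txt"))
        rp.read()                                 # fetch the site's crawler rules

        seen, queue = set(), [seed]
        while queue and len(seen) < limit:
            url = queue.pop(0)
            if url in seen or not rp.can_fetch("*", url):
                continue                          # excluded pages stay 'invisible' to us
            seen.add(url)
            try:
                html = urlopen(url).read().decode("utf-8", errors="ignore")
            except Exception:
                continue
            parser = LinkExtractor()
            parser.feed(html)
            # Only pages linked from somewhere already visited ever get queued:
            queue.extend(urljoin(url, link) for link in parser.links)
        return seen

    if __name__ == "__main__":
        print(crawl("http://www.example.com/"))   # placeholder seed URL

A page with no inbound links, or one sitting behind a login form, never enters the queue, which is the whole story of the invisible web in miniature.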
Goldmine of info or just useless junk? The Invisible Web is not only gigantic in size; it also surpasses the surface web in the quality of its content. According to expert claims, it contains some 550 billion individual documents catering to informational, research and market demands, with more focused, higher-quality content than the surface web. Moreover, the information in the Invisible Web is largely free from commercial motives: non-profit organisations and research entities, which do not enjoy the same levels of advertisement as commercial ventures, are often sidelined in traditional search. Much of the most original and authoritative information remains cloaked in the Deep Web. The Invisible Web is an untapped gold mine of information, and the methods to unveil it unfold in the very next lines.
SEEK AND YE WILL FIND
Is it really possible to delve deep into this ocean? Can we expand our horizons beyond the traditional search engines? Can we 'SEE' what Google cannot? Yes, we can. There are ways of peeping into the darkest corners of the web. The best way to extract data from the web professionally is to make liberal use of directories and databases: a directory contains a large collection of links that enables browsing by subject area, and its manually evaluated and annotated data ensures quality over quantity. Databases should be used in a complementary manner with search engines. Also, when you are looking for dynamically changing information, it is good to use search engines that can extract data from the invisible web. There are search engines which can trawl the spaces of the deep web and fish out valuable data. Some of the useful ones are listed below:
· DeepPeep: Aims to extract data from databases by querying forms for information. Auto, Airfare, Biology, Book, Hotel, Job and Rental are the basic domains it covers.
· Scirus: Strictly for science-oriented results; it indexes over 450 million science-related pages and has been successful in indexing a large number of journals, scientists' homepages, patents, scholarly reports and articles.
· CompletePlanet: Calls itself "the front door to the deep web". It gives you access to over 70,000 databases, searchable under various categories and updated frequently.
· Gigablast: An up-and-coming search engine that indexes over 200 billion pages (Google indexes only about 8 billion). It can also index non-HTML files such as Excel files, Word files and PDF documents.
· The WWW Virtual Library: One of the oldest catalogues on the web, listing a lot of information under various categories.
· Infomine: Comprises a pool of libraries in the United States; the University of California, the University of Detroit, Wake Forest University and California State University are some of the prominent ones. Searchers can search by the category they are interested in.
· IncyWincy: A meta-search engine which takes the results of other search engines and filters them to produce its own. It searches the web, directories, forms and images.
· LexiBot: Makes multiple queries and effectively searches the invisible web.
· DeepWebTech: Offers five search engines covering business, medicine and science. It searches the underlying databases of the invisible web for data.
· Searchability: Enlists various subject-specific search engines.
Hoping that these open new libraries of information to you!
From Papyrus to Kindle

"Books are the carriers of civilization. Without books, history is silent, literature dumb, science crippled, thought and speculation at a standstill."

Imagine a world without any form of printed material. Tough, isn't it? Starting with the clay tablets of Sumer and ending with modern-day text editing programs, writing has been the most important form of communication used to spread ideas, concepts and theories: in short, the human culture. The success of the human race stems largely from our ability to store gathered information and pass it on to the next generation. No wonder books are your best friends. As a tribute, Geek Gazette brings you the interwoven history of the evolution of books and printing methods.

Conception: The earliest attempts at preserving text were markings on clay and stone tablets. Nearly everything that could be written upon, be it stone, clay, tree bark or metal sheets, was used for writing. The Egyptians would often write on papyrus, a plant grown along the Nile.

Scrolls: Papyrus sheets were glued together to form a scroll. Tree bark such as liber (from which also comes the word library) and other materials were also used. The Phoenicians brought writing and papyrus to Greece around the 10th or 9th century BC. The Greek words for papyrus as a writing material (biblion) and book (biblos) come from the Phoenician port town Byblos, through which papyrus was exported to Greece. Scrolls were the dominant form of book in the Hellenistic, Roman, Chinese and Hebrew cultures.

Wax Tablets and the Codex: Wax tablets were the normal writing material in schools, in accounting and for taking notes. They had the advantage of being reusable: the wax could be melted and reformed into a blank. The codex format that followed was more economical, as both sides of the writing material could be used, and it was portable, searchable and easy to conceal. Christian authors may also have wanted to distinguish their writings from the pagan texts written on scrolls.

Woodblock Printing and Manuscripts: Woodblock printing appeared long before movable type. It was efficient for printing pictures, playing cards and currency, but each page of a book required a different woodblock to be crafted. Before the invention and adoption of the printing press, almost all books were copied by hand, which made them expensive and comparatively rare. The first books used parchment or vellum (calf skin) for the pages, with covers made of wood and bound in leather.

Paper Books and Movable Type: Around 1450, in what is commonly regarded as an independent invention, Johannes Gutenberg introduced movable type in Europe, along with innovations in casting the type based on a matrix and hand mould. This invention gradually made books less expensive to produce and more widely available.

Dawn of the Age of Information: The Modern World
· 1819: Rotary printing press invented by Napier.
· 1829: Embossed printing invented by Louis Braille.
· 1841: Type-composing machine invented.
· 1844: Electrotyping invented.
· 1846: Cylinder press invented by Richard Hoe; it can print 8,000 sheets an hour.
· 1863: Rotary web-fed letterpress invented by William Bullock.
· 1865: Web offset press can print on both sides of the paper at once.
· 1870: Paper is now mass-manufactured from wood pulp.
· 1878: Photogravure printing invented by Karl Klic.
· 1886: Linotype composing machine invented by Ottmar Mergenthaler.
· 1890: Mimeograph machine introduced.
· 1891: Printing presses can now print and fold 90,000 4-page papers an hour. Diazotype invented (printing photographs on fabric).
· 1892: 4-color rotary press invented.
· 1904: Offset lithography becomes common. The first comic book is published.

Electronic Paper and Kindle: The concept of the "book" has remained practically unchanged for hundreds of years since the advent of manuscripts. Until now, that is, because books have made the next step in their evolution. Owing to advancements in electronics, e-paper is the new papyrus. The Amazon Kindle, an e-book reader, uses an E Ink electronic paper display that shows up to 16 shades of gray, minimizes power use and simulates reading on paper. E-paper is a display technology designed to mimic the appearance of ordinary ink on paper. Unlike conventional backlit flat-panel displays, electronic paper displays reflect light like ordinary paper. Electronic paper is more comfortable to read than conventional displays because of its stable image, which does not need to be refreshed constantly, its wider viewing angle, and the fact that it reflects ambient light rather than emitting its own. An ideal e-paper display can be read in direct sunlight without the image appearing to fade.

The evolution of books goes in step with the advancement of science and technology. Hope you enjoyed this journey through time.
Cloud Computing
"Cloud computing is really a no-brainer for any start-up because it allows you to test your business plan very quickly for little money. Every start-up, or even a division within a company that has an idea for something new, should be figuring out how to use cloud computing in its plan."
Computational technology has travelled a long way, from the abacus to the high-speed computation in today's gadgets. Initially, centralized computing conquered the computing world; then came distributed computing, which gave us personal computers. After recent advancements in the field of cloud computing, the focus has now shifted back to centralized computing. Cloud computing is computing via the cloud, i.e. the internet: it basically means delivering services such as software and storage over the internet. These services are analogous to electricity in a power grid, where end users need no knowledge of how the electricity is generated. Just as electricity is generated once and then distributed, in cloud computing an application is stored once in the cloud's data centres and can be accessed by a vast number of users at the same time, drastically reducing the storage required on a local computer.

The major reason for the fast advancement of cloud computing is its ability to shift the workload from local computers to the cloud's network of computers, so that local machines no longer have to run heavy applications. The cloud server, which comprises a network of computers, handles the load of the different applications. The hardware and software demands on the local user's side decrease significantly, which radically lowers a company's infrastructure cost and in turn helps it expand. The only thing the local user's computer needs is to be able to run the interface software of the cloud computing system, which can be as simple as a web browser; the cloud's network of computers takes care of the rest.

The cloud computing system has five layers: client, application, platform, infrastructure and server. The client consists of the hardware and software that access cloud services. Cloud application services, or "Software as a Service" (SaaS), are the applications provided through cloud computing. The cloud platform is the foundation for running cloud applications and storing their data; these platforms run in data centres owned by the service provider. Cloud infrastructure services, or "Infrastructure as a Service" (IaaS), deliver computer infrastructure (a platform virtualization environment) as a service, along with storage space and networking. The server layer comprises the hardware and software products specifically designed for the delivery of cloud services. Once an internet connection is established, it is possible to share information between any two layers of the cloud computing system.
(Figure: Architecture of Cloud Computing)
Cloud computing has different deployment models: public, private, community and hybrid.
Public cloud: The service provider makes resources such as applications and storage available to the general public. Public cloud services may be free or offered on a pay-per-use basis.
Private cloud: Cloud services can be accessed only by the members of a particular organization. This model fails to realize many of the benefits of cloud computing, since the cost of the infrastructure is spread over fewer users than in a public cloud, making it an expensive option.
Community cloud: Services are shared between several organizations from a specific community.
Hybrid cloud: A composition of at least one private cloud and at least one public cloud, typically offered as a partnership between a public and a private cloud provider. The cloud entities remain distinct but are bound together, offering the benefits of multiple deployment models.

One of the simplest applications of cloud computing is email: you log in to your account remotely, and the software and storage required for it live on the service provider's computer cloud, not on your computer. Some recent SaaS offerings include Ubuntu One, Google Apps and Google's cloud OS. Ubuntu One is a cloud storage service for securely storing, syncing, sharing and streaming multimedia files, documents and more, at any time and from any device. Various cloud operating systems have also been released. The first was Chromium OS by Google, released as an open source project on July 7, 2009; netbooks running this OS were launched in the market a few months back. The OS is essentially the Chrome browser incorporating a file manager and a music player, with numerous cloud applications made by Google, such as Google Docs and Google Cloud Print. Another path-breaking cloud application developed by Google is Google App Engine, which gives users a platform to develop and host web applications in Google-managed data centres. Other innovative SaaS applications from various cloud service providers include Aviary (photo editing, like Photoshop), Grooveshark (online music streaming, like iTunes), Google Docs (document editor, like MS Word), Mint (a finance tool to manage your money), CloudMe (an online desktop) and Panda Cloud Antivirus.

Even though cloud computing reduces the load on local machines, it still raises some concerns, one of them being the level of security provided to those visiting various sites. It has been revealed recently that cloud servers are less capable of providing the level of protection many expect from websites today; Google is just one example of a cloud provider that has experienced security issues in recent years. Every new invention brings with it some positive and negative aspects. It is for the users to decide how much of it to embrace.
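As a concrete illustration of "storage as a service", the sketch below uses only Python's standard library to push a small document to a cloud storage endpoint over HTTP and read it back from anywhere. The endpoint URL and the token are purely hypothetical placeholders, not any particular provider's API; a real service would document its own URL and authentication scheme.

    import json
    from urllib.request import Request, urlopen

    # Hypothetical endpoint and credentials for illustration only.
    ENDPOINT = "https://storage.example.com/v1/notes/todo"
    TOKEN = "my-secret-token"

    def put_note(text):
        """Store a note in the cloud: the data lives on the provider's servers."""
        body = json.dumps({"text": text}).encode("utf-8")
        req = Request(ENDPOINT, data=body, method="PUT")
        req.add_header("Content-Type", "application/json")
        req.add_header("Authorization", "Bearer " + TOKEN)
        return urlopen(req).status

    def get_note():
        """Fetch the same note later, from any device with an internet connection."""
        req = Request(ENDPOINT)
        req.add_header("Authorization", "Bearer " + TOKEN)
        return json.loads(urlopen(req).read().decode("utf-8"))["text"]

    if __name__ == "__main__":
        put_note("Buy milk, finish maths tut")
        print(get_note())

The client side is nothing more than a thin HTTP wrapper, which is the whole point: the heavy lifting of storage, replication and availability happens in the provider's data centre.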
Until it turns out to be highly destructive, go and enjoy the "cloud". But be careful! You may lose your data the next time it rains...!
X_WORD
Across
1. Breakthrough webpage ranking system
4. Desi MIT courseware
7. It's not UNIX
10. 2D Bar Code
13. Java based robot
14. X marks the planet
16. Rapid data storage mechanism
17. His car ran on ether

Down
2. The most popular cloud-office
3. Projection of 3D image
5. "Encourage the creation and distribution of e-books"
6. Creepy crawlies of the web
8. Prototype neural reverse engineering
9. Automotive navigation system
11. Unique 12-digit code
12. IITR efficycle team
15. Humanity towards others
Answers
Across: 1. PIGEONRANK 4. NPTEL 7. GNU 10. QUICKRESPONSE 13. ANDROID 14. NIBIRU 16. FLASH 17. TESLA
Down: 2. GOOGLEDOCS 3. HOLOGRAPHY 5. GUTENBERG 6. SPIDER 8. BLUEBRAIN 9. GPS 11. AADHAR 12. SEIGER 15. UBUNTU