Open-Source Northeast
GUIDE TO SUCCESS IN TODAY'S EMERGING KNOWLEDGE SOCIETY
Electronic Edition | Volume 1, Issue 1 | 15th August 2007

Modern society is now defined as the 'Information Society', a society in which low-cost information and ICT are in general use, or as the 'Knowledge(-based) Society', to stress the fact that the most valuable asset is investment in intangible, human and social capital and that the key factors are knowledge and creativity...

More Inside – how you can organise an 'Open Access' workshop – participate in a locally organised GNU/Linux workshop – services offered by Prag Foundation

Editor's Note Book
Feature – Towards Knowledge Society – Open Access Overview – Open CourseWare
Tutorial – Using the Internet
Insight – Linux and the GNU Project
Policy Forum – On Corporate Agreement and Partnership with Public Bodies
Distro Watch – Choosing a Linux Distribution
Career – Linux Skill Certification
Know How – Essential Components of a Desktop Computer – Six Steps to Adopting Open Source Software in Your Organisation
FAQ – Why Use Open Source Technology
Software Review & Product Info – OpenOffice
Book Review | Classified Resources

Download a free copy or join our list at http://www.myopensource.in
About Open-Source Northeast
Bridging the divide, e-Powering the people
EDITORIAL BOARD
Rahul Mahanta, Editor

MEMBERS
Ferdous Ahmed Barbhuiya
Kuntal Bordoloi
Sanjay Dutta
Sanjeev Sarma

ADVISORY PANEL
Partha Gogoi, Senior Consultant, TMA Resources, Vienna, VA
Jayanta B Sarma, Consultant, National Health Service, England
Ankur Bora, Lead Member of the Technical Staff, Speech Lab, AT&T Labs, Inc., Austin, Texas
Dipendra Gogoi, WebEx Communication India Pvt. Ltd (part of Cisco), Bangalore
Anjan Sarma, Director, Bhabani Offset & Imaging Systems Pvt. Ltd., Guwahati

SUBSCRIPTION
ELECTRONIC: Free download or join our mailing list at www.myopensource.in
PRINT: INDIVIDUAL: Rs. 49 (per issue), Rs. 179 (annual); INSTITUTIONAL: Rs. 99 (per issue), Rs. 358 (annual)

ADVERTISING RATES
Please contact contact@pragfoundation.net
Open-Source Northeast is a quarterly Information Technology magazine published by Prag Foundation, aimed at academics, scholars, entrepreneurs and businesses, and also of interest to the general public. The magazine focuses on the benefits of Free and Open Source Software (FOSS) in building Information and Communication Technology (ICT) capability for development and success in today's emerging Knowledge Society.
Access to knowledge in all its forms is possibly the single most important factor in determining the success or failure of civil society. While traditional media remain essential, new digital technologies hold potential for enhancing civic life that is still untapped in our region. It is essential, therefore, to promote ICT as a means of improving access to knowledge and information.
The immediate objectives of the magazine are to raise ICT awareness, promote digital inclusion and literacy, and help readers acquire the skills necessary to use ICT productively and efficiently at an affordable cost. The journal publishes articles, tutorials, career information and the latest news on FOSS for personal computing as well as for institutions, offices and businesses. It aims, in essence, to provide the technological 'know-how' essential for success in today's emerging Information/Knowledge Society. The overall aim of Prag Foundation is to improve opportunities for the community using a simple strategy: "invest in the people, stimulate creativity, confidence and self-expression using knowledge and information as a medium". 'Open-Source Northeast' is an endeavour towards this. We invite readers to subscribe to the journal and to persuade their institutions to take out an institutional subscription. If you are interested in writing for the magazine, please send your article for publication, subject to our editorial review. In addition, please keep writing to us regularly with your comments and suggestions. Mail your comments, feedback and suggestions to contact@pragfoundation.net; they are very much appreciated.
PUBLISHED QUARTERLY BY ARUP KUMAR MISRA FOR
PRAG FOUNDATION IN ASSOCIATION WITH ALLIANCE FOR COMMUNITY CAPACITY BUILDING IN NORTHEAST INDIA
FRONTPAGE DESIGNED BY WEBX, GUWAHATI
Advertise your services and business in Open-Source Northeast. With nominal rates, both print and electronic distribution, and a targeted readership, you are guaranteed to reach your niche market and target audience. Moreover, your sponsorship will go a long way towards bridging the digital divide and e-powering the people. Please help by associating yourself with this effort.

OPEN-SOURCE NORTHEAST IS TYPESET IN SCRIBUS, A FREE AND OPEN SOURCE DTP APPLICATION

OPEN-SOURCE NORTHEAST IS NOT FOR PROFIT
Editor's Note Book
INITIAL CONDITIONS
Some Issues in ICT Development in Assam
Essentially, ICT are the enabling technologies (both hardware and software) necessary for the delivery of voice/audio, data (high-speed and low-speed), video, fax and Internet services from Point A to Point B (or possibly to multiple points B, C, etc.) using wired and wireless media and associated equipment connected via Internet Protocol (IP) and non-IP networks, where any or all of the communicating points may be fixed or mobile during the communication process. Once such a quaternary scheme of services, media, network and mobility is fully understood, then what is to be made available, to whom (i.e. who is to access the services and where are they geographically located?) and for what (i.e. for business, for pleasure and/or for broader developmental goals) becomes easier to manage.

ICT for Development and its categories

To my mind, discussion of ICT for development can be broken into three broad categories: (i) Primary - having to do with availability, access and affordability; (ii) Secondary - having to do with our development goals; (iii) Tertiary - having to do with appropriate technology, technological obsolescence, and sustainability. Naturally, they are all related, but how those relations work needs to be clarified in order to enhance our ability to focus and eventually to choose our priorities right.
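Returning to the quaternary scheme above, the toy sketch below (not from the original note; the class and example names are purely illustrative) shows how an ICT offering can be classified along the four axes of service, medium, network and mobility:

from dataclasses import dataclass

@dataclass
class ICTOffering:
    service: str   # "voice", "data", "video", "fax" or "internet"
    medium: str    # "wired" (copper, fiber) or "wireless" (terrestrial, extraterrestrial)
    network: str   # "IP" or "non-IP"
    mobility: str  # whether the communicating points are "fixed" or "mobile"

# Two illustrative classifications under the scheme:
village_telephony = ICTOffering(service="voice", medium="wireless", network="non-IP", mobility="mobile")
cyber_cafe = ICTOffering(service="internet", medium="wired", network="IP", mobility="fixed")
print(village_telephony)
print(cyber_cafe)

Classifying offerings this way makes the planning questions concrete: what is to be made available, to whom, and for what can then be compared option by option.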
Primary issues - availability, accessibility and affordability

You can only access what is available, and only then does it make sense to ask what the price of access is. Thus availability, access and affordability are naturally the first considerations here. With a mix of visionary government ICT policy (including initial seed funding and government subsidies) and profit-motivated private-sector competition, the issue of affordability eventually becomes less important once broad availability and universal access are (almost) assured. There are some final points to be made here. First, it is essential to note that ICT is not equivalent to telephony or the use of computers alone, and that increasing telephone density and giving out computers does not amount to development per se. Secondly, to significantly enhance availability and access - and bring down cost - it is essential that a stable, distributed power supply (rather than power generation concentrated at a few points) be obtained in the state as a matter of urgency. Thirdly, a fiber-optic backbone - with the SAT3/SAFE network as the entry to the rest of the global information network - must be firmly established through public-private partnership in Assam, also as a matter of urgency. While there is evidence of ongoing efforts in this fiber-optic direction, it is essential that these efforts be better coordinated and not left entirely to the vagaries of purely economic considerations.
Finally, a good ICT policy must be one with an APPROPRIATE mix of wired (copper, fiber) and wireless (terrestrial, extraterrestrial) solutions, coupled with local content and concentration of information. A progressive policy would be one that aids indigenous innovation, reduces latency problems (i.e. delays in sending/receiving information via satellite - a 44,460-mile round trip when the destination is just 1,000 miles away on the ground via wire; a rough calculation follows below) and reduces capital flight (due to payments for use of satellites, foreign-based Points of Presence, etc.).

Secondary Issues – regional development goals

With the alphabet soup of the UN's MDGs (Millennium Development Goals), UNESCO's EFA (Education for All), our Sarva Shiksha Abhiyan and National Rural Health Mission - all goals geared eventually towards improving the human development index of our country - and in the face of finite resources (both financial and personnel), there is an URGENT need to prioritize among the various goals themselves with respect to the impact of ICT. With such priorities laid out, we should then look for:

i) low-hanging fruits of ICT, that is, where a little expenditure outlay can lead to the greatest good for the greatest number of people; and ii) mid-term to long-term goals.
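As a back-of-the-envelope check on the latency point above (a rough sketch, assuming signals travel at about the speed of light, 186,000 miles per second, and ignoring switching and queuing delays):

round trip via geostationary satellite ≈ 44,460 mi ÷ 186,000 mi/s ≈ 0.24 s (about 240 ms)
round trip via 1,000 miles of terrestrial wire ≈ 2,000 mi ÷ 186,000 mi/s ≈ 0.011 s (about 11 ms)

In optical fiber, light travels at roughly two-thirds of its vacuum speed, which raises the terrestrial figure to perhaps 16 ms; either way, the satellite path is more than an order of magnitude slower before any other delays are counted.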
Personally, I find it convenient to view ICT development in terms of what it does to promote
(a) institutional efficiency – i.e. enhancing public sector (that is government) and private sector delivery of service; and increasing private-sector profit;
(b) human development of the individual citizen – improving access to food, shelter, clothing, water, health, education, employment, etc., of man, woman and child; young and old, literate and illiterate, urban and rural, etc.
The challenge then is to convince decision makers and the citizenry at large that ICT in general improves ordinary lives if properly deployed, and that it is not just an esoteric notion embarked upon to titillate "city dwellers" who merely wish to mimic Western apparatuses and systems rather than provide the more basic necessities of life. If I had my druthers, my own priorities for focused ICT deployment in the public sector - along with fully addressing the electric power problem - would be in the areas of Government Services, Education, Agriculture and Health.

Tertiary Issues – appropriate technology and sustainability

As a latecomer to the global ICT arena, in a field in which breathtaking advances are still being made daily, we in the developing world (of which India is firmly a part) always have to worry about the weighting and choices that we make among the various components in the whole array of deployed and/or deployable technologies. "Appropriate" technology should never mean inferior or backward technology, but rather technology that will successfully take most of us
fastest from the present to the future. Present financial cost may be important, but in a fast-moving technology, total lifecycle, support and social costs are also important.
Another aspect has to do with capacity development - manning the technology. All levels of our educational system - primary, secondary and tertiary - must be engaged in one way or another in ICT training, education and usage. On-the-job, for-the-job and train-the-trainer re-training of those who have passed beyond these formal educational systems must be embarked upon.
The final aspect of sustainability has to do with "local manufacturing": ICT technology costs money and affects affordability. Payments in foreign exchange for the importation of copper, fiber, VSATs (Very Small Aperture Terminals), switches, hubs, modems, phone handsets, TVs, etc. - and, importantly, software - are the order of the day. Hence thought should always be given, from the very beginning, to ensuring that any aspect of ICT geared towards mass deployment has a plan for early local assembly (at the very minimum) or local ground-up manufacturing (at best) of the deployed technologies, with specifications for local content in materials (hardware and software) and personnel.
With particular regard to software, a healthy mix of local commercial software and of non-proprietary Free and Open Source Software (FOSS) is absolutely essential. FOSS is more than a software development methodology. It not only increases access to, ownership of and control over ICT, but also provides a framework for the use and sharing of intellectual capital in a way that is applicable to many areas of development endeavour. FOSS can play an important role in the application of ICTs for achieving the Millennium Development Goals (MDGs). At the national level, FOSS aids the development of local capacity and industry, reduces imports, conserves foreign exchange, increases the security of the national ICT infrastructure (as distinct from application-level security), reduces copyright infringement and brings localised ICT tools that help develop local knowledge communities. FOSS provides many socio-economic benefits. The most commonly cited are fostering the ICT industry through increased competition, lowering ICT application cost and total cost of ownership, increasing access to powerful yet localised ICT applications, increasing the security of ICT applications and providing vendor independence.
Conclusion: It is almost trite to re-emphasize the importance of ICT to modern national development, and its criticality to developing countries if they are to leap-frog along their development vector. The real challenge is to strategically deploy ICT in a manner that simultaneously improves ordinary citizens' lives. This magazine is meant to serve as a resource for anyone in the process of formulating a FOSS policy. We will regularly survey the motivations of individuals, institutions and countries implementing FOSS, summarize the steps involved, list possible strategies and touch on cross-sectoral issues unique to FOSS.
Feature
Towards Knowledge Society
Modern society is now defined as the "Information Society", a society in which low-cost information and ICT are in general use, or as the "Knowledge(-based) Society", to stress the fact that the most valuable asset is investment in intangible, human and social capital and that the key factors are knowledge and creativity. This article is an edited extract from the introductory chapter of Towards Knowledge Society: UNESCO World Report, published by UNESCO in 2005, highlighting the prospects and challenges ahead for all societies.

Since ancient times, all societies have probably been, each in its own way, knowledge societies. However, today, as in the past, the control of knowledge can go hand in hand with serious inequality, exclusion and social conflict. Knowledge was long the exclusive domain of tight circles of wise men and the initiated few. In the Age of Enlightenment, the demand for democracy, the concept of openness and the gradual emergence of a public forum for knowledge fostered the spread of the ideas of universality, liberty and equality. The diffusion of knowledge through books and the printing press, as well as the extension of education for all through schools and universities, accompanied this historical development. But the ideal of a public knowledge forum has yet to be achieved. The current spread of new technologies and the emergence of the internet as a public network seem to be carving out fresh opportunities to widen this public knowledge forum. Might we now have the means to achieve equal and universal access to knowledge, and genuine sharing?

Which knowledge societies?

A knowledge society is a society that is nurtured by its diversity and its capacities. Every society has its own knowledge assets. It is therefore necessary to work towards connecting the forms of knowledge that societies already possess and the new forms of development, acquisition and spread of knowledge valued by the knowledge economy model.
The idea of the information society is based on technological breakthroughs. The concept of knowledge societies encompasses much broader social, ethical and political dimensions. There is a multitude of such dimensions, which rules out the idea of any single, ready-made model, for such a model would not take sufficient account of cultural and
linguistic diversity, vital if individuals are to feel at home in a changing world. In building real knowledge societies, the new prospects held out by the internet and multimedia tools must not cause us to lose interest in traditional knowledge sources such as the press, radio, television and, above all, the school. Most of the people in the world need books, school textbooks and teachers before computers and internet access. It is impossible to separate the issue of contents from that of languages and different forms of knowledge. What is at stake is the space we should make for local or indigenous forms of knowledge within knowledge societies whose development models highly value the codification forms specific to scientific knowledge. Fostering diversity also means nurturing the creativity of emerging knowledge societies, raising in each society an awareness of the wealth of the forms of knowledge and capacities it possesses, in order to increase their value and take advantage of what they have to offer.

A knowledge society must foster knowledge-sharing

As the Information Society advances it becomes more important to ensure that disadvantaged people are not left behind. Nobody should be excluded from knowledge societies, where knowledge is a public good, available to each and every individual. Young people are bound to play a major role because they are often among the first to use new technologies and to help establish them as familiar features of everyday life. But older people also have an important part to play. They possess the
experience required to offset the relative superficiality of 'real-time' communication and remind us that knowledge is but a road to wisdom. Moreover, the knowledge societies of the 'information age' differ from older knowledge societies because of their focus on human rights and their inclusive, participatory character.
The importance of basic rights translates into the particular emphasis on:

1. freedom of opinion and expression (Article 19 of the Universal Declaration of Human Rights) as well as freedom of information, media pluralism and academic freedom;

2. the right to education and its corollary, free basic education and progress towards free access to other levels of education (Article 26 of the Universal Declaration of Human Rights and Article 13 of the International Covenant on Economic, Social and Cultural Rights); and

3. the right 'freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits' (Article 27, paragraph 1, of the Universal Declaration of Human Rights).

New opportunities for development

The simultaneous growth of the internet, mobile telephony and digital technologies with the Third Industrial Revolution has revolutionized the role of knowledge in our societies. These technologies play an important role not only in economic development, but also in human development. For the developing countries, the promise of 'technological leapfrogging', of being able to skip the stages of industrial development by adopting the most advanced technologies directly and to capitalize on their tremendous potential, held out special appeal. In emerging knowledge societies, there is also a virtuous circle in which the progress of knowledge and technological innovation produces more knowledge in the long term.

Knowledge societies are not limited to the information society

The information society is valuable only as a means to achieve genuine knowledge societies. The growth of networks alone will not be able to lay the groundwork for the knowledge society. While information is a knowledge-generating tool, it is not knowledge itself. Emerging from the desire to exchange knowledge by making its transmission more efficient, information remains a fixed, stabilized form of knowledge, pegged to time and to the user. Thus, information is in many cases a commodity, in which case it is bought or sold, whereas knowledge, despite certain restrictions (defence secrets, intellectual property, traditional forms of esoteric knowledge, for example), belongs of right to any reasonable mind. Today, as we are witnessing the advent of a global information society where technology has increased the amount of information available and the speed of its transmission beyond all expectations, there is still a long way to go before we achieve genuine knowledge societies. As long as vast swathes of the global population lack equal opportunity in terms of access to education - in order to master the available information with critical judgement and thinking, and to analyse, sort and incorporate the items they consider most interesting in a knowledge base - information will never be anything but a mass of indistinct data. An excess of information is not necessarily the source of additional knowledge. In knowledge societies, everyone must develop cognitive and critical thinking skills to distinguish between 'useful' and 'useless' information. Useful knowledge is not simply knowledge that can be immediately turned into profit in a knowledge economy - 'humanist' and 'scientific' knowledge each obey different information-use strategies.

Knowledge societies: a new approach to development

The new value placed on 'human capital' suggests that traditional models of development, deemed necessary for the achievement of long-term growth (at the cost of very great inequalities and possibly a high degree of authoritarianism), are gradually giving way to models centred on mutual help and the role of the public services. Making the most of knowledge leads to imagining a new, collaborative development model based on the guarantee, by government, of 'public property', where growth is no longer viewed as an end in itself, but simply as a means to reach the target. By giving knowledge an unprecedented accessibility, and by engaging in capacity-building for everyone, the technological revolution might help to redefine the end goal of human development. For Amartya Sen, this is to be found in the search for elementary freedoms - so-called 'substantial', empirically verifiable freedoms - and not simply in the form of rights. They coincide with the elementary ability to gain access to the labour market, to education and health services (particularly for girls and women), and to goods, to take part in political decision-making, and to enjoy equal access to information and the right to collective security. Do those substantial freedoms not coincide with the hallmarks of knowledge societies, based on lifelong education for all and on the promotion of knowledge, all forms of knowledge, taken in their plurality, as values?

Network societies, as knowledge societies, necessarily foster a heightened awareness of global problems. Environmental damage, technological hazards, economic crises and poverty are some of the areas in which cooperation and collaboration may be expected to bring benefits. Knowledge is a potent tool in the fight against poverty, as long as that fight is not limited to building infrastructures, launching micro-projects (whose sustainability largely depends on outside financing on a case-by-case basis) or promoting institutional mechanisms, whose usefulness for the least developed countries might be called into question. Infrastructure and capacity-building are just as important, if not more. The successes achieved by a certain number of East and Southeast Asian countries in the fight against poverty are largely explained by the massive investments they made in education and research and development (R&D) over several decades.

Which context?

The limits of existing initiatives

The research that has been carried out so far, especially in the areas of education, scientific research and new technologies, is still massively dependent on a fragmented vision of existing interactions and a strong technological determinism. Interest in the short-term impact of the introduction of new technologies into education and learning might lead to neglecting a deeper study of the new contents of education, their quality and their formats. That development could become alarming at a time when education sometimes tends to give a high importance to the management of information at the expense of the development of analytical skills and critical judgement. Instead of offering developing countries a 'single model' of knowledge societies, it is worth recalling that the breakthroughs achieved in many nations are largely due to patient and concerted efforts in such areas as education at all levels, technological catch-up in strategic areas of scientific research and the implementation of effective innovation systems. The liberalization of trade has considerably altered the very nature of economic competition, which requires quick and deep changes in national higher education and scientific research policies. Sooner or later those changes will affect all education systems and the very definition of the goals of education at all levels. The steady pace of technological innovation requires periodical updating. High-speed internet access over power lines (and no longer only over telephone lines), interactive television on mobile phones and the sale of new software significantly cutting the cost of telephone calls are drastically shifting the terms of the debate not only on access to technologies but also on access to diversified contents. Meanwhile, the internet itself might split up into a host of first-, second- and third-class internets, not only because of competing efforts to control mechanisms of registering domain names but also because of changes stemming from the development of a second-generation internet, whose costs will be high, limiting the circle of users to the wealthiest institutions. A case in point is the setting up of national and subregional infrastructures accessible only to paying institutional players, associated with a network in a limited number of regions, of which the Abilene project is the most outstanding example.

Which challenges?

However, many experts think that the brisk growth of new technologies might help to overcome a number of constraints that until now have impeded the emergence of knowledge societies, such as geographical distance and the limits inherent to means of communication. Of course, networking helps to open up entire realms of knowledge, such as scientific and technological knowledge, that until now were often jealously guarded for strategic or military reasons. However, a number of old stumbling blocks still hinder access to knowledge and new ones have appeared. How could we accept that future knowledge societies will operate like so many exclusive clubs only for the happy few?

Towards a dissociated society?

Will knowledge societies be societies based on knowledge-sharing for all or on the partition of knowledge? In the information age, and at a time when the advent of knowledge societies is poised to become a reality, we are, paradoxically, seeing divides and exclusions emerge between North and South, and within each society. There is no denying that the number of internet users is growing at a very brisk pace, rising from over 3 per cent of the world population in 1995 to more than 11 per cent in 2003 - or over 600 million people. But it is liable to come up fairly quickly against the glass ceiling of financial viability. We are already living in a 'one-fifth society', in which 20 per cent of the world's population monopolizes 80 per cent of the planet's resources. The digital divide - or divides, so multifaceted are their forms - is a cause for alarm. There is a strong chance that the brisk growth in the number of internet users will slow down when the figure nears 20 per cent.

As we shall see, the digital divide helps widen an even more alarming divide - the knowledge divide, which adds up the cumulative effects of the various rifts observed in the main areas that make up knowledge (access to information, education, scientific research, and cultural and linguistic diversity) and is the real challenge facing the building of knowledge societies. This knowledge divide is rooted in the dynamics inherent to knowledge gaps, be they global inequalities in the distribution of cognitive potential (gaps between forms of knowledge) or the unequal value put on different types of knowledge in the knowledge economy (gaps within different kinds of knowledge). The knowledge divide is particularly glaring between the countries of the North and
those of the South, but it is also a problem within a given society, since it is highly unlikely that equal exposure to knowledge will result in equal mastery. Closing the digital divide will not suffice to close the knowledge divide, for access to useful, relevant knowledge is more than simply a matter of infrastructure - it depends on training, cognitive skills and regulatory frameworks geared towards access to contents. Information and communication technologies still require the development of new cognitive and legal instruments in order to realize their full potential.
The dangers of an excessive commoditization of knowledge

The economic and social promises that the information society seemed to hold out, including full employment, the 'new economy' and the competitiveness boom, have yielded to
concerns about the limits of the 'information age'. Some experts have noted that, far from confirming the hypothesis of 'dematerialization', our societies may on the contrary be in the midst of a process of 'hyper-industrialization', because knowledge itself has become 'commoditized' in the form of exchangeable and codifiable information. In an economy where the focus is scientific and technological knowledge, what role might certain forms of local and indigenous know-how and knowledge play? They are already often deemed less valuable than technological and scientific knowledge. Is there a chance they might simply vanish, even though they are a priceless heritage and a precious tool for sustainable development?
The current trend towards the privatization of higher education systems and their internationalization deserves attention from policy-makers. Knowledge is a common good. The issue of its commoditization should therefore be very seriously examined.

Overview of the chapters

On what foundations could a global knowledge society that would be a source of development for all and, in particular, for the least advanced countries, be built? This question is the focus of Chapter 1, From the information society to knowledge
societies, which emphasizes the need to consolidate two pillars of the global information society that are still too unevenly guaranteed: access to information for all and freedom of expression. Our time is the stage of transformations and upheavals so momentous that some people claim we are in the throes of a Third Industrial Revolution - that of new information and communication technologies - associated with a change in knowledge systems and patterns. We may be standing on the brink of a new digital knowledge age. Chapter 2, Network societies, knowledge and the new technologies, focuses on those developments and their corollaries. Chapter 3, Learning societies, shows how those changes have kept pace, in terms of teaching and education, with the shift in focus from the possessors of knowledge to those who are seeking to acquire it in the framework of not only formal education systems, but also of professional activity and informal education, where the press and electronic media play an important role. At a time when increasingly rapid changes are rocking old models to their very foundations, and 'learning by doing' and the ability to innovate are on the rise, our societies' cognitive dynamics have become a major issue. Chapter 4, Towards lifelong education for all?, examines the impact of these new dynamics on progress towards achieving the universally proclaimed right to education. Basic education for all is still an absolute priority. But adult education, which might seem irrelevant for countries that still have a long way to go to meet basic education needs, has nevertheless acquired decisive importance today because it appears to be an essential condition for development. Thus, lifelong education for all can be one response to the increasing instability of employment and
professions that most futurists foresee. Chapter 5, The future of higher education, also focuses on
education and training, and more particularly on the fundamental role that institutions of higher education, which are facing an unprecedented upheaval in traditional patterns of the production, spread and application of knowledge, play in knowledge societies. Chapter 6, A research revolution?, stresses the importance that should be given to science and technology. With the market's increased involvement in the field of scientific activities, knowledge societies should implement, at the crossroads of the scientific, economic and political sectors, research and innovation systems that promote sustainable development and benefit all, in the North as well as in the South. New knowledge-sharing models, such as the collaboratory, should be expanded in the future. Chapter 7, Sciences, the public and knowledge societies, throws into relief the public's role in the debate on the risks and benefits entailed in the use of new technologies and the fruits of scientific research, especially in the areas of biotechnologies and nanotechnologies. Chapter 8, Risks and human security in knowledge societies, studies the emergence of a 'risk society'. The access of a large number of players to knowledge resources is full of promise, but it can also cause irreparable damage and create unpredictable dangers. Is there a chance that knowledge societies might accentuate the current trend towards a homogenization of cultures? Chapter 9, Local and indigenous knowledge, linguistic diversity and knowledge societies, examines the paradox of describing the growth of knowledge societies at a time when languages are vanishing, traditions falling by the wayside and vulnerable cultures becoming marginalized or tumbling into oblivion throughout all the world's regions. When one speaks of knowledge societies, what kind of knowledge is meant?
There often is a creeping suspicion that such an expression mainly targets scientific and technological knowledge, which is mostly concentrated in the industrialized countries. But what about other forms of knowledge, in particular those labelled local or 'indigenous'? Knowledge societies must turn to dialogue, knowledge-sharing and the benefits of translation in order to help create shared areas that preserve and enhance everyone's diversity. Chapter 10, From access to participation: towards knowledge societies for all, underscores the importance of a new conception of knowledge that is no longer a factor of exclusion, as might have been the case in the past, but, on the contrary, promotes the full participation of all. However, there are many knowledge-related asymmetries on a global scale (the digital divide, the scientific divide, massive illiteracy in the Southern countries, the brain drain, etc.), whose accumulation is creating a knowledge divide. Without the promotion of a new ethics of knowledge based on sharing and cooperation, the most advanced countries' tendency to capitalize on their advance might lead to depriving the poorest nations of such cognitive assets as new medical and agronomical knowledge, and to creating an environment that impedes the growth of knowledge. It will therefore be necessary to find a balance between protecting intellectual property and promoting the public domain of knowledge: universal access to knowledge must remain the pillar that supports the transition to knowledge societies.

© UNESCO. Towards Knowledge Society: UNESCO World Report, published in 2005 (ISBN 92-3-204000-X), available at http://unesdoc.unesco.org/images/0014/001418/141843e.pdf
Feature
Open Access Overview: Focusing on open access to peer-reviewed research articles and their preprints
This introduction is an edited extract of 'Open Access' (OA) by Peter Suber. It doesn't cover every nuance but should cover enough territory to promote an understanding of OA. Last revised March 10, 2006 and accessed May 29, 2007, http://www.earlham.edu/~peters/fos/overview.htm

Open-access (OA) literature is digital, online, free of charge, and free of most copyright and licensing restrictions. OA removes price barriers (subscriptions, licensing fees, pay-per-view fees) and permission barriers (most copyright and licensing restrictions). The PLoS (http://www.plos.org/index.html) shorthand definition - "free availability and unrestricted use" - succinctly captures both elements. There is some flexibility about which permission barriers to remove. For example, some OA providers permit commercial re-use and some do not. Some permit derivative works and some do not. But all of the major public definitions of OA agree that merely removing price barriers, or limiting permissible uses to "fair use" ("fair dealing" in the UK), is not enough. The Budapest (http://www.soros.org/openaccess/), February 2002, Bethesda (http://www.earlham.edu/~peters/fos/bethesda.htm), June 2003, and Berlin (http://www.zim.mpg.de/openaccess-berlin/berlindeclaration.html), October 2003, definitions of "open access" are the most central and influential for the OA movement. Sometimes I refer to them collectively, or to their common ground, as the BBB definition (http://www.earlham.edu/~peters/fos/newsletter/09-02-04.htm#progress). For a work to be OA, the copyright holder must consent in advance to let users "copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship...." While removing price barriers without removing permission barriers is not enough for full OA under the BBB definition, there's no doubt that price barriers constitute the bulk of the problem for which OA is the solution. Removing price barriers alone will give most OA proponents most of what they want and need. In addition, OA should be immediate, rather than delayed, and should apply to full-text, not just to abstracts or summaries.

The legal basis of OA is either the consent of the copyright holder or the public domain, usually the former. One easy, effective, and increasingly common way for copyright holders to manifest their consent to OA is to use one of the Creative Commons (http://creativecommons.org/) licenses. Many other open-content licenses (http://pzwart.wdka.hro.nl/mdr/research/lliang/open_content_guide) will also work. Copyright holders could also compose their own licenses or permission statements and attach them to their works. Most authors choose to retain the right to block the distribution of mangled or misattributed copies. Some choose to block commercial reuse of the work. Essentially, these conditions block plagiarism, misrepresentation, and sometimes commercial reuse, and authorize all the uses required by legitimate scholarship, including those required by the technologies that facilitate online scholarly research.

Open-access (OA) in historical perspective: Scholarly journals do not pay authors for their articles, and have not done so since the first journals were launched in London and Paris in 1665. (See Jean-Claude Guédon, In Oldenburg's Long Shadow at http://www.arl.org/arl/proceedings/138/guedon.html) Journals took off because, for readers, they surpassed books for learning quickly about the recent work of others. For authors, journals surpassed books for sharing new work quickly with the
wider world and, above all, for establishing priority over other scientists working on the same problem. They gave authors the benefit of a fast, public time-stamp on their work. Because authors were rewarded in these strong, intangible ways, they accepted the fact that journals couldn't afford to pay them. Over time, journal revenue grew but authors continued in the tradition of writing articles for impact, not for money. OA was physically and economically impossible in the age of print, even if the copyright holder wanted it. Prices were even affordable until the 1970s, when they began to rise faster than inflation (http://www.arl.org/stats/arlstat/graphs/2002/2002t2.html). Fortuitously, just as journal prices were becoming unbearable, the internet emerged to offer an alternative. The volume of published knowledge is growing exponentially and will always grow faster than library budgets. In that sense, OA scales with the growth of knowledge (http://www.earlham.edu/~peters/fos/newsletter/03-02-04.htm#scaling) and priced print journals do not. We've already (long since) reached the point at which even affluent research institutions cannot afford access to the full range of research literature. Priced access to journal articles would not scale with the continuing, explosive growth of knowledge even if prices were low today and guaranteed to remain low forever. The pricing crisis itself is just one factor in the rise of OA. Even if scholars did not turn to OA in order to bypass unaffordable access fees, they'd turn to it in order to take advantage of the internet as a powerful new technology for sharing knowledge instantly, with a worldwide audience, at zero marginal cost, in a digital form amenable to unlimited processing. Also, the argument for public access to publicly funded research is a strong one. That is why, for example, 30+ nations have signed the Organisation for Economic Co-operation and Development (OECD) Declaration on Access to Research Data From Public Funding (http://www.oecd.org/document/0,2340,en_2649_34487_25998799_1_1_1_1,00.htm). For a schematic history of OA, see my timeline of the open-access movement (http://www.earlham.edu/~peters/fos/timeline.htm).

Cost of producing or publishing OA literature: OA literature is not costless to produce, although many argue that it is much less expensive to produce than conventionally published literature, even less expensive than priced online-only literature. The question is not whether scholarly literature can be made costless, but whether there are better ways to pay the bills than by charging readers and creating access barriers. The basis of OA is that the bills are not paid by readers and hence do not function as access barriers.

OA is compatible with peer review, and all the major OA initiatives for scientific and scholarly literature insist on its importance. It can use the same procedures, the same standards, and even the same people (editors and referees) as conventional journals. In most disciplines and most fields the editors and referees who perform peer review donate their labour, just like the authors. Where they are paid, OA to the resulting articles is still possible; it merely requires a larger subsidy than otherwise. Despite the fact that those exercising editorial judgment usually donate their labour, performing peer review still has costs: distributing files to referees, monitoring who has what, tracking progress, nagging dawdlers, collecting comments and sharing them with the right people, facilitating communication, distinguishing versions, collecting data, and so on. Increasingly these non-editorial tasks are being automated by software (http://www.arl.org/sparc/resources/pubres.html), including open-source software.

There are two primary vehicles for delivering OA to research articles, OA journals and OA archives or repositories. The chief difference between them is that OA journals conduct peer review and OA archives do not. This difference explains many of the other differences between them, especially the cost and difficulty of launching and operating them. The costs of producing OA literature, and the business models for recovering the costs, depend on whether the literature is delivered through OA journals or OA archives. There are other OA vehicles such as personal web sites, ebooks, listservs, discussion forums, blogs, wikis, RSS feeds, and P2P file-sharing networks.

OA journals:

1. OA journals conduct peer review.

2. OA journals typically let authors retain copyright.

3. Some OA journal publishers are non-profit (e.g. Public Library of Science or PLoS) and some are for-profit (e.g. BioMed Central or BMC).

4. OA journals pay their bills very much the way broadcast television and radio stations do: those with an interest in disseminating the content pay the production costs upfront so that access can be free of charge for everyone with the right equipment. Sometimes this means that journals have a subsidy from the hosting university or professional society. Sometimes it means that journals charge a processing fee on accepted articles, to be paid by the author or the author's sponsor (employer, funding agency). OA journals that charge processing fees usually waive them in cases of economic hardship. OA journals with institutional subsidies tend to charge no processing fees. OA journals can get by on lower subsidies or fees if they have income from other publications, advertising, priced add-ons, or auxiliary services. Some institutions and consortia arrange fee discounts. Some OA publishers (BMC and PLoS) waive the fee for all researchers affiliated with institutions that have purchased an annual membership.

5. A common misunderstanding is that all OA journals use an "author pays" business model. There are two mistakes here. The first is to assume that there is only one business model for OA journals, when there are many. The second is to assume that charging an upfront processing fee is an "author pays" model. In fact, fewer than half (http://www.alpsp.org/2005ppts/OAstudyresults_rev1.ppt) of today's OA journals (47%) charge author-side fees. When OA journals do charge fees, the fees are usually paid by author-sponsors (employers or funders) or waived, not paid by authors out of pocket. In fact there are many reasons why OA journals do not exclude the poor (http://www.earlham.edu/~peters/fos/newsletter/11-02-03.htm#objreply).

6. Some use a color code to classify journals: gold (provides OA to its research articles, without delay), green (permits postprint archiving by authors), pale green (permits, i.e. doesn't oppose, preprint archiving by authors), gray (none of the above).

7. For details on the business side of OA journals, see the BOAI Guide to Business Planning for Launching a New Open Access Journal (http://www.soros.org/openaccess/oajguides/html/business_planning.htm), the BOAI Guide to Business Planning for Converting a Subscription-Based Journal to Open Access (http://www.soros.org/openaccess/oajguides/html/business_converting.htm), and the PLoS whitepaper, Publishing Open-Access Journals (http://www.plos.org/downloads/oa_whitepaper.pdf).

8. We can be confident that OA journals are economically sustainable because the true costs of peer review, manuscript preparation, and OA dissemination are considerably lower than the prices we currently pay for subscription-based journals. There's more than enough money already committed to the journal-support system. Moreover, as OA spreads, libraries will realize large savings from the conversion, cancellation, or demise of subscription-based journals.

9. For a list of OA journals in all fields and languages, see the Directory of Open Access Journals (http://www.doaj.org/).

OA archives or repositories: OA archives can be organized by discipline (e.g. arXiv for physics) or institution (e.g. eScholarship Repository for the University of California). When universities host OA archives, they are usually committed just as much to long-term preservation as to open access.

1. OA archives do not perform peer review. However, they may limit deposit to pieces in the right discipline or authors from the right institution.

2. OA archives can contain preprints, postprints, or both. A preprint is any version prior to peer review and publication, usually the version submitted to a journal. A postprint is any version approved by peer review. Sometimes it's important to distinguish two kinds of postprint: (a) those that have been peer-reviewed but not copy-edited and (b) those that have been both peer-reviewed and copy-edited. Some journals give
authors permission to deposit the first kind of postprint, but not the second kind, in an OA repository.

3. OA archives can be limited to eprints (electronic preprints or postprints of journal articles) or can include theses and dissertations, course materials, learning objects, data files, audio and video files, institutional records, or any other kind of digital file.

4. Authors need no permission for preprint archiving. When they have finished writing the preprint, they still hold copyright. If a journal refuses to consider articles that have circulated as preprints, that is an optional journal-submission policy, not a requirement of copyright law. (Some journals do hold this policy, called the Ingelfinger Rule, though it seems to be in decline, especially in fields outside medicine.)

5. If authors transfer copyright in the postprint to a journal, then they need the copyright holder's permission to deposit it in an OA archive. Most journals, now about 70% (http://romeo.eprints.org/stats.php), already allow postprint archiving. The burden is then on authors to take advantage of the opportunity. This means that authors may publish in virtually any journal that will accept their work (OA or non-OA) and still provide OA to the published version of the text through an OA archive.

6. But if a journal does not allow it, then the author can still archive the preprint and the corrigenda (the differences between the preprint and the postprint).

7. The most useful OA archives comply with the Open Archives Initiative (http://www.openarchives.org/) protocol for metadata harvesting, which makes them interoperable. In practice, this means that users can find a work in an OAI-compliant archive without knowing which archives exist, where they are located, or what they contain. (Confusing as it may be, OA and OAI are separate but overlapping initiatives that should not be mistaken for one another.)

8. Every university in the world can and should have its own open-access, OAI-compliant repository and a policy to encourage or require its faculty members to deposit their research output in the repository. A growing number do precisely this.

9. We can be confident that OA archives are economically sustainable because they are so inexpensive (http://www.arl.org/sparc/pubs/enews/aug01.html#6). There are many systems of open-source software (http://www.soros.org/openaccess/software/) to build and maintain them. Depositing new articles takes only a few minutes, and is done by individual authors, not archive managers. OA archives require only a small part of a technician's time, primarily at the launch, and some server space, usually at a university.

10. There is no definitive list of OA, OAI-compliant archives. But I maintain a list of the good lists (http://www.earlham.edu/~peters/fos/lists.htm#archives).

11. For detail on setting up an institutional repository, see the SPARC Institutional Repository Checklist & Resource Guide (http://www.arl.org/sparc/IR/IR_Guide.html).

12. For more details on OA archiving, see the BOAI Self-Archiving FAQ (http://www.eprints.org/self-faq/).

The OA project is constructive: Even though journal prices have risen four times faster than inflation since the mid-1980s, the purpose of OA is not to punish or undermine expensive journals, but to provide an accessible alternative and to take full advantage of new
technology - the internet - for widening distribution and reducing costs. Moreover, for researchers themselves, the overriding motivation is not to solve the journal pricing crisis but to deliver wider and easier access for readers and a larger audience and impact for authors. Since self-archiving is a bona fide form of OA, authors who fail to take advantage of the opportunity are actually a greater obstacle to OA than publishers who fail to offer the opportunity. Even after OA has been achieved, some access barriers might remain in place. The digital divide keeps billions of people, including millions of serious scholars, offline. Removing price and permission barriers is a significant plateau worth recognizing with a special name.

OA serves the interests of many groups:
Authors: OA gives them a worldwide audience larger than that of any subscription-based journal, and provably increases the visibility and impact (http://opcit.eprints.org/oacitation-biblio.html) of their work.

Readers: OA gives them barrier-free access to the literature they need for their research, not constrained by the budgets of the libraries where they may have access privileges. OA puts rich and poor on an equal footing and eliminates the need for permissions to reproduce and distribute content.

Libraries: Librarians want to help users find the information they need, regardless of the budget-enforced limits on the library's own collection. University librarians want to help faculty increase their audience and impact and thereby help the university raise its research profile.

Universities: OA increases the visibility of their faculty and institution, reduces their expenses for journals, and advances their mission to share knowledge.

Journals and publishers: OA makes their articles more visible, discoverable, retrievable, and useful. If a journal is OA,
then it can use this superior visibility to attract submissions and advertising, not to mention readers and citations. If a subscription-based journal provides OA to some of its content (e.g. selected articles in each issue, all back issues after a certain period, etc.), then it can use its increased visibility to attract all the same benefits plus subscriptions. If a journal permits OA through postprint archiving, then it has an edge in attracting authors over journals that do not.

Funding agencies: OA increases the return on their investment in research, making the results of the funded research more widely available, more discoverable, more retrievable, and more useful. OA serves public funding agencies in a second way as well, by providing public access to the results of publicly-funded research.

Citizens: OA gives citizens access to the research for which they have already paid through their taxes. It also helps them indirectly by helping the researchers, physicians, manufacturers, technologists, and others who make use of cutting-edge research for their benefit.

Copyright © 2004-2006, Peter Suber (http://www.earlham.edu/~peters/hometoc.htm). Peter is a policy strategist for open access to scientific and scholarly research literature. Most of his work consists of research, writing, consulting, and advocacy. He is a Senior Researcher at SPARC, the Open Access Project Director at Public Knowledge, and author of the Open Access News blog and SPARC Open Access Newsletter. This work is licensed under a Creative Commons Attribution 2.5 License; you can view a copy of the licence at http://creativecommons.org/licenses/by/2.5/.
WWW.OPENARCHIVES.ORG

The Open Archives Initiative develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content. OAI has its roots in the open access and institutional repository movements. Continued support of this work remains a cornerstone of the Open Archives program. Over time, however, the work of OAI has expanded to promote broad access to digital resources for eScholarship, eLearning, and eScience.
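As an illustration of the interoperability the OAI protocol provides, the minimal sketch below issues a single OAI-PMH ListRecords request and prints the Dublin Core titles returned. It is not taken from the OAI documentation: the endpoint (arXiv's public OAI-PMH gateway) and the 'physics' set name are assumptions, and a production harvester would also follow the protocol's resumption tokens to page through large result sets.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Assumed endpoint: arXiv's public OAI-PMH gateway. Any OAI-compliant
# repository can be harvested with exactly the same request.
BASE_URL = "http://export.arxiv.org/oai2"

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "set": "physics"}
url = BASE_URL + "?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Dublin Core elements share one namespace across every compliant archive,
# which is what lets a harvester treat all repositories identically.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)

Because every compliant repository answers the same six protocol verbs, a service provider can aggregate thousands of archives without knowing anything about their internal software.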
Feature
Prag Foundation can help you host an Open Access seminar
A great way for you to advocate open access is to organise an open access seminar at your institution. Use these events to raise awareness amongst researchers and scholars about the open access movement, its importance for fostering change in scholarly communication and the advantages of open access publishing, and to encourage them to submit their research articles to open access journals. Topics covered would ideally include how open access publishing works, the benefits of open access publishing, the ways in which it improves on the current, traditional model of scientific publishing, and an overview of the various initiatives that currently exist in the open access movement.
Some of these initiatives are the Public Library of Science, SPARC and the Budapest Open Access Initiative, as well as the OAI metadata harvesting project.
What format should this event take?
The event can take the form of a single speaker followed by discussion; however, it is likely to be livelier if it is either:
1. A program of talks (views and experiences of open access and its impact on the communication of research, etc.), or
2. A panel discussion (a less structured format, fostering more interaction between panel and audience; someone needs to chair the discussion and draw it to a conclusion at the end).
In either case, it is advisable to factor in time for Q&A sessions after the talks. You might also like to include a demonstration of OA journals and archive websites.
Who should talk at this event?
High-level faculty, enthusiasts and evangelists of open access, authors who have published articles in open access journals, and those who serve on open access journal editorial boards, if available locally. We will provide up-to-date reference material on the subject and slides (which can be modified if necessary) in advance for the speakers to prepare their talks.
Who should attend an Open Access Seminar?
1. All those who are involved in research and who are looking to publish their work with a view to gaining high visibility and therefore a high chance of being cited by their peers.
2. Any information specialists, including librarians, who cater to the needs of researchers and scholars for scholarly publications.
What facilities are needed to host this event?
The event organizer(s) should book a venue at the institution, sized according to the expected attendance. Audio-visual equipment (including a laptop with appropriate presentation software and a projector) will be provided by us. Microphones for the speakers may be required, depending on the size of the audience. A live Internet connection, if available, will add to the impact of the seminar.
Feature
OpenCourseWare - Setting Learning Free
OpenCourseWares (OCWs) are high-quality open access (free digital publication) collections of educational materials organized as courses. There is growing momentum among higher education institutions to participate in this "open" movement, with leading universities worldwide going down the open access route and making all their course materials freely available online. In fact, there are now 60 higher education institutions offering open courseware programmes, including several universities in the US, China and Europe, following the lead of the Massachusetts Institute of Technology, which launched its OpenCourseWare site in 2002 (http://ocw.mit.edu/OcwWeb/index.htm). These institutions, as well as around 60 more that are at the planning stage, have formed the OpenCourseWare Consortium (http://www.ocwconsortium.org/) to share ideas and experiences. There are OpenCourseWare projects at Johns Hopkins University, Utah State University, and schools in Japan, China, and the Netherlands.
The mission of the OpenCourseWare Consortium is to advance education and empower people worldwide through opencourseware. In order to participate in Consortium activities, institutions must have committed to publishing, under the institution's name, materials from at least 10 courses in a format that meets the agreed-upon definition of an opencourseware. Organizations that do not publish their own content but whose activities further Consortium goals - such as translation and distribution affiliates - also participate in Consortium activities.
MIT OCW
With 1,550 courses published as of November 1, 2006, this MIT initiative is fast evolving. Since 2002, MIT's OpenCourseWare site has expanded to include materials from 1,550 courses, including not just course notes but audio and visual materials too. By the end of this year, material from all 1,800 courses run by MIT will be available online, and the site has proved immensely popular: in January this year it received 1.5 million visits from all over the world, 60% of which came from outside the US, with India and China the biggest users. MIT is now partnering with other organisations to translate its OCW materials: about 400 courses are currently available in other languages besides English. MIT OCW is created with the support and generosity of the MIT faculty, who choose to share their research, pedagogy, and knowledge to benefit others. MIT OCW is likely to reach a steady - though never static - state by 2008. Between now and then, MIT OCW will publish the materials from virtually all of MIT's undergraduate and graduate courses.
"OpenCourseWare expresses in an immediate and far-reaching way MIT's goal of advancing education around the world. Through MIT OCW, educators and students everywhere can benefit from the academic activities of our faculty and join a global learning community in which knowledge and ideas are shared openly and freely for the benefit of all." - Susan Hockfield, President of MIT
OpenLearn
Currently, the only UK member of the consortium is the Open University (OU), which launched an ambitious open courseware programme, OpenLearn (http://www.open.ac.uk/openlearn/get-started/get-started-educator/competition.php, http://labspace.open.ac.uk/), in October last year. At its launch, OpenLearn had 900 learning hours' worth of material on the site. The target is to have 5,400 learning hours on the site by April 2008 - about 5% of the OU's entire course content.
OCW by Corporation
For the first time, open education is being taken up by a for-profit training services group, Novell. Novell OpenCourseWare (http://ocw.novell.com/) is a collection of educational materials developed by Novell Global Training Services for authorized courses and other customer training purposes. 'By making these materials available to the public, we hope to extend to all people worldwide the opportunity to access these high quality learning materials.' What Novell is doing shows that corporations can take the idea of 'openness' beyond open source software and into open educational resources. Novell currently has 10 courses available, from Getting Started with the Novell Linux Desktop to Geeko Animation Videos. However, accessing these materials does not provide access to Novell's instructors or Novell Global Training Services support. It will be interesting to see if other companies follow suit.
CREATIVE COMMONS | WWW.CREATIVECOMMONS.ORG
The OCWs published by the consortium members are licensed under the Creative Commons 'Attribution-NonCommercial-ShareAlike' licence, which offers exciting opportunities for universities to widen their geographical reach while preserving the moral and legal rights of course developers on the material released. The Creative Commons licence not only allows one to use the material for learning but also potentially allows educators anywhere in the world to use the materials as the basis for designing a learning course, add their knowledge to it, and republish it for open access. Educators worldwide are free to reuse and remix study materials provided they credit the institution as the source of the material, do not use it for profit, and republish the work under the same Creative Commons licence for open access.
What the Creative Commons licence does is allow private rights to create public goods: creative works set free for certain uses. Like the free software and open-source movements, the Creative Commons aims to achieve this by purely voluntary and libertarian means. It offers creators a best-of-both-worlds way to protect their works while encouraging certain uses of them - to declare "some rights reserved". The goal is to build a layer of reasonable, flexible copyright in the face of increasingly restrictive default rules.
Perhaps it is this global reach of the open courseware movement that offers the most radical challenge to the traditional, localised method of delivering education. Some of the OpenCourseWare Consortium's members are experimenting with new models. Universia, for example, is a collaboration between a number of Spanish and Latin American universities, funded by the Bank of Santander. Its country-specific web portals, which offer information about higher education, such as available courses and grants, already attract six million visitors each month. It was ideally placed, therefore, to provide open courseware, and for the past four years it has been translating MIT's open courseware into Spanish and putting it online. As a result, says Pedro Aranzadi, Universia's managing director, many of the member universities saw the benefits of putting their own courseware online, and Universia has just launched 10 open courseware sites supplied by Spanish universities.
Making course materials available to all is an intrinsic good, whetting people's appetite for study and ultimately increasing the number of educated people in the population. It may be that we are on the verge of a shift in the way we see learning resources: from something distributed to the privileged few, to something shared by everyone. At the moment, the open courseware movement seems unstoppable, yet most of the initiatives can only carry on through outside funding. Whether universities or governments will want to continue paying to make course materials widely available once this funding runs out remains to be seen.
Credits: http://www.iwr.co.uk/information-world-review/features/2191580/mit-sets-learning-free, http://www.computers.net/2007/03/novell_the_firs.html
Tutorial
Using the Internet

There is a lot of good information on the Internet, but sometimes it may be difficult to narrow it down and find the quality information. The challenge for you as the user is to find information that is: 1. Scientific, 2. Usable/serious/quality controlled, and 3. Relevant.
In this module you will learn: 1. When the Internet is a useful source, 2. Which search tools exist, and 3. How to search for information on the Internet. In this module we will use the following terms:
Internet: A network of computer networks and computer users worldwide.
Web: The World Wide Web, shortened to WWW or the web, is a distributed system of information searching on the Internet. The web is based on hypertext - you navigate by clicking forward to the relevant information. The web uses several media and includes several types of information such as images, sound and video.
Browser: A browser is a computer program that lets you search and read web pages. Examples of browsers are Internet Explorer, Netscape, Mozilla and Opera.
Search engine: A computer program that searches for words in files and documents on the web. Examples of search engines are Google and AltaVista.

HOW BIG IS THE WEB?1
SIZE OF THE WEB: OVER 2 BILLION PAGES | THE WEB GROWS BY 7.3 MILLION PAGES A DAY | COMBINED COVERAGE OF ELEVEN MAJOR SEARCH ENGINES IS 42% OF THE WEB | NO INDIVIDUAL SEARCH ENGINE COVERS MORE THAN 16% OF THE WEB | OVERLAP BETWEEN INDIVIDUAL SEARCH ENGINES IS LOW, WITH APPROXIMATELY 40% OF A GIVEN SEARCH ENGINE'S CONTENT UNIQUE | THE BIGGEST SEARCH ENGINE IS GOOGLE, WITH OVER 4.3 BILLION PAGES FULLY INDEXED

WHAT IS THE INTERNET?
The Internet is a worldwide network made up of larger and smaller computer networks. Each computer on the Internet can reach any other computer with an address on the network. There are several rules - protocols - describing how the computers can "talk to each other". The Internet gives you access to large amounts of information stored in databases or published directly on the web. The Internet also gives you an opportunity to communicate with others via services such as email (electronic mail), chat ("conversations" via computers) and news (discussion fora).
Computers (PC, Mac and UNIX) on the Internet communicate via common protocols and addressing standards. Protocols are a set of rules governing, for instance, how information is sent and presented. Without protocols the computers would not know how to display a web page. There are several protocols for Internet communication. Among the most common are: 1. SMTP - Simple Mail Transfer Protocol: for sending and receiving e-mail; 2. FTP - File Transfer Protocol: for transferring files between computers; 3. HTTP - Hypertext Transfer Protocol: for transferring information (files) on the web.
All computers connected to the Internet must have a precise name - a domain. Domains are hierarchically structured like this: subdomain.top-level domain, e.g. nit.in. The top-level domain for India is in, the subdomain for NIT is nit, and this can be split into several subdomains. Addresses used by the IP protocol to navigate on the Internet are called IP addresses, and are written as numbers. All computers connected directly to the Internet have a unique IP address. DNS is a domain name service connecting IP addresses and domains, to make both valid. 192.168.1.1 is an example of an IP address and, in this tutorial's example, corresponds to the domain ub.ntnu.no.
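You can see the DNS connection between names and numbers for yourself: most systems ship a small lookup utility. A minimal sketch using the standard host command - the domain and address shown are simply the illustrative pair from the paragraph above, not a real lookup result:

  $ host ub.ntnu.no
  ub.ntnu.no has address 192.168.1.1    (illustrative output)

The nslookup command works similarly on systems where host is not available.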
All files - that is, all web pages, pictures, programs, etc. on the web - have specific addresses, connected to the computer where they are located. This address is called a URL (Uniform Resource Locator), e.g. http://www.ub.ntnu.no/fakbib/dragvoll/hjelpeside.php. URLs consist of the following elements: http:// = type of service or transfer protocol | www.ub.ntnu.no = domain | /fakbib = file catalogue | /dragvoll/ = file catalogue | hjelpeside.php = file name

SEARCHING THE WEB
Searching for information on the web is quite similar to any other search for information. Searching for one single word in a search engine does not necessarily produce a good result. Consider these factors to improve your search results:
THE PROBLEM: Put the problem into words; formulate it as a sentence/question. You could find search words, phrases, synonyms or connections based on this.
THE SOURCE: Consider whether the web is the best source (read more about this below).
THE TOOLS: If the web is the right source, you need to find the right tool.
THE STRATEGY: Search strategy and search technique are important to get a good result. Search strategy is the intelligent work; here you will find out how to approach your problem. Search technique is quite simply the instructions, and can be found under 'Help' in most tools.

IS THE WEB THE BEST SOURCE OF INFORMATION?
1. If you are searching the web for information about current events, enterprises and organisations, or to find information from governments or other public authorities, you will usually find what you are looking for. Nevertheless, it is important to remember that other sources might be both quicker and better.
2. If you are looking for local or older information, then printed information is superior.
3. A lot of facts are more easily available in encyclopaedias or books (for example in the library's reference section - online or on the shelf).
4. Academic information is located in databases which the most common search engines cannot find. You will find links to academic bases on your library's homepage.

REASONS WHY YOU SHOULD NOT USE THE WEB AS YOUR ONLY SOURCE
Amount of information: Only a small amount of the total information in the world is on the web. Just think about all the books and journals written and published before computers and networks were invented.
Author: Anyone can publish information on the Internet without having the content edited by somebody else. Internet sites could be written by an expert in his field, a journalist, a dissatisfied consumer or a primary school pupil.
Payment: Not all the information is free. You may view many web pages for free, but some commercial websites charge you if you want to go deeper into their sites.
Organisation: The information on the web is not organised. Some gateways, such as Yahoo!, collect links and organise them in subject lists, but it is impossible for one single catalogue service or search engine to index and organise all the websites.
Context: Most of the web information is taken out of its context. Those millions of web pages out there make up a hotchpotch of information and opinions.
Stability: Most of the web information is not permanent. Some well-run sites are continuously updated with new information, while others soon are outdated or simply disappear.
ONE FINAL ADVICE: USE SEVERAL SOURCES!

SEARCHING BEYOND GOOGLE AND YAHOO!2
AOL Search's slick, friendly interface caters to beginners, but Yahoo and Google are better choices for power searchers or multimedia mavens. LookSmart falls short when it comes to standard Web searches, but its article searches and innovative Furl tool are unmatched among the competition. Lycos makes the grade if you're checking up on someone or scouring discussion forums; otherwise, turn to Google. A9's unique, personalized approach to search is perfect for those who need a wide range of results from their queries. Altavista offers above-average audio and video searches, but Yahoo boasts the same features - and more. While it's weak on multimedia and local searches, Ask Jeeves's powerful search history, organizational tools, and one-of-a-kind site preview make it a compelling choice for students. Google's the way to go for straight Web searching, local results, maps, or image queries, but we found better engines for multimedia searches. MSN Search may appeal to fans of MSN's various services, but its lackluster local and multimedia searches disappoint. After a few years of eating Google's dust, Yahoo offers new features that give the reigning search king a run for its money and make it a worthy alternative.

SEARCH TOOLS
SEARCH ENGINES
Search engines are the most common tool used to find information on the Internet. Search engines collect, organise, search for and display web pages. Google and Yahoo! Search are examples of search engines. When searching in search engines you search the engine's database; you do not search the web pages in real time. Even the most comprehensive search engines only cover some 15% of the web, so if you only use your favourite search engine you will miss 85% of all available websites. Search engines are suitable when searching for narrow topics and well-defined research questions, with specific search words and precise concepts.
Remember:
1. Search engines only cover a small part of the Internet.
2. They lack 'The Invisible Web'.
3. They cover some scholarly bases and e-journals, but not in full like academic databases.
4. They are not big enough (they cover only parts of the Internet). Some subjects are very poorly covered.
5. Search results are not evaluated; they are presented according to mathematical rules for relevance evaluation.
6. It varies how often they are updated; they could be several months behind.
USE SEARCH ENGINES IF YOU HAVE A SPECIFIC QUESTION!

META-SEARCH ENGINES
Meta-search engines search several search engines' databases simultaneously. This will save you time and give you an overview of what is out there. You enter your search question as usual, and receive a presentation of results from all the search services used. Metacrawler and Dogpile are examples of meta-search engines. Meta-search engines are suitable for searches with simple, clear questions.
Remember:
1. If you want to conduct a more complicated search, you need to use the individual search engines. If you conduct an advanced search, not all the engines will understand your question, as questions are written differently in different engines.
2. Meta-search engines often have fewer search options, and the number of hits from each service is limited.
USE A META-SEARCH ENGINE IF YOU WANT TO FIND "ALL" AVAILABLE INFORMATION ON A SUBJECT!

CATALOGUES
Subject catalogues are collections of links to Internet sites. The links are classified (organised by subject) in broad categories and are hierarchically built up. Some catalogues include a closer description of each link. General catalogues cover all scientific areas, but there are also specialised scientific catalogues. Examples of general catalogues are Yahoo! and Kvasir.
Catalogues are suitable:
1. When searching for broad subjects where a normal search will give you a lot of information
2. If you have doubts about the quality of the information
3. When you want to get an overview of important resources in a subject
4. When you have difficulties defining your research question or finding search words
5. As a starting point for further navigation
Remember:
1. Subject catalogues are produced manually - the links are selected by editors. This means a lot of work, which is why the catalogues often are "smaller" and more selective than search engines.
2. Even though catalogues are built up manually, the basis is not necessarily the greatest expertise in the world.
USE CATALOGUES IF YOU WANT TO GET AN OVERVIEW OF A SUBJECT!
SUBJECT GATEWAYS
Similar to catalogues, subject gateways are collections of links to Internet sites. They often specialise in one subject. Subject gateways have their own editorial staff who select and evaluate the pages presented through their gateways. These pages are often good and comprehensive.
Bibsys' subject gateway is made by the university and college libraries in Norway. It covers all subjects, but has its own professional editorial staff for each subject.
USE SUBJECT GATEWAYS TO GET AN OVERVIEW OF CENTRAL RESOURCES IN A SUBJECT!
SCIENTIFIC SEARCH SERVICES
Scientific search services index material that comes from universities, research institutes, academic publishers, etc.
Scirus is a scientific search engine that includes some 150 million web pages plus a large number of articles from e-journals (subscription required). Here non-scientific pages are filtered off - for example, a search for 'Dolly' will retrieve information about the cloning of the sheep Dolly, not about Dolly Parton.
USE SPECIALISED SEARCH ENGINES IF YOU ARE LOOKING FOR SCIENTIFIC INFORMATION!

THE INVISIBLE WEB
There are thousands of databases with specialised information on the web. The bases may be produced by organisations or individuals, and are often available to anyone. Others are delivered by commercial suppliers, and require subscription and login. Spiders and robot programs cannot find this information, and that is why it is often referred to as 'The Invisible Web'. This part of the Internet is much larger than the retrievable part ordinary search engines can find. Very few people who search for Internet sites know that they miss this information. Lately, the large search engines have begun to index parts of The Invisible Web, so that you may find certain scientific articles from e-journals.
The Invisible Web includes: 1. Databases; 2. Payment services; 3. Reference databases; 4. Types of files other than htm.
Access to these resources often goes via the library homepage and through hand-picked catalogues. One of these catalogues is The Invisible Web Directory.

STRATEGIES FOR GOOD SEARCHES
Good search words: Find good keywords and phrases. Do a little brainstorming in advance, and come up with a list of good words and phrases. Try thinking of words an author of a topical web page would use.
Be specific: If you want to find information about poodles, search for 'poodle' and not for 'dog'.
Search for several words: Enter several words in the search box, without using punctuation marks. This will give you hits on pages where one or more of your words appear. Example: NTNU student grade
Try different searches: Search engines are good at ranking search results, so if you don't find relevant pages in the first 20-30 hits, you should try a different search. If you are not happy after two or three searches, you should try a different search engine.
Advanced search: Provides the opportunity of more specific searches and fewer hits. Most search tools have both simple and advanced searches.
Limitation: There are several techniques to limit a search and get more relevant hits. The way in which this is done varies between engines, but most support the use of +, - and "" (inverted commas).
Choosing tools: It is a good idea to choose a couple of search tools and learn how to use them properly.
Help functions: Check the help function to see the available options in your search tool.
Use subject lists: Browse through a subject list. Use a subject catalogue as a basis for finding good subject categories that can be further limited.

LIMITING YOUR SEARCH
Search engines like Google and AltaVista let you search for information on the web by using keywords. Each search will often result in very many hits, with some good hits and very many totally irrelevant ones. To improve your results you may use advanced search techniques to limit and focus your search.
Search for phrases: If you know the exact wording of a name or a phrase, you can put it in quotation marks (" "). That will give you hits for pages where the words appear together and in the same order as you wrote them. Examples: "queen elizabeth", "cannes film festival", "god save the queen".
Searches with +: You ensure that words and numbers are included in the result by adding a + (plus) in front of the search word. Example: star wars episode +1
Searches with -: A - (minus) in front of a word means that this word should not appear in the result. Examples: Venus +planet retrieves all pages on the planet Venus; Venus -planet retrieves Venus in all other contexts.
Boolean search: In most search engines you may use OR between words to search for synonyms, but it is not possible to use Boolean operators or parentheses to make complex search questions in the same way as you can in databases. Example: sport OR athletics
Field search: Combinations with site: or domain: let you limit your search to a specific domain. Example: +"positions available" +site:ntnu.no

BASIC SEARCH STRATEGY: THE TEN STEPS3
For some search requests, you may not want or need to go through a formal search strategy. If you want to save time in the long run, however, it's a good idea to follow a strategy, especially when you're new to a particular search engine. A basic search strategy can help you get used to each search engine's features and how they are expressed in the search query. Following the ten steps will also ensure good results if your search is multifaceted and you want to get the most relevant results.
1. Identify the important concepts of your search; 2. Choose the keywords that describe these concepts; 3. Determine whether there are synonyms, related terms, or other variations of the keywords that should be included; 4. Determine which search features may apply, including truncation, proximity operators, Boolean operators, and so forth; 5. Choose a search engine; 6. Read the search instructions on the search engine's home page - look for sections entitled "Help", "Advanced Search", "Frequently Asked Questions", and so forth; 7. Create a search expression, using syntax appropriate for the search engine; 8. Evaluate the results: how many hits were returned? Were the results relevant to your query?; 9. Modify/limit your search if needed - go back to steps 2-4 and revise your query accordingly; 10. Try the same search in a different search engine, following steps 5-9 above.

SEARCH TIPS4
For multifaceted searches a full-text database is best. For a search involving one facet, like a person's name or a phrase without stop words, search engines that provide keyword indexing will be sufficient. If your search has yielded too few web pages (low recall), there are several things to consider: 1. Perhaps the search expression was too specific; go back and remove some terms that are connected by ANDs; 2. Perhaps there are more possible terms to use - think of more synonyms to OR together, and try truncating more words if possible; 3. Check spelling and syntax (a forgotten quotation mark or a missing parenthesis); 4. Read the instructions on the help pages again. 5. If your search has given you too many results, with many not on the point of your topic (high recall, low precision), consider the following: narrow your search to specific fields, if possible; use more specific terms (e.g. instead of sorting, use a specific type of sorting algorithm); add additional terms with AND or NOT; remove some synonyms if possible.

SEARCH EXAMPLES
Problem: Does the herbal medicine echinacea have any effect on colds and the flu? Search for English pages. The following words were used: 1. echinacea; 2. common cold, cold, influenza, flu; 3. effect, treat, cure, prevent.

IN THIS MODULE YOU HAVE LEARNT:
1. When using the Internet as a source is a good idea
2. Which search tools exist
3. How to search for information on the Internet
This tutorial is based on http://www.ub.ntnu.no/viko/en/mod5/mod5_side1.php
1 http://www.healthlinks.washington.edu/howto/navigating/
2 http://reviews.cnet.com/4520-10572_7-6219242-1.html?tag=back
3,4 http://www.webliminal.com/search/search-web05.html#snr
© This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/
Insight
Linux and the GNU Project
by Richard Stallman
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is more often known as "Linux", and many users are not aware of the extent of its connection with the GNU Project.
There really is a Linux, and these people are using it, but it is not the operating system. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU, with Linux functioning as its kernel.
Many users are not fully aware of the distinction between the kernel, which is Linux, and the whole system, which they also call "Linux". The ambiguous use of the name doesn't promote understanding. These users often think that Linus Torvalds developed the whole operating system in 1991, with a bit of help.
Programmers generally know that Linux is a kernel. But since they have generally heard the whole system called "Linux" as well, they often envisage a history that would justify naming the whole system after the kernel. For example, many believe that once Linus Torvalds finished writing Linux, the kernel, its users looked around for other free software to go with it, and found that (for no particular reason) most everything necessary to make a Unix-like system was already available.
What they found was no accident - it was the not-quite-complete GNU system. The available free software added up to a complete system because the GNU Project had been working since 1984 to make one. In The GNU Manifesto we set forth the goal of developing a free Unix-like system, called GNU. The Initial Announcement of the GNU Project also outlines some of the original plans for the GNU system. By the time Linux was written, GNU was almost finished.
Most free software projects have the goal of developing a particular program for a particular job. For example, Linus Torvalds set out to write a Unix-like kernel (Linux); Donald Knuth set out to write a text formatter (TeX); Bob Scheifler set out to develop a window system (the X Window System). It's natural to measure the contribution of this kind of project by specific programs that came from the project. If we tried to measure the GNU Project's contribution in this way, what would we conclude? One CD-ROM vendor
found that in their "Linux distribution", GNU software was the largest single contingent, around 28% of the total source code, and this included some of the essential major components without which there could be no system. Linux itself was about 3%. So if you were going to pick a name for the system based on who wrote the programs in the system, the most appropriate single choice would be "GNU". But we don't think that is the right way to consider the question. The GNU Project was not, is not, a project to develop specific software packages. It was not a project to develop a C compiler, although we did that. It was not a project
to develop a text editor, although we developed one. The GNU Project's aim was to develop a complete free Unix-like system: GNU. Many people have made major contributions to the free software in the system, and they all deserve credit. But the reason it is an integrated system - and not just a collection of useful programs - is because the GNU Project set out to make it one. We made a list of the programs needed to make a complete
free system, and we systematically found, wrote, or found people to write everything on the list. We wrote essential but unexciting (1) components because you can't have a system without them. Some of our system components, the programming tools, became popular on their own among programmers, but we wrote many components that are not tools (2). We even developed a chess game, GNU Chess, because a complete system needs good games too. By the early 90s we had put together the whole system aside from the kernel (and we were also working on a kernel, the GNU Hurd, which runs on top of Mach). Developing this kernel has been a lot harder than we expected; the GNU Hurd started working reliably in 2001. We're now starting to prepare the actual release of the GNU system, with the GNU Hurd.
Fortunately, we didn't have to wait for the Hurd, because Linux was available. When Linus Torvalds wrote Linux, he filled the last major gap. People could then put Linux together with the GNU system to make a complete free system: a Linux-based version of the GNU system; the GNU/Linux system, for short. The earliest Linux release notes recognized that Linux was a kernel, used with parts of GNU: "Most of the tools used with linux are GNU software and are under the GNU copyleft. These tools aren't in the distribution - ask me (or GNU) for more info." Putting them together sounds simple, but it was not a trivial job. Some GNU components (3) needed substantial change to work with Linux. Integrating a complete system as a distribution that would work "out of the box" was a big job, too. It required addressing the issue of how to install and boot the system - a problem we had not tackled, because we hadn't yet reached that point. The people who developed the various system distributions made a substantial contribution.
The GNU Project supports GNU/Linux systems as well as the GNU system - even with funds. We funded the rewriting of the Linux-related extensions to the GNU C library, so that now they are well integrated, and the newest GNU/Linux systems use the current library release with no changes. We also funded an early stage of the development of Debian GNU/Linux. We use Linux-based GNU systems today for all of our work, and we hope you will use them too. Today there are many different variants of the GNU/Linux system (often called "distros"). Most of them include non-free software - their developers follow the philosophy associated with Linux rather than that of GNU. But there are also completely free GNU/Linux distros. Whether you use GNU/Linux or not, please don't confuse the public by using the name "Linux" ambiguously. Linux is the kernel, one of the essential major components of the system. The system as a whole is more or less the GNU system, with Linux added. When you're talking about this combination, please call it "GNU/Linux".
If you want to make a link on "GNU/Linux" for further reference, this page and http://www.gnu.org/gnu/the-gnu-project.html are good choices. If you mention Linux, the kernel, and want to add a link for further reference, http://foldoc.doc.ic.ac.uk/foldoc/foldoc.cgi?Linux is a good URL to use. Addendum: Aside from GNU, one other project has independently produced a free Unix-like operating system. This system is known as BSD, and it was developed at UC Berkeley. It was non-free in the 80s, but became free in the early 90s. A free operating system that exists today is almost certainly either a variant of the GNU system, or a kind of BSD system. People sometimes ask whether BSD too is a version of GNU, like GNU/Linux. The BSD developers were inspired to make their code free software by the example of the GNU Project, and explicit appeals from GNU activists helped persuade them, but the code had little overlap with GNU. BSD systems today use some GNU programs, just as the GNU system and its variants use some BSD programs; however, taken as wholes, they are two different systems that evolved separately. The BSD
developers did not write a kernel and add it to the GNU system, and a name like GNU/BSD would not fit the situation.(4) Notes:
1. These unexciting but essential components include the GNU assembler (GAS) and the linker (GLD), both now part of the GNU Binutils package; GNU tar; and more.
2. For instance, The Bourne Again SHell (BASH), the PostScript interpreter Ghostscript, and the GNU C library are not programming tools. Neither are GNUCash, GNOME, and
GNU Chess.
3. For instance, the GNU C library.
4. On the other hand, in the years since this article was written, the GNU C Library has been ported to the FreeBSD kernel, which made it possible to combine the GNU system with that kernel. Just as with GNU/Linux, these are indeed variants of GNU, and are therefore called GNU/kFreeBSD and GNU/kNetBSD depending on the kernel of the system. Ordinary users on typical desktops can hardly distinguish between GNU/Linux and GNU/*BSD.
Accessed on May 29, 2007, http://www.gnu.org/gnu/linux-and-gnu.html
Please send FSF & GNU inquiries to gnu@gnu.org. Copyright © 1997, 1998, 1999, 2000, 2001, 2002, 2007 Richard M. Stallman. Verbatim copying and distribution of this entire article is permitted in any medium without royalty provided this notice is preserved. Updated: 2007/05/05
Policy Forum
On Corporate Agreements and Partnerships with Public Bodies
Issued by FSF India on 19/12/2006
Microsoft Inc. has now openly claimed that it has entered into agreements and strategic partnerships with 14 Indian State Governments and prestigious public institutions like the National Informatics Centre and the Ministry of Information Technology, Government of India (see http://download.microsoft.com/). Microsoft has claimed that it has been working with some of these agencies for the past 13 years. Most of these projects, now identified as e-governance projects in operation in various central and state governments, are, needless to say, on proprietary non-free Microsoft platforms. The states with which Microsoft claims to have entered into strategic partnership and the states where its projects are in operation include Assam, Andhra Pradesh, Delhi, Tripura, West Bengal, Kerala, Rajasthan, Bihar, Madhya Pradesh, Gujarat, Uttar Pradesh, Himachal Pradesh, Haryana, Orissa, Punjab and Maharashtra.
Some of the top executives of the corporation (Jean Philippe Courtois, President, Microsoft International, and Ravi Venkatesan, Chairman, Microsoft India) have expressed happiness and excitement about the prospect of having entered into successful partnerships with various governments in India, which, according to the corporation's statement, include 150 e-governance applications running on Microsoft Windows platforms that bring citizens closer to government services. This statement also debunks the claims made by some of these e-governance projects about how Microsoft became their choice for applications. For instance, Information Kerala Mission, a project under the Local Self Governance Department in Kerala, had long claimed that it had decided on the Microsoft platform after extensive technical discussions - a claim that has now turned out to be bogus, with Microsoft Inc. asserting that it had entered into strategic partnership with e-governance agencies and state governments.
The performance and operations of all these 150 e-governance projects now stand in direct contrast to the expectations as
listed out in the Right to Information (RTI) Act 2005 as well as to the recommendations of the Knowledge Commission of India on such e-governance projects. The RTI Act insisted on preservation of confidentiality of sensitive information as well as its accessibility - both of which cannot be guaranteed on a proprietary non-free software platform (see http://persmin.nic.in/RTI/WebActRTI.htm (RTI Act 2005, Govt of India)). Confidentiality requires prevention of loss of information, which cannot be guaranteed on closed proprietary
non-free platform without continuous inspection for the existence of spy-code through which information loss can occur. Guaranteed accessibility can only be implemented if open and free standards and formats are deployed for public information, which also cannot be guaranteed on closed proprietary non-free platforms. Usage of closed and proprietary systems for e-governance applications builds heavy dependence on proprietary software, which again runs contrary to the spirit of the Right to Information Act. The Knowledge Commission, set up by the Government of India to make recommendations on e-governance efforts, had explicitly recommended that such projects in India be on free software platforms using open standards, taking into consideration the size and scope of such projects, which will also help in improving scaling (see http://www.knowledgecommission.org/downloads/NKCRecommendationsEGovernance.pdf, paras 3, 7). In view of the contemporary situation, the state governments and other public agencies that have entered into strategic alliance or partnership with Microsoft Inc. should immediately abandon such projects. Ignoring the spirit of the Right to Information Act 2005 and the spirit of the recommendations of the Knowledge Commission of India on e-governance efforts can well be considered a naked aggression against the emerging information society in the Indian subcontinent. The Free Software Foundation of India appeals to the Governments involved to shun proprietary non-free software in e-governance efforts and to invest in developing such applications only using free software and open standards.
Please send feedback to gnu@gnu.org.in. Copyright © 2002-2006 Free Software Foundation of India. Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved.
Support FSF India and the cause of software freedom. You can make a donation by sending a cheque or demand draft drawn in favour of "Free Software Foundation of India" to: The Free Software Foundation of India, TC-27/2207, Chirakulam Road, Thiruvananthapuram - 695 001, Kerala [FSF Account information: Free Software Foundation of India, A/C No: 632000002698, HDFC Bank Ltd, Trivandrum]. All donations to FSF India are tax-deductible under Section 80G of the Income Tax Act.
Volume 1, Issue 1
Distrowatch
Choosing a Linux Distribution
Ferdous Ahmed Barbhuiya offers a few tips to help you choose the Linux distro that best suits you.
Ever since the first Linux distributions appeared, people have been having a hard time trying to choose
the "right one" to use. Many people end up asking "Which distribution should I use?" on the web, only to receive heaps of different suggestions (usually just the distributions that the posters like), a few arguments, and inevitably, the RPM vs DEB debate.
The Linux newbie has yet another problem: they are confused about why there are so many distributions for one Linux! The problem becomes more complicated because even after you filter the posts down to just the suggested distributions, you will find that you end up with a big list of distributions, usually with only a comment like "This is good" to guide you in your choice.
This is a really bad way to choose a distribution, since you have no real advice on WHY you should choose distribution X over distribution Y. This article aims to give you the advice you need to choose the distribution that best suits you. Distribution Purpose
One of the key things in choosing a distribution is what you are using it for. Most uses fall into one of the three categories below:
1. Desktop usage
2. Desktop and server usage
3. Server usage
"Desktop usage" or "desktop distribution" is a very commonly used term to describe a Linux distribution which provides a GUI (Graphical User Interface) and is suitable for usage on desktop or laptop computers. If you want a desktop distribution, some of the main requirements are: 1. Ease of adjusting settings - in the case of laptops, easy network changing is important 2. Age of the software (you want the programs to be fairly recent) 3. Range of GUI applications If you are looking for a server distribution, you want to look for: 28 | Open足Source Northeast | 15th August 2007
1. Software API stability - do updates ever change the way the distribution works mid-release?
2. Software life - how long will it get updates?
3. Security - servers are often open to the public, so they need to be very well secured.

Ease of Use
A common recommendation of what distribution to use is its "ease of use". However, this is increasingly irrelevant when selecting a mainstream desktop distribution, as basically all of them have good hardware support and easy-to-use configuration tools. Importance Rating: Medium
Package Management System
Undoubtedly, you will come across an RPM vs DEB argument on the web sooner or later... so which side is right?
Well, there is no real answer, as both sides have valid points. However, from the perspective of choosing a distribution there is very little difference between them. The arguments between the formats which are actually valid relate to the inner workings of the package system, but in both cases the user experience is effectively the same. For example, on a system using DEB, to install Firefox you can do:
$ apt-get install firefox
On a system using RPM you can do:
$ yum install firefox
Anyone spot the difference that makes one better than the other from a user perspective? I sure can't.
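The similarity extends to everyday queries as well. A small sketch - the package name is just an example - of checking whether something is installed and at what version:

  $ dpkg -l firefox     (DEB-based systems)
  $ rpm -q firefox      (RPM-based systems)

Either way, one short command answers the question, which is why the format war matters so little from a user perspective.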
(Note: there are many other package managers out there, such as Gentoo's Portage. However, RPM and DEB seem to be the most commonly used amongst mainstream distributions.) Importance Rating: Low

Software Update Life
This is one of the most important things you must consider... and unfortunately, many people forget to consider this.
You need to think about how often you want to upgrade to a new distribution version and what sort of product life is needed - some distributions (such as Fedora) have very fast paced
releases (about every 6 months), but only a short update life (only supported until 2-3 months after the next version is released). These distributions have the advantage of having the latest & greatest features, however they can sometimes be a bit bleeding edge, and the fast update cycle means to keep your system secure you may need to upgrade every 6-9 months which is fine if that's what you want - and many people want the latest software, so this is ideal.
However, I cannot stress this enough: do not use an outdated distro as a production server! For example, I see a lot of people using Fedora as a production server - this is perfectly fine if you plan to upgrade to the latest releases every 6-9 months. However, if you don't want to upgrade a production box that often (and in most enterprise environments, this would be unacceptable), you need to look for a longer-life distribution. Red Hat Enterprise Linux (RHEL)/CentOS, Debian or Ubuntu LTS editions are some good choices, but there are others around too. If you continue running a distribution after its support life has ended, you risk opening yourself to security flaws - and the last thing anyone wants is your server increasing the production of spam!
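Before committing a machine to a long life, it helps to know exactly which release it is running, so you can check it against the distribution's published support dates. A quick sketch - which of these files exists depends on the distribution:

  $ lsb_release -a             (works on most modern distributions)
  $ cat /etc/redhat-release    (RHEL, CentOS, Fedora)
  $ cat /etc/debian_version    (Debian and derivatives)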
The same also applies to desktop distributions, but to a lesser degree - it's a lot safer to run an end-of-life distribution on a desktop that's not connected directly to the web and open to the public, but it is still NOT a good security practice, and I advise you NOT to do this.
Linux is one of the most secure operating systems available, but it only takes one nasty vulnerability going unfixed in an old system for it to get exploited by a script kiddie. Importance Rating: High

Feature Stability

Another important requirement when choosing a distribution is deciding whether you need feature stability or not.
Many distributions (particularly binary distributions) have a policy of not upgrading software in mid-release. So, if you're using fooprogram 1.0 and fooprogram 1.5 comes out with an important security fix, the distribution maintainers will backport that fix to the fooprogram 1.0 release, and rename it like "fooprogram 1.0-1".
Backporting is the method of taking a new feature/bugfix/security update from a newer program and applying it to an older version of the program.
This may seem a bit strange - why go to all that work backporting when they could just upgrade? The reason is that if they upgraded to the new version of fooprogram, the features or the configuration might have changed (Red Hat has a good example). In a production environment this can cause problems, particularly for servers, which are often set up to update themselves automatically - waking up to find that Apache was upgraded overnight and, due to a change in the way it works, your webserver isn't working, isn't the most enjoyable thing to happen.
Another reason is that it makes it easy for 3rd party vendors to certify that their product will work on a certain release of the distribution, since they know it isn't going to get changed.
Sometimes they don't backport - if the maintainers feel that no problems will be caused by upgrading, and that an upgrade will provide some needed feature or fix, they may upgrade to a newer version. But the usual behaviour is backporting. Backporting is most often seen with the kernel version - by retaining the same kernel, users know that their hardware support isn't going to change after an upgrade one day.
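On an RPM-based system you can watch backporting in action by reading a package's changelog: security fixes are listed there even though the upstream version number stays put. A sketch, with httpd as an arbitrary example package:

  $ rpm -q httpd                       (the version number stays put between updates)
  $ rpm -q --changelog httpd | head    (recent, often backported, fixes are listed here)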
Most of the mainstream binary distributions act this way. Some distributions - such as Gentoo - constantly upgrade to the latest version of a program; this is usually okay when only minor updates come out, but can potentially cause issues when doing larger updates. Importance Rating: Medium

Ethics
What does ethics have to do with a Linux distribution? Simple - some developers feel that the distribution should only include Open Source Software (OSS) (such as Debian, gNewSense and Fedora). Other distributions feel that it is okay to include non-open-source binaries (e.g. Ubuntu). If this matters to you, make sure you choose a distribution that matches your beliefs. Importance Rating: Depends on your beliefs.

Package Selection
Different distributions have differing amounts of software in their repositories. For example, Red Hat's Enterprise Linux has a much smaller package offering than Debian.
See if the distribution has all the software you want easily available - and if not, check if there are any reliable, trusted 3rd party repositories for it.
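If you are unsure whether a distribution carries a package, you can usually search its repositories from the command line before you commit. A sketch, with the package name purely illustrative:

  $ apt-cache search openoffice    (Debian/Ubuntu repositories)
  $ yum search openoffice          (RHEL/CentOS/Fedora repositories)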
Importance Rating: High
What do your friends use?
It pays to consider the distribution choices of your friends - if they all use distribution X, it makes sense for you to use the same, so you can support each other. (Don't have any friends using Linux? Check if you have a local LUG group - if so, you may wish to get involved to meet new people, and also see what distributions they use - if you pick the same one, it can make getting local help easier.)
Importance Rating: Medium

What style suits you?
This may sound silly, but sometimes you just don't like a particular distribution - not for any real technical reason, but simply because it doesn't "suit" you (e.g. you may love the style of Red Hat EL but dislike the style/feel of Debian, despite both being good distributions). Try out some live CDs of distributions that you are interested in, and see how they feel to you - using a distribution with a style that doesn't suit you can make for an unpleasant experience. Another good thing to do is to install your Linux distribution with a separate /home partition to store all your data - then you can quite easily change distribution without having to copy your data off and back onto your PC - you can even have more than one distribution using the same /home partition if you wish.
Importance Rating: High
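To illustrate the separate /home idea, here is what the relevant part of /etc/fstab might look like - the device names are purely illustrative, and ext3 is just one common filesystem choice:

  /dev/sda1   /       ext3   defaults   0 1    (root: reinstall distributions here)
  /dev/sda2   /home   ext3   defaults   0 2    (home: your data survives reinstalls)

When you switch distributions, you reformat only the root partition and tell the installer to mount the existing /home partition without formatting it.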
Language Support
If you use English, you won't have any problems, but if you want a distribution that can be used in another language, make sure you choose one with good support. Also see if many speakers of that language use it - if your English isn't too good, it is very useful to have people you can ask technical questions in your native language.
Importance Rating: Medium
Conclusion
Hopefully the information above will help you in your distribution decision making - if you're stuck on the many choices, here are some suggestions. I personally recommend Fedora on the desktop and CentOS (a RHEL rebuild) for servers, as CentOS has a long support life and is feature stable. It is also extremely well designed, and I find it a great product to work with. In cases where commercial support is needed, I use Red Hat Enterprise Linux (RHEL).
RHEL has proven itself time after time running mission-critical services such as email, web sites and databases, and I certainly recommend it for purposes ranging from a simple home desktop to a complex, high-performance enterprise server - if you ever work with Linux in the corporate market you will certainly come across RHEL, where its reputation is well known. I also use Ubuntu on my desktop and laptop, as it is feature stable, has a long support life (I use 6.06 LTS - the long term support edition), and has recent versions of the programs I want. Ubuntu has a large portion of the desktop distribution market share and is a very popular distribution - you certainly won't go wrong choosing Ubuntu for most purposes. Some other distributions that are popular and probably worth a look are:
1. Debian
2. openSUSE
3. Mandriva
All the well-known distributions are good; it simply depends on which distro you feel comfortable with. My recommendations are based on my personal choice and carry no other implication - you are free to choose your own distribution. The open source world believes in freedom, and so do I.
Acknowledgement: Many documents, on http://reallylinux.com and www.redhat.com to name just a few.
Career
Linux Skill Certification
A widely-recognized Linux certification ensures that IT professionals have a means of demonstrating their Linux skills to potential employers while assuring customers that they are receiving knowledgeable support. There are a number of benefits to obtaining a Linux certification, both for yourself, in advancing your professional career, and for your employer.
The Linux Professional Institute (LPI) is an independent, non-profit corporation that has evolved out of the Linux community, focusing solely on setting certification standards. LPI is completely vendor-independent and distribution-neutral, and its tests have been developed specifically to test competence with all versions of Linux. Red Hat and Novell have their own certification programs, which focus exclusively on their distributions, Red Hat and SUSE Linux respectively.

HOW DO I PREPARE FOR LPI EXAMS?
LPI specifies only the standards for certification and does not specify at all how you prepare for it. You can simply look at the objectives and study on your own, read books or take training classes.
In early 2002 LPI initiated the LPI Approved Training Materials (LPI ATM) program. The intent of this program is to offer an open, objective method through which courseware can be checked for completeness of its coverage of the objectives in the LPIC program. Courseware that has achieved approval through the LPI ATM program can be used with confidence that it properly covers the contents of the LPI exams.

PROCEDURE FOR TAKING EXAMS
LPI exams are designed to be taken in almost every country in the world, in a manner that fits your schedule. Just follow the five steps listed below.
Step 1: Register
Regardless of how or where you take your LPI exams, you must first register online to receive an LPI identification number. You must provide your LPI ID to the test centres when registering for an exam. Registration is free of charge and is completed at the LPI web site.
Step 2: Choose Your Program
LPI is developing multiple certification programs at multiple levels. It is important to choose the right program in which you want to be certified. There are currently two exams for the first level (LPIC1) and two for the second (LPIC2).
Step 3: Understand LPI Policies
It is important that anyone planning to take LPI exams is aware of LPI's policies on issues such as exam re-takes and expiry. Please read and understand these policies, available at the LPI web site, before taking the exams.
Step 4: Schedule to Take an Exam
LPI exams are delivered through a number of different methods; choose the one that's most convenient:
1. LPI exams are delivered through VUE (www.pearsonvue.com) and Thomson Prometric (www.prometric.com). Visit their web pages to find a testing centre near you.
2. LPI holds exam labs at major Linux and IT trade shows and conferences around the world, where a number of paper exams are made available for candidates.

Step 5: Take the Exam!
If you're taking a computer-delivered exam, you will receive your exam results immediately upon completion. Within one or two days your results will appear in the member area of the LPI site. Please make sure the test vendor has your correct LPI ID on file, or your results will not appear there. If taking a paper exam, you should receive results in about two weeks.

Exam Costs
Depending on where you take LPI exams, the cost will usually be the equivalent of US$150. However, please check with your local affiliate for regional prices.

Exam Scoring
For information about interpreting your scores, please see the LPI site's entry on the subject. If you have additional questions, please email info@lpi.org.
Receiving Your Certification Once you have passed all the required exams to achieve a certification, you will be automatically notified by email, at the email address you provided when you obtained your LPI identification number. Within four to eight weeks of receiving this email, you should receive, by post, your paper certificate and plastic LPI wallet card.
Contact us (contact@pragfoundation.net) if you need help to train for LPI certification. We also have limited funding to help candidates with the exam fee for LPIC2.
Prag Foundation
A public Charitable Trust
26, Adarshapur, Byelane No. 2, Kahilipara, Guwahati 781003
website: www.pragfoundation.net email: contact@pragfoundation.net
Our current services, in collaboration with the Alliance for Community Capacity Building in Northeast India (a charity registered in England):

1. Training and seminars on information access and the open access movement, particularly suitable for scholars and academicians in all subject areas.

2. Regular workshops on Free and Open Source Software (FOSS) applications, e.g., an introduction to OpenOffice.org, graphics, multimedia, games and educational applications, aimed at general users.

3. Regular workshops on GNU/Linux covering an introduction to GNU/Linux, configuration, program installation, updates and tools, cross-installation of several GNU/Linux distributions such as Ubuntu, Fedora and SuSE, comparison of Linux and Windows, localisation, and career opportunities. We provide self-learning material and support to IT trainees who wish to study towards Linux Professional Institute (LPI) certification, including LPI-approved learning material from internationally accredited agencies for self-learning using our facilities.

4. Building and hosting websites for non-profit organisations, including teaching institutions, on a not-for-profit basis. We host, or provide consultancy on, 'Open Access' journal and archive sites for higher education institutions using free software at a nominal cost.

5. Distribution of both Windows- and Linux-based popular Free and Open Source Software (FOSS), such as OpenOffice.org, a complete office suite alternative to MS Office, and GIMP, a graphics programme alternative to Adobe Photoshop, along with other educational software. The latest versions of popular Linux distributions are also available.

6. Building low-cost Linux systems (with optimal hardware configuration) preinstalled with quality free and open-source office and productivity software to suit home and office computing needs and your budget. Students, teachers and entrepreneurs on a tight budget can avail themselves of this service if they want to bypass the middlemen: you pay only the hardware component cost plus a small service fee.

We can conduct workshops and seminars at your institution, tailor-made to suit novice to relatively experienced users. We make provision for the necessary audio-visual equipment and laptops, so you need only enrol the participants, arrange a suitable venue and fix dates in consultation with us. Contact any member of the project committee, visit our website, or mail us at contact@pragfoundation.net for further details and how to benefit from our programmes.

WWW.PRAGFOUNDATION.NET | WWW.ACCB.ORG.UK | WWW.MYOPENSOURCE.IN | WWW.NGO.NET.IN | WWW.PUBLICKNOWLEDGE.IN
In collaboration with
ALLIANCE FOR COMMUNITY CAPACITY BUILDING IN NORTHEAST INDIA
Charity registered No. 1106666, Charity Commission for England and Wales
WWW.ACCB.ORG.UK | WWW.NGO.NET.IN
GNU/LINUX WORKSHOP
To participate in our next GNU/Linux workshop, contact us at contact@pragfoundation.net. The course fee is Rs. 1500, which includes a CD containing manuals and other useful resources. The course is suitable for novices as well as new Linux users. We also recommend the workshop to those who want to explore Linux system administration as a career option and take LPI certification. Successful completion will earn you a colourful GNU/Linux certificate.
''I attended the GNU/Linux workshop conducted by ACCB. I, being a computer science student, have been through the operating system course in my curriculum, where I used Linux in the lab with hardly any knowledge about its workings and functions. After attending the course I gained a basic idea of the workings of Linux. This course helped me a lot to build a foundation on Linux. Linux, as we know, is a very vast and strong operating system; mastering it completely is not possible. The course not only taught me the basics of Linux but also the right way to learn Linux. I feel lucky that I got to attend this short but effective crash course on Linux, which paved the way for learning more of Linux.''
Pranamita Baishya (pranamita_911@yahoo.co.in)
Know How
Essential Components of a Desktop Computer
Sanjay Dutta offers a few tips to help you make an informed decision when purchasing a new desktop PC.

A computer has become easily affordable today for a middle-class family. Until about a decade ago, computer users in Assam were generally DTP composers, software professionals, research scholars and the like. By the end of the last century, however, multimedia applications had improved considerably and the Internet had become easily accessible. These factors, together with falling prices, made the Personal Computer (PC) a household item. Since a laptop is a rather personalised thing, the first computer bought for a household is generally a desktop PC. Today an ever-increasing number of households, including many in small townships and villages, are interested in buying PCs. The aim of this article is to give you, a potential buyer of a new desktop PC, some tips for making a well-informed buying decision.

The Main Components of a Desktop PC
The component around which a PC is built is the CPU (Central Processing Unit). The function of the CPU is analogous to that of the human brain: just as every human action is controlled by the brain, every task performed by a computer is directed by the CPU. Intel is the largest supplier of CPUs today; AMD (Advanced Micro Devices) comes a distant second. AMD was, until very recently, the producer of the fastest and most efficient CPUs with its Athlon64 line. However, Intel has overtaken AMD on both counts with the launch of the Core2 line. The Athlon64-X2 is still preferred by many because of its better price-to-performance ratio. There are CPUs available for still lower budgets, e.g. Intel's Celeron and AMD's Sempron.

The CPU of a PC fits into what is called the Motherboard or Mainboard, a circuit board with a set of chips known as the chipset. It is important to know that the chipset for one line of CPUs does not support CPUs of a different line; therefore motherboards for Intel CPUs will not work with AMD CPUs. Some notable motherboard makers are Asus, Gigabyte and MSI. Intel also supplies motherboards under its own brand name.

After the CPU and motherboard comes the next essential part: RAM (Random Access Memory). There are different types of RAM, e.g. SDRAM, DDR and DDR2. What type of RAM you need depends on the combination of CPU and motherboard you have selected. A new PC cannot use SDRAM; DDR is the common type now, although the newer chipsets are designed to use only DDR2. How much RAM do you need? This is a tough question to answer. A total of 256 Megabytes is the bare minimum required today, but to use the PC comfortably you will need 512 Megabytes or more. Kingston, Transcend and Hynix are some of the commonly available brands of RAM.
Another extremely important component of a PC is the Hard Disk Drive (HDD), the main storage medium of a PC. The installed operating system, application software, user-created data, digital music and video etc. are stored on the HDD. The minimum size of HDD you should go for is 80 Gigabytes. Seagate is the largest-selling brand of hard disks in India; Samsung, Hitachi, Maxtor and Fujitsu are also well-known manufacturers.

A hard disk is generally fixed inside the PC, and you need a CD or DVD drive to install programmes on it. CD and DVD drives are available both as read-only and as writable. A cost-effective solution is a combo drive, which can read both CDs and DVDs and can write to recordable CDs. A Floppy Disk Drive (FDD) is also still in use for transporting small amounts of data on floppy disks, but you should rather have a pen drive for that purpose.

The components discussed above are assembled inside a cabinet, which also houses a unit called the SMPS that distributes power to the motherboard and the disk drives. Remember that the SMPS should support at least 300 watts. The keyboard and mouse connect externally to ports on the motherboard, found at the rear of the cabinet. If you don't mind spending a few hundred rupees more, you can have a wireless keyboard and mouse set. If you do a lot of typing, it makes sense to buy a mechanical keyboard.

The most visible component of a PC is the monitor. Though the market for 15" CRT (Cathode Ray Tube) monitors is still strong, I recommend going for a 17" monitor because the price difference is not much. A 15" LCD (Liquid Crystal Display) flat-panel monitor can display as many pixels as a 17" CRT while consuming far less space and energy, but the average price of a 15" LCD monitor is higher than that of a 17" CRT.
Most motherboards in the market today have graphics, sound and LAN integrated, so the average user does not have to spend extra on these. The recent trend among young people is to spend a few thousand rupees extra on a dedicated graphics card to play games at high resolution. For Internet connectivity through a phone line you can buy either a dial-up modem or an ADSL modem. Considering the state of electric power supply in our region, you must have a UPS (Uninterruptible Power Supply) too.

Branded vs Assembled PC
We generally call a PC configured and sold by a reputed company under its own brand name a branded PC; a PC put together from components obtained from a local vendor is called an assembled PC. Until a couple of years ago branded PCs used to cost significantly more than their assembled cousins of the same configuration. This price difference shrank fairly rapidly, and today some branded models actually cost less than the same components bought locally. One popular perception is that a branded PC is always superior to an assembled one. That is not always true. I agree that reputed companies will not risk their brand image by selling a PC made of substandard parts. But the downside is that the buyer has no say in the configuration of a branded machine; many a time you will see a branded PC with a very powerful CPU but only 256 MB of RAM, bundled with a 15" CRT monitor. In the case of an assembled PC the buyer decides the configuration. The main argument against an assembled PC is that if you lack basic knowledge of computer components, you may be taken for a ride by a dishonest vendor. To sum up, there are pluses and minuses to both branded and assembled PCs; in fact both types are manufactured the same way, and you can add or upgrade components in both. Before making a buying decision, first think about the way the PC is going to be used at present and in the near future, and determine the configuration on that basis. After that, I recommend a branded PC of a reputed company for a novice and an assembled one for an experienced buyer.

Different configurations for different needs
From maintaining the daily accounts of a small business to editing a blockbuster movie, a PC can be used in myriad ways. In addition to professional usage it also serves as an entertainment tool. The minimum requirements for the components of a PC differ for different types of applications. For example, if you use the PC only for typing articles and reports and occasional Internet browsing, a very low-budget configuration will serve your purpose. On the other hand, if you are a heavy gamer or do a lot of audio-video editing, you definitely need a top-end configuration. Remember that there is no such thing as a completely future-proof PC: today's top-end model will be outdated three to four years from now. The trick, therefore, is to select a configuration that should suffice for your requirements for about three years. I shall now classify some applications of a PC in order of hardware requirements:

Type 1: Typesetting reports, theses etc., account keeping for small businesses, Internet browsing and occasional listening to MP3 music. These applications are not at all demanding, and the configuration within the lowest budget will suffice.

Type 2: Desktop publishing, low-resolution gaming, account keeping for large firms, simple types of computer programming etc. These applications too do not need very powerful PCs.

Type 3: Publishing of complex graphics-intensive documents, photo editing, CPU-intensive programming etc. You'll need a fairly high-end PC for these.

Type 4: Video editing, sound mixing, high-resolution gaming and complex engineering design etc. For these types of applications you should go for a top-end configuration.
The following is an indicative list of some of the components for the above types:
Conclusion
A PC with any of the configurations discussed above is not usable until an OS (Operating System), the software on top of which all application software runs, is loaded onto it. There are two main types of OS and application software: FOSS (Free and Open Source Software) and proprietary. The most versatile OS, which happens to be FOSS, is Linux. As of today you may see a majority of PCs in our region running Microsoft's proprietary OS Windows. A closer look will reveal that, barring a negligible few, all are using a pirated version without a licence. The latest licensed Windows version costs upwards of five thousand rupees, while Linux is not only free but can also be distributed freely. Further, a Linux distribution comes with most of the application software that the average user needs, so you can start working right after installation. However, to work with Windows without violating any law, the five thousand or more for the OS is only the beginning: you have to pay through the nose for a licence for each piece of software for word processing, spreadsheets, presentations, image editing, desktop publishing and so on. Note that every popular Windows program for the tasks mentioned above has a free counterpart on Linux. The majority of the scientific research community throughout the world uses Linux, and the most important servers on the Internet run on Linux.

It is worth mentioning that the Govt. of Assam awarded PCs to all first division holders of the HSLC examinations of the past couple of years, and those PCs came preloaded with the Linux OS. Unfortunately, most of the awardees were unaware of the strengths and legality of the Linux OS and wanted instead to have an illegal copy of Windows installed on their PCs. I feel it is a duty of all of us to spread awareness of FOSS alternatives among all PC users of our region and to inculcate in our youngsters the importance of adhering to lawful practices.

Credits:
1. www.tomshardware.com 2. www.anandtech.com 3. www.techtree.com 4. www.linux.org
Know How
Six Steps to Adopting Open-Source Software at Your Organisation

Practical ways to deploy and use open-source software

Are you debating whether to adopt Linux or open-source software at your organisation, but can't figure out how to get started? The Nonprofit Open Source Initiative (NOSI) created this guide to help nonprofits make an informed decision about whether to make the switch. Here, NOSI outlines six steps you can take to begin to put open-source software to work in your organisation, and in the process learn more about it, its capabilities, and its cost-effectiveness.

Step 1: Shared Web Hosting

It is very common for small and medium-sized nonprofit organizations to purchase a Web and email hosting account from an external virtual hosting provider. These accounts cost from $10 to $40 per month; external (or virtual) hosting is popular because it requires less in-house support and is less expensive than running your own server.

There are many, many virtual hosting providers, and the vast majority of them use an open-source operating system, either Linux or BSD (another open-source UNIX variant). They use these because they are more cost-effective and stable, and it is easier to administer many machines with fewer staff than it is with Windows. If you are already using a virtual host for your Web site and you did not specifically ask for Windows, then you are very likely already using the open-source operating systems Linux or BSD, and the provider is almost certainly using Apache. You also likely have access to open-source application development using the quite popular languages PHP and Perl, and to the database system MySQL. Thus, you already have experience with OSS, use it every day, and you can check off Step 1! (Step 5 explores how to do the hosting yourself.)

Step 2: OpenOffice and Mozilla

Word processing, email, Web browsing and spreadsheets are the primary software programs used by nonprofit staff members. Fortunately, the proprietary programs typically used for these functions all have well-developed open-source alternatives that run on Macintosh and Windows platforms in addition to Linux. You can download and install one or both of OpenOffice and Mozilla. OpenOffice is a full-featured office suite that can read and write Microsoft Office files (.doc, .xls, .ppt), and Mozilla offers the open-source programs Firefox and Thunderbird for Web browsing, email, IRC, and HTML editing. All of these packages install easily and are easy to try out and evaluate.

A security note for the desktop trial below: don't place a test Linux desktop on a static public IP address without NAT or a firewall (talk to your tech staff member or consultant if you have one). Although Linux is generally regarded as more secure than Windows, like any computer you put on your network, you need to know how to secure it before it is exposed to the public Internet.
Step 3: Small Desktop Trial
If some of your staff are primarily using only the programs mentioned in Step 2, then you could experiment by installing Linux on an extra workstation on your internal network. In addition to providing the applications mentioned in Step 2, Linux comes with many other multimedia and productivity applications.
To evaluate Linux on the desktop, you can take an old desktop that might be gathering dust in a corner (preferably with a Pentium processor of 400 MHz or better) and install a Linux distribution on it. For a thorough listing of Linux distributions, their features, cost, licensing, and other information, visit Wikipedia's Comparison of Linux distributions. Acquiring and trying out various Linux distributions will give you an idea of how to use Linux on the desktop, introduce you to a wide range of OSS packages to test out, and is a good way to understand how Linux works. In addition, there are several ways (see the list below) to use Windows software on your Linux desktop when needed.
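If you are unsure whether an old machine clears that rough 400 MHz bar, a quick look from a live-CD shell will tell you; a minimal sketch, assuming the standard /proc filesystem:

    # CPU model and clock speed
    grep -i 'model name\|mhz' /proc/cpuinfo
    # Total installed memory (in kB)
    grep MemTotal /proc/meminfo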
Step 4: Network File and Print Server

One of the easiest ways to use Linux in a network environment is as a file and print server, replacing or retiring the Windows server that might currently serve this function. (Note: a dedicated file/print server is recommended for organizations with seven or more staff members.) The case studies show examples of the use of Linux for just that purpose. SAMBA allows the Linux server to share network directories (folders) so that they can be accessed by Windows clients.
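A minimal sketch of such a share in /etc/samba/smb.conf (the share name and path are invented for illustration and are not from the NOSI guide):

    [office]
        comment = Shared office documents
        path = /srv/office
        read only = no

    # After editing, reload SAMBA, e.g. `service smb reload` on
    # Red Hat-style systems.

Windows clients can then browse to \\servername\office just as they would to a share on a Windows server.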
If you would like to use Linux as a print server, and you have an unusual (or very new) printer, we recommend checking out www.linuxprinting.org to make sure that Linux supports your printer.
Step 5: Self-Hosting of Web, Email, and Email Lists
As mentioned above in the virtual-host section, Linux is very good at Internet server functions (Web and email hosting, among others). If you have a DSL connection with a static IP address (you generally have to pay more for such an account), or a T1 or higher broadband connection, then self-hosting your Web site and email is quite easy using Linux. You can easily use an older server machine or desktop for this function. Again, you can find or download any distribution of Linux that you like. If you do not want to take on the responsibilities and cost of hosting your server yourself, you can get a dedicated Linux server from many hosting providers, starting at around $99 a month. With this kind of server you can install any specialized open-source software (OSS) that you might want to use in your organization.
Unlike Windows servers, Linux comes with all necessary server functions in the box, and there are no per-seat licenses for anything. (Windows servers do come with Internet Information Services (IIS), the Windows Web server, at no additional license fee, but additional server software, such as email, costs extra.) Unlike Microsoft Exchange, where you have to spend $8 (discounted) to $40 per email account, Linux allows you unlimited email addresses at no additional licensing cost. There are a number of mail servers available, including Sendmail, Postfix, and Exim. Linux also comes with Apache, the most popular Web server.
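As a small illustration of what "in the box" means in practice (assuming a Red Hat-style distribution whose init scripts are named httpd and postfix; other distributions name and manage these differently):

    # Start the bundled web and mail servers and enable them at boot
    service httpd start
    service postfix start
    chkconfig httpd on
    chkconfig postfix on
    # Quick local check that Apache is answering
    curl http://localhost/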
Email lists (discussion lists, e-newsletters, fundraising appeals) have become increasingly important to nonprofit organizations. There are a number of open-source mailing list managers for Linux/UNIX, with a broad variety of functionality and ease of use. Probably the most popular and easiest to use is a program called Mailman. Others include Majordomo, Sympa, SmartList, and EZLMM.
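As an illustrative sketch (the installation path varies by package and is an assumption here), creating a new Mailman list from the command line looks roughly like this:

    # Create a list called "members"; newlist prompts for the list
    # administrator's email address and password, then prints alias
    # lines to add to your mail server's aliases file.
    /usr/lib/mailman/bin/newlist members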
Step 6: Moving Towards an All Open-Source Office

There are a variety of other open-source tools that can allow you to move to an entirely open-source office.

Database Servers
Two database servers are often used in Linux/UNIX environments (and both have been ported to Windows): MySQL and PostgreSQL. Both are popular, MySQL more so. They can be used for any basic DBMS function that MS SQL Server (or even Oracle) can serve. MySQL is most often used for Web-based databases, while PostgreSQL is considered a possible replacement for Oracle because of how full-featured and robust it is. Both can be used as back ends via ODBC, with Microsoft Access serving as the GUI (graphical user interface) front end. There are also the membership and donor management packages eBase and ODB; although they are built on proprietary development environments (FileMaker and Visual Basic 5 respectively), they provide access to the source code.
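For instance (the database and user names below are invented for illustration), creating a database that a web application or an Access front end could connect to takes only a couple of statements on a stock MySQL install:

    mysql -u root -p <<'SQL'
    CREATE DATABASE donors;
    GRANT ALL PRIVILEGES ON donors.* TO 'staff'@'localhost'
        IDENTIFIED BY 'choose-a-password';
    SQL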
There is an open-source server-based accounting package, called SQL-Ledger, which some nonprofits have begun to use, and a desktop accounting package called GnuCash for Linux.
Undoubtedly, the options will improve as developers realize that there are needs to be addressed. NOSI has generated an online database of open-source projects that are specifically of interest to nonprofit organizations, and a CD (distributed with some versions of this booklet) that provides installation programs for relevant Windows and Mac-compatible OSS programs.
The NOSI guide also includes two tables comparing proprietary software options with OSS options, for both the desktop and the server.

© 2006 Nonprofit Open Source Initiative (www.nosi.net). This work is published under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License, which can be viewed at http://creativecommons.org/licenses/by-nc-sa/2.5/.
FAQ
Why Use Open-Source Software Technology

Introduction
You have probably heard something about "open source" and "free" software solutions such as Linux, Perl or Apache. Many people have the mistaken impression that open-source software is not for serious or large web sites. Let's dispel that myth right now: Etoys.com was the third-busiest e-commerce site during the 2000 Christmas season, serving over 2.5 million page views and processing 20,000 orders per hour. Etoys was built using Perl running on the Apache server under the Linux operating system, all open-source software tools. Interested in learning more? This article will teach you about the benefits of using open-source software.

What is Open Source Software?
Open-source software generally is distributed under a license that guarantees the right to read, modify, redistribute and use the software freely. Open-source software may be developed by a community of programmers interested in developing a software application for a specific purpose. Companies may also develop open-source software; these companies distribute their software for free and make their money from support contracts and customized development.
Much of the open-source software is distributed under the GNU General Public License (GNU GPL). The GNU GPL allows you to copy, use, modify and redistribute the software, but prohibits companies or individuals from making modified versions proprietary. Richard Stallman, a MacArthur "genius" award recipient, developed this license in order to encourage the development of a software-sharing community. Tens of thousands of developers and several large corporations such as IBM, Sun Microsystems, and Intel have chosen to participate in the open-source software movement. Some of the most successful and robust software on the web today has been developed under this license.

Open Source Software Dominates the Web
The web is dominated by open source software solutions.
1. Over 58% of the web runs on the Apache Web Server, compared to 28% for Microsoft's IIS Web Server.
2. Linux and FreeBSD, both open-source flavors of UNIX, are the dominant operating systems for web servers, not Microsoft Windows 2000 or Windows NT. Linux has a 34% market share for web server operating systems; Microsoft Windows has a 23% market share (source: Linux Today: www.linuxtoday.com).
3. The most widely used language for web programming is Perl, not Microsoft's Active Server Pages. Perl has often been referred to as the "glue" that holds the Internet together.
4. Over 60% to 80% of e-mail travels across the Internet using the open-source program Sendmail (source: E-soft Inc.: www.securityspace.com).

Open Source Software is Used By Some of the Largest Web Sites
Some of the most popular and most successful high traffic web sites use open source software solutions. (source: Netcraft: www.netcraft.com)
1. Yahoo! is running FreeBSD (free Berkeley UNIX).
2. Amazon is running Linux and the Apache server, and its e-commerce system is written in Perl.
3. Google (the world's largest search engine) is running Linux.
4. Altavista, a popular search engine, is running Linux.
5. AOL uses AOLserver, which is open source.
6. CNN uses Perl.
Why do these large web sites use open-source solutions? Because many open-source solutions such as Linux and Apache are rock solid, and others such as the MySQL database and the PHP and Perl programming languages have a large user community with a wide range of support.

Benefits of Open Source
Cost: Open-source software available under the GNU GPL license is free. In some cases you may choose to pay for the distribution (a compiled version containing installable executable software on CD-ROM). The cost of the distribution is generally trivial compared to the cost of many enterprise-level commercial offerings. In addition, the developers of many open-source solutions offer support contracts suitable for all levels of business or organization.

Software Source Code: When you purchase a license to use most commercial software, you are dependent on the software designer to add features or customize the software for the needs of your business or organization. The manufacturer provides you with only the executable program; you do not have access to the source code. With open-source software, you are free to modify the software and customize it to suit your application.

Scalability and Robustness: A large community of highly skilled software developers has created open-source solutions such as Linux, Perl, and Apache. As you can see from our examples, open-source software is used across the full spectrum of web sites. Open-source UNIX-based operating systems such as Linux and FreeBSD are extremely robust and efficient, and they suit both small and large organizations.

Large Support Community: A large community of developers communicating through online discussion groups supports many open-source offerings. This allows common problems to be solved easily and bugs to be exposed and fixed quickly.

Security and Protection of Proprietary Data: There is a myth that open-source software is more vulnerable to attack than proprietary solutions. Actually, the opposite is often true: because the source code is exposed, it is often easier for a security-minded software community to close security holes or breaches.
Case Study
Celestial Graphics uses a combination of commercial development tools and open-source software to develop the best and most cost-effective web site designs for its clients.

Web Site Development Tools
They use Macromedia Dreamweaver, Fireworks and Flash. Dreamweaver is used by over 70% of web design professionals (source: PC Data).

Web Programming Language
Celestial Graphics uses PHP for server-side application development. PHP is an open-source solution built from the ground up specifically for web applications, and it enjoys a large developer community. PHP runs under Linux and can be compiled as an Apache module, which makes it very fast. PHP is one of the fastest-growing languages on the web: over 5 million domains (about 800,000 IP addresses) are running PHP (source: http://www.php.net/usage.php). We chose PHP because it has the flexibility of Perl but was built from the ground up for web application development. PHP is fast, robust and scalable.

Database
Celestial Graphics uses MySQL, an open-source high-performance relational database management system (RDBMS).
Other relational database management systems include Oracle, Informix, Sybase, and SQL Server. MySQL is widely used on the web and has interfaces (APIs) to most programming languages, including C++, Java, PHP and Perl, as well as ODBC.

How does MySQL stack up against commercial offerings?
In terms of raw speed, MySQL benchmarks faster than many other databases, such as Microsoft's SQL Server, and performs favorably against industry heavyweights such as Oracle. MySQL is fast because it was designed primarily as a web-based relational database management system.

Who Uses MySQL?
MySQL is used by Yahoo, Slashdot, and Linux Today. In addition, MySQL is used as a backend database for the NASA and NASA Kids web sites.

E-Commerce Solutions and Content Management Systems
Interchange: Red Hat's Interchange is a widely used e-commerce shopping cart and web-based order management system. Originally called Minivend, this software has features rivaling systems costing well over $100,000, yet is available for free under the GNU GPL license. Interchange contains full-featured web-based administration of order entry, inventory, product, content and customer management, and it connects to a wide variety of databases including MySQL and Oracle.

EZ Publish: A highly flexible and complete Norwegian content management system. EZ Publish allows you to separate your content from your web site design; the design is completely template-based for flexibility, and content can be uploaded through a web interface. Some of its features:
1. Community building: user polling, moderated discussion groups, user registration and management, and a calendar of events.
2. Commerce module: internationalized online shopping system.
3. Banner advertising
4. News feeds
5. Online statistics
6. Database-driven with search-engine-friendly URLs
Software Review
Openoffice.org 2 is a new, improved version that gives Microsoft a run for its money. Read the full OpenOffice.org 2.x and Microsoft Office 2007 feature comparison at http://www.openoffice.org.
Price: £0.00 (free download) | Manufacturer: Openoffice.org
Openoffice is the open-source suite of business applications that's gradually gaining wider acceptance as an alternative to Microsoft Office (just as Linux is an alternative to the Windows operating system). The business software market is still strongly dominated by Microsoft, of course, so this upgrade concentrates on improving compatibility with Office documents and making existing Office users feel more comfortable about switching.
The main programs are Writer (word processing), Calc (spreadsheet) and Impress (presentations), and they look and feel much like Word, Excel and PowerPoint. They now have more flexible toolbars that can be rearranged to customise the workspace. There's improved support for Microsoft's XML file formats, plus the suite has a new default XML format called Opendocument. All this talk of XML may sound rather obscure, but support for it is increasingly important for large corporations. In addition, the Opendocument format is one of the official European Union XML formats. Supporting it means that Openoffice can be used by EU departments themselves and also by companies that supply products or services to the EU.

Moving on to the individual programs, the similarities between Writer and Word are striking, and Writer now includes features that are directly comparable to Word's. One example is Customshapes, which mimics Autoshapes. These are vector graphics objects, such as stars and arrowheads, which you can add to your documents to create diagrams or flowcharts. As well as being a useful addition, Customshapes are compatible with Autoshapes, which means you can now import Word documents that contain Autoshapes into Writer. Writer is also now better at importing Word documents that contain tables, and its own table tools have been improved, allowing you to create 'nested' tables by inserting small tables within the cells of an existing one. And Writer has added some features of its own, such as text frames that can be set to automatically shrink or grow as you alter the text within them.

The Calc spreadsheet hasn't been changed drastically, but it does have a few important new features. The Excel export filters have been improved, and Calc can now handle spreadsheets containing up to 65,536 rows, the same as its rival. There's also a new filter called the Datapilot, which is similar to Excel's Pivottable data analysis tool. There aren't many changes to the Impress presentations program, but there are major improvements to the suite's database tools. Previous versions of Openoffice did have some powerful database features, but you needed to be proficient with databases and SQL programming to get anywhere with them. This aspect of the suite has been completely reworked, with the inclusion of a new database program called Base. This can be used to create database files, reports and queries, just like Access, Filemaker or any other conventional database.
There's even a wizard to help new users get started with creating and managing their databases. The only minor drawback here is that Base is Java-based, so you'll need to make sure you have a 'Java run-time environment' installed on your PC before you can use it.
Admittedly, Openoffice still can't match the vast range and depth of features found in Microsoft Office, but as Microsoft itself often says, 80 per cent of Office users only use 20 per cent of its features. With that in mind, there's no doubt version 2 is more than powerful enough for most home and business users and can certainly give Microsoft a run for its money, especially since it's free.
OpenOffice.org is both an open-source product and a project. The product is a multiplatform office productivity suite. It includes desktop applications such as a word processor, a spreadsheet program, a presentation manager, and a drawing program, with a user interface and feature set similar to those of other office suites. OpenOffice.org also works transparently with a variety of file formats, including those of Microsoft Office.
Localizations of OpenOffice.org are available in 27 languages, with more being constantly added by the community. OpenOffice.org runs on Solaris, Linux (including PPC Linux), and Windows. Written in C++ and with documented APIs licensed under the LGPL and SISSL open-source protocols, OpenOffice.org allows any knowledgeable developer to benefit from the source. Version 2.2 resolves security issues and includes enhanced text display, better support for Pivot Tables in Calc, and several key improvements in Base.
We distribute OpenOffice.org 2.0; GIMP, an image editor similar to the commercial Photoshop; several major GNU/Linux distributions such as Fedora, OpenSUSE, Ubuntu and Debian; and other OSI-certified software for free, saving you the downloading time. You pay only for the cost of the media (CD/DVD or a USB pen drive). We can also download on demand any OSI-certified software for you, saving you time and effort. Contact us at contact@pragfoundation.net or any of our members.
Book Review
Perspectives on Free and Open Source Software (Paperback)
Edited by Joseph Feller, Brian Fitzgerald, Scott A. Hissam and Karim R. Lakhani; foreword by Michael Cusumano; afterword by Clay Shirky
Paperback: 576 pages | Publisher: The MIT Press; New Ed edition (March 30, 2007) | List Price: $22.00
Book Description
What is the status of the Free and Open Source Software (F/OSS) revolution? Has the creation of software that can be freely used, modified, and redistributed transformed industry and society, as some predicted, or is this transformation still a work in progress? Perspectives on Free and Open Source Software brings together leading analysts and researchers to address this question, examining specific aspects of F/OSS in a way that is both scientifically rigorous and highly relevant to real-life managerial and technical concerns.
The book analyzes a number of key topics: the motivation behind F/OSS -- why highly skilled software developers devote large amounts of time to the creation of "free" products and services; the objective, empirically grounded evaluation of software -- necessary to counter what one chapter author calls the "steamroller" of F/OSS hype; the software engineering processes and tools used in specific projects, including Apache, GNOME, and Mozilla; the economic and business models that reflect the changing relationships between users and firms, technical communities and firms, and between competitors; and legal, cultural, and social issues, including one contribution that suggests parallels between "open code" and "open society" and another that points to the need for understanding the movement's social causes and consequences.

We can order the book from Amazon on your behalf if necessary. Contact us at contact@pragfoundation.net
Classified
Want to see your advert here? Contact us at contact@pragfoundation.net
OA Resources
Leading Open Source Projects
Apache | http://www.apache.org/ More than 50% of all web servers are powered by Apache. Apache was created by a loose confederation called the Apache Group, who started out with a few "patches" to the web server originally created at the National Center for Supercomputing Applications (NCSA, the same outfit that built the original Mosaic browser). They have created the dominant industry standard. The Apache Server Project is the main thrust of the Apache community.
C2NET Software | http://www.c2.net/ Most noted for their Stronghold servers and full-strength encryption software.

Perl and other Scripting Languages
Perl (Practical Extraction and Report Language) is a scripting language written by Larry Wall, with contributions from thousands of others. In the words of Hassan Schroeder, Sun's first webmaster, "Perl is the duct tape of the Internet."

www.perl.com | http://www.perl.com/ Launched by Tom Christiansen, www.perl.com provides a starting place for finding out everything about Perl. www.perl.com is also a mirror for CPAN, the Comprehensive Perl Archive Network.

The Perl Institute | http://www.perl.org/ A non-profit organization dedicated to making Perl more useful. Volunteers and interested parties are welcome to contribute.
The Perl Journal | http://www.tpj.com/ A quarterly publication that offers a mix of articles for beginners and power users alike.

ActiveState | http://www.activestate.com/ ActiveState Tool Corp developed the original implementation of Perl for Win32, and is now the leading source of Perl tools for the Windows platform.

Tcl Resource Center | http://www.scriptics.com/resource/ Tcl (Tool Command Language) is a powerful and flexible general-purpose command language. Scriptics is the company recently formed by Tcl's creator, John Ousterhout, to provide scripting tools, applications, and services based on Tcl.

Python | http://www.python.org/ Python is an interpreted, interactive, object-oriented programming language that is often compared to Tcl, Perl, Scheme or Java. This site provides downloads, news, and much more.

Scriptics | http://www.scriptics.com/ A commercial venture that provides scripting tools, applications, and services. Scriptics is the primary distribution point for Tcl/Tk.

GNU Project | http://www.gnu.org/gnu/gnu-history.html The GNU Project began in 1984, when Richard Stallman set out to write a UNIX-compatible operating system and give it away for free. The GNU Project is also dedicated to eliminating restrictions on copying, redistribution, understanding, and modification of computer programs.

Cygnus Solutions | http://www.cygnus.com/ Cygnus provides single-source, UNIX, and Win32 desktop and cross-platform development tools for 32- and 64-bit microcontrollers.

Netscape | http://www.netscape.com/ One of the big two in the browser and server market. Netscape made their code open source in the spring of 1998.

mozilla.org | http://www.mozilla.org/ Mozilla.org is Netscape's central point of contact and community for those interested in using or improving their source code.

Packaged Open Source Software: Hundreds, if not thousands, of companies now sell commercially packaged and supported open-source software. While many open-source packages do run on proprietary systems (Apache is quite popular on all operating system platforms), Linux distributions provide a complete (and in some cases exclusively) open-source environment suitable for hand-held, desktop, server, and high-end enterprise/cluster/mainframe use.
Linux
a free implementation of a UNIX-like OS for personal computers created by Linus Torvalds
Debian GNU/Linux | http://www.debian.org/ Debian is a free operating system built on the Linux kernel. Included with it are over 1526 packages (precompiled applications).
Red Hat Software | http://www.redhat.com/ A computer software development company that sells products and provides services related to Linux, a freely available UN*X-like operating system.

Caldera | http://www.caldera.com/ Caldera couples open-source operating system technologies, such as Linux, with traditional industry business practices, including channel partner distribution and support; leveraged third-party application development; and internal research and development, marketing and technical support.

The Silicon Valley Linux User Group (SVLUG) | http://www.svlug.org/ A very active Linux user group located in the heart of the valley.
VA Research | http://www.varesearch.com/ Develops Workstation, Server, and Internet products using the Linux operating system.
BSD
an advanced BSD UNIX operating system for "PC-compatible" computers, developed and maintained by a team of volunteers. There are several active BSD efforts
bsd.org | http://www.bsd.org/ An umbrella site for all versions of BSD, both free and commercial.

FreeBSD | http://www.freebsd.org/ The home site for FreeBSD.

NetBSD | http://www.netbsd.org/ The home site for NetBSD.
OpenBSD | http://www.openbsd.org/ A free BSD with emphasis on portability, standardization, correctness, security, and cryptography.
Free Software Distributors: The following companies and organizations are clearinghouses for a variety of open source and free software
Prime Time Freeware | http://www.ptf.com/ptf/ Prime Time Freeware (PTF) publishes mixed-media (book/CD-ROM) collections of freeware, otherwise known as freely redistributable software.

The Written Word | http://www.thewrittenword.com/ Provides precompiled Open Source binaries for Solaris, HP-UX, IRIX, Tru64 UNIX, and AIX.

Walnut Creek CDROM | http://www.cdrom.com/ A long-time supporter of the free OS's, Walnut Creek publishes freeware on CDROM.
Other Internet Software Some of the most important software that runs the Internet is open source
BIND | http://www.isc.org/bind.html BIND, the Berkeley Internet Name Domain, makes the DNS work. Without it, you'd be typing addresses like 204.148.40.9 instead of www.oreilly.com. BIND was originally created as part of one of the great early free software efforts, Berkeley UNIX, and is now maintained by Paul Vixie of the Internet Software Consortium.
Enhydra | http://enhydra.org/ Enhydra is an open source Java/XML Web application development and deployment environment.
Majordomo | http://www.greatcircle.com/majordomo/ Written in Perl, Majordomo is the program for managing Internet mailing lists. If you like Majordomo, check out MajorCool (http://ncrinfo.ncr.com/pub/contrib/unix/MajorCool/), a Web interface.
Sendmail | http://www.sendmail.com/ The majority of Internet email is routed by this open-source program. Sendmail is still maintained by its creator, Eric Allman, who recently started Sendmail, Inc.

XEmacs | http://www.xemacs.org/ XEmacs is a powerful and highly customizable open-source text editor and application development environment. It is very popular with Internet application and web site developers.
Rich References
Open Source on Wikipedia | http://en.wikipedia.org/wiki/Open_source Note that Wikipedia itself is implemented using the MediaWiki (http://www.mediawiki.org/wiki/MediaWiki) software, which is covered by the GNU GPL (http://www.gnu.org/licenses/licenses.html#GPL), a popular license that is both a free software (http://www.gnu.org/philosophy/free-sw.html) license and approved by the OSI as an Open Source license.

David Wheeler's Why OSS? Just look at the numbers! | http://www.dwheeler.com/oss_fs_why.html
David Wheeler's References | http://www.dwheeler.com/oss_fs_refs.html

The Free, Libre, and Open Source Software (FLOSS) Surveys and reports | http://www.infonomics.nl/FLOSS/

The Economic impact of FLOSS on innovation and competitiveness of the EU ICT sector, published 20 November 2006 by UNU-Merit | http://ec.europa.eu/enterprise/ict/policy/doc/2006-11-20-flossimpact.pdf
Books
The Cathedral and the Bazaar | http://www.catb.org/~esr/writings/cathedral-bazaar/

Open Sources: Voices from the Open Source Revolution | http://www.oreilly.com/catalog/opensources/

The Success of Open Source | http://www.hup.harvard.edu/catalog/WEBSUC.html
Producing Open Source Software | http://producingoss.com/
Open Source Software Directory
OpenSource Directory | http://www.opensourcedirectory.org/ OSD provides a resource for users to find stable open-source applications.

The Free Software Directory | http://directory.fsf.org/ Lists over 5,000 packages. (Yes, Free Software is also Open Source Software.)

sourceforge.net | http://sourceforge.net/ Lists over 120,000 projects in varying stages of development, mainly for developers. The public sourceforge.net site only hosts projects covered by an OSI-approved license (http://sourceforge.net/docs/about/02/).

freshmeat.net | http://freshmeat.net/ Lists over 60,000 new releases of Open Source packages, mainly for people who want to download the latest releases from developers. Strongly prefers software covered by an OSI-approved license (http://freshmeat.net/about/).
Open Source Conferences and User Groups
OSCON | http://conferences.oreillynet.com/oscon/ Note that O'Reilly Media puts on many other conferences, most of which have strong open-source components and/or constituents.

FOSS4G | http://www.foss4g2006.org/ Free and Open Source Geospatial Information Systems conference.
Blender Conference | http://www.blender3d.com/cms/Blender_Conference.52.0.html There are other regional versions of this conference that can be found at the www.blender.org website.

The Ottawa Linux Symposium | http://www.linuxsymposium.org/2006/ A premier event for hackers to discuss implementation experiences and chart the future of Linux.

The Wizards of OS conference in Berlin | http://www.wizards-of-os.org/
The FISL conference in Porto Alegre, Brazil | http://fisl.softwarelivre.org/
The FOSSSL conference (and other events) in Colombo, Sri Lanka | http://www.foss.lk/events/

User groups tend to focus on software, or groups of software, used in a particular context.

Linux users in Northern Virginia | http://novalug.tux.org/
PostgreSQL users in Dubai | http://pugs.postgresql.org/uae/
Indian Chapter of OSGeo | http://wiki.osgeo.org/index.php/India A group formed by GIS developers and users.
There are many thousands of user groups, and the best way to find the one that's right for you is to search the web, find the mailing lists, and make contact.
Reports
Reports of the tOSSad EU Project | The EU-funded tOSSad project has created a collection of 15 documents, comprising over 600 pages, that sheds light on the general economic and social benefits of Free and Open Source Software (F/OSS). The reports propose ways to overcome national barriers to F/OSS adoption, discuss how to improve the usability of F/OSS, outline a F/OSS curriculum for educational purposes and offer hands-on tips on setting up a Linux laboratory for schools: www.tossad.org.