Linux Magazine UK – Issue 27

Welcome

Ticking Boxes

Dear Linux Magazine Reader,

While waiting for a new distribution to install over Christmas, I passed the time reading a recent survey. Exciting stuff about the Total Cost of Ownership of different servers. Why am I sharing this nugget of information with you when you have enough spam mail and post to read yourself? Well, the survey is about Linux servers and was completed by a reputable company called IDC. The report compares different types of servers, from small web servers to file and print servers, over a five-year period. The conclusion was that Microsoft Windows 2000 was cheaper in every case except web serving, where the cost difference was marginal. The reason for the expensive Linux option was the support costs, where Linux engineers are more expensive. The survey was based on 104 large US companies and was conducted by telephone interviews. A very clever survey, only spoiled by the fact that it was sponsored by Microsoft. No matter how you dress up the figures, the licensing cost for Microsoft is an ongoing charge. A Sun manager came out almost straight away saying that the figures for training under Linux were not quite what the survey proposed.

Either way, I started to think about Linux training and certification. Linux is getting more press each day, and more companies are turning to it. This in turn is leading to more jobs that require Linux skills. The problem is: how do you, as an employer, sort out the wheat from the chaff? How, as an employee, do you ensure your skills are recognized? LPI and RHCE certification may be the answer, as both require some knowledge, but I am sure that sometime this year we will suddenly see an influx of support engineers adding Linux to the usual list of qualifications. Courses are important as they do help you gain knowledge, and I would always recommend one if you can possibly do it. Anything in the quest for knowledge.

My heart tells me that Linux is more than just another qualification to list. The desire to learn, play and explore Linux is not easily quantifiable. The time we have been using Linux does not matter, as many of those new to Linux are just as knowledgeable, and practical skills are gained through exploration. The distribution used does not count, as eventually all the distributions start to merge and become similar, the differences standing out in your mind like smiles on friends' faces. I am left with the feeling that Linux will become just another thing to tick on a CV for most, and only the real enthusiasts will care. It will become easy to bluff during an interview where the employer does not have the time to do all the background study. Who do you give the job to? Someone with a string of qualifications who lists five years of Debian experience, or the self-proclaimed newbie who has been through ten distros in the last six months, but prefers the command line?

On a recent mailing list, a system admin asked a lot of simple questions. This caused the group to split into those who thought that asking questions was a good thing and those who thought anyone advertising themselves as an admin should have thought about the problem before firing off questions to more than one list. We should be encouraging the questions, but we must also prompt them to read around the subject as well. Long live the quest for knowledge…

Happy Hacking,

World News

This month we have added a new section to the magazine: the World News pages are aimed at showing just how much Linux has become a global phenomenon. The news items will show how Linux is solving problems from many different perspectives. We may find that what occurs in one distant country has great relevance to us, and either the encouragement or the techniques used could help us to find our own solution to IT problems.

COMMENT

We pride ourselves on the origins of our publication, which come from the early days of the Linux revolution. Our sister publication in Germany, founded in 1994, was the first Linux magazine in Europe. Since then, our network and expertise have grown and expanded with the Linux community around the world. As a reader of Linux Magazine, you are joining an information network that is dedicated to distributing knowledge and technical expertise. We're not simply reporting on the Linux and Open Source movement, we're part of it.

John Southern, Editor

www.linux-magazine.com

February 2003

3


NEWS

Software . . . . . . . . . . . . . . . . . . . . . . 6
Business . . . . . . . . . . . . . . . . . . . . . . 9
World . . . . . . . . . . . . . . . . . . . . . . . 12
Get that international feeling with Linux World News.
Insecurity . . . . . . . . . . . . . . . . . . . . 14
Kernel . . . . . . . . . . . . . . . . . . . . . . 16
Zack's news roundup from the kernel developers.
Letters . . . . . . . . . . . . . . . . . . . . . . 18

COVER STORY

Security Intro . . . . . . . . . . . . . . . . . . 19
Is your data valuable? Is it worth protecting?
SE Linux . . . . . . . . . . . . . . . . . . . . . 20
Sophisticated access controls are fundamental to a secure Linux environment. SE Linux, developed by the National Security Agency and released under the GPL, is a complex system that allows admins granular control over privileges. We look at the basics and practical applications.
Systrace . . . . . . . . . . . . . . . . . . . . . 28
Protect your system by placing it in a tightly locked jail of legitimate system calls.
VServer . . . . . . . . . . . . . . . . . . . . . . 32
Multiple servers coexisting on a single computer.
RSBAC . . . . . . . . . . . . . . . . . . . . . . . 36
Rule Set Based Access Control security offers protection.

REVIEWS

Graphical Games . . . . . . . . . . . . . . . . . . 42
The latest in graphical games for your entertainment.
SuSE Openexchange Server . . . . . . . . . . . . . 44
It's "Seconds out and round four" for the SuSE e-mail server. The SuSE Openexchange Server will surely get admins thinking about whether they can completely replace Microsoft's groupware solution. It is based on the new UnitedLinux and provides a quick and simple YaST-based setup controlling Comfire.
Caché 5 . . . . . . . . . . . . . . . . . . . . . . 46
Object-oriented database review.

KNOW HOW

Initialization . . . . . . . . . . . . . . . . . . 48
What are the processes running as your system boots?
fstab . . . . . . . . . . . . . . . . . . . . . . . 52
The file system table (fstab) contains information on the partitions and volumes that need to be mounted into the directory tree when the system starts up. The table allows the administrator to configure and enhance the security of a multi-user system by applying various options.
XEmacs . . . . . . . . . . . . . . . . . . . . . . 55
Why launch an extra mail program? XEmacs can cope.
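The mount options mentioned for fstab are easiest to see in an example. The following is a hypothetical sketch of an /etc/fstab of the period; device names, mount points and options will differ from system to system:

```
# device     mount point  type     options           dump  pass
/dev/hda1    /            ext3     defaults          1     1
/dev/hda2    swap         swap     defaults          0     0
/dev/hda3    /home        ext3     defaults,nosuid   1     2
/dev/cdrom   /mnt/cdrom   iso9660  noauto,ro,user    0     0
```

Options such as nosuid, ro and user are examples of how the administrator can tighten security on a multi-user system: here, setuid binaries are ignored on /home, and ordinary users may mount the read-only CD-ROM themselves.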



SYSADMIN

Diskless Clients . . . . . . . . . . . . . . . . . 60
Linux-based diskless clients offer the same potential as fully-fledged traditional workstations, but with far lower hardware expense, less noise, and less administrative effort involved. We provide the know-how and introduce the programs you need to run Linux diskless clients.
Charly's column . . . . . . . . . . . . . . . . . . 61
Real System Admin tips and tricks to help you.
User tools . . . . . . . . . . . . . . . . . . . . 68
Command line control for adding, deleting and modifying user accounts under Linux.

PROGRAMMING

Driving Data . . . . . . . . . . . . . . . . . . . 70
When we start writing a piece of software, we usually use data that's been hard-coded into the system. In this article we look at how to move away from hard-coded, primitive data and learn how to handle resource and external assets. We make the code data-driven.
Automated tools . . . . . . . . . . . . . . . . . . 76
Peer review remains the best method of ensuring quality code, but automated tools can also be employed. In this article we look at such a tool, and show how it can be used to improve your code.

LINUX USER

KTools: Controlling users . . . . . . . . . . . . . 82
KDE system tools to help you get to grips with user administration, runlevels and hard disk storage.
DeskTOPia: Snow and Fire . . . . . . . . . . . . . 84
Bring some of the seasonal flair to your desktop.
Out of the Box: Watching the watcher . . . . . . . 86
Every Linux system writes logfiles, but who really looks at them on a regular basis? Keep track of critical events.
WaveTools . . . . . . . . . . . . . . . . . . . . . 88
MP3 files are all the rage, so why bother with wav files? The fact is, every MP3 boils down to a wave descriptor of some kind, and waves are universal. This article shows you the kind of antics you can get up to with wav files and the WaveTools utility.

COMMUNITY

Linux Bangalore 2002 . . . . . . . . . . . . . . . 91
A report from the recent Indian conference.
Brave GNU World . . . . . . . . . . . . . . . . . . 92
Giving a new perspective on the GNU world.
The User Group Pages . . . . . . . . . . . . . . . 94

SERVICE

Events / Advertiser Index / Call for Papers . . . . 96
Subscription CD . . . . . . . . . . . . . . . . . . 97
Next Month / Contact Info . . . . . . . . . . . . . 98




Software News

■ KDE 3.1 release delayed – showstopper bug found
If you are a keen KDE user and have been following the predicted timelines, then you would have been expecting by now to have had the chance to play with what should have been the latest release of the KDE desktop environment. It was hoped that the latest KDE release would be launched in mid-December 2002. Initially, a single incident of a bug was reported, but this proved to be more widespread, with similar incidents of the problem appearing throughout the code. The call to halt the release came just days before it was due to be distributed, when it became apparent that the bug had worked its way into many other parts of the code and, unfortunately, became a showstopper. While the code could have been combed through in about a week, this would have put the launch much too close to the holiday season, so the development team took the pragmatic view to hold off until 8th January. If you are still a keen KDE user, then the excitement can start all over again, because the new release date for version 3.1 should be just a few days after you read this. Fingers crossed. ■
http://www.kde.org/info/3.1.html

■ EiffelStudio for business development software
Eiffel may not be the most common programming language in use today, but the continued development of EiffelStudio, which has just hit version 5.2, means people must be using it. Efficiency in use is Eiffel's claim to fame, and for some it is now the language of choice when developing business-oriented applications for the small to medium-sized company. This efficiency is made more apparent when you consider development for multiple platforms that are also business-critical. This latest release gives the Linux developer a faster compiler. The GUI, having been redesigned, now reflects Eiffel's cross-platform nature and aids developers in the creation of applications that will work in an identical fashion even though they are running on different platforms, be it a Unix, Windows or embedded system. Continued tweaking has brought improvements to the debugging system, and the developer now has the opportunity to call upon external tools from within EiffelStudio. Those that have yet to try Eiffel are invited to download a Free Edition of this IDE from the web site. ■
http://www.eiffel.com/

■ Box of delights for Java developers
Java developers are, on one hand, lucky to have so many development tools and resources at their disposal. The other hand weighs heavy, though, with the need to install and configure all of these packages to figure out which one is most capable of handling the task you require. If only someone could package them all up into one development distribution to take this misery away. Yes, you've guessed it, that is exactly what EJB Solutions has done with their Out-of-the-Box product, which will automatically install over 50 Open Source packages, sensibly configured so they are ready to run in a matter of minutes. You also get QuickStart project guides and samples of Java applications, complete with source code. This would also make for a useful system on which to learn more about programming with projects like JBoss, Tomcat or MySQL. While EJB Solutions does offer a free-to-download community edition just for Linux users, the keener or more professionally based of you will probably want to take one of the chargeable packages, which start from just US $19.95 (£12.75 or 19.75 Euro approx.). ■
http://www.ejbsolutions.com/

■ RealNetworks community initiative
Helix is a community of independent developers who are being given a huge helping hand by RealNetworks, makers of RealPlayer, which comes with most boxed sets of Linux. This community is looking to develop an open platform for the delivery of digital media, such as the streaming of audio and video. RealNetworks has handed over more code, this time for their 'DNA Producer' package, to further help the community towards this goal. Back in October 2002, they released the 'DNA Client' software, and this initial release has made a real impact on the community project: more than 5,000 developers have now registered with the Helix project. Many hands make light work. ■
http://www.helixcommunity.org
http://www.realnetworks.com/



■ Desktop Linux advances
December saw the release of updated versions of ShaoLin Aptus, a simplified Linux workstation deployment and management solution, which comes in three flavors. ShaoLin Aptus 2.0 Small Business is designed for the small to medium-sized enterprise where ease of Linux deployment is paramount. A remote boot-up system means that client systems can be configured and managed with just a few mouse clicks from a central control. The Professional version builds on the Small Business version and adds the scalability required of an enterprise system. The Schools version adds to this yet again, incorporating a Workstation Booking system. It is this which allows a school to maximize the use of its machines by allowing control of the amount of time a particular user has access to a machine. Where machines are scarce, the last thing anyone needs is someone hogging resources. This functionality is applied to the user's Linux distribution of choice: ShaoLin Aptus has full support for and integration with versions of Linux from Red Hat, Mandrake, SuSE and UnitedLinux systems. ■
http://www.shaolinmicro.com/


■ ISO-compliant database management
ThinkSQL is a relational database management system designed for modern hardware and an equally modern OS, like Linux. The developers' aim is to make ThinkSQL as compliant with the ISO SQL standard as practically possible, and their most recent beta release, which is free to download from the link below, takes us all one step closer to this ideal. The current features for version 0.4.09, which is still a beta, include native APIs for open standards and ODBC protocols, as you would expect, but also built-in support for Borland's Delphi and Kylix. The multi-threaded server has support for server-side cursors with hold and return options. ■
http://www.thinksql.co.uk/

■ FOSDEM meeting in February
While it is very important for vendors to meet customers at events like the LinuxWorld Expo to be held in New York this month, the value of bringing developers together is even greater. FOSDEM, the Free Open Source Developers European Meeting, is one of the most successful events to do just this, with its 3rd meeting being held in Brussels over a busy weekend of February 8–9 2003. Though it is described as a European meeting, many developers from all over the world will make the effort to attend. Last year, just to pick two names at random, David Wheeler came from North America to tell packed halls about secure Linux programming, while Miguel de Icaza came from South America to tell us more about Ximian and development tools and libraries like Bonobo and Mono. Speakers are still being confirmed, so full details are not yet available as we write, but we can say, for those that have a passion for databases and database design, that Ann Harrison will be along to give details of the Firebird project and David Axmark will talk about MySQL. So much needs to be crammed into the two days that many of the lectures run simultaneously, so subject themes have been set up to try to maximize what people can get to. This year's FOSDEM lecture themes will concentrate on the development of Open Source software for the Desktop, Education, Multimedia and Toolkit Software, as well as the previously-mentioned database development. While in previous years the event has been pretty much lecture-style, it looks as if the organizers are keen to build on this for this year and include a stream of practical tutorial events, following on from last year's GnomeMeeting demonstration. If this has sparked some interest, do not forget to contact your local Linux User Group; many of these will be arranging group bookings, so considerable savings could be made on both travel and accommodation. ■
http://www.fosdem.org/

■ The Linux Test Project
Everybody wants stable, reliable and robust Linux distributions. They also want similarly featured packages to go with them. With so much development going on in the world of Linux, proving just this has become a headache. Where there is a hole, you can expect some Open Source project to fill it, and so it is with the Linux Test Project. In joint partnership with SGI, IBM and the Open Source Development Labs, this project is developing test suites to put the Linux kernel and closely tied applications under close scrutiny, to show that those main tenets hold true. The goal is to automate as much of this testing process as possible. In doing so, it is hoped to speed the process up and to minimize errors. Being Open Source, there is obviously an encouraging welcome for the Linux community to run these tests for themselves and to look at, and possibly improve upon, the testing suite. Tests recently added to the suite take a closer look at sigset() and sigaction() interactions, along with improvements to the scheduling tests. ■
http://ltp.sourceforge.net/


Business News

■ StarOffice bundle of Sony joy
In a pan-European deal which includes Germany, France and the United Kingdom, Sony Information Technology Europe has signed a deal with Sun Microsystems regarding the StarOffice 6.0 office productivity package. This puts Sun Microsystems in a very interesting position in its efforts to reach out to the IT sector with its affordable Open Standards productivity suite, because Sony has an ever-growing worldwide PC market of desktop Vaio systems, with some industry reports suggesting that they are as high as 8th position on the market leader board. This pan-European initiative fits well with StarOffice's global position, for StarOffice 6.0 is also available in New Zealand and Australia with Hyundai machines, and is available as part of a bundle with various Linux distributions like Lindows, TurboLinux, SuSE and Mandrake for the rest of the world. The deal will help reduce the dominance that Microsoft has gained. ■
http://www.sun.com/staroffice/
http://www.sel.sony.com/sel/

■ Open Standards encourage data sharing
Recent research by Borland, partly in preparation for its European conference, has highlighted the growing number of companies who are beginning to understand the value, to others and to themselves, of sharing company data, and of doing so using simple and convenient Open Standards for the exchange. It also became apparent that many of these companies were keen to exchange various types of data through quite specific Web Services. More than half of the companies that took part are actively developing Web Services based on Java Enterprise Edition platforms, reinforced by the use of Open Standards, which only goes to make everyone's lives more straightforward. The growing use of Open Standards allows easier communication between companies, and as this infrastructure develops, a largely untapped source of business potential can be realized, so that where information needs to be shared, it can be. ■
http://www.borland.com

■ LinuxWorld Conference & Expo 2003
January 21st 2003 will see the opening of the 5th LinuxWorld Conference & Expo, to be held in New York City. One of the main advantages of bringing hardware and software vendors together with their customer and user base is that it allows for that all-important exchange of ideas that helps drive the industry onwards. The opening keynote speech is due from Hector Ruiz, president and CEO of AMD, a company that sees much virtue in competition. "Together, AMD and the Open Source community are helping to offer new possibilities in 64-bit computing that give businesses and governments alike technology choices they have never had," he said at a recent press announcement, highlighting the closeness that the open source community can bring, thanks in part to gatherings such as this. In addition, Steven A Mills, Senior Vice President of IBM, and Randy Mott, senior VP and CIO of Dell Computers, will also give keynote speeches to highlight how vendors see Linux developments both now and in the future. LinuxWorld also aims to show how vendors see these developments and progressions affecting the IT industry, and how the real world will be affected in turn. Linux still has strong growth, especially when compared to other IT sectors, and the continuing strength of shows like this serves to illustrate this both to those in the industry and to those outside it. Having an event with such a concentrated and exclusively-focused view on Linux and Open Source gives rise to a synergy where business decision makers will be able to gain more than just information and a development of resources and contacts; they will also get to feel that they are involved in changing future events and not just reacting to them. ■
http://events.linuxworldexpo.com/




■ Java-based retail solutions with UnitedLinux
Benefit Retail Solutions has released its ECS retail package. This solution has been specifically designed to provide precise inventory control in businesses that are split over many sites, most commonly retail chain stores. Once again, the factors of stability and reliability are expressed as leading reasons for choosing a Linux-based solution, along with the chance to lower the total cost of ownership of any software system a business is looking to install. Tight stock control allows businesses to work with tighter margins and reduce the unnecessary expense of surplus warehousing, which, in turn, has a positive effect on purchase order and stock processing. That is usually considered a triple win for businesses that can get it right. Even though the ECS software is platform independent, Benefit Retail Solutions has worked closely with SCO, developing their product for SCO Linux version 4.0. This now ties in nicely with the new line of UnitedLinux distributions as well. "We have been evaluating the move to Linux for a while now and the recent launch and industry momentum of UnitedLinux has made it an even more attractive proposition for our customers," says Stuart Kay, Business Development Manager of Benefit Retail Solutions. "Having looked at the different solutions available, we found that the quality, performance and support offered by SCO Linux was unsurpassed, making it the obvious choice for us to work with." ■
http://www.sco.com/
http://www.unitedlinux.com/

■ Carrier Grade communications for Red Hat
Red Hat has set down plans to incorporate the needs of carrier grade telecommunication applications into its Linux Advanced Server product, aiming for a release date around mid-2003. Working with the support of the Open Source Development Labs Carrier Grade Linux Working Group, Red Hat is aware of what additional features need to be included to make the product viable. This includes improvements to the portability and performance features, as well as POSIX threading and a further effort towards additional high availability clustering capabilities. The Red Hat Linux Advanced Server already has stability under its belt, which is one of the main criteria when catering for services which supply voice, data and wireless needs in today's ever-demanding technology-driven world. It is Linux's continued ability to reduce the total cost of ownership that makes it such an interesting and viable proposition for service providers, especially for those that are using Intel hardware for their main telecommunications infrastructure. The demands put upon a system are quite unique for this type of technology. Signalling servers, which form part of the telecommunications infrastructure, have very high demands, with requirements to deal with sub-millisecond real time events in very large numbers, maybe 10,000 or more. ■
http://www.redhat.com/
http://www.osdl.org/

[Figure: Carrier Grade Linux Architecture – applications and HA application interfaces; middleware components (Java, CORBA, databases); high availability components and HA platform interfaces; standard interfaces (LSB, POSIX); a Linux OS with carrier grade enhancements; hardened device drivers, co-processor interfaces and HW configuration and management interfaces; high availability hardware platforms. Solution-specific components are to be defined by vendors. The diagram shows the scope of the Carrier Grade Linux Working Group.]


■ Remember to backup
The value of making your backups regularly and securely cannot be highlighted enough. Up until now, there has not been a concerted effort, at the enterprise infrastructure level, to offer information about the hardware issues involved in tape backup. The TOLIS Group has taken it upon themselves to produce the Linux Tape Device Certification program, under which they have taken a broad range of tape backup devices from manufacturers like HP, Seagate and Tandberg Data, and tested them for compatibility and compliance. The TOLIS Group develops their own data backup and recovery software, but they are making this information more generally available to the Linux community, setting up a site just for this purpose. ■
http://www.linuxtapecert.org/

■ Red Hat Technical Workstation
It has been discussed for some time now, but this could be the first concerted effort by Red Hat to stake a claim on Linux desktop use. This offering, which is due to be with us in the first quarter of 2003, will see the deployment of a Technical Workstation aimed squarely at the technical and experienced user. It could be seen as a new option for those companies who seek an alternative to a Microsoft-dominated environment and want to reduce the total cost of ownership after changes in licensing models. The Technical Workstation is designed for development and graphical applications, and so will help to provide enterprise development platforms which will be naturally at home with any Advanced Server products. While it is recognized that this is a very tightly controlled area, the lessons learned from this closed environment will surely filter down and make way for more general desktop products, gradually being designed for a wider desktop audience. As yet, no price structure has been announced for the product. ■
http://www.redhat.com



World News ■ Linux Australia gets together

conference, is coming to Perth, Western Australia in January 2003. Big names have been billed including Alan Cox, Telsa Gwynne, Hemos (from

Slashdot) and Andrew Tridgell (who claims to have been present when Linus Torvalds got bitten by a penguin). Over four days, from the 22nd to 25th January, the entire Linux spectrum will be covered, from GNOME to filesystems to teaching with Open Source as well as the Linux kernel itself. In addition there will be “Birds of a Feather” meetings, tutorials, and ARQuake; all located at the beautiful University of Western Australia, on the edge of the Swan River. This year’s linux.conf.au will also play host to the Debian mini-conf for the two days before the main conference, as well as numerous other mini-confs on subjects such as IPv6, education and Linux gaming. ■ http://www.linux.conf.au/

■ Dutch Open Source Lobby wins in The Hague

■ Project guidance for Indian students

In the Netherlands, the Lower House of the Dutch Parliament has recently accepted a motion by the Groenlinks party to use Open Source software and Open Standards in government and state-financed institutions. In this, Groenlinks claims that the current situation of having just a few software vendors “violates democratic principles of accessibility and transparency”. A first step in the right direction would be to make the use of Open Standards mandatory. Following that, the party insists on Open Source software and copyleft as a means to make the workings of the Dutch government “more transparent, more controllable, and therefore more stable and more secure”. The motion, aptly called “Software open u” or “Open software”, a Dutch pun on “Open sesame”, is available on the Groenlinks website. ■ http://www.groenlinks.nl/partij/2deka mer/publikaties/SoftwareOpenU!.htm http://www.groenlinks.nl/partij/2deka mer/publikaties/Softwarenota191102.pdf

Techies from Mumbai, the commercial capital of India formerly called Bombay, are working to find new ways to promote Free Software projects among their students and thus increase the Indian contribution to Free and Open Source Software projects. “We have plans to set up a groupware system that will enable the respective students to choose a project and be guided by any of the senior LUG members online.” says Trevor Warren, who fathers the idea of the new Project Resource Center together with Dr. Nagarjuna G (a prominent Mumbaibased advocate of Free Software) and the local Linux User Group. The focus is not so much on bringing entirely new software projects into being, but to fill in the missing gaps in various projects such as those available on Freshmeat.net, Sourceforge.net, or in the GNU Hurd kernel. Those interested are invited to join the Free Software Projects mailing list. ■ http://mail.sarai.net/mailman/ listinfo/prc

linux.conf.au, formerly the Conference of Australian Linux Users (CALU), is Australia's annual Linux technical conference.

12

February 2003

www.linux-magazine.com

■ Penguin help for Norwegian schools By the end of 2004, 50 percent of all schools in the mid-Norwegian county of Sør-Trøndelag will be using Linux. At least, this is the aim of the recently founded project SPIST, a loose association of Linux-in-schools protagonists from the regional center Trondheim (which Linux Counter ranks second among cities with more than 100,000 inhabitants) and the county's 25 municipalities. Key to the project is the (still beta) Skolelinux distribution (see Linux Magazine issue 25, p. 90), for which several local companies already provide professional support. One of the first tasks is to find advisors for student projects that help schools with the deployment process. ■ http://www.spist.no/ http://www.skolelinux.no/

■ Indic language solutions Over the past year, the attempt to find Indian-language computing solutions on the GNU/Linux front has seen some impressive strides. Following campaigning by young sparks like Tapan Parikh, G Karunakar and team, the chances of getting "Indic language" solutions seem to have brightened. One problem is the lack of links between Indian initiatives and international volunteer software efforts. From font designers to GNOME developers, collaborators in distant parts of the globe have been voicing their interest in helping to solve India's special challenges. These include multiple scripts, characters which often join each other, and a bewildering variety of numerals and letters. ■ https://lists.sourceforge.net/lists/listinfo/indic-computing-users



■ Stallman and Gates in India It happened quite by coincidence: in mid-November last year, Microsoft's Bill Gates was visiting India at around the same time as Free Software Foundation founder Richard M. Stallman. Gates poured money into India, with promises of a few hundred million dollars more, but it was the guru of free code, Stallman, who turned quite a few heads with his arguments. The national TV channels interviewed both Gates and Stallman within hours of each other, and the differences in their approach could not have been more stark. From talking to academics in Bangalore, to meeting businessmen and engineering college students in coastal Goa, to releasing his book and meeting officials in Delhi, Stallman carried on determinedly, even though his visit was marked by limited funding and, initially, markedly less media interest. Coming after Stallman, Gates faced a number of tough and potentially embarrassing questions over his sudden emphasis on philanthropy in India, including funding for AIDS. For a country squeezed by the high prices of proprietary software, and with a limited ability to make its own software skills benefit the common man, Stallman's message was clear: Free Software could encourage local businessmen, rather than paying "huge sums to a few rich (global) businesses" for their products, he told industrialists and politicians. Not surprisingly, segments of the Indian business press were also open to such perspectives. Stallman suggested that businesses and governments using GNU/Linux as a "bargaining chip" to get better deals from proprietary firms were "missing the point" of the critical freedom debate. ■ http://www.gnu.org/ http://www.microsoft.com/


NEWS

■ Conference on software patents in Belgium Last November, the Green Party group in the European Parliament organized a conference on software patents at the Parliament's facilities in Brussels. Among the speakers were Richard Stallman, Brian Kahin of the University of Maryland, and Francois Pellegrini of the University of Bordeaux. The well-attended conference addressed the patent issue from legal and economic perspectives, followed by case studies and discussion. Hope remains that some of the attending politicians gained an understanding of why software patents should not be allowed in Europe. The software patent workgroup of the Foundation for a Free Information Infrastructure (FFII) has more information. ■ http://swpat.ffii.org/ http://swpat.ffii.org/events/2002/europarl11/index.en.html#medi021126



Insecurity News

■ smb2www Robert Luberda found a security problem in smb2www, a Windows Network client that is accessible through a web browser. The problem could allow a remote attacker to execute arbitrary programs under the user id www-data on the host where smb2www is running. It has been fixed in version 980804-16.1 for the current stable distribution (woody), in version 980804-8.1 for the old stable distribution (potato), and in version 980804-17 for the unstable distribution (sid). ■ Debian reference DSA-203-1 smb2www

■ freeswan Bindview discovered a problem in several IPsec implementations that do not properly handle certain very short packets. IPsec is a set of security extensions to IP which provide authentication and encryption. FreeS/WAN in Debian is affected by this, and the flaw is said to cause a kernel panic. The problem has been fixed in version 1.96-1.4 for the current stable distribution (woody) and in version 1.99-1 for the unstable distribution (sid). The old stable distribution (potato) does not contain FreeS/WAN packages. ■ Debian reference DSA-201-1 freeswan

■ kdelibs The KDE team has discovered a vulnerability in the support for various network protocols via the KIO framework. The implementation of the rlogin and telnet protocols allows a carefully crafted URL in an HTML page, an HTML email, or another KIO-enabled application to execute arbitrary commands on the system using the victim's account. This problem has been fixed by disabling rlogin and telnet in version 2.2.2-13.woody.5 for the current stable distribution (woody). The old stable distribution (potato) is not affected, since it does not contain KDE. A correction for the package in the unstable distribution (sid) is not yet available. ■ Debian reference DSA-204-1 kdelibs

Security Posture of Major Distributions

Debian
Security sources: Info: www.debian.org/security/, List: debian-security-announce, Reference: DSA-… 1)
Comment: Debian have integrated current security advisories on their web site. The advisories take the form of HTML pages with links to patches. The security page also contains a note on the mailing list.

Mandrake
Security sources: Info: www.mandrakesecure.net, List: security-announce, Reference: MDKSA-… 1)
Comment: MandrakeSoft run a web site dedicated to security topics. Amongst other things the site contains security advisories and references to mailing lists. The advisories are HTML pages, but there are no links to the patches.

Red Hat
Security sources: Info: www.redhat.com/errata/, List: www.redhat.com/mailing-lists/ (linux-security and redhat-announce-list), Reference: RHSA-… 1)
Comment: Red Hat categorizes security advisories as Errata: under the Errata headline, any and all issues for individual Red Hat Linux versions are grouped and discussed. The security advisories take the form of HTML pages with links to patches.

SCO
Security sources: Info: www.sco.com/support/security/, List: www.sco.com/support/forums/announce.html, Reference: CSSA-… 1)
Comment: You can access the SCO security page via the support area. The advisories are provided in clear text format.

Slackware
Security sources: List: www.slackware.com/lists/ (slackware-security), Reference: slackware-security … 1)
Comment: Slackware do not have their own security page, but do offer an archive of the security mailing list.

SuSE
Security sources: Info: www.suse.de/uk/private/support/security/, Patches: www.suse.de/uk/private/download/updates/, List: suse-security-announce, Reference: suse-security-announce … 1)
Comment: There is a link to the security page on the homepage. The security page contains information on the mailing list and advisories in text format. Security patches for individual SuSE Linux versions are marked red on the general update page and include a short description of the patched vulnerability.

1) Security mails are available from all the above-mentioned distributions via the reference provided.


■ im Tatsuya Kinoshita discovered that IM, which contains interface commands and Perl libraries for e-mail and NetNews, creates temporary files insecurely. The impwagent program creates a temporary directory in /tmp using predictable directory names, without checking the return code of mkdir, so a local user can seize ownership of the temporary directory. The immknmz program creates a temporary file in /tmp using a predictable filename, so an attacker with local access can easily create and overwrite files as another user. These problems have been fixed in version 141-18.1 for the current stable distribution (woody), in version 133-2.2 for the old stable distribution (potato), and in version 141-20 for the unstable distribution (sid). ■ Debian reference DSA-202-1 im
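This whole class of bug is avoided by creating temporary directories atomically under unpredictable names. A minimal Python sketch of the two patterns (the "im-" prefix is purely illustrative, not IM's actual naming scheme):

```python
import os
import stat
import tempfile

# Insecure pattern (as in impwagent): a predictable name such as
# /tmp/im-<pid>, created without checking whether mkdir succeeded.
# A local attacker who pre-creates that path then owns the directory.

# Secure pattern: mkdtemp() chooses an unpredictable name and creates
# the directory atomically with mode 0700, failing if it already exists.
workdir = tempfile.mkdtemp(prefix="im-")
mode = stat.S_IMODE(os.stat(workdir).st_mode)
print(oct(mode))  # only the owner may enter the directory

os.rmdir(workdir)  # clean up
```

The same reasoning applies to immknmz's predictable filenames, where `tempfile.mkstemp()` is the atomic, race-free replacement.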

■ kernel The kernel in Red Hat Linux 7.1, 7.1K, 7.2, 7.3, and 8.0 is vulnerable to a local denial of service attack. The Linux kernel handles the basic functions of the operating system, and a vulnerability has been discovered in which a non-root user can cause the machine to freeze; the updated packages address it. Note that this bug is specific to the x86 architecture kernels only and does not affect ia64 or other architectures. In addition, a bug in the maestro3 soundcard driver has been fixed, as have bugs in the xircom pcmcia network driver and the tg3 network driver for Broadcom gigabit ethernet chips. All users of Red Hat Linux 7.1, 7.1K, 7.2, 7.3, and 8.0 should upgrade to the errata packages. Thanks go to Christopher Devine for reporting the vulnerability on Bugtraq, and to Petr Vandrovec for being the first to supply a fix to the community. ■ Red Hat reference RHSA-2002:262-07



■ kerberos A remotely exploitable stack buffer overflow has been found in kadmind4, the Kerberos v4 compatibility administration daemon that is part of the MIT krb5 distribution. Kerberos is a network authentication system. The vulnerability is present in version 1.2.6 and earlier of MIT krb5 and can be exploited to gain unauthorized root access to a KDC host; the attacker does not need to authenticate to the daemon to perform the attack successfully. kadmind4 is included in the Kerberos packages in Red Hat Linux 6.2, 7, 7.1, 7.2, 7.3, and 8.0, but is not enabled or used by default. All users of Kerberos are advised to upgrade to the errata packages, which contain a backported patch. ■ Red Hat reference RHSA-2002:242-06

■ xinetd Xinetd contains a denial-of-service (DoS) vulnerability. UPDATE 2002-12-02: Updated packages are available to fix issues encountered with the previous errata packages. Xinetd is a secure replacement for inetd, the Internet services daemon. Versions of Xinetd prior to 2.3.7 leak file descriptors for the signal pipe to services that are launched by xinetd. This could allow an attacker to execute a DoS attack via the pipe. The Common Vulnerabilities and Exposures project has assigned the name CAN-2002-0871 to this issue. Red Hat Linux 7.3 shipped with xinetd version 2.3.4 and is therefore vulnerable to this issue. Thanks to Solar Designer for discovering this issue. ■ Red Hat reference RHSA-2002:196-19

■ WindowMaker Al Viro discovered a vulnerability in the WindowMaker window manager. A function used to load images, for example when configuring a new background image or previewing themes, contains a buffer overflow. The function calculates the amount of memory needed to load the image by multiplying the image dimensions, but does not check whether the result of this multiplication fits into the destination variable. The undersized buffer then overflows when the image is loaded. ■ Mandrake reference MDKSA-2002:085
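The arithmetic behind such a flaw is easy to demonstrate. The sketch below models a hypothetical 32-bit size calculation of the kind described above; the image dimensions are chosen for illustration and are not taken from the actual exploit:

```python
# Model a 32-bit width * height * bytes-per-pixel calculation of the
# kind an image loader might perform without an overflow check.
def alloc_size_32bit(width, height, bpp=4):
    # In C, the 32-bit product silently wraps at 2**32; emulate that.
    return (width * height * bpp) & 0xFFFFFFFF

width, height = 16385, 65536          # a crafted, absurdly large image
needed = width * height * 4           # true size: just over 4 GB
allocated = alloc_size_32bit(width, height)

print(needed, allocated)  # 4295229440 262144
# The loader would allocate only 256 KB, then write roughly 4 GB of
# pixel data into it: a classic buffer overflow.
```

Checking that the product fits the destination type (or using a checked multiply) before allocating is what closes the hole.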

■ OpenLDAP The SuSE security team reviewed critical parts of the OpenLDAP package and found several buffer overflows and other bugs that remote attackers could exploit to gain access to systems running vulnerable LDAP servers. In addition, various locally exploitable bugs within the OpenLDAP2 libraries (openldap2-devel package) have been fixed. Since no workaround is possible other than shutting down the LDAP server, SuSE strongly recommends an update. Download the update package for your distribution and install it with the command "rpm -Fhv file.rpm". The packages are available from the SuSE maintenance web. To be sure the update takes effect, restart the LDAP server by executing the following command as root: /etc/rc.d/ldap restart

SuSE reference SuSE-SA:2002:047

■ samba A vulnerability in samba versions 2.2.2 through 2.2.6 was discovered by the Debian samba maintainers. A bug in the length checking for encrypted password change requests from clients could be exploited using a buffer overrun attack on the smbd stack. This attack would have to be crafted in such a way that converting a DOS codepage string to little endian UCS2 unicode would translate into an executable block of code. This vulnerability has been fixed in samba version 2.2.7, and the updated packages have had a patch applied to fix the problem. ■ Mandrake reference MDKSA-2002:081



Kernel

Zack’s Kernel News

■ Subversive The BitKeeper version control system continues to make inroads into kernel development. The NUMA scheduler developers recently decided to adopt BitKeeper as a way to track kernel developments more closely and provide timely patches against the latest versions. BitKeeper, a commercial product of the BitMover corporation, was adopted by Linus Torvalds for kernel development after a long struggle by the program’s author, Larry McVoy, to provide all the features Linus needed. None of the kernel developers are happy about relying on a commercial, closed source product for kernel development, but the absence of a free alternative that satisfies all needs makes it difficult to be too critical. In the wake of Linus’ decision to use BitKeeper, a number of free version control systems have begun to receive massive support from the developer community. One of these, Subversion, seems to be the most promising, though

it is still far from overtaking BitKeeper’s formidable feature set. Subversion at the moment aims to be a replacement for CVS, the ubiquitous Concurrent Versioning System. Subversion already solves many of the problems that plagued CVS users, such as the inability to delete directories once they have been created, and the difficulty of renaming files. One of the main advantages of the BitKeeper program over the Subversion project is the ability to merge two distinct repositories. In Subversion, there is typically a single repository that acts as a server. Developers pull the directory from the server, make changes, and then push those changes back onto the server for other developers to see. In BitKeeper, each developer has their own fully-fledged repository, which they can use without reference to a central server. When two developers on a single project wish to share their work, they

simply merge their two monolithic repositories together. BitKeeper makes this a very easy proposition, one that will remain out of the Subversion project’s reach for quite some time. Anyone interested in the ongoing development, or in contributing to the project, should visit http://subversion.tigris.org/ for more information. ■

INFO

The Kernel Mailing List comprises the core of Linux development activities. Traffic volumes are immense, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls who take on this impossible task is Zack Brown. Our regular monthly column keeps you up to date on the latest discussions and decisions, selected and summarized by Zack. Zack has been publishing a weekly digest, Kernel Traffic, for several years now; reading just the digest is a time-consuming task in itself. Linux Magazine now provides you with the quintessence of Linux kernel activities, straight from the horse’s mouth.

■ Hunting bugs

A Bugzilla bug-tracking system has been set up for the kernel. The primary goal is to help pave the way for a timely 2.6 (or 3.0) stable series, but such a system can potentially be quite useful at all stages of development. These sorts of auxiliary tools seem to be cropping up more and more in recent months. Traditionally, Linus Torvalds has been reluctant to use anything more than a mailing list and an FTP site for patches, but in the past year we have seen the adoption of BitKeeper; a host of documents describing lists of maintainers, bug reports, and the status of features; and now, at last, an actual bug-tracking system. The problem with any bug-tracking system, however, is that it requires constant supervision, or else it becomes more trouble than it is worth. The ability to track bugs is useful because it promises to keep records of the progress of debugging efforts, while preventing excessive duplication of bug reports. Duplicate reports can overrun the system unless constantly sifted and organized, and frivolous reports can likewise take up so much time that the value of the bug database is lost. David S. Miller was one of the first to volunteer to maintain a portion of the Bugzilla database, and turned out to be one of its most vocal critics. After a day or so of dealing with the system, he had found so many frivolous reports that he felt his time was being entirely wasted. In typical free software fashion, various developers then proceeded to discuss different ways of improving the system, solving the various problems that had occurred, and ensuring that the database would remain usable and useful. By the end of the first wave of discussion, even David felt there was reason to be hopeful. ■

■ Retro-computing

Linux development sometimes takes some wild twists and turns. In the not-too-distant past (2.5 was in full swing at the time), Linus Torvalds offered to let someone maintain the old 0.1 kernel tree, when that person found a bug in that version. Now, at the moment of the 2.5 feature freeze, Carl-Daniel Hailfinger pointed out that somewhere along the way the ancient XiaFS filesystem had been allowed to drop out of the kernel. Would Linus accept patches to forward-port it into 2.5? Linus called this “an ironic form of retrocomputing” that “gets high points on my ‘surreality meter’”. He said sure, he would take patches, and he would even accept them after the feature freeze. It turned out that Andries Brouwer managed to dig up a floppy disk with an XiaFS filesystem on it, which he imaged and sent over to Carl-Daniel. ■



■ Crushing

A new compressed filesystem has hit the scene. Phillip Lougher announced the first public release of SquashFS. SquashFS uses the zlib library to provide a high degree of compression in a read-only filesystem. It is not the only compressed filesystem out there: cramfs and zisofs both provide read-only compression, and in fact Phillip found his initial inspiration for SquashFS in the cramfs model. Why a new filesystem, when there are others boasting similar features? Essentially, Phillip wanted to overcome various drawbacks in the other systems. SquashFS gives better compression than cramfs, can handle larger files and filesystems, and provides more inode information. Zisofs has more overhead than SquashFS, taking between 5% and 61% more space, depending on the directory structure being compressed. It might have been possible for Phillip to pick one of those projects and simply contribute his code and ideas to it, but one of the benefits of free software is that you do not have to stick with what has gone before. Almost as soon as Phillip made his initial announcement, there were requests for a version of SquashFS that would also allow writing data back into the filesystem. Phillip was not averse to the idea, though he was quick to point out the trade-offs involved: uncompressing and recompressing the entire filesystem for each change would be prohibitively slow, while simply compressing modifications separately would achieve a lower compression rate. ■

■ Changing time

The devfs filesystem has always been controversial. Richard Gooch, its author, maintained it for a long time before its inclusion in the official kernel tree. The intention of devfs was to replace the nightmarish /dev directory, which contained endless unused device files, with a saner interface containing only files corresponding to devices actually installed on the system. The implementation has proved troubling for many developers. Behavior that had been standard for years was altered under devfs in ways that seemed arbitrary. Even after its inclusion in the official sources, the quality of the devfs code has been harshly criticized. Alexander Viro, in particular, has complained of race conditions and other qualities that might make devfs dangerous to rely on. Shortly after the feature freeze, he examined the devfs code quite closely and proposed a number of changes, among other things to its exported API. The problem with altering an established API is that it breaks backward compatibility for existing programs that rely on those interfaces. Richard pointed this out, saying that any change in the devfs API, especially such far-reaching changes as Alexander had proposed, would break compatibility between 2.4 and 2.5 kernels. Others were quick to point out that this compatibility has already been compromised in other ways, so in light of that, this further breakage might be acceptable as well. Richard did promise to examine Alexander’s suggestions. ■

■ Hard decisions

EVMS, the Enterprise Volume Management System, radically changed direction in November, after the feature freeze. When it became clear that Linus Torvalds would not accept their patch in time for 2.6 (or 3.0), the EVMS team decided to rethink their ability to maintain it. EVMS consisted of a kernel module portion and a user-space portion: the module controlled devices and allowed disks to be set up in seamless arrays, while the user-space portion controlled the arrays thus created. For a long time there was controversy surrounding EVMS, because other kernel modules like the md driver offered much of the same functionality, while EVMS had a reputation for being particularly invasive, taking over functionality that many developers felt would be best left in other areas. After much soul-searching, the EVMS team decided to ditch the kernel module portion of their development and rework the user-space portion to interface with existing kernel modules like the md driver. It was a difficult decision to make, because it involved not only extensive modifications to existing code but also the complete abandonment of their kernel-based work. Alan Cox, Alexander Viro, and others expressed their admiration at the EVMS team’s ability to make such a painful decision that, they felt, would ultimately be the right thing. The team expects to have a good portion of functionality back by early 2003, with some of the more difficult features taking somewhat longer. ■
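The compression trade-off Phillip describes for a writable SquashFS, whole-image compression versus independently compressed blocks, can be illustrated directly with zlib. The sample data and the 512-byte block size below are arbitrary choices for the demonstration, not SquashFS parameters:

```python
import zlib

# A highly repetitive sample standing in for filesystem data.
data = b"config line: option = value\n" * 2000

# Compressing everything at once lets zlib exploit all the redundancy.
whole = len(zlib.compress(data))

# Compressing fixed-size blocks independently (which would make
# in-place modification cheap) resets the compressor state at every
# block boundary and pays a clear size penalty.
blocks = [data[i:i + 512] for i in range(0, len(data), 512)]
chunked = sum(len(zlib.compress(b)) for b in blocks)

print(whole < chunked)  # True: per-block compression is less effective
```

This is exactly why a writable variant must choose between recompressing large regions (slow) and compressing changes separately (worse ratio).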

■ Change of heart

One feature that Linus Torvalds has steadfastly resisted for years is the inclusion of a kernel-based debugger. While such a thing would allow developers to interrupt and step through running systems, examining state information and the values of variables in memory, Linus has always felt that the proper way to debug kernel code is at the source level. His insistence that the source code itself be the primary place from which to analyze kernel behavior has irked many developers, who assert that an in-kernel debugger would in no way diminish anyone’s ability to examine the sources directly. Linus has defended his position with an appeal to Darwinism: he wishes to breed out developers who are unable to do good work using only the sources and a decent testing environment. However, in November he showed signs that his position on this matter may have changed. He told developers that he would consider including a kernel-based debugger in his tree if it would allow debugging a running system across a standard network, using standard networking hardware. A number of top developers immediately began laying plans for the proper design to use. The upcoming stable series is unlikely to begin life with a kernel-based debugger, but I wouldn’t be surprised to see an incarnation going into the next developer series. ■




Letters

Letters to the editor

Write Access

■ AutoMake your files
Dear Linux Magazine,
I’ve been reading your magazine for some time now, and while I agree with most of the articles and think they offer excellent value for readers, there is one thing I want to point out in issue 25. Pages 62–65 cover the use and creation of Makefiles. These are indeed a valuable and powerful tool in the hands of an experienced, or better, conscientious programmer. Writing Makefiles by hand is feasible for small projects, but for larger projects maintaining these files manually becomes a serious effort. Furthermore, you’ll quickly find out that developers see this as a drain and do not maintain them properly. In my experience, it pays much better to teach programmers the GNU autotools. If you are aware of these tools, you’ll know that they have the advantage of figuring out build dependencies, checking the build environment, generating configuration headers, and so on. These tools are based on M4 (cf. p40–43) and Perl. When they are used, you are assured that all the typical make targets are created (especially uninstall), and I have yet to come across a software package from which it was not possible to create a deb (or rpm) package. The largest problem seems to be that programmers are not aware of these tools, and when inspecting software they start by looking at the (generated) Makefile, which obviously results in errors and frustration. As a result, I think it is important to make the users of Makefiles aware of a better and more complete system that requires much less effort to use and enables configuring your software at compile time in a straightforward way. If you are interested, the introduction slides of the session we use in our research group can be found at http://lesbos.esat.kuleuven.ac.be/~mleeman/downloads/athens-opensource-0.4.pdf (as well as on the subscriber CD). ■ Marc Leeman, Email
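For readers who want to try the autotools approach Marc describes, a minimal project needs little more than two short files. This is only a sketch; the project and file names are illustrative, and the exact macros vary with the autoconf/automake versions installed:

```
# configure.ac -- processed by autoconf
AC_INIT([hello], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- processed by automake
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

Running autoreconf --install followed by ./configure && make then generates the dependency tracking, the environment checks, and the standard targets (install, uninstall, dist) automatically.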

■ It’s time to grow up

Confidence in technology’s ability to deliver real value to the business is at an all-time low, and if the IT industry is to regain credibility in the eyes of businesses then it has to start delivering on its promises. While new technology will always be an alluring but immature child, the lessons of the last three years have taught us that it can no longer afford to act like an adolescent. There are thousands of medium-sized firms in the UK who have paid the highest price for the IT industry’s immature attitude to business needs. They also form the backbone of our economy and represent the UK’s commercial influence in highly competitive global markets.

Figure 1: Marc Leeman’s introduction to AutoMake and other GNU tools


It is therefore critical to stop the rot and begin supporting them with technologies and services appropriate to their particular business needs, to allow them to grow and meet the challenges of global commerce. Vendors should concentrate on selling direct to enterprise customers and leave the resellers to interpret their technologies for medium-sized firms. Users should see the channel as an invisible extension of their businesses. In turn, the channel must not abuse the trust of IT directors by paying lip service to their business needs while secretly targeting them as juicy prey. The channel needs to earn the trust and respect of IT directors. Once that trust is earned, IT directors will benefit by freeing up valuable resources to concentrate on strategies to grow the business. We must focus on the business, not the technology. If we can achieve this, then the IT industry will have truly come of age. But if we fail, vendors will continue to underperform, the channel will shrink as customers hold back from all but the most essential of IT purchases, and the IT director’s job will cease to exist. Most importantly, we will never win back the credibility of the investment community, on which the future growth of the UK IT industry depends. ■ Yours sincerely, Mark Simmonds, General Manager, Anix Group


Securing Linux

Many roads lead to Linux security, and thus the term “hardening” has a multitude of meanings. The aim remains clear: preventing intrusions and, if the worst should happen, at least mitigating the effects. This is a crucial part of the system administrator’s job. The security-conscious administrator will ensure that any software installed is absolutely necessary, choose secure alternatives, be prudent with access rights, modify configurations, enable exhaustive logging and auditing, apply security updates quickly, and enforce policies for strong passwords. Bastille Linux is a big help when performing these tasks, and uses a GUI to step the administrator through all the work involved.

COVER STORY

Protecting Linux Against Attacks

Hardening! Today’s computers are exposed to ingenious but vicious attacks, some of which are launched by local users. If protected by the appropriate patches and security tools, penguins can be a lot harder than the malevolent hacker might expect, and more importantly, they are survivors! BY ACHIM LEITNER

Preventing Exploits

Kernel patches that generically prevent exploits can provide protection from previously unknown security holes. The classic Openwall patch (http://www.openwall.com/) ensures that the processor will not execute code on the stack, thus dooming many buffer overflow exploits to failure. Skillful crackers may still be able to cause damage, but at least the hurdles will be a lot higher. An Openwall port to the 2.4 kernel gave birth to GRSecurity, which uses the PaX patch and includes an ACL (Access Control List) system. In addition to kernel patches, special C compilers also provide protection from buffer overflows. StackGuard (http://immunix.org), a modified GCC, is one product that has managed to make a name for itself. If you are unable to prevent an attack, at least you should be able to mitigate

Cover Story SE Linux.......................................20 A secure Linux environment with granular admin control over privileges

Systrace .......................................28 Protect your system by placing it in a jail of legitimate system calls.

VServer ........................................32 Multiple servers coexisting peacefully on a single computer.

RSBAC ...........................................36 Rule Set Based Access Control protection.

the effects. A chroot jail will lock processes into their own fenced-off file system tree, but it provides little protection against root exploits. Compartments can help in this case: they take capabilities away from processes and thus prevent them from breaking out of their jail. VServer (see page 32) allows you to put whole Linux distributions, even several of them, in a jail and run them simultaneously on a single machine. Systrace (page 28) does not need a virtual environment to restrict the capabilities of individual applications to a minimum.

Access Control Mandatory Access Control (MAC) allows the system administrator to specify permitted access from a central point. What the rule definitions contain is determined by the security model deployed on site. SE Linux (page 20) implements the Flask architecture, and RSBAC (page 36) even provides a variety of models. We have previously looked at LIDS in Linux Magazine. Despite its name, the Linux Intrusion Detection System is basically a

MAC system. Medusa DS9, and the new, but promising, Linsec are alternatives. If you do not feel up to integrating these techniques yourself, you might like to use a hardened distribution. Owl is an offshoot of the Openwall project (http://www.openwall.com/Owl/). WireX develops and distributes Immunix System 7: this Red Hat derivative was compiled using Stack Guard protection. Castle (http://castle.altlinux.ru) combines Mandrake Linux with RSBAC and the Openwall patches. And LIDS provides protection for EnGarde Secure Linux (http://www.engardelinux.org). The OSD group has developed a hardened distribution based on Red Hat and SE Linux (http://www.securityenhancedlinux.com). Kaladix (http://www.kaladix.org) is a project going through some changes and has altered its base platform from LFS (Linux from Scratch) to Gentoo. Kaladix is RSBAC hardened, contains buffer overflow protection, and implements a variety of security strategies. It promises to make a high level of security available to anyone interested in employing it. ■

www.linux-magazine.com

February 2003

19


COVER STORY

SE Linux

Practical Applications for Security Enhanced Linux

Security Rules

Practical Concepts – not only for Linux The SE Linux security model was not originally designed for Linux. The NSA originally developed the architectural prototypes for the Mach kernel [13] in co-operation with Secure Computing [12] (of TIS Firewall Toolkit fame): DT Mach (Distributed Trusted Mach) and DTOS (Distributed Trusted Operating System). The Linux port was first introduced when continued development led to the release of Flask (the Flux Advanced Security Kernel). The Flask system’s task is to ensure data integrity and trustworthiness – in other words, it provides access controls. Where a normal Linux kernel might tend

Sophisticated access controls are fundamental to a secure Linux environment. SE Linux, which was developed by the National Security Agency (NSA) and released under GPL, is a complex system that allows the administrator granular control over privileges. This article looks into the background, basics, installation and practical applications. BY CARSTEN GROHMANN, KONSTANTIN AGOUROS AND ACHIM LEITNER

Peter Doeberl, visipix.com

Rumors about mathematicians who can break any conceivable code abound in various urban legends. But is IT security itself merely a myth? The National Security Agency (NSA) begs to differ on this issue, and has become actively involved in enhancing Linux security. One of the more notable results is Security Enhanced Linux (SE Linux), which started life as an experimental prototype [1]. SE Linux provides additional access control features for Linux. It uses policies to decide what parts of the system users will have access to – that is, what files a process running with the privileges of a specific account can access, or what network connections the process can open. Non-privileged users cannot influence the policy, which is applied as a mandatory control by the admin user. SE Linux thus implements MAC (Mandatory Access Control, see insert “Important Terms”). However, granular security does impact the complexity of SE Linux. To run a program securely on SE Linux, the admin user needs to know every file the process opens and every subroutine it calls. But the level of security the admin user can achieve makes it well worth the effort.

to allow root to do everything and non-privileged users to do nothing, Flask provides more granular security levels that apply both to file access privileges and to inter-process communication, and a whole range of additional features. As is the case with other packages, SE Linux has only limited potential for compensating for the weaknesses in protocols or applications, but it does help to mitigate their effect. The security server is the central element in the Flask architectural model and is responsible for any security-based decisions. The name is derived from the original Mach implementations, where it used to be a userspace process, but on Linux the server runs as a kernel subsystem. Object managers are the second Flask component. They manage security attributes, ensure appropriate bindings for the objects (files, processes, sockets …) and enforce the decisions made by the security server. Object managers are well-known kernel subsystems, such as process managers, file systems, or


sockets whose functionality has been enhanced. Security decisions are reached by reference to so-called security contexts, which are basically containers for a group of security attributes. A context comprises the user ID, the user’s role, a type and an optional MLS (Multi Level Security) level. Only legal combinations that the security server recognizes are permitted.

Practical Abstraction The individual components of a security context originate from the abstraction levels introduced by SE Linux. These levels simplify the task of coping with the complex reality of all possible types of access. Access control must specify the conditions under which each program is granted access to specific objects. The first abstraction layer concerns users. The fact that SE Linux user administration is not based on the Linux user ID has several advantages; for example, the SE Linux user ID cannot



be changed after logging on. To change their privileges users have to change either their role (an additional security attribute), or their type (the third attribute class). To help the admin user keep track, despite the sheer bulk and complexity of the rules involved, multiple Linux users can be combined to form a single non-privileged user. The generic SE Linux user, “user_u”, is an example of this feature. In fact, the policy only needs to be customized for users who require more than the default privileges assigned to “user_u”. On SE Linux the term “user” is normally applied to actual people with interactive access to the system, with “system_u” being the exception. However, there is no need to add pseudo-users for specific processes, as the privileges assigned to these processes are defined by the individual type. Having said this, some programs still need to be run as their own user accounts – in fact, file system privileges, for which SE Linux does not provide an abstraction, require this. Separate user management means that both of these independent components must allow an action for it to succeed. Freely definable roles provide the next layer of abstraction (RBAC, Role-Based Access Control). It is possible to run multiple processes and applications within the context of a single role. Roles are modelled on the tasks performed by a process or file. The sample configuration detailed in the following sections uses three roles: “system_r”, “sysadm_r”, and “user_r”. System processes run in the context of the “system_r” role, normal users are assigned “user_r”, and the “sysadm_r” role is provided for administrative users.

Forceful Types Types provide an additional abstraction layer (TE, Type Enforcement); in fact whether access is allowed or denied will finally be decided by reference to the type. The rules define what types can access what other types. You may also discover references to domains, but the difference between domains and types is purely linguistic. Types that are bound to processes are referred to as domains, although no internal distinction is made.

SE Linux defines the type by reference to the role, and not by investigating the user ID or filename. And privileges are ascertained by the types assigned to a role. A user working in the context of the “user_r” role will not be able to load a kernel module, not even if she is root, but she will be allowed to load a kernel module when working in the context of the “sysadm_r” role, provided she is permitted to assume this role. After issuing the “newrole” command to change your role and after completing authentication, a new process is launched for the new role. Of course this assumes that role changes are permissible on the current machine, and that the user is allowed to occupy both roles. As changing a domain is merely a specific


way of changing a type, again a new process is required to change from one domain to another. The following conventions are recommended to help keep track of users, types, roles and domains:
• user: “_u”
• role: “_r”
• type or domain: “_t”
The “_u” suffix is not used for Linux system users, to help distinguish between the two models.
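Following these conventions, a full context string can be taken apart mechanically. A minimal shell sketch (the context value is an illustrative example, not taken from a live system):

```shell
#!/bin/sh
# Split a security context of the form user:role:type into its parts
# using POSIX parameter expansion; the sample value is made up.
ctx="root:sysadm_r:sysadm_t"

se_user=${ctx%%:*}        # everything before the first colon
se_rest=${ctx#*:}         # everything after the first colon
se_role=${se_rest%%:*}    # role is before the next colon
se_type=${se_rest#*:}     # type is the remainder

echo "user=$se_user role=$se_role type=$se_type"
# prints: user=root role=sysadm_r type=sysadm_t
```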

Enforcing Decisions To allow practical applications, security attributes are combined to form a security context. The security context of a subject (a process) or an object (a file, socket, IPC object …) comprises a

Important Terms
DAC: Discretionary Access Control (the typical Linux procedure) allows users to modify access privileges to their own objects at their own discretion. DAC commonly refers to user ID based access control. Whether or not an action is permitted is decided by evaluating the user ID of the subject and the object owner. There are only two types of users: normal users and superusers.
Domain: Security attribute of a process within the TE (Type Enforcement) model. SE Linux TE does not differentiate between types and domains. However, types that refer to subjects (that is, processes) are commonly referred to as domains.
Label: Symbolic descriptor for subjects and objects that allows SE Linux to reach a decision on whether to allow or deny access. A label contains the security attributes which are applied by a central policy. In the case of SE Linux the label resides within the security context.
MAC: Mandatory Access Control refers to a policy administrator defining access privileges centrally. Users and their processes are not allowed to edit the policy, which governs all access. Many definitions assume a special form of MLS for MAC, and thus refer to generic MAC as non-discretionary access control.
MLS: Multi Level Security assigns a security level to subjects and objects, in line with layers of security for important documents: confidential, secret, top secret. Only users with sufficient security clearance are allowed access to objects.
Object: Refers to any component accessed, such as files, directories or network sockets.
Permissions: Depend on the object type. For files they could be read, write, create, rename, or execute, for example – for processes possibly fork, ptrace, or signal.
PSID: The persistent SID is the permanent version of a security ID. A PSID is considered persistent when it survives after rebooting. It represents the binding between an object and its security context, as in the case of files, for example.
Policy: This set of rules defines who can access what, where and with what privileges.
RBAC: Role Based Access Control describes access control by means of roles. In SE Linux permissions derive from the types and domains associated with a role.
Role: Roles simplify user management. Users are assigned roles depending on the tasks they need to perform. Permissions are assigned to users via their roles; users can be assigned to roles independently.
Security Context: A combination of user ID, role and type. To retain compatibility with other security models, the security context is a text string whose content is parsed by the security server.
SID: The security ID is a number that points to a tangible security context. This binding is applied by the SE Linux security server at runtime.
Subject: Active component in a system, that is, a process.
TE: Type enforcement defines access by domains (subject classes) to types (classes of objects), or other domains, by reference to an access matrix. SE Linux simplifies this model and also describes domains as types. The matrix defines permitted interactions between types.
Type: Security attribute of an object within the TE (type enforcement) model.
User: SE Linux user management is independent of the Linux user ID.





Figure 1: When a subject attempts to access an object the object manager intervenes by consulting the security server and asking for confirmation that the access request is legitimate

three-part colon-separated text string. Attributes are the user, the user’s role, and the type, for example “system_u:object_r:inetd_exec_t”. The security server assigns a security context to each process, based on a set of rules, the parent process, and the user ID of the process. The rules must define what processes can spawn what child processes. This technique would stop a compromised sendmail process launching “/bin/tcsh”, for example. In order to specify the security context, the rules assign a label to each object. The permissions are far more granular than the privileges usually assigned by Linux. In the case of files, for example, SE Linux distinguishes between read, write, create, rename, and execute permissions. Process permissions can allow or deny fork, ptrace, or signalling. At runtime SE Linux does not always use an extensive string representation, instead assigning numbers (so-called SIDs, Security Identifiers) to represent the strings. These integers are only valid locally and temporarily; however, persistent SIDs (PSIDs) can be assigned to file system objects. Their security context binding is stored in the file


system. One interesting idea that the developers of SE Linux are looking into [11] is the concept of binding SIDs to IPSEC security associations, thus allowing networked applications running on various SE Linux hosts to be run at the same security layer. Before access occurs, the object manager sends the security contexts of both the subject and the object to the security server, which will make a rule-based decision. If a process attempts to access a file, the object manager registers the “open()” call and asks the security server to legitimize the attempt (see Figure 1). In contrast to normal Linux kernels, the manager will issue the same request for each write or read access. A normal kernel will only make this decision once, on initial access: if the rights of the open file are changed, the process can continue reading. SE Linux prevents this from happening. The Access Vector Cache (AVC) is designed to prevent system performance from suffering under the load generated by requests of this type. The security server’s responses are stored in the cache, allowing faster processing of known requests. And this means that total performance is not noticeably affected by


Figure 2: The security server’s decision to permit or deny access is cached as an access vector to accelerate the handling of this request in the future


continual clearance requests. If the permissions defined in the SE Linux policy change, the security server marks any modified entries in the AVC as invalid. Figure 2 clarifies this type of access. Each process is run in a protected context of its own. User ID “0” has no influence on this, unless the rules contain explicit instructions to the contrary. This allows you to confine processes and enable or disable system calls. You can also precisely define any files that the process will be allowed to read, write, or create. You can even remove any privileges that might be harmful to the system from processes that require more extensive privileges (in order to bind ports below 1024, for example). Even if an attacker attains root access, she will still be confined to the jail. The SE Linux sources are covered by the GPL and can be downloaded from the NSA [1] and Sourceforge [3] websites. In our lab environment we installed the complete package [2] on a minimal SuSE 7.3 system. As the SE Linux defaults are based on Red Hat, some minimal changes are required before installing. Note that SE Linux currently supports the Ext2, Ext3, and ReiserFS file systems.
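The caching idea described above can be illustrated with a toy memoization sketch in shell. This is a model of the mechanism only, not SE Linux code, and the type names are invented:

```shell
#!/bin/bash
# Toy model of the Access Vector Cache: the first query for a
# (source type, target type, permission) triple "consults the policy";
# repeat queries are answered from the cache without re-evaluation.
declare -A avc_cache
policy_lookups=0

check_access() {
  local key="$1|$2|$3"
  if [[ -z "${avc_cache[$key]+set}" ]]; then
    policy_lookups=$((policy_lookups + 1))   # cache miss: evaluate policy
    case "$key" in
      "user_t|etc_t|read") avc_cache[$key]=allowed ;;
      *)                   avc_cache[$key]=denied ;;
    esac
  fi
  echo "${avc_cache[$key]}"
}

check_access user_t etc_t read      # miss: consults the "policy"
check_access user_t etc_t read      # hit: answered from the cache
check_access user_t shadow_t read   # miss: new triple
echo "policy lookups: $policy_lookups"   # prints: policy lookups: 2
```

Invalidating the cache when the policy changes, as SE Linux does, would simply mean emptying `avc_cache`.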

Installing SE Linux Root privileges are required to install SE Linux. Expanding the archive will create the “lsm-2.4” subdirectory with the revised kernel, and “selinux” with the required programs and rules. The kernel comprises the LSM (Linux Security Modules) [4] patches, which add the hooks the kernel requires to implement SE Linux as a kernel module. The SuSE kernel does not support SE Linux, as the extensive modifications it contains prevent the admin user from installing the LSM and SE Linux patches. Unless other-

SuSE Packages for SE Linux
A number of tools are required to compile SE Linux. The tools are included in the following RPM packages for SuSE Linux 7.3:
• “d” series: bison, gettext, flex, pam_devel, openssl_devel, patch, slang (for scurses.h) and yacc
• “a” series: diffutils, ncurses (for libcurses), texinfo (for makeinfo), and util-linux (for more)
• “ap” series: sharutils
You may require additional packages depending on your own configuration.



Follow the normal make procedure, “make && make install”, in “selinux/module” and “selinux/libsecure”. Other tools will require two patches available from [6]:

Listing 1: File Contexts

/home/[^/]*        -d    system_u:object_r:user_home_dir_t
/var/run(/.*)            system_u:object_r:var_run_t
/var/run/.*\.*pid        <<none>>

wise designated, the new “selinux” subdirectory is used as the starting point for any subsequent steps. If you are in a hurry, you can place the whole installation in the capable hands of a makefile with “make quickinstall”; however, the more roundabout approach also has its points of interest. You will need a few tools for the installation; the “SuSE Packages for SE Linux” box gives details. A few modifications to the LSM kernel are also required for SE Linux. The appropriate patches are located in the “selinux/module” subdirectory. Simply issue the “make insert” command here. To write the kernel to “/boot” automatically you will need to uncomment the line containing “export INSTALL_PATH=/boot” – but again you can leave this step to a patch file. An additional patch changes the kernel image filename to “/boot/vmlinuz-selinux”, thus retaining the original kernel, which is particularly useful while performing tests and for troubleshooting later. The last two patches referred to are available from [6]:

cd ../lsm-2.4
patch -p0 < ../kernel_install_path.diff
patch -p0 < ../kernel_vmlinuz-selinux.diff

The new kernel still needs to be configured for the local machine, although you can use an existing “.config” as a reference point. SE Linux requires “Network Packet Filtering” from “Networking Options” and one or two “Security Options” (Figure 3). The “NSA SELinux Development Support” module is a big help when defining your own sets of rules. It launches SE Linux in permissive mode instead of enforcing mode, which means that any actions that break the rules will not be prevented, but merely logged. The kernel is then generated with the following:

make dep && make bzImage && make modules


&& make modules_install && make bzlilo && make clean
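The options selected above correspond to lines in the kernel’s “.config”. The exact symbol names below are assumptions based on LSM-era trees, so verify them against your own patched source:

```
# Assumed .config excerpt for an LSM-patched 2.4 kernel -- symbol names
# are illustrative and should be checked against the patched tree:
CONFIG_NETFILTER=y                  # "Network Packet Filtering"
CONFIG_SECURITY_CAPABILITIES=y      # "Capabilities Support"
CONFIG_SECURITY_SELINUX=y           # "NSA SELinux Support"
CONFIG_SECURITY_SELINUX_DEVELOP=y   # "NSA SELinux Development Support"
```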

The next step is to make the boot loader aware of the new kernel and its features. SE Linux automatically boots to permissive mode with development support, but you can set the boot option “enforcing=1” to change this. It is usually a good idea to add two boot configurations to the existing configuration: the first entry should boot SE Linux with “enforcing=1” set; the second entry, with no additional parameters, should boot SE Linux in permissive mode. Enforcing mode should be the default to prevent security measures being disabled on rebooting.
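The two entries might look like this in “/etc/lilo.conf” – a sketch only; the image path, labels, and root device are examples and must match your own system:

```
# Hypothetical lilo.conf stanzas; adjust paths, labels and root device.
default = selinux                 # enforcing mode is the default

image  = /boot/vmlinuz-selinux
    label  = selinux
    root   = /dev/hda1
    append = "enforcing=1 3"      # enforcing mode, boot to runlevel 3

image  = /boot/vmlinuz-selinux
    label  = selinux-perm         # permissive mode for testing
    root   = /dev/hda1
    append = "3"
```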

Enabling or Disabling GUI Login

patch -p0 < utils_makefile_uselargefile.diff
patch -p0 < utils_libncurses.diff

You cannot compile the sources on SuSE Linux without applying these patches. The “selinux/utils/Makefile” contains “./configure” commands for most tools, and you can also configure the options here. “make && make install” in the “utils” subdirectory will create and install these packages; this also applies to the setfiles tool in the “selinux/setfiles” subdirectory.

A Soft Landing Without Hard Links You will need to remove one further obstacle before configuring SE Linux. The “/etc/localtime” file is a hard link (see also the “SE Linux Tips” insert), and this is something SE Linux cannot handle, as it means two different security entries pointing to the same inode.

cd /etc
cp localtime localtime.hl
rm localtime
mv localtime.hl localtime

To prevent “SuSEconfig” from redefining this hard link, you will need to set the “TIMEZONE="timezone"” entry in “/etc/rc.config” to “YAST_ASK”. SuSE 8.0 or newer stores this entry in “/etc/sysconfig/clock”. If the system uses other hard links, SE Linux will issue a warning when defining security contexts.

Before starting the installation procedure, you might like to check whether your machine boots to runlevel 3 (without GUI login). The display managers have not been adapted to reflect the new login pattern and will not work. Instead of editing your “/etc/inittab” you can simply add “append=3” to the SE Linux entries in “/etc/lilo.conf”, and then launch “lilo” to enable the newly modified configuration. If you really need a GUI login, you can use a modified GDM for Red Hat, which is available from [3], or the KDM patch from [8]. You can easily launch X11 by typing “startx” to run KDE or Gnome on SE Linux, but unfortunately there are no rules for the desktop environments.

The SE Linux package has a number of modified userspace programs. Most of them are installed in the directory below “/usr/local/selinux”, exceptions being OpenSSH, login, the cron daemon, and such like.

Figure 3: When defining a configuration with “make xconfig”, a few optional security modules need to be added to the kernel: “Capabilities Support” and “NSA SELinux Support” are mandatory and “Development Support” is useful

As almost all the files that influence SE Linux behavior are stored below




Listing 2: Mingetty Error Messages

1 Jul 5 17:36:36 max kernel: avc: denied { read } for pid=616 exe=/sbin/mingetty path=/2/fd/10 dev=00:03 ino=163850 scontext=system_u:system_r:getty_t tcontext=system_u:system_r:init_t tclass=lnk_file
2 Jul 5 17:36:36 max kernel:
3 Jul 5 17:36:36 max kernel: avc: denied { read } for pid=616 exe=/sbin/mingetty path=/450/maps dev=00:03 ino=29491213 scontext=system_u:system_r:getty_t tcontext=system_u:system_r:postfix_master_t tclass=file
4 Jul 5 17:36:36 max kernel:
5 Jul 5 17:36:36 max kernel: avc: denied { getattr } for pid=616 exe=/sbin/mingetty path=/450/maps dev=00:03 ino=29491213 scontext=system_u:system_r:getty_t tcontext=system_u:system_r:postfix_master_t tclass=file

“selinux/policy”, this directory will be the starting point for the next few steps. Comments in the configuration files are indicated by hash signs, “#”, as you would expect. The content of these files is interpreted by the “m4” macro preprocessor; many admins will be familiar with this tool from configuring sendmail. The macros help simplify more complex configurations; however, the initial steps and normal use will not require you to brush up on your “m4” skills.

Configuring SE Linux The “users” file assigns a usable role to every user, and the system policy will only apply to user accounts listed in the file. The first thing you should do is delete the sample users “jdoe” and “jadmin”. The entries follow the “user username roles role;” pattern or, if multiple roles are permitted, “user username roles { role1 role2 };”. There are three additional pre-defined users: “system_u”, the system user, “root”, and “user_u”. Any users not explicitly named are automatically assigned to the default user “user_u” and assume the “user_r” role. Thus, you do not need to add every single Linux user to this file. The following line contains a sample entry for a non-privileged user: user foo roles { user_r bar_r };
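Following the same pattern, a “users” file granting one person the admin role might contain entries like these (the account names are invented for illustration):

```
# Hypothetical entries for the "users" file:
user chloe roles { user_r sysadm_r };   # may switch to the admin role
user guest roles user_r;                # single role, no braces needed
```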

To assign additional permissions to a user, you can allow the user access to the “sysadm_r” role. If the entry for a user specifies multiple roles, the user can change roles at any time. The “sysadm_r” role will assign root equivalent SE Linux permissions to the user, however, this does not imply root privileges on the underlying Linux system, as SE Linux access control works on top of the standard Linux access control. Thus, the


user will not be able to access the “/root” directory. In practical applications, roles are used to permit or deny access to various programs. One example of this is the “insmod” program. The members of the “sysadm_r” role and the system itself are allowed to use the program, as both roles comprise the “insmod_t” domain (that is, type). In contrast, normal users with the “user_r” role assignment will not have permission. This prevents users with the “user_r” role assignment from loading kernel modules. Even if a user escalates her privileges to root (user ID 0), she will not be able to load any modules, unless she additionally has access to the “sysadm_r” role.

Security Contexts for Files and Processes The files below “file_contexts” assign security contexts to file system entries. “types.fc” contains non-specific, program independent assignments, and application specific assignments are located below “program”. Taking a look at these files should help to shed some light on how SE Linux works (see Listing 1). Each line starts with a file system entry, which can easily be characterized by a regular expression. The “^” and “$” anchors at the start and end of the lines can be omitted

as SE Linux will add these controlling characters automatically. The file entry may be followed by a file type, which is supplied as a parameter with a minus sign prepended. “-d” represents a directory entry, and the rule will thus apply to directories only. Enter “--” instead if the rule is meant for normal files only. The security context is shown at the end of the line. It always includes the user, “system_u”, the role, “object_r”, and a corresponding type. Files created at SE Linux runtime are automatically assigned the user, role and type defined for the process that created them. To avoid creating a security context, you can also specify “<<none>>” at this point. If a file matches multiple rules, SE Linux will apply the last line the file matches. The second line in Listing 1 matches all the file system entries below “/var/run” and assigns the “system_u:object_r:var_run_t” security context to them. The next line removes this assignment for any of the entries ending in “.pid”. This just goes to show how important the order is: mixing up lines 2 and 3 would assign the security context to the PID files. Thus, entries must be arranged in ascending order of specificity. The wrong order will often produce unexpected results.
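The last-match rule can be sketched in shell: each path is tested against every pattern in order, and the final match wins. The patterns and contexts mirror Listing 1 (with the automatic anchors written out), but the matching here is a simplification of the real setfiles behavior:

```shell
#!/bin/bash
# Pick the security context of the LAST matching rule (cf. Listing 1).
# Each rule is "pattern context", in file order.
rules=(
  '^/var/run(/.*)$ system_u:object_r:var_run_t'
  '^/var/run/.*\.pid$ <<none>>'
)

context_for() {
  local path=$1 match="(no match)" pattern context rule
  for rule in "${rules[@]}"; do
    pattern=${rule%% *}
    context=${rule#* }
    if [[ $path =~ $pattern ]]; then
      match=$context       # later rules override earlier ones
    fi
  done
  echo "$match"
}

context_for /var/run/utmp       # -> system_u:object_r:var_run_t
context_for /var/run/crond.pid  # -> <<none>>
```

Swapping the two rules would leave the PID files labeled “var_run_t”, which is exactly the ordering mistake the article warns against.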

Listing 3: Extending Paths # # add SE Linux utilities and man pages to path and manpath # uname -r | grep --silent selinux if [ "$?" = "0" ] ; then PATH=/usr/local/selinux/bin:/usr/local/selinux/sbin:$PATH MANPATH=/usr/local/selinux/man:$MANPATH export PATH MANPATH fi




The different terms used for type enforcement – that is, “type” for files and “domain” for processes – are reflected in the names of the configuration files. The “selinux/policy/domains/program” directory contains files which end in “.te” (type enforcement), which are used to define permissible access types (that is, what domain is allowed what kind of access to what type). Only the type definitions are significant in “*.te” files, not file names, user IDs or roles. To allow permissions to be assigned as foreseen, you will need to assign security contexts to your processes. Files ending in “.fc” (file context) in the “selinux/policy/file_contexts/program” subdirectory are responsible for this task. The security context of a process is derived from the security context of the program file, which is defined by an “*.fc” file.
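A “*.te” rule follows the pattern “allow source_type target_type:class { permissions };”. The following fragment is a hypothetical illustration – the domain and type names are invented and do not appear in the sample policy:

```
# Hypothetical example.te: processes in mydaemon_t may read their own
# config type and append to their own log type; the names are made up.
allow mydaemon_t mydaemon_conf_t:file { read getattr };
allow mydaemon_t mydaemon_log_t:file { create append getattr };
```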

The other files do not need modifying. The only line in “initrc_context” defines the security context for any init scripts run by calling “run_init”. “passwd_context” and “shadow_context” define the security context for “/etc/passwd” and “/etc/shadow”. This allows various wrapper programs, such as “spasswd”, to restore the context after “passwd” and other programs have written to these files. The “policy/rbac” file is used for configuring the RBAC mechanism (Role Based Access Control) and should not be modified unless a new role necessitates this step. The existing rules do not require any role changes; an unpremeditated role change could endanger the security of the whole system. The file is line based and adheres to the following syntax:

allow old_role new_role;

In contrast to type changes, which are defined by rule sets of their own, every line in this file explicitly permits a clearly defined role change.

Customized Settings

SE Linux will only perform as designed if you tailor it to reflect your current distribution. As the defaults are Red Hat specific, you will need to modify them to correspond to a SuSE system. This particularly applies to the file system entries in the left column of the “*.fc” files, as these two distributions use different paths for various programs. To avoid the time-consuming process of modifying these files manually, you might like to check out the additional SuSE rules located at [7]. To simplify the process of applying modified SE Linux rules in the future, the specific rules have been organized in the “suse.fc” and “suse.te” files. You will need to copy “suse.fc” to the “selinux/policy/file_contexts/program” directory, and “suse.te” to “selinux/policy/domains/program”. “selinux/utils/appconfig” contains a few files used to configure the programs modified for SE Linux. All of these files should be copied to “/etc/security” first. The “default_contexts” file defines what roles and what type are assigned by default to local logins, logins via SSH, and cronjobs. The entries in “default_type” assign a default type to each role. The format of these entries is “role:domain”. You will need to modify this file if you define additional roles.

Mingetty with Maxi Privileges If you use SuSE’s mingetty, you should be prepared for a barrage of messages about missing permissions (Listing 2).


This mingetty variant needs to parse the PID directories below “/proc”. As these directories are assigned to the security context of the process that owns them, mingetty would require read permissions for too many different types, and thus defeat the aim of SE Linux. One possible solution is to use the Red Hat mingetty package [9]. The binary RPM can be created from the source RPM by issuing “rpm --rebuild mingetty-1.00-1.src.rpm” and installed by issuing “rpm -ihv --force mingetty-1.00-1.rpm”. This action will overwrite the SuSE counterpart. The exact path to the newly created mingetty package will depend on your distribution, but normally defaults to “/usr/src/packages/RPMS/i386”. After completing these customization steps, you can create your policy in the “selinux/policy” directory by following the “make && make install” pattern. The policy is applied after rebooting your system. However, “make load” is available during SE Linux runtime to load the new policy immediately.

A Question of Policy

The next step uses “make reset” to bind security contexts to file system entries; this places a “...security” directory in the root directory of every mounted and supported file system.

Listing 4: Processes
PID  SID  CONTEXT                           COMMAND
1    7    system_u:system_r:init_t          init [
2    7    system_u:system_r:init_t          [keventd]
3    7    system_u:system_r:init_t          [kapmd]
4    1    system_u:system_r:kernel_t        [ksoftirqd_CPU0]
5    1    system_u:system_r:kernel_t        [kswapd]
6    1    system_u:system_r:kernel_t        [bdflush]
7    1    system_u:system_r:kernel_t        [kupdated]
8    7    system_u:system_r:init_t          [kreiserfsd]
214  169  system_u:system_r:syslogd_t       /sbin/syslogd
217  166  system_u:system_r:klogd_t         /sbin/klogd -c 1
279  172  system_u:system_r:atd_t           /usr/sbin/atd
489  176  system_u:system_r:inetd_t         /usr/sbin/xinetd -reuse
550  180  system_u:system_r:crond_t         /usr/sbin/crond
574  182  system_u:system_r:getty_t         /sbin/mingetty --noclear t
575  186  system_u:system_r:local_login_t   login -- root
576  182  system_u:system_r:getty_t         /sbin/mingetty tty3
577  182  system_u:system_r:getty_t         /sbin/mingetty tty4
578  182  system_u:system_r:getty_t         /sbin/mingetty tty5
579  182  system_u:system_r:getty_t         /sbin/mingetty tty6
603  187  root:sysadm_r:sysadm_t            -bash
622  187  root:sysadm_r:sysadm_t            ps ax --context

www.linux-magazine.com

February 2003

25


SE Linux


The “...security” directory contains a database of PSIDs (Persistent Security Identifiers), which map the inodes of the individual files and directories to the appropriate security context. Changes are applied on rebooting the system, but you can enable them in the current SE Linux session by typing “make relabel”. This command is also required after a non-SE Linux kernel has been running. Floppy and CD drives are assigned a security context dynamically when mounted. When adding new rules to a system, you should update the policy first before changing the security context files. Failing to do so could mean that the system does not recognize a new type that you have applied to files and directories; in this case, SE Linux would prevent potentially critical access. Before booting SE Linux for the first time, you might like to extend your search paths to include the SE Linux manpages and programs. The easiest way to do this is to refer to Listing 3, which can be run as “/etc/profile.local”.

Booting and Logging On

Booting to permissive mode is recommended to avoid losing control when incomplete or erroneous rules are applied. In this case you can change to the “sysadm_r” role – of course this is also true of other modes. If SE Linux boots without any errors, issuing “ps ax --context” will produce output similar to that shown in Listing 4 – that is, it will show the processes within their appropriate security contexts. The third column in this output lists the security context in the “user:role:type” format mentioned previously. Any processes belonging to a user will run with the user’s ID and role. Both attributes are inherited by any child processes. The security context for system processes is not user-definable, as they will always run with the permissions of the “system_u” user, and within the context of the “system_r” role. Only the domain (that is, the type) will depend on the actual process. If every process runs within the context of the same domain, the files below “selinux/policy/file_contexts” may not have been correctly customized. The fact that child processes inherit domains can mean that some processes will still reside in the “initrc_t” domain after booting. However, this domain is used exclusively for launching the scripts below “/etc/init.d/”. The admin can either stop the RC scripts launching these programs, or define a domain for the programs that are launched.

Listing 5: File System
drwxr-xr-x  root root  system_u:object_r:root_t          ./
drwxr-xr-x  root root  system_u:object_r:root_t          ../
drwx------  root root  system_u:object_r:file_labels_t   ...security/
drwxr-xr-x  root root  system_u:object_r:bin_t           bin/
drwxr-xr-x  root root  system_u:object_r:boot_t          boot/
lrwxrwxrwx  root root  system_u:object_r:root_t          cdrom
lrwxrwxrwx  root root  system_u:object_r:root_t          cdrw
drwxr-xr-x  root root  system_u:object_r:device_t        dev/
drwxr-xr-x  root root  system_u:object_r:etc_t           etc/
lrwxrwxrwx  root root  system_u:object_r:root_t          floppy
drwxr-xr-x  root root  system_u:object_r:user_home_t     home/
drwxr-xr-x  root root  system_u:object_r:lib_t           lib/
drwxr-xr-x  root root  system_u:object_r:lost_found_t    lost+found/
drwxr-xr-x  root root  system_u:object_r:root_t          media/
drwxr-xr-x  root root  system_u:object_r:root_t          mnt/
drwxr-xr-x  root root  system_u:object_r:root_t          opt/
dr-xr-xr-x  root root  system_u:object_r:proc_t          proc/
drwx------  root root  system_u:object_r:sysadm_home_t   root/
drwxr-xr-x  root root  system_u:object_r:sbin_t          sbin/
drwxrwxrwt  root root  system_u:object_r:tmp_t           tmp/
drwxr-xr-x  root root  system_u:object_r:usr_t           usr/
drwxr-xr-x  root root  system_u:object_r:var_t           var/

The “ls / --context” command lists security contexts for files (Listing 5). The output has some resemblance to “ps”. If new files are created on shutting down or booting the system, they will not have a label or a security context, even if they are covered by a policy rule. You can

Tips for SE Linux
Avoid hard links: A file referred to by different names via a common inode cannot have different privileges. The security context refers to the inode, so both names are placed in the same context, even if the file context configuration attempts to assign two different contexts.
Do not convert Ext 2 to Ext 3: If you do use an Ext 3 file system, it should not be a converted Ext 2 file system. The “.journal” file created by the conversion can cause trouble, as there is no file type defined for it.
No RAM disk: You should launch SE Linux without initializing a RAM disk.
Mixing SE Linux and standard Linux: If you boot a non-SE Linux kernel, you should delete all the “...security” directories and regenerate the security context for the files before rebooting SE Linux.
Mailing list archive: The archive for the SE Linux mailing list is only updated infrequently on the home page [1]. You may prefer to use a different archive [5].
Backing up the configuration: You can copy “selinux/policy” to “/etc/selinux/policy” in order to simplify backing up the configuration files.
Faster setfiles: Running “setfiles” can take quite a while if you have a lot of files to process. As an alternative, consider running “setfiles” manually and only processing file systems that have been modified. The working directory must be “selinux/policy”. You will need to issue the “setfiles file_contexts/file_contexts partition_root” command. If the file “file_contexts/file_contexts” does not exist, or is too old, you can instead use the “make file_contexts/file_contexts” syntax. It is simpler to use “chcon” rather than “setfiles” for less critical changes.
X server: If you intend to use an X server, you should follow the steps described in point 4 of the “selinux/README” section of the installation manual, and then run “startx”. Our experiments with KDE 2 produced only a few error messages, and a set of rules for KDE should remedy the situation.
Root login: The current rule set does not allow “sshd” to access “/root”’s home directory. SSH-based root login should therefore be avoided. You can and should switch IDs and roles after logging in, using “su -” and “newrole”.
Boot messages: Looking at the boot messages provides further insight into SE Linux. You can ascertain the current SE Linux mode, for example.
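Following the “Faster setfiles” tip, a manual relabeling session might look like the following sketch; the partition root “/home” and the file “/tmp/scratch” are purely illustrative:

```
cd selinux/policy
setfiles file_contexts/file_contexts /home   # relabel just this file system
chcon system_u:object_r:tmp_t /tmp/scratch   # one-off change, undone by "make relabel"
```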



Command Line Tools

SE Linux also comprises a few userspace tools. The “avc_enforcing” program displays the current SE Linux mode, that is, “enforcing” or “permissive”. The “avc_toggle” tool toggles between the two modes and does not require any additional parameters. Just like the standard Linux “su” command, “newrole” launches a new shell with a different role. “newrole -r sysadm_r”, for example, prompts the user for a password before changing to the “sysadm_r” role. The password prompt ensures that only users, and not shell scripts, can change roles. The following conditions must be fulfilled before a user can change role:
• The user must be a member of both roles in “users”.
• The role change must be permitted by the “rbac” file.
The “newrules.pl” script is important. It is located in the “selinux/scripts/” directory and allows you to create new rules from kernel messages. Calling “newrules.pl --help” will display the syntax. The “-v” option is particularly interesting: it adds a comment comprising information on the offending process to the rule. “run_init” is used to launch init scripts. After prompting for a password, the program changes to the “/etc/security/initrc_context” security context and runs the init script.
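A possible workflow with “newrules.pl”, assuming the script accepts kernel log messages on standard input (check “newrules.pl --help” for the actual syntax); the output file name is arbitrary:

```
dmesg | selinux/scripts/newrules.pl -v > new-rules.te
```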

Modified Progs for SE Linux

To use SE Linux’s additional features, the package contains a number of modified programs, such as “ps”, “ls”, “find”, “id”, and “mkdir”. The modified “tar” tool also backs up the security context of the files. “runas” launches a program in a different environment – with a different role or in a different context, for example. “chcon” changes the security context of files and directories; however, these changes are reverted when you run “make relabel”. “setfiles” is used to define a security context for a file system, and “load_policy” loads a new policy. “list_sids” displays the security identifiers (SIDs), and “sid_to_context” displays the security context for a SID.

Linux typically allows root to change other users’ passwords without knowing their current passwords. This privilege is hard-coded into tools such as “passwd”, “chfn”, and “chsh”. All three tools need write privileges for the password files, so a simple policy is inappropriate in this case. Wrappers provide a solution: “spasswd”, “schfn”, and “schsh” ensure that users can only change their own data, unless they have special permission. Permission does not depend on the user ID, but on the domain.

Critical modifications also apply to “login”, “sshd”, and “crond”. As cron fulfills a large number of system-specific tasks, it is difficult to define an appropriate rule. The recommended procedure is to comment out any tasks you do not require cron to perform, and define a set of rules for a cron domain for any remaining tasks. The SuSE crond is called “cron”, whereas the SE Linux enhanced version is called “crond”. The “/etc/init.d/cron” init script will need some modifications (the “CRON_BIN” variable defines the name of the binary). There are also no rules for YaST. The program requires quite extensive permissions. As defining a complete set of rules may be extremely time-consuming, you can either do without YaST or run the tool in permissive mode.
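The init script edit mentioned above amounts to a single line; the variable name is taken from the text, while the exact path of the binary is an assumption:

```
# /etc/init.d/cron
CRON_BIN=/usr/sbin/crond   # was "cron"; point it at the SE Linux enhanced daemon
```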

Conclusions

SE Linux provides the admin user with extremely granular control over a system. However, this potential advantage can turn out to be troublesome: the more rules you implement, and the more complicated they become, the more difficult it will be to check and troubleshoot them. In other words, the potential for error increases. You can mitigate the danger by defining groups of rules, such as original, unchanged SE Linux rules; operating-system specific rules used to customize SE Linux for your distribution; and system-specific rules that reflect local conditions. This allows you to introduce and maintain new rules more easily. Alternatively, you might consider reducing the range of tasks a program performs, instead of defining a large number of new rules. Your goals will define your approach: It is easier to remove a few cronjobs (such as updating the locate database via “updatedb”) than to define additional crond-specific rules. Many tasks, however, are indispensable and you will not be able to avoid defining appropriate rules. ■

INFO
[1] SE Linux home page at NSA: http://www.nsa.gov/selinux/
[2] SE Linux package (36 MB): http://www.nsa.gov/selinux/download2.html
[3] SE Linux project home page at Sourceforge: http://sourceforge.net/projects/selinux/
[4] LSM kernel: http://lsm.immunix.org
[5] Alternative mailing list archive: http://marc.theaimsgroup.com/?l=selinux
[6] Patches for SE Linux: ftp://ftp.linux-magazin.de/pub/listings/magazin/2003/01/SELinux/installpatches_20020930.tar.gz
[7] Additional rules for SuSE: ftp://ftp.linux-magazin.de/pub/listings/magazin/2003/01/SELinux/suse_rules_20021105.tar.gz
[8] SE Linux patch for KDM: http://www.coker.com.au/selinux/kdm/
[9] Mingetty package “mingetty-1.00-1.src.rpm”: http://www.rpmfind.net
[10] Stephen Smalley, “Configuring the SELinux Policy”: http://www.nsa.gov/selinux/policy2-abs.html
[11] Peter Loscocco and Stephen Smalley, “Integrating Flexible Support for Security Policies into the Linux Operating System”: http://www.nsa.gov/selinux/freenix01-abs.html
[12] Secure Computing Corporation: http://www.securecomputing.com
[13] Mach kernel: http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/mach/public/www/mach.html

THE AUTHORS

then issue “make relabel” in the policy directory to add the missing labels. You should then watch your machine in permissive mode for a while, just in case you need to modify the rules.


Carsten Grohmann has been interested in computers ever since the KC87 was invented, and started working with Linux in 1997. He has been working as a system administrator since 2000. Konstantin Agouros started investigating Unix and the Internet in 1989, and has been interested in Linux since 1994. He is responsible for the Competence Center Security at Netage.



Systrace


Systrace Enforces Rules for Permitted System Calls

Gatekeeper

Vulnerabilities in web servers, browsers, IRC clients or audio players may allow programs to perform all kinds of malevolent tricks. Systrace protects your system from unpleasant consequences by placing it in a tightly locked jail of legitimate system calls. BY MARIUS AAMODT ERIKSEN AND NIELS PROVOS

Systrace, the kernel gatekeeper, forces processes to respect a policy for system calls, thus restricting access to a host. Of course this will not remove any existing vulnerabilities, but it will mitigate the consequences. If a program is not required to launch any other processes, the systrace policy will disable the syscall normally used for this purpose. An intruder will be unable to open a shell, even if she has gained complete control over an active process. To enforce the policy, systrace intercepts system calls at kernel level and launches only those functions intended by the legitimate user. If an application attempts to step outside the bounds set by systrace, a GUI popup warns the user and prompts them to decide to either permit or deny the action. Systrace comprises a kernel patch, a command line program, and a gtk GUI (BSD license). All three components are available from [1].

Userspace applications use system calls to access the kernel. System calls provide services in areas where security is critical, such as file handling, network connections, or the heap. Table 1 provides an overview of common syscalls. More than 200 calls are available on most UNIX-type operating systems, and they provide the only way to make persistent changes to a system. Without them a process could not perform any useful tasks, although admittedly an attacker would not be able to get up to any mischief either.

Table 1: Common System Calls
Syscall   Function
fork      Creates a new process
execve    Executes a file
open      Opens a file
read      Reads from a file descriptor
write     Writes to a file descriptor
connect   Uses a socket to open a connection to a remote host
bind      Binds a socket to a name
unlink    Deletes a directory entry

No Attacks without System Calls

A typical attack might succeed due to a buffer overflow in a web server that allows an intruder access to a shell. The malevolent hacker would then inject exploit code that runs with the privileges of the web server process. The code will need to execute a few system calls, such as “fork()” and “execve()”, to launch the shell. Thus, the real damage is not caused by the security hole itself, but by a syscall it allows. As security holes are more or less inevitable, admins often monitor system calls in order to provide an extra layer of system protection. Normally an application will have access to any system calls it requires. Nothing would prevent the web server from launching a shell and serving it up to any user connecting to the web server. This action is undesirable and the server was not programmed to perform it, but a software bug allows the attacker to trick the application into behaving in a way the authors did not envisage.

Each application needs access to a subset of the syscall interface functionality. A simple web server listens on TCP port 80, responds to HTTP requests, and serves up files from a standard directory structure. The web server is not required to provide any other services, and it particularly does not need to launch an interactive shell or read “/etc/passwd”. The system calls a program uses thus describe its legitimate functions. Systrace makes use of this fact: it monitors the system calls and develops a policy based on these calls. Any application that is controlled by the systrace program can only work within the bounds of the policy.

Policies

A systrace policy comprises a set of rules. Each rule controls a syscall and its parameters, specifying whether or not the call is allowed. The simple rules outlined in the following example allow the “fchdir()” and “fstat()” system calls:

linux-fchdir: permit
linux-fstat: permit

A rule containing the “deny” keyword instead of “permit” would prevent these


system calls. Rules can also apply to a specific parameter of a syscall:

linux-fsread: filename eq "/tmp/foo" then permit
linux-fsread: filename match "/etc/*" then deny[enoent]

Based on these rules, the program is allowed to read the file “/tmp/foo”, but files that match the “/etc/*” will lead to an “ENOENT” error. Instead of running the syscall, systrace informs the application that the file does not exist.

Policy Grammar

The policy grammar is identical for all system calls. Each rule begins with the name of an emulation and the syscall, e.g. “linux-fsread”. This is followed by a list of conditions, and an action (“deny” or “permit”) to be taken by systrace. As Linux does not support syscall emulation, each rule starts with the “linux” string. Systrace also supports OpenBSD and NetBSD, and both these systems can emulate various syscall variants. An optional error code can be appended to the action (this defaults to “EPERM”, operation not permitted). The user can optionally choose to have systrace log specific activities by adding the “log” keyword at the end of the rule. The BNF specification (Backus Naur Form) of the policy syntax is shown in Listing 1. A few predicates are available to restrict the validity of rules. They define additional conditions for the actions and currently apply to users or groups on the system. Predicates are appended to the rule following a comma, for example:

linux-fsread: filename eq "/etc" then deny[eperm], if group != wheel

This rule only restricts users who are not members of the “wheel” group. Arguments are defined for the majority of system calls. For example, “open” expects to be passed the name of the file to be opened. Systrace translates these parameters into a human readable format, displaying them as strings and comparing them with the rules. Systrace offers a range of operators for this comparison (see Table 2).
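Combining an operator from Table 2 with a predicate yields rules like the following sketch; the user name is invented, and the “user” predicate is assumed to parallel the “group” syntax shown above:

```
linux-fsread: filename sub ".ssh" then deny[eperm], if user != marius
```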

Implementation: Setting Up a Base Camp

When implementing systrace functionality, you first need to find an appropriate place to insert control mechanisms. Looking at the path of a syscall reveals several potential candidates. Applications initiate system calls by writing to specific registers and invoking soft interrupts (the “int” instruction on i386 processors). The standard C library (libc) is typically responsible for setting up and initiating syscalls. A large proportion of the C library functionality derives from system calls, for example “open()”, “read()” and “write()”. The system call path is shown in Figure 1. Syscalls can be intercepted and modified in each of these layers. Intercepting system calls in the library layer (libc) would be trivial: You could use the “LD_PRELOAD” environment variable to preload a library on top of libc. The new library would provide all of libc’s system call functionality.
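Such preloading would be set up along these lines, where “./wrapper.so” stands for a hypothetical library that re-implements the libc syscall wrappers:

```
% LD_PRELOAD=./wrapper.so netscape
```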

Listing 1: Systrace Policy Syntax
01 filter = expression "then" action errorcode logcode
02 expression = symbol | "not" expression | "(" expression ")" |
03     expression "and" expression | expression "or" expression
04 symbol = string typeoff "match" cmdstring |
05     string typeoff "eq" cmdstring | string typeoff "neq" cmdstring |
06     string typeoff "sub" cmdstring | string typeoff "nsub" cmdstring |
07     string typeoff "inpath" cmdstring | "true"
08 typeoff = /* empty */ | "[" number "]"
09 action = "permit" | "deny"
10 errorcode = /* empty */ | "[" string "]"
11 logcode = /* empty */ | "log"


Unfortunately, an attacker would easily be able to sidestep this mechanism by making an application invoke the system call itself, instead of using libc. And the method would not work for statically linked programs. Additionally, there are a few systrace functions that cannot be run in userspace.

Gatekeeper – Syscall Gateway

So it would seem that the kernel layer is the natural place to intercept system calls. This is the only place where you can be sure to catch every syscall, no matter where or how it was initiated. Every system call enters the kernel via the syscall gateway, which acts as an interrupt handler for the soft interrupt used by system calls. The gateway reads a register (“eax” on i386 processors) to ascertain the system call number, which is simply an index into the system call table containing function pointers to the individual kernel functions. The gateway parses the value of the syscall number and then initiates the correct function, which performs the task specified by the system call. In order to reject a system call, systrace must intercept it before it is executed. Systrace hooks into the call gateway to do so. Most of systrace’s functionality is implemented in a userspace program. The kernel hook is provided by a device, “/dev/systrace”. The userspace section of systrace reads kernel messages via the device and invokes “ioctl” calls on the device in order to return messages.

Systrace Takes the Helm

An application must be launched by the “systrace” userspace utility to initialize

Table 2: String Matching with Systrace
Operator  Function
match     Is true if the file name matches the glob pattern, as in "fnmatch(3)"
eq        Is true if the syscall argument exactly matches the string following the operator
neq       Logical negation of "eq"
sub       Looks for matches in a substring of the system call argument
nsub      Is the logical negation of "sub"
inpath    Is true if the syscall argument is a subpath of the string following the operator
re        Looks for matches for a regular expression in the syscall argument




Listing 2: Sample lines from the XMMS policy
01 linux-fsread: filename eq "/etc/ld.so.preload" then permit
02 linux-fsread: filename eq "/etc/ld.so.cache" then permit
03 linux-fsread: filename eq "/lib/libpthread.so.0" then permit
04 linux-fsread: filename eq "/usr/X11R6/lib/libSM.so.6" then permit
05 linux-fsread: filename eq "/usr/X11R6/lib/libICE.so.6" then permit
06 linux-fsread: filename eq "/usr/lib/libxmms.so.1" then permit
07 [...]
08 linux-fswrite: filename eq "/dev/dsp" then permit
09 linux-fsread: filename eq "/home/marius/.xmms/menurc" then permit
10 linux-fsread: filename eq "/dev/mixer" then permit
11 linux-fsread: filename eq "/home/marius/.xmms/xmms.m3u" then permit
12 linux-fsread: filename eq "/home/marius" then permit
13 [...]
14 linux-pipe: permit
15 linux-clone: permit
16 linux-rt_sigsuspend: permit
17 linux-poll: permit
18 linux-getppid: permit
19 linux-kill: pidname eq "/usr/bin/xmms" and signame eq "<unknown>: 32" then permit

Figure 1: Userspace processes use libc to initiate system calls. “/bin/cp” calls the “write()” library function, which selects the appropriate syscall via the “eax” register.

Systrace and Monkey.org
Monkey.org is an example of systrace in a production environment. The private UNIX shell provider runs the processes of its approximately 200 users on systrace. The admins have defined policies for every program installed at monkey.org for this purpose. Every user’s login shell is set to “stsh” (systrace shell). “stsh” spawns the user’s real shell as a systraced process, allowing every process a user starts to be monitored. Systrace runs in enforcement mode and thus denies any syscall not envisaged by a policy, and logs any contraventions. The administrators can parse their logs and change their policies accordingly, if required.

systrace. The command opens a session to the kernel portion of systrace by opening the “/dev/systrace” device. It forks a new process, uses an “ioctl” command to tag the process, and uses “execve()” to run the application it needs to monitor. The modified call gateway checks each system call to discover whether or not the process has been tagged. If so, control is passed to the systrace hook. Systrace looks up the system call number in its policy cache to ascertain whether or not a simple rule exists for the call (that is, “permit” or “deny” without any additional arguments). If systrace discovers a simple rule, it performs the action described by the rule. If there is no cached action, systrace turns to its userspace counterpart to ask for a decision.

To do so, the kernel component queues a message which is then forwarded to the userspace systrace component via the “/dev/systrace” device. The message contains the number and any parameters for the system call. The userspace component looks up matches for the syscall and parameters in the policy for the current application and tells the kernel what action to perform if a match is found. If it cannot find an appropriate rule, systrace will interactively prompt the user for a decision. In enforcement mode any actions not defined in the policy will be prevented and logged.

Decisive Users

Systrace uses either the console or a GUI to prompt the user for a decision, displaying the syscall and any parameters in both cases. The user can decide to permit or deny the action, or create a new rule. If the user chooses to “deny”, the error message defined in the “deny” request is returned (this defaults to “EPERM”). Systrace will allow the system call to be dispatched if the user chooses “permit”. As an additional security measure, the kernel kills any processes currently being monitored by systrace if the monitoring process (that is, “systrace”) terminates in an unexpected fashion.


In some cases the userspace systrace component wants to know the return value of the system call, and the kernel component indicates the value after the call has been processed. This is particularly useful for “execve()” calls. In the case of successful system calls, systrace will use the policy assigned to the new program in future.

First Training – then Production

To use systrace on a process, it has to be started with the “systrace” utility. For example, to run Netscape under systrace:

% systrace netscape

Figure 2: In interactive mode systrace warns the user when a program contravenes policy rules. In our example, XMMS has attempted to read the root directory “/”



Figure 3: A systrace policy denies access to “/” and returns an “EACCES” error message. The file dialog in XMMS reacts by displaying the message “Directory unreadable: Permission denied”.

Figure 4: Systrace catches the configure script in a trojaned version of fragroute: The source package has been manipulated by malevolent hackers and attempts to open a TCP connection to port 6667 on IP 216.80.99.202.

% systrace -A xmms

In training mode you should launch everything that is considered normal for the program – in the case of XMMS, playing a few songs.

Taming XMMS

After quitting the XMMS application systrace will store the new rules in “$HOME/.systrace/usr_bin_xmms”. Listing 2 provides a few examples. The policy comprises about 100 entries that mainly refer to file system access for libraries and plug-ins; additionally, the sound device is opened and used. It makes sense to check the generated policy for any unusual parts – just in case you were attacked while going through the training stage. Systrace would classify the attacker’s activities as normal and allow them in future. Using the policy, systrace can now monitor the application: “systrace xmms”. This should allow XMMS to run normally, unless the user tries something not envisaged by the policy. A user might attempt to access the root directory “/” by selecting it from the file selection dialog box in XMMS. This would provoke a systrace error as can be seen in Figure 2. The following policy entry would then prevent this kind of access permanently:

filename eq "/" then deny[eacces]

mode. In this mode, systrace will not prompt the user if it notices abnormal behavior, instead denying the syscall and writing a message to the syslog.
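Assuming the “-a” flag selects enforcement mode, as in the original systrace tool (check your version’s documentation), the production invocation mirrors the training run:

```
% systrace -a xmms
```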

Conclusion

Systrace places applications in a policy jail, thereby restricting the damage a security hole can cause (see Figure 4). Effectively, the policy describes an application’s intended usage of system calls. When systrace is running, it informs the user about system call activity not covered by the policy. The user can then decide whether systrace should permit or deny the call. ■

INFO

The entry also specifies that the syscall should return an “EACCES” message when denying access, informing XMMS that it does not have the required permissions. XMMS then informs the user that it cannot read the directory (see Figure 3). If XMMS contains a bug that allows an attacker to access a user’s private files, systrace would notice this abnormal behavior and warn the user. XMMS does not normally need access to these files, and the policy has no rules on them. Once policies have been defined, systrace can be run in enforcement

[1] Systrace home page: http://www.citi.umich.edu/u/provos/systrace/

THE AUTHORS

This will cause the tool to launch a Netscape process which it tags for monitoring. If a policy already exists for the application, it will simply be applied, if not, systrace will create a new policy. Systrace notifies the user whenever it encounters a system call that does not match an entry in the policy. Systrace also provides a training mode launched by the “-A” flag. In this mode the behavior displayed by the application is defined as normal. Systrace monitors the system calls initiated by the application and generates an appropriate policy from them. Let us look at XMMS:

Marius Aamodt Eriksen is an open source developer and a computer engineering undergraduate student at the University of Michigan in Ann Arbor, Michigan. He also ported systrace to Linux. Niels Provos has developed numerous open source programs, systrace being one of them. He is currently working on his doctorate at the University of Michigan in Ann Arbor, Michigan. His research topics are computer and network security. He is also interested in steganography.




VServer

Virtual Server Contexts in Practical Applications

Divide and Conquer

Multiple Linux systems coexisting peacefully on a single computer: virtual server contexts permit this kind of segregation, thus providing security without the overheads emulators cause. And even root is confined to his own little realm. BY KURT HUWIG

Peers of the Realm

Sandbox security, a concept made popular by Java, also works for server processes on Linux. Admins like to keep their customers' sites apart, particularly when hosting multiple sites with active content (server-side scripting). An encapsulated environment protects normal servers so well that even a successful attack tends merely to affect part of the system. Virtual server contexts (VServer, [1]) offer exactly this kind of protection, by running programs in a sandboxed environment to shield them from the effects of a successful attack. The idea is as old as UNIX itself. The simplest variant is to set up a user account for each service. UNIX access privileges prevent an intruder from manipulating data belonging to other users, or even from gaining read access. Change root ("chroot") environments take this concept a step further by defining a directory that is mapped as the root directory for a process. The process is thus jailed in the directory tree and cannot see any files outside the jail. Neither of these variants, however, prevents an attacker from seeing the processes belonging to other users. And if the attacker manages to compromise root, she can break out of the "chroot" jail, manipulate arbitrary files, and cause arbitrary damage. Problems of this kind can be avoided by using an emulator. The simplest

Emulators as an Alternative

Emulators allow you to replicate the same environment on different hardware types, like SCSI on IDE and vice versa. You can install almost any operating system or application on the virtual PC. As emulators require their own memory area and a virtual hard disk provided by special files on the host system, it is more or less impossible to break out of the virtual system. Even if an attacker manages to escalate her privileges to root, the underlying system

is still inaccessible to her. She can only disrupt normal operations by overloading the CPU or hard disks, or generating excessive network traffic. Systems such as Bochs [5] (LGPL license) that emulate not only the peripheral devices but also the processor allow a Macintosh computer to emulate a PC with an Athlon CPU. However, emulation places a heavy load on the CPU, and virtual execution speeds are normally in the region of a few MHz. This approach thus hardly lends itself to practical applications. In addition, user mode Linux and the emulators we have discussed so far require enormous amounts of RAM, as every virtual machine needs its own kernel, including buffers, cache, and some unused memory. The machines cannot share these resources, and also require separate hard disk resources for a complete Linux system.

variant of this category is the user mode kernel [2]. The additional kernel runs as a virtual machine and acts like an application from the real kernel's viewpoint. Processes running on a user mode kernel cannot access the underlying Linux system. VMware [3], or the LGPL-licensed Plex86 [4], take this concept another step further by emulating the PC's peripheral devices, including the hard disk, NIC, and video adapter.

Virtual server contexts are an elegant compromise. Fundamentally, a virtual server context is a change root jail with important enhancements. Kernel 2.2 and later versions allow root processes to drop some of their capabilities, such as the right to bind a port, to change the time, or to kill an arbitrary process (Table 1 contains additional examples). Without these capabilities, a process is not allowed to perform any of these tasks. The "include/linux/capability.h" file in the kernel sources provides additional information. The combination of reduced capabilities and a change root environment provides a fairly secure subsystem, where an attacker with (reduced) root privileges cannot do too much harm. However, the attacker will still be able to see all the processes on the machine, including processes running outside of her own environment. VServer resolves this issue by introducing a kernel patch that defines so-called contexts. Each



context encapsulates its own processes, thus preventing an attacker from interfering with other processes on the host system. Despite this, all these contexts use the same kernel, the same RAM, the same cache, and the same hard disk. The overheads involved with this technique are quite small, allowing a computer to run far more virtual servers than it could run virtual machines without a considerable hardware upgrade. For users particularly low on resources there is even a script that compares individual contexts and replaces identical files (with the exception of configuration files) with hardlinks. No matter how many contexts are running, identical files will always occupy the same amount of space.
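The hardlink trick behind that script can be verified without any VServer tools. A hypothetical sketch (file names invented, GNU coreutils "stat" assumed): two hardlinked names share one inode, so the data exists only once on disk no matter how many contexts reference it.

```shell
dir=$(mktemp -d)
echo "identical distribution file" > "$dir/lib_context1"
ln "$dir/lib_context1" "$dir/lib_context2"   # hardlink: no second copy on disk

# both names report the same inode number and a link count of 2
stat -c 'inode=%i links=%h' "$dir/lib_context1"
stat -c 'inode=%i links=%h' "$dir/lib_context2"

rm -r "$dir"
```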

A Small Patch for One Server…

As the VServer concept utilizes the underlying functionality of a host system, the kernel patch has an extremely small footprint [6], weighing in at a mere 82 Kbytes uncompressed. Add a few administrative tools, available either as RPMs or as a source archive (".tar.gz"). After patching the kernel and rebooting, the admin user can get on with the job in hand. The admin user needs to create a subdirectory below "/vservers" for each server context and install a Linux distribution there (see "Minimizing Distributions"). The quickest way to do this is to copy a pre-installed Linux system to the subdirectory. Each server context requires its own "/etc/vservers/Servername.conf" configuration file ("/usr/lib/vserver/sample.conf" provides an example), where the admin user will at least need to add the IP address for the virtual server. Listing 2 shows an example.


Table 2 provides an overview of the available options. "vserver servername start" will launch the virtual server context. As the SSH daemon is not running in our example, "vserver servername enter" will provide access to the context (see Figure 1). You can type "exit" to quit the context, just like any shell. Within the context none of the host machine's processes are visible. You do not need to enter the virtual server context to invoke a single command, however; instead you can issue "vserver servername exec command". "vserver servername stop" will stop the context, as the name implies. "vserver-stat" provides information on the contexts, see Figure 2.

Table 1: Capabilities

CAP_NET_RAW: Creates arbitrary IP packets, as used by "ping" for example
CAP_SYS_TIME: Set the time ("date", "netdate", "xntpd")
CAP_NET_BROADCAST: Send broadcast packets (e.g. Samba)
CAP_NET_BIND_SERVICE: Bind ports below 1024
CAP_CHOWN: Change the owner of a file
CAP_KILL: Send signals (such as SIGHUP or SIGKILL) to arbitrary processes
CAP_SYS_CHROOT: Initiate a change root jail
CAP_SYS_BOOT: Reboot the system

Running a number of similar installations on a single host system can save a lot of hard disk space. Hardlinks are used to replace multiple instances of files with a single copy. However, such a file cannot be written to by any of the server contexts, as any changes would apply equally to all contexts. To overcome this problem the "immutable" flag is set for any hardlinks.

Immutable?

The immutable bit prevents users – and even root in the case of contexts – from modifying a file. However, it also prevents you from deleting the file, which makes updating impossible. To resolve this issue VServer introduces the immutable linkage invert bit. If this bit is set together with the immutable flag, you can delete the file (the hardlink), although you still cannot modify it. Luckily, this is the way package managers work; that is, they will delete a file first before installing a new version.

Hardlinks should not be used for configuration files, only for binaries and libraries. VServer provides a tool that can distinguish between the two and takes care of the binaries: "vunify". It queries the package manager (only RPM at present, although work is in progress on Debian "dpkg") for any appropriate files, and automatically replaces any duplicates with hardlinks. One context is used as a reference installation, and the copies in all other contexts are replaced by hardlinks:

/usr/lib/vserver/vunify refserver Server1 Server2 -- ALL

The parameter "ALL" tells the tool to check all the RPMs, but alternatively you can supply a list of RPM names. The results are amazing – a Red Hat installation shrank from 2 Gbytes to a mere 38 Mbytes. "vrpm" can be used to install new packages on multiple servers:

vrpm Server1 Server2 -- -hiv package.rpm

The server name "ALL" ensures that the package is installed on all your servers:

vrpm ALL -- -hiv package.rpm
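Updates do not break the unified copies, because package managers delete a file before installing the new version. A small sketch with throwaway files (no VServer tools required) shows why the reference context keeps its version:

```shell
dir=$(mktemp -d)
echo "version 1" > "$dir/reference"     # file in the reference context
ln "$dir/reference" "$dir/context2"     # unified duplicate in a second context

rm "$dir/context2"                      # the package manager deletes first...
echo "version 2" > "$dir/context2"      # ...then installs a fresh, private file

cat "$dir/reference"                    # prints "version 1" - unaffected
cat "$dir/context2"                     # prints "version 2"
rm -r "$dir"
```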

Figure 1: The admin user can issue the "vserver myserver enter" command to enter the server context. Once there, only the processes belonging to the current context are visible.

Figure 2: The "vserver-stat" command provides an overview of the status of all the VServers running on a host machine. The rightmost column gives the VServer name.




It makes sense to re-launch "vunify" at this point to remove any duplicate packages installed in your server contexts. The processes running in the individual server contexts cannot see each other. Although this is exactly what the doctor ordered, it does have the disadvantage that the admin user cannot see every active process on the host machine. Context 1 has a special significance here, as it can see the processes of every other context. You can use this context to display a list of all the processes running in all the contexts on your host machine. To simplify administrative tasks the "vtop", "vps", "vpstree", and "vkill" programs are available; they perform the same tasks as their non-v relatives, provided they are running in context 1. "vps" additionally displays the name of a context, where "MAIN" represents the host context, 0, and "ALL_PROCS" refers to context 1. The VServer patch adds three system calls used for context management to the kernel. The "vserver" command initiates these syscalls to create a new context, to assign an IP address to the context, and to restrict its capabilities. Additionally, the tool calls "chroot" into the server

directory, and launches "init". You can also issue these commands individually: "chcontext" creates a new context or changes to an existing one, "chbind" binds a process to specified IP addresses, and "reducecap" restricts root privileges. Normal users can only create new contexts with "chcontext"; only root is allowed to change to an existing context. It is easy to demonstrate that processes running in different contexts cannot see each other using "/usr/sbin/chcontext bash". "ps aux" in the new shell lists only three processes: "init", "bash", and "ps". New processes, such as "xterm", are only visible in this context, and can only be killed from within the context. Despite this segregation of processes, root still has far-reaching privileges. "reducecap --secure bash" removes root's global privileges, and "reducecap --show" displays the privileges that still exist.

Traps

A server context may look like a real server, but there are a few peculiarities you should be aware of. All the server contexts run on the same kernel and thus on the same TCP/IP stack. Thus, each context can bind only to the IP

Listing 1: Installing Packages

#!/bin/bash
TARGETDIR=/vserver/suse
TMPDIR=/tmp/rpmdir
PKGLIST=/tmp/paketliste
SRCDIR=/media/dvd
ARCH=i586

rm -rf $TMPDIR
mkdir -p $TMPDIR
cd $TMPDIR

for pkg in `cat $PKGLIST` ; do
    for rpm in $SRCDIR/suse/${ARCH}/${pkg}-[0-9]*.rpm ; do
        test -f ${rpm} && ln -s ${rpm} .
    done
    for rpm in $SRCDIR/suse/noarch/${pkg}-[0-9]*.rpm ; do
        test -f ${rpm} && ln -s ${rpm} .
    done
done

test -d $TARGETDIR || mkdir -p $TARGETDIR
mkdir -p $TARGETDIR/etc
mkdir -p $TARGETDIR/var/lib/rpm
cp /etc/passwd /etc/shadow /etc/group* $TARGETDIR/etc
rpm --root=$TARGETDIR --initdb
rpm --root=$TARGETDIR -hiv *.rpm


address assigned to it, and definitely not to localhost. Attempts to access 127.0.0.1 will fail. But most programs are quite happy to accept the IP address of the server context as the “localhost” entry in “/etc/hosts”. If you run multiple server contexts on a single-homed host, a dial up computer for example, the guest systems will not be able to connect to the internet

"Minimizing Distributions"

A VServer context normally does not require a full Linux distribution with all the gimmicks that implies. A minimal installation is normally perfectly okay for use as a server. Your best option is to install a minimal version of your favorite distribution in order to provide an environment you are familiar with.

Debian: Straight to the Target with "debootstrap"
The "debootstrap" [7] program is required to install Debian in a directory. The program is available as Debian and RPM packages. Those of you who don't mind downloading a 22 Mbyte file can install the program directly via the internet by invoking "debootstrap woody /vserver/servername http://ftp.de.debian.org/debian". If you have the CDs, the command syntax is: "debootstrap woody /vserver/servername file:///cdrom/debian".

Red Hat and Mandrake
The VServer package provides scripts for Red Hat 7.2, 7.3, and 8.0, and for Mandrake Linux 8.2. Mount the CD-ROM ("/mnt/cdrom") to get started. The "/usr/lib/vserver/installrh8.0 servername" command will install Red Hat 8.0, for example.

Manual Labor Required for SuSE
The late YaST 1 used to be able to install SuSE Linux in an arbitrary directory. Unfortunately, YaST 2 is no longer capable of doing this, so you will have to install the packages manually. The "suse/setup/descr/Minimal.sel" file on the first installation CD is the place to start. The file contains a list of the RPM packages required for a minimal installation. You need the RPMs between the "+Ins:" and "-Ins:" tags in this list, and also "yast2-trans-en_US" for English language support. The next step is to run a script (see Listing 1) to install the packages. The script reads the "$PKGLIST" file, creates a list of symlinks to the RPMs, and then installs all the packages in a single sweep. This is essential to automatically resolve the RPM dependencies between the packages. If you want to go to all that trouble, you can instead sort the packages in the right order, and then use a "for" loop to install them.
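Extracting the package names between the tags is easily scripted. An illustrative sketch (the selection file below is invented) that uses "sed" to print only the lines between "+Ins:" and "-Ins:":

```shell
sel=$(mktemp)
cat > "$sel" <<'EOF'
=Sel: Minimal
+Ins:
aaa_base
bash
glibc
-Ins:
EOF

# print the lines between the tags, excluding the tag lines themselves
sed -n '/^+Ins:/,/^-Ins:/{/^+Ins:/d;/^-Ins:/d;p}' "$sel"   # prints: aaa_base bash glibc

rm "$sel"
```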



Listing 2: Sample Configuration

# /etc/vserver/myserver.conf
IPROOT=192.168.0.1
IPROOTDEV=eth0
S_HOSTNAME=myserver.domain.co.uk
S_FLAGS="lock nproc"
ULIMIT="-H -u 1000"
S_CAPS="CAP_NET_RAW"

directly, as NAT (Network Address Translation) does not work locally. Outgoing packets will contain the IP address of the server context (which is invalid outside of the host system), meaning that replies will not reach the context. To avoid this, the host system must provide proxies (Squid, Bind…) and each context must use them. VServer cannot use file system quotas, although each context has its own "/etc/passwd", which it uses to manage its own range of user IDs. However, the file system sees the IDs of all the contexts on your host machine when calculating quotas. If duplicate IDs occur, the file system will regard the files in each context as belonging to the same user. If a user in one context exceeds the quota, any other users with the same ID in every other context will be affected. VServer's author is looking into a patch that remaps user IDs on the fly for every

context to provide globally unique IDs. Virtual server contexts provide new methods of server management. Web servers with root access are one obvious application. If customers want to install their own scripts, databases, services, or similar, they can do so within their own contexts. Misconfigurations or successful compromises will be restricted to a single context and will not endanger the other servers.

VServer Provides Enhanced Security

It is easier to discover hostile activity within a server context. If an attacker installed a rootkit on a normal server, it might be difficult to discover, as the system tools required to detect the rootkit would presumably have been replaced by tools that conceal the presence of the kit. Virtual server contexts make it easy to discover rootkits, provided the host system has not been compromised. The contexts are stored in subdirectories of the host system and can be inspected there; you could use Tripwire to scan for modified files and notify you of any changes. VServer also simplifies backing up multiple servers; instead of backing up multiple hosts, the backup program simply backs up the "/vserver" directories. If a client forgets their root password, the host system admin can

Table 2: Options

easily reset it by editing the "passwd" file in the subdirectory belonging to the client's server context. Within certain limits, VServers are also useful for server consolidation. You can replace multiple stand-alone servers with a single server, provided of course they all run on Linux. The distribution is unimportant in this case; in fact, it is quite simple to run Debian, Red Hat, and SuSE simultaneously on a single host. But the inverse case also applies; should circumstances dictate this course of action, you can export virtual server contexts to stand-alone machines.
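The password reset mentioned above can be sketched as follows; the directory is a stand-in for a real "/vservers" subdirectory, the shadow entry is invented, and "openssl passwd" (assumed to be available, GNU "sed -i" likewise) generates the replacement hash:

```shell
ctx=$(mktemp -d)                        # stands in for /vservers/myserver
mkdir -p "$ctx/etc"
echo 'root:FORGOTTENHASH:12345:0:99999:7:::' > "$ctx/etc/shadow"

hash=$(openssl passwd -1 newsecret)     # MD5 crypt hash for the new password
sed -i "s|^root:[^:]*:|root:$hash:|" "$ctx/etc/shadow"

cut -d: -f1-2 "$ctx/etc/shadow"         # the root entry now carries the new hash
rm -r "$ctx"
```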

Conclusion

VServer allows a host machine to assume the role of a virtual server farm. You can run multiple parallel Linux installations on a single machine without the overheads involved with emulators such as VMware. If you want to separate your mail, web, and ftp servers to prevent a malevolent hacker from compromising all your services with a single exploit, this tool can help you avoid investing in additional hardware. And VServers are extremely practical with respect to administrative tasks, allowing you to install new software on the machines in next to no time. ■

INFO

[1] VServer: http://www.solucorp.qc.ca/miscprj/s_context.hc
[2] User Mode Linux: http://user-mode-linux.sourceforge.net/

IPROOT: IP address of the virtual server. Use a space character to separate multiple addresses. The name of the network adapter can optionally be supplied, colon-separated.
IPROOTDEV: The network adapter that should use the IP.
IPROOTMASK: Network mask for the IP address. This defaults to the network mask of the network adapter.
IPROOTBCAST: Broadcast address for the IP address. This defaults to the broadcast address of the network adapter.
ONBOOT: Specifies whether the init script should launch the server automatically on booting. "yes" and "no" are valid options.
S_CAPS: The capabilities that should be assigned to the context.
S_CONTEXT: Context number. This defaults to a new number.

[3] VMware: http://www.vmware.com
[4] Plex86: http://savannah.nongnu.org/projects/plex86
[5] Bochs: http://bochs.sourceforge.net/

S_DOMAINNAME: NIS domain name.
S_HOSTNAME: Host name.
S_NICE: Minimum nice level for all processes in this context.
S_FLAGS: Miscellaneous flags, separated by space characters. "lock" prevents the context from creating a new context. "sched" causes the scheduler to treat all the processes in this context as a single process and thus avoids overloading the CPU with too many processes. "nproc" applies the ulimit value for the number of user processes globally to this context. "private" prevents other contexts from changing to this context – including the host context. "fakeinit" spoofs process ID 1 for the command, allowing "/sbin/init" to be called.
ULIMIT: The ulimit parameters for the context. The "S_FLAGS" entry "nproc" applies the process limit to the whole context.

[6] VServer sources: ftp://ftp.solucorp.qc.ca/pub/vserver
[7] Debian bootstrap: http://people.debian.org/~blade/install/debootstrap/

THE AUTHOR


Kurt Huwig is the Chairman of the iKu Systemhaus AG in Saarbrücken, Germany, and has been installing Linux servers since 1996. Kurt spends his free time authoring for the Open Antivirus Project, a GPL licensed virus scanner.



COVER STORY

RSBAC

Architecture of Rule Set Based Access Control (RSBAC)

Security Architecture

Linux Privileges – Not Enough

THE AUTHOR

The owner of a file can do what she pleases with that file; this is commonly referred to as DAC, or discretionary access control. If an attacker has compromised a process, the attacker's activities assume the privileges of the


Amon Ott is a self-employed computer scientist and the author of the RSBAC system. His mainstay is bespoke development and Linux firewalls, preferably with RSBAC. He is also working on his doctorate, which he hopes to complete shortly.


Integrating multiple security models simultaneously in the kernel and detailed logging of any access: the free Rule Set Based Access Control (RSBAC) security system offers customized protection for a wide range of requirements. BY AMON OTT


Linux security holes typically occur in server programs and s-bit tools. The best approach would be to avoid mistakes and update programs immediately when a bug occurs. As this is not always possible, the next best thing is to restrict potential damage, and this is where access control systems such as RSBAC come into play [1]. If an attacker exploits a security hole in a server or an s-bit tool, access to the system should be restricted to a minimum. In this case, even a successful compromise will cause only limited damage; the protective mechanisms can be implemented directly in the operating system kernel. The standard Linux kernel restricts access to various resources such as files, directories, or system configurations, but unfortunately the standard mechanisms are fraught with weaknesses:
• Poor granularity
• Discretionary access control
• An all-powerful root user
Linux access controls only offer the standard privileges read, write, and execute; additionally, they only allow distinct privileges to be defined for the owner of a file, the members of a group, and all others. Restrictions typically do not apply to the root user. The granularity of these privileges is thus insufficient for many tasks.

account used to run that process. Thus an attacker, if fortunate, can manipulate any files belonging to the compromised account with the privileges gained. The all-powerful system administrator, root, is the most dangerous of the issues mentioned so far. Many activities are restricted to the root user, from administrative tasks to simple actions. Thus most services are originally launched with root privileges and have superuser access to the whole system, without ever actually needing these extensive privileges. What is worse is the fact that many services need to run with root privileges (or to be able to assume root privileges at any time) to allow them to change to any user account. The POSIX capabilities introduced to the Linux


kernel a while back, allow a program launched by root to drop some privileges, but this is left up to the program itself.

Architectural Requirements

The main aim for the developers of RSBAC was to produce a flexible and effective access control system as an add-on for existing Linux mechanisms. To achieve this goal, the system must fulfill a number of requirements: It must provide the underlying platform to allow the developer to program access control models quickly and simply. This permits a clear distinction between components that make decisions and components that enforce them. The enforcement components act independently of the components that



Figure 1: A subject's access to an object is monitored by the Access Enforcement Facility (AEF). The Access Decision Facility (ADF) decides whether to permit or deny access.

make decisions. New decision models can use the underlying infrastructure. A large number of tried and trusted security models exist for various tasks, and combinations of these models sometimes make sense, depending on the situation. The underlying framework should thus support multiple security models simultaneously and independently, allowing the administrator to choose the most suitable model for the current assignment. No matter what model is in use, activities and any decisions taken need to be logged, and the logs must be protected from attempted manipulation. The original RSBAC system design fulfills nearly all of these criteria. Over the course of the last five years the range of functions and monitored objects has increased dramatically, allowing RSBAC to monitor networks, for example. The main elements of the original design have been tested during this time and the developers see no reason to revise them.

Inner Values

We need to explain a few terms in order to describe the internal architecture, so bear with us. From the access control perspective, a subject attempts to invoke a specific type of access to an object. On a Linux system, the following occurs: a process (the subject) attempts to read (access type) a file (object). The various object types are categorized by target type on an RSBAC system (see Table 1 for an overview). RSBAC also distinguishes a large number

of access types (request types) that are applied to the object types. Table 2 lists a selection of access types, some of which are used in our practical example later. The entire list is available in the documentation at [1]. The basic building blocks of the RSBAC system are shown in Figure 1. The enforcement component, the Access Enforcement Facility (AEF), mainly comprises enhancements of existing system functions. These enhancements require the decision-making element, the Access Decision Facility (ADF), to reach a decision before any access – and thus any possible compromise – is permitted. If the ADF refuses access, the AEF will return an "access denied" error to the subject. The decision facility and the data structures used are mostly independent of the kernel version. Only the AEF requires one or two changes to existing kernel functions; this component was produced by enhancing existing syscalls.
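The ADF's restrictive way of combining module verdicts – a single denial vetoes the request – can be pictured as a tiny veto function. This is an illustrative shell sketch, not RSBAC kernel code:

```shell
# collate the verdicts of all active decision modules: DENY wins,
# NOT_DEFINED merely abstains, and only then is the access granted
collate() {
    for verdict in "$@"; do
        if [ "$verdict" = "DENY" ]; then
            echo "DENY"
            return 0
        fi
    done
    echo "GRANT"
}

collate GRANT NOT_DEFINED GRANT   # prints "GRANT"
collate GRANT DENY GRANT          # prints "DENY"
```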

Components co-operating

Access control involves a number of steps. The subject (the process) calls a system function to request access to an object (1). An extension of this function (the AEF) reads some system values, such as the process ID and the type and ID of the target object (2), before calling the decision facility, the ADF, and handing over the information it has collected along with the type of access (3). The request is initially addressed to the central

decision facility of the ADF. This function requests individual decisions from all active decision modules. The modules read attributes from the data structures (4) and reach a decision: permitted, not defined, or denied. The central function collates the individual decisions and returns a collective decision (5). The ADF is restrictive in this respect: if a single module returns a negative reply, the ADF will deny access. Actions are only permitted if all the modules agree that they should be permitted. In the case of a negative decision, the system call is halted and returns an access error to the process (6). In the case of a positive decision, the AEF proceeds with the system call itself. If the call is successful, the AEF sends a message to this effect to the ADF (7). The central messaging function of the ADF is responsible for passing the message on to the appropriate module functions. The module functions retrieve the current attributes from the data structures (8), update them (9), and confirm that the call has been completed correctly (10). If a new object was created by the system call, the message from the AEF to the ADF will contain the type and ID of the new object. The decision modules then create the attributes for the object. After confirming, the system function passes the requested

Table 1: Target Types

FILE: Also includes special device files and UNIX network files if they are handled as files
DIR: Directory
FIFO: Pipe with a name entry in the file system
SYMLINK: Symbolic link
IPC: Inter-process communication object on a System V basis
SCD: System Control Data – global system settings and objects such as host names or the time
USER: User object; mainly serves the purpose of managing attribute assignments
PROCESS: Process object for receiving signals or reading process statuses
NETDEV: Network device
NETTEMP: Network template
NETOBJ: Network object – normally sockets




Figure 2: Admins can use network templates to assign access privileges to network address and port ranges. In our example, the ports on all the hosts in the IP network 192.168.200.0/24 have been selected.

Figure 3: The main administration menu provides the RSBAC user with a straightforward configuration interface, with simple control over all of the decision module functions.

data (11) and control back to the invoking process.

A Practical Example

A practical example will aid our understanding of the theoretical path. When a process wants to open a file for read and write access, it uses "sys_open()" with appropriate parameters. The parameters might specify that "sys_open()" should create the file if it does not already exist, or possibly truncate the file to zero length if it does exist. If the ADF rejects one of the following decisive requests, the system call terminates and issues an "access denied" message. The first thing "sys_open()" needs to do is resolve the filename to discover the inode. An auxiliary function, whose RSBAC extension sends a "SEARCH" request to the ADF for every directory touched, takes care of this. If the file does not exist, the extension of the open function generates a "CREATE" request for the target directory, creates the file, and informs the decision facility of the new object. Otherwise a "TRUNCATE" request is issued for the file; the open function truncates the file to zero length and reports the success of the operation. After this preparatory work, the syscall generates a "READ_WRITE_OPEN" request and opens the file. The ADF learns that the file has been opened and updates the file's attributes. This provides the process with a file descriptor so that the process can continue running.
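The resulting chain of requests for an existing file can be spelled out. The path in this sketch is invented; the script merely prints the requests described above, one "SEARCH" per directory followed by the "TRUNCATE" and "READ_WRITE_OPEN" requests:

```shell
path=/home/user/data            # hypothetical target of sys_open()
dir=${path%/*}

# one SEARCH request per directory touched during name resolution
prefix=
for comp in $(echo "$dir" | tr '/' ' '); do
    prefix="$prefix/$comp"
    echo "SEARCH $prefix"
done

echo "TRUNCATE $path"           # the existing file is cut to zero length
echo "READ_WRITE_OPEN $path"    # finally the file is opened
```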

Data Storage Structures

As already mentioned, so-called attributes, which are assigned to every user, process, and object, are the basis for each access decision. Attribute management is the task of the general data storage facilities. Additional model-specific data, such as groups or access matrices, that cannot be organized within the generic structures also exists; model-specific structures provide storage facilities in this case. The data storage component takes care of the thankless task of list management, thus reducing the load on the decision components; this involves disk storage, SMP locking (for multiprocessor systems), and similar tasks. It stores the majority of this data in a generic list system that allows any number of one- or two-tiered lists (lists of sublists), with indices and data fields of any size, to be easily registered. The decision facilities register their lists on RSBAC initialization or when a file system is mounted. Only a few of the lists are implemented differently due to specific conditions.

Persistent Data

If necessary, generic lists can provide persistent data storage; that is, the data stored in the lists will survive a reboot or deregistration. To achieve this, the

Table 2: Request Types

BIND (NETDEV, NETOBJ): Bind network addresses
CLOSE (FILE, DIR, FIFO, DEV, IPC, NETOBJ): Close a file descriptor
CONNECT (NETOBJ): Open a connection to a remote node
CREATE (DIR (where), IPC, NETTEMP, NETOBJ): Create an object
DELETE (FILE, DIR, FIFO, IPC, NETTEMP): Delete an object
EXECUTE (FILE): Execute a file
NET_SHUTDOWN (NETOBJ): Close a connection channel
READ (DIR, SYMLINK, IPC, NETTEMP; optionally FILE, FIFO, DEV, NETOBJ): Read from an object
READ_WRITE_OPEN (FILE, FIFO, DEV, IPC): Open for reading and writing
RECEIVE (NETOBJ): Receive data from a remote node
SEARCH (DIR, SYMLINK): Name resolution
SEND (NETOBJ): Send data to a remote node
TRUNCATE (FILE): Change the length of a file

Figure 4: RSBAC needs to be enabled in the Linux kernel. A configuration menu is available for basic settings

38

February 2003

www.linux-magazine.com


RSBAC

“rsbacd” kernel daemon periodically saves any lists tagged as changed to special protected directories on the hard disk, from where the data storage facility reads them when the list is re-registered. A registration parameter specifies the partition these files are stored on, to allow targeted binding to any of the file system objects. Modules can optionally supply a default value when registering a list. If a requested list element does not exist, the data storage facility supplies this default value instead. For optimization purposes, any persistent elements containing default values are deleted; if a value changes, the data storage facility reinstates the element. In the case of two-tiered lists, sublists are generated or deleted as required. This procedure keeps the length of the lists to a minimum and thus reduces access times. Every list element is assigned a time limit when it is created or updated, and is removed once this period expires. Some decision facilities use this characteristic to generate temporary entries or privileges; a time limit of “0” marks a value as persistent. Generic lists are implemented as doubly linked, sorted lists that allow you to register descriptive and comparison functions for optimized access. If no such function has been registered, a simple “memcmp()” based memory comparison is used. A more detailed description of the list management interfaces and parameters is provided at [2].
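Two details of this list layer lend themselves to a short sketch: the fallback to “memcmp()” when no comparison function is registered, and the time-limit rule under which 0 means “never expires”. The C fragment below is purely illustrative; none of the names are taken from the actual RSBAC sources.

```c
#include <string.h>

/* Optional per-list comparator; if a module registers none, the list
 * code falls back to a plain byte-wise memcmp(). */
typedef int (*rsbac_cmp)(const void *a, const void *b, size_t size);

int list_compare(rsbac_cmp registered, const void *a,
                 const void *b, size_t size)
{
    return registered ? registered(a, b, size) : memcmp(a, b, size);
}

/* Example registered comparator: numeric order for unsigned indices,
 * which raw byte order (memcmp) would get wrong on little-endian
 * machines. */
int cmp_uint(const void *a, const void *b, size_t size)
{
    (void)size;
    unsigned x = *(const unsigned *)a, y = *(const unsigned *)b;
    return (x > y) - (x < y);
}

/* Time-limit rule: every element carries an expiry stamp; 0 marks it
 * as persistent, any other value removes it once the time has passed. */
int element_expired(unsigned long expires, unsigned long now)
{
    return expires != 0 && expires <= now;
}
```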

Rule Templates Network connections are fairly ephemeral in most cases; data packets are often transmitted individually and independently. That makes it particularly difficult to assign attributes to them, as administrative overheads would be punitive. RSBAC provides network templates for this task. They describe multiple network end nodes based on various criteria, such as the protocol family, connection type, network protocol or port number. Figure 2 shows an example of how they are defined.


COVER STORY

RSBAC does not store the attributes separately for each network end node, or for each connection, but collectively in a template. The end nodes (that is, the source or target of a data transmission) inherit their values from the most suitable template, that is, the template with the lowest descriptor. This allows the ADF to reach a decision on “CONNECT” type access by reference to the template attributes of the source or target address, simply by looking up the template.
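This “lowest matching descriptor wins” lookup can be sketched as a simple linear search. The structure fields below (protocol family, connection type, port range) follow the criteria named in the text, but the field names and the example attribute are invented for this sketch.

```c
#include <limits.h>

/* Illustrative network template: matches end nodes by address family,
 * connection type and port range. */
struct net_template {
    unsigned desc;      /* descriptor; the lowest matching one wins */
    int family;         /* e.g. AF_INET, or -1 for "any" */
    int sock_type;      /* e.g. SOCK_STREAM, or -1 for "any" */
    int port_min, port_max;
    int sec_level;      /* example of an attribute stored per template */
};

static int tmpl_matches(const struct net_template *t,
                        int family, int sock_type, int port)
{
    return (t->family == -1 || t->family == family)
        && (t->sock_type == -1 || t->sock_type == sock_type)
        && port >= t->port_min && port <= t->port_max;
}

/* An end node inherits its attributes from the matching template with
 * the lowest descriptor. Returns its index, or -1 if none matches. */
int best_template(const struct net_template *t, int n,
                  int family, int sock_type, int port)
{
    int best = -1;
    unsigned best_desc = UINT_MAX;
    for (int i = 0; i < n; i++)
        if (tmpl_matches(&t[i], family, sock_type, port)
            && t[i].desc < best_desc) {
            best = i;
            best_desc = t[i].desc;
        }
    return best;
}
```

A broad catch-all template with a high descriptor can thus coexist with narrow, specific templates (a single port, say) that take precedence for the connections they describe.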

Administration
Templates allow you to specify that a specific user should only be allowed access to the local network via the Internet protocol TCP, or that a browser can only access the HTTP proxy port on your firewall. There is no need to configure each individual connection. As RSBAC stores all of these settings in the kernel or in protected files, administrative tasks mean initiating system calls or accessing the “/proc” file system. This allows the kernel to



designate users who are permitted to change specific settings. And this is RSBAC’s solution to the major issue of the all-powerful root user: if the configuration were stored in normal files, users with write access to those files would automatically have administrative privileges. RSBAC instead allows multiple administrators to have different privileges.

Self-Control
With only a few exceptions, each decision module is responsible for its own attributes. Models with scientific backgrounds, such as RC and ACL (see the box “Decision Modules in RSBAC”), in particular support the delegation of administrative tasks to multiple users. Root still has special rights in the default configuration of most modules, but apart from that is a normal user like everyone else. A support module called “AUTH” was introduced to help out with the critical issue of user ID management. “AUTH” allows you to define the user IDs that specific programs and processes can assume. A process can only assume an ID that “AUTH” allows it; any others are prohibited. A number of RSBAC administration tools are available. They facilitate many administrative tasks and provide user interfaces for the RSBAC system calls. Menus provide for easier use – see Figure 3 for an example of the main “rsbac_menu” menu. RSBAC is probably the oldest and – judged by its codebase, the most extensive – free access control

Decision Modules in RSBAC
The current stable RSBAC version 1.2.1 comprises the following decision modules and rules, some of which are used to implement more complex security models.
MAC – Mandatory Access Control: The Bell-LaPadula model.
FC – Functional Control: This simple role model allows access to security information for security officers only, and allows only administrators to access system information.
SIM – Security Information Modification: Only security officers are allowed to modify data tagged as security information.
PM – Privacy Model: A data protection model devised by Simone Fischer-Hübner to implement European data protection guidelines.
MS – Malware Scan: Checks files for malevolent software during read and execute access. Version 1.2.1 contains only a scanner prototype; the pre-release version 1.2.2-pre1 uses professional virus protection software by F-Prot. Support for additional scanners is planned.
FF – File Flags: Global attributes that apply to files and directories, for example “execute_only”, “no_execute”, “read_only”, and “append_only”.
RC – Role Compatibility: This powerful role model was designed specifically with Linux servers in mind. It defines roles for users and programs, and types for all kinds of objects. Access privileges for each type can be specified for every role. The model also allows a schema for strict delegation of administrative tasks to multiple roles, and defines time limits for access and administrative privileges.
AUTH – Authentication Enforcement: This module governs “CHANGE_OWNER” requests for processes and thus any “setuid()” calls. Processes and programs can only assume user IDs specifically allowed to them.
ACL – Access Control Lists: An access control list is assigned to each object, defining the permissible access types for various subjects. Subjects are defined as user IDs, RC roles, and ACL groups. If an object does not have an entry for a specific subject, it inherits the rights assigned to a superordinate object, for example a directory. An inherited rights mask is available to filter inheritance, allowing any of the inherited rights to be filtered out for all subjects. The ACL model also defines superordinate default ACLs, individual group management for every user, and time limits for any rights and group memberships assigned.
CAP – Linux Capabilities: Allows you to assign minimum and maximum Linux capabilities (delegated root privileges) to any user and program. Thus server programs can run under normal user accounts, or root programs can be executed with restricted privileges.
JAIL – Process Jails: This module introduces a new system call, “rsbac_jail()”, which is fundamentally an extension of the FreeBSD jail. Programs launched within the jail are captured in a chroot environment with restricted administrative and network privileges.
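The ACL inheritance rule described above (no entry for the subject: inherit from the superordinate object, filtered through the inherited rights mask) can be sketched as follows. The rights bits and structure layout are illustrative only, not the RSBAC ACL interface.

```c
/* Illustrative rights bits */
#define R_READ  0x1u
#define R_WRITE 0x2u
#define R_EXEC  0x4u

/* One object's ACL state with respect to a single subject */
struct acl_obj {
    const struct acl_obj *parent; /* superordinate object, e.g. the dir */
    int has_entry;                /* does this object list the subject? */
    unsigned rights;              /* rights if has_entry is set */
    unsigned inherit_mask;        /* filters rights inherited from parent */
};

/* Effective rights for the subject: an explicit entry wins; otherwise
 * the parent's effective rights are inherited through the mask. */
unsigned effective_rights(const struct acl_obj *o)
{
    if (!o)
        return 0;
    if (o->has_entry)
        return o->rights;
    return effective_rights(o->parent) & o->inherit_mask;
}
```

For example, a directory granting read and write to a subject, combined with a file whose inherited rights mask only lets read through, yields read-only effective access on the file.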


system for the Linux kernel. Its clear and modular structure ensured that the authors could keep track of development activities. RSBAC has become quite popular in Europe, where the system is in widespread use. Conservative estimates suggest that RSBAC runs on several hundred production systems. ■

Installation
Before you can install RSBAC, you first need to download the sources from the home page. They comprise three parts: a tar archive containing modules that are independent of the kernel version, a version dependent kernel patch, and a tar archive with the administration tools. The RSBAC patch mainly comprises the initialization calls and adds system calls for AEF tasks. As an alternative, you can also download pre-patched kernel sources as a bzip2 tar archive. The kernels supplied by most distributions have been through wide ranging modifications, and this often leads to issues. In this case, you may have to resort to the original kernel, available from ftp://ftp.kernel.org/pub/linux/kernels or a mirror site. After expanding the tar archive in the main directory of your kernel sources and applying the patches, follow the normal procedure to configure, compile and install the kernel. The additional “Rule Set Based Access Control” menu shown in Figure 4 comprises a number of submenus with a wide range of options, with help texts for each option. The default values are fine for most applications. When you reboot, the “rsbac_auth_enable_login” kernel parameter allows the login program to switch to any user ID in order to permit users to log on. The “rsbac_softmode” parameter is useful for initial tests, as it merely logs decisions without enforcing them. After successfully launching the system, you can go on to unpack the administration tools and follow the usual “./configure && make && make install” steps to compile and install them. If the RSBAC kernel sources are not in “/usr/src/linux”, you might like to try the configure parameter “--with-kerneldir”.

INFO [1] RSBAC home page: http://www.rsbac.org [2] Interface to the generic list system: http://www.rsbac.org/lists.htm


REVIEWS

Graphical Games

Maybe you are looking to immerse yourself in a different reality; your computer can be the very tool to help you achieve that goal.

BillardGL
One very absorbing, and totally frivolous, use of time is BillardGL [1], a Free Software pool simulator. Now this might just be some frivolous fun for you, but it is actually being developed as part of the coursework of students taking Computer Graphics at the University of Freiburg in Germany. And the results of that coursework are most spectacular. The game’s web site has RPM and .deb files for download – and if you are very lucky, you may even find them on this month’s subscription CD – as well as binaries for the Windows and Mac OS X platforms. The binaries are not very demanding on external libraries, and should work with almost every distribution, as long as you have OpenGL loaded. This program needs to be able to make the most of your 3D graphics card; unfortunately, if you don’t have 3D graphics on your machine, you won’t be playing this game. While the demands on the libraries are minimal, the demands made on the hardware are not. As a minimum, you will need a Celeron 300 processor and 64MB of RAM, with a graphics card comparable to an nVidia TNT. To get the most out of the game you will need a Pentium III and 128MB of RAM to go with your GeForce, or comparable, graphics card. Once installed, either by using your favourite graphical package manager or from the command line with

rpm -i BillardGL-1.75-6.i386.rpm

all you need to do is start to play, either from a menu, if you can figure out where it has been hidden, or from the command line by name:

BillardGL

From its default starting point you will get the opportunity to enter into a tutorial mode, which is very handy, because you will need the chance to



Playing around
It is important, occasionally, to take some time for yourself, to relax, to enjoy other pursuits. It is those very pursuits that we are going to explore because, now that you have all you need to know about making your networks safe and secure from prying eyes, all that’s left to do is play a few games. BY COLIN MURPHY

Figure 1: Once the shot has been played you then get to see how the balls lie

familiarise yourself with the controls, almost all of which operate from the mouse or trackball. During play there are three modes, and possibly a fourth if you are just starting the game or playing a shot after a foul. This fourth mode allows you to place the cue ball where you see fit, within the rules of the game, by using the cursor keys. Once the cue ball is fixed in position, you then have the chance to study how the balls lie. The function keys F1 – F8 take you to pre-defined views of the table. Pressing either mouse button while moving the mouse allows you to rotate your view of the table or control the angle of elevation relative to the table,


while the cursor keys move your point of view relative to the plane of the table. Yes, I agree, it sounds horrendously complex, and, to be honest, for the first few minutes it really is, but suddenly something clicks. Hitting the middle button now takes you to your ‘aim’ view, and control of the view becomes relative to the cue ball. Pressing and holding the middle button adds power to the shot, and when the button is released, the shot is played. The game designers have included a tutorial section to walk you through these initial stages. What is impressive is the rendering of the table and the balls in real time as



you shoot them about, even occasionally sinking one. Attention has been paid to the rules of the games you can play, which, at the moment, are only 8 and 9 ball pool; so, for example, at the initial break four balls need to hit cushions or a foul is awarded. The game is glorious to look at and most playable, even though it is in an unfinished form. There is no automated opponent and there is no sound, even though the web site gives a good impression of the lengths the developers have gone to in capturing some audio for later inclusion in the game. I am sure there will also be a demand for other types of games to be added, billiards for instance. So a nice, gentle stroll around a pool table might not be everyone’s idea of a fun time, and I am sure that those of you with blisteringly fast 3D graphics cards will all be familiar with first person perspective shoot’em up type games, like Unreal Tournament 2003, however wonderfully eye catching BillardGL is.

Pachi
Instead, we will settle back for some 2D fun and Pachi el Marciano [2]. If you are familiar with games like Manic Miner, then you have Pachi, but with the addition of 20 years of graphics art skills. The charms here are the characters, which, before they make it to screen, are hand drawn, scanned in and then colorized to give the game its unique and unusual graphic effect. It is available for download in binary form, for Windows and Linux, especially for those who just want to get on and play, and as source code as well, for those who want to help, and people do. Due to inexperience, the author was unable to produce a generic Makefile that people could use to compile with the usual ./configure; make; make install incantation that we are all familiar with, until someone offered to help out.

Figure 2: Pachi the Martian begins another adventure

Trackballs
Trackballs [3] is another of those SDL based games. It takes its inspiration from the arcade classic “Marble Madness”. In the game you control a small blue marble which you have to guide around a maze in a limited time. Just imagine the gravity of the situation! Each maze also has a collection of obstacles ranging from sharp pencils to pools of acid. You can make the marble jump ramps if you follow the correct course and build up enough momentum. Hitting the obstacles, running out of time or falling off the maze onto the tiled floor ends each turn. The game is still under development, but the binaries worked flawlessly. A simple editor (Guile 1.6.0) allows you to create your own levels if the three supplied levels are not enough.

Figure 3: Trackballs showing a flying ball (which missed the platform ledge)

What is obvious is the effort these games developers are putting into providing all of the other effects that make games so absorbing, and this includes background music. The Pachi developers have friends who were in a band and persuaded them to allow the distribution of their music with the game. Music in Trackballs comes from one of the three supplied Ogg Vorbis encoded tracks, while special effects rely on .wav encoded sounds.

Spheres of Chaos
This shareware game (£5, US $8 for the full version) is based on Asteroids, but instead of the usual vector graphics it uses very fast, colorful SDL based images. The small 254K download demo [4] goes to show just what is possible with an eye for colour and simple modular graphics. The game is smooth to control, as you would expect, but the colorful explosions and the way the asteroids change shape and size when you shoot them make the game a cut above the rest. The game gets quicker, and alien spaceships materialize to chase and shoot at you. After the first level things get much more difficult, with space-mines and more modular asteroids that, once hit, appear to break into smaller spaceships and hunt you down. The overall effect is one of psychedelic mayhem, and a good afternoon was happily spent shooting everything that moved. Because the game is written with SDL, it is already available for Windows, Linux and RISC OS. ■

Figure 4: Spheres of Chaos – shoot everything that moves

INFO [1] Billard GL: http://www.billardgl.de [2] Pachi: http://dragontech.sourceforge.net [3] Trackballs: http://www.lysator.liu.se/ ~mbrx/trackballs [4] Spheres of Chaos: http://www.chaotica. u-net.com/chaos.htm




SuSE Openexchange Server

Of course it is fun to make disparaging remarks about Microsoft’s Exchange Server, but the underlying concept of this groupware solution has become so popular with so many enterprises that market opportunities for alternatives seem realistic. This prompted SuSE to introduce a new product, the SuSE Openexchange Server, at the Systems 2002 show in Munich, Germany. The server is based on the equally new United Linux distribution and provides a quick and simple YaST based setup. Just like its predecessor, which went under the name of SuSE E-Mail Server, the new challenger to Microsoft’s dominance uses a combination of Postfix, Cyrus IMAPD, LDAP, and PostgreSQL. The new name is mainly down to the fact that Comfire [1], which itself uses Apache and Tomcat, has assumed the role of the webmail and groupware component. This central component (Figure 1) is a closed source product, which makes the “Open” component of the product’s new name somewhat debatable. SuSE supplies two exhaustive manuals designed to provide the admin or user with additional information. Besides describing the installation procedure, the admin manual specifically covers using the administrative web frontends and the configuration of mail clients in a networked environment. The user manual discusses the remaining functionality of the web frontend. In addition to the email server functionality, SuSE also provides Samba with LDAP support. The web frontend allows you to configure the system as a Primary Domain Controller (PDC). SuSE

SuSE Openexchange Server
Basic configuration: approx. £800 +VAT (license for ten groupware clients, an unlimited number of external POP3/IMAP email clients, 30 days’ installation support, 12 months’ system support and update service)
Five additional groupware licenses: approx. £100 +VAT
Additional information: http://www.suse.de/uk/business/products/suse_business/openexchange/


SuSE Linux Openexchange Server

Open Exchange It’s “Seconds out and round four” for the SuSE E-Mail Server – and the new name, SuSE Openexchange Server, shows where SuSE are heading. This will surely get admins thinking about whether they can completely replace Microsoft’s Groupware solution. BY NICO LUMMA

additionally provides a web-based tool for configuring both a DHCP server and a name server for multiple zones, which is certainly a welcome addition for the small business user. The pre-configured spam filter, SpamAssassin, is also new, with initial tests producing useful results. Both the spam filter and the SIEVE based mail filter [2] are configurable via the web based frontend. Of course, users will hardly notice these new features, in contrast to the new web interface, which impresses not only on account of its color scheme, but also because it is far more tightly integrated with the new Comfire groupware and mail frontend than its predecessor was.

Admin’s Little Helper
The groupware component includes typical features such as a web-based calendar and address book, but also a to-do list, a project management tool, a knowledge base, document management facilities, a clipboard and a forum. It allows you to define associations between objects, such as assigning a task to a file in the document management component. This feature looks extremely polished, although a trouble ticket tool would round things off nicely.


Admins in heterogeneous network environments will be more interested in another feature. In contrast to the previous SuSE E-Mail Server product, data synchronization for the ubiquitous Microsoft Outlook is available not only via net-based Palm-Sync, but also directly via an Outlook plug-in. The current plug-in version allows you not only to synchronize Outlook address data with the LDAP based address book, but also to synch calendar data and define appointments. A group appointment feature is due for release at the beginning of 2003. As regards user management, the web frontend simplifies the admin user’s task by allowing easy access to user data, and



Kerio Mailserver 5

not only when creating new users. You can define access restrictions for the groupware product to hide specific components from individual users. The “Groups and Folders” area allows you to assign individual users to multiple groups, and create a folder for Cyrus. Also, the web frontend allows you to change the Postfix and Cyrus IMAPD configuration, or to add an SSL configuration, and is capable of updating any appropriate files in a single step. The administrative frontend is rounded off by an LDAP browser. Openexchange supports system administration by providing a web-based mail queue viewer. The “rrdtool” [3], which creates graphs to help visualize the system load or mail


The admin frontend for Windows and Linux based systems was immediately available and provided direct access to critical settings. Kerio’s webmail component is somewhat spartan and restricted to writing and reading email, although this may be sufficient for normal use. A license for a maximum of 20 clients is available for US $370, with 20 additional user licenses costing a further US $70. The version with an integrated McAfee virus scanner costs somewhere in the region of US $680 for 20 licenses, with 20 additional user licenses costing a further US $230.

traffic volumes (Figure 2), is particularly useful. LDAP remains the central component of SuSE’s Openexchange Server, thus providing admins with a uniform basis for user data. If the admin user decides to set up the Samba PDC, to allow users to authenticate directly to the network, users can use the same password for email and Windows.

[1] Comfire: http://www.comfire.de/englisch/ produkt/produkt.htm [2] SIEVE: http://www.cyrusoft.com/sieve/ [3] “rrdtool”: http://www.rrdtool.org/

Admin Chicanery
Unfortunately, the installation procedure dictated by SuSE proved to be fairly hostile. Why do SuSE insist on you installing a new system, in stark contrast to their declared goal of simple updates? Sites running SuSE E-Mail Server 3.1 have no alternative but to back up and restore their user data. The scripts provided by SuSE took the pain out of the backup and restore operations. It is also hard to understand why the server automatically boots to runlevel 5 and serves up the KDE desktop. The initial applause for the server-side spam filter is also somewhat muted by the fact that SpamAssassin does not run as a daemon and thus produces unnecessary overhead on larger scale systems. On top of SuSE’s seamless Samba integration, the fact that the current version includes a server-side virus scanner in the form of AMaViS, and the Samba “vscan” package (which scans Samba based file servers for viruses), is a noticeable improvement compared to previous versions. All in all, the Openexchange Server offers a range of functions that will fulfill the requirements of many enterprises, and that makes Openexchange a genuine alternative to Microsoft Exchange. ■

Figure 1: The new Comfire groupware product provides enhanced value

The trend towards integrated web frontend based email solutions for small to medium-sized businesses has prompted Kerio to offer a 30 day trial license for the Kerio Mailserver (Version 5) product (you can download a version from http://www.kerio.com/us/kms_home.html). The server supports IMAP, webmail, WAP, and POP3 connections with optional SSL encryption. In addition to the standard version, which supports the Grisoft AVG, NOD32, F-Secure, and eTrust InoculateIT virus scanners, the product is also available with integrated support for McAfee Anti-Virus. We had no trouble installing both RPM packages with the server and the separate admin console (approx. 6 Mbyte in total) in our lab environment.

THE AUTHOR
Nico Lumma is the Head of IT at Orangemedia.de GmbH and looks back on years of experience with the practical application of Linux in enterprise environments.

Figure 2: Mail traffic volume at a glance




Caché 5

Version 5 of Caché, the postrelational database

Access to Objects
An object oriented database like Caché prevents the so-called paradigm break between the database layer and the object oriented application. Unfortunately, the new Caché 5 version seems to have developed a nasty list towards Windows. BY BERNHARD RÖHRIG


Intersystems from Cambridge, USA, has released a major update of the postrelational database Caché. Today, fast and error-free application development is more or less unthinkable without object oriented workflows. However, in many cases this contrasts with the management of the underlying data in the form of SQL tables. This so-called paradigm break requires all kinds of contortions to be handled by the developer. One of the systems that prevents this break, or at least mitigates its effect, is the database management software Caché, which recently went to version 5. Linux Magazine was given an exclusive opportunity to test a pre-release version. Caché does not manage databases as flat, relational tables, but rather as objects. A proprietary Unified Data Architecture that allows both object oriented and SQL access provides the underlying framework (see Figure 1). This allows almost arbitrary frontends to access the datastores, in order to retrieve and analyze data. This in turn allows developers to use tools with which they are familiar,


and so prevents a dependence on a specific operating system, allowing different server- and client-side operating systems to be used. Caché Server Pages (CSP) are a particularly interesting feature, as they generate data and event based HTML or XML code on the fly, which is then rendered by the user’s browser. The Linux distributions Red Hat 7.2 and SuSE 7.3 are both supported, but earlier or later versions should also work with only slight modifications, provided they use a 2.4 kernel.

Removing Obstacles to Installation As regards the installation, very little has changed in comparison to previous versions. The fact that the documentation can now be accessed directly on the CD makes life somewhat easier for new users. This is quite important as


most Linux distributions do not provide enough shared memory for the voracious database server, which unfortunately needs to be fired up during the installation procedure to install the system database. So if you have not heeded all the warnings in the documentation, your first installation attempt is most probably doomed to failure. To remedy this situation, check the maximum shared memory size and increase it if required:

# cat /proc/sys/kernel/shmmax
33554432
# echo "200000000" > /proc/sys/kernel/shmmax

If your system has 256 Mbytes of memory or less, you can use a slightly lower value. The “echo” command should be added to a start-up script, such as “/etc/init.d/boot.local”.



Apart from this, administrators can normally rely on the installation script to do what it is supposed to do. The bugs which were evident in previous versions seem to have been removed, which allows you to install even the web server connection with very little administrator intervention. You can skip the license key input and install the database in single-user mode [1]; the license can be extended at any later time. After completing the installation steps you can immediately start using the database server. You might like to enter http://localhost/csp/samples/menu.csp in your browser to gain a first impression of the server’s versatility. The documentation is far more exhaustive, more consistent, and easier to use than that of previous versions, and is fully integrated in the Linux variant. To access the documentation on Linux, type http://localhost:1972/docbook/DocBook.UI.Page.cls in your browser. Incidentally, the documentation itself is a Caché XML application. Of course, progress has not only been made with respect to the documentation; there are many additions and enhancements to the server proper. Some of them are described in the “Innovations at a Glance” box. In line with current trends, the program makes more use of standards such as XML, but the Basic language also plays a more prominent role than previously. The new Caché Studio workbench replaces both the Object Architect and the Studio of previous versions, and offers a uniform environment for the development of class definitions, method code and CSP pages. Macromedia Dreamweaver, which provides its own interface, is still a useful alternative for developing CSP pages. The debugger is new to the Studio workbench; it should help reduce some

Figure 1: Caché provides various access approaches for datastores. Java, EJB, ActiveX, .NET, C++ and CSP clients reach the Caché server through objects, SOAP, ODBC, JDBC, XML, Basic and SQL interfaces; inside the server, Caché Object Script, the class library, the SQL gateway and the ActiveX gateway sit on top of the multidimensional Caché data engine

development effort and allow the developer to tackle more complex projects. The Studio and the user-friendly management tools, in which some details are definitely improved, obviously require a GUI, and this is the bad news for Linux users: the new Caché version does not provide an X Window implementation.

Mandatory Windows Development Platform
In other words, if you want to run everything on a single server, you have no alternative but to purchase VMware and additional Windows licenses. Of course, all the tools are available for remote use via TCP/IP.

For the command-line inclined, you can still use telnet to send instructions directly to the database server. The commands required for this are outlined in the Caché Object Script Reference section of the online documentation. Some of the most important commands are also detailed under [2].
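As a flavour of what such a telnet session looks like, here is a hedged sketch of two Caché Object Script statements: the first stores a value in a global, the second prints it back. The global name ^Stock and its subscript are invented examples, not taken from the Caché documentation:

```
SET ^Stock("apple")=42
WRITE ^Stock("apple")
```

Globals of this kind are the multidimensional storage that the Caché data engine is built around.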

Innovations at a Glance
• Improved performance
• More effective compiling of classes
• Enhanced SQL engine and manager
• ODBC driver for Linux
• Enhanced Java support including J2EE and EJB
• SOAP access
• New ActiveX gateway
• Enhanced CSP technology
• Complete support for XML (class definitions, objects as XML documents)
• Class Inspector for effective class definition management
• Basic and Java for Caché methods
• Not for Linux: IDE with editors, wizards and debugger (Caché Studio)

Conclusion

Caché 5.0 is a noteworthy alternative database and application server for Linux. Improvements in comparison with previous versions mean that updating is recommended, and newcomers would do well to look into Caché. However, since the developer environment is reserved exclusively for Windows, Caché is definitely not an option for Linux purists. ■

A free single-user license of Caché 5 for Linux is available for download at http://www.intersystems.com/downloads/index.html. The download comes with an online documentation set as well as a “Building Applications with Caché” tutorial.

INFO
[1] Free evaluation copy: http://www.intersystems.de/downloads/
[2] Kirsten, Ihringer, Schulte, Rudd: “Object-oriented Application Development: Using the Caché Postrelational Database”, Springer Verlag, ISBN 3-540-67319-9

THE AUTHOR

Intersystems Caché 5

Bernhard Röhrig is an IT consultant and has written several books on Linux/Unix and databases. You can reach him on the Internet at http://www.roehrig.com.

www.linux-magazine.com

February 2003

47


KNOW HOW

Initialization

From init to eternity

Ready – Steady – Go!

When you switch a computer on it will display a number of cryptic messages before indicating that it is ready for use by showing the login window. This article shows you the background processes going on while your system is booting. BY MARC ANDRÉ SELIG

There is something fascinating about the boot procedure. It all starts with a small amount of silicon and a tiny program in the BIOS that does nothing more than load an equally tiny program from the hard disk. Then something happens, and at the end of whatever it may be, your workstation is up and running, and completely in control of its horde of complex hardware features and peripheral devices, with a network connection and a bunch of daemons enabled – in other words, you have a real, live Linux system. The boot procedure’s main task is easily summarized – to initialize the hardware and software. The data structure of the operating kernel is prepared first; this is followed by a rudimentary check of the available hardware, after which the appropriate drivers are loaded. This allows the system to create the preconditions required to load the operating system proper. After the BIOS and the boot sector have completed their tasks, the details they have ascertained about the hard disk are temporarily lost. Linux then goes on to load normal programs, although they too are initially concerned with the hardware. In this phase more exotic peripheral components are initialized. Additionally, certain maintenance tasks are performed; for example the system checks the hard disk briefly for errors, or tidies up areas used for storing temporary files. Finally, various daemons and services are started to allow the Unix system to get on with the job in hand. The order in which all this occurs is precisely defined and carefully pre-meditated.

Kernelspace and Userspace

Before we take a detailed look at the individual steps, it is important to make a distinction between where and how individual functions are executed. This also decides if and how easily we can manipulate these functions. The first few steps after switching on the system have nothing to do with Linux. The BIOS is something that mere mortals normally have nothing to do with, although you might need to install an update from time to time. And the first program that the BIOS loads from the boot sector is not really part of the operating system proper. The so-called boot loader merely has the task of launching the operating system. Besides the Windows NT boot loader, there are one or two major contenders in the world of Linux, such as LILO and GRUB. What they all have in common is the fact that they load and launch the operating system kernel at start-up. For every kind of activity that follows this step, Unix-type operating systems provide two fundamental


options: Functions can run in the kernel or user space. Typical kernel functions are tasks such as initializing major hardware drivers during the boot process. To influence what happens within the kernel you need to modify the Linux source code and re-compile – this is the domain of alpha nerds! Userspace refers to everything that can be controlled by “normal” programs or scripts. Of course, there is interaction between the two. If a userspace program wants to access the hard disk, it will call a corresponding kernel function. And in the case of Linux, kernel modules make things particularly complex. Modules are definitely kernel functions, however, they are loaded and controlled by userspace programs.

Hardware

The first few messages that appear on screen are thus generated by the kernel, and we will not be investigating them any further in this article; after all, you cannot influence them without



considerable background knowledge. But after ten or twenty seconds the kernel has finished its preparatory work, and launches the first userspace program. For most of today’s Linux distributions this will tend to be a short script called linuxrc, which is stored along with a few modules on a RAM disk created by the boot sector program. linuxrc is designed to load drivers and functions required to continue the boot process which have not been incorporated as kernel components. Drivers for SCSI hard disks, encrypted hard disks or important network drivers are some examples. The functionality provided by linuxrc is optional – if all the drivers required by the system have been compiled into the kernel, meaning that no additional modules need to be loaded, you can do without linuxrc and the accompanying RAM disk. However, it is unlikely that you will need to edit this script yourself, as this task is normally performed by your distribution’s installation routine, which ensures that the script includes the commands required for your computer and that the RAM disk is available. If you modify your kernel at a later stage, you can save a lot of effort by compiling any features you need into the kernel proper instead of using modules.

An important goal must be reached by the end of this step: the remainder of the operating system must be available on the hard disk; that is, at least the root partition with the directories /bin, /sbin, /lib, /etc, /dev, and /tmp must be accessible. Full access will normally not be provided at this stage; that is, write access is normally prohibited. But at least the kernel can access the predefined sections of the partition.

GLOSSARY
BIOS: The BIOS (Basic Input/Output System) is normally stored on a programmable memory device (modern computers use EEPROMs) on your computer’s motherboard. In addition to the setup program that provides some hardware setup functionality, it normally contains some routines that control your computer’s boot logic. It tells the computer to read the boot sector on the hard disk, that is, a few kilobytes of data, and run the program located there – and nothing else. The software in the boot sector is responsible for everything else. Theoretically, the BIOS can include device drivers for DOS-based operating systems, but these are not used by Linux under normal circumstances.
Boot sector: The boot sector occupies the first track on the hard disk, and may contain a program responsible for loading the operating system. There is only one main boot sector on each hard disk, the so-called Master Boot Record (MBR), and an additional boot sector for each individual partition. The MBR is normally used to select an operating system when several OSes are installed on the same computer, or to load the boot sector on a hard disk partition, which in turn launches the operating system itself.
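As a quick check that the preconditions just listed are met on a running system, the directories can be probed from the shell. This is only an illustrative sketch – read access is all the boot process needs at this point:

```shell
# Probe the directories the early boot process expects to find on the
# root partition; prints one status line per directory.
for d in /bin /sbin /lib /etc /dev /tmp; do
  if [ -d "$d" ]; then echo "$d present"; else echo "$d missing"; fi
done
```

On a healthy system every line should report "present".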

init: The Mother of All Processes

After preparing the hardware, the kernel launches the boot process as the user sees it. No matter what you may change at a later stage, the first program to be launched will always be /sbin/init. Everything that happens on a Unix system originates in init, as it launches every other program and script. Unfortunately, this is where things start getting complicated again. Although it is clearly defined that init must be the first process to be launched, it is far less clear what init should do. There are two common approaches for Linux. The first approach is called simpleinit and basically runs a simple script. The second approach is called


SysVinit and is used by well-known distributions such as Red Hat, SuSE, or Mandrake. The name SysVinit is derived from Unix System V – so you will find similar init functionality on other Unix systems, such as Solaris. The configuration of SysVinit is twofold: the configuration file proper is called /etc/inittab. It contains a table that assigns programs to specific events and starts a program when its event occurs. An “event” might be the boot process, the transition to a networked environment, or shutting down the computer. The second part of the init configuration comprises the so-called init scripts, which we will be looking into in more detail shortly.

Runlevel

Runlevels are an important concept of SysVinit. A runlevel describes an operating state of a Unix system. The table “Typical Runlevels on a Unix System” contains an overview of these states as they are typically defined.

/etc/inittab

The inittab file controls the behavior of init. It contains comments (introduced by a hash sign #) and instructions in a table.

Figure 1: Summary of the boot process. Switch on and self check; the BIOS loads the boot sector of the hard disk; the boot loader selects the operating system and kernel; Linux is loaded (kernel and initialization); if a RAM disk is available, linuxrc loads driver modules; init starts; rc.sysinit prepares the network and file systems; finally the system changes to the matching runlevel.


Listing “Critical /etc/inittab entries” shows a short excerpt containing some critical entries. Each line in the table comprises four colon-separated entries. The line starts with one or two letters or digits as a mnemonic abbreviation for the corresponding entry. The next entry contains one or more digits and specifies the runlevels where the entry is valid. For example, mingetty in the listing is only launched in runlevels 2 through 5; the line with shutdown preceding it is not restricted to specific runlevels, and thus applies in all cases. The third entry can contain a keyword for additional conditions. This normally defaults to respawn; that is, the corresponding program is restarted if it terminates. For example, mingetty generates a login prompt. If a user terminates a session, mingetty is relaunched and is thus available for the next user. once is an alternative option; here the program is run once only, when the system enters the current runlevel. This operating mode is useful for daemons and other programs that retire into the background immediately after being launched. If init attempted to relaunch these programs immediately, a large number of instances would be created within an extremely short period of time, and this would inevitably crash the computer. The disadvantage is that init will not react if one of these programs terminates prematurely. The listing also contains a number of keywords: initdefault specifies the runlevel that the system defaults to on booting; sysinit describes a program that is only run on booting. Lines including the wait keyword contain scripts that are executed on entering the corresponding runlevel. They stop inappropriate programs; for example, a web server running on the machine should be terminated before rebooting. On the other hand, wait scripts also launch new programs required for the various operating states. Finally, the fourth column in an inittab entry describes the program and any parameters, which will be called provided any applicable conditions have been fulfilled.
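The colon-separated format just described is easy to inspect mechanically. The following sketch works on a copy of the file, so it is safe to experiment with; the sample lines mirror the listing:

```shell
# Extract the default runlevel (the initdefault entry) from an
# inittab-style file by matching the third colon-separated field.
cat > /tmp/inittab.example <<'EOF'
# Default runlevel after boot
id:3:initdefault:
si::sysinit:/etc/rc.d/rc.sysinit
1:2345:respawn:/sbin/mingetty tty1
EOF
awk -F: '$3 == "initdefault" { print $2 }' /tmp/inittab.example
```

Pointed at the real /etc/inittab, the same awk one-liner reveals which runlevel your system boots into by default.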

init Activities

In a typical configuration as shown here, init will perform three tasks after booting:
• First, basic settings are applied by running the rc.sysinit script.
• Second, init will switch to the runlevel specified in initdefault, thus starting various daemons and server programs.
• Third, software that generates login prompts is launched. Depending on the peripheral devices attached, this can mean a normal character-based or GUI login, or it may involve initializing a modem or an ISDN adapter. Network logins will be the domain of various daemons, however.

The basic configuration tasks performed by rc.sysinit normally comprise the following for most distributions: the operating system clock is synchronized with the hardware clock; the hard disks are scanned for errors, and then mounted. The swap partition is activated, and a keyboard driver may be loaded to allow the administrator to interact with the system in case a hard disk error is detected. The network subsystem is prepared, for example, by setting the host name. Additionally, various janitorial tasks are performed. These activities will depend on your distribution – early Linux systems were normally restricted to clock synchronization, host name setting, and hard disk checks.

Critical /etc/inittab entries
# Default runlevel after boot
id:3:initdefault:
# Initialization
si::sysinit:/etc/rc.d/rc.sysinit
# Individual runlevels
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
# What happens on Ctrl-Alt-Del?
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
# Text Mode Login
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
# GUI Mode Login
x:5:respawn:/usr/X11R6/bin/xdm -nodaemon

Typical Runlevels on a Unix System
Runlevel  Status Description
0         The system is being shut down.
1         Maintenance mode (a single-user mode, where only the administrator can work).
2         Restricted system functionality; for example, network services or GUI support may not be available.
3         The system is fully operational.
4         This runlevel is available for your own settings. However, most administrators will tend to modify an existing runlevel rather than defining a new runlevel 4.
5         The system is fully operational. Depending on your distribution, either runlevel 3 or 5 may be used for this status.
6         The system is being prepared for rebooting.

Defining Runlevels

As we learned previously, a program called init controls the programs run during the boot process. We have also looked into the configuration file, /etc/inittab, which defines the default runlevel that the system will assume on booting. But how does the computer know what the runlevel comprises? Each runlevel from 0 through 6 (see Table “Typical Runlevels on a Unix System”) has its own directory that contains a detailed description of the runlevel. On a “normal” Linux system the directory for runlevel 5 would be /etc/rc.d/rc5.d,


for example, but names may vary depending on your distribution – /etc/rc5.d or even /etc/init.d/rc5.d are common alternatives. These directories contain the so-called init scripts, each of which controls one of your computer’s subsystems, such as the clock, the mail server, the web server or even the print spooler. The syntax for init scripts is uniform: when an init script is called, it is passed exactly one argument, either start or stop, depending on whether the service in question needs to be initialized or stopped. Many scripts can handle additional arguments for other tasks, but each init script will be aware of start and stop. Thus, init is capable of controlling each subsystem automatically. When a system enters a different runlevel (see Figure 2), init will call all the init scripts in the corresponding runlevel directory. Changing to runlevel 2 will launch the scripts in /etc/rc.d/rc2.d, for example. Let’s take a look at the directory:

$ ls /etc/rc.d/rc2.d
K09sshd      S16apmd     S75keytable
K75netfs     S20random   S90xfs
K89portmap   S30syslogd  S99local
S08ipchains  S40crond
S10network   S60lpd

The name of each script comprises three sections:
• The letter “K” or “S” specifies whether to Kill or Start a subsystem.
• A two-digit number between 00 and 99 specifies the order in which the scripts are called.
• An abbreviation describes the subsystem to make life easier for admins.

The runlevel directory shown in our example will thus kill the sshd, netfs, and portmap services. It then goes on to re-initialize a number of subsystems, in order to correct network settings or load a keyboard driver, for example. Current distributions tend to avoid calling subsystems that are already running, thus avoiding duplicate daemons. A neat Unix system administrative style involves the init scripts in the runlevel directories being links to files in other directories. The scripts themselves will thus be stored in /etc/init.d or /etc/rc.d/init.d. To activate a subsystem, you simply create a new link:

# cd /etc/rc.d/rc5.d
# ln ../init.d/mysql S97mysql
# cd ../rc0.d
# ln ../init.d/mysql K02mysql
# cd ../rc1.d
# ln ../init.d/mysql K02mysql
# cd ../rc6.d
# ln ../init.d/mysql K02mysql

The advantage: changes to the init script immediately apply to all copies, thus avoiding version conflicts. The example shows another important technique; when you create a new start link, you should immediately create the appropriate stop links for runlevels 0, 1, and 6.

Figure 2: Procedures on changing runlevel. The figure illustrates the sequence: root gives the command telinit 2; init receives a signal and looks up /etc/inittab to see whether there is anything new for runlevel 2; /etc/rc.d/rc 2 is called; the stop scripts (K09sshd, K75netfs, K89portmap) and then the start scripts (S08ipchains through S99local) from /etc/rc2.d (or the like) are run.

Prospects

If you have an hour or so to spare, take a look at the init scripts for the major runlevels on your system. You can learn a lot about your system by doing so – and at the same time brush up on your shell programming skills. A word of warning at this point: do not fool around with init. A single mistake in the boot configuration can tie your computer up completely – and you might not even notice until you reboot next morning. ■

GLOSSARY
Links: Links are a vital concept for Unix-style operating systems. Instead of creating a new file, you create a directory entry that points at the original file. When a program accesses this new entry, it will see the content of the linked original file. Hard links cannot be distinguished from the original file, whereas symbolic links are in fact only pointers.

THE AUTHOR
Marc André Selig spends half of his time working as a scientific assistant at the University of Trier and as a medical doctor in the Schramberg hospital. If he happens to find time for it, his current preoccupation is programming web-based databases on various Unix platforms.
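The uniform start/stop interface that init scripts expose can be sketched as follows. The service name myservice is a hypothetical example; a real script would launch or kill a daemon where the comments indicate:

```shell
#!/bin/sh
# Minimal init-script sketch: every init script understands the
# arguments "start" and "stop" and acts on its subsystem accordingly.
service_ctl() {
  case "$1" in
    start) echo "Starting myservice" ;;  # daemon would be launched here
    stop)  echo "Stopping myservice" ;;  # daemon would be stopped here
    *)     echo "Usage: {start|stop}" >&2; return 1 ;;
  esac
}
service_ctl start
service_ctl stop
```

Because init only ever needs these two verbs, it can drive any subsystem it finds in a runlevel directory without knowing anything else about it.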




fstab

File Systems

Fstab in the dark

The file system table (fstab) contains information on the partitions and volumes that need to be inserted into the directory tree on starting up the system. The table allows the administrator to enhance the security of a multi-user system by applying various options. BY ANDREAS KNEIB

During the boot process the /etc/fstab file is read by the mount command in an init script and implemented line by line. It includes entries for device files, CD-ROM drives and hard disk partitions which are available for immediate access after the system initialization. The administrator can use the configuration in this file to assign mount points to drives and partitions, to specify the file system type, or to regulate access via the access bits. Let us take a closer look at the entries in Listing 1. The fact that the entries are divided into six columns is immediately apparent. The first column, (fs_spec), contains the device file name belonging to the partition. The second column, (fs_file), contains the mount point, that is, the position where the medium is inserted into the directory tree. The third column, (fs_vfstype), is used to define the file system type. Table 1 contains a list of some of the available system types. The entries in the fourth column, (fs_mntops), define access to the volume. As you can see in Listing 1, this column can contain multiple, comma-separated options. These statements are also available on the command line, if you supply them as mount options. The manpages for this command also provide detailed information on the various parameters. You can refer to Table 2 for an initial overview of the mount options. The dump program that creates a backup of the data on an Ext2 file

system, refers to the entry in column five, (fs_freq), for its configuration data. Refer to the dump manpages for additional details on the functionality provided by this backup tool. Like its predecessor, the last column is also read by a program. In this case it tells the fsck command how to check the consistency of the file system. The root directory is tagged with a 1, any other file systems with a 2. A value of 0

Listing 1: fstab example
# The following lines are designed to explain and implement assignments
# (fs_spec)  (fs_file)  (fs_vfstype)  (fs_mntops)  (fs_freq)  (fs_passno)
# [1]        [2]        [3]           [4]          [5]        [6]
/dev/hda1    /boot      ext2          defaults     1          2
/dev/hda2    /          ext2          defaults     1          1
proc         /proc      proc          defaults     0          0

is assigned for file systems such as CD-ROMs that do not need to be checked by fsck. Now let’s add a few examples to the rudimentary /etc/fstab in Listing 1.
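The six-column layout described above can be pulled apart mechanically. A small sketch using the first line of Listing 1:

```shell
# Split an fstab-style line into its six named fields.
line='/dev/hda1 /boot ext2 defaults 1 2'
echo "$line" | awk '{ print "device:", $1; print "mount point:", $2;
                      print "type:", $3; print "options:", $4;
                      print "dump:", $5; print "fsck pass:", $6 }'
```

The same field positions are what mount, dump and fsck rely on when they read /etc/fstab.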

CD-ROM and DVD

After taking a quick look at the contents of our two tables, it should be no problem to define an entry that allows us to mount the CD-ROM drive – at least for home users with stand-alone computers, as we will see:

/dev/cdrom /cdrom auto ro,noauto,user,exec 0 0

Let’s look at the syntax of the line. The /dev/cdrom entry specifies the device name of the drive. In this case,



/dev/cdrom is a symbolic link that points to the proper device file (for example /dev/hdc). The /cdrom field indicates the mount point in the directory tree. In this case the drive is mounted directly below the root directory in /cdrom. Some distributions collate mount points for removable media below /mnt or /media. You could choose the file system type iso9660 instead of auto if you are experiencing difficulty mounting DVDs. The ro option permits read-only access to the mounted medium. The noauto entry means the drive is not mounted on starting the system; instead the system waits for an explicit mount /cdrom command in the shell. The user option allows any user to issue the mount command. The exec keyword additionally allows users to execute programs on the CD; if noexec is stipulated, it is impossible to start programs, although the x attributes normally required to do so are present. Whether or not you decide to use these options depends on your approach to secure administration. You can create a similar entry for a floppy drive:

/dev/fd0 /floppy auto noauto,user 0 0

Table 1: Common file system types
auto      Assign file system automatically
ext2      ext2 file system
ext3      ext3 file system
reiserfs  Reiser file system
jfs       IBM journaling file system
minix     Minix file system
vfat      Windows 95, Windows 98 or DOS file system
ntfs      Windows NT/2000/XP file system
msdos     MS-DOS floppies/partitions
umsdos    MS-DOS with Unix add-ons
hpfs      OS/2 file system
xiafs     Xia file system
swap      Swap files/partitions
usbdevfs  USB device administration
devpts    Pseudo terminals
proc      Process administration
iso9660   DVDs/CD-ROMs
udf       Universal Disk Format (DVDs)
nfs       Network File System
smbfs     Server Message Block protocol
ignore    (ignore partition)


In this case the ro has been omitted, to allow write access to the floppy. But the exec option has been removed to prevent users from starting programs stored on floppy disks.

From Process Administration to USB

The system stores various internal kernel administration data in files. This principle applies both to the proc file system and to USB devices (usbdevfs). devpts is now the base for pseudo terminal operations. All of these files provide interfaces used by emulators, such as xterm. To allow devices and processes to run smoothly after booting the system, three virtual file systems must be added to the configuration, as shown in Listing 2.

Listing 2: Virtual file systems
devpts   /dev/pts      devpts   defaults 0 0
usbdevfs /proc/bus/usb usbdevfs defaults 0 0
proc     /proc         proc     defaults 0 0

Taming Windows

The following section looks into the security of DOS and Windows partitions. We will be mounting a Windows 98 partition first. Read and write access to this section of the directory tree should be available to every user. Additionally, the file system will be activated by a mount /win98 command, issued by root:

/dev/hda3 /win98 vfat noauto,umask=0 0 0

The umask option in this entry has not been discussed previously. As Table 2 shows, the option sets inverse file privileges. What does that mean? Just like the chmod command, umask works with octal numbers. The access bits are calculated by subtracting the desired file privileges from seven, and assigning the result as the umask. The access privileges for the modes read, write and execute (octal 7) are thus assigned by entering 0, r-x (octal 5) by entering 2, and rw- (octal 6) by typing a 1. As Windows 98 does not support access privileges for files, we can use Linux access bits to impose an extra level of security. In this case we are applying fairly lax security privileges, since umask=0 will allow any user to read, write and execute any file. You can type man -P "less +'/^[ ]*umask'" bash for additional information on using masked file privileges.

The quiet, iocharset=, and uni_xlate options are interesting in this context. They specify error handling and character set management. As these aspects are beyond the scope of this article, refer to the mount manpages for further details. Let us now move on to the next candidate, Windows XP, where we will be applying a more stringent level of file system security:

/dev/hda4 /winxp ntfs ro,uid=999,gid=555,user 0 0

As Windows XP, NT, and 2000 use the NTFS file system, only read-only access (ro) is currently available (the driver is also capable of write access, but this is currently experimental and disabled in the standard kernel). The uid= and gid= options are used here. These abbreviations are short for User Identification (UID) and Group Identification (GID). The /etc/passwd file contains a list of all users, which includes details on the number assigned to a user and the user’s group memberships. You can also ascertain these values by typing id or id username:

[andreas]~ > id
uid=500(andreas) gid=100(users) groups=100(users),[...],42(trusted)
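The umask subtraction described above can be verified with shell arithmetic. Equivalently, the granted mode is the full permission bits with the mask bits cleared; a sketch:

```shell
# The access bits granted are the full bits minus the mask, i.e.
# mode = 0777 & ~umask. A leading zero marks an octal literal.
printf 'umask=0    grants %o\n' $(( 0777 & ~00 ))
printf 'umask=022  grants %o\n' $(( 0777 & ~0022 ))
printf 'umask=0177 grants %o\n' $(( 0777 & ~0177 ))
```

So umask=0 grants rwx to everybody (777), while umask=0177 restricts files to rw- for the owner only (600).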

The UID/GID options allow you to assign a user and group ID to each Windows XP file. Now, all you need to do is apply a suitable umask and create an appropriate group to allow for more granular access control of the Windows files.

Samba and NFS

Let us stick with Windows for the time being and investigate Microsoft’s own variant of a network directory. The




Figure 1: Mounting the subscription CD in the directory tree

counterpart to the Network File System (NFS) commonly found on Unix is the Server Message Block (SMB) protocol. A Windows server can use this protocol to provide access to its data. You will need to install Samba to access external Windows computers from Linux. The smbclient tool provides access to shared Windows directories. But it is a lot easier to mount the directory in the local directory tree:

//win/C /winc smbfs user,noauto 0 0

This entry allows the C directory on the win computer to be accessed by any user in the /winc directory on Linux. However, the user will be prompted for a password after issuing the mount command. Although users can supply a username parameter when issuing the mount command (-o username=tux,password=pw), you might like to simplify this task:

//win/C /winc smbfs user,noauto,username="tux",password="pw" 0 0

You might be a little confused at this point, because you have not been able to find the username and password options in man mount. The program actually runs smbmount at this point, and the smbmount manpages are where you should be looking for further details on this topic. The configuration required to mount a directory via the Network File System (NFS) is similar, as the following entry shows:

linux1:/out /nfs nfs user,noauto 0 0

This mounts the /out directory exported by the computer linux1. The directory must be entered in the /etc/exports file on that computer; however, we will not be looking into NFS at this stage.

Figure 2: Refusing access

Users in Command

Files in MP3 format are a good idea. You can listen to them, list, manage and collect them. And above all else, you can waste a lot of space on the file system with them. What options are available to the administrator to prevent individual user collections from getting out of control? The answer is to use quotas [1]. Quotas allow the system administrator to restrict the amount of storage capacity available to groups and individual users. You can define the quotas with either soft (dynamic) or hard limits. Quotas use separate configuration files to manage partitions, and are simple to apply. The original HOWTO is available
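If you would rather not have a cleartext password in /etc/fstab at all, many smbmount versions also accept a credentials file instead. This is a hedged sketch – the file name is a hypothetical example, and the file should be readable by root only:

```
//win/C /winc smbfs user,noauto,credentials=/etc/samba/cred.tux 0 0
```

The referenced file would then contain the username = and password = lines that were previously embedded in the fstab entry; check man smbmount for the exact syntax supported by your version.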

Table 2: Overview of mount options
defaults  rw,suid,dev,exec,auto,nouser and async
exec      allows binary and script execution
noexec    prevents binary and script execution
user      allows a user to mount the file system
noauto    must be mounted explicitly by the mount command
ro        mounts the file system in read-only mode
rw        mounts the file system in read-write mode
umask=    inverse bitmask of the access privileges (e.g. for FAT file systems)
uid=      user ID of the data
gid=      group ID of the data
sync      synchronous I/O operations

on the Web at [2]. However, quota support must be compiled into the kernel, if you intend to use quotas. As a full description of configuring this program is beyond the scope of this article, we will be focusing on the entries in /etc/fstab. The usrquota option is provided to restrict the amount of space available to users. The option is entered immediately after the defaults entry and affects the /home partition:

/dev/hda5 /home ext2 defaults,usrquota 1 1

You can replace usrquota by grpquota to apply quotas to groups:

/dev/hda6 /usr ext2 defaults,grpquota 1 1

If required, you can apply both settings to a single partition:

/dev/hda6 /var ext2 defaults,usrquota,grpquota 1 1
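A quick way to see which entries actually carry quota options is to filter the options column. A sketch, using a copy of the lines above:

```shell
# List the mount points whose option field enables user or group quotas.
cat > /tmp/fstab.example <<'EOF'
/dev/hda5 /home ext2 defaults,usrquota 1 1
/dev/hda6 /usr ext2 defaults,grpquota 1 1
/dev/hda6 /var ext2 defaults,usrquota,grpquota 1 1
/dev/hda1 /boot ext2 defaults 1 2
EOF
awk '$4 ~ /quota/ { print $2 }' /tmp/fstab.example
```

Run against the real /etc/fstab, this shows at a glance where the quota tools will be active.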

A separation of system and user data can be achieved by defining appropriate partitions for your Linux installation. This allows for ease of administrative intervention. Also, a well-planned fstab structure will save an administrator headaches – especially when under time pressure with things going wrong. ■

INFO
[1] Quotas: http://www.sourceforge.net/projects/linuxquota
[2] Quota Howto: http://www.tldp.org/HOWTO/mini/Quota.html


XEmacs

KNOW HOW

Email and Newsgroups with (X)Emacs and Gnus

GNU Tools for News

Communicating by email is mainly a question of actually writing something – so why bother launching an extra mail program if your (X)Emacs text editor is already running? BY OLIVER MUCH

Whether you need to edit a text, source code, or a web site, the text editor probably comes top of the "most frequently used program" charts. But to send an email, you are supposed to launch an external mail client, which in turn launches a text editor? "You must be joking!" is the cry you'll hear from most Emacs users, as both XEmacs and Emacs provide an add-on that extends their functionality to include mail and news: Gnus [1]. Typing M-x gnus inside Emacs, or xemacs -f gnus & in a shell, provides access to the mail and news client. However, do not expect too much at this point, because you will still need to tell Gnus where to access your mail and newsgroup resources.

Beg, steal or borrow? Gnus can read netnews from a special directory (the so-called spool), from a local news server (for example, leafnode, sn or INN), or from an external server. To read news from the spool, add the following line to the .gnus file in your home directory (this is where Gnus expects to find all its configuration information):

(setq gnus-select-method '(nnspool ""))

The gnus-select-method variable specifies how Gnus will access your

news. You will need to assign a list to the variable, with the news access method in the first field (nnspool, "netnews spool", in this case). The second (empty) field is not significant in our case. The advantage of nnspool is that this method is extremely quick. However, this is not much consolation if there are no newsgroups stored in your spool directory. You will need a program that fetches news, such as the aptly named fetchnews tool, which is installed with the leafnode news server, in order to populate your spool directory.

Gnus can also use the NNTP protocol to talk to a server. This second access method is referred to as nntp and has the advantage that you are not restricted to a single news spool, but can ask a server to request and manage news from multiple external sources. The

(setq gnus-select-method '(nntp "localhost"))

entry in ~/.gnus tells Gnus that you want to use a news server that you have installed locally (localhost). To use an external news server instead of a local one, simply replace localhost with the Internet address of the remote server.

If the remote server requires you to supply some kind of authentication (that is, a username and password) in order to fetch or send news, you can store your authentication data in the ~/.authinfo file, which you will probably need to create at this point:

machine news.server.co.uk login username password secret

Replace news.server.co.uk with the name of the server, username with your own username, and secret with your password on the news server. You can then add the following line

(setq nntp-authinfo-file "~/.authinfo")

to your ~/.gnus file to tell Gnus where to find your access information.

You have mail (or maybe not)

Gnus differs from other mail and news programs in one particular aspect; from the user’s viewpoint it does not distinguish between mail and news. This allows you to assign both file types to specific groups or delete them after a while. And there is little to distinguish the procedures for accessing mail or news.

www.linux-magazine.com

February 2003

55



Your first task will be to tell Gnus where to access mail. The various backends that are available for this purpose are distinguished by the way they store mail: should each message be stored in a file of its own, or do you want to store all your electronic correspondence in a single file? In the first case, you will need the "Mail Spool" backend, which stores each incoming message as a file in the ~/Mail directory. The following entry in ~/.gnus specifies the method:

(setq gnus-secondary-select-methods '((nnml "")))

Of course this requires tons of inodes, which is a bad idea for computers running low on resources. The good news is that the mail spool allows Gnus extremely fast read access to your mail, so your decision should be a compromise between these two factors. This assumes that your mail is already in your local inbox. You might like to use fetchmail to move your mail from your provider's POP3 server to your inbox, although Gnus can actually fetch mail without any help from external programs. The following settings allow Gnus to do so:

(setq mail-sources '((file :path "/var/spool/mail/username")
                     (pop :server "pop3.mail.co.uk"
                          :user "username"
                          :port "pop3"
                          :password "secret")))

The mail-sources variable stores the sources from which you will be receiving mail. In our example, Gnus retrieves mail messages for the user username from a file in /var/spool/mail/username and

also from an external mail server pop3.mail.co.uk. The user account on this system is username and the password for the account is secret. The pop3 protocol is used for communicating with the server. Of course Gnus needs online access to retrieve mail from this location.
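If you prefer the fetchmail route mentioned above, a minimal ~/.fetchmailrc for the same (placeholder) account might look like the following sketch; check man fetchmail for the authoritative syntax:

```
poll pop3.mail.co.uk protocol pop3
  user "username" password "secret"
```

fetchmail then delivers the retrieved messages to your local inbox, where the file entry in mail-sources picks them up.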

Who am I? To provide details about yourself in your mail and news posting headers, you can add the following entries to ~/.gnus:

(setq message-from-style 'angles
      user-mail-address "myname@provider.co.uk"
      mail-host-address "my.computer.name"
      message-syntax-checks '((sender . disabled)))

Replace myname@provider.co.uk with your valid email address and type your host name for my.computer.name. Setting the message-from-style variable to 'angles tells the program to place your address in angle brackets. You will not want Gnus to check your address, as it may not correspond with the address assigned by your provider (message-syntax-checks '((sender . disabled))).

Mail Chaos Gnus will normally store your mail in a single folder called nnml:mail.misc. However, if you subscribe to multiple mailing lists, you might like to tell Gnus to sort your incoming mail on the basis of customized criteria, to prevent important messages drowning in a flood of spam. The program reads the nnmail-split-methods variable to decide which messages to store where. As Listing 1 shows, the variable expects a list of two-element lists, where the first element specifies the name of the folder where you want Gnus

Listing 1: Mail Splitting in ~/.gnus

(setq nnmail-split-methods
      '(("private" "^\\(To:\\|Cc:\\|CC:\\|Resent:\\).*myname@provider.co.uk")
        ("Linux" "^Subject:.*\\[Linux\\]")
        ("Emacs" "^\\(To:\\|Cc:\\|CC:\\|Resent:\\).*emacslist@anywhere.org")
        ("other" "")))


to store the incoming messages, and the second contains a regular expression ("regexp"). The expression should allow Gnus to recognize the messages to be placed in a particular folder. In our example, the first entry stores messages for your private email address (represented by myname@provider.co.uk) in the private folder, no matter which of the strings To:, Cc:, CC: or Resent: occurs in the header. The next two elements are designed to tidy up your mailing list subscriptions. Gnus will recognize mail from the Linux mailing list by the fact that the [Linux] string occurs in the subject line. The Emacs list cannot use this method, instead relying on the source address emacslist@anywhere.org to identify appropriate messages. The last entry places any mail that does not match a previously defined category in other, and this is why the record does not contain a regular expression.
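The patterns in Listing 1 use Emacs regexp syntax, but the underlying match can be sanity-checked from a shell with grep's extended syntax – a rough translation for illustration, not the exact expression Gnus evaluates:

```shell
# Does a header line match the "private" rule from Listing 1?
header="To: myname@provider.co.uk"
match=$(echo "$header" | grep -cE '^(To:|Cc:|CC:|Resent:).*myname@provider\.co\.uk')
echo "$match"   # 1 = the line matches, 0 = it does not
```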

Up and Running! After completing this preparatory work, it is time to launch Gnus. The program will first attempt to retrieve your mail before moving on to the newsgroups you subscribe to – although you have not actually subscribed to any newsgroups at this point. Gnus is quite helpful here and attempts to load a few

GLOSSARY
News: Short for "Usenet News", this originally entailed using the NNTP protocol to access computer-network-independent newsgroups; it is more commonly known as an Internet service today.
NNTP: The "Netnews Transport Protocol" is the language that news servers and clients (such as Gnus) use to talk to each other.
M-x: Press the [Meta] key (normally the [Alt] or [Esc] key on a PC keyboard) and then [x] to let Emacs know that you are ready to type a command – the gnus command in our example.
Spool: Designates a directory used by a news or mail server to store Usenet articles or email.
Inode: Contains file information for a single file, such as the type, the owner, or who has access privileges. The number of inodes is defined when creating the file system, and thus inodes are a finite resource.
Header: The header of a posting or mail contains administrative details on the sender, the subject, the creation date and the path from the sender to the recipient. You can view the header by pressing t in the *Article* buffer.



Figure 1: The *Group* buffer in topic mode

beginners’ groups, which are defined for you in the gnus-default-subscribed-newsgroups variable. The fastest way to subscribe to a newsgroup is to press S s (or alternatively U). In this case Gnus displays the Group: prompt in the mini-buffer, allowing you to type the name of the required group and press RET to confirm. Simply type part of the group name and then press the [Tab] key to complete the group name automatically (completion is reliable in nearly all cases). If you are not yet sure about the groups you might be interested in, you can type A A instead of S s. The *Group* buffer will then display a list containing all the groups on the news server you stipulated. This may take a while, depending on the server, so Gnus also allows you to press A a and display only groups that match a regular expression. The mini-buffer prompts you with Gnus apropos (regexp):. If you are only interested in the alt groups stored in the tree structure, that is alt.*, you can type ^alt.* at this point, and press RET to confirm. If the group name is insufficient as a search criterion, you can search for keywords in the group description. If you type A d, Gnus will again display the mini-buffer and expect you to enter a regular search expression. When you launch the program, only newsgroups containing unread articles are displayed by default. If you want to research a newsgroup whose articles you have already read, press A u (or L) in the *Group* buffer to have Gnus also display groups without any unread items. You can type A s or l to hide groups with no


Figure 2: A *Summary* buffer

new items. To stop receiving postings from a group, type S t in the *Group* buffer to unsubscribe. To have Gnus completely remove the group, press S k (“kill”) for the group you want to remove. To remove multiple newsgroups, press C-Space on the first group, select additional groups with the arrow keys, and remove them by pressing S w or C-w. Incidentally, don’t worry if you make a mistake here, as A k will reinstate the deleted groups.

Keeping Things Tidy If you like to keep things tidy, despite subscribing to hundreds of groups, you can simply ask Gnus to organize your groups by topic in the *Group* buffer. You will need to define a few topics (T n) before doing so. Gnus displays the mini-buffer, where you can enter a user-definable name for each topic. If you notice a typo or find a more suitable name for the topic, you can type T r later and rename the container. T m will move, and T c will copy, a newsgroup to a topic container – in the latter case, the newsgroup will appear in multiple topic containers. T TAB allows you to indent the selected topic to make it a subtopic of a previous container (Figure 1 shows you how neat that can look). If you mistakenly indent a topic you can press M-TAB to go back.

Reading News Postings Neatly organized news containers make reading news postings twice as enjoyable. To read a group, navigate to it and press the SPACE key. Gnus will read the group and

display two buffers after doing so. The top, *Summary*, buffer contains a list of any postings you have not yet read, and the lower, *Article*, buffer (Figure 3) displays the content of the first article. You can type C-x o or h to toggle between the two buffers. If you select a group name and press RET, Gnus will display only the *Summary* buffer (Figure 2). The gnus-summary-line-format variable is used to influence the appearance of the summary buffer in ~/.gnus; the C-h v gnus-summary-line-format RET command will provide more details. You can also change the appearance of the summary mode line via the gnus-summary-mode-line-format variable. To select the next unread article at the current cursor position in the *Summary* buffer, simply press the SPACE key; pressing RET will open the current article – whether you have read it or not. N (or G N) moves to the next, and P (or G P) to the previous article. Of course, you can also use the cursor keys to move through the list of postings. To move between postings with the same subject, type G C-n for the next, or G C-p for the previous occurrence. You may want to read only a selection of the articles in a group; to do so, press / s to restrict the display to a specific subject; / a will restrict the display to the postings by a specific author. Note that this only restricts how the postings in a newsgroup are displayed; it will not delete articles with different subjects or authors. If a posting that you want to display is longer than the buffer permits, you can use SPACE to scroll the file page by




Figure 3: The *Article* buffer

Figure 4: Composing Mail …

page. Press SPACE again at the end of the article to move to the next unread article in the group. If there are no more articles in the current group, you are automatically moved to the first article in the next group. DEL does the opposite, that is it scrolls back page by page. To read line by line you can press RET instead; M-RET goes back one line. < returns to the top, and > goes to the end of the article. Pressing h once more will switch from the *Article* buffer to the *Summary* buffer, which you can quit by pressing q to return to the *Group* buffer.

Have I got news for you! If, after reading a posting, you suddenly feel the urge to communicate with the outside world, you can compose a new article by pressing the a key in the *Summary* buffer for a group. Gnus is quite helpful at this point and automatically designates the group name as the target for the new article. If you press a in the *Group* buffer, however, you must additionally supply the group name after the Newsgroups: keyword. The [Tab] key again helps simplify your task. Supply a comma-separated list of group names to crosspost to multiple groups. Do not forget to add a Followup-To: header by typing C-c C-f C-f and defining a single (!) target group, to allow any answers to your posting to be routed correctly. Type F to reply to a posting. This causes Gnus to create a new buffer containing the original text; ">" characters are prepended to the original, allowing readers to recognize it as a quote. You can press S o p to forward an article from one group to another. After editing your contribution to the discussion, you can press C-c C-s to send it on its way. If you run out of time before completing your text, you can save it by pressing C-x C-s. Gnus provides a pseudo group called nndrafts:drafts for this purpose. To finish off an article, select it using the arrow keys, edit it using M-x gnus-draft-edit-message, and post it using M-x gnus-draft-send-message.

Mail Follows Suit Of course, what we just discussed in the context of news equally applies to mail –

GLOSSARY
Tree structure: Newsgroup names are comprised of abbreviations for languages and topics, separated by periods, where the expression to the right of a period is subordinate to the expression on its left. This provides for ease of navigation in a hierarchical structure of keywords and allows you to select groups that deal with topics you are interested in. Groups starting with it. are in Italian, for example; it.comp. leads to Italian computer groups, where groups such as it.comp.linux are dedicated to operating systems.


Crossposting: This describes the act of publishing articles with identical content in multiple groups. Crossposting is a technique that you should use sparingly, and never as a way of posting a question round-robin to every group that might be applicable. If you decide that crossposting is appropriate, you should at least ensure that any answers will be posted in a single group, to prevent the threads of the discussion from fraying, and to allow any interested readers to participate.


no matter whether you have pressed m to compose a new message, or R to reply to a message in one of your mail or news groups. If you want to reply to all the recipients of a message, you can press S W. The commands S o m and C-c C-f are available for forwarding messages. Of course Gnus can attach files (Figure 4). To attach a file, press C-c C-a (mml-attach-file) and select the file. Your cursor should be below the following line when you do so:

--text follows this line--

If the file type suggested by Gnus is inappropriate, you can again press the [Tab] key to automagically display a list of possible file types (such as text/html for an HTML attachment), and select an appropriate type yourself. Finally, Gnus will prompt you to type a short description in order to let the recipient know what the attached file actually contains. Receiving mail attachments is a slightly more relaxed procedure: Gnus simply displays the attachment at the end of the message (Figure 5). K o will allow you to store the attachments individually in a directory that you specify yourself.

Best Reads Time is short, so you probably won’t want to wade through newsgroups and mailing lists with hundreds of new entries a day. Many mail clients or newsreaders use so-called killfiles to filter the deluge of new messages preventing messages from



authors you are not interested in from even reaching you. But Gnus provides a far more useful mechanism, allowing you to grade messages according to various criteria (for example, author, or a keyword in the subject line). Gnus will then use the grade to decide what to do with each message – delete it, mark it as read (which will hide the message next time you open the group), or, in the case of particularly interesting messages, highlight the message and move it up in the ranks of the *Summary* buffer. To get things going, every mail or news message is assigned the value defined in the gnus-summary-default-score variable (normally 0). If you particularly enjoy "Peter Miller"'s postings in a specific news group, you can increase Peter's score, in order to highlight anything he posts in future and place any messages from Peter near the top of the *Summary* buffer. Unfortunately, Peter has a regrettable tendency to get involved in discussions on old floppy drives, a topic that does not exactly fire your imagination. The good news is that you can downgrade messages on this subject by searching for a specific word in their subject lines. You can assign messages that contain the keyword a negative score that outweighs the positive score you assigned to Peter. This will allow Gnus to mark Peter's messages as read. In practical terms this means selecting a message and pressing I ("increase") or L ("lower") to adjust the score. Gnus will display the mini-buffer and ask you to specify the header entry that you want to grade in this way:

Increase header (asbhirxeldft?):

The options for your answer are defined as follows: • a applies to the author’s name. • s means that you want to grade the subject line in the current message. • x means you are grading the Xref header which contains all the groups this message was posted to. • If you select r, Gnus will evaluate the References header which contains the message IDs for the message that the current message refers to. • l applies to the number of lines.


• i refers to the message ID, that is the unique identifier for the message.
• f applies the score to the author, just like a, but additionally specifies that the rule should be applied to any “follow ups” to this author’s postings.

Gnus will then prompt you to decide how future messages with content similar to the selected header should be processed:

Increase header 'subject' with match type (sefr?):

You can specify the following:
• e requires an exact match,
• s means only a substring of the specified search string need occur,
• f tells Gnus to remove any whitespace, punctuation etc. before comparing,
• r will use a regular expression for comparison. The regexp still needs to be defined on the basis of the current string.

If the header you are using as a reference point (such as the message ID) contains numbers, you can also perform a numeric comparison: is the compared value smaller than (<), equal to (=), or larger than (>) the value defined in the header for the current message?

You can then go on to specify whether the scores defined by the new rule should be applied temporarily (t), permanently (p), or immediately (i). If you choose the first of these options, Gnus will drop the rule after a certain period, which you can specify in the gnus-score-expiry-days variable – this defaults to seven days. This option is particularly useful for subject lines, as discussions on a specific subject do not normally last for more than a few days. In contrast, Gnus will never drop a permanent scoring rule. Gnus follows the “out of sight, out of mind” principle for rules with the i flag, applying them without saving them to a file first.

To assign a positive score to Peter Miller, you would first type I after finding a particularly good posting by him, and then answer the Gnus prompts by typing a e p. After pressing p, the mini-buffer appears, as Gnus needs to know what you want to compare: in our case it happens to be the content of the From: line, which contains his name.

To save yourself from reading Peter’s meanderings on the subject of floppy drives ancient and modern, now find an appropriate thread and type L s s p floppy RET. This means that any messages containing the word “floppy” in the subject line will be permanently assigned a negative score.

You can now press V S to discover the score assigned to the current message. In contrast, V t will show you the rules that led to the score. V R tells Gnus to reapply your scoring rules to the current *Summary* buffer.

Figure 5: … and reading it

And that’s not all, folks!

As a true descendant of Emacs, of course, Gnus will never be short on functionality: it can repair damaged postings, display HTML files, automatically adjust scoring rules to match your reading habits, and more. It is worth spending some time reading the info file included with the tool. The Gnus homepage [1] and MyGnus [2] are also recommended as good sources of information. If you still need more help, you might like to turn to the experts in the gnu.emacs.gnus newsgroup. And if this is all getting to be too much for you, you can still console yourself with the thought that pressing q will terminate Gnus. ■

INFO
[1] Gnus homepage: http://www.gnus.org/
[2] MyGnus: http://my.gnus.org/



SYSADMIN

Charly’s column

The Sysadmin’s Daily Grind: DHCP-Server watch

A Full Tank

DHCP is a clever invention. You assign a pool of addresses to the DHCP server, which uses them to serve your clients. How can the admin user find out how many, and which, addresses have already been assigned? It doesn’t bear thinking about what might happen if the pool ran dry. BY CHARLY KÜHNAST

The DHCP server’s configuration file designates address pools from which your clients are assigned IP addresses. In the case of the popular ISC DHCP server, the file contains entries like the following:

subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.50 10.0.0.99;
  option routers 10.0.0.254;
}

This means that addresses 10.0.0.50 through 10.0.0.99 will be assigned. If all 50 of these addresses are already in use and user number 51 powers up her computer, she is in trouble – and you will be too, shortly, as soon as she gets on the phone to you. How can you avoid upsets of this kind?
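The pool size is easy to double-check, since both boundaries of the range statement are handed out:

```shell
# 10.0.0.50 through 10.0.0.99, inclusive at both ends:
pool_size=$(( 99 - 50 + 1 ))
echo "$pool_size"   # 50 addresses in the pool
```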

The first place to look is the dhcpd.leases file, which contains an entry like the following for each user:

lease 10.0.0.96 {
  starts 1 2002/10/07 10:42:44;
  ends 1 2002/10/07 12:42:44;
  binding state active;
  next binding state free;
  hardware ethernet 00:04:76:9f:b0:02;
  uid "\001\000\004v\237\260\002";
  client-hostname "funghi";
}

The “binding state active” status tells me that this lease is currently in use; in other words, the IP address 10.0.0.96 is unavailable at present. In larger networks, manually parsing the dhcpd.leases file is far too time-consuming. This is where a reporting tool like Reportdhcp [1] can make itself useful.
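The manual check described above is easy to script; the sample data below is a trimmed stand-in for a real dhcpd.leases file, so only the counting logic carries over:

```shell
# Create a two-lease sample and count the active ones, as an admin might
# do against the real leases file.
cat > /tmp/leases.sample <<'EOF'
lease 10.0.0.96 {
  binding state active;
}
lease 10.0.0.97 {
  binding state free;
}
EOF
active=$(grep -c 'binding state active' /tmp/leases.sample)
echo "$active active lease(s) in the sample"
```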

Reportdhcp Helps The small Perl script is fundamentally ready to run – you simply need to modify the paths in reportdhcp.pl:

my $dhcpfile  = "/var/dhcp/dhcpd.leases";
my $dhcpdconf = "/etc/dhcpd.conf";
my $CGI       = "/cgi-bin";

After customizing the reportdhcp.pl script, you simply move it to the cgi-bin directory on your web server – ideally this will be the machine that your DHCP server is running on. If not, you can always use scp or rsync-over-ssh to transfer dhcpd.conf and dhcpd.leases to your web server. After installation, Reportdhcp will parse these files and report its findings in HTML format. Now you can tell at a glance how many leases are available in each network, and how many of these are currently “active” (see Figure 1). Additionally, the tool can sort the leases it finds by IP address, age or hostname (Figure 2), and even provides a basic search tool. You can now tell at a glance that the water level in the (address) pool is quite high enough, thank you. So it’s off home for the pool attendant. ■

INFO
[1] Reportdhcp: http://www.omar.org/opensource/reportdhcp/

Figure 1: Reportdhcp indicates the total number of leases and how many of them are “active”

THE AUTHOR


Figure 2: Reportdhcp additionally sorts the leases it discovers by IP address, age, or name


Charly Kühnast is a Unix System Manager at a public datacenter in Moers, near Germany’s famous River Rhine. His tasks include ensuring firewall security and availability and taking care of the DMZ (demilitarized zone).


Diskless Clients

KNOW HOW

Linux Based Diskless Clients – A Step by Step Guide

Jump Starting the Network

Linux based diskless clients offer the same potential as fully-fledged traditional workstations, but with far lower hardware expense, less noise, and less administrative effort. Standard PC components make things even cheaper. BY DIRK VON SUCHODOLETZ

IT environments that do not tie up too many of your IT staff, or take too large a proportion of your IT budget, are understandably becoming more and more popular. Thin clients provide a useful approach towards standardization and automation. This kind of computer can help you cut costs by using minimal hardware. Using the Linux operating system also provides these platforms with a solid and flexible software basis at extremely low cost. Clients of this type do not depend on specialized hardware, but can be implemented using traditional PC hardware. Depending on the application, the requirements may be so minimal that you might even be able to put your old

Pentium class computers back to work. The pre-condition for high-performance operation of diskless clients – a fast Ethernet installation – can normally be taken as given. This article provides the know-how and introduces the programs you need to run Linux Diskless Clients (LDC). Some basic information is available from [6], which provides information, examples, and additional material. The mathematics faculty of the University of Göttingen and the Remigianum high school in Borken, Germany, provide practical implementations of the material in this article, including a seminar room, desktops for the teaching staff, and a pool of student desktops.

Protocols and Technologies Diskless clients normally boot from a ROM. There are two advantages to this. For one thing, major hardware component failures are extremely unlikely; for another, administrative tasks are performed exclusively on the server. The boot ROM implementation Etherboot [4] is also a great example of a GPL tool. Network hardware manufacturers use the Preboot Execution Environment (PXE), one of Intel’s Wired for Management (WfM) components, to harmonize LAN-based boot software. Both use the Dynamic Host Configuration Protocol (DHCP) to provide a basic IP configuration that controls the way the




Linux based client boots. The Trivial File Transfer Protocol (TFTP) is typically used to transport the operating system kernel, although Etherboot can alternatively use the Network File System (NFS). DHCP, TFTP, and NFS are all based on UDP. In addition to the network file system, diskless clients need a writable file system that can store data dynamically. tmpfs was chosen for this task, as it dynamically changes its size to reflect the amount of data it is required to store. The ancient RAM disk might provide an alternative. A transparent file system such as the SunOS 4.x Translucent File System (TFS) would be ideal, but unfortunately this is not available for Linux, although Bernhard M. Wiedermann [8] is working on a promising Translucent FS for Linux. This would greatly simplify the file system structure for the configuration files.

The boot kernel, which is transferred across the network, requires a special configuration in order to boot. Etherboot provides a so-called kernel tagging tool. PXE uses Etherboot’s capabilities or cooperates with the Syslinux bootloader to provide an alternative strategy (see the “Syslinux and PXE” section).
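As a sketch of the writable tmpfs area mentioned above, a diskless client’s fstab might carry a line like this (mount point and size limit are illustrative, not taken from the article):

```
none  /tmp  tmpfs  size=32m,mode=1777  0 0
```

tmpfs grows and shrinks with its contents up to the size= ceiling, which is what makes it attractive for RAM-only clients.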

The Client Boot Software When selecting boot software, the idea is to avoid configurations that require specialized hardware. The software should be available immediately after switching on a machine without any user interaction. Special features, such as a boot menu that allows the user to boot from a floppy disk if required, are also conceivable. Chaining is also possible. If the workstation is unable to boot from the network, the boot software will search other devices for boot blocks. All of these approaches should behave in a uniform manner, from the server’s viewpoint, to restrict customization to a minimum. This applies equally to


Figure 1: Boot sequence for a Linux diskless client showing the Etherboot and PXE approaches

62

February 2003

www.linux-magazine.com

various older kernels for Etherboot, PXE/Syslinux (discussed below), and commercial boot ROMs.

Etherboot – the Free Boot ROM Package The Etherboot package [4] provides drivers for nearly every popular network adapter, including gigabit and wireless LAN adapters. It can interact with other boot loaders, dual boot configurations and boot menus. Since Etherboot is available under the GPL, you can implement any number of installations without incurring any costs. Boot images from the Etherboot package are so compact that they can easily be embedded in standard EPROMs and flash ROMs for NICs. However, you should be aware that any additional functionality, such as using NFS as the kernel transfer protocol, will increase the size and possibly overflow the capacity of older EPROMs, especially in ISA network adapters. As an alternative, the code can be implemented as a BIOS extension on the motherboard. (DOS) programs such as “cbrom.exe” for Phoenix/Award and “amibcp.exe” for AMI BIOSs perform the modification (see the “Special Tools for BIOS” insert). The boot images occupy between 8 and 64 Kbytes. The code includes NIC drivers and the protocols DHCP and TFTP, or NFS. Thanks to the open source code, the prospects of being able to customize it to reflect specialized network environments are good. The fact that Etherboot supports alternatively booting clients from hard disk, floppy or CD-ROM drives is an interesting feature that provides a backup solution in case the boot server fails. Grub (the Grand Unified Bootloader, which provides an alternative to Lilo) also has an Etherboot module. The fact that Etherboot can both create executable DOS files and write directly to the boot sector is useful for testing and debugging.

Etherboot Configuration Options Etherboot stores its options in the self-documenting “Config” file in the source directory. The exhaustive documentation contains further details; however, you


Diskless Clients

will need to download it separately from the development site [4]. The type and quantity of options you choose will affect the size of the ROM image created. EPROMs have a capacity of between 128 and 512 Kbits; BIOS flash ROMs need enough free space. The administrator also uses the configuration file to specify how to load the kernel image. The current Etherboot implementation can use NFS instead of TFTP, and NFS is needed later for the clients’ root file systems. Doing without an additional server-side service can also improve system security. On the other hand, NFS support will add a few Kbytes to the size of the ROM code. If you want to change the boot menu (boot from network or local device), you can edit “etherboot.h” to do so. The file is stored in the same directory as the configuration file. If you want Etherboot to issue a different vendor code identifier string than the default, “Etherboot”, to

KNOW HOW

Figure 2: Etherboot starts from a diskette image

the DHCP server, you will need to edit “main.c”. Both options need to be enabled in the central configuration file.

Compiling the ROM Code To create a “*.rom” image, you simply invoke “make” in the source directory. The images are placed in the “bin32” subdirectory. It makes sense to test the boot images first by booting from a floppy. Invoking “make bin32/name_of_network_adapter.fd0” will create a ROM image with an additional floppy boot header for “/dev/fd0”.

Attentive readers may already have noticed from the “make” output that you can also use “cat” to transfer the ROM image to a floppy. You can adopt this method to avoid the more roundabout “make” syntax. A similar method is used to create PXE images, which the client boot software will load later.

Mknbi Builds Boot Images The “mknbi-linux” perl script will create boot images for other operating systems – DOS for example – in addition to

Special Tools for BIOS
Special tools are used to add the Etherboot code as an extension ROM to the motherboard’s BIOS. These (DOS) tools are actually intended for mainboard manufacturers, who need to add firmware for IDE or SCSI controllers, anti-virus solutions, and so on to the standard BIOS. The Etherboot extension also falls into this category. However, you should exercise caution, as incorrect BIOS contents will cause a total failure of your motherboard. Care and a spare flash chip containing the original BIOS will help you avoid these pitfalls. The tools described below access the flash chip directly, and this is why you need to store an image of the BIOS in a file – normal flash programs can be used both to create and to restore an image.

Award and Phoenix BIOS
Type “cbrom.exe” without any parameters to display the available options. “cbrom bios.bin /d” displays the free space in the ROM file –

Figure 3: Cbrom analyses the BIOS of an ABIT BP6 motherboard

that is, the space available for your own programs. The BIOS in the flash ROM is typically compressed, and this is why cbrom compresses its own code too. Etherboot needs between 8 and 20 Kbytes of free storage space. If this is not available, “cbrom bios.bin /[pci|ncr|logo|isa] release” will remove any BIOS components that are not absolutely necessary, such as manufacturers’ logos or the Symbios/NCR SCSI code, which a diskless system will have no use for. The “cbrom bios.bin /[pci|isa] bootimg.rom [D000:0]” command adds the compiled Etherboot code to the BIOS. “bootimg.rom” is the code that you would normally burn onto an EPROM. The “[pci|isa]” option depends on the NIC. Cbrom supplies a memory address for ISA adapters to allow the code to be copied to this address during the boot sequence.

AMI BIOS
In contrast to “cbrom.exe”, the AMI tool “amibcp.exe” is menu driven. You launch it without any command line parameters and load the BIOS file by running the first menu item, “Load BIOS from Disk File”. “Edit BIOS Modules” is used to edit the BIOS modules. The free space available for extensions is shown at the bottom of your screen. You can press the [Ins] key to add extension modules, preferably using the “compressed” option. [Esc] quits the editing area, and “Save BIOS File to Disk” writes the modified BIOS to a file.




Listing 1: Excerpt from “Config”

# For prompting ...
CFLAGS32+= -DMOTD -DIMAGE_MENU
# [...]
# Change download protocol to NFS, default is TFTP
CFLAGS32+= -DDOWNLOAD_PROTO_NFS
# For prompting and default on timeout
CFLAGS32+= -DASK_BOOT=3 -DANS_DEFAULT=ANS_NETWORK
# [...]
# Enabling this makes the boot ROM require a Vendor Class
# Identifier of "Etherboot" in the Vendor Encapsulated Options
CFLAGS32+= -DREQUIRE_VCI_ETHERBOOT

Linux. Type “man mknbi” for further details. Kernel tagging is enabled by typing “mknbi -o Bootimage -d /nfsroot/dxs -ip rom kernelimage [ramdisk image]”, for example. The options specified in this syntax are the kernel file, the output file (the network bootable kernel), and the NFS root. The last option tells the kernel to retrieve its IP configuration from the boot ROM’s DHCP/BOOTP request. The project discussed in this article does not need this option, however, as the DHCP request is initiated in the INITTAR environment [9].

Syslinux and PXE PXE is another mechanism that allows you to boot diskless machines from the network. Some NICs and compact motherboards may already have PXE implementations. PXE Linux by H. Peter Anvin is distributed with the Syslinux package [7]. Syslinux provides a kind of enhanced “loadlin” (a second stage boot loader), which cooperates with PXE and uses TFTP to load a Linux kernel with options and a RAM disk. It would be equally feasible to use boot sector images for other operating systems, thus providing a network controlled multiboot environment. PXE Linux retrieves kernel boot information from a file in the “pxelinux.cfg/” subdirectory on the server, where the filename is the hexadecimal representation of the client’s IP address. If this file does not exist, PXE Linux truncates the name character by character, and compares the name again. If this also fails, a default configuration file is used. PXE can also cooperate with Etherboot. This leads to a staggered boot sequence: PXE wakes up first and loads a customized Etherboot configuration, which proceeds with the boot sequence. This combination is particularly useful if PXE is available and you want to avoid invasive hardware and software configurations. A kernel customized to support Etherboot is all you need to avoid having to provide boot kernels, although this does assume a carefully configured DHCP server that will supply the right data for the PXE request and the subsequent Etherboot request. You might like to refer to the howto at [2].
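For illustration, the character-by-character search order can be reproduced in the shell. This is a minimal sketch: the helper function is ours, not part of PXE Linux, and the IP address is the dxs02 address from Listing 2.

```shell
# Sketch: the config file names PXE Linux tries, in order,
# for a client with a given IP address (helper name is ours)
ip_to_hex() {
  # convert dotted-quad notation to the 8-digit hex form
  printf '%02X%02X%02X%02X' $(echo "$1" | tr '.' ' ')
}
name=$(ip_to_hex 134.76.60.64)
while [ -n "$name" ]; do
  echo "pxelinux.cfg/$name"    # full name first, then truncated forms
  name=${name%?}               # drop one trailing character
done
echo "pxelinux.cfg/default"    # last resort
```

For 134.76.60.64 the first candidate is pxelinux.cfg/864C3C40, and the list ends with pxelinux.cfg/default.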

DHCP – the Central Configuration Tool The Dynamic Host Configuration Protocol (DHCP) is the network protocol

used in our case to supply both client system configurations, including hostnames, IP addresses, netmasks and the gateway address, and server IPs (time, swap, NIS, and print servers). The menu and motd options also allow the DHCP server to supply parameters for the Etherboot boot ROM software. The administrator should take care to define a meaningful structure for the configuration file, “/etc/dhcpd.conf”, that the DHCP daemon uses. In addition to a “subnet” statement, the “group” option and categorizing by subnet can help organize configuration blocks with the same parameters. Listing 2 shows an example of a typical configuration. DHCP allows you to supply so-called vendor (that is, additional) options. The code area 128 through 255 is reserved for options of this kind. Listing 3 makes heavy use of this potential. The following variable types are available: “string”, “integer”, “boolean”, “text”, “ip-number”, and all of these types can be combined to build arrays. Admins who suspect that they may need a large number of configuration files might want to increase the packet size of the BOOTP reply packet from the default (572 bytes) to 1024 bytes: “dhcp-max-message-size 1024”. These options are set at the top of the server daemon

Listing 2: “dhcpd.conf”

option domain-name "math.uni-goettingen.de goe.net gwdg.de";
filename "/nfsroot/bootimg";
use-host-decl-names on;
default-lease-time 72000;
max-lease-time 144000;
subnet 134.76.60.0 netmask 255.255.252.0 {
  option domain-name-servers 134.76.60.21, 134.76.60.100;
  option ntp-servers ntps1.gwdg.de, ntps2.gwdg.de;
  option font-servers dionysos.stud.uni-goettingen.de;
  option x-display-manager s4, s5, s6, s9, s10, s11, s12;
  option routers 134.76.63.254;
  option broadcast-address 134.76.63.255;
  class "PXEClient:" {
    match if substring (option vendor-class-identifier, 0, 10) = "PXEClient:";
    filename "/nfsroot/dxs/boot/3c905c-tpo.pxe";
  }
  group {
    option lpr-servers 134.76.60.2;
    host dxs02 {
      hardware ethernet 00:00:1C:D2:87:DF;
      fixed-address 134.76.60.64;
    }
    #...


and client (“dhclient”) configuration files. Listing 3 is intended as an example but by no means comprises a complete list of options. The option “128” defines a magic packet which enables menu option evaluation for Etherboot. Option “160” specifies default values for the menu options, that is the field to select after a timeout. “192” and following define the appearance of the menu that Etherboot displays after talking to the DHCP server. Options “223” and following (these are arbitrary numbers that are not in use) define various variables for diskless client configuration. Our example shows how to specify whether or not a service should be started. This provides the flexibility to distinguish between clients in heterogeneous environments. Vendor code identifiers are defined as fixed DHCP options: “vendor-class-identifier” to allow the server to identify a client, and “vendor-encapsulated-options” to allow clients to identify the server. This allows the server to differentiate between clients and return different values for a single option. In fact, this is essential if you have a boot sequence that uses both PXE and Etherboot, as PXE will receive an identical IP configuration but will load the Etherboot PXE image instead of the kernel image. The “class” statement shown in our example helps to allow this distinction to be made. If the code recognizes a PXE implementation on a special 3COM adapter, it will modify the content of the DHCP “file” field.

Client Side Configuration Options The “dhclient” command is used for client side configuration tasks, that is, the dynamic assignment of an IP address under Linux. Obviously, the client configuration will need to match your DHCP server configuration. “dhclient-script” is used to enter, or apply, the appropriate settings. The excerpts from “dhclient-script” (Listing 5) indicate how the clients work with DHCP. Variables are interpreted and used for configuration tasks, or other files are written. You could compare this with the boot scripts for System V init. Of course, Listing 5 is a bash script, but you could just as easily use another scripting language. The important thing is to respect the flow direction of the information path: DHCP server -> DHCP client -> script -> configuration file/service. Any additions you make or options you add or remove need to keep to this path.

Organization and File Systems

Installing a client file system that has to do without a storage device of its own is obviously going to be different from a normal hard disk installation, particularly if it is the root file system. A single, uniform directory is used to serve a large number of machines with different functions and components. This has a positive effect on the server’s caching behavior. If the server machine uses the same operating system on the same processor architecture, you can use some areas of the file system for your

Listing 3: DHCP-Vendor-Options

# -- lot of information to be transferred --
dhcp-max-message-size 1024;
# -- user defined options --
option o128 code 128 = string;
option o129 code 129 = string;
option menudflts code 160 = string;
option motdline1 code 184 = string;
option menuline1 code 192 = string;
option menuline2 code 193 = string;
option menuline3 code 194 = string;
option start-x code 223 = string;
option start-snmp code 224 = string;
option start-sshd code 225 = string;
option start-xdmcp code 226 = string;
option start-cron code 227 = string;


Table 1: Assignment Criteria

                static         dynamic
local           /etc, /boot    /tmp, /var/run, /var/log
distributable   /opt, /usr     /var/cache/texmf, /var/spool

clients. This may even allow you to centralize software management on the server. However, if you do use licensed software, you will need to ensure that the licenses reflect the number of thin clients that access the software. The server does not allow client access to those parts of its file system that are reserved for configuration files, or temporary directories with sockets for XFree86, MySQL and others. This also applies to areas where security restrictions apply. The file system hierarchy standards for free Unix operating systems organize the file system tree along the lines of the criteria shown in Table 1. This allows the NFS export options to be depicted in a matrix, like the one shown in Table 2. The client stores dynamic data, such as configuration files, logfiles, and sockets in its memory (TEMPFS, RAMFS or a RAM disk).

Some Things Don’t Belong On the RAM Disk As you cannot simply move everything onto the RAM disk, some fine tuning in the form of individual shares and symlinks is required. The standard application directories, “/opt” and “/usr”, can normally be shared for read-only access without any difficulty. The special “/dev” directory need not be exported if you use a device file system daemon. Otherwise you would need to

Listing 4: “dhclient.conf”

# /etc/dhclient.conf
send dhcp-max-message-size 1024;
send dhcp-lease-time 3600;
request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, host-name;
require subnet-mask, domain-name-servers;
timeout 40;
retry 40;
reboot 10;
select-timeout 5;
initial-interval 2;
script "/sbin/dhclient-script";




export it to the client’s RAM disk, just like “/etc” and “/var”. In this scenario the directories exported by NFS are as shown in Listing 6.

Setting Up the File System Assuming the same processor architecture and a similar software base for both client and server, the next step is to extract the root file system for the clients, preferably using a shell script. The script first creates the complete directory tree before copying parts of the server file system to it, or mounting them during the initialization phase. Additionally, the script either copies or links any programs and libraries in the “/lib”, “/bin”, and “/sbin” directories that the clients need to access. If the client root file system is stored on a separate server partition, the files are copied; in all other cases hard links are created (soft links cannot be used transparently via NFS). Hard links also provoke desirable behavior by ensuring that changes made to the server file system are immediately reflected on the clients. However, be careful when using hard links, as updates may affect inode data. In most cases you will need to ensure that special machine specific directories, such as “/lib/modules”, are not simply copied, but populated with the data your thin clients require. This also applies to some files in “/var” and “/etc”: machine and IP specific files, such as “hosts”, “resolv.conf”, “HOSTNAME”, “fstab”, and so on, should be placed on the RAM disk. But it would be preferable for the clients to use NFS to bind other areas that are the same for every client, such as “/etc/opt” with its copious data, and the root file system. Of course, your decision as to how files and directories are best accessed

Table 2: Possible NFS Exports

Directories    Complete   Part   None
/opt              *
/usr              *
/clientroot       *
/etc                        *
/tmp                        *
/var                        *
/dev                               *

must be based on individual requirements. Reducing RAM disk usage will mean more administrative tasks when allocating storage. In our example the “/etc” configuration directory has been split into a static section, “/etc.s”, and a RAM disk section “/RAM/etc”. The former is exported by the server for read-only access. The remainder is already incorporated in INITTAR and is mirrored on “/etc” by a link. It contains links to “/etc.s” for all the static elements. A translucent file system would reduce the effort involved in creating crosslinks [8].
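The extraction approach described under “Setting Up the File System” can be sketched in a few lines of shell. This is a rough illustration only: the target path and the short file list are ours, and the article’s real script creates a far more complete tree.

```shell
# Sketch: populate a client root tree with hard links, falling back
# to copies when the link crosses a file system boundary
# (ROOT and the file list are illustrative)
ROOT=${ROOT:-/tmp/nfsroot/dxs}
mkdir -p "$ROOT/bin" "$ROOT/sbin" "$ROOT/lib" "$ROOT/etc" "$ROOT/tmp"
for f in /bin/ls /bin/cat; do
  # hard links make server-side changes visible to the clients at once
  ln -f "$f" "$ROOT$f" 2>/dev/null || cp -p "$f" "$ROOT$f"
done
```

The `|| cp -p` fallback matches the rule in the text: copy when the client root lives on a separate partition, link otherwise.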

INITTAR The Init-RD (Initial RAM Disk) is not only impractical and inflexible, but has also been unnecessary since TEMPFS was introduced. This is how INITTAR works:
• Mount an empty TEMPFS as the root file system.
• Expand a tar.gz image that is passed to the kernel just like an Init-RD.
• Launch the replacement “init” script.
INITTAR is not restricted to a fixed size, and does not need to be formatted with a file system; it is quite sufficient to create a gzipped tar file. You will, however,

need to patch and recompile the kernel for TEMPFS. [9]
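The image itself is nothing more than a gzipped tar of a small directory tree. A minimal sketch of building one follows; the stub init script and directory names are illustrative, not the project’s actual contents.

```shell
# Sketch: build an INITTAR-style image from a directory tree
work=$(mktemp -d)
mkdir -p "$work/etc" "$work/dev"
# stub replacement init; the real script loads the NIC module,
# runs dhclient and performs pivot_root
printf '#!/bin/sh\n# replacement init stub\n' > "$work/init"
chmod +x "$work/init"
tar czf /tmp/inittar.tar.gz -C "$work" .
tar tzf /tmp/inittar.tar.gz    # list the image contents
```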

Two-Step Boot Sequence The Linux diskless client described in this article is initialized along the lines of more modern Linux distributions. The minimal RAM disk environment described previously is launched before mounting the root file system. It performs configuration tasks, such as loading the kernel modules for the file system or a RAID array. The two-step boot sequence also simplifies troubleshooting the kernel. Only those elements required to start the system are linked into the kernel; everything else is loaded when required. Additionally, instead of running “init”, the shell script previously used to find and load the NIC kernel module is launched. “dhclient” retrieves the IP and other configuration files. “dhclient-script” takes care of the IP configuration, mounts the file system and writes the configuration files, before passing control back to “init”, which mounts the RAM disk file system as “/RAM”, and reinstates the mounted root file system as the root directory, “/”.

Listing 5: “dhclient-script”

# dhclient-script
#...
# set up snmpd configuration
test -n "$new_start_snmp" && \
  sed -e "s,NETADDR/MASK,$netaddr/$new_subnet_mask,g" \
    /etc/ucdsnmpd.conf.default >/etc/ucdsnmpd.conf
#...
sed -e "s,KEYTABLE=.*,KEYTABLE=\"$LANG\",i" \
    -e "s,START_GPM=.*,START_GPM=$start_gpm,i" \
    -e "s,GPM_.*,GPM_PARAM=\"-t $MP -m $MD\",i" \
    -e "s,START_X=.*,START_X=\"$new_start_x\",i" \
    -e "s,START_SNMP.*,START_SNMPD=$new_start_snmp,i" \
    -e "s,START_SSHD.*,START_SSHD=$new_start_sshd,i" \
    -e "s,DISPLAYM.*,DISPLAYMANAGER=$new_start_xdmcp,i" \
    -e "s,DEFAULT_WM.*,DEFAULT_WM=\"$defaultwm\",i" \
    -e "s,CRON.*,CRON=\"$new_start_cron\",i" \
    -e "s,START_RWHOD.*,START_RWHOD=$new_start_rwhod,i" \
    -e "s,START_LPD.*,START_LPD=\"$start_lpd\",i" \
    -e "s,START_CUPS.*,START_CUPS=\"$start_cups\",i" \
    -e "s,START_YPBIND.*,START_YPBIND=\"$start_nis\",i" \
    -e "s,START_XNTPD.*,START_XNTPD=\"$start_xntp\",i" \
    -e "s,XNTPD_I.*,XNTPD_INITIAL_NTPDATE=\"$initntp\",i" \
    -e "s,VMWARE.*,VMWARE=\"$new_start_vmkernel\",i" \
    /etc/rc.config.default | grep -v "#" >/etc/rc.config
#...



Inittab and Client Operating Mode Dynamically configuring “/etc/inittab” allows you to select the client’s operating mode, for example GUI mode. This approach is more flexible than using the display manager on the desktop. DHCP data can be used to select and configure a window manager. A shell script then launches additional programs. Traditional GUI logins with a host chooser use “init” and “/etc/inittab” to launch the X server. In the case of kiosk operation it would be preferable to start the machine directly, without the display manager, using an “init” controlled “xinit” command.
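As an illustration of the kiosk case, a single inittab entry could start X via “xinit” instead of a display manager. The entry id, runlevel and session script below are hypothetical, not taken from the article’s configuration:

```
# hypothetical /etc/inittab entry for kiosk operation
x1:5:respawn:/usr/X11R6/bin/xinit /etc/X11/kiosk-session -- :0
```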

Setting Up the Hardware It is important for the project to support various client hardware configurations. Two approaches are possible to avoid having to create a customized root file system for every individual machine. One of them involves defining DHCP options to specify the graphics adapter, the screen resolution, the mouse, and any kernel modules that need to be loaded. The other approach uses the automatic hardware detection facilities provided by Linux distributions, such as “hwinfo” in the case of SuSE. Both of these approaches can be scripted. Host and device specific files should be avoided, if at all possible. They tend to get forgotten when changes

are made or updates applied, and are a constant source of errors.

Tricky Troubleshooting On a diskless system, the network configuration must work before you can actually start logging anything on the client, and that makes troubleshooting more tricky than usual. The server logfile should contain a few hints. Some troubleshooting tasks can be performed remotely – particularly in the case of XFree86 configurations – if the secure shell daemon “sshd” is running on the client machine. To debug the initial boot steps, your initialization and configuration scripts should be able to issue debugging messages that indicate the cause of any failures and offer possible remedies. A logfile that traces the boot sequence, and the hardware and software setup, can provide additional information.

Conclusion Diskless clients require more initial setup work than traditional workstations, but even a small number of identical clients will more than compensate for it. The solution presented in this article scales particularly well. The delay for loading the client configuration and typical client response times are shorter than comparable figures for stand-alone workstations – although this will depend to some extent on your server performance and network bandwidth. The open software architecture is based entirely on established standard protocols and applications, which are available for a variety of platforms, and well-documented programming languages, such as bash and perl. This allows the project to interface with other

Listing 6: Possible NFS Export Configuration

# Base directory (root file system for client)
/nfsroot/dxs 10.10.156.0/255.255.252.0(ro,no_root_squash)
# Extension of /tmp for specific users (lots of space!)
/tmp/dxs 10.10.156.0/255.255.252.0(rw,no_root_squash)
# Application directories
/usr 10.10.156.0/255.255.252.0(ro)
/opt 10.10.156.0/255.255.252.0(ro)
# Additional areas for LaTeX users
/var/lib/texmf 10.10.156.0/255.255.252.0(ro)
/var/cache/fonts 10.10.156.0/255.255.252.0(rw)

platforms. For example, you can use a Citrix Metaframe client to access a Windows server and provide seamless desktop integration. There are several Java runtime environments for Linux, providing access to a number of platform independent programs such as SAP R/3 frontends. If required, the desktop provided by the architecture discussed in this article can run entirely in the background. The result is a cheap, license-free, and scalable solution for commercial platforms such as kiosk or point-of-sale products. IT security also profits from diskless clients. Large sections of the file system are mounted with read-only access, which makes manipulating programs and libraries extremely difficult. It is also harder for malevolent users to hide their own programs and scripts. The focus of securing and configuring clients moves from the client to the server, a terrain which is far easier for the admin to monitor. ■

INFO
[1] Droms, Ralph; Lemon, Ted: “The DHCP Handbook”; New Riders Publishing, 1999
[2] How to set up a PXE 2.x server on Linux: http://clic.mandrakesoft.com/documentation/pxe/
[3] Rom-o-matic.net: http://www.rom-o-matic.net
[4] Etherboot project: http://etherboot.sourceforge.net and http://www.etherboot.org
[5] Linux Terminal Server Project: http://www.ltsp.org
[6] Göttinger Project Linux Diskless Clients: http://ldc.goe.net
[7] Syslinux: http://syslinux.zytor.com
[8] Concept of Translucency: http://www.informatik.hu-berlin.de/~wiedeman/development/translucency-statement.txt
[9] INITTAR: http://www.escape.de/users/outback/linux/index_en.html

Finally, the traditional “init” is launched from the root file system to complete the initialization phase. And this is the point where the traditional boot sequence, with its runlevel system (Sys-V-Init below the “/etc/init.d” directory), kicks in.

THE AUTHOR


Dirk von Suchodoletz has been working with Linux since late 1993, but did not realize the potential Linux offered until years later. :-) Working on the X11 graphical interface and the Linux kernel at the University of Goettingen, the author has designed several Linux Diskless Client (LDC) solutions.



SYSADMIN

User tools

useradd, usermod, userdel

Super User Many paths will lead you to a new user account on your Linux computer – you can either edit all those configuration files manually or use a graphical tool provided by your distribution. BY HEIKE JURZIK

In this issue of User Tools we will be taking a walk on the admin side of life with Linux, providing background information, tricks and tools for command-line user administration.

Back to the roots… You will need root privileges for all the commands covered in this issue. Before we investigate individual user management programs, let us first take a look at the various configuration files and discuss their syntax. Fundamentally, the following steps will lead to the creation of a new account:
• specify the username, UID, and group membership
• enter the data in the appropriate files below /etc
• set the password for the new account
• create a home directory for the user
• optional: copy configuration files from /etc/skel to the new home directory
• use chown and chgrp to assign privileges for the new home directory
To create a new user manually, you first need to edit /etc/passwd, the central file for user management on UNIX; you can use any editor to do so. Older systems tended to store individual user passwords in this file; today they will normally be placed in /etc/shadow. The individual lines in this text file comprise various “fields” separated by colons:

username:password:UID:GID:additionalInfo:home:shell

In more simple terms this means:
• username: the name with which the user will log on.
• password: formerly the encrypted password was stored here; now the user will merely find an x – indicating that the password has been stored in another file (/etc/shadow).
• UID: a unique “User IDentification number” assigned to each user; root is always assigned 0, and any other numbers can be assigned freely, although 1 through 99 are typically reserved for system accounts.
• GID: the “Group IDentification number” that defines group membership; each user must be a member of at least one group (see also /etc/group).
• additionalInfo: more specific details on the user; this field can contain several words (although it is normally used for the user’s first name and surname).
• home: the user’s home directory, which normally defaults to /home/username.
• shell: the program which will be launched as the user’s command line interpreter when the user logs on; this normally defaults to /bin/bash or something similar.
Before you enter a new user, you should think about the user ID you intend to assign them, and about the group that the user will belong to. The UID must be unique for every user, and the GID must correspond to a group defined in /etc/group. In most cases the number corresponding to the users group will be assigned (although you can define a new group in /etc/group):

asteroid:~# less /etc/group
[...]
users:x:100:

Now, let us assume that you want to create a new user called petronella, and make her a member of users; the


corresponding entry would need to be typed as follows:

petronella:x:501:100:Testhuhn:/home/petronella/:/bin/bash
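The colon-separated fields can be picked apart with standard tools. A quick sketch using cut, with the sample entry from above:

```shell
# Sketch: extract individual fields from a passwd-style entry
entry='petronella:x:501:100:Testhuhn:/home/petronella/:/bin/bash'
echo "$entry" | cut -d: -f1   # field 1: username
echo "$entry" | cut -d: -f7   # field 7: login shell
```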

To add the new user to other groups, you would have to enter her username in the desired groups in /etc/group, for example:

audio:x:29:huhn,easter,petronella

Most systems use shadow passwords, and this means that you will need an additional entry in the /etc/shadow file. The colon-separated format is retained here:

username:password:age:min_age:max_age:warning:buffer:invalid:other

The fields can be interpreted as follows:
• username: the username; refer to /etc/passwd.
• password: the encrypted password; if you see an asterisk * at this point, a valid password that the user could log on with has not been assigned; this is often the case for “administrative users”, such as daemon, bin or lp.
• age: the age of the password, counted in days from 1st January 1970 (UNIX’s birthday) to the last modification.
• min_age: the number of days which must elapse before you can change the password.
• max_age: the number of days until the password needs to be changed.



• warning: the number of days notice used to warn the user that her password is about to expire.
• buffer: the number of days until the account really becomes invalid (a kind of buffer).
• invalid: the number of days (counted from 1st January 1970) until the password becomes invalid.
• other: the final field is reserved.
The first two entries are mandatory, while the others are optional. The new entry for the user “petronella” in /etc/shadow thus appears as follows:

petronella:!:::::::

The second field, which currently contains a placeholder, ! (this field should never be left empty for security reasons), would normally contain an encrypted password. Make sure that you are logged on as root, and run the following passwd command:

asteroid:~# passwd petronella
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
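The day counts in /etc/shadow are measured from 1st January 1970. As a quick sketch, the current day number – the kind of value written into the age field when a password is changed – can be computed in the shell:

```shell
# days since the Unix epoch, as used by the /etc/shadow date fields
days=$(( $(date +%s) / 86400 ))
echo "$days"
```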

Now proceed to assign a home directory to the new user and optionally copy some initialization files to this directory:

asteroid:~# mkdir /home/petronella
asteroid:~# cp -v /etc/skel/.* /home/petronella
/etc/skel/.alias -> /home/petronella/.alias
/etc/skel/.bash_logout -> /home/petronella/.bash_logout
/etc/skel/.bash_profile -> /home/petronella/.bash_profile
/etc/skel/.bashrc -> /home/petronella/.bashrc
/etc/skel/.cshrc -> /home/petronella/.cshrc

To provide the new user with access you will now need to modify the ownership of this directory and its files:

asteroid:~# chown petronella /home/petronella /home/petronella/.*
asteroid:~# chgrp users /home/petronella /home/petronella/.*

Fully automatic

The useradd, userdel, and usermod commands are practical command line tools to take the headache out of all that editing work. The useradd program is used to create user accounts, as the name would suggest. Running the command without any additional flags will display an overview of the most important parameters:

asteroid:~# useradd
usage: useradd [-u uid [-o]] [-g group] [-G group,...]
               [-d home] [-s shell] [-c comment]
               [-m [-k template]] [-f inactive]
               [-e expire ] [-p passwd] name
       useradd -D [-g group] [-b base] [-s shell]
               [-f inactive] [-e expire ]

There are two different approaches to using this program: you can either supply the new UID, GID, home directory, etc. as command line options, or you can use the -D parameter to define defaults, which the command will then process.

Let’s take a look at the individual parameters first. -u specifies the UID. If you attempt to use a number that has already been assigned, the program will exit and display an error message: useradd: uid 100 is not unique. You can use the -g flag to assign a primary GID for the user; additional groups are assigned by means of a comma-separated list, such as -G GID2,GID3…. To define the user’s home directory, add -d /home/username, and to assign a default shell, for example -s /bin/bash. To ensure that the home directory is actually created, you will also need to set the -m flag. By default, the files in /etc/skel are copied to the new home directory. The -c parameter can be used to define the user’s full name (which is stored in the additional info field in /etc/passwd); quotes are needed if it contains multiple words. If an encrypted password already exists for the user, you can set this password with the option -p crypt_pw. The full syntax is therefore as follows:

asteroid:~# useradd -u 501 -g 100 -d /home/petronella -m -s /bin/bash -c "Test-User petronella" petronella

Now let’s take a look at the configuration files to see whether everything turned out OK. The entry for the password in /etc/shadow still reads !. This means running passwd once more. useradd is a lot quicker if you use the default settings. You can specify the -D option to view them:

asteroid:~# useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel

To apply these defaults, you can simply type useradd petronella; but do not forget to set the -m flag, to ensure that the home directory is really created. Of course, you can modify the default configuration; useradd -D -g GID will change the standard group, useradd -D -b the default home directory, and useradd -D -s the standard shell.

If, after creating a new user, you discover that one or more settings are incorrect, and thus you need to modify an existing account, you might like to investigate usermod. This tool provides the same options (and the same functionality) as useradd. The third program in this group is called userdel, and is responsible for deleting user accounts. You cannot delete a logged-on user; similarly, any processes belonging to the user must be killed before launching userdel. The command only has one parameter, -r, which will delete the user’s home directory and any files it contains, in addition to the account itself.

Debian systems provide a Perl script called adduser. You can run the script interactively when creating new users: it prompts you to enter data for the new user step by step and adds corresponding entries to the configuration files. The current version, 3.47, is still included with Debian Woody. The source code is available from http://ftp.debian.org/debian/pool/main/a/adduser/adduser_3.47.tar.gz; Debian users can type apt-get install adduser to install the tool. ■


February 2003



PROGRAMMING

C

Importing Assets

Data Driven By You

Whenever we start writing a piece of software, we usually use data that’s been hard-coded into the program. Nearly every 3D demo features a spinning cube. BY STEVEN GOODWIN

With the possible exception of the 16-bit demo scene, nobody ever uses a spinning cube. We want rockets, tanks, people – anything, in fact, except spinning cubes! In this article we look at how to move away from primitive, hard-coded data, and learn how to source and handle external assets – to use the geek vernacular, it becomes data-driven. As an example, we’ll be developing a 3D mesh viewer (using 3DS files – see Box 1) from first principles, starting with a look at the file format, and step-by-step learning how to effectively and efficiently bring it into memory and manipulate it.

I Owe You Nothing

Before we can load a file we need to understand it. Whilst it is usually desirable to use a pre-written library, it would make for a very short article! We are therefore only re-inventing the wheel for the purpose of education. However, I usually find writing a file loader helps me not only understand the format better, but allows me to make better use of the data I have to process. For all data-driven work, I recommend having good documentation and samples to hand – even if you have to pay money for them! The documentation we shall be using to describe the format can be found at http://www.whisqu.se/per/docs/graphics56.htm. This is a simplified version of the (now released) specification from http://sparks.discreet.com/downloads/downloadshome.cfm?f=2&wf_id=45. You do not need to rush for Konqueror just yet, however, as I shall be describing the pertinent parts here.

Tall Trees In Georgia

Every 3DS file is split into chunks, and each chunk is part of a hierarchy. This means that each chunk (a block describing an individual piece of data – an object’s colour, for instance) can have one or more child blocks ‘inside’ it, in much the same way as XML or HTML embeds tags. Unlike XML, however, 3DS is a binary format. The parent-child relationship of chunks is conceptually no different to that of the Linux file-system. You can sneak a peek at the hierarchy of a sample 3DS file in Figure 1 (p72) to see how this will look for our program.

Chunked formats allow the vendor (along with standards bodies, other developers, and end users) to add custom chunks into the format without breaking other applications; it becomes both forward and backward compatible. Because it is a hierarchy, an application may omit or include entire sub-trees of data depending on how relevant they are to the file, or program, in question. The application reading the file can then ignore chunks (in their entirety) it doesn’t recognise (or need), and continue with those that it can, or wants to, read. This also makes it easier to write a parser (which in turn ensures a greater penetration for the format) since you only need to handle the elements you’re interested in. This predictable structure encompassing unpredictable data is what makes self-describing formats like this and XML loved – and others, like that of Microsoft Word, hated!

A 3DS file chunk begins with a small header describing the data: an ID, and its size in bytes. This is then followed by the data itself (see Table 1). It really is that simple! Every step of the process involves the same set of operations:

1. Read two bytes to discover the chunk ID
2. Read four bytes to discover the size
3. If we understand what to do with the chunk, process it!
4. If we don’t, skip over that chunk
5. Repeat from 1, until there’s no file left

Processing a chunk is also very simple – it’s either data (so we load it into memory accordingly), or it’s more chunks (in which case we repeat the steps above). This is known as ‘parsing’ the file; we use the known file structure to perform an analysis, giving meaning to the data.

Table 1: 3DS Chunk

Start location (*)  End location (*)  Size (bytes)  Name
0                   1                 2             Chunk ID
2                   5                 4             Next chunk
6                   ??                ??            Data in chunk

(*) An offset in bytes from the beginning of the chunk

Table 2: Chunk IDs

Name                Parent              ID (hex)
Main Block          -                   4d4d
Mesh Data           Main Block          3d3d
Keyframes           Main Block          b000
Object Description  Mesh Data           4000
Polygon Data        Object Description  4100
Light               Object Description  4600
Camera              Object Description  4700
Vertex List         Polygon Data        4110
Face List           Polygon Data        4120

A more comprehensive list can be downloaded from: http://sparks.discreet.com/downloads

High and Dry

Since we now have a basic grasp of the format we should get a 3DS file and process it by hand. This is known as ‘dry running’. To do this we need a file to use as our proverbial guinea pig. Naturally, we want to be impressed by our work, so we start looking for a T-Rex from Jurassic Park, or Princess Fiona from Shrek, perhaps. Alas, this is misplaced optimism – start small. Very small. Ideally, you should start with a model of a cube (Sorry)! At this size, it can be examined by hand with a tool named ‘hexdump’. Each point, line and parameter can then be compared with your source model as an integrity check. Then, if you are happy with your understanding of that section of the format, you can add features (incrementally) to the model: if you only change one thing at a time, there’s only one thing to go wrong! Since I am not an artist, I shall use a 9K rocket model from the Internet (see Box 2: 3DS models). Its validity has been independently verified by running it through several different rendering packages and other model viewers to check for errors.

Box 1: 3DS?
3D Studio was a DOS-only package from Autodesk, and was, for many years, the industry standard for 3D computer game modelling. Although Discreet (or its parent company, Autodesk) no longer supports the 3DS format, many packages still do. This is for a good reason. It can hold data for a number of different models, as well as storing keyframe (animation) data and dummy nodes (indicating a position on the model for attaching other objects). There are also several open source viewers and tools available, such as view3ds and lib3ds.

Box 2: 3DS models
As a programmer with little-to-no artistic ability I have to download my own personal artist from the Internet! There are several web sites that will provide you with free 3DS models for personal use. Google will return you the following:
http://www.fantasticarts.com/3dmodels/
http://www.egypt3d.com/3D_Models/3d_models.html

Keep on Running

Having now got a 3DS file and an understanding of the format, the next step is to take a hexdump of the 3DS file and print it out. I’ll wait while you do that. No really. Print it out! You may be surprised how much easier it is to work through with a hard copy of the file (see Listing 1).

Listing 1: 3DS file header in hex

$ hexdump -C rocket.3ds | head
00000000  4d 4d ae 22 00 00 02 00  0a 00 00 00 03 00 00 00  |MM."............|
00000010  3d 3d b2 21 00 00 01 00  0a 00 00 00 01 00 00 00  |==.!............|
00000020  ff af 67 00 00 00 00 a0  16 00 00 00 52 6f 63 6b  |..g.........Rock|
00000030  65 74 73 68 69 70 20 62  6c 75 65 00 10 a0 0f 00  |etship blue.....|
00000040  00 00 11 00 09 00 00 00  00 00 ef 20 a0 0f 00 00  |........... ....|
00000050  00 11 00 09 00 00 00 00  00 ef 30 a0 0f 00 00 00  |..........0.....|
00000060  11 00 09 00 00 00 00 00  00 40 a0 10 00 00 00 31  |.........@.....1|
00000070  00 0a 00 00 00 00 00 00  00 00 a1 08 00 00 00 01  |................|
00000080  00 81 a0 06 00 00 00 00  40 3b 21 00 00 52 6f 63  |........@;!..Roc|
00000090  6b 65 74 73 68 00 00 41  2c 21 00 00 10 41 80 0c  |ketsh..A,!...A..|

Head Music

Let us now perform a ‘dry-run’! The first chunk we can see is 4d4d, with a size field of ae220000. This translates to a ‘main block’ chunk (see Table 2: Chunk IDs for a partial list) that is 8878 bytes in size (since it’s in hex, and we must read all numbers backwards – we’ll see why later). We are only 6 bytes into this chunk, so we can conclude there’s another 8872 bytes of this chunk to go, and so we process it. Which, by the file specification and definition, means we must read the next header! This one has an ID of 0002 (reading backwards, remember!) and is 10 bytes long. We don’t know what the 0002 chunk is, so we skip over the remaining 4 bytes and read another chunk! 3D3D is


next and easily recognised as a model chunk… and so on. This is one reason for using small files – it’s a much easier process. Having done this, we can understand what the program should be doing, which makes it possible to check that it’s working properly. We can now move on to implementing it!

Doin’ the Do

Looking at the algorithm referenced above, it seems logical to write the ReadChunkHeader function first. However, the code to do it is not as obvious as perhaps you’d think.

fread(&id, sizeof(short), 1, file_ptr);
fread(&size, sizeof(long), 1, file_ptr);

There are three basic errors shown here, which are common to a lot of asset-handling routines:

• The size of the ‘short’ variable might not always be two bytes.
• No errors are handled.
• The code might not be compiled on an x86 machine.

The first can be fixed simply by creating two custom types in a common header file, say mv_types.h.

typedef unsigned short tWORD;
typedef long tLONG;

The second problem is a question of discipline: all possible circumstances must be catered for, so the error codes from fread must be checked. See Box 3: Stability, for more information. Most of the examples presented here are without full error checking, allowing us to focus on the more pertinent parts of the code; however, the full source is available on the Subscriber CD.

The third situation is subtler. It is the issue of endianness (see LM, issue 23, p69). Basically, this is where the order of the individual bytes within a word varies between processors – and it is why we had to read the chunk size backwards earlier. The x86 family is considered little endian and would work fine with the above code. The Motorola 68000 architecture would not, however. When handling external file formats you should note carefully the endianness of the format itself – this is another good reason for printing out the hex dump of a sample file, as it makes the byte order easy to see. You should not, however, be concerned with the endian format of the target machine, since it is always possible to create endian-independent code:

BOOL mv_ReadWord(FILE *file_ptr, tWORD *pWord)
{
  int c;

  if ((c = fgetc(file_ptr)) == -1)
    return FALSE;
  *pWord = (tWORD)c;

  if ((c = fgetc(file_ptr)) == -1)
    return FALSE;
  *pWord |= (tWORD)c<<8;

  return TRUE;
}

Box 3: Stability
Your program will be working with odd files, of odd dimensions and odd sizes. You can no longer make assumptions about their contents, or limits. If the file format supports 65536 options, then make sure your program can cope with 65536 – even if the only software generating those files is limited to 100! Every return value (especially from file and memory functions, like fread and malloc) must be checked. It is possible the file will be corrupt, broken, or maliciously hacked, and so cannot be trusted! Not only that, but once each section of a file has been found (e.g. model data), we should initialise its contents to sensible values:

pMesh->iNumFaces = 0;
pMesh->iNumVertices = 0;

This way, if a file lacks any particular component, the program will not come across uninitialised data and try to use it. C++ classes support constructors that are automatically called when an object is created, making it an ideal language for these tasks. If you’re not currently in the habit of being this paranoid – start now! Robust, stable code like this is no bad thing. You wouldn’t trust a stranger typing in your root shell, so why should you allow their data through your program without checks?

Figure 1: Sample Rocket Mesh – Hierarchy of Chunks

This can be extended to an equivalent mv_ReadLong function, or combined with it to make an all-encompassing mv_ReadChunk routine. My reasoning for this particular implementation is that by passing the address of a variable into the function, we can effectively pass two values out – the bytes read in from the disc, and an error condition (see Listing 2). If you think it’s paranoia – you’re right – now go and read Box 3 again!

Building Steam with a Grain of Salt

From these little acorns, great oaks of code shall grow. Referring back to the file format, we can write a parsing function quite simply – as shown in Listing 3. This function is fairly typical of the type we’ll need to write for this parser. It consists of a prototype that includes the file pointer (telling us where to get the data from), the size of the data to read, and an object pointer telling us where to put the data once it has been read.

The main loop (lines 18 to 31) consists of steps 1 to 5, outlined above. The functionality of each step should be self-explanatory. Lines 8 and 9 make a note of when we have to stop reading. The method I’ve adopted here is to pass the total block size into each function, and let it self-terminate (line 31) at the appropriate time. This isn’t the nicest code in the world, but it accurately does the job! If you’re designing a chunked format of your own, I’d recommend adding a chunk with an ID (say 0xffff) that means ‘all done, return to your parent’, to make termination easier to handle.

Because we’re entering a new branch of the tree, and this branch has some interesting data associated with it, we create a structure (line 11) for this data to fit into. Lines 15 and 16 prepare some default mesh data in case nothing else does. This way the render code can check the data before blindly using pointers (or data) that may be invalid – another case of writing robust code.

This code can be used as a template for parsing other chunks, say for the mesh data, polygon data, or vertex list (see Table 2: Chunk IDs). The Main Block will read data, and only respond to Mesh Data, at which point it calls a similar function (called mv_ParseMeshData) which in turn looks for Object Descriptions. This then looks for Polygon Data, Lights or Cameras. It is best to separate these into functions because it improves readability, re-emphasises the hierarchical nature of the file, and allows you to take special cases into account. For example, the Object Description starts with a NUL-terminated ASCII string before reading the chunks. We can implement that easily and cleanly with a separate function – an example is shown in Listing 4. Having now got some code to read our data, we need to handle it in an efficient way.

Listing 2: Passing values out

BOOL mv_ReadChunk(FILE *file_ptr, tWORD *pID, tLONG *pSize)
{
  if (mv_ReadWord(file_ptr, pID) == FALSE)
    return FALSE;
  if (mv_ReadLong(file_ptr, pSize) == FALSE)
    return FALSE;
  return TRUE;
}

Listing 3: Parsing function

01 MV_MODEL *mv_ParseMeshData(FILE *file_ptr, tLONG mesh_size, MV_OBJECT *pObj)
02 {
03   tWORD id;
04   tLONG size;
05   tLONG end_of_block;
06   MV_MODEL *pMesh;
07
08   end_of_block = ftell(file_ptr) + mesh_size;  /* where the block should end... */
09   end_of_block -= sizeof(tWORD)+sizeof(tLONG); /* ...ignoring the header */
10
11   pMesh = (MV_MODEL *)malloc(sizeof(MV_MODEL));
12   if (!pMesh)
13     return (MV_MODEL *)0;
14
15   pMesh->iNumFaces = 0;
16   pMesh->iNumVertices = 0;
17
18   do
19   {
20     mv_ReadChunk(file_ptr, &id, &size);
21
22     switch(id)
23     {
24       case SMV_OBJECTDESCRIPTION:
25         mv_ParseObjectBlock(file_ptr, size, pObj, pMesh);
26         break;
27       default:
28         mv_SkipChunk(file_ptr, size);
29     }
30   }
31   while(ftell(file_ptr) < end_of_block);
32
33   return pMesh;
34 }

Pictures Of Matchstick Men

Every 3D mesh is composed of faces. Lots of them. Each face is a triangle with three points, each point being called a vertex. So storing a mesh is simply a case of storing every vertex of every triangle. This is normally done with two lists: a vertex list and a face list (see Boxes 4 and 5). A list of explicit triangle vertices is rarely used because, in most meshes, each face normally joins at least one other face along an edge, meaning they will share two vertices. By referencing the points in a list (as opposed to labelling them explicitly) we can save a lot of memory.

Listing 4: Reading in chunks

BOOL mv_ParseObjectBlock(FILE *file_ptr, tLONG block_size, MV_OBJECT *pObj, MV_MODEL *pMesh)
{
  tWORD id;
  tLONG size;
  tLONG end_of_block;

  end_of_block = ftell(file_ptr) + block_size;  /* where the block should end... */
  end_of_block -= sizeof(tWORD)+sizeof(tLONG);  /* ...ignoring the header */

  mv_ReadString(file_ptr, pObj->szName, sizeof(pObj->szName));

  do
  {
    if (mv_ReadChunk(file_ptr, &id, &size) == FALSE)
      return FALSE;

    switch(id)
    {
      case SMV_POLYGONDATA:
        mv_ParsePolygonData(file_ptr, size, pMesh);
        break;
      case SMV_MESHLIGHT:
        mv_SkipChunk(file_ptr, size);
        break;
      case SMV_MESHCAMERA:
        mv_SkipChunk(file_ptr, size);
        break;
      default:
        mv_SkipChunk(file_ptr, size);
    }
  }
  while(ftell(file_ptr) < end_of_block);

  return TRUE;
}


For example, the rocket has 266 vertices and 250 faces. At 12 bytes per vertex and 6 bytes per face, the mesh requires 4,692 bytes. Whereas, if each face was stored with its vertices explicitly listed, it would take 9,000 bytes (as each face is now 36 bytes). The savings become more pronounced as meshes become larger and more complex.

So how does this help us? It tells us that the format is optimised for size, not usage. We must take this format and store it internally in a manner that helps our program. Music formats, such as MP3 and MIDI, are intended to be played in a linear fashion, so their formats lend themselves instead to streaming (you may notice the slight pause when jumping into the middle of an MP3).

To start with, we should test our parser by creating a simple OpenGL framework, using the data in whatever format we happen to have. As a bonus to those committed Linux Magazine readers: issue 8 (p72) includes a piece of Glut framework code that opens a window, accepts input from the keyboard and mouse, and draws a teapot on screen! A quick copy and paste and it’s in our project, with the glutSolidTeapot call replaced with our own draw code, which looks as shown in Listing 5.

Best That You Can Do

There are two issues when it comes to choosing the best internal format. The first is handling the object’s properties (say, position and orientation), and the second is the rendering. So is this a trade-off? No. They should be held in different structures! The properties could be held in an MV_OBJECT structure (for instance) that details where the object’s position is and what it is called, and a separate structure (MV_MODEL, for example) should describe how to draw it. They are, after all, different entities, especially since the position will change more often than the mesh data will. By separating them in this way, the internal format can change several times, and only the rendering function needs to be updated. What’s more, the MV_MODEL can describe which format of data it’s using, allowing us to use different formats within the same program… for the same type of object!



Listing 5: Copy and Paste

for(i=0;i<iNumFaces;i++)
{
  glBegin(GL_LINE_LOOP);
  glVertex3d(pVList[pFList[i].v1].x, pVList[pFList[i].v1].y, pVList[pFList[i].v1].z);
  glVertex3d(pVList[pFList[i].v2].x, pVList[pFList[i].v2].y, pVList[pFList[i].v2].z);
  glVertex3d(pVList[pFList[i].v3].x, pVList[pFList[i].v3].y, pVList[pFList[i].v3].z);
  glEnd();
}


Listing 6: More modules

MV_OBJECT *Obj_CreateObject(void)
{
  MV_OBJECT *pObj;

  pObj = (MV_OBJECT *)malloc(sizeof(MV_OBJECT));
  if (pObj == 0)
    return (MV_OBJECT *)0;

  pObj->pos.x = pObj->pos.y = pObj->pos.z = 0;
  pObj->xangle = pObj->yangle = pObj->zangle = 0;
  pObj->pMesh = 0;

  return pObj;
}

typedef struct
{
  char szName[256];
  MVERTEX pos;
  float xangle, yangle, zangle;
  MV_MODEL *pMesh;
} MV_OBJECT;

This object should have its own set of functions to manipulate it, keeping it modular and distinct from the file parsing code. Again, this distance allows features to be added and changed without a major code overhaul (see Listing 6). And a set of manipulation functions would not go amiss, as in our example:

void Obj_SetPosition(MV_OBJECT *pObj, float x, float y, float z)
{
  pObj->pos.x = x;
  pObj->pos.y = y;
  pObj->pos.z = z;
}

Improving the format can be done (in OpenGL) using ‘array elements’ or ‘display lists’. These should be computed on load and stored in place of the mesh data we loaded above. The internal methods, or structure, are not important unless you’re an OpenGL programmer (it’s the same data, but in a different format). What is important, however, is that such a format exists and may have no relation to the 3DS file we started with! You should arrange program data in a format suitable for the program – not the disc. We are fairly lucky, in so much as a good OpenGL format can be created quite easily by expanding the face vertices, with fairly minimal work on our part. Listing 7 shows alternative render code using ‘array elements’, made possible because we load the vertices from the 3DS file in the correct manner initially.

We could also use our MV_MODEL structure to store the colour (or graphic image) for each mesh face, or add the face normal (the direction it’s facing) to perform hidden face removal or produce better lighting. This is information that could either be present within the file format, or computed from existing data. We simply put the data at the fingertips of the render code, where it deserves to be.

Whatever format results, we could (nay, should!) save the data out as a raw block that can be loaded in (much quicker) next time. These resultant files are platform dependent and target ready: meaning we load them in, set up our pointers and *wham!* away we go! An example is shown in Listing 8. In a larger project, these files may be packaged with others (in much the same manner as a ‘tar’ file) to speed up loading and ease distribution.

As we’ve seen, there can be a lot of work in parsing a file format and storing it efficiently in memory. When it’s done, your programs take on an extra edge of professionalism and take the next step towards the big time. ■

Box 4: Vertex List
-21.000000 0.000000 100.000000
-34.000000 5.000000 73.000000
-31.000000 8.000000 73.000000
… etc …

Box 5: Face List
18 1 0
2 1 0
3 2 0
… etc …

Listing 7: Render

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)pCurrMesh->pVertexList);

for(i=0;i<pCurrMesh->iNumFaces;i++)
{
  glBegin(GL_LINE_LOOP);
  glArrayElement(pFList[i].v1);
  glArrayElement(pFList[i].v2);
  glArrayElement(pFList[i].v3);
  glEnd();
}

Listing 8: Wham

fwrite(&iNumVertices, sizeof(iNumVertices), 1, file_ptr);
fwrite(pVertexList, sizeof(MVERTEX), iNumVertices, file_ptr);
fwrite(&iNumFaces, sizeof(iNumFaces), 1, file_ptr);
fwrite(pFaceList, sizeof(MFACE), iNumFaces, file_ptr);




Automated tools

Quality Code

Walking Upright

Whilst peer review is the best method of ensuring quality code, automated tools can also be employed. In this article we look at such a tool, and show how it can be used to improve your code. BY STEVEN GOODWIN & DEAN WILSON

Program errors annoy users and hinder development. Software that takes a month to write will often take just as long again to debug and fix before the end product is deployed. Compiler warnings, although useful, do not go far enough to prevent a number of bugs that should be trapped at a much earlier stage. Splint (formerly called lclint) is a semantic checker which reads and understands your code. It can look at what you have said – and determine if that’s what you meant. The compiler, in contrast, will point out show-stopping syntax errors (such as undefined variables), which prevent an executable from being built. Many semantic errors can be caught by turning on all compiler warnings, but this is still not enough in most instances (see Box 1: Swear and curse).

I’m a lumberjack…

Splint is available as a binary or source package, which can be downloaded from www.splint.org, with the current version (3.0.1.6) weighing in at around 1.5 MB. The source is built using the standard GNU tool chain and should compile under any modern Unix. To build Splint from its source tarball, where <version> is a string such as 3.0.1.6:

$ tar -zxvf splint-<version>.tgz
$ cd splint-<version>

start the actual build:

$ make

this stage requires root privileges:

$ make install

You can test the install by typing:

$ splint --help version

A pleasant splint banner indicates that you now have a fully working install of the package. If you have any difficulties or problems building from the source, there is a binary package available that runs straight out of the box with no external dependencies, and is available for Linux, FreeBSD, Windows and Solaris from the Splint homepage.

I’ve got two legs

Splint has to be able to read and parse the code in the same manner as a compiler would. So, if you have particular include directories that need to be used, you can either add a command line switch (as you would with gcc), or use the environment variable LARCH_PATH, mimicking how a Makefile would handle it. For example, to add an include directory for a single run:

$ splint -I /usr/src/myproject/include sptest.c

Or, to make that directory available on each run of splint in this terminal session (if using ‘bash’), we use:

$ export LARCH_PATH=/usr/src/myproject/include

To make this persistent across sessions, add the line to your .bashrc or .bash_profile file, depending on your setup. Header files in the same directory as your source files do not need to be explicitly referenced, as they are included by default.

Box 1: Swear and curse
Splint can be a very exacting program. As an example of this, please note the following code sample:

#include <stdio.h>
int main(int argc, char **argv)
{
  int a=0;
  if (a = 4);
  return 0;
}

This code produces no warnings when compiled with ‘gcc test.c’, and gcc detects just one warning when all compiler warnings are enabled with -Wall. Splint, in contrast, will pick up a grand total of 5 errors (or possible errors) in the same piece of code. Imagine the potential minefields present in a larger project!

THE AUTHORS
Steven Goodwin is a Lead Programmer who has just finished his fifth computer game. He has had more bugs than you’ve had hot dinners…but he claims they all belong to Dean Wilson. Dean Wilson works in Perl, C and shell scripts at WebPerform Group Ltd in the City. His bug count currently exceeds the GNP of Japan…but he claims they all belong to Steven Goodwin.

You’re the doctor of my dreams

We have, below, the complete source code that we are going to use as our test bed for splint. Please note this code is not production quality, and deliberately contains bugs. It does, however, compile 100% cleanly with the strict gcc settings of -Wall, -ansi and -pedantic:



$ gcc -Wall -ansi -pedantic test.c

Although these settings may appear overly conservative, in a real-world scenario where code is critical, or ported across multiple systems, these settings would be the norm (see Listing 1). The program uses Numerology to calculate a mystical number which is derived from a person’s name. This number can be used to tell your fortune, describe your personality, or demonstrate your character traits (like health, wealth, and gullibility). Allegedly!

So there’s our code. 46 lines of code. 0 warnings. Can there be anything wrong? For starters, it doesn’t terminate – it appears to spin. So we need some extra help tracking down the error. Let us run splint and look for further clues… (see Listing 2)

All things dull and ugly

Wow! 14 warnings for a ‘perfect’ piece of code. Let’s break these errors down to see where they come from, and why. The experienced reader may care to note the different categories of problem that splint produces. One of the first things to spot is the pair of ‘Parameter … not used’ errors on line 28, one with argc as the unused parameter, and one with argv. Both variables are present in our main() function, but neither is used. There is a good reason for this (in our case): our program doesn’t use them. Does this mean we can ignore the error? Only in this specific case. In virtually every other situation an unused parameter means that some important data is being orphaned inside the function. This should, invariably, be corrected. In this instance we should amend our source code to tell maintenance programmers that we are intentionally not using these parameters. For example:

argc = argc;
argv = argv;

Splint will no longer report this warning for argc and argv, although it will be reported for other unused parameters

Listing 1: Sample code

 1  #include <stdio.h>
 2
 3  int mapping[] = {
 4      1, 1, 4, 2, 4, 4, 2, 1, 5,
 5      4, 4, 2, 1, 5, 2, 5, 5, 5,
 6      4, 3, 3, 3, 3, 2, 5, 5,
 7  };
 8
 9
10  int num_calc(char array[11])
11  {
12      int i;
13      int total=0;
14      char c;
15
16      for(i=0;i<sizeof(array)/sizeof(array[0]);i++)
17      {
18          c = array[i];
19
20          if (c >= 'A' && c <= 'Z') total += mapping[(int)c-'A'];
21
22          else if (c >= 'a' && c <= 'z') total += mapping[(int)c-'a'];
23      }
24
25      return total;
26  }
27
28  int main (int argc, char **argv) {
29      char message[11] = {"Mystic Meg"};
30      unsigned int num;
31
32      if ((num = num_calc(message)))
33      {
34          /* reduce until its negative */
35          do
36              num -= 10;
37          while(num>=0);
38
39          /* Since we've overshot, add the last ten back */
40          num += 10;
41
42          printf("The magic number for %s is %d\n", message, num);
43      }
44
45      return 0;
46  }

www.linux-magazine.com

in the program. If you wish to ignore this class of warning wholesale (i.e. in every occurrence in the code) you can ask splint to suppress it with the command:

$ splint --paramuse test.c

After running this command on the original source you will notice there are now only 12 warnings present, with both of those in the ‘unused parameter’ category having been removed. Most of splint’s warnings are grouped into categories like this that can be ignored with a command line switch such as ‘--paramuse’. This allows you to ignore specific types of error if you either don’t agree with them, or they are not applicable to the product you are working on. Working through the error list above you should be able to pick out a number of such switches. There are over one hundred different flags available, so it would be impractical to list them all here. You can review the categories available by using the command:

$ splint --help flags

The individual categories (memory, pointers and parameters, for example) can be shown with the equally simple: $ splint --help memory
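Splint also understands annotations embedded in ordinary C comments. As the message text for the paramuse warning itself suggests, marking a parameter /*@unused@*/ documents the intent directly in the declaration, instead of relying on dummy assignments. A minimal sketch (the handle_event function and its names are our own, invented for illustration):

```c
/* Hypothetical callback: 'user_data' is required by the signature but
   deliberately ignored, so it is annotated for splint. To a normal C
   compiler the annotation is just a comment. */
static int handle_event(int code, /*@unused@*/ void *user_data)
{
    return code * 2;
}
```

Annotated like this, the parameter no longer triggers the warning, even without the --paramuse switch.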

It’s fun to charter an accountant…

Once the simpler errors have been removed, it is a good idea to progress through the list, solving each error in turn. After a problem area has been detected, briefly read the remaining problems to see if your fix could adversely affect other areas of the code. When you are happy with your change, re-run splint to check that the error has gone, and that no other warnings have been produced as a result of your new code change. Looking at our output, we see there’s a problem with line 10:

t.c:10:19: Function parameter array declared as manifest array (size constant is meaningless)



This tells us we are implying, by including the square brackets, that the function takes an array as an argument. It doesn’t. ‘C’ cannot pass arrays; only pointers to the start of arrays. We should therefore correct the code thus:

int num_calc(char *array)

Although not a problem as far as the compiler is concerned (both versions produce the same code), a maintenance programmer might imagine that this is actually trying to pass an array, and could be liable to introduce errors based on this incorrect assumption, as we will see later.

Gifts for all the family Upon re-running splint we notice that we’re down to 10 errors. Great, you might think. We’ve only made one (benign) change, but it has produced

an unexpected side effect – it hides a potential error. This is another good reason why you should make changes to the code incrementally (as you would when the compiler produces errors) and not in bulk. The error we lost concerned the sizeof operator on line 16. By taking the sizeof of a pointer, we are only considering (on a 32 bit machine) 4 bytes. By naming the pointer as if it were an array, the coder implied it would be 11 bytes long (the array size). This flaw can be fixed by rewriting the code correctly with the strlen function:

for(i=0;i<strlen(array);i++)
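Putting the pointer declaration and the strlen fix together, the repaired function might look like the sketch below. It is based on the article’s listing, with the mapping table included for self-containment and a const qualifier added; the size_t loop index anticipates the type fix discussed next.

```c
#include <string.h>

static const int mapping[26] = {
    1, 1, 4, 2, 4, 4, 2, 1, 5,
    4, 4, 2, 1, 5, 2, 5, 5, 5,
    4, 3, 3, 3, 3, 2, 5, 5,
};

/* The parameter is now honestly a pointer, and the loop bound is the
   string's real length rather than the sizeof of a pointer. */
int num_calc(const char *array)
{
    size_t i;
    int total = 0;

    for (i = 0; i < strlen(array); i++) {
        char c = array[i];
        if (c >= 'A' && c <= 'Z')
            total += mapping[(int)(c - 'A')];
        else if (c >= 'a' && c <= 'z')
            total += mapping[(int)(c - 'a')];
    }
    return total;
}
```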

Our next error involves a type mismatch. Instead of blindly replacing the type we must make sure there are no complications. In this example, the ‘strlen’ function returns a value of type ‘size_t’.


This is a system-defined type (from stddef.h) which has enough capacity to store the size of any object that the system can handle; it is the type of the result of the ‘sizeof’ operator. It is more ‘correct’ to use size_t because the loop is intended to reference an arbitrary array which could, in theory, extend across the whole of memory. Changing the type here does not cause us any problems, especially since on our (32 bit x86) machines it is only a change of sign (from signed to unsigned), but that is only true in this case. Changing signs arbitrarily is a dangerous comfort with which to surround yourself. You will see the proof of this point shortly. Our next problem is one of types, and it occurs twice (once on line 20, and once on 22): we use a char to reference an integer array element. Since ‘C’

Listing 2: First output from splint

$ splint t.c
Splint 3.0.1.6 --- 23 June 1912
t.c:10:19: Function parameter array declared as manifest array (size constant is meaningless)
  A formal parameter is declared as an array with size. The size of the array is ignored in this context, since the array formal parameter is treated as a pointer. (Use -fixedformalarray to inhibit warning)
t.c: (in function num_calc)
t.c:16:18: Parameter to sizeof is an array-type function parameter: sizeof((array))
  Operand of a sizeof operator is a function parameter declared as an array. The value of sizeof will be the size of a pointer to the element type, not the number of elements in the array. (Use -sizeofformalarray to inhibit warning)
t.c:16:10: Operands of < have incompatible types (int, arbitrary unsigned integral type): i < sizeof((array)) / sizeof((array[0]))
  To ignore signs in type comparisons use +ignoresigns
t.c:20:21: Incompatible types for - (int, char): (int)c - 'A'
  A character constant is used as an int. Use +charintliteral to allow character constants to be used as ints. (This is safe since the actual type of a char constant is int.)
t.c:22:21: Incompatible types for - (int, char): (int)c - 'a'
t.c: (in function main)
t.c:29:20: Initializer block for message has 1 element, but declared as char [11]: "Mystic Meg"
  Initializer does not define all elements of a declared array. (Use -initallelements to inhibit warning)
t.c:32:6: Assignment of int to unsigned int: num = num_calc(message)
t.c:32:5: Test expression for if not boolean, type unsigned int: (num = num_calc(message))
  Test expression type is not boolean or int. (Use -predboolint to inhibit warning)
t.c:37:8: Comparison of unsigned value involving zero: num >= 0
  An unsigned value is used in a comparison with zero in a way that is either a bug or confusing. (Use -unsignedcompare to inhibit warning)
t.c:42:53: Format argument 2 to printf (%d) expects int gets unsigned int: num
t.c:42:38: Corresponding format code
t.c:28:14: Parameter argc not used
  A function parameter is not used in the body of the function. If the argument is needed for type compatibility or future plans, use /*@unused@*/ in the argument declaration. (Use -paramuse to inhibit warning)
t.c:28:27: Parameter argv not used
t.c:3:5: Variable exported but not used outside t: mapping
  A declaration is exported, but not used outside this module. Declaration can use static qualifier. (Use -exportlocal to inhibit warning)
t.c:10:5: Function exported but not used outside t: num_calc
t.c:26:1: Definition of num_calc
Finished checking --- 14 code warnings







allows chars to do this (using the rules of promotion), there is not really a big problem with the code. Instead of passing an extra switch to the splint program, we shall formally fix the code with type casts. This, in addition to giving us a nice safe piece of code, allows the program to run under splint without warnings, even if someone else runs it without the command line switches.

Finland. Finland. Finland.

The next three issues are very simple, so we shall cover them together (although in practice we stepped through each one in turn). We have, in order, an initializer with extraneous braces (29), an assignment inside a conditional (32) and, on the same line, a test expression which resolves to a non-boolean answer. The braces problem does not show up under the compiler because strings in ‘C’ are simply arrays of characters, so an array of strings is just a bigger array (with NUL terminators at the end of each string). If the array were used more extensively, however, problems would soon arise. The conditional assignment does not show up as a warning under gcc because there are two brackets around the expression. This trick, used to silence compiler warnings under gcc, doesn’t work under splint, and because our code should be lint-free we shall amend it accordingly:

num = num_calc(message);
if (num > 0)
{
    ... etc ...

The ‘> 0’ not only provides a boolean result, but emphasizes the correct result we seek. Although the function does not currently return values less than zero, if it did (for error conditions, say), these would be picked up correctly too. (Remember that ‘C’ refers to all non-zero numbers as ‘true’). Another quick run of splint and we’re down to 5 warnings.

Some things in life are bad… The next problem actually causes three issues. Sequentially speaking, the first



error is what appears to be a simple type mismatch. However, remembering what we said earlier about changing types arbitrarily, we take a closer look at how this variable is deployed. First off, the num_calc function returns an integer, which we try to assign to an unsigned integer. So which should be changed, the function or the assignment? Since the function may return an error code as a negative number in the future, it is not unreasonable to assume it should be a signed int. The next problematic line (37) shows us the real crux of the problem: an unsigned value can never be negative, by definition. This means the test ‘num >= 0’ is always true, so the loop can never exit, which is the cause of our program hanging. Doh! Why didn’t the programmer spot this? More to the point, perhaps: why doesn’t gcc? An analysis of the algorithm shows us that the number needs to become negative in order for the loop to terminate (lines 35-37), and so we conclude that a signed integer is the way to go. Checking the third of these errors we notice that the printf format specifier is also wrong, confirming our suspicions.
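Pulled out into a helper function (reduce_to_magic is our own name, not part of the original program), the corrected reduction logic looks like this; with a signed int the value can finally go negative, so the loop terminates:

```c
/* Repeatedly subtract ten until the value goes negative; this only
   terminates because 'num' is a signed type. */
int reduce_to_magic(int num)
{
    do {
        num -= 10;
    } while (num >= 0);

    /* we have overshot, so add the last ten back */
    return num + 10;
}
```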

Finland has it all The last two errors are also connected. They both reference exported identifiers: one variable, one function. In ‘C’, it is possible to reference variables from one file in another by extern’ing them. extern int mapping[];

While this is not necessarily a bad thing, it allows another file to corrupt the mapping data (or call our num_calc) without our permission. Generally, if the function is private to that file – make it private with the keyword ‘static’. This, again, explains to the compiler what we mean, and not what we say. Although it’s a simple change, and may appear to some as inconsequential, it is very important and should not be ignored and allowed to fester.
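The fix itself is a one-word change per identifier. The sketch below uses stand-in names rather than the article’s; the point is simply that static limits visibility to the current file:

```c
/* File-private data and function: with 'static', neither identifier is
   exported from this translation unit, so other files cannot reach them. */
static const int answer = 42;

static int get_answer(void)
{
    return answer;
}
```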

…buttered scones for tea

And there you have it. A completely debugged and lint-free program. It has not taken a particularly long time to do, but it has provided a much more stable base from which to work, and exorcised many


bad style demons that could confuse maintenance programmers in the future. This newfound confidence in the code will encourage further features to be added, and old ones enhanced. As you can see, splint enables programmers to detect bugs before they become problems. It should find its way into the development cycle, along with -Wall, as part of the build process, to shorten the bug-fix cycle and so free developers to spend more time on new features. ■

Box 2: Never be rude

Whilst splint can highlight many of the semantic mistakes that gcc cannot, it is by no means a stand-alone or infallible program. Because it does not have to generate program code for the source, it can make (occasionally incorrect) assumptions about other parts of the code. For example, it can miss situations where functions are used without prior declarations. This can be fatal where the return type of the function is a floating point number, as the implicit declaration will be deduced as an integer: which is incorrect. Fortunately, the compiler will spot this particular instance – so you must not be lulled into a false sense of security by running with lax compiler options.

Sometimes, the human element of coding can also cause problems splint is unable to detect; consider the source fragment below. Not only is splint unable to find the errors, but usually a human being will also fail to notice them.

int a = 10l;
int b = 020;
int c;

c = a/b;

Here, a is not 101 but 10! The value ‘101’ is actually ‘10l’, with a lower case ‘L’ at the end. The visual difference between ‘1’ and ‘l’ is small, and very difficult to ascertain. This should be handled by enforcing coding standards that require the use of an upper case ‘L’, and commenting when such numbers are used, to ease readability. The same is true for numbers which are prefixed with zero, as this causes the ‘C’ compiler to treat them as octal numbers. This gives us, essentially:

int a = 10;
int b = 16;

So naturally, the integral result of 10/16 will be zero, causing a fairly severe bug.
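The pitfall from the box can be wrapped in a small function to make the arithmetic concrete (pitfall is our own illustrative name):

```c
/* The two constants from Box 2: '10l' is ten with a long suffix, and
   '020' is octal for sixteen, so the integer division yields zero. */
int pitfall(void)
{
    int a = 10l;   /* looks like 101, is actually the long constant 10 */
    int b = 020;   /* looks like twenty, is actually octal 16 */

    return a / b;  /* integer division: 10 / 16 */
}
```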


LINUX USER

KTools

KUser, KSysV, KdiskFree, KwikDisk

Controlling users It’s time to get administrative. Just a handful of KDE system tools will help you get to grips with user administration, runlevels and hard disk storage. BY STEFANIE TEUFEL

Anyone who has worked with Linux for a while will appreciate that Linux is a genuine multi-user system. However, the advantages that Linux offers do imply taking care of a few administrative tasks that you might not be familiar with if you have only dealt with Windows so far. No need to panic – KUser provides you with a GUI tool that should make your tasks a lot easier, as do all the programs that we will be discussing in this article. KUser is part of the kdeadmin package, so your Linux distribution should have it on board. As user administration is one of the system administrator’s tasks, the kdesu tool will first prompt you to enter the root password if you attempt to launch KUser as an underprivileged user via the kdesu kuser command or the start menu (in the case of SuSE, for example, System / Configuration / KUser). KUser will then

Figure 1: Account data for Beelzebub



show you two pages; the first page contains the current users on your system (Figure 1, left), the second shows the current groups. You can use the buttons in the toolbar, or the menu items, to add, delete, or edit users and groups, which allows you to avoid console commands such as useradd and their ilk. As we prefer a hands-on approach, even in administrative areas, we will simply create a new user called Beelzebub. To do so, click on the User / Add menu item, or the Add icon, and type the user name Beelzebub in the text box that appears. Then click on OK to confirm. The window that then appears allows you to enter details for the new user (Figure 1, right). What shell do you want the user to work with when she logs on? What’s the user’s real name? Should the user have a home directory of her own, or not? Additionally, you can use the Password Management tab – surprise, surprise – to manage Beelzebub’s password. This is also the place where you can specify when Beelzebub’s account will expire, or when the password will need to be changed. The Group tab allows you to specify the groups of which Beelzebub will be a member (Figure 2).


Normally users will be members of only one UNIX group, but there are cases where you want to assign specific rights to several users, and therefore decide to add them to groups which carry the appropriate privileges. The Group tab in KUser provides access to a menu containing all the available groups. If you want to add Beelzebub to one of these groups, simply select the required group and check it. KUser will add Beelzebub to the group immediately, and if you uncheck the group, the user will be just as immediately removed from it.

Getting to Grips with Boot Scripts Just like KUser, the KSysV program is reserved for system administrators, and

Figure 2: Birds of a feather

KTOOLS In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn’t want to do without.



Figure 3: Runlevels at a glance

that’s a good idea, because KSysV is used to manage the symbolic links from the /etc/rc.d/rc0.d through rc6.d directories to the script directory, /etc/rc.d/init.d, which – amongst other things – define what services are launched on booting the system (older distributions may possibly use different paths). This does not prevent you from launching the program via the K Menu (System / Configuration / SysV Init Editor on SuSE), or entering the ksysv & command in a terminal emulator as a normal user – to view the boot configuration. If you additionally supply the root password (on the command line you can ensure that you are prompted by typing kdesu ksysv), you will be able to specify the services launched when you start your machine, depending on the current runlevel. If you launch KSysV with root privileges, the tool will default to English, unless the country and language settings for root have been modified. If you need to modify the settings, you can run kdesu kcontrol in a control center launched on an underprivileged user’s desktop to do so.

GLOSSARY

Runlevel: an operating state of a Linux system. Runlevels 2 through 5 can be individually configured for various tasks – for example, automatically launching all the servers you require on booting, from HTTP to Samba. You could define another runlevel as your workstation mode and boot to a GUI login manager, such as xdm or kdm. The machine will boot to the default runlevel as specified in the /etc/inittab file.


Figure 4: Enough space? KdiskFree and above “kcmshell partitions”

If the init scripts on your machine are stored in a different place, you will need to edit the path settings in Configure / SysV Init Editor setup / paths. Newer KDE 3 versions even provide a wizard that appears when you launch the program for the first time.

The KSysV program window (see Figure 3) displays links to any services with init scripts in the init.d directory. There is also a list of services launched for each runlevel.

If you do not want to initialize a service in a specific runlevel (because you do not need the HTTP server, for example), all you need to do is drag the entry from the column in which it resides to the trashcan at the bottom left of Available System Services. Clicking the left mouse button on a service displays additional information, which may be somewhat sparse (Figure 3, small window).

To add a new service to a runlevel, drag the entry from the column containing the available system services and drop it in the Start column for the desired runlevel. KSysV will create an appropriate symbolic link in the file system. Make sure you know what you are doing, as incorrect settings at this point can prevent your Linux system from booting. And this is why you need to explicitly save your changes by selecting File / Save configuration.

Hard disk storage is a finite resource, no matter how large your hard disk may be, so it makes sense to launch KDiskFree (Figure 4) from time to time, just to check the status of your hard disk resources.

To launch the hard disk manager, simply navigate the K Menu down to the System / File system / KDiskFree entry, or simply type kdf & in your favorite shell. You can also allow the KDE control center to launch the tool for you and display the results in the Information section below Block Oriented Devices.

Incidentally, KDiskFree has a partner: look for KwikDisk below System / File system in the K Menu. This twin tool can dock onto the control bar and show you all the mounted disks and drives, including the amount of free space on each, when you click on the building block icon. So there are no more excuses for not noticing that you were running out of space (Figure 5).

KDiskFree also has its stronger points. Besides the disk size, the free space, and the load, the tool will also supply details on the file system and mount points for your disks and devices. Talking of mount points, KDiskFree allows you to mount and unmount disks by point and click – just like the CD-ROM and floppy icons on the KDE desktop. ■

Figure 5: KwikDisk for a quick overview






deskTOPia

xsnow and xfireworks

Keeping the fun going The department store shelves are full of after-Christmas sales, and street lights brighten the winter nights, and that means it’s time for deskTOPia to keep things in the party spirit, and to bring some seasonal flair to your desktop. BY ANDREA MÜLLER

The party season has just finished, and with Christmas and the New Year finally over we can try to extend the happy feelings for a little longer. So why not dress up your Linux computer to match your party spirit throughout the remaining winter months? Two programs, xsnow [1] and xfireworks [2], will help you find the right costume, that is, desktop background, for the occasion. After performing the installation steps described in the “Installation” box, the celebrations can continue.

Silent Night

When you initially launch xsnow without supplying any parameters, your desktop is magically transformed into a winter wonderland. Santa Claus, with his reindeer and his sleigh, jingles homeward through the forest after Christmas, while snowflakes slowly collect on the sills of your desktop windows (Figure 1). If Santa Claus is too big for your liking, you might like to try:

xsnow -santa 1 &

You can modify almost all the other details in a similar way using combinations of parameters. For example, typing -sc or -tc plus the name of a color will change the color of the snowflakes or the trees respectively (you can use the xcolorsel program to show you what colors are available).

DESKTOPIA Only you can decide how your desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colorful, viewers and pretty toys.



Like a snowstorm? All you need to do is increase the number of snowflakes from the default value of 100 to a maximum of 1000 (using the -snowflakes parameter) and additionally specify -unsmooth to add a little action. You might like to use the -nowind option to prevent the wind from blowing every 30 seconds, or use -windtimer with a value in seconds to define a longer interval between storms. If all that snow on your windows and at the bottom of your desktop is getting on your nerves, you can change the setting using the -nokeepsnow option, or the finer-grained -nokeepsnowonwindows and -nokeepsnowonscreen options. Alternatively, you can heap up genuine snow hills by redefining the default snow depth for your windows with -wsnowdepth, or for your desktop with -ssnowdepth. Those of you inclined to do so can even remove Rudolph the reindeer’s red nose with the -norudolf option. If you finally do not want to be reminded of Christmas, simply send Santa Claus on a well-earned holiday by additionally supplying the -nosanta parameter. The manpage for xsnow includes a few nice sample command lines and provides details on additional options, but you can produce some quite pleasant effects by experimenting. There is no need to worry about doing something wrong, as the program will automatically use the maximum permissible values if you overstep any thresholds.

Special requests for the party mood Just like in real life, there is no catering for everybody’s taste in desktop backgrounds – and in this case there are one or two quirks to look out for when running the program on KDE. If you want to launch xsnow in version 3 of this popular desktop environment, you will have to enable the option Support Programs in Desktop Window in Look & Feel / Desktop. Use the Background option to select a background color, as the xsnow -bg option, normally used to set a background color

GLOSSARY Root window: this is the mother of all desktop windows. The root window, from which all other windows are derived, does not have a frame, but instead forms the desktop background.



causes display problems on KDE. You can then launch the program as described, although you will have to live with snowflakes destroying your desktop icons. A short spell of “virtual snow clearing” with the window will soon have your desktop back to normal.

Figure 1: xsnow without any parameters
Figure 2: Corrupting icons

Bright Lights

As the name would suggest, xfireworks produces a firework display on your desktop background. The programmer, Hiroaki Sakai, actually produced this tool with “Hanabi Taikai”, a popular event celebrated in summer on various Japanese rivers, in mind. I am sure Hiroaki would not mind us using the program for other events. Just like xsnow, xfireworks offers various parameters with which you can configure the program, although it is still quite impressive if you do not use any of these options.

Fireworks is best on a black background. If you have selected a different background color, you can tell xfireworks to switch to black by typing:

xfireworks -bg black &

Unfortunately, this does not apply to KDE users, as your favorite desktop environment refuses to cooperate with

Installation

It’s easy for Debian users – all they need to do is install the precompiled binary packages. Unless your package manager refuses to install the xsnow Red Hat RPM file on the subscription CD, you will only need to fire up your compiler if you want fireworks on your machine. The XFree development packages for your distribution must be preinstalled.

xsnow

Type the following command to unzip the source code archive:

tar -xzf xsnow-1_42.tar.gz

Then change to the new directory, xsnow-1.42, and type xmkmf. If you do not want to install the software in /usr/X11R6, instead preferring to use the standard directory for compiled software, /usr/local, you will need to edit the makefile. To do so, change the lines

MANPATH=/usr/X11R6/man
BINDIR=/usr/X11R6/bin

to

MANPATH=/usr/local/man
BINDIR=/usr/local/bin

After saving the file, type the following:

make depend
make
su (type the root password)
make install
make install.man

xfireworks

Just like xsnow, xfireworks is supplied without a configure script, but at least it includes a makefile. You might also like to modify this file in order to install the program in /usr/local. Unpack the tar archive first, then change to the new xfireworks-1.3 directory, open the makefile with your favorite editor, and change the line

PREFIX = /usr/X11R6

to


pyromaniacs. Although the program has sensible defaults, you might like to try some fine tuning. As some settings can place a heavy load on your CPU and graphics adapter, the options that you settle on will probably be dictated by your computer equipment. Provided you have a suitably quick computer, the following line:

xfireworks -probability 200 -fine 200 -after-image 125 -color-length 150 &

should produce some presentable results. You can raise the -probability value to increase the number of rockets. Raising the value for -fine will create realistic and smooth explosions; -after-image specifies the length of the afterglow effect, and -color-length increases the period before the sparks are finally extinguished in the sky. Users of older computers might prefer to reduce the values for -after-image and -fine. Values of 65–80 will still produce quite useful effects, but below this the whole scene is more likely to remind you of confetti than fireworks. If running the program leads to display problems, you might like to try the -no-direct-draw option. In this case, xfireworks will not draw directly in the root window, but will instead store the current image in a file, which it uses for the background. If you modified the makefile as described in the “Installation” box, you will find descriptions of the individual fireworks in /usr/local/etc/xfireworks.conf. You can use these as templates for designing your own fireworks. Type:

xfireworks -f myfireworks.conf &

to tell xfireworks to use the descriptions in myfireworks.conf, or you can use fireworks from the author’s website [3], or from the subscription CD. ■

PREFIX = /usr/local

Now you can compile the program with:

make
su (type the root password)
make install

This installs the program in /usr/local/bin.

INFO

[1] http://www.euronet.nl/~rja/Xsnow/
[2] http://web.ffn.ne.jp/~hsakai/myfreesoft/#11
[3] http://web.ffn.ne.jp/~hsakai/myfreesoft/xfireworks.html






Out of the Box


Watching the watcher Every Linux system writes logfiles, but who really looks at them on a regular basis? root-tail and tailbeep can help you keep track of critical events. BY CHRISTIAN PERLE

Not reading the logfiles of an Internet machine is just like hiding your head in the sand, and letting the malevolent hackers and script kiddies out there on the Web have their wicked way. Hopefully – cross your heart and hope to die – this is not a problem, but who really enjoys reading all the logfiles? There is a lot less effort involved in running a tool on top of your X Window system to provide an overview of the current log entries. The tool’s name, root-tail, is derived from the fact that the tool behaves like the UNIX tail command, and displays its output transparently on top of the root window, allowing the current background (if any) to remain visible. You can download the sources at http://www.goof.com/pcg/marc/root-tail.html or from the subscription CD. The sources are not the obsolete originals, but debugged versions maintained by Marc Lehmann. The installation steps depend on the distribution you use (Listing 1). In this case, we have focused on making logfiles, such as /var/log/messages, visible to root-tail without having to launch the program with root privileges. If you do not already have an adm group, you will need to create it and

OUT OF THE BOX There are thousands of tools and utilities for Linux. “Out of the box” takes a pick of the bunch and each month suggests a little program which we feel is either absolutely indispensable or unduly ignored.


modify the group membership of the logfiles to match (chgrp adm). You will also need to set root-tail's privileges so that the program is always launched with the adm group ID (chmod 2711).

Just look who’s logging now!
Now you can display the /var/log/messages file in the root window:

root-tail /var/log/messages

If nothing happens, you can always use the logger tool to provoke some output:

logger "This is a test"

The message should appear in the logfile a short while later, where it will be picked up by root-tail and displayed on your desktop. Unfortunately, KDE users still have an issue to deal with, as your favorite desktop environment has the unfriendly habit of covering the root window with its own background images. Any other desktop environment or window manager should get along just fine with root-tail. You can quit the test by pressing [Ctrl-C].

Instead of monitoring just one file, root-tail can monitor multiple files simultaneously. To distinguish between logfiles more easily, the program allows you to define a color for each logfile – colors are entered after each filename in a comma-separated list. The following command

root-tail /var/log/messages,white /var/log/kern.log,green

will display the latest entries in the messages and kern.log files in white or green. root-tail will prepend the filename in square brackets each time it outputs a block of text. The following syntax tells root-tail to monitor three files and provide output in three different colors. Additionally, a shadow is added to the font (-shade) and the size and position of the text output in the root window are defined (-g 80x25+0-52). The program also runs as a daemon in the background, thanks to the -f option:


root-tail -f -g 80x25+0-52 -fn 6x10 -shade /var/log/messages,white /var/log/daemon.log,yellow /var/log/kern.log,green

Figure 1 shows the logfiles for the packet filter as output by root-tail, shortly after terminating the PPP connection.
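A command line like this can also be assembled by a small wrapper script for an arbitrary list of logfiles. This is entirely our sketch – root-tail itself needs no such helper, and the function name and color rotation are invented:

```shell
#!/bin/sh
# Build a root-tail command line for any number of logfiles, cycling
# through a fixed color palette. Only the root-tail options themselves
# come from the article; everything else here is illustrative.
build_root_tail_cmd() {
    colors="white yellow green cyan"
    cmd="root-tail -f -g 80x25+0-52 -shade"
    i=0
    for f in "$@"; do
        i=$((i + 1))
        n=$(( (i - 1) % 4 + 1 ))
        color=$(echo "$colors" | cut -d' ' -f"$n")
        cmd="$cmd $f,$color"
    done
    echo "$cmd"
}

build_root_tail_cmd /var/log/messages /var/log/kern.log
```

Piping the result to sh (or eval-ing it) would launch root-tail with one color per file, in the same comma syntax shown above.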

Beep show
Do you prefer an audible warning signal? tailbeep will allow your Linux system to produce a beep with a user-definable frequency and length when specific character strings occur in a logfile. A tool of this type is particularly useful for headless machines, for example computers without a display or keyboard that normally hang around in office corners, such as dedicated routers. You can download the sources for the program at http://soomka.com/, or as usual, simply access the subscription



CD. Installing the tool is a lot less complicated than in the case of root-tail, as tailbeep does not need to bridge the gap between root privileges and the X display:

tar xzf tailbeep-0.44.tar.gz
cd tailbeep-0.44
make
strip tailbeep
su
(enter root password)
cp tailbeep /usr/local/bin
exit

To test the installation, ensure that you have root privileges and then type the following:

tailbeep -f /var/log/messages -s "test 123" -t /dev/tty12

tailbeep will now wait for the test 123 string to occur in the /var/log/messages file. You can use /dev/tty12 as the terminal for the beep. Now use logger, which we already mentioned, to write the target string to the log:

logger "test 123"

tailbeep should respond immediately. If you do not like the default frequency and length of the beep, you can use the -F frequency (in Hertz) and -M length

Listing 1: Installing root-tail

tar xzf root-tail-0.2.tar.gz
cd root-tail-0.2
xmkmf
make
strip root-tail
su
(type root password)
cp root-tail /usr/local/bin
cp root-tail.man /usr/local/man/man1/root-tail.1

Only for Mandrake, Red Hat, and SuSE:
groupadd adm
chgrp adm /var/log/messages
chmod 640 /var/log/messages

For all distributions:
chgrp adm /usr/local/bin/root-tail
chmod 2711 /usr/local/bin/root-tail
exit

LINUX USER

(in milliseconds) options to change these settings. If your test worked, you can press [Ctrl-C] to terminate the program.

Figure 1: root-tail displaying logfiles in the root window

It makes sense to run tailbeep as a background daemon rather than to launch it manually each time. You might like to write a short init script for this task (see the article on page 48). The tailbeep script on the subscription CD reads the /etc/tailbeep.conf file, supplies defaults for any missing parameters, and sends tailbeep into the background (option -d).

Now type grep -w initdefault /etc/inittab to discover the default runlevel. If id:2:initdefault: is returned (the default for Debian), runlevel 2 will be assumed on booting, so you will need to create a symlink in /etc/rc2.d:

cd /etc/rc2.d
ln -s ../init.d/tailbeep S80tailbeep

SuSE, Red Hat, and Mandrake use runlevel 3 or 5 by default. Additionally, the directory with the init scripts is /etc/rc.d/init.d or /etc/rc.d/rc5.d on these systems.

Now create the configuration file for the tailbeep service. Listing 2 shows an example. The most important line in the sample file, PAT="SRC=", defines the string SRC= as a search pattern, which means that tailbeep will respond to log entries from the iptables packet filter. Assuming logging has been enabled for your packet filter, you will actually be able to hear port scans.

Other search patterns are possible, such as Accepted password for in the /var/log/auth.log file. In this case, the system will respond to valid logins via ssh. Additionally, the -x program_name option allows you to launch a program when the search pattern is discovered. Be careful when choosing this option – do not forget that the program is launched with root privileges. ■

GLOSSARY
tail: Shows the end of a file. In follow-up mode (option -f), it notices additions to a file and updates the display accordingly.
Root window: This window is displayed first when you fire up an X server, and provides the background for the X desktop.
Daemon: “Disk and execution monitor”, a server that runs without interactive input.
Packet filter: A firewall type that is implemented in the Linux kernel, inspects incoming and outgoing network data packets, and decides whether to process or reject them on the basis of pre-defined rules.
PPP: “Point to Point Protocol”, a protocol that sends IP packets across serial lines, such as modems or null modem connectors. PPP is also used for DSL connections with PPP over Ethernet (PPPoE).
Symlink: A special file whose contents are a path (the target pointed to). If you read or write the content of a “symbolic link” created by the ln -s command, the system will in fact access the target the link points to.
Portscan: Automatically rattling on the doors and windows (ports) of a machine connected to the net, to discover the network services listening there. When performed by external users, a port scan is normally the first step in the process of discovering the machine’s security vulnerabilities.
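The watch-and-beep loop at tailbeep's core can be approximated in plain shell. This is a sketch only – tailbeep itself is a C program that beeps via the console – and the function names here are ours:

```shell
#!/bin/sh
# Sketch of tailbeep's idea: follow a logfile and ring the terminal
# bell (ASCII BEL) whenever a search pattern appears in a new line.

# Decide whether a single log line contains the search pattern.
line_matches() {
    case $1 in
        *"$2"*) return 0 ;;
        *)      return 1 ;;
    esac
}

# Follow a file and beep on every matching line (Ctrl-C to stop).
watch_log() {
    tail -f "$1" | while read -r line; do
        line_matches "$line" "$2" && printf '\a%s\n' "$line"
    done
}

# Example invocation (requires read access to the logfile):
# watch_log /var/log/messages "test 123"
```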

Listing 2: /etc/tailbeep.conf

# tailbeep configuration file
#
# File to monitor?
FILE="/var/log/messages"
# Beep whenever iptables logs something:
PAT="SRC="
# 5 kHz frequency
FREQ="5000"
# but only a short beep, to avoid waking up the admin:
DUR="25"
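An init-style wrapper can source this configuration file and assemble the tailbeep command line from it. The following is our sketch of what such a script might do – the variable names follow Listing 2, but the defaults, the function name and the exact option set assembled are assumptions, not the actual script from the CD:

```shell
#!/bin/sh
# Sketch: read a tailbeep config file and build the command line.
# FILE, PAT, FREQ, DUR follow Listing 2; the rest is illustrative.
build_tailbeep_cmd() {
    conf=$1
    # Defaults for any parameters missing from the config file
    FILE=/var/log/messages
    PAT="SRC="
    FREQ=5000
    DUR=25
    . "$conf"
    echo "tailbeep -d -f $FILE -s \"$PAT\" -F $FREQ -M $DUR -t /dev/tty12"
}

# build_tailbeep_cmd /etc/tailbeep.conf
```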

INFO
Root-Tail: http://www.goof.com/pcg/marc/root-tail.html
Tailbeep: http://soomka.com/




WaveTools

Sound advice
MP3 is all the rage, so why bother with wav files? The fact is, every MP3 boils down to a wave descriptor of some kind, and waves are universal. This article shows you the kind of antics you can get up to with wav files and WaveTools. BY VOLKER SCHMITT

The WaveTools are currently at version 1.0. One of the changes in comparison to the previous version (0.9) is that DOS support is no longer available, a fact that actually made me quite fond of WaveTools. Installation from the sources is extremely simple. Ensure that you are root, of course, and create a new directory by typing:

mkdir /usr/local/wavetools

Change into the new directory, mount the subscription CD and expand the archive there:

tar xzf /cdrom/LinuxMagazine/wave/wavetools-1.0-bin.tgz

Then go on to create a few symbolic

Table 1: WaveTools
winf   Displays information for a wav file
wcat   Can concatenate wav files and convert their sampling characteristics
wcut   Cuts areas out of a wav file
wflt   Filters wav files
wfct   Creates wav files
wmix   Mixes wav files
wview  Displays wav files interactively
wplot  Creates a PostScript file with a time/amplitude graph


links to ensure that the WaveTools programs will be in your search path:

for i in wcat wcut wfct wflt winf wplot wview; do
  ln -s /usr/local/wavetools/wavetools-1.0-bin/$i /usr/local/bin/
done

The list in Table 1 provides an initial overview of the features offered by the WaveTools suite.

In the beginning there was the wave
Before we start discussing wav files, let us create a few first. To do so, let us recall the theoretical structure of a wav file. It digitally represents the air pressure modulations that our ears interpret as an acoustic sensation, where increased pressure is represented by positive values, and decreased pressure by negative values. The sampling rate tells you how many times per second the value is determined, and the resolution describes the accuracy with which the sampled values are represented (8 bit, 16 bit, …). The wfct command is required to produce a wav


file. The command offers a variety of options that influence the type of wav file created. In addition to the -o parameter (where the name of the output file is directly appended, as in -ofile.wav), the wave form can be set with the following options:
• -r for a rectangular wave,
• -t for a triangular wave,
• -w for a saw-tooth wave,
• -n for noise,
• -i for rectangular pulses, and
• without any options for a sine wave.
Of course the wave needs to know its length and frequency. These values are also supplied as parameters, with Hz for hertz and s for seconds. All the programs in the WaveTools suite are additionally capable of recognizing and applying the letters:

GLOSSARY
Sampling rate: also referred to as the sampling frequency. The sampling rate of a music CD is 44.1 kHz, for example.
Resolution: also referred to as the quantization. A music CD has a 16 bit resolution and thus provides 65536 different amplitude values. In contrast, an 8 bit recording will provide only 256 values.
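The glossary numbers follow directly from these definitions, and they translate straight into storage cost. A quick shell sanity check (for uncompressed mono PCM; the size formula, rate × bytes-per-sample × seconds, is generic audio arithmetic, not WaveTools-specific):

```shell
#!/bin/sh
# Distinct amplitude levels at a given resolution: 2^bits
levels() { echo $((1 << $1)); }

# Bytes for an uncompressed mono clip: rate * (bits/8) * seconds
clip_bytes() { echo $(($1 * $2 / 8 * $3)); }

levels 8              # 256 distinct amplitude values
levels 16             # 65536
clip_bytes 11025 8 2  # 2 s at wfct's default 11025 Hz, 8 bit: 22050 bytes
```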



• m (for milli),
• c (for centi),
• d (for deci),
• k (for kilo), and
• K (for a factor of 1024)
as qualifiers for units. Thus WaveTools programs will correctly identify 10ms as meaning 10 milliseconds. Therefore a sine wave that produces an A at concert pitch for two seconds is produced by


Figure 1: wfct 440Hz 1s | wplot -s10ms -l10ms -t | display
Figure 2: (wfct 200Hz 10ms; wfct 440Hz 10ms; wfct 1000Hz 10ms) | wmix - - - | wplot | display

-sstarttime and -lduration are available for this job. The command

will tend to flatten the amplitude values, which in itself could be an issue. Again a solution is close at hand in the form of the parameter -n, which normalizes the wav output by raising all the amplitude values so that they reach a maximum peak of 1. Table 4 contains an overview of the options for wmix. Time for a test run: let us take three sine waves, a high frequency wave at 1000 Hz, a standard pitch (440 Hz), and a low frequency wave at 200 Hz. These waves are added by wmix and displayed directly using wplot and display (Figure 2):

wfct -osinus.wav 440Hz 2s
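The unit qualifiers listed above can be mimicked by a small conversion helper. This is our sketch of the parsing WaveTools performs internally; only the m, c, d, k and K prefixes and the Hz and s units are handled:

```shell
#!/bin/sh
# Convert a value with an optional WaveTools-style unit prefix
# (m=1e-3, c=1e-2, d=1e-1, k=1e3, K=1024) to a plain number.
# The unit letters (Hz or s) are stripped first.
to_number() {
    awk -v v="$1" 'BEGIN {
        sub(/(Hz|s)$/, "", v)              # drop the unit itself
        f = 1
        if (v ~ /m$/)      f = 0.001
        else if (v ~ /c$/) f = 0.01
        else if (v ~ /d$/) f = 0.1
        else if (v ~ /k$/) f = 1000
        else if (v ~ /K$/) f = 1024
        sub(/[mcdkK]$/, "", v)
        print v * f
    }'
}

to_number 10ms    # 0.01 (seconds)
to_number 5kHz    # 5000 (hertz)
```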

Additional options are available to define the quality (that is the sampling rate) and the resolution (see Table 2).

Taking a Look
After you have created a wav file to your own specifications, you can use any popular player, such as play or wavp, to play the file via the command line. To visualize the sine wave stored in the wav file in our example, you might like to launch the graphical WaveTools program, wplot. As the output produced by a two second sine wave is slightly difficult to interpret, let’s just plot the first ten milliseconds instead. The wplot options

Table 2: wfct Options
-srate      Sampling rate (default 11025 Hz)
-bquant     Quantization (default 8 bit)
-aampl      Amplitude (default 1 = 100%)
-pphase     Phase shift (default 0 rad/deg/%)
-r          Rectangular wave
-t          Triangular wave
-w          Saw-tooth wave
-n          Noise
-iwidth     Rectangular pulse
-ofile      Output file in wav format
-h          Help
-v          Verbose mode

Table 3: wplot Options
-sstarttime  Start offset
-eendtime    End offset
-lduration   Duration
-t           Real time axis
-wwidth      Width of graph (default 10 cm)
-fsize       Font size (default 10 pt)
-ofile       Output file
-h           Help
-v           Verbose

wplot -s0s -l10ms -osinus.ps sinus.wav

will create a PostScript file that you can easily display by typing display sinus.ps. The modular structure of the WaveTools programs allows you to use them in command line pipes, and they also lend themselves to scripting. In this case you would leave out the -o flag and use the | character to pipe standard output to a different command (see Figure 1). The wplot program has a few additional parameters which are self-explanatory, with the possible exception of -t, which merely adds the offset set by the -s flag to the legend (see Figure 1 and also Table 3).

Waves: Peaks and Valleys
You can mix multiple input files using the wmix program. Mixing in this case means adding (by default) or multiplying (by specifying the -m flag) the amplitude values. Before you start, you should be aware that the WaveTools represent amplitude values in the range [-1,1). Thus, if the addition creates values outside of this range (overmodulation), they would normally be cut off, but wmix is clever enough to reduce the amplitude values to allow the maximum amplitude to be represented. Overmodulation can be prevented by specifying the -s option, which, assuming n input files, will multiply each by 1/n. If you are multiplying anyway, you can omit this step, as values within the range [-1,1) will still be within this range when multiplied. In this case multiplication

(wfct 200Hz 10ms; wfct 440Hz 10ms; wfct 1000Hz 10ms) | wmix - - - | wplot | display

Hey Mr. DJ!
The filter program wflt is the star of the WaveTools suite. It provides a variety of features, such as linear transformation of amplitude values, that is y = y*amplification + bias. The parameters are supplied as -gamplification and -abias. Again you can use the -n flag to normalize the output. Also, three acoustic filters are available: low-pass, high-pass and band-pass. The names of these filters indicate their uses, that is, filtering low and high frequencies, and in the case of the band-pass filter removing a particular (mainly mid-) frequency range from the input (or

Table 4: wmix Options
-n      Normalizes to a maximum value of 1
-s      Multiplies each input by 1/n
-m      Multiplies input instead of adding it
-ofile  Output file
-h      Help
-v      Verbose
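The arithmetic behind wmix's addition and its -s scaling can be illustrated in a few lines of awk. This is a sketch of the sample math only – real samples live in binary wav data, not on the command line, and the clipping ceiling of 0.999969 merely stands for "just below 1":

```shell
#!/bin/sh
# Mix sample values by addition; with scale=1 each input is first
# weighted by 1/n, which is what wmix -s does to avoid overmodulation.
mix() {
    scale=$1; shift
    awk -v scale="$scale" -v n=$# 'BEGIN {
        sum = 0
        for (i = 1; i < ARGC; i++) sum += ARGV[i]
        if (scale) sum /= n
        # clip to the representable range [-1,1)
        if (sum >= 1) sum = 0.999969
        if (sum < -1) sum = -1
        print sum
    }' "$@"
}

mix 0 0.8 0.6   # plain addition: 1.4 clips to just below 1
mix 1 0.8 0.6   # each input scaled by 1/2 first: 0.7
```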




to be more precise, lowering and raising amplitudes in this range). Filters of this kind are also used in practical applications, as in the case of two- or three-way speakers, where they are used as diplexers. This allows the tweeters to mainly play higher frequencies and leaves the basses to the woofers. As you might have guessed, specific frequencies are assigned as reference points for the frequencies allowed to pass through the filter. And it is precisely these values that wflt expects as a parameter for each filter (see Table 5). Thus, if you want wflt to amplify the frequency range below a specific frequency, 300 Hz for example, and reduce any higher frequencies, you need the following syntax:

wflt -l300Hz -ofile input

A similar syntax is used to set the high-pass filter (option -f) and the band-pass filter (option -b). The band-pass filter, which merely raises the values in the frequency range by a specified value, also allows you to define its scope using the option -w. Experiment with different values; in fact you can control wav files in a similar way to twiddling the dials on your equalizer at home.

Optical Analysis
Now that we have got to know various filters, let’s get back to our original experiment with wmix; it involved three sine waves that we superimposed. It would be interesting to find out whether the filters are capable of separating the three original waveforms again. To find out, I have programmed a short script called frequencysplit.sh (you will find it on the subscription CD). The script expects the frequencies (without units, i.e. Hz) and the duration (without units, i.e. ms). You can optionally supply a fifth parameter, -t, -r, or -w, to define a different waveform (compare to wfct). Calling

frequencysplit.sh 200 440 1000 10

will create wav and ps files for 10 millisecond sine waves at 200 Hz, 440 Hz, and 1000 Hz. The script then goes on to add and normalize these files, before splitting them again via high-, low- and band-pass filters. The results are again stored in wav and ps files (as waveXY.wav and waveXY.ps). You can then view the wave files (see Figure 3) by launching the disp_freq.sh script.

Figure 4: Playing wavemix.wav with XMMS

Table 5: wflt Options
-mrange          Midrange filter
-lfrequency      Low-pass filter
-ffrequency      High-pass filter
-bfrequency      Band-pass filter
-wscope          Scope of band-pass
-gamplification  Amplification
-abias           Offset to zero
-c               Center file
-n               Normalize file
-r               Play file backwards
-i               Invert file
-ofile           Output file
-h               Help
-v               Verbose
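The linear transformation behind -g and -a is simply y' = y × amplification + bias, applied per sample, with the result clipped to the [-1,1) range the WaveTools use. Sketched in awk (an illustration of the formula, not wflt's actual code; the clipping ceiling 0.999969 stands for "just below 1"):

```shell
#!/bin/sh
# Apply wflt's linear transform y' = y*gain + bias to one sample
# value, then clip the result to [-1,1).
transform() {
    awk -v y="$1" -v g="$2" -v b="$3" 'BEGIN {
        y = y * g + b
        if (y >= 1) y = 0.999969
        if (y < -1) y = -1
        print y
    }'
}

transform 0.5 1.5 0    # amplify by 1.5: 0.75
transform 0.5 2 0.2    # 1.2 would overmodulate, so it is clipped
```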

Experiment with different frequencies when you run frequencysplit.sh, or change the range in the script (you may find that large values will tend to flatten the results). If you want to run disp_freq.sh in order to investigate the wave files more closely, you should choose a low value for the duration, such as 10. If you prefer to listen to the wav files, you will of course need a longer duration, for example 3000 for 3 seconds. Tip: you might like to use a program such as XMMS if you want to view the frequencies while also listening to the output. Figure 4 clearly shows the three peaks of the original low, mid-range, and high frequency waves, which were superimposed in the wavemix.wav file. We will be looking at cosine waves, additional WaveTools programs and a few interesting applications for them in future issues. ■

INFO
[1] WaveTools: http://tph.tuwien.ac.at/~oemer/wavetools.html

Figure 3: frequencysplit.sh 200 440 1000 10; disp_freq.sh

THE AUTHOR


Volker Schmitt is a mathematician and works for a large insurance company. His previous experience with waves was mainly in the form of Fourier transformations in the context of analytic number theory, and as produced by the PA system down at the local disco.



COMMUNITY

Linux Bangalore 2002

On the Linux move
Take some 2000 Linux enthusiasts, place them in the region known as the Silicon Valley of South Asia, pepper it up with significant funding support – what you have is Linux Bangalore 2002.

Held in early December for the second year, this event is shaping up into India’s most ambitious Open Source event. In terms of numbers, nothing comes close to challenging it. Still, for a sub-continent-sized country of India’s dimensions, a lot more could perhaps be done to rope in the real diversity of GNU/Linux. “We have more talks per minute than many international conferences. It’s a big thing … This year we tried our level best to keep the audiences out, and we failed again. We were about two times over-subscribed in terms of (seating) capacity,” said Atul Chitnis, a key mover and decision-maker behind the event hosted by Bangalore’s LUG.

Figure 1: Indic computing solutions on the GNU/Linux front seem closer than ever

LB/2002’s 70-plus talks were drawing attention globally too. The website http://linux-bangalore.org/2002/ had drawn some 100,000 hits, most of them from outside the country. Like last year, the meet drew in the numbers. Long queues were visible on Day One, a little shorter than last year’s, perhaps because the three-day event was priced at Rs 300 (£4, €6) unlike the free entry of the past. It continued to get high-level corporate support. HP and IBM were the top corporate sponsors. “What really tickles me pink is that the Government of India (a sponsor this year) is saying, ‘Hey guys, you are doing a good thing,’” added Chitnis.

Even Microsoft went there
One surprise speaker was Microsoft, speaking about their “shared source” alternatives. Organisers stressed that those representing the company were technical people, and hence questions about licensing issues would not be taken. Programmers from local software firms and students were in the majority. Most were duly impressed by the quality of speakers, and the information imparted in parallel sessions that sometimes went on in five halls simultaneously. Tracks were available in development, sysadmin, users and the business sector, emerging issues, government and education, and kernel-related trends. Sponsors also gave keynotes, ranging from the inspiring (for example Brij Sethi of HP, preaching how to achieve personal excellence in an Open Source world) to the somewhat boring. Delhi-based 24-year-old Naba Kumar, who spoke on C programming under Linux, didn’t fail to impress the youngsters in the crowd with what is possible at his age. Naba is a GNOME developer, and founder of the Anjuta project, an integrated development environment. Other talks dealt with Qt programming, super-computing clusters, embedded Linux, e-governance, Linux in robotics, and much else. There were discussions on Indian language computing.

Figure 2: Affordable prices of GNU/Linux software are a reason for its popularity in a region like India

There was even a rock show at the end of the three-day meet, in a city where the younger generation is fast moving into the globalised culture of pizzas and pubs.

Local users predominated
In some ways, much more could have been done. Talks were largely by volunteers, meaning that a number of potential quality inputs went un-invited and overlooked. Despite its goal of being a national meet, some LUGs across India might not have learnt of it in time. Besides, this is marketed strictly as an Open Source event, keeping proponents of the parallel Free Software movement – already incorporated as a company, and mainly active around the neighbouring state of Kerala – largely outside its activities. “They could have at least shown the two potential paths (of Open Source and Free Software). Here, it was made out as if there is just one path available,” said Mitul Limbani, the CTO of Enterux Solutions. Based in India’s commercial capital of Mumbai, Enterux are consultants and solution providers for Free and Open Source-based systems. “It’s very well organised, except that on the first day there was too much pressure (long queues) for food. This event is better organised than many back home,” said Israeli embedded solutions expert Roi Hadar. Anil Bajaj of Anil Electricals, from Bangalore’s plush Mahatma Gandhi Road area, felt that the Bangalore user group was “not talking about giving free support”. Said Anil, who works in hardware: “We should create a community where support is available easily (at low cost or for free). Unless we have a revolution by our youth, things will not improve drastically.” ■

www.linux-magazine.com

February 2003

91



Brave GNU World

The monthly GNU column

Welcome to another issue of the Brave GNU World, this time with an eye on tools to help make your day easier. BY GEORG C. F. GREVE

Since coping with administration is certainly among the most common Linux needs, the following feature has been moved to the top.

Lire
It is tradition on Unix systems that all proceedings and activities of system services like web servers, mail servers, name servers, databases and many more are written into logfiles. This protocol of activities allows system administrators to monitor their systems closely. The logfiles can quickly become fairly large, which makes handling them hard. Although they are usually in ASCII format, a file of several megabytes cannot really be completely grasped by a human being. On top of this, data only becomes information when approached with a certain question. A significant part of the data will be irrelevant to the question, so in practice this means that information is usually buried under irrelevant data and therefore almost inaccessible. This problem has occurred in many places for several years now and has triggered the development of programs to aid people in the analysis of logfiles. So on April 6th, 2000, several computer scientists from Dutch companies got together to discuss the tedious task of log analysis. It became apparent that each of their companies had created solutions that were merely duplicating efforts already completed in other companies. In order to end this multiplication of work, the LogReport team began writing a program as Free Software under the GNU General Public License (GPL) which would accomplish these tasks reliably in a professional environment. Two years later, Lire [1] was published. Like Douglas Adams’s “Electric Monks”, which free humans from the boring task of believing, it is the goal of Lire to free people from the tedious task of logfile reading. Hence the name, because the French word “lire” means “to read.”


The program is written in Perl and Bash, massively employing XML, and it works in four steps. First, logfiles are normalized into a “Distilled Log Format” (DLF) in preparation for the second step, where they are analyzed by generic tools which can be used across services. The output format of those tools is XML, which in the fourth step can then be translated into one of the final formats. Currently, Lire has input filters for 29 different services, and counting. A new service can simply be added by writing a converter into the DLF format. A special advantage of Lire is that it allows you to compare different implementations of the same service, like the MTAs exim and postfix. The project has already proven itself to perform well in companies with logfiles of several gigabytes for tasks like performance measuring, system maintenance, problem solving and
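The normalization stage can be pictured with a toy converter: one rule per input format, all emitting the same field layout, so every later stage can stay service-independent. This is purely illustrative – Lire's real converters are Perl programs and its DLF schemas are far richer; the field names below are invented:

```shell
#!/bin/sh
# Toy version of Lire's first stage: turn a service-specific log line
# into a common field layout. Field names are invented for illustration.
normalize_apache() {
    # common log format: host ident user [time] "request" status bytes
    awk '{ printf "client=%s status=%s bytes=%s\n", $1, $(NF-1), $NF }'
}

echo '10.0.0.1 - - [01/Feb/2003:12:00:00 +0000] "GET / HTTP/1.0" 200 1043' \
    | normalize_apache
```

A second converter for, say, a mail log would emit the same `field=value` layout, which is exactly what lets one analysis tool serve 29 different services.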

marketing, so with this in mind it can be considered stable. According to Josh Koenig, who filled out the Brave GNU World questionnaire, the biggest weakness is currently the API, which is not easily understood or well-documented. Besides a user-friendly GUI, this is a major concern of further development. Help in these areas, as well as filters for new services, is very welcome. The group also seeks help making Lire popular, especially in medium to large companies. The hard core of the LogReport development team consists of Joost van Baal, Francis Lacoste, Egon Willighagen, Josh Koenig and Wessel Dankers, although many developers from different countries around the world have contributed. The project is maintained by the LogReport Foundation, a charitable association in the Netherlands. Besides being technically useful, this project also offers a very nice example of one of the most important economic advantages of Free Software: the prevention of repetition of work.

GNU Source Highlight

Lire architecture


GNU Source Highlight [2] by Lorenzo Bettini takes source code and creates syntax-highlighted output in HTML or XHTML. It has evolved out of the tools java2html and cpp2html, which were introduced in issue #21 of the Brave GNU World and have been merged into GNU Source Highlight. Currently, input filters exist for Java, C/C++, Prolog, Perl, PHP3, Python, Flex and ChangeLog. Filters for other languages can be added, however. The project itself was written in C++ and is stable according to Lorenzo Bettini. He is now working on a new



output format (LaTeX) and would like to write a better description language for programming languages in order to replace Flex, which is currently used for this purpose. Most of the support he received for this project was in the form of filters for different programming languages written by other developers. John Millaway, for instance, wrote the filters for Flex and ChangeLog, Christian W. Zuckschwedt and Josh Hiloni contributed the XHTML output, and Martin Gebert wrote the Python filter. Alain Barbet wrote the filters for PHP3 and Perl. The major weakness of the project at the moment is that references to functions cannot be mapped to their definitions, as only lexical analysis is performed. Fixing this and writing more filters would therefore be good ways of supporting the project. Naturally, developers using GNU Source Highlight as a command-line tool or interactive CGI on the web are the classical user group of the project, but there are also users who just appreciate a good graphical user interface.
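Because the analysis is purely lexical, the essence of such a highlighter fits in a single sed substitution (a toy, not the real program; GNU sed syntax for \b and \|):

```shell
#!/bin/sh
# Toy lexical highlighter: wrap a few C keywords in <b> tags.
# A pattern-only pass like this knows nothing about scope, which is
# why it cannot link a function reference to its definition.
highlight_c() {
    sed -e 's/\b\(if\|else\|while\|return\|int\)\b/<b>&<\/b>/g'
}

echo 'if (x) return interval;' | highlight_c
```

Note that `interval` is left alone thanks to the word-boundary anchors, even though it starts with the keyword `int` – the matching is still purely textual.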

Ksrc2html
Ksrc2html [3] by Martin Gebert is a graphical user interface for GNU Source Highlight; it is also available under the GNU General Public License (GPL), which makes it Free Software. As the name suggests, Ksrc2html is based upon C++, Qt and KDE 2; an update to KDE 3 is planned. Ksrc2html offers a formatting preview to allow better control over the parameters. Also, settings for colors and

Ksrc2html main window


font types can be made interactively and saved for later use. Thanks go to Xavier Outhier, who took care of the French translation. Martin considers the project to be stable, although he does plan to expand the dialog for colors and font types in a way that will allow adjustment for different programming languages. He would like it to be known that help with the KDE 3 port would be welcome.

FSIJ

Free Software in Asia
As our Asian readers will probably be happy to read, the “Free Software Initiative Japan” (FSIJ) [4] was founded on July 10th, 2002. It seeks to further Free Software in Japan and create the basis for a future FSF Japan or FSF Asia. Chairman of the FSIJ is Prof. Masayuki Ida, who acted as the “Vice President Japan” of the Free Software Foundation North America for a long time, and with whom members of the Free Software Foundation Europe led intensive discussions during his trip through Europe last year. In order to provide an impulse for Free Software in Japan, the FSIJ organized the “Free Software Symposium 2002” in Tokyo on October 22nd and 23rd. Being the first event of its kind in Asia, speakers from China, Thailand, Japan, Singapore, Germany, Italy and the USA were invited to provide an interesting conference programme. Besides the more technically oriented presentations about Debian, the HURD project and RedFlag Linux, the Chinese GNU/Linux distribution, there were also speeches about the larger issues of Free Software and the situation in both Asia and Europe. The round table on the evening of October 22nd discussed better international co-operation on the internationalization of programs and documentation, as well as the possibility of a solution-oriented database for Free Software. Even though these issues would certainly not be solved in two hours, some practical ideas were found that are now being pursued by mail. All in all this was an important step forward for Free Software in Asia, which also intensified the dialog between the Asian countries. Building upon it, it is

planned to hold a follow-up event around February or March 2003 in Thailand. Maybe it will be possible to establish these events as a permanent institution wandering from country to country in Asia. It is very good to see that Free Software is also on the rise in Asia. Asian readers of the Brave GNU World who would like to get involved should probably get in touch with the FSIJ or GNU China [5].

Until the next time

Enough Brave GNU World for this month. Although the repetition might cause some readers to skip over it, as every month I am asking for questions, ideas, comments and mails about interesting GNU projects. Despite the danger of being buried under even more mail, I’d like to ask you a concrete question: in reference to Douglas Adams, what is the most important question to which Free Software provides the answer? Like everything else, please send your questions to the usual address [6].

INFO
[1] Lire home page: http://www.logreport.org
[2] Source Highlight home page: http://www.gnu.org/software/src-highlite/
[3] Ksrc2html home page: http://murphy.netsolution-net.de/Ksrc2.html
[4] Free Software Initiative of Japan: http://www.fsij.org
[5] GNU China: http://www.gnuchina.org
[6] Home page of Georg’s Brave GNU World: http://brave-gnu-world.org

Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org

www.linux-magazine.com

February 2003

93


Events / Advertiser Index / Call for Papers

LINUX MAGAZINE

Call for Papers

Linux Events

Event                           Location                  Date               Web Site
Spam Conference                 Cambridge, MA, USA        Jan 17, 2003       www.spamconference.org
LinuxWorld Conference & Expo    New York, NY, USA         Jan 21–24, 2003    www.linuxworldexpo.com
Linux.conf.au                   Perth, WA, Australia      Jan 22–25, 2003    conf.linux.org.au
SAINT-2003                      Orlando, FL, USA          Jan 27–31, 2003    www.saint2003.org
FOSDEM 2003                     Brussels, Belgium         Feb 8–9, 2003      www.fosdem.org
NordU/USENIX 2003               Västerås, Sweden          Feb 10–14, 2003    www.nordu.org
Desktop Linux Summit            San Diego, CA, USA        Feb 20–21, 2003    www.desktoplinux.com/summit
LinuxPark CeBIT 2003            Hannover, Germany         Mar 12–19, 2003    www.cebit.de
PyCon DC 2003                   Washington, DC, USA       Mar 26–28, 2003    www.python.org/pycon
Ruby Con                        Dearborn, MI, USA         Mar 28–30, 2003    www.rubi-con.org

We are always looking for article submissions and new authors for the magazine. Although we will consider articles covering any Linux topic, the following themes are of special interest:
• System Administration
• Useful hints, tips and tricks
• Security, both news and techniques
• Product Reviews, especially from real world experience
• Community news and projects

Advertiser Index

Advertiser                      Web Site                        Page
1&1                             oneandone.co.uk                 11
Cyclades                        www.cyclades.co.uk              Outside Back Cover
Dedicated Servers               www.dedicated-servers.co.uk     7
Digital Networks                www.dnuk.com                    39
FOSDEM 2003                     www.fosdem.org                  41
GeCAD Software                  www.ravantivirus.com            15
Hewlett Packard                 www.hplinuxworld.com            Inside Front Cover
LinuxPark CeBIT                 www.cebit-info.de               Inside Back Cover
Linux Magazine Back Issues      www.linux-magazine.com          77
Linux Magazine Subscription     www.linux-magazine.com          Bind-in 66–67
Red Hat Europe                  www.europe.redhat.com           13


If you have an idea for an article, please send a proposal to edit@linux-magazine.com. The proposal should contain an outline of the article idea, an estimate of the article length, a brief description of your background, and your complete contact information.

Articles are usually about 800 words per page, although code listings and images often reduce this amount. The technical level of the article should be consistent with our typical content.

Remember that Linux Magazine is read in many countries, and your article may be translated for use in our sister publications. It is therefore best to avoid slang and idioms that might not be understood by all readers. Be careful when referring to particular dates or events in the future: many weeks will pass between the submission of your manuscript and the final copy in the reader’s hands.

When submitting proposals or manuscripts, please use a subject line that helps us to quickly identify your email as an article proposal for a particular topic. Screenshots and other supporting materials are always welcome. Don’t worry about the file format of the text and materials; we can work with almost anything.

Please send all correspondence regarding articles to edit@linux-magazine.com. ■


Subscription CD

The CD-ROM with your subscription issue contains all the software listed below, saving you hours of searching and downloading time. On this month’s subscription CD-ROM we start with the latest development software to hit the servers. Alongside KDevelop, we have included all the files that we mention in the magazine, in the most convenient formats.

KDevelop

KDevelop is an integrated Linux development environment aimed at producing Linux applications in the easiest possible way. It features:
• Project management: The project file keeps all the information for your project files, such as file properties (include in or exclude from distribution), and projects can be created and changed individually. The generated projects are autoconf/automake-compatible.
• Dialog editor: KDevelop provides an easy way to create GUI interfaces with the built-in dialog editor. You can let KDevelop generate the dialog source code and take full control of the dialog functionality.
• Class parser / class tools: The class view currently parses almost any C++ and C statement, nested classes, structures within classes and operators, as well as namespaces. It is also possible to add methods and attributes using the class tools dialogs.
• Integrated debugger: KDevelop 1.1 provides a complete integrated debugger which lets you use KDevelop’s class viewer even more efficiently. When debugging you can easily access the source code to set breakpoints and watch variables of your application.
• Graphical class viewer: The graphical class viewer gives you an overview of your project and all of your classes.
• Application wizard: The KAppWizard generates different application frameworks for new programs: a standard KDE application with menubar, toolbar and statusbar; a mini KDE application with an empty main window; a complete GNOME application; a Qt-only application, also with menubar, toolbar and statusbar; and finally a C/C++ terminal application.

SpamAssassin

SpamAssassin is a mail filter which attempts to identify spam using text analysis and several internet-based realtime blacklists. Using its rule base, it applies a wide range of heuristic tests to mail headers and body text to identify “spam”, also known as unsolicited commercial email.

Once identified, the mail can then optionally be tagged as spam for later filtering using the user’s own mail user agent. SpamAssassin typically differentiates successfully between spam and non-spam in between 95% and 99% of cases, depending on what kind of mail you get.
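The tag-then-filter workflow described above is commonly wired up with procmail. A minimal sketch, assuming SpamAssassin and procmail are installed (the `spam` folder name is just an example):

```procmail
# ~/.procmailrc (sketch)

# Pass every incoming message through SpamAssassin so it can
# add its X-Spam-* headers.
:0fw
| spamassassin

# File anything SpamAssassin flagged as spam into a separate folder;
# everything else falls through to normal delivery.
:0:
* ^X-Spam-Flag: YES
spam
```

The same headers can instead be matched by filter rules in the user’s own mail client, which is what the paragraph above means by filtering with the mail user agent.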

WaveTools

WaveTools is a software library consisting of eight programs for manipulating mono WAV files. It was written as a toolbox for generating and preprocessing small test samples. If you want to write your own effect filters or sound analysis tools and don’t want to mess around with format conversions or standard input filters, or if you just want to arrange some WAVs for your voice modem, you will find it useful.

Subscribe & Save

Save yourself hours of download time in the future with the Linux Magazine subscription CD! Each subscription copy of the magazine includes a CD like the one described here free of charge. In addition, a subscription will save you over 16% compared to the cover price, and it ensures that you’ll get advanced Linux know-how delivered to your door every month. Subscribe to Linux Magazine today!

Order online at www.linux-magazine.com/Subs, or use the order form between p66 and p67 in this magazine.

RAV AntiVirus

The evaluation version is fully functional for a period of 30 days. During the evaluation period you can test and learn about all the product’s capabilities without restriction. RAV provides both an intuitive graphical user interface and a command line for expert users.

Games

Our games selection gives you the opportunity to relax and play one of these four fine 3D games:
• BillardGL: This one requires an nVidia or equivalent OpenGL graphics card, as it relies heavily on hardware acceleration.
• Trackballs: Requires the SDL libraries. In this game you guide a marble around mazes, avoiding the obstacles within a given time.
• Spheres of Chaos: This game is based on Asteroids but is much more colorful.
• Pachi el Marciano: The last game in the series is a platform game in which you have to collect all the objects on each level. ■




Next month

March 2003: Issue 28

Editor: John Southern, jsouthern@linux-magazine.com
Assistant Editor: Colin Murphy, cmurphy@linux-magazine.com
International Editors: Patricia Jung, pjung@linux-magazine.com; Heike Jurzik, hjurzik@linux-magazine.com; Ulrich Wolf, uwolf@linux-magazine.com
International News Editors: Leon Brooks, Stephanie Cooke, Armijn Hemel, Patricia Jung, Davyd Madeley, Philip Paeps
Contributors: Konstantin Agouros, Zack Brown, Marius Aamodt Eriksen, Steven Goodwin, Georg C. F. Greve, Carsten Grohmann, Peer Heinlein, Kurt Huwig, Heike Jurzik, Andreas Kneib, Charly Kühnast, Achim Leitner, Nico Lumma, Oliver Much, Colin Murphy, Andrea Müller, Amon Ott, Christian Perle, Niels Provos, Bernhard Röhrig, Volker Schmitt, Marc André Selig, Dirk von Suchodoletz, Stefanie Teufel, Dean Wilson
Production Coordinator: Hans-Jörg Ehren, hjehren@linux-magazine.com
Layout: Judith Erb, Elgin Grabe, Klaus Rehfeld
Cover Design: Pinball Werbeagentur

Advertising: www.linux-magazine.com/Advertise
Sales (all countries except Germany, Austria, Switzerland): Brian Osborn, ads@linux-magazine.com, phone +49 651 99 36 216, fax +49 651 99 36 217
Sales (Germany, Austria, Switzerland): Osmund Schmidt, anzeigen@linux-magazine.com, phone +49 6335 9110, fax +49 6335 7779
Management (Vorstand): Hermann Plank, hplank@linux-magazine.com; Rosie Schuster, rschuster@linux-magazine.com
Project Management: Hans-Jörg Ehren, hjehren@linux-magazine.com

Subscription: www.linux-magazine.com/Subs
Subscription rate (12 issues including monthly CD): United Kingdom £39.90; Other Europe Euro 64.90; Outside Europe – SAL (combined air/surface mail transport) Euro 74.90; Outside Europe – Airmail Euro 84.90
Phone +49 89 9934 1167, fax +49 89 9934 1199, subs@linux-magazine.com

Linux Magazine, Stefan-George-Ring 24, 81929 Munich, Germany
info@linux-magazine.com, phone +49 89 9934 1167, fax +49 89 9934 1199
www.linux-magazine.com – Worldwide
www.linuxmagazine.com.au – Australia
www.linux-magazine.ca – Canada
www.linux-magazine.co.uk – United Kingdom

While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine or any material provided on it is at your own risk. The CD is thoroughly checked for any viruses or errors before reproduction.

Copyright and Trademarks © 2002 Linux New Media Ltd. No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example letters, e-mails, faxes, photographs, articles and drawings, is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. Linux is a trademark of Linus Torvalds. ISSN 1471-5678. Printed in Germany.

Linux Magazine is published monthly by Linux New Media AG, Munich, Germany, and Linux New Media Ltd, Manchester, England. Company registered in England. Distributed by COMAG Specialist, Tavistock Road, West Drayton, Middlesex, UB7 7QE, United Kingdom.

Next month highlights


Samba control

Practical help and tutorials on using Samba to connect Windows clients. Windows users can access file and print services without knowing that those services are being offered by a Unix host. Samba is an open source CIFS implementation. We cover everything you have ever wanted to know about domains and authentication.
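As a taster of what the tutorials will cover, a share that Windows clients can browse takes only a few lines of smb.conf. This is a minimal sketch; the share name and path are invented for illustration:

```ini
# /etc/samba/smb.conf (minimal sketch; names and paths are examples)
[global]
   workgroup = WORKGROUP
   security = user          ; authenticate against Unix user accounts

[shared]
   comment = Example file share
   path = /srv/samba/shared
   read only = no           ; allow authenticated users to write
```

After editing the file and restarting the Samba daemons, the `shared` folder appears to Windows machines like any other network share.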

Privileged Information

Read, write and execute file privileges are explained in nearly every Unix or Linux manual. Hiding in the system, however, is more: special permission bits such as SUID, SGID and the notorious sticky bit. The sticky bit really does make directories sticky: files stored in a sticky directory can only be deleted by their owners, even if other users have write privileges for the directory. We show you how to control this useful feature.

Practical Networking

Networking plays an important role in today’s IT world. So important, in fact, that we will give step-by-step guides to setting up networks with your distribution. The hands-on workshop will explain quickly and simply what you need to do. We cover a broad spectrum of techniques, from printing over a network to using web-based administration tools. The software is explained from the basics, so you can be up and running quickly.

IceWM

The Ice Window Manager has now been around long enough to be considered stable and feature-full. IceWM is a small but powerful window manager for the X11 Window System whose main goals are being comfortable to use, simple and fast, and not getting in the way. IceWM delivers full GNOME compliance and partial KDE compliance.
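The sticky-bit behaviour described under “Privileged Information” can be tried out in a few shell commands; the directory name here is just an example:

```shell
# Create a world-writable directory and set the sticky bit (mode 1777,
# the same combination used for /tmp on most systems).
mkdir -p shared
chmod 1777 shared

# The trailing "t" in the mode string marks the sticky bit.
ls -ld shared

# From now on, any user may create files in shared/, but a file can be
# removed or renamed only by its owner (or root), even though the
# directory itself is writable by everyone.
```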


Bochs

Bochs is a highly portable open source PC emulator written in C++. It includes emulation of the Intel x86 CPU, common I/O devices and a custom BIOS. Bochs is currently capable of running most operating systems inside the emulation, including Linux, Windows 95, DOS and, more recently, Windows NT 4. Typically this allows you to run operating systems and software within the emulator on your workstation, almost as if you had a machine inside a machine.

On Sale: 8 February

