Linux Magazine UK – Issue 21




COMMENT

General Contacts

General Contacts: www.linux-magazine.co.uk
General Enquiries: 01625 855169
Fax: 01625 855071
Subscriptions: subs@linux-magazine.co.uk
Email Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk
CD: cd@linux-magazine.co.uk

Editor

John Southern jsouthern@linux-magazine.co.uk

Assistant Editor

Colin Murphy cmurphy@linux-magazine.co.uk

Sub Editor

Gavin Burrell gburrell@linux-magazine.co.uk

Contributors

Alison Davies, Richard Ibbotson, Dean Wilson, Frank Booth, Robert Morris, Formi, Steven Goodwin, Janet Roebuck, David Tansley, Bruce Richardson

International Editors

Harald Milz hmilz@linux-magazin.de
Hans-Georg Esser hgesser@linux-user.de
Ulrich Wolf uwolf@linux-magazin.de

International Contributors

Björn Ganslandt, Georg Greve, Anja Wagner, Patricia Jung, Stefanie Teufel, Christian Perle, Hagen Hoepfner, Andreas Jung, Dr Jan Wuerthner

Design

Advanced Design

Production

Rosie Schuster

Operations Manager

Debbie Whitham

Advertising

01625 855169
Kenny Leslie, Sales Manager: kleslie@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt: Osmund@Ohm-Schmidt.de

Publishing

Publishing Director
Robin Wilkinson rwilkinson@linux-magazine.co.uk

Subscriptions and back issues: 01625 850565
Annual Subscription Rate (12 issues)
UK: £44.91. Europe (inc Eire): £59.80. Rest of the World: £77.00
Back issues (UK): £6.25

Distributors

COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE

Print

R. Oldenbourg

Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England.

Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent – for example, letters, emails, faxes, photographs, articles, drawings – is supplied for publication or licence to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.

Disclaimer Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine or any material provided on it is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.

Technical Support Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.

Current issues

PRIME TIME

IBM recently hit the TV screens with a Linux advert. It took me a few seconds to realise what was being advertised, then the reality sank in. IBM obviously believes the general public is finally ready to understand Linux, and that corporate buyers will think of IBM first for all their Linux needs. It’s a refreshing change to see a marketing department that realises the public is astute and capable of making up its own mind.

It is a shame that the UK government doesn’t have the same foresight as IBM. In its new policy of Open Government the Gateway initiative has been introduced, which will force departments to use BizTalk servers and Windows 2000 Advanced Servers. This policy now forces every council to also buy the same servers or be left out of the loop, with no way to communicate with the proprietary protocols. I am sure it was an equitable deal though...

There again, maybe IBM was appealing to Peruvian nationals who, thanks to a letter by a Peruvian congressman, reminded us all of the basic fundamentals of civilisation. The Republic of Peru is considering a government bill on Free Software in Public Administration. Microsoft wrote a letter expressing its concern that this would be a bad thing for trade, the world economy, the defence of freedoms and so on. The subsequent reply by the congressman was a wonderful defence of Open Source. You can find the letters at http://pimientolinux.com/peru2ms/.

Keep fighting for freedom

John Southern, Editor

We pride ourselves on the origins of our magazine, which lie at the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We’re not simply reporting on the Linux and open source movement – we’re part of it.



NEWS

LINUX NEWS

Red Hat 7.3 unveiled

In the latest version of Red Hat Linux, 7.3, you will find added support for new productivity tools, personal firewall configuration at installation and video conferencing software. Red Hat boasts that this package will deliver everything individual users, educational institutions and small businesses need for flexible Internet-based computing. “Small businesses and enthusiasts are looking for a combination of performance, ease-of-use and value in their technology infrastructure, which is hard to find with today’s proprietary operating systems,” said James Prasad, vice president and general manager EMEA at Red Hat. Some of the key features included are:
● KDE 3.0 and GNOME 1.4 desktops
● Evolution email client and contact manager
● GnomeMeeting video conferencing software
● Apache 1.3 Web server
● Firewall configuration
● PostgreSQL relational database management system

Caldera praised for Linux support

Caldera Education Services, along with Support Services, Professional Services and Online Services, forms the Caldera Global Services division of Caldera International. Caldera Global Services was recently selected by Network Computing as Editor’s Choice for Linux support, and had a strong showing in the category of Directory Based Application at Network Computing’s annual Well-Connected Awards at NetWorld+Interop 2002, held in Las Vegas. Caldera Education Services, which is one part of Caldera Global Services, provides courses designed to meet the demands of IT professionals who need to get Linux solutions up and running within their business environments. A variety of courses are available through the global network of Caldera OpenLearning Providers.

Info
Red Hat Web site: http://www.europe.redhat.com/products/

Mandrake 8.2 on the shelves

MandrakeSoft is now shipping Mandrake Linux 8.2 in three retail boxed sets:
● Standard – £39.99+VAT
● PowerPack – £59.99+VAT
● ProSuite – £149.99+VAT
Standard comes with 30 days of Web-based support; PowerPack has seven CDs, two manuals and 60 days of Web support; and ProSuite has eight CDs, one DVD, two manuals and 90 days of Web support, plus two tech support phone calls as well as two update CDs with security fixes during the product’s life.

Py: A Python technical journal

All those Python users and developers out there now have their own paper-based independent technical journal. Launched in April, this bi-monthly magazine from editor and publisher Bryan Richards hopes to give the Python development community another firm platform for news and views. The April issue included articles on Scientific Python and Simple CGI Template processing. Bryan Richards is hoping to include five articles an issue and is keen to make contact with writers.

Info
Caldera Global Services: http://www.caldera.com/services
The MandrakeSoft Web site: http://www.mandrakesoft.com
The Py Web site: http://www.pyzine.com/



Books from O’Reilly

The specialist publisher O’Reilly has once more come up with some new books to tempt you to fill your shelf space with spines of a similar theme.

J2ME in a Nutshell – Java for the embedded world

A companion work to Java in a Nutshell and Java Foundation Classes in a Nutshell, J2ME in a Nutshell by Kim Topley is aimed at developers who want to get the most out of Java on embedded platforms. “To the experienced Java developer, J2ME (the Java 2 Micro Edition) looks just familiar enough to be tempting, but just different enough to warrant caution. The 478 pages of J2ME in a Nutshell provide a solid, no-nonsense reference to the ‘alphabet soup’ of micro edition programming, covering the CLDC, CDC, KVM and MIDP APIs. The book also includes tutorials for the CLDC, KVM, MIDP and MIDlets, MIDlet user interfaces, networking and storage, and advice on programming small handhelds. Combined with O’Reilly’s classic quick reference to all the core micro-edition APIs, this is the one book that will take you from curiosity to code with no frustrating frills in between.”

Java Web Services – Realising the potential of Web services

Java has helped to shape the Internet, but in an ad hoc fashion. The development of Web services is enabling structure to form part of this development. In its 276 pages, David A Chappell and Tyler Jewell explain the use of Java Web Services, showing the reader how to use SOAP to perform remote method calls and message passing. It also covers topics such as the use of WSDL to describe the interface to a Web service or understand the interface of someone else’s service, and how to use UDDI to advertise (publish) and look up services in either a local or global registry. Security issues with Web services are addressed, as are issues involving interoperability and integration with other Java enterprise technologies such as EJB, and with Microsoft’s .NET services.

Web Database Applications with PHP & MySQL – Online database applications

Hugh E Williams and David Lane’s book offers detailed information on designing relational databases and on Web application architecture, with a mixture of theoretical and practical information. The ability to integrate large databases into Web applications was necessary for sites such as eBay, Amazon.com and CNN.com to come about. Their popularity, and the power and ease of use that they provide, stem from their accessibility and usability: thousands of users can access the same data at the same time without the need to install any additional software on their computers. The book details the development of a fictional online shop, which it uses as an example throughout. The shop allows users to browse, search a database, add items to a shopping basket, manage their membership, and purchase wines. Using this site, the book shows you how to implement searching and browsing, store user data, validate user input, manage transactions, and maintain security. If you want to build small to medium-scale Web database applications that can run on modest hardware and process more than a million hits a day from users, this book will show you how.

Info
J2ME in a Nutshell: http://oreilly.com/catalog/j2meanut/
Java Web Services: http://www.oreilly.com/catalog/javawebserv/
Web Database Applications with PHP & MySQL: http://www.oreilly.com/catalog/webdbapps/



New Chairs at Red Hat

Matthew Szulik has taken on the post of Chairman of the Board of Directors at Red Hat, in addition to his existing post as CEO. Promoted to CEO in November 1999, he has been instrumental in the continuing success of Red Hat and the unprecedented growth in the adoption of Open Source computing. “Matthew’s enhanced leadership position as CEO and Chairman will enable Red Hat to improve its customer service and product offerings even faster and increase the speed of Linux adoption in the enterprise,” said Bob Young, a former holder of the position. In addition, Marye Anne Fox, Ph.D., has joined the Red Hat board of directors. Fox, a well-known education leader, is also a widely acclaimed chemist who serves on the American National Academy of Sciences’ Committee on Science and Engineering Public Policy.

Acucorp entrusted with system relocation

The Taiwan Securities Central Depository Co. called on the services of Acucorp, an international provider of application development solutions, to assist in the migration of a stock inventory system. The system was moved from IBM’s RS/6000 to Linux running on the S/390 mainframe server, making Acucorp the first open systems COBOL vendor to announce its selection as the COBOL of choice in a server consolidation strategy under Linux on this platform. Acucorp’s extend5 family of solutions, currently available on more than 600 other platforms including Linux for the IBM eServer iSeries platform, provides technologies for Internet deployment, COBOL-based GUI development, COBOL access to RDBMS data (DB2, Informix and Oracle among others) as well as access to ODBC data sources; distributed computing with a thin client architecture; and programmer productivity.

Info
Acucorp Inc.: http://www.acucorp.com

Keeper Linux

You don’t need to have a hard drive in your network routers, especially for xDSL/cable modem connections. This is now even easier to achieve with Keeper Linux KLX-2.01, which will allow you to start up your system with nothing more than a CD drive. The Keeper Linux distribution boots off a single CD-ROM, with a 64Mb root filing system in a RAM disk, so no hard drive is necessary. Once booted, the CD-ROM is mounted and the main configuration files are read. A key advantage of a distribution whose system can manage without a hard disk is system security, and this was a major factor in the design of Keeper Linux. If any tampering is suspected in the RAM disk root filing system, the system can simply be rebooted, as any changes made to the booted system are not permanent. The CD-ROM filing system is read only, so any changed files cannot be saved to the CD-ROM. This makes the distribution extremely secure against malicious intruders. Keeper Linux is completely free and available for download from the Web site.

Info
Keeper Linux: http://www.keeper.org.uk/


Red Hat has hand in puppets

Puns about puppets aside, Jim Henson’s Creature Shop is using Red Hat Linux to power its design studio and other digital projects. Red Hat and the Jim Henson Creature Shop have worked together for four years, but this new development takes their relationship to a new level, with Red Hat being used at the core of the company’s new Henson Digital Performance Studio (HDPS). With HDPS, the Academy Award-winning Jim Henson’s Creature Shop has created the next generation of puppetry and computer graphics – a system that makes a digital character as instantly performable as a puppet. The company uses Red Hat Linux as the operating system on both the HDPS and the animatronic Henson Performance Control System (HPCS). “Jim Henson’s Creature Shop is changing the way digital production works, so it makes perfect sense that Red Hat Linux would be at the base of their innovative systems,” said Mark de Visser, Vice President of Marketing at Red Hat. “We’re thrilled that the company which brought us Kermit and Miss Piggy is using Red Hat technology, and we’re glad to be part of this growing trend within the digital production industry.” The Jim Henson Creature Shop’s choice of OS is yet another feather in Linux’s hat, with Dreamworks also choosing penguin power for its films Shrek and Spirit: Stallion of the Cimarron.

Info
Red Hat: http://www.europe.redhat.com/news/article/223.html
Henson Digital Performance Studio: http://www.henson.com/hdps




Mammoth PostgreSQL

Designed to give small to medium sized businesses the power, performance and open-standard support they deserve, Mammoth PostgreSQL from Command Prompt, Inc. is an SQL-compatible object-relational database management system. Compatible with the PostgreSQL 7.2.1 release, Mammoth PostgreSQL provides a commercially-supported PostgreSQL distribution for various systems, but most importantly for systems running Linux on the x86 platform. Shipped with built-in support for SSL connectivity, Mammoth PostgreSQL also provides programming APIs for C/C++, Perl and Python. A Deluxe version of Mammoth also ships with the SQL/XML application server LXP, which provides high-level, out-of-the-box developer support for Web site integration with PostgreSQL, XML documents and other Web-based programming solutions (such as PHP or Perl). Command Prompt Inc. offers one-time and subscription-based licensing models, available for immediate purchase.

Info Command Prompt Inc.: http://www.commandprompt.com/

Online Linux training services

Merrow Internet Services, based in Guildford (UK), runs Web-based training courses for UNIX, Linux and the Apache Web server. By using a Web-based conferencing system to create an online classroom, trainees can take part at home or at work, from anywhere in the UK or any compatible time zone. Simon Ritchie of Merrow Internet Services said: “Web conferencing is an ideal medium to teach IT skills and I’m very excited about using it. With facilities like document and application sharing we can provide all the advantages of a classroom-based course led by an expert instructor, but without the participants having to travel.

“When I taught at Coventry University a few years ago, we used the learn software to introduce students to UNIX and it was very successful. I thought it was a great pity that it doesn’t appear in modern systems, especially as reworking it only took a few days. Now people can use it to learn the basics of Linux and UNIX at their own speed before they have to part with money for a taught course.”

Info
Merrow Internet Services: http://www.merrowinternet.com

Award for Trolltech

Trolltech’s Qt 3.0 continued its record of increasing industry recognition when it recently received the coveted Productivity Award from Software Development Magazine. Qt, which was nominated in the Libraries, Frameworks, and Components category, emerged with one of the prestigious Productivity Awards. Receiving awards in the same category were Microsoft’s .NET initiative and J2EE from Sun Microsystems. With this award Trolltech has been recognised for the most important benefit delivered by Qt – vastly improved developer productivity. “Ever since we first created Qt, our goal has been to build an application framework that increases developer productivity,” said Eirik Eng, Trolltech’s President.

Info Trolltech: http://www.trolltech.com/company/

ServerSure Web hosting

Founded in 1989, ServerSure has an established track record of providing cost-effective IT solutions to a large number of corporate clients. Based in Sheffield, this Unix/Linux consultancy has recently started offering low cost co-location and dedicated server packages aimed at the cost-conscious UK Linux community. With prices starting from £25+VAT per month for co-location and £40+VAT per month for dedicated servers, it’s now possible to take full control of your own online server for less than the cost of some “virtual hosting” packages. As ServerSure operate from their own fully-equipped data centre in Sheffield, they can offer a high quality of service at extremely competitive prices.

Info ServerSure: http://www.serversure.net/



Microsoft FUD dispelled

The use of second-hand systems is just one way that schools might be able to afford to set themselves up with computer equipment. A recent posting on the Microsoft Education Web site – “A Guide to Accepting Donated Computers for Your School” – was widely regarded as muddying the waters with regard to licensing issues on the Microsoft products that are often to be found already installed on second-hand equipment. Readers were left with the impression that only the operating system that was supplied with the machine could be used – for the entire life of that machine.

In May, Leon Brooks of the SchoolForge group refuted these statements. “Using Linux, OpenOffice.org and other Open Source software, a school or charity can safely accept almost any donated computer,” he said. “Simply wipe it and replace the software with Linux and Open Source applications, then use the computer as a powerful workstation or server. It’s an excellent idea to erase the existing operating system anyway – this also erases viruses and Trojan horses, protects the donor’s privacy, and complies with the typical EULA – so why not upgrade to Linux while you’re there?”

Brooks also noted that Linux removed many of the burdens, costs and legal risks of licence management and software asset auditing faced by most businesses, organisations and individuals. The price tag is also attractive. “School decisions are often dominated by cost; much Open Source software is available at little or no cost, and runs well on donated computers,” Mr Brooks explained. “Linux is easy to set up as a fast diskless workstation or ‘thin client’, so many schools are rolling out networks using this robust technology with both donated and new equipment.

“On top of this, Open Source software is immune to almost all existing viruses, has an excellent security record, is extremely reliable, and in an educational setting often provides a deeper spontaneous involvement in computers than programs deliberately designed for the classroom.”

Info
SchoolForge Web site: http://www.schoolforge.net/
An unusual link for this magazine: http://www.microsoft.com/education/?id=DonatedComputers

BakBone of Britain

The physics department at Oxford University – one of the world’s most prestigious academic institutions – has chosen NetVault 6.5 from BakBone to back up the Unix and Linux systems which hold data on particle physics research for around 600 staff and students. BakBone’s NetVault 6.5 is a flexible and easy-to-use product. Installed on an IBM 8-way SMP server with a Qualstar AIT-3 library attached, it is now backing up locally attached disks holding around 1Tb of data, plus other network attached storage used within the group. The files being backed up by NetVault are used mainly for code development and computation – a mixture of code, scripts and large quantities of data. Typical datasets are a few hundred Gb spread over a number of sequential files of approximately 2Gb each. Ian McArthur, who heads IT for the physics department, said: “NetVault provided the functionality we needed at an affordable price. Being an academic institution, cost is always a consideration and we found NetVault to be one of the only low-cost solutions for the Linux platform.” With the new version, NetVault 6.5.1, recently released, there is now optional support for drive devices attached to an ACSLS storage server, enabling enterprise-class backup and recovery for ACSLS environments.

Info
BakBone software: http://www.bakbone.com/products/netvault/
Sales contact: sales_europe@bakbone.com

IARNA takes Linux hosting to heart

IARNA, the Web-hosting company, has announced that following its successful merger with the global Hostway corporation, it is re-launching its Web site along with a range of new services. These include SiteControl 3.0, a Web-based management tool, and, for the first time, a comprehensive suite of Linux-based Web-hosting services. To date the company’s services have been Windows or Unix-based, but it has recently configured its servers to run Red Hat Linux 7.2 exclusively. This reconfiguration allows IARNA to take account of the growing popularity of the Open Source platform with existing customers. It is offering a new comprehensive suite of Linux packages to include new features such as improved reliability and enhanced email management. Mission-critical information is supported by MySQL databases and PHP version 4, so that fully interactive Web pages can be created and structured, giving end users a richer Web site experience. IARNA’s Web hosting now uses SiteControl 3.0, claimed to be the UK’s most advanced control tool, which gives customers advanced usability functions including an improved interface, delivering enhanced navigation across the SiteControl 3.0 application. The tool has been re-designed for optimum scalability, speed and stability. Many more simultaneous users can be supported, enabling them to manage their Web sites at any time and from anywhere that they have access to the Web.

Info
IARNA: http://www.iarna.com



K-splitter

CREATIVE FLAIR

New programs, new icons and new arguments in the battle for desktop supremacy are all making for a hot month

New generation of the KDE-PIM family

For a while it really did seem as if things were going to be very quiet around the KDE developers of the PIM (or to be more precise, Personal Information Management) gang, but happily that has changed in the post-KDE 2.2.2 era. You can find out about the current status of the many projects on the ever-expanding development site at http://pim.kde.org/ and you can take a little peek behind the scenes at http://pim.kde.org/development/work_in_progress.php. It’s not only the core applications of the PIM project which are currently being speeded up, though; third-party developments are too. The best example is the recently released LDAP administration tool KDirAdm from Carillon Information Security. At http://www.carillonis.com/kdiradm/ those who are interested can now download the second version of this administration tool for the popular directory service. But a little caution is in order here: the present version does not yet support shadow passwords, and since the tool also does not allow LDAP via SSL, all passwords which you set or check with KDirAdm can very easily be eavesdropped by a sniffer connected somewhere along the line. A second new release will be especially interesting for fans of pocket computers. KDE Pocket PC Contacts Import 0.2 enables you to import all the contacts stored in a Windows CE pocket PC very easily, at the click of a mouse, into the KDE address book Kab. The latest version of this useful address importer can be found at http://www.jardino.nildram.co.uk/.

KDirAdm offers assistance with the directory service

Anyone fortunate enough to have saved all contacts, including telephone numbers, in the KDE address book might still want to take a look at Kall (http://ftp.kde.com/Communications/Telephony/Kall/). This little tool dials the numbers in the address book for you at a mouse click. If you like, you can also drag the most important telephone numbers from Kab onto your desktop using drag and drop, so as to have them ready to hand – or rather, ready to click.

Immediate connection to this number with Kall

Argumentative

The KDE League has now intervened in the ongoing discussion in the antitrust case against Microsoft. In a comment to the presiding judge, Colleen Kollar-Kotelly, Andreas Pour, chairman of the KDE League, warned that if Bill Gates and co. were to reach a settlement with the US Department of Justice they would be less likely than ever to keep to the rules of the game of the market economy. Pour makes it clear in his letter that KDE, especially following a “weakly-worded” ruling with no judgement, would be undefended and at the mercy of the monopoly power of Microsoft, and that the Redmonders would do everything in their power to “use unlawful practices to attempt to derail KDE from full acceptance”. He finds it especially telling that the US government rejected all discussion of protection for KDE right from the word go. If it had not, it is precisely KDE that could have proven in recent months that it is the most viable competitor to Windows desktop systems in the market for operating system interfaces. The full letter can be found at http://www.usdoj.gov/atr/cases/ms_tuncom/major/mtc-00028788.htm, or alternatively you can download it as a PDF file (http://www.kde.com/league/kde_league_objection.pdf).

Andreas Pour (far right) with the group of KDE developers (from left): Jason Katz-Brown, Kurt Granroth, Jim Blomo and Waldo Bastian

Double saves last longer

ASCII
“American Standard Code for Information Interchange”. The ASCII standard was developed in the 60s, in the days when data was transmitted by means of telex. There is a numeric code for each symbol, making it possible to interchange texts between different systems. With seven bits, however, the ASCII code can only cover the “simple” Latin alphabet and control codes (a total of 128 symbols), which are required, for example, for printer control. This does not include special symbols, such as the Greek beta symbol “ß”, which are only included in the extended ASCII code, which is based on an 8-bit structure.

LDAP
The Lightweight Directory Access Protocol was developed in the early 90s as a standardised directory service protocol at the University of Michigan. The intention was to simplify access to X.500 directories, hence the “lightweight” in the name. This protocol provides a standard for communication with data storage systems on the Internet. Using LDAP, for example, email directories such as Bigfoot and Four11 can be searched for email addresses or people’s names.
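For the curious, a directory lookup of the kind the LDAP box describes is a one-liner with the OpenLDAP client tools; the server name, base DN and filter below are invented purely for illustration:

# simple-auth search for entries whose mail attribute starts with "jsmith"
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(mail=jsmith*)" cn mail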

Sight for sore eyes

Although functionality is always the foremost concern for computer users, anyone who works with a computer a lot also needs something to soothe sore eyes. Amongst the most customisable computer accessories are icons, so if you too are a fan of these colourful symbols, you can now look forward to using them in the latest KDE 3.0 version. In addition to the many program innovations, the developers have given the outfit a lot of tweaking. For example, work is now proceeding feverishly on the implementation of animated GIFs in mouseovers. But the static icons, too, are being given a facelift. The leader here was once again Torsten Rahn, alias Tackat. In his committed campaign against the desktop monopoly he has equipped all icons which symbolise an ASCII file format with a little dog-eared image, so as to add a little pep to what is otherwise a rather dry subject.

Artists’ meet

If you feel the graphic innovations in KDE 3.0 do not go far enough for your taste, or if you prefer to design your icons and graphics yourself, then maybe you would like to become a member of the KDE Art Family. If you would, please send your proposals for motifs to the address icons@kde.org. If your proposals are applauded by the KDE-ers, and if you plan to become involved in the project on a permanent basis, the next step is that you will receive a CVS account. Usually all the important changes are discussed first on the mailing list kde-artists@kde.org. The KDE graphics people would also like to extend the same cordial invitation to all interested hobby artists for a general chat. The group meets regularly on Mondays at 21:00 GMT. If you are interested, connect the IRC client of your choice to the server irc.kde.org and join in the lively debates in the channel #kde-artists.
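In most IRC clients, joining in comes down to the two classic slash commands (shown generically here; graphical clients may wrap them in menus):

/server irc.kde.org
/join #kde-artists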

IRC: The acronym stands for Internet Relay Chat and refers to a loose collection of servers, which allow users to meet in what are known as Channels, so as to converse in writing and in real time. All those who have clicked their way into a channel can see everything that other people in this channel are saying.

Dog-eared text file icons




Gnomogram

STOKING THE FLAMES

This month Gnomogram takes a look at GARNOME, Guikachu, Genigma and GNOME’s supposed links with .NET

GNOME and .NET

More than just about any other project in the domain of Free software, GNOME finds itself again and again in the middle of unpleasant flame wars. The latest episode in this sorry tale is an interview between The Register and Miguel de Icaza with the somewhat misleading title “Gnome to be based on .NET”. In this interview Icaza speaks in positive terms about Microsoft’s .NET framework, which he is porting to Linux under the name of Mono. This came in addition to the fact that, a few days before, the licence for the Mono class library had been changed from the GPL to an X11-type licence, which enables companies such as Intel to contribute code. Even if this licence is free in the sense of the FSF, programs under the X11 licence can be distributed in binary form without the altered source code being released. This was enough to convince the readers of many news sites that Icaza – and thus the whole GNOME team too – had succumbed to the dark side. Accordingly, demands for Icaza’s resignation began to mount.

Even if Icaza does play an important role in the GNOME project, this would not put him in any position to take such a fundamental design decision. Within the GNOME Foundation there is an annually re-elected board, with ten people in addition to him who have to reach agreement on such decisions. It is also doubtful whether the GNOME community would go along with such a fundamental change from C to C#. So Mono remains the early version of a C# implementation with plans for GNOME language bindings, like those which already exist for Python or C++. The reports from The Register that Richard Stallman had spoken out negatively on Icaza’s plans also turned out to be a hoax, as Stallman himself put right.

dotNET for Linux

GARNOME

GNOME is known for its many dependencies, and for newbies especially it can be nerve-wracking to hunt down and compile all the necessary packages. Although the vicious build scripts offer an option for loading a current GNOME automatically from CVS, anyone who would rather work with the archives from the official releases has until now been on their own. With GARNOME it is now possible, in a similar way to BSD’s ports system, to load all archives automatically with their dependencies and to compile them in the correct sequence. Once GARNOME is unpacked, one simply needs to change to the directory gnome/meta-gnome-desktop and enter make – the rest is done by GAR. The system is configured via the file gar.conf.mk, in which, under “BUILD_PREFIX”, one can also specify the installation directory. GNOME 2 should in any case be installed separately from GNOME 1.x, since otherwise there can be conflicts – this is why all compiled programs are installed by default to ~/garnome. (The tilde stands for the home directory.)
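Put together, a GARNOME session might look something like the following sketch – the archive file name is illustrative, and editing BUILD_PREFIX is only needed if you want a non-default install location:

# unpack the GARNOME archive (file name illustrative)
tar xzf garnome.tar.gz
cd garnome
# optional: set BUILD_PREFIX in gar.conf.mk (the default is ~/garnome)
# then build the whole desktop, dependencies first
cd gnome/meta-gnome-desktop
make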

Guikachu

With the aid of Guikachu, just as with Glade, so-called resource files can be created for Palm OS. As such, it is possible to position various widgets for a graphical user interface with ease, to create menus and dialogs, and to save the whole thing as an XML file. Guikachu is not, however, able to compile the created file itself – instead the interface has to be exported, using File/Export RCP or with the program guikachu2rcp, into a format readable by Pilrc. Guikachu2rcp is in fact only a simple bash script, which converts XML files into a different format with the aid of XSLT – the actual transformation is left to Xsltproc. To actually integrate the interface into a program, however, yet more programs are needed: in the Prc-Tools there is a range of programs which help to produce and to debug code for the Dragonball processor used in the Palm. You will also need a Palm OS SDK, which comes with the necessary includes and can be found at http://www.palmos.com. In the SDK archive offered by Palm, in addition to the documentation there is also an rpm package, which can easily be converted under Debian with alien. Debian users also have to create a symbolic link, with:

ln -s /opt/palmdev/sdk-4/ /usr/share/prc-tools/sdk-4

Other distributions can search at /usr/local/palmdev/ for the SDK – the link then has to be placed in this directory.

Libraries required
Guikachu: gnomemm, libxml1, libglade, gdkpixbuf, xsltproc

A Palm GUI is quickly created with Guikachu
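Strung together, the tool chain described above runs roughly as follows; the file names are invented for illustration, and the exact guikachu2rcp and build-prc invocations should be checked against their own documentation:

# export the Guikachu XML project as a PilRC resource script
guikachu2rcp myapp.guikachu myapp.rcp
# compile the resources into binary .bin files
pilrc myapp.rcp
# cross-compile the application with the Prc-Tools
m68k-palmos-gcc -O2 -o myapp main.c
# bundle code and resources into an installable .prc
build-prc myapp.prc "My App" MyAP myapp *.bin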

Genigma

Genigma is an emulator of the German Enigma device – probably the most high-profile encryption machine of all time – which has been used and abused in numerous books and films. Even though the fundamental cryptanalysis of the Enigma was performed by a Polish team, it was only in Bletchley Park’s Station X that the so-called “Bombe” was developed, which by checking standard phrases (so-called cribs) was able to seek possible keys very quickly. All types of Enigma share the same basic principle of the “rotors”, the number of which varies depending on the model. Genigma emulates model M3, in which three of five possible rotors were used. This model was used by the Luftwaffe and the Wehrmacht – the models used in ships and U-boats were more complex still. Each rotor has 26 inputs and outputs, which were wired up in a specific fashion, so a letter typed on the typewriter-like keyboard was replaced several times by a different letter. Cryptanalysis is also made more difficult by a plugboard, with which certain letters are exchanged. The rotors were also turned after each keystroke, which changed the encryption for each letter.

To actually encrypt a text in Genigma, one thus has to enter under rollers 1–3 respectively a number between one and five, which stands for a rotor – each rotor is of course included only once. The ring setting can also be changed: on the ring, similar to the contacts on the roller, there are also the letters from A to Z, so setting B equates to a displacement of the ring by one position. In exactly the same way, the roller can be turned even before starting the encryption – hence the setting “Start”. Letter pairs can be specified in the plug field, in order to veil the result even further. Naturally here, too, each letter can only be used once. In addition to the GNOME interface Genigma can also be used from the command line – more precise information on the options can be found in the corresponding manpage.

Genigma encrypting a historic phrase
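Those few settings multiply up quickly, which is where the machine’s strength lay. Ignoring the plugboard (which enlarges the key space enormously again), a rough count for the M3 looks like this:

Rotor choice and order: 5 x 4 x 3 = 60 possibilities
Start position of the three rotors: 26 x 26 x 26 = 17,576
Combined: 60 x 17,576 = 1,054,560 basic keys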

Info

The Register interview with Miguel de Icaza: http://www.theregister.co.uk/content/4/23919.html
Mono homepage: http://www.go-mono.com
The Register RMS clarification: http://www.theregister.co.uk/content/4/23978.html
GNOME’s vicious build scripts: http://developer.gnome.org/dotplan/notes/vicious-build-scripts.html
GARNOME homepage: http://www.gnome.org/~jdub/garnome/
Guikachu homepage: http://cactus.rulez.org/projects/guikachu/
Palm OS games: http://www.ardiri.com/index.cfm?redir=palm=pilrc
Prc-tools homepage: http://prc-tools.sourceforge.net
Palm OS developer tools: http://www.palmos.com/dev/tools/
Genigma homepage: http://home.pages.at/kingleo/development/gnome/gnome-en.html
Enigma the movie: http://www.enigma-themovie.com



LETTERS

Come and have your say

WRITE ACCESS

Lots of lists

Write to Linux Magazine

Your views and opinions are important to us, so we do want to hear from you, about Linux-related subjects or anything else that you think would interest Linux users. Send your submissions to:

By post:
Letters Page
Linux Magazine
Europa House
Adlington Park
Macclesfield
Cheshire SK10 4NP

By email: letters-page@linux-magazine.co.uk

Be sure to leave your postal address whichever method you choose.


I am running SuSE 8.0 and cannot edit my KDE 3.0 K menu. Menu Edit starts and lets me save, but the K menu shows a different set. How do I reinstall Menu Edit?
Alan Cooke

LM Are you using one monitor on a dual head video card? If so, the Menu Editor is working fine for the other monitor output. Your K menu for your main monitor is in the menu under SuSE. Add your programs to this part of the menu and see if that solves the problem.

Kmenu showing the second monitor’s menu listing shaded.

Stubborn LILO

I have taken the plunge and compiled my own kernel to include infrared support. Following the relevant instructions I have added the following lines to the file /etc/lilo.conf using an editor:

image=/boot/vmlinuz-2.4.18-new
label=IrDA support
root=/dev/hda3
read-only

This newly compiled version has not yet appeared in LILO when I power up. Do you have any idea what I have done wrong? I have tried recompiling the kernel and I don’t get any errors. Why does this new option not show up when I reboot?
Simon Simmons

LM You haven’t followed the process through to the end. Even though you have edited your lilo.conf file, the change still has to be recognised by LILO and needs to be copied to the boot sector on your hard drive. What you need to do is specifically call LILO from the command line to get it to check its configuration file. Once you have booted up your Linux system with your usual kernel, simply run the command:

/sbin/lilo

and LILO will rewrite the boot sector with the new information. The next time you restart your machine the new kernel will be in the list.

Starting services in SuSE 8.0

I have been using SuSE since 7.1 and have found it very simple to use. I am having difficulty now that I have upgraded to version 8.0. Many of my services are not starting as I would expect. I believe this to be a result of moving in line with something called LSB and dropping the familiar /etc/rc.config files. As I understand it, LSB has something to do with the way Red Hat configures its startup services. Why did SuSE have to change? If I wanted Red Hat I could have bought Red Hat.
T Spagni

LM This only goes to prove that you can’t please all of the people all of the time. LSB stands for Linux Standard Base, about which you can read much more on the Web site at http://www.linuxbase.org/. SuSE had previously come in for criticism for not being part of the LSB. They were damned if they changed, damned if they stayed where they were. Hopefully they will be applauded for making the change, which will allow a greater degree of cooperation between developers, allowing them to produce more software of an even more reliable nature. The answer to your specific problem of starting services lies in the directory /etc/init.d and the runlevel directory /etc/rc.d, as detailed in the manpage for init.d. You can edit these runlevel properties from YaST 2 > System > Runlevel Editor. If you click on Runlevel properties you will see the whole list of services, what’s running and where it should start to run.



You may also find the insserv command useful; again, there is a manpage for it, but in short insserv will enable an installed system init script:

cd /etc/rc.d
insserv apache

would get the Apache service running for you. You can even find out the names of the services, in case you are not aware of them, by using the insserv –r command, which is also covered in the manpages.
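Disabling a service again is the mirror image of the above – the -r switch asks insserv to remove the script’s runlevel links (a minimal sketch, with Apache as the example service):

insserv -r apache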

Printing manpages

The manpages are a great source of information; I am always calling on them to remind me of command line switches. There are just a few that I look up so often that I would really like to print them out. I have looked at the files and even uncompressed them, but they are in a format that is just too painful to read. What can I use to print these out?
R Peterson

LM The answer has been in your hands all the time. Did you not look at the man manpage: the manpage about man? It is in here that you would find the command line switch for man to output your required manpage in PostScript, ready to print:

man -t man | lpr

would get you something to pin up on the wall, all seven pages of it.

Locked out

I was running Konqueror and Kmail under Red Hat 7.2. I tried to access mail and Kmail crashed out. I could not find any dead programs with top. I could not sign in under my username either in kdm or just a terminal. Root signs in fine and will run X. All I could do was power down the machine. I didn’t trust it after that and have since done a reinstall. Thing is, I still don’t trust it and I don’t want to lose my data again. Did I do something wrong or is this a bug?
John Stamper

LM Did you have any other users set up on the system, and could you log in as them? We suspect very much that you had simply run out of space on your home partition. This would have been proved had other users suffered similar problems. Root has its own partition, so wouldn’t have been affected by this lock up, adding weight to our suspicions. It’s a shame that you lost data through the reinstall, especially as it wasn’t really necessary. If you had root access, you could have had a look at the state of /home with the df command, which reports on the disk space left free. If the partition did prove to be full you would then have to make some free space on it by deleting files. You must also be aware of the fact that some of the applications running may make recovery difficult if some of their configuration files have only been partly saved. These may be part corrupted or missing completely. If something seems to be working in an unpredictable manner, it would be best to clear its configuration and start again.

df showing disk space
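As a sketch of that check (the -h switch of GNU df prints sizes in human-readable units; the partition name here is just an example):

df -h /home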

Computing for the disabled

I want to set up a computer for my visually impaired brother. Going to the RNIB site I can buy programs to help under Windows. Is there anything available for Linux?
David Mathews

LM For those who are totally blind it is possible to use a Braille keyboard under Linux. The SuSE distribution checks to see if one is present when it first starts an installation. For those with some vision, the main help programs are either colour changers, magnifiers or speech synthesisers. Colour changing programs set foreground and background colours to aid contrast. Fortunately, in most Linux programs we can set the colours in the config scripts. For example, we can change the ink colour in a terminal by altering the ~/.Xresources file to contain the line:

xterm*foreground: White

This file lets us change not just colours but font sizes as well.
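Changes to ~/.Xresources only take effect in new X sessions unless the resource database is reloaded; the standard xrdb one-liner (nothing here is specific to this reader’s setup) does it:

xrdb -merge ~/.Xresources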

For magnifiers we now have Kmagnifier, which works by expanding part of the screen. You can always increase the resolution with the key combination Ctrl+Alt++, where the final + is on the keypad part of the keyboard. GNOME users also have support through the GNOME accessibility project. This includes Gnopernicus, which is both a magnifier and screen reader, and outputs to either a Braille display or speech synthesiser.

Kmag in action



INTERVIEW

Lawrence Manning

ACTIONS SPEAK LOUDER

Linux Magazine caught up with Lawrence Manning, the Development Director and Chief Architect behind the SmoothWall Firewall project. In the past Lawrence has contributed bug reports and the occasional fix to various Open Source projects, including the Linux kernel.


Linux Magazine – How did the SmoothWall project first start? Lawrence Manning – Around February of 2000 I met Richard Morrell through a local Linux User Group. I helped Richard set up a server on his home network, for file and print sharing. A few months later Richard told me he had been hacked and was looking at various firewall solutions for his dial-up account. He had looked at a few (LRP etc.) but was not happy with any of them, mostly because they were a pain to set up. So I told him that I would happily set him up a Red Hat box with a couple of CGI scripts to dial and hang up his modem, and it would do basic masquerading. It was VERY basic – just a couple of CGIs written in bash on top of a Red Hat box. Richard was quite impressed, especially with how quickly I got it working. After that, we together had the idea to turn this into something you could install on a stand-alone box. We also went to our LUG and asked for their thoughts on this idea of ours. Mostly they had ideas we didn’t want to consider at all, like running it off CD-ROM. By July I had a basic Web interface for setting up PPP settings. This was still running on my desktop machine though; there was no installer or anything. The next stage was to take a Linux distro and strip it down to its smallest size, though still with enough services and libraries for our code. By pure chance my Red Hat 6.2 CD-ROM had become damaged, so I hunted around for an alternative. Richard, who worked for them at the time, had given me a copy of VA Linux 6.2, so I used that; it’s basically Red Hat with a few improvements anyway. I stripped it down to about 50Mb of “essentials”. The next stage was to work on an installer. I looked at a few options: ncurses, a graphical one, or just a pure text-based installer with no fancy menus. A graphical one was out of my range entirely, and ncurses was interesting, but it seemed to take a lot of code just to do simple things. So I went for libnewt, the API used by the Red Hat text-based installer. This library has served us well over the years. I should say that although it LOOKS similar, there is no common Issue 21 • 2002

code between the SmoothWall installer and Red Hat’s. There were still problems to solve, like how to fit it all onto a floppy disk. The only interesting thing to say about this time is that the network installer was added because it was the only way I could do installs at the time; I didn’t have a CD burner. Even up to the first release of 0.9 (early September) CD installation was untested. In fact it was broken in the first release, so we did a release of 0.9.1 a few days later, which fixed the problem. So, by mid-September we had a SourceForge project registered and we had mailing lists on SourceForge, etc. We also had a small team of testers. Things plodded along quite slowly, until Richard had the project registered on Freshmeat. I remember well: Richard phoned me up the next day and asked me to guess the number of overnight downloads. It was about 50 and that was amazing for just one night! So that’s the early history of SmoothWall, up till around October 2000. Linux Magazine – What dictated the early decisions? Lawrence Manning – With regard to programming, common sense is the best answer I can give. The best way to explain this is to give some examples. I used libnewt because it was a very fast library to develop in. If you wanted an error dialog to appear, the code was already written so it was a single function call. Every dialog I needed at the time was already written so libnewt was easily the best tool for the job. We originally chose, and have stuck to, Apache for obvious reasons. It’s reasonably fast, obviously very secure and tested, and it was also fairly well known by us. That’s another thing that cropped up again and again: where there are two options, I tended to choose the one I knew best, even if I thought it had some shortcomings. Likewise, the CGIs were written in Perl. While this is an obvious choice, the reason I chose to use it was that I had done some Perl at University and it seemed like the best language to use. I don’t especially like programming Perl (I am more at home with C) but it


INTERVIEW

has proven to be a pretty good choice over the years. Some of the old code was really bad though! I am still learning the language, to be honest. Linux Magazine – How did the team come together? Lawrence Manning – Through the mailing lists, and on IRC. Oftentimes, someone from the outside would have a really good idea and we would see that and “invite” them in. Neuro (William Anderson) came with us with proposals for jazzing up the (then) really dull interface with some nice graphics. Richard and myself, to an extent, were really hostile to this but we saw he had huge talent in graphical work, so eventually he became part of the team. Similar stories can apply to various others. The team is split quite neatly into two groups, a core group, and an outside group. The simplest way to explain this is to say that the core group put the hours in and are dedicated to the project, so we can all depend on each other when things need to be done, especially with regard to security patches and the like. Linux Magazine – How are suggestions dealt with? Lawrence Manning – We evaluate them and work out if it meets our criteria. Is it where we want SmoothWall to go? Does it introduce any vulnerabilities – potential or otherwise? How long would it take to implement it? One thing that pretty much sums it up is: just because it can be done, doesn’t mean it should be. We still get people wanting us to put Sendmail or Samba on, something we dismissed at day one. Often, someone has already had the same idea, or we have had it, and already rejected it. Sometimes there are absolute gems though, and it’s a question of “why didn’t I think of that!” Linux Magazine – What are the fun elements of being in a programming team? Lawrence Manning – I suppose the nicest part about it is that many of the people in the SmoothWall team have become best mates. It is no exaggeration that we’re almost all family now. There is social and relational interaction that goes way beyond work or coding. We live very much in each other’s pockets regardless of geographical location. Maybe that’s why some people wanting to “join” the team just simply can’t and won’t ever cut the grade. Linux Magazine – What do you use to keep the code tree in sync? Lawrence Manning – For the old GPL Smoothie, it was mostly done by me being fed bits of code and merging it in (along with testing) by hand. In Lite we have a full private CVS tree. Linux Magazine – Why was SmoothWall Limited started? Lawrence Manning – SmoothWall Limited was started for a very simple reason: to keep SmoothWall

alive. Without having a company behind it, both Richard and I would have to get “normal” jobs, and would have very little time to work on SmoothWall. I don’t think it is big of me to say, but without me and Richard there wouldn’t really be a future for Smoothie. Someone could of course take it up, but it wouldn’t be the same. And besides, I LOVE working on it, and the only way for me to keep doing the thing I love was to start a company and try to make a business out of it. This is what we have done, and so far we have been more successful then we could have hoped. That’s the simple reason. Also Richard, with a family, simply couldn’t afford to keep paying for it forever. A lot of the community think this stuff just happens; it doesn’t. It costs a LOT of money. Linux Magazine – Why was George Lungley persuaded to join the team? Lawrence Manning – George was a major player in corporate IT systems for councils and corporations of twenty plus years standing. Also a SmoothWall user, George was very much the straight man to myself, Richard, and William. George has also created, from virtually nothing, a company that ended up being sold for millions of pounds to a multinational corporate chain. No Linux company in the UK can claim to have done this. We do sometimes wonder why he wants to be involved when the community kick off. I think he views the community with the same scepticism and bewilderment that we all do at times. Linux Magazine – How much time has been invested? Lawrence Manning – Well, I have worked on SmoothWall for just over a year, full time. Before that I spent maybe three to four hours a day on it. Other people like William and Dan Goscomb have invested similar amounts. Richard has invested about the same amount of time, and a very considerable amount of money. Linux Magazine – How does SmoothWall Lite differ from the 0.9.9 GPL version? Lawrence Manning – It is a complete rewrite. There is no common code at all. 0.9.9, and the GPL base served us well for the best part of two years, however the time has come to start again. All code rots, and at some point it has to be time to start anew. Dan has some great ideas and I personally can’t wait to see them come to fruition. Linux Magazine– How does the team focus on direction? Lawrence Manning– The team is just that, it’s a team. Imagine a spider with eight legs. All have to move in one direction to achieve anything. Like a spider, we also have to cling on for dear life sometimes when spinning a Web with no resources. Dan Goscomb and William Anderson work with Richard on focus. Richard will suggest ideas, looking Issue 21 • 2002

LINUX MAGAZINE

19


INTERVIEW

Info SmoothWall Ltd. Web site http://www.smoothwall. co.uk/

20

LINUX MAGAZINE

at competing proprietary products and use a commercial focus to suggest ideas. Dan will say “OK, I can do that but it needs to be coded thus”, William will then come in and design the graphical glue to hide all the skeleton that lays beneath. Dan and William provide (on Lite) the bones of the exterior. Richard is the catalyst, as he has relationships with players in the Linux hierarchy that we don’t. His job is to use these contacts to talk to comms hardware vendors and the like and give us the driver support that we need. Linux Magazine – What was the reasoning for Lite being Closed Source? Lawrence Manning – Lite is a product that has to remain free. We are committed to it being free. Although it may use some GPL code, we will use common proprietary compiled elements from SmoothWall Ltd. and from other companies we have relationships with. We are not about to suggest these other third parties look to GPL their code: it won’t happen. Being Closed Source is the best way to produce a good product, in OUR case. This is because it is the only way that we that the product we care about can remain competitive in important areas, like device compatibility. After the rampant abuse of our rights as developers by the IPCop team, and others, there is no way we will share advantage with the community. And there is no doubting this point. I do not want to go into details or get into a debate. But we were abused. No one would ever fork the kernel, change it’s name, and claim absolute credit. Yet this is exactly what they have done. Also, if we did Open Source a lot of the common code it would disadvantage our resellers and our credibility in the corporate paying world. The community has no real role in the “fee paying world” that subsidises the servers that power the “community”; it’s a food chain. We don’t particularly want to become consumable items – we would much rather be the supplier. Linux Magazine – What are you hoping to concentrate on developing in the future? Lawrence Manning – I really want to get started on our Enterprise level products. We have some fantastic ideas for the “ultimate” SmoothWall, and I can’t wait to get started on it. It is hard to explain to a non-coder, but when you see your ideas that you had while doing the most mundane of everyday tasks, when you see them come into reality, it’s an amazing thing. Still now, when people say “we are using Corporate Server in our hospital/school/whatever”, I get a huge buzz. It’s going to be an even bigger buzz in the future, when we are truly up there with the big boys, competing on a level playing field. Linux Magazine – What is the advantage of the Corporate Server? Lawrence Manning – Corporate Server is a fully Issue 21 • 2002

Lawrence Manning – Corporate Server is a fully rounded, “corporate” product, compared to GPL, which is a home-level server product. Our competition to Corporate Server is GPL. However, they are very different products, not bedfellows. Corporate Server shares some code and common boundaries with GPL, but the expectation levels are totally different. Corporate Server is also modular, so you can bolt on things like an X.509 certificate-authenticated VPN management module, complete with Windows remote Road Warrior support. SmoothHost is our module that allows you to replace a Cisco PIX for 10 per cent of the comparative price. This is all covered in greater detail on the Web site (see the Info boxout). But even without the modules, Corporate Server has features that make it “stand proud” alongside the other servers and services provided by your typical corporate network.

Linux Magazine – Do you still get to play RuneQuest?

Lawrence Manning – Sadly not. Friends separate, and people have “grown up”. I would love to get into online gaming in a big way, but I don’t have the time!

Linux Magazine – I heard you are using a PPC machine?

Lawrence Manning – I’m playing with a PPC box at the moment. It’s a complete pig to get going! If anyone has any experience running Linux on a PowerStack, I’d appreciate hearing from them!

Linux Magazine – Tell us about your brother Virgil. He’s the Emmy award-winning animator behind such classics as “Walking with Dinosaurs”. What does he think of your success?

Lawrence Manning – He’s happy for me. But he’s so laid back, it’s hard to surprise him at all! This guy meets film stars every now and then, and has been to the USA more times than I’ve been to London (I live in Southampton). Yes, he is impressed that I’m a company director and by how many users SmoothWall has.

Linux Magazine – What do you do to relax after coding?

Lawrence Manning – Well, I’m a big Star Trek fan... And I try to do a bit of cooking every now and again. Just normal stuff. But I’m a computer geek through and through. Coding can be very relaxing!

Linux Magazine – If you could change just one thing what would it be?

Lawrence Manning – I’d like the community to grow up and stop being so rude and narrow-minded. It can be a horrid place to work on occasion. If the community started behaving more maturely, and more like the talented developers that they are, then it would be a much nicer world. We’d also get adoption of Open Source further and faster. Right now some of them, a small minority, are a bad advert. I’d also like to have as many Corporate Server customers as we have GPL users.


FEATURE

Open your office to the outside world

GET CONNECTED

Office communication is of the highest importance and networking is what Linux does best. Colin Murphy looks at how Linux could help you in a SOHO environment

Linux has been built with networking in mind, making it the ideal platform on which to share information. Your office generates information, and the benefits of being able to share that information are increasing daily. It is possible to share this information not only over a local network, but via the Internet too, if necessary. Email is not only very common, but also valuable to your business. You can take full control over your email by running a few servers on a permanent connection. This minimises the impact should your ISP let you down. The ability to seamlessly share files across a network – be it across the office or across the world – enables you to get the maximum out of your data. This is where NFS can help you, and if you are looking to share files with Windows systems, then Samba gives you that option. Sharing printing resources saves time and money, and we cover that too in this article.


Mail services

With the best will in the world, we could not do justice to a detailed examination of running email services on your network in just a few pages, so we won’t even attempt to try. What we will do is highlight all of the important aspects of email and offer you further pointers to them.

To read and write

The handling of email involves three main processes. At the very outset or endpoint, an email needs to be written or read. This task falls to the Mail User Agent (MUA). There are MUAs to suit a range of people and needs: from the most basic – mail – through terminal-based programs like pine and elm, to MUAs with all the bells, whistles and graphics, to run on desktops. When receiving mail, the more basic MUAs expect to find files to work on locally – they do not have any means at their disposal for collecting email from other sources, either locally or externally. For this they need to call upon the services of a Delivery Agent (DA).

The DA in turn calls on the services of the Mail Transport Agent (MTA). As root, type mail into a terminal to get an example of what email was like 20 years ago. Enter a message number to view a message, enter q to come out of the message and q again to come out of mail. Examples of MUAs with bells and whistles include KMail, Sylpheed and Netscape Mail, to name just three. These three, and others of their type, are unusual in that they also deal with the actions of the remaining two processes (those covered by the DA and MTA). Because of this, they fall outside of what we are going to describe, and so won’t be mentioned again. The modularity of the MUA/DA/MTA approach allows for complete control over the system, which is why, in a multi-user networked system, you would choose to administer your system this way. Which MUA you wish to use is largely a question of personal choice.

elm is a small MUA, which uses an external text editor for its part in handling email. By default this is usually vi, though it can easily be changed if that thought sends shivers down your spine. When run for the first time elm will create its own runtime configuration file in the user’s home directory, ~/.elm/elmrc. In this file you will find the editor option, which you can change to suit your needs.
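For example, to have elm hand over to emacs instead, the relevant line in ~/.elm/elmrc would be changed to something like the following – a sketch only; the path is illustrative, so use whatever editor your system actually provides:

editor = /usr/bin/emacs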




pine is a much more feature-rich MUA than elm, as a browse through its configuration file, ~/.pinerc, will show (though the program does provide its own configuration utility). pine supports lots of useful features like multiple email folders and automatic backing up and archiving of old messages – both sent and received.

When making this decision the administrator will want to be conscious of size, speed and the availability of a global configuration file to administer the MUA, as well as keeping in mind the demands of users for more and more features.

Moving on

The Delivery Agent can be called upon to deliver mail to a user or to send control messages to other programs. It places mail in mail stores, once it has been handed over by the MTA. There are only a few DAs – /bin/mail being the basic one found on most distributions by default – but a lot of people put their faith in procmail, which enables you to create a range of rules to define where certain messages get stored. A large part of procmail’s success lies in the fact that you can make rules that will automatically dispose of spam.
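As an illustration, a minimal recipe in ~/.procmailrc might file suspected spam straight into its own folder – the subject pattern here is just an invented example:

:0:
* ^Subject:.*(lottery|viagra)
spam

The trailing colon on :0: asks procmail to lock the mailbox while delivering, and the final line names the destination folder, relative to procmail’s working directory ($MAILDIR).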

Transports of delight

Stateless

All of the file access operations are independent. Each client call to the server has all the necessary information to carry out the task. If the server crashes the client just keeps on resending requests until the server is rebooted. If the client crashes the server does not care, and once the client is rebooted the program can be restarted.

The MTA lies at the heart of the worldwide email system. There are three main players when it comes to transport agents: sendmail, exim and qmail. It is said that as much as 60 per cent of the world’s email travels courtesy of sendmail, compared to as little as four per cent for Microsoft Exchange Server. sendmail has a complicated configuration file that no-one in their right mind attempts to configure by hand these days; instead it is generated from a much simpler macro file using the m4 preprocessor. For those that prefer the quiet tranquillity of their terminal, this elevates the configuration process to something akin to a scripting language, as opposed to plugging away in machine code. Configuration is also possible, and practical in some circumstances, via graphical tools; Figure 1 shows part of this procedure in the YaST tool, as used in the SuSE distribution. Configuration is also possible through the use of WebMin.

NFS (Network File System)

NFS dates from 1985 and was developed by Sun Microsystems. It lets you share a filesystem amongst computers on your network. It is a client-server system, where one computer (the client) accesses the files on another (the server). For simple crash recovery the system is stateless.


WebMin enables you to use any browser to configure, remotely if needs be, many types of services including sendmail; an example can be seen in Figure 2.
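To give a flavour of the macro approach mentioned above, here is a minimal sendmail.mc sketch. The macros are standard sendmail m4 configuration macros, but the include path and the domain are illustrative and vary between distributions:

divert(-1)
include(`/usr/share/sendmail-cf/m4/cf.m4')
divert(0)
OSTYPE(`linux')
MASQUERADE_AS(`example.co.uk')
MAILER(`local')
MAILER(`smtp')

Running this file through m4 generates the sendmail.cf that no-one wants to write by hand.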

The POP3 connection

If you do not have a permanent Internet connection, then there is one more stage that needs to be considered and configured. Without a permanent connection, email coming to you needs to be stored on a POP3 server, or possibly an IMAP server, while you are offline. As part of your own local mail system you will need to run a service that will regularly collect mail from this store. fetchmail is the one most often used in distributions like Mandrake and Red Hat, while SuSE will provide you with popper. Hopefully these highlights will give you the opportunity to further investigate the power and possibilities to be had when running your own email services.
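As a final sketch, a ~/.fetchmailrc for collecting mail from a POP3 store might look like this – the server name and credentials are placeholders:

poll pop3.example-isp.co.uk protocol pop3
    username "colin" password "secret"

Run from cron, or in daemon mode with fetchmail -d 300, this will collect the waiting mail at regular intervals and hand it to your MTA for local delivery.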

Why use NFS?

Fig 1: The YaST tool can help with setting up sendmail


Fig 2: Further uses of WebMin, helping to configure network services remotely. Here it is with sendmail


NFS means we can keep just one copy of data and access it from wherever we choose. When the data is updated all users see the new information at the same time and so stay in sync. By having the data centralised we can also have a simpler backup policy and thus keep our jobs. NFS is also access transparent: we can be machine and operating system independent, which in a modern network is very important. It is also location transparent: the client adds the filesystem on the remote server to its name space. The server exports the filesystems and the client mounts these before access is enabled.

Drawbacks

● Write performance is slow.
● Giving access to files on a system inevitably introduces security flaws that should be considered. Spoofing can be a problem, as the server trusts that each client is who it says it is, based on permissions in the /etc/exports file.



● NFS runs on top of the RPC (Remote Procedure Call) protocol, on either UDP or TCP. UDP is connectionless, so it can be thought of as streaming. The advantage is speed, but the disadvantage is a lack of failover protection.

To prevent anyone with root access from running rampant over the files we can use root squashing. This is where root access is intercepted and modified to act only as an ordinary user. Care must be taken with the /etc/exports file, as a missing space in the wrong place can make all the difference:

/Public client(rw,no_root_squash)
/Public client (rw,no_root_squash)

The first line grants the host “client” read-write access without squashing root privileges. In the second, the stray space detaches the options from the host: “client” gets the default options, while the (rw,no_root_squash) options are granted to the world at large.

Setting up the server

We need to configure the three access files: /etc/exports, /etc/hosts.allow and /etc/hosts.deny. The first contains a list of directories that we wish to share, and to whom, thus:

/server moon(ro)
/Public (rw,root_squash,all_squash)
/usr/local 192.168.0.0/255.255.255.0(ro)

This gives the client machine (“moon”) read-only access to the /server directory and all machines read-write access to the /Public directory. We have also given read-only access to /usr/local to a range of IP addresses. /etc/hosts.allow contains a list of machines to allow, and /etc/hosts.deny a list of machines we want to deny access. If a machine is not found in either then access is allowed. So in /etc/hosts.deny we have the lines:

portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

The first line restricts the portmapper daemon and so makes it a little harder for an unauthorised user to find where the NFS services are residing. If we were being ultra cautious we would just use the line ALL:ALL and then rely on specific allows in the hosts.allow file. For our /etc/hosts.allow we have the following lines:

portmap: 192.168.0.1, 192.168.0.2
lockd: 192.168.0.1, 192.168.0.2

mountd: 192.168.0.1, 192.168.0.2
rquotad: 192.168.0.1, 192.168.0.2
statd: 192.168.0.1, 192.168.0.2

We now have access for two specific clients. We can now usually just reboot and the NFS daemon will read in the configuration files. We can also start the daemons by hand if required:

samba

samba is a useful tool that allows files to be shared amongst Windows and Linux users. It allows hard drives, directories and printers to be used as shared resources.

rpc.portmap
rpc.mountd
rpc.nfsd
rpc.statd
rpc.lockd (if necessary)
rpc.rquotad

rpcinfo -p will tell us whether portmapper, rstatd, mountd and NFS are running.

Now for the client machine. Create a directory that we will use as the mount point:

mkdir /share1

Now mount the NFS filesystem:

mount -t nfs earth:/server /share1

This now lets us see the contents of the /server directory on the computer “earth” in our directory /share1. If we want this each time we boot we must include the following line in our /etc/fstab:

earth:/server /share1 nfs rw 0 0

That is all that is required. You can now access all those MP3 files from the server wherever on the network you happen to be sitting.
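If the mount fails, it is worth asking the server what it thinks it is exporting. The showmount utility queries the server’s mount daemon; run from the client:

showmount -e earth

This should list /server, /Public and /usr/local, along with the hosts allowed to mount them.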

samba

The main configuration file for samba, smb.conf, can be found either in /etc/samba/ or in /usr/local/samba/lib/, depending on how you installed the package. Opening the conf file in a text editor we can see two main sections. The first section is for Global settings. The first line lets us set the Windows workgroup or NT domain:

workgroup = MYGROUP

The second line allows us to define the server name that will appear when browsing in the Windows network neighbourhood:

server string = SambaBox




Next we will include the two lines:

encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd

This will allow connections from Windows 98, Me, NT and 2000, which use encrypted passwords; Windows 95 just uses plaintext. The second part of the smb.conf file is for the Share definitions. The default settings within the conf file let users with the same username on a Windows machine access their home directories on the Linux machine:

[homes]
comment = Home Directories
browseable = yes
writable = yes

We can now save the config file. Next we need to create a samba user password. As root use the command:

smbpasswd -a username

where username is the name with which you sign on to the Windows machine. You are then asked for the password, which should be the same as that on the Windows machine. Remember the password is case sensitive. Now we can start the daemon as root with the command:

smb start

If we were lucky enough to have installed from source we should start two daemons:

smbd -D
nmbd -D

The latter daemon allows NetBIOS over IP name service requests from SMB or CIFS clients, while the former gives print and file services to SMB clients. At this stage we can log in to the Windows machine and see the samba box in the network neighbourhood. To map the network drive, in Windows Explorer use Tools/Map Network Drive. Choose a drive letter as a mount point (E:, F:, G: etc.) and enter the path to the share:

\\hostname\username

This will then attach the share to the drive letter you chose. To test for errors you can always run testparm as root and then check the two samba log files, log.smbd and log.nmbd, as well as the usual /var/log/messages. The smb.conf file does have more options, such as setting up printer shares and security levels, but you can easily add these with the help of the HOWTOs. For those of you who do not like the idea of using a text editor to change the configuration file, there are graphical front-ends.

Ksamba

Ksamba acts as a GUI to aid in the configuration of the smb.conf file. With the wizard are some set templates that enable quick set up.

Webmin

If you have not previously come across Webmin, now is a good time to install it. Webmin is a Web-based interface for system administration – using any browser that supports tables and forms, you can set up samba and a host of other programs and daemons. It uses modules, which allows you to add more CGI scripts to control more tasks. The samba module is samba.wbm.

Webmin’s samba interface

SMB

The Server Message Block (SMB) is a network protocol for sharing files, printers, serial ports and communications abstractions such as named pipes and mail slots between computers.
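To round off the smb.conf tour, shares other than [homes] follow the same pattern. This is only a sketch – the share name and path are invented for the example:

[public]
comment = Shared documents
path = /home/public
browseable = yes
writable = yes

smbd checks its configuration file periodically, but restarting the daemons makes the new share appear immediately alongside the home directories.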

Printing and print servers

The paperless office isn’t with us quite yet, unfortunately, and we still need to get information on paper. Printing in an office environment is one of those core functions that can’t afford to go wrong. By spending a little time and effort, maybe even a little money, you can set yourself up with a rock solid system and have one less thing to worry about. Linux has an advantage in this arena, as it is made with networking in mind. There are many advantages to having your printing resources on a network. Decent printers don’t come cheap, so being able to share their use over a number of users reduces the burden of capital expense a little. Wastage is minimised, as are the costs of consumables, which, now you are using more of the same type, can be bought in larger quantities.



Cost implications

Having a shared printer does not mean you will have to tie up a computer to serve it. The resources put into a print server can be quite small if all the page rendering is done on the originating machine.

Installation

All of the modern commercial Linux distributions will come with their own suite of printer configuration tools, usually graphical. For these to work, you will need to have a printer that is supported under Linux, and not all are. The ease with which you can configure a printer depends heavily on the amount of support Linux offers it. PostScript is the feature to look out for when deciding on a laser printer. If your printing needs don’t run as far as the extra expense incurred through buying a PostScript printer, then you should check the suitability of a printer beforehand. There are some good resources available on the Internet, one of the best being http://www.linuxprinting.org/.

PostScript

PostScript is a programming language used to describe the way an image should appear on the page, and it is widely acknowledged as the printing standard. A PostScript-compatible printer decodes this page description itself to render the final image onto the page. Because of this, PostScript printers need to have some processing power on board, which, in turn, makes them more expensive. However, the inclusion of PostScript means you can rely on the printer working with Linux, which will cut down on the number of unpleasant surprises in use.

Spools

Because of the way that Linux has grown, it still relies on some old toolsets, which have embedded themselves deeply in the heart of the operating system. LPD (the Line Printer Daemon) came about in the days when heavy line printers were the only things available. As technology has changed to include more modern printing systems, like laser printers and inkjet printers, so LPD has had elements added to it to help it cope. LPD revolves around the configuration file /etc/printcap, which contains details of all the printers known to the system. Each entry is referred to as a spool. “Printers” in this situation don’t necessarily refer just to hardware devices; it is customary to set up different spools which refer to the same physical printer but have different options attached, like print quality or which paper tray to use. By doing this, it is simple to access the range of features your printers may have from the command line. There are also utilities available that let you select these options graphically, but it is still easier to select a pre-configured spool with a single click than to go through the myriad of options each time. LPD, or some development of it like LPRng, which has some enhanced printer spool facilities, will be provided by default on all Linux systems.
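As a sketch of what two spools for one physical printer might look like in /etc/printcap – the spool names, device and filter paths are invented for the example, and the quality difference would in practice come from the input filters:

# draft-quality spool
lp|draft:\
        :sd=/var/spool/lpd/draft:\
        :lp=/dev/lp0:\
        :if=/usr/local/lib/lpd/draft-filter:\
        :sh:
# high-quality spool on the same printer
hq:\
        :sd=/var/spool/lpd/hq:\
        :lp=/dev/lp0:\
        :if=/usr/local/lib/lpd/hq-filter:\
        :sh:

Selecting one or the other is then just lpr -P draft file.ps or lpr -P hq file.ps.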

CUPS

CUPS (the Common Unix Printing System) is another way to bring printing services to your system. CUPS is a modern departure from the old school, and calls upon a new printing protocol – IPP. While LPD can be used for network printing, it can be a little clunky in use, especially when you want to monitor the state of your network printer (is it online, has it finished printing that last job, has it now run out of paper?). With CUPS you have a more natural way of investigating printers on a network, especially if your distribution ships with Webmin, as Mandrake and Red Hat do. Webmin allows you to access the CUPS server control front-end via an HTTP connection, through a Web browser. All of the distributions will come with tools allowing you to set up networked printer sharing without too much hair pulling, but this can only be relied upon if the underlying network has been configured.

Network printers

To simplify the set up of printing on anything more complicated than a standalone workstation for one, you have the option of spending a little extra on your printer by making sure it is network compatible. In effect, you are getting a printer that connects directly to your network, which can then be configured so that everyone else on that network can see and use it. This is especially true for those who are going to have a mixture of different clients on their network or if you are planning on using samba.

SuSE sets up and configures printers with its YaST configuration tool

Info

Webmin homepage: http://www.webmin.com/
The Linux email administrator HOWTO: http://www.tldp.org/HOWTO/MailAdministrator-HOWTO.html
The Mail User HOWTO: http://www.tldp.org/HOWTO/Mail-User-HOWTO/
Elm user’s guide: http://www.instinct.org/elm/
Pine information centre: http://www.washington.edu/pine/
Sendmail homepage: http://www.sendmail.org/
Samba homepage: http://www.samba.org
Webmin: http://www.webmin.com/webmin/
Ksamba: http://www.kneschke.de/projekte/ksamba/
The Printing HOWTO: http://www.linuxprinting.org/howto/
The CUPS Web page: http://www.cups.org/support.html
An example of Webmin in action: http://www.webmin.com/screens/edit_printer.html



FEATURE

Linux networking guide: Part 2

ROUTES AND GATEWAYS

In the second part of our simple guide to configuring Linux networks from the command line, Bruce Richardson shows us how to configure a Linux box as a router

Introduction

As was discussed in the first article in this series, the minimum requirements for a computer to function as part of a network are:

● A physical connection to the network, such as a Network Interface Card (NIC).
● An address on the network. It’s no use being able to talk to other computers if they can’t find a return address to talk back to.
● A way of determining how to reach any given address. That is to say, given an arbitrary address to connect to, can we find it on this network, can we reach it through this network or must we find some other route?

Items one and two were covered in the last article. This article looks in detail at routes, routing tables and how to manipulate them.

Before we begin

To implement the procedures outlined here you should be able to do the following:

● Install NICs in a Linux box (covered in the previous article).
● Perform basic configuration of network interfaces on your distribution of choice (also covered in the previous article).
● Compile a kernel. This is not the terrible prospect many recently-converted Linux users seem to dread. There are friendly menuing systems (text or GUI) to help you, and if you haven’t tried it yet you really should learn.

Networking concepts

Configuring a Linux box as a router is a more complex task than simply connecting it to a network. It is necessary to explain some elements of IP networking on Linux before giving any practical examples.

What is a subnet?

A subnet is a subdivision of a network: you might say that a subnet is to a network what a network is to an Internet.


As the previous article explained, each IP address contains a network number, uniquely identifying the network on which the host may be found. A network may be further subdivided into subnets by allocating further bits to specify subnet numbers. A network with address 194.206.0.0/16 could allocate the third most significant byte to subnet numbers: this would allow for 256 subnets (each with up to 254 hosts) with addresses 194.206.0.0/24, 194.206.1.0/24 and so on. There may be many reasons why a network may be split into subnets, linked by switches or routers, but the greatest benefit is that dividing your network in this way makes the implementation of routing and firewalling much easier.

Routes, gateways and the routing table

When the Linux kernel is presented with an IP packet to deliver, it needs to know the route by which the packet should be sent if it is to reach its destination. To do this it consults the routing table. Each entry in the routing table lists a destination and the way to reach it. The kernel works through the entries in sequence until it finds a match. If there is no match, an error is returned. The Routing Table boxout shows a sample routing table as displayed by the route command. The machine in question has two routes in its table. The first route is to a subnet with address 192.168.10.0 and netmask 255.255.255.0, which is the local network to which the computer is attached. The entry tells us that any IP address on that subnet can be reached through the eth0 network interface (see the Iface column at the far right). The second route is a default route: the special address 0.0.0.0 with netmask 0.0.0.0 matches any address. This entry catches any IP address that isn’t on the local network and forwards it to the host at 192.168.10.1 (as specified by the entry in the Gateway column) via network interface eth0. The machine at 192.168.10.1 is expected to know how to send the packet on at least the next step in its journey and is thus acting as a gateway to other subnets and networks.



Routing table – as shown by the route command

Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
192.168.10.0    0.0.0.0         255.255.255.0   U      0       0    0    eth0
0.0.0.0         192.168.10.1    0.0.0.0         UG     0       0    0    eth0

The route command

The route command (usual location /sbin/route) is used to manipulate the routing table. If we assume that the Linux box began with no routes at all, we can create the routing table shown in the Routing Table boxout with the following commands:

route add -net 192.168.10.0 netmask 255.255.255.0 eth0
route add default gw 192.168.10.1

The second command doesn’t specify an interface because the route command can work it out from the route added in the first (that is, the kernel already knows how to get to any 192.168.10.xxx address). You can specify the interface if you wish, however. You must be root to manipulate the routing table, but any user may examine it (though they may have to type the full path to the command) thus:

route -n

This should give the same output shown in the boxout. Note: the -n parameter isn’t essential but stops the route command from attempting to resolve IP addresses into fully qualified names (which takes longer and hides the IP address information).

How the distributions do it

On a simple network you will not normally need to use the route command. For one thing, standard subnet routes like the first entry in our example are now automatically added by the kernel (any 2.2.x or later version) when the network interface is configured. As for default routes, on a Debian box a gateway entry is added to the configuration details for the associated interface (see the Debian Network Config File boxout) and the route is created and destroyed when that interface is brought up or down. On a Red Hat box the same thing can be achieved by adding an entry to /etc/sysconfig/static-routes, as shown in the Red Hat interface config script boxout.

What is a router?

All networked computers use routes and any host may be linked to more than one network, but a router is a piece of equipment that connects different subnets or networks together and acts as a gateway between them, enabling hosts on different subnets to communicate. Routers can be proprietary hardware or, as in this article, properly configured Linux boxes.

Static versus dynamic routes

Static routes do not change unless actively reconfigured. All the examples in this article use static routes. Most hardware routers are, in contrast, able to build routing tables dynamically, using established protocols to map the networks to which they are connected and responding automatically to network changes. Linux boxes can emulate this behaviour but there is not space to cover that here.

Policy routing and the IPROUTE suite

Ordinarily, routing decisions are made solely on the basis of an IP packet’s destination. Policy routing allows us to set rules that route IP packets based on other criteria, such as the source address. To do this on Linux we use the iproute suite and a properly configured kernel. A kernel configured for policy routing can maintain multiple user-defined routing tables. The IP tool from the iproute suite allows us to manipulate those tables and to create rules to specify which IP packets use which table. The IP tool is a networking Swiss Army Knife. It can be used to replace most of the common Linux networking commands (ifconfig, route and so on) and adds a whole range of new capabilities. In this article it is used to perform functions beyond the reach of the traditional tools. Iproute packages are available in rpm and deb format for all the main distributions and so installation is a simple one-line (or single-click) operation. There is not space here to describe the suite in any detail: the examples in the section called Building A Gateway Box should give a good introduction to the use of the IP tool.
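For orientation, here are a few familiar commands and their IP tool equivalents – standard iproute invocations, shown purely as a sketch:

ip addr show          # roughly equivalent to ifconfig -a
ip route show         # roughly equivalent to route -n
ip link set eth0 up   # roughly equivalent to ifconfig eth0 up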

Debian network config file

# /etc/network/interfaces -- configuration file for ifup(8), ifdown(8)

# The loopback interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.10.10
    netmask 255.255.255.0
    gateway 192.168.10.1




Figure 2: How we want our new set-up to appear

Building a gateway box

The rest of this article shows how to turn a Linux box into a gateway linking several subnets on a network.

The old setup

The network used to be organised as shown in Figure 1. There was one subnet, 192.168.10.0/24, holding all the servers and workstations. There were two routes to the Internet, one through a hardware ADSL router/firewall on 192.168.10.1 and one through a leased line whose router/firewall was on 192.168.10.2. Most user workstations were set up to use the ADSL router as a gateway, while those servers which required Internet access and certain developer workstations used the leased line. There were two main problems with this network configuration:

● 1. There is no easy way to control which hosts go out through which gateway. The ADSL connection has been unreliable, sometimes through problems with the line and other times because of problems with routing at the ISP.

Figure 1: How our old set-up was organised


But if any machines need to switch from one gateway to the other they must either be visited individually (in the case of those which are statically configured) or wait for the change to percolate through DHCP (for those which are dynamically configured). The DHCP lease on this network is three days, meaning that a change in the configured gateway takes up to 36 hours to percolate throughout.

● 2. It is insecure. Firstly, there is only one subnet and so access must be allowed from the Internet to the internal network to reach the mail and Web servers. Secondly, having two points of egress/entry to the network makes it twice as vulnerable, with two sets of firewall rules to get right but a failure in just one causing a breach.

The new setup

The proposed new configuration is shown in Figure 2. There will be two extra subnets, 192.168.11.0/24 and 192.168.12.0/24, the first leading to the leased line and the second to the ADSL line. The two hardware router/firewalls have been allocated IP addresses on the new subnets (192.168.11.2 and 192.168.12.2). The mail and Web servers have been moved to the leased line subnet. The new Linux box will act as a router between all three subnets and as a firewall shielding the internal subnet (beyond the scope of this article). Its configuration will include the following key points:

● It will become the sole gateway for the 192.168.10.0/24 subnet, using policy routing rules to decide which hosts are routed out through ADSL or leased line.
● Its interface to the internal subnet will be assigned both 192.168.10.1 and 192.168.10.2 as addresses, removing the need to reconfigure any hosts on the internal subnet.



● We will add a couple of scripts to switch the default route between the ADSL and leased line, so that users can quickly be routed through the leased line if there are problems on the ADSL.

This configuration offers several advantages over the old one:

● It’s simpler to administer, since all routing decisions are made at the gateway box.
● Users can be switched instantly from routing through the ADSL line to routing through the leased line.
● It’s much more secure, since the Internet-facing servers have been moved into a Demilitarized Zone (DMZ) between the Internet and the internal subnet. Now access from the Internet can be restricted to the DMZ, with no access allowed from the Internet to the internal subnet.

The implementation

Prepare a Linux box with three network cards in it (a Pentium II or equivalent with 64Mb RAM and fast Ethernet cards will be more than sufficient). You will need a 2.2.x or later kernel and the iproute suite. Make sure the kernel has been compiled to be an advanced router (CONFIG_IP_ADVANCED_ROUTER=y), with the policy routing (CONFIG_IP_MULTIPLE_TABLES=y), Netlink socket (CONFIG_NETLINK=y) and routing messages (CONFIG_RTNETLINK=y) options. It would also be desirable to have it optimised for routing (CONFIG_IP_ROUTER=y). Additional options are required for firewalling but that is beyond the scope of this article. Configure the network interfaces so that eth0, eth1 and eth2 have IP addresses 192.168.10.1, 192.168.11.1 and 192.168.12.1 respectively. The Debian Gateway Interfaces boxout shows how this would be done on a Debian box.

Simpler steps

First we should switch on IP forwarding, so that the box will forward packets that come in on one interface back out on the appropriate interface:

echo 1 > /proc/sys/net/ipv4/ip_forward

Debian gateway interfaces

# /etc/network/interfaces -- configuration file for ifup(8), ifdown(8)

# The loopback interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.10.1
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 192.168.11.1
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 192.168.12.1
    netmask 255.255.255.0

The routes to each subnet should have been configured along with their respective interfaces, but now we add a default route out through the ADSL line:

route add default gw 192.168.12.2 eth2

At this point the routing table should look like the Gateway Routing Table boxout. Now we turn from the route tool to the IP tool, adding an extra address to the eth0 interface so that the gateway box can impersonate both of the routers:

ip addr add 192.168.10.2/24 brd + dev eth0
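Note that the echo into /proc above only lasts until the next reboot. On many distributions – an assumption, so check where your distribution keeps its sysctl settings – the same thing can be made permanent with a line in /etc/sysctl.conf:

# enable IP forwarding at boot
net.ipv4.ip_forward = 1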

Policy routing implementation

At this point we do have a functioning router. If the cables are connected it will route from the internal subnet to the Internet through the ADSL line (via eth2) and to the mail and Web servers via eth1. Now for the complicated bit, where we try to add an alternative set of routes for certain hosts. First we add a custom routing table, leasedline, to the rt_tables file (on a Debian system this is in /etc/iproute2/):

Red Hat interface config script

# /etc/sysconfig/static-routes
# Each line should have the format:
#
# device args
#
# When an interface is brought up, each matching line
# is passed to route as:
#
# route add -args device

eth0 default gw 192.168.10.1




/etc/iproute2/rt_tables

# syntax: priority name
#
255     local
254     main
253     default
0       unspec
#
# table added for leased line
#
200     leasedline

The new table has no routes, so we need to add some:

ip route add 192.168.10.0/24 dev eth0 table leasedline
ip route add 192.168.11.0/24 dev eth1 table leasedline
ip route add 192.168.12.0/24 dev eth2 table leasedline
ip route add default via 192.168.11.2 dev eth1 table leasedline

Now we add a rule for each IP address of a host that will use the new table:

ip rule add from 192.168.10.14 table leasedline
ip rule add from 192.168.10.17 table leasedline
ip rule add from 192.168.10.32 table leasedline

We can check to see if the routes and rules have been properly configured:

ip route show table leasedline
ip rule show

Finally, we flush the cached list of routes so that the new routing tables are used:

ip route flush cache

If everything worked, traffic to the Internet from the three hosts listed above will go through the leased line, no matter what the state of the main routing table.

Gateway routing table

Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
192.168.10.0    0.0.0.0         255.255.255.0   U      0       0    0    eth0
192.168.11.0    0.0.0.0         255.255.255.0   U      0       0    0    eth1
192.168.12.0    0.0.0.0         255.255.255.0   U      0       0    0    eth2
0.0.0.0         192.168.12.2    0.0.0.0         UG     0       0    0    eth2

Final detail

Now all we need do is add scripts that can be used to switch the default route from the ADSL to the leased line and vice versa. A simple example is provided in the Route Switching Scripts boxout. Note: these scripts do not affect the leased line routing table.

Route switching scripts

#!/bin/sh
# /usr/local/bin/goleased – switches default route to leased line
route del default gw 192.168.12.2
route add default gw 192.168.11.2
ip route flush cache

#!/bin/sh
# /usr/local/bin/goadsl – switches default route to ADSL line
route del default gw 192.168.11.2
route add default gw 192.168.12.2
ip route flush cache

Info

iproute site: http://defiant.coinet.com/iproute2/
Advanced routing HOWTO: http://www.linuxdoc.org/HOWTO/Adv-Routing-HOWTO.html
Linux network administrators guide: http://www.tldp.org/LDP/nag2/index.html

Making it permanent
If you have all this working, it may occur to you that most of the configuration will vanish if you restart the machine or bring network interfaces down. A crude way to give it permanence would be to place the sequence of commands listed above into a script, to be run during start-up. A more robust solution would be to arrange it so that routes and rules are added or deleted with their associated interfaces. Each distribution has its own way of achieving this and I leave it to you to research as an educative project.
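As a hint of what that might look like on a Debian box – a sketch only, reusing the leasedline table and the addresses from this article – the commands can be hung off an interface definition in /etc/network/interfaces with up and down directives:

auto eth1
iface eth1 inet static
    address 192.168.11.1
    netmask 255.255.255.0
    up ip route add default via 192.168.11.2 dev eth1 table leasedline
    up ip rule add from 192.168.10.14 table leasedline
    down ip rule del from 192.168.10.14 table leasedline

Red Hat has equivalent hooks in its network scripts.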

Summary

This article has shown you how routing works on Linux, from the simple set-up of a typical workstation to the complex configuration of a router. Most small networks don’t need anything so complex, but the ambitious scope of this article hopefully provides you with examples that you can adapt to your own needs. The next article will explain DNS, how to configure workstations to use DNS properly and how to configure a DNS server using BIND or some of the smaller Open Source DNS daemons.


KNOW HOW

diald

DIAL ‘D’ FOR DAEMON

The original aim of diald was to bring up and take down a dialup link to the Internet on demand. As Robert Morris explains, this is just the start of what diald has to offer

A dial on demand Internet link works very well over ISDN (although it does work on a traditional modem link too), with the advent of unmetered Internet accounts. It is ideally suited to home network or small office applications, and diald provides an Open Source alternative to the more expensive dedicated ISDN routers.

Functionality

diald’s functionality can be summed up as a daemon that controls and monitors a non-full-time IP connection. This functionality consists of three elements: connecting and disconnecting on demand, allowing manual control (both locally and remotely), and providing monitoring of the status of the connection. The on demand functionality is implemented with a proxy interface, using the Linux ethertap device. This is a virtual network interface, which hands any packets routed to it to a userland process – diald in this case. This device is set up just like any other network interface – and in the normal configuration of using diald to manage a link to the Internet, the machine’s default route would point to the proxy interface. Thus any traffic destined for the “outside world” is handed over to diald. When diald receives a packet and “triggers” (i.e. decides to bring the link up), it removes the proxy interface and then runs pppd to bring up the real interface. The routing is adjusted automatically, and the trigger packet fed back to the kernel, which then routes it in the normal way. diald then monitors the ppp device and, when it sees it is idle, kills pppd and reinstates the proxy device ready for the next trigger.

/etc/diald.conf

mode ppp
device /dev/ttyS0
speed 57600
modem
lock
crtscts
authsimple /etc/diald.auth
tcpport 1020
connect "chat '' ATZ OK ATDT08001234567 CONNECT"
defaultroute
dynamic
local 192.168.0.1
remote 192.168.0.2
include /usr/lib/diald/standard.filter

Simple local control of diald can be achieved using signals. The two most useful are SIGUSR1, which tells diald to immediately bring the link up, and SIGINT, which immediately takes the link down (although leaving diald running). These can be useful for implementing regular timed connections (such as for mail polls) from cron, for example. diald also has a more complex command interface, which is available locally through a named pipe, and remotely using a TCP port (or of course locally by connecting to localhost). Authentication must be performed before any control commands are permitted. The named pipe and TCP port interfaces are not enabled unless the applicable commands are specified in your diald configuration file. Once authenticated, a number of commands can be issued to control the link, for example: up to bring the link up; down to take the link down; force to bring the link up and force it to remain up until unforce is specified; and block to bring the link down and block connection attempts until unblock is specified. The command set is documented in the diald-control manpage. The TCP port also provides monitoring of the link status, including whether demand mode is enabled or disabled, or the connection is forced or blocked. Full details of this are in the diald-monitor manpage.
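For example, the following could be run from cron to force a connection for a mail poll and drop it again afterwards – a sketch, assuming a single diald instance so that pidof finds the right process:

kill -USR1 $(pidof diald)   # bring the link up immediately
sleep 600                   # give the mail poll ten minutes
kill -INT $(pidof diald)    # take the link down; diald keeps running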

Configuration

diald is available in both traditional .tar.gz and rpm archives from http://diald.sourceforge.net. At the time of going to press, the latest version is 1.0. In most installations you’ll want to ensure that diald is started at boot up. If you’ve installed from source then you may have to manually add a line to a startup file such as /etc/rc.local. You should recognise some of the commands in the example diald.conf from the pppd options file – in fact the commands relating to the modem (device, speed, modem, lock, crtscts) and chat script (connect) behave in exactly the same way. The reason they’re specified here is that diald speaks to the modem itself, and runs the chat script (as specified by connect) prior to handing control over to pppd. Therefore you should not specify the modem and chat script commands in


KNOW HOW

/etc/ppp/options when pppd is being called by diald. The remaining commands configure the TCP port and authentication for controlling diald, set the default route to point to diald, and use dynamic addressing. diald needs the local and remote commands when dynamic addressing is used – these specify temporary IP addresses for diald to use for its proxy interface, since the real addresses are only established after the link has been brought up. Finally, standard.filter is included – this line is essential, because standard.filter contains all the rules specifying what types of packets diald will trigger on or ignore, how long the link will be initially brought up for, and so on. This is a minimal ppp config, since many of the usual pppd commands are dealt with by diald instead. Obviously you would need an entry in pap-secrets or chap-secrets to specify the secret for the user and remotename combination you’ve specified here. To use the TCP port for anything other than monitoring, you need to authenticate to diald. Two authentication schemes are supported – “simple”, and PAM. The simple scheme is meant for applications where all the clients are trusted. If you use it, you should make very sure that your TCP port is firewalled from the outside world and only open to hosts on your local network. You specify an auth file with the authsimple command, and this file should contain one or more lines in the following format (my /etc/diald.auth given as an example):

rob    up,down,force,block

Here, the user “rob” is authorised to issue the commands shown. Once connected to the TCP port, you need only send auth simple rob and then you may proceed to send other commands. No password is required (use PAM authentication for this). If you’re using the rpm, create a file in /etc/sysconfig/network-scripts for each copy of diald you want to start, with the prefix dialdcfg-. If you only want a single instance of diald, you can simply do a touch /etc/sysconfig/network-scripts/dialdcfg-internet, and place all your configuration in diald.conf. You can set up multiple instances of diald; for example, I have one instance which connects to the Internet and another to connect to the office dialin service. To run multiple instances, you create one dialdcfg file for each, and put a “DIALDOPTIONS=” line in each. I like to put connection-specific configuration in config files under /etc/diald, and then simply put an “include” command in the DIALDOPTIONS line, to keep the configuration easy to read. Obviously, make sure only one instance has the defaultroute command. You can use addroute to specify a script to do your own routing.
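Scripting this is straightforward. As a sketch – assuming the tcpport 1020 setting from the example diald.conf, the simple auth entry for “rob” above, and a netcat-style utility on the client – bringing the link up remotely could be as little as:

printf 'auth simple rob\nup\n' | nc localhost 1020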

/etc/ppp/options

lock
user rob
remotename internet
noauth

Pitfalls

Using diald has its downsides. These relate to dial on demand solutions in general. If you’re using an ordinary dialup account with dynamic address allocation, it can be annoying if diald takes the link down on you and brings it back up, causing your address to change – this breaks any open connections in ssh, FTP and so on. HTTP and POP however (which is what most desktop users will be using) don’t keep TCP connections open once the data is transferred, so they work just fine. Secondly, if you’re using a modem with diald to provide on demand Internet access to Windows clients (a common arrangement in a small business installation), you may find that, because the clients are not aware that the connection is dialup, they time out whilst waiting for the modem to negotiate. This can be frustrating for users. With ISDN, where the connection time is only a second or so, diald works quite nicely. Finally, be careful if you’re using an ordinary 0845 account – diald may trigger when you don’t want it to, whether it be a mis-configured daemon that tries to connect to an external IP address in the middle of the night, or an anti-virus utility on a user’s desktop machine that tries to download an update from its Web site every time it is started up.

Other applications

diald is useful in other applications too. When mode dev is specified in the configuration, diald effectively hands control of bringing the connection up and down to scripts that you specify. In this way diald can be used to monitor and control any type of link whatsoever. For example, it could be used as a frontend to a VPN tunnel – creating the tunnel only as traffic arrives and destroying it afterwards. Another alternative application is using diald to manage a backup Internet connection, to be activated on failure of the primary link (ADSL in this case, or a leased line etc.). In one installation, diald was configured to connect to a normal Internet dialup account, but with demand mode disabled and no default route (since the default route is the ADSL line). A small Windows applet was provided that sits in the bottom right-hand corner of the users’ desktops, which they could use to activate the dialup connection in the event that their ADSL line stopped working. If you’ve got any type of link that you want to either bring up and take down transparently, or let users control from their desktops, then take a look at diald.

The author

Robert Morris is a freelance Linux professional, and a contributor to the diald project. He can be contacted at: rob@rmorris.co.uk





KNOW HOW

So you want to live on the...

BLEEDING EDGE

Mandrake Cooker helps to bring the MandrakeSoft developers closer to Mandrake Linux users. Formi and Colin Murphy explain how to get involved

Mandrake Cooker

MandrakeSoft prides itself on producing a cutting edge Linux distribution, with the latest packages and the most recent versions. Mandrake Linux is one of the larger distributions, so keeping an eye on all of the packages is no small task; making sure they all work smoothly together is an even greater one. It is here that the bright idea of Mandrake Cooker comes to the forefront, enlisting the help of the community to prove that many hands make light work. You might reasonably assume that keeping a structure to such a large base of developers and testers would prove a big problem, but this doesn’t seem to be the case. That must be put down to the good will and kind nature of the community spirit.

The mailing list

At the heart of Cooker lies the mailing list: a busy melting pot of suggestions, discussions, announcements and bug reporting. It is through this list, where at its busiest hundreds of messages can pass in a day, that the buzz for development and sense of activity can be felt most strongly. The Cooker distribution is made available for download from a range of mirrored FTP sites throughout the world. From these you can regularly take the updates to keep your Cooker system current.

A word of caution

Cooker is most definitely a development distribution, so you must treat it with caution. It is not designed for everyday use, so you should not be using Cooker on systems you rely on. Because it is in flux – always improving and developing – things will go wrong and packages will stop working. That, in a way, is Mandrake Cooker’s raison d’être. Without the inherent bugs there would be little point going to the effort of running it. Bugs are usually fixed quickly, but there is no guarantee that the system will be working when you need it to be.


Cooker is also quite big. Even if you just limit yourself to the i586 processor versions, you are still looking at downloads of up to 4Gb to get started, and you might be looking at a good chunk of that again after an update of one of the major libraries, for example. As such, this is only something you can consider undertaking if you have the network bandwidth to make it worthwhile. On a 512Kbps cable connection it can take about three hours to download a 650Mb CD image, so you’re looking at the best part of a whole day. A lot of this can go on in the background though, so for some it will prove to be no great stumbling block.

Here’s how to start

The patching of Cooker occasionally freezes, enabling the MandrakeSoft Cooker organisers to raise an .iso image of the work done so far. This image can be downloaded, or even bought from some vendors, and can be treated as any other version of Mandrake Linux, with the exception that it is still a Cooker version of Mandrake and so should still be used for testing purposes only. You can consider it a prerelease version of a proper Mandrake Linux release. The procedure for downloading and making your own .iso is simple enough (see the Downloading Cooker boxout for more details). Cooker develops and grows daily, as patches are added with their accompanying bugs, and new packages either replace and improve or add functionality. For the developers and those committed to testing, a fixed version would be of little use, so updates need to be done on a regular basis, maybe even daily. If Cooker is to be of any value to you then you too will need to make regular updates. You can use the mirror utilities as described in the Downloading Cooker boxout but, once you have a version running, you might want to update it with the urpm utilities. urpm (user rpm) is a Perl script that works as a wrapper for, or sometimes a front-end to, the rpm package management system.



For those of you familiar with Debian Linux, urpm is to rpm as apt-get is to dpkg, the Debian package manager. rpms are the packages that Mandrake uses to install all of its software. Once urpm is configured it can be called upon to automatically resolve the dependencies that need to be met for a package which you are trying to install. It will even fetch other packages, as needed, and install them to meet any dependencies that arise. The configuration of urpm revolves around two utilities: urpmi.addmedia, which adds a source of rpms to its database, and urpmi.removemedia, which does the opposite. So, to add a new source you would input:

urpmi.addmedia local file://home/colin/rpms

if you had a collection of rpms in your local /home directory. But they don’t need to be local: changing the path to that of an FTP site would work just as well, albeit a tad slower. The important thing to remember is that the file:// element in the above example would change to ftp:// or even http://. To then get a package installed you would enter:

urpmi name_of_the_package

If the name we provide is ambiguous, urpmi will print a list of all the packages that match and exit. You can modify this behaviour with the option -a:

urpmi -a gnome

This command will install or update all packages with the string gnome in their name. If the supplied name matches a package, it will be installed, and downloaded first if necessary. If the package needs other packages to be installed, you will first be asked for permission to install them. The option --auto will install the package and all required dependencies without asking for confirmation. One last option to mention is --auto-select, which checks all configured sources for more recent versions of those packages already installed, lists them and asks if they should be installed. You can even add the --auto option to this.
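As a sketch, pointing urpmi at a Cooker mirror might look like this – the mirror path is invented, so substitute one from the mirror list, and note that some versions of urpmi.addmedia also need to be told where the mirror’s hdlist file lives, via a trailing with clause:

urpmi.addmedia cooker ftp://ftp.somemirror.org/somemirror/dir/cooker/RPMS

After that, urpmi will happily pull packages and their dependencies straight from the mirror.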

Downloading Cooker

You can create your own .iso image of Cooker without too much hardship. You will need to keep a local copy of the Cooker directory. Initially, you will need to download a copy from one of the current Cooker mirrors, a list of which can be found on the Mandrake Cooker Web page. You can download this with your favourite FTP utility or by using one of the mirroring tools available. The advantage of using a mirroring tool is that it will compare the timestamps of the files in the archive, so when you come to update your local version of this mirror, you will only have to fetch the files that have changed. Something like:

cd /mnt/localcookerdir
lftp -c 'open ftp.somemirror.org; cd somemirror/dir/cooker; mirror'

changing only ftp.somemirror.org and somemirror/dir/cooker to sensible values. After the mirroring is done, use the mkcds script, in the misc/ directory, to build the .iso.

With Cooker in mind, it would be tempting just to configure urpm to point at a Cooker FTP site and run urpmi --auto-select --auto nightly. This would be unwise: you should really look at the recent changes made to Cooker, as announced in the mailing lists, and pick only those packages that you feel confident about. When you come across a bug or problem you must make sure it gets reported. Firstly, check for previous reports on the MandrakeSoft Bugzilla Web page and, if you have found something new or further to report, post the details to the Cooker mailing list. This chain of events really is important, because the last thing the Cooker list needs is for everyone to post the same problem.

Info The MandrakeSoft Cooker Web site http://www.linux-mandrake.com/en/cookerdevel.php3

How to create a bootable CD
Though not strictly necessary, once you have a local version of Cooker, you might find it convenient to burn it to a CD. Here is a reminder of how to do that:

mkisofs -R -b images/boot.img -c images/boot.cat -o cooker.iso cooker

This makes an .iso image, complete with the boot.img and boot.cat files that will make the resulting CD bootable.

cdrecord -eject -v speed=4 dev=4,0 cooker.iso

will burn the .iso image to the CD. The dev=4,0 entry will differ depending on your hardware. To find out exactly which device your CD writer is, use the cdrecord -scanbus command.
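Before committing the image to a blank disc, you can sanity-check it by loop-mounting it somewhere convenient (the mount point here is arbitrary, and loopback support in the kernel is assumed):

mkdir -p /mnt/iso
mount -o loop cooker.iso /mnt/iso
ls /mnt/iso
umount /mnt/iso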




Brahms – Music notation and sequencing

IF MUSIC BE THE FOOD OF LOVE

Whether it be for composing, arranging, printing or playing music, Brahms can assist with all the steps involved in computer music. Dr Jan Wuerthner shows us how, with the aid of Brahms’ KDE interface, you can be a composer in no time

The Brahms project integrates several areas of computer-aided music. As a sequencer, Brahms can read, edit and play music. A piece of music is not treated as one long block of audio information, as with an AIFF or WAV file, but as a succession of individual sound events in MIDI format. Brahms works with these abstract notes (which are output over the MIDI interface) and puts them together in a less abstract form to create music. Brahms also incorporates a comprehensive music notation system. The MIDI format is not sufficient for modern score notation, and for this reason Brahms has its own format (written in XML), in which notes have additional attributes for accurate representation. It can, for example, visually represent double alterations (x and bb), groupings and tuplet-brackets, as well as ornaments through numerous symbols such as slurs, trills or shakes, (de)crescendos and many others. Finally, Brahms can also bring the score into a printable format.

Why Brahms was developed

Before the development of Brahms began, there were only a few music notation and sequencing programs for Linux. These were in varying stages of development and some were highly specialised. They all, however, exhibited an important shortcoming: they lacked interfaces for expanding their functionality. As a consequence, numerous projects developed their own music notation systems, although their actual goals were a lot less expansive (for example, research projects with very specialised objectives). It was in 1998 that the Brahms project arose out of this state of need, with the goal of offering a complete music platform. Brahms takes care of all the usual requirements of a music system, as well as being arbitrarily extensible through a suitable interface. It will not escape the trained eye that there is a parallel with a well-known product from the world of commercially available software. Although this product originally served as an inspiration, Brahms has since gone its own way. Transferring from one software package to the other is, however, relatively simple.


Writing music with Brahms

Brahms is put together like a multi-track recording device that works with a piece of music (song) as its largest data structure. Apart from all the magic buttons, bells and whistles, the main window can essentially be divided in two: the “tracks” are represented, line by line, on the left and the “parts” are on the right. Any track can consist of one or more parts. The parts belonging to a track are arranged next to each other on the right-hand side. Whole tracks or individual parts can be manipulated through a range of different attributes (see Figure 1). For example, the pitch or volume of an entire track can be varied. A track can be assigned an instrument and an output, and it can also be muted. The actual (sound) “events” can be found in the parts of a track. All events possess a start position within their part and a duration. Those who are versed in music notation will probably miss the rests – these can, however, be derived automatically by Brahms from the note positions and lengths.

Figure 1: The tracks are displayed on the left of the main window and the parts on the right



The different types of events (more specifically: notes, master events and audio events) are assigned to the parts of different types of tracks (i.e. score tracks, master tracks and audio tracks). Special editors are used for the different kinds of tracks. Brahms extensions (or add-ons) can define additional event types and new types of tracks. A good example of this is the Riemann add-on, which analyses a piece of music in terms of function theory. To do this, it produces a harmony track and fills it with Riemann events. These new events describe the harmonic situation of the piece at different times. A number of toolbars can be found above the tracks and parts in the main window. These include all the main functions like Play and Stop, as well as the ability to set the position, tempo and time signature. In addition, there are multi-step Undo and Redo tools, the usual operations of cut, copy and paste, as well as two other functions for the cutting and pasting of parts. But wait, there’s more: the menu allows the import and export of MIDI files, the mixing down of a piece of music, the reloading of add-ons, and provides access to the global settings.


Music notation with the Score Editor

One of these editors is the Score Editor, which at first appears as an empty sheet of music. The clef and the key can be selected by clicking on the treble clef (see Figure 3). At the start of the system, a red bar on the left-hand side marks the active track. The pitch (which is displayed on the appropriate toolbar) as well as the velocity (see the info field below the system) refer to this track. A note cursor can also be used, if necessary, in this system.

Mix down: Mixes the notes from several tracks into a single track.

What you see is what you hear

Brahms plays each note as you enter it. The MIDI instrument, as well as the output channel, can be selected in the track settings. The current work can be played by activating either of the two Play buttons in the main window. One of the buttons plays only the marked range (the marking is located above the part field) and the other plays the entire work.

Cold start

When starting Brahms, the fields for tracks and parts are, at first, empty. If you want to create a new piece of music, you will need at least one track with at least one part. The Edit/Add Scoretrack menu item creates a new track, which can receive a new part through Add Part in the right-click context menu. An even easier alternative is to double-click the large field on the right that contains the parts: Brahms then produces a part at the exact point where the mouse is located. The mouse can then be used to drag the part to any other position. The context menu of the part offers a selection of editors, which process the contents of the respective track. The background of the right-hand field also has a context menu that accesses a number of editors. In this case, they process the entire piece of music (see Figure 2).

Figure 2: The context menu in the part field can start editors that work on several tracks simultaneously

Double alteration: This changes a tone by two semitone steps. From the point of view of harmony, a doubly raised G is just as distinct from an A as an F sharp is from a G flat. These differences disappear only with equal temperament, which is usual in computer music as well as on keyboard instruments today. Neither the enharmonic change (F sharp/G flat) nor the double alteration is part of the MIDI format. MIDI only describes the sound, not the representation.

Figure 3: Notes can be entered and manipulated in the Score Editor

Most of the work can be done with the mouse. For example a single note or a group of several notes can be inserted or selected and moved with the mouse. Each note also has its own context menu, which is accessed via the right mouse button.

Toolbars for the editors

Some of the different editors use the same toolbars. The standard toolbar includes the common working functions of undo, redo, copy, cut, paste, delete and print. A selection menu makes the appropriate add-ons available, and the cog to the right of this activates the selected add-on. The other icons vary the number of displayed beats (Zoom), define the insertion position (Insert point), switch audio playback on and off while editing, and ensure that the editor displays notes with explicitly defined MIDI channels in colour.

The Button bar indicates the current position (beat, strong beat and tick) and the pitch of the cursor. A left mouse click will produce the appropriate note at exactly this position. Keeping the Shift key pressed chromatically raises the tone (#), while holding the Ctrl key will lower it (b). Remote enharmonic changes can be executed either by first selecting the appropriate buttons on the right (bb, b, NO, #, x) or subsequently, with the help of the Note bar. The note symbols in the Button bar determine the length of the notes to be entered – from a whole note right down to a 64th note. Notes are, as a rule, rounded to the nearest 16th, although this can be altered in the Resolution menu. The next two buttons select dotted notes and triplets. It is more efficient, however, to set the note length using the key combinations Alt+1 to Alt+7.

The Note bar indicates the parameters of an individual note. It is deactivated when an editor is opened and is displayed through the View/Notebar menu item or with the key combination Alt+E. The fields in the Note bar are filled with values as soon as a note is selected with the mouse or the cursor keys. The Start field describes the position in the piece (as opposed to within the part) by defining three values: beat number, strong beat (for example 1, 2 or 3 in a 3/4 measure) and tick within a strong beat. A quarter note consists of 384 ticks. Arpeggios (ornamental notes that are melodically connected with the following main note) can be implemented by changing this value. The Length field indicates the note duration in ticks. The Pitch field describes the pitch by giving the tone, followed by the octave position. The velocity of the note (0 to 127) can be found in “vel”, and the “chn” field (channel) indicates the number of the MIDI channel (0 to 15). A MIDI channel number of x signals that the setting of the associated track is used. This value, displayed in the note sheet (Figure 4), can be coloured in with the paint brush symbol in the toolbar.


Figure 5: The parts can be viewed in a variety of different ways

More editors with more input possibilities

Different forms of music notation have been developed over the years. Today’s notation has grown over the centuries, and its roots date from a time before well-tempered tuning was known – something that becomes abundantly clear when you attempt to model its structure as an algorithm. In the age of computer music, the Piano Roll Editor representation enjoys great popularity. Pianists, and musicians who are not familiar with classical notation, can write and work on music using the keyboard on the left; the tones are then plotted graphically against time in a large diagram. Of a similar set-up is the Drum Editor, with which drum tracks can be set down in a true-to-life form (Figure 6).

Figure 4: When mixing (main window menu, Edit/Mixdown), the individual notes of the MIDI channels are mixed into the original track

Music in word and picture

The Score Editor offers different tools (see Table 1), which can be used to complete or supplement the score. These aids are selected in the Tool menu of the editor window. A tool remains active until another is selected. The symbol selection windows, which appear with numerous tools, disappear again when a new tool is selected. Brahms can also print the score, although the sequencer currently requires the shareware program Mup (The Music Publisher) for this. The output from Brahms serves as the input for Mup, which then supplies a very nicely formatted PostScript file. The shareware aspect of this software is a thorn in the side for many, and as such Brahms’ developers are currently working on output through the GPL’d LilyPond program. After one or more parts have been produced in the editor, the main window can represent them in a variety of different ways. Using the Edit/Preferences/Desktop menu item the following variations can be selected: plain, show track name, show instrument, show part events, and show events and pitches (Figure 5).


Figure 6: The Drum and Piano Roll Editors offer different representations of a track

In all editors, a selected group of notes can be moved (dragged) with the mouse, or duplicated by simultaneously pressing the Ctrl key. If you hold the Shift key at the same time, only the time position and not the pitch of the notes is moved or copied. Note lengths can also be corrected in the Piano Roll Editor using the mouse: click the end of a note’s bar – the cursor changes its shape as it rolls over this region. Since Brahms encapsulates many editing functions in abstract classes, the implementation of further editors is not a problem. Editors for the guitar and for the modal notation of medieval music are both on the wish list.



Brahms supports aRts and ALSA

At present Brahms supports two architectures for sound output: ALSA in its 0.5.x version and aRts. Both operating modes have their pros and cons. Support for ALSA 0.9.x is also planned, but this depends on how aRts develops. In the long term, a single platform is to be supported whose interface can meet all the sequencer’s requirements. aRts is a promising candidate to supply such a platform, as well as offering many other advantages besides. aRts is not only a software synthesizer, it has also been the sound server of the KDE desktop for quite a while. In this mode, the many aRts capabilities can be utilised: sound synthesis with the aRts-builder, MIDI recording from a keyboard, simultaneous playing of several sound sources in the system, and so on. The recording and playing of audio data with audio tracks is in development at the moment – the first attempts have already been successful. Each Brahms track thereby receives its own mixer channel (Figure 7).

Table 1: Tools used in Brahms

Insert notes: This (preset) tool is used to insert and manipulate notes.
Add note symbols: Individual notes can be decorated with this tool (Figure 9).
Add system symbols: This tool positions symbols that are not assigned to a note directly into the system (Figure 9).
Add lyrics: Syllables and text can be assigned to the notes. Clicking a note opens a small text field, which can be closed again with the Enter key. If text is to be entered for the following note, the Spacebar should be used to close the syllable instead – Brahms then automatically opens a new text field for the next note (Figure 10).

Figure 8: The output of Brahms is connected with an aRts instrument in artscontrol

ALSA, the Advanced Linux Sound Architecture

Brahms can also be operated in ALSA mode, if ALSA version 0.5.x is installed:

brahms -o alsa

Figure 7: Each Brahms track is assigned a mixer channel

The aRts mode is used differently in KDE 2.x and KDE 3.x. If no audio can be heard, the MIDI manager settings are usually responsible. The MIDI manager is part of the artscontrol program and can be found under the View/View MidiManager menu item. A synthetic MIDI instrument, for example a Slide, can be created in the manager with Add/aRts Synthesis Midi Output. This instrument then appears as a new output. For Brahms to be able to use this instrument, it must be connected to the sequencer software. To do this, the instrument must be selected in the output list and Brahms must be selected in the MIDI input list – they are then connected with the Connect button (Figure 8).

As far as configuration goes, all that is required is for ALSA to be selected as the output in every track. The advantage here is that the sound fonts of the soundcard can be used. In contrast, aRts only handles synthetic instruments at the moment, which is rather unsatisfactory when playing orchestral instruments. Apart from this, aRts only permits a small number of tracks on less powerful systems. Where both aRts and ALSA are supported as sound and MIDI drivers, the ALSA option is disabled in Brahms.

Flexibility through modules

The essential components of Brahms are stored in libraries, which are dynamically loaded by the program at runtime. One could describe Brahms as merely the glue that holds these libraries together.

aRts-builder: The aRts-builder is a component of the aRts project and is used to create synthetic instruments. The inputs and outputs of different modules (or effects) can simply be wired together using the graphical user interface.




Hugo Riemann: Riemann (1849-1919) was a music researcher who developed a “function” theory, which states that each chordal entity can be attributed to one of three functions – tonic, subdominant and dominant – and that each of these triads can be represented by one of its individual tones.

The core library contains all the core components and functions (not the user interface, however) as well as numerous abstract classes. These include the structural components of a piece of music (for example song, track, part, event, note), all operations that affect these elements, as well as the basic functions of the editors. The elements of the core library require a presentation, i.e. a user interface. New graphical interfaces can be implemented with a minimum of effort thanks to the strict separation of function and presentation. There are currently two modes of presentation: Text Presentation and KDE Presentation. The KDE variant is loaded by default. If Brahms is started with the option:

brahms -p text

then the entire application runs on the console (text mode). Even the presentation of the notes in the Score Editor is abstracted in the core library. It was therefore very simple to implement an appropriate presentation in the text mode (Figure 9). Both figures illustrate the same section of music. The eighth notes are set individually in the text form: the abstraction contains the information about the groupings of notes, but leaves it up to the presentation whether to use this or not.

Table 2: Extensions for Brahms

Quantize All (Quantize): Rounds the initial positions, the note lengths and other events.
Quantize Length (Quantize): Rounds the note lengths.
Fixed Length (Quantize): Sets the note lengths to a constant value, which is selected in the editor.
Dump (Testing): Outputs the total content of the piece of music on the standard output.
Debug (Testing): Passes further information to the standard output.
ExtractLyrics (Output): Outputs the lyrics (i.e. text) assigned to the notes.
Stretch (Edit): Multiplies each initial position and length by a factor of two.
Revert (Edit): Reverses the direction of a piece of music (with respect to time).
Ear Training (Harmony): Trains intervals and chords (extendable at will).
Parallels (Harmony): Searches for fifth and octave parallels in a piece of music. A new track is created and filled with copies of these prohibited notes. Hidden parallels are highlighted in colour by setting the MIDI channel. This is a great help when creating your own compositions and when analysing works.
Riemann (Harmony): Assigns the suitable harmonies to a piece of music at different times.

Figure 9: The Score Editor in the KDE Presentation mode (above) and in the Text Presentation mode (below)

It is easy to see that the text form is not very simple to operate. In an emergency, though, it would be possible to start Brahms from the command line, i.e. without a graphical desktop; this presentation basically serves as a proof of concept. It would be conceivable to implement a pure Qt- or Gtk-based presentation. An interesting alternative would be a Web variant, whereby Brahms would act as a music server that could be operated from a browser.

Brahms is extensible

The big advantage of this architecture lies in the extensibility of Brahms. Extensions (add-ons) can be written and compiled without the entire Brahms application having to be recompiled – and even without having to restart the program. A concise API offers a simple interface through which a piece of music, with all its components, can be edited. In the category of such extensions are the three functions Quantize, Quantize Length and Fixed Length (which should be in every sequencer). It is pleasing to know that these operations also include an Undo function. The currently available extensions are summarised in Table 2. The Riemann extension is not completely implemented and as such is still at the prototype stage. It’s worth noting that this module, when calculating the harmonies, uses the tone material at the respective time (vertical) as well as considering the musical context (horizontal). A sound that consists of F sharp and C in the context of G major is thereby more likely to be interpreted as D major with a minor seventh than as a diminished F sharp chord. Beyond this, the Riemann extension shows that Brahms extensions can introduce new types of tracks (Harmony, symbolised by a heart) and new types of events (Riemann events).

Figure 10: The Riemann extension introduces a new type of track



Listing 1: Revert extension

Figure 11: The Riemann events are displayed in the lower part of the editor

The events are displayed in the lower part of the editor. The selection menu, which otherwise contains the beat values as its sole entry, now also supplies the Riemann values (Figures 10 and 11). An extension is loaded either as an option on the Brahms command line:

brahms -a dump -a riemann -a stretch

or at runtime (via the menu item File/Load Add-on...). After installation, the libraries to be loaded are located in the $KDEDIR/lib directory and begin with libBrahmsAddon. If an extension has been successfully loaded, it can be found in the editor toolbar’s selection menu or in the context menus of one of the following: piece of music, track or part (Figure 12).
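A quick way to verify which add-on libraries are actually installed is simply to list them – assuming the $KDEDIR environment variable is set on your system:

ls $KDEDIR/lib/libBrahmsAddon*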

Be your own composer

For those of you who would like to write your own extension, the well-documented dump extension can be used as a template.

Installation
The brahms-1.02-kde2.tgz source package for KDE 2 is available on the Brahms homepage. The following instructions unpack, configure and install the program:

tar xzvf brahms-1.02-kde2.tgz
cd Brahms
./configure
make
su
make install

01 void Revert::song(Song * song) {
02   Position endpos = (new SongIterator(song))->endPosition();
03   int sz = song->size();
04   for (int i=0; i<sz; i++) {
05     Track * track = (Track *) song->get(0); if (track==0) return;
06     Part * part = (Part *) track->first(); if (part==0) return;
07     Track * newTrack = song->createTrack(track->isA(),0);
08     newTrack->setName(strdup(track->name()->getValue()));
09     if (track->isA()==SCORETRACK) {
10       ((ScoreTrack*) newTrack)->setProgram(track->program());
11       ((ScoreTrack*) newTrack)->setChannel(track->channel());
12     }
13     Part * newPart = new Part(newTrack);
14     newPart->setKey(part->key()); newPart->setClef(part->clef());
15     // -------------------------------------------------------
16     for (Iterator i = Iterator(track); !i.done(); i++) {
17       Event * event = (Event*) (*i)->copy();
18       newPart->setStart(event, endpos - i.part()->end(event));
19       newPart->add(event);
20     }
21     newTrack->add(newPart); song->add(newTrack);
22     song->remove(track); delete track;
23   }
24 }

The extensions are located in the brahms/addons directory in Brahms’ sources, and the API documentation is also very helpful. The documentation is installed with Brahms, as well as being available on the Brahms homepage. A small example shows how little work it takes to create such an extension. Aside from formalities such as the declaration of context, category and name, the essential part of the Revert extension consists of the song() method (see Listing 1). The module first determines the end position of the piece of music. The outer loop goes through all the tracks (line 4) and the inner loop iterates through all the events (line 16). The position of each event is subtracted from the end position, so that the piece becomes a mirror image of itself in time. Copies of the events (not the events themselves) are edited and laid down in a new track with a new part (line 19). As soon as a track has been processed, the original is removed in line 22.
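Once compiled and installed alongside the other add-on libraries, the new module should load like any other extension. Assuming it registers itself under the name revert, that would presumably be:

brahms -a revert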

Info

The last step must be executed as the root user (hence the su command). On some distributions (for example SuSE 7.3), an option is necessary in the configuration, and the third step then changes to:

Brahms homepage: http://brahms.sourceforge.net
Official site of the aRts project: http://www.arts-project.org

./configure --prefix=/opt/kde2

Figure 12: The loaded extensions are accessed through the context menus




Using cookies with PHP

COOKIE CUTTER

Cookies are very much maligned and misunderstood, but they can be useful tools in the hands of Web developers. David Tansley explains exactly what cookies are and how you can make use of them with PHP


A few months ago the ITV programme Pop Idol captivated the British nation, and the public was invited to vote on who they thought was the best singer. Votes could be registered either by ringing a designated phone number or via the ITV Web site. On the day of the final, when my two kids tried to vote more than once via the Web, shouts were heard exclaiming: “It won’t let me vote more than once”. “Ah,” I said, “their Web site is either overloaded or you have been cookied.”


Cookie ingredients

So what are cookies? Cookies are very small text files that are sent from a Web server to your browser. The browser will then store them, usually in a cookie file. Cookies do not harm your computer; they are a way of storing general information about you or keeping track of what you are doing on a Web site. Cookies will only store information that you give the Web site, so be careful – if you don’t want personal information to be stored, then don’t give it in the first place! Let’s look at how a cookie might be used. Suppose you visited an online record store. If you decided to purchase a couple of records, a cookie would be used to keep track of your choices and how much you are spending. This cookie will provide you with a unique customer number that your browser will use to identify you to the Web site. When you make a purchase, this cookie is read and your purchase is added to a database, with your (cookie) number as the key that identifies you. When you wish to check out, your transactions will be displayed. The Web server knows that these are your transactions because it will have used the cookie to identify and keep track of you. You’ve got to remember that the World Wide Web is stateless: once you’ve loaded a Web page, that’s it, the connection is broken. The Web server has no idea who you are, which is why cookies are so important – they keep track of your transactions and movements within that Web site. Cookies are also used for login screens and for personalising Web pages, as at Yahoo. The downside of cookies is that some Web servers now use them to load up lots of unwanted advertising banners based on a previous transaction you may have made. Thankfully, you can disable the use of cookies, or have the browser tell you when a cookie is being sent, via your browser configuration.

Any Web-enabled script or programming language can utilise cookies. In this article we will demonstrate cookie handling using the Web scripting language PHP. PHP is a server-side language, which means it resides on the Web server itself. PHP has for a while supported sessions – a much better and more robust alternative to plain cookies – but for this article we will stick with plain cookie handling. The principles that we are going to show you can be used in any Web language of your choice. The structure of a cookie is as follows:

● Value: The actual (data) contents of the cookie.
● Expiration: The length of time the cookie is valid for.
● Path: Which directory the cookie is valid for. A single slash means all public Web directories on the Web server.
● Domain: The domain name the cookie is valid for. Please note you cannot make up a domain on your cookie, as security will prevent it from being sent successfully. If you do not know your domain then play it safe: leave it blank and it will default to your domain. If you specify a domain name – and you need to if you’re on the Web – you can use a sort of wildcard (a dot at the beginning of the domain name) to match all other domains that belong to you.



For example, suppose you belong to www.example.com: by specifying .example.com the cookie would also be valid for www1.example.com, www2.example.com, and so on.
● Security: If this is set to ‘1’ then only a secure connection (SSL) can read the cookie. Leaving the Security part blank will default to non-secure.

When setting a cookie, not all of the above are mandatory: if the Domain and Security are left blank, PHP will assume it is not a secure cookie and the domain will be the current one you belong to (if any). The expiration time is specified as the number of seconds since 01/01/1970. Don’t worry, there’s no need to get a calculator out: by using PHP’s time function, all you need to do is give the time in seconds at which you want the cookie to expire. So, time()+3600 will compute one hour from the current time (when the cookie is sent), and time()+86400 will compute 24 hours from the current time – get it? If you only want the cookie to be valid until the browser closes down, leave the expiration part blank.

When a cookie is initially sent from the Web server to the browser, you cannot read that cookie back until the client revisits. Beware of this; it is the most common mistake when learning cookies. Another common mistake is trying to send content to the browser before setting a cookie – this is a big no-no, as the cookie won’t get sent in a million years. Always send your cookie before outputting any content to the browser. (By content we mean any information or pictures that are displayed in the browser.) You may have already guessed how a Web server could stop you from registering multiple Pop Idol votes: by simply setting a cookie with, say, an expiration time of six hours. The cookie would be set when you initially vote; when you try to vote again, the Web server checks whether a cookie is present. If it is, you must have already voted – voilà!
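As a sketch of a cookie that uses every field of the structure described above – the cookie name, value, domain and path here are purely illustrative:

<?php
// Expires in 24 hours; valid for all subdomains of a hypothetical
// example.com; the final 0 means non-secure connections may read it
setcookie("user_id", "12345", time()+86400, "/", ".example.com", 0);
?>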

Making a cookie

Now we know what the cookie’s ingredients are, let’s get our hands dirty and bake one. To set a cookie with PHP the setcookie function is used. The format for this is:

setcookie(cookie_name, value, expire time, path, domain, secure flag);

To set the expiration time to 12 hours, the value would be 43200, worked out as follows: 3600 seconds = 1 hour, thus 3600 * 12 hours = 43200. The code in Listing 1 is a simple script that sends a cookie to a browser with the contents “Yum Yum, I love cookies”; the cookie name is “cookie_test” and the expiration time is 12 hours. All browsers have options either to accept cookies automatically or to prompt you before accepting.

Listing 1: Simple code to send a cookie with PHP

<?php
setcookie("cookie_test","Yum Yum, I love cookies",time()+43200,"/");
?>

When testing cookie handling it is always best to set the browser to prompt you before accepting a cookie, as in Figure 1. This way you are absolutely sure that you are getting the cookie whilst testing. When the browser is pointed at the script in Listing 1 and the cookie is loaded, you can see the actual contents and structure of the cookie by selecting cookie details, as in Figure 2. Notice that the (value) contents part has been URL-encoded. When we next read back the cookie, PHP will take care of the URL-decoding for us. The cookie will be stored in your home directory structure inside cookie.txt.

Reading the cookie

Now the cookie has been sent to the browser, we can read it the next time the browser revisits. How do we know which cookie to read – after all, the browser will probably have quite a few cookies stored? Well, for one, you can only read cookies that belong to your domain. Secondly, you may have noticed that the name we gave the cookie was “cookie_test”; this is how we will pick the cookie up. Before we try to read the cookie, it is best to first make sure the cookie is present. With PHP this is accomplished with the isset function, which tests whether the object is defined. If the cookie is defined we display it in the browser; if the cookie is not defined we can throw up a nice error message instead. Listing 2 does just that. Notice the use of braces on both sides of the else part. Figure 3 shows the output of the script to the browser after successfully reading the previous cookie that was sent in Listing 1. As a side note, you can also read cookies by looking at the CGI environment variable HTTP_COOKIE or the PHP array $HTTP_COOKIE_VARS.

Listing 2: Displaying the cookie if it is present

<?php
# show the cookie if it exists
if (isset($cookie_test)) {
    echo "Yum Yum, I've got your cookie, the contents are: $cookie_test";
} else {
    echo "No cookie found...sorry";
}
?>
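If you prefer not to rely on PHP turning the cookie into a global variable, the same check can be written against the cookie array mentioned above – a minimal sketch for PHP 4:

<?php
# read the cookie through the $HTTP_COOKIE_VARS array instead
if (isset($HTTP_COOKIE_VARS["cookie_test"])) {
    echo "Got the cookie via the array: " . $HTTP_COOKIE_VARS["cookie_test"];
}
?>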




Listing 3: Deleting the previously sent cookie

<?php
# deleting a cookie - use only one method!
# delete with the contents of the cookie removed
setcookie("cookie_test","",time()+43200,"/");
# delete with a time that has already expired
setcookie("cookie_test","",time()-43200,"/");
?>

Deleting the cookie

Info
PHP homepage http://www.php.net
Konqueror homepage http://www.konqueror.org
The Unofficial Cookie FAQ http://www.cookiecentral.com/faq

By specifying an expiration time, the cookie will go stale (i.e. become unusable) when that time has been reached. However, you may want to delete a cookie before the expiration has been reached. For example, suppose a user joins a club. To save them having to sign in all the time, you set a cookie that then gets read whenever the user visits the club. If the cookie is present and the cookie content passes your validation, the user bypasses the club sign-in. If the user leaves the club, we might as well take that privilege away, so we need to delete that cookie. Deleting cookies brings us back to actually setting cookies: it is always a good idea to think about a realistic expiration time when initially setting the cookie – this can save you a lot of hassle in maintaining your cookies, and we all like low maintenance, don’t we? To delete a cookie, all you need to do is resend the cookie with the same parameters but excluding the value part. Another way of deleting a cookie is to set it as above but with an expiration time that has already expired.

Listing 4: A simple cookie-based counter script

<?php
# has the cookie been set?
if (isset($counter)) {
    # yes - add one to the counter
    $counter++;
} else {
    # no - initialise the counter
    $counter=0;
}
# either way, set the cookie!
setcookie("counter",$counter,"","/");
echo "Example Cookie and Counter Page ";
echo "Counter:[ $counter ]";
?>


Suppose, for example, that you set a cookie with the expiration set to 24 hours: time()+86400. If, after a couple of hours, you decide to delete the cookie, just replace the plus sign with a minus, like so: time()-86400. I personally prefer this method, as it is a guaranteed cookie-deletion scenario. Listing 3 shows both methods of deleting the cookie sent previously.

Simple cookie-based counter

Cookies can be used for many tasks, so let’s look at how a cookie can be used as a simple counter. The script in Listing 4 uses cookies to count up continuously as the browser page is refreshed, by sending cookies with the accumulated number. Here’s how it works. First a check is made to see if a cookie is present; if it is, the user must have already refreshed or visited the page, so one is added to the variable $counter, using the piece of code $counter++. If a cookie is not present, the user must have just loaded the Web page for the first time, so we set the counter to zero. The next task is to set the cookie. The expiration time is left blank, so the cookie will expire (go stale) when the browser closes down. The cookie is called counter, and its value is the current value of the variable $counter. Finally the browser outputs a message with that value. If the user has just loaded the page, it will show 0; otherwise it will display the current count based on how many refreshes the user has clicked. Notice that nothing is output to the browser before the cookie is set. Before we finish with the PHP examples, here’s one final tip: do not leave any whitespace before the opening “<?php” tag. PHP will treat it as content for the browser and your cookie will not work.
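To illustrate that last tip, here is the classic mistake – note the blank line before the opening tag, which PHP sends to the browser as content, after which the setcookie call typically fails with a “headers already sent” warning:

 
<?php
# WRONG: the blank line above has already been output to the
# browser, so this cookie will never reach it
setcookie("counter",0,"","/");
?>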

Conclusion

Cookies are a great way of saving state when a user visits a Web site. They are used in validation, shopping carts and personal greetings – in fact, if a Web site knows you, you can bet it is using cookies built from information you previously gave in a form. In this month’s article we have shown the basics of cookies – how to set, read and delete them – and demonstrated the purposes of cookies. As you can see, cookies are great – when they’re not being used to bombard us with targeted advertisements, at least.

URL encoding: All data streams sent to the browser are URL-encoded by changing the following:
● All spaces are converted to +
● All special characters are converted to their two-digit hex number preceded by a %, e.g. a quote (") becomes %22
● All key/value pairs are separated by &
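You can watch this encoding happen with PHP’s urlencode function – a one-line sketch:

<?php
// spaces become +, the comma becomes its hex code %2C
echo urlencode("Yum Yum, I love cookies");
// prints: Yum+Yum%2C+I+love+cookies
?>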



Soundcard drivers for Linux

DRIVING TEST

Operating systems should not only be measured by their stability, but also by their level of hardware support. Unfortunately few hardware manufacturers make their own drivers available for Linux. There are ways around this though, as Hagen Hoepfner explains

A common argument against the use of Linux was, and still is, the lack of driver support for the vast range of different hardware components. For this reason, numerous projects have been developed that have eliminated, or aim to eliminate, exactly this problem. When it comes to soundcards, there are essentially three possibilities for convincing the hardware to work with Linux: the OSS/Lite drivers contained in the kernel sources, the ALSA project and the commercially distributed OSS driver from 4Front. To prevent misunderstandings right from the start, it should be mentioned that this article is based on the current versions of ALSA (version 0.5.12a) and OSS (version 3.9.6b). The information on the kernel drivers is based on version 2.4.16 of the kernel. To keep this article from becoming excessively long, we have refrained from discussing each of the supported soundcards explicitly; much more information can be found on the appropriate Web pages.

ALSA

The ALSA project was created by Jaroslav Kysela at the beginning of 1998, and its development has been supported by SuSE since December 1999. Nowadays, all the recent distributions aid the installation of soundcards with their own configuration programs. As a general rule, the distributions rely on the ALSA drivers, as these are released under the GPL or LGPL and are compatible with numerous different soundcards.

Installation

As mentioned above, ALSA drivers are used in most common distributions as the standard soundcard driver. It is obviously impossible to deal with all the distributions here, and as such we have chosen SuSE Linux 7.3 as an illustrative example to describe how ALSA is set up. Other than this YaST2-based solution, there is also the possibility of using alsaconf – ALSA’s internal configuration program. You generally have to have root rights in order to set up hardware. This can be done as follows:

● Log in as a normal user.
● Allow connections to the X server (in a terminal): xhost + localhost

● Acquire root rights in the terminal: su - (then enter the root password at the prompt).
● Redirect the display: export DISPLAY=localhost:0.0 (if you use a shell other than bash, setenv DISPLAY localhost:0.0 will also get you where you want to go).

All programs started from this terminal window are, from now on, executed as root and displayed on the normal user’s screen. Now to install the soundcard:


● Start YaST2 by entering yast2 in the root terminal.
● Click the soundcard configuration in the Hardware submenu (Figure 1).
● The soundcard should now be recognised automatically (Figure 2).
● All you have to do now is repeatedly press Continue.


And that is basically all there is to it. The only thing left to do is check whether the soundcard was set up correctly. To do this, the command:



cat /proc/asound/sndstat

should be entered in the root terminal, whereupon Linux tells us which sound channels were set up. So as not to single out one particular soundcard, we will not change ALSA’s many other settings and functions at this stage. For SBLive cards, for example, it is possible to install and use the SoundFont files that are included on the Windows driver CD. Go ahead and start the YaST2 soundcard configuration program again – you will be surprised how much fine-tuning is possible.


Figure 1: YaST2 hardware configuration





Figure 2: YaST2 soundcard configuration


OSS/Lite

Let’s turn our attention now to Linux drivers that are a little more mature in years – as old as the cute little penguin Tux himself. Those of you who have compiled your own kernel will know that it’s also possible to set up soundcards this way. Although we don’t want to compile our own kernel now, a short look at the kernel drivers is nevertheless quite interesting. Simply enter the command:

ls /lib/modules/2.4.16-4GB/kernel/drivers/sound/

in the root terminal, which is still open after the ALSA installation (2.4.16-4GB is the current kernel number and should be changed if necessary). The files listed are kernel modules that support the use of soundcards. Simply switch the previously configured ALSA driver off and use a kernel module:

● Switch off the ALSA driver (first you must quit all programs that use the soundcard): /etc/init.d/alsasound stop
● Load the suitable kernel module (for the SBLive example): modprobe emu10k1

Was that all? Well no – the SuSE 7.3 distribution used in the test installed only one driver without a problem, and the loaded driver was the ALSA driver, which we mentioned in the previous section. This driver, by the way, is based on the OSS/Lite driver of the same name developed by Creative Labs. To cut a long story short, it should be mentioned that Linus Torvalds recently made an announcement, which was discussed on Pro Linux, essentially stating that the ALSA drivers will completely replace the old OSS/Lite drivers and will thus flow directly into the kernel.
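Assuming an SBLive card, the whole test run boils down to a few commands – lsmod is just the standard way to confirm that the module actually loaded:

/etc/init.d/alsasound stop
modprobe emu10k1
lsmod | grep emu10k1
cat /dev/sndstat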


Open Sound System (OSS)

Why should Linux users be forced to deal with commercial software when there are Free alternatives? This question must be answered individually, as several factors play a role:

● Will a soundcard be supported?
● Which functions of the soundcard are supported?
● Which software is to be used?

ALSA can be configured in such a way that OSS drivers are emulated, which makes the last question irrelevant for the normal user. Tests with special audio software (such as SLAB and ecasound), however, show that the original OSS drivers are superior to the emulated variants. Probably the most decisive advantage is the fact that OSS makes one (e.g. with the SB 128) or several (e.g. with the SB Live) output channels available (Figure 3), in addition to the standard audio channel /dev/dsp0. It is thus possible to bind the KDE sound daemon (aRts) to /dev/dsp1, for example. Programs whose output does not run through aRts (such as RealPlayer) can then be heard without a problem through /dev/dsp0. If that is still not enough for you, you can gain a further eight virtual channels by purchasing a license for the Virtual Mixer.

License model

As already suggested, 4Front’s OSS is a commercial product and not subject to the GPL. The price is based on a standard license, which must be purchased in all cases and costs $20 for Linux. Additional special licenses are necessary for the use of newer PCI cards, special functions of older ISA soundcards, and professional soundcards (refer to Table 1). It is recommended that you download and install the free demo version of OSS before you purchase it. If you are happy with the test version, you can then simply order a license file. The license file is a text file containing a license key that unlocks the appropriate drivers (refer to the section on installation).

Figure 3: Multiple output channels



Table 1: OSS licenses

PCI soundcards: Avance Logic PCI (AVANCE), Aureal Vortex (VORTEX), C-Media CMI873x (CMEDIA), Conexant Riptide (RIPTIDE), Cirrus Logic CS428x/CS46xx (CRYSTAL), ESS Maestro (MAESTRO), ESS Solo-1 (SOLO), Forte Media (FMEDIA), Intel8xx/SiS7012 (INTELPCI), NeoMagic NM2200 (NEOMAGIC), Sound Blaster SBPCI128/Ensoniq AudioPCI (APCI), Sound Blaster Live!/Audigy (SBLIVE), S3 SonicVibes (S3VIBES), Trident 4DWave/SiS7018 (TRIDENT), VIA97/Geode (VIA97), Yamaha DS-XG (YMH)

ISA soundcards: All ISA BUS Soundcards (ISA), SB-AWE64 Wave Table (AWE), Dream SAM9704 (DREAM), Ensoniq-VIVO Wave Table (VIVO)

Professional soundcards: Virtual Mixer (MIX), Input Multiplexer (IMUX), Envy24/MIDIMan (ENVY24), LynxONE (LYNXONE), RME Digi32/Digi96 (DIGI32), Sonorus Studi/o (STUDIO)

The licenses can be ordered either directly through the 4Front Web site or from SuSE.

Installation

As previously mentioned, it’s a good idea to first download and install the demo version of OSS from the Web server before you go ahead and buy the appropriate license(s). The installation, for which root rights are again necessary, is executed as follows:

Create a source directory: mkdir ~/oss. Download the drivers into the source directory. Change to the source directory: cd ~/oss . Unpack the archive, e.g.: tar xvfz osslinux396b2x.tar.gz. ● Start the installation: ./oss-install. After accepting the license conditions, the installation program tries to install the drivers that are necessary for the current kernel version. If no pre-compiled drivers are found, it will try to compile them. To do this, the kernel sources must be installed. Note: the version of these kernel sources must precisely match that of the installed kernel. There are some cases (some distributions) where the two do not match. An example of this is my Mandrake distribution, which has a current kernel of the version number linux2.4.17-10mdk and kernel sources of the version number linux-2.4.17-15mdk. As we don’t want to have to compile a kernel, we can use a little trick. We can simply edit the file /usr/src/linux/include/linux/version.h and enter the version number of the current kernel there. And hey presto – the OSS installation routine is content. If you have bought a license from OSS, the appropriate license.dat file is simply copied into the OSS directory, which was created in the first installation, and then activated in the oss-install

The author VIA97/Geode (VIA97) Yamaha DS-XG (YMH)

Hagen Hoepfner is a member of the technical staff at the Institute for Technical and Operational Information Systems. In his spare time he is an ardent father and plays the guitar in the rock band “Gute Frage” (“Good Question”) (http://www.gutefrage.de).

ISA soundcards All ISA BUS Soundcards (ISA) SB-AWE64 Wave Table (AWE) Dream SAM9704 (DREAM) Ensoniq-VIVO Wave Table (VIVO) Professional soundcards Virtual Mixer (MIX) Input Multiplexer (IMUX) Envy24/MIDIMan (ENVY24) LynxONE (LYNXONE) RME Digi32/Digi96 (DIGI32) Sonorus Studi/o (STUDIO)

program through the appropriate menu point. By the way, you can display the existing (or set) sound channels in a similar way to ALSA through cat/dev/sndstat.

Result In order to put this summary into perspective, we need to realise that on the one hand it is only a matter of time before OSS/Lite drivers disappear from the scene, while on the other ALSA continues to grow in importance. Both of these drivers are perfectly satisfactory for normal desktop use. However, if you need to use special software, there is no way around using OSS at present. The developers of the ALSA drivers are aiming at OSS compatibility, and as such only time will tell whether the emulation can completely replace the commercial drivers. The bottom line is that nothing stands in your way if you want to install a soundcard under Linux – as long as you use one of the many supported cards.

Info
The ALSA soundcard matrix: http://www.alsa-project.org/~goemon/
Soundcards supported by OSS: http://www.opensound.com/osshw.html
The kernel sound module: http://www.linux.uni-bayreuth.de/howtos/html/DESound-HOWTO-3.html
Homepage of the ALSA project: http://www.alsa-project.org
ALSA and SuSE: http://www.alsa-project.org/announce/profi.php3
Installation of ALSA, independent of distribution: http://www.alsa-project.org/~valentyn/Alsa-sound-mini-HOWTO.html
Pro Linux discussion – OSS/Lite versus ALSA: http://www.prolinux.de/news/2002/3990.html
4Front’s homepage: http://www.opensound.com
Information about OSS licences: http://www.opensound.com/license.html
OSS download page: http://www.opensound.com/download.cgi




Accessorise your KDE desktop

MORE TOOLS FOR YOUR BELT

Windows collects several useful applications together under the heading of Accessories. Anja M Wagner explains, in brief workshops, where you can find and use these and other odds and ends in KDE

KDE applications (here we are referring to KDE 2.2 under SuSE Linux 7.3) are frequently very similar to those of Windows, which makes migration somewhat easier. Some, on the other hand, are very different.


Viewing images with KView


KView is an image viewer in a similar vein to Imaging under Windows. You can start it either via the start menu entry Multimedia/Graphics/Image viewer or by pressing Alt+F2 and entering kview. In SuSE the menu entry is Graphics/Graphics/Image Viewer. After opening KView you will be met by a bleak desktop.


Several image viewers on the desktop



The desktop of KView looks a bit bleak

First load an image via File/Open. In the dialog window, search for the image in your system directory. If you want to load several images in a folder at the same time, hold down the Ctrl key while marking each with the mouse. The KView window opens, adapted to the size of the loaded image; by “touching” it with the mouse, the window can be dragged bigger or smaller. A number of image viewers can also be opened at the same time. Load another image in a new window via File/Open. If you have already loaded various images, you can also select the option File/Open Recent and then pick a previously viewed file from the selection menu. As a third option, select File/New window – KView then starts another instance with a blank desktop. The displayed size can be altered via the View option in the toolbar. Here you will find the steps Half as big, Normal size, Twice as big and Full image mode. The symbol buttons with the magnifying glass allow you to zoom into or out of the image. The default is 100 per cent of the original size – if you want to work with a different zoom factor, select View/Zoom factor and enter the new value in the text line. Zooms with a high factor can take some time, depending on system resources.


KView stores your recently opened files to speed up access



As already mentioned, KView can load several images simultaneously. The tool saves the paths of the loaded images in a list: this list is displayed via Go to/Image list in the menu bar. The Sort button next to the image list arranges the image files in the usual alphanumeric order; the Random button cancels the sorting again. Once an image list has been created it can be saved for future use: click on the Save list button and store the list in your home directory. It is advisable to create a separate folder for this and give the list a telling name, so that you can find it again later. The Load list button lets you load a previously stored image list and, for example, display it in the form of a slideshow. This only works, though, if the files are still in their previous locations: KView saves only the paths of the files, not the image files themselves. If the image list contains any pictures that are stored on a diskette, the floppy must be in the mounted drive. A slideshow displays the images of a list in the sequence stored therein. By default the image changes every five seconds. Slideshows are started via Go to/Slide show on/off or the S key; pressing the S key again ends the show. With KView you can not only view images, but also change and then save them. In the menu list select the Image option. There are a variety of filters available. The menu item Intensity allows you to edit the brightness of an image and to correct the gamma value. The brightness is set by default at 100 per cent; enter the desired value in the text line. If you’ve reduced the brightness to something like 10 per cent, you cannot restore the original condition of the image by re-entering the brightness value of 100 per cent – you have to reload the original image. The gamma value, which must be greater than one, corrects the mid-tones or grey shades of an image. The default value is one. Filter/Grey shades converts a colour image into a black and white image. The filter option Smoothed reduces the contrast between adjacent pixels and makes the image look smoother.

Tux – a credit to any desktop

The vital statistics on each image

An image can be rotated by 90, 180 and 270 degrees and reflected vertically or horizontally. The rotations are performed clockwise. Any image loaded in KView can be used as the background for the desktop: click on Image/On the desktop and select Tiles, Maximum size or Maximum view. The background image then appears on all installed desktops. You can alter or delete it via the KDE control centre: from the start menu select Control centre/Appearance/Background, then on the Background image tab either select no background image or pick another. The menu item Image/Info is practical: there you receive information about the loaded image such as its size in pixels and bytes, colour depth and the date the image was last changed. As is usual with KDE applications, the appearance and desktop can be adapted to your own individual preferences. In the Settings menu you can also display the toolbar and status bar. The Configure KView option allows you to change the background colour of the tool and replace the bleak black with a jollier colour. The interval for the slide change can be altered too.

Memos on the desktop

Sticking yellow Post-It notes all over your monitor is one way to remind yourself of must-do tasks or computer commands. A much more elegant alternative – and one guaranteed not to blow away in a freak gust of wind – is a Post-It note directly on the desktop. KDE makes this possible with KNotes. Start the program via the start menu item Office programs/Organisation/Memos or via Alt+F2 and entering knotes. A bright yellow memo appears on the desktop – click in it and type your memo.

It’s better to stick a memo on the desktop than on your monitor





You can hide the memo via the X at top right; with a click on the KNotes symbol in the panel the memo reappears on the desktop – right on the spot to which you last dragged it. By clicking in the header bar you can activate a triangular area in the lower right-hand corner of the memo. Guide the mouse cursor over this field and it turns into a little double arrow – here you can change the size of the memo by dragging with the mouse. You can make further settings and changes to KNotes by right-clicking in the header bar and thus opening the pop-up menu. The first option in the pop-up menu, Insert date, writes the time and date in the memo. Click on Send to send the memo via KMail; all you need do is enter the recipient and subject. If a memo is not to be sent via KMail, change the entry in the Actions box – enter something like sylpheed or another mail client. The tool simply names each new memo “KNote” with a serial number; select Rename from the pop-up menu to enter a more informative name. Whilst an on-screen memo might be a great reminder, it’s far less likely to jog your memory if it’s covered up by a large window. To prevent cover-ups, click Always in the foreground on the pop-up menu. Regardless of which window is now opened on the desktop, the yellow memo always “sticks” in front of it. You can cancel this by again selecting the Always in the foreground option. If you use several desktops, you can define on which desktops the memo is to appear via On desktop. The Memo settings offer you the option of changing the background and text colours, font type and size.

English dictionary

A practical German-English dictionary is at your disposal with ding. This is not an online dictionary – it is resident on your system and, as such, may need to be installed with YaST2. Once installed, start it via Alt+F2 and enter ding.

The desktop of ding, a German-English dictionary

The dictionary, with some 130,000 entries, automatically searches in both directions, that is in both the German and the English entries. Type the word you want in the Search word field and click on the Search button. Apart from the word itself, compound phrases in which it occurs are also displayed.


The search for the term “computer” produces over 100 entries

The search for “computer”, for example, displays 134 results. The list displayed can be saved: right-click in the list, keep the button held down and select the Save option, then store the list – for example in your home directory – to edit or print out later. It will be saved as a text file (.txt).

The search parameters of ding can be altered via the Search parameter item in the menu bar. Instead of looking for whole words, ding can perform a partial search. This makes sense if the complete spelling of a term is unknown, or if all words containing a given fragment are to be sought. After entering “hyper”, ding finds precisely one result when it is looking for entire words, but the partial search comes back with 31 entries. The tool can also be made case sensitive or case insensitive. Regular expressions can be used too, such as “nu(ss|ß)” to search for both “nuss” and “nuß”. The search parameter settings Simple search and Reg. expressions appear to have no effect: regular expressions are always evaluated.

With the Settings button in the menu bar you can change both the desktop and the appearance of the tool in the usual KDE manner. Especially helpful are two functions of ding which are activated via Settings/Search methods, or faster via the selection button to the right of the Search word field. With the Spelling option, ding checks the word you have typed in and comes up with suggestions; Spell check activates the check for English words.

Even with some 130,000 entries in the dictionary, sooner or later you will come across omissions – but ding has the ability to learn. As root, use a text editor to open the file /usr/X11R6/lib/ding/ger-eng.txt and enter missing words, explanations and transcriptions manually.
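Since ding uses ordinary regular expression syntax, you can experiment with the same pattern outside the program. A minimal sketch in Perl – the test words here are invented:

for my $word ("nuss", "nuß", "nut") {
    print "$word matches\n" if $word =~ /nu(ss|ß)/;   # alternation: "ss" or "ß"
}

Only “nuss” and “nuß” are reported; the alternation (ss|ß) matches either spelling.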

Address management with kab

First the good news: all addresses which you enter in the KDE address book kab can also be found in the address book of KMail and vice versa. The bad news



is that you cannot import any addresses from Outlook or Outlook Express. Start the program via the start menu item Office programs/Database/Address book, or enter kab in the fast starter. On first start, the local settings are created in /home/username/.kde2/share/apps/kab.config and the standard address book is saved in your home directory under .kde2/share/apps/kab/addressbook.kab. The dot before the directory kde2 indicates that it is a hidden directory. You can make a new or first entry in the address book via Edit/Add entry. The input mask is split into six tabs, which offer a wealth of options for addresses, telephone numbers, email and also comments.

The input options of the kab address book split into six tabs

On the Addresses tab you’ll find the field entitled Address type. In the drop-down menu there are at first no address types to choose from. If you want to define one or more types – which makes sense – select Edit/Install this file from the menu. On the Address types tab, click on the symbol button for a new entry and enter the designation of the type, for example Private, Work, Branch or Holiday home, in the now-activated text line. When creating a new entry, click on the Add button, select an installed type and enter the details of the address. Now change the display via View/Business card, and your details appear according to the address type selected.

With the Business card view you can see all details at a glance

You thus have rapid and clear access to the various addresses in an entry. At the bottom left of the Business card appear the email address and homepage of your contact, if you have entered these on the Person tab. With a click on the email address, a KMail window opens in which, handily, the address of the receiver is already inserted. On the last tab, entitled (User fields), there are four text boxes available to you for additional entries. These fields are initially titled only User field 1-4; click again on Edit/Install this file and give more suitable names to the tab and the user fields. Apart from the view for creating or editing a contact, kab offers another list with all entries. Click on View/Display list of entries for navigation. The names of your contacts will appear in alphabetical order in the left-hand window. Click on a name and then select, via the menu item View, whether the business card or the input mask of the contact should appear on the right.


The list view sorts contacts according to the “formatted name”

Adding addresses by types gives a better overview

You can leave the list view by clicking again on View/Display list of entries for navigation. If the list of contacts is already very long, the fastest way to find an entry is via the search function: select Edit/Search entries. The selection list for the search parameters is long: you can search on practically any field – from name via birthday to comments. The search function can distinguish between upper and lower case text and, above all, supports wildcards: “*” matches any number of characters and “?” matches exactly one.



REVIEWS

OpenOffice.org 1.0.0

WHERE DO YOU WANT TO BE TOMORROW?

At the time of writing OpenOffice.org 1.0.0 had just been released. Richard Ibbotson takes a look at the project’s history and what the first full version has to offer

When the latest version of OpenOffice.org appeared on the Net on 2 May 2002, the OpenOffice servers suffered what has become known as the Slashdot effect. Some of the mirrors were working – many weren’t – but they were running at a snail’s pace due to the massive demand for the new office suite. On the following day, the message “internal server error” was seen on thousands of computers around the world. There is such an amazing demand for this office suite that the OpenOffice infrastructure, and that of the whole Internet, could not keep up with the download requests. This once again reinforces the fact that Open Source and Free software is very much in demand, and many of the world’s economies are in need of such well-finished products. The OpenOffice site hosts a project mission statement which highlights its goal: “To create, as a community, the leading international office suite that will run on all major platforms and provide access to all functionality and data through open-component based APIs and an XML-based file format.” Even at this early stage it is turning heads, and much debate is taking place amongst the international GNU/Linux community about issues related to this office suite and possible future developments.

Changing text options in the word processor


History

Where did the OpenOffice project come from? It didn’t just spring up out of the ground in the way that so many things do. Back in the 1990s a German company called Star Division authored an office suite called StarOffice. It was a bit old-fashioned and some people didn’t like it, but it did work and it could be used to produce some reasonably good finished documents. The concept of the free office suite had been born of a mindset which is not normally found outside the borders of Europe.

In the summer of 1999 Sun Microsystems, over in California, came to the conclusion that it was high time that someone gave Microsoft some competition on the office suite front. It was at this time that the US Department of Justice was putting together the antitrust case against Microsoft, so it was a fashionable time to offer an alternative to the MS office suite. Sun saw that StarOffice could be improved with its help and support. StarOffice 5.2 was thus produced and distributed into every corner of the globe. For a long time all you had to do was load the Sun Microsystems Web site into your browser and order a free CD, which Sun would send to you at no cost. On arrival you could use the software contained on the CD with several different versions of Unix as well as GNU/Linux and MS Windows.

In October 2000 Sun Microsystems provided open access to the StarOffice source code, APIs and XML-based file formats in order to promote growth and innovation in the field of XML. This Open Source project attracted a global community of developers, centred around http://www.openoffice.org. Sun continued to have close links to the OpenOffice project, and itself contributed code. Future versions of StarOffice, including the 6.0 release, will be built using the OpenOffice APIs and file formats. The source code is written in C++ and produces language-neutral and scriptable functionality, including the Java APIs. This technology introduces the next-stage architecture, enabling users to access the suite



as separate applications rather than one large piece of uncontrollable bloatware. Other features are also present, which include the XML-based file formats. There will be a commercial version of StarOffice and a separate OpenOffice project. There are probably a lot of people out there who are wondering about licensing issues: OpenOffice uses a dual-licensing scheme for source-code contributions, namely the GNU Lesser General Public Licence and the Sun Industry Standards Source Licence. There is a definite road map for the development cycle, which is being closely followed by all parties involved. There are more than twenty different public projects involved with OpenOffice, all of which are broken down into subprojects – each one with a specific task or goal.

What makes OpenOffice so popular?

Maybe it’s because it’s a feature-rich, full-blown office suite that does all or most of the things that MS Office can do, and a few other things besides. The full suite was tested in several small companies around the Sheffield area, where a few office secretaries were asked to use the software without training. The result was that they all said they found it hard to tell the difference between OpenOffice and the Microsoft equivalent. Some of them even took it home, and they now use it out of working hours for their own documents under Windows 98, Windows 2000 and Windows XP.

Slimmer and quicker

In the original StarOffice the user was presented with a heavyweight GUI, which was extremely sluggish. OpenOffice works in a different way. You ask for a word processor by selecting File/New/Text Document, and what you see on the screen a few seconds later is just the word processor that you asked for. You can do the same for spreadsheets or the OpenOffice drawing package – which is excellent – or, if you want to produce a completely and truly cross-platform PowerPoint presentation, you can produce it on a GNU/Linux workstation or laptop or an MS Windows workstation.

After producing an MS Word document on your GNU/Linux computer you can print to file as a PostScript document and then convert that into PDF format or TeX using something like ps2pdf or TeXmacs, or even into an HTML document for Web pages. The possibilities are endless, and you can keep your MS Word document handy on your computer just in case you need to email it to someone who is still bogged down in the world of proprietary software.
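As a hedged illustration of that round trip from a shell prompt – the file names here are invented:

ps2pdf report.ps report.pdf

The resulting PDF can then be passed on to readers who have no office suite installed at all.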

Drawing functions

If you do need or want a database then it’s much easier not to be tied to Access: you can use a GNU/Linux-based or MS Windows-based database such as MySQL or Oracle, both of which are much more stable.

If you want support for OpenOffice then there is a great deal of help available from the mailing lists – see the Info panel for the relevant Web page. If you do have any problems installing or configuring OpenOffice then it’s best to consult this online treasure trove of helpful folk who know what to do next. You might also find it useful to know that there is an enormous online help manual, which you can get into by clicking on Help/Contents. What comes up on the screen next makes other proprietary offerings look a bit tame and out of date. Each part of the OpenOffice suite has its own help file, which is extremely comprehensive, to say the least. So, if you can’t quite understand what to click on, try clicking on Help first and that should sort you out.

To sum up, there’s no real reason for paying for proprietary software at exorbitant prices, and with such peculiar licensing schemes, when you can get hold of an application like OpenOffice. Whether it’s for home use or that all-important corporate presentation, there’s not much point in using anything else. This review of OpenOffice 1.0.0 was written using OpenOffice 1.0.0. At no time did the office suite show any signs of instability, nor did the computer have to work too hard to produce a finished document. The hardware was exactly the same as that which is available in most offices around the world. OpenOffice would seem to be everything that its authors claim it is. To get hold of your copy, have a look at the Info panel for a Web page to download from.

Info
OpenOffice.org Web site: http://www.openoffice.org
OpenOffice download: http://www.openoffice.org/dev_docs/source/1.0.0/index.html
OpenOffice documentation: http://www.openoffice.org/documentation.html
OpenOffice projects: http://projects.openoffice.org/index.html
Ask questions on the mailing lists: http://www.openoffice.org/mail_list.html
TeXmacs: http://www.texmacs.org

OpenOffice
Supplier: OpenOffice.org
Web: http://www.openoffice.org
For: Big improvement on StarOffice 5.2
Against: Database support not ideal





LifeKeeper

REAL PROBLEMS REAL SOLUTIONS

LifeKeeper from SteelEye is a system for maintaining high availability on clustered systems. We take a look at what features it brings to the world of Enterprise clients

Scalable by adding nodes

In an enterprise-scale environment time is money. When your business depends on e-commerce, downtime hits your profits hard, so high availability is a goal worth striving for. Most if not all enterprise environments run a mixture of operating systems, so cross-platform compatibility is also desirable. Without it, the support staff all have to learn new skills with differing software, and that can lead to disputes over apportionment of blame if and when issues arise.

LifeKeeper’s development started at NCR, but the product was bought by SteelEye at the end of 1999. It was ported to Linux and certified by many hardware developers during 2000. By forming alliances with vendors such as Compaq and Intel, LifeKeeper can exploit its reliability with more corporate clients and deliver complementary services and products. The core of the LifeKeeper product is to provide a reliable clustering solution across an Intranet, Extranet or Internet. Supported operating platforms are Linux, Solaris, and MS NT and 2000. For Linux clusters, the low-cost Open Source solution gives mission-critical resilience to critical applications.

A right click on a resource gives a pop-up menu


In operation

The system monitors the applications and can maintain the connectivity of clients to provide data access that is uninterrupted over the network. This is done by monitoring multiple LAN heartbeats: by sending redundant signals between server nodes to determine system and application health, LifeKeeper confirms a system’s status before taking action. By being proactive, the early warnings enable quicker fixes and in turn stop full and false failovers to other servers while the hardware is still active. By using failover, where applications are switched to run on different servers, the risk of a single point of failure is minimised, and application and system recovery can be carried out without loss of operations.

The system also gains in its total cost of ownership by using an active-active server configuration. Typical fault resilience relies on spare servers to take over the application in the event of failure. With failover the application is moved to other servers, which do not have to be application-specific; thus the number of spare servers can be reduced, lessening the total cost. When a server fails, LifeKeeper seamlessly and transparently moves the application. This means the users are unaware and so productivity is maintained. Fewer calls to the IT department means everyone is happier and can get on with their jobs.

By using certified hardware the clusters can be scaled up by simply adding other nodes. LifeKeeper supports this scalability at the application level. It also supports multidirectional configurations, where applications can be spread and failed over to different servers.



Uptime during maintenance and upgrades

Proactive monitoring

LifeKeeper enables continuous operations during planned downtime for maintenance or upgrades, as well as in the event of a system failure or if an application ceases to respond. The fault-resilient capabilities of LifeKeeper can be leveraged to facilitate system or application upgrades; with LifeKeeper, the amount of downtime required for common maintenance tasks and upgrades is significantly reduced or eliminated. LifeKeeper is available and certified for the Red Hat, Caldera, TurboLinux and SuSE distributions.

Data storage is kept on shared disk arrays and is thus separate from the application servers. This gives equal access regardless of the application server being used. Data integrity is maintained by locking the storage drive so that only one application is allowed access at any one time. LifeKeeper for Linux provides for an N+1 configuration and supports up to two nodes per shared SCSI bus. This allows shared SCSI disk-based application recovery between two nodes within the cluster that are connected to the same shared disk. In this configuration, one server, in an active-active role, provides backup for failovers from any of the other nodes in the cluster. LifeKeeper for Linux allows cascading failover for as many as 32 active nodes, to ensure continuous client access in the event of system or application failure. Starting LifeKeeper is simple, with the command:

$LKROOT/bin/lkstart

Application Recovery Kits

SteelEye offers LifeKeeper Application Recovery Kits for packaged software, including databases, Web servers and application servers. These Application Recovery Kits include tools and utilities that enable LifeKeeper to manage and control a specific application. When an Application Recovery Kit is installed for a specific application, LifeKeeper is able to monitor the health of the application and automatically recover the application if it fails. SteelEye also provides an Application Recovery Software Developer’s Kit (SDK) that supports the development of custom Application Recovery Kits. The Application Recovery SDK offers a powerful framework for developing customised recovery routines for proprietary applications as well as commercial application servers. Using SteelEye’s Application Recovery SDK, special recovery routines can be defined by assembling straightforward application recovery scripts.

Cascading failover

Application Recovery Kits
● Apache Web Server
● Apache/SSL (secureweb)
● Application with Disk Partition
● Application with Filesystem
● DB2 WE/EE/EEE 7.x
● Filesystem
● Informix Dynamic Server 9.2
● IP Failover
● IP Local NIC Recovery
● MySQL 3.23
● NAS Recovery
● NFS Server
● Oracle 8.05 RDBMS
● Oracle8i RDBMS
● Oracle9i RDBMS
● Sendmail 8.9/SAMS
● Print Services
● SAMBA (File Share) (Planned)
● SAP R/3 (Planned)
● Sybase 10.0.2, 11.0.1 (Planned)
● Lotus Domino 4.0 (Planned)
● PostgreSQL (Planned)




Linux (for PlayStation 2)

CONSOLE YOURSELF

Is Sony taking a big gamble in releasing Linux (for PlayStation 2)? Giving over a software development kit to a group of hackers seems like a strange thing to do, after all. Colin Murphy finds out whether Sony’s gamble just might pay off

By the time you read this, excited Linux developers and users should be able to get their hands on this special piece of kit from Sony, which will enable you to use a PlayStation 2 console as a full, Linux-powered desktop computer. That’s not its primary goal, however; the real power comes from being able to develop your own games and applications that will run on Linux (for PlayStation 2).

What do you get

Included with the Linux (for PlayStation 2) kit is a 40Gb internal hard drive, a 10/100 Base-T Ethernet network adaptor and two sets of discs. The first contains the proprietary runtime environment, as well as some very comprehensive system manuals. The second disc contains a special Linux distribution and a wide selection of software packages. Since the PS2 uses a MIPS processor, most of the standard packages that you would find in a Mandrake or SuSE distribution won’t work ‘out of the box’. This is not too great a hurdle, because you are provided with all the software you need to recompile anything you have the source code for.

Use and abuse

The fact that you’re going to be called upon to compile software yourself is a very good indication of exactly who Linux (for PlayStation 2) is aimed at: the computer-literate enthusiast. You will need to have a PlayStation 2 already, and a monitor. The Linux kit also has a mouse and keyboard thrown in for good measure. There is one proviso with the monitor: it must be capable of accepting a sync-on-green signal, and not all monitors do. There is a list of known supported monitors available for consultation at the Linux (for PlayStation 2) community Web site, as well as a utility that should allow you to check for suitability. Once you’ve used a monitor to install Linux (for PlayStation 2), you then have the option of using a TV as a display. None of the commercial games, at the moment, provide a VESA display output mode, so even if you do have a suitable monitor don’t expect to play your current games on it.

Who would have guessed it was a PlayStation 2?


Community support

Sony has managed to create a bridge between itself and this seemingly untapped community of computer hackers and enthusiasts. The whole project seems to have been brought about by no more than consumer pressure: enough people said what an original idea it would be to release an Open development system at an attractive and affordable price point. It’s turning out to be quite a big community as well: over 9,000 people have registered an interest in Linux (for PlayStation 2) on the SCEE Web site. This isn’t the first time that Sony has opened up to small-scale developers. Net Yaroze allowed PlayStation users to develop their own games, though it relied on an additional PC to do the development work. Despite the relatively high price – almost twice the price of Linux (for PlayStation 2) – a strong community of developers formed around it, which exchanged tips, tricks and, most interestingly for an Open Source community, samples of code. It should not come as too much of a surprise, then, that the next generation of a PlayStation ‘hobbyist’ development kit should be built around Linux.



What you get
The Linux (for PlayStation 2) hardware kit comprises:
● 40Gb hard drive
● 10/100Mb network adaptor
● Monitor adaptor cable
● USB keyboard and mouse
● Two DVDs with Linux software and manuals

The first DVD contains the proprietary Sony code, documentation and drivers. The second DVD has a collection of much-needed Linux utilities, like the gcc compiler, but compiled to run on the MIPS processor, which lies at the heart of the PS2.

The kit laid out

That is especially true when you understand that the operating system had already played an important role as a platform on which the libraries and compiler for PlayStation 2 development are released. Linux (for PlayStation 2) also goes much further with the degree of documentation provided and the sets of libraries available, which should give programmers pretty much unfettered access to the hardware.

Limitations

Sony is ever cautious of piracy, so the use of the DVD drive under Linux (for PlayStation 2) has some serious limitations: it will only be able to read official PlayStation discs; CD-Rs and DVD-R discs won’t work in the drive. That said, the USB ports on the unit are standard, and some USB CD-ROM drives and CD writers are supported under Linux, so they could be used with the Linux kit. With the increase in demand, it’s likely that more USB devices will emerge in the near future. From a programming point of view, there are graphics libraries provided: libSDL (a fast 2D graphics library), mesa, and ps2gl (a simplified GL clone, which will make use of the PlayStation 2’s hardware). With enough effort, it would be possible to create graphics comparable with those of commercial games. The PlayStation 2 system manuals (provided) include detailed specifications on the vector processing units VU0 and VU1, the DMA Controller, the Emotion Engine CPU, the Graphics Synthesizer (GS) and the IPU (MPEG decode assist). Software libraries, tools, device drivers, source code and examples are provided to show how to access this hardware. The PlayStation 2 also contains a subsystem for operating peripherals and audio, including the SPU2 (Sound Processing Unit), the IOP, the HDD, the DVD drive, controllers, memory cards, USB and other peripherals. The hardware specification for these units is not disclosed.

Access to the I/O devices is only available by making calls to a runtime engine, which must be loaded from the distribution DVD even before the Linux kernel is booted. Although this is another anti-piracy measure, it also serves as another string to the hacker community’s bow: hackers now see it as their chance to boot their own software to achieve maximum performance. You should not confuse the hard drive and network adaptor provided here with those that Sony will soon offer to its consumer games market, which won’t be available until August at the earliest. You will not be able to play commercial games under Linux (for PlayStation 2), so access to these devices will not be available. The community aspect is obvious when you take a look at the Linux (for PlayStation 2) Web site. Here you will find the all-important FAQ, which will help you decide if the kit is for you, as well as project areas so you can see what is being developed, offering you the chance to join in. Odd though it may seem, Linux (for PlayStation 2) came about as a direct result of public demand. It is hard to see Sony making a fortune out of this line of development; in fact the hardware alone seems very reasonably priced. It is good to see a company as large as Sony taking the time to look at the bigger picture and at what some of its consumers want. Maybe Sony does have a streak of altruism running through it. Maybe we just underestimate the true weight of ‘pester power’. We do know that Linux (for PlayStation 2) is causing a great deal of excitement, and only something good can come of that.

Acknowledgement Thanks to Sarah Ewe, Linux Engineer at SCEE, for her assistance.

Linux (for PlayStation 2)
Supplier: Sony
Price: US$199 or 249 euros
Web: http://playstation2-linux.com/
For: Create applications for the PS2
Against: Monitor issues, restrictive anti-piracy measures





HERDING CATS

J Hank Rainwater’s Herding Cats is a management book with a twist, aimed specifically at programmers taking on a management role for the first time. The book deals with the difficulties of making a number of independently minded programmers work together as a team – a task that has been likened to herding cats. Keeping to the cat theme, Rainwater breaks different personality types down into breeds, highlighting the good and bad aspects of each and how to get them working together. Various management pitfalls are illustrated with real-life examples taken from the author’s own experience. A couple of chapters are devoted to planning and organisation, with details of software that can be used

to make your life easier. Much of the content is similar to that found in general management textbooks, but it is slanted towards our programming industry and takes into consideration the fact that programmers are unique and individualistic, and that they cannot be led in quite the same way that workers in some other fields can be. Rainwater illustrates his points with quotations drawn from an esoteric range of sources, from Star Wars to T S Eliot, and includes various classics of cyber literature, which reinforces the fact that there is nothing new and any problems that you might encounter have already been met before. This is a very readable textbook and

touches on many aspects of management. Once you’ve read it you should feel better equipped to cope with all those cats you have to herd. If you happen to hate cats, keep an open mind and just accept the phrase as the amusing analogy it is intended to be.

Author: J Hank Rainwater
Publisher: Apress
Price: £25
ISBN: 1-59059-017-1

THE HACKER DIARIES

This is an entertaining account of modern teenage hackers. It barely touches on most of the older famous cases, dealing instead with more recent instances which many of us may recall from news items, bringing the book right up to date with events following 11 September 2001 and the US’s increased paranoia about cyber terrorism. The book stresses the point that teenage hackers are normal in most ways and do not conform to stereotypes. It also goes out of its way to demonstrate that the majority of hackers can be converted to the light side of the force and can even end up with jobs in the industry, protecting the very things they once tried to break into. Most of the cases covered do seem to be “white hat” hackers, and even those who did commit damage to the sites they hacked, for instance World of Hell, are shown as running down their


activities and maybe even seeing the error of their ways. At the end there is a detailed chronology and a list of useful Web sites. It is a shame that it did not include a bibliography of further reading, guiding the reader to some of the earlier classics in this field, such as Steven Levy’s Hackers and Bruce Sterling’s The Hacker Crackdown. The Hacker Diaries is a worthy addition to the genre, bringing everything up to date – but do all the script kiddies have to come over as being nice, if slightly maladjusted, young people?

Author: Dan Verton
Publisher: McGraw Hill/Osborne
Price: £18.99
ISBN: 0-07-222364-2


PROGRAMMING

Perl: Part 3

THINKING IN LINE NOISE

The sample application given here will show how Perl earned its monikers: the duct tape of the Internet, and the Swiss army chainsaw. Hopefully it will also illustrate how you can replace automations currently done with a combination of shell, sed and awk with a small amount of Perl, to give faster, more coherent solutions. In addition to reinforcing old ground, the example presented here also touches upon some aspects of Perl that we have not yet

covered, such as regular expressions. Rather than introduce each portion of the language in a piecemeal, month-by-month fashion, this approach enables you to start learning the language the right way: by using it. Over the coming months we will then focus on more in-depth coverage of the new topics we introduce in this way. The short application presented here as Example 1 is a glue script: a Perl script that uses a standard command to do a lot of its work for it, maintaining simplicity in the Perl code while bringing additional functionality to the command.

Having introduced the basic elements of Perl over the past two issues, Dean Wilson and Frank Booth now explain how to combine many of the elements shown previously into a complete program that you can run and tinker with

Example 1: count_logins.pl

01 # Sample script to count the number of logins a user has.
02 # Uses the 'who' command to get the user details.
03
04 # Location of the external binary.
05 my $whobin = "/usr/bin/who";
06
07 # Separates the command from its path
08 # and assigns the command name in $cmd.
09 my $cmd = (split("/", $whobin))[-1];
10
11 # Sanity check to ensure external dependencies are met.
12 die "No $cmd command found at '$whobin'\n" unless -e $whobin;
13 die "The $cmd command at '$whobin' is not executable\n" unless -x $whobin;
14
15 my %usertally = getusers($whobin);
16
17 while (my ($user, $numlogins) = each %usertally) {
18     print "$user has $numlogins login", $numlogins > 1 ? "s" : "", "\n";
19 }
20
21 sub getusers {
22     my $whobin = shift;
23     my %user;
24
25     # Open a pipe to read response in from the 'who' command
26     open(WHO, "$whobin |") || die "Failed to open who: $!\n";
27
28     # loop over the output from who, assigning the line to $_
29     while (<WHO>) {
30         next if /^\s*$/;   # Skip all empty lines
31         chomp;
32         m/(\w+)\s/;
33         $user{$1}++;
34     }
35
36     close WHO;
37     return %user;
38 }
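Running the script needs nothing more than perl itself; the output below is a hedged illustration with invented user names:

$ perl count_logins.pl
root has 1 login
dwilson has 3 logins

Note how the ternary operator on line 18 keeps ‘login’ singular for a single session.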




An enhanced version of this command, which runs continually and notifies you of any changes in the number of logins on the system, can be found in the /Perl directory on this month’s CD under the name whoson.pl. This version has more functionality and makes a good demonstration of how to apply some of the theory we discuss in this article.

The count_logins.pl script, shown in Example 1, uses a surprising variety of Perl functionality considering its small size. The script starts with a short description of its intended functionality. Depending on whether you are coding for your own benefit or for a more public audience, you could add more details, such as a created and last modified date, an email address for author contact and any other short details that a user may need. For more comprehensive documentation (and you do write comprehensive documentation, don’t you?) you may be better served looking at POD, a subject we will cover in a future column.

Line 5 starts the actual code: we assign the full path of the external who command to $whobin. This is done both to avoid any path problems we may encounter if we assume the running user has a valid path set up, and to allow us to do some checking on the state of the file at the given location. In line 9 we try to establish the name of the binary we are calling, so that we can tailor any error messages we emit to show which command the problem occurred with. When writing error messages, a little extra work upfront can save hours of head scratching once you begin to create bigger applications. The split command takes the path and command name we assigned to $whobin and separates them based upon the first argument given. We then use a negative subscript (which works in the same way as the array subscripts in article one) to return the last item from the split – the -1 means count back one from the end – which is the command name, and assign it to $cmd.

The sanity checks in lines 12 and 13 confirm that the file indicated by $whobin is both present and executable. If either of these criteria fails then the program aborts with an error message detailing the problem.

Line 15 is where we encounter our custom subroutine, getusers, which has its code in lines 21-38. We call the getusers subroutine with the location of the who command as its only argument and assign its return value to the %usertally hash for our future use.

File handle refresher

In last month’s article we dipped our toes into Perl’s file handling commands and showed how to open a file for reading and writing. Due to the large role file handles play in most programs, here is a brief recap on opening, writing to and closing a file handle:

open (HANDLE, '>afile') || die "Failed to open HANDLE: $!\n";
print HANDLE "Hello\n";
close HANDLE;

The example opens the file “afile” in the current directory for output – clobbering the contents of any existing file of that name. We then use the common Perl idiom to test whether the open operation is successful. If it fails then the die function is called, exiting the program, printing the error message given and setting the return code. In the string passed to the die function we also pass $! – another of Perl’s internal variables. When used in string context, $! reports the system error string related to the last command. If the open was successful we carry on to the next line and then print the line “Hello\n” to afile. After printing the line the file is closed and the program exits. It is possible to check the return value of the closing of the file handle, but in this example we gain nothing from it, as there is nothing we can check.

We then moved on and covered alternate ways of initialising the handle to allow us different ways of interacting with it:

* '<' – Read from a file.
* '>' – Over-write or create the file.
* '>>' – Create a file if none exists, append to a file if it does.

If no prefix is specified the default is '<'.
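As a quick sketch of the append mode in action – the file name is invented for illustration:

open(LOG, '>>mylog.txt') || die "Failed to open mylog.txt: $!\n";
print LOG "count_logins.pl run completed\n";   # written after any existing content
close LOG;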

Introduction to IPC and piping

In the count_logins.pl script (in the main body of the article) we open a file handle as a pipe to an external system command. In order to understand how this works you will need a basic understanding of file handles; if you are unsure then please read the File handle refresher boxout before continuing.

Opening a pipe to an external command can be considered one of the more basic forms of Interprocess Communication. Interprocess Communication, or IPC as you’ll often see it referred to, is a way for multiple processes to communicate with each other. This communication can range from merely knowing that an event has occurred, known as “signal handling”, to sharing the output of one process with another on the same host, the same network or even across the Internet. In the example script, count_logins.pl, the pipe is opened to the who command as shown below:

open(WHO, "$whobin |") || die "Failed to open who: $!\n";

The who command is executed and, if it is successful, its output is available for reading from the WHO file handle in the same manner as if the handle referred to a plain text file. This simplicity in gathering the output of external commands is one of the main contributors to Perl’s title of duct tape of the Internet. Taking this premise slightly further, we can use the same syntax to set up whole pipelines of commands external to the Perl script. These chains eventually return the output of the last command in the chain. This behaviour fits in so well with the standard Unix ideal of filter chains that many people never advance beyond this form of IPC. Now that we have shown how to read data in from an external command, it seems fitting to show how easy it is to reverse this and send output from a Perl script to an external application:

open(PAGER, "| $pager") || die "Failed to open $pager: $!\n";

If you are curious as to Perl’s other forms of IPC then ‘perldoc perlipc’ is a good place to start, and Lincoln Stein’s Network Programming with Perl is an excellent title that covers the subject in unrivalled depth.
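Taking the pipeline idea a step further, here is a hedged sketch – the command chain is invented for illustration, but the open syntax is exactly what the boxout describes:

# Count how many users are set up with each login shell
open(SHELLS, "cut -d: -f7 /etc/passwd | sort | uniq -c |")
    || die "Failed to open pipeline: $!\n";
print while <SHELLS>;   # each line arrives in $_, already filtered by the chain
close SHELLS;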

At this point we will make a leap to line 21 and have a closer look at what is happening in the subroutine that gives us that return value. We declare the subroutine with the sub keyword, followed by the name of the subroutine and then an optional prototype. We will cover sub in a future article, or the impatient can take a look at perldoc perlsub. We then follow this with a curly brace to show we are starting the body of the sub.

In line 22 we assign the location of the who command that we passed in to the subroutine to the $whobin variable, using the shift function. You may remember the shift function from our earlier encounter with it in article one, when we used it to remove and return to us the first element of an array. In Perl, subroutine arguments are accessed via the @_ array – an implicit variable and a relative of $_ – that you will be seeing more frequently from this point on. When you pass multiple scalars (variables with a single value, visually represented with a ‘$’) to a subroutine, each call to shift returns the next one in the array while removing it. This is one of the methods of iterating through subroutine arguments.
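As a minimal sketch of shift working through @_ – the subroutine and its arguments are invented for illustration:

sub show_args {
    my $first  = shift;   # removes and returns the first element of @_
    my $second = shift;   # then the next one
    print "first: $first, second: $second\n";
}

show_args("who", "/usr/bin/who");   # prints: first: who, second: /usr/bin/who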

We then create the hash that is to hold the users on the system and the number of logins they currently have running, before moving on to a variant of the file open we saw last month: line 26, with the piped file open, is explained in the Introduction to IPC and piping boxout.

One of the important design considerations for programs that “pipe out”, have other interactions or dependencies with external commands, or that utilise other forms of IPC, is that of blocking. When you invoke an external command and attempt to read in its results, your program will halt until the IPC or pipe

has returned the data that is to be read. If this is not recognised and catered for, it can have many negative effects on your program. A common example is the user prematurely terminating the script from the terminal, leaving any external resources it uses in an undetermined state, or interrupting the program in the middle of a series of actions that must either all be completed or not at all (this is known as being atomic). A common way to deal with this in Perl on Linux is to use signal handlers to protect critical parts of your program and allow them to exit gracefully. Another, more advanced, use of signals to help mitigate the problems of blocking is the alarm function – full details of which can be found in perldoc -f alarm – together with a custom signal handler; due to its advanced nature we will return to this at a future point, when we cover different IPC mechanisms.

Linux signals and signal handlers

Events can happen at any point while a process runs: the operating system may terminate the application, file handle limits can be reached, or the user may simply get tired of waiting and press Ctrl+C. When one of these occurs a signal is sent to your program, and it responds by taking an action such as immediately halting execution and exiting, or rereading its configuration. To get a list of the signals Linux supports you can type kill -l at the command prompt.

While the default responses to signals can be enough to ensure that the program does the minimum of what’s required, they can also cause it to exit in an incomplete state, creating problems such as leaving temporary files on the machine, or even just preventing it from logging the time the program stopped. To get around some of these limitations you can write your own custom signal handlers to catch and process the signals as you see fit. Overriding signal handlers in Perl is simple: the %SIG hash can have references to user-defined signal handlers (subroutines by another name) that are called when Perl receives the corresponding catchable signal.

$SIG{INT} = sub { print "I got killed\n"; exit; };

while (1) {
    print "Still here\n";
    sleep 2;
}

In the above code snippet we enter an infinite loop that simply prints the same string until you get bored and press Ctrl+C. This sends an INT signal to Perl, causing Perl to stop the section of the code it’s currently executing and call the handler assigned to INT. While this example is slightly contrived, if you remove the exit in the handler the program does not terminate on a Ctrl+C, and this is one example of how you could protect sections of the code that need to complete from being killed while running.

With line 29 we begin to get the data in from the WHO handle and process it to give us our totals. We set up a standard while loop which will iterate over the file handle, assigning the line read in to $_ until no more data is left. This was covered in article two if you need a refresher. On line 30 we get our first view of a regular expression. A regular expression (regex) is a way of expressing a desired set of text to match using special meta-characters. While this explanation may seem less than enlightening, regular expressions play such a large part in Perl that we will cover them in great depth in a separate article. This regex is a simple one and has been placed just to whet your appetite. Breaking down the regex, we have a forward slash that indicates that anything between it and the next un-escaped forward slash is the target we wish to match. The ^ indicates that the regex should be checked from the start of the string and the $ indicates the end of the string. The \s is called a character class and represents the



different forms of whitespace (including spaces and tabs), while the asterisk is similar to a wildcard and means zero or more of the expression preceding it. Putting these together, we end up with code that says: “If the line from start to finish is empty or comprised only of whitespace, then do not process this line and jump to the next.” While this example may not be crystal clear if you have no previous exposure to regexes, hopefully it has shown how terse yet powerful they can be when used correctly.

Still operating on the implicit $_, we remove the newline from the end of the string (using the chomp on line 31) and then we work another bit of regex magic on line 32. This time we use a character class that represents words. A word in this context is a letter in the range a-z or A-Z, a number in the range 0-9, or the underscore (‘_’). We match as many word characters as we can from the start of the line, up until the first piece of whitespace we encounter, using \s again, as described above. The parentheses are another regex meta-character, known as capturing or grouping depending on the context they are used in. They cause the value matched by the regex expression inside them to be assigned to one of the special regex match variables – in this case it gets assigned to $1, as it is the first match – so that we can use the matched text outside of the regular expression.

In line 33 we use a Perl idiom that you will see in the wild. If we matched a new user name on line 32 it will not yet be present in the %user hash. We then add the user to the %user hash and increment the number of times we have seen that user by one. In order to understand why this is successful you must remember that when an empty string is used in numeric context it is a zero; we then increment the zero by one and have the correct number of logins – one login. If the user is already in the %user hash, the ++ ups the number of logins by one as expected. This

is an oft-used idiom as it reduces a four line operation to one cleaner line of code. We then act like responsible coders and close the WHO handle at line 36 – even knowing that implicit closes will occur it’s good practice to close them manually. We then return the %user hash to line 15, where we called the subroutine, and where the values are assigned to %usertally and we finish the subroutine with the closing curly brace. Jumping back up to line 17 we iterate through each of the key and value pairs in the hash with a while loop and in the body of the loop print out the user and the number of logins they had. The last interesting example of Perl code in this small application is at line 18. When we list the number of logins the person had we want to say ‘dwilson had 1 login’ or ‘dwilson had 2 logins’ with the s added to the end of the string for anything more than a single login. We do this by building a longer string from composite strings and including a ternary operator. The building of the string is achieved by passing a list of arguments to the print function with each argument separated with a comma. The ternary operator (also called the trinary operator in some Perl books) is in essence a shorthand if-then-else statement. It is actually an expression, so it can be added in places such as function calls where an if statement is not permitted. A ternary maps out like this:


condition ? then : else

$numlogins > 1 ? "s" : ""

So if $numlogins is greater than one, meaning the condition is true, the then part is used and an s is added to the string. If the condition evaluates to false, the else part is used and, seeing as in this case we do not wish to add anything, we return an empty string.
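To make the shape concrete, here is a small standalone sketch – the variable name is invented for illustration:

my $numlogins = 3;
print "found $numlogins login", $numlogins > 1 ? "s" : "", "\n";   # prints: found 3 logins

Because the ternary is an expression, it slots straight into print’s argument list, where an if statement would be a syntax error.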

In closing

Now that you have seen a complete, albeit small, example of a functional Perl script, the more abstract concepts covered in the first two articles should be easier to understand. We have lightly touched upon subroutines and regular expressions, and handed out some pointers on further reading for those eager to move ahead before we come back to them in the near future.

Source code The source code for the examples used in this article can be found on this month’s cover CD in the /Perl directory.




C: Part 8

LANGUAGE OF THE ‘C’

In part eight of Steve Goodwin’s ‘C’ tutorial we take a look at memory allocation

The data we’ve worked with so far has all been static – i.e. of a predetermined, known size. We declare an array, structure, or array of structures within the program and use it. That’s fine for the projects we’ve done so far, but if we were writing a “real” program it is highly unlikely we could do this. A word processor couldn’t know the size of every file it will have to work with, any more than a paint package could know the size of every picture it needs to manipulate. So how do we allocate memory dynamically at run-time?

Set adrift on memory bliss

The answer, quite naturally enough, is by using a function to allocate memory dynamically at run-time! That function is called malloc. Listing 1, line 2 includes the appropriate header for memory allocation. This file, as we’ve seen several times before, includes the prototypes for several functions that are implemented inside glibc. We’ve used it before for the atof and atoi functions and random numbers. Now we’re using it for memory management. The malloc in line 8 stands for Memory ALLOCation and requests 4Kb of memory (1,024 integer elements, each 4 bytes in size).

Listing 1

 1  #include <stdio.h>
 2  #include <stdlib.h>
 3
 4  int main(int argc, char *argv[])
 5  {
 6      int *pData;
 7
 8      pData = (int *)malloc(1024*sizeof(int));
 9      if (pData)
10      {
11          *(pData+0) = 1;       /* Write an int to the first available memory location */
12          printf("%d", *(pData+512));   /* Output an int from somewhere in the middle */
13          *(pData+1023) = 1024; /* Write into the last */
14          free(pData);
15          pData = NULL;
16      }
17      return 0;
18  }

integer elements, each 4 bytes in size). Because the memory allocation routines don’t know what type of data you want, it cannot return a pointer with the correct type. Although there could be routines called ‘malloc_int’ and ‘malloc_short’, this would not help if we created a custom type called ‘Person’ as we’d have to write special allocation code for ‘malloc_Person’ and recompile it into ‘glibc’ every time we wrote a new program. Instead, malloc uses a ‘void pointer’, which allows the pData variable to point to data of an undetermined type. Review last month’s article on type casting to remind yourself about this, if necessary. Because memory is finite, it is possible for this function to fail. We must therefore check that the pointer is valid (line 9), and gracefully handle any allocation that does not happen. The ‘C’ standard specifies that malloc must return a NULL pointer – which is numerically equal to zero – should the memory allocation fail. ptr = malloc(1000000000); /* Probably can’t allocate this! */ /* ptr is now NULL */ *ptr = 10; /* Trying to store the number 10 at memory location 0 */ /* This causes a segmentation fault, or core dump */ The complexity of processing failed allocations is likely to grow proportionally with the size of the project. Still, you must keep thinking ‘what should happen if this fails’ and handle it. It is not polite for the user to be thrown out of the program (by a segmentation fault) because you didn’t code it properly! If you absolutely need some memory (for an ‘I’ve run out of memory’ message, say) then allocate that memory when the program starts. That way, if the program can start successfully it can run successfully – in all situations and without problems. When loading a file, for instance, some text editors create a working


PROGRAMMING

buffer ahead of time. If there is not enough memory for this buffer then the file is opened in read only mode. This is not ideal, perhaps, but significantly better than letting the user edit a text for an hour, only to be told ‘I’ve run out of memory’ and find they are unable to save! Lines 11-13 read and write data from the newly created memory block. Like arrays, referencing elements outside the permitted bounds can cause segmentation faults. The data is not initialised to any specific value either, so line 12 could print junk. Pointers, being expressions likes any other, can reference data with simple pointer arithmetic. Like so:

pData to zero for you automatically. Which is nice! You can de-allocate the memory with the same free function above and, as with malloc, no types are passed in – only numbers. From a personal point of view I usually use malloc, as opposed to calloc. This is because we know that clearing the memory to zeros takes time and since we will be manually filling the memory with useful data, pertinent to our program, we don’t need to waste processor cycles. Before we move on, there is one other allocation function you should be aware of, realloc.

Let’s go round again int *pData2; pData2 = pData+511; /* assign a new pointer to the 512 nd element */ *(pData2 + 10) = 1; /* equivalent to *(pData + 521) = 1; */ Referencing the elements can be done with either pointers or square brackets: pData[10] = 123; /* equivalent to *(pData + 10) = 123; */ although the latter is not recommended, as it implies you’re referencing an array – which you’re not! Finally, in the same box labelled “what goes up, must come down” is the memory de-allocation routine, free, at line 14. It is good practice to do this explicitly whenever you’ve finished using the memory, as this allows someone else to use it and can, in turn, also improve performance. However, some programmers assume that all un-freed memory is de-allocated automatically when you exit the program! We aren’t amongst them – and we would hope you’re not either! Line 15 is a stylistic point. Because free does not (and cannot) clear the memory pointer, it remains non-NULL. However, the data to which it points does not remain valid, so if we re-used our validity check (line 9) it would succeed – but be in error. Therefore, in the same way as we reset file pointers previously, we set the pointer to NULL – just to be safe. Now, before we move onto something completely different, let’s briefly visit two different allocation functions, realloc and calloc.

I can see clearly now The other main function for memory allocation is calloc. Rewriting the above example using calloc would require just one change to Listing 1: 8

pData = (int *)calloc(1024, sizeof(int));

The only difference (syntax aside) is that calloc will reset each newly allocated memory address from

The only re-allocation routine we have at our disposal is realloc. This is used to resize a memory block you’ve already allocated. pNewPtr = realloc(pOldPtr, iNewMemorySize); So, should you request 5Kb of space then find you need 10Kb, you can use realloc to make a little magic work and produce a larger block! pOldPtr is the original pointer you got from malloc or calloc, and iNewMemorySize should be the total number of bytes you want in memory. It could be larger or smaller than the size of the original block, but is an absolute value (i.e. not relative). (As a bonus, passing a NULL pointer for pOldPtr will cause realloc to function exactly like malloc.) The data from pOldPtr will automatically get copied into pNewPtr if the function succeeds. There are two caveats however. One is that there might not be enough memory for the larger size block, and pNewPtr will be NULL. The other is that the pointer to the new memory block, returned by realloc, might differ from the old pointer you passed into it! This is the important point. If you’ve been holding references to your data as pointers to this pOldPtr block they will all be invalid and, as they’re probably scattered throughout your code, very hard to track down and reassign with the new memory locations. struct sMY_DATA *g_pData; int GetMyDataEntry(int idx) { return *(g_pData + idx).iEntry; /* Same as g_pData[idx].iEntry */ } The code above solves that problem by providing a single point of access to your allocated data – the global g_pData variable – and is a good thing. However, be careful of the code such as: g_pData = realloc(g_pData, iNewSize + 1024); /* Bad coding!!! */


Should the allocation fail, g_pData will become NULL and you will lose your original pointer to the data. A core dump will ensue next time the pointer is dereferenced, and you'll be unable to free the memory. Finally, if you're looking for a function to remind you how much memory has been allocated at the memory location g_pData then keep looking! There is no safe, portable way of finding this information out. If you need the information, then store it along with your pointer.
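One safe pattern is to let realloc deliver its result into a temporary pointer first, and only adopt it on success. A minimal sketch of our own (the error handler here is a hypothetical function, not part of the listings):

void *pTmp = realloc(g_pData, iNewSize + 1024);

if (pTmp != NULL)
    g_pData = pTmp;        /* success: adopt the (possibly moved) block */
else
    HandleOutOfMemory();   /* failure: g_pData is untouched and still valid */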

Book of days
Because of the similarities between dynamic memory and static strings (both point to anonymous blocks of memory) there are manipulation functions with almost identical names. They are almost identical in operation, too! However, whereas strings know their size (because of the NUL terminator), the memory functions have to be told. Their descriptions (i.e. prototypes) also live in string.h.

The void pointer comes to the fore here, too. Since writing code to handle a memory copy of integers and a memory copy of floats would mean doubling up code, 'C' uses generic memory handling functions, as shown in Table 1, that take void pointers and describe the amount of data in bytes (even if the memory refers to floats). This is because the routines know nothing of the type (naturally – they're void pointers!) and a byte is the lowest common denominator.

Table 1 – Memory handling functions
memcpy(pTo, pFrom, 100); – Copy 100 bytes of memory from pFrom to pTo (note that the destination comes first). Naturally, both memory ranges should point to our own valid memory.
memmove(pTo, pFrom, 100); – Move 100 bytes of memory from pFrom to pTo. See note below.
memset(pTo, 'A', 100); – Fill 100 bytes of memory with the ASCII character 'A'. If you wanted to write 25 floats instead, you could not use this function since it deals in bytes, and would write the data manually with a for loop. This function is often used with '\0' or 0 in place of 'A' to clear a portion of memory.

One word describes the difference between a memcpy and a memmove – overlap. One sentence describes it – if the range of memory locations in the source overlaps any part of the memory locations indicated in the destination, you will have to use memmove to prevent memory corruption. This restriction allows the C library implementers to use more optimised code within the memcpy function.
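As a quick illustration of the overlap rule (a self-contained sketch of our own, not from the article's listings):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char szText[] = "abcdef";

    /* Source (szText + 1) and destination (szText) overlap,
       so memmove is required; memcpy would be undefined here */
    memmove(szText, szText + 1, strlen(szText + 1) + 1);  /* +1 copies the NUL too */

    printf("%s\n", szText);   /* prints "bcdef" */
    return 0;
}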

There are also two simple functions to query memory contents, shown in Table 2.

Table 2 – Memory querying functions
iSame = memcmp(ptr1, ptr2, 10); – This compares the first 10 bytes of data stored at each pointer. If they are identical, it returns 0. If the data at ptr1 is less than that at ptr2, a negative value is returned; if it is greater, the function returns a positive value. Note the similarity to strcmp.
pFound = memchr(ptr, 'A', 128); – Searches 128 bytes of memory, starting at ptr, to look for the 'A' character. If no 'A' is found, a NULL pointer is returned; otherwise pFound points to the first location in memory where one occurs. Like memset, this only deals with byte (i.e. character) data. Searching memory for an integer would require a custom loop.

At first glance it might look like two structures could be checked for equality with the memcmp function, but this is not necessarily a good idea. The reason for which, we shall now explain.

Hole in my shoe
Take a structure such as:

struct sPOSITION
{
    int iXGridPos;
    int iYGridPos;
    char iFloor;
};

It could, for example, be used to store the location of a ghost in a 3D version of Pacman! We could write a simple AI routine that moved the ghosts around the maze by changing the iXGridPos, iYGridPos and iFloor elements (we're not sure how ghosts would climb stairs, but bear with us!). Then, when the player's position (also stored in an sPOSITION structure) equalled the ghost's position we could kill the player. The code would probably look like this:

if (memcmp(&Ghost.Position, &Player.Position, sizeof(struct sPOSITION)) == 0)
    KillPlayer();

However, this would probably never work, and the reason for this is padding.

The lunatics have overtaken the asylum
If you count the number of bytes in the above structure you should get nine: two integers at four bytes a piece and one single byte character. However, calling sizeof on this structure will yield a different answer – 12! This is because the compiler has automatically padded the structure to 12 bytes to fit in with the memory model of the host machine. This, on an Intel family IA32 system, requires that all structure sizes be a multiple of four bytes, padding the char above to four bytes. It also requires that all 32 bit values (like ints) start on 32 bit (i.e. four byte) boundaries. This is called structure alignment.

So? Well, the padding means there are three bytes unaccounted for in the structure, and the memcmp function will be trying to compare them for equality. Since we haven't (and can't easily) set them up, they will be uninitialised (set to junk) and will prevent our player from ever getting killed (since junk is never the same twice!). Instead of a bitwise test, we need an element test, and the way to do that is to manually compare each element:

if (Ghost.Position.iXGridPos == Player.Position.iXGridPos &&
    Ghost.Position.iYGridPos == Player.Position.iYGridPos &&
    Ghost.Position.iFloor == Player.Position.iFloor)
    KillPlayer();

For the completists out there, we will just say there are two ways to avoid doing this! The first is to memcmp the first nine bytes only! This is very ugly, doesn't port well and breaks if the iFloor variable becomes a short or if it becomes the first element in the structure. The other way is to memset the whole structure to 0 at the start of the game. From any point thereafter the 'other three bytes' will be 0 and compare exactly, enabling you to use the normal memcmp routine! It is almost an acceptable solution, but will be problematic if you forget to memset any sPOSITION structure you subsequently try to memcmp!

If you're wondering why we've mentioned structure padding in a section on memory (and not structures), your answer is thus: there wasn't enough rope for you to hang yourselves in the structures article. You can't compare structures with '==', like you can integers, so to stop people trying to open the backdoor with memcmp, we decided to lock it first!
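To make the memset approach concrete, here is a minimal sketch of our own (simplified to bare sPOSITION variables, not from the article's listings):

struct sPOSITION Ghost, Player;

/* Zero both structures once, at the start of the game -
   this clears the three padding bytes as well as the elements */
memset(&Ghost, 0, sizeof(struct sPOSITION));
memset(&Player, 0, sizeof(struct sPOSITION));

/* From here on, a bitwise comparison is reliable */
if (memcmp(&Ghost, &Player, sizeof(struct sPOSITION)) == 0)
    KillPlayer();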

Wrapped around your finger
One trick used by a lot of programmers is to write a "wrapper" for the memory allocation routines. This means instead of calling malloc, your program will call a wrapper (a function that sits around some code to monitor its usage, for example) like MyMalloc (free would be wrapped with MyFree). In this way, you can easily keep track of how much memory you are using, how often the routines have been called, how much has been freed, and how many leaks (memory you have allocated, but not freed) you have. The code might start off like this:

int g_MemAlloced = 0;

void *MyMalloc(int iSize)
{
    void *ptr = malloc(iSize);
    if (ptr)
        g_MemAlloced += iSize;
    return ptr;
}

We can then find out how much memory we're using with:

printf("Total allocated = %d K\n", g_MemAlloced/1024);

But that's just a start. We will also want to know how much we're freeing, and perhaps why the memory is being used. Writing the MyFree function then becomes problematic, since we can't find out from a pointer how much memory has been allocated there. What we need is to attach our own structure to each block allocated through malloc, to hold information such as size and reason for allocation. The most common way of doing this is to create a structure (for the auxiliary data) and use MyMalloc to allocate enough memory for the user's data and your structure, in one block:

ptr = malloc(iSize + sizeof(MEMORY_BLOCK));

and then give the user the memory pointer N bytes after ptr. This can be done in two ways. Either create a byte pointer and increment it N times (where N is the size of the block), or use a MEMORY_BLOCK pointer and increment it once. Both are equivalent:

char *pBytePtr = (char *)ptr;
MEMORY_BLOCK *pMemBlock = (MEMORY_BLOCK *)ptr;

pBytePtr += sizeof(MEMORY_BLOCK);
pMemBlock++;

Now we can review our memory usage at any time by looking through all the pointers. The difficulty is the innocuous phrase: 'all the pointers'. We need to store each pointer as it is allocated, but where? An array? Another allocated block? Or perhaps the question is how?

Union of the snake
Linked lists are an oft-used data structure and feature on every computer course we've ever known! A linked list is a general-purpose storage method that can grow dynamically as your data does, with very little memory overhead. It comprises two features:

● Each element has a pointer to the next element in the list.
● A single variable points to the first element in the list.

Declaring a list is easy, and uses the syntax we've already seen:

typedef struct sMEMORY_BLOCK
{
    char szReason[64];
    int iSize;
    struct sMEMORY_BLOCK *pNext;
} MEMORY_BLOCK;

MEMORY_BLOCK *g_pFirstBlock;

As a structure, MEMORY_BLOCK can point to itself. But the structure hasn't been declared yet – so how can we set up a pointer to it? It's this recursive nature that can appear a little counter-intuitive at first sight, but the code above should show you how we do it. By the time the compiler has reached the first brace it knows about a 'struct sMEMORY_BLOCK'. It doesn't yet know what's inside it or how big it is, but it knows it exists (unlike MEMORY_BLOCK – our easy-to-use typedef – which it hasn't seen yet!). This allows the pNext pointer to be declared – the size required by a struct sMEMORY_BLOCK * is the same as that of a void * or an int * – and incorporated into the structure. From here, we then use typedef to create the synonym MEMORY_BLOCK to make the code look neater, although this is not essential. (Also see the Mutual inclusion boxout.)

So with this knowledge, let's return to the memory allocation example and see how MyMalloc can add elements to the head of the list:

pBlock->pNext = g_pFirstBlock;
g_pFirstBlock = pBlock;

When calling MyFree, it will search the list of allocated blocks for a matching pointer. It does this by iterating through each block with the pNext pointer:

MEMORY_BLOCK *pBlock = g_pFirstBlock;

while(pBlock)
{
    /* Do something with pBlock */
    pBlock = pBlock->pNext;
}


Once MyFree has found the pointer, it can modify the pNext pointers so that the block before it points to the block after it. In the special case where we delete the element at the head of the list, we need to reassign g_pFirstBlock. Linked lists, as a data structure, lend themselves well to recursive searching routines and can be extended by adding a previous pointer, or can be upgraded to a tree structure by including two pChild pointers instead. It is fairly easy to create code that adds elements to the end of the list, or removes specific elements from the middle. A great amount has been written on these data structures and their implementation, so we won't cover it here. Suffice it to say, however, that these ideas can be utilised in a myriad of software projects, and it's well worth taking the time and effort to master them. The CD has a complete example of the MyMalloc and MyFree routines using linked lists, filling in the gaps above which have been intentionally left blank, as an exercise for you, the reader!
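As a hint towards that exercise, the unlinking inside MyFree might take a shape like this. This is a sketch of our own – the complete version on the CD will differ in its details – and it assumes MyMalloc returned the address immediately after the MEMORY_BLOCK header:

void MyFree(void *ptr)
{
    MEMORY_BLOCK *pSearch = ((MEMORY_BLOCK *)ptr) - 1;  /* step back over our header */
    MEMORY_BLOCK *pPrev = NULL;
    MEMORY_BLOCK *pBlock = g_pFirstBlock;

    while (pBlock && pBlock != pSearch)
    {
        pPrev = pBlock;
        pBlock = pBlock->pNext;
    }

    if (pBlock)
    {
        if (pPrev)
            pPrev->pNext = pBlock->pNext;   /* unlink from the middle */
        else
            g_pFirstBlock = pBlock->pNext;  /* the head-of-list special case */

        g_MemAlloced -= pBlock->iSize;
        free(pBlock);   /* header and user data were one malloc block */
    }
}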

Mutual inclusion
Although it's not sensible (or possible) to include a structure inside its own structure (as this would create a recursive structure approximately infinity bytes long!), it is possible (and sometimes desirable) to include a pointer inside A to point to B, and vice versa. Here's how:

struct sBETA;

struct sALPHA
{
    struct sBETA *pBeta;
};

struct sBETA
{
    struct sALPHA *pAlpha;
};

As you can see, there's no difference in the creation or handling of the structures compared to the linked list example (we've omitted the typedefs here to prove it can be done!). You just need to add the solo 'struct sBETA;' line. This says "there is a structure available called sBETA, but I don't know what it contains yet". The compiler can then use it happily in any structure where the size of sBETA doesn't need to be known (i.e. as a pointer). GCC will happily forgo the 'struct sBETA;' line, and work exactly the same without it. However, this is not guaranteed to happen across all compilers or platforms, so it should be included, as above.



XML processing with Python, Part 1: SAX and DOM

STRUCTURAL ANALYSIS

XML and Python make a great team. With Python it is easy to control SAX as well as DOM parsers, which allow you to analyse structured documents. Andreas Jung explains how

Over the last few years XML has developed into a platform-independent standard exchange format for data and documents. Apart from the actual XML standard there are other standards, such as XSLT and XPath, which relate to converting and accessing XML documents. Python and XML could therefore both be described as middleware – reason enough to have a closer look at the possibilities of XML processing with Python.

XML modules in Python
The standard distributions of Python 2.1 and 2.2 already contain the most important modules for XML processing. However, these do not cover all the functionality that we require for the purposes of this article. The PyXML package provides a much greater functionality range. PyXML can be installed either as an rpm or directly from the sources using Distutils (python setup.py install). Binaries for Windows are also available for download.

Objectives
In the following example we will be using the XML file pythonbooks.xml. This file contains data for three Python books, which we will convert into a simple HTML table using various XML techniques. To simplify matters, a "book" (<book> tag) consists only of its title, author and publisher (Listing 1).

Listing 1: pythonbooks.xml
<?xml version="1.0" encoding="utf-8" ?>
<pythonbooks>
  <book id="1">
    <title>Programming Python</title>
    <author>Mark Lutz</author>
    <publisher>O'Reilly</publisher>
  </book>
  <book id="2">
    <title>Python &amp; XML</title>
    <author>C. Jones &amp; F. Drake, Jr.</author>
    <publisher>O'Reilly</publisher>
  </book>
  <book id="3">
    <title>Python Essential Reference</title>
    <author>Guido van Rossum &amp; David Beazley</author>
    <publisher>New Riders</publisher>
  </book>
</pythonbooks>

Keep it simple with SAX
SAX stands for "Simple API for XML". A SAX parser is essentially based on a callback API. This means a number of functions, which the application has registered for a certain event type, are called during the process of parsing an XML document. Such events typically include opening and closing XML tags, text and entities, but also parser errors, which are reported to the application as events. The most important component is the content handler, which implements the callback functions startElement(), endElement() and characters(). The content handler is registered with the SAX parser via its setContentHandler() method.


For example, for every opening tag it calls startElement() with the tag's name and attribute list. Listing 2 (sax.py) shows the implementation of the converter using the SAX parser. The actual application logic is contained within the individual if-then-else blocks. Table 1 shows the most important functions of the ContentHandler class.

Table 1: SAX ContentHandler class methods
startDocument() – called when the parser starts
endDocument() – called when the parser terminates
startElement(name, attrs) – called for each opening tag <name>
endElement(name) – called for each closing tag </name>
characters(content) – called for text

DOM: A few sizes up
The Document Object Model (DOM) is defined by a number of standards set by the World Wide Web Consortium (W3C) and covers all aspects of XML processing. Unlike SAX, when the DOM parser parses an XML document it creates an internal hierarchical tree structure which the application can access using the DOM API. Figure 1 shows the internal structure for the earlier XML example. There are various types of nodes on the tree (see Table 2) representing, for example, XML tags (ELEMENT_NODE) or text elements (TEXT_NODE) between XML tags. All nodes have a number of attributes that can be used to navigate a DOM tree (see Table 3).

The difference from SAX becomes obvious in the DOM implementation of our example (Listing 3). With DOM the application determines the processing sequence (with SAX the application reacts to the parser events). The simplest way of transferring an XML document to a DOM tree is via the FromXmlStream() method, which reads an XML document from an input stream, parses it and returns the top node of the DOM tree. In our example we are first of all looking for all element nodes representing the <book> tag. The getElementsByTagName() method searches for all element nodes representing the tag in question. Once you have located these nodes you can retrieve the nodes for <author>, <title> and <publisher> in the same way and then extract the text contents of the corresponding tags. The function getText() steps through every child node and tests whether it is a text node. If it is, the text is extracted.

Figure 1: Internal XML structure for the example "Pythonbooks"

Listing 2: sax.py
from xml.sax import make_parser
from xml.sax.handler import ContentHandler

class BookHandler(ContentHandler):
    book = {}
    inside_tag = 0
    data = ""

    def startElement(self, el, attr):
        if el == "pythonbooks":
            print "<table>"
            print "<tr>"
            print "<th>Author(s)</th><th>Title</th><th>Publisher</th>"
            print "</tr>"
        elif el == "book":
            self.book = {}
        elif el in ["author", "publisher", "title"]:
            self.inside_tag = 1

    def endElement(self, el):
        if el == "book":
            print "<tr>"
            print "<td>%s</td><td>%s</td><td>%s</td>" % \
                (self.book['author'], self.book['title'], self.book['publisher'])
            print "</tr>"
        elif el in ["author", "publisher", "title"]:
            self.book[el] = self.data
            self.data = ''
            self.inside_tag = 0
        elif el == "pythonbooks":
            print "</table>"

    def characters(self, chars):
        if self.inside_tag:
            self.data += chars

# Content handler
bh = BookHandler()
# Instantiate parser
parser = make_parser()
# Register content handler
parser.setContentHandler(bh)
# Parse XML file
fp = open('pythonbooks.xml', 'r')
parser.parse(fp)

Listing 3: dom.py
from xml.dom.ext.reader.Sax2 import FromXmlStream

def getText(nodelist):
    lst = []
    for node in nodelist:
        if node.nodeType == node.TEXT_NODE:
            lst.append(node.data)
    return ''.join(lst)

def td(txt):
    print "<td>%s</td>" % txt,

fp = open('pythonbooks.xml', 'r')
dom = FromXmlStream(fp)

print "<table>"
print "<tr>"
print "<th>Author(s)</th><th>Title</th><th>Publisher</th>"
print "</tr>"
for book in dom.getElementsByTagName('book'):
    print "<tr>"
    for item in ['author', 'title', 'publisher']:
        node = book.getElementsByTagName(item)[0]
        td( getText(node.childNodes) )
    print "\n</tr>"
print "</table>"
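Assuming pythonbooks.xml is in the current directory, either converter can then be run from the shell, redirecting the generated HTML table to a file:

python sax.py > books.html
python dom.py > books.html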



Modifying a DOM tree
The great advantage of DOM is that its tree structure can be reorganised dynamically. dom1.py in Listing 4 shows how simple it is to add a new <book> element. The corresponding element nodes are created using createElement(tagname), while text nodes are created with createTextNode(text). The crucial point is the integration of the nodes into the tree structure. In our example the text nodes are appended to the element nodes using appendChild(). The element nodes for title, author and publisher are in turn appended as descendants of the newly created book node. In the last step the new "book" with all its child nodes is appended to the existing tree. We are using the PrettyPrint() utility to output the extended tree (see Listing 5, pythonbook1.xml).

Listing 4: dom1.py
from xml.dom.ext.reader.Sax2 import FromXmlStream
from xml.dom.ext import PrettyPrint

fp = open('pythonbooks.xml', 'r')
dom = FromXmlStream(fp)

# find 'pythonbooks' node
top_nodelist = dom.getElementsByTagName('pythonbooks')

# new 'book' node
new_book = dom.createElement('book')

# all child nodes for 'book'
new_author = dom.createElement('author')
new_author.appendChild( dom.createTextNode('Andreas Jung') )
new_title = dom.createElement('title')
new_title.appendChild( dom.createTextNode('XML processing with Python') )
new_publisher = dom.createElement('publisher')
new_publisher.appendChild( dom.createTextNode('Linux Magazine') )

# link nodes
new_book.setAttribute('id', '4')
new_book.appendChild(new_author)
new_book.appendChild(new_title)
new_book.appendChild(new_publisher)

# and attach new book to book DOM
top_nodelist[0].appendChild(new_book)

PrettyPrint(dom)
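Incidentally, PrettyPrint() writes to standard output by default, but it also accepts an output stream, so the extended tree can be written straight back to a file. A small sketch (the output file name is our own choice):

from xml.dom.ext import PrettyPrint

out = open('pythonbooks1.xml', 'w')
PrettyPrint(dom, out)
out.close()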

Table 2: The most important DOM node types
ELEMENT_NODE – element nodes (XML tags)
ATTRIBUTE_NODE – attribute nodes (XML tag attributes)
TEXT_NODE – text nodes (text within XML tags)
CDATA_SECTION_NODE – nodes for CDATA elements
ENTITY_NODE – XML entities (e.g. &amp;)
ENTITY_REFERENCE_NODE – XML entity references (e.g. &reg;)
COMMENT_NODE – XML comments
DOCUMENT_NODE – document nodes
DOCUMENT_TYPE_NODE – document type definitions
DOCUMENT_FRAGMENT_NODE – document fragments
NOTATION_NODE – notation nodes

Table 3: Attributes and methods for all DOM nodes
attributes – node attributes
childNodes – lists all child nodes
firstChild – the first child node
lastChild – the last child node
nodeType – node type (see Table 2)
parentNode – the node directly above in the DOM tree
nextSibling/previousSibling – right/left sibling node
removeChild(childNode) – removes a child node
appendChild(newChild) – adds a new child node
insertBefore(newChild, refChild) – inserts a new node before another child node
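The attributes from Table 3 are all you need to walk a tree by hand. As a small illustration (a sketch of our own, assuming dom holds the document from Listing 3), the following prints the tag name of every element directly below the root element:

node = dom.documentElement.firstChild
while node:
    if node.nodeType == node.ELEMENT_NODE:
        print node.tagName
    node = node.nextSibling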

Table 4: ELEMENT_NODE API
tagName – name of the XML tag
getAttribute(name) – retrieves the value of an attribute for the node
getElementsByTagName(name) – retrieves a list of all descendant element nodes of the same name
setAttribute(attr, val) – adds a new attribute to the node
removeAttribute(attr) – removes an attribute from the node

Table 5: TEXT_NODE API
data – string representation of the text
length – length of the text

Table 6: SAX parser vs. DOM parser

SAX
+ fast
+ memory efficient
– no modification of XML documents possible
– no navigation possible within the XML document

DOM
+ modification of XML documents possible
+ flexible navigation
– not suitable for very large XML documents
– whole document in memory

Spoilt for choice
Whether you use SAX or DOM very much depends on the specific requirements in each case. SAX impresses with its speed and simplicity. More complex applications would suggest the use of DOM if they involve multiple access to many parts of an XML document or changes to the document's structure. The most important advantages and disadvantages are compared in Table 6. In part 2 of this article we will take a closer look at the use of XPath and XSLT under Python.

Listing 5: pythonbook1.xml
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE pythonbooks>
<pythonbooks>
  <book id='1'>
    <title>Programming Python</title>
    <author>Mark Lutz</author>
    <publisher>O'Reilly</publisher>
  </book>
  ..
  ..
  <book id='4'>
    <author>Andreas Jung</author>
    <title>XML processing with Python</title>
    <publisher>Linux Magazine</publisher>
  </book>
</pythonbooks>

The author
Andreas Jung lives near Washington D.C. and works for Zope Corporation as part of the Zope core team. Email: andreas@andreas-jung.com

Info
Python XML: http://pyxml.sourceforge.net
C. A. Jones and F. L. Drake, Jr, Python & XML (O'Reilly, 2002)


BEGINNERS

Welcome to the Linux User section of Linux Magazine

Our Linux User section continues to show you what's available on the desktop and the command line, proving that you can get the most out of your Linux system without being an Open Source veteran.

Seeing as the command line is so useful and important to making full use of your system, it seems only right that it should be made as pleasant to use as possible. With this in mind, Desktopia this month shows you ways of altering and beautifying your terminal, and shows you how to have fun with it too.

Less is more, some say, and in our Out of the Box section we show you how to make more of less. less, the command line pager utility, is used by all, but with the help of a little pre-process scripting you can get it to handle a wide range of file types.

With the price of CD writers dropping all the time, it is only fair that some effort should go into the development of the front-end tools that desktop users need to drive them. K-tools this month looks at CD Bake Oven, the latest in a long line of CD burning front-ends, but one that has now learned the lessons of ease and usability, features that some others have lacked. We will also take a walk through many of the features available in CD Bake Oven, some of which are only just now available to Linux users.

And finally... our Internet pages gather together some of the more useful Linux-related Web pages that, all too often, get lost amidst the tangled World Wide Web.

CONTENTS
75 BEGINNERS – A knowledge base for users new to Linux.
76 K-tools – CD Bake Oven, which is also on this month's coverdisc, makes the chore of burning CDs much less challenging. Time to banish the toasted coaster mountain...
79 The Right Pages – Here we have a list of some useful, Linux-related Web sites that we have found. If you know of better, write in and let us know.
80 Desktopia – Blend your command line terminal into your desktop with our Desktopia feature. We will even show you ways to make them fun to use – as if such a thing was so unusual.
82 Out of the Box – Pre-processing the input to the less command means you can open it up to a world of possibilities. With a bit of effort you can have less handling a whole new range of file types.




K-tools: CD Bake Oven

OVEN FRESH

Stefanie Teufel takes a look at the graphical front-end CD Bake Oven, and shows us how burning, or should we say baking, CDs has never been so easy

K-tools
In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn't want to do without.

As nice and powerful as all those Linux-based command line burner programs may be, who wants to have to immerse themselves in endless manpages just to find the parameters for copying a CD? You can save yourself the effort of information gathering by installing CD Bake Oven, a new graphical front-end for the programs cdrecord, mkisofs, cdda2wav and cdparanoia. The mkisofs package enables you to create the ISO 9660 filesystems needed to store the data to be contained on data CD-ROMs. The program cdrecord copies that data onto the blank CD. The programs cdparanoia and cdda2wav enable you to read data from an audio CD in a CD-ROM drive and write it to the hard disk as WAV files.

To start your own CD bakery you need to get the program cdbakeoven from the project homepage (http://cdbakeoven.sourceforge.net/download.php). Users with a current Mandrake, Debian or SuSE distribution can then immediately install the rpm packages they have downloaded; anyone else first needs to unpack the source code, change into the directory created in the process and then compile and install the program with:

./configure; make; make install

Red Hat users can also try their luck with the Mandrake package – we had no problems installing it under Red Hat 7.2. You also need to ensure that the burner programs and utilities mentioned above can be found on your machine. This should not present any significant problems as they are usually included as standard with any major distribution. Once you have installed the burner software you will notice a new entry, Applications/CDBakeOven, on your K menu, giving you quick and easy access to the program.

To make sure you won't miss out on the fun of the burning experience even as a normal user, you should assign sufficient permissions for cdrecord before running this program. It can normally only be executed by root. Log in as the superuser and change this as follows:

chmod 4711 /usr/bin/cdrecord

Configure me!

CD Bake Oven does its best to recognise any hardware required for burning at startup. Should it fail, you can add missing devices manually using the configuration dialog boxes. Select Settings/Configure CDBakeOven... from the menu bar and opt for the menu item Device Settings. If your hardware is recognised you should see something similar to Figure 1 on the Autodetect tab. If your device has not been detected, change to the Scanbus tab and click on Retry to initiate another hardware scan. Should your endeavours continue to remain fruitless, you can enter the hardware path yourself on the Custom tab. Some burners, particularly ATAPI ones, often prove problematic at this point. If you own one of these you must activate the kernel's ATAPI SCSI emulation before burning! This is a module included in standard distributions and can be loaded easily using the command:

modprobe -k ide-scsi

Figure 1: A lucky break: the hardware is detected straight off


If in doubt, it's best to swallow the bitter pill that is a kernel compilation.

BurnProof
A special feature of newer CD burners. Previously the data stream could not be interrupted under any circumstances during the burning process, as this would allow the burner's buffer to become empty. If this happened, for example if another program required too many resources or the source CD was difficult to read, then the writing of the blank CD could not be completed and it would be unusable. If the data stream is interrupted during burning with BurnProof technology, the system determines the position of the last data written and sends the burner into a loop. As soon as new data arrives, i.e. when the source drive is able to send more data to the burner and the buffer fills up again, the write laser moves to the marked position and resumes the burning process.

DAO
The Disk At Once process writes the individual tracks without their own lead-in and lead-out; there is only one common lead-in/lead-out for the entire CD. Warning: not all recorders can cope with DAO.

ISO image
An ISO image is an exact reproduction of the track that is being copied onto the CD, i.e. the new CD is a 1:1 copy of the ISO image.

While we are in the configuration menu, let's take a look at the other settings. The Recording Options dialog (Figure 2) is of special interest. At the very least you ought to activate the BurnProof option if your burner supports it. Your blank CD-Rs will thank you for it. If you would like the finished CD to be ejected once the burning process is completed you need to activate the Eject when done field. It is also a good idea to tick the option Fixate CD, otherwise CD players and quite a few CD-ROM drives will not be able to read your data. In this dialog you can also specify whether you want to use the DAO option.

Figure 2: Choose your recording settings
Figure 3: General options for CD Bake Oven

The option Customize defaults (Figure 3) is also worth a second look. CD Bake Oven automatically displays a selection dialog featuring the main burning actions (copy, create new CD, etc.) at startup (Figure 4). If you primarily compile your own data CDs you are probably better off without it. To stop this dialog appearing, deactivate the option Run 'welcome dialog' at start. The arrow buttons next to the Maximum record speed field allow you to set the burning speed. If you frequently work with ISO images, you should use the Working directory section directly below to specify a directory that will continue to have sufficient space available.

Figure 4: The optional welcome dialog


CDs à la carte
Once CD Bake Oven has recognised your burner, the program has got all the angles pretty much covered: not only can you copy or burn audio CDs with it, you can also master CDs, create multisession and bootable CDs, burn on the fly or erase CD-RWs. Even overburning, in order to utilise a slightly larger part of the blank CD, is possible at a click of the mouse.

A particularly nice feature of CD Bake Oven is the option of compiling your own data or music CDs using drag and drop. For this you need to familiarise yourself with the window in Figure 5, which you encounter at the start of the program. In the upper window on the left you first of all select the directory containing the data you want to copy. Its contents will appear in the window on the right. Now you just need to highlight the required files with the mouse and drag and drop them into the window below, and you have everything you need for creating your CD. CD Bake Oven supports the addition of individual files as well as entire directories. Your selection is only limited by the size of the blank.

Figure 5: The opening window of CD Bake Oven

If your choices are too large for the CD then an error message like the one in Figure 6 will prevent you from doing any damage. Occasionally this error message can also be caused by CD Bake Oven assuming the wrong size for the blank CD. The tool gives you a choice between blank sizes of 650Mb, 700Mb and 875Mb. It is therefore important that you set the relevant size in the Size pull-down menu before each burning session. You can also use 875Mb for the increasingly popular 800Mb blanks, as long as you ensure that your desired data quantity does not exceed the capacity of the blank. Do not be distracted by the values displayed as Used and Wasted: the latter is based on the assumption that you're using an 875Mb disc.

Figure 6: Too much for the blank

You start the actual burning process by right-clicking in the lower window. Select Create CD from the context menu and you should see a window that resembles the one in Figure 7. Use the Recording Details section to specify how you would like CD Bake Oven to proceed. You have the choice of simply producing an ISO image, going straight to burning the disc, or of putting your data directly onto the CD on the fly. The last option can be tested in dummy mode before you actually burn anything. As soon as you are happy with your settings you can kick off the actual burning process by clicking on the Create! button. If you like you can also monitor the progress in the Process Output section (Figure 8). The program additionally uses this window to inform you once the process has been successfully completed.

Figure 7: The last burning details

Figure 8: Making progress




The best Web sites for Linux users

THE RIGHT PAGES

Janet Roebuck sifts out the best Linux-related (and just plain interesting) Web sites to light up our browsers

Linux Will Prevail
http://www.linuxwillprevail.com/
Help expedite the process of making GNU/Linux and its collection of Open Source software tools as good or better for the average user.

HamSoft
http://radio.linux.org.au/
HamSoft is an excellent Web resource featuring Linux software for the Hamradio community.

World of Mathematics
http://mathworld.wolfram.com/
Welcome to Eric Weisstein's World of Mathematics. Be careful – browsing here will cause days to disappear in wonder.

Linux for small businesses
http://www.linux4smallbiz.com/
Find out the benefits that Linux use could bring for your small business. You may need to use the scroll bars a lot on this site.

Maximum Performance
http://www.maxpro.org/
Get the maximum performance from your Mandrake Linux distro.

Debian Help
http://www.debianhelp.org/
Articles and news from Debian users. This site is dedicated to helping others use and understand Debian and is aimed at the typical user.

UserMode
http://www.usermode.org/
Here you'll find a wealth of information on *nix installations as well as a selection of Free Unix software.

Linux/Unix tutorial site
http://www.ctssn.com/
Practical tutorials designed to speed up the Linux user and ease newbies into the world of the *nixes.

Dot Gnu
http://dotgnu.org/
Dot Gnu is an Open Source project focused on developing programs that can communicate with each other via the Net, with the aim of creating a competitor to Microsoft's .NET framework.

One page Linux manual
http://homepage.powerup.com.au/~squadron/
If you're after a one page summary of all the most common Linux commands then this (and perhaps a printer) is all you need.




Desktopia

COMMAND LINE

It's not only graphical user interfaces that can be themed. Patricia Jung embarks on an adventure into the world of themed shells

The Linux command line may be powerful, but for anyone who is striving for more than mere efficiency on the GUI desktop, the words "console" and "shell" are often synonymous with "unimaginative" and "boring". Some X terminal programs, such as Eterm, do offer options for beautifying the command line window, and xtermset or the ls option --color provide a solitary splash of colour amidst the black and white allsorts. All the same, these customisations pale into insignificance in comparison to the theme manager of a desktop environment.

Themes for the console

Desktopia
Only you can decide how your Linux desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys.

It is possible to add a little light relief, even to the console, in the form of the bashish theme engine. This offers the option of stoking your nostalgic fires by simulating the command line appearance of OS/2, the Amiga Workbench or VMS (Figure 1).

Figure 1: Changing from Technicolor to the VMS theme

If a non-root user unpacks the bashish-DR7.8.tar.gz archive via tar -xzvf, changes to the newly-created directory, bashish-DR7.8, types ./InstallBashish and presses Enter, almost everything required is copied to ~/.bashish. The main bashish script itself still needs to be copied to a location in the search path, ideally ~/bin, a directory which usually has to be added to the variable PATH.

Figure 2: Root is greeted in accordance with its standing, if you veer over to the Holy theme

If, on the other hand, a root user calls up the installation script, bashish ends up by default in /usr/local/bin, and the help files in /usr/local/share/bashish. So long as root does not make unprivileged users log straight into bashish via /etc/passwd (Listing 1), the bashish tool has to be invoked by hand. Under X there then opens a new terminal window (Figure 3), and on the console the current shell is "replaced". A new command, changetheme, also becomes available, which enables the appearance of the shell and other details to be changed. For all theme files located under ~/.bashish/themes or /usr/local/share/bashish/themes, all you need specify as an argument is the name, leaving out any file name ending .bt (Figure 1 lists the operating system themes from the directory themes/os, as an example). Should themes installed from elsewhere (such as from the archive themes-base7.tar.gz) be used, the respective path specification will be needed. When testing out the various themes it's quite likely that you'll stumble across an error or two. To make up for this, the README that accompanies the tool explains, amongst other things, how to compile your own scenarios.
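A theme switch is then a one-liner – for example, to try the VMS look from Figure 1 (the exact theme name depends on which theme files are installed on your system):

changetheme vms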

Shell in adventureland

Listing 1: bashish and the Adventure shell are also possible as login shells
trish:x:1000:1000:Patricia Jung,,,:/home/trish:/bin/bash
test:x:1001:100::/home/test:/usr/local/bin/ash
pjung:x:1002:1000::/home/pjung:/usr/local/bin/bashish


Anyone disappointed by the fact that bashish doesn't seriously alter the user philosophy of the command line needs harder drugs. As a matter of fact, ten years ago and more, little programs such as the ever-nagging Marvin shell were popular gimmicks distributed via Usenet newsgroups. Many of these splendid examples are now missing, presumed dead, but nevertheless the Adventure shell from the popular "Text Adventure" branch has survived in two versions.



Figure 3: bashish in Eterm

Unfortunately, the C implementation, advsh.tar.Z, cannot be compiled under Linux without major adjustments, and the shell script variant, advshell.shar.Z (unpacked with gunzip and unshar), also demands a few corrections. This is why on the CD you'll find a tar archive, advshell_LU042002.tar.gz, which firstly corrects a bug and secondly adapts some path specifications in the Makefile and in the code of the Adventure shell, ash.sh, to common Linux defaults. After unpacking with tar -xzvf advshell_LU042002.tar.gz and changing to the newly-created advshell directory, a make, entered as root, should serve for installation in the /usr/local branch of the file tree. Provided /usr/local/bin lies in the search path, all you need now is a simple ash on the command line to find a discarded, empty rucksack (line 1, Listing 2), in which files will next be transported (lines 9-22).

If you reply to the question about help with yes, you will receive a small introduction; otherwise the shell immediately issues location information (in line 3 we find ourselves in our own home directory), shows possible ways out (exits, thus directories) and also a passage overhead (line 7), and names the objects lying around (alias files). The Adventure shell will list all the available commands via help. For example, if you want to go through the passage into the room above, enter up – this takes you into the parent directory. Files can be picked up with get (line 9) and put down elsewhere with drop (line 21). Unix Monster alias commands can steal your treasures (line 23): the file created when this happens, size, contains the output of the command wc -c form1.ui and can be read with open.

Even with these commands in your repertoire, you still need to be a bit careful: for example, anyone who bombards the printer daemon with an object (as in line 28) should not be surprised if the daemon simply takes the object away. In line 33 the room contains precisely one object fewer. Feeding a Unix Monster (feed file to command) is also not without hazard – at this point one should select only those commands that read from the standard input. Editors, GUI programs and other monsters, which come with their own user interface, can be brought to life with the wake command: in line 37, for example, vi awakes to edit the file text. KDE, GNOME and other X programs, however, impose the prerequisite that one has access rights from the Adventure shell to the respective X server.

Virtual desktop: Most window managers offer several “screens”, which can be filled with windows or applications. You can switch between these without having to close an application, but you can only see those applications that were started on the current desktop.

Listing 2: Adventure in the shell
1 You find a discarded empty rucksack.
2 Welcome to the Adventure shell! Do you need instructions?no
3 You are in your own home. This room contains:
4 Telephone.tif advshell_LU042002.tar.gz and cc_3.png
5 There are exits labelled:
6 applications article example
7 as well as a passage overhead.
8 There are shadowy figures in the corner.
9 -> get cc_3.png
10 cc_3.png: taken
11 -> inventory
12 Your rucksack contains:
13 cc_3.png
14 -> go example
15 You squeeze through the passage.
16 You have entered /home/trish/example. This room contains:
17 Makefile form1.ui form1.ui.h computerhex.db computerhex.pro
18 There are exits labelled:
19 images
20 as well as a passage overhead.
21 -> drop cc_3.png
22 cc_3.png: dropped.
23 -> steal size from wc -c form1.ui
24 The wc monster drops the size.
25 -> open size
26 Opening the size reveals:
27 3756 form1.ui
28 -> throw computerhex.pro at daemon
29 The daemon catches the computerhex.pro, turns it into paper,
30 and leaves it in the basket.
31 -> look
32 The room contains:
33 Makefile cc_3.png form1.ui form1.ui.h size computerhex.db
34 There are exits plainly labelled.
35 images
36 ... and a passage overhead.
37 -> wake vi text
38 You awaken the vi monster:
39 The monster slithers back into the darkness.
40 -> exit
41 Do you really want to quit now?yes
42 See you later!


Info
bashish homepage: http://bashish.sourceforge.net/
Adventure shell homepage: http://www.ifarchive.org/indexes/if-archiveXshells.html




Out of the box

IN THE PIPELINE

The program less, as an easy-to-use substitute for more, will already be familiar to most people. As Christian Perle explains, once equipped with a script, less can display more than just mere text files

Out of the box
There are thousands of tools and utilities for Linux. "Out of the box" takes the pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.

Depending on which Linux distribution you are using, you will already have noticed that in some configurations the pager less can show more than just pure text files – manpages are automatically formatted or archive contents listed. But what mechanism lies behind this? Is there a normal less and a super less? No, the features used here are all part of the normal range of functions of less, and they can also be expanded into the bargain. The shell script lesspipe, by Wolfgang Friebel, takes advantage of the fact that when less starts, it checks the environment variable LESSOPEN. If this is not empty, its content is interpreted as a program name and less leaves it to this tool to open the file to be displayed. Its output is then displayed by less. A LESSOPEN program such as lesspipe thus acts as an input pre-processor.
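To see the mechanism in isolation, a tiny input pre-processor of your own is enough. The following is a minimal sketch (the script name and location are our invention, not part of the lesspipe package) that decompresses gzip files and stays silent for everything else – an empty output tells less to display the file unchanged:

#!/bin/sh
# /usr/local/bin/minipipe.sh - a toy LESSOPEN pre-processor
case "$1" in
  *.gz) gzip -dc "$1" 2>/dev/null ;;   # write the decompressed text to stdout
esac

After an export LESSOPEN="|/usr/local/bin/minipipe.sh %s", a less archive.gz would then show the uncompressed content.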

Piping in
It's not hard to install lesspipe. First you need to obtain the archive lesspipe.tar.gz from http://www.desy.de/zeuthen/~friebel/unix/lesspipe.html. After unpacking, the configuration script (which requires a Perl interpreter) finds and collects a range of auxiliary programs and adapts lesspipe to them. If an installed tool is not found, you can help the script on its way by specifying the full path name in response to its question "Include code anyway?". The following installation steps are necessary:

Pager: A program for displaying a file page by page. Common pagers include more and less.
Manpage: Linux, like all Unix systems, has a kind of online reference manual for the installed programs. This utility is invoked by "man programname", for example "man less".
Shell script: A file with shell commands, which are executed automatically. Work steps that recur frequently can be automated well by shell scripts.
Environment variable: A variable set in a shell, which can be read by all programs started from this shell.


Figure 1

tar xzf lesspipe.tar.gz
cd lesspipe-1.32
perl configure
su (enter root password)
make install ; exit

After this procedure you need only tell less that it should use the script just installed in the /usr/local/bin directory. To do this, enter the following line in the file .bashrc in your home directory:

export LESSOPEN="|/usr/local/bin/lesspipe.sh %s"

If, instead of the bash, you are using the (t)csh as login shell, you should instead insert the line

setenv LESSOPEN "|/usr/local/bin/lesspipe.sh %s"

in the file .cshrc. On next login the expanded less functions will then be at your service.



Show me...
For a simple test you can use any LaTeX short introduction or another dvi file which may be installed on your system (and which you can find with locate l2kurz.dvi):

less l2kurz.dvi

By the way, the lesspipe script does not recognise file types from their endings, but with the command file, which reads a piece of the content of the file and compares it with the entries in its type database. In Table 1 you will find the file types which lesspipe supports, provided the appropriate utility programs are installed. Many of these tools, such as the compressors gzip and bzip2, are part of the standard installation in most distributions; others (such as antiword) often have to be installed later. Non-Debian users will also have to do without the option of viewing Debian archives, because the corresponding tools do not exist for rpm-based distributions.

Nested
Compared to simpler input filters for less, lesspipe has one especially useful property: it can be used nested. If, for example, you have a gzip-compressed tar archive to hand, you can not only see the list of the files contained therein, but also the content of these files – again interpreted by the input filter. In Figure 1 less displays the file linf/package.list from the archive linf-box.tgz. The command for this looked as follows:

less linf-box.tgz:linf/package.list

The colon serves here as a separator between the name of the archive file and that of the file contained therein. The nesting can go even further. Source rpm archives contain, in addition to the package information, the original .tar.gz archive with the source text of the respective program. If you want to look at the file gnomo-0.1/README.html from the archive gnomo-0.1.tar.gz, contained in gnomo-0.1.src.rpm, enter:

less gnomo-0.1.src.rpm:gnomo-0.1.tar.gz:gnomo-0.1/README.html

Because the file type is HTML the file is filtered with the text browser lynx. If you want to cut out this filter step, when you call up less simply place another colon after the file name.

ASCII art
With a bit of know-how about shell programming lesspipe can also be expanded – for example to display graphics with the aid of the graphics filters from netpbm. The expansion can be found on the cover CD in the form of a patch. In order to apply this patch to the script, install netpbm if necessary, change again to the directory lesspipe-1.32 and enter there:

patch < path_to_your_cd/ootb/lesspipeasciiart.diff
perl configure
su (enter root password)
make install ; exit

Table 1: File types supported (Type – Utility program)
.gz, .z – gzip
.bz2 – bzip2
.zip – unzip
.tar – tar
Manpage – groff
.a – ar
Dynamic library – nm
Executable program – strings
.rpm, .spm – rpm, rpm2cpio, cpio
.deb – dpkg, dpkg-deb
.doc – antiword
.html – lynx
.pdf – pdftotext
.rtf – unrtf
.dvi – dvi2tty
.ps – ps2ascii, gs

The patch adds four lines to the file lesspipe.sh.in, which serves as the model for the configuration script. Since less can only display text, the netpbm filters convert image files into ASCII graphics. In Figures 2 and 3 you can see an example of the graphics capabilities of less with this lesspipe expansion. It extends Table 1 with all the graphics formats supported by netpbm. Naturally only icons and other simple graphics are recognisable with this method. Additional filter mechanisms are waiting to be built in – maybe a task for rainy summer days?

Full path name: The complete directory path to a file. This unequivocally indicates the position of a file in the entire filesystem. The command top has, for example, the full path name /usr/bin/top.
Home directory: This directory is where users will find themselves after logging in. This is also where their personal settings are stored.
rpm: With the Red Hat Package Manager (used by Red Hat, Mandrake, SuSE and other distributions) software packages can be cleanly installed and uninstalled. The associated package format is also called rpm.
netpbm: A collection of graphics filters, which process various image formats or convert them into different ones. Unlike the Gimp, netpbm can cope without a graphical user interface and is therefore ideal for scripts.
Patch: A file containing amendments to one or more text files. It is created with the command diff and played in with the command patch.

Figure 2
Figure 3



COMMUNITY

The monthly BSD column

FREE WORLD

Version 3.1 of OpenBSD has been released, much to the excitement of its fans and devotees here and around the world. Richard Ibbotson takes a closer look to find out why so many think of it as an operating system of beauty and reliability

The new release of OpenBSD has many more new features and updates than we can mention here. In addition, the first time user might like to know that there are many features of the OpenBSD operating system that were originally only available in OpenBSD; later on they were ported over to GNU/Linux and other BSD systems. The OpenBSD developers have a reputation for being there first and getting it right. OpenSSH is a fine example of this, and the updated version is packaged into the OpenBSD 3.1 software.

What's new?
A complete list of OpenBSD 3.1's new features can be found on the OpenBSD Web pages, but here's a brief summary of what it has to offer:

● OpenSSH, which supports both the SSH1 and SSH2 protocols, is now at version 3.2.
● Secure file transfers are encouraged using the greatly enhanced SFTP subsystem, which comes with both an SFTP server and client.
● Improvements to the documentation, the manpages and the Web FAQ are made with every release.
● A larger part of the Web site is now to be seen in several languages.
● There are over 1,000 pre-built and tested packages – when someone tells you that an OpenBSD package has been tested you had better believe it.
● There is greatly improved hardware support in the OpenBSD/sparc64 port, and the addition of X11 support. There have also been many performance improvements to the OpenBSD/macppc port, as well as accelerated X11 servers for some models.
● A lot of enhancements have been made to the new packet filter, pf(4), including performance improvements, as well as the ability to filter protocols other than the usual TCP, UDP and ICMP, such as ESP. There also exists a utility, authpf, to achieve per-user pf rule changes – typically intended for gateways.
● Wavelan bridging is now possible on Prism-II based cards.

The software also includes the following major components from outside vendors:

● XFree86 4.2.0 (and i386 contains 3.3.X servers also, thus providing support for all chipsets)
● gcc 2.95.3 (plus patches)
● perl 5.6.1 (plus patches)
● Apache 1.3.24, mod_ssl 2.8.8, OpenSSL 0.9.6b (plus patches), DSO support
● groff 1.15
● sendmail 8.12.2
● lynx 2.8.2rel.1 with HTTPS support added
● sudo 1.6.5p2
● ncurses 5.2
● Latest KAME IPv6
● KTH Kerberos 1.0.8
● Heimdal 0.4e (plus patches)
● OpenSSH 3.2

If you are planning on dual booting OpenBSD with another OS, you will need to read the included INSTALL.i386 document if you are using i386 hardware. You can also install OpenBSD onto:

● alpha – DEC Alpha-based machines.
● amiga – Amiga m68k-based models (MMU required).
● hp300 – Hewlett-Packard HP300/HP400 machines.
● i386 – Intel-based PCs.
● mac68k – Most MC680x0-based Apple Macintosh models.
● mvme68k – Motorola MVME147/16x/17x 68K VME cards.
● macppc – Support for Apple-based PowerPC systems.
● sparc – SPARC Platform by Sun Microsystems.


● sun3 – Sun’s 68020-based Sun3 models. ● vax – DEC’s VAX computers. Installing OpenBSD can be achieved from an internal network or across the Internet by FTP. The OpenBSD developers like to boast that there hasn’t been a security hole in a remote install for four years. The more normal approach is from the OpenBSD CDROMS, which are easy to get hold of from the OpenBSD Web site. If you’re using i386 hardware, you should set your BIOS to boot from CD-ROM and then boot from the first CD. The rather nice artwork that comes with the CDs also contains some useful instructions on how to install the software. You should read this carefully before you start the installation and whilst you are configuring the partitions. You can also download the recently upgraded FAQ from the OpenBSD site. This can provide a great deal of help if you run it on another monitor whilst you are installing your OpenBSD 3.1 system. During the installation process you will be asked about your network cards and your ISP’s name servers. When it comes to installing the sets (compressed software archives) you will also be asked whether or not you want to install Xwindows. This means that you can have an up-todate KDE or GNOME desktop, if you like, or any other desktop that you might find in GNU/Linux such as XFCE or Windowmaker. After the installation it’s best to have a look at the afterboot manual for first time user information. To do that at the command line type in man afterboot and a large selection of useful information will be displayed on screen. Things to notice here are the references to other manpages and also man whereis, man whatis, man mount and man mount_desc, if you haven’t mounted CDs or floppies in BSD systems before. Have a look at the bottom of all of the manpages for further references to helpful info. Just after you have rebooted your new OpenBSD computer you might want to mount a floppy disk or a CD in the CD-ROM drive. This can be done via: mount –t msdos /dev/fd0a /mnt/floppy mount –t cd9660 –r /dev/cd0a /mnt/cdrom To have a look at the boot messages, type dmesg, and if you can’t read anything then do dmesg || more which will allow you to use the space bar to scroll down the dmesg one console space at a time. Part of the dmesg that was created by the software that was installed for the test can be seen below. At this point you will probably want to get some help from somewhere. Have a look at the OpenBSD site and look for the mailing lists. Choose the one that looks to be the most useful and ask questions on that list. Kernel compiling may be required for some

Info
OpenBSD: http://www.openbsd.org
CD-ROMs or T-shirts: http://www.openbsd.org/orders.html
CD-ROMs with minimal postage: http://www.kd85.com/
Supported hardware: http://www.openbsd.org/plat.html
How to install: http://www.openbsd.org/faq/faq4.html and ftp://ftp.openbsd.org/pub/OpenBSD/3.0/i386/INSTALL.i386
Documentation: http://www.openbsd.org/docum.html and http://www.openbsd.org/cgi-bin/man.cgi
Security issues: http://www.openbsd.org/security.html and http://www.openbsd.org/crypto.html
Firewalls: http://www.obfuscation.org/ipf/ipfhowto.txt
Goals of the project: http://www.openbsd.org/goals.html
Professional support: http://www.openbsd.org/support.html

Kernel compiling may be required for some configurations. This is fairly easy to do, and you can find plenty of information in the manpages and on the OpenBSD site. If you do want to use OpenBSD as your desktop of choice, you can configure the X Window System with xf86config or xf86cfg. This is a lot like configuring a Debian GNU/Linux system – if you can do that, you shouldn’t have any problems with OpenBSD as your desktop. Before you do anything, have a look at /usr/X11R6/README to make sure that you understand what it is that you have to do.
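A typical first attempt might look something like the following (tool names as shipped with XFree86 4.2.0 – the exact questions the tools ask will vary with your hardware):

  xf86config      # text-mode question-and-answer configuration
  xf86cfg         # or the graphical equivalent
  startx          # test the result

Both tools normally write their results to /etc/X11/XF86Config, which you can also edit by hand afterwards.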


Conclusion
To sum up: OpenBSD is the software to use if you need a secure firewall for a Net-facing connection, or if you just want internal security between your company’s different departments with an internal firewall. You can also use OpenBSD for many other things. We hope that this short introduction to the new OpenBSD 3.1 release has been of some help to you, and that you will want to install and use the software as a result of reading it.

The author
Richard is the Chairman of Sheffield Linux User’s Group. You can view its Web site at http://www.sheflug.co.uk/


The monthly GNU column

BRAVE GNU WORLD
Welcome to a new issue of Brave GNU World. As promised at the end of the last issue, this month Georg CF Greve would like to introduce a project that has made his life much easier

SpamAssassin
SpamAssassin, by Justin Mason, allows the “assassination” of spam in your incoming email – or at least it marks the spam, allowing Procmail or the mail reader to handle it in the least annoying way for the user. The heart of SpamAssassin is a Perl program distributed under the same dual GNU General Public License/Artistic License as Perl itself. This made it possible to distribute SpamAssassin through the “Comprehensive Perl Archive Network” (CPAN) and to reuse code from it without any legal problems. Licensing issues were a crucial part of the beginning of this project, by the way.

Configuring SpamAssassin is just a text editor away


Before he wrote SpamAssassin, Justin used another mail filter written in Perl, which became a problem because of its static rules and unclear licensing situation. From this project Justin adopted the idea of working with scores, a concept very similar to the “Adaptive Scoring” employed by the GNUS news- and mailreader.

SpamAssassin works by applying many different tests to the email it parses. There are tests for HTML-only mail, whether a mail contains often-used spam phrases, whether a mail claims not to be spam according to certain laws and regulations, whether it contains an unusual number of exclamation or question marks, talks about “Millions of Dollars”, and so on. For every test that is triggered, an email collects points; how many points each test scores can be specified by the user in a rather simple ASCII configuration file. If the sum of all scores passes a certain – also user-definable – threshold, SpamAssassin judges that the mail is probably spam.

Based on this decision, SpamAssassin inserts header flags informing the user about the test results. If the user so wishes, suspected spam is also forced to have Content-Type “text/plain”, which makes it much easier to check the results later. SpamAssassin can also insert a more detailed test report at the beginning of the mail, so a user can easily see why a mail has been rated as spam.
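To give a flavour of that configuration file – rule names and default scores differ between releases, so treat these values as illustrative rather than canonical – a user’s ~/.spamassassin/user_prefs might contain something like:

  # mail scoring 5 points or more is treated as spam
  required_hits 5
  # re-weight an individual test
  score FROM_ENDS_IN_NUMS 1.0
  # tag the subject lines of suspected spam
  rewrite_subject 1
  subject_tag *****SPAM*****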



The biggest potential risk in using SpamAssassin is clearly “false positive” results – regular, normal email falsely classified as spam. It is therefore recommended that you regularly take a look at the spam folder, which is where all detected spam should normally go, in order to rescue any false positives. You can also choose to lower the sensitivity of SpamAssassin, although this will increase the amount of undetected spam; finding the proper balance is the tricky part for the SpamAssassin administrator.

To prevent spammers from finding ways to bypass its tests, the project incorporates as many different tests as possible and is also easily extensible. Of course it also supports the online blacklists: standard DNS blacklists referencing known sources and relays of spam are supported, as is Vipul’s Razor, a database allowing identification of known spam. In order to allow easy filtering of large amounts of mail, and connections to as many mail sources as possible, Craig Hughes wrote the spamd daemon, which comes with the SpamAssassin package.

The biggest weakness of SpamAssassin is that it is more or less targeted at the technically experienced user and does not (yet) have a graphical user interface. Fixing this, as well as writing more tests and creating more bindings to mail sources, is the focal point of further development. Currently available are bindings to Procmail, Qmail, Postfix, Sendmail (through the Milter library) and a Mail::Audit plug-in.

I hope to be excused for mentioning that the sendmail-milter plug-in was written by myself, after an unsuccessful search for existing solutions, so that I could use SpamAssassin to filter all incoming mail. Lack of time forbids me from maintaining the project properly, however, so Michael Brown, whose company offers it in a commercial environment, has taken over as maintainer in the best Free Software tradition. This is a nice example of how Free Software can harmonise the classic “scratch your own itch” approach with the commercial interests of a company, for the benefit of all users.

Facing an increasing flood of spam that threatens to bury the Internet beneath it, I have to admit I hold great sympathy for projects like SpamAssassin, which gets rid of about 60 spam emails a day for me.
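For those who deliver mail with Procmail, the binding mentioned above amounts to a couple of recipes in ~/.procmailrc. A minimal sketch – assuming the spamassassin script is on your PATH and that you want suspect mail filed in a folder called caughtspam (older releases need the -P switch to write the marked-up message to stdout; newer ones behave this way by default):

  :0fw
  | spamassassin -P

  :0:
  * ^X-Spam-Status: Yes
  caughtspam

The first recipe passes each message through SpamAssassin so the header flags get added; the second files anything flagged as spam.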

Voxximate
Regular readers of Brave GNU World should by now be pretty familiar with many of the arguments for Free Software in the scientific field. Ultimately, Free Software is the only sensible long-term choice for all kinds of scientific work, because only Free Software can offer the guarantee that it will remain useful for future projects and can be included alongside scientific results – i.e. published as part of one’s work.

Vortex in action

Voxximate by Andreas Neumann is one such scientific Free Software project, released under the GNU General Public License. Voxximate stands for “Vortex flow simulation made at home”, and it is a program for simulating currents and vortices in fluids. Starting from predefined initial positions, the program calculates, in discrete time steps, the influence of all vortices on all other vortices, tracking their development through time. The project is probably most useful for students facing fluid dynamics at some point in their studies, who are interested in studying how vortices interact and build structures.

Voxximate was written in Java, which brings the usual Java problems, but this should not keep anyone from supporting further development. For the next steps, Andreas hopes to include a graphical editor for defining starting positions, and the ability to save graphics and animations that can then be published on the Web.
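As a rough illustration of the idea – not Voxximate’s code, which is Java, but the textbook 2D point-vortex model with a simple explicit Euler time step – the algorithm fits in a few lines of Python:

  from math import pi

  # each vortex: x, y position and circulation (strength) gamma
  vortices = [[-1.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, -0.5]]

  def step(vs, dt):
      # velocity induced on each vortex by all the others
      vel = []
      for j, (xj, yj, gj) in enumerate(vs):
          u = v = 0.0
          for i, (xi, yi, gi) in enumerate(vs):
              if i == j:
                  continue
              dx, dy = xj - xi, yj - yi
              r2 = dx * dx + dy * dy
              u += -gi * dy / (2 * pi * r2)   # induced velocity is
              v += gi * dx / (2 * pi * r2)    # perpendicular to (dx, dy)
          vel.append((u, v))
      for vortex, (u, v) in zip(vs, vel):     # then advect every vortex
          vortex[0] += u * dt
          vortex[1] += v * dt

  for n in range(1000):   # track the development through time
      step(vortices, dt=0.01)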

Monica
Monica is a monitor calibration program by Tilo Riemer. It was written in C++ and uses the Fast Light Toolkit (FLTK) and the xgamma program from XFree86. If a monitor’s gamma correction is set wrongly, it can become impossible to distinguish between colours that lie close together, or the colouring can simply look unsatisfactory. If the computer is used for graphical work, this is particularly problematic.
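Under the hood, this kind of calibration comes down to calls to xgamma, which you can also try by hand (the values here are arbitrary examples):

  xgamma                                        # report the current settings
  xgamma -gamma 1.2                             # set red, green and blue together
  xgamma -rgamma 1.1 -ggamma 1.0 -bgamma 0.9    # or per channel

Monica essentially provides an interactive front-end to this mechanism, with test images against which to judge the result.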


Initially, Tilo Riemer tried to use the related project KGamma, but failed to compile it because several KDE libraries were missing; KGamma also seemed to be so deeply embedded in KDE that it needs large parts of KDE to work. So in January 2002 he began writing Monica, which has the advantage of being very small and fast. This enabled the inclusion of an “on-the-fly” mode, making dynamic feedback possible; on a 900MHz computer this needs about 10-20 per cent of the CPU time. Further strengths of Monica are the absence of dependencies beyond FLTK, and the policy of saving changes in the user’s .xinitrc to make them independent of the window manager or desktop. The recent release of version 1.0 indicates that Tilo does not plan to invest a lot more time in Monica, although he would welcome efforts to internationalise it.

Originally, Tilo planned to release Monica as “Public Domain”, since it seemed too small and insignificant to warrant thinking about licenses. Into the source code he wrote “Copyright © Tilo Riemer”, though, without thinking further about it. The notion of Public Domain isn’t entirely unproblematic in continental Europe, however. In Germany, the standard legal interpretation of the term is “free of authorship/copyright claims”, which usually means either that the author is unknown or that they have been dead for more than 70 years. Clearly, neither case applied. Tilo therefore decided to publish Monica under a BSD-like license, solving the immediate problems and making Monica Free Software.

This scenario is not uncommon, and demonstrates that developers obviously don’t like thinking about licenses very much, even though it is very easy to create legal uncertainty by not doing so. I will therefore try to give an understandable introduction to the legal maintainability of Free Software.

Calibration made easy



Legal maintainability of Free Software
Most people are aware that a large part of all software requires permanent technical maintenance, or it will quickly lose its usefulness. Often only software that is permanently maintained can be employed over longer periods of time. This seemingly purely technical procedure depends on access to the source code and on the right – i.e. the freedom – to perform the maintenance.

Generally speaking, defining the rights and obligations of every member of society is what the political and legal system does. Whether the legal system is working well or not isn’t the important point here; what one should realise is that some of the prerequisites for technical maintainability are legal in nature. Particularly in a commercial environment, the guarantee of permanent and lasting maintainability is one of the seminal advantages of Free Software, and this advantage depends strongly on the legal maintainability of Free Software.

The freedoms, rights and obligations of Free Software are granted and sometimes protected through licenses, which are “anchored” to the software through the copyright of the author. Free Software does not strictly depend on copyright law to work, but since copyright law exists, we need to deal with it.

What does legal maintainability mean in this context?
Even if this is not how most people intuitively perceive it, the legal system is not static; it is ever-changing. Changes to copyright law could, as was recently the case in Germany, potentially weaken or even outlaw Free Software. In this specific case, the ifrOSS and the FSF Europe were able to suggest a change to the proposed copyright law revision, introducing an exception for Free Software. This change made it into the law in the original form suggested by the ifrOSS and became part of the law passed on 25 January 2002, which will be enacted soon.

One of the tasks of the FSF Europe is to keep looking for such developments and to influence them in a way that is positive for Free Software. Without cooperation with organisations like the ifrOSS, whose work is entirely legal in nature, doing this would be much harder, which is why the FSF Europe works on establishing and strengthening such cooperation throughout Europe.

It would also have been possible that changes in other legal parameters would have required an adaptation of the licenses.



Legal changes or new technical concepts, like “Application Service Providing” (ASP), could possibly bypass the protection of freedom in some areas, or even render it ineffective, effectively violating the spirit of the licenses.

Most developers publish their software under the GNU General Public License, conscious that by doing so they have secured and protected the freedom of their software. Indeed, the most important step towards securing Free Software has then already been taken, and by employing the “or any later version” clause, the FSF is also partially empowered to globally protect, defend and maintain the licensing under the (L)GPL.

Sometimes it may become important to explicitly change a license, however. Projects that have not established central maintenance of their legal rights can get into serious trouble in such a case, especially when the “or any later version” clause of the GPL has been removed. In that case all developers – assuming they can all be found – have to agree to the change. Given the rather wide spectrum of interests and opinions of the developers working on some projects, this does not seem very likely.

Additionally, in most cases only the holder of the so-called “exclusive exploitation rights” – i.e. the “Copyright” – is legally entitled to enforce the license in court. So projects can run into serious difficulties when trying to represent their interests in court. Given that many authors work on a project, they effectively have to team up and act together in order to protect their individual interests. This requires a lot of coordination, time and effort, and not all authors are willing or able to see a potentially protracted legal struggle through to the end.

It would be good if more projects became aware of these relationships and took adequate precautions. By appointing a trustee, authors can also get back to improving the software itself. For the future it seems likely that projects with clear and orderly legal circumstances will have an advantage in gaining popularity, since users will quite likely pay more attention to this.

In order to secure the legal maintainability of Free Software – especially inside the core area of the GNU Project, but not limited to it – the Free Software Foundation started early on to work with so-called “Copyright Assignments”, which empower it to defend the rights of Free Software (even in court, if need be) and to adapt the licensing to changing circumstances. Since continental-European authorship law has a different basis from Anglo-American copyright, the FSF Europe has also been working together with Axel Metzger, Carsten Schulz and Till Jaeger of the ifrOSS on a “Fiduciary Licence

Agreement”, which allows the FSF Europe to act as the fiduciary of the authors. The author retains an unlimited number of “single exploitation rights”, which can be used to dual-license the software under other (potentially even proprietary) licenses. At the same time, the FSF Europe guarantees to use the transferred rights only in the interest of Free Software, and to publish the software only under a Free license – otherwise all rights fall back to the author. This agreement is currently in the final internal review phase and will be introduced to the public in the not too distant future.

As the president of the FSF Europe, I consider the Free Software Foundation to be best suited for this task, as it will continue to meet these challenges with the reliability the FSF has been known for over many years. Its people not only possess the greatest knowledge of, and experience with, the GNU General Public License and Lesser GPL; they can act worldwide and have a justified reputation for being able to defend the interests of Free Software – with legal means too, if need be.

Enough for today
That should be enough for today. I hope the last part has succeeded in creating some more awareness of the background to the tasks and work of the Free Software Foundation. As usual, I’d like to ask for loads of email containing ideas, comments, questions and new projects.

Info
Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
Homepage of the GNU Project: http://www.gnu.org/
Homepage of Georg’s Brave GNU World: http://brave-gnu-world.org
“We run GNU” initiative: http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
SpamAssassin homepage: http://spamassassin.org
Comprehensive Perl Archive Network (CPAN): http://www.cpan.org
GNUS homepage: http://www.gnus.org
Vipul’s Razor homepage: http://razor.sourceforge.net
Sendmail-Milter plug-in homepage: http://savannah.gnu.org/projects/spamass-milt/
Voxximate homepage: http://voxximate.sourceforge.net
Monica download: http://lincvs.sunsite.dk/index.php?order=download,Monica&lan=en
Fast Light Toolkit (FLTK) homepage: http://www.fltk.org
XFree86 homepage: http://www.xfree86.org/
KGamma homepage: http://www.vonostheim.de/kgamma/index.html
ifrOSS homepage: http://www.ifross.de
FSF Europe homepage: http://fsfeurope.org




Bristol: 5-7 July 2002

UKUUG LINUX Developers’ CONFERENCE
There’s something for everyone at this year’s Linux Developers’ Conference, where experts and newbies alike get to mingle with the leading lights of Linux development

Once again, a wide cross-section of the Linux development community will gather at the start of July for the UKUUG’s fifth summer technical conference. Thanks to generous sponsorship from IBM and AMD, the conference and tutorial registration fees are again low. The conference moves around the UK from year to year, and in 2002 it is the turn of Bristol, the home of Brunel’s magnificent suspension bridge. Speakers will travel from nine countries around the globe to present their work, forming the event’s largest programme to date.

The conference begins on Thursday 4 July with tutorials on shared libraries, given by Ulrich Drepper, the glibc maintainer, and on the Linux Terminal Server Project, given by the project’s founder, Jim McQuillan. After a Linux printing workshop (CUPS/KDEPrint) on the Friday morning, the conference proper begins at lunchtime and runs through to Sunday lunchtime. The delegate fee includes all presentations, tea/coffee breaks and a sandwich lunch on the Saturday. There will be informal discussions on all three days. Please check the Web site at http://www.ukuug.org/events/linux2002/ for confirmation and bookings.

Programme highlights

Speaker – Talk
David Axmark – MySQL
Marcus Brinkmann – The Hurd
Stephen Coast – Lego programming
Ulrich Drepper – glibc 2.3
Phil Hazel – Exim 4
Christoph Hellwig – Linux-ABI
Luke Leighton – FreeDCE
Gerv Markham – Bugzilla
Michael Meeks – GNOME 2.0
Simon Myers – RT
Mark Probst – Dynamic Binary Translation
Mark Probst – MathMap
Stephan Richter – Zope 3.0
Alistair Riddoch – WorldForge
Julian Seward – Valgrind
Sander Striker – Subversion
David Sugar – DotGNU
David Sugar – Free Telephony
Bo Thorsen – Linux on AMD’s Hammer architecture
Marcelo Tosatti (2.4 maintainer) – The Linux Kernel
Wookey – Emdebsys

And also: Securing Linux Servers; Wireless Networking; Grid Computing; LTSP; PHP; Linux in Undergraduate Teaching; Reliability, Availability and Serviceability; and, if we’re lucky, a chance to examine IBM’s Linux Wristwatch and maybe even Sony’s Linux (For PlayStation 2)!

