COMMENT
General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
www.linux-magazine.co.uk
Subscriptions: subs@linux-magazine.co.uk
Email Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk
CD: cd@linux-magazine.co.uk
Editor
John Southern jsouthern@linux-magazine.co.uk
Assistant Editor
Colin Murphy cmurphy@linux-magazine.co.uk
Sub Editor
Gavin Burrell gburrell@linux-magazine.co.uk
Contributors
Alison Davies, Richard Ibbotson, Dean Wilson, Frank Booth, Robert Morris, Malcolm McSween, Cammas MacCormick, Steven Goodwin, Janet Roebuck, David Tansley, Bruce Richardson
International Editors
Harald Milz: hmilz@linux-magazin.de
Hans-Georg Esser: hgesser@linux-user.de
Ulrich Wolf: uwolf@linux-magazin.de
International Contributors
Björn Ganslandt, Georg Greve, Jo Moskalewski, Anja Wagner, Patricia Jung, Stefanie Teufel, Christian Perle, Andreas Jung, Boris Schauerte, Viola Bräuer, Tim Schurmann
Design
Advanced Design
Production
Rosie Schuster
Operations Manager
Debbie Whitham
Advertising
01625 855169
Carl Jackson, Sales Manager: cjackson@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt: Osmund@Ohm-Schmidt.de
Publishing
Publishing Director
Robin Wilkinson: rwilkinson@linux-magazine.co.uk
Subscriptions and back issues: 01625 850565
Annual subscription rate (12 issues): UK £44.91, Europe (inc Eire) £59.80, Rest of the World £77.00
Back issues (UK): £6.25
Distributors
COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE
R. Oldenbourg
Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England.

Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent – for example, letters, emails, faxes, photographs, articles and drawings – is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.

Disclaimer Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or of any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.

Technical Support Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.
Current issues
QUICK CHAT
I have spent the last week fighting, both with a cold and with my network. Testing out new distributions can be exciting and full of wonder, but when things go wrong just what do you do? With other operating systems, the chap next door could normally come around and say “well, if you reinstall it then it’ll fix the problem”, because his fourth cousin’s daughter’s boyfriend told him to do it once and it worked. Under Linux he normally takes a quick look, shudders because the cursor in your drawing program points the wrong way, and leaves. Not much help, apart from saving on the teabags. Hours of manual reading later and I’m no closer to a solution. The Web seems to be nothing but blind alleys and dead-ends this week. Email offers no quick solutions and I don’t feel well enough to attend the local user group. I want instant help from someone knowledgeable. A couple of emails later, a friend reminds me of IRC. Internet Relay Chat has a tarnished reputation for seedy pick-up rooms, but the computer section is alive, healthy and very robust. After a few clicks to join some rooms and a short listen to make sure it’s the correct topic, I’m in there with my questions. I was mildly chastised on occasion for not thinking before typing, and sometimes received no response, but on the whole the answers that came back were so quick, accurate and helpful that my network was up and running again in no time. With some of the major distributors sending out major releases this month, I can see IRC becoming a necessity. Happy Hacking
John Southern, Editor

We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We’re not simply reporting on the Linux and open source movement – we’re part of it.
LINUX NEWS

Web browsers on the move
Much is happening in the development and refinement of Web browsers for the Linux community. Galeon has recently hit version 1.2, Mozilla has reached 0.9.9 – just missing out on the code freeze for Mandrake 8.2, it seems – and Opera has released version 6.0 TP3. Variety is the spice of life, so they say. Mozilla and Galeon give you a prime example of Open Source software, while Opera offers all that you would expect from a commercial product.
Info Galeon: http://galeon.sourceforge.net Mozilla: http://www.mozilla.org Opera: http://www.opera.com
BlueOS – BeOS interface to Linux
BlueOS is a project to implement the BeOS API on top of a Linux kernel, in the hope of speeding up the compiling and porting of BeOS applications. BeOS has suffered in that very few applications are available for it, and this has held the OS back from appealing to a wider audience. Much work still needs to be done, so the usual call for developers to help with the project has gone out again. There is development activity, which is always good news, and it will hopefully spur those with some free time to go back and take another look at the project.
Info BlueOS development site: http://blueos.free.fr
The ever improving Galeon
SuSE develop support for AMD’s Hammer processor
AMD have announced that SuSE Linux AG has submitted enhancements to the official Linux kernel to support AMD’s x86-64 instruction set. As you know, the Linux kernel is the fundamental source code upon which all Linux operating systems are based. AMD’s next-generation processor, code-named “Hammer”, is supposedly designed to provide unprecedented levels of performance for both 32-bit and 64-bit software applications. Hammer processor-based systems will grant business and home users the benefit of long-term investment protection, as these systems are designed to enable a seamless transition from a 32-bit to a 64-bit environment. AMD expects to begin shipping the first version of the Hammer family of processors at the end of 2002. “With support for AMD’s future processors in the official Linux tree, Linux users everywhere will appreciate being able to run their native x86-64 applications and their existing 32-bit x86 applications,” said Linus Torvalds, creator of Linux.
A GNOME skin for BlueOS

Info
SuSE: http://www.suse.com
AMD: http://www.amd.com
Apple to license CUPS
Easy Software Products has announced that Apple Computer has licensed the Common UNIX Printing System (CUPS) for use with Apple operating systems and software. The standard CUPS distribution will be provided with Apple’s Open Source Darwin operating system, while an enhanced version of CUPS with Apple’s Aqua user interface will be provided with MacOS X. CUPS uses the Internet Printing Protocol (IPP) as the basis for managing print jobs and queues. The Line Printer Daemon (LPD), Server Message Block (SMB) and AppSocket (also known as JetDirect) protocols are also supported, with reduced functionality. CUPS adds network printer browsing and PostScript Printer Description (PPD) based printing options to support real-world printing under UNIX. CUPS has become the de facto standard for most modern boxed-set Linux distributions. With this news, printer manufacturers may be encouraged to start producing drivers for CUPS. We shall see.
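To give a feel for what CUPS offers applications, here is a minimal sketch using the pycups Python bindings (an assumption on our part – the bindings are packaged separately from CUPS itself, and the queue name “laser” is invented):

```python
import cups  # pycups bindings for the CUPS API (assumed installed)

conn = cups.Connection()  # talks IPP to the local CUPS scheduler

# CUPS already knows about the available queues, including any
# printers discovered by network browsing.
for name, attrs in conn.getPrinters().items():
    print("%s: %s" % (name, attrs.get("printer-info", "")))

# Submit a file to a queue. Job options map onto the PPD-based
# options CUPS exposes; "laser" is an invented queue name.
job_id = conn.printFile("laser", "report.ps", "Monthly report",
                        {"copies": "2"})
print("Queued as job %d" % job_id)
```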
Info Easy Software Products: http://www.easysw.com CUPS and CUPS support: http://www.cups.org
KDE 3 – not a bed of roses
The release of KDE 3 has been cast into a bad light, following accusations that the KDE 3 developers have mismanaged the project. The standards and practices which helped bring forth KDE 2 are said to be lacking from recent development, causing some consternation throughout the user community. None of this has been taken lying down by the KDE 3 developers, who are keen to point out the complexity of the development process. For more details, see the NewsForge article at: http://www.newsforge.com/article.pl?sid=02/03/09/224213
Linux on a PlayStation 2
Linux (for PlayStation 2)
Sony has finally announced a release date for its eagerly awaited Linux kit, which allows you to use your PlayStation 2 computer entertainment system as a fully functional desktop computer – or so the Sony FAQ says. The hope is that this will start shipping in Europe on 22nd May 2002. Sony has even set a price of 249 euros for those of us in (or at least near) Europe. The Linux Kit (for PlayStation 2) comes complete with an internal hard disk drive; plugging in a keyboard, mouse and computer monitor will allow you to install and run a wide variety of computer applications that have been written for the Linux operating system. In addition, the kit allows you to develop your own programs that operate on Linux (for PlayStation 2). The kit comes with a broadband network adaptor (Ethernet) (for PlayStation 2), which allows for connection to high-speed Internet services as well as home PC networks. There is one caveat that is causing some concern amongst all the excitement: to use Linux (for PlayStation 2) you will need a computer monitor that supports “sync on green”. A significant proportion of standard computer monitors do have this support, but not all, so to avoid disappointment you had better check first.
Info Linux PlayStation 2 community: http://playstation2-linux.com
GTK+ 2.0 released
GTK+ is the multi-platform toolkit used for creating graphical user interfaces. Offering a complete set of widgets, GTK+ is suitable for everything from small one-off projects to complete application suites. GTK+ is Free software and part of the GNU Project. However, the licensing terms for GTK+, the GNU LGPL, allow it to be used by all developers, including those developing proprietary software, without any licence fees or royalties. What’s more, they have now released version 2.0, which you can get at http://www.gtk.org.
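For a flavour of the toolkit, here is a minimal GTK+ 2 “hello world”, written against the PyGTK bindings (an assumption – the C API is equivalent, just more verbose):

```python
import pygtk
pygtk.require("2.0")  # make sure we get the GTK+ 2 bindings
import gtk

def on_click(button):
    print("Hello, GTK+ 2.0!")

window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.set_title("GTK+ 2 demo")
window.connect("destroy", lambda w: gtk.main_quit())

button = gtk.Button("Click me")  # one of the ready-made widgets
button.connect("clicked", on_click)
window.add(button)

window.show_all()
gtk.main()  # hand control to the GTK+ main loop
```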
Gnome 2.0 Beta 2 now out
Great improvements have been made in the latest release of the GNOME desktop, or so it’s been claimed by those in the know. These improvements include enhancements for anti-aliased text, to make your screen reading that much more comfortable, as well as many improvements and new features for disabled users. If you want to know whether the commentators are telling porkie pies, or whether GNOME 2.0 really is as good as they say, then slip this month’s coverdisc in your CD-ROM drive and find out for yourself.
Info
GNOME: http://www.gnome.org
Crisis in kernel patching
The guys at the UKUUG arranged a most entertaining discussion meeting, held in London at the end of February. Everyone went expecting a lecture from Eric Raymond, as he’s very good at starting discussions! During the meeting he expressed his concerns at the way kernel patching gets done. According to Eric Raymond, Linus Torvalds has “reached his stress limit”, acknowledging that no one person could deal with the vast number of kernel patches which come forward from the kernel maintainers. Eric Raymond claimed that patches, many of which would help with the development and progress of Linux, are being dropped without good reason. “Linus needs to get better at delegating,” said Eric Raymond. “Sometimes he can farm out jobs and then arbitrarily reverse the decisions made.”
Info UKUUG: http://www.ukuug.org/events Eric Raymond: http://www.tuxedo.org/~esr
Hardware donations required
The sad truth is that old hardware often gets consigned to the skip simply because it’s taking up too much space. It pains us to do it, but we live in a world where space is precious. The Stone SouperComputer is trying to lay its hands on as much old hardware as possible in its attempt to construct a Beowulf-class Linux cluster for an A-level computing research project. The pleading email we got at the office suggests that they will take anything, 386 or greater. They will take non-Intel machines as well.
Info The Stone SouperComputer – http://stonesoup.esd.ornl.gov
AOL to use Mozilla Gecko
Last month there was much concern about Red Hat being bought out by AOL. Well, they were seen in the same room together! But it appears that these discussions had more to do with setting up a support deal for AOL to use Mozilla as its browser of choice over Internet Explorer. There have been rumours for a while that AOL was keen to start using the Gecko rendering engine – which is the prime part of Mozilla – for its own purposes. The switch should save AOL lots of money. Even though Gecko might be used by AOL, it is unlikely that they will produce any client software that will run under Linux. A support thing, we guess.
Info Mozilla: http://www.mozilla.org
Programming Jabber
Yet another O’Reilly book hits the shelves this month, this time covering the messaging system Jabber. Jabber is a set of protocols expressed in XML, and an extensible framework that allows people and applications to exchange all sorts of information, from simple text messages to extending the backbone of an enterprise data system. Jabber gives programmers the power to build applications that have identity and presence, and that can take part in conversations. “Programming Jabber: Extending XML Messaging” gives programmers the opportunity to learn and understand the Jabber technology and protocol from an implementer’s point of view. “I was intrigued by the protocol; my entry point into the Jabber world was from the bottom up, so to speak,” says DJ Adams, the author. “From day one, I was looking at the XML flowing between client and server. At the time, my head was full of XML, messaging, and Internet-wide communication. Jabber seemed to encapsulate all these things in one neat little box of potential. The more I learned about Jabber the more mesmerised I became.”
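To give a flavour of what “protocols expressed in XML” means in practice: a Jabber chat message is just a small stanza written to a socket connected to the server (conventionally on port 5222). A minimal sketch, with invented addresses:

```python
# A Jabber message stanza. In a live session this string would be
# written to an already authenticated client-to-server XML stream.
stanza = (
    "<message to='alice@jabber.example.org' type='chat'>"
    "<body>Hello from a Jabber client!</body>"
    "</message>"
)
print(stanza)
```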
Info O’Reilly: http://oreilly.com/catalog/jabber
Open Source software to cure open sores?
Open Source software is the key to building a reliable and affordable IT infrastructure for the NHS, claims leading IT security consultancy Skygate Technology. The company’s comments follow the recent BBC “Your NHS Day” and the announcement from the Prime Minister, Tony Blair, that taxes will have to rise to fund improved NHS services. “The NHS recently announced its £500m ‘Building the Information Core’ strategy to modernise its IT systems by 2005,” says Skygate director Pete Chown. “With much public scrutiny and limited resources, the NHS should be seriously considering the cost and reliability advantages of using Open Source software.” “For example, systems where reliability is paramount would be ideally suited to the Open Source Linux operating system. These could include the planned systems for electronically booked appointments, electronic patient records and electronic health records.” “There is no doubt that large-scale, public sector IT projects are fraught with risk. The Inland Revenue, National Air Traffic Service and Passport Office are all examples of IT projects that have either failed, run late or gone well over budget. If the NHS is to invest heavily in IT then it should take the time to look at all the available options. These have to include Open Source software, because the business case, in terms of cost and reliability, is compelling.”
Info
Skygate Technology: http://www.skygate.co.uk
“Free Consultancy” as in “Free Beer”
Openweb Analysts Ltd is now offering free consultancy to London-based firms considering Open Source solutions for Web development. Companies and government bodies who are thinking of using free software can now contact an expert for advice. They should email Openweb Analysts with a full explanation of their business and their interest in Open Source. They will receive a response examining potential Open Source solutions for their business. Those companies most able to benefit will be offered a free face-to-face meeting to discuss it further.
Info Openweb Analysts Ltd: http://www.owal.co.uk Email advice: advice@OWAL.co.uk
New ELSA graphics cards out
ELSA has launched its new range of consumer graphics boards: the ELSA Gladiac 925, ELSA Gladiac 725 and ELSA Gladiac 517 products. These new boards use the latest generation GeForce4 technology from nVidia, setting new standards in 3D performance and multimedia flexibility. The ELSA Gladiac 517 TV-OUT provides PC gamers with an entry-level step-up to performance gaming. Based on the new nVidia GeForce4 MX 440 GPU, the board is equipped with 64MB of DDR RAM, an effective memory clock rate of 400MHz and a core clock speed of 270MHz. The ELSA Gladiac 517 TV-OUT offers gaming and DVD movies on TV, and provides superb image quality with the new nFinite FX II engine, Lightspeed Memory Architecture II and Accuview Antialiasing. All of the new boards in the ELSA Gladiac range support the ELSA 3D Revelator shutter glasses. It so happens that nVidia also produces drivers for Linux users; it’s just a shame that they are closed source.
Info
ELSA: http://www.elsa.co.uk
nVidia: http://www.nvidia.com/
GPL test case
It seems that the case between NuSphere Corp and MySQL will not become the court test that would have established the standing of the GNU General Public License (GPL to its friends). The preliminary hearing between NuSphere and MySQL was to begin on February 27, but has had cold water poured on it by Massachusetts US District Court Judge Patti Saris, who refused to allow arguments to be brought forward which would have expanded the case beyond a trademark dispute.
Info
MySQL: http://www.mysql.com
NuSphere: http://www.nusphere.com
NetBSD for the desktop
Wasabi Systems has released its NetBSD for desktops boxed set. It contains over 2,000 applications for NetBSD, including GNOME, KDE, office suites, Web browsers, development tools and games. The package release of Wasabi NetBSD 1.5.2 contains five CD-ROMs with the full binaries of the world’s most portable operating system for 20 platforms, plus over 1.9GB of recompiled third party software for x86 PCs. These CDs are bootable on x86 PC, Alpha, DECstation, SPARC, Power Macintosh and VAX platforms. A 16-page installation guide is included, and as an added bonus you will also receive a colourful NetBSD CPU badge. The NetBSD operating system is a fully-featured, Open Source, Unix-like operating system descended from the Berkeley Networking Release 2 (Net/2), 4.4BSD-Lite and 4.4BSD-Lite2. NetBSD runs on thirty-one different system architectures featuring twelve distinct families of CPUs, and is being ported to more. The NetBSD 1.5.2 release contains complete binary releases for twenty different machine types. NetBSD is a highly integrated system. In addition to its highly portable, high performance kernel, NetBSD features a complete set of user utilities, compilers for several languages, the X Window System, firewall software and numerous other tools, all accompanied by full source code. It also supports third party software (including the KDE and GNOME desktops) through its package management system.
Info
Wasabi: http://www.wasabisystems.com
GNUPro ports to Xstormy16
Red Hat, one of the leading Linux providers, has announced a partnership with SANYO to complete a highly complex GNUPro port to SANYO’s Xstormy16 processor. GNUPro is recognised as the world’s most popular embedded development tools suite. “We chose Red Hat to build the GNUPro port to our processor because of its extensive knowledge of embedded architectures, prominence in GNU tool development and exceptional technical support,” said Susumu Yamada of SANYO. “Our customers appreciate the ubiquitous nature of GNU-based tools, and we look forward to working with Red Hat to provide these benefits.” The architecture on which SANYO’s processor is based is designed for memory-constrained applications, and SANYO plans to utilise it in home appliances, portable units and audio systems. Red Hat’s embedded tools team will draw upon their expertise in porting GNU-based tools in order to develop compiler, debugger and related software development tools for SANYO’s architecture. Red Hat will also provide ongoing maintenance support to SANYO.

Info
Red Hat: http://www.redhat.com/solutions/embedded
AdAstra invests in SuSE Linux
AdAstra Erste Beteiligungs, a privately held venture capital company, has invested 4.4 million euros in SuSE. Large enterprises benefit from SuSE’s deep project knowledge in the design and implementation of individual IT structures. For the IT infrastructure of small and medium-scale enterprises, SuSE offers cost-efficient, easy-to-administer standard solutions such as the SuSE Linux eMail Server and SuSE Linux Firewall. During the past months, SuSE Linux has restructured its internal organisation into four target-group-oriented business units, which handle the various customer segments according to their specific needs. “In the IT world, SuSE Linux AG is renowned for its highly innovative products and services. With the new organisational structure, SuSE has established an optimum base for handling the various requirements of the individual customer segments, and is thus able to participate more than proportionately in the rapidly growing Linux market,” commented AdAstra’s Hans-Christian Perle. “The new management exceeded our expectations. Despite a generally difficult environment, SuSE Linux AG was able to increase its total turnover by about 50 per cent to 40 million euros during last year. The former technology start-up SuSE has successfully transformed into a reliable and auspicious enterprise,” added Tillmann Lauk, CEO of the lead investor, e-millennium 1.
Info SuSE: http://www.suse.co.uk AdAstra: http://www.adastra.com
SuSE has a new CTO
Boris Nalbach, an experienced IT professional, has assumed the position of Chief Technology Officer (CTO) at SuSE Linux, the international Open Source technology leader and solutions provider. “Boris Nalbach was our favourite candidate for staffing the Executive Board vacancy,” said Gerhard Burtscher, Chairman of the corporation’s Executive Board. “Under his supervision, we will introduce additional smart Linux solutions for end customers as well as for enterprises, and promote the strategic and operational collaboration with our technology partners.”
Info
SuSE: http://www.suse.co.uk
Red Hat help develop SuperH SH-5 compiler
Both Red Hat and SuperH, Inc., the developer of RISC microprocessor cores, have announced that they are working together on a compiler for the new 64-bit SuperH SH-5 processor. The Red Hat GNUPro compiler is a commercial software development suite of tools built around the Open Source GNU standard. The joint work with Red Hat is part of SuperH’s commitment to using, and making available, Open Source software for the SuperH architecture. The SuperH architecture is ideally suited to digital consumer convergence products, and targeted at service providers, system manufacturers and semiconductor suppliers. Red Hat is working with SuperH to develop Open Source solutions across the SuperH architecture to deliver powerful, reliable and affordable solutions ideal for the mass consumer device market. The SuperH architecture, including the new 64-bit SH-5 core architecture, will be supported by the Red Hat Embedded Linux developer suite, enabling developers to gain access to the latest software upgrades as they become available, and to Red Hat Open Source support. The SuperH architecture is designed for digital consumer applications such as set-top boxes, portable multimedia appliances, games consoles and many other ‘infotainment’ products. Automotive applications include car information systems, motor control systems and satellite navigation products. Telecoms applications include video telephones, home gateway systems and smart phone applications. “When designing consumer devices, getting the embedded technology right is critical – it must be able to support feature-rich applications but also come in at the right price point for the mass consumer market,” said Jon Frosdick, Software Director of SuperH, Inc. “Open Source software is an important part of our strategy for ensuring that the SuperH architecture is designed to deliver the highest price performance ratio. We’re delighted to be working with Red Hat to develop Open Source solutions for the new SuperH SH-5 architecture.”
Info
Red Hat: http://www.europe.redhat.com
SuperH: http://www.superh.com
New SuSE OS
SuSE Linux has announced the launch of the eighth generation of its OS, which will be available from software retailers from mid-April 2002. This release comes with increased security and the newest KDE desktop, KDE 3 – it is the first distribution to feature it. With incredibly fast installation and expanded multimedia capabilities, this could make it one of the most advanced Linux distributions for professional and private desktop users. An almost fully-automated installation routine and the comfortable graphical desktop environment KDE 3 make SuSE Linux 8.0 the ideal choice for all Linux newcomers and private users who want to benefit from the advantages of the Linux operating system on a desktop computer. The recommended retail price for SuSE Linux 8.0 Personal (3 CDs, 2 manuals, 60 days of installation support) is £39; SuSE Linux 8.0 Professional (7 CDs, 1 DVD, 3 manuals, 90 days of installation support) is £59. An update offering for previous users is also available, priced £39 (inclusive of VAT).
Powering the Welsh spirit
E-spirit Wales specialises in Linux training and implementation for small to medium size companies. They run courses teaching Linux from beginners’ classes to using Linux in the enterprise, namely changing over from a Windows NT to a Linux-based system. Training is carried out on-site or at seminars provided by representatives of E-spirit Wales Ltd in close co-operation with the various Linux distribution manufacturers and Linux IT – the largest Linux solution provider in the UK. Linux implementation is carried out by certified Linux engineers on all conceivable systems. They aim to give a guaranteed software solution with both telephone and on-site support.
Info E-spirit Wales: http://www.e-spirit-wales.co.uk
PointBase – 100th partner for MontaVista
MontaVista Software Inc. has announced the one hundredth member of the MontaVista Partnering Program: PointBase, Inc., a developer of Java database technology for managing and synchronising enterprise data among servers, mobile and pervasive computing devices, headquartered in Mountain View, Calif. “PointBase has supported Linux for some time now and we are naturally excited about our milestone partner status with MontaVista Software,” said Cameron McEachern, executive vice president for sales and marketing at PointBase. “Our pure Java, small-footprint relational database complements the strengths of VisualAge Micro Edition for MontaVista Linux, while delivering the persistent storage capabilities needed to enable embedded applications. We look forward to continuing to work with MontaVista Software to bring value to our mutual customers.”
Info PointBase: http://www.pointbase.com MontaVista: http://www.mvista.com
K-splitter

MONEY MATTERS
In this month’s K-splitter we look at the financial side of KDE – banking online with Konqueror and converting money into euros with Keurocalc – as well as tips on implementing KDE 3.

Introduction to the change-over
KDE 3 casts its shadow ahead of itself, but before we all dive in and start using it, it’s always better to be safe than sorry. That’s why not everyone will want to throw the KDE 2 series overboard straight away. In fact, with a little forethought, it is possible to use both KDE 2.2 and KDE 3.0 at the same time. However, because no two computers are the same, when it comes to switching over, the more information the better. For all those who want to try out KDE 3.0 right away, we hereby present some additional sources of information, firstly on how you can keep your stable working environment without having to miss out on experimenting with the new version. You can find the official introduction, fresh from KDE headquarters, at http://www.kde.org/kde2-and-kde3.html. A somewhat different approach is taken by Anne-Marie Mahfouf in her description at http://women.kde.org/projects/coding/kde2+3.html (Figure 1): here you simply define a separate user for KDE 3.0, while the usual KDE 2.2 remains at your disposal when you log in as normal. Take care: both sets of instructions use the CVS tree of KDE 3.0 and thus take up quite a bit of disk space. The Qt version alone, which is indispensable for smooth running, demands 120MB of your hard disk. For KDE itself you will have to factor in at least an extra 600MB of spare disk space. Users of KDE 2.2 may also wish to re-optimise their system. For this, Oliwier Ptak has provided an introduction explaining how to build optimal KDE 2.2.2 binary packages from the sources. This can be found at http://www.userlocal.com/articles/kde222/kde222fromsource.htm.

Figure 1: Simply define a new user for KDE 3.0!

CVS tree: When several developers are working on a software project, there is a particular need for a procedure which prevents anyone unintentionally overwriting amendments made by their co-programmers, or destroying the sole working version. Many projects therefore use a “Concurrent Versions System”. An older development version can be rebuilt at any time from a CVS file tree, but it is mostly used to distribute the latest program code.

Let’s see how it prints!
There is, and always has been, good news to report from the printer front, so what could be better than giving the KDEPrint project its very own Web site? At http://printing.kde.org you will find chatter and gossip from the printer scene, enriched by FAQs, tips and tricks, and tutorials on the subject of printing under KDE. If you’d like to join this project, Chris Howells and co. have set up a KDEPrint mailing list, and you can register for it at http://mail.kde.org/mailman/listinfo/kde-print.

Konqueror and the banks
Unfortunately aggro with the bank is something that happens frequently; however, this isn’t always to do with unpaid bills. Sometimes it’s just
that the browser simply won’t do what you want when it comes to the increasingly popular online banking – and it’s still Linux users who are at a disadvantage. This is a situation Oliver Strutynski wants to change. Bugged by the fact that his favourite browser, Konqueror, didn’t work on one or two financial sites, he has now set up a homepage to point out the black – and, thank heavens, also the white – sheep among financial service providers. At http://home.in.tum.de/strutyns/banking you’ll find, neatly arranged by country, a list of banks and how they cope with the KDE browser. If your bank has a green field, you’re in luck: no problems there. But even for those who have already encountered problems, the site is well worth a visit: for the banks marked in yellow there are tricks and tips from other users, which may help you to handle banking transactions with Konqueror. Active participation is also welcome: at the end of the site you will find an input box in which you can enter banks not yet listed, along with potential workarounds for known problem cases.

Figure 2: Does your bank get along with Konqueror?
The euro is here
If you found handling exchange rates a problem before the introduction of the euro, then it’s unlikely that things have become any simpler. For those who are struggling with conversion rates, there’s a new lifeline in the form of Keurocalc (Figure 3). You can find the latest version at http://www.caldera.de/~eric/keurocalc/. In addition to pure conversion, Keurocalc also makes a marvellous pocket calculator, so that when you are buying something new you can not only find out in seconds what the fun is costing you, but also whether you can actually afford it all... Tips and tricks on dealing with the euro symbol within KDE and KOffice can be found at http://www.koffice.org/kword/euro.phtml.

Figure 3: The right computer makes dealing with the euro easier
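The arithmetic a tool like Keurocalc automates is simple enough: each legacy currency has an officially fixed conversion rate, and you divide by it to get euros. A minimal sketch of our own (not Keurocalc’s code), using a handful of the official fixed rates:

```python
# Officially fixed conversion rates: units of national currency
# per euro (a small selection of the full list).
RATES = {
    "DEM": 1.95583,   # German mark
    "FRF": 6.55957,   # French franc
    "ITL": 1936.27,   # Italian lira
    "ESP": 166.386,   # Spanish peseta
}

def to_euro(amount, currency):
    """Convert an amount in a legacy currency to euros."""
    return amount / RATES[currency]

print("100 DEM = %.2f EUR" % to_euro(100, "DEM"))
print("500 FRF = %.2f EUR" % to_euro(500, "FRF"))
```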
Read me!
Hong Feng, a former employee of O’Reilly & Associates in Beijing, has started an ambitious new project. His online magazine, the “Free Software Magazine”, will not only be about the world of Free software but will also follow its principles. The approval of the Free software scene is correspondingly great: the magazine is officially supported by the FSF (Free Software Foundation). The FSF chairman, Richard Stallman, has himself insisted on contributing the editorial for the first issue, which you can see for yourself at http://www.rons.net.cn/english/FSM/RMS_preface. But what does all this have to do with KDE? A great deal, since Hong Feng is still looking for authors to write articles to do with KDE and Qt software development. The only condition: the articles submitted must be under a recognised Open Source licence. The “GNU Free Documentation License” (FDL) is given preference, and you can look up its principles at http://www.gnu.org/licenses/licenses.html#FDL.

Figure 4: The Free Software Magazine
Figure 5: Magazine founder Hong Feng with the FSF chairman Richard Stallman

Summit conference
A whole range of KDE developers were invited to FOSDEM (Free and Open Source Software Development Meeting, http://www.fosdem.org/), which took place this year on 16 and 17 February in Brussels. In addition to swapping news, there was also a chance to hack, and the developers were provided with a KDE room of their own. Many turned up and the lectures were well attended. The weekend was a great success. A British contingent flew the flag and some even managed to sample some of the Belgian beer.
GNOME NEWS
Gnomogram

IN WITH THE NEW
This month Gnomogram takes a look at Fidelio, Firewall Builder and porting programs to GNOME 2.

GNOME Foundation Elections
Somewhat later than expected, the results of the second GNOME Foundation elections have now been confirmed. Since the majority of professional GNOME developers work either at Ximian or at Red Hat, this year a special clause was brought in, according to which no more than four directors can be employed at the same company. As such, James Henstridge and George Lebl have been promoted instead of Michael Meeks. The founder of the Free Software Foundation, Richard M. Stallman (known as RMS), whose self-nomination attracted a great deal of attention, was not elected. He has been known to put forward some very extreme views regarding Free software. Also on board again, of course, are Miguel de Icaza and Federico Mena-Quintero – the two founders of the project – together with some other old acquaintances. In addition to diverse legal questions, which remain unanswered, the new GNOME Foundation Board must now also clarify the details of the next GUADEC (GNOME User And Developer European Conference), which is planned to take place in Seville this April.

Porting assistance for GNOME 2
The release of GNOME 2 represents more of a change for programmers than it does for users, as the platform – now available in beta – comes with incompatible API modifications. To give developers the opportunity to benefit from the new platform, there are porting guidelines at http://www.gnome.org and a set of documentation for the GTK 2 APIs. Anyone who has relied on Glade when developing his or her program may in future be able to save themselves some of this labour, since it will be possible to convert old Glade files. GNOME 2 is, however, more than just the platform – all programs are intended to share a common look and feel. Anyone who ports their program to GNOME 2 should therefore take a look at the Human Interface Guidelines. For example, GNOME 2 programs should display the Cancel button to the left of the Go button (such as Save). If this rule is consistently implemented, it will increase the user’s productivity.

An example of a preference dialogue in accordance with the Human Interface Guidelines
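How that button-order rule looks in code is easiest to see with a small example. This is our own illustration using the PyGTK bindings (an assumption – the guidelines themselves are toolkit-neutral): buttons are laid out left to right in the order they are added, so listing Cancel before Save gives the HIG-conformant arrangement.

```python
import pygtk
pygtk.require("2.0")
import gtk

# Cancel is added first, so it appears to the left of the
# affirmative (Save) button, as the Human Interface Guidelines ask.
dialog = gtk.Dialog(
    title="Save changes?",
    flags=gtk.DIALOG_MODAL,
    buttons=(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
             gtk.STOCK_SAVE, gtk.RESPONSE_OK),
)
if dialog.run() == gtk.RESPONSE_OK:
    print("Saving...")
dialog.destroy()
```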
Ximian Connector
When Evolution 1.0 was launched, Ximian announced a new program for 2002 called Connector, which turns Evolution into a fully-fledged Exchange client. Unlike Evolution, Connector is being distributed under a proprietary licence for US$69. Contrary to some fears expressed, though, Evolution itself remains free in the sense of the GPL.
Fidelio
Fidelio is a GNOME client for Hotline, a system strongly reminiscent of the old bulletin board systems. The program lets the user upload files to and download them from a Hotline server, much as with FTP, as well as read and reply to messages in a similar way to Usenet news. Just as with the old bulletin boards, there is no large central Hotline server but lots of small ones, which exist independently of each other. The advantage of this is that the servers can continue to exist independently of the ailing company Hotline Communications. On the other hand, it’s relatively hard to find one’s way around the many servers. To get an overview, there are “trackers”, which maintain the latest server lists. Since countless trackers exist in addition to the official Hotline ones, there are also tracker lists, such as http://www.tracker-tracker.com.
In addition to numerous DivX-Animes there are also lots of books on hotline servers

The unofficial trackers not only offer a greater range of server lists than the official one, but they also include numerous sites with legally dubious content. For example, Hotline servers are regularly used to distribute the latest DivX movie files. Although there are many thematically arranged Hotline servers, it is often more useful to rely on a search engine such as SADwyw to find a specific file. Once the user has found it, though, it is not unusual to have to wait a while before the – usually very limited bandwidth – server releases the resources. Also, with some servers, users have to apply for an account or upload a few files to the server before they themselves are allowed to download.

Firewall Builder
With the aid of Firewall Builder it is possible to use drag and drop to create even complicated firewall rules and turn them into a script. Firewall Builder gets on very well with both iptables (Linux 2.4) and ipfilter (FreeBSD). Plus, with the aid of an extension, the program can install policies directly on the one-diskette firewall Floppyfw. Firewall Builder manages all hosts, firewalls and services as objects, which can be set out in a tree view. These objects can in turn be combined into groups, so, for example, all services connecting to Kerberos can be handled as one object. There is no need to enter all hosts by hand: using Tools/Discover Objects the program can find hostnames via SNMP, from a DNS zone, or via the file /etc/hosts. The objects found can later be inserted into the rules – for example, all Kerberos services could be allowed on weekdays only for a specified host. For standard configurations Firewall Builder also offers a wizard, which takes the user by the hand during the configuration. The rules can be set to make even finer distinctions via the Interfaces tab of a firewall: in addition to the global rules, you can also make interface-specific settings here. Firewall Builder also supports Network Address Translation (NAT), so IP datagrams can be diverted to other hosts or ports.

The greatest drawback to Firewall Builder is that the icons are sometimes very ugly
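The sort of rule such a NAT policy compiles down to – on the iptables target at least – looks like the following sketch. The interface name and addresses are invented, and the generated script would normally be shell; it is wrapped in Python here only for uniformity with the other examples:

```python
import subprocess

EXT_IF = "eth0"          # external interface (invented)
WEB_SERVER = "10.0.0.5"  # internal host to divert traffic to (invented)

# Divert incoming connections on port 80 to an internal server's
# port 8080 - a typical destination-NAT rule.
subprocess.call([
    "iptables", "-t", "nat", "-A", "PREROUTING",
    "-i", EXT_IF, "-p", "tcp", "--dport", "80",
    "-j", "DNAT", "--to-destination", WEB_SERVER + ":8080",
])
```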
MrProject
MrProject is, as the name suggests, project management software, which lies somewhere between the abilities of Evolution and ToutDoux. Since MrProject uses GAL to access the widgets used in Evolution, the interface ought to be familiar to every Evolution user. All the tasks dealt with by MrProject can be displayed either in a calendar view or in a Gantt diagram. The diagram has the advantage that it displays the temporal limits of the individual tasks and subtasks; progress is measured in percentages and dependencies among the tasks are shown. Dependencies can be created by simply connecting two tasks: if the deadline for the first task is moved, the start time of the second moves with it. Furthermore, all dependencies and subtasks can be displayed in a network view, which simply creates a flow chart from the data. It is also possible to assign tasks to persons or whole groups for processing. The materials necessary for a task can be defined under Resources. Email addresses can be defined for all resources, and MrProject can send messages to these addresses with the aid of Evolution. The co-operation between these two programs should progress further in future; one of the planned features is to synchronise the calendar from MrProject with the one from Evolution.

MrProject can assign each task to a group for processing

Info
GNOME Foundation homepage: http://foundation.gnome.org
Porting to GNOME 2: http://developer.gnome.org/dotplan/porting
GNOME API reference: http://developer.gnome.org/doc/API/
GNOME 2 Human Interface Guidelines: http://developer.gnome.org/projects/gup/hig/
Ximian Connector homepage: http://www.ximian.com/about_us/press_center/press_releases/ximian_connector.html
Hotline Communications homepage: http://www.bigredh.com
Tracker-Tracker (Hotline): http://www.tracker-tracker.com/hotline/
SADwyw homepage: http://ac2i.tzo.com/cgi-bin/search
Firewall Builder homepage: http://www.fwbuilder.org
Firewall Builder file list: http://sourceforge.net/project/showfiles.php?group_id=5314
Floppyfw homepage: http://www.zelow.no/floppyfw/index.html
MrProject homepage: http://mrproject.codefactory.se

Libraries required
Fidelio: libxml2
Firewall Builder: libfwbuilder, libxml2 >= 2.4.0, libxslt >= 1.0.0, libgtkmm >= 1.2.3, ucd-snmp >= 4.2, openssl >= 0.9.6
MrProject: libgal >= 0.11.2, gnome-vfs >= 1.0.0, libxml >= 1.8.14, gnome-print >= 0.25, oaf >= 0.6.5
FEATURE
Penetration test: Background, Methods and Tools
SIMULATED INTRUSION
From the point of view of a hacker, your company network is often not half as safe as you might assume. However, security holes need to be found before they can be plugged. Viola Braeuer explains
The author Viola Braeuer is a Bachelor of Information Technology and operates as an independent security advisor. She has been working with Linux since the start of her studies and with different aspects of IT security for the last four years.
16
LINUX MAGAZINE
I
t’s a well-known fact that the Internet has not been spared the attentions of business spies and curious hackers alike. We might like to think they won’t trouble us, or that our firewall will provide all the protection that we need, but the truth of the matter is often far different. What’s really needed is prevention rather than damage control after the event.
Vulnerability assessment In the world at large, people generally make sure their front doors are locked. A penetration test does pretty much the same thing across a system: it checks a computer network’s possible weak spots looking for any vulnerabilities. The analysis takes the point of view of an aggressor looking for weak areas that could be exploited. What can someone who wants to enter your computer see? What information about operating systems, applications and data can they find out? Which targets will they pursue and which strategies will they use? The purpose of the penetration test is to clarify the answers to all these questions, pursuing the motto: “know your enemy”. A system’s access to the Internet is the largest potential point of attack into company’s network, in particular the network services (known or unknown) that are offered externally. However, the enemy could also be within your own camp: the majority of attacks on IT systems stem from frustrated employees. Another of the roles of the penetration test is therefore to identify who has Intranet access to what data. On the whole, this is a weak point analysis of the actual state of the company network – and it quite relentlessly uncovers the gaps. The next step is to plug Issue 19 • 2001
these holes by activating current patches and by changing the standard and trivial passwords. An internal investigation can also serve to investigate a network topology that has grown over time, i.e. which systems are actually on the domestic LAN and what runs on it?
Procedure Despite all caution, a professional intrusion test can impair or even paralyse the examined systems. Both the client and the contractor should therefore find out and sum up the risks. Beside the client’s signed declaration of consent, the safety specialist needs only the IP address area that is to be analysed. The penetration test then consists of several steps: ● Passive procurement of information: foot printing ● Active procurement of information: scanning ● Entering the system, “gain root”: enumeration
First step: foot printing
Before the tester makes contact with the computers to be examined, he or she should collect as much information about the client as possible. A good source for this is the “Whois” databases; these supply the domain (of the appropriate IP address), the email and postal addresses, the provider, technical partners and their telephone numbers, as well as the assigned IP address space. The data found this way may already be out of date. Even if it was correct at the time of registration, the IP addresses may well have changed in the meantime. The date of the last change helps to gauge the probability that the information is correct. The client’s Web site gives the first impression of the client’s safety philosophy: does it give a professional impression or does it resemble a playground for the latest
in animation? Are the pages even readable with safety-conscious browser settings? Every now and then, a Web site can be a veritable treasure chest of data. A search system for all employees of a company, with their direct-dial numbers and email addresses, may appear a good idea at the time, but at the same time it leaves the door wide open to social engineering.
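Because a Whois query is nothing more than a single line of text sent to TCP port 43, even this step is easy to script. A minimal sketch – the server and domain below are examples only:

```python
import socket

def whois(query, server="whois.ripe.net"):
    """Send a single-line query to a Whois server (TCP port 43)
    and return the raw text response."""
    sock = socket.create_connection((server, 43))
    sock.sendall((query + "\r\n").encode("ascii"))
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks).decode("ascii", "replace")

print(whois("linux-magazine.co.uk"))
```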
Second step: scanning
Following this rather passive step comes the active scanning phase. Its purpose is likewise the procurement of information, but in contrast to foot printing, the target system is contacted directly. The different methods used here have varying levels of conspicuousness – some can even completely hide themselves in the normal communication. Other, less sophisticated procedures will quickly set off the alarm bells in the target network. Not all machines will tolerate a port scan, even if they look like they are the latest and greatest; some older IBM machines even react to this by crashing. Apart from this, many system administrators may not like being put under the microscope in this obtrusive manner. In scanning, the tester’s attention focuses on the operating system that the target computer uses, as well as the offered services (open TCP and UDP ports) and the patch level of the programs. The tester will also try to look behind the firewall: some of the network topology is often visible, and in many cases there isn’t even a firewall. Through security holes in the offered services, an intruder can, despite the firewall, gain root privileges. These services therefore represent one of the greatest risks, and their role in the vulnerability assessment is accordingly important. The test is usually over after the scanning phase. The tester gathers the identified weaknesses and the usable information and analyses them. On top of this, the tester will suggest possible measures to eliminate the gaps.
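In its simplest form, a port scan is nothing but a series of connection attempts. Real tools add far stealthier techniques, but a crude TCP “connect scan” with a banner grab – the banner often reveals a service’s version, and hence its patch level – can be sketched in a few lines. The target address below is an example from the reserved TEST-NET range:

```python
import socket

TARGET = "192.0.2.10"   # example address - scan only hosts you may test!

for port in range(1, 1025):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)
    try:
        # A completed TCP handshake means the port is open. Crude
        # and noisy, but it needs no special privileges.
        sock.connect((TARGET, port))
        try:
            banner = sock.recv(128)   # many services announce themselves
        except socket.timeout:
            banner = b""
        print("port %d open %s" % (port, banner.strip()))
    except (socket.timeout, socket.error):
        pass
    finally:
        sock.close()
```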
Security scanners
Free tools
Nmap from Fyodor: http://www.insecure.org/nmap
Nemesis: http://www.packetfactory.net/Projects/Nemesis
Satan (Security Administration Tool for Analysing Networks): http://www.fish.com/satan
Sara (Security Auditor’s Research Assistant): http://www-arc.com/sara
Saint (Security Administrators Integrated Network Tool): http://www.wwdsi.com/saint
Whisker: http://www.wiretrip.net/rfp/p/doc.asp/i2/d21.htm
Nessus: http://www.nessus.org

Commercial scanners
ISS Internet Scanner: http://www.iss.net/eval/specs3.html
QualysGuard: http://www.qualys.com/services
Cisco Secure Scanner (was NetSonar): http://www.cisco.com/warp/public/cc/pd/sqsw/nesn
Symantec NetRecon: http://enterprisesecurity.symantec.com/products/products.cfm?ProductID=46

Figure 2: As a successor of Satan, Saint is also locally installed and operated through a browser. The level of detail and ruthlessness Saint uses to examine its targets can be adjusted

Third step: enumeration
Not all jobs end with the scanning. The tester will often try to actually attain root rights on the target system. At this point, he or she differs from a genuine intruder: the tester will not install any rootkits and won’t read or modify any internal data. A genuine intruder would cover his tracks and in many cases use the computer as a launching pad for further attacks. Weak passwords are frequently the path to root, especially with databases: not every administrator goes to the trouble of modifying the standard password. The second most common path is through server services, which often contain security gaps. The cheapest method, in terms of time and necessary knowledge, is the “Script Kiddie method”: ready-made exploits, i.e. programs that make use of a certain weak point, save a lot of work for the intruder and the tester alike. A vulnerable computer needs to be found (by scanning), whereupon the exploit is released. More sophisticated attacks are accordingly possible with more know-how, time and money.

Tools
Many of the steps in a vulnerability assessment can be easily automated – above all, port scanners and full-blown security scanners are frequently used for this purpose. Some products even give suggestions as to how the ascertained weak spots can be plugged. The port scanner of choice is often Fyodor’s Nmap. Available free of charge and well documented, it offers a broad palette of options and is an almost ideal starting point into the field. The Security scanners boxout lists even more tools which are suitable for penetration tests. It should be emphasised here that the free security scanner Nessus is just as powerful as the commercially available tools. With two or three free tools on your laptop, a good toolbox of UNIX commands under your arm and the necessary expertise in your head, you are already quite well equipped. As well as the free tools, there is also a handful of commercial security scanners. The advantage of these is usually in the support, maintenance, updates, training and warranty obligations. They are also frequently faster and easier to use. Their most serious disadvantage (apart from the price) is that the source code is not made public. You can thus never be certain what the program is actually doing. This is particularly irksome with a penetration test, as you want to be able to give your client a true picture of the planned procedure.

Figure 1: Nessus uses plug-ins for the different scans. Individual tests can therefore be re-tooled at any time
Figure 3: The powerful port scanner Nmap has several graphical front-ends, for example NmapFE (now included in Nmap)
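For real work, Nmap covers all of the scan types mentioned here and more. A typical invocation, wrapped in Python only for consistency with the other sketches (the flags shown are long-standing Nmap options; the target is an example address, and a SYN scan requires root privileges):

```python
import subprocess

TARGET = "192.0.2.10"   # example address

# -sS: TCP SYN ("half-open") scan, less conspicuous than a full connect
# -sU: UDP scan
# -O : guess the operating system by TCP/IP fingerprinting
# -p : the port range to examine
subprocess.call(["nmap", "-sS", "-sU", "-O", "-p", "1-1024", TARGET])
```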
Security scanners made illegal?
The free security scanner Nessus is just as powerful as the commercially available tools
18
LINUX MAGAZINE
That penetration testing cuts both ways is well known: programs that check computers for their security aren’t solely used to protect one’s own system. They can of course also be used as tool for breaking and entering into other computers. Administrators, advisors and crackers essentially use the same tools and know the same weak spots – the only difference is the intention behind the knowledge. It is substantially simpler to make use of a hole than to configure and administer a usable and nevertheless safe system. The deadlock over these tools is endangered by a new law in the making – the intention of which is to provide more security in a quite different area. The European draft, almost amounts to a professional ban on security consultants who execute vulnerability assessments. It also outlaws the manufacturers of security scanners and decrees that administrators be blind-folded. This problem is not new but with this proposed law, the limits are being pushed. One of the main reasons for this amendment was to find a way of eliminating the ways and means of not paying for Pay TV and similar services. This then additionally forbids the possession, distribution and development of tools, which enable this. Now laws need to be wide enough so that they cannot be circumvented. In the widening of the law,
Issue 19 • 2001
the legislators have in this case shot way beyond the actual target, punishing activities for which the law is not at all meant. This mistake will then need to be corrected: a lengthy process examining the literature and the high court jurisdiction and determining how to alter the necessary wording and terms.
A question of access
The main term of conjecture in this case is "access control service", a description of what a set-top box does: its job is to decode encrypted information only when the user has proved his authorisation. The law is to apply to television services, media services and broadcast presentations which are broadcast in return for payment. With the definition of this term, the train has already left the station. It is clear that individual information units are meant in this law, such as video streams or MP3 files, delivered for individual payment. The wording however encompasses much more, i.e. each and every Internet access. This also costs money and is "access controlled" by the user identification. An "avoidance mechanism" is by definition any device or technical procedure that enables unauthorised use. The wording thereby covers any unauthorised access to server services. This culmination creates the following after-effect: not only is the actual act of unlawful entry to be punished, but also preparatory and support actions. The draft justifies this with the ease of distribution of hacker tools and a low threshold of inhibition. The model therefore also forbids the possession, manufacture, maintenance or exchange of such avoidance mechanisms or processes. It goes on to say: "The law is formulated to be technically neutral and therefore applies independently of the concrete definition of the protection of the access control service or the avoidance mechanism." An infringement against this law carries a maximum penalty of one year's imprisonment as well as a fine of up to 50,000 euros. This is therefore what faces someone in possession of a port scanner – all because of this "technically neutral" formulation. It begs the question whether it is only the European Union guideline that has undergone such a particularly incompetent conversion. Perhaps the threat to corporate and financial order through the Internet will be fought preventatively by the sword of the justice system – present tendencies look this way. This will hardly be of concern to the real hackers out there. By breaking into foreign computer networks they are infringing the law anyway – it then doesn't really matter which law, does it? The draft was referred for further amendment by three specialised parliamentary committees in late 2001.

The scanner compares the version and patch level status of operating systems and applications with its database and thereby reports missing patches. The regular update of this database is therefore essential. The scanner unfortunately only runs under Windows NT – as do many commercial tools. It contains an editor, in which individual test runs from different categories can be compiled. Not all tests are always really necessary, but the selection remains clear thanks to the grouping. The ISS Internet Scanner is a quite complex and very useful tool.

QualysGuard
QualysGuard can be used from anywhere as a Web service (over HTTPS with password protection). The user doesn't have to worry about updates, as the scanner runs directly on Qualys' own servers. This tool also offers both port scans and a database of application-specific weak spots. The relatively scant selection of options shortens the acquaintance period drastically, but it also makes it more difficult to estimate the range of functions. The report alone permits only vague conclusions about the scan methods used. QualysGuard supplies a quite useful first result with the minimum of time and energy. With its simple, easy operation it can even be used before your first cup of coffee in the morning.
Report analyses
A lot more is asked of the user in the analysis of the automatically produced reports. Depending on the settings, commercial tools can supply reports big enough to fill half a filing cabinet – which of course no one wants to read. What is needed is a careful selection of the options and a gradual procedure, in which the configuration is refined step by step. No less demanding is estimating the threat potential of the exposed weak spots. The generated bar and pie charts are often referred to as "management reports". Their main function, however, is to fill many pages, working to the key: "If there's a lot of red on the page, it looks really bad." The estimate of the real situation becomes more accurate when one considers several weak points together as a combination and checks how current and how serious they are on the appropriate Web pages. The CVE list records the well-known software holes, assigning each one its own unique number.
Result
The purpose of a vulnerability assessment is to point out the weak spots of a computer network, in order to arrive at a better estimate of the actual risk. Not all exposed holes can be plugged; an upgrade, for example, is not always possible. The reasons for this are varied – frequently it is incompatibilities between the software in use. In addition, a mixture of different versions and patch levels in one environment is often undesirable, as the required maintenance effort increases. Last but not least, the firewall cannot simply close all the ports; we also need to actually communicate over the Internet. For a Web server, ports 80 (HTTP) and 443 (HTTPS) are the ones that should remain open – even though, of course, any other service can be tunnelled over them.
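To see what an attacker's scanner would see, it can be worth running a quick port scan against your own server from a machine outside the firewall. As a minimal sketch – assuming the free nmap scanner from the Top 50 list below is installed, and with www.example.com standing in for your own Web server – the following checks the privileged TCP ports:

nmap -sT -p 1-1024 www.example.com

Ideally only ports 80 and 443 should show up as open on a dedicated Web server; anything else deserves an explanation.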
Info
List of vulnerabilities, each with its own CVE number: http://www.cve.mitre.org/cve/
Bruce Schneier: Secrets & Lies, Wiley Computer Publishing, 2000
Whois databases: Europe http://www.ripe.de, Generic Top Level Domains http://www.internic.net, USA http://whois.arin.net, Asia http://www.apnic.net
Top 50 Security Tools: http://www.insecure.org/tools.html
Scanning strategy: http://unixgeeks.org/security/newbie/pen/ssarh.html
FEATURE
The dangers of rootkits
ROOT TREATMENT

Rootkits are part of the cracker's standard repertoire, allowing them to hide their activities and the results. If you find a rootkit on your machine it is high time for some "root treatment". Boris Schauerte explains
The author Boris Schauerte lives in Dortmund and works mainly in the field of data and Internet security. He programs enthusiastically under BSD and Linux systems. At the moment he is particularly working on the design and implementation of Free operating systems.
Rootkits are one of the most popular aids of crackers and script kiddies. These collections of tools make an attacker's job that much easier, as they ensure that once a break-in has been successful they can easily regain root access within the system by installing "backdoors". They also disguise the cracker's presence, so that the administrator won't even notice that the uninvited guest is controlling his or her machine. To top it all off, rootkits cover up any tracks left during the break-in, so you won't even know that it's taken place. Many rootkits are capable of collecting information about the machine and its environment, such as the passwords of the local machine or interesting data, which they filter from network traffic with the help of a "sniffer". This knowledge makes it easier for intruders to spread their attentions to neighbouring machines. Rootkits most commonly take the form of Trojan horses. These are patched system programs that behave according to the cracker's wishes. However, there are also rootkits whose main part is a kernel module – these don't even require any host programs.
Trust no-one
Rootkits with patched programs work on the assumption that the superuser is going to trust the output of these programs. This assumption is normally correct, as administrators generally have no other choice. The rootkit modifies important system tools in such a way that they no longer output any
information that could betray the intruder. For instance, the ps command from a rootkit does not show certain processes, while ls conceals the existence of some directories and files. Normally this sort of manipulation would be hard to detect unless different tools are used for checking. You could, for example, compare the ls listing with the output from a find call. However, this will only be successful if the intruder hasn’t also replaced find with a patched version. The situation is slightly different for rootkits that have their own kernel modules. Such a kit usually consists only of the module in question and tools designed to remove any traces of the break-in. Tasks, such as hiding the existence of certain files and users or inserting backdoors, are normally dealt with by these modules within the kernel, which means that all programs are affected.
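To mechanise the cross-check mentioned above, the two listings can be compared directly. This is only a sketch – the directory is arbitrary, and GNU find and unmodified sort and diff are assumed to be available:

ls -A /usr/lib | sort > /tmp/ls.out
find /usr/lib -mindepth 1 -maxdepth 1 -printf "%f\n" | sort > /tmp/find.out
diff /tmp/ls.out /tmp/find.out

A name that find reports but ls does not is a candidate for a hidden file – unless, of course, find has been patched as well.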
System break-in
Before attackers can install their favourite rootkits they need to acquire root permissions. The associated break-in normally follows a set pattern. When attacking a system directly, the cracker generally tries to collect as much information as possible about his or her target. The cracker will probe for weak points and then exploit them. Also common are indiscriminate searches for victims supported by network scanners, which may also be used to automate the break-in. Once inside the target machine the intruder eliminates the traces of the break-in. In order to do this the log entries are removed as well as other evidence pointing to them. The majority of this
process is carried out by programs like Zap, which remove all entries in the log files utmp, wtmp, lastlog and messages. Other tools clean additional files in /var/adm and /var/log. The clean-up is usually restricted to the standard files, since it is very time consuming to remove traces manually. This can result in the cracker overlooking some log files, enabling the administrator to detect the break-in after all. A special case is remote logging, where syslog continually transfers its entries to another machine. As long as the attacker has not cracked the log host he will not be able to clean its files. Once the cracker arrives in the target system, the backdoors provided by the rootkit can be installed, as well as any other programs. The rootkit is often transferred before he covers his tracks, as the transfer itself creates new traces. The tools mostly come from public servers rather than the attacker's own machine.

Local backdoors
There are two types of backdoors: local backdoors and network backdoors. A local backdoor allows an existing local user account to gain root permissions. In this case the cracker logs into the system as a normal user and executes a program that provides him with a root shell. The backdoor is normally password-protected, so it can't be accessed by any other users. Suitable host programs for backdoors are login, chsh, passwd or any other program with SUID root permissions. Listing 1 shows how this works (without a host): the user only has to start the program and he immediately becomes root. The file even assigns itself SUID root permissions when it is started by root. The effect can be seen in Figure 1. By the way, this simple example contains a buffer overflow error, which can be ignored for our purposes.

Listing 1: Simple local backdoor

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    char buf[40];

    /* Is root executing the program? */
    if (getuid () == 0) {
        /* Set the file owner to "root" */
        printf ("Set file owner to root.\n");
        sprintf (buf, "chown root %s", argv[0]);
        system (buf);

        /* Set the SUID flag */
        printf ("Set SUID flag.\n");
        sprintf (buf, "chmod +s %s", argv[0]);
        system (buf);
    }
    /* A normal user is executing the program */
    else {
        /* Set UID and GID to 0 (root) */
        if ((setuid (0) != 0) || (setgid (0) != 0)) {
            printf ("File is not SUID root.\n");
            return -1;
        }
        printf ("Open root shell.\n");
        execl ("/bin/sh", "sh", 0);
    }
    return 0;
}

Figure 1: The program in Listing 1 assigns SUID root permissions to itself when started by root. After that it can turn any user into root
The second part of this example, setting the user ID and the group ID to 0 and starting a shell, is similar to what you find in the program patches of rootkits, which may require their own passwords first. Particularly interesting for such amendments are programs that already provide similar functions, for example su or login, as it is much more difficult to find the changes in these than in other programs.
Network backdoors
The second group of backdoors is network backdoors. These allow the cracker to enter a system he has already cracked at any time via the network, without having a normal user account. Here again there are stand-alone backdoors and ones that hide within normal services. Patched versions exist of virtually all Internet daemons that have or can acquire root permissions, such as inetd and sshd. If the attacker wants to use his or her backdoor, the network version generally also requires a password. With well-hidden backdoors this password has to be entered at a point where other data would
normally be expected. If the cracker is recognised at the door, the backdoor attaches a shell to the port or executes his commands and transfers the output. There are many varieties of stand-alone network backdoors; the differences lie mostly in the way they are implemented or in the cryptographic method used. Most of these backdoors offer at least simple encryption in order to protect the transferred data from sniffers and therefore from the eyes of the administrator. This provides additional protection for crackers and makes it more difficult to trace their actions.
Network sniffers
The Ethernet sniffer forms an important part of many rootkits, as this allows the cracker to filter out important information from the network traffic and to store it. This will often enable him to crack other systems on the network, or to learn internal and confidential information. The sniffer is generally a stand-alone program that needs to be protected by a number of modified programs. For this reason rootkits containing a sniffer will offer patched versions of ifconfig, netstat and similar utilities, so that these normally dependable tools will no longer be able to find the sniffer. Despite all these protective measures there are many varied strategies for detecting rootkits in your system. With the help of a few tricks most of the rootkits currently in circulation can be tracked down and expelled from the system fairly easily.
Prevention is better
Ideally an administrator should take preventative measures to detect rootkits even before a break-in is suspected. In any case the administrator is going to require the most important system programs on a write-protected medium; it's vital that the
administrator can guarantee these programs haven't been tampered with. They also need to be independent of all other files on the machine. For this reason they should ideally be statically linked, as manipulation could also occur within the shared libraries. However, even these tools are powerless against a manipulation of the kernel. The only defence in this case is to boot up the system from a secure medium (boot disks with a disk operating system or a live CD), to mount the hard disk as read only and then to examine the system. One important weapon in the fight against rootkits is checksums, which can be used to determine whether files have been changed. Simple CRC sums are not suitable, however, since some tools ensure that the patched file has the same CRC sum as the original. The checksums must be created before any manipulation can take place, ideally directly after the installation of the system. The lists should be stored on a write-protected medium to prevent attackers from tampering with them. Armed with this type of list the administrator can check the system's programs on a regular basis. This test can also be completely automated using a cron job, provided that attackers can't amend the MD5 list and don't search the cron files for relevant jobs or modify either md5sum or the kernel. Instead of the rather simple md5sum, more complex tools such as Tripwire or Aide are also suitable. However, in many cases the simpler program will be sufficient. The important thing above all is to back up and check the data often enough, and to ensure that no program is overlooked in this process.
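A minimal sketch of such a checksum regime might look like this – the file locations are arbitrary, and a trustworthy md5sum binary and kernel are assumed:

# Directly after installation, while the floppy is still writable:
md5sum /bin/* /sbin/* /usr/bin/* /usr/sbin/* > /mnt/floppy/md5.list

# Later, from a cron job or by hand, report anything that no longer matches:
md5sum -c /mnt/floppy/md5.list | grep -v ': OK$'

md5sum -c prints one "OK" line per file, so filtering those out leaves only the entries that have changed.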
Finding rootkits
Even if no backups and checksums are available, all is not lost; there are still some ways of finding rootkits. A tell-tale sign of many cracker tools stems from the fact that they require their own subdirectory or their own configuration file. Many rootkits write these to /dev in the hope that no one will look there. This directory usually doesn't contain any normal files, only device files. A simple find call is enough to detect any interlopers (see also Figure 3):

find /dev -type f

The patched programs naturally try to open their config files. Consequently many modified files contain the string /dev. This is also easy to find with the help of the strings command:

strings patched-file | grep /dev
Figure 2: The program md5sum compares the current MD5 checksum with the stored values: 10 files have been modified. Using strings we can determine that we are dealing with Ambient’s Rootkit (ARK)
Figure 3: Telekit hides its configuration files in /dev. A simple find call is enough to track them down

A somewhat more complicated method of finding rootkits is the system call trace. All system calls made by a program can be monitored using strace. The output of this can be quite sizeable, but also very revealing, because when a rootkit wants to open one of its configuration files this is done using the relevant system call.
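As a rough sketch of the technique – the binary under suspicion and the grep pattern are only examples – you could watch which files a possibly patched ps tries to open:

strace -e trace=open /bin/ps aux 2>&1 | grep '/dev/'

Any attempt to open an unexpected configuration file under /dev would be worth a much closer look.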
/proc, the administrator's friend
When looking for rootkits the /proc directory plays a very significant role. A lot of important information that rootkits are trying to hide can be found here. The programmers of cracker tools are aware of this, and some rootkits do hide some of the information from /proc, but this directory is still almost always worth a look. Amongst other things, /proc lists all processes that are running along with their process ID. If ls and find have not been manipulated in such a way that they will hide some of these files, then a comparison of the output from /proc with that of ps may already be enough to detect a rootkit. For hidden processes a look in /proc/PID/stat, or in the easier to understand /proc/PID/status, is well worth it. It is also possible that ps and top remain unchanged but that /proc no longer shows every process. Entries with network information can also be informative. The most important task is to check the open ports and to compare them with the output of netstat or a similar tool. This can be done in the folder /proc/net, which contains files that will give you the open sockets for every protocol. A portscan of your own machine from a secure source can also be very revealing. If this throws up open ports not shown by netstat, this is a pretty definite indication of a rootkit. Unexpected open ports should set an administrator's alarm bells ringing in any case.
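A sketch of that ps-versus-/proc comparison – a few differences are normal, because processes start and exit between the two commands:

ps -e | awk 'NR > 1 { print $1 }' | sort -n > /tmp/ps.pids
ls /proc | grep '^[0-9][0-9]*$' | sort -n > /tmp/proc.pids
diff /tmp/ps.pids /tmp/proc.pids

A PID that persistently appears in /proc but never in the ps output deserves a look at its /proc/PID/status file.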
Rootkit found: what now?
Should your investigations actually turn up a rootkit, the first thing is to separate the system from the network. This enables you to examine the machine properly and to clean it. During your analysis it is not
only important to find out who has broken in; more significant is how the cracker gained access to the system and which weak point he used to do it. This is where log files once again play a crucial role. One important question is whether the system needs to be completely re-installed or whether it is sufficient to remove the modified programs and backdoors. There is no easy answer to this. It depends, amongst other things, on how much you can find out about the break-in and whether you can really trace all the cracker's actions. Otherwise it is possible that you might overlook a backdoor and the intruder could break in again as soon as the machine is back on the network. In that case nothing would have been gained from an administrator's point of view. You can use the fact that crackers normally return to your advantage, however. Provided the administrator knows all of the cracker's access points, but not his identity, the administrator can set a trap for his adversary. A "honeypot" allows him to watch every action of the attacker, to track him and to study his behaviour. Such studies enable the administrator to draw conclusions about the cracker's motives and to anticipate his future behaviour. If he has only cracked this machine by accident it is unlikely that he will bother it again. After all, he must assume that the administrator has now plugged the gaps and will be reading his log files with renewed interest. However, if the intruder was searching for internal information or had other motives for breaking into this host specifically, then he probably continues to pose a threat.
Info
"Rootkits – How Intruders Hide": http://www.theorygroup.com/Theory/rootkits.html
"Know Your Enemy": http://project.honeynet.org/papers/
FEATURE
The Bad packets stop here
IPCOP FIREWALL

IPCop firewall isn't as well known as firewall proxying software such as Freesco or E-Smith, but even in its early stages it has some good features that you might not see elsewhere. Richard Ibbotson takes a closer look
IPCop Linux is a complete Linux distribution, which has the sole purpose of protecting the networks it is installed on. By implementing existing technology, outstanding new technology and secure programming practices, IPCop is the Linux distribution for those wanting to keep their computers and networks safe. Whether for your home or SOHO, IPCop may be all the firewall you will ever need. At the time of writing there is a 0.1.1 stable release, which you can register after downloading. You don't have to register, but it's worth it for the support and help that you will receive. Releases 0.1.x are IPChains-based and when 0.2.x appears it will be IPTables-based. To install the software, download it from the UK Linux Web site, or alternatively you can get it from this month's coverdisc. Like most instant firewall software, you can create a floppy disk and then boot the computer you intend to use as a firewall from this disk. To make a floppy disk use the command:

dd if=/mnt/cdrom/images/boot.img of=/dev/fd0 bs=1k count=1440

Or if you are lucky enough to have a CD burner, you can put your downloaded ISO image on to a CD and boot your computer from that. For more info about this see the online installation documents, which are extensive and extremely helpful.
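If you take the CD route, the image can be burned with cdrecord. This is only a sketch – the speed, the device triple and the image filename will differ on your system (cdrecord -scanbus lists the available devices):

cdrecord -v speed=4 dev=0,0,0 ipcop-0.1.1.iso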
A brief feature list
● Analogue/ISDN/ADSL modem support
● PPtP ADSL support
● PPPoE support
● USB ADSL firmware upload area
● Integrated Java-based SSH shell area
● DHCP server
● Intrusion Detection System – Snort
● DMZ pin-holing capacity for publicly accessible servers
● Creates a virtual private network easily
● Full status display
● Full traffic graphs
● Full connections information
● Full system logs
● Web proxy logs
● Firewall logs
● Remote shutdown/reboot area
● IPCop GNU/Linux updates area
There are many more features, which you can read about by having a look at the IPCop Web site.
Before going any further with the installation, or perhaps before you even begin, you might like to consider the ways in which you are using your network now and the ways in which you may need to change things in order to improve the security of your data. To make sure that you are doing things in the right way you might want to write a few things down and make a few mental notes in order to get things straight. It's good to think about network security in this way so that you don't have to completely re-install everything later on. You might even find that a little less haste will take care of some of the holes that are presently in your network or home computer. Wisdom and network security go hand in hand. After your first boot you'll see the LILO boot screen, which welcomes you to IPCop. You can then press Return to continue. You will then see the all too familiar Linux boot messages scrolling down the screen. Language selection is next, and then you'll be asked if you want to install from the Internet or from CD-ROM. At the next screen the installation program, which looks a lot like the Red Hat one, will tell you that it will now format the hard disk. There is a colour-coded scheme that may help you to understand what to do with your various network interfaces.
Red, gold and green
IPCop uses a familiar method of describing the various parts of your network, colour-coded to highlight the dangers posed by where they are in that network. First of all, the IPCop computer is classified as RED and connects to the untrusted Internet. This is the most dangerous part of your network and should be treated with contempt until proved otherwise. Once past this part of the network you have your protected computers, which are considered a GREEN network, so if IPCop has been configured correctly these should be safe and secure. Any further parts of the network that you might want to be available to the outside world, such as Web and FTP servers, are classified as an ORANGE network. This ORANGE network is only permitted access to the GREEN network by a secure channel, with the firewall maintaining security. In the IPCop computer you will need a minimum of one Network Interface Card (NIC) connected to your GREEN network. If you are using a cable modem then that requires another NIC, and if you have an ORANGE network then another NIC for that system is also needed. In the simplest of networks you have one RED computer running IPCop in addition to your home computer. You would connect your home machine to the IPCop machine via a twisted-pair Ethernet cable. If you have more than one computer in your GREEN network then you would connect these via a hub rather than a single twisted-pair cable.
First things first
After you have given IPCop some basic values for netmask and network addresses the software will install. It will then ask you for things like your location, time zone and the name of the machine. You will also be asked for things like the name of an ISDN card or perhaps a USB-based ADSL modem. If you get lost with any of this check the IPCop Web site: there's plenty of installation help. There is also an excellent online administration manual, which is written by Charles Williams, the project manager for IPCop. After finishing your installation you can then connect to your IPCop machine with a browser across your home or small business network. You can view the status of data packets moving across your protected network interface to the outside world. In the Administration Window (AW) there is a dial-up section, which allows you to control a 56K modem, ISDN interface or ADSL interface. You can also change the dial-up number and DNS settings. There is even an SSH section, which allows you to make remote access to your firewall by SSH a possibility (or not, if that's what you want). This option is disabled by default, so if you are a bit forgetful then you don't need to worry about it. Web proxying can be done in the AW section as well. Most people recommend that if you are using a dial-up firewall then it should be a caching, proxying firewall. One of the nicer points of the AW section is the easy configuration of Snort, which is an intrusion detection system. Quite a few people who are new to Snort spend a lot of time learning how to use it. It's a very comprehensive piece of software and quite good when it's used properly. There is a rather nice shutdown and reboot feature built into the AW for remote administration, which finishes off the suite of tools that are available. This means that you can then remove the monitor and keyboard from the IPCop machine. Updates can be obtained through the built-in updates AW. This is similar in operation to the Debian GNU/Linux or BSD update tools.

Patchwork
Recent IPCop patches have seen the following fixes:
● Shadow passwords are enabled
● VPN config can be successfully restored
● Netmask 255.255.255.255 is valid
● FTP masquerading module is loaded with the correct in_ports option
● Snort rulesets are updated
● Squid FTP vulnerability fixed
● Squid SNMP vulnerability fixed
● Squid HTCP vulnerability fixed
● Bug fixed where log rotate did not compress rotated logs
This highlights the fact that the IPCop developers aren't content with things the way they are. Their intention is to improve the software as quickly as is possible.
Conclusion
Did we encounter any problems with the software while it was being tested? Not really. The only thing that we found slightly annoying was that some of the higher ports are left open by default. It is thought that in later versions this will be changed. After installing the software you can get help and support from the various IPCop mailing lists. If you're a developer and you want to be involved then you might like to know that IPCop is going through a major re-write just now. There is a developer's list just for you, where you can actively discuss the future and improvements of IPCop. If you have tried all of the others and they didn't work, then why not try IPCop? You'll be pleasantly surprised. It's a quick and easy way of making your home or office network more secure than it was.
The author Richard is the chairman and organiser for Sheffield Linux User’s Group. You can view its Web site at – http://www.sheflug.co.uk
Info
IPCop homepage: http://www.ipcop.org
Installation and configuration: http://www.ipcop.org/cgi-bin/twiki/view/IPCop/IPCopInstallv01#Caveats
Administration manual: http://www.ipcop.org/cgi-bin/twiki/view/IPCop/IPCopAdministrationv01
Security issues: http://www.securityfocus.com
Download: http://mirror.uklinux.net/ipcop or ftp://mirror.uklinux.net/ipcop/
Mailing lists for support: http://www.ipcop.org/cgi-bin/twiki/view/IPCop/IPCopMailingLists
IRC channel #ipcop on: http://irc.openprojects.net
KNOW HOW
Linux Authentication: Part 3
THE LIGHTWEIGHT DIRECTORY ACCESS PROTOCOL

In the final article in this series Bruce Richardson widens the scope. As well as acting as a password database, LDAP can also store a huge range of user and network information

What is a directory service?
A directory service is a specialised database used to store information in a freeform, flexible hierarchy. It can hold any kind of information, but network/Internet directories typically hold information on users or network resources. Directories differ from standard databases in that they are optimised for fast information retrieval rather than robust data storage or bulletproof transactional updates. You wouldn't use a directory service to store your financial records, but you might well use one to store your company address book. Any information that isn't constantly updated but is frequently looked up is a possible candidate for storage in a directory. Novell's NDS and Microsoft's Active Directory are two examples of commercial, proprietary directory services.
What is LDAP?
LDAP is a protocol for accessing directory services over a TCP/IP network. It was originally designed to be a lightweight front-end to the much grander and more complex X.500 directory access protocol. Since the X.500 protocol is complex, difficult to implement and runs over the little-used OSI network protocol stack, early adopters of LDAP found they were better off using LDAP on its own as a front-end to simpler datastores, which is how LDAP is now most commonly used.
Why should I use LDAP?
● LDAP offers a way to centralise information on all your network resources, greatly reducing administrative overheads even if you run a mixture of operating systems.
● It's an Open standard.
● Disparate applications, which normally each have their own datastores, can share information, eliminating duplication and a potential source of error.

Getting OpenLDAP
The source for the OpenLDAP server and utilities is available from the main site at http://www.openldap.org/software/download. At the time of writing the latest version is 2.0.23. Be warned, though, that the 2.x version of OpenLDAP has a greatly increased set of dependencies, mostly for the secure authentication methods required for version 3 of the LDAP protocol. We recommend using the versions packaged with your distribution unless you have specialist requirements. At a bare minimum you need the package containing the slapd daemon, which stores the directory information.
The OpenLDAP suite
The OpenLDAP project maintains and develops a suite of software for maintaining and querying LDAP servers. It is the only practical Open Source implementation currently available (the Michigan University version is a reference implementation only and not actively maintained) and will be used for all the examples in this article. This article cannot hope to cover the whole vast topic that is LDAP. After an overview of the structure of LDAP directories it will show you how to place data into an OpenLDAP directory and make some basic use of it.
LDAP objects
There are two parts to an LDAP datastore: the schema, which defines what kind of objects may be stored in the directory, and the database, which contains a record for each object stored. For each object type the schema defines a set of attributes – some of which are compulsory, some optional. The schema defines how many instances of each attribute an object may have and the properties of each attribute (is it case sensitive, what format may its data take, and so on). This article will not examine the details of schemas (you will normally never have to edit your own schema files), but it is useful to know of their existence because you can extend the capabilities of your LDAP directory by including new schemas. Nor will this article enumerate the LDAP objects found in the core schemas, or their various attributes. Hopefully the examples given will be enough to give you the general idea.
LDAP hierarchy
Information in an LDAP directory is organised into a hierarchical tree structure in much the same way as your computer's filesystem is organised. It starts with a root node (or "suffix"), to which a number of nodes are appended. These nodes in turn may contain sub-nodes and so on. Each node is represented by an object in the datastore. The root node, for example, might be represented by an "organization" object ("o"), while the subnodes are most often organizationalUnit objects (ou). A node is named for its position in the hierarchy, starting with the least significant name. So the Returns department within the Sales department within the Example company would be named "ou=Returns,ou=Sales,o=Example".

Figure 1: Our example hierarchy
The distinguished name
Any database needs a way of uniquely identifying each record. LDAP objects use their dn attribute, where dn stands for Distinguished Name. The dn is constructed from the path to the tree node where the object is located and an attribute that uniquely distinguishes the object from all other objects in that node. This attribute is referred to as the rdn (Relative Distinguished Name) and is often the cn (Common Name) or uid (login id) of the person in question. So the dn of Harry Chalmers, who works in the Sales department of the Example organisation, might be "cn=Harry Chalmers,ou=Sales,o=Example" or "uid=hchalmers,ou=Sales,o=Example". The dn of the Sales department itself is "ou=Sales,o=Example".
The LDAP protocol
The LDAP standard defines a communications protocol. It's not at all concerned with how a directory service actually stores its information. Current LDAP implementations use a wide range of datastores, ranging from flatfile text databases to fully-fledged SQL database servers.
The people unit
What happens though if Harry moves to the Accounts section? LDAP objects can't be renamed, so his old entry would have to be deleted and a new one with the new dn would have to be created. This might of course have unwanted side effects. To avoid these it's usual to create a notional "People" or "Staff" organisational unit, put all the staff in it and use that ou in the dn. LDAP objects can be in more than one ou, so you can still reflect your organisational structure in the directory. With this scheme, Harry's dn is always "uid=hchalmers,ou=People,o=Example" no matter how many times he moves within the company.

Example structure
This article will show the creation of an LDAP directory for the Example organisation, whose structure is shown in Figure 1. As you can see, it's a very simple organisation with only two departments (though we will add the notional "People" unit) and two members of staff.

The slapd.conf file

#################
# Global settings

# Schema and objectClass definitions
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/inetorgperson.schema

# Schema check allows for forcing entries to
# match schemas for their objectClasses
schemacheck on

pidfile /var/run/slapd.pid
argsfile /var/run/slapd.args
replogfile /var/lib/ldap/replog
loglevel 0

###########################
# Database backend settings

# The backend type, ldbm, is the default standard
database ldbm
# The base of your directory
suffix "o=Example"
# Where the database files are physically stored
directory "/var/lib/ldap"
# Indexing options
index objectClass eq
# Save the time that the entry gets modified
lastmod on

rootdn "uid=sysadmin,ou=People,o=Example"
rootpw "notverysecure"

####################
# Access permissions

# The userPassword by default can be changed by the entry
# owning it if they are authenticated. Others should not
# be able to see it, except the admin entry below
access to attribute=userPassword
        by dn="" write
        by anonymous auth
        by self write
        by * none

# The admin dn has full write access
access to *
        by dn="uid=sysadmin,ou=people,o=example" write
        by * read

Multiple back-end databases

###########################
# Database back-end settings

database passwd
suffix "ou=people,o=example"
file "/etc/passwd"

database ldbm
suffix "o=example"
directory "/var/lib/ldap"
Configuring the server
First of all you'll need to get the OpenLDAP server – see the "Getting OpenLDAP" boxout. Once installed you should edit the slapd.conf config file. This will usually be located somewhere like /etc/ldap/. A simple example can be seen in the sidebar. Simple it may be, but most OpenLDAP installations will not need anything more complex than this. The slapd.conf file is divided into two parts. The first contains global settings for the server and the second contains settings for each of the various databases amongst which the administrator chooses to divide the directory information. The access control settings, which look ostensibly like a third block, can be global or back-end-specific.
Global settings
The first block of settings imports schema definitions needed for a typical range of storage tasks. The third block contains administrative settings, which you will normally never need to worry about or alter. The "schemacheck on" setting makes the server reject records that don't match the defined schemas. This may be turned off for a small performance gain, but if you subsequently enter bogus records this can cause indexing problems and a dramatic slowdown.
Database section
The example database configuration is very simple. It specifies one back-end, using a traditional *nix dbm hash-database system. This back-end contains the whole directory. It is possible, however, to split the directory information across multiple back-ends of differing types. All that is needed is to add entries for each back-end, specifying the tree node from which each back-end starts. In the config shown in the "Multiple back-end databases" sidebar, the "people" organisational unit information is retrieved from the /etc/passwd file, while other information is stored in the dbm database. The order in which back-ends are allocated is significant. When a back-end is assigned a suffix it is assumed to include that node and all subnodes which have not already been assigned. In other words, when doing a lookup the server goes through the list of back-ends in the order they are defined until it finds one that includes the part of the directory tree it is looking for, at which point it looks no further. So if the order in which the two back-ends in the sidebar are defined were reversed, the password database would never be used. The rootdn and rootpw settings together define the name and password of a user who may administer the database remotely, even if there is no actual entry for that user in the directory. This is a quick hack to do some of the initial setup. These settings should be deleted as soon as a proper entry for the sysadmin user, complete with password, has been placed in the directory. In addition to the passwd and dbm databases shown already, OpenLDAP can retrieve information from SQL database servers or arbitrary shell scripts.
Access control settings
Access control settings can be part of the global or back-end-specific settings. Back-end settings override global ones for their specific section of the directory hierarchy. There isn't space here to go into the structure of OpenLDAP access settings. The first rule in the example allows any user to log in or to change their own password, while the second sets default access, giving full rights to an admin user and read-only rights to anyone else.
example.ldif

dn: o=Example
o: Example
objectclass: top
objectclass: organization

dn: ou=People,o=Example
ou: People
objectclass: top
objectclass: organizationalUnit

dn: ou=Sales,o=Example
ou: Sales
objectclass: top
objectclass: organizationalUnit

dn: ou=Accounts,o=Example
ou: Accounts
objectclass: top
objectclass: organizationalUnit

dn: uid=hchalmers,ou=People,o=Example
cn: Harry Chalmers
givenname: Harry
sn: Chalmers
mail: hchalmers@example.com
uid: hchalmers
userPassword: default
ou: People
ou: Sales
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson

dn: uid=cwalsh,ou=People,o=Example
cn: Carrie Walsh
givenname: Carrie
sn: Walsh
mail: cwalsh@example.com
uid: cwalsh
userPassword: default
ou: People
ou: Accounts
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
Extending the server
LDAP directories are intended to be easily extensible and the OpenLDAP server makes this simple. If, for example, you wanted to use this server to store Netscape Roaming Profiles then all you need to do is include the Netscape schema and add an appropriate access configuration line.
Enabling your changes
Once you have configured your installation the way you want it, restart the server. From the command line this is as simple as:

# /etc/init.d/slapd restart
Adding records to the directory
LDIF (the LDAP Data Interchange Format) is a standard format for representing LDAP data, which is guaranteed to work no matter what the actual datastore back-end. An LDIF representation of the Example organisation is shown in the sidebar example.ldif. The records must be entered in sequence so that no object is inserted before an object that it relies upon. If you look at the records you can see the Distinguished Name attributes that uniquely identify each object, the attributes that are used to store personal and system information about the two staff members, and the objectclass properties, which identify the object type according to the directory schema.
Info
OpenLDAP homepage: http://www.openldap.org/
Linux LDAP HOWTO: http://www.linuxdoc.org/HOWTO/LDAP-HOWTO.html
LDAP/PostgreSQL HOWTO: http://www.samse.fr/GPL/ldap_pg/HOWTO
LDAP to DNS Gateway: http://ldap2dns.tiscover.com/
Experienced data mungers will note that this highly structured data, as can be seen in example.ldif, can easily be generated through scripts.
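As a taste of this – purely a sketch, which assumes ordinary users start at UID 500 as on Red Hat, borrows both cn and sn from the gecos field, and will still need tidying before it passes the schema check – the skeleton of such an LDIF generator might be:

awk -F: '$3 >= 500 {
    print "dn: uid=" $1 ",ou=People,o=Example"
    print "cn: " $5
    print "sn: " $5
    print "uid: " $1
    print "objectclass: top"
    print "objectclass: person"
    print "objectclass: organizationalPerson"
    print "objectclass: inetOrgPerson"
    print ""
}' /etc/passwd > users.ldif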
Entering the data
If logged in as root at the LDAP server, you can use the slapadd tool to insert the data directly. In this case you should shut down the server and run:

# slapadd -l example.ldif

From a remote computer you can use the ldapadd tool (the -x and -W flags select a simple bind and prompt for the password):

# ldapadd -x -W -D "uid=sysadmin,ou=People,o=Example" -h ldaphost -f example.ldif

Finally, you can use the ldappasswd tool to give the users some proper passwords.
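To set Harry's password, for example, something along these lines should work with the OpenLDAP 2.x tools (-W prompts for the administrator's bind password, -S for the new user password):

$ ldappasswd -x -D "uid=sysadmin,ou=People,o=Example" -W -S "uid=hchalmers,ou=People,o=Example"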
Using the data
Now that you have the data in the directory, you can make use of it. The ldapsearch tool can be used to query the server and extract data from a specific set of records. Running a command line like this:
$ ldapsearch -x -H ldap://ldaphost:389/ -b "o=Example" "ou=Accounts" givenname sn mail

should return the names and email addresses of everybody in the Accounts department.
All that effort for that?
That's just for starters, and even then it's pretty useful. Pipe the output of that command through an awk or perl filter and you can feed the result to Mutt's query_command address lookup function. Or you can simply point Netscape (or even Outlook Express!) at the LDAP server to have a ready-made internal address book.
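As a sketch of the Mutt end of this idea – the script name is invented, and the awk filter relies on the attribute order returned for our example directory – query_command expects a throwaway first line, then one result per line in the form address, tab, name:

#!/bin/sh
# ~/bin/mutt-ldap.sh - hypothetical address lookup for Mutt's query_command
echo "Searching o=Example..."
ldapsearch -x -H ldap://ldaphost:389/ -b "o=Example" "(cn=*$1*)" givenname sn mail |
awk '/^givenname: / { g = $2 }
     /^sn: /        { s = $2 }
     /^mail: /      { print $2 "\t" g " " s }'

The matching line in ~/.muttrc would then be something like:

set query_command = "~/bin/mutt-ldap.sh '%s'"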
Further uses for the directory
● Password database – The pam_ldap config sidebar shows a sample config file for the pam_ldap module. Properly configured, this module can be used to add LDAP-based password authentication to any PAM-enabled application – which you will be an expert at, having read the first article in this series.
● Network Resources – The LDAP Name Service Switch module allows your systems to look up a range of traditional *nix networking information, including group membership, hostnames and mail aliases (a sample nsswitch.conf fragment follows this list).
● Mail delivery – Most of the popular Open Source Mail Transport Agents, such as Exim, Postfix and Qmail, can be configured to do LDAP lookups to make mail delivery or routing decisions.
● Netscape Roaming Profiles – With only the smallest modifications to a standard OpenLDAP installation you can use an LDAP server to store users' Netscape browser preferences (bookmarks and so on). Instructions for this can be found in the Linux LDAP HOWTO (see the Info boxout at the end of the article).
● DNS back-end – The ldap2dns utility creates DNS records directly from an LDAP directory. It can be used with both Bind and djbdns to eliminate the admin tasks of flat-file editing, zone-file editing and all the clunkiness of maintaining a distributed DNS set-up.
● Gateway to traditional services – One of the database back-ends that OpenLDAP recognises is "shell", in which a shell script is run, returning data from an external process in a format that slapd can serve up as LDAP information. There are sample scripts on the OpenLDAP site that can be used to act as gateways to standard *nix daemons like fingerd, but you can go further than that. Once you've learned the format for returning data, then anything you can pull out of a script can be served up as LDAP information.
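On the name service side mentioned in the list above, the relevant fragment of /etc/nsswitch.conf might look something like this once the nss_ldap module is installed – a sketch, as the exact service list varies between distributions:

passwd:  files ldap
group:   files ldap
shadow:  files ldap

With that in place, getent passwd should list the directory users alongside the local ones.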
Pam_ldap config

# Your LDAP server. Must be resolvable without using LDAP.
host ldaphost

# The distinguished name of the search base.
base o=Example

# The distinguished name to bind to the server with if the
# effective user ID is root. Password is stored in
# /etc/ldap.secret (mode 600)
rootbinddn uid=sysadmin,ou=People,o=Example

# Do not hash the password at all; presume the directory
# server will do it, if necessary. This is the default.
pam_password exop

Don't stop there
You now have your directory running, a powerful set of command line tools and a simple yet powerful data description language with which to manipulate and maintain it. The only limit to the uses LDAP can serve within your network is your imagination.
Summary
LDAP can be used to centralise the administration of a huge variety of network tasks. With careful planning all your network resources can be described and configured in one place, and because it's a popular Open standard it can be used to link all your network operating systems. If you aren't already using it, it's time to ask yourself why.
COMMUNITY
LINUX: AN INTRODUCTION

This is a handy little guide published in the style that Dorling Kindersley is well known for – beautifully illustrated and very user friendly. The book starts with a brief history and introduction to the various distributions. The examples in the book are all based on Red Hat, but most of the information is generic and can be easily adapted for any of the other distributions. There is a very clear table of hardware requirements needed to run Linux and a nice summary of its pros and cons, although the hardened Linux user may feel that the cons are not qualified enough. For example, in noting the lack of software availability, such as Microsoft Office, it does not mention that there are good alternatives to proprietary software. Any technical terms are clearly defined and references are made to those definitions wherever the term is used. Having introduced the user to Linux the book then shows how to log on and off. It doesn't attempt to explain how to install Linux – that would be beyond the scope of such a small handbook – and admits that the novice user may need someone else there to install the operating system. The rest of the book is then devoted to programs and applications; it covers how to mount and unmount devices and how to access the Internet. It concentrates on GNOME and only briefly mentions KDE as an alternative. The final chapter is devoted to locating and installing software. Given that the book is aimed at novices, we felt that it should have advised users to stick to tried and tested programs and avoid anything that still needs work done on it. Aside from these considerations, the book is a very good introduction to the subject, and excellent value at under a fiver.

Author: Brian Cooper
Publisher: Dorling Kindersley
Price: £4.99
ISBN: 0-7513-3582-7
RELIABLE LINUX

In contrast to the previous book, this one is aimed at experienced users. The target audience is systems administrators, or similar, running a server under Linux. The book assumes that however reliable a system is it will eventually go down, and it aims both to minimise the chances of that happening and to minimise the damage caused when it does. The book opens with a guide to assessing the risk of your server failing and goes on to give details on choosing the hardware, how to assess its reliability and how to run it optimally. It also gives examples of how to tweak the software when necessary. The following chapters then discuss the software: which version of the kernel to use, how to choose a distribution, installation and how to configure the system to be as reliable as possible, giving examples and possible scenarios. Other chapters cover storage and backups and how to monitor the Linux server in use to catch potential problems. It also covers how to recover data, should the worst happen. This is a very good guide for anyone who uses a Linux server in business. We all talk about how reliable Linux is, but eventually the best system in the world will suffer from some problems, and advance planning could prevent these from becoming serious. This is a guide for doing just that and could help solve many potential problems.

Author: Iain Campbell
Publisher: Wiley Computer Publishing
Price: £33.50
ISBN: 0-471-07040-8
REPORT
RHCE: Red Hat certification
GETTING CERTIFIED

Now that Linux is becoming recognised in the corporate world, what can you do to make your skill set more noticeable and relevant, so as to gain that new job? Robert Morris has the answer
RHCE is an acronym for Red Hat Certified Engineer, and is Red Hat's certification programme for Linux professionals. It's one of a number of certification schemes throughout the IT industry – with perhaps the most ubiquitous being Microsoft's MCP and MCSE qualifications – all geared towards giving the employer or manager a benchmark against which the suitability of a candidate for a job position or project can be evaluated (and, of course, generating profits for the vendors who are providing the certification). The MCSE, in many respects the Microsoft parallel to the RHCE scheme, has received bad press in some quarters for being too easy to pass (due to it consisting entirely of multiple-choice questions), and therefore not a good reflection of the "real world" skills which professionals dealing with Microsoft systems should possess. Red Hat has countered this with the lab-based tests in the RHCE, which account for the majority of the marks. It would have been very easy for Red Hat to make money from dishing out easy-to-pass certificates; however, it has resisted this temptation and instead taken the view that the industry needs a well-respected Linux certification programme, and that to gain such respect the exam needs to be challenging and reflective of candidates' skills and experience, not just their
ability to “cram” answers to simplistic multiple choice questions. This is an important point, for one of the hurdles facing the adoption of Linux in a corporate environment is the extra difficulty of selecting and managing Linux-skilled people by managers, who most likely won’t possess Linux skills themselves. Such managers need to have a reliable method of evaluating personnel for Linux projects and job positions, and the presence of a Linux certification scheme which carries industry-wide respect can surely only help in this.
The exam
Before commencing the exam a non-disclosure agreement must be signed, to prevent details of the questions and scenarios used being leaked, thereby ruining the validity of the certificate. There are three components to the exam – debug, general knowledge and install/setup. Each of the three components is given equal weight, and in order to pass, an overall score of 80 per cent is required, with no less than 50 per cent in each component. The debug lab exam is 2.5 hours in duration and consists of four scenarios where you are given an already installed Red Hat system and a fault report. Although no external documentation may be referred to, man pages and other online documentation can be consulted, as you would ordinarily find on an installed system. The general knowledge exam is multiple-choice and is an hour in duration. No documentation may be referred to whatsoever in this part of the exam. The install lab ("Server Install and Network Service Setup", to give it its full title) is 2.5 hours in duration. You are given a specification to work to, and a "clean" machine, onto which you install Red Hat and configure it as specified. As with the debug exam, you are allowed to refer
to any documentation that may be found in the Red Hat distribution. In total, together with coffee break and lunch, the exam takes up a full day. It's conducted at Red Hat training centres and is incorporated into their training programmes, as well as being a stand-alone exam. The Rapid Track Training and Certification Course is a week-long programme consisting of four days of training, with the exam immediately following on the fifth day. The cost of the exam only (module RH302) is £485, plus VAT. RH300, the Rapid Track course, costs £1,599, and includes the RHCE exam. This compares quite favourably with other lab-based courses.

My experience
I originally booked for RH302 in my home town of Manchester, but this was cancelled due to lack of demand, so my booking was transferred to Red Hat's headquarters in Guildford. Having stayed in London the night before I had to struggle with the rush hour trains to get down to Guildford, and only just made it to Red Hat by 9am. After the initial briefing, we got straight into the debug lab. I was impressed with the examining system, which is very well designed – using kickstart files downloaded from a remote server to give you your scenario installations. These are randomised, so should you look over the shoulder of the person in front, they will not be tackling the same problem as you, which is a good idea. After a coffee break came the multiple-choice exam. You were able to collect your marks immediately afterwards from the examiner, although one criticism I had was that the examiner could not tell me which questions I had answered incorrectly. This is a pity, because I would have liked to look up the correct answers afterwards, out of interest. Lunch was provided by Red Hat, which was a nice touch. I had quite an interesting talk with the examiner over lunch, who proved to be very knowledgeable. The afternoon session was then taken up with the install lab. As promised, I got my results emailed a few working days later (I did pass, by the way!). My certificate arrived in the post a couple of weeks afterwards. This came with an RHCE lapel badge, which was a pleasant surprise. The certificate could benefit from being printed on something other than standard 80gsm paper, although what is probably more important is that the certificate number can be used to verify an RHCE using the form on Red Hat's Web site (the URL for this service, and other RHCE resources, are provided on my RHCE page at http://www.r-morris.co.uk/rhce.html)
Overall, the design and implementation of the RHCE programme is first class. It is a good measure of “real life” Linux skills, which of course can only be gained with an amount of hands-on experience. Although Red Hat is eager to point this out at every opportunity, the RHCE exam is under the same section of their Web site as the training courses, and their Rapid Track course, which combines the exam with four days of intensive training, only serving to reinforce the association. Whilst certification may be an obvious follow-on from a training course, in order to gain the respect of management and decision makers, the RHCE needs to have some “real” experts on-board – people who have considerable Linux skills built from experience, and are obtaining certification to prove it. If it turns out that the vast majority of certificates are issued to those who have only just completed a training programme, then I fear that the respect Red Hat seems so anxious to acquire may be lost. The RHCE has met with some opposition from certain sections of the Linux community. It is argued, with some justification, that its not appropriate for a company such as Red Hat to be setting standards and that this should instead be a community-driven process, and that any profits made from certification should be ploughed back in to the Open Source movement. Unfortunately we do not, at this point in time, have the luxury of choice – in the corporate world Linux is still an outsider in the majority of cases, and therefore we need a good certification scheme for Linux professionals, with the respect of the industry at large and not just within the community, if Linux is to penetrate into the mainstream. At the moment, for this the RHCE is our best bet. Issue 19 • 2002
The author Robert Morris is a freelance Linux specialist. He has been using Linux since kernel version 1.0.9
REPORT
Company focus
ENTERPRISE MANAGEMENT CONSULTING
Support and consultancy on a professional level
Enterprise Management Consulting has been providing consultancy and support services for Novell and Unix-based systems since 1995 – extending its services to include Linux a year later. Within two years its structure had changed to support Linux exclusively. It was at that time Red Hat chose it as a certified support partner in the UK. The company currently maintains systems for a wide client base, including small businesses, international governments and high-availability Web sites.
Bynari (an MS Exchange alternative)
With the publication of the standard Internet mail model, the market for proprietary systems has started to shrink. Customers are asking for a universal messaging system where any email client can send and receive email messages and share important information regardless of the Mail User Agent (MUA), platform or method of connecting to the network or the Internet. Bynari has developed and released Insight Server 3.02 to provide complete messaging and collaboration capabilities within the enterprise. It provides lightweight or enterprise-level messaging services within and among the various parts of an organisation's network of people and resources, whilst providing a safe harbour for an organisation's messaging needs by using the Internet mail model. Users' respective sites may have vast geographic, technological and social differences, which demand a robust and flexible framework. Unlike closed, proprietary commercial mail systems, Internet messaging defines a series of specifications that are Free and Open for all. Insight Server 3.02 provides the IMAP, POP3 and SMTP mail protocols and allows users to access global address books built on a standards-based directory server (LDAP). For users needing calendar and scheduling services, it provides free and busy time access, shared folders, and meeting requests and replies. Mobile and remote users find Insight Server's IMAP protocol a pleasant change from having to manage and synchronise POP3 mailboxes. IMAP provides advanced server-side ownership of the user's mail and multiple mailboxes for management of differing kinds of email. With these capabilities, employees can directly access the company's global address book as well as their own personal address books to send and receive mail from remote locations.
EMC specialises in providing its customers with cost-effective solutions to a wide range of technical problems. With its commitment to long-term partnerships, it ensures efficient use of available resources, allowing its customers to reap the maximum benefit. The company prides itself on an outstanding record of customer satisfaction and its ability to provide fast, efficient and reliable support remotely. With the majority of support calls being resolved by the help desk within minutes, EMC can ensure that its clients are able to run their businesses with the minimum of downtime. It employs consultants with backgrounds in a broad range of different environments, who have facilitated the installation and support of mission-critical solutions for companies such as J.P. Morgan, Bank of America, CSC and Andersen Consulting, with the highest quality of Linux support assured at all times. There are a number of support options available to clients, ranging in price from £1,200 a year for unlimited Linux support calls on a single Linux server, up to customised services for entire organisations. Contracts are available with any level of support, including 24/7. EMC has developed and maintained partnerships with some of the leading lights in Linux, including Red Hat, MandrakeSoft, SuSE and Caldera. It has also strategically allied itself with Bynari Inc. and Ayrsoft, both of whom are producing leading Linux-based business solutions.
Info
Enterprise Management Consulting: http://www.emcuk.com – 0208 659 2000
Ayrsoft: http://www.ayrsoft.com
Bynari Inc.: http://www.bynari.net
Ayrsoft
Ayrsoft is a systems design company based in Irvine, Scotland, which specialises in supporting businesses with process management and communications. Its eBoxit solution (see below), which started life in 1995, has been developed as a server that's secure and stable behind a firewall. By continuing to develop the server to meet ever-changing needs it now includes groupware, VPN (Virtual Private Network) capabilities and CRM (Customer Relationship Management) tools. As an off-the-shelf solution it has been validated by NetKonect, MandrakeSoft and WorldCom. It is also IBM 'Server Proven' and is part of the IBM Global Solutions Database. The hardware is based on Pangolin International's A4 Net range of machines. Where the eBoxit scores well is with the supported software. This is produced in partnership with Enterprise Management Consulting.
So what is eBoxit?
eBoxit is the cost-efficient office server, ideal for those businesses wanting to remove themselves from expensive Microsoft licensing procedures. Based around Mandrake Linux and Open Source software, it enables access to six different modules through any Internet browser, regardless of the operating systems in use.

Network Manager
The Network Manager provides the ultimate Internet connection and network administration tool. With security that is second to none, easy installation and complete management of all users, it allows a company to fully share data, schedules and documents between clients, partners, remote workers and even what used to be competitors. In these days of information management too much reliance is put on simple, ineffective and insecure email conversations. The eBoxit range manages email between companies (allowing managers full visibility of company email sent to and from contacts).

Network Manager

Customer Relationship Manager
The Customer Relationship Manager (CRM) is one of the most widely used and useful tools around. With its ability to track on-line communications for suppliers and customers alike, it is a valuable addition for all businesses.

CRM in action

Communications Manager
This module allows secure Web-based email and event scheduling, allowing you to keep in touch with your business and personal contacts wherever you are. With the added benefit of filtering your mail into folders or redirecting it elsewhere, it gives full access whilst you work on the move.

Knowledge Manager
The Knowledge Manager is an essential addition for any business. With a central repository area ideal for Web news and FAQs, you can always find that document when you need it most, and it is fully configurable with revision control and history for every document.

Knowledge Manager running over a browser

Internet Trading Manager
With the Internet Trading Manager you can sell your products 24/7 in a secure environment. Sales and purchase reports can be set up to be viewed on-line for those suppliers and customers keeping track of your sales. There is also a multi-vendor system to collect only the data required by the accounts department, helping to streamline your business.

Where the eBoxit scores well is with the supported software

Human Resources Manager
The final module allows you to allocate resources whenever it is convenient for you. You can use this module to aid in your project planning as the resources are company wide and therefore give you the full picture.
REPORT
Firewall Guru
BOB ZEIGLER
When we're looking at security it would be remiss not to mention Bob Zeigler and the impact he's had on the field of Linux firewalls. Richard Ibbotson caught up with him in the US
Author
Richard Ibbotson is the Chairman and organiser for Sheffield Linux User's Group. You can view its Web site at http://www.sheflug.co.uk.
Bob is one of those rare individuals whose personality, humanity and depth of character are hidden from the casual observer. To get the best out of him you have to get to know him reasonably well, or at the very least talk to him for a few days. His understanding of the human condition is probably quite unique. Without him the world would have been a much sadder place. I talked to him at his home in Cambridge, Massachusetts and at Ryles jazz club on Hampshire Street, a place that was once inhabited by the world's most amazing academics. The club and the musicians are still there but the people are very different. We live in times when the sharing of knowledge is officially frowned upon. Bob comes originally from the state of Wisconsin, which is known for its beautiful scenery and laid-back attitude to most things. He lived in Madison with his parents before going to college at the University of Wisconsin-Madison to take an undergraduate degree in psychology in the 1970s. He says that there is a very large crossover between psychology and computing at the college that he went to. After that he went back for a degree in counselling. His interest in computers began to take shape when he got his first machine from Radio Shack (in more recent times known as Tandy). He says that he didn't know what assembly language was, so he began to take an interest in it and also in BASIC. He taught himself a lot about it and really liked it. His hobby really began to get a grip on him. Bob's line of study made him think that he was going to be a corrections professional for the rest of his life – possibly a person who looks after serious criminals. He said he liked the prisoners and didn't like the guards. He found himself working for the State of Wisconsin and most of the time he performed a role as a statistician. Testing of prisoners was computerised – something that needed a large budget. His colleagues at that time told him that he "had to do a Masters degree in computing science". He objected and said that he couldn't do the maths but they finally got him there. He found himself working with highly academic people and discovered COBOL whilst still studying. If it hadn't been for this strange mix of circumstances Bob would never have become involved in Unix, and his interest in GNU/Linux and network security would not have become what they are today.
The lure of Unix
Bob has worked as a Unix operating systems developer since his days of academia. He was working with a team of people on a mini supercomputer where he had to write just about every line of code. He developed a multi-processor version of BSD 4.3 as a spin-off from the original uniprocessor project. Since then he has worked as a Unix system kernel developer in the Boston area. Then later on things began to fall off and he wasn't doing very much. Eight companies have died under him in ten years whilst he was an employee. Whilst he was working for Hewlett-Packard he began to take an interest at home in firewalls and network security. The people he was working with told him that he should develop this further as his
proper line of work. He began to develop his Linux firewalls site. He was also working on Tiger and Tripwire at the same time. It all came to a head when a publisher approached him and asked him to write a book about ipchains. The first edition of his book Linux Firewalls was published back in November 1999. In between the first and second editions he worked for Nokia as their principal engineer, designing and developing firewall products for Nokia's Ipsilon family. At that time it was explained to him that he should write his second book whilst at his place of work. He thinks that his first book, which gave a more than adequate description of ipchains, was probably lacking in something somewhere. When iptables came along he thought that it was about time to correct any mistakes that he might have made with his first book. He thought that some help from someone else would probably be a good thing, so he asked Carl Constantine to be his contributing author. Carl has worked in the IT business for many years. He has been a technical writer, a programmer and a consultant, and works at the Department of Computer Science at the University of Victoria in British Columbia, Canada. The technical reviewers are Joshua Jensen, who was the first Red Hat instructor and examiner, and John Millican, who has been providing information consulting services since 1978. John is currently certified by SANS GIAC for intrusion detection in depth, and for firewalls, VPNs, perimeter protection and related security issues. He is the chairman of the SANS Unix security certification board. With this impressive line-up of highly qualified and experienced people Bob set to and wrote his second book.
A second coming
Linux Firewalls second edition was published in November 2001. It's all about iptables and is extremely comprehensive, owing to the nature of the people who helped to write it. It was pressure from this book, and from some of the people mentioned above, that brought an update to the iptables application with a view to fixing a few bugs. It covers, in 13 chapters and four appendices, the kind of things that most small SOHO LANs might need. What it doesn't cover are the security policies and procedures that large businesses need. However, if you are someone who is involved in administering a large business or Government network then you might just find that this is a good book to read for some introductory ideas. There is also a Web site that Bob has put together for reference and for creating firewall rules for your network. You might want to have a look at that. The reader is given some basic concepts about network security, such as packet-filtering firewalls, and then he or she is carefully taken through some simple and
more advanced concepts. Bob describes his local area in Massachusetts as "greed central". To the casual observer it is a beautiful part of America to go to. Boston, which is on the other side of the river, has some great attractions. Cambridge itself is home to both the Massachusetts Institute of Technology and Harvard. On the day that I was there the Patriots parade took place, the New England Patriots American football team having won for the first time in years. MIT was its usual busy self, bustling with activity and expectation of the new semester. It was good to be the most popular Englishman in America for just a few hours. To finish off I might mention the dedication which never got published in the second edition, but was published in the errata... "In constant memory of Jake". Jake used to be Bob's pet cat. He loved Jake as much as he loved his wife. Jonas is now his friend and life companion. Long live Jonas. We hope you will be with us next month when Bob Zeigler starts his Linux firewalls tutorial series.
which gave a more than adequate description of ipchains
Info
Buy a Linux Firewalls book: http://www.newriders.com/books/title.cfm?isbn=0735710996
Bob's site: http://www.linux-firewall-tools.com
University of Wisconsin-Madison: http://www.wisc.edu
Errata: http://www.linux-firewall-tools.com/linux/book/errata.html#top
Design your own GNU/Linux firewall: http://www.linux-firewall-tools.com/linux/firewall/index.html
Jazz: http://www.rylesjazz.com/index.shtml
KNOW HOW
Tighten your network security with TCP Wrappers
KEEP IT UNDER WRAPS
Exclusive nightclubs don't let just anyone wander in from the street and the same should be true for your system. David Tansley shows you how to enforce a strict door policy with TCP Wrappers, the meanest bouncer in town
Most nightclubs these days have door staff to restrict access to certain types of clientele. Not only will there be age restrictions, but a dress code may also come into the equation: no white socks or trainers, for example. Just like a doorman, you can restrict access to your computers based on certain criteria. Firewalls might automatically spring to mind but this month we’re going to be looking at a utility called TCPD, more commonly known as TCP Wrappers.
What is TCP Wrappers?
TCP Wrappers is installed by default on most Linux boxes and it can also be built on practically all UNIXes as well. What it actually does is wrap itself around all incoming TCP connections, that is TCP daemons that are controlled via xinetd (or inetd, if you haven't yet moved over to xinetd). When a TCP connection is made on your system, TCP Wrappers (TCPD) is run instead of the required daemon. For instance, if a user connects to your system via FTP, TCPD is invoked rather than the in.ftpd daemon. TCPD will then look at two files: /etc/hosts.allow and /etc/hosts.deny, which – as their names suggest – either allow or deny connections based on rules or patterns. Once TCPD has read these files and found a match, the relevant connection will either be granted or denied. If the connection is allowed, TCPD then writes to syslog – the system messages file – and hands over control to the real daemon that was called, in.ftpd in our example. TCPD's work is now done, and it will sleep until the next connection is invoked through xinetd. If the connection is denied, i.e. it fails due to the access rules or a pattern match in the hosts.allow or hosts.deny file, a message is written to syslog, logging this failed attempt. The connection is then broken and TCPD goes back to sleep awaiting the next connection. Some of the most popular TCP daemons are: telnet, ftp, shell, rdate, tftp and talk. The rule here is that if it is TCP and is invoked from xinetd then you can control access to that service from outside connections.
Getting xinetd to recognise TCP Wrappers
Although TCP Wrappers is installed by default on most Linux systems you will need to tell the xinetd daemon that TCPD is there if you wish to use its services. Generally speaking all the TCP/UDP daemons controlled by xinetd house their configuration files in the /etc/xinetd.d directory. However, this is governed by the includedir entry in the /etc/xinetd.conf file, so check this out first if you don't have an /etc/xinetd.d directory. You may find that all the configurations are stuck in the actual xinetd.conf file. You will need to change every service configuration file where you want TCP Wrappers to handle the connections. For Telnet, you would have an entry like in Listing 1. This shows the Telnet service's configuration file – your Telnet service file will probably be slightly different. Notice the use of the flags and server entries in the Telnet service configuration file: these tell xinetd that it is to call TCPD first, while server_args gives the actual daemon to run after TCPD has finished. Make the same sort of changes for the rest of the TCP service files you wish to protect (see the FTP sketch after Listing 1). After making changes use the service command to restart xinetd:

$ /sbin/service xinetd restart

Or alternatively:

$ /etc/rc.d/init.d/xinetd restart
Those access files
When a connection is initially established TCP Wrappers will first look in /etc/hosts.allow before checking /etc/hosts.deny; if there is a pattern match
then access will be denied or allowed. Confused? Don't be, the general rule of thumb here is to allow access unless otherwise specified. In other words, keep it simple. When TCP Wrappers has been enabled via the service configuration files, if neither the hosts.allow nor the hosts.deny file exists then TCP Wrappers will deny access to everybody, except connections from the localhost (the actual Linux system where TCP Wrappers is running). All connections are logged via syslog to either /var/log/messages or /var/log/secure, depending on your TCP Wrappers installation. The general format of the rules or patterns for both files is:

daemon_list : client_list : [Shell Commands][Banners]

Where both Shell Commands and Banners are optional. We'll take a look at banners later in the article. The daemon list is the names of the daemons you wish to allow or deny. The client list is host names, IP addresses or domain names you wish to allow or deny. To specify multiple daemons or clients use a comma to separate the entries. You can also use wildcards to specify daemons or clients. For instance:

● ALL will match every daemon or every client list
● LOCAL will match the local host only – any host that does not have a '.' in the name
● . (that's a dot) will match anything, a bit like the * in the bash shell. For example, .boo.com will match any domain that ends in boo.com

When making changes to the hosts.deny or hosts.allow file, the changes are dynamic, by which we mean you don't have to restart any daemon or process for the changes to take effect.
Listing 1: Listing of /etc/xinetd.d/telnet

service telnet
{
        flags           = REUSE NAMEINARGS
        protocol        = tcp
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/tcpd
        server_args     = /usr/sbin/in.telnetd
        log_on_failure  = USERID
        disable         = no
}
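By way of illustration, here is what the equivalent change might look like for the FTP service. This is only a sketch: the exact filename under /etc/xinetd.d and the daemon path (/usr/sbin/in.ftpd here) vary between distributions, so check your own system before copying it.

service ftp
{
        flags           = REUSE NAMEINARGS
        protocol        = tcp
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/tcpd
        server_args     = /usr/sbin/in.ftpd
        log_on_failure  = USERID
        disable         = no
}

As before, server points at TCPD and server_args names the real daemon, so every incoming FTP connection is checked against hosts.allow and hosts.deny first.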
When initially learning the rules and patterns, it is best to keep the hosts.deny file to ALL:ALL and only allow access to hosts/daemons specified in the hosts.allow file. Remember – keep it simple, it works! To allow (only) Telnet and FTP from everybody:

/etc/hosts.allow
in.telnetd,in.ftpd:ALL

/etc/hosts.deny
ALL:ALL

Notice the use of the comma to separate the two daemons in the daemon list. To allow access to Telnet only from hosts that have the network address part 192.168.1:

/etc/hosts.allow
in.telnetd: 192.168.1.

/etc/hosts.deny
ALL:ALL
Types of access
As usual most things become clear with examples, so let's do that now. To allow access to all daemons belonging to the domain mycompany.com and to deny access from everybody else we would enter:

/etc/hosts.allow
ALL:.mycompany.com

/etc/hosts.deny
ALL:ALL

Notice in the above example the .mycompany.com: the dot is a wildcard and means "match all domains that have mycompany.com as the end part of their domain name". In the hosts.deny file all other daemons and hosts are denied.
Notice the use of the dot at the end of 192.168.1. This will match all IP (network) addresses that start with the IP number 192.168.1. To allow access to all hosts belonging to the domain mycompany.com but to deny hosts belonging to the bighacker.com domain:
When making changes to the hosts.deny or hosts.allow file, the changes are dynamic
/etc/hosts.allow
ALL: .mycompany.com EXCEPT .bighacker.com

/etc/hosts.deny
ALL:ALL

In the above example using EXCEPT does what it says: it allows the client lists on the left of the word EXCEPT, but disallows access to those on its right. You can use EXCEPT to allow all of the 192.168.2
network in, but not the hosts with, say, the following IP addresses:
192.168.2.12 , 192.168.2.12, 192.168.2.22
/etc/hosts.allow
ALL: 192.168.2. EXCEPT 192.168.2.12,192.168.2.12,192.168.2.22
/etc/hosts.deny
ALL:ALL

However, when using TCP Wrappers internally don't use EXCEPT with IP numbers on an exposed side of your network, as you are open to potential spoofing. When a host tries to connect to your Linux machine using a denied daemon the connecting host will simply get a blank screen. It is considered good form to display a refusal message, as that way the connecting user will immediately know that they are not allowed to access this particular host. These refusal dialogs are called banner messages. You have a banner message for each daemon that you wish to protect or guard. In most cases you'll want to display the same message, so it makes sense to copy the same message across to the different banner daemon files you are creating. We will create a denial message for Telnet and FTP connections that are denied access. From the /etc directory create a new directory structure to hold the banner file(s):

$ pwd
/etc
$ mkdir banners
$ cd banners
$ mkdir deny
$ cd deny
First create the banner file for the Telnet daemon. Insert the following text into the file called in.telnetd in the /etc/banners/deny directory:

You are not authorised to enter this machine!
Your attempt has been logged.
Access denied to %c

Notice the %c at the end of the text: this will display the calling host's IP address. Next, we handle the FTP connections. There's no need to re-type the text, simply copy the file. Staying in the same directory:
$ cp in.telnetd in.ftpd
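The %c seen above is one of several expansions TCPD understands. Although you should check the hosts_access(5) manpage on your own system for the full set, expansions such as %d (the daemon name) and %h (the client host name or address) are also available, so a slightly chattier banner might read:

You are not authorised to use %d on this machine!
Your attempt from %h has been logged.
Access denied to %c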
Info
ftp://ftp.porcupine.org/pub/security/index.html
ftp://ftp.pld.org.pl/software/tcpd/binary/
The next task is to tell TCP Wrappers about the banners. Edit /etc/hosts.deny and add the following:

:banners /etc/banners/deny/

to the end of the line entry. The hosts.deny file should now look like this:

ALL:ALL :banners /etc/banners/deny/

If a host is denied from connecting via Telnet or FTP, based on your rules in hosts.deny or hosts.allow, they will now get a denial message before the connection is closed. The connecting host has an IP address of 192.168.1.12. My hosts.allow file contains the following:
ALL:192.168.1. EXCEPT 192.168.1.12

Notice the above example accepts all IP addresses that start with 192.168.1, except the host that has the IP address 192.168.1.12. Using the rules in the last example the message below is printed to the /var/log/messages file courtesy of syslog:

Feb 14 20:43:54 bumper xinetd[1057]: refused connect from 192.168.1.12

You know the IP address of the rogue host trying to connect, though in reality this will probably be the NAT address or the gateway address the user connected to via the Web. If you're running TCP Wrappers on an internal network, then you've got your culprit pinned down to rights. Similarly the following messages are printed to the /var/log/secure file from the previous example:

Feb 14 20:43:53 bumper xinetd[658]: START: ftp pid=1057 from=192.168.1.12
Feb 14 20:43:54 bumper xinetd[1057]: FAIL: ftp libwrap from=192.168.1.12
Feb 14 20:43:54 bumper xinetd[658]: EXIT: ftp pid=1057 duration=1(sec)
Informing you that access was denied and what service the calling host tried to connect with.
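If you would rather pick out refusals quickly than read the whole log, a simple search does the job. This assumes, as in the examples above, that syslog is writing these messages to /var/log/messages on your system:

$ grep refused /var/log/messages
Feb 14 20:43:54 bumper xinetd[1057]: refused connect from 192.168.1.12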
Listen in please
When putting your rules to the test it's always a good idea to start off by allowing access to all users and all daemons. From this point gradually start cutting down the hosts you want in; once that is accomplished then start on the daemons. This will save you from struggling up a steep learning curve. Hopefully the basic examples we've provided in this article are enough to get you going and some will probably do the job for you.
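The TCP Wrappers package also ships with two helpers that take some of the guesswork out of this, although not every distribution installs them by default. tcpdchk examines your hosts.allow and hosts.deny files and reports rules that can never match, while tcpdmatch predicts what TCPD would do for a given daemon/client pair:

$ tcpdchk -v
$ tcpdmatch in.telnetd 192.168.1.12

The second command prints the rule that matched and whether access would be granted or denied – much safer than locking yourself out of a remote machine to find out.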
Conclusion
This utility allows you to quickly and easily close the doors of your computer to potential trouble. Be sure to check out the manpages of TCPD and hosts_options for a full description of this utility.
KNOW HOW
Development environments on test
THE RIGHT FOUNDATIONS
Linux has been something of a paradise for programmers right from its modest beginnings; in fact there's hardly a programming language around that can't be found under this Free operating system. However, the problem with many languages is their somewhat cryptic operation. For example, Visual Basic programmers who are used to a supportive graphic interface often recoil at command line compilers. Thankfully, an ever-increasing number of graphic environments make the development of software child's play. In this article, we will present a small selection of these integrated development environments (IDEs).
Invasion
The line-up for our IDE test is as follows: Anjuta, in the current beta version 0.1.8, KDevelop 2.0.2, KDE Studio Gold 3.0, as well as Kylix 2. In contrast to Anjuta and KDevelop, the latter two environments are commercial software. KDE Studio from "theKompany" is available in a no-frills, Free, Open Source version, whereas Borland's Kylix may only be used free of charge for non-commercial projects. The operation and user interface of all the programs presented here are strongly modelled on the equivalent Windows applications. The concept of the project serves as the working basis for all the candidates. Usually, this is nothing other than a collection of all the files that are needed for the new program. This applies not only to the source code, but also to the libraries used and the documentation. Some IDEs even permit the administration of several programs and libraries within a project. With the exception of Kylix, the environments do not have their own compiler (nor the accompanying helper programs). Instead, they all access the appropriate GNU command line tools. They could therefore be described more as a central, graphic front-end than as a standalone product. Those who have migrated from the development environments of Microsoft and Borland under the Windows operating system will miss the integrated dialog editors. These provide assistance in the creation
of windows, dialog boxes and the contents thereof – much like in a painting program. If offered at all, the current IDEs use external software for this. If this is not installed on your computer, the necessary source code must be entered manually by the programmer – costing valuable time. It is worth pointing out that all the development environments, with the exception of Kylix, enable the direct creation of distributable packages, for example in the rpm format, which is certainly something worthy of praise. Despite the features our test candidates have in common, the four applications differ widely from each other. Where these differences lie will be revealed in the following sections. At the same time we will present the individual IDEs in detail.
Development environments have the function of simplifying the programming and development of software. Tim Schuermann presents an overview of the most interesting products under Linux
Anjuta 0.1.8
Anjuta is a development environment that comes under the GPL. Its current version is in the beta phase and is therefore not yet completely refined. This makes it the youngest of the projects presented here, but nevertheless it already gives a very promising impression. Anjuta, in its current version, aims at the development of Gtk, i.e. GNOME-based, applications. It therefore comes as no surprise that it was created on this base. Anjuta does not directly support applications that use Qt or KDE. All wizards and help entries are completely aligned to Gtk and GNOME; however if you conscientiously ignore these, you will have no problems creating Qt-based programs. This is, by and large, a rather awkward way of doing things, and it is simpler for Qt or KDE developers to use KDevelop or KDE Studio. Anjuta understands the programming languages C and C++. Compatibility with further languages, such as Java for example, is in the planning stage at present, and some are already partly supported. In the creation of a new project, an assistant helps the user, leading him to the desired target in a few steps. In our test, it took a mere six mouse clicks to create the skeleton of a new application. Anjuta automatically stores all projects in the Projects directory, though this path can be changed in the settings.
Gtk, Qt
Two libraries which programmers can use in their own applications. They supply graphic objects, e.g. menus or dialog windows. In this way, working with windows is made easier for programmers. Gtk comes under the GPL, and Qt is likewise available under this license.
Hello World program
A classical programming example, which crops up in many books introducing a programming language. It is an application whose sole purpose is to output the text "Hello World".
Figure 1: Anjuta’s Application Wizard helps the user when creating a new project
The number of files created is quite remarkable: a simple "Hello World" program in C needs a minimum of one, but usually two files (a make file for the compiling process and a second file with the actual source code). This is naturally without the otherwise usual documentation in the form of the mandatory README and INSTALL files. Under the standard settings, Anjuta creates three directories with 64 files altogether. Beyond that, we recommend that Anjuta's main window is not enlarged to cover the entire display, the reason being that the software likes to hide messages and other important dialog windows in the background.
Figure 3: The automatic completion in Anjuta
That is where the functions offered by Anjuta come to an end; it unfortunately does not support group work or data exchange as yet. There is however easy access to the debugger Gdb, a standard program for the detection of errors. The graphical creation of an application interface, based on Gtk, takes place through the external program Glade. The help offered is merely adequate and is essentially limited to short descriptions of the most important menu options. Anjuta uses external sources for the documentation of the library functions.
KDevelop 2.0.2
Figure 2: Anjuta’s main window. A “Hello World” program is given as an example in the editor
Figure 4: The Anjuta editor currently supports all these formats
The editor is very user-friendly for processing source code. By pressing Alt+Enter an autocompletion can even be activated; this pops up a list of all functions that are available at the current cursor position. As well as this, Anjuta loads and processes other text files of the most diverse formats, such as Java source code or LaTeX files. For a multitude of file formats, the development environment provides appropriate templates for coloured syntax highlighting. We were also very impressed with the ability to quickly fold functions and classes away and back with the plus and minus symbols to the left of the source code. This promotes clarity in projects of all sizes.
In the attempt to copy KDevelop's rpm archive to our test computer running SuSE Linux, we were more than exasperated: our package manager pointed out a total of 20 uninstalled packages, most of them relating to the Docbook documentation system, which KDevelop uses for all of its documentation functions. If these packages are missing, the development environment automatically starts YaST2 on its first run and undertakes the installation of the missing packages itself. Beware if you are logged in as root though: no warning or confirmation is requested from the user. With the installation CD inserted, it only takes a few seconds and the hard drive is a few megabytes fuller. The first impression is confirmed directly after opening: KDevelop unites a large number of external programs under one interface. Many familiar external programs repeatedly appear in the menus. Thus, for the graphic design of dialog windows, Qt Designer will be started. This software is a creation of Trolltech, the manufacturer of Qt. The interface, in comparison with Anjuta's, leans more strongly towards the Windows models.
However, it seems overloaded and acts somewhat chaotically. The standard three-pane main window contains a view at the left border which, depending on the tab activated there, permits different views of the current project. One can also access the very detailed help, which even offers a complete language reference for C++. External sources are however used for the documentation of the Qt and KDE classes. An editor window is located on the right-hand side, which apart from editing the source code also undertakes the display of other documents, such as help files. Like so many other elements of the interface, the behaviour of this window can be freely configured. At the bottom border of the display is a status window, in which different types of message can be viewed and assessed on different tabs. Altogether, KDevelop emphasises integration very strongly. This should come as no surprise; after all, the development environment comes from the KDE project, whose KDE desktop pursues the same goal. It is also noticeable here that the roots of KDevelop are found in the KDE project. Beside the many KDE and Qt program variations, the development environment unfortunately offers only one type of application based on the competing GNOME libraries. In contrast to Anjuta, KDevelop is thus an ideal candidate when it comes to the creation of KDE or Qt-based C or C++ programs. KDevelop does not speak any other languages or dialects. Fortunately, the assistant offers many possibilities during the set up, such as the configuration of the version management tool CVS. This enables several people to work simultaneously on the same project. KDevelop and Kylix are the only programs presented here which allow a team to work on one project in this way. Apart from the pure source files, the assistant can even create the documentation belonging to the program. This can be done for function and class documentation (using its own source code) and for user manuals. For the latter however, some knowledge of the documentation system (used by KDevelop especially for this purpose) is necessary. Once all development packages are installed, the entire software project can thus be created and administered within one interface. Just like Anjuta, KDevelop does not scrimp on the number of newly created files. This notwithstanding, the overview remains intact due to the different tree views of the project in the left window section. The integrated class browser is likewise a success. This allows all the classes used in the project (complete with their attributes and methods) to be easily seen and manipulated. An assistant even takes over the creation of new classes by generating the code frames and the associated files. Methods and attributes can then be added or
deleted using the appropriate dialog window. In doing this, the right mouse button proves to be a real magic wand. Methods or attributes entered into the code by hand are automatically transferred into the aforementioned views after a compile run. The graphical class view is likewise very helpful. This is a window in which the class hierarchies are clearly represented in the form of a diagram, although a depiction in the popular UML notation would be desirable.

Figure 5: The main window of KDevelop. A help document has been selected for display

All in all, KDevelop contains many useful functions, which are unfortunately hidden from the user at first glance. The redirection of console messages into the different status window tabs takes some getting used to. The output of our test program was thus rerouted into several tabs. This procedure does however have the advantage that the messages are output clearly and sorted according to type. The text editor does not approach the ease of use of Anjuta's, but problem-free operation is ensured.
KDE Studio Gold Version 3.0
KDE Studio was developed by "theKompany" and is available in two versions. KDE Studio (published under the GPL) is the Free, Open Source version of the commercial KDE Studio Gold. The difference between the versions is in the range of functions offered. Beyond that, the commercial version is the only version that will be developed and supported by theKompany from now on. We will be taking a look at the test version of KDE Studio Gold, which can be downloaded free of charge from the manufacturer's homepage. It offers the full function range and is restricted only by a limited duration of use of 15 minutes. The full version is available for approximately US$25. Directly after the start users are welcomed by an assistant which, among other things, enables the creation of a new workspace. KDE Studio differs from the other programs in this test by referring to a project as a workspace (working environment). KDE Studio uses the name project to define a subgroup of a workspace. This means a project can be a library or a new program.

Figure 6: KDevelop's application Wizard

Figure 7: KDevelop displays an open project in this tree view
When creating a new project, it quickly becomes clear that KDE Studio also puts its accent on C and C++ development using a Qt/KDE base. This IDE offers no option of creating GNOME applications. What remains however is the possibility of creating a custom project by hand. In the generation of a small terminal program, KDE Studio, in contrast with the competition, creates only a copy of the most necessary scripts and files. Somewhat too economical perhaps: the source code file with the ever-needed main() function must be created manually. The application only offers templates for a KDE or Qt program.

Figure 8: The different class views in KDevelop

The interface of the main window resembles that of KDevelop, the difference being that it seems less cluttered. KDE Studio offers a tree representation of the project in its upper left-hand window, though help files cannot be displayed there. A status window is located in the lower section, and the upper right-hand part is where the source files are worked on. One can switch backwards and forwards between the different opened files with the help of tabs. The editor offers the standard fare and, like Anjuta, offers the possibility of selectively folding functions and classes away. KDE Studio does not achieve the function range of KDevelop. It does have a class browser (called class explorer here) and a graphical view, but these need to be explicitly called up from the menu. On top of this, they look a little showy and unclear. There are absolutely no help mechanisms for the creation of classes like those integrated into KDevelop. The complete documentation was likewise conspicuous by its absence in our demo version. This seems strange considering theKompany advertises KDE Studio with the promise of complete documentation.

Figure 9: KDE Studio Gold's assistant

Few other interesting functions were to be found in KDE Studio Gold 3.0, apart from the mandatory debug and compiler options.
name suggests, only Free, Open Source projects under the GNU license may be created with it. Those who want a greater function range or commercial software, are given two rather costly alternatives. The first of these is the professional version, at a price of 325 euros from Borland’s Web site. The expensive company version is the second possibility, and yours for 2585 euro. The free Open Edition is usually sufficient for private users. Heed must be paid, that programs compiled with this version have an appropriate note inserted at their start. The exact differences between the individual Kylix versions can be found on Borland’s Web site. Kylix brings with it a conversion of Delphi, the popular Windows development environment. Similarly to Microsoft’s VisualBasic, this is a complete self-development from Borland. Programs written with Delphi, as well as the Kylix environment use both their own language (named Object Pascal) and the program libraries created by Borland. Domestic applications can be created relatively quickly in this way, but this is dependent on the manufacturer and the libraries thereof. With Kylix, Borland is pursuing the target of being able to make an application that was developed under Delphi, available under Linux simply by recompiling it. The reverse of this is naturally also possible. For this purpose, Borland not only transferred the CLX class library to Linux, but also made the compiler and the IDE available under Linux. A disadvantage in comparison with the other development environments is the closed source strategy, meaning only the source code of the CLX libraries is laid bare. After the start, the Open Edition requires the entry of a registration code. This is mandatory, but can be taken from http://register.borland.com for free. If the entry of the two codes is correct, a familiar picture awaits the experienced Delphi user. It is no coincidence that the surface resembles that of its big brother from the Windows world. As in the first Kylix version, the surface from Delphi has been ported over and brought to life under Linux with the Wine emulator. This unfortunately has the
Kylix 2 Open Edition Kylix 2 breaks the mould in several ways. On the one hand it comes from the highly regarded compiler manufacturer Borland, who already has many years of experience with different development environments on the Windows OS. On the other hand, this is a commercial product. Contrary to KDE Studio Gold, Borland publishes a downsized version as a Free, Open edition. As the 44
Figure 10: KDE Studio Gold’s windows in action
Result
Figure 11: KDE Studio Gold class views. In the picture, the classes b and c were derived from class a
A comparison is rarely clear cut, as each development environment presented has its own target group. C or C++ programmers who want to write predominantly Gtk-based programs intended for GNOME should cast an eye over Anjuta. KDevelop should however be the first stop for KDE and Qt programmers. KDE Studio unfortunately disqualifies itself, as its function range lags far behind KDevelop and the asking price is disproportionate. Programmers coming across from Delphi should take a closer look at Kylix. The same goes for programmers who collected their first programming experience under Windows with VisualBasic and don't want to descend into the cryptic depths of C and C++ just yet: they should try out Kylix for size. For beginners, this is the pick of all the solutions presented here. If you only write small programs, you should really consider whether the use of a large development environment is worthwhile at all. Small projects often don't need the packages required by the environments. In such cases, it can be like using a cannon to shoot a sparrow – overkill to say the least. Finally, it must be said that apart from Kylix, none of the development environments achieve the programming comfort that Windows users are familiar with. Difficulties in converting from Windows should therefore be taken into account.

Figure 12: The registration dialog of Kylix 2

Figure 13: The start window of Kylix 2: the form in the background is directly transferred into the appropriate source code, seen in the editor window
Info
Homepage of Anjuta: http://anjuta.sourceforge.net
Homepage of theKompany, as well as KDE Studio Gold: http://www.thekompany.com
Homepage of the KDevelop project: http://www.kdevelop.org
Homepage of the company Borland: http://www.borland.de
Central location where the codes for the Open Edition of Kylix 2 can be found: http://register.borland.com
KNOW HOW
gPhoto and gPhoto2
INSTANT PICTURES
gPhoto enables you to load photographs from your digital camera onto your Linux system. Whether you want to share them via the Internet or manipulate them in Gimp, Colin Murphy is at hand to show you all you need to know
The sad truth about digital cameras is that many of the manufacturers still hold back on information vital to getting their product to communicate with our computer systems – proprietary code again. This makes the development of programs that work with digital cameras so much more challenging. If you intend to buy and use a digital camera with a Linux system, you should therefore be cautious about what you buy. The list of drivers is continually growing thanks to some dedicated souls who spend their time and energy reverse-engineering these for Linux systems. It's very important to check these lists before you make a purchase, to ensure a suitable driver exists. It's also advisable to get firsthand evidence that the drivers work from someone who has successfully run the very make and model you are interested in. Even assuming your digital camera is supported by a Linux-compatible driver, you'll need an application to pull the photos from the camera and onto your machine. This is where gPhoto comes in. gPhoto falls into two separate projects – gPhoto and gPhoto2, the latter of which comes with a slew of front-ends. The gPhoto developers' original project was gPhoto – a stand-alone graphical application that you can use comfortably from the desktop. gPhoto 0.4.3 has support for 105 cameras from many different manufacturers, with Casio QV, Fuji MX and Kodak DC cameras being particularly well supported.
Thumbnail images
A thumbnail is a copy of an image but in a much reduced size. Thumbnail images are much easier to deal with quickly, as they load and move much faster than their full-size counterparts. The filenames given to images downloaded from cameras are usually just frame numbers, which doesn't give you an awful lot to go on, therefore thumbnail images are a useful visual reference telling you what the file actually is.
Figure 1: gPhoto starting up
There is another set of digital cameras that can be used with gPhoto even though they are not directly supported by the program. If your camera can be seen as a USB mass-storage device, then it can be connected to a Linux system. Even though it's not strictly necessary to use gPhoto for this type of camera, you do get some benefits, including features like being able to produce thumbnail images and automatic catalogues for Web pages.
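As a rough illustration of the mass-storage route: such cameras typically appear as a SCSI disk once the usb-storage module is loaded, so a camera might be mounted by hand like this (the device name, mount point and vfat filesystem are assumptions – check /var/log/messages to see what your camera registers as):

$ mkdir /mnt/camera
$ mount -t vfat /dev/sda1 /mnt/camera

The directory /mnt/camera can then be handed to gPhoto, or simply browsed with any file manager.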
Plug and play
If you've been wise enough to buy a supported camera then all you need to do is plug it in and select the port that you have your camera connected to. These days this will quite often be a USB port. These details can be set in the Select model/port... box, which you can get to from the Configuration dropdown menu. Once you have this entered you can then start to play with the camera and download images. If you have a camera that is a mass-storage USB device then you need to do things slightly differently. The mass-storage USB device actually mounts the camera (or disk, or whatever you happen to be using) onto a file directory. You then need to put this file directory into gPhoto in order for it to see the camera. This is best achieved by going to gPhoto's File menu and selecting Open Directory. It depends very much on the type of device you have as to what the directory name will be once you have passed the mass-storage device mount point – and this can cause a problem. Because the directory navigation in gPhoto is not as intuitive as it is in a file browser, for
KNOW HOW
Figure 5: Kamera with a small, but ever-growing list of supported cameras
Figure 2: Thumbnail images for you to select from
Figure 3: Colour setting in gPhoto. With a motley group of hackers from FOSDEM
good reason, it is sometimes hard to see where the files lie in the camera. The way around this is to explore the mass-storage USB directory first with a file browser, like Konqueror. In this way you will be able to see a full directory layout and select the appropriate path, which you can then copy into gPhoto's file system. Once the camera or directory is selected, gPhoto will start to download images and show them to you as thumbnails, as you can see in Figure 2. You now have the option of selecting from these thumbnails and having the full-sized file copied over. Once you have the full file copied you can then start to make small adjustments to it, such as adjusting the colour balance (Figure 3). Depending on whether or not your camera supports such delights, you might also be able to move and remove files in the camera, as well as operate the camera remotely and, with some, even use it as a webcam.
gPhoto2
gPhoto2 is a new departure for the project, with development concentrating on bringing together a comprehensive set of libraries so that others can concentrate their efforts on writing graphical front-ends for it. gPhoto2 can be used from the command line however, without the need for one of these graphical front-ends. This allows you the luxury of automating processes that you might have to do repeatedly. What you lose in colour you gain in control. Entering a command like:
Figure 6: Nautilus browsing the files directly on the camera
# gphoto2 --list-cameras

returns the facts:

Number of supported cameras: 183
Supported cameras:
        "AEG Snap 300" (TESTING)
        "Agfa CL18"

and so on. Concentrating purely on the libraries needed also has the benefit that any graphical front-ends for gPhoto2 need not actually be front-ends at all. The real advantage is that the application can now be embedded into existing utilities, most usefully file browsers, so that you can just plug in your camera and browse away. Examples of this type of development can be seen in Figures 4 through 6.
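Downloading is just as scriptable. The following is a sketch rather than a recipe – the camera model string and port are assumptions, and you should check gphoto2 --help for the exact options your version supports – but a session to list and then fetch every frame on a camera might look like this:

# gphoto2 --camera "Kodak DC240" --port usb: --list-files
# gphoto2 --camera "Kodak DC240" --port usb: --get-all-files

Run from a cron job, a line like the second one makes regularly emptying the camera a hands-off affair.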
Info
Figure 4: Here we see how Konqueror has embedded Gphoto2 applications
gPhoto and gPhoto2 homepage: http://www.gphoto.org
Help with USB and the mass-storage devices: http://www.linux-usb.org/USBguide/x498.html
Help with mass-storage device cameras: http://www.harald-schreiber.de
Kamera gPhoto2 front-end: http://www.thekompany.com/projects/gphoto/
GnoCam: http://rzstud1.rz.uni-karlsruhe.de/~urc8/GnoCam
KNOW HOW
Access more than one Linux system
MULTIBOOTS Colin Murphy explains how, without too much effort, you can give yourself access to a new Linux system to play around in
New versions of GNU/Linux are being made available all the time. Sometimes it's hard to keep up with what's going on. You'll hear about lots of exciting developments, which you might like to try out, or at the very least play with. The problem with the latest developments is that they lie on the cutting edge of technology, so you may not find them in stable systems. Obviously, any sensible person will want to play it safe and only use a stable system they trust for their day-to-day work. So how can anyone adventurous enough manage to get the best of both worlds? Dual booting is something that's very common amongst Linux users; it's the most useful way of keeping more than one operating system on just the one machine. Normally a dual boot system will have some flavour of Windows, be it 95 or XP, as well as something useful, like a Linux system. But there is no reason why you can't have more than one Linux system on the same machine, even if they are of different distributions. Sharing information between these systems is also pain free.
Methods of booting
If you are creating a dual boot machine for the first time you need to make sure that you take some precautions. Make sure you have a boot disk for your current install; you will be changing things in your standard boot loading system, so if something does go wrong you need to know that you can get your system back up. There is the command line instruction mkbootdisk, which also needs to know the version number of the kernel that you will boot, which you can find with uname, doing something like this:

# uname -a
2.4.8-26mdk #1 Sun Sep 23 17:06:39 CEST 2001 i686 unknown
# mkbootdisk 2.4.8-26mdk

and the boot disk will be made. It is also useful to know what partitions on the hard drive get mounted to. This information is held in the file /etc/fstab and it's always a good idea to keep an up-to-date printed copy somewhere safe with your boot floppy and rescue disk set.
Here are the partitions set aside for the new install. You only need a ‘/’. You don’t need a separate ‘/boot’ or ‘swap space’ – learn by my mistake
We are making the assumption here that you have another hard drive onto which you are going to put this installation. This need not be the case – you can use any free space you have on your existing drive – but using a separate drive gives you another degree of separation from your existing, stable system. During this new install you will be asked about making partitions for the new system. You should have a partition set aside for the new ‘/’ partition, and I like to have a separate /home partition as well. You don’t have to make a new /boot partition. You may also want to mount some of your original partitions on this new installation, this would give you access to your original /home directory, for
# uname -a 2.4.8-26mdk #1 Sun Sep 23 17:06:39 CEST 2001 i686 unknown # mkbootdisk 2.4.8-26mdk and the boot disk will be made. It is also useful to know what partitions on the hard drive get mounted to. This information is held in the file /etc/fstab/ and it’s always a good idea to keep an up to date printed copy somewhere safe with your boot floppy and rescue disk set. 48
LINUX MAGAZINE
Issue 19 • 2002
Here you can see how I have mounted my original ‘/home’ partition in the new installation, giving it the name ‘/mnt/home2’
KNOW HOW
instance. Because you have a print out of /etc/fstab from your original install you can see which partition needs to be mounted. There are two ways in which you can boot into your new system. The first would be to rely on boot floppies, at least for booting any secondary installation you make. The advantage of this is that you don’t need to touch the boot loader for your existing stable system, the system that you most want to keep intact and in working order. The only real disadvantage of using a boot floppy is that you end up with a system that isn’t quite as streamlined – you’ve got to remember to put the floppy in if and when you want to boot your new system, and you have got to remember to take the floppy out should you want to boot to your original, stable system. These might not seem like great disadvantages, but once you’ve booted into the wrong system three or four times you soon hanker after something less cumbersome. The second, less cumbersome way is to amend the boot loader you have already so that you are given the option of loading whichever Linux installation you want, in very much the same way that you can choose between a Windows or Linux boot. The disadvantage is that you are now playing with a component of your stable system. However, if you’ve made your boot disk as we suggested then you can still get access. Should you choose the first option then, when you start to install your new system, you should select something like ‘expert’ mode, where you are given the option of making full choices along the way. The standard install options will automatically make a boot device on your hard drive, this is the case with Mandrake at least. When you get to the ‘install bootloader’ section, or equivalent for your distribution, you need to change the boot device to /dev/fd0. Taking the second, more streamlined option of installing a new bootloader, you should again select ‘expert’ mode to give you as much control over the process as possible. This time, once you reach the ‘install bootloader’ section, you will need to leave the boot device set to your hard drive. You can then add
The ‘Install bootloader’ section in Mandrake. To stop your existing bootloader from being tampered with change the ‘boot device’ to /dev/fd0
The addition entry in the boot loader configuration screen for our existing system.
the details of your existing system to this new boot loader. You might also want to set the default boot image to be that of your original install, if you think that is the one you will be booting into most often.
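As a purely illustrative sketch (Mandrake of this era used LILO; the device name, kernel path and label here are hypothetical and your details will differ), the extra stanza for the second installation in /etc/lilo.conf might look like this:

image=/boot/vmlinuz          # kernel of the second installation
    label=linux-new          # name offered at the boot prompt
    root=/dev/hdb1           # '/' partition of the new install
    read-only

Remember that LILO reads this file only when the lilo command is run, so run it again afterwards to write the change to the boot sector.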
Wrapping things up

That should be about it for the new install. Depending on which method you have chosen, you can now boot into your new system by making sure your new boot floppy is loaded or by selecting the new image from the boot loader. There is just one more thing to play with: you may want to give yourself access to your new /home directory from your original install. This is easy to set up – you need to add an entry to your original /etc/fstab file pointing to the new /home directory; for the sake of simplicity I mount it on /newhome (a sample entry appears below). Having access to the /home directories on your alternative installation is useful for passing data between the systems, but you should use caution when you try and share things like configuration files from utilities. The format of these configuration files might change between versions – after all, you've gone through this process just to give yourself access to cutting-edge versions – and you may be left with a configuration file that is incompatible with your existing application. So you now have a new, secondary Linux installation in which you can thrash about, installing and breaking things to your heart's content.
Using the graphical configuration tool to add the newly created '/home' partition to my original install, calling it '/newhome2'. '/newhome1' is really the new '/' partition
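Expressed directly in /etc/fstab rather than through the graphical tool, the extra mount is a one-line entry. A minimal sketch, with a hypothetical device name (check your own printed copy of fstab for the real one):

# second install's /home, mounted read-write on the original system
/dev/hdb6    /newhome    ext2    defaults    1 2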
KNOW HOW
Venture into the Web with Konqueror
EXPLORE THEN CONQUER

A browser should be fast, easy to use and secure. Linux has a few options to offer for this – Anja M. Wagner explores KDE's Konqueror
Anyone who's used Netscape under Windows can sit back and relax, as Netscape has been running under Linux for ages. Many recent distributions, such as SuSE Linux 7.2, have Netscape up and running right from the start. You can find the Netscape browser on your desktop or via K/SuSE/Internet/WWW. (In other distributions Netscape hides away in a different submenu; in case of doubt you can start it by entering netscape in the terminal window.) In this Workshop we want to introduce you to the KDE browser Konqueror. The tool may already seem familiar to you, as Konqueror is also the standard file manager of KDE.
Figure 1: The most important components of the Web browser Konqueror
Konqueror can be started either via the icon on the desktop or the corresponding button in the panel. On a practical point for confirmed users of Internet Explorer from Microsoft: the favourites can be imported into Konqueror without any problem. The same also applies, of course, to the Netscape bookmarks.
Figure 2: Install the browser according to your own requirements
Once connected to the Internet you can start surfing straight away: enter a Web address in the URL line and press Return. After the first pleasant surprise at the rapidity of the "conqueror", we shall devote ourselves to the configuration of the program. From the menu list, select Settings/Configure Konqueror. In the left column of the window which then opens, select Konqueror browser. On the first tab, HTML, you can set whether links on a Web site should always or never be underlined. A third option, "Hover", is set by default. Regardless of what you select, no difference can be detected in the display: links are always underlined in some KDE versions, while in others they are always shown hovering – a minor bug.
Back to basics

The options "Change cursor over links" and "Automatically load images" are also set by default. If you want the browser to load a Web site even faster, you can deselect the automatic loading of images; however, the Web then becomes very drab and colourless. If you're surfing with a 28K modem, it does make sense to do without the images at first, because they are data-intensive and will therefore slow down the construction of a Web page. After selecting this option, a new button will appear on the Konqueror toolbar: with a click, the browser then loads the images from the current site.

Figure 3: Images can be loaded after the text by clicking on the button on the far right of the Konqueror toolbar

If you have defined some style templates for displaying Web sites, such as Cascading Style Sheets (CSS), you can select these via the "User-defined stylesheet" option. On the tab sheet Appearance, you can define the size and type of font. In addition to the usual font sizes from "Very small" to "Very large" you can set the minimum font size. The default is the fairly small 7. Independently of the setting defined here, you can find two buttons in the toolbar to adjust the zoom factor.
Figure 4: Zoom a Web site larger or smaller from one step to the next

The Java dilemma

There's one thing of particular note on the Java tab sheet: the execution of Java applets is not usually selected. It now depends on your security requirements whether you wish to change this. Java and JavaScript, like Microsoft's ActiveX, are executable contents, which in some circumstances can manipulate your system. For the correct display of many Web sites, you will need to use Java. Konqueror offers a similar security strategy to Internet Explorer. Exactly as in IE, where you usually set the security settings to "High" and can then adjust downwards for "trustworthy sites", so as not to be constantly confronted with warning messages when surfing, in Konqueror you can usually leave the execution of Java deselected but allow it for specified sites. To do this, click Add in the Domain-specific area and enter the name of the computer or domain in the new window. Confirm with Apply/OK. The procedure can also be done in reverse, by generally activating Java and rejecting it for certain computers. The first way is safer, so don't let the amount of work put you off. For JavaScript, there are corresponding settings.

Figure 5: Java or not Java, that is the question

All that you're looking for

On the last tab sheet, labelled Plugins, you can usually only select or deselect existing Netscape plugins. Now from the options in the left-hand column select Enhanced Browsing. The URL line of Konqueror not only opens the door into the Web, but also directly to your favourite search engine. In this section the keyword search is activated, with Google selected as default search engine. A smart choice, which you can of course alter via the drop-down menu "Fallback search engine". If you've selected the keyword search, you can enter a search term in the URL line of the browser. After pressing the Return key the browser connects to the default search engine and displays the search results, thus saving work steps. In the large window in the Enhanced browsing area are your Web commands. You can reach other search engines by using these short commands. For example, if Google is your default search engine and you want to find out something from the KDE problem database using a full-text search, type in the URL line the short command bugft:searchword and press Return. If an abbreviation like bugft is too cryptic for you, click on Change and enter a different term. If you want to access one of the search services on the starting page, such as the translation database LEO, just type leo: in the URL line.

Figure 6: The shortcut to search engines and databases

Figure 7: It's easier to remember your own abbreviations

In the Cookies section you will again find something familiar. As with Internet Explorer you can always reject cookies, always accept them or demand a confirmation from the browser. Since many Web sites cannot be used without cookies, a global Reject will mean you are only able to use a small part of the Internet. Cookies are not so much a security problem as an intrusion on your privacy. If, on the other hand, you get a prompt for confirmation with every cookie, you will be unable to surf because you'll be so busy confirming. As with Java and JavaScript you can reject cookies generally and accept them from certain servers, or vice versa. You should not accept cookies from servers that pass them on to third parties. The Management tab offers more detailed information about the cookies you receive.
Cookies
Cookies are information about the status of an HTTP connection between a client and server. The simplest form of cookies takes the form of symbols created by the server and transferred to the browser. The browser saves these symbols together with the URL invoked. If the user again invokes the same URL, the browser checks the values in the cookie (e.g. domain and path) and, if they match, automatically transfers the information belonging to these values to the respective server. Its operator can thus determine which computers have accessed this Web service, and from which sites. Cookies received are first buffered in the main memory and saved in the cookie file at the end of the session. Cookies are of interest to Web providers for collecting information about the surfing habits of their users and potentially orienting their service to these habits.

Figure 8: Precise information about a cookie received

Cache and carry

If you are using a proxy server, you can configure it in the Proxies & Cache domain in the left-hand column; however, this section is also interesting even without a proxy. The browser saves all the pages loaded in a buffer, known as the cache. If you return to a Web site you have already visited during an online session, the browser loads the site from the cache. This is quicker than re-loading the Web page from the Net. The bigger the buffer, the more pages can be saved there. The default size of 512Kb should be altered according to the resources of your system. In the Crypto section, SSL v2 (Secure Socket Layer) and SSL v3 are activated by default. These settings should not be changed, unless instead of SSL you want to use its successor, TLS (Transport Layer Security). The browser can warn you if it leaves the secure SSL mode when surfing – and this, too, is pre-set. It can also tell you when you change to SSL mode.

Figure 9: Installing a proxy server and cache

Cache
The cache is a special buffer to speed up access to data. Any information that has already been read is saved by the system in the cache. If a new read access occurs, the system first checks whether the data requested can be found in the cache. If this is the case, the data is loaded from the cache and not from the medium to which the read access is actually directed. This speeds up data access, because the cache has a substantially shorter access time. (This is the Web cache, not to be confused with the hard disk cache.)

SSL
SSL (Secure Socket Layer) is a transfer protocol for secure transactions on the Internet. It works by using server authentication via a certificate, together with data encryption and data integrity via public and personal keys. Banks and online shops use this as a procedure acknowledged as secure.

Going underground

An important section for unhindered surfing pleasure with Konqueror is the User Agent. Unfortunately, you will still come across some Web sites which are not correctly displayed with Konqueror. There is a trick which helps: Konqueror has to mask itself and fool the server into thinking it is Internet Explorer. Enter the "unfriendly" Web address in the line "When connecting to" and select your camouflage from the drop-down menu "Send user-agent-identification". With the selection "Mozilla/4.0 (compatible with MSIE 5.5, Windows 98)" you will more than likely be on the safe side. Click on Add and the new user-agent identification appears in the window.

Figure 10: Sometimes Konqueror has to put on a mask

Konqueror's functionality can be extended using Netscape plug-ins. If required it can search your system for new plug-ins (via the Search tab). You can even make the browser do this every time the program starts. Behind the tab sheet Plugins is hidden an overview of the plug-ins existing in the system.
Favourite things

In the course of a surfer's life, many bookmarks mount up, which Internet Explorer calls Favourites. You can easily import your valuable collection of Favourites from IE into Konqueror. Start IE and select File/Import/Export. As usual, Windows starts an assistant to lead you through the process. Export the Favourites into the file "bookmarks.htm".

Figure 11: Take your collection of favourites along into the world of Linux

If you've installed Linux on a separate computer, save your Favourites collection on a diskette or CD-R, depending on the file size. If Linux and Windows are installed on the same computer, simply access the Windows partition. Open KEditBookmarks via Konqueror's menu list: Bookmarks/Edit bookmarks. It's advisable to make a new folder (such as "Favourites"). Open the file manager Konqueror and use drag and drop to place the file "bookmarks.htm" in this folder. You can edit this list using the KEditBookmarks tool. This can be started independently of Konqueror via K/SuSE/Internet/WWW. In this menu it is given the common name of Bookmark Editor. In the editor, it is quickest to edit entries by using a right click on a directory or a bookmark. You don't need to do without the links bar either. Make a directory called Links in the bookmark editor. Put the links you want to appear in the bookmark list into this directory. Highlight the directory with a mouse click and select Settings/Set as toolbar folder. Save the changes when closing the editor and finally select Settings/Show bookmark toolbar from the Konqueror menu. The practical link list will then appear beneath the URL line.

Figure 12: With KEditBookmarks you can edit your bookmarks

Figure 13: Konqueror can make a link list too

Figure 14: Even more tools in the extra list

Figure 15: In the extended sidebar you can find the "History" from Internet Explorer
Figure 16: Split view is twice as easy

Figure 17: Which buttons would you like in the toolbar?
Your history

In the Settings menu, you will find an item labelled "Show Extra toolbar". You can use the buttons on this bar to start some helpful and practical capabilities of Konqueror. The first button opens the Extended Sidebar on the left. Here you have easy access to all the directories in your system. In addition to the Bookmarks directory you will find the History item: as with Internet Explorer, you can get an overview of the history of your online session and thus go back to Web sites you have already visited. A right click can sort the entries by name or date. Internet Explorer's History sidebar may be easier to use, but Konqueror also lists accesses to folders and files in your system, and not just Web sites. Another click on the History button closes the site list again. The other buttons on the extra toolbar split the window view of Konqueror. You may be familiar with this from using Konqueror as a file manager. This property can be very helpful when surfing, too: by clicking on Split View Left/Right, you can surf in parallel on two or more Web pages. This means you can compare sites, or keep the page in one window and look at a linked site in the other. However, if you have more than two windows it soon starts to get confusing. The current active view can be distinguished by a green dot in the lower left corner of the window (Figure 18). The active window is closed by the right button in the extra toolbar.

Which buttons should there actually be in the toolbar? You can configure them via Settings/Configure toolbars. In the upper drop-down menu, you should first choose which bar you want to configure. In the right-hand window underneath this, you will see the available action buttons, and on the left are the ones that already exist on the toolbar. By selecting the actions and the arrow buttons on the platform between the windows you can add or remove items.

In order to set a standard start page, enter the address, wait until the page has loaded and then select Window/Save View Profile "File Management". If you intend Konqueror to start with a blank page (which is the quickest way), open the browser, enter "about:blank" in the URL line and then select the menu item just mentioned. The input field in the URL line is deleted by a click on the small black button on the left; this is quicker than selecting and deleting.

Figure 18: Konqueror can start with your favourite page

And that brings the latest Migrations Workshop to an end. Have you any topics you would like to see covered in this series? If so, please write to the editor.
PROGRAMMING
Perl
THINKING IN LINE NOISE

Originally released in 1987, Perl has spread from niche to niche (including CGI, databases and XML), assimilating buzzwords that stray unwittingly into its path, such as Object Orientation, Bio-informatics and Aspect Oriented Programming. For all of these reasons, Perl is renowned as a 'glue-language': it interacts with most popular applications. The Comprehensive Perl Archive Network (CPAN) repository is one of the jewels in Perl's crown (groan). It provides a library of language extension modules as comprehensive as J2EE and the .NET framework, and a set of APIs that enables integration with other languages including C, C++ and Java, to name but a few. Perl's strength has always been its active user community, which created and maintains sites such as CPAN (http://www.cpan.org), Perl.com (http://www.perl.com), use.perl (http://use.perl.org), Perl Monks (http://www.perlmonks.org) and various geographically diverse Perl Mongers groups. As well as having sites devoted to it, Perl runs
some of the busiest sites on the Net, including geek havens Slashdot (http://www.slashdot.org) and Kuro5hin (http://www.kuro5hin.org). In fact Perl is so widely used on the Web that it's often referred to as the duct tape of the Internet. Perl is a terse but high-level language that removes the burdens of memory allocation, the distinction between primitive data types, file handling and the need to constantly reinvent the wheel. It is because Perl allows the developer such freedom and functionality within so few keystrokes that Perl has the semi-deserved reputation of resembling line noise. Perl's integrated regular expression handling (a super-set of the POSIX standard) and the variety of operators provided to manipulate, describe and access Perl's data structures have meant Perl had to spill over to lesser-used areas of the keyboard or adopt a larger, more esoteric vocabulary. Of course, big words can sometimes obscure meaning – just take the last sentence as an example – so more obscure keyboard symbols were instead adopted.
Perl is a language steeped in the history and evolution of Unix (and by extension Linux) platforms, so it’s only right that it should have a place here at Linux Magazine. Dean Wilson and Frank Booth begin our journey with an overview of Perl and its syntax
Getting Perl
Any moderately recent and well-stocked Linux distribution will come complete with an installed Perl interpreter, a full set of Perl core modules and the standard (and copious!) documentation in pod format. If your install does not include Perl then there are two paths open to you: you can either get a binary distribution from your distro's package repository, or download and compile your own. As this is a beginner's tutorial we will cover getting the package and installing it rather than compiling your own; that topic is more than adequately covered in the INSTALL file in the root of the source code tarball. Installing the binary package varies depending upon your Linux distribution, but can be summarised as:
rpm-based Step 1: Download the package from either your distro’s repository or from one of the links at http://www.rpmfind.net or http://www.perl.com. Step 2: As root, issue the ‘rpm -i <perlpackage>’ command.
Debian
Debian saves you the wasted time of fetching the package by hand and instead allows you to get by with the following:
Step 1: apt-get update.
Step 2: apt-get install perl.
While Debian makes the initial install simpler, for some packages that have external dependencies you are reliant upon the apt-get mechanism. For example, modules that use Libmagick or expat (an XML parser) must be installed via apt-get, or they will require modification of the source to allow a successful install.
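Whichever route you take, a quick sanity check (my suggestion, not part of the boxout) confirms the interpreter is installed and on your path:

$ perl -v                                # prints the interpreter's version banner
$ perl -e 'print "hello from perl\n";'   # runs a one-liner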
Here's a snippet of Perl code, seen in production software, that illustrates why Perl's syntax is so easily misunderstood and consequently decried:
$/=\0; $_=<>; tr/A-Z/a-z/;
%_=map{$_,1}/[a-z0-9_.-]+@[a-z0-9._]{3,67}(?=\W)/g;
@_=sort keys %_;

Although it has to be said that Perl needn't be written like this.
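By way of contrast, here is a more readable sketch of the same idea (my code, not from the article): lowercase each line, collect anything shaped like an email address as a hash key to remove duplicates, then print the keys sorted:

my %seen;
while (my $line = <>) {
    # collect every email-ish match on the line as a hash key
    $seen{lc $1} = 1 while $line =~ /([A-Za-z0-9_.-]+@[A-Za-z0-9._-]+)/g;
}
print "$_\n" for sort keys %seen;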
Scalars

In case you were wondering, the original one-liner finds email addresses in a file, removes duplicates and sorts them alphabetically. The code is confusing due to the high frequency of special characters (sigils). The most common and essential of these in everyday programming is $, which denotes a scalar variable. In Perl, scalar variables are used to hold numbers, text and many more types of data. For example:

$percent = 12.7;  # Assign 12.7 to the variable $percent
$count = 1;       # Assign the value 1 to the variable $count
$name = 'Guido';  # Assign the string 'Guido' to $name
$beast = $name;   # Copy the value of $name to $beast

Below are the most popular methods to alter numeric scalars:

$count = $count + 1; # count now equals 2
$count += 1;         # count now equals 3
$count++;            # count now equals 4

The first example is probably the simplest to understand: $count is set to the value of $count + 1. The operator += in the second example is shorthand for the same function; it can be applied to the multiply, subtract and division operators, amongst others. The final line of code uses the post-increment operator ++, which adds one to the existing value of $count. There is also a post-decrement operator -- that subtracts one from $count.

Perl has a rich variety of ways to assign and manipulate strings.

$curry = 'Chicken';      # This sets $curry to Chicken
$curry = "$curry Phaal"; # This sets $curry to: Chicken Phaal

In these examples the value of $curry is manipulated using string operators. As with numeric operators, the strings are assigned using the equals operator. In the second example the use of a variable inside double quotes replaces the variable $curry with its currently assigned value; the official term for this is "interpolation" of the variable.

$mistake = '$curry'; # This sets $mistake to literally: $curry
Unlike its predecessor, the above example uses single quotes, which prevent the variable from being interpolated: it returns the literal value within the quotes.

$word = $curry . ' is '; # This sets $word to: Chicken Phaal is
$word .= 'tasty';        # This sets $word to: Chicken Phaal is tasty

The dot operator '.' is used to concatenate values together. In these last two examples the dot operator is used to append strings to $word, in the latter case using the same philosophy as the += operator. Note that concatenating single quoted strings to a variable does not affect the interpolation of the variable itself, which is not wrapped in quotes. Perl allows us to use the string operators on numbers (it treats the numbers purely as characters) and strings as numbers (by taking the numeric part of the string until the first non-numeric character):

$count = 3;               # Set the value of $count to 3
$order = "$count $curry"; # Set $order to: 3 Chicken Phaal
$count += $order;         # $count = 6

In this example the numeric part of the string (3) is added to the value of $count; the remaining part of the string $order is ignored.

$order = $count . $order; # $order is now: 63 Chicken Phaal

Using concatenation, the value of $count is prepended to $order.
Listing the ways

While scalar variables are useful in day-to-day programming, they alone are not adequate for more complex programs. Every modern language has developed more complex data types such as arrays and hashes; Perl is no exception. Perl's arrays are indexed by integers and dynamically sized – you don't need to set a maximum size of an array when you create it, and the array will resize itself as elements are added and removed.

@Foodgroups = ('curry', 'kebabs', 'ice cream');

In the previous example we create an array called @Foodgroups and populate it with three values; note that the values can be single or double quoted and that the rules of scalar quoting apply in the assignment. All arrays in Perl are indicated by the @ character, indexed by integers and start at 0, so in the example curry is at position 0 and ice cream is at position 2.
# Prints "After curry we have ice cream"
print "After $Foodgroups[0] we have $Foodgroups[2]\n";

Notice that in the example's print statement we use the scalar $ sigil rather than the @ for array; this is because we are accessing a scalar at the position given by the value, called a subscript, inside the square brackets. If you wish to change a value in an array and you know its position, you can use the same syntax without impacting the rest of the array. If you try and retrieve a value from an index that does not exist then undef will be returned and the size of the array will not be changed.

$Foodgroups[2] = 'beer';
# Prints "After curry we have beer"
print "After $Foodgroups[0] we have $Foodgroups[2]\n";

While being able to directly access a value by its index is useful, in many cases the programmer wants to work on the start or the end of the array. Getting at the end of a dynamically sizing array is easier than you might think, using what are known as negative subscripts:

print $Foodgroups[-1]; # Prints "beer"

If you try and retrieve a value from a non-existent negative position using a negative subscript then the undef value is returned and the size of the array is not modified. If you try and store a value in a non-existent negative position, the Perl interpreter will generate a fatal error. While working with arrays is comparatively simple, an area many people new to Perl find confusing is the difference between the length (number of elements) of an array and the last position in the array. Because the last position is a scalar value, again we use the $:

print $#Foodgroups;        # Last position. This prints 2
print scalar(@Foodgroups); # Number of elements. This prints 3

In the second line of the example we introduce a new function, scalar. While Perl is often smart enough to do automatic conversion of variables to suit the current context, in places where the usage is ambiguous and more than one usage may appear correct, we can give the interpreter a helping hand. By using the scalar function we tell Perl to give us the length; if we run the snippet again without the scalar function then we get a completely different result:

print @Foodgroups;   # This prints 'currykebabsice cream'
print "@Foodgroups"; # This prints 'curry kebabs ice cream'
$" = ' and ';
print "@Foodgroups"; # This prints 'curry and kebabs and ice cream'

In the first line of the example we print the array without telling Perl a context, so it picks the most obvious one (to itself) and prints all of the array's literal values. The second line of code wraps the array in double quotes and the values are printed out in a more readable form. The spaces that are emitted from seemingly nowhere are dictated by another of Perl's implicit predefined variables, $" or the "List Separator" as it's known in Perl parlance. If you set this variable directly, as we do in the third line, and then reprint the array in a double quoted string, each element of the array is printed with the separator between them.

As arrays are collections of values it is often desirable to iterate through an array, repeating an operation for each element. There are two simple ways of doing this, and the first way illustrates one of the places where inexperienced Perl programmers can confuse array position and array length. Given below are four small for loops, two that are valid and do as expected and two that do not. See if you can pick out which are which:

for ($i=0; $i < @Foodgroups; $i++) {
    print "$Foodgroups[$i]\n";
}

for ($i=0; $i <= $#Foodgroups; $i++) {
    print "$Foodgroups[$i]\n";
}

for ($i=0; $i <= @Foodgroups; $i++) {
    print "$Foodgroups[$i]\n";
}

for ($i=0; $i < $#Foodgroups; $i++) {
    print "$Foodgroups[$i]\n";
}

The first two examples are both valid; they will iterate through the array, incrementing $i on each pass, so that each indexed value will be printed once. The final two examples are both incorrect. The third loop executes the loop body once too often (if there are three things in @Foodgroups, the loop executes when $i is 3, which is incorrect as it's not a valid position). The final loop executes the body of the loop one time too few (if the final element is at position 2, the loop stops after executing the body of the loop with $i set to 1). It is common to use either a for loop (shown above) or a foreach loop to be able to operate on every item in an array without knowing anything about the array other than its existence. The most visible difference between the two is that foreach loops use an alias for the value rather than storing an index. This is useful when it's unnecessary to know the index positions:
foreach $Food (@Foodgroups) {
    print "$Food is bad for you\n";
}
Or, if you want to make your code a little more implicit, and you call a number of functions in the loop that use $_ as their default variable, you can execute the loop without an alias yet still have it process the values:

foreach (@Foodgroups) {
    print "$_ is bad for you";
    print length, "\n";
}

The above foreach loop will print out the message for each value and then print out the length of each value. This is possible because, in the absence of an argument, Perl refers length to $_, and print then prints the value that length returns. Perl's ability to use implicit values is both one of its benefits and banes, depending on how sensibly it's used. The for and foreach loops are almost identical in functionality and can be used interchangeably; you should use the version that is easier to read in your code.
Hashes – associative arrays for lazy typists

An associative array is a data structure which is accessed by a string value called a "key", rather than an integer index as seen in arrays. In Perl, associative arrays are used so frequently they're called "hashes", which is easier to say. The % symbol denotes that a variable is a hash. As with arrays, hashes utilise brackets to access individual elements; for hashes, curly braces are used. To assign a value to a hash we need to specify both the key and the value:

$hash{'key'} = 10; # %hash now has a key with a value 10

Again the $ prefix is used when accessing an element of the data structure, because the element will be a scalar variable. So the only thing differentiating the hash from a normal array is the shape of the braces: for a hash, the curly braces encapsulate the key. The following example illustrates how those brackets alter the semantics of the entire line:

%a;        # An associative array
@a;        # A traditional array
$num = 3;  # A numeric scalar value

# Assign a value to each
$a[$num] = 'Array'; # Puts "Array" in the 4th element of @a
$a{$num} = 'Hash';  # Associates "Hash" with the key 3 in the hash %a

The keys in a hash are unique, so if a value is assigned to a key, the previous value will be overwritten and lost. At first this seems to be a disadvantage, but it's one of Perl's most heavily exploited features, as we will discuss later.

$hash{six} = 6; # Value of the key 'six' is 6
$hash{six} = 9; # Value of the key 'six' is now 9, no longer 6

Initialising a hash is similar to the methods used for arrays. A hash can be initialised with a full complement of keys and values. Hashes utilise array and list operators, but the manner in which the data is manipulated is subtly different.

%numbers = ('one', 1, 'two', 2, 'three', 3);

This expression assigns the following keys and values to the hash:

key     value
one     1
two     2
three   3

The hash knows to pick the first element as a key and the next as a value. Elements are read from the list and alternately given the role of key or value. Perl will complain (when run under warnings) if the list contains an odd number of elements; the last key will still be included, but its value will be undef.

The => operator is used to improve the legibility of list assignments when initialising a hash. It allows us to quickly differentiate the keys and values within the list: the item to the left of => is the key of the element, the item to the right is the value. Using the => operator also means that the key needn't be wrapped in quotes if it's a single word.

%hash = ( six => 6, seven => 7, ten => 10);

Hashes have a few explicit functions as well as borrowing many of the list functions. The most popular are:

keys     returns the keys found in a hash as a list
values   returns the values of the hash as a list
Each of these functions returns a list, which will be in seemingly random order. If order is needed it must be imposed using the function ‘sort’.
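As a quick sketch of my own (not from the article), imposing an order is usually just a matter of wrapping keys in sort. With the %hash defined above:

for my $key (sort keys %hash) {
    print "$key => $hash{$key}\n"; # prints seven, six, ten – alphabetical order
}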
%a = ( ten => 10, nine => 9, eight => 8, seven => 7);
@b = keys %a;   # Places the keys in an array
@c = values %a; # Places the values in an array

The order of elements in @b (the array of key elements) may be: nine, seven, eight and ten. The order of elements in @c will then be: 9, 7, 8, 10. Regardless of the order of the elements, each key and its value are returned at the same position. The functions keys and values are frequently used to traverse the contents of a hash; there are several methods of accessing every element of a hash:

for my $key (keys %a){
    $value = $a{$key};
    print "$key => $value\n";
}

This is probably the most common way of accessing all the elements in a hash, using a for loop in the same manner as we would use it to access all the elements of an array. There are two more functions used with hashes: the delete and exists functions can also be used with arrays, but are more commonly seen in code relating to hashes. The delete function removes elements from hashes. Removing an element with the key 'fred' is expressed in the following way:

delete $passwd{'fred'};

exists is a function that, given a key in a hash, will return true if that element is present. To make full use of the exists function we need to use it with a conditional operator; in the example we use if. When running under warnings, it is prudent to use exists before calling an element of a hash if it's doubtful that the key is present.

%passwd = ( fred => 'xSSx13A0Oav', root => 'root');
if ( exists($passwd{$user}) ){
    print "Success, that user was found\n";
} else {
    print "Sorry, that user was not found\n";
}

if tests the return value from the exists function; if the hash element does exist then if will run the code wrapped in the curly braces that follow it. When a function evaluates to false, the if statement disregards the first set of curly braces and executes the contents of the curly braces following else, if an else is present. Here are some simple examples of the common uses for hashes in Perl:

● Creating a look-up table of values to substitute:

%dns = ( '10.3.1.0' => 'firewall', '10.3.2.0' => 'email', '10.3.0.1' => "bob's machine");
print "$dns{$ip_address}\n";

● Removing duplicates from a structure by exploiting a hash's use of unique keys:

@a = (1,2,1,2,4,6,7,2,1,10,6,7,8,8); # Initialise the array
%a = map { $_ => 1 } @a;             # Make a hash where the keys are the elements of @a
@a = keys %a;                        # Reassign @a so that it contains unique values

At the beginning of this example @a contains (1,2,1,2,4,6,7,2,1,10,6,7,8,8); after being filtered through the hash, @a contains (7,8,1,2,10,4,6). Focusing on line 2 of the above example, map is used to create a key/value pair for the hash. The value being assigned is irrelevant; the only important function is taking place implicitly: for keys that are already in existence the value will be overwritten (by an identical value), since keys in a hash are unique.

Perl documentation
Perl has a wealth of documentation which comes with the standard distribution. It covers every aspect of the Perl language and is viewed using your computer's default pager program. Perldoc pages resemble manpages, citing examples of use and pertinent advice. There are many parts to the Perl documentation. To list the categories, type the following command at the shell prompt:

perldoc perl

The page displayed has two columns; the left column lists the mnemonic titles and the right column a description of the topic:

perlsyn    Perl syntax
perldata   Perl data structures
perlop     Perl operators and precedence
perlsub    Perl subroutines

To invoke the documentation for a subject, simply type perldoc and the mnemonic for that topic on the command line. The example below will display the documentation for "Perl Syntax":

perldoc perlsyn

A further use of perldoc is to read the usage for any of Perl's functions; this is done by calling perldoc with the -f option and the function name as an argument. The following example will display the documentation for the function map:

perldoc -f map

Perldoc also provides quick access to frequently asked questions about Perl:

perldoc -q punctuation
PROGRAMMING
C: Part 6
LANGUAGE OF THE 'C'

In part 6 of Steve Goodwin's 'C' tutorial we continue our look at file handling and keyboard input

File handling
Most software will at some time need to read from (or perhaps write to) a file. Text editors obviously need to, device drivers less so. The files can be generalised into three categories: user data files, program configuration files and program data files. From the program's perspective, however, they are handled in exactly the same way.

The x files
The programming metaphor for file handling is the same as it is for the user: you open a file, work with it (read, write or both) and then close it when you've finished. There is an example of this in Listing 1.
Listing 1

 1  #include <stdio.h>
 2
 3  int main(int argc, char *argv[])
 4  {
 5      FILE *fp;
 6      char szText[80];
 7
 8      fp = fopen("listing1.c", "r");
 9      if (fp)
10      {
11          fgets(szText, sizeof(szText), fp); /* grab line 1 */
12          printf(szText);
13          fclose(fp);
14          fp = NULL;
15      }
16
17      return 0;
18  }
Let's deal with the necessities first: line 1 includes the header file stdio.h. We should be used to this by now, as it allows us to use the printf function. However, it also allows access to the file handling functions. Line 5 declares a pointer (fp, by the way, stands for file pointer) to a FILE structure, defined inside stdio.h. The fopen function gives us a valid FILE to point to, and requires two (guessable!) string arguments. The first is the file name (with either a relative or absolute path), whilst the second is the "mode", indicating how we wish to open the file. It is permissible to use the modes detailed in Table 1. If fp is non-NULL, then lines 11-13 are executed. The first of these, fgets (file get string), will read plain text from the file (fp) into the buffer szText, up to a maximum of 79 characters. The reason for this limit is that it is one less than the size of the string, giving it space to add the NULL terminator. It doesn't have to read 79 characters however, as it will stop when it finds a new line character (or it reaches the end of the file). This is the same function we saw briefly as a replacement for the (rather awful) gets function. Finally, line 13 closes the file. Because fp is a local variable, and its value alone is passed into fclose, it will still hold a file pointer when fclose returns. It will be an invalid file pointer, but a pointer nevertheless. Therefore, I like to manually reset the pointer after I've closed a file, to remind me it is no longer in use (line 14).
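A small sketch of my own (not from the listing): when fopen does return NULL, the standard perror function will report why, which is handy while experimenting with the modes in Table 1:

#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *fp = fopen("no-such-file.txt", "r");

    if (fp == NULL)
    {
        perror("fopen"); /* e.g. "fopen: No such file or directory" */
        return 1;
    }

    fclose(fp);
    return 0;
}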
Table 1
Mode  Method           Comments
r     Reading          If the file doesn't exist fopen returns a NULL pointer. This could also happen if you do not have read permissions, or it has been opened exclusively by another program. Reading starts at the beginning of the file.
w     Writing          Creates a new file and allows write access to it. Will return NULL if the file cannot be created (perhaps it already exists, and you don't have write permissions). Writing starts at the beginning of the file.
a     Appending        Opens an existing file (or creates one, if it doesn't exist) and allows write access. Returns NULL if the file doesn't exist and a new file can't be created in its place. This usually happens when you don't have write permissions in the directory. Writing starts at the 'end' of the file (as you might already have guessed!). To reset this 'file marker' to the beginning, use 'fseek', explained later.
r+    Read and write   When the mode features the '+' symbol it works as above, but additionally supports both read and write. So 'r+' will open the file for reading (failing if it doesn't exist), but will also support writing data back into the file.
w+    Read and write   Similar to 'r+', but does not fail if the file doesn't exist.
a+    Append and read  Similar to 'w+'.
Note: Some software will use the letter b to open a binary file (as opposed to ASCII). This is not necessary, since file type is determined by how you access the file; with fgets (implying an ASCII file), or fread (binary), for example.

Search and destroy

For our conversion routine to gain a wider audience, we're going to let it convert between any pair of units: miles to kilometres, pints to litres and, yes, Fahrenheit to Celsius! Let's create a file (called "convert.conf") with the following entries:

m    km    1.6093    0
pt   l     0.568     0
f    c     1.8       32

Each line has four tab-separated fields: the "from" unit, the "to" unit and two numbers. We multiply by the first, and then add the second, to convert from "from" to "to"! If we were parsing this configuration file it would be possible to read each line into a string and scan it manually, one character at a time, for each field. It wouldn't be very difficult, given the code we've already learnt, but as good programmers, we're lazy! We've got a library function that does most of this for us. It's called fscanf.

fscanf(fp, "%s %s %f %f", szFromUnits, szToUnits, &fMultiplier, &fAddition);

fscanf is a direct equivalent of the scanf we've already seen. It works in exactly the same way, but takes an extra parameter of the file pointer. It also returns EOF if the end of file has been reached, or a count of successfully read parameters, like scanf.
End of the century

We can now read files. Great! But we've seen nothing to tell us if there is any more data in the file to be read. That's because I've not shown you any way of knowing when (or how) the end of file is flagged. The fact is, it isn't! Not really. What happens in C is that you try to read from the file (with fscanf or fgets, say) and then it tells you there's no more data left. Not before. But after! This end of file (EOF) condition is indicated by the return value of whichever function you use to read the data, as shown in Table 2.

Table 2
Function  What it returns on EOF  Comments
fgets     NULL                    Will normally return a pointer to the string read (which you also passed in)
fscanf    EOF                     EOF is a numeric constant, defined to be -1 in stdio.h
getc      EOF                     Like getchar, the return type is an int, allowing it to return 0 to 255, and EOF

Finally, there is also a feof function, which returns TRUE if the EOF has been reached. Naturally you should check the return value of any fgets before you use any of the data it gives you. However, if you are reading complex files using two or three of the above functions, feof can make a convenient loop terminator. For example:

while (!feof(fp))
{
    if (fgets(szText, sizeof(szText), fp)) { /* do something */ }
    if (fscanf(fp, "%f", &fVar) != EOF)     { /* do something */ }
    if ((ch = getc(fp)) != EOF)             { /* do something */ }
}

If we were to compile under a system that doesn't use the same end of line character as Linux (or even one that used two end of line characters) we wouldn't need to change our code! That's because the fgets function is inside an OS-specific library; the writers of that library would handle the appropriate end of line character(s) for us.

The specials

You will notice that there are marked similarities between the console I/O and file I/O functions. This is intentional, as it allows C to follow the Unix/Linux philosophy that everything should be a file – our input stream (usually from the keyboard) is actually a FILE *, meaning we could read formatted keyboard input with:

fscanf(stdin, "%s %f", szFromUnits, &fConversionNumber);

instead of:

scanf("%s %f", szFromUnits, &fConversionNumber);

because we have three standard file pointers that always exist: stdin, stdout and stderr. These are all variables (of a FILE * type), but should not be modified from within the program like most variables. An old trick was to write:

stdout = fopen("output", "w");

which caused every printf and puts to automatically find its way into the output file. This is bad!!! If you want an easy way to redirect output to a file (from inside the C program), create a FILE * and output all text through it. The FILE * variable can then be made to point to either stdout, or a file created with fopen. However, it is generally better to leave file redirection of this sort to bash (or some other shell).

Paperback writer

Writing data into a file is no more difficult than writing it out to the screen. Once we've opened the file with fopen we can use any combination of the three primary output functions shown in Table 3.

Table 3
Function  What it does
fprintf   Works exactly the same as printf, but takes an additional (first) parameter indicating the file pointer.
fputs     Works like puts, but takes an additional (second) parameter, indicating the file pointer. As a peculiarity, fputs does not add an end of line character to the string like puts.
fputc     A mirror of getc; takes two parameters: the character to output (also as an integer, not a character), followed by the file pointer. Some code will use putc in place of fputc. Both take the same parameters, in the same order, and are identical in operation. However, fputc is a function, and putc is a macro. Which one you use is a matter of style.

These can be used as shown in Listing 2.

Listing 2

 1  #include <stdio.h>
 2
 3  int main(int argc, char *argv[])
 4  {
 5      FILE *fp;
 6      int i;
 7
 8      fp = fopen("dataform", "w");
 9      if (fp)
10      {
11          fputs("Data Collection Report\n", fp);
12          for (i = 0; i < 32; i++)
13              fputc('-', fp);
14          fputc('\n', fp);
15          fputs("Time : Temp in Celsius\n", fp);
16          for (i = 0; i < 24; i++)
17              fprintf(fp, "%.2d-00 : ____\n", i);
18          fclose(fp);
19          fp = NULL;
20      }
21
22      return 0;
23  }
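For reference, this is the shape of the dataform file the listing produces (first lines shown; derived by reading the code, not reproduced from the magazine):

Data Collection Report
--------------------------------
Time : Temp in Celsius
00-00 : ____
01-00 : ____

and so on, down to the 23-00 row.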
Binary files

Most files consist of chunks. These are groups of entities that belong together. In a graphic format, one chunk might be the header (containing image width and height), another might contain the palette information, whilst another might be the image data. In ASCII files, these chunks might be distinguished by a line break or a tab (like "the file what I wrote" above!). With binary formats, the unit of persuasion is the byte. In these cases, the end of line character ("\n") is treated like any other. For it to be handled as such, we need functions that read (and write) data without stopping at the first new line they find, as in Table 4.

Table 4
Example:
    int iData[16], iNum;
    iNum = fread(&iData[0], sizeof(int), 16, fp);
Explanation: fread reads a number of data elements into the memory specified (parameter 1). The size of each data element is held in parameter 2, while parameter 3 tells C how many there are to load. The return value indicates how many were actually loaded. Normally this is identical to the number we requested, unless an end of file was reached, in which case it will (naturally) be lower. (To work with individual bytes, the size of the data element is set to '1'.)

Example:
    fwrite(&iData[0], sizeof(int), 1, fp);
Explanation: This writes out the data, as is: no end of line character(s) are added (since this is a binary operation). Here, we decided to write out just one integer. The parameter order is identical to fread.

After each read or write operation, the file marker is incremented beyond the data we just read (or wrote). This marker is basically an index indicating how far (in bytes) into the file we are. It is very similar to an array index when we are dealing with memory. We can discover this index with:

position = ftell(fp);

position is the number of bytes (from the start of the file) we are currently at, where zero indicates the first byte, and minus one is an error code (EOF). Its result can be stored in a 'long' variable and used to rewind the file to that position later in the code:

fseek(fp, position, SEEK_SET);

The last parameter is the interesting one. It can be one of three possible values, as defined in stdio.h. When writing code, you should always use the name to ease readability. However, some (very cheeky) programmers don't, and use the values in column 2 of Table 5.

Table 5
Name      Value  Use
SEEK_SET  0      Move to "position" bytes from the start of the file. Negative values are allowed, but make no sense.
SEEK_CUR  1      Move "position" bytes from the current position. Positive values move the marker forward, negative ones move it back.
SEEK_END  2      Move to "position" bytes from the end of the file. Positive values are allowed, but make no sense. A position of "-1" sets the file marker to the last byte of the file.

The fseek function returns 0 if everything went OK, or EOF if you exceeded the bounds of the file. (An improvement on arrays, notice, which do not give an error if you try referencing data that is out of bounds.) That, unbelievably, is the entire sum of the standard file I/O library (but see the Other functions boxout). There are no standard functions to list every file in a directory, report the file attributes, copy a file or calculate the size of one. This is for portability, since not all systems work with the same system of attributes or directory structure (as much as we might want it to be, Linux is not the centre of the computing universe!). However, with a little thought we can use the given functions to create our own 'file size' routine, as shown in Listing 3.

Listing 3

 1  #include <stdio.h>
 2
 3  long MyGetFileSize(char *pFilename)
 4  {
 5      FILE *fp = fopen(pFilename, "r");
 6      long iSize; /* Not an int, since long could traditionally cope with much larger numbers */
 7
 8      if (!fp)
 9          return -1; /* Error! File doesn't exist */
10
11      fseek(fp, 0, SEEK_END); /* First byte after the file ends */
12      iSize = ftell(fp);
13      fclose(fp);
14      fp = NULL;
15
16      return iSize;
17  }
18
19  int main(int argc, char *argv[])
20  {
21      printf("This file is %ld bytes long!\n", MyGetFileSize("listing3.c"));
22      return 0;
23  }

A file copy is also a simple routine using fread and fwrite. For other file handling functions, see a cheat method in the System boxout.
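The article leaves that copy routine as an exercise; a minimal sketch of one (my code, not the author's) might read:

#include <stdio.h>

/* Copy pSrc to pDest in 4Kb chunks; returns 0 on success, -1 on error */
int MyCopyFile(char *pSrc, char *pDest)
{
    FILE *pIn = fopen(pSrc, "r");
    FILE *pOut = fopen(pDest, "w");
    char buffer[4096];
    size_t n;

    if (!pIn || !pOut)
    {
        if (pIn) fclose(pIn);
        if (pOut) fclose(pOut);
        return -1;
    }

    /* fread reports how many bytes were actually read; write out exactly that many */
    while ((n = fread(buffer, 1, sizeof(buffer), pIn)) > 0)
        fwrite(buffer, 1, n, pOut);

    fclose(pIn);
    fclose(pOut);
    return 0;
}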
We have covered quite an array of features so far in this series! We can read a conversion table from a file, parse it into variables, work with strings (part 3),
Table 5 Name
Value
Use
fseek(fp, position, SEEK_SEL);
SEEK_SET
0
The last parameter is the interesting one. It can be one of three possible values, as defined in stdio.h. When writing code, you should always use the name to ease readability. However, some (very cheeky) programmers don’t, and use the values in column 2 of Table 5. The fseek function returns 0 if everything went OK,
SEEK_CUR
1
SEEK_END
2
Move to “position” bytes from the start of the file. Negative values are allowed, but make no sense. Move “position” bytes from the current position. Positive values move the marker forward, negative ones move it back. Move to “position” bytes from the end of the file. Positive values are allowed, but make no sense. A position of “-1” sets the file marker to the last byte of the file.
Issue 19 • 2002
LINUX MAGAZINE
63
PROGRAMMING
Listing 4 1 #include <stdio.h> 2 #include <stdlib.h> 3 4 int main(int argc, char *argv[]) 5 { 6 FILE *fp; 7 char szFromUnits[32], szToUnits[32]; 8 float fMultiplier, fAddition; 9 float fValue; 10 11 if (argc < 3) /* We need two U arguments: number and then units */ 12 return EXIT_FAILURE; 13 14 if (fp = fopen(“convert.conf”, “r”))/* U Checks the file exists */ 15 { 16 while(!feof(fp)) 17 { 18 if (fscanf(fp, “%s %s %f %f”, U szFromUnits, szToUnits, &fMultiplier, U &fAddition) != EOF) 19 { 20 if (strcmp(argv[2], U szFromUnits) == 0) 21 { 22 fValue = U (float)atof(argv[1]); 23 printf(“%.2f %s => %.2f U %s\n”, fValue, szFromUnits, fValue * U fMultiplier + fAddition, szToUnits); 24 } 25 } 26 } 27 fclose(fp); 28 } 29 return EXIT_SUCCESS; 30 } look at arguments passed in on the command line (part 1) and convert data between Celsius and Fahrenheit (parts 2, 3, 4, 5 and 6!). It wouldn’t be difficult to put them altogether to create a generalpurpose conversion utility. And that, coincidentally, is what appears in Listing 4! We can test this with: $convunit 54 f 54.00 f => 129.20 c $convunit 10 m 10.00 m => 16.09 km Note: the atof function in line 22 converts a string into a double. We then have to “type cast” (i.e. convert) it into a float, since that’s what we are using. Casting will be explained more fully in a later issue. 64
System
One of C's biggest strengths, and the reason it is so widely used, is its portability. Writing our own version of "cp" might make our code portable, but at the expense of extra work. Writing our own portable version of chmod, however, is not possible. Period. For this functionality we have to resort to using the operating system – forgoing the need for portability – and for some software, this is never an issue. The function I am building up to is system.

#include <stdlib.h>
system("ls -al");

The above line does exactly what it says on the tin! It runs the given ls command in the shell, waits until it's finished, and continues executing your C code. All environment variables are inherited, and its output goes to the same place. You may notice, however, that all output from a system call is flushed before that of any text you printf'ed before it. If this is undesirable, you can manually fflush before calling system. This function can also copy files, mount filesystems and shut down the computer, but I'm sure you can think of other examples!

The ball's now in your court as far as adding a touch of polish goes. For example:

● An error (to stderr, remember) if there are not enough arguments.
● An error if the file doesn't exist.
● Work out the inverse – i.e. if 'f=>c' is given in the conf file, work out 'c=>f'.
● Read the conversion information into an array of structures.
● Use a separate function to produce the conversion.
● Convert a range of values if three arguments are passed in.

The difference between a casual programmer's C program (like the one above) and a professional, industrial-strength one is how it handles errors. Making enhancements to the above program is therefore highly recommended.
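To see the flushing point for yourself, compare the output order with and without the fflush call – a minimal sketch (the message text is ours):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("About to list the directory...\n");
    fflush(stdout);     /* without this, the ls output may appear first */
    system("ls -al");
    return EXIT_SUCCESS;
}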
The author Steven Goodwin celebrates (really!) 10 years of C programming. Over that time he’s written compilers, emulators, quantum superpositions, and four published computer games.
PROGRAMMING
The new class model in Python 2.2
CLOTHES MAKETH THE MAN

Just before last Christmas PythonLabs, under the leadership of Guido van Rossum, brought us Python 2.2. As mentioned in a previous article, the class model has changed substantially in Python 2.2, which seems like reason enough to take a detailed look at the changes.

What's new?
Until now there has been a strict separation between built-in data types, such as lists, dictionaries and tuples, and user-defined classes. Classes could not derive from built-in data types. There were wrapper modules like UserDict and UserList for emulating built-in data types, but these were more of a work-around. Python 2.2 introduces something called new-style classes (NS). This type of class is characterised by being derived from object.
The new-style classes introduced with Python 2.2 allow cleaner programming. Andreas Jung investigates how these innovations help the programmer to avoid having to resort to dirty tricks
class X(object):
    ...

In Python 2.2 the following built-in data types are now also NS classes and therefore derive from object:

* int
* long
* float
* complex
* str
* unicode
* tuple
* list
* dict

They can now also be used as factories; for instance, d=dict() is identical to d={}. In Listing 1 we are going to explain the application of NS classes using the implementation of the class sortedDict. sortedDict is intended to behave like a dictionary, the difference being that the sequence of the keys is maintained when reading from the dictionary after new elements are added.

Listing 1: Dictionary with sequence "sortedDict.py"
 1 class sortedDict(dict):
 2
 3     def __init__(self):
 4         dict.__init__(self)
 5         self.lst = list()
 6
 7     def __setitem__(self,k,v):
 8         dict.__setitem__(self,k,v)
 9         if not k in self.lst:
10             self.lst.append(k)
11
12     def __delitem__(self,k):
13         dict.__delitem__(self,k)
14         self.lst.remove(k)
15
16     def keys(self):
17         return self.lst
18
19     def values(self):
20         return [ dict.__getitem__(self,x) for x in self.lst]
21
22     def items(self):
23         return [ (x,dict.__getitem__(self,x)) for x in self.lst]
24
25 if __name__ == "__main__":
26     d = sortedDict()
27     d['linux'] = 'magazine'
28     d[17] = 42
29     d[ (2,3) ] = (1,2)
30     print 'keys():',d.keys()
31     print 'values():',d.values()
32     print 'items():',d.items()
Just as a reminder, Python dictionaries do not define a fixed key sequence; that means keys() does not necessarily return the keys in the order in which they were added to the dictionary. The script in Listing 1 provides the following output:

keys(): ['linux', 17, (2, 3)]
values(): ['magazine', 42, (1, 2)]
items(): [('linux', 'magazine'), (17, 42), ((2, 3), (1, 2))]

Line 1 defines the new class sortedDict as being derived from the built-in dictionary class dict. It is not necessary to specify object explicitly, as dict itself already derives from object. The constructor in lines 3 to 5 initialises the dictionary and also defines the internal variable lst, which is going to store the keys in sequence. lst is needed later to obtain the sequence of the keys when reading from the dictionary. __setitem__() and __delitem__() are required if new keys are added or deleted. When setitem is used a new key is appended to lst, while use of delitem deletes a key. Our example overrides the keys() method so that the list of keys is returned. The same is true for the methods values() and items().

Iterator extension
Until now you had to use __getitem__() in Python classes in order to iterate objects within a for loop. This approach can be ambiguous and prone to errors. Instead, Python 2.2 allows you to define __iter__() within a class and to return an iterator object. Python invokes __iter__() once when entering the for loop. Essentially an iterator implements a next() method that Python calls at each iteration within the for loop and which returns the next element of the object. To demonstrate this, we are going to extend our sortedDict class by an iterator interface. In previous versions of Python it was not possible to iterate directly over a dictionary, only over the list returned by keys(), values() or items().

For our example we are going to introduce the new class sortedDictIterator. __iter__() returns an instance of this class as soon as a for loop is used to iterate over an instance of sortedDict. A reference to the dictionary's keys initialises the constructor of the iterator object. At each iteration of the loop next() returns the next element of the keys until the end of the list is reached. The implementation returns the keys similarly to the implementation of the iterator object for dictionaries in Python 2.2. When a StopIteration exception is raised this indicates the end of the iteration.

Listing 2: Extended sortedDict.py
 1 class sortedDict(dict):
 2     ..
 3
 4     def items(self):
 5         return [ (x,dict.__getitem__(self,x)) for x in self.lst]
 6
 7
 8     class sortedDictIterator:
 9
10         def __init__(self,lst):
11             self.lst = lst
12             self.__num = 0
13
14         def next(self):
15
16             if self.__num < len(self.lst):
17                 self.__num+=1
18                 return self.lst[self.__num-1]
19             else:
20                 raise StopIteration
21
22     def __iter__(self):
23         return self.sortedDictIterator(self.lst)
24
25
26 if __name__ == "__main__":
27
28     d = sortedDict()
29     d['linux'] = 'magazine'
30     d[17] = 42
31     d[ (2,3) ] = (1,2)
32
33     for key in d:
34         print 'Key=%s, value=%s' % (key, d[key])

The extended program in Listing 2 provides the following output:

Key=linux, value=magazine
Key=17, value=42
Key=(2, 3), value=(1, 2)

Multiple inheritance
Up until now name resolution during multiple inheritance has often led to unexpected results. This becomes particularly obvious when looking at the example of the famous diamond rule (Figure 1). When invoking save() on an instance of D, the base classes B and A are searched first, followed only then by C (depth-first rule). That means the old resolution algorithm calls A.save(), even though C.save() would be more logical. NS classes use a new resolution method, based largely on Common Lisp:

Figure 1: The "diamond rule" illustrates inefficient name resolution

● All base classes are listed according to the depth-first rule, with base classes that are used several times getting multiple listings: [D B A C A]
● Duplicates are removed from the list, leaving only the last occurrence of the element: [D B C A]
● The remaining list is searched from left to right in order to find the method (i.e. in the above example C.save() would be found)
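To see the new rule in action, here is a minimal sketch of the diamond – the class names follow the text's description of Figure 1, which is an assumption on our part since the figure itself is not reproduced here:

class A(object):
    def save(self):
        print 'A.save'

class B(A):
    pass

class C(A):
    def save(self):
        print 'C.save'

class D(B, C):
    pass

# the old depth-first rule would have found A.save via B;
# the new-style order [D B C A] finds C.save instead
D().save()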
Attribute access
Until now, access to the attributes of an object took place in two stages: first a check whether the required attribute existed in the instance's dictionary __dict__. If it did not, __getattr__() was called. This approach is very popular for calculating attributes on the fly (also known as computed attributes), but it can easily lead to infinite recursion. For NS classes there is a new __getattribute__(attr) method, which is called every time an attribute is accessed.
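A minimal sketch of the new hook (the class and attribute names here are our own invention):

class Traced(object):
    def __getattribute__(self, name):
        # runs on every read access, unlike __getattr__
        print 'accessing', name
        # delegate to object to avoid infinite recursion
        return object.__getattribute__(self, name)

t = Traced()
t.x = 3        # writes go through __setattr__, not this hook
print t.x      # prints 'accessing x', then 3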
Properties and slots
Properties are a special type of attribute. They behave like attributes but have their own read, modify and delete functions. The new property() function packages the relevant get, set and del functions and creates a property object. The use of this sort of property is externally transparent (Listing 3).

Listing 3: properties.py
class Test(object):
    def set_number(self,n):
        self.n = n
    def get_number(self):
        return self.n*self.n
    def del_number(self):
        del self.n
    number = property(get_number,set_number,\
        del_number,"Number")

T = Test()
T.number = 5
print T.number
del T.number

In Python 2.2 it is possible to limit the number of attribute names permitted for an object using the __slots__ attribute. This makes access to other attributes impossible (Listing 4).

Listing 4: slots.py
class Demo(object):
    __slots__ = ['x','y']

Output:
>>> from slots import Demo
>>> D = Demo()
>>> D.x = 2
>>> D.y = 4
>>> D.z = 2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'Demo' object has no attribute 'z'
Static methods
Static methods are methods that are not tied to any instance of an object but are instead called directly through the class. In contrast to normal class methods, self is omitted as the first argument and the method itself is defined as a static method by a staticmethod() call. Listing 5 creates a static method called foo(). Calling this through the Demo class or one of its instances returns "a b" in both cases.

Listing 5: static.py
class Demo:
    def foo(x,y):
        print x,y
    foo = staticmethod(foo)

Demo.foo('a','b')
demo = Demo()
demo.foo('a','b')
Backwards compatible? Old-style and new-style classes co-exist peacefully in Python 2.2. Any semantic changes relate solely to new-style classes (and these are easily identifiable by being derived from object).
Conclusion
The unification of built-in data types and user-defined classes makes programming under Python easier, by avoiding redundant code and through clear structures such as properties. Many important details can't be explained here due to the limitations of space. Guido van Rossum has described all the changes at length at python.org; a shorter summary can be found at amk.ca. Some of the innovations are hard to understand at times, but trial and error and Python's interactive mode should allow you to familiarise yourself with the new concepts quite quickly.
Info
Python 2.2: http://www.python.org/2.2
Guido van Rossum: "Unifying types and classes in Python 2.2": http://www.python.org/2.2/descrintro.html
AMK: "What's New in Python 2.2": http://www.amk.ca/python/2.2
The author Andreas Jung lives near Washington D.C. and works for Zope Corporation in the Zope core team. Email: andreas@andreas-jung.com
BEGINNERS
K-tools: PixiePlus
POINTS OF VIEW Stefanie Teufel takes a look at why Mosfet’s latest sweet treat, PixiePlus, makes image viewing and editing easy
In these days of the digital camera and huge hard disks, the computer is beginning to replace the family album for storing your cherished pictures. But what's the point of storing all your digital memories without something to view, sort and edit them? That's why we take this opportunity to present PixiePlus, a program which will make archiving this year's holiday snaps twice the fun. The latest version of this all-purpose graphical weapon can be obtained from the homepage of the author Mosfet, whose name might mean something to one or two of you from his poppy-bright KDE styles. At http://www.mosfet.org/pixie/download.html there are links to a package with the source code and also to the very latest Red Hat, Debian and SuSE packages. But beware: whatever happens, you must install the developer package, too. On our Red Hat 7.2 machine, PixiePlus wouldn't start without it. You should also be using at least KDE 2.x and Qt 2.3.x. With lower version numbers you will sadly not be able to enjoy the program; but those using more recent KDE and Qt versions shouldn't have any problems. If you have decided on the sources, PixiePlus is installed from the unpacked source directory with the following commands:

make -f Makefile.cvs
./configure
make install

rpm users can fall back on the usual rpm -ivh pixieplus-kde*.rpm. To start the program enter a simple pixie & in a terminal emulation of your choice, or via the K menu entry Graphics/ PixiePlus Image Manager.

K-tools
In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn't want to do without.

Figure 1: A file manager for images

As you can see in Figure 1, the initial configuration is split into three different sections. The right-hand side of the window is occupied by the thumbnail browser, in which all files of the current folder are displayed (by the way, PixiePlus starts straight off in your home directory). At the top left you will find the File Management Tab bar, which contains a directory tree. You can forage merrily through this with the mouse and thus define the directories your thumbnail browser should display. The third part of the starting window is taken up by the image preview. Just click once with the mouse on any image in the browser and it will be displayed here immediately. Double-click on it and you'll see the image displayed at full size.

Your personal cinema
PixiePlus deals with the resources of your computer very efficiently, but however beautiful the thumbnail-sized previews of the images in the browser window may be, anyone who has hoarded thousands of photos in a folder will have to wait a while for Pixie to load the corresponding directory. This is why Mosfet does not give you automatic loading of images in the default setting, but marks these out with a stand-in icon, as shown in Figure 2. Anyone who only has a few images stored or doesn't want to do without the preview can quickly replace Mosfet's image by clicking on the thumb icon in the menu list. Pixie then works for a while and loads in your images as thumbnails (Figure 3). Don't worry, you don't have to repeat this procedure every time you visit this directory. Once the process is complete, Pixie makes a note of this and in future will load the preview images in seconds. If you have only saved a few image files on your computer, you may want to automate this procedure. This is also easily done by clicking on View/ Previews/ Automake Previews. PixiePlus thus automatically
generates preview images when you select a new folder. By the way, you can also freely define the size of the “thumbnails”, via the menu item View/ Thumbnail Size. The default is “Large 90x90”. If you want the images larger or smaller, it’s easy to seek out the desired size here.
The navigator
PixiePlus is kind enough to support the type of navigation you might be accustomed to from file managers under KDE, which makes coping with overstuffed directories full of images a great deal easier. All it takes is a glance at the menu list. There you will find icons, familiar from Konqueror, with which you can quickly leap into the home directory or move one directory back or forward by clicking on the arrow symbols. If these gymnastics take too long for your taste, you can click at any time on the folder you want in the directory tree or enter the directory path in the corresponding field in the icon list. If you have a jumble of miscellaneous file types rather than a folder containing nothing but images, you will be glad that PixiePlus has an option for displaying only directories and images. To do this, select the item View/ Images and Folders only in the menu list. If you want to know more about these thumbnails, simply position the mouse over the image and after a short wait PixiePlus will present you with a tool tip, in which you can find out everything worth knowing about the respective file: from the size to the file privileges. Nor should you ignore the right mouse button, because the pop-up menu (Figure 4) allows you to do a whole series of things. With "Edit image..." you gain access to the Image Editor of the program. Although PixiePlus cannot yet keep up with the options of a pure image-processing package like Gimp, with over 30 inbuilt effects it gives you a number of options without having to load up a separate program. Also interesting is the option Convert to..., with which you can convert your images at the click of a mouse from one format to another. If they are animated GIFs, you should also try out the option Play Animation, so as to enjoy the animation at full size.
Figure 2: No preview for miles around
Figure 3: Preview images everywhere you look Figure 4: The pop-up menu has quite a bit to offer
The file list The FileList contains all files you have loaded into PixiePlus or specified as command line arguments when starting the program. It is especially practical when you want to look at your images in full screen mode (View/ View images as.../ Fullscreen). You can then navigate through the loaded images via the back and forward buttons in the top left-hand corner of the screen. If you want to store the latest file list, select File/ Save FileList As in the menu list. You will then be prompted to give the list a name, which you can then load later at any time via File/Open FileList. The image list is also wonderfully well suited for an additional PixiePlus feature: the slideshow. To do this, select File/ Slideshow... from the menu list. You can then decide whether you want to enjoy a slideshow consisting of images from the FileList or from the current folder (Figure 5). You can also define how many seconds should elapse between image changes, or whether you want to look at the whole sequence of images again immediately after it ends, by means of the Loop option. To stop a slideshow that’s underway, by the way, simply click at any point on your desktop.
Figure 5: Put together your own personal slideshow!
What else can PixiePlus do?
When it comes to screenshots, PixiePlus won't leave you in the lurch. With a couple of mouse clicks you can retain the current desktop or individual program windows for posterity. To do this, just select File/ Take a Screenshot.... In the window that appears (Figure 6) you can decide whether you want to make a screenshot of the complete desktop (Grab the entire desktop) or of a specified window, which you can easily choose with a mouse click. With the field Delay (in secs): you define how long you want PixiePlus to wait before it creates the screenshot for you. If you'd just like to take a quick look at an individual image, you may be put off by the idea of starting a fat program like PixiePlus with all its many features. Pixie developer Mosfet is one step ahead of you, and for this reason has equipped the program with command line options. A pixie photo_of_your_choice, entered in a terminal window, is all it takes – the time-consuming browser and image editor of PixiePlus is then not loaded.
Figure 6: The screenshot section
BEGINNERS
The Answer Girl
COMMAND LINE JUGGLER The Linux command line can do a great deal more than the good old DOS command.com. Many of its treasures are hard to find, but Patricia Jung is at hand to help root them out
Shell prompt
The prompt for the shell. Only when a character string – which can of course include a username and/or computer name, but also the working directory, and which typically ends in $, > or (in the case of root) in # – can be seen on the command line will the shell accept commands. Obviously, the prompt can be individually adapted.
It's well known that typing in a command at the shell prompt of an X terminal or a console often produces more rapid results than a whole raft of mouse clicks in a GUI application. With the arrow keys, it's possible to retrieve and edit commands already entered from the History – the store for used commands – and with a Return, send them on their way again. The Tab key, for completing commands and file names in the bash, is also part of the general education of a Linux user. In most cases, that's as far as it goes. By the time you've spent five minutes digging around with up and down arrows for an old command, which it would have been quicker to type in afresh, you start wondering if the shell has any more shortcuts to offer. Driven by this thought, the Answer Girl discovered in previous issues the event designator !#, which simply repeats whatever one has already entered in the current command line. With modifications such as :1 you can restrict the selection to the second word of this expression (the count starts at zero), so that an

echo hello !#:1

executes the command echo hello hello. Adding a further :p merely outputs the command thus created (print) without actually letting it proceed – something which is still nice to know. In reality, those of us who are less practised will usually have typed in the command completely before remembering the very crude syntax.
Shell
The command line interpreter, which prepares user commands for execution by the kernel and passes them to it. From the viewpoint of a command line user the shell encompasses the kernel like a mussel shell, hence the name.

Impact point
Anyone who has ever looked over the shoulder of some guru as he or she is typing may have noticed that the exclamation mark peppers the command line quite liberally, and usually in the form:

!commandstart

A !man, for example, recalls the manpage in which you have most recently been rummaging, while !ssh re-executes the most recently entered ssh command. It's quite likely that you won't really trust this thing, and would rather know in advance what command is about to be executed. So why don't we just find out what happens when we link the modifier already mentioned, :p, with the exclamation mark search:

pjung@chekov:~$ !ssh :p
ssh bashir :p
pjung@bashir's password:

That was in fact wrong: since the target computer bashir is asking for the password, :p has apparently not stopped the last ssh command, ssh bashir, being executed. But wait! Why does it say, in the first answer line of the shell, that it is now executing the command ssh bashir :p? Because all we did there was to add the character string " :p" to the command line concerned. That may not have been what we wanted, but it's good to know it works.

The Answer Girl
The fact that the world of everyday computing, even under Linux, is often good for surprises, is a bit of a truism: time and again things don't work, or at least not as they're supposed to. The Answer Girl in Linux Magazine shows how to deal elegantly with such little problems.
Ctrl+C will in any case ensure that the wrongly invoked command is stopped. But where was the error? A simple space, because

pjung@chekov:~$ !ssh:p
ssh bashir :p

actually shows that the last-sent (as the result of our failed attempt) ssh command was called ssh bashir :p. But we don't need the :p. Anyone who now carries on bravely and types in ssh bashir, though a little puzzled, can use

pjung@chekov:~$ !ssh:0-1:p
ssh bashir

to display the command created when we take from the last-used ssh command only the zeroth (ssh) and the first word (bashir). !ssh:0-1 will now call up this command line.
Glimpses of history
Unfortunately, the bash does not save the exclamation mark variations of the commands in its history like this. Instead, using the arrow keys or the shell built-in history, one finds only the version already replaced by the shell (the history function interprets a numerical argument as the number of recent commands to be listed):

pjung@chekov:~$ history 3
955 date
956 ssh bashir :p
957 ssh bashir

It should be no problem to re-use the command lines output by this, using the numbers. If, on the other hand, !# relates to the current command line, while !! relates to the previous one, it seems a good idea to have a go at

pjung@chekov:~$ !955
date
Thu Jan 3 14:03:49 CET 2002

X terminal: A GUI program which provides a command line. It doesn't matter whether the program is called xterm, konsole or aterm; there is always a shell running in it too.
bash: The standard shell under Linux. Its name, "Bourne Again Shell", indicates that it is compatible with the traditional Bourne Shell, sh, but also comes with a whole lot of functionality which the other does not have.
Built-in
The work begins for the shell when the user presses Return on the command line. It checks to see if anything in what has been typed in needs to be replaced or supplemented (the exclamation mark constructs are one good example). It is only after this preliminary work that it will charge the kernel with executing the corresponding processes. The first word of the edited command line is the command which is to be started. The bash and related shells first check to see if there is an alias of this name. If not, they check whether they are dealing with a shell function. These can be functions implemented in the shell itself, the shell built-ins, or else they can be self-defined. Unlike aliases, functions can not only be given arguments along the way, but can also edit these. Only when the shell finds neither alias nor function does an external program come into play. The bash built-in type gives the user the option of finding out whether a command is really an independent binary or "only" a command built into the shell. With surprising results, such as:

pjung@chekov:~$ type cd
cd is a shell builtin

So let's put it to the test: the cd command (change directory) is not in fact an executable file, but a shell built-in, which we can overwrite with a self-defined shell function:

pjung@chekov:~$ cd(){ echo Do you want to change to $1? Nothing to it... ; }

As in other programming languages, after the function name comes a set of round brackets as an indicator that this is a function. In the bash these brackets are always left empty, even if the function deals with (command line) arguments. Curly brackets contain the commands to be executed when the function is invoked. What matters most here is that each command must end with a semicolon, and don't forget the space after {. With $1 we can go back to the first command line argument of cd. If we now feel the urge to change the directory, the computer digs in its heels:

trish@chekov:~$ cd /mnt/cdrom
Do you want to change to /mnt/cdrom? Nothing to it...

By way of comparison, it is not possible to evaluate the parameter variable 1 with a cd alias:

trish@linux:~$ alias cd="echo Do you want to change to $1? Nothing to it..."
trish@linux:~$ cd /tmp
Do you want to change to ? Nothing to it... /tmp

Here the shell takes the entire cd /tmp command and does nothing but replace cd with echo Do you want to change to $1? Nothing to it..., so that echo Do you want to change to ? Nothing to it... /tmp is executed. With unalias cd we cancel the alias. If we now enter the cd command, the shell again goes for the function defined by ourselves. To get rid of this and be able to change directory in the normal way again with the built-in, there is fortunately another built-in named unset: unset cd lays the ghost of the shell function.
Emacs After vi, Emacs is the second most common standard text editor, installed on almost every Unix system. As with vi, there also exist various implementations of this editor, of which the most popular must be the GUI application xemacs. Anyone who has familiarised themselves with its operation, which sometimes takes quite a bit of getting used to, finds they have acquired an extremely versatile tool which can be expanded in the programming language Lisp, which, with the aid of various modules written in Emacs-Lisp, covers all possible areas of application from the programming environment to email and news programs.
Foreground process: If one calls up a command on the command line, this shell will remain blocked until this foreground command comes to an end. With command line commands such as ls this is normally no problem, but anyone wanting to start a GUI program will not be keen to see the shell put out of action for the duration of its use. This is why commands can be sent into the background: if you add an & to the command, it no longer blocks the invoking shell.
74
LINUX MAGAZINE
and behold, it works. The event designator does not even have to be in the first position here: ping !956:1 for example simply grabs for itself the first argument from the 956th command in the history and thereby executes the command ping bashir.
Almost like Emacs
It's usually simpler if you get – as with the arrow keys – an old command on the command line, and then you can edit this to your heart's content. In charge of this is – man bash gives a clue – the Readline library, which in turn ensures that Emacs users can use familiar Emacs key shortcuts to edit the command line. (Another possible mode, and one which can be activated in the current shell with set -o vi, is vi mode, which is scarcely used, even by hardcore vi advocates.) All you have to watch out for here is the fact that not everything that you can do in an editor with expanded options is also useful for a line editor, like the one the shell offers with the command line. The Emacs mode of the shell thus covers only a tiny fraction of the options of its namesake. But let's try a few things out. Since Emacs uses Ctrl+R to search backwards (reverse), we should also be able to do something in the bash with this key combination. Let's first try to ferret the ping command on bashir out of the example history. As a matter of fact a Ctrl+R ba produces the result:

(reverse-i-search)`ba': man bash

The last command typed in to contain the character string ba. Pressing the Return key to send off this command is not an option at this point, since we have not yet even found the command line we are seeking. So we complement our earlier search term ba with shi, and soon the shell suggests:

(reverse-i-search)`bashi': ping bashir

If this is not to our taste either, the Emacs cancel command Ctrl+X Ctrl+C (cancel without saving) will help out. But why do it the hard way, when there's a simpler way: a simple Ctrl+C (familiar as the key command to end foreground processes) works here, too. But what can you do when the command found, although largely matching our expectations, does not do so completely? An (again, not quite conforming to Emacs) Esc makes sure that the command now found appears in the command line for editing. The target computer is not called bashir, but bahsir? In Emacs Ctrl+T swaps two mixed-up letters. So place the cursor on the s in bahsir and press Ctrl+T – the h and the s then swap places. What works with a letter should also work with entire words. Here one can be guided by the rule of thumb that similar actions (sometimes) also have similar shortcuts: the T as in "trade" stays, but instead of Ctrl you should press Alt. If the cursor is over bahsir, with an Alt+T this word trades places with its predecessor: so ping bahsir becomes bahsir ping. Another Alt+T will also swap them both back again. All you need to watch out for here is that hyphens and dots also count as "word separators": if, say, you have entered the name of a file (for example index.html) at the prompt and you now realise that you have lazily forgotten which command to apply to it, you can write the vi (or emacs or less...) after it:

pjung@chekov:~$ index.html vi

and press Alt+T. The result, though, is not vi index.html, but

pjung@chekov:~$ index.vi html

The file name ending, separated by a dot from the basic name index, counts as a word and is consequently swapped for the character string vi. This clearly mistaken swap action is one we would like to reverse. In Emacs this is done using Ctrl+X U, and lo and behold, the bash again puts the old index.html vi after it for show. Pressing the backspace key twice will now ensure that the vi at the end disappears again, but as soon as whole words to be eradicated start getting a bit longer, a key shortcut for deleting the word before the cursor will save a bit of strain on your wrists. So we make a proper job of it and set about searching for the corresponding key combination.
All is meta
Except, what exactly are we looking for? The bash manpage unfortunately does not contain such a thing as a key shortcut table. But there is something about a Readline library... If this is responsible for the manipulation options of the command line, then there should be something to find under this subject.
Figure 1: The READLINE section of the bash manpage (left) is almost completely cribbed from the readline-manpage
There is in fact a section called READLINE, which is also a neat explanation as to why it is so difficult to trawl this manpage for key shortcuts as an inexperienced user: the documentation uses Emacs syntax for its details. This means that C stands for Ctrl, while M designates a mysterious Meta key. However, there is no such thing on PC keyboards. Depending on the pre-configuration of your computer, the Alt and/or Esc key, as mentioned in the manpage, takes over its function. And it really works: instead of Alt+T, Esc+T can swap two words, too. However, as the manpage then makes clear, all these details are subject to change: if documented key combinations have effects different from those described, then this is presumably due to individual settings in the Readline configuration files. Unless the environment variable INPUTRC says otherwise, the personal configuration file ~/.inputrc goes into action; there is also the option of a global configuration file, /etc/inputrc, which the manpages of some distributions do not mention.

Foreign characters are a matter for Readline
Inputrc? Anyone who has ever tried, in a badly preconfigured distribution, to get the accented characters on a keyboard to show up in the text console will find this name rings a bell. Three mysterious lines (Listing 1) in /etc/inputrc have already helped many people at this point – but only now is it becoming clear what they mean: the Readline library can be correctly configured with the three variables set therein.

Listing 1: Accented characters
In the console, accented characters only function if the Readline variables are correctly set:

set meta-flag on
# The meta-flag variable, now activated, ensures that
# the bash never cuts off the eighth bit of a letter.
# Accented characters can only be shown in 8-bit, not in
# 7-bit ASCII.
set output-meta on
# 8-bit characters are now shown correctly (and not as
# comical escape sequences).
set convert-meta off
# convert-meta is activated by default and then ensures
# that 8-bit characters are converted into an escape character and
# a 7-bit ASCII character. Foreign characters obviously get messed up
# when this happens, which is why this option should be deselected.

But back to business: what we are looking for is obviously a Readline command which deletes a word backwards. In fact the seemingly appropriate sub-section Commands for Changing Text has nothing that fits, but in Killing and Yanking ("deleting and re-inserting", where yank in the literal (and figurative) sense "yanks" strings already deleted from an ominous waste paper basket, the "kill ring") we get lucky:

backward-kill-word (M-Rubout)
Kill the word behind the cursor. [...]

If only we knew what a Rubout key is... fortunately the very first hit in a Google search for Rubout key (Figure 2) informs us that it is just another name for the Backspace key. As a matter of fact Esc+Backspace works as desired – but not Alt+Backspace, which is a pity. The manpage agrees though, providing Esc as a substitute for the Meta key but not the Alt which is an option in ordinary Emacs defaults.

Figure 2: Rubout is just the Backspace key

The annoying vi string from the "index.html vi" example command line is thus gone – now we just have to get back as quickly as possible to the start of the line in order to re-insert it there with C-y, thus Ctrl+Y ("yank"). The appropriate Readline command for reaching the line start is in the section Commands for Moving and is easy to remember: Ctrl+A. We also learn just in passing that Ctrl+E sends the cursor to the line end, as well as the useful option of jumping one word forward with M-f, and one word backward with M-b. Now all that's actually missing is an overview listing all the pre-set key shortcuts followed by their meaning. This does in fact exist – but not in all distributions. Anyone who uses man readline to find an individual manpage on readline(3) need only look in the section called DEFAULT KEY BINDINGS. But before the rest of you start cursing your own distributors, let me tell you: this section is almost all the readline manual has over the bash manual. Who has copied from whom here?
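Incidentally, should Alt+Backspace stay dead on your machine, the binding can be spelt out in ~/.inputrc – a one-line sketch using the symbolic key name we have just met (see man readline for the exact syntax):

# in ~/.inputrc
Meta-Rubout: backward-kill-word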
BEGINNERS
Out of the box
KILLING WITH A SMILE Friends of the Atari-ST classic MidiMaze might recognise one of the latest additions to Linux gaming. Christian Perle gets his fingers twitching with the rediscovered classic, iMaze
MIDI Musical Instruments Digital Interface; a standard for the control of electronic musical instruments. Server A program which offers a specific service, which “client” programs can use when they connect to the server. Examples of the services offered are www, ssh and iMaze. Shell script A script is executed directly by an “interpreter” (such as bash or sh for shell, or Perl for Perl scripts) and does not have to be compiled before being executed.
In 1987, long before Doom and its compatriots coined the term “first-person shooter”, there was one game on the Atari scene that used to keep people up at night. Its name? MidiMaze. The program used the MIDI interface for networking, as this was a fixed component of the Atari ST and could manage cheap DIN cables. The idea behind the game was very simple: the players – as Smilies – would run through a 3D labyrinth shooting at each other, with the walls serving as cover. One point was awarded for shooting someone and after taking a hit, players would soon respawn at a random location. The joke of the game was that computer opponents can never be as annoying as human players... In 1994, under the guise of a practical software exercise, Hans-Ulrich Kiel and Jörg Czeranski of the Technical University of Clausthal set themselves the task of implementing their favourite game under Unix as a client/server version. This is how iMaze came about, which henceforth prevented the students in the Clausthal computing centre from doing much work. Unlike MidiMaze, iMaze can be played not only with local networking, but also via the Internet – all you need is a 28,800bps modem. Although development was halted in 1996, it has now been resumed with version 1.4.
Semi-automatic
To simplify the installation process we've provided the shell script iminst.sh on the cover CD, which performs most of the work steps itself. For the installation you will need the tarball imaze-1.4.tar.gz, which you can find at http://home.tu-clausthal.de/student/iMaze/ or on the cover CD. Copy this file and the script iminst.sh into a shared directory, and enter the following commands:

su (enter root password)
sh iminst.sh
exit

Before you start make sure that the C compiler gcc and the necessary header files (glibc-dev, xlib-dev and xaw-dev) are installed. If everything works, iMaze will be in the /usr/local hierarchy once the script is completed.

Out of the box
There are thousands of tools and utilities for Linux. "Out of the box" takes the pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.

Figure 1: The iMaze menu window

Where are the servers?
When starting the iMaze client with imaze & the first thing it wants to know is which server governs the labyrinth (Figure 1). Here you will also define the message other players will receive when they get hit by you. If there are no other players to be seen on the proposed server, imaze.rz.tu-clausthal.de, you can also start your own server for your local network. To do so, enter the command:

imazesrv /usr/local/lib/imaze/labs/doors.lab &
As well as doors.lab there are also some other labyrinth files, which you will find in the same directory. As server name, enter localhost in the client. Other players in the local network should enter
the name or IP address of your computer in the Server field. If you have your own server this also gives you the option of becoming familiar with controlling the client without being shot down straight away. The available key functions (Table 1) may be quickly learnt, but cornering requires a certain finesse if you don't want to end up wrapped around the scenery.

Table 1: Key functions in the iMaze client
Key                    Function
Cursor up              Move forwards
Cursor down            Move backwards
Cursor left            Turn left (can be combined with movement)
Cursor right           Turn right (can be combined with movement)
Shift, Alt or Space    Shoot
Ctrl+S                 Pause (Smiley is temporarily taken out of the game)
Ctrl+Q                 Continue play (Smiley awakes in a random position)
Tab                    Immediately turn through 180 degrees (only with server option -Q)

Chase through the labyrinth

Shooting away in style
Anyone playing iMaze who simply starts shooting away at random will soon notice that they hardly hit anyone. This is because the iMaze server only manages one shot per player at a time. So if you fire a new shot before the old one reaches its target, the old one will be taken out of the game and only the new one will be taken into account. It's therefore better to fire off fewer but better-aimed shots. Since iMaze offers no function for sideways movement, you will instead need to move backwards around a corner in order to avoid being shot by your opponents. Some good additions to the Front View are offered by the windows Map, Compass and Rear View, which you can reach via the Window menu (Figures 4 and 5). In labyrinths, such as doors.lab, in addition to grey walls you'll also come across coloured ones. You can pass through these walls via the coloured side, but not the other way round. Such one-way doors can be very useful for pursuit tactics.

Figure 4: Map for a quick overview
Figure 5: Where am I running to?

Electronic shadow targets
If there aren't any other players to be found, or you simply want to bump up the number of available opponents in a game, you can start so-called "Ninja" processes, which link to a game and play like a normal human client. They may not be especially clever, but sometimes brawn beats brain. You can start these electronic pains in the neck with:

ninja -H Server-Name &

and thus on your own server computer with:

ninja -H localhost &

If you don't want to see any more Ninjas in the labyrinth, enter killall ninja in the shell. If you ever forget your Ninjas, it's not the end of the world: after a maximum of two hours they will shut down on their own. When several Ninjas have been started, it is advisable to use the option -m to assign special shoot-down messages to the individual processes. The imaze_demo shell script, on the cover CD, starts a server and six Ninjas. All you need to do is start the client and link to the server localhost:

imaze -H localhost &
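The imaze_demo script itself lives on the cover CD and isn't reproduced here; purely as an illustration, a script of that kind might look something like this (the sleep and the messages are our own guesses):

#!/bin/sh
# start a local iMaze server plus six Ninja opponents (sketch only)
imazesrv /usr/local/lib/imaze/labs/doors.lab &
sleep 2                     # give the server a moment to come up
for i in 1 2 3 4 5 6
do
    ninja -H localhost -m "Ninja $i got you" &
done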
Big Brother and the special server Anyone who just wants to take a peek at an active server without taking part in the action can select Camera mode in the menu window before connecting. Your own Smiley is then invisible to the other players and can run through all walls; but it does not have the ability to shoot – making you a passive observer. A look at the manpage of the iMaze server with the command man imazesrv reveals a few more special options, which can be specified when starting. The option -Q enables all players to rotate rapidly through 180 degrees with the Tab key, the option -R makes all shots ricochet off the walls, and -F makes players “faceless”, so that you can’t tell which direction your opponents are facing. Last of all, with the programs genlab and xlabed you have tools to create and to modify your own labyrinths. Happy shooting. Issue 19 • 2002
Tarball: tar is an archiving tool common under Unix. A collection of files packed together into one file by it, referred to in slang as a tarball, usually bears the file ending .tar.gz or .tgz. This is because such archives are first amalgamated with the program tar and then compressed with the program gzip. Header files: In header files (also called “include files”) there is a list of the functions available in a library plus parameters. The C compiler needs this information when compiling a program. In the most common distributions a header package for a library usually includes dev or devel in its name.
BEGINNERS
Desktopia
WINDOW PICKER Is your .xinitrc a permanent building site? Jo Moskalewski gets your house in order with selectwm
Do you fancy an ABC of window managers and desktop environments? Well, here goes: AfterStep, Blackbox, CTWM, dxwm, evilwm, FVWM, GNOME, HeliWM, IceWM, KDE, lwm, MWM, NovaWM, OLWM, PWM, qvwm, Ratpoison, Sapphire, twm, UDE, VTWM, wmx, XFce and YAWM. Admittedly, there's no J or Z just yet, but they'll surely join the crowd at some point. Nevertheless, there's no denying that we have a huge number of interfaces to choose from. (By the way: the alphabet could also be made up from completely different window managers). All that's missing now is a tool via which the user can select one of the installed window managers and desktop environments for the next session. This gap in functionality has been neatly filled by Luc Dufresne, who has given the fruits of his efforts the name selectwm; made it available for free download from his Web site, http://ordiluc.net/selectwm/; and placed the program under the GPL.
Rules of the game
The principles of the program are simple and effective: in future, selectwm will start instead of a window manager. This program in turn offers a choice of the window managers and desktop environments entered by the user (with the option of starting a default entry automatically after a pre-defined time limit). If required the user can go back to selectwm after shutting down the window manager – otherwise the user's X session is ended immediately. So as not to complicate matters needlessly, selectwm needs no separate configuration tool for all this, but can be configured completely on the fly within the one interface by the use of a mouse. Even a perfect tool needs a bit of help every now and then: it would be pointless if every programmer had to paint the buttons and texts in their programs themselves. In general the so-called "toolkits" are used for this, and these are available as libraries for several applications. selectwm relies on the GTK+ toolkit in version 1.2.0 or higher, which should be familiar from Gimp and GNOME.
deskTOPia Only you can decide how your Linux desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys. though, this minimum requirement stems from February 1999, and so any halfway recent distribution should easily meet this.
Don't worry, though: this minimum requirement stems from February 1999, and so any halfway recent distribution should easily meet it.

Preparation
It's still up to you to check that the associated Devel package (developer package) is also installed on your system. This contains parts of the GTK+ library which users of pre-compiled distribution packages do not need; only those who want to compile the corresponding source code themselves need it. selectwm comes as source code, but if you're using the Debian distribution, there's an up-to-date, ready-compiled deb package, which can be installed directly with dpkg. Even if you don't use Debian, self-compilation is not all that much harder. Once GTK+ is on the hard drive, the following steps (entered at an input prompt) deal with the rest of the installation:

tar xvfz selectwm-0.3.tar.gz
cd selectwm-0.3
./configure
make
su
make install
logout
The tool tar unpacks the archive, at which point the configure script, which comes with it, interrogates your system. In its output, you can ignore a simple “no” – but if clear indications of any errors appear, or if configure stops without creating a Makefile, you’ll first have to correct the reasons for this. You can find whatever is wrong from the script output.
If no errors occur, you will finally be prompted to proceed with the command make. This tool uses the previously created Makefile to translate the code written by the programmer into a binary the system can execute. The command make install then puts everything in the right place; to execute this step, the user root must briefly change into the source directory.
Casting
If selectwm is now in place on your hard drive, the next thing to do is to allow this tool to control your X Window system. If a user has an (executable) file in his home directory called .xinitrc (in the case of a text-based log-in) or .xsession (in the case of a graphical log-in), then this will control the progress of an X session at user level. This is dealt with in exactly the same way as a shell script: as soon as the last command in it is complete, the whole script comes to an end. If selectwm is to come into play, you could make this file as follows:

#!/bin/bash
xsetroot -solid black
selectwm

First call up everything which is to remain the same in every X session, regardless of which window manager is being used. In this example xsetroot starts by colouring the background black. Lastly, selectwm takes up the baton.

The game begins
When you first start you will be met by a completely blank window, as in Figure 1. So that you can now actually choose something, you must first create the future selection list. Press the right mouse button within the clear, white window area. A pop-up menu will appear and you can now add the description and start command for a new window manager entry (Figure 2). This list is stored (together with the rest of the configuration) automatically in the file .selectwmrc. If you have already entered your window managers and desktop environments, you can also use the right mouse button to make an entry the default – this will then be selected in future unless you manually choose a different alternative. If you no longer like the sequence of selection options, then simply drag them with the left mouse button to the right spot. A double-click, on the other hand, starts the entered command. The selection list also responds to the arrow keys and space bar.

Figure 1: Right click in the still-blank selectwm window
Figure 2: A new interface is added

Time limit?
There must be one or two of you wondering, when you are experimenting, what is the point of the so-far blank field immediately below the selection list. This question can be answered by pressing the Config button. A further dialogue opens, in which a length of time can be defined to the precise tenth of a second: once this has elapsed the default window manager starts. The remaining tenths are counted down in this very field (Figure 4). If on the other hand one sets the time limit to "0", selectwm awaits a user action. There's also another button, labelled "Go back to selectwm when the WM exits". If you select this, when your window manager is shut down you will return to selectwm. You can also pre-set this in the Config dialog (Figure 3). As you can see from Figure 4, the visual appearance of selectwm can be modified. But there are strict limits imposed on the freedom to do so: since this is a GTK application, its appearance is in line with the current GTK theme. You can select this, for example, in the GNOME Control Center (gnomecc) or else in the palette dialog of the XFce desktop.

Figure 3: Behind the "Config" button
Figure 4: Theme-capable

Trick 17
Anyone who only starts large interfaces like XFce or Window Maker will be delighted with the options described so far for selecting their window manager. But what if, for example, a window manager comes onto the desktop with its own clock, while the next one graces the desktop without any trimmings at all – and a clock would thus have to be started manually, as well? There is a simple solution – the desktop environments XFce and KDE show you how: their interfaces are not invoked in one go, but are assembled behind the scenes using the scripts startkde or startxfce. All the necessary tools are additionally started by these. Maybe you want to provide the window manager PWM with a clock at all times (which is not necessary with another interface), in which case you make an executable shell script named startpwm in the /usr/local/bin directory. There you should enter, before actually starting the window manager, your clock command:

#!/bin/bash
oclock -geometry -0-0 &
pwm

You can expand any simple window manager by an autostart function using this type of wrapper. This is also the way to realise various configurations with one and the same window manager. In future there will also be no need for fiddly modification of your .xinitrc before starting an X session.
BEGINNERS
The best Web sites for Linux users
THE RIGHT PAGES Janet Roebuck takes her monthly look at the best sites to have lit up our browsers here at the Linux Magazine offices
Video conferencing http://www.tcm.phy.cam.ac.uk/~kgs20/Video Conferencing.html If you ever wanted to know how to use video conferencing under Linux then this site provides all you need to know.
Manhattan Virtual Classroom http://manhattan.sourceforge.net This is a password protected, Web-based virtual classroom system that includes a variety of discussion groups, live chat, areas for the teacher to post the syllabus and other handouts.
Port scanner
Basilisk II
http://www.insecure.org/nmap/nmap_download. html Following this month’s focus on security, we present Insecure.org, where you can get your hands on a stealth port scanner to check the security of your own network.
http://www.Uni-Mainz.DE/~bauec002/B2Main.html While Basilisk II won't turn you to stone, it will let you emulate the 68K Apple Macintosh. What's more, it's Open Source and distributed under the GNU GPL.
Linux Gazette http://www.linuxgazette.com/index.html If you’re tired of dry and stuffy online linux resources then the Linux Gazette may be a breath of fresh air. This e-zine is a good read and attempts to bring a little fun into the world of Linux.
Perl Script http://www.fuzzymonkey.org/perl The quirkily-named FuzzyMonkey.org provides an array of Web services including free CGI and Perl scripts.
Linux art
Vipul’s Razor
http://gnuart.onshore.com/gnu_linux_art.html If you’re looking for something to brighten up your desktop, or even your walls, then this GNU/Linux poster art could be just what you need.
http://razor.sourceforge.net If you’re one of the many people plagued by the scourge of spam then you may want to check out this distributed, collaborative, spam detection and filtering network.
Linux Graphic.org http://www.linuxgraphic.org For those of you with a more creative bent, Linux Graphic provides a wide range of examples and techniques on how to create art under Linux.
Geometry Junkyard http://www.ics.uci.edu/~eppstein/junkyard OK, so this one has got nothing to do with Linux, but it’s a great resource for information on all things geometrical – from fractals to origami.
Freakzone
Dizum
http://www.freakzone.net This is a great resource for exchanging tips and knowledge about Linux and all other flavours of *nix – from Unix to SunOS.
https://ssl.dizum.com/help/remailer.html For those crippled by an overpowering sense of paranoia, Dizum is an online service that anonymously remails your email messages.
Tux History http://www.woodsoup.org/~sbaker/tux/doc Tux has led a varied and exciting life and you can read all about it here, including where his name came from and why Linus Torvalds is so taken with the flightless critters.
BSD Central http://bsdcentral.com Created for BSD users, this Web site is here to act as a central resource for all BSD products. Go on, be a devil.
Linux Search Engines http://www.fokus.gmd.de/linux/linuxsengines.html If you can’t find what you want then head down to this Web site, which features links to a huge collection of search engines.
KernelTrap http://www.kerneltrap.org If up-to-the-minute news on the Linux kernel is your bag of peanuts, then KernelTrap should be your first port of call.
Unix Guru Universe http://www.ugu.com The Unix Guru Universe is the largest single-point Unix resource on the Net, so if you’re a Unix user you simply shouldn’t miss it.
C-Scene http://cscene.org C-Scene is a free online magazine devoted to C and C++ programming.
Zedz http://zedz.net Zedz.net is a not-for-profit organisation whose main focus is encryption software, privacy, freedom of speech and freedom of information issues. The site also features articles on Internet security and cryptography.
SEUL http://www.seul.org SEUL, or Simple End User Linux to give it its full title, focuses on Linux in education, Linux in science, advocacy documents and managing and coordinating communications between projects.
COMMUNITY
The monthly GNU Column
BRAVE GNU WORLD
Welcome to another issue of Georg CF Greve’s Brave GNU World. As this marks the column’s third anniversary you’ll find a few words of celebration as well as some new projects, but we’ll kick off the proceedings with another Free game.

Pingus

Pingus under Mandrake

Pingus is a game developed under the GNU General Public License by Ingo Ruhnke, Giray Devlet, Cagri Coltekin, David Philippi and Alberto Curro. The game was inspired by DMA Design’s proprietary game Lemmings, in which a player had to direct a group of lemmings through perilous levels to the safety of the exit. The only way to influence the flow of the lemmings was to give some of them special jobs, such as diggers, climbers or even bombers. Pingus, as the name might suggest, lets you do all these things to little penguins, which bear a striking resemblance to Tux, the mascot of the Linux kernel. Game development started in 1998 and, after an announcement on Slashdot, gained the input of some graphically adept users, which gave Pingus a very attractive look. The graphics even triggered spin-offs like XPenguins, which lets Pingus’ protagonists roam the desktop. At the end of 2000 development came to an almost complete stop, but a year later some fresh programmers helped overcome this involuntary pause. As it stands, the game is still not finished and according to Ingo Ruhnke “it still doesn’t feel like a real game”. This may be partially because no sound effects have been implemented, although there is plenty of music. Some more interesting levels are also needed. Help is wanted in many forms: developers are as welcome as people contributing sound effects or more levels. Level design does not require programming knowledge, by the way, since everything is done with XML. The game itself was written in C++ and runs under GNU/Linux. It may be possible to run it on other Unix-based systems, but the developers are more interested in a Win32 port at the moment. The immediate plan is to finish the Windows port and release a new, completely playable version. Afterwards multiplayer and network support, as well as a consistent storyline, are needed to finish the game. At the moment the game can only be recommended to users willing to play around with half-finished games and who might like to contribute parts to it.
Process View Browser

The Process View Browser (pvbrowser) by Rainer Lehrig provides a structure for process visualisation. This is important in all areas where technical processes are to be visualised or controlled. Examples of proprietary programs performing similar tasks are WinCC or Wonderware. The project consists of a server and a browser, which communicates with the user. Unlike comparable projects, all configuration is done on the server side. Users can modify the server according to their own needs, i.e. they can write routines interacting with the hardware or software that define which objects are to be displayed and controlled in which way. These components are then displayed by the browser according to the information provided by the server over the network. The programming languages used for this project were C++ with Qt as the graphical toolkit for the browser, and ANSI C for the server. The project is very platform-independent: it runs on GNU/Linux as well as under Windows or VMS. According to the information provided by Rainer Lehrig, the browser is faster on a 330MHz GNU/Linux notebook than on a 1GHz Windows NT system, which he blames on the networking code.

The platform independence is also linked to the one downside of the project, which made me think very hard about whether I should feature it in the Brave GNU World. The Process View Browser is only Free Software under GNU/Linux, for which it is released under the GNU General Public License; the project is proprietary under Windows and VMS. However, two factors convinced me to write about it in the Brave GNU World. First of all, this area has been dominated by purely proprietary solutions until now, so the project certainly takes a step in the right direction. Also, the user is capable of using it as entirely Free Software as long as the GNU/Linux platform is being used. Additionally, the licensing situation of the Qt toolkit is very much comparable, since only the X11 version is available as Free Software, while the Windows and Mac versions remain proprietary. Since Qt is used by the Process View Browser, a Free Windows version of the pvbrowser would not really help a user, since the dependency on the proprietary Qt would still remain. Qt is a respected Free Software library under GNU/Linux that is used by many important projects like the K Desktop Environment (KDE) and has been mentioned in the Brave GNU World many times already. So it seemed unjust not to mention the Process View Browser because of a comparable licensing policy. Hope remains that the versions for systems other than GNU/Linux will also become available as Free Software in the long term – a hope that applies equally to Qt and the Process View Browser.

Rainer Lehrig has been working on the pvbrowser all by himself so far and is now looking for others who might be willing to help with testing or contribute ideas and code. Volunteers are also sought for documentation. If you are interested in this field, please feel free to participate in the Process View Browser. I recommend only doing so for the GNU/Linux version, however, and authors of documentation should take care to release it under the GNU Free Documentation License or a similar licence. Only in these cases will it be reasonably safe to assume that contributed work will continue to benefit the Free Software community. In order to prevent possible misunderstandings, I’d like to emphasise that the problems described do not lessen Rainer Lehrig’s contribution to Free Software. Bringing Free Software into a hitherto proprietary field is always a very important task. Still, it remains important to be aware of the problems and understand what they mean.
Process View Browser showing an image with widgets
PowerPhlogger

In June 2000 Philip Iezzi began working on software to host Web page counters under PHP, and the result of his work has been available since January 2001 as PowerPhlogger under the GNU General Public License. Similar services are relatively common on the Net, but they are usually proprietary and also rather unsatisfactory. PowerPhlogger allows everyone to set up such a service, even if the relevant pages do not support PHP. The creation of accounts with these services can either be done by an administrator or by the users themselves. An example of this is the gratis PowerPhlogger service, Freelogger. The functionality of PowerPhlogger surpasses that of most proprietary solutions. Among the features is the counting of real visitor numbers through “unique hits” instead of counting every page access. This is done by IP comparison in combination with a cookie check and a timeout defined by the user.

PowerPhlogger showing statistics
PowerPhlogger also offers so-called “visitor paths”, enabling administrators to trace the path a user has taken to reach the page. It also keeps track of the time a user spent browsing the page. Of course PowerPhlogger is capable of displaying counters on pages, which can be fitted to the layout of the page through TTF fonts and user-defined colours. Even the layout of the statistics page can be modified to suit the user’s taste with CSS modifications. Additionally, the project has been internationalised for 16 languages and supports different time zones. All data is currently stored in a MySQL database, but version 3 of PowerPhlogger, which is pencilled in for an October 2002 release, will contain a database abstraction layer. Also some of the less desirable sections of code will be cleaned up and rewritten for object orientation. Help is welcome in any form, including financial support. Philip also needs volunteers to provide support in the online forum.

Process View Browser showing diagrams and other widgets

GNU Stow

GNU Stow is an extremely useful project for everyone installing software that’s either not available for the distribution you’re using or has to be installed from source for other reasons. Under normal circumstances such activities tend to act as proof that the second law of thermodynamics applies to computer systems as well: entropy remains the same or rises, but it never decreases. In other words, systems have the tendency to become increasingly messy. GNU Stow offers a solution for this. Stow has its own directory tree, which usually resides at /usr/local/stow. New packages are installed into their own subdirectories in this directory tree. Calling Stow will create symbolic links, making sure all files of the package appear in the standard filesystem hierarchy where other programs look for them. If the package is to be uninstalled, one can simply delete the install directory and/or remove the links by calling Stow again. Stow was originally written by Bob Glickstein in 1993 using Perl, but lack of time forced him to suspend development. GNU Stow is now maintained by Guillaume Morin, who has had only minor changes to implement since the project has been stable for some years now. If you haven’t tried out GNU Stow yet, I can only recommend taking a look.
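As a concrete illustration – the package name foo-1.0 here is just an assumption for the sake of the example – a source install managed by Stow might look like this:

./configure --prefix=/usr/local/stow/foo-1.0   # confine the install to its own subdirectory
make
make install                                   # files land in the Stow tree, not in /usr/local
cd /usr/local/stow
stow foo-1.0                                   # create the symbolic links in /usr/local
stow -D foo-1.0                                # delete the links again when uninstalling

Removing the package completely is then just a matter of deleting its subdirectory under /usr/local/stow.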
GNU gettext
GNU gettext is a project probably known to most developers already, and a package that only developers and translators will ever come into direct contact with. It does play a crucial role for users, however, since it allows programs to communicate with them in their native language, so I’d like to introduce this important component here.

Although details can be spared, a short introduction to the way it works seems useful at this point: when developing programs, all output is normally written in English. All user interaction strings are collected by GNU gettext in a single file. If a program is to be localised, translators can make a copy of this file, translate all the strings in this simple ASCII file into their native language and mail it back to the developer. If this file is then copied into the right directory under the right name, the program supports that language after the next compilation. When the user runs the program, GNU gettext will try to supply him or her with the messages in the user’s preferred language. Whenever this isn’t possible, because the translation is incomplete or doesn’t exist at all, gettext falls back to the original English version. Supporting incomplete translations was one of the design goals of GNU gettext, because programs evolve step by step and very often the translators are one or two steps behind the developers.

GNU gettext consists of several tools under the GNU General Public License as well as libraries under the GNU Lesser General Public License. It complies with the Unix standards X/Open and Li18nux2000, and was originally written in 1995 by Ulrich Drepper. It has since rapidly become the de facto standard for software internationalisation both inside and outside of the GNU Project. Bruno Haible recently took over as the GNU gettext maintainer. He currently focuses on expanding GNU gettext to more languages and is considering integrating simple spell checking sometime in the future.

Bruno felt two anecdotes were worth sharing with the Brave GNU World audience. First of all, there is a distinguished and apparently quite active team working on the translations from American to British English. This seemed somewhat easier to him than a translation into Japanese. He also warns other programmers against trying to translate their programs into other languages themselves – especially if this is not their native language. As far as he is concerned, some of the translations he has encountered are worse than no translation at all. Experience shows that translations into French, Swedish, German and Spanish are provided quite often; other languages could use more volunteers, however. Localising a program for your own language is a very good way of furthering Free Software in a practical way that needs little technical expertise.
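To make the workflow described above a little more tangible, here is a rough sketch using the gettext command line tools. The program name hello.c, the _() keyword and the locale directory are assumptions for illustration only:

xgettext --keyword=_ -o hello.pot hello.c    # collect all marked strings
msginit -i hello.pot -l de -o de.po          # begin a German catalogue
# ... the translator fills in the msgstr entries in de.po ...
msgfmt de.po -o hello.mo                     # compile to the binary format
cp hello.mo /usr/share/locale/de/LC_MESSAGES/
LANGUAGE=de ./hello                          # the program now answers in German

If the German catalogue is incomplete, any untranslated strings simply appear in the original English, exactly as described above.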
Three years of Brave GNU World

What began as an experiment is now three years old, so I’m tempted to take a quick glance back. The column began as a wild idea between Tom Schwaller and me at the 512-node GNU/Linux cluster “CLOWN”, where I made my first appearance as European speaker for the GNU Project. Tom approached me with the idea of a GNU column. Arriving back home, I knew that I wanted to try writing a column that would have the mix of technical and philosophical issues that makes the GNU Project so special. Still, I was sceptical whether this was possible and whether I’d be able to fill the column each and every month in time for the print issue. From the very first moment it was clear that the column should also be published on the Net in order to make it available to as many people as possible. Doing this only in German seemed of limited use, so the initial issue was first written in German and then translated into English by me, so that both could be released online together. After releasing issue two, something remarkable happened. Within a few days, Okuji Yoshinori and Francois Thunus contacted me and asked whether I’d agree to them translating the column into Japanese and French. Of course I was quite happy about that and immediately included them in the production process of the Brave GNU World. The dam broke; more volunteers contacted me for other translations and soon other magazines requested permission to print the Brave GNU World. Today the column appears in up to seven languages online and in four magazines worldwide. Without the help of so many volunteers, this would never have been possible. The companions of those early days have all gone their own ways by now, their jobs being taken over by others. I’d like to list everyone helping with the Brave GNU World, but there is hardly enough space for it. On a single issue you’ll easily find 30 people helping as scouts, proofreaders, translators, Webmasters and so on. Even if some have participated for a long time now, there is always a certain amount of fluctuation.
Pingus version 0.4.1
To all these people and the other supporters of the Brave GNU World I would like to express my heartfelt thanks for the past three years. I’d also like to thank all those who contacted me in person or via email to tell me about interesting projects, give feedback or discuss topics they had a different opinion about. Their involvement has been a seminal part of filling the Brave GNU World with life.
...to another year Enough said, I hope we’ll see another good year for Brave GNU World and of course I don’t want to finish without the mandatory request for feedback, comments, new projects, questions and ideas. Which project is incredibly useful, funny or good and still unknown to many users? Please send answers to the usual address.
Info
Send ideas, comments and questions to column@brave-gnu-world.org
Homepage of the GNU Project http://www.gnu.org
Homepage of Georg’s Brave GNU World http://brave-gnu-world.org
“We run GNU” initiative http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
Pingus homepage http://pingus.seul.org
XPenguins homepage http://xpenguins.seul.org
Process View Browser homepage http://pvbrowser.sourceforge.net
PowerPhlogger homepage http://www.phpee.com
Freelogger homepage http://www.freelogger.com
GNU Stow homepage http://www.gnu.org/software/stow
GNU gettext homepage http://www.gnu.org/software/gettext
“History and Philosophy of the GNU Project” http://www.gnu.org/philosophy/greve-clown.en.html
COMMUNITY
Want to know more about OpenBSD?
POWER TO THE DAEMON In this month’s Free World Richard Ibbotson follows up his look at FreeBSD and NetBSD with a closer inspection of OpenBSD – whose many developers and administrators include the likes of Theo de Raadt and Wim Van de Putte
In previous months we’ve mentioned that BSD is considered by many to be even more secure than GNU/Linux; nowhere is this more evident than in OpenBSD. The OpenBSD project is now more than six years old and the authors boast that there hasn’t been a remote hole in the default install in over four years. The OpenBSD source code is continually edited to patch holes long before they ever become an issue. Therefore, if you require a secure server or firewall, then OpenBSD is the logical choice. That’s not to say you couldn’t also use it on your desktop if you wanted to. The entire system is based around secure cryptography routines – OpenSSH began as part of the OpenBSD project – which reinforce the project’s no-nonsense approach to software development.
Installing OpenBSD Having taken the decision to install OpenBSD on your secure server, firewall or even your notebook – a popular use for the OS – the installation process can be started by using a boot floppy or installing from a CD-ROM. These can be purchased online from the
OpenBSD site, or alternatively you can install directly via FTP. For the purpose of this guide we’ll assume that you’re installing OpenBSD from a CD-ROM – version 3.0 is the latest release at the time of writing. Starting with CD1, you will quickly move through some install screens, which are not unlike those we encountered in FreeBSD and NetBSD. If you can’t get access to online help at the start of the installation, it doesn’t matter. The CDs come with a quick install guide that should give you a fair idea of where to start and what you’re going to do. The first thing that you will see on the screen after booting will look like this...

rootdev=0x1100 rrootdev=0x2f00 rawdev=0x2f02
Enter pathname of shell or RETURN for sh:
erase ^?, werase ^W, kill ^U, intr ^C
(I)nstall, (U)pgrade or (S)hell? i
===============================================
Welcome to the OpenBSD/i386 3.0 installation program. This program is designed to help you put OpenBSD on your disk in a simple and rational way. As with anything which modifies your disk’s contents, this program can cause SIGNIFICANT data loss, and you are advised to make sure your data is backed up before beginning the installation process.
Default answers are displayed in brackets after the questions. You can hit Control-C at any time to quit, but if you do so at a prompt, you may have to hit return. Also, quitting in the middle of installation may leave your system in an inconsistent state. If you hit Control-C and restart the install, the install program will remember many of your old answers.
You can run a shell command at any prompt via ‘!foo’ or escape to a shell by simply typing ‘!’.
Specify terminal type [vt220]: <Enter>

Press enter here and then move on to the next part of the installation...
The installation program needs to know which disk to consider the root disk.
Note: the unit number may be different than the unit number you used in the boot program (especially on a PC with multiple disk controllers).
Available disks are: wd0
Which disk is the root disk? [wd0] <Enter>
Do you want to use the *entire* disk for OpenBSD? [no] yes
[...]

Next you are asked to partition your hard disk. The quick install guide, which comes with your CDs, shows all of this info as well. You may find it useful to know that you can multi-boot this version of BSD with Microsoft Windows NT or XP. You’ll now be prompted to set up your partitions and mount points. After this you can then set up any network connections, such as your modem or network card. Since we are installing OpenBSD, it’s best to leave ADSL and ISDN configuration until later on. Configuring the Ethernet version of ADSL is fairly easy under OpenBSD, but if you want a different setup then you should really seek out some advanced reading. You might want to consider a wires-only service for ADSL and then get hold of a modem/router that has RJ45 sockets, so that you can use the Ethernet version of ADSL that way. In any event, you will have to firewall your new connection, and it is much better to firewall around an Ethernet device than anything else. The Alcatel USB modems can be troublesome to configure. If you are just using a 56K modem with an ordinary telephone line then you shouldn’t have any problems at all. Finally, you will be asked which sets of software you wish to extract from the CDs, such as KDE or the XFce desktop. This is the screen that will ask you which software you want:

The following sets are available for extraction.
Enter filename, ‘list’, ‘all’, or ‘done’.
You may de-select a set by prepending a ‘-’ to its name.
[X] base30.tgz
[X] etc30.tgz
[X] misc30.tgz
[X] comp30.tgz
[X] man30.tgz
[X] game30.tgz
[X] xbase30.tgz
[X] xshare30.tgz
[X] xfont30.tgz
[X] xserv30.tgz
[X] bsd
File name? [] -game*

Your selected software will then be installed onto the hard drive in your computer.

/mnt2//3.0/i386/base30.tgz: 100% |**************************************************| 21192 KB 00:00 ETA
/mnt2//3.0/i386/etc30.tgz: 100% |**************************************************| 987 KB 00:00 ETA
Now all you have to do is to set the time zone and then the boot blocks will be installed for you. This makes your machine bootable from the hard disk you have chosen.
Troubleshooting

As your machine starts up it might be wise to watch the boot messages for failed hardware detection.
Info
OpenBSD http://www.openbsd.org
To order some CD-ROMs or T-shirts http://www.openbsd.org/orders.html
Compatible hardware http://www.openbsd.org/plat.html
How to install http://www.openbsd.org/faq/faq4.html ftp://ftp.openbsd.org/pub/OpenBSD/3.0/i386/INSTALL.i386
Useful documents http://www.openbsd.org/docum.html http://www.openbsd.org/cgi-bin/man.cgi
Security issues http://www.openbsd.org/security.html http://www.openbsd.org/crypto.html
Firewalls http://www.obfuscation.org/ipf/ipf-howto.txt
Goals of the project http://www.openbsd.org/goals.html
If you would like to support OpenBSD http://www.openbsd.org/donations.html
Professional support that you can pay for http://www.openbsd.org/support.html
You can type dmesg when the system is up and running to view the boot messages. Having done that, you can configure any hardware or software that you have missed out on thus far. You may need to adjust /etc/hosts or /etc/resolv.conf or something similar. If you get confused here then have a look at the documents on the Web site. There is also a PDF-based FAQ you can download, which is very helpful. After looking into all of this you might well want to ask questions and raise issues, such as compiling a kernel or migrating from GNU/Linux completely. You can consult the mailing list archives for that. Before sending mail to one of the lists you should consult the document at http://www.openbsd.org/mail.html, which gives a useful guide to netiquette on the OpenBSD lists. It’s also good to know that the OpenBSD manpages are amongst the best available. Shortly after installing your new computer you should type man afterboot and read the document hidden therein. After that you can begin to ask constructive questions. Other useful documents are man ifconfig and man route. Reading these will help you to understand the shortened and simplified instructions above.
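By way of illustration, a first sanity check after rebooting might look something like this (interface names and addresses will of course depend on your own hardware and network):

dmesg | less              # review the kernel’s hardware detection at leisure
man afterboot             # the official post-installation checklist
ifconfig -a               # confirm your network card was detected
cat /etc/resolv.conf      # check that your name servers are listed
ping www.openbsd.org      # test the connection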
Security

It’s probably a good idea to mention some more about security and security issues at this point. As well as the usual things that you might expect from a well-designed operating system, there are useful tools like S/KEY one-time passwords, and KTH Kerberos IV and V are integrated with many kerberised applications. The newly produced comprehensive firewalling system can be used with syntax that is not unlike that used in the other BSDs. There was a licence problem with the old IPF and a new version had to be written. PF(4), as it is known, is already thought to be something of a world-beater. It has built-in NAT functionality and support for both IPv4 and IPv6. There is a built-in “scrub” facility that sanitises fragmented and overlapping IPv4 packets. For more info about using firewalls with BSD have a look at the firewall URL below – it is one of the best HOWTOs on the subject. Other well-known security tools include snort and portsentry, both of which can be good when used properly.
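To give a flavour of the PF rule syntax, here is a very rough sketch of a minimal NAT-ing packet filter. Treat it as illustration only: the external interface fxp0 and the internal network are pure assumptions, and the exact syntax varied between early OpenBSD releases, so check the manpages for your version before using anything like it:

ext_if="fxp0"
scrub in all                                      # sanitise fragmented packets
nat on $ext_if from 192.168.1.0/24 to any -> ($ext_if)
block in on $ext_if all                           # deny inbound by default
pass out on $ext_if keep state                    # allow replies to our own traffic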
Supported hardware

You can install this version of BSD onto several different types of hardware. There isn’t quite the same range of hardware that you can use with NetBSD, but there’s plenty to go at and most system administrators are happy enough with OpenBSD as it is. The hardware that can be used is:
● alpha – DEC Alpha-based machines
● amiga – Amiga m68k-based models (MMU required)
● hp300 – Hewlett-Packard HP300/HP400 machines
● i386 – Intel-based PCs
● mac68k – Most MC680x0-based Apple Macintosh models
● mvme68k – Motorola MVME147/16x/17x 68K VME cards
● macppc – Apple-based PowerPC systems
● sparc – SPARC platform by Sun Microsystems
● sun3 – Sun’s 68020-based Sun3 models
● vax – DEC’s VAX computers
To sum up: you can do most things with OpenBSD. It’s very secure and the developers pride themselves on this simple fact. If you want to do something else you might consider NetBSD and/or FreeBSD, which were explained in simple terms in previous issues. You might even think about a mixed network where GNU/Linux is to be found alongside one of the BSDs.
The author
Richard is the Chairman of Sheffield Linux User’s Group. You can view their Web site at http://www.sheflug.co.uk