


COMMENT

General Contacts

General Enquiries: 01625 855169
Subscriptions: www.linux-magazine.com subs@linux-magazine.com
Email Enquiries: edit@linux-magazine.com
Letters: letters@linux-magazine.com
CD: cd@linux-magazine.com

Editor

John Southern jsouthern@linux-magazine.com

Assistant Editor

Colin Murphy cmurphy@linux-magazine.com

Sub Editor

Gavin Burrell gburrell@linux-magazine.com

Contributors

Dean Wilson, Frank Booth, Derek Clifford, Steven Goodwin, Janet Roebuck, Bruce Richardson

International Editors

Hans-Georg Esser hgesser@linux-user.de Ulrich Wolf uwolf@linux-magazin.de

International Contributors

Björn Ganslandt, Georg Greve, Anja Wagner, Patricia Jung, Stefanie Teufel, Christian Perle, Frank Wieduwilt, Juergen Jentsch, Jo Moskalewski, Marianne Wacholz, Andreas Jung

Design

Advanced Design

Production

Hans-Jorg Ehren

Operations Manager

Debbie Whitham

Advertising

01625 855169 WORLDWIDE: Hans-Jorg Ehren hjehren@linux-magazine.com GERMANY: Verlagsbüro OhmSchmidt Osmund@Ohm-Schmidt.de

Publishing

Publishing Director

Robin Wilkinson rwilkinson@linux-magazine.co.uk

Subscriptions and back issues: 01625 855169
Annual subscription rate (12 issues): UK £44.91; Europe (inc Eire) £59.80; Rest of the World £77.00
Back issues (UK): £6.25

Distributors

COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE

Print

R. Oldenbourg

Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England.

Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example letters, emails, faxes, photographs, articles, drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.

Disclaimer: Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.

Technical Support: Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.

Current issues

UNITED SUPPORTER

As predicted some four months ago, several of the major players in the Linux market have finally decided to cooperate with each other. Caldera, SuSE, TurboLinux and the Brazilian Conectiva have all agreed to work with a common aim. Each will still ship products with differing value-added content, but they will be based on the same development core and so reduce some costs. It also means it will be a little easier to get standards and compatibility issues sorted. This could explain why SuSE has been making inroads into conforming to the Linux Standard Base. This is a good thing for Linux within businesses and the common goal of world domination.

With the main rivals to this union being Red Hat, Mandrake and Debian, we are not quite heading for a single Linux distribution just yet. Red Hat is certainly known within the business world, and Mandrake also has a corporate server version. Debian gets used in most corporate server rooms thanks to its steady support. Although fragmentation was the main flaw in the UNIXes of old, I like the idea of there being some differing distributions: it encourages innovation and allows many paths to be taken.

With some 270 current distributions I can always find a way to pass a rainy day installing a new system. Some systems I favour over others; some I will no longer give disk space to, as they go out of their way to make rain clouds darker and installs harder. Who knows, maybe we will see another big player enter with a distribution of its own and make the day a little sunnier.

Happy Hacking

John Southern, Editor

We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux Community. We're not simply reporting on the Linux and open source movement – we're part of it.

Issue 22 • 2002

LINUX MAGAZINE

3


NEWS

LINUX NEWS

No strings attached

802.11 technology goes by a variety of names, depending on who is talking about it. Some people call it wireless Ethernet to emphasise its shared lineage with traditional wired Ethernet. Wi-Fi, from wireless fidelity, is another popular name, but those who work hands-on with the technology call it simply 802.11. In 802.11 Wireless Networks: The Definitive Guide, author Matthew S Gast delves into the intricacies of wireless networks, revealing how 802.11 technology can be a practical and even liberating choice for businesses, homes, and organisations. At the same time, he leads the reader through all aspects of planning, deploying, and maintaining a wireless network, and covers the security issues unique to this type of network. The adoption of 802.11 wireless technology is moving at an explosive rate. With transfer speeds of up to 11 Mbps, it’s the fastest practical wireless technology approved by the FCC for low-power unlicensed use.

“Using new network technology always requires a balance between theory and practice,” says Gast. “The theory helps you design the network and troubleshoot the equipment when it breaks, but it is not always helpful when you have a piece of equipment that implements one vendor’s view of the world. Most books will tell you either how the standard works or how to use a specific piece of equipment or software. In this book, I have tried to weave together both the theory and the practical sides of the matter.”

802.11 Wireless Networks: The Definitive Guide shows readers how to configure wireless cards under Linux, Windows, and OS X systems. The book is written for the serious system or network administrator who is responsible for deploying or maintaining a wireless network.

Info 802.11 Wireless Networks: The Definitive Guide http://oreilly.com/catalog/802dot11/

Banks stay closed

Thanks to some banks’ insistence on using closed-format information protocols, some people are being effectively shut out of online banking systems and prevented from taking part in the ‘information revolution’ that is supposed to be happening. If your bank likes to say ‘this Web site is designed for the following technology’ and you don’t fit their bill, then cold comfort is at hand with the knowledge that you’re not alone. Evan Leibovitch has been collating information about how well the banks suit the needs of the computer-literate general public. On his Web site you will find details of banks in 32 countries. It shows that, in the UK, multiplatform support is available from a lot of the banks. It’s just a shame that some of those still letting the side down happen to be big high street names. To find out whether your bank is a stick in the mud, check out how it fares at http://www.starnix.com/banks-nbrowsers.html.




C# in a Nutshell

C# in a Nutshell, written by Peter Drayton, Ben Albahari and Ted Neward, is designed to be the handbook for the new breed of programmers coming to this language, who the authors expect will use the book daily and keep it next to their keyboards for years to come. Two years ago, there were no C# programmers: the language didn’t even exist. Now all C# programmers find themselves in the same boat, having to master a new language and a new platform in quick order, without a lot of tried-and-true resources on which to depend. Neither a “how-to” book nor a rehash of Microsoft’s documentation, this latest addition to O’Reilly’s Nutshell series goes to the source of the language and APIs to present content in a way that professional programmers will value above all other books. According to co-author Neward: “Hordes of programmers will be migrating to this entirely new platform and an entirely new language. Just as happened with Java, programmers will want somebody to point out the new and interesting stuff and will need careful guidance to avoid the ‘gotchas’ in this brave new world.”

In addition to the reference section, C# in a Nutshell includes an accelerated introduction to the C# language and the .NET Common Language Runtime, and a tutorial section on using C# with the core classes of the .NET Framework Class Library. The book was written for the working C# programmer, who will be able to find answers to most questions of syntax and functionality that he or she encounters on the job. Experienced Java and C++ programmers encountering the C# language and the CLR for the first time will also be able to put this book to good use.

Info C# in a Nutshell http://oreilly.com/catalog/csharpnut/

Red Hat Alliance

The Linux provider Red Hat has announced the Red Hat Alliance: a partner programme designed to enhance and strengthen relationships between Red Hat and the premier technology providers to the enterprise market. Recently committed partners include: Alias|Wavefront, BMC Software, Borland Software Corporation, CheckPoint Software Technologies, Computer Associates, IBM, Legato Systems, Novell, Rogue Wave Software, Softimage, Synopsys, TIBCO Software and VERITAS Software. Through the Red Hat Alliance, Red Hat is forging long-term relationships with partners who are supporting Red Hat’s enterprise family of products, including the recently released Red Hat Linux Advanced Server. Advanced Server is the ideal Linux solution for partners and customers due to longer release cycles, added enterprise-class features, and unprecedented reliability and stability. The Red Hat Alliance provides viable Linux solutions to enterprise customers through:

● Certification: Certified software and hardware programmes assure customers that their applications and hardware are certified to work on Red Hat’s enterprise products.
● Support: Red Hat will work closely with its partners to ensure that customers deploying enterprise-class Linux solutions receive enterprise-class support.
● Technology: Red Hat and its partners will collaborate to improve Linux solutions for enterprises looking to migrate to Red Hat Linux.

“The Red Hat Alliance programme is a milestone for Red Hat, uniting the top hardware, software and embedded technology providers in a partnership with Red Hat to deliver the best enterprise Linux solutions to their customers,” said Mark de Visser, Vice President of Marketing at Red Hat. “We look forward to working with our partners to drive further innovation in Linux-based enterprise solutions.”

Info Red Hat Linux

http://www.europe.redhat.com



Network administration

As Craig Hunt comments in the just-released third edition of his TCP/IP Network Administration: “The Internet has grown far beyond its original scope. The original networks and agencies that built the Internet no longer play an essential role for the current network. The Internet has evolved from a simple backbone network, through a three-tiered hierarchical structure, to a huge network of interconnected, distributed network hubs. Through all of this incredible change one thing has remained constant: the Internet is built on the TCP/IP protocol suite.”

TCP/IP is a set of communications protocols that define how different types of computers talk to each other. The suite gets its name from two of the protocols that belong to it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). According to Hunt, TCP/IP is the leading communications software for Local Area Networks and enterprise Intranets, and is the foundation of the worldwide Internet. As such, it is the most important networking software available to a Unix network administrator. “This book is a practical, step-by-step guide to configuring and managing TCP/IP networking software on Unix computer systems,” says Hunt. “It’s a book about building your own network based on TCP/IP. It is both a tutorial covering the ‘why’ and ‘how’ of TCP/IP networking, and a reference manual for the details about specific network programs.”

Hunt also provides a tutorial on configuring important network services, including DNS, Apache, sendmail, Samba, PPP, and DHCP, and covers troubleshooting and security issues. With coverage that includes Linux, Solaris, BSD, and System V TCP/IP implementations, TCP/IP Network Administration, Third Edition is intended for everyone who has a Unix computer connected to a TCP/IP network.
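The division of labour Hunt describes – IP moving packets between machines, TCP providing a reliable, ordered byte stream on top – is easy to see from a few lines of socket code. The sketch below uses only the Python standard library; the loopback address and echo behaviour are our own illustration, not an example from the book:

```python
import socket
import threading

def echo_server(sock):
    """Accept one TCP connection and echo whatever arrives."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# The kernel's TCP/IP stack handles sequencing, acknowledgement and
# retransmission; the application just reads and writes a byte stream.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, tcp")
reply = client.recv(1024)          # the same bytes, delivered reliably in order
client.close()
server.close()
print(reply)
```

The same SOCK_STREAM code runs unchanged on any of the TCP/IP implementations the book covers, which is rather the point of a common protocol suite.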

Info TCP/IP Network Administration

http://oreilly.com/catalog/tcp3/

A force to be reckoned with

Caldera, Conectiva, SuSE and Turbolinux have joined forces, resources and resolve and created UnitedLinux. Linux is good at supplying variety and alternatives, which many developers see as one of its strengths. Enterprise users, and the majority of business users, see this as a cloud of uncertainty and have been known to back away from using Linux because of it. Now uniformity is at hand. Software vendors including AMD, Borland, Computer Associates, Fujitsu Siemens, Fujitsu Japan, Hewlett-Packard, IBM, Intel, NEC and SAP have also come together to support this effort to create a standard Linux platform. The new initiative will streamline Linux development and certification around the globe with its uniform distribution of Linux designed for business. UnitedLinux addresses enterprise customers’ needs for a standard, business-focused Linux distribution that is certified to work across hardware and software platforms, accelerating the adoption of Linux in the enterprise.

Under the terms of the agreement, the four companies will collaborate on the development of one common core Linux operating environment, called UnitedLinux software. The four partners will each bundle value-added products and services with the UnitedLinux operating system, and the resulting offering will be marketed and sold by each of the four partners under their own brands. Nearly every vendor supplying a piece of the technology infrastructure used by businesses has expressed support for UnitedLinux. Independent hardware and software vendors spend considerable effort certifying their products and services on individual Linux distributions to ensure product compatibility for their customers. UnitedLinux will significantly diminish the number of distributions that vendors are asked to certify and will provide a true standards-based Linux operating environment.

The collaboration of these four Linux companies will result in an enterprise Linux offering which is truly global, by virtue of the companies’ ability to provide local language support, training and professional services, in addition to the support of strategic partners. UnitedLinux will provide one unified Linux codebase for IBM’s complete eServer product line, AMD’s 32-bit and 64-bit platforms, and Intel’s x86 32-bit and Itanium processor family platforms. In addition, UnitedLinux unleashes a massive research and development organisation for Linux in the enterprise. Effectively, the four companies involved will shift dollars and resources once allocated to creating and maintaining custom Linux operating environments and divert them to new R&D on Linux enterprise software. UnitedLinux is dedicated to bolstering the enterprise readiness of the platform, in the same collaborative spirit from which Linux was founded and continues to flourish.

Info UnitedLinux

www.unitedlinux.com




Open APIs enable seamless sharing

Webraska has released SmartZone Application Platform 4.0. According to JF Sullivan, Vice President of Telecom Marketing at Webraska: “A major focus for Webraska is to find ways to increase application ease-of-use, drive down development lead-times and enable applications to share data and interact with each other.” The SmartZone Application Platform supports standards-based interfaces that help application developers create a continuous end-user experience. The platform provides powerful foundation technologies that enable development of viral person-to-person and community applications, while still maintaining the privacy of subscriber information. These technologies accelerate and simplify the construction of location-based services. This launch makes the following services of the SmartZone Application Platform available to local and remote applications via Simple Object Access Protocol (SOAP) and Java APIs:

● Application Integration Services that allow applications to work together and share subscriber context information
● Access Control Services providing protection of personal subscriber information
● Publishing Services for the creation, storage, retrieval and sharing of information and search results by different applications and subscribers

The SmartZone supports MMS and WAP 2.0, in addition to the interfaces supported by previous versions (Web, WAP 1.0, SMS, i-mode and HTML browsers on PDAs). “Our customers require software which greatly improves the ease with which they roll out their LBS applications,” added Korak Mitra, VP Telecom at Webraska. “The release of SmartZone Application Platform 4.0 provides the killer environment to develop every conceivable LBS application and quickly deploy those services to their end-users.”

The Webraska SmartZone Application Platform, used alone or combined with Webraska’s SmartZone Geospatial Platform, enables the rapid development, integration and deployment of high-usage viral location-based applications and services. Supporting the latest wireless industry development standards, including Java, XML, SOAP and Web Services, the SmartZone Application Platform is easily integrated into mobile operator environments. The platform architecture is fully redundant and can be scaled to handle requests from millions of active subscribers. Webraska’s SmartZone Application Platform can be deployed on the customer’s premises or hosted by Webraska on their behalf. Solaris, Linux and Windows operating systems are supported.

Info Webraska http://www.webraska.com

Linux excels at digital effects

IBM is to help create the digital effects for the next Lord of the Rings film with a little help from Linux. Digital effects facility Weta Digital, Ltd. will move a significant proportion of production work related to the Lord of the Rings film trilogy onto IBM IntelliStations running Linux. IBM will supply Weta Digital with more than 150 6580-WEA IntelliStation workstations with 21” CRTs by the end of 2002. Jon Labrie, who led the project for Weta Digital, said: “We chose IBM as we were impressed with both the level of commitment to Linux on an organisational level, and the skills of the IBM Linux team members.

“Failure is not an option. I needed an organisation with a commitment as strong as my own to Linux. We’re betting a good portion of our business on it. So has IBM. Our interests are in alignment and that’s always a good starting point for a business relationship,” said Labrie. Labrie is responsible for the implementation of Linux as the primary operating system within the facility. The primary motivation for the shift to Linux has been cost. “Linux makes development work much more cost effective for us,” said Labrie.


“The financial reasons for moving to Linux were compelling, as we’re still growing at a phenomenal rate and we need to be able to support that growth in the most cost effective manner.”

Weta Digital had run Linux on servers for two years, and its success there in terms of price and performance was the catalyst that led to the investigation of Linux workstation options in the marketplace and the decision to partner with IBM. The first of the new machines were installed at Weta Digital in early May for use by the special effects artists working on The Two Towers, the second film in The Lord of the Rings trilogy. IBM’s New Zealand Southern Region Manager Sunil Joshi says the deal represents a significant win for IBM: “This deal showcases the value proposition that IBM’s Digital Content Creation Solution delivers, and highlights our total commitment to Linux. Being able to benchmark that commitment through significant deals like this is proof of our ability to deliver on our promise to the Linux market.”



Fastest server at Wimbledon

This year was IBM’s thirteenth year as official information technology supplier to the All England Lawn Tennis Club for the Wimbledon Tennis Championships, managing all parts of the club’s e-business infrastructure. New developments for 2002 included: improvements to the TV graphics and to the information system used by TV and radio commentators; increased Internet services for the players; and an enhanced Match Information Display to deliver scores and other information to the public in the grounds. From speed-of-serve radar guns to the official Web site, IBM will provide the hardware, software and services needed to run, build and manage the club’s e-business applications. Scoring information and a host of e-business services will be delivered in real-time via many channels, including the official Web site and an intranet used onsite by players, press, public and media; the Match Information Display in the grounds; WAP and SMS services to mobile phones; and a data feed to BBC Interactive TV and other international broadcasters.

Security under the thumb

Or under the finger. AuthenTec has launched its new, some would say breakthrough, fingerprint sensor for low-cost access control markets. AuthenTec, a developer of advanced biometric semiconductor technology, is showing off its new FingerLoc AFS8500 fingerprint sensor, designed to enable integration into a whole new range of low-cost security applications. The AFS8500 is relatively low in cost, and its high level of integration makes it suitable for use in time and attendance, access control, and commercial and residential building security applications. Unlike any other sensor in its segment, the FingerLoc AFS8500 features a small form factor (14mm x 14mm x 1.4mm); a low price point of $23.00 in 10,000-unit quantities; a full industrial temperature range (-20˚C to +85˚C); and full +/-15kV (IEC61000-4-2 Level 4) ESD immunity. With high-speed serial and 8-bit parallel interfaces, the AFS8500 is the product of choice for easy integration.

Info AuthenTec

http://www.authentec.com

Interface architects

Imperial Software Technology has introduced a new version of its X-Designer graphical user interface builder for Motif, Windows and Java. Features of X-Designer 7: Enterprise Edition include full integral support for Motif 2, the latest version of the standard GUI toolkit for Unix and Linux workstations. Designers of application interfaces for Unix can now fully exploit the power of Motif 2, with the ability to port their interfaces to Windows and Java for fully cross-platform applications. The new version also provides the ability to save GUI designs in XML, making them available for processing by other tools, such as design documentation utilities, or for translation to other types of interface. A legacy application migration feature enables existing Motif applications to be captured, optionally preparing the interface for generation as a Java GUI. Built-in XD/Replay technology offers automated interface testing. “We have introduced the latest version because of unprecedented demand for a Motif 2 builder from our customers and the huge interest shown in downloads from our Web site,” says IST president and CEO Derek Lambert. X-Designer provides a powerful application environment that includes not only GUI development but also software test, design, capture and replay.

Info IST

http://www.ist-inc.co.uk/NEWS/press/xd7_launch.html




Marker pens to become illegal

Well, possibly! How so? Some bright spark has discovered that by covering the rip-protection track on some of the copy-protected audio CDs on the market – these are the ones that will result in a Mac having to be sent back to the repair shop if one is inadvertently played on it – the rip protection is no more. The law is very clear, in some parts of the world, in expressing that “any device which circumvents such protection schemes is illegal, under the DMCA”. Relax though: you can still use your marker pens to write licence key codes on CDs. The Digital Consumer organisation – protecting fair-use rights in the digital world – is trying to bring some sanity to the world of copy protection and has posted “The Consumer Technology Bill of Rights” in the USA, with a plea for everyone there to write to their congressmen. In the UK, the Campaign for Digital Rights has similar goals in mind in defeating the EUCD – the European Copyright Directive – which, if taken to its current logical conclusion, would also want to see Post-It notes drummed out of town.

Info Digital Consumer http://www.digitalconsumer.org/ Campaign for Digital Rights http://uk.eurorights.org/

Gartner sees value in Open Source

Security by obscurity is not the way to go, claims Gartner in a recent report. Security is seen as a top priority in the IT industry, even for Microsoft. In its report, Gartner “believes that open documentation and public review of program interfaces between OSes and applications will lead to stronger security mechanisms over the longer term. Of course, attackers may exploit the exposed interfaces in the short term as the process brings to light existing yet undiscovered vulnerabilities. But this approach simply means that insecure code will become secure more rapidly.” The report continues: “Computer hackers have had little difficulty breaking into Microsoft’s closed source software. A strategy of relying on security through obscurity (hiding source code) has already proven a failure for Microsoft.”

Info Gartner http://www3.gartner.com/DisplayDocument?doc_c d=106790

Lucas Linux renderfarm a success

Industrial Light and Magic produced its first movie after converting its workstations and renderfarm to Linux last year. That film? Star Wars, Episode II: Attack of the Clones, no less. ILM took a big gamble when it undertook the move to Linux, especially since it was in the middle of a major film production. “We thought converting to Linux would be a lot harder than it was,” said the director of Research and Development, Andy Hendrickson. “Linux is so like what we had before. We pushed forward deployment in November 2001 and will finish conversion after Episode II.”

All ILM 3D particle simulations are done in Alias|Wavefront Maya. “We have, I’d say, 90 per cent of our Maya users on Linux,” says Robert Weaver. “It seems incredibly stable on Linux. I haven’t had Maya crash on me in months.” R&D Principal Engineer Phil Peterson reports that ILM is about 80 per cent finished with its Linux software conversion. He says: “A team of three people ported over a million lines of code to Linux.” “The biggest issue we had in porting was the compiler and other tools,” says Peterson. “Newer C++ code is fairly dependent on STL.” The gcc 2.96 compiler included with Red Hat didn’t support the C++ Standard Template Library (STL), so ILM uses gcc 3.0.1 instead. Its multi-platform build environment is customised, based on Python cooperating with GNU make.
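ILM’s actual Python-and-make build environment is not public, so the snippet below is purely illustrative: it sketches the kind of compiler version gate such a build script might apply, given that gcc releases before 3.0 shipped without complete STL support. The function name and version strings here are invented for the example:

```python
import re

def stl_capable(version_string):
    """Return True if a gcc version string names a release (3.0 or later)
    believed to ship with usable C++ Standard Template Library support.

    The 2.96 snapshot bundled with some Red Hat releases predates complete
    STL support, which is why a build script might perform this kind of
    check before attempting to compile STL-heavy C++ code.
    """
    match = re.search(r"(\d+)\.(\d+)", version_string)
    if not match:
        return False          # unrecognised output: assume the worst
    major, minor = int(match.group(1)), int(match.group(2))
    return (major, minor) >= (3, 0)

# A build helper might probe each candidate compiler in turn:
for candidate in ["gcc version 2.96", "gcc version 3.0.1"]:
    verdict = "ok for STL" if stl_capable(candidate) else "skip"
    print(candidate, "->", verdict)
```

In a real Python/make setup, output like this would typically feed a generated make variable (e.g. the path of the chosen compiler) rather than being printed.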




Gnomogram

MONKEY BUSINESS

This month’s slimline Gnomogram takes a brief look at Ximian’s Connector and what went on at the GUAD3C conference.

Ximian Connector

Ximian’s latest product, the Connector extension for Evolution, sees the company venturing into new territory. Connector is Ximian’s first commercial end-user product and joins existing commercial services to help secure long-term financing for freeware products such as the Evolution suite. In addition to supporting iTip and iCal, Connector allows Evolution to communicate with existing Microsoft Exchange 2000 servers (5.5 is not supported). You can use Connector to schedule group appointments with Outlook users, and access mail or calendar entries directly on your Exchange servers. However, you will need to enable Outlook Web Access, as Connector requires the Web interface to access some information. You can use Red Carpet to install Connector, although this does entail purchasing a separate licence for $69 (with volume discounts available for 10 or more licences). The licence will be mailed to you, and can be activated directly in Evolution (version 1.0.3).

Ximian’s Connector interfaces between Evolution and Exchange

Guadec 3

The third GNOME Users and Developers European Conference, alias “GUAD3C”, took place in Seville, Spain, this year and was attended by GNOME developers from all over the globe. Luckily, the third GNOME 2 beta was completed just before the congress, providing the delegates with a good excuse to join in the “International Beer Day” celebrations and ensuing “body part” signings. GNOME 2 was the major topic, with numerous libraries presented from the developer’s viewpoint, but the conference also provided a platform for technologies such as the GNU PDA Environment that are not directly related to GNOME. Miguel de Icaza also held several talks on Mono, a .NET framework implementation, and was optimistic that GNOME will profit from Mono in the future. Various discussions on the future of GNOME led to plans for easier installation routines and more tightly integrated style guides. The whole event was captured on numerous digital cameras. Sample pictures can be seen at the links below.

Miguel de Icaza at the party

Info

Evolution Connector

http://www.ximian.com/products/connector/

Red Carpet

https://store.ximian.com/

GUAD3C

http://www.guadec.org/

GUAD3C pictures

http://algol.prosalg.no/~docpi/php/gallery/albums.php

GUAD3C pictures

http://www.chicasduras.com/modules.php?op= modload&name=NS-Gallery&file=index

GUAD3C pictures

http://www.gnomemeeting.org/~damien/gallery/




K-splitter

SUMMER FUN

Whether it’s thumbnails for KOffice, new maintainers for old programs or the first stable version of another editor, little in the KDE world stays the same.

Change of address

Anyone who was surprised when their favourite desktop wallpaper disappeared upon installing KDE 3.0 can breathe a sigh of relief. All the wallpapers are still there; they’ve simply found a new home. Because of the increasing amount of data and the escalating quantity of graphical elements in the kdebase package, the developers decided to give the background images a home of their own in the kdeartwork package. Henceforth only three wallpapers and ten background tiles remain in the base package, while the rest have emigrated to the Artwork package.

Thumb-size cinema

Enthusiasm can be infectious, so when someone goes crazy about something it sometimes spreads to others, too. Simon MacMullen was so enthused by Konqueror and the KDE file dialog’s ability to display thumbnails that he also wanted to equip KOffice with similar functionality. After a bit of coding, Figure 1 proves that he succeeded, and in the next version of the KDE office package we will be able to assess Word files or Excel spreadsheets at thumbnail size before opening them.

New name, new management

In the past few months Krayon, the graphics program of the KOffice Project (Figure 2), has been treated like something of a poor relation. This state of affairs has now changed: as Patrick Julien has officially taken on the role of Krayon maintainer, further development is underway at quite a pace. Just in passing, the baby has also been re-christened, and now revels in the name of Krita in the KOffice family.

Figure 2: Krayon is now known as Krita

Learning to print

If printing under KDE is causing you sleepless nights, then there's a new wealth of information online at the homepage of the KDEPrint project. At http://printing.kde.org/developer/tutorial/ the developers of the KDEPrint module have released a tutorial covering the print components. It is aimed at all KDE coders wanting to use KDEPrint in their applications, and covers both the basic functions and the more complicated features, such as altering the print dialog or the automatic preview mechanism. To make the whole thing a bit more visually attractive, the tutorial is niftily adorned with screenshots and code snippets.

Figure 1: KOffice has a thumbnail cinema now, too

Another editor

Anyone who simply cannot get on with the KDE standard editor Kate may be happier with KVim. The first stable version of this was recently launched by the developers Thomas Capricelli,



Philippe Fremy and Mickael Marchand. According to a statement by these three, KVim is supposed to bring you the "power of Vim together with the friendliness of KDE". Should this declaration mean nothing to you: Vim stands for Vi improved, and is the incarnation of the standard Unix editor vi found by default in many Linux distributions. As the name suggests, Vim extends the traditional vi with a few features, and KVim packs the whole thing into a neat KDE interface. In addition, a Vimpart component means that KVim can be embedded in Konqueror. Work is still in progress on support for KDevelop, KMail and Kate, and because it's all so great, it now looks very much as if KVim will in future be integrated into the Vim source distribution. The current version of the program can be downloaded any time from the homepage of the project at http://freehackers.org/kvim/download.html.

Coffee morning

Which developer is the best? Which is the best looking? Who's already spoken for? Do animated icons make KDE prettier or just slower? What should the new mascot be called? Why are the coders of GNOME so good looking, cheeky or stupid? In future you can discuss all these questions at the virtual coffee morning, as the KDE-café has re-opened its doors. KDE-café sees itself as the virtual chill-out zone of the KDE project: a combination of Slashdot and IRC, the local pub and the opinion columns. Here you can natter away about everything that gets you going. The sole condition is that you keep it nice and friendly. Anyone who wants to join in the chatter – or, to be more precise, join the mailing list – just send an email to kde-cafe-request@kde.org with the subject join, and there's no longer anything in the way of your exchanges on the latest hackerfest.

Hacker summit

The fact that the KDE 3.0 release is the best-tested release in KDE history has a great deal to do with the KDE 3 developer conference, which took place from 25 February to 4 March 2002 in Nuremberg. This is because when KDE hackers (Figure 3) get together, they don't just talk, they also code. Cristian Tibirna has put a comprehensive summary of the conference online at http://www.kde.org/announcements/kde-three-report.html. You can look at images of the hacker summit at http://develhome.kde.org/~danimo/kdemeeting/ and at http://www.suse.de/~cs/KDE3-Meeting/images.html.

KDE Worldwide

Chris Howells' new project, KDE Worldwide, has completely re-written the internationalisation of the KDE project. At the new-born Web site, http://worldwide.kde.org/, Howells reports on the status of the internationalisation and points out, on a developer world map at http://worldwide.kde.org/map/ (Figure 4), the corners of the world in which work is now going on at fever pitch on new lines of code for the KDE desktop. Every developer, graphics artist, translator or other colleague in the KDE Project is warmly invited to put themselves on the map at http://worldwide.kde.org/map/form.phtml. Anyone wanting to make a contribution to the success of the project can join the exchanges on the specially created kde-worldwide mailing list. You can join at http://mail.kde.org/mailman/listinfo/kde-worldwide.

Figure 4: There are KDE coders all over the world

Security leak

Figure 3: The latest boy band...

Frank Schwanz, developer of the popular Samba share browser Komba, requests all users running a version of the program below 0.7.3 to update it as a matter of urgency. This is because in the older versions there is the possibility that, if you have mounted a share, other users can find out your password with the command ps -x. The latest – and once again secure – version can be downloaded from the homepage of the author at http://zeus.fhbrandenburg.de/~schwanz/php/download.php3.
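The reason ps can reveal the password is worth spelling out: on Linux, a process's command-line arguments are readable by every local user via /proc (which is exactly what ps prints), so any secret passed as an argument to a mount command leaks. A hypothetical Python demonstration – the --password argument is invented for illustration:

```python
import subprocess
import sys

# Spawn a child process that carries a "secret" on its command line,
# the way the vulnerable Komba versions passed the Samba password on.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)", "--password=hunter2"]
)

try:
    # Any local user can read another process's argv from /proc -
    # this is exactly what `ps -x` displays.
    with open(f"/proc/{child.pid}/cmdline", "rb") as f:
        argv = f.read().split(b"\0")
    leaked = b"--password=hunter2" in argv
    print("password visible to other users:", leaked)
finally:
    child.terminate()
```

The fix in 0.7.3 and later is the usual one: never put secrets in argv, pass them via a credentials file or an environment the other users cannot read.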





LETTERS

Come and have your say

WRITE ACCESS Word from the front line

Write to Linux Magazine

Your views and opinions are important to us, so we do want to hear from you, about Linux-related subjects or anything else that you think would interest Linux users. Send your submissions to:

By post:
Letters Page
Linux Magazine
Europa House
Adlington Park
Macclesfield
Cheshire SK10 4NP

By email: Letters-page@linuxmagazine.co.uk

Be sure to leave your postal address whichever method you choose.


With the ever-increasing capabilities of the Linux platform and the ability to run selected Windows applications using either VMware or Wine, Linux has become a very real alternative to Windows. I am currently using the download version of Mandrake 8.2, and before that I owned a copy of Mandrake 8.0, which was my first proper look at Linux. Before that I had briefly seen Mandrake 7.0, but the rate of change between these three has been exceptional. The main reason I chose to try Linux was that, as a computing student at university, I was going to have to install a version at home, and so I dived in head first to see what Linux was really like. I have to admit that from what I saw I liked it a lot. The sheer range of software, and how configurable the system is, was a pleasure to see.

I say all these things about Linux as if it was the greatest thing since sliced bread, but there is another side to my comments. I have found that driver availability is still something that needs some work, and the help files themselves are a bit confusing, requiring a bit of knowledge of the system before you start. In many cases when I have installed software from a magazine in tar format (on the full version of Mandrake 8.0) the information provided isn't always complete. The requirements of some software were matched by my system, but it still failed to get past the ./configure stage, with nothing more to go on than the name of a file like conf.h. Would it not be better for a more detailed installation help file to say a bit more about which library files need to be installed and where to find them, instead of leaving you to hunt through the Internet using a search engine? My final point is to do with something I have not yet seen for Linux, and that is encyclopaedias such as Compton's or Encarta. Are there any available, and do you know where I can obtain them?

Neil Davidson

LM I wonder how much solace you gained by having a boxed set for your first install.
Yes, Linux is available to download, or you can buy the discs on their own, but people who are just about to set out on the road to Linux discovery normally need some hand holding, and the documentation that comes with boxed sets must surely be their first comfort stop, hopefully changing what can be the fretful uphill struggle of a first-time install into something more akin to a cruise.

Trying to answer the last part of the letter led us on a trail, with a pleasant discovery at the end. First of all, an online search brought up the Free Encyclopaedias project. This is an online project which allows you to view its catalogue of research articles for free. It even allows you to help, and you can submit articles you have written for peer review before they are finally placed into the encyclopaedia. We realise, though, that this isn't quite what you were after, so we took the plunge and popped into the drive a "DK Eyewitness Encyclopaedia of Science" CD that just happened to be on the shelf. Much to our surprise, we were able to mount this CD and take a look at the encyclopaedia files held there using Konqueror. Expecting to find some tightly locked database, we were pleasantly surprised to find that we could browse each of the pages. Konqueror managed to present us with thumbnail images, and Gimp and Kview had no problem in displaying the pages perfectly. Obviously, the front-end didn't work, but, if you are prepared to do some digging around, much of the information does seem to be available. Alas, our research didn't stretch any further than the DK disc, but it would be interesting to hear from any users with success stories on viewing other CD encyclopaedias, or from any developers that might be working on applications capable of fully unlocking this information so that the maximum use can be made of it.

Browsing the DK encyclopaedia disc with nothing more than Konqueror


LETTERS

In development

I would like to get started with digital photography. My main use will be getting images onto my Web pages. I don't want to sell myself short, though, by going as low as using a webcam. I have my eye on the FujiFilm FinePix 1400 Zoom. Is it going to work with Linux, though? It doesn't seem to be listed on the gphoto Web site, http://www.gphoto.org. What alternatives do I have to Gphoto?

Hugh Ficeon

LM Buying hardware that you want to use on a Linux system is still a bit of a minefield. Support from hardware manufacturers is nowhere near wide-ranging, and it is always a good idea to try to buy products from people who, at least, acknowledge the existence of other operating systems. If you have your heart set on the FinePix 1400 Zoom then you will be in for a smooth ride, as the FinePix range are USB mass-storage devices. Support for these is good under Linux, so you will have no problem getting hold of the images from the camera. You will have less luck if you were hoping to do anything a little more advanced, like taking photographs by remote control via USB. If you have a fairly recent Linux distribution, with a 2.4.x or later kernel, then you should find support for USB mass-storage devices built right in. The only thing that you might need to add is the usb-storage module with:

Dynamic IP address blues

I have registered an Internet domain name and am hosting my Web site on my own machine. My ISP, Blueyonder, uses dynamic IP allocation. Every time I reset the machine my IP address changes, so no-one can find my Web site. I know of a 'remote DNS management' company, DNS Wizard, who seem to have the solution to my problem, except I want to host the Web site on a machine running Linux and they don't support Linux. Can you think of a solution?

Martin Atisford

LM Because the IP address for your Internet connection is dynamic, it is prone to change. Should this happen, the route to your Web site is lost, making your pages inaccessible. When you register with a 'remote DNS management' company like DNS Wizard, you are given a static IP address from their pool. This static address is then redirected to your own address, so that the data can be requested from the right server – in this case, your Web pages. You run a client on your machine that constantly monitors your IP address. If it changes, the client passes the new address on to the remote management server, which then knows where to redirect the traffic for your Web page. The sticking point comes if the remote management company doesn't offer a client for Linux. DNS Wizard doesn't, though there may be some third-party software that could help. However, there are other DNS management companies that do offer support for Linux; http://www.no-ip.com/ is just one of many.
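The client half of this arrangement is simple in outline: poll your current public address and push any change to the provider. A minimal, hypothetical sketch in Python – the polling and notification functions here are toy stand-ins, not a real DNS Wizard or No-IP API:

```python
def check_ip(fetch_ip, notify, cached_ip=None):
    """One polling step: fetch the current public IP and, if it has
    changed since last time, push the new address to the DNS provider."""
    current = fetch_ip()
    if current != cached_ip:
        notify(current)  # tell the provider to re-point the hostname
    return current

# Toy stand-ins for a real client: a simulated sequence of ISP-assigned
# addresses, and a list recording the updates we would have sent.
updates = []
addresses = iter(["81.2.3.4", "81.2.3.4", "81.9.9.9"])

cached = None
for _ in range(3):
    cached = check_ip(lambda: next(addresses), updates.append, cached)

print(updates)  # only the two genuine changes are pushed
```

A real client would fetch the address from the interface or an external "what is my IP" service, send the update over the provider's protocol, and sleep between polls.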

insmod usb-storage

If you are running a Mandrake system, you may also find linux-hotplug useful, for which see http://linux-hotplug.sourceforge.net/.

Windows now second choice

Hello there. I am a Red Hat Linux user and I have just installed version 7.3. However, I use it alongside Windows for games, and I made my partitions with Partition Magic 6.0. When I start my computer the boot manager loads and then LILO loads. My question is, how do I set LILO to default to the boot image of Linux rather than Windows?

Chris Nash

LM Do this by editing your /etc/lilo.conf file and changing the default line to that of your Linux label. I expect your config file looks a bit like this:

default=dos
boot=/dev/hda1
root=/dev/hda1
install=/boot/boot.b
map=/boot/map
vga=normal
delay=20
image=/vmlinuz
label=Linux
read-only
other=/dev/hda1
label=dos

Change the top line to read:

default=Linux

All you need to do now – and it really is the most important step – is to run the /sbin/lilo command to update your boot record and maps in order for your changes to take effect.




REPORT

Technology meets art meets people

EXTREME COMPUTING The world of computing has always had its weird side, and at the annual Extreme Computing show this weird side is unleashed on the unsuspecting public. Colin Murphy plucked up the courage to attend


The Camden Centre in London's King's Cross was the home of Extreme Computing 2002 – an event unlike any other. That's what the event's organisers would have had us believe before the event, and after? Well, just maybe they were right. Extreme Computing was described as "a gigantic village fete for the 21st century, an off-the-radar cyber jumble-sale, an all-day celebration of do-it-yourself technological unusualness". Geekdom reigning supreme was the order of the day, with a whole slew of inappropriate technology, and some appropriate, on display and in use.

The meeting was split into four main parts. The main hall held theatre-style seating for the main presentations, while around the edges were trestle tables for those with wares to sell or display. There was the bring-and-buy sale room to one side, which also featured the cafeteria for food and the cafeteria for music. For music? It was here that visitors could choose from a menu of electronic tracks, which would be played to them in a private screening on headphones. For a small extra charge, they could also opt for the doggie-bag feature, and have their music meal burned to a CD on the spot.

In the opposite corner you could find the slackers' lounge, with its more relaxed atmosphere, away from the tremendous commotion of the main hall. The organisers had the foresight to also make this a family room. Again, the fact that families were there in strength only goes to show the range and diversity that the organisers hoped to attract. By this point, it was considered that the venue had run out of suitable corners, so the fourth part of the meeting convened in the pub across the road, and so, as an adjunct to the main meeting, we had the Take It Outside event. This did offer people the chance of quieter surroundings, and proved to be an ideal venue for small group discussions.

There was a full programme of talks throughout the day, giving the meeting some semblance of a fixed structure, but it was the breadth of these that gave you the best indication of the diversity of the audience: the main ingredients were comedy,

technicality and art. The real challenge was to figure out exactly which category fitted which presentation.

The talks

Some of the presentations took the form of a panel discussion, the first of which was "Salute the 20 Years of the Spectrum". Rupert Goodwins hosted and chaired this panel, which consisted of four famous developers for the Sinclair ZX81 and Spectrum: Nigel Alderton, John Hollis, Sandy White and Paul Holmes. Much of the history of the Spectrum and ZX81 hardware development was laid bare, as well as what it was about the Sinclair machines and the domestic computer market that led to such enormous growth in such a short time.

After taking part in this panel discussion, John Hollis, better known for writing games such as Meteor Storm and Time Gate, revealed to the massed crowds the delights of "circuit bending". Here John took hold of a very cheap child's electronic musical toy, opened it up and 'fiddled'. Finding the timing circuits, John was able to give the toy a brand new appeal by adding a variable resistor. By adding some flying leads and skipping one of the audio output sections, the toy took on a different character. On a wet weekend I can see the appeal of circuit bending. Some might find themselves saying "What's the point?". The challenge is quite simple: all John was trying to do was make the toy do something it wasn't designed to do – to push the toy to an extreme.

The extreme concept took on new flavours when Paul Granjon of Z Lab (http://www.zprod.org/zLab/) took to the stage. Here we have a technological pursuit that is performance art. Granjon shared selections of his video "Two minutes of experimentation and entertainment" with the audience. His introduction as "the man behind 'The cybernetic parrot sausage'" gave the waiting crowd little to prepare themselves with. Paul's video showed us ways of fitting the inner workings of a 'Furby'-type toy to a sausage so that it could move and talk under its own battery power.
As if this wasn't enough, Paul went on to show us how he managed to return a fish steak – the sort of thing you would find in a fish



hamburger – to the wild oceans by building it an exoskeleton and giving it a battery-powered motor.

From here we had a hard snap back to reality with the talk "When Science Fiction Becomes Science Fact – And Then Becomes Science Fiction Again". Chaired by cyberpunk author Pat Cadigan, the author Tom Standage told us about the 18th century chess-playing automaton "The Turk", arguing that this marked the beginning of the study of artificial intelligence, rather than developments like Charles Babbage's Difference Engine. Sharing the panel were George and Freeman Dyson. It was here that we were told of the plans of the top-secret 1950s project to find a peaceful use for US nuclear weapons: "Project Orion" was designed to use the power from nuclear explosions to propel a 40-man spacecraft to the moons of Saturn. Freeman Dyson was one of the main developers of the project; his son George, who has recently written a book about Project Orion, also fielded questions. Especially interesting was the fact that NASA, which had cleared its archive of this project in the '70s, was now trying to buy back as much information about it as possible.

"In Defence of Weblogs" was a panel discussion about the uses and abuses of the phenomenon of online diaries. Were these "grassroots content management systems of the future, or just a load of self-obsessed secret diaries of Adrian Mole?" was the question, and no real consensus was arrived at. What did become apparent was that weblogs aren't written to express a 'different viewpoint', but in the hope that the writer's viewpoint might be shared by others – the writers really searching for confirmation by finding out how many other weblogs share the same view.

Having had doses of technology, art and science, we were only left waiting for some politics.
Thankfully, this gap was filled by Cory Doctorow from the Electronic Frontier Foundation, who highlighted the widespread concerns about the erosion of consumers' rights to the fair use of digital media.

Around the hall were many stands to catch the attention of those passing by. Again, the varied miscellany of those with something to show is the important point:

● Alt.Cyberpunk.Chatsubo (http://www.accanthology.com/) – showing their compilation of cyberpunk short stories written on the Usenet newsgroup.
● Bricklane TV – a model for Reality TV, but based on real issues like culture and conflict.
● The British Go Association (http://www.britgo.org/) – highlighting the pleasures of simple pursuits with this ancient board game.
● C64Audio.com (http://www.c64audio.com/) – CDs of original and remixed Commodore game themes.
● The Campaign for Digital Rights (http://uk.eurorights.org/) – fighting draconian copy-protection measures and legislation in the UK.
● Copenhagen Free University (http://www.infopool.org.uk/) – a domestic, mutating, autonomous institution asking what an aesthetics for life in the knowledge economy might be.
● Digital Tables (http://www.digitaltables.co.uk/home.html) – custom-made table-top MAME arcade machines.
● Linux (for PlayStation 2) (http://playstation2linux.com/) – official Sony port of Linux to the popular next-gen games console.
● The Redundant Technology Initiative (http://www.lowtech.org/) – pro-Linux PC recyclers from Sheffield.
● Sinclair Archaeology (http://www.etedeschi.ndirect.co.uk/book.htm) – Sir Clive-endorsed guide to "every single Sinclair product ever".
● Thomson & Craighead (http://www.thomsoncraighead.net/docs/thapf.html) – commemorative Web browser tea towels from the digital artist duo.
● Wearable computing (http://the.earth.li/~martin/wearables/) – head-mounted wireless Internet connectivity that fits in your pocket.

Sounds from a digital past – C64Audio.com

The show has pulled off a remarkable feat: at the end of the day everyone's expectations of the event seem to have been fulfilled, even though at the start of the day no-one was quite sure what anyone else's expectations actually were. The official figure is that over 1,000 people attended throughout the day, and I understand that the organisers would have been happy to see 500. The buzz factor expressed by everyone who attended would say it was an absolute success, and I can't wait for next year.

Info

Extreme Computing 2002 http://www.xcom2002.com/
Need to Know http://www.ntk.net/
Mute Magazine http://www.metamute.com/

Gentle pursuits were in evidence, as well as the frantic



FEATURE

Spreadsheet applications

BETWEEN THE SHEETS John Southern shows us that when it comes to Linux, there’s a whole array of spreadsheet applications to choose from

Spreadsheets come in many forms, from the all-encompassing StarOffice to the tiny and modular Teapot. Their use is diverse, from sheets that simply select lottery numbers at random, to the large multi-sheet business plans that can create job prospects at random. Spreadsheets were the killer application that drove much of the personal computer industry: original business sales of personal computers were driven by VisiCalc and, later, Lotus 1-2-3. Microsoft initially brought us Multiplan, with its R1C1 notation for cell referencing, whereas others, especially VisiCalc, favoured the A1 system because it managed to address cells in a much more condensed form.

A spreadsheet is a network of cells with an underlying column and row structure, but the real power comes from the ability to reference constants, formulae and other cells. Because of this, very complex modelling can soon be built up. Running any sort of business will most probably require some use of a spreadsheet. This may range from producing the financial statements of a plc to recording the petty cash, from adding up the bank statement to consolidating the many sites of an enterprise. It is hard to imagine any sort of business that can manage without the power and flexibility that spreadsheets have to offer. Many plcs use spreadsheets even though they are neither reliable nor easy to control: it only takes one bad formula hidden from view to reduce a spreadsheet to a mess. The real beauty is that it is very easy to gain some basic skills in using and manipulating spreadsheets. Line managers in organisations can usually work out what each sheet is trying to do without having a full understanding of how the sheet was put together. A little degree of proficiency goes a long way.
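The two notations map onto each other mechanically: the column letters of A1 style behave like a base-26 number, so 'BC12' is row 12, column 55 in R1C1 terms. A small illustrative sketch (the helper name is ours):

```python
def a1_to_rc(ref):
    """Convert an A1-style cell reference (e.g. 'BC12') to the
    (row, column) numbers that R1C1 notation uses."""
    col, i = 0, 0
    while i < len(ref) and ref[i].isalpha():
        # Column letters are a base-26 number with digits A..Z = 1..26
        col = col * 26 + (ord(ref[i].upper()) - ord('A') + 1)
        i += 1
    return int(ref[i:]), col

print(a1_to_rc("A1"))    # (1, 1)   i.e. R1C1
print(a1_to_rc("BC12"))  # (12, 55) i.e. R12C55
```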

Recent developments

Not much has really changed over the last few years. Yes, we can now highlight in dozens of colours and add wonderful fonts, so it is clear to see that most of


the effort in development is going into displaying your data, not manipulating it. The last main change was the introduction of multiple sheets, with the facility to interlink the data from one sheet to another. Many is the time when a decent database would be more useful, but databases require some thought and preparation before you can start to use them. Spreadsheets can be set up in a matter of moments, and you can work on and develop them on the fly.

Our first consideration when about to create all but the most basic of spreadsheets is to decide whether a spreadsheet is the right application to use at all. Would something else be better suited to our task? The petty cash and tea money may be fine on the back of an envelope, but as the task grows we move first to a notepad, then spreadsheets, and finally a purpose-built application. We must also consider what we want to do with the data. Is the spreadsheet going to be a one-off, with the data printed and never used again, or will we need to refer to it and reuse it at a later date? A much more important consideration, especially for the Linux user, is: will we need to pass the data on to a third party? If so, we then have to consider what format of data they can handle.

Because computers are good at saving data, we can reuse large amounts of it. This makes many tasks more productive, saving us time and money. Occasionally we might even get off work early because of it. Having to pass on your spreadsheet data to a third party is a much bigger problem. Much as we might wish otherwise, the majority of the world does not use Linux, and so they will be expecting your data to be in a spreadsheet format that they can understand. Admittedly, simple formats are preferable. Data presented in CSV format can be imported into so many applications that you can rely on any spreadsheet worth the name to read it; the data will never be lost.

The most compatible spreadsheet package we have running under Linux is StarOffice 6.0 from Sun


FEATURE

Microsystems. This has a wide variety of file types supported, from text to Excel formats. And remember, some of these formats are not fixed, so data from an Excel 97 spreadsheet might not give you the required information if viewed in Excel XP. If we make a simple spreadsheet with just a few functions, we can save it in .xls format. Care must be taken, as some vendors may not have invested enough time in developing their products to meet full compatibility. This may result in the loss of some formatting, especially if it is very complex. The important point, though, is that the data will still be secure, so only the formatting would have to be reworked on a new platform if needs be.

In StarOffice the merged cells appear fine. Converted to Excel, the merged cells do not appear the same.

What you do have with the spreadsheet solutions in Linux is choice. You get to decide on size, speed and cost. Knowing in advance the type of features you will need for your project will help you decide which package best fits your needs. We have listed here some of the more popular packages, even though some are quite old.
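CSV really is as simple as its name suggests, which is why it survives every platform change; Python's standard csv module, for instance, round-trips such data in a few lines (the sample rows are our own):

```python
import csv
import io

rows = [["Item", "Cost"], ["Tea", "1.20"], ["Biscuits", "0.80"]]

# Write the sheet out as plain comma-separated text...
buf = io.StringIO()
csv.writer(buf).writerows(rows)

# ...and read it straight back, as any other application could.
buf.seek(0)
print(list(csv.reader(buf)))
```

With a real file instead of the in-memory buffer, the same text imports cleanly into any spreadsheet mentioned in this article.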

Spreadsheets...

Abs

Abs is from http://www.ping.be/bertin/abs.shtml and is released under the GPL. It coped with everything we tested it with, and comes with its own equivalent of MS Visual Basic, called ABVisual. It is started with the command ./abs.

Figure 3: Abs spreadsheet showing its Xaw widget kit




Calc (StarOffice 6.0)

Almost identical to OpenOffice.org, this commercial offering gives added fonts and an improved initialisation time. £56, including the rest of the StarOffice package, from http://wwws.sun.com/software/star/staroffice/6.0/. Supports the Excel 2000 and XP file formats, and will also import from a wide range of other formats.

Figure 4: Calc from Sun – the best you can get for Linux at the moment

HancomSheet

The new release from Hancom costs $50, including the rest of the Hancom Office suite (http://en.hancom.com/). A tough market to enter, but Hancom has managed to make a mark very quickly by releasing a highly polished product based on the Qt architecture. Pivot tables and macro functions work quickly, and the definable redo and undo functions should be taken up by all other spreadsheet makers. The number of columns has been increased to 512, but the number of rows has been dramatically reduced to 16K. File format filters consist of MS Excel and nothing else.

HancomSheet showing its embedded image abilities

NExS Personal Edition 1.4.6

The Network Extensible Spreadsheet from GreyTrout at http://www.greytrout.com/ is a proprietary system costing $50 for the personal edition. Motif-based, the spreadsheet is built around a client-server model with some Internet capability built in. Limited to one sheet, it copes well with Lotus 1-2-3 and Excel formats. Not aimed at the desktop, but suitable for a datacentre. Its plugins, ranging from genetic algorithms to a Perl interface, give an indication of its intended use.

Siag 3.5.0-2

Another GPL product: this time Scheme In A Grid, a spreadsheet based on the Scheme programming language. Not the usual desktop spreadsheet, but something a programmer may like for the added functionality. For more information see http://siag.nu/.

SIAG showing its graphing ability

Gnumeric

Released under the GPL, Gnumeric is part of the GNOME desktop environment. As such it works well, featuring multiple sheets. The only drawback we found in practice was the limited number of file format filters available. Tools include goal seeking, which is used to calculate break-even or budget points. http://www.gnome.org/projects/gnumeric/

Gnumeric showing off its goal seeking function

Kspread

Not to be outdone by Gnumeric, the KDE environment has released Kspread under the GPL. It is not the fastest of spreadsheets, and importing Excel sheets caused display errors due to the limited number of functions supported. You cannot save in Excel format, although you can save to Gnumeric format. As a standalone application it fails on speed and compatibility, but alongside the other KOffice applications it may well be adequate if you do not need to exchange files. Find out more at http://www.koffice.org/kspread/

Kspread failing to display an Excel sheet correctly

Anywhere Desktop (Applixware)

At $99, this office suite can use Java to become a thin-client spreadsheet. Support for Excel and Lotus sheets worked through the built-in wizards, although others have had import problems. Where Applixware succeeds is that third-party manufacturers are producing add-on utilities, such as Analyst, for extending its analytical capabilities. Many more details from http://www.vistasource.com/products/axware/spreadsheets/

Applixware spreadsheet
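Goal seeking of the kind Gnumeric offers – finding the input that makes a formula hit a target value, such as a break-even point – is at heart a root search. A rough illustrative sketch in Python (a simple bisection; real spreadsheet solvers are more sophisticated, and the profit formula here is invented):

```python
def goal_seek(f, target, lo, hi, tol=1e-9):
    """Search [lo, hi] for x with f(x) ~= target, assuming f is monotonic."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if abs(f(mid) - target) < tol:
            break
        # Keep the half of the interval that still brackets the target.
        if (f(mid) < target) == (f(hi) < target):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Break-even: profit(units) = 5.00 margin per unit - 2000 fixed costs
profit = lambda units: 5.0 * units - 2000.0
print(round(goal_seek(profit, 0.0, 0.0, 10000.0)))  # 400
```

In a spreadsheet, f is simply "recalculate the formula cell after poking a trial value into the input cell".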

XESS

Starting at $70, this spreadsheet has been around for a long time. Motif-based but still being developed, it coped well with importing Excel 97 sheets. http://www.ais.com/linux_corner.html

XESS showing graphing

Wingz http://www.wingz-us.com/wingz/news/linux.html The Wingz site sadly shows us that it has not been updated since the end of 1998. As such we did not test this product.

Oleo Coming from the GNU project of the Free Software Foundation at http://www.gnu.org/software/oleo/oleo.html, Oleo has a Motif interface. The lack of import filters makes it suitable only for hard-core Linux-only fans. Development again seems to have halted in 2000, and the FSF is now also supporting Gnumeric.

Oleo in action

Teapot Teapot is really a table planner, but it has a very small footprint and comes with the SuSE box sets. It uses functional addresses and has 3D table modelling. http://www.moria.de/~michael/ We could go on and on with spreadsheets. Just briefly, here are some others that we came across:
● moodss – http://jfontain.free.fr/moodss/index.html
● xxl – http://www.esinsa.unice.fr/xxl.html
● Xspread – http://www.mnis.fr/home/linux/appli/spreadsheet/xspread.html
● Abacus – http://www-cad.eecs.berkeley.edu/HomePages/aml/abacus/abacus.html
● OleoTK – http://public.ise.canberra.edu.au/~rpj/oleotk.html
● SC – http://www.ibiblio.org/pub/Linux/apps/financial/spreadsheet/

Comma Separated Values Data is stored separated by fixed delimiters such as commas or tabs. Databases and spreadsheets can easily interchange data this way and, being in a text format, the files can always be worked on with a text editor.
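The box above can be made concrete with a quick sketch using standard Unix tools (the file name and values here are invented for the example):

```shell
# Create a small CSV file such as a spreadsheet might export
cat > prices.csv <<'EOF'
item,qty,price
apples,10,0.35
pears,4,0.52
EOF

# Any text tool can work on it: awk pulls out the price column,
# skipping the header row
awk -F, 'NR > 1 { print $3 }' prices.csv
# prints: 0.35 then 0.52, one per line
```

Because the format is plain text, the same file can be loaded into Gnumeric, imported into a database, or edited by hand.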

More information


There is a very interesting and informative site on spreadsheets and their history, with special reference to spreadsheets on Linux, from Christopher Browne, which you can find at http://www.ntlug.org/~cbbrowne/spreadsheets.html. Though some of the information is out of date, the real joy comes from the history of spreadsheets. Much discussion and problem solving can also be found on Usenet in comp.apps.spreadsheets




DTP with Scribus

MAKING PAGES Linux needn’t be the gooseberry at the desktop publishing party. As Frank Wieduwilt explains, Scribus is on its way to becoming a serious solution for print production under Linux

Until recently, it seemed as if there was never going to be a program for the production of high-quality print material for Linux. Adobe discontinued the development of FrameMaker for the penguin operating system some time ago, and other companies didn't seem to be interested in a DTP program for this platform. Approximately a year ago, however, Scribus came into existence. Since then, it has taken great strides toward becoming a program suitable for basic layout tasks. The most important functions for document organisation are already there, and the program is sufficiently stable for non-professional purposes. This article reviews the developer version 0.5.7. The older version 0.5.0 has the advantage of greater stability, but it does not possess all of the functions mentioned here. Development presses ahead and the next stable version will soon be available.

Installation

DTP Desktop Publishing means the production of print material from a computer (see box 1). You will also find an introduction to DTP on the Scribus Web page.


Scribus is available on the program homepage as source code. First, download the file scribus-0.5.7.tar.gz or copy it from this month's coverdisc, then unpack it with the command tar -xzvf scribus-0.5.7.tar.gz or with a program such as ark. To compile it, you will also need Qt (self-compiled or including the dev(el) package) in a version above 2.2 (but not Qt 3.x). If you have several Qt versions installed, do not forget to point the QTDIR variable to the correct Qt directory with: export QTDIR=/usr/lib/qt2 Change to the newly created scribus-0.5.7 directory, and enter the ./configure and make commands in a

console to compile the program. For installation, use su to become root and then enter the make install command. From now on, start Scribus by typing the scribus & command in a console.
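The build steps just described can be collected into one shell session. Since the real scribus-0.5.7.tar.gz archive and a Qt 2.x installation are unlikely to be available here, this sketch creates a stand-in tarball purely to demonstrate the unpack and QTDIR steps; the actual build commands are left commented out:

```shell
# Stand-in for the downloaded archive (in reality you would already
# have scribus-0.5.7.tar.gz from the homepage or the coverdisc)
mkdir -p scribus-0.5.7
touch scribus-0.5.7/README
tar -czf scribus-0.5.7.tar.gz scribus-0.5.7

# Unpack the source archive, as described in the text
tar -xzvf scribus-0.5.7.tar.gz

# Point QTDIR at the Qt 2.x installation
# (the path is the example from the text; adjust it to your system)
export QTDIR=/usr/lib/qt2

# The build and install steps themselves, requiring Qt 2.2+ (not 3.x):
# cd scribus-0.5.7 && ./configure && make
# su -c "make install"
```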

Desktop Publishing Desktop Publishing is the generic term for the production of publications on the computer. Instead of different people writing, arranging and typesetting a document, a graphic designer can do all these tasks alone with the help of a DTP program. Text and pictures can be laid out exactly, to form a high-quality document ready for print. DTP programs allow the pixel-exact, typographically correct setting of text and the precise integration of graphics – capabilities that normal word processing programs do not usually have. An important component of DTP is the WYSIWYG principle (What You See Is What You Get): the document the designer creates on the screen is exactly what it will later look like on paper. DTP documents can be printed on different devices, from laser printers to professional typesetters. Lately, the PDF format has been added as a universal format for the production of publications. Documents in PDF format can be viewed on a computer screen as well as printed out in high quality. DTP allows the economical production of print material of all shapes and sizes, from menus and flyers all the way up to magazines.



Franz Schmid Franz Schmid, the author of Scribus, works as a commercial employee in a natural stone company in Eichstaett, Germany. Frank Wieduwilt and Patricia Jung ask Franz about the development of Scribus. Linux Magazine What was your motivation to develop a DTP program for Linux? Franz Schmid I have worked with all kinds of computers for the last 19-20 years, firstly with various home computers, then Macintosh and now Linux. I enjoyed working on various DTP programs on my Mac, and then sorely missed these under Linux. StarOffice was the next best thing, but the problem is that it doesn’t run on all platforms and is, in my opinion, too “fat” if you only want to do something quickly. The other alternative, KWord, was simply unusable a year ago when I started on Scribus. So I thought to myself, let’s try and see if you can put something together. The real reason why I gave it a go was simply the fun I have with programming. LM What is your target group with Scribus? FS It ranges from ambitious normal users, who want to do their own layouts every now and then, all the way to professionals, who want to look over the garden fence and see what Linux looks like. LM What are your role models, and where do

you want to go with Scribus? FS My first point of orientation was QuarkXPress 3.32, but now I also try to integrate interesting and useful concepts from PageMaker and InDesign. My final goal is to have the approximate functionality of Quark 4, naturally with appropriate modernisations. LM Will it include functions such as import filters for word processors and automatically produced tables of contents? How about the possibility of allowing pre-selected text to be integrated with format template tags, so that it is automatically formatted? FS Import filters will come with time – they are however rather time-consuming, particularly where binary formats are concerned. Things such as automatic tables of contents will be built into one of the next versions – that I am sure of. I still have very many ideas that await realisation. Although... that formatting tags idea is very interesting and is something that I hadn't even thought of. LM Projects such as the Gimp justify the lack of colour separation with patent problems.

How then does Scribus achieve this functionality? FS The colour separation in Scribus is in principle quite primitive. Each of the four basic print colours simply has its own page. The procedure for this can be seen in the PostScript Reference Manual and in the PDF Reference Manual. The contents of these manuals were expressly intended and published for the creation of drivers. The separation found in Scribus can also be created with Gimp – the difference is that Scribus writes the result straight into a file. Patent problems only arise with special colour transfer curves and special colours (e.g. Pantone colours). LM What are you working on at present? FS At the moment, I am working to prepare Scribus for the next stable version 0.6, which means removing as many bugs as possible, fully programming all inserted features and bringing the translation up to date. LM Could you use any help, and if so, how? FS I can always use help, as I am the only programmer on the project. Any feedback is welcome, particularly from those who are proficient in HTML and building Web pages.

Getting started After starting the program for the first time, you are welcomed by a large empty main window. To create a new document, select File/New... from the menu or click the button with the empty page in the tool bar. A dialog box appears, in which you can set up the basic properties of the new document (Figure 1). You can specify the page size in the Page Size area. A range of different formats is pre-defined, but by selecting the Custom format you can define your own paper format. Under Margin Guides, you can define the spacing at the sides of the page. Note that margins in a DTP program are more than mere guidelines, i.e. all objects can also be externally

Figure 1: Creating a new document

Figure 2: Setting page margins

positioned. With the Facing Pages option, Scribus automatically creates left and right pages, in which the margins of the left and right pages are mirrored. Selecting the Autom. Textframes box produces text frames that lie exactly within the page margins. In the Column Guides area, you can specify the number of columns and the distance between them. A text frame can then be created for each column. The page margins can be adjusted and reset at any time. To do this, simply select File/Document Setup... from the menu. In the pop-up dialog box, you can now set the margins to what you want (Figure 2).

Context menu A pop-up menu that offers commands relevant to the selected object. Context menus are normally opened by right-clicking an object.




PDF Portable Document Format, developed by Adobe, is a file format for the exchange of formatted text independent of the operating system.

Easy operation Scribus can be operated entirely with the mouse. All program functions can be reached via the menus or tool windows. The most important tools are shown in Figure 3 next to the program window. They can be positioned on the screen independently of this window and faded in and out individually using the Tools menu. Most important program functions can also be reached through the context menus of the individual layout objects. The most important tool is the measurements palette (seen in Figure 3 at the bottom left-hand corner). This is used to format the frames and their contents, and it contains two tabs: Frames will show you the co-ordinates and the position

of the selected frame (Figure 4); Contents will indicate all text formatting options (Figure 5). Nearly all commands can also be executed by keyboard shortcuts, so that the program can also be operated from the keyboard. The pre-set key combinations can be found in Table 1. To change these, select Edit/Preferences... from the menu. In the Preferences dialog box, go to General and click Keyboard Shortcuts.... This will put you in a new dialog box, the top part of which lists nearly all program functions. Click the User Defined Key button in the bottom part and type the desired key combination for the selected function. Clicking OK will save the new setting.

Everything is framed

Figure 3: The set of tools Scribus offers

Figure 4: The measurements palette – Frame format

Figure 5: The measurements palette – Content format

Table 1: Important key combinations
Function              Key combination
Create new file       Ctrl+N
Open file             Ctrl+O
Close file            Ctrl+W
Save file             Ctrl+S
File information      Ctrl+I
Print file            Ctrl+P
Quit program          Ctrl+Q
Select all            Ctrl+A
Modify object         Ctrl+M
Duplicate object      Ctrl+D

All elements of a document are arranged in frames, which you can position arbitrarily on the page. These are represented on the screen with a dotted edge, which is red for a selected element (Figure 6). Each frame has a tab (a small square) on each of its four corners, which can be held and dragged with the mouse to change the size of the frame. One way of making the exact positioning of elements easier is to make the guides magnetic with the View/Snap to Guides command – this means that the edges of an element will automatically "snap" onto the guides. The spacing of these guides can be changed in the program options, reached by means of the Edit/Preferences... menu command. Behind the Guides tab, you will find the necessary options. You can create a rough grid with Major Grid Spacing, and a fine grid with Minor Grid Spacing. Furthermore, you can also specify whether the guides are to be shown in front of or behind the elements.

Text So how does the text get into the frames? There are two possibilities. The first is to click the Edit Contents of Frame icon and enter the text in the desired frame, but that is somewhat laborious with longer texts. It is therefore advisable to enter the text into a text editor or word processor first and then load it into the frame. To do this, select the appropriate text

Function                    Key combination
Group objects               Ctrl+G
Ungroup objects             Ctrl+U
Zoom: Whole page            Ctrl+0
Zoom: Original size         Ctrl+1
Soft separation             Ctrl+-
Paragraph left justified    Ctrl+L
Paragraph right justified   Ctrl+R
Paragraph centred           Ctrl+E
Insert page number          Alt+#

Figure 6: A frame is selected for editing




Figure 7: The text frame is too small

an image conversion process before the appropriate picture can be inserted in the document. You will first require a frame, which is inserted by clicking the Insert Picture button on the toolbar and then dragging the mouse on the page. Right-clicking the empty element (marked with an X) will pop up a context menu, in which you can click Get Picture... and then select the appropriate image file. A preview window integrated into the dialog box helps in finding and selecting the correct graphic. To change the size of a picture that has been imported, select the Modify... entry in the picture frame's context menu. The Horizontal Scaling and Vertical Scaling text fields allow the entry of specific scaling percentages (Figure 9). If you click the chain symbol next to these, the aspect ratio of the picture frame is maintained. In the Horizontal Offset and Vertical Offset fields, you can specify the distance between the picture and the upper left-hand corner of the frame.

Developer Version There are frequently two versions of Linux programs: a stable version for the end user and a developer version, which integrates new functions that have yet to be fully tested.

Editing frames

Figure 8: The text flows from one box to the other

frame in the Select Items mode, right-click the frame and then select Get Text... in the context menu. Scribus can only import unformatted text; if you use a word processor (such as AbiWord or KWord) to create the text, you must take care to save it without formatting information. If the text is too long for a frame, Scribus indicates this with a small box with an x through it at the bottom right of the frame (Figure 7). To create more space, you do not necessarily have to increase the frame size – the frame can also be connected to another frame. To this end, mark the current frame, select the Create Textchains tool from the tool bar and click on the second frame. A continuous red line will now surround both frames and the text will flow from the first frame into the second, as shown in Figure 8. The formatting of paragraphs, as well as the type and size of fonts, is done either through the Style menu or using the measurements palette (Figure 5). If you intend to write the document as a PDF, Scribus can provide a kind of table of contents, which can be read in Acrobat Reader. "Bookmarking" a frame will make the first line of that frame appear as a heading in the PDF table of contents. Simply right-click the desired frames and select Is Pdf-Bookmark from the context menu.

Once created, frames are stacked one above the other in the order of their appearance; in each case the upper frame covers the lower. Text frames have a transparent background by default, so that the text of underlying frames can be seen through overlying text frames (Figure 10). The frame sequence can be changed through the context menu – the background can also be coloured. For this purpose, select the Modify... entry from the

Figure 9: Editing picture frames

Pictures Scribus can import pictures in the PNG, JPEG, TIFF, XPM and EPS formats; other formats must be converted with

Figure 10: Transparent text frames




have to specify the radius of the corner in the Corner Radius text field of the Modify Textframe dialog box (Figure 12).

Lines, ellipses and rectangles Besides text and picture frames, you can also draw lines, rectangles and ellipses. The buttons for these functions can be found in the tool bar. Each object can have any line and fill colour you select (Figure 13). The geometrical shapes are edited and arranged in exactly the same way as picture and text frames.

Templates Figure 11: Changing the shape of a frame

context menu of the appropriate frame. Next, select a suitable colour from the Background Colour list in the dialog box. For text frames, the None choice means transparent. You can also set the Shading as a percentage value. An example of this can be seen in Figure 12, where the background is yellow with a shading of 15 per cent. A frame does not have to be rectangular: using the Item/Shape command from the menu, you can specify whether the marked frames are to be rectangular or oval. You can change the shape of a frame freely using Item/Shape/Edit Frame, which changes the active frame to dark blue with four marked corner points. A new tool bar offers three buttons, from left to right: Move Point, Insert Point and Delete Point (Figure 11). Frames of any shape or size can therefore be produced. Select Insert Point and click the place where you want a new corner point (shown in red) to appear. To move a corner point, click the appropriate button and drag the point concerned to the desired position. This feature is of particular importance where text is to flow around figures. To create a frame with round corners, select the Modify... entry from the context menu. You will then

Figure 12: Frame with round corners


Figure 13: Drawing


Designers who lay out magazines and books don't want to repeat formatting procedures manually every time. Scribus therefore allows format templates for pages and sections, in the same way as professional DTP programs. Page templates are created by selecting Edit/Templates... from the menu. Clicking New opens a dialog box in which you can set the properties. First, assign an appropriate name for the new page in the Name text field. For double-paged layouts, you can then select whether you want a right or left page from the list below. As long as the Edit Template dialog box is open, you will be able to see the sample page in the work area. You can then add all the elements that appear repeatedly (e.g. header or footer lines). Automatic page numbers are inserted by pressing the Alt+# key combination in a text frame (for two-digit numbers, press Alt+# twice). Clicking the Exit button in the style page dialog box will automatically save the template together with the document. All elements that have been added to the sample page are protected against changes when working on a document that has been formatted in this way. Using the Page/Insert... menu command, you will arrive at a dialog box in which you can insert pages based on the style pages into the document, using the Left Page Based and/or Right Page Based lists. Changing a sample page at a later time will also change all the existing pages based on it. In order to assign another sample page to an existing page, select Page/Use Style Page... from the menu. You can then select the sample page for the current page in the pop-up dialog box. Style templates store positioning and spacing information for sections of text – not, however, font, size or colour. These formatting tasks must be carried out each time, using the commands in the Style menu or in the measurements palette.
To get an overview of the existing style templates, select Tools/Show Style Templates from the menu. At first, none will be present. You can create such a template by double-clicking the No Style entry and then defining your own style template in the ensuing dialog box.



Clicking New opens a window, which allows the entry of specifications for the appearance of the style. The style name is entered into the Name field, and after this the Vertical Spacing, Indent and Alignment fields are used to define the appearance of the section to be formatted. Clicking OK will bring you back to the style template overview. Save then stores the new template and closes the overview. To use a template, simply select the desired paragraph or section and click the template name in the template list.

Output The author was not content with the PostScript printer driver provided with Qt, so he developed a completely new PostScript driver. The File/Print... command opens a dialog box in which you can set the appropriate print options (Figure 14). In the Print Destination list, you can either select a printer or specify that the document is to be output to a file. If you opt for the latter, you must enter a file name in the File: text field. Range allows you to specify which pages and how many copies are to be printed. The type of file to be printed is defined in the Options field. If you want to hand the document over to a printer, the Print Separations option will be of interest. This option causes the program to print the document in four parts – one for each of the colours Cyan, Magenta, Yellow and Black. To output files in PDF format, you'll be happy to know that you don't need any auxiliary programs. Selecting File/Export.../Save as PDF... from the menu will bring up the Create PDF-File dialog box, as seen in Figure 15. Enter the name of the new PDF file in the Output to File field. Specify the output format and the resolution in the File Options area. 300 dpi is a good resolution for documents that are to be printed, but you can reduce this a lot (thereby making a smaller file) if the

Figure 15: Creating a PDF file

document is only to be read on the screen. Choose the Compression option if you want to reduce the document's file size to save memory space. As data compression takes time, particularly with large files, you should only select this option when the document is complete. On the Fonts page, you can specify which fonts are to be embedded in the document. These fonts will then also be available on systems that do not have them installed. Finally, the Bookmarks option allows you to specify whether the file is to be output with bookmarks. Some PDF display programs list these as a table of contents, thereby helping the reader navigate within the text. The Bookmarks list displays all the bookmarks defined in the text. On the Extras page, you can specify whether, in the PDF display program, the transition from one page to the next is done with effects such as fading or rolling.

Help! Scribus has an online tutorial, which explains the most important functions of the program. This manual can be accessed through Help/Manual... in the menu. Use the tool bar at the upper edge of the window to navigate within the help pages.

Why Scribus? Scribus, in its current form, is already suitable for the layout of newsletters, menus, brochures, building manuals and school newspapers. Larger catalogues, professional magazines, books and other elaborate projects are, however, too much for the program at this stage. The program still occasionally crashes, so saving before new actions is a must. The quality of the PDF export is good, meaning that documents can be passed on with ease. Owing to its ability to separate colours, Scribus is one of the first Linux print layout programs whose path has not been blocked from the start. In conclusion, Scribus fills a large hole in Linux's software palette and we are looking forward to its continuing development.

Info

The Scribus Web page: http://web2.altmuehlnet.de/fschmid/

Figure 14: The print dialog box




Virtual Network Computing

REMOTE CONTROL

Virtual Network Computing (VNC) allows different computers to access a common desktop. Hans-Georg Esser explains the ins and outs of VNC and how to make it more secure with an SSH tunnel

Those who regularly work on more than one computer know the procedure: after logging on, you have to open windows from scratch, load documents again and find your old Web pages. Modern desktops have session management features, so that applications on associated desktops at least reappear in the correct place. This is however not possible with "non-desktop" programs such as Netscape. It gets even worse when the two computers run different operating systems. A good network setup can make sure that you can access your Linux home directory from Windows, but all the normal applications will be missing.

An elegant solution

VNC solves the problem by taking the complete desktop from one computer and reconstituting it in a window on a second computer. In this article, we will describe the setup of a VNC server under Linux as well as the setup of a VNC client under Linux and Windows.

Server start The next step is starting the server. If you want to first test the settings of the VNC server, simply call the vncserver script – this starts the actual server, Xvnc, with the standard parameters. Pay attention to the output of the server start script: a display number is shown here (:1, :2, etc), which you will need in order to access the server. For security reasons, you now need to specify another password for access. To do this, enter the command vncpasswd and type your desired password (which has nothing to do with your account password) twice. It is then encrypted and stored in the file ~/.vnc/passwd.

Linux client For a first test, access the current VNC session locally. Install the vnc package that contains the vncviewer client. If you now enter the command: vncviewer localhost:1

Server A VNC server under Linux is in reality a double server. On the one hand it can be considered an X server – you can usually access it through localhost:1 after startup. This however will display nothing, as the server runs without visible output. In order to see the desktop that is running on the new X server, you have to start a VNC client (vncviewer). The VNC server then transfers the desktop contents to the client as picture information. To set up the VNC server, you either need the VNC sources or a precompiled rpm package named vnc-server. For the latter case, the installation is a simple:

in the console (where ":1" is replaced by the correct display number where necessary), the new desktop will appear in its own window. Tip: make sure that the VNC password is entered correctly. The window manager running is twm, an old Unix WM, which is not the easiest to use – you will therefore probably want to change it.

rpm -Uvh vnc-server....rpm Some distributions provide the vnc and vnc-server packages already, and in this case it is simplest to install these. For other distributions, you can find these packages on this month's coverdisc. The sources are also available from AT&T's download page (http://www.uk.research.att.com/vnc/download.html).


Figure 1: A VNC viewer under Windows (identifiable by the frame of the window) displays a Linux desktop with the xfce window manager



Adapting the desktop The VNC server executes the ~/.vnc/xstartup script after start up. This contains the line twm &, which starts twm. You can simply replace this line with one that starts a desktop of your choice, for example startkde &, startgnome & or startxfce & (for the XFCE desktop, which has a small memory footprint).
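This edit can also be scripted. The following is a minimal sketch that works on a demo copy rather than the real ~/.vnc/xstartup, with simplified file contents; it swaps twm for startkde, one of the examples from the text:

```shell
# A simplified stand-in for ~/.vnc/xstartup as created by vncserver
cat > xstartup.demo <<'EOF'
#!/bin/sh
xrdb $HOME/.Xresources
twm &
EOF

# Replace the twm line with the desktop of your choice
# (the \& keeps the literal ampersand in the replacement)
sed -i 's/^twm &/startkde \&/' xstartup.demo

grep 'startkde' xstartup.demo
# prints: startkde &
```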

as many simultaneous connections as you want with the altered setup. The whole thing makes sense when you access the server from another computer. If that computer is a Linux PC, it is a good idea to install the client there (the server is not necessary). In place of the vncviewer localhost:1 command, you then enter the name of the server computer. For example, you would enter: vncviewer myserver:1

Figure 2: The window manager is also specified in the ~/.vnc/xstartup file

if the computer on which the VNC server runs is called myserver and desktop number :1 is used. As a result, you will now see the same desktop on two computers.

Several clients

Trying to start a second vncviewer process will be only partly successful. After entering your password, a new window with the new desktop will open, but the first window will be closed. The reason for this is that you started the clients in the "non-sharing" mode, which means that a VNC display can only be shown by one client at a time. If you want to work at two workstations at the same time (for instance in two offices or areas in the same building), then shared access is for you. The simplest way to achieve this is to start the server in the "always shared" mode of operation. First stop the running server: vncserver -kill :1 (The display number can again be adapted if necessary.) The server should then be started again, this time with the following command line: vncserver -alwaysshared -geometry 1000x700 -depth 24 This example has two more common arguments in addition to the -alwaysshared option for the sharing mode: ● -geometry 1000x700 defines the size of the VNC desktop, in this case 1000 x 700 pixels. ● -depth 24 defines the colour depth. The server runs with a mere 8 bits as standard, which doesn't look too good on higher resolution monitors. Start vncserver again. If not enough time has passed since the last shutdown, the server will not be able to run on the same port and will take another. This will then also change the display number (:2 instead of :1), which must be taken into account when starting new clients. You can therefore have

Network rush hour

If you only set up one local connection, the data between client and server will be transferred in an uncompressed (raw) form. This would be a problem in a network, which is why VNC uses compression algorithms for network connections. In a test connection over DSL, the time required for screen refreshes was acceptable; the same does not apply to slower modem or ISDN connections. Nevertheless, if you want to try it over a modem, a small trick will help: simply use a very small desktop (for example 500x400 pixels) and only 8-bit colour depth – this should be enough to display a smaller application.
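Some rough arithmetic shows why the small-desktop trick helps. These figures estimate the size of a single uncompressed full-screen update (width x height x depth in bits, divided by 8); real VNC encodings will transfer considerably less:

```shell
# Uncompressed size of one full-screen update in bytes
big=$((1000 * 700 * 24 / 8))    # the example desktop from this article
small=$((500 * 400 * 8 / 8))    # the "modem trick" desktop

echo "$big bytes vs $small bytes"
# prints: 2100000 bytes vs 200000 bytes
```

Over a tenfold difference per update, before compression even starts – which is why the small desktop remains usable on a modem line.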

Windows client In a heterogeneous environment, you will perhaps want to access the VNC desktop from a Windows computer. To this end, use the VNC Windows client, which we have also included on the coverdisc. After a (typically Windows) installation, you will find an icon on the desktop with which you can call up the viewer. You will be asked for the server name in the form "computer name:display", thus "myserver:1". If your Windows computer cannot resolve the server's name on the local network, use the IP address instead, such as "192.168.1.199:1".

Long live the desktop
If the VNC server runs on a computer that is not switched off, then you can also keep a VNC session eternally open. Even when all VNC clients are logged off, the VNC server still remains active, and all programs started under it will continue to run. Thus when you log on again, you will return to exactly the same state as when you ended the client. For security reasons, you should however save all open documents before you close the client.

Issue 22 • 2002

LINUX MAGAZINE

31

FEATURE

Security with SSH
VNC has a password that protects against the establishment of unauthorised connections – the actual transmission is, however, unencrypted (similar to a Telnet session). If this isn't secure enough for you, you can create an SSH tunnel. To do this, first start the VNC server with the additional option -localhost. This way, no connections can be established by other computers. As your intention is to create access from another computer, you have to "tunnel" the VNC port through SSH. First work out the port number of the VNC server by simply adding 5900 to the display number – display :1 thus corresponds to port number 5901. Next, enter the following SSH command:

ssh -L 5901:myserver:5901 myserver
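The port arithmetic is easy to script. This is our own small sketch – the vnc_port helper is invented for illustration, and myserver is the example host from the text – which just builds the forwarding specification rather than typing it by hand:

```shell
# VNC listens on port 5900 + display number
vnc_port() { echo $(( 5900 + $1 )); }

display=1
server=myserver
port=$(vnc_port $display)

# Builds the command used in the example: ssh -L 5901:myserver:5901 myserver
echo "ssh -L ${port}:${server}:${port} ${server}"
```

Change display to 2 and the script produces the 5902 forwarding line instead, which is handy when the server has had to move to another display number.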

Figure 6: Illustration of a KDE desktop under Windows, exported by krfb

This assumes that the VNC server runs on a computer, myserver, and that you are entering the command at a client computer. SSH will now ask for the password as usual. Another thing that happens is that connections to port 5901 at the client are forwarded to port 5901 at myserver. If you now want to start the VNC Client, enter:

vncviewer localhost:1

The viewer then looks for a VNC server that is running locally and finds the local port 5901. The traffic is encrypted and passed on by SSH to the correct port on the server, thus maintaining the VNC connection. If the client is a Windows computer, you will have to install an SSH package for Windows – a description of which is beyond the scope of this article. The SSH command itself is then identical.

krfb: VNC for KDE with more features
Some of you, while reading this, may have had the thought: "Why do I have to create a new desktop and not just use the active one?" This is where the krfb project comes into play. If you compile and start the program from the sources, a new icon will appear in the KDE panel (KDE 2.2 and 3.0 are supported), which allows the configuration of the VNC server. Select a desktop number (e.g. 1, to be able to address the desktop as "server:1") and enter a password. If you now start the VNC client on another computer (not on the same one!), a direct connection will not automatically be established. Instead, a krfb window will open on the server, informing you that a client would like to establish a connection. If you permit the connection, you can then enter the password at the client computer. And hey presto, you can see your active KDE server desktop in the window. This does away with the need to start a new (empty) desktop: you can work on the distant computer with its normal desktop, a feature not offered by the regular VNC server. In the VNC window, you can even switch between the different KDE "desktops" by using Ctrl+F1, Ctrl+F2 etc.

Info
VNC homepage: http://www.uk.research.att.com/vnc/
DirectVNC, a VNC client which doesn't run under X, but on a Linux framebuffer (console): http://www.adam-lilienthal.de/directvnc/
krfb, KDE VNC server: http://www.tjansen.de/krfb/
X0rfbserver: http://hexonet.de/software/x0rfbserver/
Cygwin tools for Windows, including Bash, ssh etc: http://sources.redhat.com/cygwin/


Compiling krfb
Here is a short installation guideline, as there are no rpm packages for krfb. Use krfb version 0.5.1 for KDE 2.2.x, and version 0.6 for KDE 3.0. Unpack the source code archive from this issue's CD, change to the newly created krfb-0.x.x directory and, here, carry out the usual Linux command lines as root user:

./configure
make
make install

The last step installs all created files below /usr/local/kde/. Then simply start the server by entering:

/usr/local/kde/bin/krfb &

If you would rather install krfb into the KDE standard directory (/usr or /opt/kde2, depending on the distribution), use the option --prefix=/usr (or --prefix=/opt/kde2) in the configure step. Those who don't work with KDE, but want to be able to access the normal X server, should check out the X0rfbserver project. The software fulfils the same function as krfb, but doesn't need any special desktop system such as KDE or GNOME. The VNC Web site, http://www.uk.research.att.com/vnc/, offers a lot of additional information on VNC, including a comprehensive FAQ document.


FEATURE

Files on the run

DATABASES FOR THE MODERN WORLD

Keeping your information under the bed in a shoe box is not the best way to store any sort of data – unless you happen to be using shoes as an aide-mémoire for something. The value of your information only makes itself apparent once you have found what you are looking for. To do this effectively, you need to be very strict with the rules you set yourself on how you file away your knowledge for later retrieval. The problem is that I am appallingly lax at these things; maybe you are too. Luckily, computers are very strict about what they will do: give them a rule and they are quite happy to continue following it until the electricity runs out. But it is still down to us to set out those rules, and that is why we have Database Management Systems. The more complex your data, and the way in which you want to handle it, the more complex a DBMS you will need. There are three main parts to how you would assess the value of a DBMS:

● Data acquisition
● Data storage
● Data retrieval

You will also need to be aware of the differing types of database available.

Hierarchical databases
Excluding flat-file databases, which you might find yourself creating on a spreadsheet, hierarchical databases have the longest legacy in computer databases. It is this type of database which would first come to mind. Every item of data in this type of database has a single link leading from a parent. Locating a piece of information requires you to know something about why it is in the database – to know what its parent link is. Imagine, if you will, a database of published works. To find any given work you need to know under what criteria you will be able to find it

filed away. This could be by title, then by year of publication, then by type of publication, name of publication, and finally name of the piece of work. If we have the answers to all of these questions we get progressively nearer to finding what we are looking for. Since the links to these criteria need to be set in advance, and can't be changed, it is very important to get them right from the outset of building the database. Should you set a database up like this but then find that you hardly ever know when a piece was published, your database will prove worthless to you, because you won't even be able to get started with the searching. This type of rigid structure doesn't take a lot of processing power to get results, because a lot of the effort was spent in the way the information was laid out in the first place. Luckily, processing power has become much more available these days, so the days of the hierarchical database must be numbered.
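The fixed parent-to-child links can be pictured as a directory tree: retrieval means supplying every level of the path, in the right order, and a missing level stops the search dead. This little shell sketch of ours uses invented path names purely for illustration:

```shell
# Each directory level stands for one predefined link in the hierarchy.
root=$(mktemp -d)
mkdir -p "$root/1998/magazine/monthly"
echo "widget sales report" > "$root/1998/magazine/monthly/record"

# Finding the record requires knowing every criterion, in the right order:
cat "$root/1998/magazine/monthly/record"
```

Try to reach the record without knowing the year and you are stuck at the first level – exactly the weakness described above.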

Power doesn't come from information, it comes from the ability to use that information effectively. Colin Murphy looks at what is available for the Linux user.

Relational databases
These are the current mainstay of the database business. The majority of the databases we will mention are relational. In relational databases, gone are the hard links that take you relentlessly to the piece of information you seek. Instead, the data is filed by type and the DBMS is able to compare data of these types by predefined characteristics. This means that the DBMS can make comparisons and create new data from them. With a hierarchical database, you would only ever get out what you put in; now you can have more. A hierarchical database might be able to tell you how many widgets I bought this week, but with the help of a relational database and buying details about John and Steve, it can tell you how




JDBC Connecting Java and Databases Java Database Connectivity is a standard SQL database access interface, providing uniform access to a wide range of relational databases. It also provides a common base on which higher level tools and interfaces can be built. This comes with an “ODBC Bridge”. The Bridge is a library which implements JDBC in terms of the ODBC standard C API.

many were sold in total. If you keep information on how many were purchased at a time and when, you could work on pack quantities and stock control. On the days when none are sold, you know you can close the shop – especially since it's a Sunday. Go on, fill your life with conjecture. There is still the need to set up the conditions and the relationships between the data, but these can be amended afterwards to a much greater extent.

Network databases
Here we almost have a melding of the two previous designs. Best described as a data organisation, links are made between items of data that are seen as having a valuable relation to each other, and this is not limited to just one link. Think of them more like web pages, which usually have more than one link; the World Wide Web is nothing more than a network database, it's just that it has very little predefined structure.

Some history
The first relational database is widely regarded to be the Multics Relational Data Store, which dates back to 1976 and was the first relational DBMS ever offered by a major computer manufacturer. Things have come on a long way since. Here we have a list of some of the database applications that you can find working with Linux.

MySQL - http://www.mysql.com/ The MySQL database server is the world’s most widely used open source database. It boasts speed and flexibility and can be customised to a great degree. Extensive reuse of code within the software and the concentration given to providing a core system have allowed the developers to perfect a rich set of functions and features, giving you a database management system which is compact, stable and easy to set up and administer. The unique separation of the core server from the table handler makes it possible to run MySQL under strict transaction control or with ultrafast transactionless disk access, providing you with the best options for the majority of all database applications. MySQL AB, the company behind MySQL, offer a full range of professional services, again, showing the flexibility and power of open source projects in a commercial world.

GNOME-DB - http://www.gnome-db.org/ The GNOME-DB project aims to provide a free unified data access architecture to the GNOME project. It makes use of its own database front-end for administrators and libgda - a data abstraction layer. It can manage data stored in databases or XML files and it can be used by non-GNOME applications. Although focused on the GNOME desktop environment, care has been taken to cleanly separate the data access framework (libgda) from the user interface.

Ksql - http://ksql.sourceforge.net/ KSql, previously known as KMySql, is a KDE database client. It was originally MySQL specific, but now uses plugins to access databases like miniSQL and PostgreSQL. With the latest version you can view query results in multiple tabular views, print them or export them in HTML, edit your queries in a comfortable edit box with history, create nice forms with a WYSIWYG editor, and save your usual queries and recall them with a double-click.

Mimer SQL - http://developer.mimer.com/ Mimer SQL 9 is a high performance, easy-to-use Relational Database Management System (RDBMS). Mimer SQL offers scalable performance, including multi-processor support, and with its availability on all major platforms is ideally suited for open environments where interoperability is important. Recent additions include the new Mimer JDBC Driver, providing Java support for: array fetches, batch operations, CallableStatements, DataSource, distributed transactions (XA), Large Objects (LOBs), scrollable ResultSets, Set FetchSize and Unicode. Development is also underway for Mimer SQL Embedded, a small-footprint DBMS especially for use on handheld devices, mobile phones and other small appliances.

MySQL Data Manager - http://www.edatanew.com/ A fully featured web client for MySQL that allows users to manage database records and structure via a user-friendly graphical interface, and to remotely manage MySQL databases and user access over the Internet using a web browser. MySQL Data Manager is an all-inclusive web based

The MySQL Data Manager front end




mySQL front end with a powerful interface for management, development and support of databases on the web.

PhpMyAdmin - http://www.phpmyadmin.net/ phpMyAdmin is a tool written in PHP3 intended to handle the administration of MySQL over the WWW. Currently it can:

● create and drop databases
● create, copy, drop and alter tables
● delete, edit and add fields
● execute any SQL statement
● manage keys on fields
● create and read dumps of tables
● export/import CSV data

PHP - http://www.php.net/ PHP is an HTML-embedded scripting language. With special sets of tools you can develop web pages that are able to access databases directly. Much of the PHP syntax is borrowed from C, Java and Perl, with a couple of unique PHP-specific features thrown in. With PHP you can create web pages which are dynamic, showing details based on client input with results from your database.

PHP Builder - http://www.phpbuilder.com/ This is probably the most valuable resource for anyone that is hoping to develop with PHP. It features Articles, chat rooms, code libraries, documentation and lots more.

TOra - Toolkit For Oracle - http://www.globecom.se/tora/ TOra is a tool for DBAs and database software developers. It currently features a schema browser, SQL worksheet, PL/SQL editor & debugger, storage manager, rollback segment monitor, instance manager, security manager, SQL output viewer, schema comparison and extraction, and SQL templates.

LEAP RDBMS 1.2.6 - http://leap.sourceforge.net/ LEAP is an RDBMS (Relational Database Management System). It is used as an educational tool around the world to help students, and assist researchers and teachers as they study and teach databases.

PgMathematica 2002.0 - http://www.petroff.ch/pgmathematica/ Mathematica (http://www.wolfram.com/products/mathematica/), a technical computing tool, can be used with a PostgreSQL database.

Oracle9i - http://www.oracle.com/ Another of the main players in the database market, Oracle is increasing its Linux support with a concerted effort in Linux cluster solutions for the enterprise market with its 'Unbreakable Linux' product.

SuSE Linux Database Server - http://www.suse.co.uk/uk/products/suse_business/database_server/index.html

Tora - Powerful statistical analysis of the performance of your database.

SuSE Linux Enterprise Server is the cross-architecture operating system solution for company-wide IT infrastructures. SuSE Linux Database Server couples the advantages of SuSE Linux - speed, security, cost efficiency, and high quality - with a professional and proven database system. Based on an optimised operating system basis, SuSE Linux Database Server offers security and stability for DB2 Universal Database and all relevant server services.

If you are a SuSE user then you have a database solution ready, out of the box.

Red Hat database http://www.redhat.com/software/database/

pgMathematica combines the power of a PostgreSQL database with the Mathematica engine

Red Hat also offers a boxed set solution featuring Red Hat Linux 7.1 and PostgreSQL 7.1.2. The value comes from a package put together with an understanding of the Linux market. In addition to the software, the product includes the Red Hat installer, extra documentation and support packages, designed for seamless integration. This reduces the amount of time spent on installation, allowing businesses to get up and running as quickly as possible. With business growth, such as Web and e-business deployments, in mind, this package will allow for a quick implementation by businesses with little database expertise.




StarOffice 6.0

CHANGE IN THE OFFICE

What and why?

Linux users are blessed with a range of office products that suit a broad cross section of needs. Users of other OSes may not be so lucky and may be stuck with a choice of just one. As Colin Murphy explains, StarOffice 6.0 gives us all more choice


StarOffice 6.0 is a fully featured office suite offering plenty of facilities for all of the common day-to-day tasks that we find ourselves with, such as creating documents, spreadsheets and presentations. If you have access to Solaris and Windows platforms, as well as Linux, then you will be able to run these applications across the board. StarOffice 6.0 has been described as the biggest Open Source project in history. Building on the development of StarOffice 5.2, Sun was wise enough to release the source code to the Open Source community, leading to the creation of the OpenOffice.org project, which recently reached its own milestone when version 1.0 became available. Conscious of the scepticism the corporate and enterprise market have of the Open Source community, Sun has held on to the StarOffice badge, maintaining its own development while remaining closely akin to OpenOffice. Sun's enterprise customers were keen to see the development of an alternative, but still mainstream, office suite, free from the failings of their one main choice – mainly licensing issues, unreasonable pricing and the fear of forced upgrades. In effect, Sun took the code from the OpenOffice project and added value to it, though this value comes at a cost. OpenOffice is free to use and freely available; StarOffice 6.0 is not free, and this is its blessing. Sun realises that 'free' carries a stigma with it, especially for the corporate market. Free quite often means no support and no comeback if things go wrong. With the backing of Sun for worldwide support and a carefully considered pricing structure, these fears melt away. Sun also believes that it can lower the cost for the end user, especially in the enterprise sector. With StarOffice the end user is given much more control over how and when they want to upgrade their office packages.
With deployment and migration programs in place, Sun can help and advise on how the user can improve their productivity. Flexible

licensing also allows a much wider range of users to benefit from StarOffice. Education customers even have the benefit of only having to pay for media and shipping costs. Flexible licensing is also apparent for the individual user, where a single licence allows you to run up to five complete installations: the office machine, the home machine, the laptop and the console in the kids' bedroom can all have a copy of StarOffice 6.0 from the one box. One concern from the enterprise market was the use of proprietary file formats that 'their' documents were being saved in. Without the support of the vendor who supplied the application, users would run the risk of not being able to access their own data or, at the very least, of not being able to deal with it in an effective manner. This results in a very strong, maybe even oppressive, tie to the vendor. StarOffice 6.0 makes much of its XML (eXtensible Markup Language) based file format, which it uses by default. Because of this, anyone has the ability to use widely available tools to open, modify and share the data in those files, thereby not tying the user down to a single application. Microsoft Office import and export filters are provided with StarOffice 6.0, including filters for Office XP. This allows people to share data across platforms, and to move away from MS Office if they so wish.

StarOffice 6.0 prepares itself for installation



What you get
StarOffice 6.0 is available in 10 languages, each of which comes in its own boxed set. As such, the English boxed set will only provide English menus, even though the dictionary tools cater for some other languages. The one StarOffice 6.0 disc provides the installation for Linux, Windows and Solaris (for both SPARC and IA processors). This has led some to assume that the same installation disc would offer installation for all of the languages StarOffice now supports. This is not the case, so just make sure you buy the right boxed set for the language you want to work in and this will not be an issue for you. In the box you get the single installation disc, two manuals – the Set-up Guide and the User Guide – a copy of the licence and a StarOffice Entitlement Certificate which is valid for 60 days from 'product acquisition'. The Set-up Guide runs to 71 pages and is detailed enough to explain things like how to turn off the automatic registration prompt. It covers the installation for all three platforms, though it will be redundant for anyone who has previously done at least one install of StarOffice. The User Guide has a useful 22-page index to its 462 pages and is a real boon to those of us who have been using StarOffice 5.2, and especially OpenOffice: some of those mystery features, like the Navigator, now make themselves known and useful. The User Guide and Set-up Guide are also included as PDF files on the CD. The applications in the suite include:

● Writer – the word processor. This has been improved over StarOffice 5.2 to include Smart URL attributes; is more flexible when creating labels; and has the addition of a graphics object bar, automatic formatting and a host of other features.

● Calc – the spreadsheet. This now features Roman and Arabic conversion functions; improved interoperability with Microsoft Excel functions – particularly with the inclusion of the new Analysis functions; improved Matrix arrays and new print options.
● Impress – the presentation tool. Cache buffering has been added, as has a Crop dialog used when inserting bitmaps to presentations. ● Base – the database. StarOffice 6.0 now works with data sources as opposed to the fixed format used in StarOffice 5.2. This comes with its own Data Source administration dialog as well as enhancements to the administration of database tables. Adabas is also included on the CD.

Installation Installation is simple for the Linux user, as it’s just a case of selecting the start-up script tucked away in the /linux/office60 directory. From here you are

The Java runtimes are included on the StarOffice CD, so you can install them with StarOffice

presented with a graphical install screen, which tells you what is happening. Very little user intervention is required, especially if you have already been using StarOffice 5.2, as the installation fetches information from your previous install. You will need to confirm a couple of paths and, if you wish, the installation of the Java runtimes. Try as we might, we could not get StarOffice 6.0 to place itself into the KDE desktop of the SuSE 8.0 machine we installed it on. This might be because we already had StarOffice 5.2 and OpenOffice.org installed, and we can't vouch for other systems either. Still, it was not too difficult to add the entries by hand as an afterthought. Printer configuration is dealt with as a separate issue and has its own set-up script, spadmin, in the same directory from which you ran the installation.

In use
Those familiar with StarOffice 5.2 will notice the disappearance of the integrated desktop. Some will pine but many will rejoice at its loss. When you start StarOffice 6.0 you are now just presented with the Writer word processor application. From the File/New drop-down menu you can start up any of the other applications. This means that StarOffice fits in more comfortably with your desktop and the themes you have chosen to run, and is much better integrated all round. Each new StarOffice 6.0 application starts in its own desktop window, so you can position it where and how you like.

StarOffice 6.0
Supplier: Sun Microsystems
Price: £52.99 (RRP)
Web: http://www.sun.com/staroffice/
Requirements: Kernel 2.2.13 or higher, glibc 2.1.2 or higher, 64Mb RAM, 250Mb free disk space
For: Possibly the best all-round Office suite there is
Against: If you want Free, go for OpenOffice.org

rating



KNOW HOW

Linux networking guide: Part 3

THE DOMAIN NAME SYSTEM

In this, the third instalment of our simple guide to configuring Linux networks from the command line, Bruce Richardson shows us how to configure DNS on both client and server

Introduction
The examples in the first two articles in this series used IP addresses exclusively to identify networks, subnets and hosts. But while an IP address is all a computer needs, humans work better with names. Every so often, on a newsgroup or mailing list, some newcomer to Internet technologies will suggest that the system of IP addresses should be entirely replaced by one based on names. This is not practical: an IP address only requires four bytes to store it (IPv4 addresses, anyway), whereas a text string requires at least one byte for each character. Since each IP packet contains the address of both its source and its destination, this would add quite an overhead to TCP/IP networks. What is needed, then, is a mechanism that allows humans to assign meaningful names to hosts on the network and enables computers to translate – to resolve – these names into IP addresses. That is the subject of this article, which will show you how the Domain Name System is used to organise TCP/IP networks, how to configure a computer running Linux to use DNS and how to configure a DNS server on Linux.

An overview of DNS
Internet domains are organised into a top-down tree structure. At the very top is the root domain. Beneath that are the Top Level Domains: the generic TLDs like .com, .net etc and the geographical TLDs like .uk, .nz and so on. Each of those domains is further subdivided, and so on. Domains further down the tree are considered subdomains of the upper domains, so the debian.org domain is within the .org domain and the uk.debian.org domain is within both the debian.org and .org domains, and they are all subdomains of the root domain.

Table 1: DNS Record Types
Type     Description
SOA      Start Of Authority record. If a name server has an SOA record for a domain then it is an authoritative server for that domain.
A        Address record. Associates a name with an address. An address may have multiple A records, each associating it with a different name.
CNAME    Alias record. Gives an alternate name for a host that already has an A record. NS, MX and PTR records may not point to CNAME records and some people avoid all use of CNAMEs, saying they make a mess of DNS.
NS       Identifies a host as a name server for a domain.
MX       Identifies a host as a mail server that will accept mail for the domain.
PTR      Pointer records are used to map addresses to names, the inverse of A records. Their use is explained further on in this section. The name in a PTR record must have an associated A record, not a CNAME record.

Names
A Fully Qualified Domain Name (FQDN) is constructed by taking the name of a host or domain and adding to it the names of all the containing domains, using "." as a separator. So ftp.uk.debian.org is the FQDN of the host named ftp that resides within the uk.debian.org domain; ftp is the unqualified name, referred to in this article as the short name. An important point to remember is that the root domain is itself represented by ".". So the FQDN for the ftp host is actually ftp.uk.debian.org.. Almost all applications will add the final "." for themselves as long as the rightmost domain matches the name of a TLD. This is not the case with name servers, however. When configuring a name server it is important always to include the final "." or the daemon will attempt to fully qualify the name by appending the FQDN of the local domain.
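The construction rule comes down to string concatenation plus the root dot. This helper function is our own illustration, not a standard tool, but it captures the rule exactly:

```shell
# Build a fully qualified domain name: host + containing domains + root "."
fqdn() { echo "$1.$2."; }

fqdn ftp uk.debian.org    # prints: ftp.uk.debian.org.
```

The trailing "." is the part humans habitually leave off and name server configurations insist on.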

Name servers
For each domain there must be a name server (a minimum of two, for Internet domains) which can give authoritative answers to queries about names within the domain. A name server may be authoritative for an entire domain including all its subdomains, or it may delegate responsibility for a subdomain to another name server. The area within a domain that the name server does not delegate is called a zone. Name servers can be authoritative for multiple domains and so have many zones.



Name servers maintain databases of information about their domains. Each record in the database holds information of a specific type (see Table 1).

Masters and slaves
Configuring multiple name servers for a domain provides redundancy and eases the load on each server. To ease the burden of administration, name servers can be configured as slave servers, getting their data from a master server in a regular process called a zone update.
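In BIND, a slave zone takes only a few lines of named.conf. The fragment below is a hypothetical sketch: the article's example network has no slaves, so the zone name "internal" and the master address 192.168.10.254 are borrowed from the example network, and the file name is invented:

```
// Hypothetical named.conf fragment for a slave name server
zone "internal" {
        type slave;
        file "slave/internal.zone";    // local copy, refreshed by zone updates
        masters { 192.168.10.254; };   // the master server (ns on the example network)
};
```

The slave fetches the zone data from the master and thereafter answers authoritatively for it, refreshing at intervals set in the zone's SOA record.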

Root name servers
The root name servers are authoritative for the root domain (and in most cases for the generic Top Level Domains as well). Each chain of delegation starts with them and so they are the ultimate source of the answers to all DNS queries.

Query resolution
DNS name servers accept two kinds of queries: recursive and iterative. In a recursive query, the name server searches the DNS hierarchy until it finds an answer. In an iterative query the name server simply gives the best answer it knows. This is best illustrated by example. A host in the example.org domain wants to know the address of www.linux.org.uk. It sends a recursive query to the local name server, ns0.example.org. ns0 sends an iterative query to one of the root servers, which refers it to ns.uu.net, a name server authoritative for the uk domain. ns0 then sends an iterative query to ns.uu.net. ns.uu.net refers ns0 to ns1.nic.uk, which is authoritative for org.uk. ns1.nic.uk refers ns0 to tallyho.bc.nu and, since tallyho is one of the name servers that is authoritative for the linux.org.uk domain, it is able to give the address of www.linux.org.uk. ns0 returns the answer to the host that made the original query.

Technicalities
The standard port number for DNS queries is 53. Queries are normally carried out over UDP, though TCP may be used if the data involved is too big to fit into a UDP datagram.

Mapping addresses to names
Sometimes you want to find out what name is associated with an address. For this a special domain was created: the in-addr.arpa domain. Address-to-name queries are solved by looking within that domain for PTR records which list the name matching an address. PTR names are constructed by reversing the IP address and appending in-addr.arpa, so to find the name associated with the address 195.92.249.252 you would do a DNS query for 252.249.92.195.in-addr.arpa. The inversion of the address is done because DNS places the most significant information to the right. This allows the query to go first to the name server authoritative for in-addr.arpa, then to the name server for 195.in-addr.arpa and so on.
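The reversal rule is mechanical and easy to get wrong by hand. A small shell function (our own, purely for illustration) shows the transformation for the address quoted above:

```shell
# Build the in-addr.arpa name for a dotted-quad IPv4 address
ptr_name() {
    local IFS=.
    set -- $1                       # split the address into its four octets
    echo "$4.$3.$2.$1.in-addr.arpa"
}

ptr_name 195.92.249.252   # prints: 252.249.92.195.in-addr.arpa
```

Note how the most significant octet ends up rightmost, next to in-addr.arpa, so delegation can proceed from the top of the tree downwards.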

An example network
The rest of this article will use as the basis of its examples the internal network of an imaginary company. It is a small organisation whose public domain is managed by its ISP. All of its hosts are on a private, internal network behind a NAT-ed firewall and are not visible to the Internet, so the local domain is called "internal". This allows a simpler example (only one name server, no slaves).

Configuring the resolver Unix systems come with a library that is used to resolve host names, called the resolver. (Some applications, e.g. Netscape Navigator, use their own resolvers. The Netscape one is particularly braindead.) The Linux resolver library is called Resolv+ and

Caching In the example above, ns0 doesn’t throw away the answer to the query. Instead, it keeps it in a cache for a period of time. If it is asked the same query within that period it can give the answer without having to refer onwards. Servers answering iterative queries may also use their cache, so ns.uu.net will also be able to give the address of www.linux.org.uk for a while. Caching thus eases the burden on the DNS system in general and top level name servers in particular. If the actual details for www.linux.org.uk change, those name servers which have the old details in their caches will be serving up incorrect answers. For this reason, the SOA record of each name server includes settings which indicate how long other name servers should cache its replies. Even so, the downside to caching is that changes to your DNS set-up will take a while to propagate throughout the Internet.
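The caching behaviour described here amounts to stamping each answer with an expiry time. The following is an illustrative model only; a real server's cache (BIND's included) is far more elaborate:

```python
import time

class DnsCache:
    """Toy TTL cache: keep each answer until its expiry time passes."""

    def __init__(self):
        self._store = {}

    def put(self, name, answer, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (answer, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is not None and now < entry[1]:
            return entry[0]          # still fresh: answer from cache
        return None                  # absent or expired: query again

cache = DnsCache()
cache.put("www.linux.org.uk", "192.0.2.1", ttl=86400)  # TTL of one day
print(cache.get("www.linux.org.uk"))
```

Once the TTL has elapsed, `get` returns nothing and the server must refer onwards again, which is exactly why a change to your records only propagates after the old TTL runs out.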

Table 2: The Internal Domain

Hostname   Address          Description
gateway    192.168.10.1     Gateway to the Internet, runs firewall and NAT.
alpha      192.168.10.2     File server.
oddjob     192.168.10.3     Used for a variety of tasks including backups and printing.
mailbox    192.168.10.4     The internal IMAP mailstore.
squid      N/A              This used to be a separate box acting as HTTP proxy for the workstations. That application has now been moved onto gateway. Making squid an alias for gateway allowed this to happen without reconfiguring any other applications or workstations.
ns         192.168.10.254   Nameserver. Also runs DHCP. All the other computers on the network are assigned addresses by the DHCP server on ns.

Issue 22 • 2002

LINUX MAGAZINE

39


KNOW HOW

is an enhanced version of the library from BIND, the Berkeley DNS server application. To set up a Linux box to make proper use of DNS, you edit the resolver’s config files.

Naming your computer This isn’t, in fact, directly associated with the resolver, but many of the networking applications on a Linux system need to associate a primary name with the computer they run on. To do this dynamically, use the hostname command:

hostname oddjob

This won’t survive a reboot, so we also want to record it in a config file for the initscripts to find and apply. With some distributions (e.g. Debian), the name is simply written to /etc/hostname. On Red Hat you need to edit the HOSTNAME line in /etc/sysconfig/network.

The resolver config files Back in the early days of the Arpanet, before there was such a thing as DNS, each computer on the network kept a local copy of a file called hosts.txt, which they downloaded via ftp from the Network Information Centre at regular intervals. This system broke down as the network grew but the /etc/hosts file is a relic from that time. Each entry in the hosts file lists an IP address, the name associated with it and any aliases, as in this example:

Main BIND config file

# /etc/named.conf
options {
        directory "/var/cache/bind";
};
zone "." {
        type hint;
        file "/etc/bind/db.root";
};
zone "internal" {
        type master;
        file "db.internal";
};
zone "0.0.127.in-addr.arpa" {
        type master;
        file "db.0.0.127";
};
zone "10.168.192.in-addr.arpa" {
        type master;
        file "db.10.168.192";
};

127.0.0.1       localhost
192.168.10.1    gateway.internal gateway squid
192.168.10.2    alpha.internal alpha
192.168.10.3    oddjob.internal oddjob
192.168.10.4    mailbox.internal mailbox
192.168.10.254  ns.internal ns

Adding entries to /etc/hosts allows the resolver to resolve names without consulting a DNS server. Copying the above example to all the hosts on the network would eliminate the need for a local name server. The administrator of this network, though, prefers the centralisation advantages of DNS, so alpha’s hosts file is simpler:

127.0.0.1       localhost
192.168.10.2    alpha.internal alpha

The file /etc/resolv.conf can hold various entries that define the behaviour of the resolver, of which the most commonly used are:
● nameserver – Add a nameserver entry for each DNS server that you want the computer to consult. Only one server is needed but adding extra ones gives the computer options if the first one is busy.
● domain – Names the local domain. If given a short name (e.g. “beta”), the resolver will attempt to resolve it within this domain (that is, it will combine the short name with the domain name to make an FQDN and then try to resolve that).
● search – Defines a list of domains against which the computer should attempt to resolve short names, overriding the default, which is just to search the local domain.

Here is alpha’s resolv.conf file:

# /etc/resolv.conf
domain internal
nameserver 192.168.10.254

If this file is not present then the resolver looks for a nameserver on 127.0.0.1, deduces the local domain from the hostname and its matching line in /etc/hosts, and has a search list consisting of the local domain only.

The file /etc/host.conf can take options which define the general behaviour of the resolver, as opposed to the more specific options in resolv.conf. If it is absent, sensible defaults are used. Here is a typical configuration:

# /etc/host.conf
order hosts,bind
multi on

The first entry tells the resolver to consult /etc/hosts


before trying any nameservers. The second tells the resolver that if it finds multiple addresses for a given name it should return them all, rather than just the first.
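The way the resolver qualifies short names using the domain and search settings described above can be modelled roughly like this. This is a simplified, illustrative sketch of glibc-style behaviour; the candidate_names helper is hypothetical, not part of any real API:

```python
def candidate_names(name, search=("internal",), ndots=1):
    """Names the resolver tries, in order: a name with fewer than
    ndots dots is first qualified with each search-list domain."""
    if name.endswith("."):
        return [name.rstrip(".")]      # already fully qualified
    tries = []
    if name.count(".") < ndots:
        tries += ["%s.%s" % (name, dom) for dom in search]
    tries.append(name)
    return tries

print(candidate_names("beta"))         # ['beta.internal', 'beta']
```

So on alpha, a lookup of the short name beta is first tried as beta.internal, exactly as the domain entry promises.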

So far, so good If you have followed all this, you now know how to configure a typical Linux box to resolve names properly. Obviously, if you are setting up DNS for the first time then you should configure the DNS server before referencing it from any other machines.

The Berkeley Internet Name Daemon BIND is the most commonly used DNS server in the world and so the one I have chosen for this example. Specifically, I use BIND 8. BIND 9 is a recent major rewrite which is still turning up significant bugs and has not yet supplanted 8.x as the most popular version. You can get the source code from the Internet Software Consortium’s Web site or FTP site (see the Info boxout). I recommend installing the BIND package that comes with your distribution, though.

BIND db file for internal domain

internal.   IN SOA   ns.internal. postmaster.example.org.uk. (
                          1        ; Serial
                          10800    ; Refresh after 3 hours (10800 seconds)
                          3600     ; Retry after 1 hour
                          604800   ; Expire after 1 week
                          86400 )  ; Minimum TTL is 1 day

internal.             IN NS      ns.internal.

; Addresses
localhost.internal.   IN A       127.0.0.1
gateway.internal.     IN A       192.168.10.1
alpha.internal.       IN A       192.168.10.2
oddjob.internal.      IN A       192.168.10.3
mailbox.internal.     IN A       192.168.10.4
ns.internal.          IN A       192.168.10.254

; Aliases
squid.internal.       IN CNAME   gateway.internal.

The main config file BIND expects to find its main configuration file in /etc/named.conf, though you can put it somewhere else and pass an appropriate command line option. The format for named.conf is extremely simple, as can be seen in the config file for ns.internal, listed in the Main BIND config file boxout. The basic pattern is a series of blocks, bounded by braces. The first block contains the global options. In this example there is just one option, which sets the default directory to be /var/cache/bind. Any file that doesn’t have an explicitly set location will be looked for there. The second block tells BIND that the root hints file is in /etc/bind/db.root. This file contains a list of all the root name servers and their addresses and should be kept up to date for BIND to function properly. A simple way to do this is to query a reliable name server, like this:

dig @reliable.name.server . ns > root.hints

Then copy that to wherever you keep your hints file and restart the daemon. Each block after that simply names a zone for which this name server is authoritative, states that this is a master (rather than slave) server for that zone and names the file containing the zone’s details. Since no path is given for the files, they should be placed in /var/cache/bind. At this point, if you looked carefully at the zones listed, you might ask “Why a reverse-mapping zone for the loopback interface?”. The simple answer is that your name server will occasionally be asked to perform a reverse look-up on the loopback address, so this covers it.

The data files Next we must create the data files for each zone. The file for the main internal domain is shown in the BIND db file for internal domain boxout. Please note that all FQDNs end with “.” – do not forget this. First we have the SOA record (the IN SOA identifies it as an Internet Start Of Authority record). It begins with the name of the domain, “internal.”. Then comes the name of the primary name server, followed by the email address of the main email contact (with the “@” replaced by “.”). Finally there is a block of settings. These mostly relate to slave servers, which we shall skip. The TTL setting has a broader import, though, as it is returned with each query response. It tells the querying host how long it can reasonably cache the response before checking back. A TTL of one day is very common. Next comes an NS record identifying ns as a name server for the domain, followed by A records for each named host on the network. Finally there is a CNAME record making squid.internal an alias for gateway.internal.
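The A-record section of such a zone file is entirely mechanical, as this hypothetical Python snippet shows; the host table mirrors Table 2. It is purely illustrative – in practice you simply type these lines into the zone file:

```python
# Host table for the internal domain (from Table 2).
HOSTS = {
    "localhost": "127.0.0.1",
    "gateway":   "192.168.10.1",
    "alpha":     "192.168.10.2",
    "oddjob":    "192.168.10.3",
    "mailbox":   "192.168.10.4",
    "ns":        "192.168.10.254",
}

def a_records(domain, hosts):
    """One fully qualified 'name. IN A address' line per host."""
    return ["%s.%s. IN A %s" % (host, domain, addr)
            for host, addr in hosts.items()]

for line in a_records("internal", HOSTS):
    print(line)
```

Note that the generator appends the trailing "." to every name, which is the detail most often forgotten when editing zone files by hand.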

Starting and maintenance Now all you need to do is start the daemon. The daemon itself is called named. If you have moved the config file you will need to pass it an option to tell it where:

/usr/sbin/named -b /etc/bind/named.conf

And that’s it: not the intimidating process you may have heard it was. Just be sure to keep your root hints file up to date. Each time you update the data files, restart the daemon or send it a SIGHUP signal.

Info
ISC Web site http://www.isc.org/
BIND FTP download ftp://ftp.isc.org/isc/bind/src/cur/bind-8/
djbdns Web site http://cr.yp.to/djbdns.html


Bring order to your music files

MP3 TOOLS

Storing your MP3 collection on your hard drive or CD-ROM may be appealing but it soon becomes unmanageable. Do you have that song, and if so which disc is it on? Anja M Wagner introduces some tools to help you get your house music in order

We’ll start off with a little program to help keep track of your MP3 files. KDiskCat can be found in the SuSE distribution but you may need to install it via YaST2. Start the tool by entering kdiskcat in the quick starter (Alt+F2). KDiskCat distinguishes between catalogues and archives in the archiving hierarchy. Catalogues are superordinate to archives, so, for example, you can make a catalogue for music files, one for radio plays, one for radio recordings and so on.

MP3
MP3 is the abbreviation for “MPEG 1 Audio Layer 3” and is a standard format for the compression of audio files. An MP3 file is about one twelfth the size of the original audio file. This is possible because MP3 filters out the frequencies in audio data which the human ear cannot hear. MP3PRO is the official successor format to MP3. It is claimed to halve the bitrate required while still keeping the same quality. The MP3PRO format was developed by the Fraunhofer Institut, Thomson and Coding Technologies and should be downward-compatible.

Since the capacity of a hard drive can be rapidly exhausted when MP3s are collected intensively, it’s a sensible idea to burn the files to a CD. To manage the collection it seems a good idea to make a separate archive for each CD; this should be given the name of the respective CD. You can of course choose a totally different method for your archiving. First create one or more catalogues. To do this, click on File/New catalogue in the menu bar and give it an informative name, such as Music. The MP3 CDs with music will now be archived in this folder. Click on Archive/New archive. Place the CD with the MP3s in the drive and click on the search button with the three little dots. Don’t forget to mount the CD drive first, though. Enter the directory path, for example /cdrom.

Figure 1: First create your catalogue

Figure 2: The archive should be named after the CD

For a new archive name it would be a good idea to use the name and/or the label of the CD, which you will have declared during burning, for example “mymp3_01”. The use of labels makes it easier to get an overview. If you have not given the CD a name when burning, this doesn’t matter – just name the archive as you sort and label the CDs. So enter the name as the new archive designation and click on OK. KDiskCat will now scan the CD, which can take some time. Following the scan, you should be able to see the mymp3_01 archive in the Music catalogue – and more will join it when you scan further CDs.

Figure 3: KDiskCat lists the files in each archive clearly

For example, if an archive is now to be added to the Radio plays catalogue, first select the Radio plays catalogue from the drop-down menu and then proceed as with the Music catalogue. Click on Archive/New archive and scan the CD. If the files on your CD are arranged into folders, KDiskCat takes on this structure.

Figure 4: The folder structure on the CD is taken over by KDiskCat

You can also add a description to each archive. To do so, select the menu item Archive/Archive


properties and enter the text in the box provided. Later you can use the search function to look for entries via these descriptions. Here you can also look up when you created the archive.

Figure 7: The search function can use wildcards or terms

Figure 5: Add a description for an archive

Figure 6: You can also store personal comments for entries in the archive

With a right-click on one of the entries in the archive you can open a Properties box with exactly the same structure for each file. A tool like KDiskCat is of great help if it has a good search function, because it’s very easy to lose track of an extensive MP3 collection. Maybe you still know that a file exists on CD, but on which one? Even with well-labelled covers it is tedious searching by hand.

The search function allows wildcards. “*” stands for as many characters as you like, while “?” represents just one character. If you know exactly what you are looking for, select the option Use regular expressions and enter the search term in the text line. You can search through the file names and the descriptions. Place a tick against the option Upper/lower case so that KDiskCat takes upper and lower case into account; then if, for example, you enter acdc, files with the name component ACDC will be ignored by the search. You can also specify the file size and the date of creation as search criteria. When entering the file size, keep to the figures displayed by KDiskCat in the main window; for MP3 files these are usually seven-digit numbers. If you search by file size or date you must nevertheless enter something in the Search term text line, so it’s a good idea to use the wildcard “*” if you want to search solely by size and/or date. The search results are then shown by KDiskCat in a new window. In addition to the parameters of file name, size, catalogue, date and description, the Path column is especially helpful: this shows the name of the archive in which the file is kept. KDiskCat does not save any files, but merely archives names and the aforementioned parameters. Now it becomes clear why the archive ought to bear the name of the CD, because this is how you know on which CD the file you seek can be found. The results list can be saved as a text file and thus be accessed and printed out at any time.
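Python's fnmatch module implements the same two wildcards, which makes the semantics easy to demonstrate. This is an illustrative aside only; KDiskCat itself is a KDE application and does not use this module:

```python
from fnmatch import fnmatchcase

# "*" matches any run of characters, "?" exactly one. fnmatchcase is
# case-sensitive, like KDiskCat with the Upper/lower case option ticked.
print(fnmatchcase("ACDC - Thunderstruck.mp3", "*ACDC*"))  # True
print(fnmatchcase("ACDC - Thunderstruck.mp3", "*acdc*"))  # False
print(fnmatchcase("track1.mp3", "track?.mp3"))            # True
```

The second line shows exactly the case-sensitive behaviour described above: a search for acdc ignores files named ACDC.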

Lost and found Click on Archive/Search. In the search mask, first define which of your catalogues to search; they are listed in the left-hand column. A click on the A button marks all catalogues, a click on N cancels the marking, and a click on C marks the catalogue you have previously selected from the drop-down menu. You can mark additional catalogues with a mouse click, and there is no need to press the Ctrl key when you do so.

Figure 8: The results of a search can be saved, in order to view and print them out later


FreeAmp – another Winamp clone

ID3 Tag
ID3 tags allow metadata to be added to MP3 files. The older version, ID3v1, is stored at the end of the file and can only include details of the artist, album, song, genre, year and a brief comment. ID3v2 is stored at the start of the MP3 file and can include, with up to 256Mb of data, a picture of the artist amongst other things. Most MP3 players can read out ID3 data.

KDE offers a number of software MP3 players. Here we shall present FreeAmp, which you can obtain at http://www.freeamp.org, where you’ll even find a Windows version. Start FreeAmp via the start menu item Multimedia/Sound/Freeamp or by entering freeamp in the fast starter.

Figure 9: The main FreeAmp screen

After a few clicks, there is nothing standing in the way of your musical pleasure: to play back an audio file, click on the button marked Files and search in the following window for the tracks on your system. Mark a song by mouse click, holding down the Ctrl key if you want to mark several. After a click on OK FreeAmp plays the tracks in the marked sequence. The MP3 player displays details of the song during playback (title and artist) and MP3 properties.

Figure 10: At a glance: Details of the song and MP3 properties

The volume is adjusted via the left slider; the right slider shows the progress of the playback, and you can “spool” back and forth through the track by moving the slider with the mouse. You can use the lower symbol buttons to jump from track to track, stop playback and pause it. Random playback can be activated via the left button. A click on the Repeat button means the current track will be played again; if you click the button a second time, the whole track list is repeated. To save yourself searching for tracks every time you run FreeAmp, you can create playlists. FreeAmp stores these in the .m3u format, exactly like WinAmp and other MP3 players, so you can easily import the playlists you have created under Windows. First we will explain how you create playlists with FreeAmp.

Click on the My Music button. In the window that opens click on the New playlist button. A second window will now open – select Add Files. Search the system for the songs which are to be collated into a playlist. By pressing the Ctrl key you can mark several songs simultaneously. After confirming with OK, the right-hand side of the browser window will list the title, artist, album, length in minutes and music genre of each track.

Figure 11: How a file browser shows FreeAmp your media files

Figure 12: Collate the songs for a playlist

Figure 13: All the important details on the MP3 file are shown on the playlist

Click, once all the songs have been collated, on Save Playlist and give the list an informative name. If you now click on the + sign before My Playlists in the left-hand browser window, all the lists created appear. A double click opens a list and displays its content in the right window. It’s quite likely that you’ve been annoyed at some time or other by incorrect details in MP3 files. These are due to erroneous ID3 tags. Select the file whose ID3 tag you want to edit and then from the dropdown menu select Edit/Info. In the next step you can correct and add to the details, and you can also add a commentary. Confirm your inputs with Apply.



Figure 14: Correct the ID3 tag of a file if necessary

In a similar way to Microsoft’s Windows Media Player, FreeAmp will, on request, search your system for audio files and add them to the My Music folder. To do this, select File/Search computer for music in the menu bar and define which directories in the system are to be searched. The default option is the root directory “/”, so the entire system is searched. If you select this option then, after the scan (which takes a very long time), you will also have all the system sounds in the My Music folder.

Figure 15: FreeAmp scans your entire system

It makes sense to restrict the scan, for example to your home directory, because this is where, as a general rule, your audio files will be stored. Click on the Entire file system button and in the following menu select My Home Directory or Let me select a directory to restrict the scan further. This activates the Start Search button, and you can select the corresponding directory. FreeAmp saves links to the scanned files in two subfolders: All Tracks and Uncategorized Tracks. After the scan you can create a playlist even more simply: click on New Playlist and then open My Music in the left browser window. Each song which is to go on the playlist can either be dragged into the right browser window using drag & drop, or added by selecting Add to Playlist from the dropdown menu. Next, save the new list.

Figure 16: Files can be added to a playlist via the dropdown menu or via drag & drop after the system scan

If you want to play back a playlist with FreeAmp, start the program, click on the My Music button and double-click on the list. FreeAmp then begins to play back the songs in the sequence of the list. Alternatively, activate Controls/Play Tracks in Random Order for random playback. The sequence of the tracks in the playlist can be altered via the Sort/Playlist button. If you like FreeAmp and want it to be the standard player started when you double-click an MP3 file, use a right-click to open the drop-down menu of any MP3 file and select Edit file types. On the General tab a few MP3 players will be listed in the lower window. Mark FreeAmp and click Up until FreeAmp is at the top of the list. Confirm with OK.

Figure 17: FreeAmp can be selected as standard application for MP3 files


The Secure Shell and OpenSSH

SECURE ACCESS As anyone who has ever left a vital file behind will appreciate, the ability to remotely connect to a system is immensely useful. Derek Clifford explains how to do this securely with SSH

Derek Clifford – Director of Micro Logic Consultants, a consultancy specialising in configuration management, design and rollout of standard configurations, and general Windows and Unix support

With more and more users permanently connected to the Internet, it can be useful, when away from your home or office, to be able to connect to your own server or network. In most cases (I hope) this will have been made virtually impossible by the firewall software or hardware installed as of necessity these days. Simply opening up the firewall to allow FTP, telnet and other communication would be madness; apart from the vulnerability, it would further compromise the systems because these programs transmit unencrypted passwords. The secure shell (SSH) offers a solution to this problem, both by controlling access in a secure way and by using public key encryption to secure communication.

History SSH was originally written by Tatu Ylönen, and the first release was freely available. However, further developments of the original program were issued under more restrictive licences, which severely limited its commercial use. In 1999 Björn Grönvall took the original free release and produced a more reliable product called OSSH. When this became known to the developers of the OpenBSD system, they took this version and produced OpenSSH, which contained no proprietary or patented software or algorithms, such components being used from external libraries. The OpenBSD group continued to develop OpenSSH, but found that porting to other Unix systems was complicated and required many changes for system dependencies. Thus the OpenBSD group now produce the core developments of OpenSSH for OpenBSD, and other groups port this code to produce a portable version.

Legal problems Like Phil Zimmermann’s PGP there were both legal

and commercial problems with the product. The ban on the export of strong encryption from the USA was overcome by sending a non-US developer to Canada to develop the first version of OpenSSH. The RSA patent on the asymmetric encryption algorithm made legal commercial use difficult, but this problem disappeared with the expiry of the patent.

Protocols The concept of public key cryptography, in which a pair of keys is used – one remaining secret, the other freely publishable to all – was mooted by Diffie and Hellman in 1976. Up to this time the major cryptographic algorithms relied on a single key being kept secret and accessed only by the sender and recipient of a message. In 1977 a practical implementation of the public/private key system was developed by Rivest, Shamir and Adleman (RSA). The RSA algorithm and further developments of the technique are the most popular and most secure methods of encryption available. OpenSSH offers the choice of RSA and DSA algorithms for the identification of users and hosts. The original SSH1 protocol has two variants: 1.3 and 1.5. These used the public key/private key RSA algorithm for authorisation, and the simpler 3DES and Blowfish ciphers for encoding data. Problems with the RSA patent made commercial use of SSH difficult, but the US patent expired in September 2000, so there is no longer a problem. SSH1 uses a cyclic redundancy check to maintain data integrity, but this has been found to be crackable. SSH2 was introduced to overcome the RSA patent issue, and to improve data integrity. The DSA (Digital Signature Algorithm) and DH (Diffie-Hellman key agreement) algorithms are used for



authentication, with which there are no patent problems. The CRC problem is solved by using an HMAC algorithm. OpenSSH supports all of these variants, but there is little point in using anything but SSH2, unless a system does not have suitable clients available.

Getting OpenSSH The latest version of OpenSSH is 3.2.3, and was released on 22 May 2002. The portable software for non-BSD systems is designated with version numbers such as 3.2.3p1. RPMs for Red Hat distributions and a source RPM are also available. The current portable download is openssh-3.2.3p1.tar.gz, and a suitable download mirror site (there is a very extensive set of mirrors) can be located at http://www.openssh.com/portable.html. The software requires two other packages to be installed: Zlib (a compression library) and OpenSSL (Secure Socket Layer) 0.9.6 or later.

SSH Components The secure shell system comprises a server daemon, sshd; several clients: ssh and slogin (secure equivalents to rsh, the remote shell, and rlogin), scp (secure remote copy) and sftp (secure FTP); and utilities for generating and using identification keys. The daemon needs to be started automatically on the remote machine through one of the startup scripts, and the clients and utilities need to be installed on the client machine. In practice the easiest option is just to install the software on both client and server, as it is necessary to generate a host key for each machine, which the installation software does automatically.

Installation For the majority of Linux and other Unix systems it will be necessary to compile the source. Having expanded the tarball, the sequence:

./configure
make
make install

will compile the system, install it and generate the host keys. The latest version installs by default under /usr/local, with its configuration files in /usr/local/etc, which may not be where an earlier version exists in your distribution. These can be overridden with the switches:

./configure --prefix=/usr --sysconfdir=/etc/ssh

which will install under /usr, with configuration files in /etc/ssh. The system is controlled by the configuration files ssh_config, which controls the client programs, and sshd_config, which controls the server daemon. A user can override these global settings through settings in the local ~/.ssh/config file.

Options in ssh_config are applied to a specific host, or group of hosts selected by wildcards, and control the overall parameters to be used when communicating with that host. Settings are applied once only, so host-specific parameters must be set in the file before system-wide defaults. The order of precedence in selecting the parameters is first any command-line options given to ssh, followed by user-defined configuration files and finally the system-wide default file.

Many of the default settings will be suitable for the normal user and are described in the manpages, but there are one or two parameters which are worth looking at. On the client side the parameter FallBackToRsh can take the values yes or no; setting it to yes will cause ssh to revert to the standard Unix remote shell rsh if ssh is not running on the target host. Although a warning is issued, this could lead to passwords being revealed. Fortunately the default for this parameter is no.

If Xwindows sessions are to be used over the secure shell, the parameters ForwardX11 and ForwardAgent must be set to yes (the default is no). This will allow X11 traffic, and automatically set the remote shell’s DISPLAY variable to direct the output of the X server correctly.

Systems behind firewalls may have difficulty with the fact that ssh uses low-numbered ports to make connections. If this is a problem the parameter UsePrivilegedPort can be set to no, to cause ports above 1024 to be used. Port 22 will have to be opened to allow the SSH server to function.

The SSH daemon configuration file also contains a setting which is required to be enabled if X11 is to be used: the parameter X11Forwarding must be set to yes.
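The "first setting found wins" precedence rule can be modelled in a few lines of illustrative Python. The effective_option helper is hypothetical, not part of OpenSSH:

```python
def effective_option(key, cmdline, user_cfg, system_cfg):
    """Return the first value found for key: command-line options win,
    then ~/.ssh/config, then the system-wide file."""
    for source in (cmdline, user_cfg, system_cfg):
        if key in source:
            return source[key]
    return None

# A user setting overrides the system-wide default...
print(effective_option("ForwardX11", {}, {"ForwardX11": "yes"},
                       {"ForwardX11": "no"}))             # yes
# ...but a command-line option beats both.
print(effective_option("ForwardX11", {"ForwardX11": "no"},
                       {"ForwardX11": "yes"}, {}))        # no
```

This is also why host-specific entries must appear before the general defaults within a single file: whichever matching entry is read first is the one that takes effect.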

Basic use of ssh Having set up the system and started sshd (probably by modifying one of the startup .rc files) the simplest


Figure 1: Setting up public and private keys with ssh-keygen

option is to start a session on a remote host with the command:

bohr# ssh hostname

The first time this command is executed the system will report that the identity of hostname cannot be confirmed, as the public key of hostname is not yet known on the local machine. The identity of the machine should really be verified, but it may not be practical to do so. The message does report the beginning of the remote host’s public key, so this may be checked to give some confidence that the correct machine has been reached. On proceeding, the system will add the remote host’s public key to the list of known hosts, and will in future verify the identity of the host. Because the user is not yet known to the remote host, the password for the user on the remote machine will be required. The need to type a password each time may be removed if the user adds his public key to the .ssh/authorized_keys file in the target user’s home directory on the remote machine. Having entered the password, the user is running a shell on the remote host, no password has been sent in readable form over the network, and all subsequent communication between the machines is encrypted.

Setting up a key pair To remove the need to type in a password for the remote user account, a public and private key pair is generated. The utility to perform this task is ssh-keygen. Most of the default settings are suitable, but it is necessary to specify the type of key to be generated. The -t switch controls this, and the allowed values are rsa and dsa for the SSH2 protocol, or rsa1 for the SSH1 protocol. The key length can range from 512 bits to 2048 bits, with a default of 1024 (-b switch). The user is asked where to store the key, but the default is usually appropriate, and a passphrase is input and verified. The passphrase cannot be recovered from the key, so if it is lost new keys will need to be generated and distributed. Use of the utility is shown in Figure 1. The output of the utility is two keyfiles, in the case of RSA encryption: id_rsa and id_rsa.pub. The public key (.pub) may be widely distributed but the private key must never be revealed. In order to use the keys, the public key must be installed in the authorized_keys file in the $HOME/.ssh directory of the user account to be made accessible on the remote host.

Figure 2: Passwordless but secure access with Xwindows started through ssh-agent

The authentication agent

Simply adding the user’s public key to the authorized_keys file merely replaces the request for a password with a request for the key’s passphrase. The trick to allow secure but friendly access to the remote host is to have the key available in memory, and for this the authentication agent ssh-agent is used. The agent is given a command, and all children of the agent inherit the keys added. For example the command:

bohr# ssh-agent $SHELL



spawns a shell. Keys may now be added and will be available to all sessions started in the shell. Adding the current user’s key is the default action of ssh-add, while other keys may be added by specifying the user’s keyfile:

ssh-add /home/user/.ssh/id_rsa

For each key to be added the passphrase will be requested, but this will be required only once; any remote sessions being started will automatically supply the key and the user will be logged on without a dialogue. The -l switch to ssh-add lists the keys available in memory. Obviously, to gain the best use of the authentication agent it should be started as the parent of all subsequent shells in the user’s initialisation files.
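Put together, a typical agent session might look like the sketch below. The eval form is an alternative to wrapping a single shell: it exports the agent’s environment variables into the current shell instead:

```shell
# Start the agent and export SSH_AUTH_SOCK/SSH_AGENT_PID
# into the current shell
eval "$(ssh-agent -s)"

# Add the default key; the passphrase is requested once
ssh-add

# List the keys currently held in memory
ssh-add -l
```

From this point on, ssh, scp and sftp sessions started from this shell obtain the key from the agent rather than prompting.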

scp and sftp These programs behave in exactly the same way as rcp and ftp, apart from the fact that there are additional switches for selecting encryption types, and that if interactive authentication is used the programs will request passwords or passphrases.
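Typical invocations might look like the following sketch; the host, user and file names are hypothetical:

```shell
# Copy a local file to the home directory on the remote host
scp report.txt user@hostname:

# Copy a directory tree recursively, as rcp -r would
scp -r ./docs user@hostname:backup/

# Start an interactive transfer session; get, put and ls
# work much as they do in a classic ftp client
sftp user@hostname
```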

Xwindows It is necessary to set the X11 switches in the configuration files to ‘yes’ in order to pass X11 traffic, and to set the DISPLAY variable. Obviously it would be very tedious to have to type the passphrase or password in every Xterm opened, so the preferred method of starting the Xwindows system is with ssh-agent. This will ensure that the agent makes the security keys available for every window opened (Figure 2).
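In OpenSSH the relevant switches are sketched below; defaults vary between distributions, so treat this as an example rather than a drop-in configuration:

```
# /etc/ssh/sshd_config (on the server)
X11Forwarding yes

# ~/.ssh/config or /etc/ssh/ssh_config (on the client)
ForwardX11 yes
```

Alternatively the client option can be given per connection with ssh -X hostname; ssh then sets the DISPLAY variable in the remote session itself.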

Figure 3: The Windows PuTTY client supports SSH

Windows and Mac clients

If you are stuck with only a Windows or Mac system to access your server, there are some free products available. For Windows, PuTTY provides a client which supports SSH (Figure 3), together with scp and sftp clients, plus the ability to generate key pairs. TTSSH is also a free Windows client, which is an extension to TeraTerm Pro, but it only supports the SSH1 protocol, and does not provide key generation or scp and sftp utilities. The Macintosh is catered for by Nifty Telnet (Figure 4), which only supports the SSH1 protocol, and MacSSH (which only supports SSH2).

Info

OpenSSH http://www.openssh.com/
OpenSSL http://www.openssl.org/
Zlib http://www.gzip.org/zlib/
PuTTY http://www.chiark.greenend.org.uk/~sgtatham/putty/
TTSSH http://www.zip.com.au/~roca/ttssh.html
TeraTerm Pro http://download.com.com/3000-2155-890547.html?legacy=cnet
Nifty Telnet http://www.lysator.liu.se/~jonasw/freeware/niftyssh/
MacSSH http://pro.wanadoo.fr/chombier/

Figure 4: Macintosh support



REVIEWS

CrossOver Office

MICROSOFT MEETS LINUX Running Microsoft Office under Linux is already possible with CrossOver Office, though as Patricia Jung explains, it’s still far from perfect

Microsoft’s Office suite may not be the best in the world, but it dominates the market to such an extent that even some Linux users succumb to its thrall – clandestinely booting up Windows to write letters, design presentations or to set up spreadsheets filled with calculations. While it would be more in keeping with the true faith to point out alternatives, the pragmatic approach of the Wine project – to make Windows programs run under Linux – will benefit users who are not concerned with ideological debates.

The wine of the code weavers

Wine (Wine Is Not an Emulator) is an attempt to replicate the Windows API (Application Programming Interface) so that function calls from native Windows applications can be received by Wine and converted in such a way that Linux and the X server will perform the relevant actions (for example drawing a window or reacting to a mouse click).


When the company CodeWeavers (whose main developer is Alexandre Julliard, the driving force behind the Wine project) introduced its new product CrossOver Office 1.0.0 at the end of March, it caused a wave of euphoria, and not just in the Linux-related media. CrossOver Office is designed to enable the installation and use of Winword and its cohorts under Linux. However, if you take a closer look you will see that CodeWeavers Inc. is trying to play fair with its potential customers. They freely admit that CrossOver Office only brings MS Word, Excel, PowerPoint, Outlook 97 and 2000, as well as Lotus Notes, to the Linux desktop. You will have to make do without Outlook Express, FrontPage, Access or Internet Explorer, and also without Clippy, the office assistant – who is unlikely to be sorely missed. The company also warns that non-US versions of Office packages will not work with CrossOver Office 1.0.0.

That last statement could not be generally verified in our test: not only were we able to install a version of Microsoft Office 2000 SR 1 Premium Edition without problems (apart from Outlook) under SuSE 7.2 with KDE 3.0 or fvwm2, we could also use it as advertised. The same was true for an MS Word 2000 SR1 OEM version on Debian Testing with KDE 2.2.2, and for the combination of GNOME 1.4 (Figure 6) or fvwm2 (Figure 5) and MS Office 97’s Winword, Excel and PowerPoint in the SuSE installation mentioned above.

Debian Testing Debian GNU/Linux (http://www.debian.org/) is the most widespread non-commercial Linux distribution. It always comes in three variants. The first is “stable” (currently Debian 2.2, codenamed “Potato”). This is the thoroughly tested and stable version in which only security updates are incorporated. Then there is “testing”, the contender for the next stable release, which incorporates the latest software that has undergone initial testing (current code name “Woody” – the future Debian 3.0). The third version is “unstable”, the developers’ workhorse and the main test subject, in which some features may well not be working properly (currently “Sid”).

Inscrutable

During our test, the only thing that went smoothly was the installation of CrossOver Office itself and that of the Microsoft Office products. When it comes to actually using them there seems to be a lot of luck involved. An action, such as inserting an MS Graph 2000 diagram into a Winword 2000 document, which worked fine in one setup, caused the entire GNOME desktop to freeze in another. KDE proved more robust, while the test run with fvwm2 was subjectively better overall. Apart from anything else, much depended on the particular version of CrossOver Office. While the insertion of the diagram mentioned above was no problem with the tested CD release, it caused the downloaded version to crash completely. This seems to point to CodeWeavers using the same version number (1.0.0) to distribute constantly improving versions of the software. After countless, often futile attempts to reproduce errors, our two main pieces of advice for potential CrossOver Office users are to have patience (if you are too impatient waiting for an action to finish you are at a greater risk of everything freezing up) and to be ready to do some tidying up afterwards. To this end CrossOver Office comes with a program called cxoffice_reset in the bin subdirectory of the installation directory, which, unlike Winword and its friends, acquires an entry in the KDE or GNOME start directory during installation. Calling this program will restore the functionality of your desktop if an MS Office program freezes, so that you can try again. It is generally a good idea to get this tool to clear away any old CrossOver processes after the end of every MS Office session, no matter how successful it may have been.

The installation of applications is done smoothly, but that’s only half the story

Where would you like to install CrossOver and MS Office?

Excel on Linux

If you want to run .exe files under Linux, you need to be prepared to endure a certain amount of misery. Not only will you encounter frequent and unpredictable waiting times and annoying crashes (which, eerily enough, may occur even when the application is simply left to its own devices for long enough), but the way in which the cursor has a tendency to flick across the desktop in a strange and unexpected manner does not exactly improve usability. Copying and pasting between different applications (Figure 4) is a hit-and-miss affair. The extent to which it works depends on the version of CrossOver Office as well as that of MS Office. Templates can sometimes be used, as long as they don’t contain any VB scripts. Inserting pictures works, but you have to make do without the Microsoft clipart gallery. The macro recorder is perfectly useable, unlike the VB and Microsoft script editor. Printing also worked without a hitch in our tests. In contrast, the screen display was rather disappointing despite the built-in TrueType fonts. You quickly get used to the fact that the file selection dialog refers to “drives” in true Windows fashion (Figure 5).

Info

Wine http://www.winehq.com/
Wine from CodeWeavers http://wine.codeweavers.com/
What works and what doesn’t http://www.codeweavers.com/products/office/the_real_dirt.php

Practical testing The CrossOver Office bin directory also contains links called excel, powerpnt, winword, outlook, frontpg, iexplore, notepad and msaccess (depending on the MS Office installation). When you call these (specifying the path if necessary) you will be disappointed to find that only powerpnt, winword, notepad and excel actually work properly. All these links point to the shell script wine in the same directory, which works its magic to ensure that the relevant Windows binary, from support/dotwine/fakewindows/Program Files below the CrossOver Office installation directory is called, with Wine as an intermediate layer.

CrossOver Office
Supplier: CodeWeavers Inc.
Web: http://www.codeweavers.com/
For: Use MS Office under Linux
Against: Prone to crashes, unstable

rating

PowerPoint running on Linux



PROGRAMMING

Perl: Part 4

THINKING IN LINE NOISE Dean Wilson and Frank Booth return for the latest instalment in our guide to all things Perl. This month we continue our look at regular expressions, or regexes as they are known

The regular expression engine Perl’s regular expression engine has become the de facto standard. Incorporating regular expressions common to early Unix tools (grep, ed, awk and sed) and later adding enhancements of its own, Perl’s regular expressions are a source of inspiration for most modern languages that openly seek to emulate its aptitude; many fall short of the mark by not integrating regexes into the core of the language, instead often relying upon external libraries or modules. The view that Perl code is “line noise” and “write only” can be attributed to the level of integration that Perl’s regexes share with its functions; regexes are by their nature concise and powerful. The examples extract_date1.pl to extract_date3.pl below show several ways to extract the date from a string, with and without regular expressions. Using substr to extract multiple parts of the string can be awkward to maintain at best and error prone at worst, due to the reliance on exact positioning rather than a more heuristic-based approach. Any alteration to string positions would need to be cascaded along, requiring changes to all subsequent offset values in the same string. In the following two examples (extract_date2.pl and extract_date3.pl) we use regular expressions on

Example: extract_date1.pl – without the power of regex

my $date = "20020530175046";

# Using string functions:
my $year  = substr( $date, 0, 4 );
my $month = substr( $date, 4, 2 );
my $day   = substr( $date, 6, 2 );
my $hour  = substr( $date, 8, 2 );
my $mins  = substr( $date, 10, 2 );
my $secs  = substr( $date, 12, 2 );

print "Date is: $day/$month/$year $hour:$mins:$secs\n";


# Example: simple_match.pl
my $text = 'Sesame Street';
if ($text =~ /Street/) {
    print "I found the word Street\n";
}

the string. This is an illustration of how using a regular expression can benefit the code, making it clearer and easier to maintain. In example extract_date2.pl we utilise the match operator to extract the values; extract_date3.pl uses the substitution operator to modify the string in place. A matching operation tests for the existence of a pattern within a string, using special characters to describe categories of matching text. A successful match returns a true value (usually 1); a failed match returns a false value. This example attempts to find a match for the value within the forward slashes (in this instance the literal value ‘Street’). The regex operator =~ binds the

Example: extract_date2.pl – using a matching regex

$date =~ m/(\d{4})(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)/
    and print "Date is: $3/$2/$1 $4:$5:$6\n";

Example: extract_date3.pl – alter the variable in place

$date =~ s!(\d{4})(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)!$3/$2/$1 $4:$5:$6!;
print "Date is: $date\n";



variable $text to the regular expression. The variable is then interrogated until the first match for the pattern is found within the variable’s contents or the end of the string is reached. If successful the match returns a value that is true, whilst leaving the contents of the variable unchanged. In greplite.pl we show how simple it is to use a regular expression in a position where you would normally expect to find a function or a comparison operator; in this case finding the first occurrence of the target word (contained in $word) on a line and then reporting the line and the line number where the match is found.

Regular expressions allow us to perform pattern matching upon strings using meta-characters. This enables us to match a large number of possible strings implicitly. The most common meta-characters used are: . * + ? | ( ). Some of these meta-characters may be familiar, being common to many Unix tools. Be careful though: Perl’s regular expressions are a superset of the standard regexes commonly found in older Unix and GNU tools, so the meta-characters may have different meanings. The following example matches both the correct and American spellings of the word ‘colour’:

foreach ( 'Living color', 'Blue is the colour' ) {
    if ( /colou?r/ ) {
        print "$_ has the word color or colour";
    }
}

In this example we introduce a type of meta-character called a quantifier. The ? in the regular expression means zero or one occurrence of the preceding character, ie the ‘u’ in the pattern is optional. Other quantifiers are: + (one or more occurrences) and * (zero or more occurrences). See the Quantifiers boxout for a more complete list. It is often desirable to have a set of alternatives that you wish to match from. In simple_cap.pl we attempt to match the name of a popular scripting language (perl, python or ruby) by using the pipe | operator to enable us to select from alternatives. The pipe meta-character presents a list of

# Example: simple_cap.pl
my $text = "Just another perl hacker";
if ($text =~ /((perl)|(python)|(ruby))/ ) {
    print "This person can code $1\n";
}

Example: greplite.pl

die "usage: greplite.pl <word> <file>\n" unless @ARGV > 1;
my $word = shift;
while (<>) {
    print "Line $.: $_" if $_ =~ /$word/o;
}

alternatives; the values are separated by pipes and compared sequentially. If a match is found the remaining pipes are ignored and comparison resumes after the last value in the alternation (this is the value immediately following the last |). In the example simple_cap.pl we use parentheses both to group values into sub-patterns (or atoms, as they are also known) and to capture the matching value into the $1 variable so we can use the matched value later. This is known as capturing and will be covered in greater depth in subsequent sections. One of the important aspects of alternation is that it affects an atom rather than a single letter, so in the above example, simple_cap.pl, we can use parentheses to create three atoms, each

Quantifiers

A regular expression with no modifiers will match just the once. While this is a sound principle, it is often desirable to override the default behaviour and match a variable number of times; this is where you would use a quantifier. A quantifier takes the preceding atom (sub-pattern) and attempts to repeat the match a variable number of times based upon the quantifier used. The table below shows the quantifiers and the number of matches they attempt:

Quantifier   Number of matches
?            Match zero or one time
*            Match zero or more times
+            Match one or more times
{NUM}        Match exactly NUM times
{MIN,}       Match at least MIN times
{MIN,MAX}    Match at least MIN but no more than MAX times

While we will cover the exact method of matching and the steps attempted when we return to cover the internals of the Perl regular expression engine, it is important to know that by default quantifiers are greedy. Each open-ended quantifier (such as + and *) attempts to match as many times as possible, providing that the greediness does not cause the whole match to fail. As you can see in greedy_regex.pl, if left unchecked the .* can consume far more than you would expect. You can limit the match to be minimal in nature rather than greedy by appending a ? after the quantifier. When used in this manner the ? sheds its properties as a quantifier and instead limits the match (makes the match minimal) to consume as little as possible while still being successful.




Example: greedy_regex.pl

my $quote = "There's no place like home";

# Default, greedy
$quote =~ /e(.*)e/;
# This prints "re's no place like hom"
print "I matched '$1'\n";

# Parsimonious/minimal matching
$quote =~ /e(.*?)e/;
# This prints 'r'
print "I matched '$1'\n";

containing a whole word that we can then use in the alternation.

# Means try and match 'perl' or 'python' or 'ruby'
/((perl)|(python)|(ruby))/

If you leave out the grouping then the alternation will try to match one of the following words: perlythonuby or perlythoruby, perpythonuby or perpythoruby. In essence it is potentially matching one of the characters on either side of the pipe; since there are no brackets to force precedence in any other manner, the single character is the default.
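The precedence rule also bites when anchors are involved; a small sketch using our own test string:

```shell
# Without grouping, ^ binds only to the "perl" alternative and
# $ only to "ruby", so the ruby$ branch matches the string's end
perl -e 'print "match\n" if "I pick ruby" =~ /^perl|ruby$/'

# With grouping, the anchors apply to the whole alternation, so
# the entire string would have to be exactly "perl" or "ruby"
perl -e 'print "match\n" if "I pick ruby" =~ /^(perl|ruby)$/'
```

The first one-liner prints `match`; the second prints nothing.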

Example: delint.pl

while (<>) {
    # Basic comment remover
    s/\s#.*$//;
    # Skip blank lines
    next if /^\s*$/;
    # Skip lines beginning with comments
    next if /^#/;
    # Strip leading whitespace
    s/^\s+//;
    # Strip trailing whitespace
    s/\s+$//;
    print;
}


Example: quote01.pl

my $quote = "take off and nuke the site from orbit";
$quote =~ m/(?:and)\s((\w*)\s\w+\s(\w*)\s(\w*))/;
print "$2\n$3\n$4\n$1\n";

Anchors away

Anchors are used to tether a regular expression to the end or beginning of a line. They’re used to speed up a search or, more accurately, to specify the position of a pattern within a string. The ^ (aka caret or circumflex) matches the start of a line in a string. The $ matches the end of a line. In the example delint.pl anchors are used to remove all lines that are either empty or begin with a comment. Working through the example in sequence, we first remove trailing comments; the .* pattern will match any number of characters from the first hash encountered to the end of the string and remove them. The next regex uses the \s character class, which matches any whitespace, ie tabs and spaces. Lines that begin with a comment # are also skipped, as the comment runs to the end of the line. Finally, remaining leading and trailing whitespace is removed. If we wanted the domain name from an email address we could use anchors and grouping to capture the information. The short script simplemail.pl will grab a domain name from a possible email address and attempt to ping that domain. It uses grouping, where a sub-pattern’s actual match is recorded in one of Perl’s internal variables ($1..$9 etc – these variables are read only). The sub-pattern to be captured is indicated using parentheses and the results, if any, are stored in the numbered variable corresponding to the order of captured matches

Example: simplemail.pl

my $email = 'example@example.com';
if ($email =~ /\@(.*?)$/ ) {
    print "Found domain $1 in email\n";
    open (P, "ping -c 4 $1 |") or die "Can't ping\n";
    while (<P>) {
        print $_;   # $_ can be excluded
    }
    close P;
}



within the current regex. The regex in the example anchors to the end of the string and works back to the @ character in the string to retrieve the domain name.

We use two types of grouping in the quote01.pl example. The first type we cover is non-capturing, (?:), which allows us to group a sub-expression without storing the results in a variable. The remaining groups all capture the results if the match is successful. Notice that the parentheses are nested. This enables us to capture the overall result and then subsets of this result. The example captures text in the following variables:

$1 = nuke the site from
$2 = nuke
$3 = site
$4 = from

The outer parentheses capture the whole match and the nested ones capture individual words.

Class act

Character classes are a means of providing a wide variety of characters as alternatives, rather like a pipe. However a character class can only ever provide alternative characters, where the pipe can offer alternative patterns. Character classes are contained within square brackets, [ and ]. The pattern:

/h[oaiu]t/;

will match the words hot, hat, hit and hut. It could be written using pipes in this way:

/ho|a|i|ut/;

which is less legible and requires more effort to maintain as the options are added to. Obviously the more characters we add to a class the more pronounced the advantage is. There is, however, more to character classes. This example shows much of the extra syntactic sugar found in character classes:

/[a-z\d_@.-]/i

This example matches characters that are valid in an email address; it could be used for a cursory validation of an email address. It works using a variety of methods:

a-z is a range of literal characters, a,b,c,d,...,x,y,z
\d is a predefined character class for digits (0,1,2,...,8,9)
_@. is any of the characters _ or @ or .

In the example we use an unescaped dot which, rather than matching any single character as it normally would, matches a literal dot. This may seem strange at first, but it makes little sense for the “match anything” meta-character to retain its behaviour in a character class. The loss of meta-characters’ special properties within a character class is almost across the board, except for -, which is used for ranges, and ^, which we will cover later. If you wish to match a literal - in a character class it must be specified as either the first or last character in the class. We can choose what not to match with

The leaning toothpick effect In many regular expressions the / character is required within the matching section. Which can often render the regex illegible. The example below illustrates this:

regex is qualified with an ‘m’ for match or an ‘s’ for substitution. Here is the first example of altering the delimiter to allow a cleaner, more readable regex:

s/CVSROOT=\/usr\/local\/cvsrepos\//CVSR OOT=\/usr\/shared\/cvsrepos\//g;

s!CVSROOT=/usr/local/cvsrepos/!CVSROOT= /usr/shared/cvsrepos/!g;

This simple regex substitutes one path for another. The number of forward and backward slashes make it very hard to understand what the regex is doing, this is sometimes called the ‘leaning toothpick syndrome’. The backslash is required to escape each of the forward slashes so the Perl interpreter doesn’t end the operation prematurely. Perl accepts almost any ASCII character for its separator, as long as the type of

In this example we change the regex delimiter to an exclamation mark so that forward slashes lose their special meaning and do not require escaping. Perl allows a significant amount of flexibility in the delimiters you may use and even allows you to use paired delimiters such as parentheses, curly braces and angle brackets: s(CVSROOT=/usr/local/cvsrepos/)(CVSROOT

=/usr/shared/cvsrepos/)g; s{CVSROOT=/usr/local/cvsrepos/}(CVSROOT =/usr/shared/cvsrepos/)g; s<CVSROOT=/usr/local/cvsrepos/><CVSROOT =/usr/shared/cvsrepos/>g; Using paired delimiters (such as [,],(,),{,} ) can clarify where the find and replace sections occur within the regular expression. If you have both sections of a substitution in paired delimiters you can further increase the readability of the expression by placing the different sections on separate lines. Furthermore, different paired delimiters can be used to separate the match and substitution sections of the regular expression. s<CVSROOT=/usr/local/cvsrepos/> (CVSROOT=/usr/shared/cvsrepos/)g;




Table 1: Common character class shortcuts

Symbol   Meaning
\d       digit
\D       non-digit
\s       whitespace
\S       non-whitespace
\w       word
\W       non-word

by negating a character class, using ^, when we need to fail on a small or unspecified set of values:

for (<>) {
    /\&[^a-z#\d]+;/ and print "Bad entity name: $_";
}

An important aspect of negative character classes is that unless they are followed by a ‘*’ quantifier they are still required to match a character outside of the negative character class. At first glance the example comp_camel.pl may seem to work, but it hides a subtle bug. If the string “camel.” is attempted against the regular expression then it will match, but the string “camel” will fail. This is because the negative class ([^s]) has to match a character, and in this case it fails, since there are no more characters to match against. In this instance * (zero or more) can be used to great effect. The literal part of the pattern (camel) is checked against the string, matching letter by letter, until the pattern progresses to the negative class; this then has nothing to match against that is not an ‘s’ and so fails, forcing the whole match to fail.

Example: comp_camel.pl

my $book = "camel";
print "Match\n" if $book =~ m/camel[^s]/;

Perl character class shortcuts

Now that you have seen how to use both positive and negative character classes we can introduce you to another of Perl’s pioneering developments in regular expressions: character class shortcuts. As you can see from Table 1, all of the common shortcuts are backslashed single letters, where the lowercase letter is a positive match and the uppercase version is a negative. In count_blank.pl we use an anchored Perl character class (in this case \s) to match any lines that consist of only whitespace. The $blank variable is then incremented for each match made until we run out of lines and exit the while loop. Next we divide $blank by the special implicit variable $. (which holds the current line number, in this case the number of the last line read in) and multiply by 100 to get the percentage of blank to non-blank lines. The last line of the example passes both the variable $percent and the string to follow it to the print function as a list, causing each to be printed in turn.

Example: count_blank.pl

my $blank;
while (<>) {
    $blank++ if /^\s*$/;
}
my $percent = ($blank / $. * 100);
print $percent, "% of the file is empty\n";

If the code sample were rewritten without the \s then the equivalent handwritten character class would be [ \t\n\r\f]. The \s shortcut is both clearer and less error prone and should be used out of preference. Now that we have covered \s we can move on to the \d (digit) shortcut. The matchip.pl example takes a single argument and then checks to confirm it is in the form of an IPv4 address (ie 127.0.0.1). Rather than using the very coarse match \d+, which would allow any number of digits, we use a different quantifier that allows the specification of a minimum and an optional maximum number of times to match the preceding atom. This is represented in the example by the numbers inside the curly braces. First we give the atom to match, which in this case is a \d. We then open the curly braces and put the number specifying the minimum number of times to match, followed by a comma and then the maximum number of times to match. If you wish to match an exact number of times you change the syntax slightly and put the number without a comma: {5} would match the preceding atom exactly five times. It is also possible to have an open-ended match with a minimum number of desired matches but no limit to the number of times the match is permitted; this is achieved by not putting a maximum value after the comma: {3,} would be successful if the atom to the left of it matched three or more times.

In matchip.pl we use this to allow between one and three digits in a row ({1,3}) followed by a dot, and then the same pattern thrice more but without a trailing dot. While this alone is more than satisfactory over the handwritten character class version, it can be made even



simpler with the application of grouping and a quantifier. We can change the line containing the regular expression from:

if ($ip =~ /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/) {

to (the full program is in matchip_group.pl):

if ($ip =~ /^\d{1,3}(\.\d{1,3}){3}$/) {

And now, rather than repeating the digit matches, we put the pattern inside grouping parentheses so that the attempted match for the literal dot and the one to three digits is a single atom, and then apply the curly quantifier so that it must match three times. This makes the regex more compact and easier to grasp, as you only need to work out what the one match does and then repeat it. We next move on to the last of the common shortcut classes, \w. The shortcut for word is slightly different from what you may expect, as it covers the characters that would often be considered tokens in a programming language rather than real words. It covers the range [a-zA-Z0-9_] (lower and uppercase letters, digits and the underscore); the most notable absence is -. If you wish to match a string only if it contains alphabetic characters then you will need to use either the handwritten character class [a-zA-Z], the slightly more complex [^\W\d_], which matches only alphabetics, or the POSIX [:alpha:], which has the (possible) benefit of understanding locale settings (see perldoc perllocale for more details on this complex subject). While the above code snippets are enough to put you along the path of working with words, strings and \w, there are some more thorny aspects involved in matching words in real text. Many words have punctuation characters in them that make matching more difficult than you would at first expect. For example, words with apostrophes require additional effort; fortunately Perl provides features such as word boundaries to simplify this kind of task, but that is beyond the scope of this introduction to regular expressions. Never fear though, we will return to cover them in the near future (or you can look up the suggestion in perldoc perlre if you just can’t wait).

Example: matchip.pl

# Check that an argument has been provided. If not, exit.
die "usage: match_ip.pl <ip address>\n" unless $ARGV[0];
# Remove the newline
chomp(my $ip = shift);
# Try and match an IP address.
if ($ip =~ /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/) {
    print "$ip seems valid\n";
} else {
    print "$ip is not a valid ip\n";
}

# This checks if book is a string of characters and numbers
$book =~ /^[[:alpha:][:digit:]]+$/;   # valid

# This looks like it should match empty lines
$book =~ /^[:space:]*$/;              # invalid

The second line of code in the example fails because [:space:] is being used as a character class rather than inside a character class. Perl’s regular expression engine interprets the pattern as a character class containing the characters ‘:’, ‘s’, ‘p’, ‘a’, ‘c’ and ‘e’, while ignoring the duplicated ‘:’. The pattern you probably intended to use is:

# valid, note the double '[[' and ']]'
$book =~ /^[[:space:]]*$/;

This article has introduced the more common aspects of regular expressions. It is by no means an exhaustive guide, though, given that Perl and its regular expressions are syntactically rich and offer an abundance of alternative methods, such as positive and negative lookaheads, that are useful tools but destined for coverage in the future.

POSIX character classes

Info

Now we have covered character classes and Perl’s common shortcut character classes we can give a brief overview of the last type of character classes you may see in Perl code; the POSIX character class. POSIX is a set of standards that dictate how, among other things, interfaces should be presented to allow easier porting of applications from one POSIX compatible system to another. POSIX has its own character class notation that in Perl is used only when creating character classes.

You may want to continue your studies and if so, to assist you, here are a few invaluable references on the topic of regular expressions. Perl regular expressions documentation Mastering Regular Expressions (2nd Ed) Sed and Awk Pocket Reference Japhy’s Regex Book

perldoc perlre http://www.oreilly.com/catalog/regex2/ http://www.oreilly.com/catalog/sedawkrepr2/ http://japhy.perlmonk.org/book/

Issue 22 • 2002

LINUX MAGAZINE

57


58 C

17/6/02 4:59 pm

Page 58

PROGRAMMING

C: Part 9

LANGUAGE OF THE 'C'

This month we look at a number of smaller language features we've yet to cover but, as Steven Goodwin explains, that doesn't make them any less useful

Love is a battlefield

When creating a structure, we don't always want to use a whole integer for a piece of data – three bits might be more than enough – so why waste space when we needn't? Also, if working with hardware, it's very likely we'd have to deal with a control byte where each bit means something different. We could read the data as a byte and look at each bit with a series of faceless bit masks and bitwise AND (&) operators, or we could use a bitfield and use names, treating them like normal structure elements.

struct {
    unsigned int LSB       : 1;
    unsigned int ThreeBits : 3;
    int                    : 0;  /* pad to next word */
    unsigned int Word1_LSB : 1;
} BitTest;

BitTest.LSB = 1;
BitTest.ThreeBits = 4;

Each element can consist of one or more bits, and is described with the : notation, above. When accessed it acts like a normal integer in all respects, except that it has a smaller numeric range (naturally), and it is not possible to take its address. The type must be an int or an unsigned int. The effects are the same as for normal variables. So, in the above example, if LSB was instead declared as a signed int, it could only store 0 or –1 (not 0 or 1, as it does here). Similarly, ThreeBits can hold the range of values from 0 to 7, as opposed to the signed version, which would hold +3 to –4. It is best to make all bitfields unsigned for this reason, particularly as they generally reference bits and not numbers. Individual bitfield elements cannot extend over machine word boundaries (in the case of x86 machines, that occurs every 4 bytes, or 32 bits). If you need more bits, then a ': 0' will automatically


pad the rest of the word, allowing you to start again. No name is necessary when padding in this manner. GCC, however, will automatically shuffle bits into the next word if necessary, but when working at this low level I prefer to know where my bits are, and will pad explicitly, as in the example above! As always, spacing is for clarity, and bitfields are referenced like any other structure element. Should a number greater than seven be applied to ThreeBits, for example, all but the three least significant bits will be lost, much the same as with casting, and equivalent to:

BitTest.ThreeBits = value & 7;

To be useful, there should be several bitfields in a structure, and they should mirror something useful in

Listing 1

1  #include <stdio.h>
2
3  union CharOrInt {
4      int iInteger;
5      char cChar;
6  };
7
8  int main(int argc, char *argv[])
9  {
10     union CharOrInt UnionTest;
11
12     UnionTest.iInteger = 2002;
13     printf("Int = %d, char = %d\n", UnionTest.iInteger, UnionTest.cChar);
14     UnionTest.cChar = 'A';
15     printf("Int = %d, char = %d\n", UnionTest.iInteger, UnionTest.cChar);
16
17     return 0;
18 }




the real world (perhaps as a Z80 register in a ZX Spectrum emulator program)! Using them to save space is rarely justified outside of embedded systems, since memory is cheap. To be really useful, they are often used in conjunction with unions (no pun intended).

Union city blues

The union is the little sister of a structure. They follow the same syntax, they both like being typedefed into something more readable, and both have good uses within C. Both structures and unions can hold (say) four variables. However, unions can only hold one at a time! Take a look at Listing 1. Here you will see:

Int = 2002, char = –46
Int = 1857, char = 65

This is because both variables are stored at the same memory location, the size of the union itself being the size of the largest element within the union. Writing into one overwrites (at least part of) the data in the other, and vice versa. And the funny thing is – you don't know which variable holds valid data, since there is no way of knowing whether cChar or iInteger was written to last! Now, before you think this has a limited use (and start scanning this article for the next song title!) let me show you a few examples.

union ConfigTagName {
    int iConfigData[4];
    struct {
        int iMaxFilesOpen;
        int iDefaultWindowWidth;
        int iDefaultWindowHeight;
        int iFirstWindowToOpen;
    } Cfg;
} Config;

Since the array and the structure are both held at the same memory locations we can reference the configuration data in whichever manner pleases us. We can use Config.Cfg.iMaxFilesOpen (which has a programmer-friendly variable name), or Config.iConfigData[0], as they are guaranteed to be at the same memory location. If we're saving the information to a file, a loop (and not individual names) could be used to write out each entry, whereas using specific identifiers (such as iMaxFilesOpen) would take extra effort. We can also use bitfields within unions, too.

union {
    unsigned char iCC;
    struct {
        unsigned int Carry     : 1;
        unsigned int Subtract  : 1;  /* also called N flag */
        unsigned int Parity    : 1;  /* or overflow */
        int                    : 1;  /* no name creates padding */
        unsigned int HalfCarry : 1;
        int                    : 1;  /* more padding */
        unsigned int Zero      : 1;
        unsigned int Sign      : 1;
    } Flags;
} Z80_Flags_Register;

This example shows us the power of combining unions and bitfield structures. We can reference individual bits, to make our emulation code intuitive and easy to read, whilst retaining a simple way of handling the whole register (for creating a snapshot, say) when the need arises. For example:

Z80_Flags_Register.Flags.Zero = 1;
Z80_Flags_Register.Flags.Subtract = 0;
printf("All flags = %d\n", Z80_Flags_Register.iCC);

Only the first named element of a union may be initialised on creation; the union as a whole cannot. This is why the second element is usually the structure (if it has one).

union CharOrInt coi = { 2002 };

If it is necessary to initialise several union elements, you will have to assign each one individually.

Hold me now

There's more than one way to store a variable. By adding what is called a storage class to the declaration, we can invoke five different types of behaviour:

● auto
● register
● volatile
● extern
● static

auto is short for automatic, and is the common-or-garden local variable that we've been using up to now. All variables, unless qualified with one of the other classes above, will become auto by default. It is rarely used these days.

auto int iNormalVar = 12;

register tells the compiler that we will be using this variable a lot, and would like it placed into a register on the CPU to improve speed. This is only a request, however, and the compiler is not compelled to do so





(think what a mess it would get into if you requested 100 register variables, when the machine only has eight!). register is still seen in kernel code and some tight loops, especially on embedded systems. Naturally, should the variable be placed into a register it will not be possible to take its address, but that should not be an issue for us, because there's a compiler warning should we try! Its use has fallen out of favour in application development because today's compilers usually make a better guess than the programmer at which variables should be placed into registers. (It is the compiler that creates the code, after all.)

register int iLoopCount;

volatile was added to ANSI C from the original K&R in order to solve problems with optimised memory-mapped systems, and is best explained with a short example.

Listing 2

1  #include <stdio.h>
2
3  void CountCalls(void)
4  {
5      static int iCount = 0;
6      iCount++;
7      printf("Count = %d\n", iCount);
8  }
9
10 int main(int argc, char *argv[])
11 {
12     CountCalls();
13     CountCalls();
14     CountCalls();
15
16     return 0;
17 }

volatile int iDataOnPort;
extern int iLivesElsewhere;

/* get the serial port to share data at this memory location */
MapSerialPortToAddress(&iDataOnPort);

/* wait until the port is free */
while (iDataOnPort != 0)
    ; /* empty */

This loop appears to wait forever, since iDataOnPort is never changed anywhere inside the loop. That is true. However, since iDataOnPort is at a memory location which is also used by the serial port, it would be feasible for the loop to exit as soon as the port was free and ready for use. So why make a special type called volatile? Well, imagine if the compiler decided to get clever! It would see that this variable was used exclusively in this loop and would consider putting the value of the iDataOnPort variable into a register. It would no longer be reading from the memory location, and would therefore never realise the port was free, causing the program to loop forever, as we'd originally suspected. Without optimisations, the value of iDataOnPort would be read from memory every time it's used, but using the reserved word volatile states the intention clearly.

extern doesn't actually create a variable; it just indicates that there is a variable somewhere with this name, and of this type, in the program. It might appear later in the file (allowing us to use variables before they have been declared) or in another file entirely. Any initialisers on the original declaration should not be duplicated here, though. We will look more at this when we cover multiple files and large-scale projects in a later issue.


Saving the best for last is the static class, shown in Listing 2. This will output:

Count = 1
Count = 2
Count = 3

Static variables are very similar to global variables – they retain their value throughout the life of the program, and they will automatically initialise to zero when the program starts. However, they have local scope, so they can only be accessed by the function they are declared in (like normal automatic local variables), which leads to the behaviour demonstrated above. Unfortunately, this makes resetting iCount to zero quite tricky! The static class can be useful for returning strings from functions, thus:

char *GetString(int iNumber)
{
    static char str[32];
    sprintf(str, "%d", iNumber);
    return str;
}

Without the static storage class this would not work, as the local str variable would be destroyed when the function exited, leaving you with a pointer to an invalid piece of data. However, because str will be there (intact) when we exit GetString, its data can be safely referenced outside the function. The problems come later, when we make two calls to GetString and do not handle the results immediately, since the local array holding the first




result gets overwritten by the second call. For example:

printf("%s and %s are not what you expect!",
       GetString(1), GetString(4));

This technique can have its uses, but because it produces this sort of problem it is often considered bad form.

Constant craving

There is a very special variable qualifier called const. It can be used either as part of a normal variable declaration, or with a function parameter. In both cases, the value remains constant, and therefore cannot change.

const float pi = 3.1415926f;

Here, we have declared a constant variable! We can use pi as we would any other (compare it with variables, print it out, and so on), but we are not allowed to change it; the compiler will give us a warning if we try.

pi += 0.3f;           /* warning! */
new_val = pi + 0.3f;  /* valid! */

Naturally, these constants must be assigned a value when they're declared, otherwise it would be impossible to set them up afterwards, wouldn't it? Constants are used because they make the code much easier to read, provide type security that macros are unable to provide, and data integrity that variables cannot provide. In larger programs, constants are used to maintain unity between several functions: imagine if half the world chose pi to be 3.1415926, whilst the other half used 22/7! One single (global) constant is a neat solution to this problem.

The second form of const, and perhaps the more interesting, is the constant pointer. Functions that take a pointer to data have control over that data. When you call strcpy, for example, you are giving complete control of your strings to the strcpy function. You must trust it completely not to wipe over your original data. Through extensive testing we know that's not the case; however, if there were a way the compiler could trap the problem in advance of the bug-testing stage, it would be very much appreciated. The keyword const lets you do that, and strcpy has been implemented using it:

char *MyStringCopy(char *pDest, const char *pSrc)
{
    char *ptr = pDest;

    *pSrc = 'H'; /* Let's try changing the first letter for a laugh! */
    while (*ptr++ = *pSrc++)
        ; /* empty statement */
    return pDest;
}

This function declares that MyStringCopy will not change the (constant) data to which pSrc points. It is allowed to change the pointer itself (as we do here in the loop), but not the data; doing so will produce an 'assignment of read-only location' message. This also prevents us from 'having a laugh' – either intentionally, or through a bug in the code. The placing of the reserved word const determines whether it is the pointer itself that remains constant, or the data to which it points: written after the *, const binds to the pointer; written before the type, it binds to the data.

char * const pSrc = pData;       /* const after the *: constant pointer */
const char *pOtherSrc = pData;   /* const before the type: constant data */

pSrc = pNewData;       /* Not valid */
*pSrc = 'A';           /* Valid */
pOtherSrc = pNewData;  /* Valid */
*pOtherSrc = 'A';      /* Not valid */

Like the pi example above, if the pSrc pointer is not assigned at declaration time it cannot be assigned at any time. It is possible to declare a const without assigning it a value, but it is undetermined. It is possible, through type casting, to remove this protection from constant pointers. However, you can't do it accidentally, and that's a very good start.

Dark side of the moon

One issue we've yet to deal with is the printf function. At least, we've yet to completely deal with it, anyway. From the start we've used this function to print data to the screen; one variable, two variables, three variables... all using the same function. Yet nowhere have we seen how it does this, since a function requires a specific number of parameters. Is it a special feature? A hack with printf? Or a figment of our collective imaginations? Well, actually it's a feature, called ellipses. Ellipses can be used anywhere a function needs to have a variable number of parameters. These parameters can be of any type, but there must be at least one consistent parameter in the list. In the printf example, the format string (a char *) is always there –





it's the other parameters that change, as shown in Listing 3. The first point to note is that this feature of the language needs an include file, stdarg.h. This is unusual, but not unexpected, since the code to parse each argument is a set of macros (parading as functions) to start, iterate over and end the list of parameters. As none of these types or "functions" are part of the language, a header file is needed. The three dots (...) in the function definition (line 4) tell the compiler that we are using ellipses. It is impossible to add parameters after the ellipses, since there is no way of knowing if a function argument was intended for the ellipses, or as a subsequent parameter. It also follows that there can be only one set of ellipses per function. Finally, since we do not know how many parameters have been passed in, the function must be able to work this out from the data itself, or by including another parameter to tell us. Our example uses the compulsory first parameter to indicate the count. Alternatively, we could choose to terminate the list with a –1, for example. Line 6 declares a variable (va_ptr) that is used to hold our current position in the parameter list. It is of no concern to us what type it is, or how it works. For now, let's be happy that it does! This variable is set up to point to the start of the parameter list by va_start in line 9, by giving it the last named function parameter. Failure to do so will produce a warning and wrong results!

Listing 3

1  #include <stdio.h>
2  #include <stdarg.h>
3
4  int SumAllDigits(int iNumOfDigits, ...)
5  {
6      va_list va_ptr;
7      int iTotal = 0;
8
9      va_start(va_ptr, iNumOfDigits);
10     while(iNumOfDigits--)
11         iTotal += va_arg(va_ptr, int);
12     va_end(va_ptr);
13
14     return iTotal;
15 }
16
17 int main(int argc, char *argv[])
18 {
19     printf("Total of digits is %d\n", SumAllDigits(4, 14, 6, 19, 73));
20
21     return 0;
22 }


Each parameter is read in turn with the va_arg macro (line 11). The second parameter is the type of data you want to retrieve from the parameter list. You can retrieve any data type (as the printf function shows), but it must match the data that exists there. Again, the function must be able to work out (from the data itself, or by prior agreement) what type each argument is. The type needs to be given so the va_arg macro can process the right amount of data. va_end is a tidy-up macro that should be included for completeness (it tells the human reader of the code where the variable argument processing has finished) but internally does very little. If you're a really keen student of C, an understanding of pointers and a willingness to sift through 200 lines of code is all that's needed to read the variable argument code (it's written in macros in stdarg.h, but isn't very pleasant).

One to another

Enumerations are a neat way of grouping constants together with meaningful names. When writing a snooker game, for instance, we might want to store each ball colour by name and still have a way of referencing its point score when potted. We could use a number of constant variables (like pi above) or one enumeration. The latter is preferred because it has greater readability, and lets us use the enumeration as a type. The other big benefit of enumerations (over, say, macros) is that all are limited to block scope. So, if you declare an enumeration local to a function, that function is the only code that can see it. In the examples that follow, note the similarity with the syntax of structures.

enum SnookerBall { red=1, yellow, green, brown, blue, pink, black };
enum SnookerBall NominatedBall;

NominatedBall = yellow;

Here, SnookerBall is the tag for the enumeration. We can use this as if it were a type like int or char (remembering to prefix it with enum, of course). However, there is no protection if you decide to write:

NominatedBall = 7;   /* force it to black */
NominatedBall = 23;  /* this actually works, but is not good coding! */

or

int iBall = blue;    /* set iBall to 5 */

The values of the enumeration can be assigned explicitly with an equals sign (as we did for red) or left to the default case, where each enum has a value


58 C

17/6/02 5:04 pm

Page 63


1 greater than the previous one. If no value is given for the first enumerator, it is assigned zero. Once set, the value of an enum cannot change. It must also be integral. When using enums as error codes from functions (which is a highly useful feature) it is good practice to make zero the default, "no error" case, as that is the convention for return values representing program state. Naturally, enumerations can be typedefed to remove the constant need to type enum!

typedef enum { red=10, white=5, spot } BilliardBall;

BilliardBall iInPlay = red;

Vogue

Interestingly enough, there are two flow control instructions we haven't fully covered yet. One is the ill-famed goto statement, whilst the other affects for, while and do loops to prevent them from going with the flow. Its name is continue.

Breakout

With all loops, there will be an exit condition that, when met, will terminate the loop at the end of that pass. There is also the possibility to break out of that loop early with the break statement, as we saw in part two.

for(y=0; y<20; y++) {
    for(x=0; x<32; x++) {
        printf("X");
        if (x == y)
            break;
    }
    /* break causes the code to jump here,
       and continue with the next value of y */
    printf("\n");
}

Now, break has a cousin, continue, that is also very useful. She (because continue is female!) will jump directly to the next iteration of the loop – it will not pass through the rest of the code, it will not collect £200, but it will increment the loop counter.

for(i=0; i<10; i++) {
    if (i == 5)
        continue;  /* let's skip number 5! */
    printf("Mambo number %d!\n", i);
}

Like break, this lets us wield great power from within our loop, be it for, while or do. As always, with

power comes responsibility as this provides a means to make code look very messy indeed. So, as a rule of thumb, try to only use continue when the alternatives are worse, and then group the continues at the top of the loop so it is easy to see the program flow at a glance, without reading the whole code.

Go now

Finally, for the completists, I had better mention that C does have a goto statement! I am making no comment as to its usefulness or legitimacy, nor shall I make any statement furthering the many holy wars surrounding it! However, as it exists, I shall cover it to the depth it deserves! goto is used by specifying a label to which the code will jump. This label must be unique within the function in which it is used. It is possible for goto to jump out of a set of braces, but never into them. It is impossible, therefore, to jump into a different function from the one you're currently in.

goto label;
printf("It never gets here!\n");
label:
printf("It continues here!\n");

Although academics and formal computing students will say 'goto is bad' (in a mantra worthy of Hare Krishna!), it is not to be avoided outright. There are some cases where a goto is actually quite good! It is worth using, like all other features, where the alternative is significantly worse! Personally, I have used goto in a couple of large-scale projects (>1 million lines of code) when jumping out of a heavily nested loop. In many places, especially time-critical code, it is quicker and easier to use a goto, and better than writing explicit exit conditions for each for loop.

for(a=0; a<100; a++)
    for(b=0; b<100; b++)
        for(c=0; c<100; c++)
            for(d=0; d<100; d++)
                if (/* some condition */)
                    goto exit_all_loops;
exit_all_loops:

This may appear unstructured, but the alternative:

for(a=0; a<100 && bExit==0; a++)

would add another 100 million tests – something that should be avoided in most instances. In other cases, however, it is difficult to justify the use of goto. Good! That's all done – I'm off for a shower! See you next time...



PROGRAMMING

XML processing with Python, Part 2: XPath and XSLT

RE-PACKAGE

XPath and XSLT are technologies for processing and converting XML documents, which are extremely easy to use with a scripting language like Python. Andreas Jung takes a closer look

The author Andreas Jung lives near Washington D.C. and works for Zope Corporation as part of the Zope core team.

In the first part of our XML discussion we looked at DOM and SAX, both of which allow you to access the structure of XML documents through an API. However, since XML is primarily intended as a central exchange format, it is equally important to be able to convert documents into other formats. In this part we will therefore take a look at XPath and XSLT. XPath is a path-based navigation and processing technology for XML documents. XSLT, on the other hand, describes rule-based XML transformations. Both techniques can turn XML files into HTML or other formats. In order to demonstrate these techniques we will again be using the XML file pythonbooks.xml that you may remember from the first part. We are going to transform it into an XHTML table using both XPath and XSLT.

Navigation with XPath XPath is a W3C standard that enables you to access the elements of XML documents (or, more precisely, their DOM tree) using a path expression. A path expression is similar to the path of a file within a filesystem, as both DOM trees and filesystems are organised hierarchically. An XPath expression references a set of nodes, a boolean value, a floating point number or a character string. A path expression is always tied to the particular context in which it is used (typically a node of a DOM tree). The most important path expressions and XPath



Figure 1: The XSLT processor combines XML documents with XSLT stylesheets to create other types of formats

functions are listed in Table 1. Listing 2 (xpath.py) shows the transformation of our little Python book database into XHTML. First we use FromXmlStream() to create a DOM tree. This is nothing new, as we already did it in part 1. The function Evaluate(xpath, context) forms the link between DOM and XPath. It requires a path expression as an argument and a DOM node as context. In our example it steps through all document nodes of the type book in one iteration. We are using the root of the DOM tree as context. As soon as we find a book node we can use a relative path expression to access the author or other elements directly, always passing the current book node as context. XPath's text() function returns a set of text nodes. XPath and DOM are not conflicting methods. On the contrary, XPath builds on DOM and simplifies the handling of XML documents. The DOM API, with its multitude of functions and attributes for different types of nodes, can often get quite confusing. The XPath API only uses the Evaluate() function, while the complexity has been transferred to the syntax of the XPath expressions.

4Suite If you use Python for XML processing you might expect the PyXML package to be sufficient for any of the simple examples given here. Unfortunately that is not the case. XSLT support in PyXML is based on an older XSLT version of 4Suite from Four Thought, which contains errors. For the XSLT part we are therefore using the current version 0.12.0a2 of 4Suite. The package is installed with python setup.py install using the familiar distutils tool. Like PyXML, 4Suite is a package



for XML processing under Python, but it covers a much larger range of functions than PyXML. Correspondingly, its API is more substantial and complex.

XSLT – transformation with rules

XSLT's approach is fundamentally different from that of XPath on its own. Here, the transformation of XML documents occurs via a number of transformation rules held in an XSLT stylesheet. A stylesheet is itself an XML document. Stylesheet rules use XPath expressions to reference nodes within an XML document. The transformation is performed by an XSLT processor, which generates the relevant output from the XML file and the stylesheet (see Figure 1). The exact syntax and semantics of XSLT are relatively complex and have already been covered in Linux Magazine a few months ago. Additional information can be found from the links in the Info box. To transform the Python book XML file we are using the transform.xslt stylesheet. It consists primarily of three rules (<xsl:template match="...">), matching pythonbooks, book, and author, title and publisher. The rule <xsl:apply-templates

Listing 2: xpath.py

01 from xml.dom.ext.reader.Sax2 import FromXmlStream
02 from xml.xpath import Evaluate
03 fp = open('pythonbooks.xml', 'r')
04 dom = FromXmlStream(fp)
05 fp.close()
06 print "<table>"
07 print "<tr>"
08 print "<th>Author(s)</th><th>Title</th><th>Publisher</th>"
09 print "</tr>"
10 for book in Evaluate('book', dom.documentElement):
11     print "<tr>"
12     for item in ['author', 'title', 'publisher']:
13         path = '%s/text()' % item
14         print '<td>%s</td>' % Evaluate(path, book)[0].nodeValue
15     print "</tr>"
16 print "</table>"

select=”...”> instructs the XSLT processor to continue with a rule for the specified element. We are using <xsl:value-of select=”.”> to access the content of the text nodes. As you can see, the XSLT processor is the central feature of any XSLT application, and 4Suite provides a separate processor class for this purpose. For XSLT processing the processor requires a reader that is able




Table 1: Important XPath expressions

Path expressions
/        The root node
.        Self node
..       Parent node
@attr    All attributes with the name "attr"
@*       All attributes
node     All elements with the name "node"
*        All elements
/node    All child elements with the name "node"
/*       All child elements

Node set and string functions
local-name()       Returns the local part of the expanded name of the node
name()             Returns the name of an element
string(obj)        Converts an object to a string
concat(s1,s2,..)   Returns the concatenation of its arguments

Node set functions
last()             Returns a number equal to the context size
position()         Returns a number equal to the context position
count()            Returns the number of selected elements
number(obj)        Converts its argument to a number
sum(node-set)      Returns the sum, for each node in the argument node-set
string-length(s)   Returns the number of characters in a string

Boolean functions
starts-with(s1,s2) Returns true if the first string starts with the second string
contains(s1,s2)    Returns true if the first string contains the second string
boolean(obj)       Converts its argument to Boolean
not(val)           Returns true if its argument is false, and false otherwise
true(), false()    Returns true; returns false

Listing 3: transform.xslt

01 <?xml version="1.0" encoding="iso-8859-1" ?>
02 <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
03 <xsl:output method="html" />
04 <xsl:template match="pythonbooks">
05 <table>
06 <tr>
07 <th>Author(s)</th><th>Title</th><th>Publisher</th>
08 </tr>
09 <xsl:apply-templates select="book">
10 <xsl:sort select="author" />
11 </xsl:apply-templates>
12 </table>
13 </xsl:template>
14 <xsl:template match="book">
15 <tr>
16 <xsl:apply-templates select="author"/>
17 <xsl:apply-templates select="title"/>
18 <xsl:apply-templates select="publisher"/>
19 </tr>
20 </xsl:template>
21 <xsl:template match="author|publisher|title">
22 <td> <xsl:value-of select="."/> </td>
23 </xsl:template>
24 </xsl:stylesheet>


to read the stylesheet as well as the file to be transformed from the InputSource. An InputSource abstracts the input, so that the reader does not need to concern itself with the source. In our example xslt.py (Listing 4) we are using a non-validating parser that is registered with the processor. The XSLT stylesheet is passed to the processor by means of the appendStylesheet() call. The processor itself is then started with run(). Output in our example is via the standard output.

Conclusion

This part shows that you can convert XML documents without changing the Python program. The Python program merely starts the transformation, which is the responsibility of the XSLT processor. In this, XPath is an integral component of XSLT. XPath is an interesting alternative to DOM or SAX in cases where you require access to parts or nodes of XML documents without wanting to go to the trouble of writing a parser. In combination with XSLT this gives the developer a very powerful tool for processing XML files.

Listing 4: xslt.py
01 import sys, urllib
02 from Ft.Xml import InputSource, Domlette
03 from Ft.Xml.Xslt import Processor
04 xml = urllib.pathname2url("pythonbooks.xml")
05 xslt = urllib.pathname2url("transform.xslt")
06 processor = Processor.Processor()
07 reader = Domlette.NonvalidatingReader
08 processor.setDocumentReader(reader)
09 isrc = InputSource.DefaultFactory.fromUri(xslt)
10 processor.appendStylesheet(isrc)
11 isrc = InputSource.DefaultFactory.fromUri(xml)
12 processor.run(isrc, outputStream=sys.stdout)

Info
The Python XML module: http://pyxml.sourceforge.net
XPath recommendation: http://www.w3.org/TR/xpath
Four Thought: http://www.fourthought.com
4Suite downloads: ftp://ftp.fourthought.com/pub/4Suite
XSLT and XPath tutorial: http://www.vbxml.com/xsl/tutorials/intro/default.asp
XSLT tutorial: http://www.zvon.org/xxl/XSLTutorial/Books/Book1/
C.A. Jones and F.L. Drake, Jr., Python & XML (O'Reilly, 2002)


BEGINNERS

Welcome to the Linux User section of Linux Magazine

Welcome to our beginners' section, where we will show you how to make the most of your Linux system.

K-tools goes eye candy crazy this month, as Stefanie Teufel looks at some of the themes and desktop ornaments that the summer madness has dragged out of KDE developers from around the world. The High Performance Liquid theme and Amor get special mentions.

This month's Out of the Box looks at checkinstall, a new utility which allows source code to be compiled and installed smoothly. It also keeps track of the install so the package can be easily removed should you wish to uninstall it. This links into the rpm database, so you can even get some of the graphical package managers involved.

Details of how to run your system like clockwork are revealed in our feature on cron, which will start tasks and services at times you set. Now you have no reason for not doing that 3am backup. Asleep, indeed!

The Internet has been surfed to find the latest, best and most useful pages on offer, so make sure you take a look at our Internet page.

There is always the danger of pushing your machine too far and overloading its resources. Luckily Desktopia discusses ProcMeter, a desktop graphical tool which watches over your system courtesy of the /proc directory.

Linux makes running multiple jobs easy and such multitasking makes full use of your system. Dr. Linux this month looks at job control, daemons and, when things go wrong, zombies.

CONTENTS

67 BEGINNERS
A knowledge base for users new to Linux.

68 Cron
The cron daemon functions as a timekeeper on your system, so you can start tasks automatically or in your absence. It's important to keep your system regular, after all.

70 Desktopia
Now you can see how much strain you are putting your machine under with ProcMeter and the delights of the /proc directory.

73 The Right Pages
Janet Roebuck takes her monthly look at some of the best and most interesting Web sites for Linux users.

74 Dr. Linux
The Doctor pays a visit to explain multitasking and job control, and warns about zombies. Understanding the administration of tasks is a fundamental requirement.

78 Out of the Box
Installing source code is now less painful with the checkinstall utility, and uninstalling the same is just as easy thanks to its use of the rpm database.

82 K-tools
The prospect of the impending summer holidays has urged everyone on to create new designs and delights for the desktop. Make your KDE desktop shine with some new themes and wallpapers.



68 Cron

17/6/02 2:56 pm

Page 68


Program scheduling with Cron

COUNT DOWN Linux users can save themselves time and effort by scheduling jobs with Cron. Juergen Jentsch checks out exactly what Cron’s capable of

for loop
The for loop is a simple counter-based loop that repeats while the runtime variable works through a given list. In bash the loop has the following generic format:

for runtime_variable in list
do
  [various instructions]
done

Our example calls the mpg123 command once only for each .mp3 file in the current directory.

$
To query the content of a variable in bash you simply prefix the variable name with the $ operator. Our example uses this technique to generate a name for each wave file it creates, from the content of the file variable and the string ".wav".


Did you realise that Linux encourages laziness? If your computer is running, it can take care of a whole bunch of tasks without any help from you. Just tell your computer what to do and let it get on with the job.

Once only

Linux provides the at command for tasks that you need to run once at a given time. Suppose you need to convert the files in your MP3 folder to wav files overnight so you can create an audio CD the next morning. You can use the following script to perform the conversion job:

#!/bin/sh
for file in *.mp3
do
  mpg123 -w "$file.wav" "$file"
done

Save the script as "mp32wav" in the directory with the MP3 files you need to convert and type chmod a+x mp32wav to make the script executable.

pingi@server:~/mp3$ at 23:00
warning: commands will be executed using /bin/sh
at> mp32wav
at> [Ctrl][D]

Assuming that you did not run out of hard disk capacity, the wav files should appear right next to your original MP3 files just in time for breakfast next morning. However, if you want to run the script at 11 p.m. in three days time (because you will be spending the next few nights hacking), you might like to try the following:

pingi@server:~/mp3$ at 23:00 + 3 days
at> mp32wav
at> [Ctrl][D]

Of course, your computer must be up and running on the day and at the time in question. If you have installed a local mail server, you will be sent a message confirming that your script has been run. The mail is always sent to the user who entered the at job.

You can use atq to list the jobs that at has scheduled. Normal users will only be able to view their own jobs, but root will be able to view a list containing all the scheduled jobs:

root@server:~# atq
2       2002-04-13 23:00 a pingi

If you change your mind in the meantime, you can use atrm job_number to delete a job:

pingi@server:~$ atrm 2

The batch command is a variant of the at command that is seldom seen on today's workstations. The command has a similar effect to at, the major difference being that the command is only launched when the system load drops below a given threshold (load average < 0.8). If you are working on a server with a heavy daytime load, the batch command will wait until your colleagues have left the office:

pingi@server:~/mp3$ batch -m
at> mp32wav
at> [Ctrl][D]
job 8 at 2002-04-13 14:32

The -m ("mail") switch, which you can also use with at, ensures you are sent mail on completion even if the command itself does not produce output on your standard output device. Again, you can use atq to view and atrm to delete any jobs scheduled using batch.
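at's relative time arithmetic can be mimicked in a few lines of Python; the dates below are made up to match the atq output above:

```python
from datetime import datetime, timedelta

now = datetime(2002, 4, 10, 9, 30)   # a made-up "now"
# "at 23:00 + 3 days": today's 23:00, pushed forward three days
run_at = now.replace(hour=23, minute=0, second=0, microsecond=0) + timedelta(days=3)
print(run_at.strftime("%Y-%m-%d %H:%M"))   # 2002-04-13 23:00
```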

Nice and regular

If you need to schedule recurring tasks, don't worry about the syntax for all those at commands. Instead, use the cron daemon, a program that specialises in taking care of those repetitive tasks. A GUI tool is probably the easiest way to create a job-list, or crontab. KDE provides kcron for this purpose (see Figure 1). Just click on Edit/New to create a new entry. Type a name in the comment text box to remind you what this entry does, and then type the command you want to launch in the Program text box. You can then use the various time buttons to



specify the dates and times on which you want to run the current job. Click on OK to return to the main window. Unfortunately File/Save does not perform as expected in some kcron versions (although there are no surprises with KDE 3.0). However, when you close the program it should not be too difficult to manually transfer the jobs shown to the cron daemon.

If clicking all those buttons seems like a waste of time to you, you can resort to the traditional method instead of using a GUI tool. Use crontab -l to output the current cron jobs; refer to Listing 1 for an example. The hash sign (#) at the beginning of a line indicates a comment – just like in a shell script. Any other lines must contain the following entries (separated by spaces): minutes, hours, days, months, day of week and the command you want to run. Use a numeric value for the weekday: 0 represents Sunday and 6 Saturday. Use an asterisk (*) as a wildcard if you do not need to specify an entry. To use several values, type a comma as a separator and do not use space characters before or after the entries. The counter in Listing 1 is thus launched on the full and half hour, every hour and every day. If you want to start from scratch, you can type crontab -r to delete the whole crontab. crontab -e calls the editor defined in your EDITOR environment variable (normally vi). Should you prefer to avoid using the standard editor, just launch your preferred editor and save the entries you need in an ASCII file. You can then use crontab filename to save your entries in the crontab.
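The field rules just described are simple enough to sketch in Python. This toy matcher handles only "*", single values and comma lists (real cron also accepts ranges and steps):

```python
def field_matches(field, value):
    """True if a crontab time field ('*', '5' or '0,30') covers value."""
    if field == "*":
        return True
    return value in {int(v) for v in field.split(",")}

def line_fires(line, minute, hour, day, month, weekday):
    """Check one (simplified) crontab line against a point in time."""
    fields = line.split(None, 5)   # minutes hours days months weekday command
    return all(field_matches(f, v)
               for f, v in zip(fields, (minute, hour, day, month, weekday)))

entry = "0,30 * * * * /home/scotty/work/Mail/onLog.pl"
print(line_fires(entry, 30, 14, 13, 4, 6))   # True: fires on the half hour
print(line_fires(entry, 15, 14, 13, 4, 6))   # False
```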

Listing 2: Anacron Control File
# /etc/anacrontab example
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# period  delay  job-identifier   command
1         5      whatis           makewhatis
7         15     week.statistics  /home/scotty/work/Mail/stats

In contrast to cron, anacron does not work with exact times; instead the first column is used to define the number of days that must elapse between runs of a given task. If the command resides in a directory defined in the PATH variable, you can simply type the name of the command; if not, you must supply the path. When you call anacron on booting your system, it first reads the timestamp file named after the job identifier in the third column to check whether a sufficient number of days have passed. This file is re-written after anacron has performed a task. To avoid launching thousands of jobs simultaneously when you start anacron (for example, after restarting a computer that has been down for a considerable period), you can enter a delay in minutes in the second column. It makes sense to start the program from your boot script using anacron -s. If you normally leave your computer running overnight, you might consider entering anacron -s in your root cron table. Of course, this assumes that you really want to repeat a job at a given interval.
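anacron's core decision – has the job's period elapsed since the date recorded in its timestamp file? – boils down to a date subtraction. A sketch with invented dates:

```python
from datetime import date

def job_is_due(period_days, last_run, today):
    """anacron's test: has the job's period elapsed since its last run?"""
    return (today - last_run).days >= period_days

# A 7-day job last run eight days ago is due; a daily job run today is not
print(job_is_due(7, date(2002, 4, 5), date(2002, 4, 13)))    # True
print(job_is_due(1, date(2002, 4, 13), date(2002, 4, 13)))   # False
```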

load average
You can use the uptime command to view the average load on your system:

pingi@server:~$ uptime
2:05pm up 8:52, 4 users, load average: 0.62, 0.35, 0.22

The command outputs the system time, the time elapsed since the last start, the number of users logged on and the average load for the last minute, the last five minutes and the last quarter of an hour.
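The load averages are easy to pick out of that line programmatically; the first value is also what batch compares against its 0.8 threshold:

```python
line = "2:05pm up 8:52, 4 users, load average: 0.62, 0.35, 0.22"
loads = [float(x) for x in line.split("load average:")[1].split(",")]
print(loads)            # [0.62, 0.35, 0.22]
print(loads[0] < 0.8)   # True: batch's default threshold would let a job start
```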

Info
anacron: http://software.linux.com/projects/anacron/

Figure 1: Main window in kcron 3.0

Almost on schedule

The cron daemon has one disadvantage: your computer must be up and running at the scheduled launch times. But anacron provides a useful solution for this issue. If this package is not part of your current distribution, refer to the Info boxout for a download source. The installation is typically performed as follows:

tar -xzf anacron-2.3.tar.gz
cd anacron-2.3
make
su
make install

anacron’s control centre is called /etc/anacrontab; each line in this configuration file (Listing 2), which is administered by root, corresponds to a single task.

Listing 1: A personalised cron table
# DO NOT EDIT THIS FILE – edit the master and reinstall.
# (/tmp/crontab.999 installed on Wed Mar 27 17:29:15 2002)
# (Cron version –– $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
# Countermail for Alexa
10 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 * * * /home/scotty/work/Mail/counter
# Online-Counter
0,30 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 * * * /home/scotty/work/Mail/onLog.pl >/tmp/file
# imail
0 1,3,5,7,9,11,13,15,17,19,21,23 * * * /home/scotty/work/Mail/imail
# This file was written by KCron. Copyright (c) 1999, Gary Meyer
# Although KCron supports most crontab formats, use care when editing.
# Note: Lines beginning with "#\" indicate a disabled task.




Desktopia

A THOUSAND WORDS

It's much easier to grasp information when it's seen rather than explained. When it comes to information about your PC, the display of data is handled by the system monitor ProcMeter. Jo Moskalewski takes a look

Ensuring your system is secure entails more than being cautious when dealing with system-wide services or relying on a firewall to surround your computer or network. A major element of keeping a system secure is inspecting the system log files at regular intervals. There is one serious drawback to this, however: it is usually a look into the past, and is often only done following some disastrous mishap. If your memory is overflowing, your Linux computer will no longer be stable, but you may not be aware of a memory problem until the system has fallen over. Early detection is therefore a must, and not just when it comes to your machine's health.

Control room

Fortunately, the Linux kernel keeps lots of relevant data readily available and up to date at all times. This information can be read out in the /proc directory, the contents of which are created virtually by the kernel (and therefore do not actually take up any disk space – not even the biggest file in it, kcore, a replica of your working memory). A cat /proc/meminfo lists a slew of information to do with the status of the working memory (see Listing 1).

Listing 1: Working memory
             total:       used:       free:
Mem:     393928704   368910336    25018368
Swap:    777719808       32768   777687040
MemTotal:      384696 kB
MemFree:        24432 kB
MemShared:          0 kB
Buffers:        17088 kB
Cached:        164664 kB
SwapCached:        32 kB
Active:         50348 kB
Inact_dirty:   128592 kB
Inact_clean:     2844 kB
Inact_target:      12 kB
HighTotal:          0 kB
HighFree:           0 kB
LowTotal:      384696 kB
LowFree:        24432 kB
SwapTotal:     759492 kB
SwapFree:      759460 kB

Now all that remains is to find the line with the information you seek, and once found it is quite easy to follow – which is not true in all cases. A cat /proc/uptime presents the user with the much greater task of interpreting a string of figures such as 10756.43 7123.55. The only thing relevant here for the period since the last reboot is the first of the two blocks of figures, and this is shown in seconds. Some tedious conversion is now necessary to get to the more meaningful 2 hours, 59 minutes and 16.43 seconds.
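The tedious /proc/uptime conversion mentioned above is two divmod calls in Python:

```python
secs = 10756.43                        # first field of /proc/uptime, in seconds
hours, rest = divmod(secs, 3600)
minutes, seconds = divmod(rest, 60)
print("%d hours, %d minutes and %.2f seconds" % (hours, minutes, seconds))
# -> 2 hours, 59 minutes and 16.43 seconds
```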

Help wanted

Rummaging around in the /proc directory is a highly systematic and reliable, although extremely time-consuming, way of searching for information. That's why there are system monitors, which constantly call up and graphically process the kernel information. One such representative is ProcMeter, which is now available in version 3.3a. A copy can be found on the coverdisc or else (together with additional information) at http://www.gedanken.demon.co.uk/procmeter3/. This piece of software informs you not only about RAM and times, but also about incoming emails, your power supply, hard disk activity and capacity, network traffic, CPU utilisation, and so on and so forth. Everything that can be recorded is displayed either in numerical – human readable – form, as a graph of the last few moments, or as a bar with the current status; all together, with or without a label. Each scrap of information can, if desired, sparkle in a range of different colour combinations. Obviously there is also a catch to all this: although everything can be adjusted by mouse on the fly, it's not possible to save your customisations. The associated configuration file must be typed by hand, which demands a bit of attention when you're under pressure. The effort does pay off though, especially since it only needs doing once.



If you want to convince yourself of the utility and the abilities of this tool, then a simple tar -xzvf procmeter3-3.3a.tgz will unpack the archive into its own, new directory. This contains the source code, but we don't have the executable program just yet. It can be moulded to fit your computer with the command make all; the result then needs to be installed with make install, for which you will briefly need root privileges.

Twinpack

The program itself comes in two main parts. As well as the actual, finely adjustable procmeter3 there is also the gprocmeter3 incarnation, which is based on the popular GTK+ toolkit, giving it a similar appearance to other GTK and GNOME applications. This becomes especially clear in the display of the menus, which in gprocmeter3 are oriented not to the manually typed configuration file but to the selected theme for the system. The first time you call up either of the two programs the effect is very sobering: only the program name and the time and date are displayed in mouse-grey (Figure 1). To add a fourth element, press the right mouse button. A menu with all displayable information in all available display modes now pops up (all in all an impressive number, which on any computer must run into three figures). If you now select an element, it will be added to the bottom of the ProcMeter window. The sequence in which the elements are displayed can be altered by selecting one of the two Move To options from the left mouse menu and assigning which items you wish to move to a new spot.

Internal

When the left mouse button is pressed, additional options can be accessed (Figure 2). Via Properties you can obtain additional information on the respective element (Figure 3). This menu entry becomes especially helpful when you want to transfer your settings into a configuration file, because then the respective internal ProcMeter designators will also be displayed. If you do not close this window, it adapts its content at the next mouse click to the current element. Another menu entry bears the title Run, which can be used to call up an external (and freely configurable) program, which varies from element to element. For example, a mouse click on a memory element in the standard configuration results in the output of the command free being displayed in an xterm.

A little work

The much-bemoaned configuration file can be found in ~/.procmeterrc. If this does not exist on your system, you can copy the skeletal default configuration from the unpacked tarball into its place and alter it accordingly. The structure of this file is simple and logical, but can quickly become very large – and therefore very confusing. This is probably one reason why the author, Andrew M Bishop, split it into individual sections. First you should show ProcMeter the path to its program files, or to be more precise, to its modules:

[library]
path=/usr/local/lib/X11/ProcMeter3/modules

These take over the actual consultation of your system, while ProcMeter merely outputs what the modules deliver. So there is a module for memory questions (meminfo.so), one for your processor (stat-cpu.so) etc. This modular structure has the advantage that it can be easily and quickly enhanced with new functions. The [library] section is followed by the [startup] section, listing the elements to be displayed when the program is started. You can find out what these are called from the instructions, which can be viewed with man procmeterrc, or else from the already described Properties dialog: it is advisable to click the structure together with the mouse in order then to type it into the configuration file.

Fig. 1: First impressions
Fig. 2: Hidden behind the left mouse button
Fig. 3: Properties explained




Xresources Ingenious system, with which one or more X application(s) can be configured at the same time. Mostly, one merely defines colour, font or geometry details in this. Newer toolkits (such as GTK+ or Qt) are based on so-called themes and no longer take any notice of the Xresources, as past Desktopias have discussed.

From the Properties window in Figure 3, for example, you can see that the corresponding module is called Sensors and that under Output it is being asked for Fan1, and thus for the rpm of the first fan. The display mode is defined in the configuration with a single letter (t for text, g for graph and b for bar display). Thus a .procmeterrc corresponding to the figure would contain the line Sensors.Fan1-t, requesting a text output of the fan rpm. Module and Output are connected by a dot, while the type of display is attached with a hyphen. You can only access the Sensors module if you have installed a current lm_sensors package or are using a kernel from version 2.4.13 on with I2C support. Out of the box, on the other hand, you can get a complete overview of your CPU utilisation displayed graphically: enter Stat-CPU.CPU-g in the required place in the [startup] section. Stat-CPU here refers to the module to be used; with CPU you are saying that you would like to consult this module about the CPU in general (each module has several items of information ready); and with the final g you are setting the output to "graph". If one line of your configuration file becomes too long, you can indicate with the symbol \ that it continues on the next line with associated entries:

[startup]
order = Stat-CPU.CPU-g\
        Processes.Processes-t\
        Memory.Mem_Free-b
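The Module.Output-mode convention (dot before the output, hyphen before the display letter) can be split mechanically – splitting on the last hyphen is what keeps a module name like Stat-CPU intact:

```python
def parse_element(spec):
    """Split a ProcMeter display spec into (module, output, mode)."""
    module_output, mode = spec.rsplit("-", 1)     # hyphen attaches the mode
    module, output = module_output.split(".", 1)  # dot joins module and output
    return module, output, mode

print(parse_element("Sensors.Fan1-t"))    # ('Sensors', 'Fan1', 't')
print(parse_element("Stat-CPU.CPU-g"))    # ('Stat-CPU', 'CPU', 'g')
```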

Personal note

In the following [resources] section you can define general options, such as colours and fonts, for all the elements. The only things which have to be included are those which you want to differ from the standard settings. The following lines are all it takes to adjust foreground and background colours:

[resources]
foreground = white
background = #445566

The colours can, as will be familiar from HTML pages, be defined either as hexadecimal RGB values or else by name. You can get a good overview with the tool xcolorsel. If on the other hand you are looking for a nicer font, then xfontsel will be glad to help.

Authorities

ProcMeter also obeys the classic X Window options, so the system monitor can be influenced on start-up via switches: -geometry -0+0 moves the window to the upper right corner. If you don't want to type in this flag every time, then entries in the classic Xresources will help you out. To do this, write the following line in the file ~/.Xdefaults:

procmeter3*geometry: -0+0

Should this file not yet exist, simply create it from new. Once the command xrdb -merge ~/.Xdefaults has been given, your entry will take effect. The GTK-free variant procmeter3, at least, can be given a background graphic in .Xdefaults instead of plain colours:

procmeter3*pane*backgroundPixmap: graphic.xpm

But that's not all: maybe you would rather have the information on the file system displayed in a separate window? No problem: knit yourself a second configuration file under a different name (such as .procmeterrc-2), and make your Xresources entries under a different designator (procmeter3-2*geometry: +0+0, for example). The latter prevents the two windows sticking over each other in a corner. Now call up ProcMeter with the command:

procmeter3 --rc=.procmeterrc-2 -name procmeter3-2

and the second settings come into effect. As you can see, ProcMeter can be configured and made highly individual. Once one has grasped the possibilities, you must also put the brakes on straight away. All too quickly, you'll find yourself configuring this superb system monitor so that it fills up a major portion of the screen, thus creating a new hurdle in the path of an easily understood status report.


Fig. 5: ProcMeter with background graphic



The best Web sites for Linux users

THE RIGHT PAGES

SuperLinux Encyclopedia
http://slencyclopedia.berlios.de/
Every subject needs an exhaustive source of information and as far as Linux is concerned, this is possibly as close as you're going to get.

Pretty Poly Editor
http://prettypoly.sourceforge.net/
PPE is a portable Open Source 3D modeller, file viewer and converter. It is targeted at 3D games development and supports OpenGL and WYSIWYG functionality.

EchelonWatch
http://www.aclu.org/echelonwatch/
This site is designed to encourage public discussion of this potential threat to civil liberties and to urge the governments of the world to protect our rights.

Why Python is best
http://www.python.org/doc/Comparisons.html
Python is often compared to other programming languages. Here's a sampling of what's been written, by people biased in various directions...

Linux Planet
http://www.linuxplanet.com/linuxplanet/
Linux Planet is an excellent online Linux magazine full of useful content, ranging from the latest news stories and opinions to tutorial guides.

Janet Roebuck offers up the latest Web sites to spark our interest in the Linux Magazine office

Linux Powered
http://www.linuxpowered.com/
A portal site for knowledge with special interest areas such as Linux Security and Networking.

User-Mode Linux Kernel
http://user-mode-linux.sourceforge.net/
User-Mode Linux is a safe, secure way of running Linux versions and Linux processes. Run buggy software, experiment with new Linux kernels or distributions, and poke around in the internals of Linux, all without risking your main Linux setup.

Happy Penguin
http://happypenguin.org/
The Linux game tome. Want the latest game? Then find out the news with a huge database of Linux games.

Gimp User Group
http://gug.sunsite.dk/
The largest collection of Gimp arts and tutorials anywhere on the Internet! Right now it contains 1,392 pictures, 220 textures and 18 Script-Fu scripts.

Linux Games
http://www.linuxgames.com/
Not quite as big as Happy Penguin but the news is always current with good insider knowledge.




Dr. Linux

EXORCISE YOUR DAEMONS

Unix systems are not for the faint-hearted, as the world of processes is swarming with zombies and daemons. Marianne Wacholz takes us on a trip into the crypts of Linux

Which program is tying up the drive?

Q: After I had looked at a CD with graphics on my Linux computer, I had the following problem: when I tried to unmount the CD drive, this error message appeared: umount: /media/cdrom: The device is busy. How can I find out which program is preventing an unmount?

mount and umount
Data devices are integrated into the Linux file tree with the root-reserved mount command. Before you can remove a mounted CD or diskette from the drive, a umount command is vital. In the file /etc/fstab the system administrator can stipulate that unprivileged users have the right to mount and unmount certain data devices such as CD-ROMs – or, more importantly, refuse them that right. The same is true for hard disk partitions, which can thus be kept out of reach under Linux.


Figure 1: Which program is holding onto the drive?

Dr. Linux
Complicated organisms, which is just what Linux systems are, have some little complaints all of their own. Dr. Linux observes the patients in Linux newsgroups, issues prescriptions here for the latest kernel problems and proposes alternative healing methods.

Dr. Linux: You will get this error message, or its graphical equivalent as shown in Figure 1, if there is an open file tying up the device which you are trying to close. You can find out which process has opened this file quite simply with the command lsof ("list open files"). This command can be used in so many ways that the associated manpage contains over 2,000 lines. It is usually found under /usr/sbin – should your search path not include this directory, you must instead call up the command with the full path included; lsof would be /usr/sbin/lsof. As an argument, give it the device name and, where applicable, the path of the file which you want to know about. In the case of a blocked CD drive, this would be the path of the corresponding device file. Without specifying some object for lsof to look at, you will find yourself presented with a list of all open files on your system, and in all but exceptional cases this will prove to be very long. If you have yet to acclimatise yourself to the way in which a Unix system describes its devices, having spent years learning those Windows drive letters, you can quickly look it up with the command mount. mount without further details lists all currently mounted drives, obviously including the CD which refuses to allow itself to be unmounted:

perle@maxi:~> mount
/dev/hda7 on / type ext3 (rw)
[...]
/dev/hdb on /media/cdrom type iso9660 (ro,nosuid,nodev,user=perle)

In the example the CD-ROM drive is mounted as slave on the primary IDE controller (/dev/hdb), and the data contained on the CD can be accessed under the directory /media/cdrom in the Linux file hierarchy. You can tell from the filesystem type iso9660 that this is a data CD.
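The interesting fields of such a mount line can be picked out with a simple split; the line below is the one from the example output:

```python
# One line of mount output, as printed in the article
line = "/dev/hdb on /media/cdrom type iso9660 (ro,nosuid,nodev,user=perle)"
device, _, mountpoint, _, fstype, options = line.split(None, 5)
print(device, mountpoint, fstype)   # /dev/hdb /media/cdrom iso9660
```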



An lsof /dev/hdb now comes up with an output as in Listing 1. To find out which tasks are accessing data on the CD, let's take a close look at the following output columns:

Listing 1: Example output from lsof
perle@maxi:~> lsof /dev/hdb
COMMAND  PID   USER   FD  TYPE  DEVICE  SIZE    NODE   NAME
gs       1252  perle  3r  REG   3,64    593581  47726  /media/cdrom/autoren.pdf

● The COMMAND column shows the accessing command, possibly abbreviated to nine letters.
● PID contains the Process Identification number, which will be required, for example, should you want to send the corresponding task to its grave with the kill command.
● In the third column, headed USER, lsof gives us the name of the user who started the task, though sometimes this will just be his or her user ID (UID).
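A quick sketch of reading those columns by name, using the example row from Listing 1 (real lsof output has variable-width and sometimes empty columns, so this naive split only works on simple rows like this one):

```python
header = "COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME"
row = "gs 1252 perle 3r REG 3,64 593581 47726 /media/cdrom/autoren.pdf"
record = dict(zip(header.split(), row.split()))
print(record["COMMAND"], record["PID"], record["USER"])   # gs 1252 perle
```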

If there are tasks preventing the unmounting of the CD which you started under your own user ID, you can shoot them down with the command kill and the respective PID as argument. Sometimes a task will not react to this and may need a firmer hand, sending progressively more threatening signals until it realises who the boss is. Starting with kill -15 <PID> will, hopefully, get the task to close in a clean way, but if this fails you will need the heavy-handed kill -9 <PID>.
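The numbers behind those two kill variants come from the standard signal table, which Python exposes (the values shown are Linux's):

```python
import signal

# kill -15 delivers SIGTERM, a polite request to shut down cleanly;
# kill -9 delivers SIGKILL, which a task can neither catch nor ignore
print(int(signal.SIGTERM), int(signal.SIGKILL))   # 15 9 on Linux
```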

If, under COMMAND, you find a program which you did not even start and want to get right to the bottom of the matter, it's worth using the classic pstree command. With this command you get a display of tasks in tree form, so that you quickly get an overview of which tasks have started which other tasks. With the option -p the pstree output also includes the PIDs (Figure 2). There are many graphical user interfaces and programs that allow you to administer tasks, which you are welcome to use once you have found one that suits your needs. However, remember that pstree will be available on all systems, even an old one, so keep a healthy respect for it.

perle@maxi:~> kill -9 1252

Figure 2: pstree displays tasks in tree form

The tasks of other users can only be shot down by root; it goes without saying that when you are equipped with root powers you must proceed with great caution. As soon as the task in question has breathed its last, there is no longer an obstacle to an unmount of the CD.

Is it running or isn’t it?

Q: When configuring utilities which are controlled by daemons (for example cron), I often find in the documentation the demand to check whether the respective daemon or the program is running. How do I ascertain this quickly and easily?

Dr. Linux: In the /var branch of the Linux directory tree you will find data which changes quickly – which is, ergo, variable. This includes the information on whether a certain daemon is running. When a daemon is started it receives (like every other task) a task ID, but one which, unlike those of "normal" tasks, is recorded in a file named name.pid in /var/run. This prevents a daemon from being started twice, and when the system prepares to power down it is immediately apparent which utilities must be shut down first – which is why you don't just switch a Linux box off. If you find an entry matching a daemon in /var/run, you can normally assume that the corresponding utility has been started (Listing 2). Obviously it may be doing nothing at all at this precise moment and just passing the time until its next appointment; that depends on its job and the respective configuration. If you need the task ID, it is best to look at the content of the file with cat – the output is limited to just one number, so that the use

Process: The operating system kernel has direct access to the resources of the computer, for example memory and computing time. If a command is invoked or a program started, the kernel loads the necessary program code into main memory. Once started, the program is referred to as a task. Each task has a unique task number (PID), which the system keeps in a task table. Tasks have no direct access to resources; they request them as required from the kernel. If the same program – the command gimp, say – is started twice, this usually involves two different tasks, although the same program is executed. The kernel allocates the necessary computing time and memory so quickly that it gives the impression that programs run simultaneously. Tasks can multiply by creating child tasks by means of duplication; when this happens they are themselves referred to as parent tasks. Parent tasks can wait for their child tasks to end or die; this does not work the other way around.

LINUX MAGAZINE

75


BEGINNERS

Daemons An abbreviation for Disk and Execution Monitor. Daemons are not an integral part of the system kernel, but programs which run in the background and make their services available to other programs or computers. Some are started when booting and remain active throughout the running time of a system; others are active only as long as their services are required.

of a pager such as less would be an extravagance:

perle@maxi:/var/run> cat cron.pid
599

The cron daemon currently running thus bears the PID 599. To convince yourself further that a program really is alive and kicking, there is the command pidof, a link to the program killall5 (under SuSE this lives in /sbin rather than in the usual user path). Give pidof the name of the program you are interested in, which in this case does not even have to be a daemon. If it is running, you will receive its PID as output; if it has been started more than once, pidof outputs all the task identification numbers.

Listing 2: The .pid files in /var/run contain the PIDs of started daemons

perle@maxi:/var/run> ls -l
total 112
-rw-r--r--  1 root root       4 Mar  6 09:26 atd.pid
-rw-r--r--  1 root root       4 Mar  6 09:26 cron.pid
-rw-r-----  1 root root       4 Mar  6 09:26 gpm.pid
-rw-r--r--  1 root root       4 Mar  6 09:26 inetd.pid
-rw-r--r--  1 root root       4 Mar  6 09:26 klogd.pid
-rw-r--r--  1 lp   lp         4 Mar  6 09:26 lpd.printer
-rw-r--r--  1 root root       4 Mar  6 09:26 nscd.pid
drwxr-x--T  2 root root    4096 Mar  6 10:18 sendmail
-rw-r--r--  1 root root      38 Mar  6 10:18 sendmail.pid
drwxr-x---  2 root dialout 4096 Mar  6 09:26 smpppd
-rw-r--r--  1 root root       4 Mar  6 09:26 sshd.pid
-rw-r--r--  1 root root       4 Mar  6 09:26 syslogd.pid
-rw-rw-r--  1 root tty     3456 Mar  6 10:29 utmp
-rw-r--r--  1 root root       4 Mar  6 09:26 xfstt.pid
[...]
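The check described above can also be scripted. The sketch below imitates it against a temporary directory instead of /var/run, so it runs without root privileges; the daemon name is a stand-in:

```shell
# Imitating the /var/run check without root: a temporary directory plays
# the part of /var/run, and a sleeping task plays the daemon (names made up)
piddir=$(mktemp -d)
sleep 60 &
echo $! > "$piddir/mydaemon.pid"    # what a daemon records when it starts
pid=$(cat "$piddir/mydaemon.pid")   # a single number - no pager required
if kill -0 "$pid" 2>/dev/null; then # test for existence without signalling
    verdict="mydaemon running with PID $pid"
else
    verdict="mydaemon not running"
fi
echo "$verdict"
kill "$pid" 2>/dev/null
rm -r "$piddir"
```

A stale pid file (left behind by a crash) would make this check report a daemon that is long gone, which is exactly why kill -0 is used to confirm the task really exists.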

If pidof says nothing after your input, you are dealing with a script (or with something pidof believes to be one, though this belief need not be correct). In this case specify the option -x too:

perle@maxi:~> /sbin/pidof -x kdeinit
1125 929 927 923 921 919 909 895 893 89

perle@maxi:~> /sbin/pidof /sbin/syslogd
337

Hollywood's nightmares in the system?

Q: When I start top in order to check that the system is running, I sometimes find one or more zombies in the system (Listing 3). What does this mean?

Dr. Linux: The film "Dawn of the Dead" defines the term zombie thus: "When there's no more room in Hell, the dead come back to Earth." Anyone who is now shuddering at his or her Linux system can sit back and relax, because there is also death in the Unix world. This applies to tasks – specifically when they stop of their own accord or are shut down. A zombie is a dead task whose exit status is still kept by the kernel in the task table, waiting for its parent task to read it. Only then can it be deleted in peace from the task table. If the parent task dies before it has read this register value, the zombie is deleted along with it.

Exit status: A register value, defined by the programmer, which indicates how a program departed this life – successfully, on account of a bug, or otherwise.

Background or foreground?

Q: I know that I can start programs in the background on a command line by putting an & after the command, but which command do I use to bring one of several background tasks back into the foreground, so that I can shut it down with Ctrl+C, for example?

Dr. Linux: Programs sent into the background from the command line, so as not to block the console, are referred to as jobs. When a job is started, the command is given, in addition to its task number, a figure in brackets: the so-called job number.

perle@maxi:~> emacs &
[1] 1650

The background tasks can be listed for each console (and each X terminal) with the command jobs:

perle@maxi:~> jobs
[1]  Running    emacs &
[2]- Running    gimp &
[3]+ Running    xtetris &

The active background programs are marked with Running. Other possibilities are Stopped for



Listing 3: top sees a zombie

1:01am up 13:29, 1 user, load average: 0.25, 0.19, 0.12
88 processes: 86 sleeping, 2 running, 1 zombie, 0 stopped
CPU states: 7.2% user, 8.0% system, 0.0% nice, 84.6% idle
Mem:  320072K av, 293952K used,  26120K free,     0K shrd, 16496K buff
Swap: 305152K av,   3276K used, 301876K free               140224K cached

  PID USER   PRI NI  SIZE  RSS SHARE STAT %CPU %MEM  TIME COMMAND
 1719 perle   19  0  9224 8860  7164 R     8.0  2.7  0:25 kdeinit
 1286 root    16  0 22264  13M  1720 S     5.7  4.3  9:40 X
 4808 perle   14  0  1096 1096   792 R     1.3  0.3  0:09 top
    1 root     9  0   208  208   176 S     0.0  0.0  0:04 init
[...]
 1445 root     9  0     0    0     0 Z     0.0  0.0  0:00 cron
[...]
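A harmless, short-lived zombie like the one in Listing 3 can be manufactured for study. In this sketch the child true dies immediately, but its parent (the shell that replaced itself with sleep via exec) never reads the exit status, so for a few seconds ps shows the dead task with state Z:

```shell
# Manufacture a short-lived zombie: true exits at once, but its parent
# (the shell replaced by sleep via exec) never calls wait to read its
# exit status, so the dead task lingers in the task table
sh -c 'true & exec sleep 3' &
sleep 1                                  # give true time to die
state=$(ps -o stat= --ppid $! | head -n 1)
echo "child state: $state"               # Z, just as in top's STAT column
wait
```

Once the parent sleep exits, the zombie is reaped and vanishes from the task table on its own.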

interrupted programs, Terminated or Done. You will also come across minus and plus signs: + marks the current job (usually the one started most recently), – the previous one. These signs can be passed, preceded by a percentage sign (%), as arguments to the command fg, which brings the background process concerned into the foreground. Other possible arguments include:

● %n, where n must be replaced by the job number, i.e. the figure placed in brackets.

● %e, which brings the job whose command line commences with the character string e into the foreground. If more than one background command fits the character string, you will get the error message:

bash: fg: ambiguous job spec: e

● %?s, which brings forward the background process whose command line contains the character string s (or complains about an ambiguous job specification).

● %% or %+, which both mean the current job.

● %–, which stands for the previous job.

fg is not the only command which helps with job administration in bash. The following commands also accept the arguments just described:

● bg sends a job into the background. Before bg can get hold of a foreground process that is blocking the shell, the process must first be stopped temporarily with Ctrl+Z.

● kill ends the job specified as argument. If the pattern matches several tasks, however, there will be an error message.

● jobs: the listing of jobs can be limited to certain tasks with the aforementioned arguments. In practice this looks something like this:

perle@maxi:~> jobs
[1]  Running    emacs &
[2]- Running    gimp &
[3]+ Running    xtetris &
perle@maxi:~> kill %1
perle@maxi:~> jobs
[1]  Terminated emacs
[2]- Running    gimp &
[3]+ Running    xtetris &

Before you shoot down file-processing programs such as Emacs, you should pause for a moment: with kill you also consign any files you have not saved to oblivion. If instead you leave the console from which the program was started with exit, the editor will be kept open for you and you can still save your data. If there are still stopped jobs waiting in the terminal to be closed with exit, you will be gently reminded of the fact: There are stopped jobs. Only after a second exit will the console take its leave of you.
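The same job administration can be tried out in a script, with one caveat: job control is only switched on by default in interactive shells, so a script must enable it with set -m before the %-arguments work. A bash sketch:

```shell
# Job control in a bash script: set -m enables it (interactive shells
# have it on by default), after which the %-arguments behave as described
set -m
sleep 60 &
sleep 70 &
joblist=$(jobs)          # two jobs, [1]- and [2]+, both marked Running
kill %1                  # shoot down job 1 via its job number
kill %%                  # %% (or %+) means the current job, here job 2
wait 2>/dev/null
remaining=$(jobs -r)     # -r lists only jobs that are still running
echo "$joblist"
echo "still running: ${remaining:-none}"
```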


Out of the box

SAY HELLO WAVE GOODBYE

The latest software under Linux is often only available in the form of a tar archive. As Christian Perle explains, installing and removing tarballs can be made much simpler with checkinstall.

Source text: The form of software which is legible and alterable by humans; compiling it with a compiler turns it into an executable program.

Library: A library contains a collection of useful C functions for specific purposes. So, for example, there is libm, which provides mathematical functions, or libXt, containing functions for programming the X Window System. Libraries are often used jointly ("shared") by a number of programs.

The three-step process ./configure; make; make install will be familiar to anyone who has ever installed a program from its source text. But very few packages support a clean uninstallation of the files copied into the file system by make install. This is where checkinstall, by Felipe Eduardo Sanchez Diaz Duran, comes in. Using the installwatch library, the program monitors all write actions performed by make install (or a corresponding installation command) and records a list of the new files and directories.

Out of the box There are thousands of tools and utilities for Linux. “Out of the box” takes the pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.

Chicken or egg?

If you have installed the GNU C compiler by hand, along with the development package necessary for compilation, glibc-dev (depending on the distribution it might be called something slightly different), you now only need to take a virtual journey to Mexico to get hold of the sources for checkinstall, from http://proyectos.glo.org.mx/checkinstall/. It sounds a bit mad, but to install checkinstall (the program) you do need checkinstall (the command). Naturally there's a trick to this: the program is first installed with the usual command make install. After that, the new command checkinstall is available, with which you repeat the installation:

Trickery

tar xzf checkinstall-1.5.1.tgz
cd checkinstall-1.5.1
make
su (enter root password)
make install
checkinstall
exit

Before the new tool is installed as a package, checkinstall wants to know which package manager is normally used on the system. On the system in Figure 1, Debian GNU/Linux is in use, so the option d would be your choice. In the following menu checkinstall shows various data fields for the package, most of which are already filled in with meaningful values. You can change the content of a field via the corresponding numbers. Finally checkinstall tells you how to get rid of the package just installed; in the example, the command reads dpkg -r checkinstall.

Figure 1: Selecting the package type

However, we have not installed checkinstall just to remove it again immediately; that treatment should be reserved for future software packages which prove unstable or of too little use. But how does checkinstall actually work? The library installwatch replaces all the file functions of the standard C library with its own. Checkinstall uses the preload mechanism to give the functions from installwatch precedence over the "true" functions. The substituted functions make a note of all write actions and hand the resulting file list over to checkinstall, which in turn deploys the selected package manager to create a Slackware, Red Hat or Debian package from the list. Finally, the freshly constructed package is installed with the distribution's own package manager.


BEGINNERS

Baking Red Hats

How does this look in practice? As an example, let's pick the installation of the fractal generator XaoS, introduced in a previous issue of "Out of the box". Once its source text has been unpacked with tar -xzvf xaos3.0.tar.gz, the following steps are taken:

cd XaoS-3.0
./configure
make
su (enter root password)
checkinstall

After the usual steps of ./configure and make, instead of make install you invoke the command checkinstall. To the question about the package type, users of rpm-based distributions select r, use 6 to change the group to Applications/Graphics and add, with 1, a brief description, for example realtime fractal zooming for X and console. Once installation is complete, uninstallation can be done with rpm -e XaoS-3.0-1 (Figure 2). The program also files the created rpm package away for later installations, in the directory /usr/src/rpm/RPMS/i386, so after removal it is possible at any time to install the package anew with rpm -i XaoS-3.0-1.i386.rpm. A similar thing applies to Debian packages, except that they are filed not in the /usr/src hierarchy but in the current source directory.

Figure 2: Installation as rpm package

Standards and specials

Normally one would use only a single package format on a Linux system, so it soon becomes a chore to keep answering checkinstall's query about the package type. Fortunately, the program uses a file of default settings which you can adapt as you wish. You will find this configuration file at /usr/local/lib/checkinstall/checkinstallrc. It is well commented and, besides the selection of the package type (INSTYPE), also lets you define a directory in which to save the created packages (PAK_DIR) and additional options for the various package managers (RPM_FLAGS, DPKG_FLAGS). Checkinstall also understands a range of command line options when invoked. One which has proven very useful is -si, which allows it to supervise interactive installation mechanisms that require additional user input during installation. The complete documentation on checkinstall can be found, by the way, at /usr/doc/checkinstall-1.5.1/README. With many distributions it is good form for the documentation of a package to be installed in /usr/doc/Packagename. Checkinstall tries to keep to this: when invoked, it searches the source directory for a directory named doc-pak. If it exists, its content is copied during the checkinstall run to /usr/doc/Packagename – and obviously removed again on uninstallation.

Limits

Obviously, every tool has its limits. Checkinstall cannot monitor the file accesses of a statically linked installation program, because the preload mechanism does not function there. The same restriction applies to programs which, after starting, run with the rights of another user; such programs are barred from preloading on security grounds. For the future, the author is planning better menu control via dialog boxes, a manpage for quick reference to the options, and cryptographic signing of the created packages. There are also great expectations of an upcoming version 2.0.

Package manager: A management program for the smooth installation and uninstallation of program packages. Common package managers are rpm (used by Red Hat, but also SuSE, Mandrake and Caldera) and the Debian package manager dpkg.

Preload: If the name of a library file is assigned to the environment variable LD_PRELOAD, then all the symbols of this library take precedence over those of the libraries loaded later. So, for example, the C library function printf (formatted output) can be replaced by a version of your own.

Static-linked: A statically linked program contains all the necessary functions from the libraries used and no longer needs to read them in at run time. The advantage is independence from the installed libraries; the drawback is a considerably larger program file.

Figure 3 shows the rpm database being queried for control purposes. With rpm -qi XaoS (the version number can be left out for installed packages) the package manager displays the information about the XaoS package that we defined or that was generated automatically.

Info
XaoS: http://www.gnu.org/software/xaos/xaos.html
Patricia Jung, "Installation without tears", Linux Magazine, Issue 18, pages 68-73.

Figure 3: rpm displays the installed package


K-tools: Desktop add-ons

THAT SUMMER FEELING Stefanie Teufel explains how to give your desktop a whole new lease of life with a range of KDE add-ons

By this point in the year it's not just our wardrobes that are being cleared of winter clutter; it seems that developers have also been hard at work crafting new designs for the new season. That's the only explanation for the glut of new themes, wallpapers, screensavers and other eye candy now being released. In this issue of K-tools we present the most colourful of the new products.

Thematic

K-tools: In this column we present tools, month by month, which have proven especially useful when working under KDE, which solve a problem that is otherwise all too often ignored, or which are just some of the nicer things in life that – once discovered – you wouldn't want to do without.

Themes: The famous, or infamous, themes will already be familiar to many from the Windows world. These are background images, icons, sound files and so on that harmonise with each other, which some well-meaning person with varying degrees of good taste has compiled for your desktop.

Cables, grasses, illuminated buildings or geese: Troy Dawson uses them all to adorn his desktop. Anyone wanting to emulate him should install the eight new themes which are ready for download on his homepage at http://home.fnal.gov/~dawson/themes/. Handily enough, they come in the form of an rpm package, so root merely has to perform the installation with the usual rpm command, rpm -i kdethemesfermi-1.i386.rpm, and you can access the new kaleidoscope of colour. Anyone who has got their eye on just one very special theme – such as the Fermi-Feynman theme from the heading image – can also get the corresponding .ktheme file individually at ftp://ftp.kde.com/Art_Music/Themes/Fermilab_Themes/. To beautify your desktop with it, open the KDE control centre and there select the item Look & Feel/Theme management. Now just click on the Add button, select the new theme, and straight away it's in the theme manager. Those who no longer like the pattern, by the way, can get rid of it just as easily with a click of the Remove button.

Figure 1: Which themes do you want?

Changing the wallpaper

Anyone who wants to repaper their desktop instead of their home will be delighted with the zak-o-rama package – a wallpaper set for 1024x768 resolution. Like the themes from Fermilab, the wallpapers are also available singly (or as a 3.9MB tarball) at ftp://ftp.kde.com/Art_Music/Themes/Wallpapers/zak-o-rama/. The individual jpg files are unpacked or copied into the KDE wallpaper folder. Where this is located depends on the directory in which you or your distribution have installed KDE; under Red Hat, for example, the directory /usr/share/wallpapers is the destination of your copying action. After that you can select the background images you've just installed in the usual manner (right-click on the desktop and in the pop-up menu click on the item Configure Desktop). Then your desktop will look, for example, like that in Figure 2.

Figure 2: Change of wallpaper for the desktop background

Stylish

Transparent menu bars, mouseover effects in the menu bars and an especially rich colour palette are offered by Mosfet's style scheme High Performance Liquid (Figure 3), which you will find at ftp://ftp.kde.com/Art_Music/Themes/High_Performance_Liquid/. When doing the installation you really only need to ensure that make -f Makefile.cvs is run before the old familiar Linux three-step ./configure; make; make install. To fully appreciate the Liquid schemes, you should then select High Performance Liquid in the control centre under Appearances, under each of the items Colours, Window decorations and Style. The transparency can be adapted to your own personal preferences by means of the newly added item Appearance/Translucent Menus.

Figure 3: Counter fatigue with pastel colours on the desktop

Tarball: The program tar is a common archiving tool under Unix. A collection of files packed together with it into one file is usually referred to as a tarball. It bears the file ending .tar.gz or .tgz when it has first been combined using tar and then compressed with the program gzip.

jpg: A widely used graphics format with lossy compression, adjustable via a compression factor – which makes it especially interesting for Web designers. In exchange for very high compression one also gets compression artefacts: the more the image is compressed in size, the less accurately it is reproduced.

Saving

In addition to all the new wallpapers and themes, the screensaver designers didn't want to be left on the sidelines and have brought out three new screensavers. Saver numero uno, from Neil Stevens, can be found on the author's homepage at http://freekde.org/neil/washuu/ and populates your screen in work-free periods with small crabs, as in Figure 5. With the second screensaver you can start the countdown to your summer holidays: KountDown, available at http://w1.911.telia.com/~u91117365/kountdown.html, counts down the time to a user-defined deadline. The settings are made in the setup dialog box shown in Figure 6; in addition to the deadline you can also adapt the colours, font and background of the graphical countdown as you desire.

Figure 5: A crab rarely comes alone
Figure 6: What date should it be?

House guests

There's also another new program around that will bring you some new living companions: AMOR MD 2, a variant of Amor, which may already be on your computer. If you decide on the MD2 edition of the Amusing Misuse of Resources, in future, instead of penguins, there will be Hangman Quake-2 characters dancing on the frame of your window, keeping you and the resources of your computer from working. The latest version of this gadget can be found at http://www.2robots.com/amorMD/.

Figure 4: Worrying playmates

Info
AMOR: http://homepage.powerup.com.au/~mjones/amor/



COMMUNITY

The monthly BSD column

FREE WORLD

Welcome to our monthly Free World column, where we explore what's happening in the world of Free software beyond the GNU/Linux horizons. This month Janet Roebuck opens the door on Java and the office.

Java nearer

HotSpot from Sun Microsystems has now been compiled on FreeBSD by Bill Huey. At the moment both demo applets and JFC (Java Foundation Classes) run but HotSpot for FreeBSD is not yet complete. For more information see the freebsd-java mailing list.

Office work

Martin Blapp has announced that, with help from Tim Tretyak, Alexander Kabaev and Carlos FA Paniago, OpenOffice.org 1.0 now works on the STABLE version of FreeBSD.

Red Hat say no to BSD licence

Red Hat has recently caused a storm over licensing issues. Having previously stated that it did not support software patents, Red Hat has decided to start building a software patent portfolio. The first two patents, both by Ingo Molnar, cover Embedded Protocol Objects and a method and apparatus for atomic file lookup. Red Hat claims these patents have been adopted as defensive measures, to allow cross-licensing rather than court action, and has publicly stated that it will not enforce them where they are used in Open Source software – here meaning software licensed to allow modification and redistribution as long as the modified versions are themselves Open Source. The BSD licence is not covered by this agreement, however, as BSD-licensed software may be used in and built into proprietary software. Approved licences are the GNU General Public License v2.0, IBM Public License v1.0, Common Public License v0.5 and the Q Public License v1.0. More information can be found at http://www.redhat.com/legal/patent_policy.html

Remember the football?

Last month's FIFA World Cup was a highly popular event, with millions of interested viewers watching and many more eager to find out the latest news. The World Cup Web site was in understandably high demand, and this heavy traffic was handled with ease by none other than FreeBSD.

Yet another hardware port

The ever-expanding influence of NetBSD has made another leap forward: this time it has been ported to an Artesyn PM/PPC board, a PowerPC-based PMC (PCI mezzanine card) module. This is designed to allow communications equipment manufacturers to add computing functionality to a baseboard, such as a T1/E1 or ATM interface board, and to provide the localised horsepower necessary for applications such as protocol processing, data filtering or I/O management. For more information on the subject visit http://www.artesyncp.com/html/pmppc.html

Smaller and safer

A new project is underway to create a secure, hardened OS based on the POSIX.1e draft – a project which has taken on the affectionate moniker of MicroBSD. The idea behind MicroBSD is a hardened, secure POSIX.1e system with a small – hence Micro – footprint. The compact system is designed to work on x86 hardware now, with Alpha, Sun and PPC support to follow, using as little hard disk space as possible while still providing fully functional systems. Based on a complete server model, builds are currently available for firewalls, IDS and VPN, with SMTP, WWW, DNS and FTP combinations to be developed over time. The system's features address all aspects of security: these builds are designed to take the work out of building secured network



environments with specific features unique to each one. In a nutshell, MicroBSD is a secured manageable system build designed to undertake specific tasks. More information can be gleaned from http://www.microbsd.net/

OpenSSH goes up a level

OpenSSH 3.2.3 has been released. This is a 100 per cent complete implementation of the SSH version 1.3, 1.5 and 2.0 protocols, and features both server and client support for sftp. The latest build is available for download from http://www.openssh.com/. The OpenSSH suite includes the ssh program, which replaces rlogin and telnet; scp, which replaces rcp; and sftp, which replaces ftp. Also included are sshd, the server side of the package, and the other basic utilities ssh-add, ssh-agent, ssh-keygen and sftp-server.

Don't mess with Stephanie

OpenBSD 3.1 has a new version of the security hardening package Stephanie. The new package is modular, so you can choose which components to install. Modules included are: TPE (Trusted Path Execution), MD5 binary integrity verification, an in-kernel ACL mechanism, real-time logging of execve() calls and ld.so protection (environment stripping). Stephanie is available from http://innu.org/~brian/Stephanie.

Seeing the light

Solaris 9, from Sun Microsystems, has increased its security features by using code from OpenSSH. According to Darren Moffat, the engineer responsible for Solaris: "All we added was the following:

1. BSM audit code (which has now been donated back to OpenSSH).
2. L10N/I18N of messages that get sent to the user.
3. Two standalone proxy commands, one for SOCKS5 and one for HTTP.
4. The code was also linted.

We did change the vendor part of the version string, but this is perfectly in spec. The reason for this being we don't want to identify it as OpenSSH, because it isn't 100 per cent OpenSSH code and also because the version of OpenSSH we started with didn't implement re-keying. There will be an effort to get back in sync with OpenSSH in a future revision of Solaris – we will assess at that time if it is appropriate to keep Sun_SSH as the vendor component or revert to OpenSSH. By having a different vendor string it helps in identifying any bugs because it is obvious who to contact."

Sandboxed

The OpenBSD developers have now developed Systrace. This makes system call policies enforceable, so untrusted binaries can be safely sandboxed and run without risk of harm to the rest of the system. As more binary-only software is released onto the market this will become increasingly useful, and may help calm some of the security paranoia that currently surrounds binary-only software.

Package information

At the beginning of June the number of packages in the NetBSD Package Collection stood at an impressive 2,926, while at the same time the FreeBSD ports broke the 7,000 ports barrier. The NetBSD Package Collection can be found at http://www.netbsd.org/Documentation/software/packages.html#searching_pkgsrc. The most recent new packages are p5-WWW-Amazon-Wishlist (get details from your Amazon wishlist), rox-system (a system monitor), xfm (the X File Manager), yup (print multiple pages on one sheet), su2 (enhanced su command), xcin (xim server in Chinese), libtabe (Chinese language processing), nbitools (imake tools), rox-wallpaper (backdrop setting), rox-memo (reminder pad), roxedit (text editor), rox-lib (library functions for ROX) and sylpheed-claws (email and news client). The most recent updates are mozilla, awka, lukemftp, ttf2pt1, ettercap, audit-packages, doc++, sylpheed, xservers, tits, stow, p5-XML-NamespaceSupport, p5-HTML-FillInForm, p5-Test-Harness, p5-Net, pkglint, ethereal, rox-wrappers, rox-session, rox, rox-base, apache and portsentry. The gnus and mutt-devel packages have been retired.

Signing off

That's about as much of the Free World as we have time to visit this month. Until the next issue, if you have suggestions or feedback then don't be coy – get in touch with us here at Linux Magazine.


The monthly GNU column

BRAVE GNU WORLD

Welcome to another issue of Georg CF Greve's Brave GNU World. This month we'll look at some more ways to creatively free yourself of free time.

Rocks'n'Diamonds in Boulder Dash mode


Rocks’n’Diamonds in Emerald Mine mode

Rocks'n'Diamonds

Rocks'n'Diamonds, by Holger Schemel, is a game which bears a striking resemblance to classics like Boulder Dash (Commodore 64), Emerald Mine (Amiga) and Supaplex (PC). This shouldn't be overly surprising, since it was written by a great fan of all these games. For the unenlightened: the point of this classic 2D arcade game is to collect diamonds without suffering a premature end. To achieve this, you can (among other things) move rocks, drop bombs and fool monsters. The game was written in C with an eye on portability. It runs on pretty much any flavour of Unix – given that X11 is supported – and also under MacOS X, DOS and Windows. With smooth scrolling of levels, joystick support and a freely customisable keyboard binding, it can be tailored to the preferences and circumstances of its users. On top of the quite original-looking graphics, the overall feel is enhanced by sound effects and music on all operating systems that support them. Another point in the game's favour is its networking support under Unix, which allows up to four players to take on levels together. It also has a local multi-player mode that lets players solve levels as a team on a single machine. To make sure there will be no boredom, the game has literally thousands of levels that need solving, and once you've finished those, there's still the level editor. Holger Schemel publishes it as Free Software under the GNU General Public License. Although the game is already very mature,

Rocks’n’Diamonds in Supaplex mode

Editor mode for Rocks’n’Diamonds

development is still active. According to Holger, emulating the different game engines of Boulder Dash, Emerald Mine, Supaplex and Sokoban is still posing some problems since it is not yet good enough to play all their original levels. If you would like to help, you’re welcome to do so. Also contributions in terms of graphics, help on porting it to another platform or new levels are all very welcome. Holger himself experienced some of the interesting aspects of such a co-operation in the mid 90s when the German nuclear research centre Julich told him his game was crashing one of their AIX servers. Since he wasn’t able to reproduce that bug, he was provided with Telnet access to the affected system and tracked the problem down to a faulty X11system call. He was thus able to fix the bug while hoping he did not have too much of a negative impact on some nuclear gear. Fortunately that is not very likely, but this little story nicely shows how the connecting and cooperative spirit of Free Software can sometimes bring you to interesting places.

Mirror Magic

Like the last game, Mirror Magic was also written by Holger Schemel. By way of context, the game itself was written in 1988 and distributed commercially, as proprietary software, under the name Mindbender for the Amiga. Holger then ported it to Unix around 1994 and published it under the GNU General Public License as Free Software.

The goal of the game is to guide a laser beam out of its emitter and into a detector. Given a couple of player-adjustable mirrors this could be easy, but it is made increasingly difficult by all kinds of obstacles that can either be circumvented or destroyed by laser power. In some positions the mirrors will cause the beam to feed back into the emitter, causing the laser to overheat and eventually explode – a rather undesirable outcome. Simple principles can often lead to a lot of fun, and Mirror Magic is no exception to this rule.

Just like Rocks’n’Diamonds, Mirror Magic also provides nice graphics, sound effects and music. In fact both games have a suspiciously similar look, which is no coincidence, since Rocks’n’Diamonds is based on the engine of Mirror Magic. Their relationship goes as far as the version number, which – at the time this column was written – is 2.0.1, released on March 19th 2002 for both games.

Mirror Magic’s level editor

T.E.G.

Many things that would raise complaints in real life can be enjoyed freely in virtual space. Achieving world domination is one of them. T.E.G. (“Tenes Empanadas Graciela”) is a clone of the well-known game Risk and was started by Ricardo Quesada in 1996. The game concept of Risk shouldn’t need much explanation, but in case some readers are not yet familiar with it: Risk is a board game in which the players compete with their armies for control of certain regions or the whole world. Winning depends on tactical ability and some luck rolling the dice.

Although the project was declared dead many times during its development, it has been under continuous development since early 2000. It currently contains three maps, is capable of network play and has translations into Spanish, French, German and Polish. Further plans focus on different rule systems, better maps, more intelligent robots and a meta-server. Help with these tasks from interested graphics designers or developers is certainly welcome. T.E.G. was written in C using the GTK+/GNOME libraries and is published under the GNU General Public License.

Conquer the world the T.E.G. way

Different map with playing cards for T.E.G.

J-TEG

Should the choice of C and GTK+ not suit your individual taste, you could try J-TEG by Jef De Geeter and Yves Vandewoude. It is a Java implementation of TEG, although the codebase and development are entirely independent. This project is also published under the GNU General Public License and, since it uses the same networking protocol as TEG, both games can communicate with each other. In terms of translations, J-TEG currently offers Dutch, French, German and Italian.

Thanks to using Java, J-TEG should be able to run on almost all platforms supporting Java 1.3 or higher. But of course this also means it inherits the Java-related problems. It would be good if Sun showed more interest in making Java a fully open language and in supporting Free reference implementations.

Connecting for network play with JTEG

Language choice in JTEG

GNU Chess

GNU Chess is among the oldest projects of the GNU system; its development began way back in 1984. It is still maintained and developed today, and so it should also find its place in the Brave GNU World. The game of Chess itself shouldn’t need any explanation. Even non-players often know the rules, and many people first came into contact with it at school, when taking numbers to the nth power was explained with a chessboard and grains of rice.

Given the origin and age of the project, the choice of the GNU General Public License and C as the programming language can hardly surprise anyone. The most active current developers of GNU Chess are Simon Waters and Lukas Geyer. Stuart Cracraft, who maintained the project for many years, still helps them with advice and occasional replies to bug reports, though he is slowly pulling out of GNU Chess. Kong-Sian should also be mentioned, since he contributed the major part of the GNU Chess version 5 codebase.

Simon sees the focus of current development as maintaining and further expanding the program’s high portability, and implementing an end-game database and an analysis mode. The analysis mode in particular is something he considers important, since in his experience complex programs can profit enormously from such a mode. Along with the already finished code clean-up, the analysis mode should also help further increase the playing strength of GNU Chess.

Even if there are many gratis Chess programs, some of which you can even get the source code for, GNU Chess seems to be very popular with maintainers of Web sites and authors of graphical chess programs who need a Chess engine that gives them the freedom to port and integrate it easily and efficiently. In Simon’s experience, the freedom offered by GNU Chess is a major advantage that is very much valued, as he discovered from the large number of patches sent in. If you just want to play GNU Chess, you can of course do so, but a graphical front-end would probably be useful. The best known front-end is probably XBoard by Tim Mann, which is published under the GNU GPL.

SCID analysis

Scid

Those who know Chess from playing will certainly know the value of a good Chess database – at least once you reach a certain level of skill. Scid (“Shane’s Chess Information Database”) is such a database, developed by Shane Hudson under the GNU General Public License. Games can be easily and quickly entered into Scid, and the database can then be searched with various search parameters. Since the usefulness of any database is largely determined by how easily it can be maintained, this part of the functionality has been given a lot of attention. The possibility to train your own playing strength was also important to Shane, and with the help of a WinBoard-compatible Chess engine you can even use it for analysing games.

In an area dominated by proprietary and predominantly expensive programs, Scid not only supports Windows but also runs under Unix, and has – according to its author – a much easier, cleaner interface. Of course the size of the accompanying game database is also an important factor for the usefulness of such a project. Some proprietary programs have more than 1,000,000 games in their databases. From the Scid Web site, you can download a high-quality database with over 500,000 games at master level.

The development of Scid began in 1999 and today it is clearly a stable project, with translations into 12 languages. As programming languages, Shane used C++ with Tcl/Tk for the graphical user interface. Currently the most important tasks are maintenance of the help pages and the creation of a tutorial for new users. So if you like spending time pondering over Chess, you should risk a look at Scid.

Scid XBoard front-end

SCID database

GNU oSIP Library

RFC2543 describes the Session Initiation Protocol (SIP), a protocol to initiate, modify and terminate multimedia sessions. SIP was invented as a lightweight replacement for H323 in order to allow, in particular, hardware and software Internet telephones. Among other things it allows for proxies as gateways between networks, and registrars to locate dynamic users. The protocol bears a – deliberate – resemblance to the mail and HTTP protocols, and just as it is possible today to mail me via the mailto:greve@gnu.org URL, SIP will one day make it possible to call me via the sip:greve@gnu.org URL.

Given that more and more companies are shifting from H323 to SIP, and given that release five of the UMTS protocol is based on SIP, it is becoming increasingly important to implement this protocol freely. The GNU oSIP (“Omnibus SIP”) library by Aymeric Moizard is one such Free implementation under the GNU Lesser General Public License and has recently become part of the GNU Project. oSIP was written in C, deliberately limiting its dependencies to libc6 so that it may be used on as many systems as possible. This allows the use of oSIP in embedded devices and creates the foundation for mobile Internet phones based on Free Software.

The major advantages of oSIP compared to proprietary projects are that it is very small, flexible and Free. To Aymeric’s knowledge, there is also no other Free Software SIP C stack comparable to oSIP. It is quite possible that SIP-based Internet telephony will completely replace the current telephone system with its well-known players. If one combines the social and economic importance of communication by phone with the tendency of proprietary software to create monopolies, it immediately becomes apparent that communication needs to be possible with Free Software in order to help prevent a global monopoly on telecommunication. As such it is easy to understand why Aymeric’s oSIP is a seminal contribution to the GNUCOMM project.

libferris

Like the previous project, the Ferris library is also something that most “normal users” will never come into direct contact with. But even if its features are only immediately useful to developers, I believe it’s useful to have a certain understanding of what’s happening “behind the curtains” – even for pure users.

libferris is a Virtual File System (VFS) running in user-space. Its function is to provide transparent, easy and consistent access to many different sources of data for programs – and therefore for users. libferris allows users to make databases, relational databases, XML, mailboxes, FTP accounts, sockets, compressed and rpm archives, and SSH2 servers available as transparent directory structures. It also allows the direct extraction of certain information out of different file formats, such as ID3, MPEG2 and all image formats supported by the Imlib2 or ImageMagick libraries. Users gain the advantage that it becomes irrelevant where, or in which format, data is stored, as all such details are handled by libferris. Developers no longer have to worry about supporting dozens of file formats and transport layers – all they need to write is a binding to libferris.

Ben Martin began working on libferris in April/May 2001 and released the first version in June 2001. As the programming language for the project Ben chose C++, since he wanted to make extensive use of the C++ Standard Template Library (STL), which is also the reason why he decided against expanding gnome-vfs for his purposes. The object and stream orientation of libferris makes extending it fairly easy and enables users to write their own modules to make other sources or formats accessible through libferris. Ben is seeking help in the form of modules that allow the inclusion of different protocols, or that extract information out of previously unsupported file formats. But, as he pointed out, he also wouldn’t mind being provided with some nice, fast hardware.

Until the next time

That’s enough Brave GNU World for this month. As in every issue, I’d like to encourage you to send mail with ideas, feedback, comments and interesting projects to the usual address.

Info

Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
Homepage of the GNU Project: http://www.gnu.org/
Homepage of Georg’s Brave GNU World: http://brave-gnu-world.org
Rocks’n’Diamonds homepage: http://www.artsoft.org/rocksndiamonds/
Mirror Magic homepage: http://www.artsoft.org/mirrormagic/
T.E.G. homepage: http://teg.sourceforge.net
J-TEG homepage: http://jteg.sourceforge.net
GNU Chess homepage: http://www.gnu.org/software/chess
XBoard and WinBoard homepage: http://www.tim-mann.org/xboard.html
SCID homepage: http://scid.sourceforge.net
RFC2543: http://www.ietf.org/rfc/rfc2543.txt
GNU oSIP library homepage: http://www.gnu.org/software/osip/
GNUCOMM homepage: http://www.gnu.org/software/gnucomm/
libferris homepage: http://witme.sourceforge.net/libferris.web


