
COMMENT

General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
www.linux-magazine.co.uk
Subscriptions: subs@linux-magazine.co.uk
Email Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk

Editor

John Southern jsouthern@linux-magazine.co.uk

CD Editor

Richard Smedley rsmedley@linux-magazine.co.uk

Contributors

Alison Davis, Dean Wilson, Colin Murphy, Alison Raouf, Richard Smedley, Kim Hawtin, Richard Ibbotson

International Editors

Harald Milz: hmilz@linux-magazin.de
Hans-Georg Esser: hgesser@linux-user.de
Ulrich Wolf: uwolf@linux-magazin.de

International Contributors

Simon Budig, Mirko Dölle, Björn Ganslandt, Georg Greve, Jo Moskalewski, Christian Perle, Frank Haubenschild, Carsten Zerbst, Tim Schürmann, Stefanie Teufel, Bernhard Bablok, Winfried Trümper, Fionn Behrens, Lars Martin, Michael Engel, Andreas Grytz, Patricia Jung, Karsten Günther, Christian Wagenknecht

Design

Renate Ettenberger vero-design, Tym Leckey

Production

Bernadette Taylor, Stefanie Huber

Operations Manager

Pam Shore

Advertising

01625 855169
Carl Jackson, Sales Manager: cjackson@linux-magazine.co.uk
Linda Henry, Account Manager: lhenry@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt: Osmund@Ohm-Schmidt.de

Publishing

Publishing Director
Robin Wilkinson: rwilkinson@linux-magazine.co.uk
Subscriptions and back issues: 01625 850565
Annual Subscription Rate (12 issues): UK £44.91, Europe (inc Eire) £73.88, Rest of the World £85.52. Back issues (UK): £6.25

Distributors

COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE

Print

R. Oldenbourg

Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England. Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example letters, e-mails, faxes, photographs, articles and drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing.

INTRO

CURRENT ISSUES

RETAIL SALES
I was going to write about Steve Ballmer, the Microsoft CEO, calling Linux a cancer, but the company's latest missive for the retail industry is far funnier. Microsoft has produced a white paper on why the retail industry should not use Linux. Split into ten different areas, it is more humorous than my daily UserFriendly cartoon. Each section is filled with untruths. For example, in one section they complain that Linux has so many installation versions. This is obviously bad to Microsoft, as they say 'with so many different distributions available, there are bound to be proprietaries introduced beyond the free operating system to allow vendors to distinguish themselves.' Hmmm, choice is bad then. Never mind that Microsoft's whole ethos is about introducing proprietary technology. 'Microsoft, unlike Linux, has one standard graphical user interface across its limited number of operating systems.' So one size fits all, and again choice is a bad thing. In the section headed Less Secure: '"Open source" means that anyone can get a copy of the source code. Developers can find security weaknesses very easily with Linux. The same is not true with Microsoft Windows.' To be honest they are correct. We do get the source and we do find security weaknesses. Unfortunately they then miss the point by a mile. We also post fixes and are open about problems. If everyone knows, then it is harder to exploit. The same cannot be said for Microsoft. It is all very well for me to read about problems in my TechNet subscription, but the majority of people do not know and so can be compromised. They do raise some good points, such as limited device driver support and untested waters in retail. Device drivers are always a problem, as Linux has not yet won the support of all the hardware developers. Untested waters is both a disadvantage, with no previous market penetration, and an advantage, as we have not yet made errors in the market. Although the paper is aimed at persuading buyers to steer clear of Linux, I think most are bright enough to read between the lines for themselves. Happy hacking.

John Southern, Editor

ISSN 14715678
Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.
Disclaimer: Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.
Technical Support: Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.

We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We're not simply reporting on the Linux and open source movement - we're part of it.



NEWS

Brussels sprouts Red Hat conference

Red Hat has announced the keynote schedule of the company's first Red Hat TechWorld conference. The conference, to be held in Brussels on 17 and 18 September 2001, will give open source companies and users a chance to meet, demonstrate their new products and learn more about other new technologies being developed in the open source space. Keynote speakers will include Bob Young, co-founder and chairman of Red Hat, who will explain why he believes open source is inevitable, and Matthew Szulik, Red Hat chief executive and president, who will discuss how open source can help solve the technical education problem. Michael Tiemann, Red Hat chief technology officer, will discuss how Linux can move from its revolutionary standpoint to be accepted as part of the establishment. Glyn Moody, author of the book 'Rebel Code', will discuss the contribution hackers have made to technological progress by writing software and then giving it away.

Technical track speakers Stephen Tweedie, Mark Cox, Gary Thomas, Owen Taylor, Havoc Pennington, Tom Tromey and Alex Larsson will be speaking on such open source topics as embedded systems, e-commerce, development tools, application solutions, application development, Red Hat Network and deployment. Red Hat chief technology officer Michael Tiemann said: "It is a very exciting time for the open source movement, with the adoption of Linux and open source accelerating at such speed on a global scale. Red Hat TechWorld Brussels will bring together the technology, the developers and the users, and provide a learning and networking environment in which to share ideas and knowledge. We look forward to hosting the first Red Hat TechWorld event and providing a forum for everyone to get together and share their experiences with open source and with Red Hat technologies."

Info http://www.redhat-techworld.com ■

SuSE’s oracle


Open source solutions provider SuSE Linux has announced that its Linux operating system has now been validated for Oracle's Oracle9i database technology. The validation means that SuSE can now offer its e-business infrastructure to customers using the Oracle9i database. Dirk Hohndel, president of SuSE, expressed his delight at working with Oracle to deliver an open source e-business solution: "Oracle9i Database provides a powerful software infrastructure for the e-business needs of thousands of companies around the world. Using SuSE Linux, these companies have one of the most secure, affordable and reliable operating systems in the world as an alternative to more expensive systems such as Windows NT and proprietary UNIX systems. We are pleased to be the first Linux company to have validated our operating system against Oracle9i Database and we look forward to our extensive collaboration on future products." Doug Kennedy, vice president of global partnerships, Systems Platform Division at Oracle Corporation, said: "We ask our Linux partners to use an extensive set of criteria to validate and test their operating systems with Oracle9i Database. This brings enormous value to customers because it gives them the affordability and reliability of Linux together with the power of Oracle9i Database." ■

Knocked for six
Compaq has announced six new initiatives to meet global customer demand for commercially viable Linux enterprise solutions. These include: high-performance Beowulf clustering, a program for interoperability and portability between Linux and UNIX, investment and participation in a Linux lab, training and certifying system engineers, contributing SSI clustering technology to the open source community, and fostering development of Linux applications for handheld devices. Compaq has drawn on more than twenty years of high performance and high availability clustering experience to build high performance technical computing clusters on AlphaServer. Now the company is partnering with leading Linux players to deliver Beowulf clustering solutions on its ProLiant server platform, enabling enterprises of all sizes to build supercomputer-like clusters of 16 to 512 or more nodes. To further the development of its Linux solutions, Compaq is working closely with Oracle, with plans to participate in Oracle's Linux Lab to optimise kernel development and performance. "Oracle and Compaq continue to deliver leading edge solutions to the enterprise," said Mike Rocha, senior vice president, Oracle. "Working together in the Linux lab capitalises on our respective strengths and expertise in developing more reliable and scalable next-generation Linux applications." ■




Ask Sleepycat
Search engine provider Ask Jeeves has chosen Sleepycat's Berkeley DB embedded database for the management of its question answering data. Ask Jeeves' Web Properties handles high transaction volumes to deliver information to users on Ask.com, AJKids.com and DirectHit.com. Ask Jeeves developers will now use Berkeley DB to help them build high performance Web-based solutions, said Ed Boudrot, product manager for Ask Jeeves: "We perform somewhere on the order of 8-10 billion transactions with Berkeley DB on a monthly basis without failure. We selected Berkeley DB to store information because of its flexible architecture, extremely high transaction rate, and its ability to scale. In addition, our experience with Sleepycat's customer support has been outstanding. When we have a technical matter to solve, the Sleepycat team is responsive and effective." Michael A. Olson, vice president of marketing at Sleepycat Software, said: "By combining its innovative natural language and popularity search technology with Berkeley DB, Ask Jeeves has become the 15th most popular Web property on the Internet. We've worked hard to provide an embedded database that scales well and remains fast and reliable at any size. Ask Jeeves' decision to deploy Berkeley DB convinces us that we've made the right choices."

DataPiped, courtesy of Ensim

Hosting automation provider Ensim Corporation has announced that DataPipe is its first service provider customer for its outsourced data centres initiative in collaboration with Compaq. Mark Linesch, vice president of Service Provider Solutions, Compaq Industry Standard Server Group said that working with partners such as Ensim enabled Compaq to deliver highly optimised ProLiant server platforms to customers: ”With this initiative Compaq and Ensim continue to drive greater simplification and innovation in the growing Linux Service

Provider market and deliver customers the return on investment they require to be competitive." DataPipe has deployed Ensim's flagship product, ServerXchange, with Compaq ProLiant servers to optimise hosting automation of the Compaq systems and the applications that run on them. Robb Allen, chief executive of DataPipe, said: "As part of our growth equation, deploying ServerXchange on our Compaq ProLiant servers has streamlined our operations and certainly created new revenue opportunities." The Ensim-Compaq initiative means that DataPipe will now be able to optimise the provisioning and management of its Compaq ProLiant server infrastructure using Compaq Intelligent Manageability Tools and Ensim ServerXchange. DataPipe can also create a reseller channel using the ServerXchange three-layer control panel for the service provider, reseller and end-user. ■

Info The Sleepycat technology is available for download at the company’s website, http://www.sleepycat.com ■

Connectivity
The NetLinOS initiative, created by Cyclades Corporation, has launched its NetLinOS Web portal. The Web portal is intended as a centre on the Internet for those involved in initiatives related to Linux-based network connectivity to keep up to date with related developments, encouraging the development of new Linux-based network appliances. Visitors to the NetLinOS Web portal can learn more about the initiative and view a list of products designed and tested by the NetLinOS team. There are also "how-to-build" tutorials for all products listed, and visitors can access a database of hardware and software components used in developing Linux-based network connectivity products. Ivan Passos, leader of the NetLinOS project, explained how the portal came into being: "We wanted to create a place on the Internet where visitors can view, contribute to, and communicate about the latest developments involving network connectivity in Linux. The idea is not to compete with current efforts in this arena, but to consolidate these efforts by using software already developed, contributing new code and focusing on hardware and software integration to allow the development of commercial products."

Info http://www.netlinos.org/ ■



A step in the right direction
Twenty-four technology suppliers to cable, satellite and telecommunications operators have allied to form the TV Linux Alliance, a group whose aim is to define a standards-based Linux environment to improve products, shorten time to market and speed up development cycles in the digital set-top box market. Members of the TV Linux Alliance include ACTV, ATI Technologies, Broadcom Corporation, Concurrent Computer Corporation, Conexant, Convergence Integrated Media, DIVA, Excite@Home, iSurfTV, Liberate Technologies, Lineo, MontaVista, Motorola, nCUBE, OpenTV, Pace Micro Technology, Qpass, ReplayTV, STMicroelectronics, Sun Microsystems, TiVo, Trintech, TV Gateway and WorldGate. The Alliance's new Linux specification will be available later this year. Yankee Group analyst Adi Kishore said the new alliance would be in a position to provide network operators with a standardised offering from the Linux community for digital TV set-tops. He added: "A common solution built around a single framework will reduce the integration issues that result in lengthy deployment time-frames for advanced interactive applications, as well as middleware solutions and set-top boxes." Jerry Krasner, executive director, Electronics Market Forecasters, commented: "The world of digital television is rapidly changing and the formation of this alliance will ensure that network operators have access to a solid offering set from the Linux community for digital TV set-tops. No single company has been able to own the digital TV market. The talents and technologies of these companies rallying around a single framework for a robust Linux solution should keep competition at the operating system layer thriving while ensuring that advanced interactive applications and middleware providers, along with set-top manufacturers, can get solutions to market more quickly."

Info www.TVLinuxAlliance.org ■

Beta Endeavors
Endeavors Technology has unveiled the new beta version of its secure cross-platform, cross-device peer-to-peer collaboration tool. Magi Enterprise v2.0 is aimed at VARs, OEMs and integrators. It uses SSL and a certificate authority to address business concerns about sharing sensitive company information. The solution offers such features as file sharing, community index and search, chat and instant messaging. Also included is a software developer's kit, enabling companies to create a peer-to-peer collaborative network and integrate applications for tracking such processes as knowledge management, productivity, revenue growth and profitability. Anne Zieger, principal analyst with enterprise peer-to-peer research firm PeerToPeerCentral.com,


said: ”Without strong security protections in place, corporate managers are unlikely to take peer to peer seriously. But once secure platforms become available, peer-to-peer has a bright future in the enterprise. Solutions like Magi Enterprise – designed to offer highly secure file sharing and communication options – should be among the first to be adopted by IT departments.” Jim Lowrey, security architect for Endeavors Technology, adds: ”Peer-to-peer computing faces a mix of security issues not faced by traditional secure-computing paradigms. Magi Enterprise addresses these issues and now provides the vital infrastructure required by IT managers to construct secure information processing systems without exposing an organisation’s systems and data to theft, attack, or compromise.” ■



Notworking
Corporate IT infrastructures in the UK are unequal to the demands being placed on them, new research has revealed. The research, conducted by MORI at the request of Hewlett-Packard, questioned IT decision makers in UK corporates. In the past six months, according to the research, four in ten large companies reported that peak demands on their IT infrastructure had outstripped capacity. The respondents were pessimistic about the prospects for the future, with seventy-five percent saying they believed the situation would remain unchanged or worsen. IT decision makers look to outsourcing as a possible solution, with eighty percent of companies surveyed currently considering outsourcing parts of their IT infrastructure to a service provider.

”It is clear that IT decision makers are feeling the effects of peak demands more than ever, and companies are increasingly having to take precautions and invest in order to deliver the best possible service,” said Chris Franklin, UNIX Server Category Manager for HP. ”What the industry needs is a cost effective, flexible, and secure solution which can deploy compute resources immediately in a simple and effective manner, regardless of the overall peak demand.” Franklin added that security continued to be an important consideration for organisations: ”By embracing a utility computing model, UK companies can also benefit from a scalable Internet infrastructure with capacity on demand and instant Internet connectivity.” ■

Caldera serves up 64


Caldera has announced that a preview of its OpenLinux Server 64 is now available. OpenLinux Server 64 is the result of work by Caldera and other members of the IA-64 Linux project to develop a 64-bit operating system for the next generation of Standard High Volume (SHV) server platforms. It provides an enterprise class Linux server platform for Intel Itanium-based systems, featuring Internet services such as Web servers, file and print servers and network infrastructure, as well as a performance platform for Linux enterprise solutions. Caldera chief executive Ransom Love said: "Itanium systems represent the next generation of high-end computing platforms. OpenLinux Server 64 supplements Caldera's range of platforms that span from desktop to data centre. As a Linux leader, we have the expertise to provide an operating system such as OpenLinux Server 64 that exploits Itanium's capacity for supporting mission-critical business applications." Victor Krutul, manager of Software Programs at Intel, said: "Itanium-based systems deliver world-class operation for the most demanding enterprise and high-performance computing applications. Caldera OpenLinux Server 64 will provide business-focused customers with a smooth migration path to deploy Itanium systems as their application needs grow." Caldera plans to make the operating system generally available towards the end of the third quarter of 2001. ■

New SuSE OS released
SuSE Linux has announced the release of its operating system for Intel's Itanium-based systems. SuSE Linux 7.2 for Itanium systems is based on the latest Linux technology, including the new Linux kernel 2.4.4. As well as the operating system itself, the package features 1,500 applications on six CD-ROMs, enabling the set-up of intranet and Internet solutions as well as the set-up and protection of heterogeneous networks. For professional users there are tools for setting up WWW, proxy, mail and news servers in Linux. Victor Krutul, manager of Operating System Programs at Intel, commented: "End-users and developers will be able to take advantage of the 64-bit capabilities of Itanium-based platforms and the technical competence of SuSE with the release of SuSE's 7.2 Linux. Linux on Itanium-based systems will provide high performance, reliability, and flexibility to IT professionals." ■




Double feature
Hewlett-Packard has announced new features for its e-utilica 'out-of-the-box' service provider solution for corporate networks. New features include the ability to operate within multi-operating system environments and on top of an existing IT infrastructure. e-utilica is now available with a range of bundles to suit the requirements of the individual company, including extra storage, servers and software. e-utilica comes with out-of-the-box integrated tools for monitoring of usage to help IT departments

forecast requirements. Chris Franklin, UNIX Server Category Manager for HP UK, said he saw an increasing demand within the corporate space for on-tap applications, processing power and storage capacity on demand. "With e-utilica, we're delighted to be offering corporates the unique opportunity to be as flexible as they like, creating real business benefits within a cost-effective and secure environment." Franklin added: "There is clearly a need for on-tap computing and we're delighted to be the only vendor offering customers the flexibility to successfully adapt within today's demanding and unpredictable business environment". ■


Getting the message
Internet messaging solution provider Sendmail has announced that it has enabled its solution for Linux on the IBM eServer z900 and S/390 platform. The product becomes generally available in the third quarter of this year. Until then, the Sendmail/zSeries Early Support Program is available, offering early customers a product discount, as well as free set-up services, including installation and configuration. Rich Lechner, vice president of IBM eServer marketing, commented: "Sendmail Internet messaging solutions take advantage of the scalability, reliability and agility of the eServer z900. By exploiting the power of Linux on the IBM mainframe platform, Sendmail provides the robustness and scalability needed for today's messaging and mail infrastructure with the industry's lowest total cost of computing." ■


New Matrox release
Matrox Graphics has announced the release of its new open source graphical user interface (GUI)-based utility. Matrox PowerDesk for Linux enables Matrox Linux users to manage their desktop, configure Matrox DualHead display features such as multi-display, clone and TV output, and make monitor adjustments to change resolution settings, pixel depths and refresh rates. "As the adoption of Linux among corporate and novice users grows, we wanted to provide consumers with a desktop interface that is easy to use and free of the sometimes complex coding required under the OS," said Alain Thiffault, Matrox global software manager. "PowerDesk for Linux will dramatically increase productivity by allowing Linux users to change their desktop settings using our point and click GUI, which is far more efficient than the current text-based approach."

Info
PowerDesk for Linux runs under XFree86 versions 4.0.2 and 4.0.3 and is compatible with Matrox G200, G400 and G450-based graphics cards. It can be downloaded from the Matrox driver page at http://www.matrox.com/mga/support/drivers/home.cfm ■



Itanium Red Hat
Red Hat has announced that Red Hat Linux 7.1 for the Itanium Processor is now available. The new release features Red Hat Linux 7.1 with the new 2.4 kernel. It features new configuration tools and security enhancements and is configured to support eight or more Itanium processors running as a single system image. The release also provides default settings for security that keep ports closed and Internet utilities inactive until needed. A new firewall screen enables users to customise their security settings by turning the various features on or off. Updates to the operating system include improved device support and an improved multithreaded network stack and virtual file system. A revised scheduler enables the operating system to handle more processes.

Michael Tiemann, chief technical officer at Red Hat, said: ”Intel recognised very early that Linux was going to be a very important operating system in the future, and we are pleased that they chose to work with Red Hat as a strategic technology developer. Red Hat Linux 7.1 for the Itanium Processor will enable the enterprise and technical computing communities to take full advantage of Intel’s new technology running on the industry’s standard Linux platform”. Victor Krutul, Linux Technology Manager for Intel commented: ”With the release of Red Hat Linux 7.1 for the Itanium processor-based systems, developers should be confident that the technology, support and leadership is in place to successfully develop, deploy and manage 64-bit Linux applications and services.” ■

Expanded mission
Linux solutions and services company Mission Critical Linux has announced the expansion of its PreConfigured Solutions Program to include systems built on Intel's Itanium architecture. The program currently combines Mission Critical Linux's Convolo cluster software with hardware from OEMs including IBM and Hewlett-Packard. The expanded program will enable the company to deliver a mission critical management services solution for customers who require an enterprise standard high-availability solution. Backed by Mission Critical Linux's round the clock support, the PreConfigured Solutions Program is aimed at the financial services, data centre, securities, telecommunications and high-tech industries, providing them with a Linux-based solution that can provide seamless high availability for applications like Web servers, email, Samba, NFS, and databases such as Postgres, Oracle, Informix and DB2. Moiz Kohari, chief executive of Mission Critical Linux, said: "Working with industry-leading OEMs on Itanium-based platforms, we see this expanded offering as providing enterprise customers with a comprehensive answer to their IT needs. When you combine Convolo, our deep Linux expertise, and the Intel Itanium processor, you have a powerful combination of the hardware, software, and around-the-clock support businesses demand." Victor Krutul, manager of Operating Systems Programs at Intel, said: "Linux solutions based on Mission Critical Linux will allow companies to take full advantage of the performance and scalability built into Itanium-based platforms. The processor was designed with business-critical environments in mind, and we think Linux and Intel Itanium architecture is a powerful combination." ■

Sharp choice
Sharp has selected Lineo Embedix as the operating system for the next generation of Sharp handheld devices. Lineo announced that Embedix was chosen for its low hardware requirements, broad feature support, and its ability to bring the product quickly to market using the Linux operating system. The new Sharp operating system platform will be able to run software written in Sun Microsystems' Java language, which supports a variety of operating systems. The first Linux-based handhelds are expected to be available in autumn 2001 in North America. Hiroshi Uno, general manager, mobile systems division, Sharp Corporation, said: "Sharp has decided to use Linux because it is an open operating system. And we selected Lineo, the leading provider of the Linux-based embedded operating system, Embedix, to run on our upcoming PDA for the worldwide market. Linux will allow tens of thousands of Linux developers to write applications for our devices." Sharp is also working with Tao Group and Amiga Corporation to develop content, software and processing power for the Sharp handheld device. ■




ON TEST

OMNISERVER

All in one server

RAQ BEATER
Network appliances seem to be the coming thing at the moment. The new Omni Server from RainbowCyber aims to solve many problems in a simple box.

Network Appliance
The new Omni Server is available in three flavours: the Standard model, with a 533MHz processor, 64MB RAM and a 20GB hard drive; the Professional model, which is boosted up to a 667MHz processor, 128MB RAM and a 40GB hard disk; and the top-of-the-range Enterprise model, which has twin 667MHz processors. The test model was a small-footprint unit (290x350x92mm WDH) incorporating a 24x CD drive and a CF IDE expansion slot on the front, along with various LEDs and a power switch. The back reveals the usual keyboard, mouse and monitor connections and the main selling point: the onboard 8-port hub. The test model did not have the smart card reader attached, but the internal COM2 port header

allowed us to test this with the usual card reader from Towitoko once we had downloaded the drivers; very similar to the Sun Blade 100 workstation for added security and authentication. Rather than a 533, it came with a 550MHz Cyrix. Memory is via 168-pin DIMMs, with a maximum of 512MB supported. On connecting power and starting the unit it loads Trustix Linux. This is preconfigured to act as a full Web hosting device with Apache and as a firewall. Administration of the unit is either via connecting a monitor, keyboard and mouse, or using a browser over a network and Webmin. The latter option is probably the way most users will use the device, enabling it simply to be added to a network with power and network connections, similar to Cobalt's RaQ or Qube3. As a server appliance the Omni Server helps to integrate all the necessary tools and applications to enable a company to be on the Web quickly. Via the printer module we quickly set up a printer on the parallel port to act as the office printer. This worked fine across the network. The main selling point has to be the inclusion of an eight-port hub. On closer inspection this turns out to be seven ports to connect to (the eighth being used by the Omni itself) and a separate connector for a two-port Davicom LAN card (again, one port being used by the server) running at 10/100. By including the two networks a good firewall can be made.



Webmin is a good choice to ship with the server and it worked flawlessly during the tests we ran. We successfully used ssh to connect and, as security was good, failed to Telnet in. The installed software was Trustix Secure Linux 1.1, although 1.2 is now available and is also fully Webmin compliant. Squid for caching was also included. The Samba module worked, and having a 20GB drive meant that a MySQL database had plenty of room to act as a Web catalogue. All the software ran first time as expected and, as its motherboard is fitted with an Award BIOS, it acts as any server you choose. Opening up the unit showed spare connectors for a floppy disk (it will boot from floppy if needed) as well as a secondary IDE connector. Two PCI slots give room to expand for items such as an ADSL modem or other network cards. As a test we set up the unit to act as a Web page hosting server, connect to the rest of our office network and handle the mail traffic, all in twenty minutes. Not bad when we actually wasted some


time trying to connect with DHCP running rather than just using the IP address. Admin was done by connecting with a Red Hat box, though we also got it working with a Windows box (well, it was lying around and I wanted to see Webmin running on IE) and any browser would do. Support for the unit is provided by LinuxSure. Overall, a nice, simple-to-use server. This would suit any company that is short of space or just wants to do away with the hassle. The ability to administer remotely opens up lots of possibilities and I can already see these being installed at many sites for a single knowledgeable consultant or engineer to look after. ■

Info:
http://www.omniserver.co.uk
Rainbow Cyber Services Ltd, 0208 994 0053
http://www.linuxsure.com
Cost: £850 for the Standard model, £1175 for the Professional model. ■




Geeko’s latest

SUSE 7.2 PROFESSIONAL EDITION ON TEST
RICHARD IBBOTSON

What is it that is attracting so many people to SuSE 7.2 and away from the other distributions? There's the absolute quality of the software, coupled with the excellent technical support in the United Kingdom and from other countries through the Internet, which is in fact in plain old-fashioned English. There's also the SuSE English list out there on the Net. Other things that help are that there is a low cost personal version as well as a professional version. There is more than enough software for most types of engineer or scientist or home user. In fact there's so much software on seven CDs and on one DVD disk that it takes a long time to understand what to do with all of it. Just to finish it off, there's an endless ftp site with many updates for the latest release and for the others that have been around for a long time. The SuSE distribution is the software that runs the Linux Magazine office, so it must be good.

Initial installation

What's new in 7.2? Well, apart from the useful and never-ending list of the latest updates for software such as Sendmail, Fetchmail and Emacs, there is the


2.4.4 kernel with increased USB support and support for iptables and NAT. There are several bonuses for the people who are graphical users rather than command line GNU shock troops. An evaluation version of Evolution, Ximian's latest assault on the desktop, brightens up the day with its many features, which combine a personal organiser with an email application to help the individual who is lost without Outlook Express. Other goodies include Anjuta, the GNOME IDE programming tool. After a quick look through some excellent manuals, the graphical installation method with YaST2 compares favourably with those available with distributions such as Red Hat, Mandrake or Linux by Libranet. The experienced UNIX person may not like that and may prefer to use the updated original YaST, which now shows many useful features that were not there before. Going from penguin one to penguin number eight in the install screens doesn't require a great deal of skill. As Ruediger Berlich, the Managing Director of SuSE Ltd, likes to show at his



Using Mozilla.

many public appearances, the average MS Windows user can quickly understand what to do. Dual boot or a quick replacement of the Windows partition with Linux is easy to do. Something that is really good about the SuSE distribution is that you can quickly and easily install ReiserFS or Ext2 file systems; either one is easily configured at installation. The test machine was a notebook with an Intel processor. The 2.4.4 kernel modules and the PCMCIA modules worked fine on this hardware, as did USB and IrDA with a mobile phone for email and Web pages. For X Window System configuration either SaX or SaX2 can be used. Configuration is a point-and-click experience which should take only a few seconds; the test install did just that. XFree86 4.0.3 provides anti-aliasing, so the TrueType fonts are more rounded and easier on the eyes. Ten minutes of installation and a few more to key in network and ISP data produced a working laptop that could be used on a network or dial-up system without a problem. If you are someone who has to use sound or listen to music then you are very well catered for. The ALSA sound system is a SuSE-sponsored project. ALSA works well on just about any personal computer and very well on notebooks. The applications manual gives extensive information about the applications. There are many applications for sound management. If you are a professional musician then you should consider using this distribution: it's so much better for sound than most proprietary brands of software. There is also a choice of an enterprise class firewall or a personal firewall, both of which can be used in whatever way the user wants, in

addition to the usual open source security tools. AMaVIS has finally been included with this distribution. This is a high quality virus scanner that checks all mail that goes through a gateway machine or a mailserver. There is a loopback-mounted file system which increases the security of the system even further. Kerberos 5 is supported in this release, which makes it popular amongst MS Windows users. Staying with MS Windows for a moment: Samba 2.2 is released with this version of SuSE Linux. If you want an internal file server that is virus proof and doesn't crash at all then you should think very seriously about using Samba on a mixed Win

Just one of the fourteen desktops available. Here XFce is running.




YaST2 running on the KDE desktop

Enlightenment desktop

2000 network. Many people have been able to get a good night's sleep for the first time in years after installing and configuring Samba, and the 2.2 version is quite amazing once it's up and running. If you want it, KDE 2.1.2 provides a well finished desktop. All of the desktops and the minimalist window managers are available at login, so you can choose whatever you want or just stay with the desktop that you like the most. The new GNOME window manager has been much improved in SuSE 7.2, with many more options available than there were in the last release. GNOME has quite definitely caught up with KDE2 in the way it looks and feels.


Once an installation is finished, a broken machine can be rescued with the SuSE rescue system, which can be run from a floppy disk or from the first installation CD. To say some more about YaST2: all of the command line hacking that used to be a part of Linux is now largely gone. YaST2 provides a foolproof way of configuring your system. In the present day it's highly likely that training will not be available for the system administrator. This makes graphical management systems like YaST2 very popular, and in fact they may be essential within the commercial world for people who quite simply do not have the time to read a book or go to night classes. YaST2 is particularly good for ISDN and ADSL configuration and, once configured, is rock solid for years. Only in distributions like Debian is the old and traditional UNIX command line preserved. If you are a purist then you will presumably stay with Debian. However, the SuSE package management system compares favourably with the Debian one and I can't say which one I prefer. There is much evidence to support the claim that the Debian and RPM package management systems will eventually be merged into something that is as good as the Debian one. To get back to the manuals once again: I feel that these are worth a second mention. After all, you are paying for the books and not the software. Other distributions provide some excellent books with their CDs. SuSE have come up with a range of documents that will keep the first-time user or the experienced Linux user well informed in just the right way. It's easy to start out with the first



installation manual and then graduate towards the level of an intermediate system administrator or novice programmer. In fact, many people I know learned about Linux by getting hold of a CD and then reading the docs that come with the software. There are six books in total in the Professional version. You can start with the quick install manual, which provides the cartoons that most experienced computer users can understand, and certainly the newbies should be able to get to grips with Linux using this easy-to-understand booklet. It's the kind of thing that Microsoft should have done years ago and didn't bother to do. The next one along that would suit the beginner is the applications book. This is rather like the kind of publication that you get with the Caldera OpenLinux 2.4 release. There's a lot of information in this book that explains the kind of things that many people miss. The other three manuals are an endless treasure trove of the kind of information that you just can't get from anywhere else without paying lots of money for it. The configuration manual shows how to use point and click to do most things. There is a lengthy explanation of how to use file managers and other graphical interfaces. The network manual describes some simple and more advanced networking concepts. It's the kind of thing that you just cannot


obtain from anywhere else in simple English rather than in complex terms that are not understandable. The final volume is the reference manual. This is packed full of things that the other distributions have not bothered to include in their own documentation and the books that accompany their software. It's worth buying the Professional version just for this book alone. Just a final word here: SuSE GmbH is the only Linux software distributor that has actually taken the time to study British English as it is and then produce a completely British distribution, without all of the Americanisms that you can see in other distributions. This should mean something to the average Linux user. They have also spent a great deal of time and money translating their original German product into many other languages, making SuSE Linux the distribution with the best documentation. The SuSE support database can be viewed on any of their computers, and the same database can be accessed on their own website from your desktop, notebook or palmtop. So you get not only man pages but also nice pictures, as well as easy-to-understand instructions from SuSE and both the KDE and GNOME development teams, in addition to the printed manuals. SuSE Ltd will be at the Linux Expo at Olympia and at Birmingham NEC in September; we will look forward to seeing you there. This review, like most of the Sheffield Linux User's Group website, was produced with SuSE software. ■

Useful links
http://www.suse.co.uk
http://www.suse.co.uk/uk/suse_linux.html
http://www.suse.de/en
http://www.suse.com
http://www.namesys.com
http://www.alsa-project.org
http://www.ximian.com
http://www.kde.org ■

About the Author
Richard Ibbotson is the Chairman and Organiser for Sheffield Linux User's Group. You can view their website at http://www.sheflug.co.uk




LETTERS

WRITE ACCESS

Adding another dimension
Could you please explain the paragraph entitled "Basic 3D elements" in the article "OpenGL course: part 2". I cannot make head nor tail of the discussion, certainly not when referring to Picture 2. And at the bottom of page 2, reference is made to '10 of the primitive types mentioned above'. Which 10 types are these? Only five types were mentioned in the text: points, lines, triangles, quadrangles and polygons. Am I missing something here?
Paul Keith, via email

Linux Magazine
We made a mistake and did not print the correct picture. The 10 primitive types should be:
1. Points
2. Lines
3. Line_Strip (lines joined to one another)
4. Line_Loop (lines joined to one another making a loop)
5. Triangle
6. Triangle_Strip (adjoining triangles)
7. Triangle_Fan (adjoining triangles with one common point)
8. Quads
9. Quad_Strip
10. Polygon
The 10 primitives

Star Letter
I was somewhat excited to find a copy of Mandrake on your coverdisc and impatiently tried to install it. After creating the boot floppy and following the installation instructions, the installation told me that the disc did not seem to be a Mandrake installation disc. I have tried all the methods described. Can you help me? Also, the DOS method is wrong because it tells me there is no such file as autoboot.bat. I would dearly like to install this product and maybe one day escape from Windows! I am running Windows Me on a 733MHz Pentium III with a 27GB hard drive, CD-ROM/RW and DVD.
S Monty, via email

Linux Magazine
As you may have guessed, we had one or two similar letters. The truth is we made a mistake and incorrectly mastered the CD without the Rock Ridge extensions, thus making the CD into a non-bootable disc. It was nothing to do with Mandrake. All our fault, and for that I apologise. However remiss we were, can I point out to the two gentlemen who so kindly took the time to call me at home that swearing at my children is not perhaps the best response, as I now have your number. Hopefully by now everyone who has requested a replacement has received one. If anyone still requires a replacement copy then just write or email.

Not browsing
I am trying to access your CD through Windows Internet Explorer, unfortunately. Only the Index Contents or Start page will open; any selection made from the index causes the "Page cannot be displayed" error page to pop up. Other browser-type CD formats work fine. Have I missed something?
Ron, via email

Linux Magazine
Trying to access a CD-ROM with a browser can be a trying experience. We have used the ISO 9660 standard when writing the CD. In real words, when we save data on the CD it is saved with '/' (slash) as the separator between directories. UNIX, Linux, Mac, PowerPC, Alpha, Sun, SGI and, more importantly, the Internet use this separator (as in http://www.microsoft.com). Unfortunately, Microsoft Explorer does not want to play fair and so treats '/' as part of a name. I expect you are getting something like:
d:\./LinuxMagazine/kthemes/index.html (local)
at the bottom of your Explorer screen when you hover over a hypertext link. What would be required for Internet Explorer to read the file would be:
d:\LinuxMagazine\kthemes\index.html
As we are aimed at the Linux OS, we will continue to use the ISO 9660 standard, but that does not help you. What you need to do to read the magazine CD-ROM under Internet Explorer is open each page individually by using the menu:
• File
• Open
• Browse
At this point you can then step down through the directories. Both of the other two browsers under Windows that I have tried (Netscape and Opera) work fine and treat the '/' as a directory separator. Honestly, it is easier if you change to Linux. Hope this helps. ■



FEATURE

Tomcat for Apache

EFFICIENT, FLEXIBLE AND OPEN SOURCE: CHOOSE ANY THREE
DEAN WILSON AND KIM HAWTIN

Web sites used to be a way of claiming a personal piece of the new frontier: a number of HTML pages and a CGI script made a website. These days, with the advent of P2P sites and e-commerce solutions, Java is coming into the limelight, both on its own merits and as a rival to Microsoft's .net platform. Java's success in this field is based upon its ability to act as middleware between different vendors' software and open source projects such as Apache, Enhydra, Cocoon and Tomcat.

Background
Jakarta is a selection of open source, Java-related technologies, such as XML parsers, XSLT processors, regular expressions and portal management. Because many of the projects are implemented in Java, they're mostly OS independent, which allows their widespread use. Apache is the Web server of choice: it delivers the content of over half of the sites on the Web, but until recently it lacked an integrated way of serving dynamically generated Java content. With the rising popularity of the Tomcat server (from the Apache-Jakarta project) this need has begun to be met. Tomcat is a product of the Apache-Jakarta project that functions as a Java Servlet and JavaServer Pages engine, and is one of the most up-to-date implementations of the Java servlet specification. A Java servlet is a small application that receives and responds to HTTP requests. Although Tomcat can be run as a stand-alone servlet engine, its true power becomes clear when coupled with Apache, which helps cut down on:
• Running the built-in Tomcat Web server in addition to Apache.
• Forcing the user to enter non-standard details such as the port number in each request.
• Doubling the effort required to administer the servers.
• Exposing more potential security risks.
Using the two projects together means that you can harness Apache's features, such as:
• Multiple virtual hosts.
• Fast delivery of static content (Tomcat is optimised for servlet processing).
• An almost infinite number of configuration options for the multiple virtual Web servers that Apache users require.

Installing Java
This tutorial assumes that you already have an Apache site that you would like to expand by adding Java servlets; this will be achieved by adding the Java Software Development Kit (J2SDK) and Tomcat to the established site. It will not require a recompilation of your existing Apache set-up if your Apache allows the use of DSOs. The Java Development Kit is available from http://java.sun.com. Download the newest version from this site (J2SDK 1.3.1 at the time of going to press). Although the J2SDK is free, you do have to sign an agreement before you can get the software. Once you have this on the computer that you have Apache installed on, you'll need to install the J2SDK. This procedure varies between Red Hat and Debian-based distributions; here we will cover the Red Hat install. Installing the Java SDK is simple. You should run the following command:


# sh j2sdk-1_3_1-linux-i386-rpm.bin


You'll be confronted with a licensing agreement which must be accepted in order to install. If you agree to the license, the package will decompress, leaving a ready-to-install RPM. Next, login as root and install the RPM with a command similar to:


# rpm -i jdk-1.3.1.i386.rpm


What is a DSO?

What is a DSO?
A DSO is a Dynamic Shared Object. In the past, whenever you wished to add new functionality to an existing Apache Web server install, you would be required to reconfigure, re-compile and perform a fresh install of the Apache binaries. With the inclusion of DSOs in the Apache server this process has been simplified: an external module is compiled into a DSO that Apache can pick up and incorporate at run time, without the base Web server needing to be recompiled. If you are running a Debian-based distribution then you'll need the apache-dev package; if it's Red Hat-based then you'll need the Web server source from Apache.
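A quick way to find out whether your existing Apache binary was built with DSO support is to ask it to list its compiled-in modules and look for mod_so.c, the module loader. A sketch of the check (the binary may be called httpd or apache depending on your distribution, and the module list shown here is indicative only):

# httpd -l
Compiled-in modules:
  http_core.c
  mod_so.c

If mod_so.c is missing, the server will not be able to load mod_jk.so later on and you will need a rebuild with DSO support first.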




Next, add the Java binaries to your path. You can do this on a per-user basis or at a system-wide level. To add Java to the path for a single user, edit .bash_profile in their home directory. Check for the line that has PATH=$PATH at the start and add the directory of the Java runtime binaries to the PATH environment variable. The second step involves adding a line to set the JAVA_HOME environment variable. Both variables need to be explicitly exported. A simplified example is as follows:

Resources
For the best place to learn more about the Apache and Jakarta projects:
http://www.apache.org/
http://jakarta.apache.org/
For more information on Java Servlets see:
http://java.sun.com/
For ideas on how to use Java Servlets or information on Tomcat configuration see:
http://www.oreilly.com/catalog/jservlet2/
http://www.jguru.com/ ■

PATH=$PATH:/usr/java/jdk1.3.0_02/bin
JAVA_HOME=/usr/java/jdk1.3.0_02/
export PATH JAVA_HOME

jdk1.3.0_02 here reflects the version of the software downloaded. To do this at a system-wide level you need to enter the same changes as before, but in the /etc/profile file. We recommend the system-wide approach, so that the changes apply to all users; this will make running Tomcat a much simpler task. You'll need to logout and log back in for the environment changes to take effect. You can test that the Java binaries installed correctly by entering the simple test program (Listing 1: HelloWorld.java). Be sure to make the name of the file the same as the name you entered in the public class line at the top of the sample code, otherwise the Java compiler will give an error similar to "class HelloWorld is public, should be declared in a file named HelloWorld.java".

Listing 1: HelloWorld.java

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello Java World");
    }
}

Listing 2: ServletHello.java

/* ServletHello.java: Hello world example servlet */
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*; // Import all required classes

public class ServletHello extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("text/html"); // This sets the type of the response
        PrintWriter toClient = response.getWriter(); // This sets the output character stream
        toClient.println("<html>");
        toClient.println("<head>");
        toClient.println("<title>This is a test page</title>");
        toClient.println("</head>");
        toClient.println("<body>");
        toClient.println("<p>Welcome to Tomcat!</p>");
        toClient.println("</body></html>");
    }
}

To test the code you will need to type:

# javac HelloWorld.java

The compiler will then generate a file named HelloWorld.class. If you then type:

# java HelloWorld

you'll see "Hello Java World" printed on the terminal. This means the J2SDK was successfully installed. If an error occurs, work through the steps again.
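If the compiler or runtime cannot be found at all, a useful first check is to ask the runtime to identify itself; if the PATH and JAVA_HOME changes took effect you should see the release you installed. The version string below is indicative for the 1.3.1 J2SDK and will vary with your install:

# java -version
java version "1.3.1"

If this prints an older version, another Java installation is probably earlier in your PATH.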

Installing Tomcat
The precompiled version of Tomcat is at http://jakarta.apache.org/tomcat/index.html. Download the newest stable binary; at the time of writing this was Tomcat 3.2.2. Move the compressed tar file to /usr/local/ and extract as shown; this creates jakarta-tomcat-3.2.2. Then create a symlink in the JDK external library directory (below) for the Tomcat servlet jar file and run the Tomcat startup script:

# mv jakarta-tomcat-3.2.2.tar.gz /usr/local/
# cd /usr/local
# tar -zxvf jakarta-tomcat-3.2.2.tar.gz
# ln -s /usr/local/jakarta-tomcat-3.2.2/lib/servlet.jar /usr/java/jdk1.3.0_02/jre/lib/ext/servlet.jar
# cd jakarta-tomcat-3.2.2/bin
# ./startup.sh

Once you have run the startup.sh script Tomcat will output diagnostic messages to the terminal. Tomcat will run in the background as a daemon but continue to print messages to that terminal. Any error messages at this stage mean you've set up the JAVA_HOME or PATH statements in the user profiles incorrectly. To restart Tomcat run ./shutdown.sh, then repeat the previous steps and run startup.sh again. Open up a browser and point it at http://localhost:8080/index.html. If Tomcat is installed correctly you should see a Tomcat test page. In order to test that your environment is correctly set up, and that you are ready to move to the final stage of integrating Apache with Tomcat, you should enter and run the sample servlet presented in Listing 2. Open up your editor of choice and enter the sample code, remembering to name the file ServletHello.java. You should then compile the Java source:

# javac ServletHello.java

The compiler should create the ServletHello.class file. You then need to copy the class file into the Tomcat directory structure so that the Tomcat server can execute it when the HTTP request is issued from your Web browser. You should execute the following:



# cp ServletHello.class /usr/local/jakarta-tomcat-3.2.2/webapps/examples/WEB-INF/classes/

Point your web browser at:

http://localhost:8080/examples/servlet/ServletHello

If the request succeeded, you should be rewarded with a page containing the text "Welcome to Tomcat!". If you get a 'file not found' error or a 404, check the output of Tomcat on the terminal you started it from. An example of this is:

Ctx( /examples ): 404 R( /examples + /servlet/ServletHell + null) null

ServletHell in this case is probably a typo: check the URL that you entered in your browser. If these errors persist, compare the name of the Java class you requested in your browser to the name of the class file you copied into the Tomcat directory tree.
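If the servlet URL gives no response at all, it is also worth confirming that Tomcat is really listening on its port - a quick sanity check (a sketch; output trimmed):

$ netstat -tln | grep 8080
tcp        0      0 0.0.0.0:8080        0.0.0.0:*           LISTEN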

Configuring Apache

In order for the Apache server to be able to communicate with the Tomcat server you need to download a DSO module called mod_jk.so. This is available as part of the Jakarta project and can be downloaded from:

http://jakarta.apache.org/builds/jakarta-tomcat/release/v3.2.2/bin/linux/i386/

You should find two options, mod_jk.so-eapi and mod_jk.so-noeapi. Download the mod_jk.so-eapi version. This then needs to be moved to the correct place under the Apache directory tree, and the Tomcat and Apache configuration files need to be updated to reflect the new functionality:

# mv mod_jk.so-eapi /usr/lib/apache/mod_jk.so

By moving the module to this location, Apache knows that it needs to load it when restarted. You then need to make the changes to the Apache and Tomcat configuration files before restarting the Apache server. In httpd.conf (which you can find by issuing a locate httpd.conf) you need to make the following amendments. At the end of the LoadModule section, add:

LoadModule jk_module modules/mod_jk.so

Add to the end of the AddModule section:

AddModule mod_jk.c

<IfModule mod_jk.c>
JkWorkersFile /usr/local/src/jakarta-tomcat-3.2.2/conf/workers.properties
JkLogFile logs/jk.log
JkLogLevel warn
JkMount /examples/* ajp13
</IfModule>

You need to ensure that the path to the workers.properties file is correct; you can check this by issuing a locate workers.properties command and editing the path in the configuration file as required. You also need to create a log file for the jk module, which can be done with a simple:

touch /usr/local/src/jakarta-tomcat-3.2.2/logs/jk.log
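Tomcat ships with a default workers.properties; if you ever need to write one from scratch, a minimal sketch for a single local Ajp13 worker looks something like this (the host and port are assumptions, and the port must match the Connector added to server.xml below):

# workers.properties - minimal sketch
worker.list=ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009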

Configuring Tomcat

For each context in your Tomcat configuration file you need to add a JkMount line to your httpd.conf file, so that Apache can forward requests to the correct Tomcat handler for processing. Apache supports the Ajp12 protocol used with JServ; Tomcat uses the newer Ajp13 protocol for new functionality, so you need to add support for it. In the Tomcat configuration file, server.xml, add the following request handler after the existing Ajp12 section:

<!-- Apache AJP13 support. This is also used to shut down tomcat. -->
<Connector className="org.apache.tomcat.service.PoolTcpConnector">
    <Parameter name="handler"
        value="org.apache.tomcat.service.connector.Ajp13ConnectionHandler"/>
    <Parameter name="port" value="8009"/>
</Connector>

After adding these lines to your configuration file you will need to restart the Tomcat server with the shutdown.sh and startup.sh scripts. Once the Tomcat server has successfully re-initialised you can open your web browser and point it at:

http://localhost/examples/servlet/ServletHello

You should see the servlet output sent straight to your browser from Apache.

Contexts in Tomcat

Tomcat can manage multiple web applications. Each web application is a collection of files - Java servlets, HTML, JSP files and other resources - that are required for the application to function. Each web application can be deployed separately, in its own context. This is useful for testing your Java servlet against different versions of supporting jar files - for example several different XML parsers or XSLT processors. ■

Server Initialisation

So that Apache and Tomcat start and stop like other servers, we recommend that you add Tomcat to the Apache initialisation script or create a separate initialisation script for it.
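A minimal sketch of such a standalone script is shown below; the paths are assumptions that should be adapted to your installation:

#!/bin/sh
# /etc/init.d/tomcat - minimal init script sketch (assumed paths)
TOMCAT_HOME=/usr/local/jakarta-tomcat-3.2.2
JAVA_HOME=/usr/java/jdk1.3.0_02
export JAVA_HOME

case "$1" in
    start)
        $TOMCAT_HOME/bin/startup.sh
        ;;
    stop)
        $TOMCAT_HOME/bin/shutdown.sh
        ;;
    restart)
        $TOMCAT_HOME/bin/shutdown.sh
        sleep 5
        $TOMCAT_HOME/bin/startup.sh
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac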

Conclusion

We've only covered the basic Tomcat and Apache set-ups. Servlets and Java Server Pages are powerful tools for producing dynamic content for Web-based applications. Used together, Apache's fast static content delivery and near-infinite configuration options and Tomcat's flexible dynamic content delivery produce an open source solution that rivals the best commercial offerings. ■

About the Authors: Dean Wilson: Professional Developer, using Linux as a serious alternative to commercial offerings. Kim Hawtin: UNIX Systems Administrator, dabbling in all things network-related, preferably without wires.




CASE Tools compared

CRISIS MANAGEMENT FRANK HAUBENSCHILD

Tools for computer-aided software engineering are touted as a way to counter the software crisis. The question to be answered, though, is whether the time spent learning them is proportionate to the benefits.

Ever more powerful hardware, and the resulting possibility of solving ever more complex problems with the aid of software, led in the 1960s to the term 'software crisis' being coined. Software developers realised that development tasks could no longer be managed without the support of powerful tools. At the start of the 80s, CASE (Computer Aided Software Engineering) joined battle against the bogeyman of the software crisis.

Software development problems

Studies showed that about 50 per cent of the errors detected during a software development process occurred in the analysis and specification phase. A further 26 per cent came in the design phase, and only about 25 per cent of the errors found stemmed from faulty implementation. Obviously, especially in the initial phases of software development, people were not working with enough attention to detail. CASE tools offer, especially for these early phases of development, a

transparent and visual method that enables developers to view the system being created as a whole. This means they will not lose themselves in implementation details during the early phases of development. The CASE tools presented here use the Unified Modelling Language (UML) for the notation of software models. UML diagrams depict the relationships of object-oriented systems visually and thus increase understanding of the system before the actual implementation. But CASE tools offer more than just visual support for the development process. They can also be used for documentation purposes, offer the option of creating code entities from class diagrams and vice versa (forward and reverse engineering), and in roundtrip engineering modifications made in the source text act directly on the visual model.

End of the software crisis?

So does this mean that CASE tools are a powerful weapon against the much-complained-about



software crisis? Will all software projects now be completed by the promised deadline, with the realised system meeting customer requirements? Well, not exactly, no. Many developers see CASE tools as more of a blot on the landscape that hinders their creativity. For smaller projects with a few hundred lines of code this may well be true, but projects with over 10,000 lines of code and developers working in parallel can be very hard to control without CASE tools - so the CASE tools available under Linux will be looked at more closely below.

Class diagrams with Dia

Unlike the other test candidates, the GPL program Dia is not strictly speaking a proper CASE tool - it is really for drawing diagrams of all kinds. Its model is the commercial program Visio, well known in the Windows world. For hobby developers who want to create class diagrams and use cases, and do not need code generation, reverse engineering and the like, Dia is a good choice. It impresses with its simple and intuitive user guidance. Compared with the bytecode-interpreted tools Together and ArgoUML, its speed also stands out. Dia supports the following diagram types: UML (use case, class, sequence), ER (entity relationship), SADT, flow charts, networks and integrated circuits. New types of diagram can be added using simple XML files. The program loads diagram types in advance on start-up or dynamically, as required. Dia can load and store diagrams in XML and exports the formats EPS, SVG, WPG, CGM, PNG and TeX macros. XML files can either be saved directly as ASCII or compressed. Note: you can download the command line-oriented tool Dia2code, which converts class diagrams created under Dia into corresponding C++ or Java classes, from the fifth URL listed below. Overall, Dia is a stable tool which is good to use for the software development process on a small

scale. People who want to avoid a long learning curve, and do not need the function overkill of Rational Rose or Together, are well served. For professional use in software development, however, Dia is not suitable.
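As a rough illustration of the Dia-plus-Dia2code workflow mentioned above - the exact flags may vary between versions, so check dia2code --help:

# draw and save a class diagram in Dia, then generate source from it
# -t selects the target language, -d the output directory (assumed flags)
dia2code -t java -d src/ model.dia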

ArgoUML

ArgoUML is an open source project, completely implemented in Java. Thanks to this, the tool runs on any platform with a virtual machine for Java 1.2. Since it is bytecode-interpreted, though, the speed of execution is not exactly electrifying. ArgoUML meets the OMG standard for UML 1.3 and supports class, state machine, use case, collaboration, activity and object/component/deployment diagrams. Only sequence diagrams are not supported in the current version 0.8.1 - but these are planned for the next release. ArgoUML also supports the XML-based exchange format XMI, using it as its standard storage mechanism, and thus makes it


Figure 1: Dia in use case modelling.



possible to exchange model data with other UML tools, creating the basis for an open standard. For code generation, ArgoUML supports only Java. UML diagrams can be saved in GIF, Postscript, EPS, PGML and SVG formats. One plus point is the wide variety of setting options for print output. After downloading and unpacking the tarball from the second URL listed below, all the necessary jar files are in the current directory. As long as you have installed JRE 1.2 or higher, ArgoUML can be started with the command:

java -jar argouml.jar

[top] Figure 2: The clear GUI of ArgoUML.
[below] Figure 3: Together Control Center offers a wide variety of possible diagram types.

After starting, you will be looking at a nice, tidy program interface. The GUI of ArgoUML is split into four main parts. In the upper left corner is the navigation panel, in the form of a tree structure, via which you can access all previously created elements of the model. If you click on an element there, its properties will be displayed in the detail panel (bottom right), and the element itself (for example a class) will be selected in the master window - the

editing panel, top right. The detail panel itself is in turn divided into eight tabs - under Source, for example, there is a preview of the generated Java source text of the selected UML element. Developers who manage to-do lists with the aid of yellow Post-it notes, and can hardly see their monitor for notes, will be glad of the to-do panel in the bottom left corner. Here the developer can manage to-dos sorted according to priority, and thus has a constant overview of all items still outstanding. Apart from any to-dos you add yourself, ArgoUML also automatically adds to-dos to the list based on design criteria and analyses of the model - missing method or class names, for example. If you go to http://www.ArgoUML.org and follow the Tours link, you'll find a good introduction to the most important features of the program. But even without a tutorial, thanks to the intuitive user guidance, you will quickly find your feet and can then defend yourself against the bogeyman of the software crisis. Apart from the speed of execution and the lack of one or two features, ArgoUML makes a very good impression and can be recommended both for hobby developers and for the semi-professional domain. For the professional domain, though, it lacks code generation for additional languages (especially C++), as well as team support and reverse and roundtrip engineering.

Together Solo and Together Control Center

TogetherSoft offers its Together 4.2 product range in two versions. Both are bytecode-interpreted, which, as with ArgoUML, guarantees a sluggish rate of execution on ordinary hardware. TogetherSoft therefore recommends P-III systems with 500MHz and 512MB RAM. As the Java virtual machine, JDK 1.3 is required. Together supports class diagrams and the UML diagram types use case, sequence, collaboration, state, activity, component and deployment for modelling. Code generation can be done in Java and C++, and reverse and roundtrip engineering as well as team support are provided. A project wizard helps the developer to set up a new project; here, for example, target language and directory settings can be adjusted. Together Solo offers automatic documentation generation to HTML or RTF and supports the development of larger software projects via CVS. Together Solo also imports Rational Rose model files and exports UML diagrams as GIF or WMF. In addition, there are EJB support and forward and reverse engineering for sequence diagrams. Diagrams and UML elements can be linked to each other - classes can be linked to state diagrams, for example, to get a better overview of the complete architecture. Apart from the target



languages Java and C++, Together Solo supports the generation of IDL (Interface Definition Language) from class diagrams. The high-end product Together Control Center supports, in addition to all the aforementioned UML diagrams, EJB assembler and XML structure diagrams. It also excels with the option of creating ER diagrams, and offers JDBC roundtrip engineering for class and ER diagrams: amendments to ER diagrams take effect directly on the database schemata of the underlying DBMS. Another option is the direct import of existing relations from a database as ER diagrams. Via a dialog window, the necessary settings for database communication (server type, database name, host, port, username and password) can be made. Databases supported are Oracle 7.3.x/8.x, DB2, MySQL, MS SQL, Cloudscape, ODBC/Access 97 and SequeLink/Oracle. Together Control Center also includes a debugger for Java - and is thus maturing into a complete IDE. Overall, the Together products make a very good impression. In terms of user guidance and handling, Together leaves nothing to be desired and in this field, together with ArgoUML, is definitely ahead by a nose. Together is obviously intended for the professional domain because of its enormous range of functions - which is also underlined by the respectable price of around £2800 (inclusive of one year's support) for Together Control Center. Version 5.0 is launching just as we go to press.

Rational Rose

Since the end of March, the CASE tool Rational Rose, which stems from the Windows world, has been available for Linux.

What matters when it comes to CASE tools?

Depending on the size of the system to be realised and the number of developers involved, requirement profiles differ. A hobby developer who wants to write a little tool with just a few hundred lines of code will certainly not want to spend several thousand pounds on a high-end product. The list below shows a few requirements, depending on the domain of application.

Main CASE tool features

Minimum requirements:
• Support for common UML diagram types
• Simple, intuitive user guidance
• Easy-to-use diagram layouter

Semi-professional requirements:
• Reverse engineering
• Roundtrip engineering
• Flexible documentation creation (for example in HTML)
• Code generation (support for several target languages)

Professional requirements:
• Database support
• Team support for larger projects
• Open architecture for potential expansions
• In heterogeneous environments, support for as many platforms as possible




After registering, you can download the roughly 80MB TGZ file. You will then receive, via email, a 15-day licence key and can test the program without restrictions. Rational specifies Red Hat Linux 6.2 with kernel 2.2.12.20 as the platform. After unpacking, Rose is installed with the installation script rs_install; the binary rose starts from the directory ./bin. Developers who have already worked with Rose under Windows will immediately feel at home, because the Linux GUI matches the one from the Windows world. In comparison with ArgoUML and Together, however, the user guidance and handling lag behind - many features and setting options can only be found after a long search. For example, the source text generated from the diagrams cannot be

viewed directly, but has to be created manually by means of non-intuitive dialogs. Of the CASE tools in the test field, Rose supports the most languages: Java, C++, Ada 83, Ada 95 and CORBA IDL, plus DDL for database applications. Rose offers both roundtrip and reverse engineering. While reverse engineering a small Java sample project, Rose abruptly crashed, despite a correctly set CLASSPATH, without an error message - this is something the manufacturer must fix. Diagram types supported are class, use case, collaboration, sequence, component, state chart, deployment and activity. For notation, UML, Booch and OMT can be used. As is to be expected for a product aimed at the professional domain, Rose has multi-user capability

Figure 4: It is only the look-and-feel of KDE that hints at Linux – otherwise Rational Rose’s Linux GUI matches the one from the Windows world.

CASE tools in overview

Product: Dia 0.86
Manufacturer: Alexander Larsson
Internet: www.lysator.liu.se/~alla/dia
Price (approx.): free (GPL)
Diagram types: use case, class, sequence, entity-relationship
Code generation: only via Dia2code (C++, Java)
Team support: no
Reverse engineering: no
Roundtrip engineering: no
Other: expandable by XML

Product: ArgoUML 0.8.1
Manufacturer: University of California
Internet: www.ArgoUML.org
Price (approx.): free
Diagram types: use case, class, state, deployment, activity, collaboration
Code generation: Java
Team support: no
Reverse engineering: no
Roundtrip engineering: no
Other: open source, XMI support, needs JRE 1.2

Product: Together Solo 4.2
Manufacturer: TogetherSoft
Internet: www.togethersoft.com
Price (approx.): £1600
Diagram types: use case, class, state, deployment, activity, collaboration, sequence
Code generation: Java, C++, IDL
Team support: yes
Reverse engineering: yes
Roundtrip engineering: yes
Other: Version 5.0 just out

Product: Together Control Center 4.2
Manufacturer: TogetherSoft
Internet: www.togethersoft.com
Price (approx.): £2800
Diagram types: use case, class, state, deployment, activity, collaboration, sequence, entity-relationship
Code generation: Java, C++, IDL
Team support: yes
Reverse engineering: yes
Roundtrip engineering: yes
Other: Java debugger; Version 5.0 just out

Product: Rational Rose 2001
Manufacturer: Rational
Internet: www.rational.com
Price (approx.): £5500
Diagram types: use case, class, state, deployment, activity, collaboration, sequence, entity-relationship
Code generation: C++, IDL, Java, Ada 83, Ada 95
Team support: yes
Reverse engineering: yes
Roundtrip engineering: yes
Other: look and feel matches Windows version



and supports developer groups. Rose creates a private working area for each developer, in which each has an individual view of the whole model. Modifications are thus restricted to the private working area until they are checked in to the CMVC (Configuration Management and Version Control) system. Models created with Rose can be put on an intranet or the Internet as HTML files via the Web publisher; diagrams are automatically integrated as JPEG graphics, so the complete documentation for an API, for example, can be placed on the Net. Using a preview function, the result can be approved before the actual HTML generation. Netscape Navigator 4.74 is included as the Web browser. As is to be expected, model data stored under Windows (MDL files) can also be used under Linux without any problems. Overall, Rose for Linux gives a stable impression - apart from the problem with reverse engineering of Java. In terms of speed it leaves the bytecode-interpreted CASE tools ArgoUML and Together far behind. Because of its enormous range of functions and the wide variety of platforms supported (Windows, Sun Solaris, HP-UX, AIX, Irix, Compaq Tru64 Unix), Rose is recommended for the pro. But a minus point for the product is its high price of around £5,500 per commercial single user. Here's a thought for the manufacturer: a free version for non-commercial use would be a good idea.

Conclusion

Each of the CASE tools presented here has both advantages and disadvantages - whether it's one or two missing features or ergonomic weaknesses in the interface. ArgoUML and Together are in the lead in the field of operability. Together and Rose are restricted by the very cost of their licences to the professional domain. The free ArgoUML scarcely needs to hide behind these programs, though for professional use it is certainly lacking a few features. All in all, CASE tools should be used more frequently in development: once they get used to them, developers will see them not as a blot on the landscape, but rather as helpful colleagues. ■

Info

Dia: http://www.lysator.liu.se/~alla/dia
ArgoUML: http://www.ArgoUML.org
TogetherSoft: http://www.togethersoft.com
Rational: http://www.rational.com
Dia2code: http://dia2code.sourceforge.net ■




SGI XFS on SuSE 7.1

CRASH PROOF HARALD MILZ

Version 1.0 of the XFS journaling filesystem has been available for download on the SGI website since 1 May - including as a patch for the 2.4.2 kernel. The obvious thing would seem to be, therefore, to try building it into the SuSE Linux 7.1 kernel.

UNIX users have long been used to not being able to simply switch their systems off. However, failsafe filesystems which work with a log, continuously recording any changes in the same way a database does, have been available for a few years. The Reiser filesystem was introduced to the Linux world with SuSE 6.2, after Stephen Tweedie's Ext3 filesystem had already been available in a highly experimental and unstable form since the end of 1999. SGI announced months ago that it would be porting XFS, known from the Irix environment, to Linux, and for the last few weeks it has been available as source code to everyone.

What makes XFS interesting are a number of features not previously available, or at least not in this combination:
• full 64-bit support
• quotas
• extended attributes and ACLs
• a maximum file size of 16TB on 4K pages and 64TB on 16K pages. If the block device layer has been converted to 64 bit, files up to a size of 9 exabytes (9 x 10^18 bytes) are addressable.
• xfsdump and xfsrestore for filesystem backup. Usefully, dumps created on Irix can be restored on Linux and vice versa - despite different endianness.



• A data management API (DMAPI/XDSM) allows the implementation of hierarchical storage management systems without any further kernel modifications.
• Using xfs_growfs, filesystems can grow while mounted (in fact, they have to be mounted to be able to grow). The number of inodes can be changed during operation.
• The log can be situated in a separate partition or a different logical volume, although this will only improve throughput if the log is kept on a different physical disk.

Patching the kernel

In order to put XFS on their machine, the user must first apply the two patches linux-2.4-xfs-1.0.patch.gz and linux-2.4.2-core-xfs-1.0.patch.gz to the kernel in the normal way. The SGI website recommends using a standard kernel from ftp.kernel.org. RPMs and an installer are available for Red Hat. However, our point of interest is the comparison with Reiser FS, up to now the only popular log-based filesystem - and it therefore seems sensible to test it with the SuSE 2.4.2 kernel. What could be awkward is the fact that this kernel already contains a whole variety of patches which make further modifications impossible, or at least difficult. No need to worry though - apart from one reject for one makefile, the core patch applies without any problems. The reject can be safely ignored; the corresponding patch is already contained in SuSE's 2.4.2. The page buffer and XFS options must be activated as part of the kernel configuration, and possibly also DMAPI. All other XFS options are primarily for error detection and are not required in our case. However, the core patch does produce one stumbling block. The top makefile suddenly contains the line:

CC = $(CROSS_COMPILE)gcc \
     -V egcs-2.91.66

This call will fail unless you happen to have that version of egcs installed. SuSE 7.1 normally comes with release 2.95.2, so this line should be commented out. This is the most obvious example of Red Hat imitation.
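For reference, applying the two patches 'in the normal way' means something like the following sketch, assuming the patches were downloaded to /tmp and the kernel source lives in /usr/src/linux:

cd /usr/src/linux
zcat /tmp/linux-2.4.2-core-xfs-1.0.patch.gz | patch -p1
zcat /tmp/linux-2.4-xfs-1.0.patch.gz | patch -p1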

Enormous XFS module

Finally, the kernel is built as usual with:

make dep bzImage modules modules_install


Listing: Kernel messages during mount

page_buf cache Copyright (c) 2000 Silicon Graphics, Inc.
XFS filesystem Copyright (c) 2000 Silicon Graphics, Inc.
Start mounting filesystem: lvm(58,14)
Starting XFS recovery on filesystem: lvm(58,14) (dev: 58/14)
Ending XFS recovery on filesystem: lvm(58,14) (dev: 58/14)

Log space required

seneca:/mnt # df /dev/vg01/xfstest
Filesystem          1k-blocks      Used Available Use% Mounted on
/dev/vg01/xfstest      650560     13752    636808   3% /mnt
seneca:/mnt # du -s -k .
13368   .

Newly created 64MB filesystem

Filesystem          1k-blocks      Used Available Use% Mounted on
/dev/vg00/testlv        65528     32840     32688  51% /mnt
/dev/vg01/xfstest       60736        80     60656   1% /mnt

Module sizes (lsmod):

xfs            403600   0  (unused)
xfs_support      8400   0  [xfs]
pagebuf         23040   0  [xfs]

As you can tell from the number of pages it occupies, the XFS module itself is pretty extensive. In order to be able to actually create and use a filesystem, the tools in the xfsprogs package have to be compiled and installed in /usr/local. e2fsprogs-devel must be installed before you can run configure, and two lines must be commented out in include/liblvm.h:

/*
#include "lvm_log.h"
#include "lvm_config.h"
*/

If you like, you can build an RPM package from the xfsprogs.
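With xfsprogs in place, creating and mounting a test filesystem is a one-liner each - a sketch, using the logical volume that appears in the listings:

# create an XFS filesystem on a spare volume and mount it
mkfs.xfs /dev/vg01/xfstest
mount -t xfs /dev/vg01/xfstest /mnt
df /mnt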

Man page muddle

When trying to access the XFS man page you will discover an annoying name clash: the X font server man page appears instead. The XFS man page is actually reached with man 5 xfs.

Filesystem performance is most easily measured using the tried and tested filesystem benchmark Bonnie. This benchmark tests I/O access speed with a large file. First of all it performs a character-oriented write, then it repeats the operation (rewrite) and finally it performs a block-oriented write. Reading is character-oriented to start with, followed by block reads. To finish off, there are random seeks.

make modules_install creates a new directory, /lib/modules/2.4.2-XFS, for the modules. This is also where the three new modules pagebuf.o, xfs_support.o and xfs.o are located. After updating /etc/lilo.conf, calling lilo and rebooting, the modules can be loaded:



Bonnie results

                    -------Sequential Output--------   --Sequential Input--   --Random--
                    -Per Char-  --Block--  -Rewrite-   -Per Char-  --Block--  --Seeks---
Machine       MB    K/sec %CPU  K/sec %CPU K/sec %CPU  K/sec %CPU  K/sec %CPU  /sec %CPU
XFS-IDE      512     2933 99.5  16210 28.7  5463 15.1   2755 89.1  15384 19.7 222.7  3.7
Reiser-I     512     2832 99.4  18695 65.1  5279 16.0   2783 88.9  16348 23.4 218.0  4.1
EXT2-IDE     512     2919 98.6  14652 26.7  4045 12.7   2643 83.3  10546 11.5 159.9  2.3
XFS-SCSI     512     2932 99.5  15241 28.7  5529 14.2   2795 90.8  15380 20.0 215.8  3.6
Reiser-S     512     2838 99.5  18533 61.2  5360 15.1   2733 87.6  15412 21.8 209.4  3.5
EXT2-SCS     512     2967 99.4  22915 38.8  5251 12.6   2670 84.2  15018 16.5 213.0  2.6

Info

XFS homepage: http://linux-xfs.sgi.com/projects/xfs/
Further comparison tests: http://slashdot.org/developers/01/05/10/1747213.shtml ■

Bonnie should be included in most distributions, so anyone ought to be able to use it. The results (see box ”Bonnie results”) relate to the following test environment: AMD K6-II/350 on Tyan S1590S board, VIA Apollo chip set with VT82C586B IDE controller. The SCSI controller was a Symbios Logic 53C875 with a SYM53C8XX driver. Hard disks: IBM DJNA-352030 (EIDE, UDMA-33), IBM DNES-309170W (Fast-20 Wide SCSI). In each case Bonnie was running with a 512MB test file, about double the size of the main memory, to exclude possible cache effects. The first three lines relate to the EIDE disk, the last three to the SCSI disk. Reiser FS appears to perform better in terms of speed, particularly when writing large blocks, but at the price of a considerable CPU load. If an actual application is CPU intensive and wants to write large blocks, it is entirely possible that a system using XFS would provide a greater throughput. On the other hand, XFS seems to have a slight advantage in the random seeks - especially useful when running an OLTP database. There is hardly any difference between the other variables - the variations are within tolerance levels.

Lame duck on IDE: Ext2

A comparison with the venerable Ext2 on the same test partitions also produces significant results. While quite markedly lagging behind on the IDE disk, Ext2 does play one or two trumps on the SCSI disk, and is considerably faster when reading blocks, but not when writing them, contrary to what you might expect. It could not be determined why Ext2 was so much slower on the IDE disk, but the fact was confirmed through additional tests.

As a little endurance test, XFS was subjected to eight hours of furious copying activity with a multitude of small files. It survived without the slightest problem. Another interesting comparison is to see how long it takes to delete firstly lots of small files and secondly a really large one (filled from /dev/zero). The purpose was primarily to see how quickly the filesystem deletes indirect and multiple indirect blocks. Reiser FS, with its tree structure, claims to be much faster than the bitmap-oriented Ext2. The result can be seen in the diagram:

[Diagram] Deletion times - with the loss of a whole file tree, XFS is slow compared to Reiser FS and Ext2; with large files it is much faster than the rivals.

             File tree   Large data file
XFS             63 s          3.2 s
Reiser FS       20 s          5.2 s
Ext2            39 s          6.4 s

Finally, the question of how failsafe XFS is. A simple test is a find|cpio copying the kernel source tree into the XFS filesystem, and pressing reset halfway through the process. The subsequent mount shows that the filesystem takes less than one second for the repairs (see "Kernel messages during mount").
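A sketch of that crash test, assuming the kernel sources are in /usr/src and the XFS filesystem is mounted on /mnt:

# copy the kernel source tree onto the XFS filesystem,
# then press reset halfway through
cd /usr/src
find linux | cpio -pdm /mnt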

Space saving: XFS

Compared to Reiser FS, XFS uses considerably less space for its log. If you create the filesystem with the default values, the log is also situated on the block device, but it still requires far less space (Listing, section "Log space required"). With Reiser FS, even a newly created filesystem permanently occupies about 32MB; the underlying logical volume has a size of exactly 64MB (see the third section of the Listing). Another feature: mkfs.xfs notices if a logical volume already contains a formatted filesystem (not only XFS - Ext2 is also recognised), and requires the option -f to carry on formatting regardless. Further endurance tests are needed to see whether XFS is suitable for everyday use - so far it would appear that XFS lives up to SGI's promises. At the time of writing there wasn't any news on whether SuSE is working on an official patch for SuSE 7.1 - requests for information regarding this question drew a blank. Possibly it will take until version 7.2 in the middle of this year before anything happens in this respect. ■



Object-oriented Tcl

OBJECTIVITY CARSTEN ZERBST

Tcl is the IT industry's best kept secret - and very much alive. The object-oriented extension with the unwieldy name of [incr Tcl], which we introduce at the start of our Tcl series, has contributed a great deal to Tcl's popularity.

Listing 1: Class definitions with methods and variables

package require Itcl
namespace import itcl::*

class Point {
    public variable x
    public variable y

    constructor {_x _y} {
        set x $_x
        set y $_y
        puts "constructor: $this, $x:$y"
    }
    destructor {}

    public method move {dx dy} {
        set x [expr {$x + $dx}]
        set y [expr {$y + $dy}]
        return [list $x $y]
    }
}

With this article, Linux Magazine is starting an occasional series of reports on news from the Tcl world. We will be introducing tools and extensions that enable particularly simple or elegant Tcl solutions. As the development model of Tcl has changed recently, this first part starts with the new additions and amendments; following that, we introduce the object-oriented extension [incr Tcl].

For years the scripting language Tcl has been labelled "the IT industry's best kept secret". Although the Tcl programming community is growing, this self-deprecating description is quite accurate. While its competitors Perl and Python are on everyone's lips, Tcl ploughs on in obscurity without much fuss. There are reasons for that. For one thing, there is still no equivalent to Perl's CPAN or Python.org, so the search for Tcl extensions or documentation for a specific problem can be quite difficult. The 'The story so far' box examines the current state of affairs, how it came about, and what the near future is likely to bring. The many changes in recent months have meant that many extensions and a lot of information have become even harder to find. The developers' information page is now at ActiveState; it contains many links to documentation, extensions and programs. Another option is the Tcl foundry at Sourceforge. For many everyday problems it's also worth having a look at the Tcl'ers Wiki. If you're still drawing a blank, the newsgroup news://comp.lang.tcl is the last resort for any questions on Tcl. One old criticism of Tcl that keeps reappearing, and not just on the Tcl newsgroup, is its lack of object orientation. Fortunately, this is easily remedied using an extension - the sophisticated and very popular OO extension [incr Tcl], which is the subject of this month's article.

Doubleplus good: [incr Tcl]

With scripting languages you frequently wish for the blessings of object-oriented development: classes, inheritance, data encapsulation and so forth. Although Tk does already feel pretty object-oriented, Tcl does not support any OO features


apart from namespaces. This is where the extension [incr Tcl] comes in. Not only is its name a play on C++ (the Tcl command incr is the equivalent of C's ++), but [incr Tcl] aims to extend the base language with OO features, just like C++. The extension is available from Sourceforge, but it is also included in most Linux distributions. Prior to use, the extension must be loaded into a tclsh or wish; this is done with the command package require Itcl. All [incr Tcl] commands are defined in the namespace itcl: you can either use their full names or import them into the current namespace. In the following text we will be describing programming with this extension, using a type 1 font editor as an example.

Class-wise

Classes are the basic components of object-oriented programming; we will be assuming a certain degree of familiarity with the concept. Before objects can be created and used, it is necessary to define a class. The class definition determines which variables and methods exist, and what their tasks are. To define class variables it is sufficient to list their names with the command variable. Methods are defined in a similar way to normal Tcl procedures: the name is followed by the input parameters and then by a body containing the actual source code. Within this body, class variables can be accessed directly. The variable this, containing a reference to the current object, is also available there. In addition, two special methods can be defined: constructor and destructor. They are invoked when objects are created or deleted. Their structure is again similar to that of normal procedures, but the destructor has no input variables. Our example in Listing 1 shows the definition of the class Point. Each point has an x and a y coordinate set by the constructor. Point objects can also be moved. The classes defined with [incr Tcl] can be used in a similar way to Tk widgets. As you can see in Listing 2, a new object is created using classname objectname ?parameter?. The object name can either be specified explicitly or



assigned automatically using #auto. The object is now available as a new command under this name, with the syntax objectname method ?arguments?. The methods cget, configure and isa are available for each object. The first two are used to return and set variables defined as public. The method isa checks whether an object is a member of a particular class. We also have the method move that we defined ourselves. The commands delete object objectname and delete class classname delete objects or classes.

Family ties

The outline of a type 1 font consists of straight lines and curves. Each straight line is defined by two intersections, each curve by two intersections and two checkpoints. In order to be able to distinguish between the two point types, each one is assigned its own class. In the example in Listing 3, both classes inherit from the Point class using the keyword inherit. The constructor passes the required variables to the base class. Additional variables and methods can be added in the definition of the derived classes - such as the method coordinates in the class Intersection in Listing 3. So far we are lacking the ability to represent the objects on a canvas. Normally this method would be placed in the Point class. In order to demonstrate


multiple inheritance, we have defined a special class Draw in the example. As you can see from Checkpoint, multiple inheritance is also pretty simple to use. However, it is not altogether without problems. Just ask yourself the question: "What happens if two base classes each contain a method with the same name?" In [incr Tcl] the method from the first base class on the inherit list is used, which is not necessarily what you want. Another problem that can occur with multiple inheritance is diamond inheritance (see Figure 1). This case is not supported in [incr Tcl]. A type 1 font does not only consist of individual points, however - we are still missing outlines. This is an opportunity to introduce another feature of object-oriented programming: delegation. This describes a process where a class assigns tasks to objects of another class. In our case a straight line is defined by two intersections and does not have to concern itself with their representation and data storage. Whenever the line requires intersection coordinates it can request them from the intersection objects. The necessary intersections are stored in the Line class (Listing 4).

Figure 2: The type 1 editor is based on the program excerpts in this article.

Mine!

John Ousterhout, father of Tcl. Still seems to be having problems with ”Tcl/Tk for Dummies”.

As part of data encapsulation a class should be able to determine who can access its variables and methods. [incr Tcl] supports three different levels of access:

The story so far

John Ousterhout, the spiritual father of the language, was also its main developer for many years. He designed its basics while a professor at Berkeley, before leaving for Sun Microsystems with a team of developers. There, the team created the bytecode compiler and took the first steps towards Unicode. Before internationalisation was completed with Tcl 8.1, Ousterhout left Sun to set up his own company, Scriptics. Scriptics' aim was to sell development tools (Tcl Pro) and professional support for Tcl; this was meant to finance further development of the language, amongst other things. Releases 8.1 to 8.3 were developed under the auspices of Scriptics. However, sales of Tcl Pro were apparently not sufficient to finance full-time Tcl developers - the classic dilemma of open source companies; Python has had similar problems recently. Scriptics was renamed Ajuba Solutions, its new emphasis being the sale of B2B (business to business) products for integrating databases into the Internet using XML. As a result, Ajuba's developers seem to have got to know XML too well - the whole company was taken over by Interwoven, a large manufacturer of B2B software, which was not interested in developing Tcl any further. So what is the situation for Tcl about half a year later? Ousterhout had already moved the home of the Tcl sources to Sourceforge in the days of Ajuba. Apart from the interpreter, you can find about 200 other projects there that develop Tcl tools or extensions, including the Tcl Pro suite of tools. Since

Interwoven did not want to market Scriptics' crown jewels, they generously released them as open source. However, the issue of where the sources were stored was only part of the problem. It was also necessary to transfer development control from the hands of a benevolent dictator to the development community. The solution was the Tcl Core Team (TCT). Tcl users decided on the Internet who should be a member of the TCT, and thereby determined the direction of future development. Development suggestions (mostly including the solution) are submitted as Tcl Improvement Proposals (TIPs). After a discussion on the public mailing list, TCT members vote on the proposals. Thanks to this process, development has speeded up significantly. However, the loss of full-time Tcl developers paid for by Scriptics has, of course, left a large gap. This is where a company called ActiveState stepped in. They are already known to users of Perl and Python for support and porting; with Tcl they now cover all three of the major scripting languages. In the meantime, ActiveState has also picked up two prominent Tcl developers, namely Jeffrey Hobbs and Andreas Kupries. ActiveState will be offering a "batteries included" distribution in the foreseeable future, putting an end to the constant need to search for suitable extensions. At the moment they are working on Tcl version 8.4, a new alpha release of which should be available on Sourceforge by the time this issue goes to print. On the TCT mailing list and in the newsgroup comp.lang.tcl there are already first rumblings about Tcl 9.0. Overall, we can expect a lot of activity.




Listing 2: The interactive Point class

% Point p1 10 10
p1
% p1 cget -x
10
% p1 configure -x 200
% p1 cget -x
200
% Point #auto 20 20
point0
% point0 move 10 20
20 30
% point0 isa Point
1
% point0 isa Oink
0
% delete object p1 point0
% delete class Point

Info

Developer's information page: http://tcl.activestate.com
Tcl foundry at Sourceforge: http://sourceforge.net/foundry/tcl-foundry
Tcl'ers Wiki: http://mini.net/cgi-bin/wikit/0.html
Tcl newsgroup: news://comp.lang.tcl
[incr Tcl] at Sourceforge: http://sourceforge.net/projects/incrtcl/
Combat, a Tcl CORBA extension: http://www.fpx.de/Combat/
The Tcl/Tk homepage: http://www.tcltk.org/
The type 1 editor: http://www.tu-harburg.de/~skfcz/tcltk.html

• public: The method or variable is accessible to anyone.
• protected: Protected elements can only be used within the class itself or in classes derived from it.
• private: Only accessible within the class itself.

Up to now, the coordinates of a point could be modified at will using configure, without any validation of the values being performed. Once the coordinates have been defined as protected, they can only be changed using the method move:

class Point {
    protected variable x
    protected variable y
    # ...
}

Derived classes can still access the variables directly, without depending on specific methods. If the coordinates had been declared private, however, even the derived classes would not be able to access them directly.

All in all

[incr Tcl] is a simple way of defining classes, including inheritance, in which variables and methods can be protected against access at different levels. Its features - including multiple inheritance - are modelled on C++. In fact, [incr Tcl] classes can even inherit from C++ classes. A particularly good example of this is the CORBA extension Combat by Frank Pilhofer.

The author Carsten Zerbst is a member of staff at Hamburg Technical University. Apart from researching service integration on board ships he also investigates Tcl in all its forms.

Figure 1: Valid and invalid relations between classes in [incr Tcl]. Inheritance, multiple inheritance and delegation are permitted, but diamond inheritance is not.


Using objects will be familiar to anyone who has worked with Tk before. The features of [incr Tcl] can therefore be used without too much additional learning effort. Namespaces, which were first developed in [incr Tcl], found their way into normal Tcl several years ago, which is why we have not dealt with them here. Additional literature on [incr Tcl] can be found at http://www.tcltk.org/. The examples above, and an editor derived from them (Figure 2), can be found at http://www.tu-harburg.de/~skfcz/tcltk.html. However, at the moment there isn't any source text to represent the letters on a canvas - we will be looking at the options of the canvas widget in the next instalment of this Tcl series. ■

Listing 3: Inheritance and multiple inheritance

class Draw {
    constructor {} {}
    destructor {}
    public method draw {} {
        if {[$this isa Checkpoint]} {
            # draw Checkpoint
        } elseif ...
    }
}

class Intersection {
    inherit Point
    constructor {_x _y} {
        Point::constructor $_x $_y
    } {
        # constructor for Intersection
    }
    public method coordinates {} {
        return [list $x $y]
    }
}

class Checkpoint {
    inherit Point Draw
    constructor {_x _y} {
        Point::constructor $_x $_y
        Draw::constructor
    } {
        # constructor for Checkpoint
    }
}

Listing 4: Delegation

class Line {
    private variable k1
    private variable k2

    constructor {_k1 _k2} {
        foreach k [list $_k1 $_k2] {
            if {![$k isa Intersection]} {
                error "Node $k is not an intersection!"
            }
        }
        set k1 $_k1
        set k2 $_k2
    }
}



Helping yourself in the quest for solutions

TO NAGGING PROBLEMS COLIN MURPHY

Things go wrong - sometimes they were never right - but with a bit of effort maybe you can fix that gripe, or at least learn to live with a foible by using a workaround. Here is a little reminder of the various sources of solutions to nagging problems for the Linux user.

[below] Websites can be helpful

Linux started out as, and hopefully will always be, a collaborative effort. People from around the world were able to bring together their best efforts and develop something that many like to think has become greater than the sum of its parts. Support can also be thought of as a collaborative resource: information, understanding and just plain, good old-fashioned help can be found in a variety of places. So just because you have some problem to solve doesn't mean that you

should feel isolated because of it. Someone, somewhere out there is more than likely to be able to help you, or at least be able to share in your suffering and offer sympathy - which is better than nothing. The trick is to be able to find them. The internet played a huge part in allowing the development of Linux, and it is true that a lot of support can be found there as well.

The Web page

Each distribution seems keen to develop its own community, which can be seen from their web pages. Mandrake are developing their 'Mandrake Expert' web portal, where you can pose questions to 'experts' who have put themselves forward as being willing to help on their specialist subjects, while SuSE have their 'Support Database', which allows you to search for solutions to problems that people have previously suffered with. Other distributions offer similar resources, so you should always check the support options as your first port of call. Non-distribution-specific information can also be easily found on the web. Hardware compatibility has never been something that you can assume with Linux, and many web pages are now set up to help the unsuspecting through this minefield. Compatibility databases like www.LinuxHardware.net will help you decide if that second-hand scanner you've seen is going to work on your machine, while www.linuxprinting.org will help you locate that driver which will let you get your hands on the printhead of your GDI printer, so you can change the ink cartridge without having to take the case apart.




The mailing list

Almost all of the distribution manufacturers run one or more mailing lists which you can call upon for support, and these can be a valuable resource. A mailing list is an email-based discussion group: a message sent to the mailing list is copied to all of the people who subscribe to that list. Some lists can be very busy, sometimes with hundreds of messages a day, only a small percentage of which will be relevant to any one person's interests - this is especially true of lists which are quite general in their subject matter, newbie@linux-mandrake.com for instance. To get around this, some mailing lists have a more defined subject area; SuSE runs a list just for discussion about Lotus Domino, for instance. In most cases you also get the chance to search the archives for past solutions to your problem. Mailing lists are not just the domain of Linux distributors: applications often have mailing list support, and other specific areas of interest may have mailing lists dedicated to them. For distribution-specific mailing lists, a search from the distributor's web page will be the best way to find what's available; for something more subject-specific, a web search through something like Google on "Subject" + "mailing list" will usually bring useful results. Busy, or 'high bandwidth', mailing lists can be a cross to bear, but they do provide you with the latest news of developments, of problems just discovered or of solutions just found. They are also very good at providing a sense of community and of the spirit that goes with it.

Internet relay chat

If time is of the essence then IRC might be your support solution. Web pages can take some time to trawl through to find just the snippet of information that you require, and you might have to wait a day or more for a reply from a mailing list. It does depend much more on luck than judgment, though,


as it relies completely on who is about at the time. Simply connect your IRC client - I like XChat, just one of many - to a server like irc.openprojects.net, join one of the channels like #LinuxHelp and ask your question. A lot depends now on your conversation skills. Expressing your problem in unfamiliar terms can be a struggle, but most people are patient, and some are quite skilled in extracting the information from you that they need to solve your problem. It can be fun as well - it's quite nice to find yourself in a position where you can answer other people's problems, making you realise that maybe you are about to outgrow your 'Newbie' handle. It has made me reconsider my ability to solve my own problems, occasionally with some success.

[left] Beware of high bandwidth lists
[right] XChat in action

Magazines

Most magazines will have some kind of online forum, or maybe their own IRC #room, in which to pose your problems. Letters pages must be the slowest route to solving a problem, but maybe the topic has been covered in the past, in letters or feature articles, so keeping a magazine library can be a useful asset.

Your local user group

LUGs are a valuable asset to the Linux community and you really should try to make use of them if at all possible. This magazine has a list of user groups, and you should see if there is one within striking distance of you. If not, maybe the potential is there for a group - it just needs a nucleus to form around, and maybe you could be that nucleus and start your own group. The rewards you get from a LUG depend on the amount of effort you put into it: you do need to ask your questions and make your presence known, which is easily achieved amongst friendly company. Some LUGs have the facilities for you to bring your own equipment along, so that you can go through your problems first hand. An ideal solution to any problem. ■



DIY Recovery CDs

SPEEDY RECOVERY BERNHARD BABLOK

Pulling an image off the hard disk and, if necessary, writing it back to the disk without any problems - even the Linux distributions which fit onto one diskette come with all the resources to do this. This article shows the few commands which are necessary.

What gave rise to this whole story was a friend whose Windows operating system went on strike after his wife had installed a children's game. At this point, it is only fair to point out that this was due neither to the fact that a woman had loaded it, nor to the fact that it was running under Windows. Even under Windows neat installation routines can be written, but once the system has been corrupted, usually the only remedy is re-installation. Under Windows this is as simple or complicated as it is under Linux, so it's not a job which can be performed by real computer amateurs. A remedy is provided by the little project proposed here, with the aid of which even beginners can create a self-booting CD which will, after asking "Do you really want to ...", make the system workable once more. Configuration work no longer comes into it.

For all those who often install and compare software, such a solution is interesting too, because it is a simple way to guarantee identical initial conditions. And those who feel that they would happily shoot their system dead can also be helped by it. Anyone who has to look after educational PCs has probably already implemented a similar procedure in order to get round the constant installation orgies. First off, a warning: a recovery CD is no substitute for data backup. The procedure described here depends completely on the hardware - if the hard disk is replaced, for example, the CD is usually unusable. Another, somewhat more demanding approach (which will be discussed at the end) does get round this problem. But there again, the emphasis lies on the restoration of the system, not the user data.



Create an image

In order to create an image of a hard disk, the disk must of course not be mounted. That leaves two options: install the hard disk in a second computer, or boot from diskette or CD - either with a mini-distribution or a special boot CD - and back up the image via the network. With the first variant the creation of the image goes somewhat more quickly, but opening up the case and screwing about with the hardware is not always desirable. The second variant requires a network card supported by Linux. This could even be an old ISA NE2000 card or a cheap PCI clone for around £10. One very useful single-disk version of Linux is Tomsrtbt, which packs almost everything the heart could desire onto one oversized diskette. It can be downloaded via http://www.toms.net/rb/ or a mirror. It is also very simple to adapt it to your own requirements and recompose the diskette after making your own modifications, since the necessary scripts are also present. Once the computer has been rebooted, the image is created using the following command:


# dd if=/dev/hda bs=... count=... | \
  rsh -l burner myhost \
  "bzip2 -c > /home/burner/image/hda.bz2"

The command assumes that an image of the first hard disk on the IDE adapter is to be pulled. Under SCSI that would be the hard disk /dev/sda. Similarly, the designations for additional hard disks must be altered. The command rsh is normally frowned upon, but the computer with the CD burner must in any case be physically accessible, which is why the security loophole opened by rsh can be ignored here. For the above command to function, a user burner must exist on the computer myhost, allowing remote access via a suitable entry in /home/burner/.rhosts:

$ cat .rhosts
floppy.bablokb-local.de root

floppy is the hostname of the computer booted from the diskette. The dd command reads out the hard disk. The necessary parameters bs= (block size) and count= can be determined using fdisk -l. On a test computer, an old laptop with a 2.1GB hard disk, this showed that the disk has 525 cylinders of 8064 x 512 bytes each. This means the dd command reads:

# dd if=/dev/hda bs=8064b count=525 | ...

The character b here stands for 512 bytes. Additional figures stand for other factors (which can be viewed via dd --help). Here, 525 blocks 4MB in volume have been read out. Since 525 = 5 x 5 x 7 x 3, an equivalent alternative would be the block size 56448b (7 x 8064 = 56448) with the count 75.
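For orientation, the fdisk output for such a disk looks roughly like this – the geometry shown here is invented for illustration, and both the values and the exact wording will differ on other systems:

# fdisk -l /dev/hda

Disk /dev/hda: 128 heads, 63 sectors, 525 cylinders
Units = cylinders of 8064 * 512 bytes

The bytes per cylinder give the block size (8064 × 512 bytes, written as bs=8064b), and the cylinder count gives count=525 – exactly the dd call shown above.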

The system guard showing memory and CPU load. The drop in the middle was while writing zeros.




Simple inefficiency

Creating the image via the aforementioned command is simple, but inefficient and very slow. On the other hand, it is valid indefinitely: it functions regardless of how many operating systems are on the hard disk, and regardless of the file systems used. It is precisely the latter which is increasingly a game of chance under Linux. ReiserFS, Ext3 and JFS, each with and without logical volume manager or software RAID, are just a few of the more recent developments causing more and more problems for classic rescue disks and backup programs. In the figure, the KDE 2 system monitor can be seen on the destination computer during the creation of the image; the lower part shows the bytes received and the upper part the CPU load. Notice the gaps that appear in the network traffic. These occur because dd is either reading or writing: whenever a new block is being read from the hard disk, the network and the destination computer take a break. Otherwise the slowest link in the chain is bzip2, which processes the arriving data in 900k blocks. If the sending computer is faster than the destination computer, it makes sense to compress the data before sending. The whole image creation in this case took more than an hour, even though the destination computer, at 700MHz, is comparatively fast. More on this figure later.

Restore

Once the image has been created, it goes onto a CD; how that works is explained in the "Creating the recovery CD" box. A restore then works, after booting locally from the floppy, with the following two commands:

# mount /dev/hdc /mnt
# bunzip2 -c /mnt/hda.bz2 > /dev/hda

Of course, a restore over the network would also be possible:

# rsh -l burner myhost \
  "bunzip2 -c /home/burner/image/hda.bz2" > /dev/hda

In the last case it is vital to make sure that the output redirection is not inside the quotation marks, otherwise the hard disk on myhost will be overwritten.

Creating the recovery CD

The recovery CD is created in the usual way under Linux via Mkisofs/Cdrecord. The file structure appears as follows:

build
|-- hda.bz2
`-- boot
    `-- hal91.img

The command

# mkisofs -b boot/hal91.img -c boot/hal91.cat -o reccd.iso build

creates the bootable CD-ROM image. The option -b refers to the bootable diskette image. Another file, the boot catalogue, also has to be created (option -c), but is otherwise unimportant. The output file is specified via -o. If hda.bz2 is a link to the image, the -f option must also be given. Once the ISO image has been created, the CD can be burnt using Cdrecord or one of its front-ends. In our case, it looks like this:

# cdrecord -v -isosize fs=8m speed=4 dev=x,y,z reccd.iso

fs is a buffer memory, speed the rate of the burner and dev the device of the burner, which can be determined via cdrecord -scanbus. These three parameters must be adapted by each person to suit their own circumstances.

Exchange the /linuxrc with HAL91

A Linux boot diskette almost always consists of three parts: a bootloader, the kernel and a pre-compressed file system. The file linuxrc is in the root directory of this file system. To get to it, the following steps are necessary:

# mkdir /tmp/floppy.mnt
# mount -o loop hal91.img /tmp/floppy.mnt

The diskette image is mounted via a loop device. The ability to mount such loop devices has to be compiled into the kernel, but this is usually the case with the standard kernels of the distributions. Then the compressed file system is unpacked and also mounted via a loop device:

# gunzip -c /tmp/floppy.mnt/initrd.gz > initrd
# mkdir /tmp/initrd.mnt
# mount -o loop initrd /tmp/initrd.mnt

Now we have access to linuxrc and can edit the file as described in the article:

# emacs /tmp/initrd.mnt/linuxrc

After that, all the steps are executed more or less in reverse:

# umount /tmp/initrd.mnt
# gzip -9c initrd > /tmp/floppy.mnt/initrd.gz
# umount /tmp/floppy.mnt

The HAL91 kernel does not support SCSI devices, so anyone who has a SCSI system should also swap the kernel; in HAL91 it is called vmloop. Since for our purposes the kernel hardly has to be able to do anything apart from access the corresponding block device and support the CD-ROM, the swap should not be a problem.



But the original objective has not yet quite been reached, because a normal user cannot be expected to cope with booting from floppy and composing cryptic commands. Luckily, everything can be done automatically.

A CD as floppy substitute

Bootable CDs in accordance with the El Torito standard do nothing other than make the Bios believe they are a bootable diskette. The recovery CD will thus, along with the hard disk image, also contain an image of our boot diskette. Since the Tomsrtbt floppy is an oversized diskette, it cannot be used for this purpose. But mini-distributions are as common as pebbles on the beach; one well suited to the recovery CD is the boot diskette HAL91. Directly after booting, the kernel executes the file /linuxrc. We therefore replace this file from HAL91 with our own version, which basically contains the two commands mentioned above (mount and bunzip2). How this works in detail is explained at greater length in the listing "A modified /linuxrc". Since a restore is a fairly destructive matter, and because a normal user is accustomed to the constant challenges of "Do you really want to ...", it is advisable to give the user a last opportunity to stop. The listing shows one way of doing this.

Space problems

One important question has not yet been dealt with: what will fit onto a CD? The aforementioned laptop hard disk has three partitions. The first, with a capacity of 1GB, contains a freshly installed Windows 98. In addition there is a Linux partition of 800MB with a Mandrake 7.2 installation, and a swap partition with the remaining space. Originally the computer was only installed in this configuration in order to test which stunts were necessary to install Windows 98 as an add-on to a Linux computer – in almost all tests it is only the reverse case which is investigated. Of the FAT32 partition, 210MB was occupied; of the Ext2 partition, 674MB. An image was created as described above. Surprisingly, it came to more than 635MB – it would thus still fit onto one CD, but was much too big for this specific example. As a check on the process, the content of both partitions was backed up with a classic tar -cvpI. The compressed Tar archive of the Windows partition was 87MB in size, the Linux archive 195MB. Then the hard disk was repartitioned (one big Ext2 partition) and formatted. A newly created image of this almost empty hard disk still came to a solid 616MB.


The reason for this astonishing size is that repartitioning and reformatting change only the administrative information of the hard disk and its partitions; the actual data remains untouched. So before the Tar archives were played back, the whole hard disk was overwritten with zeroes:

# dd if=/dev/zero of=/dev/hda bs=... count=...

With this preprocessing, an image of the empty Ext2 partition comes to just under 109KB – an enormous difference from the 616MB determined at first. Equally, after restoration to its original condition (with Windows 98 and Mandrake Linux) the image came to a reasonable size of just under 283MB. In the "System loading" display you can see quite clearly when the zeroes are being transferred and compressed: in the central part the CPU load drops dramatically, while the network is more heavily loaded.
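Zeroing the whole disk is, of course, only an option immediately before playing the archives back, since it destroys all data. A gentler variant – a common trick, not part of the procedure described above – zeroes only the free blocks of a mounted file system, by filling it with one large file of zeroes and deleting the file again:

# dd if=/dev/zero of=/mnt/hda1/zerofile bs=1024k
# rm /mnt/hda1/zerofile
# sync

The dd command simply runs until the disk is full and then stops with an error, which can be ignored; the mount point /mnt/hda1 is just an example. The unused blocks then compress almost to nothing, as above, without the installed system being touched.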

Listing: a modified /linuxrc

#!/bin/sh

PATH="/bin:."
TERM=linux
export PATH TERM

mount -t proc proc /proc

mount -o ro /dev/hdc /mnt
echo "Should the hard disk be overwritten (all data will be lost)?"
until [ "x$answer" = "xYES" -o "x$answer" = "xNO" ]; do
  echo -n "Confirm with YES or stop with NO! "
  read answer
  if [ "x$answer" = "xYES" ] ; then
    echo "Overwriting the hard disk. Please wait..."
    bunzip2 -c /mnt/hda.bz2 > /dev/hda
  elif [ "x$answer" = "xNO" ] ; then
    echo "Stop!"
  fi
done
umount /mnt
echo "Please remove the CD-ROM and press CTRL-ALT-DEL"
sh

Comparison of bzip2 with gzip

Command\File     zero     random       opt-kde2.tar
bzip2 (bytes)    113      105,321,149  27,893,829
gzip (bytes)     101,801  104,874,289  31,631,186
bzip2 (time)     20.4s    264s         150s
gzip (time)      8.1s     48s          88s
bunzip2 (time)   4.2s     89s          42s
gunzip (time)    4s       12s          5.7s

30GB on one CD

The compressed Tar archives show that a Windows installation can be reduced to just under 41 per cent of its size, while with Linux it is even possible to attain a value of under 30 per cent. This is probably due to the high proportion of text files (for example HTML documentation, scripts and configuration files) under Linux. An empty (zeroed) 30GB hard disk in compressed condition could thus take up as little as 1.5MB. If 2GB of the hard disk are taken up by a Linux system, the image should still fit onto a recovery CD; for Windows the limit is around 1.5GB. Mind you, these figures are for operating systems and programs. If compressed application files, perhaps in MP3 format, are present, the calculation will look very different. If there are several operating systems on the hard disk, space on the CD will also soon run out. But a recovery CD as described here is not suited to such systems anyway.

Info

Homepage of Tomsrtbt: http://www.toms.net/rb/
HAL91 homepage: http://home.tu-clausthal.de/~incp/hal91/
Freshmeat: http://www.freshmeat.net
Sourceforge: http://www.sourceforge.net
MkCDrec homepage: http://mkcdrec.ota.be
Partimage homepage: http://www.partimage.org
Source of Sfdisk, also part of MkCDrec: ftp://win.tue.nl ■

Optimisations

It may just be acceptable to wait more than an hour for the image of a 2.1GB disk, but for a really large hard disk the whole process adds up to more than a whole day. So what optimisation options exist? As described, bzip2 and bunzip2 are ultimately responsible for the time taken to create the image and to do the restore. A highly practical alternative is to use gzip/gunzip for this task. In the "Comparison of bzip2 with gzip" table, sizes and times for the compression and decompression of three files are listed. The file zero consists of 100MB of zeroes (created from /dev/zero), the file random of 100MB of random data (created from /dev/urandom), and the file opt-kde2.tar is an uncompressed Tar archive of the /opt/kde2 directory of my computer; this archive also comes to almost 100MB. It is apparent from the table that with real data, a time gain of about 40 per cent (gzip) balances out a reduction in size of some 10 per cent (bzip2). When the data is already compressed, the performance gap is even more marked, and in this case bzip2 also has the greater overhead. But it is only with blank data that a really significant difference in file size can be seen: bzip2 reduces the 100MB of zeroes in the zero file to a total of 113 bytes, while gzip still produces a result almost three powers of ten larger. With a real image, which typically contains data of all three basic types, the result is naturally less extreme: the time saving is only about 30 per cent, while the size increases by 10 per cent. In the case of large, still mostly vacant disks, though, the calculation may look rather different again.
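Swapping the compressor is a one-word change to the pipeline from the beginning of the article – a sketch using the same illustrative paths and disk geometry:

# dd if=/dev/hda bs=8064b count=525 | \
  rsh -l burner myhost "gzip -c > /home/burner/image/hda.gz"

The matching network restore is then:

# rsh -l burner myhost "gunzip -c /home/burner/image/hda.gz" > /dev/hda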

Relay race

Another problem is the large number of programs needed to get the image onto the hard disk of the destination computer. dd reads the data out and writes it into a pipe; from there rsh reads it, only to write it straight back into a socket. On the other side, the rshd daemon reads the data from the socket and writes it into a pipe, where it is finally met by bzip2. A real relay race is taking place between the programs. The ideal would thus be a network-capable dd, which writes the data directly into a socket, interacting with an equally network-capable bzip2, which can read the data out of a socket. Since the sources are open, these extensions should not involve any great effort – so if anyone is looking for an interesting programming task, they could try their hand at this. One more important optimisation would be a dd which can read and write at the same time: the source computer could then send at full network bandwidth, and the throughput of bzip2 would be the only bottleneck. Short of rewriting dd (which would certainly be the better solution), the variant was also investigated in which the program buffer is inserted between dd and rsh and/or in front of bzip2. It stores the data in a ring buffer in main memory and can read and write from there at the same time. The only thing to watch is that the memory allocated by dd and buffer combined still fits into RAM. With this double buffering on both sides of the network, it is possible to achieve a time saving of about 25 per cent with bzip2 and 10 per cent with gzip. Unfortunately, buffer is seldom found on rescue floppies – one good reason to create your own bootable CD with a comprehensive Linux system. Regardless of these optimisations, there is one hole in the solution described here: there may be (and in the case of large disks, there certainly will be) a huge number of useless zeroes being read, transferred and compressed. At block device level, though, there are only bytes, no contents. An intelligent alternative definitely requires knowledge of the contents at file system level.
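As a sketch of what the double buffering might look like on the command line – the exact options of buffer vary between versions, so check its man page first:

# dd if=/dev/hda bs=8064b count=525 | buffer -m 8m | \
  rsh -l burner myhost "buffer -m 8m | bzip2 -c > /home/burner/image/hda.bz2"

Here a ring buffer sits after dd on the source side and in front of bzip2 on the destination side, so that reading, transfer and compression can overlap.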

Alternatives

A short search at Freshmeat and Sourceforge brings some corresponding solutions to light. The Belgian project MkCDrec creates a recovery CD – or several, if everything will not fit onto one – from a running Linux system. It is intended for system administrators and therefore not automated, but that could probably easily be altered. The number of file systems supported is limited; on normal Linux systems, however, MkCDrec has the great advantages of efficiency and flexibility. The files are ultimately backed up with Tar, which means that only the data actually present is stored, and a different partitioning is possible on restore. All in all an extraordinary tool, which is continually being improved and, for all those who do not necessarily need a solution for inexpert users, the right choice.



A different approach is offered by Partimage, a low-level tool for backing up partitions. At the moment FAT16/32, Ext2 and ReiserFS partitions are supported. The contents of the partitions are analysed and only the used blocks are backed up. The current production version is, however, even slower than the dd solution: for one thing, reading out the used blocks is very slow; for another, the compression takes place beforehand on the source computer. On the credit side, Partimage offers an intuitive interface, including such things as progress indicators, and the option of distributing the image over several media – in an emergency it would even be possible to back up a partition onto diskettes. The latest beta version also has a client-server mode, implementing a whole range of optimisation options, such as simultaneous reading and writing and encrypted transfer of the data to the server. This version is described as quite stable, but suffers from a lack of documentation for the login mechanism. Building on Partimage, the following procedure for creating an optimised recovery CD would be possible, provided only the supported file systems are present:

• Read out the partition information, perhaps with Sfdisk; the program can output the partition information in a format which Sfdisk will accept again as input (see the sketch after this list).
• For each partition, create a corresponding image file via Partimage.
• Burn the images, together with the Sfdisk input file, onto the CD.
• The /linuxrc program of the CD's boot image repartitions the hard disk via Sfdisk – using the corresponding input file – and writes all the images back onto the hard disk.

The advantage of this procedure, apart from faster image creation and a faster restore, is the option of backing up larger hard disks, possibly with several operating systems, onto several CDs.
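The Sfdisk round trip from the first step might look like this, with device and file names chosen purely for illustration:

# sfdisk -d /dev/hda > hda.partitions
# sfdisk /dev/hda < hda.partitions

The first command dumps the partition table in a form sfdisk accepts as input; the second, run later from the recovery system, recreates the original partitioning.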

Conclusion

As you have seen, there are various approaches to restoring a hard disk, and all the means necessary are either supplied with standard distributions or freely available on the Internet. But a final pointer: none of the solutions considered is suitable as a backup procedure for normal user data. Who wants to restore a whole hard disk or partition just because one corrupted file has to be replaced? ■





Creating boot CDs

QUICK RECOVERY BERNHARD BABLOK

Bootable Linux CDs are highly practical in case of emergency. But producing one yourself does require some knowledge about the boot procedure and the tool presented here.

Rescue diskettes are as common as sand on the beach, but they all have a whole series of drawbacks: they are always too small, slow and error-prone. And with fairly modern PCs they are no longer needed, now that it is possible to boot directly from CD. So what could be more obvious than to make your own bootable Linux CD? Those who object that such a thing already exists are of course right (see Info box). But often these CDs lack one very special, absolutely vital program. And there are other reasons for making your own CD: such CDs are ideal for presentations, training courses and kiosk systems – or for proudly showing off your brand new KDE installation on your best mate's computer. This article presents a procedure which is easy to use and with whose help, at very little effort, a functioning Linux installation can be transferred onto a bootable CD. The first paragraphs below bring you some theory about the boot procedure itself, which is of interest regardless of the topic. On this basis, there then follows a description of how to use the build system to create bootable CDs.

All beginnings are easy – the boot procedure

After being switched on, the computer looks in the places defined in the Bios for executable code. Normally these are the diskette, the CD-ROM drive and the first hard disk. This code is very simple, since at this point there are no operating system resources available – in particular, no file system. Its task is to load and start the operating system kernel. Such simple code is also contained in the Linux kernel itself, hence you can copy the kernel directly onto a diskette (dd if=bzImage of=/dev/fd0) and boot it from there. The kernel then initialises all the subsystems and starts the program /sbin/init on the root partition (to be precise, the following files are sought in this order: /sbin/init, /etc/init, /bin/init and /bin/sh).



The root partition is defined when the kernel is compiled (in the top-level makefile) and has by default the same value as the root partition of the system on which the compilation is running. This value can be modified later by means of the rdev(8) utilities. Anyone interested in the details of the boot procedure should definitely take a look at the file /usr/src/linux/init/main.c. The program /sbin/init is the primary process of a running Linux system (it has process ID 1). It reads its configuration file /etc/inittab and starts, depending on the entries, the corresponding scripts and gettys (or the Xdm for graphical log-ins). The drawback of the procedure described is its lack of flexibility: the root partition is fixed, and no additional parameters can be passed to the kernel. In practice, therefore, a two-stage procedure is used almost exclusively. Instead of starting the kernel directly, the Bios loads a bootloader, which then loads the kernel and passes it its arguments – either from a configuration file or from a command line. The commonest bootloaders (Lilo, Chos, Grub and others) can do even more: they are boot managers, with which different operating systems and/or kernels can be loaded.
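As a quick sketch of the rdev calls (the image name here is hypothetical): invoked with just a kernel image, rdev reports the root device stored in it; with a second argument, it changes the setting:

# rdev bzImage
# rdev bzImage /dev/hda2

The first call shows the current root device of the image, the second patches /dev/hda2 into it as the new root device.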

Where are the files? – the initial ramdisk

Even with a bootloader, one question remains unanswered. On a completely new system there is no formatted root partition, and so no file system with /sbin/init and /etc/inittab; the kernel that has just been successfully started would come to a stop with a kernel panic. The solution to this problem is an initial ramdisk: a Linux file system which is loaded into memory either by the kernel itself (classic ramdisk) or by the bootloader (initial ramdisk: initrd). The typical emergency diskette thus contains exactly two components: a kernel, and a compressed file containing a complete file system. If a bootloader is used, two arguments are necessary for the kernel: root=/dev/ram and initrd=path to file. Without a bootloader, the kernel has to be patched (again with the aid of rdev) to define the start address of the ramdisk. This latter procedure has become fairly uncommon, since both the kernel and the ramdisk then have to be copied onto a blank diskette at the right offsets. The boot procedure is now slightly modified. First, the bootloader loads the kernel and the initial ramdisk. The kernel unpacks the latter into a normal ramdisk and mounts it as the root file system. Next – if present – the file /linuxrc is executed. When this program has finished, the correct partition is mounted as the root partition, as described above, and /sbin/init is called. Beforehand, the initial ramdisk is either unmounted from the file system using umount (releasing the memory) or – if the /initrd directory exists – remounted onto /initrd.
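With Lilo, for example, the two arguments can be set in /etc/lilo.conf roughly as follows; the paths and the label are assumptions for the sketch:

image=/boot/bzImage
    label=ramboot
    initrd=/boot/initrd.gz
    root=/dev/ram
    read-only

After editing the file, lilo must be run again for the change to take effect.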

Bootscreen of Bernhard’s bootable Linux CD.

Stocktaking with Linuxrc

The pivot of any initial installation is the program Linuxrc. It may be a shell script, but in the big distributions it is usually a very elaborate C program, responsible for partitioning and for the selection and installation of the packages. For a bootable CD-ROM, Linuxrc must do three main things: load the right modules for the existing hardware, find a CD-ROM drive containing the boot CD, and convince the kernel that the corresponding device is the right root partition. The latter is very simple: Linuxrc has only to write the device number of the root partition (which consists of the major and minor numbers) into /proc/sys/kernel/real-root-dev. For everything to work, the kernel must be configured and compiled with both ramdisk and initrd support. The default size of ramdisks changed in one of the most recent kernels and is now only 4MB; this can be modified both during kernel configuration and via a kernel boot parameter at run time.
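For example – a small sketch using the conventional device numbering, in which the value to write is major × 256 + minor: for the CD-ROM drive /dev/hdc (major 22, minor 0) this gives 5632, so Linuxrc would execute:

# echo 5632 > /proc/sys/kernel/real-root-dev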

Creating the initial ramdisk

There are various ways to create an initial ramdisk; the necessary steps are shown in Listing 1. First, a RAM device is pre-filled with zeroes, then a file system is made on it. After that, the RAM device is mounted completely normally and all the necessary data is copied into the mounted directory. The content of the whole device is then read out by means of dd and compressed with gzip into a file. Depending on whether only a start diskette or a complete rescue system is to be made, the content of the disk is very simple or correspondingly comprehensive. In the case of a rescue system, the size should be optimised so that, apart from the kernel, every byte is put to good use.



Listing 1: Creating a ramdisk

dd if=/dev/zero of=/dev/ram bs=1k count=2048
mke2fs -vm0 /dev/ram 2048
mount /dev/ram /mnt
cp -a foo/* /mnt
dd if=/dev/ram bs=1k count=2048 | gzip -v9 > ramdisk.gz

A well-known trick here is to write a program which behaves differently depending on the name under which it is called: called as cat, it acts like cat, and so on. The individual commands are then nothing but hard links to this one program. This saves a lot of space, because the start-up code which every program needs is only stored once. The drawback is that one cannot simply delete a few programs to make space on the ramdisk for a tool of one's own.
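The dispatch trick can be sketched in a few lines of shell – a toy illustration of the principle only, not how the real C implementations work:

#!/bin/sh
# multi.sh – behaviour depends on the name this script is invoked by.
# Install the "applets" as hard links: ln multi.sh mycat; ln multi.sh myls
case "${0##*/}" in
  mycat) while read line; do echo "$line"; done ;;  # a very crude cat
  myls)  echo * ;;                                  # a very crude ls
  *)     echo "unknown applet: ${0##*/}" >&2 ;;
esac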

The devfs file system

Devfs is, like Proc, a virtual file system, which can be mounted by the kernel at the same time as the root file system. The great disadvantage of Devfs is that most programs cannot yet cope with it. SuSE, for example, supplies a Devfs kernel patch (not yet working properly) with version 7.0, but YaST cannot cope with a running Devfs system. Devfs is, however, an option from kernel 2.4 on, so there will certainly be some changes here; the distributors must be assuming that systems will run with Devfs, and the boot scripts of Red Hat 7 are already prepared for it.

The principle of Devfs is simple. Instead of devices being identified by means of major and minor numbers, as is currently the case, each driver registers itself explicitly (much as when the corresponding module is loaded) and is then assigned its name. Contrary to today's systems, in which one can easily find over 2000 virtual devices under /dev, with Devfs only the devices actually present appear there. The advantages are obvious: a clear, structured /dev directory with meaningful names (who knows what /dev/hdj13 really means?), no more administration of major and minor numbers (which are needed so that several modules don't get tangled up) and support for hot-pluggable devices.

The greatest flame wars in kernel history were probably those over Devfs, and for a long time it was only available as an unofficial patch. Opponents claimed, in particular, that the kernel gets bigger due to the additional administration of the devices. All the more surprising, then, that in the course of the last developer series (kernel 2.3.x) Linus Torvalds did include Devfs in the official kernel, albeit labelled "experimental". Devfs solves in a generic way a problem which subsystems such as USB also have to solve; and high-end machines with PCI devices which can be swapped on the fly likewise demand a way for devices to log on and off.

To run a system cleanly using Devfs, all the drivers in use would have to support it. This is generally not yet the case, which is why there is a Devfs daemon which converts accesses via classic device names into the Devfs names. There will probably be a consolidation of this whole set of problems in the 2.5 kernel series, since it makes little sense to support dynamic devices at several places in the kernel.


A CD-ROM as root directory

A CD-ROM as root directory obviously has the advantage of size, but also the major disadvantage, compared with a ramdisk, of being read-only. Unfortunately, a running Linux system requires write access to many different directories, sometimes as early as the start phase:

• /var: important files are created and updated here, for example under /var/run and /var/log.
• /etc: all mounts are recorded in /etc/mtab.
• /dev: this is where pipes are created.
• /tmp: many programs create files or sockets here.
• /home: a hotchpotch of all possible configuration files.

One possible solution would be to mount a ramdisk over each of these directories, make a file system on it and copy the contents of the CD into it. But one quickly realises that this causes problems: the kernel is to mount the CD as the root partition, but under /dev there are as yet no devices, so these would first have to be created in a ramdisk from the contents of the CD. The situation is similar with the directory /etc: the program /sbin/init reads /etc/inittab, but at that point there are no writable directories and files yet, since only the first script started by /sbin/init could create them.

A ramdisk for /var

Even if it does not work for every directory, this approach is not completely wrong, and it is highly practicable for /var, /tmp and /home. So as not to create three ramdisks and waste space, though, /tmp is replaced by a symbolic link to /var/tmp, and similarly /home by a link to /var/home. The creation of the ramdisk, its mounting under /var and the unpacking of a complete directory hierarchy into it (from a Tar archive) are done at as early a stage as possible, after /sbin/init has passed control to the first boot script (in SuSE, for example, this is the script /sbin/init.d/boot).
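In such a boot script, this could look roughly as follows; the RAM device, the size and the archive name are assumptions for the sketch:

# dd if=/dev/zero of=/dev/ram1 bs=1k count=4096
# mke2fs -q -m0 /dev/ram1
# mount /dev/ram1 /var
# tar xzf /cdrom/var.tgz -C /var

This creates a 4MB ramdisk, puts a file system on it, mounts it over /var and unpacks the prepared directory hierarchy into it.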

The /proc/mounts trick

For /etc, though, we need a different solution. Here one can use the trick of replacing the file /etc/mtab with a symbolic link to /proc/mounts. The latter file may not contain quite all the information that mtab does, but still enough to work with normally; if you output both files with cat, you will see hardly any difference. Thanks to this trick, /etc can stay on the CD. If write access is needed for additional files, these can likewise be replaced by symbolic links to files under /var, for example ln -s /var/etc/foo /etc/foo.

The /dev problem

The last remaining directory to create is /dev. As yet there is no completely satisfactory solution to this. One highly efficient option is to use



the Devfs file system; what this is all about is explained in more detail in the box of the same name. But since the time and effort needed to get a system running with Devfs, at least for the first time, is fairly considerable, an alternative is used in this project. In terms of memory consumption it is certainly not ideal, but on the other hand it can be used without any manipulation of the installation. It exploits the fact that the initial ramdisk, as described above, is remounted onto the directory /initrd if this directory exists – the last action before the new root file system is mounted. If one now replaces /dev on the CD with a symbolic link to /initrd/dev, one has all the devices which were already available on the initial ramdisk. The situation thus created is fairly pathological: mounting the CD makes use of a device from the initial ramdisk, which in turn is mounted on a directory on the CD. The effect is that there is no way, during system shutdown, of cleanly dismantling the file systems. And because Linux locks the tray of a mounted CD-ROM, you can only get at the CD again after switching off.

The bootable CD

After this excursion into the depths of booting, all that remains is to put the pieces of the puzzle together to make a bootable CD. What further simplifies matters is the fact that a bootable CD to the El Torito standard does nothing more than emulate a diskette. So one creates a diskette with bootloader, kernel and initial ramdisk (which essentially contains only the special Linuxrc described above), copies the diskette into a file (for example with dd if=/dev/fd0 of=bootdsk.img) and tells the burn program which file is the diskette emulation. Under Linux, though, that last sentence is not quite accurate. The actual burn program, Cdrecord, does not create CD file systems (ISO9660 file systems) at all; the program Mkisofs is responsible for that. It creates the file system and at the same time copies into it all the files one wants on the CD. The result is a file of at most 650MB, which Cdrecord transfers onto a medium via a CD burner (details can be found in last month's CD writer test). With an error-free Bios, the choice of bootloader would not matter, since the booting CD is emulating a booting diskette. Stupidly, though, not every Bios is error-free, with the result that Lilo may well be loaded from the CD but then tries to use Bios calls to load the kernel from the real diskette instead of the emulated one. This is why Syslinux has established itself as the bootloader for bootable CDs: this loader requires an (evidently immortal) DOS file system on the diskette and consequently locates the kernel through the file system rather than directly via the Bios.
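Condensed into commands, the mastering step might look like this; the directory names are illustrative, and the burner address for dev= is revealed by cdrecord -scanbus:

# mkisofs -b boot/bootdsk.img -c boot/boot.cat -o bootcd.iso cdtree
# cdrecord -v speed=4 dev=0,0,0 bootcd.iso

Here cdtree contains everything destined for the CD, including the diskette image bootdsk.img in its boot subdirectory.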


An easier life

This points the way to a bootable Linux CD: all you need do is replace a few directories and files with symbolic links, write a little Linuxrc program, create a boot diskette and burn the whole thing onto CD. Unfortunately, a system with all these remounted directories is hard to maintain. In particular, redirecting /dev can have unpleasant side effects if you want to boot the system which served as the model from disk again. But since almost all the steps towards a bootable CD are independent of the distribution used, it seemed a good idea to create a makefile for automation purposes. In the execution of the idea, the makefile actually turned into an entire hierarchy of them, though the principle remains the same. For this you will need a computer with enough space for two Linux systems: an active system to work on, and a second system to serve as the model for the CD. The model system is installed and configured completely normally – within reason, of course, because even 650MB soon fills up. The model system is then worked on from the other partition. This splits the procedure into two parts. In the first step, only those modifications are performed which are not destructive, in the sense that the system can still be booted; moving /home to /var/home, for example, is no problem at all. Potentially destructive operations, on the other hand, are performed almost on the fly during the creation of the CD. Anyone wanting to play around with this concept a bit can download the files from http://www.bablokb.de/bblcd/; extensive documentation comes with them. The system may not be perfect yet, but the basic functions are in place. There are also professional systems (for example Webpads) which work with bootable Linux CDs; updating such a system is no problem even for amateurs, as a simple change of CD does the job. And if you have a weakness for cool gadgets, you can also get hold of blank CDs in visiting card format. These are really expensive for just under 20MB of available space, but for a personal Linux rescue system in your trouser pocket, it's worth it. ■

The author Bernhard Bablok works for AGIS mbH as a systems programmer in the systems management division. When he is not listening to music, cycling or walking, he is involved with topics concerning object orientation. He can be contacted at coffeeshop@bablokb.de.

Info

Bernhard's bootable Linux CD: http://www.bablokb.de/bblcd/
H. P. Anvin: The most overfeatured rescue disk ever created: http://www.kernel.org/pub/dist/superrescue/
Homepage of Gibraltar, a firewall system which can be started from a CD: http://gibraltar.vianova.at/
The bootable visiting card of Innominate: http://www.innominate.de/level2.phtml?parent=101
A CD-based rescue system: http://rescuecd.sourceforge.net/
The classic – the most you can fit on a diskette: http://www.toms.net/rb/home.html ■



Installing 3D Support for Nvidia cards

HOUSE OF CARDS MIRKO DÖLLE

In order to enjoy a smooth flow in most games, you will need 3D hardware support. Here we explain the installation for Nvidia-based graphics cards.

While in the past computers were driven to the limits of their capacity by database applications or scientific calculations, nowadays it is games which are the greatest guzzlers of resources. In particular, since the screen display has become ever more realistic, even the fastest processors have broken into a sweat over the necessary calculations. Instead of moving flat figures over a painted background, as used to be the case, game scenes are now depicted in 3D – several thousand objects with different structures, forms and colours are spread around a virtual space. But since a monitor can still only show two-dimensional images, the game scene must be photographed for each individual monitor image from the point of view of the player – or, more correctly, it must be rendered. To give the CPU a bit of breathing space, rendering subtasks have gradually been farmed out to the graphics card chip. Now, instead of tracing each individual beam of light, the processor need only tell the graphics card the position and nature of the individual objects (which are split up into small, triangular areas) – and the graphics card takes care of the rest.

One of the biggest manufacturers of such graphics chips with 3D hardware acceleration is the chip forge Nvidia (http://www.nvidia.co.uk), from which many graphics card manufacturers, such as Elsa, source their chips. Since every chip manufacturer uses its own well-protected instruction set, a special driver is necessary to make use of the 3D capabilities. Nvidia has been offering Linux drivers for quite some time now, which you will find on the company's FTP server at ftp://ftp1.detonator.nvidia.com/pub/drivers/english/XFree86_40/. The driver consists of two parts: the GLX package, which provides the connection to the X server, and the kernel module, which allows direct access to the graphics card. Nvidia supplies ready-made packages for the commonest distributions. By the way, you will not find the drivers on our cover CD; for licensing reasons this is not possible, as the sources have not been disclosed. For reasons of space and time, we are limiting ourselves to versions 7.1 and 7.0 from SuSE and to Red Hat 7.0. Since the Nvidia driver requires at least XFree86 4.0.1, the use of older distributions is problematic and not recommended. Generally, the system should always be right up to date, which is why you cannot get away without regular updates of the driver, the X system and the kernel. Using the latest driver is absolutely vital, as Nvidia is still struggling with stability problems, from simple graphics errors to complete crashes of the Linux system.

SuSE 7.1

During installation, YaST2 offers you the option of selecting the 3D driver at the X installation stage, but you can ignore this; integration is done later via SaX2 or by manually adapting the configuration file. As we went to press, the latest version of the Nvidia driver was 0.9-769; the corresponding packages are called NVIDIA_GLX-0.9-769.suse71.rpm and NVIDIA_kernel-0.9-769.suse71.rpm. If newer drivers have come out in the meantime, you should give those preference. The two packages should be stored in the /tmp directory, which simplifies the description of the next steps. We are assuming a SuSE 7.1 standard installation.

To prepare the 3D installation you must leave the graphical user interface and change over to a text console. The first text console can be reached using [Ctrl+Alt+F1]; there you should log in as root. After that, use init 3 to switch the graphical user interface off completely, and you can start the actual installation. You no longer need the mesasoft package for software rendering and can delete it using rpm -e mesasoft. Next, the kernel package and then the GLX package from Nvidia are installed:

rpm -i --force /tmp/NVIDIA_kernel-0.9-769.suse71.rpm
rpm -i --force /tmp/NVIDIA_GLX-0.9-769.suse71.rpm

The kernel package must be installed with --force, because it overwrites the Nvidia modules supplied by SuSE. When installing the GLX package, RPM may complain about the lack of the file



switch2nv_glx, but this is not a problem. The next step is to create a missing link for GLX:

ln -s /usr/lib/libGL.so.1 /usr/lib/libGL

In order to activate the 3D support, you can either edit the X configuration file by hand or use SaX2.

Installation with SaX2

Call up sax2 -f and refuse the 3D support offered at the start, since otherwise SaX2 will not start at all, especially with GeForce 3-based cards such as the Elsa Gladiac 920. The settings which SaX2 then offers should also be rejected; instead, select Change Configuration from the menu at bottom right. There follows a query as to whether some allegedly missing components should be installed: say no. In the dialog which follows, accept your former X settings with Use / change the current configuration and click on Next until you reach Graphics Device Setup. The correct modules must be selected here. Click on Properties, then Expert; under Driver you will be offered the nv module. Instead, choose nvidia from the pull-down menu, as shown in Figure 2. With OK you come back to the graphics device setup. There you should click on Load 3D modules, tick glx and again confirm with OK. Next takes you to the monitor selection, where, under Properties, you can set your monitor model and also the required resolution. When you have finished, click on Finish.

In the window that follows, save the settings first of all: in our tests there were some SaX2 crashes after which no configuration had been written. After saving, do not leave SaX2, but start the test mode, where you can make the final fine adjustments. The Save Configuration button then brings you back to SaX2, which you can shut down. That completes the installation, and init 5 takes you back to the graphical user interface.

In rare cases it can happen that games, despite an apparently correct installation, do not run 3D-accelerated, or even complain about the absence of the library libGL. If this happens, call up switch2nvidia_glx as root. On one occasion it was even necessary to reinstall the GLX package using rpm -i --force --nodeps NVIDIA_GLX-0.9-769.suse71.rpm. To return to your original configuration in the worst-case scenario, all you need do is rename the file /etc/X11/XF86Config.saxsave back to XF86Config.

Manual installation

Editing the configuration file directly in an editor such as mcedit is much faster than using SaX2. With mcedit /etc/X11/XF86Config you can edit the central configuration file of the X Window System. First, look for the section Module and add the entry Load "glx" there:


Section "Module" ... Load "freetype" Load "glx" EndSection This means the GLX-Module will be loaded on the next startup. Now you must enter the Nvidia module in the section Device. SuSE has the module nv in there as standard. Here is the changed section: Section "Device" Driver "nvidia" Identifier "Device[0]" EndSection Store the file with [F2] and leave mcedit with [F10]. That’s the end of installation, init 5 brings you back to the graphical log-in.

SuSE 7.0

Installation with SuSE 7.0 is significantly more time-consuming: since there is no Nvidia driver for version 4.0 of XFree86, which is what SuSE 7.0 includes, you will have to download around 30MB from the Internet. Updating the X server brings its own set of problems – if it goes awry, the graphical log-in will no longer function – so do not perform the update without some thought. You can obtain the RPM packages from ftp://ftp.gwdg.de/pub/linux/suse/ftp.suse.com/suse/i386/X/XFree86/XFree86-4.0.2-SuSE/. To update to XFree86 4.0.2 you will need the RPM packages xshared, xmodules, xf86, xloader and xfntscl; xfnt100 is also advisable, as are, for the installation, intlfonts-ttf and sax2 from the sax2 subdirectory.

To update, you must again switch off the graphical user interface. To do so, change to the first text console using [Ctrl+Alt+F1], log on as root and call up init 2. After that, install the RPM packages in the sequence mentioned above using rpm -Uhv --force --nodeps packagename.rpm. As long as no error messages have popped up, you can move on to the installation of the 3D support. To do this, first install the corresponding packages from Nvidia; in our case these were NVIDIA_GLX-0.9-769.suse70xfree86-4.0.2.i386.rpm and NVIDIA_kernel-0.9-769.suse70xfree86-4.0.2.i386.rpm. The procedure corresponds to that for SuSE 7.1.

Figure 1: The Elsa Gladiac 920 with GeForce-3 chip functions only with the latest driver version (from 0.9-769). The installation programs from SuSE and Red Hat do not yet recognise it, but in case of doubt the settings for older models like GeForce 2 will work.

Installation with SaX2

Manual installation after the update is very difficult – there is no configuration file to rely on – so you should use SaX2. The procedure is identical to the installation under SuSE 7.1, except that you must additionally check the proposed settings for mouse and keyboard; on the whole, though, these can be taken over. After the end of the installation, return to the graphical log-in with init 3.

In order to revive the old XFree86 in case of error, insert the first SuSE CD (or DVD), and in the text console, as root, call up yast. Under Define installation/start, Change/create configuration you will find a listing of the SuSE series. The packages required are in the series "x", marked with [i]. Go to each package thus marked and press the [R] key, which will change the marking to [R]. With [F10] you can then leave the series and select Start installation. At the end of the installation, leave YaST and use init 3 to return to the graphical log-in.

Figure 2: The nvidia module is the hardware-accelerated variant.

Red Hat

Installation under Red Hat 7.1 is something we will have to owe you, sadly, as the packages offered by Nvidia were built for the wrong kernel version. By and large, though, the procedure for Red Hat 7.1 is similar to that described below.

Red Hat 7.0 already comes with XFree86 4.0.1 and therefore needs no X update like SuSE 7.0. Should your graphics card not be recognised during installation, for example because you are using a GeForce 2 GTS or GeForce 3 which is not yet detected, select a previous model, such as the GeForce 2. The Nvidia packages necessary for the installation of the 3D support are NVIDIA_GLX-0.9-769.i386.rpm and NVIDIA_kernel-0.9-769.rh70-up.i386.rpm; both should be saved in /tmp to simplify matters. With Red Hat, too, you must first swap the graphical user interface for the text console: after switching over with [Ctrl+Alt+F1], log on as root and switch off X with init 3. Additional preparations are not needed, and the next thing to do is install the kernel and the GLX package from Nvidia:

rpm -i /tmp/NVIDIA_kernel-0.9-769.rh70-up.i386.rpm
rpm -i /tmp/NVIDIA_GLX-0.9-769.i386.rpm

RPM will tell you that various files have been renamed to avoid conflicts. More about that later; for the moment we can ignore the messages.

Manual installation

This leaves the manual entry of the correct driver in /etc/X11/XF86Config-4. The GLX module is already entered as standard with Red Hat, so all you need do now in the section Device is to swap nv for nvidia. The result should look like this:

Section "Device"
    Identifier "Screen0"
    ...
    Driver "nvidia"
    ...
EndSection

That completes the 3D installation; init 5 brings you back again to the graphical log-in. On our system, version 0.9-769 of the driver turned out not to be very stable. If you have problems with crashes in 3D games, you might want to try the forerunner version 0.9-6. The installation procedure differs only in that rpm must now be given the parameter --force.

Links

Nvidia driver for XFree86 4.0.1 or higher: ftp://ftp1.detonator.nvidia.com/pub/drivers/english/XFree86_40/
Update to XFree86 4.0.2 for SuSE 7.0: ftp://ftp.gwdg.de/pub/linux/suse/ftp.suse.com/suse/i386/X/XFree86/XFree86-4.0.2-SuSE/ ■

Uninstallation

The message displayed during the installation of the GLX package about the renaming of four files was intended for a later uninstallation. It concerns two files in each of the directories /usr/lib and /usr/X11R6/lib/modules/extensions: "xxx" is placed in front of their names and the ending ".RPMSAVE" attached. If you want to uninstall, you must rename them back again; no other changes are necessary.

Prospects

Naturally there are other graphics chips with 3D support besides those from Nvidia. For a few very widespread chips, including the ATI Rage 128, ATI Radeon, 3Dfx and Matrox, the DRI project (http://dri.sourceforge.net/) makes ready-made driver packages available for self-compilation – you can find the main ones on our cover CD. But since these drivers require kernel 2.4 together with XFree86 4.0.1 or higher, the only suitable bases are the latest distributions from SuSE, Red Hat or Mandrake. We will come back to this topic in a later issue. ■



RAILROAD TYCOON II GOLD EDITION

In the age of steam become a

RAILWAY BARON COLIN MURPHY

Life would truly be unbearable without the facility of diversion, and in this game we have a great diversion – but will it suit you? If you like a game that involves the challenge of developing strategies against a complex and changing environment, then yes, I reckon it will. Railroad Tycoon is a graphical strategy game, the aim of which is to build a railway empire. This you can do in a wide range of geographical and historical contexts, each of which provides its own unique challenges, ranging from the almost ridiculous scenario of running a passenger train service in Antarctica to coping with the stresses and strains of running the all-important supply services to Second World War-torn England. You take on the responsibility of deciding routes, laying track between locations, building and maintaining infrastructure and running services. Depending on the type of scenario chosen, you will find locations which provide resources and others that require them: the cattle farms require grain to boost milk production, the milk is required by populated areas, and those areas with people provide passengers and mail, and have further requirements still. So the first skill needed is to see where the opportunities lie for bringing together resources and those with a need for them; then to lay efficient, well-maintained track between them, while keeping another eye open for further opportunities.

That would make for a good enough game, but Railroad Tycoon goes further: as your game progresses, the scenarios develop over time – new technologies come into play, allowing you to combine new and different resources. And the gameplay doesn't even stop there, as you also get to dabble in the world of high finance, where you can sell shares and issue bonds for your railway company, and buy and trade the stock of the companies playing against you.

The game itself is presented and packaged in a most professional way. The box, rich with artwork, contains a 120-page instruction manual and user guide, a full-colour fold-out aide-memoire detailing the profitable routes for resources to take through the maze of demands made by the world, and the all-important CD-ROM. When the game starts, at least for the first time, you are shown a full-motion, railway-themed video trailer to get you in the mood.

The Installation Process

This is clearly laid out in the user guide and involves nothing more than running a script and answering a few questions. You can customise the install to include the larger and fancier introductory video clips, with options on which sets of scenarios to install, saving hard disk space should that be a scarce commodity. It did not set up a desktop icon by default, for KDE at least, but this was easily done by right-clicking on the desktop and creating a new link to the application. The minimum install size is some 200MB, with graphics displayed at 1024x768 in 16 or 8-bit colour depth. Sound is nice to have, as you are offered audio feedback, but you don't have to rely on it. The gameplay isn't very processor intensive and only calls for a minimum spec of a Pentium 133.

The main game in play.

Choosing the consist of each train

Info

Publisher: Loki Games, http://www.lokigames.com
Cost: £15.00 from SuSE or the Linux Emporium. ■


Playing the Game

Even though the game comes with a comprehensive but still readable manual, it also features a very handy tutorial scenario, to allow the eager player to start enjoying the game without unnecessary delay. The tutorial shows you how to manipulate the basic features of the game, such as laying track and building stations. From here you get to see how to set up train services and organise the cargo that they will carry from one town to another, in exactly the same way you would in a real game.

The screen initially shows an isometric view of the landscape, with trees and some representations of buildings. Using the mouse, you click and drag between any two objects to lay a track. After the track is laid you then need to add stations and sundries, such as water towers and post offices. Next you need to buy a train to run on your line and choose the make-up of the carriages and wagons that will be hauled. You need to choose the correct cargo to move between towns – iron ore from the mine to the steel foundry, say, along with coal from the pit – which in turn allows you to transport steel from the foundry to the tool factory. Each cargo moved increases revenue for your company, but also increases the wear and tear on the train and track.

The game can be played with others over a TCP/IP network, be it local or across the Internet, or you can pit your wits against computer-controlled players, which you can configure to match your skill level – the winner being decided on the final profit achieved over a set period of play.



At any point you can purchase new track, trains or cargo and modify the wagon make-up of each train. When running short of money you have the option of entering the world of business dealing, where you can raise capital or, should you be flush, attempt a takeover of or merger with your rivals. One curiosity, though, is that if the CD-ROM has been left in the drive, a rhythm and blues track is played ad nauseam – never has a CD been snatched from a drive bay with such vigour!

The plus and minus keys control the speed of the game, and the graphical view shows tiny train models moving back and forth. This is just one of the keyboard shortcuts you will find out about by taking the time to read through the paper documentation as well; knowing some of these keypresses really improves the game flow. The only criticism of the game is that when you choose the cargo combinations, you have to say at which station each will be left. The map for this section may contain lots of towns which, being so close together at this map scale, are often difficult to highlight correctly with the mouse.

When laying track you need to be careful of the route taken over the landscape: over steep terrain more fuel and time are used, and there is a greater chance of the train breaking down, giving rise to further costs or a replacement train. The more a train is used, the greater the chance of a breakdown. Breakdowns hurt the goodwill of the company and so, in turn, the share value. Adding more roundhouses at stations means the trains are more likely to be maintained for longer. After playing all the available scenarios you have a built-in map editor, so you can finally build a railway on the moon if you choose.

Along with the manual there is also a strategy guide on disk in HTML format. This is not simply an electronic version of the manual but rather a game walk-through, showing how and why certain gameplay occurred. It also sets out how the computer plays and what strategies to use to win. While playing, events will occur: some action takes place that you may be able to utilise, such as a remote town offering a bonus to any company willing to connect a railway line to it, or a line being bombed during a war.

Once you are familiar with all the game techniques you can play the advanced scenarios. Here you receive no income for moving unwanted goods, only a cost as your fuel is used up. This level does allow you to be ruthless with your opponents: you can buy not only railways but also the surrounding industries, such as dairy farms or ports. Once they are owned, you can deal on margin profits, where the rival railway companies have to pay you a percentage for the goods. You can also use your personal wealth to buy rival stock, giving you more voting power when you attempt a takeover or merger. Failed mergers must wait a year before a new attempt can be made, which stops you quickly increasing the bid to buy at minimum cost.


Should you be less than financially astute, you also have the option of playing the game in ‘sandbox’ mode, where the financial distractions are removed from the overall game – more than enough for some of us. The game is very configurable, allowing you to make changes to things like the colour depth of the display, so that it will work on less than up-to-date hardware, or to change the skill level of your computer opponents so that you can enjoy a challenging game and not just a whitewash on one side or the other. The cutesy graphics, similar to the SimCity games, do not distract, and with the speed control it is sometimes like watching a Hornby railway in your X session. Overall, the game has caused many a lost hour and it continues to eat up free time. Each time you think you have mastered the game, you realise that there is another layer of dealing, and so the game becomes more and more devious. ■

Building stations

Profit or loss?




Taking play seriously:

ONLINE GAMING RICHARD SMEDLEY

Before the advent of the console and the home computer, gaming was usually a social occasion, involving two or more people. With the Internet, gamers are once more locked in combat with each other, rather than pitted solo against the rudimentary AI running on their PC. On this month’s cover CD you will find a copy of the Linux beta of Creatures. New game releases can be seen as part of the Linux advance on the desktop. However, online gaming on Unix and its predecessors can be traced back over 30 years.

MUD Slinging

[left] Figure 1: Follow the yellow brick road... [right] Figure 2: Strange fruit

Of course the stereotype of a Unix sysadmin involves science fiction and Dungeons & Dragons as much as beards and sandals, so it is no surprise to find that Multi User Dungeons (MUDs) have long been a popular pastime in the Unix community. The first MUD to appear on the Internet was MUD1, created at Essex University in 1979 by Richard Bartle and Roy Trubshaw. It was written in MACRO-10, the machine code for the DECsystem-10, with the first external players logging in across ArpaNet from the USA early the following year. MUDs quickly became a popular pastime amongst those with the connectivity (usually research students and systems administrators). Although everything in the games could be accessed via telnet, many more advanced client programs were written during the 1980s.


This has continued in recent years, as Linux and the Internet have brought a whole new generation to the pleasures of pretending to be lost in some dark dungeon, surrounded by powerful adversaries.
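Playing needs nothing more than a telnet client pointed at a MUD’s address and port; a minimal sketch, where the host name and port number are placeholders rather than a real server (the Info box later in this article lists real starting points):

telnet mud.example.org 4000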

Mud on the desktop

Those running KDE may want to try Kmud (see Figure 1), which integrates into the desktop as well as offering all the usual features, such as intelligent browsing of input history and an automapper. Papaya is a more traditional client for Gtk. Again it has a strong feature list and benefits from a simple plugin system if you wish to add your own modules, as well as being fairly configurable. For a development upon traditional clients, mcl - the MUD Client for Unix - runs in a virtual console on the Linux desktop. It is fast, supports embedded Python (and Perl) and makes life easier with features such as automatic login, aliases and macro keys, as well as some support for peer-to-peer chat protocols.

Graphic detail

Bioware have taken a different approach to the traditional MUD game with Neverwinter Nights, implementing the Dungeons & Dragons 3rd edition ruleset in a realtime 3D roleplaying game using the latest rendering and graphics techniques. Of course it needs a more powerful PC than any MUD client, but there is still time to save up, as the simultaneous PC / Macintosh / Linux release is still a long time in coming. It is refreshing to find a company which is taking its time to get a product right rather than rushing it out of the door.



Figure 3: Too cute by half

Creature feature

Also soon to be released is the Linux version of Creatures Docking Station. The creatures in question are cute artificial life forms, à la Tamagotchi, known as Norns. They can feed themselves, communicate and be taught what is right and wrong. With the launch of Docking Station, populations of Creatures can now interact and be swapped. Digital DNA means a better Norn can be bred - you get to play at eugenics without the moral outcry. Docking Station is a hybrid of a game client and a web site. Users can send Norns directly to each other, and then track where they are on the web. You can also receive a Norn from a random other user, and then start a chat session with them to tell them how it is getting on. The installation should delight many a Linux newbie, or indeed anyone who has previously suffered RPM hell, or hunted high and low for the right versions of libraries. After downloading and untarring the beta version from the site, just enter the directory and run the install script, ./dstation-install, which will grab the latest updates for you. So far it has been tested successfully on Debian, Mandrake, Red Hat, Slackware and SuSE. On the cover CD you will find the application untarred and ready to install. If you need to be root to install anything, you will be prompted for the password. Of course there are security implications to all of this, but then you are safe behind a hardened firewall, aren’t you?
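As a rough sketch of the installation just described – the archive and directory names here are assumptions, only the ./dstation-install script name comes from the game itself:

tar xzf dockingstation.tar.gz   # untar the downloaded beta (archive name assumed)
cd dockingstation               # directory name assumed
./dstation-install              # grabs the latest updates; asks for the root password if needed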

On the Board

Chess has brought people together for deep and studied confrontation for centuries. It has proved a relatively simple challenge for AI developers, with chess programs now available that play at international master standard. Nevertheless, part of the satisfaction that many gain from games of skill arises from pitting themselves against the cunning of another human player, probing for weaknesses and laying traps. The popularity of the game has led to a thriving online community with many servers. On an Internet Chess Server (ICS) one can play online with players across the world, or against machine opponents, watch beginners or grandmasters, or just hang out and chat. The Free Internet Chess Server (FICS) is justly popular, with the Internet Chess Club a good alternative. The XBoard interface provides a front end to most of the chess servers on the web, and to very many chess ”engines” - programs that play chess. GNU Chess and Crafty will be found on most distribution disks, and play a fairly good game. Machine ”intelligences” tend to be strong tactically, but weak strategically, though.

Info
Links to MUD clients: http://www.kyndig.com/links/Clients (you will find some of them on the cover CD)
mcl - MUD Client for Unix: http://www.andreasen.org/mcl/index.shtml
MUD server: http://www.circlemud.org/
Creatures Docking Station: http://ds.creatures.net/linux/beta.pl
Neverwinter Nights: http://www.neverwinternights.com/
ICS: http://www.chessclub.com/
FICS: http://www.freechess.org/
A good source of info: http://www.tim-mann.org/chess.html
The code to DeepThought can be downloaded from http://www.tim-mann.org/DT_eval_tune.tar.gz
Go servers - NNGS: http://nngs.cosmic.org/
IGS, the Internet Go Server: http://igs.joyjoy.net/
Kiseido Go server: http://www.kiseido.com/ ■
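Connecting XBoard to one of these servers is a one-liner. A sketch using XBoard’s ICS mode; the -ics, -icshost and -icsport options are standard XBoard flags, though the port number is an assumption (5000 is the customary FICS port):

xboard -ics -icshost freechess.org -icsport 5000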

Go?

The Japanese game of Go, based upon the Chinese Wei Qi, is better suited to an ”intuitive” approach than to a computer’s brute-force problem-solving. It makes a good basis for AI research; in the meantime, load up one of the Linux Go clients, simple or graphical, connect to one of the international Go servers and discover this ancient and fascinating game. We have included a number of clients and Go links on the cover CD as, for obvious reasons, it is very hard to do a web search for ”Go”. ■

[left] Figure 4: Digital DNA gives rise to diversity [right] Figure 5: Travelling is easy with the Warp



Multi-player Fun

XPILOT WINFRIED TRÜMPER

Like so many things for me, it began long ago. Doom could only be played within a local network, which it crippled at the same time; provided, that is, you had already been initiated into the higher mysteries of IPX configuration under DOS. And what you experienced then actually made Doom itself superfluous: as a result of the IPX broadcast storm triggered by the game, some of the secretaries who were being hindered in their work turned into raging monsters. Often, it was hard to tell the difference between game and reality.

Figure 1: An objective game rating.

The new version of XPilot, on the other hand, could be played via the Internet. And it ran, as the name suggests, under the X Window System. To be more precise, there is an XPilot server and a client, where only the client needs X11. In the meantime, clients for variants of Windows have become available, although they leave a lot to be desired in some respects. But XPilot also disrupted network operations in many a university, because the administrators did nothing but play all day, thus giving denial-of-service a new twist. XPilot was a game of its time: impressive coloured vector graphics, tedious-to-learn keyboard controls, barely synchronised, croaking sound. What a crass thing compared to today’s first-person shooters. The facts about XPilot are summarised according to modern criteria as a games rating in Figure 1. Hardly any points and yet still 90% gaming fun? Yes, even today XPilot still has its attractions, which go beyond pure retrocomputing or feelings of nostalgia.

Installation and preparation

The game gets under your skin not through realistic graphics, nor through particular brutality or complex frame handling. It’s just the dexterity, the rapidity and the strategy of other human co-players that give the game its allure. Put another way: XPilot is terribly boring to play on your own. On the other hand, you need solid training to have any chance on the shark-infested Internet. It’s best to start training on your own local server, where you will not be bothered by the overwhelming Nordic Empire (top-level domains .dk, .se, .no and .fi). Before launching into the game, you should make sure that you are using at least version 4.3.0. The version number is always stated at the start of the program. You can build the files from a tar archive and install them yourself; this procedure is easy and is explained in Box 1.

Figure 2: Training in the world of newbie-demo.

On start-up, the server (started with /usr/games/xpilots) immediately contacts the meta-server named meta.xpilot.org and enters itself there in the list of active servers, so that other players become aware of your server and can connect to it. The options +reportToMetaServer and -onePlayerOnly prevent this, if you want to be left undisturbed. The client is started with /usr/local/games/bin/xpilot &. The meta-server can be queried directly by the XPilot client if you simply omit a host name at the start: click on the Internet button and you get a list of servers world-wide, sorted according to the number of players. Of the many options for the server process, mapFileName is one of the most important, because you use it to specify the file name of the world map. The world newbie-demo was developed especially for this article, and it contains all the components with the exception of the balls. Copy the map file newbie-demo.xp into the current directory, and then start the XPilot server on the command line: xpilots -mapFileName ./newbie-demo.xp. Beware: the cannons on the walls fire when approached, and you must activate the protective shields promptly by using the space bar.
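Putting those commands together, a sketch of an undisturbed local training session, assuming the install paths from Box 1 (whether you combine the two privacy options like this is a matter of taste):

/usr/games/xpilots +reportToMetaServer -onePlayerOnly -mapFileName ./newbie-demo.xp &
/usr/local/games/bin/xpilot localhost &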

Pilot, salute the sun for me!

When starting the client, specify localhost as a parameter on the command line, so that the local server is contacted. The complete command to start the client is /usr/local/games/bin/xpilot localhost &. The client reads its settings, by the way, from the file ~/.xpilotrc, which you can edit either manually with an editor or from the configuration menu of the client (under Menu/Config). First, the client presents you with an internal command line, which you can explore with the help command. Finally, by pressing the Enter key, you start the game. If you left out localhost when starting, click on Local in the client window to get into the world newbie-demo, and you should see the message ”The following local XPilot servers were found”.

You will be offered your local server, and with a click on Join you start at training camp. Steering the ship can be done with the mouse, the keyboard, or both. Newer Windows keyboards like to protect the user from himself by locking up completely if you press several keys at once. Simultaneous turning, acceleration and shooting ([a]/[s]+[Shift]+[Return]) is impossible with this type of keyboard, in which case it is advisable to use mouse control right from the start, to have any kind of a chance. The next time you buy a keyboard you will automatically watch for such characteristics. Moving the ship is simple, in principle. Turn left with [a], turn right with [s], accelerate with [Shift] and fire with [Return] – not to be confused with [Enter] on the number block, which toggles back and forth between mouse and keyboard control. Oh yes, and to couple and decouple the ball, press [f]. This can become necessary in order to drive the ball and at the same time fight an opponent. There is no braking, because otherwise the game would become too complicated; to stop, make an abrupt U-turn. You should avoid colliding with walls, opposing players and their shots. Protective shields only help up to a certain speed, so it’s always better to rely on your own skill. The shields, the deployment of items and movement also require (lots of) fuel, which you must promptly refill. If you can’t find any fuel tanks lying around, you can search the world for built-in petrol stations, fly to them and fill up with the [f] key. Amass items by switching off the shields using the space bar (if active) and flying over them. A list of all items in server versions from 4.1.0 on is shown in Table 1. Some items get used up over time or with use, regardless of whether successfully used or not. The more items of a type attached to the ship, the stronger their effect. Bear in mind the possibility of modifying the weapons: a smart guided missile finds the enemy automatically, while an invisible enemy can be tracked down by a heat-seeking missile. The keys for modification can be found in Table 2. The keyboard assignment is a science in itself, and so is the question as to whether you want to play using the mouse or the keyboard.

Box 1: Installation instructions for XPilot
As root, change to the directory /usr/local/src/ and unpack the archive there:
cd /usr/local/src
tar xzf /tmp/xpilot/xpilot-4.3.1.tar.gz
Change to the source directory and compile the sources as follows:
cd xpilot-4.3.1
xmkmf -a
make
A final make install copies all files to the designated places in the file system, the server to /usr/games/xpilots and the client to /usr/local/games/bin/xpilot.




Table 1: Items (the symbols from the printed table are omitted; the key used to deploy each item is given in brackets)

Fuel [no key]: Needed for propulsion and weapons. Can be used to repair your own base. Can also fill your tanks.
Tanks [R]: To take on additional fuel. Also accelerates refuelling. Each tank holds only a limited quantity of fuel. Tanks can also be used as a weapon, by decoupling them. The tanks, together with the fixed-installation main tank, form a unit whose fill level is displayed on the right in the HUD.
Aircraft cannons [Return]: The cannons built into the ship. Can be added to with a fan cannon and a tail gun.
Fan cannon (wides) [Return]: Scatters the shots from the on-board cannons in a wide angle. The angle can be adjusted with the [z] key in four steps.
Tail gun (rearshots) [Return]: A cannon at the stern of the ship.
Afterburner [no key]: Better use of fuel. Makes the propulsion of the ship more effective.
Boost [J]: Gives more push for a short time. Gets used up at the same time.
Tractor beam [T]: For fatal pulling or pushing of other players.
Autopilot [H]: Keeps the ship in its present position. Useful for beginners when refuelling.
Extra shields [G]: Extra strong protective shields protect against contact with walls, hostile ships and their weapons. Release with [G], activate with the space bar. As long as the shields are active, you cannot fire your own weapons nor pick up any items. Nor do the extra shields help against a collision with a cannon or a treasure chest.
Laser [-]: A beam of light of greater or lesser length. Can atomise or maim an opponent.
Laser reflector [I]: Reflects back the laser beams of other players to some extent. The probability of being hit is cut by half for each reflector.
Cloak [Del]: Makes your own ship more or less invisible to other players. The exhaust gases, however, remain visible. Flying through wormholes, bombardment and ECM can nevertheless make you visible for a short time, despite the cloak.
Sensor [no key]: Makes other invisible players more or less visible, i.e. the opposite of cloaking.
Transporter beam [T]: For stealing items from other players.
Mines [Tab] [+]: Explode when other players approach, or by remote detonation. Available in mobile or stationary designs. Mines can be reprogrammed with ECM.
Missiles [#] [‘] [=]: Can be used in three types. Unguided missiles, or torpedoes, simply fly in a straight line until they hit something. Heat-seeking missiles track active powerplants by the heat they give off - and your own too, so be careful. Programmed or smart missiles track down the players whose name they have in their radar. With ECM the smart missiles of other players can be reprogrammed. Invisible players cannot be traced.
ECM (electronic counter measures) [[]: Acts upon the on-board electronics of the opponent, blinding him, causing the loss of the ball, destroying part of his laser and reprogramming mines, missiles and robots in the immediate vicinity. The strength of the action depends on the distance; the maximum range is eleven blocks.
Deflector [O]: Sometimes fends off shots and other dangerous items from the ship. Less effective than protective shields.
Hyperjump [Q]: Displaces the ship at random to another point in the world. Gets completely used up at the same time.
Phaser [P]: Dematerialises your own ship, so that you become invulnerable and can fly right through other things. On the other hand, you cannot pick up any items or fire weapons. The effect lasts up to four seconds.



In any case, you should configure the keys at the beginning so that they match the original settings. You should make use of three overriding keyboard functions:
• Alternative client configurations, to which you can switch in special situations with the [Esc] key. For example: in close fighting, to zoom closer to the events
• Macro keys for weapons configuration. For example: I B3 Z1
• Talk macros for often-used phrases. For details see the file README.talkmacros in the doc subdirectory of the XPilot sources
Unfortunately these functions can only be assigned by editing .xpilotrc in a text editor.

Psychological weapons

Apart from control of the ship and the deployment of the weapons, psychological strategy also plays a role, which brings us to the first ground rule: if you ever fail, you are never personally to blame; the client, the network connection or the unbelievable luck of your opposing player is. Here are a few typical expressions with which you can indicate this: lucky, pure luck, never!, pathetic, stray, no way!, die!, hands off, damn, forget it, hey, doh, oh my. It is almost superfluous to mention the possibility of communicating with other players; the chat line is simply part of a network game on the Internet and is activated with the [M] (message) key. The language of XPilot players has a few peculiarities. So-called ”item-wussies” are players with a passion for collecting items, ”suiciders” commit suicide out of cowardice and ”(base) sitters” are participants who never attack. You can even talk to the server, which understands the commands /advance, /help, /kick, /lock, /password, /pause, /queue, /reset, /set, /team and /version. For example: m/help set. In this way, you can configure the server without a restart, although for most operator commands you do need an ID, defined in the world map with the command /password. Now, are you fed up with the world maps newbie-demo.xp and the factory-set default.xp? Then it’s time to get busy with the many XPilot worlds. An illustrated overview can be found at http://www.undue.org/homepages/matt/xpilot/maps.html. There are three basic types of XPilot world: Duels, Racing and Ball Hunt. You should start with individual fighting as in the standard map, and

Figure 3: Key configuration.




Figure 4: Extract from Mad Cow Disease.

The author
Winfried Trümper develops bang-on concepts for the further development of UNIX, such as sql4txt, docfs, shobj and file-rc. Since he is unable to live from this and from playing XPilot alone, he ekes out a living by Perl programming and system administration. He can be contacted at me@wt.xpilot.org.

Info
http://www.xpilot.org/
http://www.undue.org/homepages/matt/xpilot/maps.html
http://bau2.uibk.ac.at/erwin/NM/www/ ■


then later you can try racing and finally learn the noble art of the ball game. By far the most popular XPilot world is Bloods Music 2. Two teams of four players each try to steal the ball of the opposing team and place it in their own garage. This is by no means simple, since the ball is pretty heavy. Yes, indeed, in XPilot everything obeys the laws of physics and is calculated accordingly: gravity, mass, angle of ricochet and so on. If the theft succeeds, there are a whole lot of points for the team and the same number of minus points for the opposing team, which is why they will try anything to protect their own ball. This includes the deployment of on-board cannons, whose strike costs the opponent one of their four lives per round and brings them back to their home base. But beware: Bloods Music 2 is the playground of the XPilot gods and therefore only recommended for experienced players. Fortunately, however, there are a whole lot of other interesting world maps, such as the following: Tourmination, New Dark, Hell, Hi There, Teamball, The Caves, Pizza and Mad Cow Disease (see Figure 4). The latter, due to its special originality, is my personal favourite, even if one has seen all the gags after a few rounds. If you ever draw up a world map yourself, then take into account the problems of non-local players: use soft walls, against which one is not immediately smashed to pieces, and assume a low image refresh rate of 12 frames per second. The construction of worlds is beyond the scope of this article but is certainly possible, with a text editor and a bit of hard work.

XPilot client and server communicate with each other on the basis of the UDP/IP protocol, which, compared with TCP/IP, has the advantage of tolerating errors. If a UDP packet does not arrive, the Linux kernel does not attempt a re-transmission: a re-transmission would take a long time measured against human reaction times, and in the meantime the player would miss out on the rest of the events in the game, which would by now be queueing up in the form of new UDP packets. With XPilot one soon learns how fast human reaction time is - about 20ms. On a local Ethernet (10Mbit/s) the round-trip times for data packets, at around 2ms, are usually less than that, while on the Internet one often plays beyond the 20ms limit. Lag is the name for this delay in XPilot jargon. And if the lag is to blame for the fact that one loses control over the ship in a crucial situation, the client chat line will quickly fill with vulgar expressions that cannot be printed here. Round-trip times to the server can be measured with ping, which states them in the time= column. Over 60ms means you have a severe disadvantage, and over 100ms means you are playing more out of politeness, or else training. So-called lag-training helps for the team cups which take place several times a year, at which teams from various European countries compete against each other for two days. Then, of course, one has to contend with the delay to far-distant servers.

wt@backstage:~> ping -c 5 wonder2.e.kth.se
PING wonder2.e.kth.se (130.237.48.16): 56 data bytes
64 bytes from 130.237.48.16: icmp_seq=0 ttl=51 time=44.3ms

If you run into problems with XPilot, the rec.games.computer.xpilot newsgroup is a helpful place to start and to ask questions. You have been forewarned: the newsgroup is also the right place to start looking for addiction counselling. For example, you should try to break free from XPilot if, after the game, you see red triangles before your eyes for long periods, or have feelings of dizziness... XPilot sickness is the specialist term for these kinds of complaints. An article called ”XPilot - how can I break free from it?” is already being written. But until that appears, you can still play a couple of rounds. ■

Table 2: Modification of weapons (the key for each modification is given in brackets)

Implosion [i]: Weapons implode and as a result draw all the surrounding players towards them (possibly to their doom). Pressing again switches back to explosion mode.
Angle of scatter [z]: The firing angle can be selected in four stages (Z0-Z3). If your ship is equipped with lots of fan cannons and other players are already deriding you for being like the sun, a smaller scatter angle will soon put a stop to the jeering.
Big [b]: In four stages (B0-B3) the weapon is topped up with additional fuel, so that the weapons become heavier, fly more slowly and cause more damage.
Velocity [v]: Weapons and particles fly faster, but also disappear faster. Here again, four stages from V0 to V3 are possible.
Fragmentation / cluster [c]: A very nasty modification, which causes mines and missiles not only to produce pressure waves but also to leave behind a hail of shrapnel.
Nuclear [n]: Several mines or missiles are combined into one; the result, especially as ”full nuclear weapons” (FN), is an extremely destructive weapon. Get out of the dust promptly, otherwise you will get hit by something too.
Multiplication [x]: Mines and missiles are shot off in smaller but multiple form. Can be selected from single (X0) to quadruple (X4).



Here come the Linux games!

JAGGED ALLIANCE 2 FIONN BEHRENS

The Canadian company Tribsoft has gone to the trouble of converting the game to Linux, and it is brought to us by Titan, among others. The game itself is almost more complex than the story of its development - but let’s start at the beginning. Whenever there is a TV report about distant countries in which dictators are oppressing people by the thousand, it all somehow seems very far away. Anyway, there’s nothing you can do about it. Jagged Alliance 2 deals with just such a country. An evil tyrant named Deidranna is keeping the small, poverty-stricken country of Arulco enslaved with great harshness and in dire conditions, and she is squeezing the last drops from her tormented subjects in order to spend the money thus obtained on the army and her terrorist state. The former ruler, long believed dead, is in hiding abroad and commissions you, as an experienced mercenary, to liberate the land with hired troops. To do this, you receive precisely 40,000 dollars start-up capital. This whole backstory is packed into a very well-made and very long introductory film. The game comes in a typical DVD plastic case, whose hard plastic holder does not exactly mollycoddle the CD, which is clamped in so hard as to be bomb-proof. It is only on closer inspection that you discover the second CD, which is necessary for installation, in a paper cover hidden behind the manual.

After starting the installation program, you are met with a simple shell script offering four installation variants, requiring between 305 and 850MB of space on the hard disk. It is a free-of-charge, easy-to-use installation tool, but a somewhat more click-friendly installation wouldn’t have hurt. The instructions come in the form of a 50-page, illustrated booklet, which gives a comprehensive introduction to all aspects of the game. Especially useful at the beginning is the short reference on the first three pages, because there are quite a lot of functions and keys to memorise. The objective of the game - liberation of the country piece by piece, gaining new confederates, money and sources of raw materials - can only be achieved, even at the easiest level, with brains and hard work. After the intro sequence, the extremely homespun graphics, with a skimpy, unalterable resolution of 640x480, seem more than a little unpleasant. But as we all know, strategy games are often not primarily defined by their graphics. So, if you aren’t put off by this glitch, you will soon find out that JA2 is a game whose playing depth sets new standards for Linux software. On the other hand, it would be unfair to Jagged Alliance to force it into the strategy games pigeonhole. Even if you would like to think that in the end all computer games come down either to someone racing around somewhere chasing something, or to some kind of resources being managed somewhere far above – Jagged Alliance offers both and more, although with the emphasis on strategy.
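As a hedged sketch of the installation session described above – the mount point and script name are assumptions, since the review only says the installer is a simple shell script on the first CD:

mount /mnt/cdrom
cd /mnt/cdrom
sh ./install.sh    # hypothetical name; pick one of the four variants (305-850MB)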

This game looks set to become a classic on Linux computers. With JA2, Sirtech has come up with a hitherto unknown but successful mixture of action game and strategy simulation, which has won the hearts of many gamers, and not without good reason; its predecessor, Jagged Alliance, was named 1995 strategy game of the year.

Figure 1: The comprehensive intro film describes the situation at the start of the game.




Figure 2: The ”SirOS” Laptop: Whether email, Web or finances - this helps.

URLs
Sirtech: http://www.sirtech.com/
Tribsoft: http://www.tribsoft.com/store.html
Titan: http://www.titancomputer.com ■

Evaluation:
Long-term playing fun: 90%
Graphics: 35%
Sound: 80%
Control: 85%
Multiplayer: 0%
Originality: 90%
Complexity: 95%
Overall rating: 80%


In fact, this successful combination of large-scale strategy and direct contact with the detailed handling of role-playing elements has defined its own, new genre. The game primarily runs on four levels:
• The first of these is the laptop. This links the player with a virtual Internet, in which you can obtain information, buy arms, hire mercenaries and send emails. In the medium term, an indispensable aid.
• Then there is a sort of complete overview, in which you determine the composition of the groups of mercenaries, map out routes and switch to all other views. The map in this overview has an over-ground level and also three underground levels, all of which you have to keep an eye on.
• Thirdly, a sort of action mode, in which, when there is contact with the enemy, you send your fighters one after the other to face the opponent in the best role-playing tradition. Each mercenary also has, as in a role-playing game, quite specific characteristics and strengths, the type and quality of which jointly determine the outcome of each round of fighting. This is a mode where good tactics are especially necessary, otherwise you will soon lose a few valuable comrades.
• At the fourth level there are the steadily interspersed inter-sequences, which move the

Figure 3: There’s a nice loading image for all key scenarios.


action forward each time minor stages and game objectives are achieved, and thus support and steer the player in their task. Many additional elements enlarge the variation options almost to infinity - you can construct your own implements from individual components and combine these in turn with up to four others. Ingredients such as chewing gum, steel piping, super glue and the like are used here. So it is no problem to adorn your favourite cannon, with a bit of DIY, with laser pointers, a tripod, an aiming telescope and range extenders - MacGyver says hi. Of course, at the beginning, due to the low amount of start-up capital, there is no chance of hiring an army of top-class fighters. What you have to do, given the resources, is buy and plan tactics carefully and circumspectly. First contact with the scattered rebels will occur shortly; you support them initially with advice and deeds, and soon with people, too. Later the citizens of Arulco will also be joining up, training as militiamen and allowing their mines to be exploited for the good cause, so that at last the much-needed money comes into the treasury. Each individual mercenary and militiaman has an individual set of equipment, with which, depending on his special abilities, he can repair other people’s armour, open locks, heal the casualties or build things - and much more besides. This is where the patient player can open up endless prospects for optimisation. Even the good



Figure 4: Strategic and personal decisions take place in the global overview and on the virtual laptop.

Figure 5: The action mode right in the thick of it, whether a rebel pub or Deidranna’s human experimentation clinic.

organisation of a simple contact with the enemy, with some well thought-out series of moves and well-distributed people, can easily take half an hour. Thankfully, for impatient colleagues who prefer to concentrate on global events, the game has a function up its sleeve to resolve entire enemy encounters automatically with a roll of the dice. The sound is well done and finely adjusted to the events - noises always come from where they are created, and the background music is unobtrusive but not boring, although now and then it does stutter a bit. To compensate, JA2 also supports ESD of its own accord. This means that any system sounds or your own favourite music can be listened to alongside the game on any old sound card. In this game, we sorely missed the option of multi-player scenarios. It is precisely in the domain of strategy games with non-linear time sequences that others have shown that such a thing is easily possible and can also be highly entertaining. The fact that this option is missing must be chalked up as a big minus point. A game of this complexity does of course also require a high-class interface. This is where SirTech, with its successful division of the meagre screen space, has done wonders. Details such as a directory of inventory by map sectors and sensibly arranged tables, with colour highlighting of entries which are strategically important or relevant to decision-making, turn the game into a pleasure. What is urgently needed, though, is a keyboard template, because for fluid play you currently have to remember more keys than with a flight simulator.

Lastly, the successful integration of the various game levels gives Jagged Alliance 2 that certain something. You can change completely naturally from the tactical overview to the action mode, never getting the feeling of overlooking something or losing perspective. Whenever things get tight, you can stop the game time and decide at leisure whatever needs deciding. And the fact that the fight actions are turn-based also means that you don’t have to worry that your opponent might drill a new hole in your hat while you’re pondering the next step.

Conclusion

This game could keep you occupied for a month. Although from a technical point of view the game sometimes seems outdated, and the paltry resolution cannot be described as anything but anachronistic, Tribsoft, by converting it to Linux, has come up with a solid and tidy product - if you overlook the Spartan installation and a few memory leaks, the correction of which is already in progress. No obvious bugs or crashes occurred during our test. From the point of view of gameplay, this special class of mercenary simulation, which has already been awarded prizes many times all over the world, is a real cracker which defines its own particular style - a definite buy for strategy fans. A list of reference sources for this game and Linux-specific discussion forums can be found on the Tribsoft website. And by the way, the porting of the Unfinished Business expansion to Linux has already been announced for the near future. We can hardly wait. ■



Image processing with Gimp: Part 3

LAYER BY LAYER SIMON BUDIG

This time we’ll be dealing with layers. Layers are the method of choice when it comes to organising and arranging image data easily.

Figure 1: The dialog for layers, channels and paths

So far we have treated images as a single screen full of pixels. Most of the slightly more ambitious image processing programs, on the other hand, offer another method for the flexible handling of various image elements: layers. Imagine layers as transparent sheets of film. An image consists of several films which are laid one on top of the other in a pile. The films have transparent areas through which the underlying films become visible. Let’s take a look at a simple example of how to work with layers. Start Gimp and open a new image. Use <Image>/Dialogs/Layers, Channels and Paths... or [Ctrl+L] to open the central control point for working with layers (see Figure 1). The main part of the dialog is the white area in the middle. Here you can see the individual layers listed line by line. The eye at the beginning of the line shows whether the layer is currently visible (with a click on this you can make a layer invisible); the cross indicates coupled layers, which can only be moved together. Then follows a small preview,

which is sufficient for a rough orientation, and finally the name of the layer. It is worthwhile, especially when there are lots of layers, to select meaningful names – with a double-click on the name, a small dialog opens in which you can change the name. The current layer is marked in blue. All painting operations and every plugin command works on this layer. With a single click on the preview or the name of the layer you can change to another layer with which you’d like to work. At the lower end of the dialog you will see six buttons, with which you can execute basic layer operations. With the first button you create a new layer, the second and third buttons move the layer up or down, the fourth creates a copy of the current layer, the fifth anchors a floating selection, and the sixth deletes a layer. Other important operations can be found by clicking with the right mouse button on the layer names.

A hard shadow

Sometimes you want to make an image or a graphical element stand out a bit from the background. A standard procedure for this is to lay a hard shadow behind the element; learning how to do this is a good way to start working with layers. Create a new image, for example 500x500 pixels in size. Open the layers dialog and click on the bottom left button to create a new layer. In the dialog that appears, select a transparent background, name it anything you like and accept the default size. After a click on OK the second layer appears in the layers dialog. Transparency in Gimp is symbolised by a grey chessboard pattern. Since the layer is empty, nothing changes in the image window. Paint in this layer as you like with the painting tools, but make sure a few areas remain transparent, so that the effect can be seen later (Figure 2). Duplicate the layer by clicking on the fourth button (the one with the two sheets of paper). Now make the upper layer invisible by clicking on the eye symbol and activate the middle layer with a click on the name.




Then activate the Keep transp. button in the upper area of the layers dialog. This will protect the transparency of the layer. As in the first part of our workshop, you can now paint over the image in black and the form of the motif is retained. To create the hard shadow, we now completely fill the motif with black. The fastest way to do this is to use drag & drop. Click in the colour fields of the toolbox on the small black-white symbol, in order to reset the colours, then press (and hold down) the mouse button in the black colour field and drag the colour into the image window (Figure 3). Since the transparency of the layer is currently protected, the contours of the motif are retained. Now activate the Move tool (the one with the compass rose as its symbol), click in the black area of the layer and drag the layer a few pixels down and to the right. If you now make the uppermost layer in the layers dialog visible again, you will see a shadow-like effect (Figure 4). But since it is in the nature of shadows that they are rarely so sharp-edged, we shall now dot the i: make sure the middle layer is activated and turn off the transparency protection. Then select <Image>/Filter/Blur/Gaussian blur (IIR)... and a radius of 10. Now the shadow gets a soft edge that fades out (Figure 5). Since we have spread our image over three different layers, it is possible to move the shadow later to change the direction of illumination. You can make it even softer by reducing the covering power of the shadow layer. To do this, use the slider in the upper part of the layers dialog.

Lighting effects

It is possible to achieve lighting effects in exactly the same way (preferably against a dark background). In Figure 6 you can see a spiral, which we have made glow with the above technique. Duplicate the layer with the spiral three times. Activate Keep transparency in each of the middle three layers and fill the second layer with white, the middle one with a luminescent turquoise (#00ffff) and the fourth layer with a luminescent green (#00ff00). Now deactivate the transparency protection again. With the very useful Gaussian Blur plugin, you can blur the middle three layers: for example, try radii of 5 for the white, 10 for the turquoise and 20 for the green spiral. As a result we get a pale bluish luminescent spiral. To strengthen the green luminescence a bit, activate the layer with the green spiral. With <Image>/Image/Color/Curves... activate the curve tool, familiar from the last part. From the top list, select the Alpha channel and drag the middle of the curve upwards slightly. This makes the semi-transparent area of the layer a bit stronger, and the luminescence becomes more visible (Figure 7). Contrary to the hard shadow above, here we have done without displacing the layers with the glow effect. Obviously this is easy to do. But a small problem arises, in that the three layers with the different luminous colours should not be displaced with respect to each other. So as not to have to move each layer individually by exactly the same distance, you can click on the area between the eye and the preview and make a cross visible. This cross means that this layer will also be moved in all actions with the Move tool. To move all the layers with the luminous effect in parallel, activate this cross on all the affected layers, activate one of these layers and move them all together. A little tip: the Move tool always moves the layer with a picture element under the mouse pointer; in case of doubt, this is the background layer. If you do not want this, hold down the [Shift] key and the activated layer will always be moved.

Masked

To further confuse matters, apart from the two ways of defining transparency addressed in the Transparency box, there is also a third - the layer mask.

Floating selection: we came across this in the first part, in a general way, stating only that a floating selection can be moved back and forth without destroying the underlying image data. Here you can now see what’s really going on behind it: a temporary layer is made, treated separately and merged back into the image as soon as the floating selection is anchored. With the layers dialog you can also turn it into a real layer.
Transparency: If an image consists of several layers, all (except perhaps the lowest) automatically have an alpha channel, meaning that each pixel has, in addition to the Red/Green/Blue colour information, an additional value representing the covering power (from 0 to 255). Also, each layer has a global covering power, which affects all pixels of this layer and defines the maximum covering power. It is set using the slider in the layers dialog. ■

Figure 2: A layer in front of a solid background




The layer mask is a separate greyscale image, which can be used in addition to the normal alpha channel to make certain areas of the layer transparent. A layer mask is created with Add layer mask in the context menu in the layers dialog. Now you will see a second preview image, and by clicking on the respective preview image you can select whether to paint in the layer or in the layer mask. We will use the tiger image from last month: we want to release the tiger, so that he can more easily be transplanted into a different environment. Load the image into Gimp and give it an alpha channel via <Image>/Layers/Add Alpha Channel. Now you can attach a layer mask to the image via the context menu in the layers dialog: right-click on the image in the layers list and choose Add Layer Mask. In the dialog which appears, select White (full covering power). If you now paint in the image with black as foreground colour and any painting tool, the image becomes transparent at the corresponding places (making sure that the black background layer is now invisible, with no eye symbol against it). What’s interesting about this is that the original image data are not altered. If you paint with white as foreground colour in the layer mask, the original image reappears. So you can snip away, without destroying the image, until the desired contour can be seen; in Figure 8 you can see the first steps. Incidentally, it is advisable to appraise the contour against both a black and a white background. Perfectionists will also view it in front of other colours. If one now has several image elements released in this way, these can be combined any way you like and you can also create abstruse images. Flying fish in the middle of a primeval jungle are no longer a problem. At this point, drag and drop within Gimp also comes in handy: by dragging the layers out of the layers dialog into different image windows, one can copy them simply into other images.

Figure 3: Fill the layer using drag’n’drop

Figure 4: A sharp hard shadow looks more artificial...

Figure 5: ...than a soft-focus shadow

The author
Simon Budig would like readers to approach him with topic requests – otherwise this series will soon peter out...

Transparent GIFs

At this point we must address a very frequently asked question: How does one use Gimp to create GIFs, which are transparent at certain places? Lots of people come to grief at the point at which they want to define the transparency colour. The answer is simple: you don’t even need to define a transparency colour – Gimp does it for you. If a certain area of an image is to be transparent, give the image an alpha channel and etch some holes, for example, in the image. If the image is then saved as a GIF, Gimp will ensure that one colour is reserved for the transparency. Bear in mind that in this process, information is lost. Gimp can save several degrees of transparency in its own format, while the GIF format can only handle completely covering or completely transparent pixels. If you have painted your image with nice soft contours, these will be lost when you export it in the GIF format (see Figure 9). Here, in some circumstances, help can be provided by the

Figure 6: X Files here we come...



plug-in Semi-Flatten (<Image>/Filter/Colours/Semi-Flatten), to some extent: this plug-in ensures that all semi-transparent pixels (thus those which the GIF format would ruin) are made non-transparent, by blending them with the background colour. If one has selected a colour here which matches the ”average” colour of the web page background, this effect can be considerably weakened. In Figure 9 the three different variants can be seen. At top left there is a brushstroke with clean anti-aliasing. In the middle, this stroke has simply been converted into the GIF-style indexed format of Gimp (a colour chart with a maximum of 256 colours); the clean edge has given way to steps. At bottom right, the brushstroke has been prepared with the Semi-Flatten plug-in for a web page with a scarlet background. Of course, one should now only show the image against a red background - otherwise it looks very ugly. But against a red background one can enjoy impeccable anti-aliasing.

Future prospects

Figure 7: With the curve dialog, you can increase the glow

In the next part we will make a foray into the Gimp menus. We will look at a few gems among the plugins and see how these can be combined to make a few nice effects. ■

Figure 9: Problems with the alpha channel in GIFs

Figure 8: With the layer mask it is easy to release image elements

Alpha channel: refers to that part of a layer which contains information on transparency. The alpha channel can be manipulated deliberately (with the curve tool) or protected (Keep transparency in the layers dialog).
Release: is the name for the technique of separating a motif from the background. In our case the tiger is separated from the landscape. The main motif can then later be inserted into other images.
GIF: is a file format which is very widely used on the WWW. Images saved in the GIF format can define one (of the maximum 256) colours as ”transparent”. This means a pixel can either be transparent - or not. Gimp’s model of transparency goes further: a pixel can have one of 256 levels of transparency. Since the GIF format unfortunately has a few licensing problems (LZW compression), you should avoid it as far as possible in your own web projects and instead use PNG or similar. Unfortunately support for PNG is not yet available in every web browser. ■



Koffice Workshop Part 4

EXERCISES WITH K TIM SCHÜRMANN

In the last three parts of our KOffice Workshop, relatively dry figures and texts have played the leading role. This time we will do something a bit more creative and, with the aid of the vector painting program KIllustrator, conjure up a happy smiley face. Graphics and images can be displayed by a computer in two ways. Firstly, an image can be broken down into lots of little dots, called pixels, in the same way as your monitor and printer compose an image. Unfortunately, enlarging these halftone or bitmap images leads to fairly clumsy and unattractive block structures. With the other option, which is very popular in the field of technical drawing, the image is composed from individual geometric elements such as lines or circles. Apart from the advantage that these vector graphics take up relatively little space on the hard disk - you need only store the position and size of the individual elements - they can also be enlarged as much as you like. The disadvantage of such images lies in their often fairly complex structure: photos either cannot be shown at all, or only with difficulty.

One and only

In KOffice there is only one painting program specialising in vector graphics, and that is KIllustrator. A counterpart responsible for halftone graphics is being developed by the KOffice team and should also find its way into the free Office package soon. Since it may be some time before that happens, this part of our five-part Workshop will be devoted totally to KIllustrator. Apart from the spreadsheet, KSpread, this is one of the most mature and stable components of the entire Office Suite. The online help is equally satisfactory. Nevertheless you should still bear in mind that KIllustrator, like all the applications collected into the Office package, is still at the development stage. You should therefore not entrust any important data to this painting program. It is still worthwhile getting to know the application a bit better, since, especially under Linux, this type of vector painting program is rare.

Workshop summary
1. Word processing with KWord – Part 1: A business letter
2. Word processing with KWord – Part 2: A newspaper
3. Tables and diagrams with KSpread and KChart
4. Graphics and images with KIllustrator
5. Presentations with KPresenter

The interface

In order to join in with the following smiley example, you should start KIllustrator via the KDE Start menu under Office programs/KIllustrator, or from the KOffice desktop (cf. the first part of this Workshop). Close the window that appears after the start with a click on OK, which will automatically create a new, empty page. Unlike its other Office colleagues, the interface is sparing with overloaded toolbars and on the whole looks very tidy. The use of KIllustrator is very much oriented towards its large commercial archetypes, such as Corel Draw. Anyone who has already worked with this type of program will soon feel at home with KIllustrator. In the centre of the window you will see the work area. The big white rectangle represents precisely one printable page. All objects that we are going to create next must always be on this page - any objects outside it are neither displayed nor printed by KIllustrator. In order to modify the page settings, select Layout/Page from the menu.



There, apart from the page size, you can also modify its orientation and set the size of any margins. But back to the interface: on the upper margin, immediately beneath the menu bar, you will find a symbol bar which offers rapid access to the file and clipboard functions. With the size list, roughly in the centre of the symbol bar, the view of the displayed page can be enlarged or reduced. On the left side of the KIllustrator window there is another toolbar, which allows rapid selection of the various painting and drawing tools. Alternatively, you will find all the tools listed in the menu under Tools; they are incidentally arranged there in the same order as their symbol icons. The last object bar to be mentioned is the colour palette positioned on the right edge of the screen. With its help, the colours of the individual graphics elements can be changed quickly and easily.

Figure 1: The KIllustrator screen

Feint lines and the grid

Our smiley face initially consists of a yellow circle with a black rim. Before we start drawing, you should first make sure that the grid is activated. This aid works in a similar way to the little squares on graph paper: it is meant to help the user align all graphical objects accurately, to the millimetre. To this end it can not only be made visible, but also be set up so that its gridlines snap in all graphical objects. To activate the grid, select Layout/Grid... from the menu. Mark the two empty boxes there and then click OK. The blue grid squares should now be shown on the white paper area. For our face, select the ellipse tool from the toolbar or the menu (Tools/Ellipse). Bring the mouse cursor to a grid junction in the upper left third of the paper, then hold down the mouse button and draw a circle. As you do so, watch how the circle is snapped in and held by the respective gridlines. Once you have drawn a circle roughly like the one shown in Figure 3, release the mouse button and activate the mouse tool (the arrow, or Tools/Mouse). The circle should now have been selected automatically, which is shown by the little boxes around it. As soon as you move the mouse over a box like this, your mouse cursor changes its form. Now hold down the left mouse button and, by slowly moving the pointer, you can change the size of the circle. Click just once briefly on one of these points, though, and KIllustrator switches into a second editing mode, recognisable by the now altered marking symbols. Like the change in size just described, the object can be rotated about its own axis in this mode. Bear in mind that rotation can only be done with the four arrows at the vertices; a click on one of the other arrows would distort the object. Since rotating a circle makes relatively little sense, you should try this function out some time on a rectangle.


Figure 1: The KIllustrator screen

A rectangle can be created via the corresponding symbol from the symbol bar in exactly the same way as a circle. The centre about which the object rotates is, incidentally, defined by the small circle in the middle of the figure. To modify this point, place the mouse on the small circle in such a way that the pointer turns into a double arrow, then hold down the left mouse button and move the point to its new position. To get back to the old mode, simply double-click on one of the edge markings. If a drawing ever goes completely wrong, you can remove the marked object from the picture again via Edit/Delete. To deselect an object, simply click, with the mouse tool activated, on an empty area of your page. Or else you can select an existing object again by clicking on it with the left mouse button.

Figure 2: Setting up the grid
Figure 3: The finished circle




What counts is that when you do so you hit the respective object precisely, which gets a bit fiddly, especially with small objects or those with a thin edge. Incidentally, you must do a very similar thing if you want to move an object: move the mouse cursor onto the corresponding candidate, hold down the left mouse button and then drag the complete object to its new position.

Colour

The smiley face is now to get a yellow coat of paint. To do this, simply click – with the circle selected – with the left mouse button on the yellow colour field in the colour palette at the edge of the window. A click with the right mouse button would mean that instead of the fill colour you would be changing the colour of its border. For the sake of our example, leave the latter black – but it could be a bit wider. To modify this, either click on the filled circle with the right mouse button and select Properties from the context menu, or, with the circle selected, pick Edit/Properties. In either case a window should now open with which you can alter a few attributes of the circle. To make the outline look thicker, enter 4mm as the width in the Outline tab. Here, incidentally, you also have the option of modifying the type of line – so, for example, the smiley face could be made to look frayed.

Figure 4: Our circle is given a broader outline

Figure 5: The second eye is created by copying and inserting


After a click on OK the circle should now have a thick, black outline. Now all the face is lacking is the eyes, nose and mouth.

The eyes have it

First the eyes. We shall make these out of two smallish, black-filled circles. Proceed exactly as for the face: first select the circle tool, then paint a small circle at eye level and fill it in with a click on the colour black in the colour palette. You could now draw a second eye in the same way – but so that we get a second eye that looks exactly the same, the first eye should instead be copied. To do this, mark the black circle just drawn, select Edit/Copy from the menu and then Edit/Duplicate. Apparently nothing has happened in your drawing – but click on the first eye with the mouse tool and move it to the position of the second eye, and you will see that KIllustrator has inserted the copy at exactly the place where the original is located. That completes the two eyes, so now we can go on to the nose. Before we create this with a line, you should first turn off the grid again. To do this, call up the corresponding window via Layout/Grid and unmark the two small boxes. After a click on OK the grid should disappear from the background. Instead of the grid, a feint line should be used to create the nose. A feint line works like a line in the grid, except that the user can define the position of each individual feint line himself. KIllustrator enables you to create vertical and horizontal feint lines in two ways. Firstly, you can create a feint line rapidly using the mouse: for our example, place the mouse cursor on the left, vertical ruler, hold down the left mouse button and drag the feint line onto your page. Move the mouse to the centre of the incomplete face. If you are not sure whether the feint line is in the right place, do not let go of the left mouse button – it is very fiddly to make any modification later via the corresponding window. If the feint line is roughly in the centre between the two eyes, release the mouse button. Similarly, you can make horizontal feint lines via the top ruler. Another, considerably more accurate, option for creating feint lines is available under Layout/Helplines. Here you can create feint lines directly by entering their position values; in addition, this dialog serves to delete individual lines and change their positions. Back to our face: using Layout/Align to Helplines, switch on the pulling power of the feint lines. This will make it much simpler to position a straight nose precisely. For the drawing procedure, activate the line tool (fourth symbol from the top in the toolbar, or under Tools/Line). Position the mouse cursor roughly beneath the eyes and click once with the left mouse button. Move downwards along the feint line until a simple nose is drawn, then quickly and briefly press the right mouse button. This tells KIllustrator that the drawing process is complete.



If you would like to draw the line further, in order to give the face a hooked nose for instance, press the left mouse button instead. You will then get another line, whose beginning lies exactly at the end of the first. Bear in mind that the kinked line produced this way represents a single independent object, not two separate lines.

Kiss me

Now all that's missing is the mouth. This should be a semi-circle, which is why it will be created with the aid of the Bezier curve tool. Select this tool from the toolbar (fifth symbol from the top, or from the menu Tools/Bezier). Working with this tool turns out to be somewhat more complicated than with the simpler tools previously introduced, hence these step-by-step instructions. With the Bezier tool selected, move to the point on the face at which the left corner of the mouth should begin. Click once with the left mouse button. You will get a feint line with which you can define the degree of curvature; bring this feint line with the mouse into a position where the mouse cursor is at bottom right (see Figure 7). Click again with the left mouse button and then position the mouse cursor on the vertical, continuous feint line, roughly where the mouth is to have its lowest point. Click again with the left mouse button and bring the feint line that appears into as horizontal a position as possible. After one more click with the left mouse button, move the pointer to the place where the right upper corner of the mouth should appear. Click one last time with the left mouse button and adjust the feint line that appears so that it produces a smiling mouth. To complete the drawing, press the right mouse button. If your mouth does not look all that great, you can re-edit it – as, incidentally, you can any other geometric object – with the point tool (second symbol from the top in the toolbar, or from the Tools/Point menu). To do this you must first select the object with the mouse tool. If you now turn on the point tool, all the changeable node points are displayed and can be moved with the mouse to any other position. A new bar even opens beneath the toolbar, with which you can add additional node points to the object or remove them again. You should experiment a bit with these node functions on our mouth, until the smiley displays an almost perfect smile.

Figure 6: The nose was created by a line. In the background the vertical feint line can be made out.


Figure 7: The left corner of the mouth is created with the aid of the Bezier curve tool
Figure 8: The smiley face is finished

Layers

One important functional characteristic of KIllustrator has so far not been mentioned: the layers (referred to as levels in the online help). Each graphical object – in our example the individual eyes, the nose and the mouth – is located on a separate layer. You could imagine this as a sort of transparent film, with precisely one geometric element on each sheet of film. In our smiley example the left eye is on one layer which lies over the big yellow circle, but under the layer on which the right eye is located. To become a bit more familiar with this concept of layers, select the big yellow circle with the mouse tool. Now select Arrange/To front: this places the layer on which the circle lies in the foreground. Then select Arrange/Back one until the circle is back in its original position. With the corresponding entries from the Arrange menu, or via the context menu of the right mouse button (mark the object with the mouse tool, then press the right mouse button), you can bring the layers of the corresponding objects into any sequence you desire.




Figure 9: The yellow circle has been moved forward by one layer

Info
KOffice homepage: http://koffice.kde.org
Homepage of the KDE Project: http://www.kde.org ■


Grouping

When you are making more complex drawings in KIllustrator, it can easily happen that you displace an element without meaning to. Another problem crops up if you want to modify several elements, which are part of a larger object, at the same time but relative to each other. As a solution to these problems KIllustrator offers grouping of the elements concerned: several individual objects are combined into a single large object. To combine the elements of our smiley face into such a group, all the objects must first be marked. You do this by drawing a frame with the mouse tool around all the elements involved. Place the mouse cursor at top left on the blank paper, hold down the left mouse button and draw a frame that completely surrounds all the elements of the face. As soon as you release the mouse button, all the elements of the smiley should be selected. To create the group, now select Arrange/Group from the menu. Check that KIllustrator has performed the grouping correctly by moving the face around the paper as a test: if the grouping was successful, all the elements forming part of the face move at the same time. Henceforth this group of objects counts as a single graphical element. If you want to cancel the grouping later on, select the group concerned with the mouse tool and then choose Arrange/Ungroup.

Figure 10: Example of overlay


Text

Before coming to the end of this Workshop instalment, another interesting effect should be mentioned, which plays an important role in connection with the editing of text. First, cancel the grouping of the face, as described above. The next thing we will do is align the word Smiley along the mouth. To do this, first activate the text tool (Tools/Text, or second symbol from the bottom in the toolbar), click roughly above the mouth and then type the word Smiley. When you have finished, activate the mouse tool again and click on the text with the right mouse button. From the context menu which appears, select Properties, and on the Font list of the newly opened window choose a larger font. After a click on OK, select Arrange/Text Along Path. The mouse cursor should now have changed into a thick, black arrow. Click with the point of this arrow on the mouth: KIllustrator should then align Smiley along the mouth. At this point it must be mentioned that on our test system the application of this function led to a crash of the complete application – which unfortunately shows that KIllustrator is still in development. Equally interesting is the overlay function. To get a better look at the result of this function, open a new blank drawing page. On this page, draw both a rectangle and a circle. The two objects should not intersect, and the circle should be placed to the right beneath the rectangle. Now activate the mouse tool and draw up a marking frame completely enclosing both objects. Select Extra/Blend and leave the value set by KIllustrator. After clicking on OK, KIllustrator tries to overlay the two shapes: new geometric forms are created which carry the first outline over into that of the other object (see Figure 10). This little effect ends the fourth part of our five-part Workshop on KOffice. In the last part you will get another chance to apply the vector painting program skills you have acquired in this instalment. This will involve the presentation program KPresenter. ■



Keeping it clean...

WATER TREATMENT

CLIVE DE SALIS AND ESSIE ANDERSSON

Ever since a well-known Swedish furniture store was associated with the work of the artist Carl Larsson, his home town of Falun has been world famous. Yet the town of Falun has an interesting industrial history of its own. It is the home of the world's oldest limited company, the copper mining company Stora Kopparberg AB. Copper has been mined in the area for over 600 years; the copper from Falun has typically been used as a wood preservative, and it is this copper that gives Scandinavian wooden houses their distinctive colour. The copper mines of the Middle Ages were open cast, but over the centuries mineshafts were dug, and now the area is riddled with a mixture of active and disused copper mines. Today's environmental requirements are much stricter than in the past, when old mineshafts could be used for dumping chemical and other types of waste.

The result was that by the year 2000 the town's waste water, as well as the industrial waste, contained a significant cocktail of heavy metals and other contaminants. Sweden's laws dictate that all the local authority's actions are open to the public, and so the performance of the water treatment works is automatically published on the Internet every hour. Heavy metals are difficult to handle at any time, but with the realisation that any mishap is public information within the hour, the Falun Kommun authority didn't want to take any chances: they chose the stability and trusted process control of Linux. Falun Kommun opted for the proven ABACUS4 for their process control software. ABACUS4 is the latest version of the ABACUS process control software originally developed at the end of the 1960s.

The Linux operating system is now being successfully used in industrial process plant control. In Sweden, Linux’s capabilities as both a highly stable operating system and reliable ISP software prove ideal to meet an unusual requirement.




A traditional Swedish home.

The old mine buildings are painted with copper paint – the traditional product of the copper mines.

The first industrial installation of ABACUS was commissioned in 1971 and used Data General hardware which had ferrite cores for memory blocks. Each ferrite ring had a capacity of one bit, and a large, heavy group of them put together made a memory block which added a massive 1K of memory. In addition to the memory faults we know about these days, corrosion of the ferrite rings was an extra consideration. The last ferrite core based ABACUS control system still running was finally replaced in 1998 at a paper mill in the UK, having run without fault for 27 years. During the 1970s the software was further developed and ported to DEC PDP-11 series hardware running the RSX operating system, on which it was renamed ABACUS II. In this form the system became widely used in boiler houses, paper plants, the steel industry, and chemical and pharmaceutical production, as well as other applications. At the beginning of 1990 the software was again ported – this time to DEC's VAX range of computers, from where it went on to be available on the famous DEC MicroVAX range. In 1997 the ABACUS process control software was ported over to PC hardware running the Slackware distribution of Linux, on which it has continued to run for all industrial applications.


Based upon Linux, it is now known as ABACUS4. It has been run successfully on the Red Hat distribution in the laboratory, but in industry it currently runs on Slackware 7. Details of the ABACUS4 software can be found at www.abacus4.com.

The waste water treatment process Falun Kommun chose was a High Density Sludge (HDS) process, instead of the old-fashioned method of just neutralising the acid mine drainage water by the addition of lime. The HDS process can be designed in different ways, but the main point is that the sludge is recycled in the process, which results in a very dense and easily dewatered sludge. This product is later dried on site before disposal, thereby minimising the volume of sludge to be disposed of. In the first two steps of the HDS process in Falun, the acid mine drainage water is mixed with sludge in rapid mixing tanks, to adjust the pH to 4.3 in tank one and then to around 6.0 in tank two. The second, third and fourth tanks are all aerated by membrane aerators. In the third tank lime is added to take the pH to 8.3. The fourth tank has been installed in order to ensure stable products (sludge and water) out of the process: the change in pH between tank three and tank four should be minimal, to ensure complete oxidation of ferrous iron and stability of the sludge. The quality of the contaminated acid mine drainage water in Falun varies, not only according to the season but also with the depth in the mine from which it is pumped. The main ingredients in the water are sulphur, ferrous iron and zinc. The pH of this contaminated water varies between 2.5 and 4.0, so it is normally highly acidic. Because almost all the iron exists in the ferrous form, efficient aeration is very important to get all the iron oxidised into ferric iron.



If this procedure were not carried out, the sludge and the water would not be in a stable condition when they left the treatment plant. The treatment plant in Falun achieves a 99.9% reduction in the iron and zinc content of the acid mine drainage water, and a stable sludge with 55-60% solids. The ABACUS4 process control system uses high quality standard industrial input/output (I/O) units for the digital and analogue I/O. A number of industrial standard protocols are available in ABACUS4; for Falun, Profibus communication was chosen, giving up-to-date fieldbus technology around the site. Process control decisions are then taken within the PCs running the ABACUS4 software on the Linux Slackware platform. The data, running conditions, alarms and reports are presented to the operators on PC-based operator workstations running ABACUS4 on the process plant network. The UK and Ireland firm Rowan House Limited (http://www.rowanhouse.co.uk/) was involved in the process control work: being chemical engineers, they advised and assisted MCH Konsulting in Sweden on the process control techniques to be applied to the process using ABACUS4. The control of the process had to overcome significant problems. Each step of the process has to be adjusted to respond to changes in the dissolved heavy metal content and composition but, as with most water treatment processes, the time delays are significant. A truly distributed process control system (DCS) was used and configured to maximise the reliability and availability of the system for optimum safety. The selection of a DCS also facilitated an optimum response to process changes. The use of ABACUS4 enabled the alarm reporting and operator interfaces to be distributed as well, rather than needing to rely on a central computer. A central computer for the display of information and alarm reporting would have run the risk of failing to report process alarm conditions in the unusual event that it was offline for any reason. Since the plant performance is public information every hour, the operators need to know that they can always see what is happening, even if one of the computers is offline for maintenance or software changes. The water treatment site is often unmanned. Alongside distributed alarm handling, the ABACUS4 control system makes use of the telephone network: SMS text messaging allows critical alarms to be sent as text messages to the duty operator's mobile phone whilst the site is unmanned. If the duty operator fails to reply to the ABACUS4 system, the message is also sent to other operators on the duty list. The text messages are followed by a pre-recorded voice message, transmitted by the ABACUS4 DCS by telephone until someone responds to the alarm. These backup alarm systems are essential because the performance of the plant is automatically displayed on a website by the ABACUS4 DCS every hour for public information.


The result of a failure to respond to an alarm would become well known throughout the town in a short space of time – which is an excellent incentive. Basic plant performance data is automatically posted on a website in compliance with Sweden's tough anti-secrecy laws. The performance can be seen by anyone logged onto the town's intranet; and since Falun Kommun's intranet is permanently linked to the Internet, allowing high-speed Internet access to everyone in the town, the performance of the town's waste water treatment can also be seen from anywhere in the world at: http://www.users.wineasy.se/bbab/framby-reports/. ■

Information available from the Abacus4 system.

Abacus4 control system view.




Control structures

CHECKPOINT CHARLIE MIRKO DÖLLE

After the introduction to control structures and the presentation of simple comparison options in last month's Programming Corner, this time we will be concerned with serial comparisons, loops, keyboard input and small selection menus.

In the last issue, we became familiar with the if construction and the test comparison program. This made it possible for us to make the program sequence independent of external circumstances for the first time. Depending on the situation, other commands were executed. Bash knows additional control structures, which are especially necessary for larger comparisons and for multiple call-ups of individual commands (loops).

Parameter recognition

We shall begin with a small script. Like almost every other Linux program, our script will use the parameters -h or --help to output a brief explanation of the permissible options and then stop by itself. To do this, we use an if construct, covered last month:

#!/bin/bash
if [ "$1" = "-h" -o "$1" = "--help" ]; then
  echo "call up:"
  echo " $0 [-h|--help]"
  echo "Parameter:"
  echo " -h, --help: brief explanation"
fi
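Saved under a file name of your own choosing and made executable, a test run might look like this (the name params is purely an example; $0 expands to whatever name the script was called by):

$ chmod +x params
$ ./params --help
call up:
 ./params [-h|--help]
Parameter:
 -h, --help: brief explanation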

#!/bin/bash if [ "$1" = "-h" -o "$1" = "-H" -o -z "${1#—>U -[hH][eE][lL][pP]}" ]; then echo "-h" elif [ "$1" = "-v" -o "$1" = "-V" ]; then echo "-v" elif [ "$1" = "-q" -o "$1" = "-Q" ]; then echo "-q" fi The test on –>-help in the second line needs some explanation. So as not to have to test –>-help in all variants of upper and lower case, we use the pattern recognition from part 3 of our course. With ${1#–>[hH][eE][lL][pP]} we search through the variable $1 for a string which begins with a double minus sign and contains an upper or lower case ‘h’, upper or lower case ‘e’ and so on. If there is a —>-help in $1 in any upper or lower case combination, it is removed – leaving a blank character string. This is where the test parameter -z comes into play. It supplies a true value if the following character string is empty – thus a version of –>-help has been found. Words which only begin with –>-help (for example –>-helper) fail the test, because the ending would be left over.

Simplification using case

As can be seen from the previous example, large-scale parameter tests can hardly be conducted in this manner.



There is an urgent need for simplification. The method of parameter comparison is always the same: we check in each case whether the first parameter meets a certain condition. For this kind of serial comparison there is the case construction:


#!/bin/bash
case $1 in
  -h|-H|--[hH][eE][lL][pP]) echo "-h" ;;
  -v|-V) echo "-v" ;;
  -q|-Q) echo "-q" ;;
esac

With loops it is possible to have program segments executed many times, for example, to evaluate all command line parameters one after the other. The Bash knows three kinds of loop constructs: for, while and until. In principle the three loops are interchangeable with each other: Anything which can be solved using for, can in any case also be written using while. Nevertheless you should decide which loop construct is most appropriate for which problem.

The case construct consists of the bracketing keywords case and esac (like the final fi for if, esac is simply case with its letters reversed), a character string to be tested (here $1), and the individual blocks for the respective cases. These blocks begin with the pattern, which is terminated by a closing round bracket, and end with a double semicolon – in between are the instructions to be carried out for the respective case. In our example we have three different cases, -h, -v and -q; the pattern of the first case consists of three parts, one of which must match. The patterns themselves are practically identical to those from our if construct, but considerably clearer. Apart from the square brackets, which can be used to specify permitted characters or character ranges, there are also the wildcards ? for any single character and * for any sequence of characters. This makes it possible to distinguish the various network devices from each other:

case $device in
  eth*) echo "Ethernet" ;;
  ppp*) echo "Modem" ;;
  ippp*) echo "ISDN" ;;
  lo) echo "Loopback" ;;
  *) echo "Unknown"
esac

The last pattern, *, applies to any character string – which is why this case construct would seem always to have to return "Unknown". But the cases are processed from top to bottom, and the only one to be executed is the first one that fits; all the others are ignored. So in the case of "ppp0" only "Modem" is output, not "Unknown". After dealing with the matching case, the script is continued after the esac keyword.

Loops

With loops it is possible to have program segments executed many times – for example, to evaluate all command line parameters one after the other. Bash knows three kinds of loop constructs: for, while and until. In principle the three loops are interchangeable: anything which can be solved using for can also be written using while. Nevertheless you should decide which loop construct is most appropriate for which problem.

for

The for loop is suitable for applications where a list of values is laid down and has to be processed item by item. This is practical, for example, for evaluating the command line parameters with our case construct:

#!/bin/bash
for P in "$@"; do
  case $P in
    -h|-H|--[hH][eE][lL][pP]) echo "-h" ;;
    -v|-V) echo "-v" ;;
    -q|-Q) echo "-q" ;;
    *) echo "Unauthorised parameter $P" ;;
  esac
done

"$@" provides a list of all command line parameters, which for enters in sequence into the variable P, in order then to execute the body of the loop – our case construct – for each of them. If you are already familiar with other programming languages, you may be slightly surprised at the way the for loop works (in Perl, for example, it works completely differently). Usually, a start and an end value are stated, plus the size of the increments by which the start value is to be raised; the loop is then run through until the end value is reached – which is used to read in the values 1 to 10 of a field, for instance. This is not expressly provided for in Bash, but we can remedy it by using the utility program seq from the sh-utils package (sometimes sh_utils). seq supplies us with a number sequence from a specified start value to an end value; there is the option of setting the size of the increments, and the number format can also be changed.



In order to output "Hello world" ten times, we could use the following script, where the number of the respective run-through is placed in front in square brackets:

#!/bin/bash
for i in `seq 1 10`; do
  echo "[$i] Hello world"
done

Unfortunately there is no manual page for seq, but you will find relatively exhaustive help via the --help parameter. For home use, the invocations seq start end and seq start step end are sufficient to cover most cases.
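A quick sketch of the three-argument form (assuming seq from sh-utils is installed): counting from 1 to 10 in steps of two prints only the odd numbers.

#!/bin/bash
# seq start step end: here 1 3 5 7 9, one per line
for i in `seq 1 2 10`; do
  echo "odd: $i"
done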

while

Certainly the most frequently used loop construct is while. Here the body of the loop is executed as long as the specified condition is true. One potential application is reading in a field of arbitrary length:

#!/bin/bash
i=1
while read -e -p "[$i]> "; do
  field[$i]="$REPLY"
  : $[i+=1]
done
echo ${field[*]}

The condition in this case is an invocation of read but, exactly as with the if construct, test or any other program can be used. If the return value of the program is 0, the condition is true, otherwise false. The read command is new here: with it we read input from the keyboard (more exactly, from standard input) for the first time. The input is – unless otherwise specified – stored in the variable REPLY. The parameter -e activates the ReadLine extension, so you can edit the input line as usual with the cursor keys, and the tab completion of program names also works. The second parameter, -p "[$i]> ", defines the prompt which is displayed at the beginning of the input line.

Table 1: read parameters
-a field: Allows the input of several values separated by spaces. The values are stored in the array field, incrementing from element 0 onwards
-e: Activates ReadLine support. This makes it possible, for example, to edit the input line with the cursor keys or use tab completion
-r: Deactivates the backslash-enter special treatment. Normally it is possible to continue an entry on the next line by means of a backslash at the end of a line, without the line break having any effect
-p prompt: Replaces the standard input prompt with the character string prompt, without appending a line break
name: Stores the input directly in the variable name and not, as usual, in REPLY
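As an aside, the -a parameter from Table 1 can replace the whole loop whenever all values fit on a single input line – a minimal sketch:

#!/bin/bash
# read -a stores the space-separated words of one input line
# in an array, starting at element 0
read -e -p "values> " -a field
echo "first: ${field[0]}"
echo "all: ${field[*]}"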


Back to the while example: the prompt here consists of the element number $i in square brackets, followed by a greater-than sign and a blank. Unlike echo, no line break is attached to the prompt – the cursor stays behind it. The rest of the script is quickly explained: while checks each time whether read returns true, which is exactly what happens whenever something is entered. read has put the input into the variable REPLY, whose content we store in the body of the loop as element number $i of our array field. Then i is increased, so as not to overwrite the stored values, and the loop starts again from the beginning. Only when [Ctrl+D] is pressed instead of a value does the return value of read differ from 0; the condition is then no longer met, the loop ends, and the next instruction after the loop's done is executed. In our case that is the output of all elements of our array field, which we achieve by specifying a star (*) in place of the element number.

Program simplification

The example just shown is still very long-winded and, as written, even relatively complicated. There is a much shorter and faster way:

#!/bin/bash
while read -e -p "[$[i+=1]]> " field[$i]; do
  :
done
echo ${field[*]}

The most unusual thing, at first glance, is the empty body of the loop, containing only the colon as a null function. This is necessary, since the body cannot be empty. There is no need to copy REPLY into the field element, because by specifying the variable name field[$i] as the last parameter of read, the values entered are stored immediately in the array and not in REPLY first. Incrementing the variable i by 1 has also been transferred into the condition, and in addition the initialisation with 1 has been done away with. The crux of the matter is that new variables are empty by default, but have the arithmetical value 0. The instruction $[i+=1] first increments the variable by 1 and then delivers the value – so we begin, as before, at element number 1.
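The pre-increment behaviour of $[i+=1] is easy to observe in isolation – a small sketch:

#!/bin/bash
# an unset variable counts as 0 in arithmetic expansion;
# $[i+=1] increments first, then delivers the new value
echo $[i+=1]    # prints 1
echo $[i+=1]    # prints 2
echo $i         # prints 2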

until

until incidentally does the same as while, except that the loop is executed as long as the condition is false – otherwise there is no difference. The following example will be familiar from the presentation of the while loop; the condition has simply been inverted by means of an exclamation mark – true becomes false and vice versa.



#!/bin/bash
i=1
until ! read -e -p "[$i]> "; do
  field[$i]="$REPLY"
  : $[i+=1]
done
echo ${field[*]}

In Listing 1 you will find the rough draft of a script which, apart from -h and --help, also understands the parameters -q and --quiet, as well as -v, which means the same as --verbose. -q and -v are often used either to suppress all output except for serious error messages or, respectively, to comment on all actions. The default setting is the verbose mode: the variable QuietMode is set to 1 in the second line. Even if at first sight this looks nonsensical – since 0 counts as true in the shell, setting QuietMode to 1 makes it false.
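The rest of such a script would then use the variable as a gate for its messages – a sketch following the same 0-is-true convention (the message text is just an example):

# chatter only in verbose mode, i.e. when QuietMode is not 0
if [ $QuietMode -ne 0 ]; then
  echo "copying files..."
fi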

Selection menus

The select construction offers an option which is frequently underestimated: it makes it possible to construct complex selection menus. Here is a simple example for choosing between apples, pears and plums:

#!/bin/bash
field=("apples" "pears" "plums" "end")
select fruits in ${field[*]}; do
  case $REPLY in
    ${#field[*]}) break ;;
    *) echo "$fruits" ;;
  esac
done

The content of our array is hidden behind ${field[*]} – in this case four elements, which select numbers in sequence and outputs one after another:

1) apples
2) pears
3) plums
4) end
#?


Now select queries the number of the desired action, for which read is implicitly used. The result of the selection is, as usual, available later in the variable REPLY. In addition select copies the corresponding element of our array into the variable fruits and executes the body. Once this has been processed, select shows the selection again. The rest of the procedure in the body is defined by means of case. The first pattern looks fairly unusual, but ${#field[*]} conceals only the number of elements contained in the array – which is the same as the number of the last entry.


Listing 1

#!/bin/bash
QuietMode=1
for P in "$@"; do
  case $P in
    -h|-H|--[hH][eE][lL][pP])
      echo "Command:"
      echo " $0 [-h|--help]|[-q|--quiet]|[-v|--verbose]"
      echo "Parameter:"
      echo " -h, --help: This brief explanation"
      echo " -q, --quiet: Only report serious errors"
      echo " -v, --verbose: Extensive messages"
      exit ;;
    -v|-V|--[vV][eE][rR][bB][oO][sS][eE])
      QuietMode=1 ;;
    -q|-Q|--[qQ][uU][iI][eE][tT])
      QuietMode=0 ;;
    *)
      echo "Error: unknown parameter $P"
      exit ;;
  esac
done
echo $QuietMode

This allows us to reliably recognise when the user has selected the last entry, without having to know its actual number or name – thus the selection can be expanded at will, as long as the last entry stands for "exit menu". In order to leave select, you must either press [Ctrl+D] or, as shown in the example, call up break (or return, if the menu is inside a function). To this extent select also differs from all other loops: it has neither a condition nor a list after whose processing the loop ends. Unlike read, you cannot give select the prompt to be output as a parameter – the standard prompt from the variable PS3 is used, which you can adapt yourself:

#!/bin/bash
field=("apples" "pears" "plums" "end")
PS3="Which fruits? > "
select fruits in ${field[*]}; do
  case $REPLY in
    ${#field[*]}) break ;;
    *) echo "$fruits" ;;
  esac
done

That ends the fifth part of Programming Corner. Part 6 will be on the structuring and modularisation of scripts by means of functions and modules. Using a small management program we will then recap all the previous lessons in part 7 and show the potential applications of the individual commands and constructs. ■



BOOK REVIEW

LPI LINUX CERTIFICATION IN A NUTSHELL ALISON DAVIES

Many people these days are looking for certification in Linux to match those available for Microsoft or Unix systems. To meet this need, the Linux Professional Institute (LPI) has introduced a series of exams leading to various levels of certification; the LPI is widely regarded as the leader in independent certification and examination. 'LPI Linux Certification in a Nutshell' prepares candidates for both of the level 1 exams. It is aimed at junior to mid-level Linux administrators, but also at any new Linux user who needs a detailed introduction to Linux as it is used in real life rather than in theory. The book is split into two parts, Part 1 covering exam 101 and Part 2 covering exam 102. Each part contains an exam overview, a study guide, topic sections, review sections, a practice test and a highlighter's index. Part 1 covers GNU and Unix commands; devices and file systems; boot, initialisation, shutdown and run levels; documentation; and administration tasks.

Part 2 covers hardware and architecture; installation and package management; the kernel; text editing and printing; shells; the X Window System; networking fundamentals; networking services; and security. The tutorials take the reader through the various objectives of the exams, and the practice tests allow you to see how you are progressing (answers are given). The highlighter's index is a useful quick reference and revision guide. The book will be very useful for anyone taking the exams, and also to anyone who wants to learn more about Linux concepts and functions, whether they are trying for certification or not.

Info
Published by O'Reilly
Priced at $39.95
Author Jeffrey Dean
http://www.oreilly.com ■

BEGINNING GTK+/GNOME PROGRAMMING ALISON DAVIES

GTK+ and Gnome allow the development of professional graphical interfaces in Linux. The book takes the reader through the basics of programming in GTK+ and Gnome, covering the GIMP toolkit, gIDE, Glade, Glib and GDK. It is aimed at the Linux beginner as well as at the more experienced programmer wanting to develop Linux applications with graphical interfaces. The book gives details of where to download any packages needed to run the examples and explains concepts in a clear and concise manner; some basic knowledge of C programming is assumed. It is written in a very personal manner, almost giving you a tutor leaning over your shoulder to guide you through tricky programming moments.

Once you've worked your way through all the exercises you will have no excuse not to use graphical interfaces at every opportunity. One or two errors have crept into the commands, but in case of difficulty there is a website where any problems picked up have been corrected.

Info
Published by Wrox
Priced at £28.00
Author Peter Wright
http://www.wrox.com ■



K-splitter

SMOOTH OPERATOR STEFANIE TEUFEL

Who says there is no place for gossip and scandal in a Linux magazine? K-splitter broadcasts news from the K-World and noses around here and there behind the scenes.

A whole new typeface

Fonts and Linux – never a marriage made in heaven. With the new XFree86 version, though, that could change, because with this properly installed you can finally get true anti-aliasing under KDE, too. There are a few conditions: XFree86 in a version greater than or equal to 4.0.2, and also the package Freetype2, must both be present on your computer. Unfortunately, that's not all, because the XFree driver for your video card must support the Render extension. You can winkle out of your system to what extent your driver does so with:

stefanie@diabolo[~]> xdpyinfo | grep RENDER

anti-aliasing: Aliasing means the staircase effect at the edges of graphics, especially of text or lines, caused by the fact that with pixels it is really only possible to show straight lines exactly if they are vertical or horizontal. The countermeasure – the insertion of shading pixels into the stairs – is called antialiasing. ■

If, after entering this line, you are not confronted with a shining RENDER, for the time being you will not see any smooth fonts under KDE. Finally, make sure that /usr/X11R6/lib/libXft.so.1.0 links to Freetype. This is easy to check with ldd libXft.so.1.0. If the output does not include anything from the Freetype library, you have unfortunately picked up an XFree package which does not include Xft ("X FreeType") support. But fear not, because this support is at least included in the more recent Mandrake and SuSE packages. Now all you need is qt in version 2.3.0 with Xft support compiled in to complete your anti-aliasing. If you cannot find this, you can also compile the library yourself from the sources, which you will find on the Trolls' FTP server at ftp://ftp.trolltech.com/qt/source/. But please don't forget to add -xft to the ./configure command. Everything present on your computer so far? Then all that stands between you and the new font miracle is a few changes in diverse configuration files. One more little tip before you get cracking: please do not delete any entries or files whatsoever, but merely comment out entries and rename old files.


If there are problems with the new configuration, you will then have the option of restoring your computer to its original condition in seconds. The first thing to do is to get rid of all font servers which may be running on your system. To do this, comment out everything in the FontPath entries of your XF86Config file which bears any similarity to unix/:7100. While you are at it, please also place a # at the start of the lines /usr/X11R6/lib/X11/fonts/truetype and /usr/X11R6/X11/fonts/Type1, if they exist. Insert the following in the section Modules:

Load "type1"
Load "freetype"

If you have bad luck – as I did with my Red Hat 7.0 – the font path will now contain the font server we have just commented out and absolutely nothing else. In this case you will have to enter the paths to your font directories by hand. This should look something like this:

Section "Files"
  RgbPath "/usr/X11R6/lib/X11/rgb"
  # FontPath "unix/:7100"
  FontPath "/usr/X11R6/lib/X11/fonts/100dpi:unscaled"
  FontPath "/usr/X11R6/lib/X11/fonts/75dpi:unscaled"
  # FontPath "/usr/X11R6/lib/X11/fonts/truetype"
  # FontPath "/usr/X11R6/lib/X11/fonts/Type1"
  FontPath "/usr/X11R6/lib/X11/fonts/100dpi"
  FontPath "/usr/X11R6/lib/X11/fonts/75dpi"
  FontPath "/usr/X11R6/lib/X11/fonts/misc"
  FontPath "/usr/X11R6/lib/X11/fonts/local"
  FontPath "/usr/X11R6/lib/X11/fonts/misc:unscaled"
  FontPath "/usr/X11R6/lib/X11/fonts/Speedo"
  ModulePath "/usr/X11R6/lib/modules"
EndSection



Figure 1: No more edges or corners

In /usr/X11R6/lib/X11/XftConfig you must now enter the two font paths which you have just commented out in the XFree configuration file – even if in a somewhat different form:

dir "/usr/X11R6/lib/X11/fonts/Type1"
dir "/usr/X11R6/lib/X11/fonts/truetype"

If you don't yet possess any TrueType fonts, the time has now come to get your hands on some. A neat package can be found at http://keithp.com/~keithp/fonts/truetype.tar.gz, which you should simply unpack into the directory /usr/X11R6/lib/X11/fonts. Now all you need to do is set the variable

export QT_XFT=true

in a file such as /etc/profile or /etc/profile.local. That's it.
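Whether the variable has actually arrived in your environment is quickly checked from any new login shell – a trivial sketch:

echo $QT_XFT    # should output: true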

For owners of SuSE 7.1 the following method has been tried and proven: install the packages qtexperimental and ttmkfdir if these are not already on your computer. Then copy all the TrueType fonts which you would like to use later into /usr/X11R6/lib/X11/fonts/truetype. Anyone who has not to date monkeyed around with /etc/X11/XF86Config and /usr/X11R6/lib/X11/XftConfig is as good as finished, because then these are correctly configured. Nevertheless, make sure as a precaution that XftConfig contains both of the dir lines mentioned above and that the path to the font server is commented out in /etc/X11/XF86Config. After that, change to the directory /usr/X11R6/lib/X11/fonts/truetype and enter the following there:

# ttmkfdir -o fonts.dir
# SuSEconfig -module fonts

Now anchor the entry export QT_XFT=true in the file /etc/profile.local, and the next time you start KDE everything will be smoothed out, as you can see in Figure 1.

Thematic

Beautifying your KDE 2.1 desktop with your own themes should soon no longer be a problem, because at http://www.ibm.com/developerworks/ a new online tutorial explains to design-mad hobbyists what KDE themes are all about. The tutorial is free to use, although a brief registration is necessary. For those who would rather enjoy the tutorial at leisure offline: no problem, there is also a download version. ■

Figure 2: Construction of themes made easy




K-tools

FURNACE STEFANIE TEUFEL

Koncd is the KDE Tool of the Month. With this cdrecord front-end, you can burn CDs easily.

"I've had my fingers burned before" – this saying may have gone through your head more than once if burning data or audio CDs under Linux has been giving you a hard time. But, like so many things, there has been progress on the burner front, too. With Koncd the perfect home-baked CD is just a few mouse clicks away. There really is nothing more to it than an easy-to-use graphical user interface for the programs cdrecord and mkisofs, with which command line fetishists have already been enjoying going for the burn under Linux for some time now. The latest version of the program can be found either on our CD or at http://www.koncd.de/.

Stoke the fire

[left] Figure 1: Cleared up
[right] Figure 2: The burner and writer should match

After installing your new burner software, you will find an entry in the K menu, Applications/KOnCD, with which you will be able to start the program with ease in future. But before you actually shoot off, you should first check a couple of things which could stand between you and your home-burnt CDs. If you would like to use Koncd as a normal user, you will in all probability founder on the lack of execution rights for cdrecord, because this program can usually only be executed by root. So change the permissions – as superuser – as follows:

chown root /usr/bin/cdrecord
chmod 4711 /usr/bin/cdrecord

Also, for safety's sake, test whether cdrecord recognises your burner and the CD-ROM drive.
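A quick way to check that the permission change has taken effect (file size and date are omitted here):

$ ls -l /usr/bin/cdrecord
-rws--x--x   1 root   root   ... /usr/bin/cdrecord

The s in the owner permissions is the setuid bit set by chmod 4711.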


To test drive recognition, enter the following in the console:

cdrecord -scanbus

cdrecord should then reward you with an output as in Box 1. Should you have an ATAPI burner rather than a SCSI device, you will need to select the ATAPI SCSI emulation in the kernel. In the most common distributions this is present as a module, which you can simply load with the command

modprobe -k ide-scsi

That's about it. Now take a deep breath, and start the program.

Box 1: Cdrecord scans the bus
Cdrecord 1.9 (i686-pc-linux-gnu) Copyright (C) 1995-2000 Jörg Schilling
Linux sg driver version: 2.1.38
Using libscg version 'schily-0.1'
scsibus0:
  0,0,0  0) *
  0,1,0  1) *
  0,2,0  2) 'TEAC' 'CD-ROM CD-532SU' '1.0A' Removable CD-ROM
  0,3,0  3) 'YAMAHA' 'CRW4260' '1.0j' Removable CD-ROM
  0,4,0  4) 'EXABYTE' 'EXB-8200' '2600' Removable Tape
  0,5,0  5) *
  0,6,0  6) *
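The bus,target,lun triple at the start of each line is exactly what cdrecord expects in its dev parameter. So, to address the Yamaha burner from Box 1 directly on the command line, something like the following would do (a sketch; device numbers, speed and image name must match your own system):

cdrecord -v speed=4 dev=0,3,0 -data image.iso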

Burn, baby, burn

The main window (Figure 1) looks – to put it politely – a bit on the lean side, but as with everything in life, with Koncd it's the inner values that count. And it certainly has them, because with Koncd you can not only copy CDs or burn audio CDs, but also master CDs, produce multi-session and bootable CDs, burn on-the-fly or erase CD-RWs. First click on the Settings button to check whether the burner (Writer, Figure 2) and the CD drive (Reader) have been set correctly. If not, adjust the devices using the pulldown menu. After that, it's up to you what you want to do.



For now we shall make your mind up for you and decide that you want to copy a CD.

The double whammy

To do this, click in the start window on the Copy CD button, after which you will immediately be confronted with a window as in Figure 3. A whole lot of settings – but what do they mean? Before you plunge with me into the options jungle, it's best to start with the simple things in life, er, programs. The control button Erase CD-RW only concerns you if you use rewriteable CDs. With None that's exactly what happens: the CD is not erased. The setting All erases the complete CD, which – depending on the speed of the burner – can take one or two minutes. With Fast you delete only the TOC ("Table Of Contents") from the CD, Track erases the first track on the CD, and Leave open does not close the last session. In the pulldown menu Speed you can adjust the speed of the burner. One little tip: especially when burning audio CDs, you should not squeeze the maximum out of your burner. The faster you burn, the less precise the track becomes, and in the worst case it can happen that instead of music you hear nothing but crackling. But now to the options: if you select Dummy Mode, the burn procedure is not really executed. With ISO-Size Koncd uses the size details of the standard CD; one click on Ignore medium size and Koncd won't even think about them – which can be quite useful when overburning. Eject CD after write automatically ejects your brand-new CD after the burn procedure. With No fixating no table of contents is created – warning: unfixated multi-session CDs are not readable in some CD-ROM drives. Finally, with Force Mode Koncd cheerfully carries on burning even if errors crop up during the process. Now all you need to do is click on the Start button to start the burn procedure and watch Koncd go to work. The first status message shows the progress of the burn procedure, while the second keeps tabs on the buffer memory of the burner. If this falls to 0 per cent, the burn procedure is interrupted. If so, try again at a lower burn rate – it's a good thing there's the Dummy Mode option...



Figure 3: Copy your data!

The sorry remnants

If you want to clear data from your hard drive to CD (repeat after me: "backups are good!"), you have to proceed slightly differently. In this case, select the Master CD button from the start window, and prepare yourself for even more options (Figure 4). You have already met one or two of them, so at this point we will just introduce the newcomers. A click on the button next to the Source-Dir field opens the Koncd file manager, in which you can assemble the data you want to burn. The file manager is split into two windows: on the left you will find the local directory structure, and on the right the window belonging to the ISO image directory – that is, all the directories and files which are to be burnt onto the CD later. The assembly is simple: create the directories you want, then in the left window mark the files or directories to be saved and in the right window the directory into which they are to be burnt. One click on Add and your selection is added. A final click on OK brings you back to the main window. The ISO image you have created should now be in the input field Image-File. If you can do without bootability on your CD, the Bootable CD field need concern you no further; otherwise specify here the boot image file which is to be burnt onto the start of the CD in order to make it startable. What's more important for most people is the field Image Type: if you want to read the CD under both Linux and Windows 9x or NT, choose the type Rock-Ridge+Win9x/NT (Figure 5). The options on the right side under CD Identification are more of a cosmetic feature. In the field Volume-ID you can give your new creation a name; under Windows this appears in the Explorer next to the respective drive letter. Publisher and Preparer would be suitable spots for your name – since both these entries are stored in the table of contents of the CD, this can be very sensible when it comes to copyright. In the field App.-ID enter, for example, the date when you burnt the CD. The items in the Options area are much more important. The options Create CD-Image and Write CD must be selected in order to burn the CD; if you deselect Write CD, only an image file will be created, which you can burn later or with another program. A click on Bootable CD helps you with a bootable CD, and Multisession makes Koncd burn the CD in multi-session mode, meaning you can add more data to the CD later on. The item Leave image leaves the created image file on the hard drive instead of deleting it after burning – it makes sense to put up with this disk space guzzler if you want to burn several CDs with the same content. But now, at last, it's high time to light a fire under your burner. ■

Figure 5: The best of both worlds

Figure 4: When mastering a CD there are even more options to watch



GNOME News

GNOMOGRAM BJÖRN GANSLANDT

GNOME, and GTK as its basis, have been attracting more and more followers in recent years. There are now programs for almost every task, and new ones are being added daily. Each month in the Gnomogram column we present the pearls among the GNOME tools and report on the latest GNOME rumours. This month, we cover GUADEC 2001, Eazel Reef, Progeny 1.0, Etherape, Gnoetry and adapting Sawfish.

Figure 1: GUADEC 2001 (photo: http://canvas.gnome.org/~gman/guadec/)

GUADEC 2001

GUADEC (the GNOME User And Developer European Conference) took place in Copenhagen this year and gave GNOME developers the opportunity to discuss the future of GNOME and to sign posters and each other. Since GNOME 1.4 was completed shortly before GUADEC, one of the main points of discussion was GNOME 2.0. And it was not only GNOME followers who got a word in – several KDE developers were also present, with whom better interoperability between GNOME and KDE was being worked on. There were even strenuous efforts being made to replace GNOME's antiquated sound daemon ESD with KDE's aRts. It remains to be seen whether this solution will ever become a reality, since according to some GNOME developers aRts takes over too many tasks which a multimedia framework like GStreamer ought to handle. Presented for the first time at GUADEC was DirectFB, which allows GTK applications direct access to the framebuffer, thus an abstraction of the graphics hardware. DirectFB also offers features such as window management and an alpha channel for transparent windows.


To make it easier for new developers to get on board GNOME, it was decided to expand the existing technical documentation considerably. Of course, working with GNOME is also to be made easier for users, especially those who are disabled. Although there is still a great deal to be done in this direction, it was already possible to present features such as speech output at GUADEC. And the development of a GNOME Office suite, for which plans have long existed, was finally decided upon: under the name GNOME Office, several existing programs will be combined and harmonised with each other by the launch of GNOME 2.0. Images of GUADEC and the associated parties can be found at the second site listed below; also, by the time this issue comes out, all the lectures should be available as MPEG-2 at the third site below.

Eazel Reef

Since the technology currently used by Eazel for services over the Internet is very restricted, a base has been created under the name Reef which is considerably more powerful, at least on paper. Via Reef the user receives Service View Bundles containing script code and other data such as images. Python will be deployed in the first instance as the script language, but in the longer term other languages will also be supported. For communication between the local script and the server, both SOAP – which also forms the basis for Microsoft's .NET – and XML-RPC are being discussed.

Progeny 1.0

Progeny is a commercial distribution developed on the basis of Debian Woody, in which the Debian founder Ian Murdock plays an important role. The long-term objective of the development, apart from the provision of services, is the simple management of Linux networks. But Progeny also offers a few improvements for GNOME users right now. Instead of the normal Debian front-end for Debconf, Progeny uses so-called configlets, which can be written in Python. These configlets are partly integrated into the GNOME control centre and offer features similar to Ximian's set-up tools. Anyone who has already installed Debian can simply upgrade to Progeny 1.0 via apt-get – otherwise ISO images of the distribution can be found at archive.progeny.com/progeny/images/.

Etherape

Etherape, which is based on Etherman, illustrates the network traffic between your own computer and the local network and/or the Internet. To do so, Etherape represents each computer by a node and draws connections between the individual nodes whose size corresponds to the volume of data. The colour of a connection shows the protocol being used, and you can define which protocol levels Etherape should concentrate on. As data sources, apart from Ethernet, PPP and FDDI interfaces, the output from tcpdump can also be used; this makes it possible to re-display network traffic which has been recorded earlier as often as you like. Since only the connections which lead to your own computer can be analysed via a PPP or SLIP interface, Etherape offers – with the -m ip option, or via the interape command – the possibility of adapting the display and positioning your own computer at the centre of the illustration. There are also modes for Ethernet, FDDI and TCP, the last being a type of illustration in which the network traffic is shown from port to port.
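Replaying recorded traffic could then look something like the following sketch; the capture file name is arbitrary, and the -r replay option is an assumption that should be checked against your Etherape version's man page:

# record some traffic as root
tcpdump -i eth0 -w trace.pcap
# later, feed the recorded traffic to Etherape (replay option assumed; see man etherape)
etherape -r trace.pcap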

[above] Figure 2: Etherape shows where data is really coming from

[left] Figure 3: Gnoetry creating a sonnet

The author Björn Ganslandt is a student. When he is not involved in trying out new programs he reads books or plays the saxophone.

Gnoetry

As the name suggests, Gnoetry creates poetry, largely unassisted. To do so the program analyses existing texts statistically and then generates a text with similar characteristics. Gnoetry masters a wide variety of forms of poetry, from the rhyme schemes of Western poets to the metric patterns of Japanese verse. Unfortunately the rhymes are quite often flawed, and it does happen that a syllable gets overlooked. Since the objective of the project is a joint production by man and machine, the lines of the poem can be regenerated as often as you like, until the poem is perfect. Gnoetry comes with only English texts as sources, since languages vary too much for them to be interchanged without a lot of effort. Nor are any contemporary texts included, for copyright reasons. But with the aid of a 5MB bonus pack at least a large number of classics can be added.

Adapting Sawfish

One of the great advantages of Sawfish is that this window manager can be expanded by means of scripts in the Lisp dialect rep. To do this you can make use of the modules in /usr/share/sawfish/VERSION/lisp/ by loading them from the file ~/.sawfishrc with the command (require 'module).


In this file there should also be the line (require 'sawmill-defaults), which among other things adds the GNOME adaptations. New modules can be found at sites such as items nine and ten in the list below. They must first be compiled with the command sawfish --batch compiler -f compile-batch module.jl before they are integrated. Sometimes there are also code snippets which must be copied directly into ~/.sawfishrc. If you want to expand Sawfish yourself you should take a look at sawmill.sourceforge.net/prog-manual.html, where all the relevant functions and variables are explained. A combined sketch of both steps follows below. ■
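Put together, taking a new module into service might look like this sketch, where module.jl and 'module stand in for whatever file you have actually downloaded:

# byte-compile the new module (command as given above)
sawfish --batch compiler -f compile-batch module.jl
# make sure the defaults and the new module are loaded at startup
cat >> ~/.sawfishrc <<'EOF'
(require 'sawmill-defaults)
(require 'module)
EOF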

URLs

guadec.gnome.org
gnome.wlug.westbo.se/guadec/
gnome.org
mail.gnome.org/archives/gnome-hackers/2001-April/msg00002.html
progeny.com
archive.progeny.com/progeny/images/
etherape.sourceforge.net
www.beardofbees.com/gnoetry.html
www.sics.se/~lofgren/sawmill/
adraken.themes.org/map.ph
sawmill.sourceforge.net/prog-manual.html ■




The Answer Girl

ALL IN THE TRANSLATION PATRICIA JUNG

The world of everyday computing, even under Linux, is often good for surprises. Time and again things don’t work, or not as they are supposed to. The Linux Magazine’s Answer Girl demonstrates how to deal elegantly with such little problems.

Read rights: For the content of a file to be made accessible to the eyes of a user with the aid of a pager such as less, or an editor, it must carry, from the point of view of this user, the r ("read") flag. This can be set with the command chmod for the owner of the file (chmod u+r filename), the owner group (g+r) and all others (o+r). In the case of directories the read right allows the content of the directory to be displayed with ls. Other rights include write (w) and execution rights (x). These can be shown using ls -l ("long listing"). ■

Little graphical helpers such as Qtrans (presented in Linux Magazine Issue 8, May 2001, on p96) also offer offline help, but sadly the dictionary formats used there make no allowance for simple browsing with less and co. on the command line. For the DICT protocol there is also the command line tool dict, although DICT, despite its open format, has one major drawback: without the dictd server nothing whatsoever will happen. All in all, not exactly ideal for users who like to browse their dictionaries, or who would like to continue using vocabulary lists created by the sweat of their own brow. Pure ASCII files are unbeaten, so long as we stick to the Latin alphabet: browsing through them with less, you can use the less command /searchterm to look specifically for certain terms.

Wanted: ASCII glossaries

You can find collections of ASCII dictionaries on the Internet. Anyone hunting for collected English-German glossaries, for instance, will strike lucky at http://www.wh9.tu-dresden.de/~heinrich/dict/dict_leo_ftp/leo_ftp/.
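One possible way of fetching the whole collection in one go is a recursive wget; the options merely stop wget from wandering up the directory tree (-np) and from recreating the server's directory structure (-nd). The file names on the server may of course change:

mkdir -p /usr/dict/eng_deu
cd /usr/dict/eng_deu
wget -r -np -nd http://www.wh9.tu-dresden.de/~heinrich/dict/dict_leo_ftp/leo_ftp/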


Once downloaded and copied into a joint directory (anyone with root rights might create /usr/dict/eng_deu), the browsing can commence (assuming you have read rights):

[trish@lillegroenn ~]$ cd /usr/dict/eng_deu
[trish@lillegroenn eng_deu]$ less *

When it comes to the letter z, less asks for leave to speak in the last line:

(END) - Next: EXERCISE.VOC

How on earth can we get to the next file, EXERCISE.VOC? Pressing h brings up a help page, from which we can read:

CHANGING FILES
[...]
:n        Examine the (N-th) next file from the command line.
:p        Examine the (N-th) previous file from the command line.

The less command :n thus brings us to the next file, while with :p we can jump back a file at a time. Unfortunately, forward searches with /searchterm and backward searches with ?searchterm are always limited to the currently displayed file. But here again h (or the man page) comes to our assistance:

SEARCHING
[...]
Search patterns may be modified by one or more of:
[...]
^E or *   Search multiple files (pass thru END OF FILE).

To try it out we close the help mode with q, go back with :x to the first file and, once there, jump with 1G ("Goto line 1") to the first line. If we now enter /*yesterday instead of /yesterday and jump with n to the next occurrence of yesterday, the end of a file is no longer the end of the search: all the files stated on the command line are searched through. After the asterisk has been entered, less reports EOF-ignore in the status line to show that, for this search, it will ignore the end of file ("End Of File").

Not browsing, but searching

Browsing was an important argument in favour of ASCII vocabulary lists, but we don't want to do without a targeted search either. For this purpose, grep is our friend:

[trish@lillegroenn eng_deu]$ grep yesterday *
BOOK.VOC:yesterday gestern
EXERCISE.VOC:gestern - yesterday
[...]
eng2ger.voc:gestern -- yesterday
[...]

However delighted we may otherwise be that grep tells us where it found a match, for our reference purposes we are not exactly dying to know in which file that was. Luckily, man grep declares

-h, --no-filename
     Suppress the prefixing of filenames on output when multiple files are searched.

that the mention of the filename can be turned off with the flag -h:

[trish@lillegroenn eng_deu]$ grep -h yesterday *
yesterday gestern
gestern - yesterday
[...]
gestern -- yesterday
[...]

But this brings the real disadvantage even more to the fore: the vocabulary is distributed across several, sometimes thematic, ASCII files with the filename endings .voc or .VOC, and the various files use different conventions to separate phrase and translation from each other. In order to be able to filter out duplicates, only one thing will do: we must tailor all the files to a single convention.

Egalitarianism

eng2ger.voc separates the German vocabulary from its respective English translation with two hyphens, with a space before and after:

erst gestern -- only yesterday

Since this is by far the largest file, it is advisable to transfer its convention to the other files. In the case of EXERCISE.VOC this is not so hard: this file separates the columns with a single hyphen (-) between spaces, which we quickly replace with sed:

[trish@lillegroenn eng_deu]$ sed -e "s/ - / -- /" EXERCISE.VOC > EXERCISE.VOC_

The sed command s quickly and simply substitutes the first occurrence of SpaceMinusSpace in each line with SpaceMinusMinusSpace. We receive the result of this command, applied to EXERCISE.VOC, on the standard output. But since we would rather see it in a file, we use > to divert the output into the file EXERCISE.VOC_. After we have checked that the new file looks reasonable, a

[trish@lillegroenn eng_deu]$ mv EXERCISE.VOC_ EXERCISE.VOC

is sufficient to overwrite the old file with the new one.

The file BOOK.VOC imposes higher demands. Here a simple space serves as the dividing symbol:

yesterday gestern

So that there can be no confusion with spaces between the words of phrases, these are marked by an underscore, which fortunately does not occur as part of a word:

yearn sich_sehnen



So here we have to replace twice: the first space in each line with SpaceMinusMinusSpace (s/ / -- /) and, globally, every occurrence of _ with a space (s/_/ /g). Combined, this looks like so:

[trish@lillegroenn eng_deu]$ sed -e "s/ / -- /" -e "s/_/ /g" BOOK.VOC > BOOK.VOC_

Compare and contrast

Before we overwrite BOOK.VOC with BOOK.VOC_, we would like to check the new file, and thus compare it with the original. diff is not suitable for this, as it outputs all lines which differ – and that's all of them... What we need is a word-based diff: wdiff. If this does not come with your distribution, it is available from http://rpmfind.net/linux/rpm2html/search.php?query=wdiff or http://packages.debian.org/stable/text/wdiff.html.

[trish@lillegroenn eng_deu]$ wdiff --help
Usage: wdiff [OPTION]... FILE1 FILE2
[...]
-3, --no-common    inhibit output of common words

With the option -3 you can thus stop wdiff outputting words which have stayed the same. If we send the entire output through less again, we prevent anything slipping past us when we look through it:

[trish@lillegroenn eng_deu]$ wdiff -3 BOOK.VOC BOOK.VOC_ | less
[...]
=========================================
{+--+}
=========================================
[-you can-] {+--you can+}
[...]

wdiff's output does, admittedly, take some getting used to: the = line functions merely as a dividing line. In [- -] there are strings from BOOK.VOC which have been replaced in BOOK.VOC_ by the character strings in the {+ +} brackets. The {+--+} means that in BOOK.VOC_ simply two minus symbols have been added – spaces are easy for the word-based diff to ignore. The output is more readable in the so-called less mode, which does not really have very much to do with less. But nevertheless,

[trish@lillegroenn eng_deu]$ wdiff -3l BOOK.VOC BOOK.VOC_ | less
[...]
=========================================
--
=========================================
you_can
--you can
[...]

dispenses with the unwanted bracketing and thus makes the output easier to read.

But we have no desire to go through the entire less output, and so we ponder the following: if we have done everything right, wdiff -3 will throw out exactly as many -- lines as BOOK.VOC (and BOOK.VOC_) has lines (lines: -l):

[trish@lillegroenn eng_deu]$ wc -l BOOK.VOC BOOK.VOC_
  29018 BOOK.VOC
  29018 BOOK.VOC_
  58036 total

If we filter all the distracting dividing lines out of the wdiff output, we should therefore end up with 29018 lines again (grep -v seeks out all the lines without ==):

[trish@lillegroenn eng_deu]$ wdiff -3l BOOK.VOC BOOK.VOC_ | grep -v "==" | wc -l
29023

So that did not go quite as planned – where do the five extra lines come from? Clever as we are, we will simply display all the lines which contain no double minus:

[trish@lillegroenn eng_deu]$ wdiff -3l BOOK.VOC BOOK.VOC_ | grep -v "==" | grep -v "--" | wc -l
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
0

But we've certainly done something wrong here... Of course – an argument beginning with a minus is taken to be an option, and the double quotes are no help here: they hide nothing from grep, which sees the naked -- and misreads it. Luckily we remember the old trick of telling a command with a preceding -- that no further options follow:

[trish@lillegroenn eng_deu]$ man bash
[...]
OPTIONS
[...]
--   A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments. An argument of - is equivalent to --.
[...]

With a

[trish@lillegroenn eng_deu]$ wdiff -3l BOOK.VOC BOOK.VOC_ | grep -v "==" | grep -v -- "--" | wc -l
5

we thus get to the missing five lines. But where did they come from?

[trish@lillegroenn eng_deu]$ wdiff -3l BOOK.VOC BOOK.VOC_ | grep -v "==" | grep -v -- "--"



arrow_keys
sensing_mark

Three empty lines, which wdiff has somehow included – but what is going on with arrow_keys and sensing_mark? The same command without the l option for wdiff provides more information, and

[trish@lillegroenn eng_deu]$ wdiff -3 BOOK.VOC BOOK.VOC_ | less

lets us track down the corresponding point with the less command /arrow_keys. Look at this:

[-arrow_keys-] {+arrow keys --+}

The fault (for the empty lines, too) clearly lies with wdiff. With all this toing and froing we had almost forgotten why we dragged out wdiff in the first place: we wanted to check whether everything had gone smoothly in the lines where we replaced underscores. Here we prefer wdiff without the l option, because then we can exclude all lines in which a {+--+} occurs:

[trish@lillegroenn eng_deu]$ wdiff -3 BOOK.VOC BOOK.VOC_ | grep -v "==" | grep -v "{+--+}" | less

Everything in order? Then we simply overwrite the old BOOK.VOC with the converted content of BOOK.VOC_:

[trish@lillegroenn eng_deu]$ mv BOOK.VOC_ BOOK.VOC

Grep and paste

As if we hadn't already gone to enough trouble, technic.voc presents us with a disproportionately difficult task. Here the original and the translation each stand on their own line, and each pair is separated from the rest of the vocabulary by an empty line:

Ab-; Abfall
waste

abfuehren
discharge

[...]

With sed alone nothing will come of this, because here we must replace line breaks with " -- " and additionally eliminate the empty lines; it is also difficult to construct a halfway comprehensible Perl one-liner for the job. But luckily the file is so regularly constructed that – once we have removed the empty lines – an odd line and the following even line always belong together.

We can get rid of the empty lines by using grep to seek out all those lines in which at least one letter a-z and/or A-Z occurs:

[trish@lillegroenn eng_deu]$ grep [a-zA-Z] technic.voc

Now it gets a bit more difficult. But then we remember the cut command, with which columns can be extracted from text files. Where there's a cut, there must also be a paste, which combines several columns into one file – and in fact we find it with man paste. With -d we can specify a column delimiter; unfortunately only a single character, but we can always replace that later with sed. All that matters is that the delimiter does not occur in technic.voc. How about #? Let's count:

[trish@lillegroenn eng_deu]$ grep -c "#" technic.voc
0

The hash symbol ("#") occurs precisely 0 times in this dictionary file and is therefore ideally suited as a temporary column delimiter for paste.

Pipe: written on the command line as |, takes the standard output of the command standing to its left and feeds it to the command on its right. ■

The rest is perfectly simple: as arguments, paste wants just the files which serve as the first and the additional column(s). Here we have no files at all, but the manpage tells us that paste is also satisfied with the standard input (from the pipe behind grep) if we insert a - instead of a filename. In fact the standard input STDIN suits us very well; it has the nice property that a line disappears from STDIN as soon as it has been read once. If, in an admittedly dastardly move, we give paste a - twice in place of a filename, we obtain precisely the effect we want: into the first column go the odd lines, into the second the even ones:

[trish@lillegroenn eng_deu]$ grep [a-zA-Z] technic.voc | paste -d "#" - -
Ab-; Abfall#waste
abfuehren#discharge
[...]

Removing the hash symbol from this is one of our easiest exercises, and we immediately divert the result into the file technic.voc_:

[trish@lillegroenn eng_deu]$ grep [a-zA-Z] technic.voc | paste -d "#" - - | sed -e "s/#/ -- /" > technic.voc_

The result, technic.voc_ ...

Ab-; Abfall -- waste
abfuehren -- discharge
[...]

... can thus simply be renamed technic.voc. This means we have a sufficient selection of dictionary files (BOOK.VOC, EXERCISE.VOC, eng2ger.voc and technic.voc) in place – I will leave converting the rest to your inventive powers – and can finally turn to a small script which takes over the translation of words entered on the command line.

Look me up

Like (almost) every shell script, ours begins by specifying which shell we are using – naturally the one with which we are most familiar, and that will usually be the Linux standard shell bash:

#!/bin/bash -vx

Turn around once

Of the four vocabulary files used here, BOOK.VOC displays one major difference from the others: the English term is on the left, the German match on the right. Since the wb script from Listing 1 does not recognise that, for example, gestern -- yesterday from eng2ger.voc and yesterday -- gestern from BOOK.VOC are duplicates for our purposes, it is presumably simplest just to swap the columns in BOOK.VOC. Like all the text-modification exercises covered in this Answer Girl, there are several ways to achieve this goal. A few of them are listed at this point by way of example.

Cut and paste

With cut, columns can be extracted from a text file and, with paste, added back together again – in reverse order, too. We explicitly specify the column delimiter with the option -d ("delimiter"). Unfortunately this can only be one character, not a character string, and that makes the whole thing somewhat fiddly:

[trish@lillegroenn eng_deu]$ sed -e "s/ -- /%/" BOOK.VOC | cut -d "%" -f 1 > /tmp/BOOK.VOC.1
[trish@lillegroenn eng_deu]$ sed -e "s/ -- /%/" BOOK.VOC | cut -d "%" -f 2 > /tmp/BOOK.VOC.2
[trish@lillegroenn eng_deu]$ paste -d "%" /tmp/BOOK.VOC.2 /tmp/BOOK.VOC.1 | sed -e "s/%/ -- /" > /tmp/BOOK.VOC.paste

In the first two lines we replace the true column delimiter " -- " with the working delimiter %. Line one, with cut -f 1, then fetches out everything to the left of the delimiting symbol and writes it into the temporary file /tmp/BOOK.VOC.1. The same thing happens in line two with the second column (-f 2), to the right of the delimiting symbol – the output of this cutting-out action lands in /tmp/BOOK.VOC.2. If, in the third line, we give paste the second temporary file as its first argument and the first as its second, we have swapped the columns of BOOK.VOC. Now just replace the percentage sign with " -- " again and save the result of the swap in /tmp/BOOK.VOC.paste. If everything has gone smoothly, the original file can be overwritten with this.

Pearls and expressions

It is of course possible to be less fiddly, too – but then we step into the realm of fully-fledged script languages such as Perl. With the -p option, Perl can be used very well as a more powerful sed substitute. As with sed, the -e option ("execute") introduces a Perl command to be executed from the command line.

[trish@lillegroenn eng_deu]$ perl -pe 's/(^.*)( -- )(.*$)/$3$2$1/' BOOK.VOC > /tmp/BOOK.VOC.perl

Everything (.*) from the start (^) of a line to its end ($) is to be replaced by a revised version. So that the content of the line does not get lost, we save it in round brackets: the start of the line before the delimiter string in the first buffer, " -- " itself in the second, and the rest up to the end of the line in the third. The whole thing is then replaced by the content of the third buffer ($3), followed by the delimiter string from the second ($2) and the former line start from the first ($1). Make sure that you set the Perl substitute command in single quotes ('): double quotes would cause the shell to assume that $3$2$1 means the contents of shell, not Perl, variables.

As if it were (k)not a problem

The most elegant way is via awk. Unlike paste, this tool can also manage multi-character column dividers; here, though, the delimiter is specified with the option -F ("field separator").

[trish@lillegroenn eng_deu]$ awk -F "--" '{print $2 "--" $1}' BOOK.VOC > BOOK.VOC.awk

The awk "program" in single quotes normally consists of a pattern, to whose matches a command block in braces is applied. Since we mean the entire file, we need not specify any explicit pattern and settle for the block in braces. In it we instruct awk to output the content of the second column ($2), then the delimiter string "--" and finally the content of the first column.




Errors often occur when developing a script, which is why we first switch on the debug options -vx. Provided /usr/dict/eng_deu contains only converted dictionary files, we keep this dictionary directory in the variable WBDIR:

WBDIR=/usr/dict/eng_deu

As with any script intended for more than one person, we begin with a test of how it was called: if the user enters more or fewer (thus "not equal") than one search term as argument...

if [ $# -ne 1 ]; then

... we simply spit out how our script ought to be used:

echo "Usage: $0 string"

Conveniently, a shell script remembers in the variable # the number of arguments with which it was called. In the variable 0 (zero) can be found the zeroth argument, thus the command name itself (with path, if one was specified). Otherwise...

else

... we search the vocabulary lists in the directory $WBDIR for the first command line argument ($1):

grep -hw "$1" $WBDIR/*

With the "word option" -w we ensure that grep only outputs something when the search word pops up as a word in its own right in the vocabulary lists (and not, for example, as part of another word). To forgive typing errors in upper and lower case, we can also force grep to ignore the difference between upper and lower case letters:

grep -hwi "$1" $WBDIR/*

This means we really are finished and can close the if construction:

fi

We issue execution rights to our wb script

[trish@lillegroenn /tmp]$ chmod ugo+x wb

and test:

[trish@lillegroenn /tmp]$ ./wb
#!/bin/bash -vx
WBDIR=/home/trish/dict
+ WBDIR=/home/trish/dict
if [ $# -ne 1 ]; then
echo "Usage: $0 string"
else
grep -hwi "$1" $WBDIR/*
fi
+ [ 0 -ne 1 ]
+ echo Usage: ./wb string
Usage: ./wb string

Thanks to the verbosity option -v ("verbose"), Bash displays every single line that it is about to execute. The lines with the initial plus are down to the -x option, which additionally shows what the shell really sees internally once it has performed all the replacements (such as reading out the contents of variables). Last of all – and unfortunately not specially marked – we also find the output we would have been faced with even without the debug options, in this instance: Usage: ./wb string. And the variant with one search word works:

[trish@lillegroenn /tmp]$ ./wb yesterday
[...]
yesterday -- gestern
only yesterday -- erst gestern
yesterday -- gestern
gestern -- yesterday
vorgestern -- the day before yesterday
[...]

No doppelgangers

This output clearly shows that we still have plans for the script: we want to get rid of the duplicates. That is really quite a simple matter: sort the output with sort (thanks to -f, "fold", treating upper and lower case letters as equal) and use uniq to throw out the doppelgangers:

grep -hwi "$1" $WBDIR/* | sort -f | uniq

Shell: The command line interface between users with their input devices and the operating system. Most Unix shells have a more or less powerful programming language built in.
$: Shells such as the Bourne shell (sh), the Korn shell (ksh) or the Bourne-Again shell (bash, under Linux also known as sh) reveal the content of a variable if a dollar symbol is placed before its name.
Whitespace: Collective term for characters which mislead the eye into believing "there is no character here". These include space and tab characters. ■

Unfortunately, there is something wrong with this, because the test run produces

[trish@lillegroenn /tmp]$ ./wb yesterday
[...]
gestern -- yesterday
only yesterday -- erst gestern
vorgestern -- the day before yesterday
yesterday -- gestern
yesterday -- gestern
[...]

which may be sorted, but is still not free of duplicates. An investigation in an editor of the output diverted into a file with ./wb yesterday > /tmp/test comes up with the following: the sole difference between the two "yesterday -- gestern" lines is the whitespace. OK, then we'll first standardise all whitespace characters ('[:blank:]') into spaces (' ') and, with the tr option -s ("squeeze"), simplify every sequence of spaces into a single one:

grep -hwi "$1" $WBDIR/* | tr -s '[:blank:]' ' ' | sort -f | uniq



And yet the double line is still proving problematic. Of course: one output has no space at the end of the line and the other has precisely one, and that is what is bothering uniq. So, with a sigh, we again pull out sed and replace a single space at the end of a line ($) with nothing:

grep -hwi "$1" $WBDIR/* | tr -s '[:blank:]' ' ' | sed -e "s/ $//" | sort -f | uniq

Et voilà – at last the wb script (Listing 1) is ready to go to work. Now the debug options can go, and root can copy it to /usr/local/bin for anyone to use. Since this directory is usually included in the PATH variable, it is then sufficient to call wb without specifying the path. ■

Listing 1: The dictionary script wb

#!/bin/bash
WBDIR=/home/trish/dict
if [ $# -ne 1 ]; then
  echo "Usage: $0 \"string string ...\""
  echo "       $0 string"
  echo "       $0 regexp"
else
  grep -hwi "$1" $WBDIR/* | tr -s '[:blank:]' ' ' | sed -e "s/ $//" | sort -f | uniq
fi

Value added

Sharp-eyed readers may be wondering how, in Listing 1, one proposed usage line suddenly turned into three echo lines. Anyone who has experimented a little with the script (or with grep) will know that by enclosing several strings in quotes one can suggest to the shell that, despite everything, only one argument is involved. As soon as users want to search for an expression consisting of several words, they simply have to place it in double quotes:

[trish@lillegroenn /tmp]$ wb "sich erinnern"
recollect -- sich erinnern an
remember -- sich erinnern
sich erinnern -- remember
[...]

We should of course document this type of use:

echo "Usage: $0 \"string string ...\""

So that echo does not wrongly interpret the quotes to be output as the delimitation of its own argument, they have to be escaped (stripped of their special position in the shell) with \. The last echo line

echo "       $0 regexp"

on the other hand points out that grep looks, right from the start, not only for character strings but also for regular expressions ("regexps"). This means, for example, that the user can elegantly skate over any uncertainties of spelling:

[trish@lillegroenn /tmp]$ wb "ye.*y"
erst gestern -- only yesterday
Freibauern -- yeomanry
gelbliche -- yellowly
gestern -- yesterday
hefig -- yeasty
[...]

searches for the translations of words beginning with ye and ending in y. The dot here stands for any character, and the following * signals that any number of these (including none at all) may pop up. The only thing to watch out for is that with regular expressions, too, the saying applies: "Some are more equal than others." Although the ground rules are the same, not all Perl regexps can also be used with grep. It is therefore often worth taking a look at the grep man page...




Reading, wRiting and aRithmetic

EASY AS ABC

RICHARD SMEDLEY

Computers may never replace pen and paper, but they can certainly complement traditional methods of learning the basics. The majority of children positively enjoy learning to read and write, but it's not just the slower or less interested ones who benefit from a little encouragement. Below we survey some of the many programs that can help children in learning the three Rs. You will find most of the code on the cover CD.

The write thing

Typing tutors reinforce spelling as well as improving keyboard interaction. The classics are Typist and Typespeed, to which can be added Sam Hart's Tux Typing. The excellent Curses-based Typist, updated as the GNU package gtypist, is a little dull for younger children; for those with a strong desire to learn touch typing on a qwerty or dvorak keyboard, however, it is well worth a look. Typespeed has more entertainment value, as you are challenged to type a word as it whizzes across the screen. You can use the regular English dictionary or, if you are bringing your kids up as true geeks, UNIX command words. It is a great challenge for older children or adults and has the advantage of running on the command line, so needing very little in the way of resources. The speed and vocabulary put it beyond most under-10s, although it's a simple matter to make a junior version. Alternatively, give xletters a go. Whilst not really a touch-typing tutor, Tux Typing is certainly great fun for the kids. An SDL game, it features funny sound effects and graphics (see Figure 2) which keep the young amused as they send Tux chasing after his dinner by typing the characters or words written on fishes which fall from the sky. The graphics are appealingly cartoon-like and the game makes an interesting counterpoint to GCompris (see Linux Magazine 9). In the same genre can be found Linux Letters and Numbers (LLN). LLN is a fun game for ages two and up. Click on a letter and up pops a picture of something beginning with that letter (click on "Z", for example, and you may get zebra.png). You can add extra images of your own.

It's only words

Those who feel safest with a Graphical User Interface (GUI) may never have noticed the package bsdgames on their distribution disk: a collection of old text-based games which run from the command line, or in an xterm. As well as fun games like robots and tetris there are the word games boggle and hangman. Nineword is a Gtk version of those boggle-type puzzles seen in newspapers, where words of four or more letters must be made from the nine available; there is always a nine-letter word to find. Staying with traditional games, a GNOME clone of Scrabble, Gnerudite, has been developed. It only supports one player for now, but has many useful features, including a cheat mode to swap some of your letters if you are stuck.

SDL

Simple DirectMedia Layer is a cross-platform library for games development, providing fast access to the audio device and the video card's frame buffer. It supports all the major desktop platforms and has bindings for most popular programming languages. Civilization: Call To Power and Mind Rover are among the better-known games dependent upon the library. The libsdl homepage contains a collection of bad jokes that your co-workers do not want to hear, so do not click on the link in this box.




[above left] Figure 1: Easy as ABC... [above right] Figure 2: catch a falling word [right] Figure 3: A genius at vector drawing

Gutenberg

Info

SEUL/edu: http://www.seul.org/edu/
linuxforkids website (no connection with this column): http://www.linuxforkids.org/
linuxforkids CD-ROM available from: http://www.linuxemporium.co.uk
Debian Junior: http://www.debian.org/devel/debian-jr/
Project Gutenberg: http://promo.net/pg
Project Gutenbook (also on the CD): http://www.gutenbook.org ■

Once your children are happily reading and writing it is time to switch off the computer (unless you are always on) and head off down to the library. Do not forget the library on the Net, though: Project Gutenberg. Worthy of an article in its own right – if you have yet to discover this monumental venture, point your browser at it. Starting with Alice Through The Looking Glass and Peter Pan, your children can work their way through every out-of-copyright text listed, until they have finished the complete works of Shakespeare and Milton. On the cover CD we have Project Gutenbook, a GPL'd Perl-Gtk browser for Project Gutenberg, which allows you to browse the archive, select and download a book, then read it. Those inspired to help out with the code may (or may not) be pleased to know that the next release will be in Python.

Sum thing for everyone

We reviewed the flashcard arithmetic game MathWar a couple of months ago. Variations on this theme are provided by Addpsx, first_math and Math Literature. Viewers of Channel 4's Countdown programme may like to practise with Anton, which takes six random digits and asks you to combine them with the four basic arithmetical operators to produce a three-figure target number. The program will also present you with the best solution. Of course there is a lot more to getting children interested in mathematics than putting them through their paces with arithmetical quizzes. Xaos will introduce them to the beauty of fractals, whilst snowflake allows the creation of a graphical cryptographic key, in the form of a snowflake pattern, from any series of characters – such as a child's name. Returning to the beauty of a challenge, Groundhog involves rotating tiles to align pipes, allowing little coloured balls to return along the pipes to the correct coloured cups. Not a strictly mathematical puzzle, perhaps – but gtans, a Gtk version of Tangram, certainly is, as some geometry rubs off on players along with creative puzzle-solving. Both games are suitable for quite young children. For a stronger geometry "fix", try the GNU program Dr Genius. The name is one of those self-recursive acronyms beloved of geeks, standing for Dr Genius Refers to Geometric Exploration and Numeric Intuitive User System (ouch). It combines vector drawing with a strong interactive element which many children will find involving. Now dive in and try some of these programs – but don't forget to let the kids have a go, too.

Resources Many of the applications reviewed here can be found collected together on the Linux for Kids Web site, along with arcade and strategy games and art applications. Software is reviewed and rated, and an ISO image of all the freely-distributable code is available. This CD may also be purchased cheaply in the UK. Debian users (including Progeny and Stormix) can find many of these applications packaged up by the Debian junior project, which we will be examining in depth in a future issue. ■



Jo’s alternative desktop

ROCK AROUND THE CLOCK JO MOSKALEWSKI

Sometimes your internal clock is not enough. And anyone who prefers not to wear a watch on their wrist can simply distribute clocks about their environment. The Linux desktop is the perfect place to start.

The unwritten law


Anyone who has spent long enough over the past few years sitting in front of Microsoft Windows has accepted one tool on its desktop as part of the furniture: the clock at the bottom right, in the tray of the taskbar. KDE follows this pattern (GNOME, in its more recent versions, no longer does) – it appears to be a de facto standard. But what do you do if you want this toolbar to hide automatically, yet still want to know the time? Nor, by any means, does every UNIX desktop have this type of start menu or taskbar with integrated timepiece. Nevertheless, GNOME and KDE turn out in this instance to be considerably more flexible than Windows: their clocks are swappable applets, which can be moved, configured, removed or added. But even these options do not exhaust the possibilities of a UNIX desktop: larger clocks, which are easier to read, can be placed anywhere on the desktop, making more room in any start menu which may be in use. In any case, anyone who, in the old UNIX tradition, screws his desktop together out of umpteen individual tools must certainly give some thought to the right clock. Separating the wheat from the chaff can be a very time-consuming venture, though, since constructing a clock serves many young programmers as a test bed for later, more successful projects. A small and by no means complete overview of some interesting clocks and their specialities follows.

Basic training camp

First of all there are the two simple clocks which belong to the X Window System: xclock and oclock. Both are thus available practically everywhere and can be configured completely via start parameters or X resources. While oclock presents itself as a simple, analogue and round clock (without window frame and with a transparent background, no less!), the square xclock hides a strange feature: called up with the switch -digital, it starts as a digital clock. There is no need to look for any deeper meaning behind this, though: both display variants are very plain and outmoded in appearance – essentially, they can only be configured in terms of their foreground and background colours. If you want to use your own colour scheme, you'd be better off with the plain oclock. Here each element can be assigned its own colour – whether it's the hour hand or the numberless dial frame.
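A freshly painted oclock might, for example, be started like this – the colour choices are purely a matter of taste:

oclock -transparent -hour darkred -minute orange -jewel yellow &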

Jo’s Desktop

Figure 1: xclock’s standard look

Only you can decide how your Linux desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys.

Figure 2: xclock -digital




Figure 3: Round object: oclock


Punctuality


A far more elegant specimen is buici-clock, which comes mainly with the Debian distribution (the program's author is part of that project). Simply round and frameless – and, like xclock, configurable only in terms of position and size. The colours are pre-set and unchangeable; but since its appearance is modelled on a railway station clock, this is not a failing but deliberate.

Finishing touch

Both buici-clock and oclock are really easy to use on the desktop: all they occupy is the clearly-defined area of the clock face. No background is painted around them by these applications, and there is no window frame to spoil the picture. The digital clocks described below are more difficult to fit in here. Since they usually display characters, they have a text colour as well as a background colour. Instead of the latter, simple transparency would certainly be nicer – but each and every programmer seems to shirk this task more or less successfully. The only thing that helps here is a background colour that is similar to the desktop image, or at least matches it. And a completely different problem now arises: since the clocks are not transparent anyway, their creators also forgo a frameless representation. Anyone who wants not merely to click away or minimise his clock, but to see it on all virtual desktops, would be well advised to take a look at the window manager documentation. If this offers no solution, xnodecor may help (see box).

[left] Figure 4: Freshly painted: oclock [right] Figure 5: Stylish: buici-clock

[top] Figure 6: xdaliclock [below] Figure 7: Colorcycling & overlapping numbers

Goliath

With xdaliclock the program author has attempted to get round these problems at the nuts-and-bolts end – though in doing so he has also created a few new ones. The nice thing about this clock is that the numbers morph into each other (colour cycling is also possible). The only thing is, the numbers have ended up really huge; unfortunately this, too, is hard-wired into the program, so there is no option for individual adaptation here. But the author must be given credit for having planned in at least one "transparent" mode – and also the windowless representation on the desktop background – from the ground up. Unfortunately, it falls a bit short in the execution, which can easily be checked with a:

xdaliclock -root -transparent

xdaliclock immediately takes over the entire desktop – and at the same time it comes out not transparent, but coloured. If the clock is called up with only the option -transparent and the drama with the windows is left to the listing in the box, it works for a short time only: xdaliclock reports back to the window manager in lively fashion every minute, and the magic of the transparency is countered by marked representation errors.

Figure 8: Two-fold copy of the dclock

Window frames? No thanks

If your favourite clock is adorned with a window frame by your window manager, and the latter offers no option for changing this state of affairs, then we recommend xnodecor. It is very easy to use: to start the clock, do not use the built-in autostart function of many desktop interfaces, but the user's own start file ~/.xinitrc or ~/.xsession. This starts the clock first and thus hides it from the window manager, which is only started afterwards. Example of a start file:

dclock -geometry -0-0 &
xnodecor -w dclock
twm



David

For sheer undiluted joy, however, go to dclock: this too is digital, overlaps the figures with each other (apart from the seconds), is freely configurable in terms of size, scope of display and colour – and also offers an alarm function. One would almost be tempted to call it perfect, but unfortunately there is no transparent mode. Figure 8 shows this clock once with the standard defaults in a window, and once completely configured and integrated into the desktop (with the help of xnodecor). The following command line parameters are especially interesting with this clock; a call combining several of them is sketched below:

• -bg [color]: background colour
• -fg [color]: foreground colour
• -led_off [color]: colour of the inactive segments of the LED display
• -fn [font]: font for the date
• -geometry -0-0: off into the corner with it
• -miltime -date "%A, %d %B %Y": 24-hour display, with the date in the given format
• -seconds: display seconds
• -noblink: avoid the blinking ":"
• -notails: 6 & 9 without "crossbars"
• -fade -fadeRate 100: fade between the figures
• -slope [X]: slant the figures by X%
• -smallsize 0.5: show seconds half as high as the minutes
• -nobell: switch off the half-hourly "alarm" ■
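Combining the parameters from the list above, a fully decked-out dclock could be called like this (the date format is the one quoted in the list):

dclock -miltime -date "%A, %d %B %Y" -seconds -noblink -notails -fade -fadeRate 100 -geometry -0-0 &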



To Infinity and beyond

PENGUINS IN SPACE! CHRISTIAN PERLE

There are thousands of tools and utilities for Linux. ”Out of the box” takes the pick of the bunch and suggests a little program each month which we feel is either absolutely indispensable or unduly ignored. This month we honour two choice little morsels for the desktop, XPenguins and XCruise.

Following the release of Pingus, the open-source clone of the classic game Lemmings, the idea arose of making the cute little penguins romp around directly in the Root Window of the X-desktop. Robin Hogan has made it happen with XPenguins.

South Pole post

Before the penguins can start to romp around, the X11 header files and the XPM library together with its header files must be installed. In many distributions the corresponding packages are called x11-dev or x11-devel, and xpm plus xpm-dev or xpm-devel. With an RPM-based distribution, such as Mandrake, you can see whether they are installed with:

rpm -qa | grep -i xpm
rpm -qa | grep -i x11

The XPenguins homepage http://xpenguins.seul.org/ has the source archive xpenguins-1.2.tar.gz ready for download. Once this file is on your hard drive, you can move on to compiling and installation:

tar xzf xpenguins-1.2.tar.gz
cd xpenguins-1.2
make
su (enter root password)
cp xpenguins /usr/local/bin
cp xpenguins.1 /usr/local/man/man1
exit

Go Penguins!

If everything has gone smoothly with the compiling, you can let the penguins out in a terminal window with the command

xpenguins -delay 100 &

(Figures 1, 2 and 3). Of course, you can also do the same via a KDE or GNOME menu link. The option -delay 100 makes sure the little Linux mascots don't move too hectically across the screen; the command man xpenguins displays additional options. If you unexpectedly get sick of the penguins, enter the deadly command killall xpenguins.

Figure 1: Who’s that walking on the window?

Figure 2: Can penguins fly? - Yes, they can!

Root Window: The background in the X Window System is managed as a separate window, known as the root window.
Header files: Header files (also called include files) list the functions available in a library, together with their parameters. The C compiler needs this information to compile a program. In the most common distributions a header package for a library usually carries the suffix dev or devel in its name.
XPM: The "X PixMap" library, a collection of service functions for displaying colour graphics (pixmaps) with the X Window System.
Compile: A program in source code form cannot be executed by the operating system. Only by compiling (translating) it with a compiler is it turned into a form which can be executed by the processor. One great advantage of the source code form is that the program can be compiled on various platforms (Intel, Sparc, Alpha, etc.), provided it has been programmed to be sufficiently portable.
&: The commercial "and" (ampersand), entered as the last symbol on the command line, serves to execute a command in the background. Otherwise the shell stays blocked until the command has ended. ■



Symbolic Link: (symlink for short) Unix file systems offer the option of making references to files; these references appear at different places in the directory tree and, provided they have equal rights, allow access to the original file as long as it has not been deleted or renamed. With the command ln -s foo bar, the file foo can also be accessed under the name bar.
Alternatives: A speciality of the Debian distribution. When several clones of the vi editor are installed, for example – such as elvis, vim and nvi – this mechanism makes it possible to select one of them (such as elvis) as the default.
vi: The standard text editor on Unix systems. It is certainly not exactly intuitive to learn, but it does offer many useful functions. A vi reference sheet can be found at http://www.bembry.org/tech/linux/vi.shtml ■


Deep Space

What was still a computer cliché in the film Jurassic Park is starting to become a reality with XCruise: flying through the file system. Yusuke Shinyama of Japan gives the user a three-dimensional view of the directory structure on the hard disk: files are shown as planets, directories as galaxies and symbolic links as wormholes. XCruise does not act as a real file manager, as no manipulations such as deleting, renaming or copying are possible. But anyone interested in just browsing the file system and seeing how files are linked by means of symlinks can fly around to her heart's content. First, though – of course – the program must be installed. The requirements for compiling XCruise are even more modest than those for XPenguins: only the X11 header files have to be installed. The source archive can be obtained from http://tanaka-www.cs.titech.ac.jp/~euske/prog/index-e.html. To compile and install, enter the following commands:

tar xzf xcruise-0.24.tar.gz
cd xcruise-0.24
xmkmf -a
make
strip xcruise
su (enter root password)
cp xcruise /usr/local/bin ; exit

Figure 3: A labyrinth can quickly be constructed out of xterms (screenshot from the project homepage)

Navigation

Once installation is complete, start XCruise with the command xcruise & in a terminal window. Control it with the left and middle mouse buttons to fly back and forth.


Figure 4: /etc in your visor

Figure 5: All roads lead to /usr

Specify the direction of flight using the cross-hairs. You can also freeze the image with f and quit the program with q. The file system is displayed according to a specific scheme: directories are white or blue rings (galaxies), and you can fly into them. Normal files are shown as filled-in circles (planets) in various colours, and symlinks as green threads (wormholes), which link the respective file or directory objects together, even across vast distances. Once you fly close enough to a directory, its contents become visible; in Figure 4 you can see the approach to the /etc directory. The size of a file defines the diameter of the planet displayed. If a planet appears coloured violet, the user has no read privileges for the associated file. Files with similar names end up with their planets located close to each other. Figure 5 shows a whole bundle of symlinks, which all point from /etc/alternatives to /usr. Anyone who has now acquired a taste for this and is on the look-out for more desktop gimmicks will find Jo Moskalewski's deskTOPia column just a couple of pages away. ■



Small in stature, but big at heart

QNX, ANY SIZE YOU WANT RICHARD SMEDLEY

The Hurd is unfortunately a long way from a finished product, and many Linux fans continue to mock the idea of microkernel architecture. Nevertheless one company has successfully shunned monolithic kernels for decades: QNX.

Figure 1: Fast, light and flexible.

Presented with an old laptop with no OS, no CD drive and no network card, I pulled out my QNX4 demo diskette and an old serial modem, and five minutes later I was browsing Project Gutenberg with the Voyager Web browser. The browser, GUI, TCP/IP stack and OS, along with a web server, dialer, word processor, Tower of Hanoi game and vector graphics demo, fit on a single 1.44MB floppy disk and made big news when QNX first did this a couple of years ago. It is still useful for getting online quickly today, when confronted with an old PC and no easy install method for a Linux distro. Of course the demo disk is more than just a gimmick. It is a demonstration of how flexible a modular (Unix-type) architecture running on a microkernel can be – and QNX Neutrino, the latest version of the OS, is a Real Time OS as well (see last month's column).

Why QNX

The big advantage that QNX has over many embedded and RealTime rivals is self-hosting: the development platform and the target platform are the same (see Figure 1). Combine this with an open and familiar API (POSIX) and free (in every sense of the word) development tools, courtesy of GNU and the Free Software community, and you have a powerful platform that is easy to develop for. If the Real Time variants of Linux (see last month's column) are not yet powerful or stable enough for you, and your customer base will bear the licence cost (and they usually will in traditional embedded fields), then you may find yourself severely tempted.

Microkernel architecture

The microkernel includes only a small set of core services within the kernel: thread services, message passing, condition variables, semaphores, signals and scheduling. The kernel can be extended by dynamically plugging in service-providing processes, such as file systems, device drivers, POSIX message queues, and networking. These services run in user space and benefit from protected memory. Throughout the 1980s microkernel architecture was taught as the state of the art in OS theory classes at universities across the world. The Hurd, the kernel for the Free Software Foundation's GNU OS, was started in this period. However, getting multithreaded servers to pass messages to each other is particularly difficult to implement correctly, and although the underlying Mach microkernel was (eventually) available as a free, debugged base, Hurd development was (and still is) fairly slow. Ten years ago, when Linus Torvalds wanted to run Unix on his i386, the quickest solution seemed to be a monolithic kernel. Although the purists poured scorn on the idea at the time, readers of this publication have a fair idea of the subsequent success of the Linux kernel :-) We will, nonetheless, return to microkernel architecture in greater depth in future columns. As QNX Neutrino shows, done correctly it has great potential for a GNU-based OS.




QNX Neutrino has become the OS of choice for everything from fledgling Internet appliances to what the Guinness Book of Records calls "the most intelligent robot" in the world: Cog, at MIT's Artificial Intelligence Lab. Designed to mimic the way humans react with and learn from their environment, Cog uses a QNX-based distributed control system to support its "realtime visual and auditory requirement" – camera "eyes" and microphone "ears" are placed in the same positions as on a human face, and Cog learns about its environment in a similar fashion to a baby. The distributed architecture and transparent networking enable the eight QNX nodes to be accessed and developed on simultaneously by students and researchers in the lab, or from home. The performance has impressed the AI Lab enough to move all of its robot research onto the platform.

No X word

Another advantage is the embedded GUI, Photon. The X server/client architecture common to all Unix systems has many advantages, but it is far too big for most embedded systems. The modular approach of Photon means that the GUI has the smallest possible memory footprint, whatever the application, and can be used in many multimedia applications. Naturally it works over TCP/IP too. The ability to distribute components across a networked environment is inherent in the system: with no user involvement, QNX Neutrino can share disks, modems or even processors across your network. Whilst it scales up to huge distributed SMP systems, this is also an advantage in systems with limited resources. In the home entertainment sector this ability could speed the long-heralded "convergence" of comms, computer and audiovisual equipment into a low-cost distributed environment where every multimedia service is available "on tap" around the house – and the system just works.


Microkernel on a mini disk

QNX has a modular microkernel architecture, which means that distributions can be custom-made with only the services needed. Even the Photon windowing system on the demo disk occupies only 45K, with additional processes loaded as needed. As there is no room for the myriad of common drivers needed for compatibility with desktop PCs, a "flat" driver uses the frame-buffer memory of the graphics card, mapping it into the high memory space of the processor. The largest application on the disk, at 400K, is the HTML 3.2-compliant browser, Voyager. This understands frames, JavaScript and animated GIFs. The full version in the Neutrino distribution has all the plug-ins (Real Player, Flash, et al.) that are needed. If you want to give it a go, visit the site and download the appropriate version: there is one for network connections and one for computers with a serial, ISA or PCMCIA modem. Untar the download, then copy the image to the floppy by simply running the makedemo shell script included; a sketch of the procedure follows below. Now stick the floppy in the drive of any PC – the minimum specification is a 386 PC with 8MB of RAM and a colour VGA display – and switch on. The OS boots and loads a compressed image to RAM, from where processes are decompressed and loaded on the fly as needed. It will not overwrite your hard disk – you don't even need a hard disk to run it.
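In shell terms the whole procedure boils down to something like the following sketch; the archive and directory names are illustrative, since they depend on which modem or network variant you downloaded:

tar xf qnxdemo.tar      # unpack the download (file name illustrative)
cd qnxdemo              # directory name illustrative
sh ./makedemo           # copies the boot image to the floppy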


QNX vs. Linux
With a proven track record in everything from life-critical medical instruments and emergency call centres to traffic control and supermarket POS (point-of-sale) systems, QNX Neutrino is a tough competitor for the embedded Linux solutions providers. Although the licence is restrictive, it is a lot more open than most of the proprietary Unices, enabling one to dig into the code and construct just the system one needs. However, the truly free Linux-based real-time solutions, as well as Free alternatives such as eCos, are strongly competitive with QNX Neutrino. The embedded space continues to be every bit as interesting as the desktop market, and it will be an interesting test of the Free Software model to see how QNX's open-source-but-not-free approach holds up. ■

Figure 2: More than just a toy

Licensed to what?
It is with some trepidation that one approaches the maze of licences around QNX in all its forms. The QNX Community License and QNX Open Community License enable derivative works to be based upon QNX code, and royalties to be collected from customers for use of the code. Those who wish to probe the intricacies of the Custom License Certificate (CLC) Program should head over to http://licensing.qnx.com. In addition to proprietary RTOS code, QNX development is very much dependent upon GPL utilities such as awk, rcs, gmake, gzip and sed. The company sees its product - and its licensing model, with its "traditional" view of IP (Intellectual Property) - as very much complementary to Linux. Indeed there is a great deal of sharing of applications and developers between the two platforms. However, for many, the licences will be the sticking point. Readers may be pleased to note that next month's POSIX-compliant column will return to GPL'd OSs.

Info
QNX demo and ISOs of the full system: http://www.qnx.com/ (register at http://get.qnx.com/)
The Hurd: http://www.gnu.org/
Debian GNU/Hurd: http://www.debian.org/ ■


BRAVE GNU WORLD
The monthly GNU column, by Georg C. F. Greve

Welcome to another issue of Georg's Brave GNU World. This month the philosophical background takes centre stage, focusing on the question of commercial Free Software. But first I'd like to introduce a few projects.

TuxFamily.org
Julien Ducros has started the TuxFamily.org project with a group of volunteers. The inspiration for this was the American SourceForge project, which provides central infrastructure for development and presentation (Web server, FTP server, CVS server and so on) to projects. It is Julien Ducros's belief that the European and African communities also need such a service on their continents, and this is what the French TuxFamily project wants to provide. TuxFamily itself is strongly oriented towards Free Software: the software used for hosting (vhffs) is itself published under the GNU General Public License, and TuxFamily only accepts projects that qualify as Free Software. So projects looking for a new home might consider joining the TuxFamily.

Chrooted SSH CVS-Server HOWTO
Very often relatively small companies do good work for Free Software whilst being almost completely unknown outside their respective countries. The French company Idealx is one of them. On the Idealx community homepage several interesting modules and documents about Free Software can be found. They solve several standard problems and, of course, everything is available under the GNU General Public License and the GNU Free Documentation License.

Among the packages is a Python module which allows calendars to be created from CGI scripts. An XML-customizable CVS-notify script to automate actions during check-in of new versions, and a binding of Erlang to Python, can also be found. A particularly interesting document is a HOWTO written by Olivier Berger and Olivier Tharan, which deals with the set-up of a very secure and well-insulated CVS server.

Programmers normally know the advantages of the Concurrent Versions System (CVS), but it is a tool many people underestimate, so I'll give a short introduction. It should be common knowledge that software is normally written in the form of source code, which is improved by one or several authors during the development process. This raises the problem of coordinating several developers, because normally every developer will be sitting at his or her personal machine making changes there, which means the source code often changes simultaneously in different places.

To solve this problem, CVS (like other version control systems) has a central gathering point, the so-called repository. Every developer communicates directly with the repository, receiving updates by others or submitting their own changes. Of course it can happen that two authors change the same part, but the different ways to resolve or avoid such conflicts are not important right now. What is important is that the CVS repository does not just contain the current version; it also saves every change. That way the development process can be tracked step by step later, and it is possible to go back to old versions in order to fork a project and run different development processes in parallel, with the option of merging them again later.

All this is not relevant just to developers. Considering that source code is, in most cases, simple ASCII files, it is immediately obvious that CVS can be used for any sort of data, especially anything that can be expressed in text-only form. Web sites, documents, email archives and much more are perfect areas for the use of CVS.



The Chrooted web site


The neuralgic point of a CVS server is its access rights. There are several possibilities here, with different levels of security. In most cases an account on the CVS server requires an account on the system, and some methods of authentication transmit passwords in clear text, so they can easily be read by others. Even when this is avoided by using SSH, it is very often not desirable to give every CVS user full access to the system. The HOWTO mentioned above describes very well how to configure several CVS servers in parallel on one machine while only granting users the rights necessary to access their specific repository. This should allow the majority of intermediate users to install a secure CVS server on their machine in order to enjoy the advantages of CVS mentioned above.
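The HOWTO itself is the place for the full recipe, but the mechanism underneath it is the classic Unix chroot jail, sketched below. The jail path and user/group IDs are invented for illustration, and a real setup must also populate the jail with the binaries and libraries the service needs:

    #include <unistd.h>
    #include <stdio.h>

    /* Essence of a chroot jail: after these calls the process - and any
     * cvs server it executes - sees /var/jail/cvs as "/" and cannot
     * reach files outside it. chroot() requires root, so privileges
     * are dropped immediately afterwards. */
    int enter_jail(void)
    {
        if (chroot("/var/jail/cvs") == -1) { perror("chroot"); return -1; }
        if (chdir("/") == -1)              { perror("chdir");  return -1; }

        if (setgid(1001) == -1 || setuid(1001) == -1) {  /* unprivileged IDs */
            perror("drop privileges");
            return -1;
        }
        return 0;
    }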



AutoGen
It is always a pleasure to introduce new GNU projects. The one I would like to introduce this month is AutoGen by Bruce Korb. It is a tool designed to make creating and maintaining programs that contain large amounts of repetitive text quite comfortable. A classic example is the loop for command-line evaluation: in the worst case, text is repeated with cut & paste for every option in order to put the values somewhere that other functions can find them. Since this is very much a standard problem, AutoGen has a template called "AutoOpts" for it; the sketch below shows the kind of hand-written code this replaces.
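To make the problem concrete, here is the sort of hand-maintained option handling (plain C - deliberately not AutoGen syntax) that grows by copy and paste with every new flag; a template such as AutoOpts generates all of this, plus help text, from one definition per option:

    #include <stdio.h>
    #include <string.h>

    /* Hand-written command-line evaluation: every new option means
     * parallel edits to the variables and the comparison chain - the
     * repetitive text a template processor is designed to generate. */
    int opt_verbose = 0, opt_quiet = 0, opt_force = 0;

    void parse_args(int argc, char **argv)
    {
        int i;
        for (i = 1; i < argc; i++) {
            if      (strcmp(argv[i], "--verbose") == 0) opt_verbose = 1;
            else if (strcmp(argv[i], "--quiet")   == 0) opt_quiet   = 1;
            else if (strcmp(argv[i], "--force")   == 0) opt_force   = 1;
            else fprintf(stderr, "unknown option: %s\n", argv[i]);
        }
    }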


AutoGen bears certain similarities to m4, the traditional Unix macro processor, but it is superior to m4 in many respects, such as a simplified way of adding new parameters to functions; AutoGen also supports nested collections of values. Because of this, Gary V. Vaughan, the most active developer of AutoGen besides Bruce Korb, would like to see autoconf start using AutoGen instead of m4.

According to Bruce, one of the special strengths of AutoGen is the clear separation of templates and definitions, which makes the templates much more flexible. Also, all data is addressed by name, not position, which allows files to be restructured and re-sorted. On top of this, old definitions can become obsolete without old files having to be changed, which increases compatibility. And of course all definitions can be nested. Through variables marking the locations for replacements, keywords define which parts are left out or repeated. AutoGen also has much better ways of controlling the output than the C preprocessor.

When asked about the disadvantages of AutoGen, Bruce Korb says that it is widely unknown, and that it is also much too easy to make templates look like hieroglyphs. The static evaluation of definitions is currently the biggest limitation of AutoGen, but it is planned to make this dynamic in future versions. Beyond this, AutoGen is pretty much finished, so what is mostly needed now is feedback to increase portability. So far it is known to run on GNU/Linux, BSD, SVR4-5, HP-UX, SCO OpenServer and Solaris, as well as on Windows NT provided Cygwin is installed. Although AutoGen itself is licensed under the GNU General Public License, some add-ons are under the LGPL and FreeBSD licenses or in the public domain.

Where all your GNU fun should begin

Free Software and Commercialization
I have received mail about this topic - for instance from Tommy Scheunemann, who asked about "Industry and GPL" - and on the discussion mailing list of the FSF Europe a pretty controversial debate has taken place about whether it is legitimate to make money with Free Software. This made me aware that there are still some open questions in this area.


108Gnuworldsbd.qxd

29.06.2001

20:38 Uhr

Seite 111

BRAVE GNU WORLD

Before the question of commerce and interaction with industry can be understood, it is necessary to look at the definition of Free Software. The first step is always to understand that the "free" in Free Software does not stand for "gratis" but for "freedom." But which freedoms does this refer to?

The most precise definition of Free Software is the four freedoms of the Free Software Foundation. The "Debian Free Software Guidelines" were derived from these, and in turn provided the base for the "Open Source Definition"; technically, all three definitions were written to describe the same licenses. Because the four freedoms are the most compact definition, however, I will talk only about them.

The first freedom is to be able to use a program for any purpose: restricting the use in any way would immediately mean a program does not qualify as Free Software. The second freedom allows you to study a program in order to learn how it works and to change it according to your own needs; having access to the source code is a precondition for this. Freedom three allows you to make copies and pass them on, while freedom four is the sum of freedoms two and three: you must have the freedom to pass on improvements for the benefit of others. Like freedom two, this requires access to the source code.

It is important to be aware that having the freedom to do something also includes the freedom not to do it - which applies especially to freedoms two, three and four. There is no obligation to copy or modify a program, and no obligation to pass it along. In fact, a requirement to make changes public is what made the "Apple Public Source License" fail to qualify as a Free Software license.

Whether a piece of software is Free Software or not is decided by its license. Of the Free Software licenses, the GNU General Public License is the most widely used. Besides the GNU Lesser General Public License, which is a variation of the GPL, the FreeBSD license has the biggest practical importance.

So what about commercialisation? Free Software very deliberately makes no distinction between commercial and non-commercial use. A limitation to non-commercial purposes would itself violate the first freedom - so Free Software is always also commercial. Combined with the knowledge that there is no requirement to pass it on, it immediately becomes clear that Free Software can even be sold. It should be clear now that Free Software can be commercial, although it doesn't have to be.

The currently predominant commercial model for software in the industry is the proprietary one, where the price is artificially raised by limiting these freedoms. So a company could be tempted to take away the freedoms in order to increase its bottom line in the short term. This happens in two ways.


Licenses like the FreeBSD license are based on the assumption that no one would put his or her personal interests above the interests of the public; they deliberately allow proprietary relicensing. The GNU licenses, by contrast, have a "proprietarisation protection" to prevent this: if you base your success directly on the work of others, you cannot release the result under a proprietary license. The only way to go back to the proprietary scheme in this case is to write modules that are technically isolated from the original program, which requires additional work and is not always possible.

In both cases the proprietarised end product is normally sold as "value-added" software, with the goal of convincing users to give up their freedoms. This can happen commercially or non-commercially. So despite the protection offered by the GPL, in the end only the awareness of the users can prevent a return to the proprietary model.

To summarize: without question there is commercial Free Software, just as there is non-commercial Free Software. It is a question of choice. Instead of being distracted by this question, we should rather make sure not to lose track of the freedom itself, as it is the foundation of the whole movement.

Info
Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
Home page of the GNU Project: http://www.gnu.org/
Home page of Georg's Brave GNU World: http://brave-gnu-world.org
"We run GNU" initiative: http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
TuxFamily.org home page: http://www.tuxfamily.org
SourceForge home page: http://www.sourceforge.net
Idealx community home page: http://www.idealx.org
Chrooted SSH CVS-server HOWTO: http://www.idealx.org/prj/idx-chrooted-ssh-cvs/dist/
Concurrent Versions System (CVS) home page: http://www.gnu.org/software/cvs/
AutoGen home page: http://autogen.sourceforge.net
GNU m4 home page: http://www.gnu.org/software/m4/
The FSF Free Software Definition: http://www.fsf.org/philosophy/free-sw.html
Debian Free Software Guidelines: http://www.debian.org/social_contract#guidelines
Open Source Definition: http://www.opensource.org/docs/definition.html
Free Software licenses: http://www.fsf.org/philosophy/license-list.html ■

Enough for this month
Okay, that's it for this issue. I hope the question of commercial Free Software has become clearer and that I managed to explain things in an understandable way. As usual, I am requesting comments, questions, ideas, inspirations and project introductions at the usual address. ■

