



COMMENT

General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
Subscriptions: www.linux-magazine.co.uk / subs@linux-magazine.co.uk
E-mail Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk

Editor

John Southern jsouthern@linux-magazine.co.uk

CD Editor

Richard Smedley rsmedley@linux-magazine.co.uk

Staff Writers

Keir Thomas, Dave Cusick, Martyn Carroll

Contributors

Alison Davis, Richard Ibbotson, Luke Leighton, Colin Murphy, Alison Raouf, Richard Smedley

International Editors

Harald Milz hmilz@linux-magazin.de Hans-Georg Esser hgesser@linux-user.de Ulrich Wolf uwolf@linux-magazin.de

International Contributors

Simon Budig, Mirko Dölle, Albert Flugel, Björn Ganslandt, Georg Greve, Sebastian Gunther, Pablo Gussmann, Andreas Huchler, Patricia Jung, Oliver Kluge, Lars Martin, Jo Moskalewski, Christian Perle, Thomas Ruge, Ronald Schaffhirt, Fabian Schmidt, Tim Schürmann, Volker Schwaberow, Stefanie Teufel, Christian Wagenknecht

Design

Renate Ettenberger vero-design, Tym Leckey

Production

Bernadette Taylor, Stefanie Huber

Operations Manager

Pam Shore

Advertising

01625 855169
Carl Jackson, Sales Manager: cjackson@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt: Osmund@Ohm-Schmidt.de

Publishing

Publishing Director

Robin Wilkinson rwilkinson@linux-magazine.co.uk
Subscriptions and back issues: 01625 850565
Annual subscription rate (12 issues): UK £44.91, Europe (inc Eire) £73.88, Rest of the World £85.52. Back issues (UK): £6.25

Distributors

COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE

Print

R. Oldenbourg

Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England. Copyright and Trademarks © 2000 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example letters, e-mails, faxes, photographs, articles, drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany. Disclaimer: Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction. Technical Support: Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.

INTRO

CURRENT ISSUES

DISTRO FEVER This month sees the release of a variety of new Linux distributions, some of which we review in this issue. These releases have, of course, been precipitated by the recent release of the 2.4 kernel. Each new distribution builds on the one before it and is better than the last, with advances being made either in ease of use or in functionality. The promise of added functionality naturally necessitates an upgrade — but which upgrade to choose and how to manage it? If I remain with one distribution I gain the benefit of being very familiar with it — anticipating its shortfalls and fully exploiting its strengths. Sticking with one boxset would free me from the constant race to keep up with new developments — with so many distributions coming out I would need to install almost daily to keep up with every innovation and so would lose productivity. Also, my day job allows me to use Linux exclusively and so I need a stable system. On the other hand, I wouldn’t like to miss out when other distributions race ahead. Sticking with one system may mean that the virtues of another’s tools pass me by and changing between distributions has the advantage that I am always up to date. Fortunately, I can resolve this dilemma quite easily: With development systems at home, and a stable system at work, I am lucky enough to be able to run many environments and so can try out differing distributions as and when they are launched. This odd arrangement means I can keep up to date with what is new and only change my work machines when finally I cannot manage without that must-have utility.

Eventually, most distributions seem to merge. Although they all have their own nuances, their collective similarities ultimately outweigh individual differences. SuSE has its YaST configuration system, Debian its apt-get package tool and Mandrake its drake tools. All good. All worth having. All missed when on another machine — but equally all circumvented on other systems. It leaves me wanting a combination of everything and so, like most other users of Linux, I add packages and modify files until the system ends up as the hybrid I require. Another user may love or loathe my systems — but they’re my systems, and so, my choice. This exercise left me thinking just how many distributions are available. Woven Goods for Linux lists some 71 distributions. However, if we count differing systems rather than distributions then, as everyone configures their own machine, there could be said to be at least 175,858 distributions. Why this figure? Well, it is the most conservative figure, based on the number of people who have registered on the Linux Counter (http://counter.li.org). Although not everyone who is registered is still using Linux, a far higher number who do use Linux are not registered. The site estimates up to a hundredfold factor for each country, giving England some 685,400 active Linux users. Quite a community. Now if only I can find someone to finish off the vCard standards...

John Southern, Editor

We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We're not simply reporting on the Linux and open source movement - we're part of it.



NEWS

Compaq and Oracle mean business

As part of their E-Business Infrastructure (EBI) initiative, which aims to help businesses implement, extend and maintain complex e-business infrastructures, Compaq and Oracle have announced plans to deliver infrastructure solutions for the Oracle9i platform. These will include consulting services and support. Michael Rocha, Senior Vice-President of the Platform Technologies Division at Oracle said, "This joint initiative and resulting configurations help decrease the time, cost and resources customers spend deploying and managing their computing infrastructure, while providing for optimal e-business functionality. We've extended our relationship with Compaq to offer assurances that together we will provide the complete set of Internet application services provided by Oracle9i, including clustering, data management, portal, wireless, and caching services optimised for Compaq servers." Mike Winkler, Executive Vice-President, Global Business Units at Compaq added, "Many companies urgently need to Web-enable their environments to become nimble and flexible in decision making and to remove the costs of the development and maintenance of e-business infrastructure. Two industry leaders have joined forces to offer engineered, integrated and tested solutions for the most demanding e-business environments." At the same time, Compaq and Oracle announced a reference configuration for clustered solutions. It is planned as the first in a series of joint reference configurations from both companies. ■

Locked in Linux

Open source security company Guardian Digital has announced its Internet security solution, EnGarde Secure Linux. EnGarde offers a suite of Open Source tools that provide businesses with a foundation for building a secure online presence. Features include intrusion detection capabilities and improved authentication and access control methods, as well as strong cryptography and SSL-secured Web-based administration capabilities. The solution can manage thousands of email and DNS domains and offers a suite of e-business applications based on AllCommerce. Benjamin D. Thomas, EnGarde Secure Linux product manager said, "The ability to quickly and securely generate e-commerce storefronts and virtual websites, as well as manage email and DNS services for an entire organization, is a very powerful feature for our customers." ■

JWAVE 3.5 released

Data visualisation, numerical analysis and enterprise software solutions provider Visual Numerics has released the latest version of its JWAVE client/server solution. JWAVE 3.5 aims to increase the productivity of JWAVE developers with support for Java Server Pages and advanced graphics features. JWAVE uses Sun Microsystems' Java components to develop applications and deploy them throughout the enterprise, across the Internet or an intranet. Margaret Journey, the JWAVE product manager at Visual Numerics, commented: "The big-picture benefit of JWAVE 3.5's JSP support is that it will now be even easier for JWAVE developers to build Web-based applications that help their end users solve complex problems." ■



VMware to run under NetBSD

Wasabi Systems has enabled the VMware emulation software package to run under NetBSD. VMware enables users of the i386 platform to run a guest operating system within a virtual machine on top of another operating system - so, for example, users could run Windows on a Linux-based system, and now on NetBSD too. Perry Metzger, Chief Executive of Wasabi Systems said, "Wasabi Systems created the VMware compatibility package because we feel VMware is a valuable tool for the NetBSD community. Unfortunately, we were forced to do it without assistance from VMware Inc, so we can't offer a natively compiled, packaged and supported version. Luckily, NetBSD has the ability to run Linux binaries, and so by porting the Linux kernel modules supplied by VMware it was possible to make VMware run under NetBSD." Metzger also expressed his hope that someday VMware would "recognise the size of the BSD marketplace and choose to cooperate on making a native, supported version of the software available." Frank van der Linden, a senior developer at Wasabi Systems, commented: "Having the option to run a different operating system on NetBSD without having to restart the one that you are currently using is a powerful feature. For example, it makes running Windows applications easy - you just start up a complete Windows session using VMware. I have put this feature to good use myself already on a regular basis, and am happy to provide it to the NetBSD community." Wasabi was founded by members of the NetBSD project. ■




Streaming is free

Open source community SurePlayer.org has announced the release of its non-proprietary, MPEG-1, Java-based audio and video player. The player is available under the GNU General Public License. SurePlayer is offering its source code and sample video demos for download at its website. Alan Blount, driving force of the SurePlayer.org initiative, said: "The goal of SurePlayer.org is to build a streaming video player that works on 96% of all browsers. The player is the first non-proprietary, open source video player that plays directly out of a Web page without a download or installation." Jon Orwant, Chief Technical Officer of O'Reilly & Associates, commented, "SurePlayer has the potential to be the most widely deployed video player on the Internet. Finally, users can watch video in their Web browsers as easily as they can read text. It's about time."

Info: The player and source code are available at: http://www.sureplayer.org/resources.html ■

New tricks for Yellow Dog

Terra Soft Solutions, a developer of Linux solutions for PowerPC microprocessors, is to bundle LXP, Command Prompt's PostgreSQL application server, with the latest release of its Yellow Dog Linux software. The LXP application server features a suite of services to help Linux Web developers create dynamic, easy-to-manage websites. Features include direct fallback to the PHP language, persistent query execution, data parsing, and XML and content management. Kai Staats, Chief Executive of Terra Soft Solutions said, "In addition to the nearly complete YDL 2.0 book, we are pleased to expand the function of Yellow Dog Linux with Command Prompt's quality product. While it is our goal to take YDL 2.0 into the hands of those newer to Linux, LXP adds to the server and development OS foundation we have built with Champion Server." Joshua Drake, Co-Founder of Command Prompt, added, "Terra Soft Solutions' YDL 2.0 is an exceptional distribution that will help increase the viability of Linux and LXP. It is our pleasure to include our LXP application server with their distribution."

Info: http://www.linuxports.com ■

AlphaServer all kitted up

Compaq is offering Linux developers an Advanced Developer's Kit (ADK) for use with its AlphaServer GS Series systems. The ADK follows the recent port of Linux to the AlphaServer GS series and provides documentation and software, including the tools and patches required for running the Linux 2.4 kernel with either SuSE 7.0 or Red Hat 7.0 on AlphaServer GS systems. Rick Frazier, Vice-President of Marketing for Compaq's Business Critical Server Group said, "With its strengths in handling data-intensive and other high-performance applications, Linux is gaining increased acceptance in the enterprise. With the ADK, we are responding to requests from customers who want to evaluate Linux on a high-end multi-processor configuration or run Linux applications in a mixed environment with our Tru64 UNIX operating system." Dirk Hohndel, Chief Technology Officer at SuSE Linux AG said, "SuSE and Compaq are working together in many areas to move Linux more into the high end of computing. Among the highlights in this cooperation is our work on support for the NUMA architecture, as well as improvements to the scheduler and other key kernel components to make it better utilize the enormous potential of the AlphaServer GS systems. Having the ADK available enables us and our customers to implement Linux-based solutions for their high-end computing needs." The ADK is available for download at http://www.support.compaq.com/alpha-tools. ■

New BlueCat out of the bag

Open source and true real-time embedded solutions provider LynuxWorks has announced the latest release of BlueCat Linux, featuring MIPS support. BlueCat Linux 3.1 features tool-chains specifically for the MIPS R3000 and R4000 microprocessors. These simplify the embedding process to help reduce time to market. LynuxWorks sees the addition of MIPS support as another step towards the next-generation smart devices, communications and consumer products markets for its operating system, as MIPS supplies microprocessors for those markets. Other architectures supported by LynuxWorks include Intel Pentium, XScale and x86 compatibles, the ARM family (including Thumb extensions), StrongARM, PowerPC (including PowerQUICC) and Hitachi SuperH. Doug Agnew, Product Manager at LynuxWorks said, "LynuxWorks' support for the MIPS architecture is a critical component of our Linux strategy. Deployment of Linux-based networked devices and digital consumer appliances is exploding, and BlueCat Linux gives developers the perfect choice of OS, tools, and now the broadest architecture support, for developing the next wave of breakthrough applications." ■




Egenera's cutting edge

Internet infrastructure solution provider Egenera has unveiled its soon-to-be-released Internet data centre solution. The Egenera BladeFrame System supports up to 96 high-end Intel processors, which can be deployed entirely through software. The system features a 24x30x84in chassis with 24 two-way and/or four-way SMP processing resources (Egenera Processing Blade), redundant central controllers (Egenera Control Blade), redundant integrated switches (Egenera Switch Blade) and a redundant interconnect mechanism (Egenera BladePlane). BladeFrame combines with the Egenera Processing Area Network Architecture, which consolidates and simplifies the allocation and management of computing power, to adjust processing while the machine is running, as well as to support new applications or accommodate variable demand on existing applications. Vern Brownell, Chief Executive Officer at Egenera, said that during his years as Chief Technology Officer at Goldman Sachs the company's use of technology as a key business driver grew considerably, meaning that the agility and performance of data centres became increasingly mission critical. He commented, "In our efforts to reduce application time to market, ensure availability and become more flexible, server deployment was a primary hurdle. Realising that nothing short of a totally new processing architecture could solve the problems my managers and system administrators routinely encountered, I founded Egenera. We believe that our comprehensive approach to improving the data centre will resonate with customers and find favour in the marketplace." ■

Farm offers savings harvest

Until 30 June 2001, SGI and Platform Computing are offering savings of up to £17,000 to businesses who buy the new SGI EDA Technical Compute Farm for Linux. The solution is based on SGI's 1100 server with 32 1GHz Pentium III processors, 2GB of memory per node and a Cisco Catalyst 3524-PWR XL Gigabit Ethernet switch, providing efficient job distribution and data access for all nodes. The package also includes Red Hat Linux 6.2 and SGI's management tool, Advanced Cluster Environment (ACE). Phil Weaver, President and Chief Operating Officer of Platform Computing said, "SGI has embraced Linux to a degree unmatched by other traditional UNIX operating system vendors. It delivers comprehensive, cutting-edge solutions for Linux and is a solid contributor to the open source community. Using Platform LSF to manage distributed resources, the SGI EDA Technical Compute Farm for Linux offers a total solution with very attractive price/performance characteristics." The list price for the solution starts at £113,000 for 32 CPUs. However, until the end of June, EDA users can save up to £17,000 per rack when they buy the EDA Technical Compute Farm for Linux. ■

Kompany coding

TheKompany.com has released its C++ GUI IDE, KDE Studio Gold Beta 3. The release offers code completion, dynamic syntax highlighting and pop-up function parameter look-up, as well as new features and additions requested by users, including simplified debugging and documentation. Developers are not limited to KDE projects, as the release can also handle custom projects (in which no makefiles are generated; instead everything is based on Autoconf/Automake), as well as console, X11 and Trolltech Qt Designer projects. It also enables developers to import projects from an existing directory structure. New features include code folding, a highlight engine enabling developers to modify and add custom highlighting by editing an XML file, a class diagram plugin, and quick project file creation using the existing directory structure.

Info: http://www.thekompany.com/products/ksg/ ■



Bynari's insight

Texas-based open standards software developer Bynari has announced its new Insight line of messaging and collaboration products, claiming that the Insight tools will allow Linux and UNIX desktops within an enterprise to work with messaging products such as Microsoft Outlook and Lotus Notes. Previous releases of Bynari's Linux-based messaging and collaboration client had to access Windows messaging components across a Windows NT Server proxy to interoperate with Outlook; the new Insight client works directly with Outlook without requiring a proxy. Bynari Chief Executive Mike O'Dell said, "This breakthrough opens up a world of new opportunities for integrating the Linux and UNIX desktop user community into the Windows-centric enterprise. Insight provides the capability to simply and cost-effectively upgrade a company's Linux and UNIX users with an easy-to-use, integrated application that interoperates with Outlook and provides Internet standards-based email, IMAP shared files and folders, LDAP global address books, scheduling and calendar management functions." At the same time Bynari announced its Insight client-server solution. The server now provides messaging and collaboration services to Insight clients, for improved collaboration between workgroups and individual users. Insight Server also supports Outlook clients.

Info: http://www.bynari.net ■

New LynuxWorks Environment

LynuxWorks has announced its CodeWarrior Integrated Development Environment (IDE) Edition, aimed at developers working in Linux and Solaris environments who deploy on LynxOS and BlueCat Linux targets. CodeWarrior combines an editor, code browser, compiler, linker and debugger in one application, all accessed within a graphical user interface (GUI). Greg Rose, director of product management for LynuxWorks said, "Development tools will help our customers speed the product development phase and introduction of their new products to market. Additionally, this CodeWarrior IDE announcement is the first of a series of new announcements we will be making in 2001 under our new LynuxWorks expanded tools initiative." ■


NuSphere's advantage

Business software solution provider NuSphere has released its Web development platform for small and medium-sized enterprises. NuSphere MySQL Advantage 2.0 features open source components, giving developers the choice of building, maintaining and deploying Internet applications under Linux, UNIX or Windows. Technology enhancements include support for financial transactions, essential for building business-critical Web applications. Advantage 2.0 also includes RPM support for Red Hat Linux, encryption support for Windows and an enhanced MySQL version 3.23.36, with the beta release of Gemini, aimed at highly granular, transaction-intensive database applications. Gemini enables automatic crash recovery, failover clusters, table and site replication, and backup. It also features the bug database Bugzilla. Carl Olofson, program director, information and data management software research at IDC commented, "NuSphere and this open source RDBMS, with substantial enhancements for transaction management and other enterprise-class database functionality, enable companies to build cost-scalable, stable enterprise-class eBusiness solutions and Web services. Market pressures in favor of rapidly expanding eBusiness functionality are driving a requirement for greater Internet functionality and capacity, but these must be developed with an eye toward both upward scalability and cost containment. The growing range of open source software components, including, and in combination with, NuSphere technology will allow IT systems to meet this requirement, yet remain cost effective and scalable while exploiting the latest trends in database software." ■





Caldera's sneak preview

The Santa Cruz Operation (SCO) and Caldera Systems have announced a technology preview release of their commercial 64-bit UNIX operating system for Intel Itanium processors. AIX 5L version 5.1 is the result of Project Monterey, a cooperative effort between SCO and IBM to develop the next-generation UNIX operating system for Intel Itanium processors. Caldera Chief Executive Ransom Love said that Caldera sees the importance of a stable 64-bit operating system as the backbone supporting mission-critical business applications on Intel platforms. He added: "Offering AIX 5L to our high-end Intel OEMs and resellers allows Linux to reach another bar on the enterprise ladder. AIX 5L provides choice and flexibility for our customers while leveraging their current investments." ■

LinuxIT makes new appointment

Vendor-independent open source solutions provider LinuxIT has appointed Dr David Hodges as Chief Technology Officer. Until earlier this year, Dr Hodges was head of systems for London-based global investment firm Antfactory. He brings more than eighteen years of experience in operational management, nine of them in the Linux marketplace. Peter Dawes, Sales Director of LinuxIT, said, "We are delighted that David has joined LinuxIT... David has unique experience in the management and delivery of IT solutions, and he will be able to leverage these strengths for LinuxIT and assist us in delivering Linux and Open Source solutions to our growing customer base." ■

Red Hat releases version 7.1

Red Hat has announced version 7.1 of its Linux product, which features a kernel update. The latest release also features Red Hat Network connectivity, including Software Manager with errata alerts and RPM updates that advise users of new RPM packages. The new 2.4 kernel combines improved SMP support with new configuration tools to help users set up and administer DNS, Web and print servers. Billy Marshall, product manager for Red Hat Network, said the Software Manager was an aid to productivity: "Every enterprise, regardless of size, is challenged to deliver better services to their customers using the Internet. Software Manager increases IT productivity in meeting this challenge with this set of customisable services that improve the reliability and security of the Red Hat Linux systems that power many enterprises' Internet infrastructure." ■

New Heroix suite

IT infrastructure management software developer Heroix has released its new management suite. The Heroix eQ suite enables Windows 2000, Windows NT, UNIX and Linux systems to be unified for monitoring and maintenance purposes. Features include the task-oriented Express Wizard interface, which prompts for the required information while providing context-sensitive help, and Application Autodiscovery, which automatically detects application and workload changes for improved scalability of the management suite. Howard Reisman, Chief Executive of Heroix said, "The Heroix eQ Management Suite responds to the sharp upswing in IT complexity that enterprises are grappling with today. While Web-enabled business, powerful distributed servers, and open source platforms have ushered in tremendous advantages, they also pile on layers of management issues... One of the most significant ways Heroix delivers on this vision is by encompassing the three most widely used server environments: Windows, UNIX, and Linux." ■

All together now

The Board of the Embedded Linux Consortium (ELC) has announced plans to release a single unified specification for an embedded Linux platform to its members. The proposed unified specification would reference existing specifications including POSIX 1003.13 PSE 52 and PSE 53, the Single UNIX Specification, and the Linux Standard Base. The ELC says it must also include the basic OS services supported in any compliant embedded Linux system. The ELC will distribute an outline of its proposal to its 124 member companies and, once their comments have been received, plans to make the full document publicly available. It is hoped that this will help to establish Linux as a viable open, multi-vendor software platform alternative to single-vendor embedded solutions such as Windows CE, PalmOS or VxWorks, and further accelerate the adoption of Linux in emerging post-PC applications. ■


COVER FEATURE

DISTRIBUTIONS TEST

Linux Distributions for newbies/users

THE WIDER THE CHOICE... ANDREAS HUCHLER

Word has got around by now that Linux not only performs reliably as a server operating system, developer platform or embedded system, but is also regarded more and more as a serious alternative to commercial operating systems for desktop users. But which of the numerous Linux distributions available on the market is best suited to the requirements of a desktop user is a matter that is frequently still unclear. Linux Magazine risks a topical evaluation of the market situation.

Focusing completely on the latest Linux distributions is not without its problems. One of the hardest decisions is surely the choice of when to conduct a test series. Since the manufacturers of Linux distributions bring their latest collections of packages to market at various times and (with few exceptions) at intervals which are hard to predict, it is hard to avoid the fact that distributions that have only just come out enjoy a certain advantage in freshness over competitors which may have been on the market for a while already. These circumstances frequently also affect the test result as a whole. We nevertheless feel it is worthwhile, even necessary, to subject the various distributions at regular intervals to as fair an evaluation as possible. This kind of comparison test is designed to help you decide which distribution is best for you. Nor, though, should the responses that such tests can trigger among the manufacturers of the distributions themselves be underestimated. Even two years ago, most distributions could only be installed using really monotonous console menus, which were also hard to understand. Administration of the system was mainly done through console-based scripts, if at all. Nowadays a graphical installation program forms part of the standard repertoire of most distributions. Certainly, in the past few years some technical specifications have improved. But the fact that now almost all distributors make every effort to enable installation and configuration to be as easy as possible, and also visually attractive, is also due to the increased expectations of a rapidly growing group of Linux users. As we write this article, Red Hat 7.1 and Mandrake 8.0 have both recently been released (and are reviewed in this issue), and Caldera OpenLinux Workstation has been released as a beta. ■



DEBIAN 2.2 R2

COVER FEATURE

Debian 2.2 R2 on Test

THE PURIST’S ALTERNATIVE ANDREAS HUCHLER

Debian, unlike most common distributions, is not compiled and updated by a distributor with a commercial interest, but survives on the voluntary commitment of a world-wide community of developers and compilers. The product – the official release, which is regarded as extremely stable – can be downloaded in full from the Internet. As an alternative to this, various Debian resellers offer distribution packages, with which they sometimes include, in addition to the official release CDs, a manual or supplementary CDs. But we are only going to look at the official release.

Installation

Going against the general trend towards graphical installers, the developers of Debian 2.2 continue to insist on the tried and tested menu-driven, console-based installation routine. This would not be so bad if at least a passable automatic hardware detection were integrated. Unfortunately this still does not exist, so only experienced or well-read users will emerge from the installation marathon with at least the most important hardware components preconfigured for use when they first log in. For the ISDN configuration, for example, there is no explicit configuration mask. You therefore have to know in advance that most ISDN cards can be driven by the Hisax kernel module once a few card-specific parameters are specified, and load it by hand during installation in the corresponding module loader submenu.
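By way of illustration, loading that module from a shell might look like the following. This is only a sketch: type=3 happens to select a Teles 16.3-style card and protocol=2 selects Euro-ISDN (EDSS1), while the io and irq values are assumptions that must match your own card's settings.

# load the Hisax ISDN driver with card-specific parameters (values illustrative)
modprobe hisax type=3 protocol=2 io=0x280 irq=10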

Short introduction: Commissioning a USB mouse under Debian 2.2
1. Make a device file:
• mkdir /dev/input
• mknod /dev/input/mice c 13 63
2. If necessary, load the following USB kernel modules with modprobe:
• usb-ohci or usb-uhci (depending on the USB controller)
• mousedev, usbmouse
3. Adapt the pointer section in /etc/X11/XF86Config:
• Protocol "IMPS/2"
• Device "/dev/input/mice"
• and wheel support too, if necessary, via buttons 4 and 5
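Putting step 3 together, the pointer section of /etc/X11/XF86Config (XFree86 3.3.x syntax, as shipped with Debian 2.2) would end up looking something like this sketch:

Section "Pointer"
    Protocol     "IMPS/2"
    Device       "/dev/input/mice"
    # map the mousewheel onto buttons 4 and 5
    ZAxisMapping 4 5
EndSection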

Package selection is equally unusual in concept, relying on the menu-based standard package manager dselect. Pre-defined package combinations are nevertheless offered for a palette of deployment scenarios (tasksel). To the Linux newbie or plain user, though, this installation procedure seems cryptic and off-putting.

Debian GNU/Linux is regarded by many Linux fans as a tricky hacker system. Linux Magazine has nevertheless attempted to install the latest Debian for desktop use.

Initial configuration

Not only is there no automatic hardware detection; after the first console login you will also search in vain for a configuration tool that is at least menu-driven. The X server configuration can be mastered (if you know about such things) with the classic xf86config or with XF86Setup. A USB mouse can be brought into service with a bit of manual work (see boxout). Without a modicum of experience in manually editing the major Linux configuration files, one would soon give up at this point. But one thing has to be said for Debian: the configuration files are mostly neatly structured and adequately documented.

Debian 2.2: menu-based installation without automatic hardware recognition

Expandability

Once you have become accustomed to the interplay of the package management tools apt (alternatively, the GNOME front-end gnome-apt) and dselect, expanding and updating Debian packages becomes a real pleasure. Debian does not use the rpm format as standard (though conversion with alien is possible) but its own format with the ending .deb, which, among other things, can resolve package dependencies by itself. So with a few precautions, effortless online updates are possible without jeopardising the consistency of the whole system.
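A typical console session with apt, including an alien conversion, is sketched below; the package names are placeholders:

# fetch the current package lists from the configured sources
apt-get update
# bring every installed package up to date, resolving dependencies
apt-get upgrade
# install a single package by name
apt-get install somepackage
# convert a foreign rpm archive into a .deb for local installation
alien --to-deb somepackage.rpm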

Figure 2: With a bit of manual work, Debian can also be used as a desktop system, here with the package manager dselect and apt-setup

Stable, but cryptic

Linux newbies and non-administrators would be better off keeping away from Debian 2.2. Although in principle it can also be used as a desktop operating system, the initial configuration hurdles are not to be underestimated. For newbies and users, Corel Linux OS, Stormix or Libranet Linux would be of greater interest; these are based on Debian and come with graphical installers. Most of these Debian derivatives, though, are not at present right up to date, or else are adapted only to US requirements. ■

Debian 2.2 R2
+ Can be obtained at a good price
+ Very good expandability
- Installation and initial configuration not easy
- CD distribution is now out of date




COVER FEATURE

EASYLINUX 2.2

EasyLinux 2.2 on test

TITLE DEFENDER ANDREAS HUCHLER

The forerunner version of this Linux distribution from German-speaking countries was praised a year ago by a whole range of reputable journals as surprisingly beginner- and user-friendly. We were naturally eager to find out whether the latest version, 2.2, could follow up this success. EasyLinux 2.2 has been available for several months now. According to the manufacturer, eIT, though, it may still take some time before a new boxed version is available on the German market, although there is an English-language version, 2.4.

Installation

Figure 1: EasyLinux stands out for its easy-to-understand graphical installer

Although in principle EasyLinux can also be installed from a SCSI CD-ROM drive, it is still advisable to use an IDE drive; after installation, the package manager eProfile could not get anywhere with our SCSI CD-ROM drive in any case. The now relatively old kernel 2.2.16 of EasyLinux, despite diverse kernel patches, cannot persuade all current USB mice to cooperate. Our test mouse would only work with the aid of a USB-PS/2 adapter, naturally without a functioning mousewheel. Having once overcome these initial installation hurdles, though, the rest of the installation process is easy. Thanks to the orderly hardware detection, it is usually enough to confirm the pre-set values with a mouse click. If you are puzzled by anything, adequate online help (eHelpAgent) is also available. In retrospect, however, we discovered purely by chance that the partitioning tool ePartition marks both the Windows partition and the Linux data partition as 'active' by default. Although Windows did continue to boot up in an orderly fashion, in some circumstances this kind of partition entry could be very annoying.

Initial configuration

As in the forerunner version, one is still greeted in EasyLinux 2.2 immediately after first login by a configuration dialog, the eHelpAgent, with whose aid the impending configuration steps can be dealt with step by step. What is special about EasyLinux is that the developers have really succeeded in lending the KDE desktop what is in many respects an amazing similarity to the customary Windows interface. This starts with the system control eSystem and extends to a registry imitator. Using the eTools, an astonishing amount of exotic hardware, such as certain TV cards, can also be integrated; but don't expect any miracles from them. Otherwise, the only really negative thing that struck us is that the keyboard layout is almost unusable in a standard X terminal.

Figure 2: The now outdated KDE-1.2 desktop of EasyLinux 2.2 with 'System control' eSystem

EasyLinux 2.2
+ In many cases, simple installation / initial configuration
+ For Windows migrants: eSystem with diverse eTools
- Basic system is now outdated
- Limited expandability of the system




Expandability

If this important aspect is included in the evaluation, it takes a lot of the shine off EasyLinux. The package manager for distribution-specific packages, eProfile (which, interestingly, bears a marked similarity to SuSE's YaST2), requires the first EasyLinux CD at every installation or uninstallation of packages, so frequent changes of CD are pre-programmed. We also missed an explicit function with which online updates, such as the KDE 2.0.1 update on the EasyLinux homepage, could be dealt with in one go. For packages from other distributions there is an RPM database (already filled), but RPM packages can only be installed later, and then only by means of an older version of the console tool rpm or the KDE front-end kpackage. Since EasyLinux rests completely on KDE, the basic system still lacks the libraries needed to integrate gtk+-based (GNOME) applications successfully.

Original, but outmoded


In principle, the concept of EasyLinux does have something to be said for it: what could be wrong with putting together a Linux system which does not differ too widely in external appearance from the Windows interface familiar to many PC users, but which finally offers crash-plagued Windows users a more stable system environment? In the case of EasyLinux 2.2, only the fact that the package has now become very outdated. If the developers of EasyLinux react more quickly in future than in the past to major innovations in the Linux scene (kernel 2.4, USB, XFree86 4.x with 3D acceleration, KDE 2.1, etc.), then EasyLinux could become the system of choice for many migrants from Windows to Linux. In the meantime, though, the manufacturer eIT is trying its luck with an English-language EasyLinux 2.4. It is available from http://www.easylinux.com/ for $49. ■



RED HAT 7.1 (DOWNLOAD VERSION)

COVER FEATURE

Red Hat 7.1 on Test

THE INTERNATIONAL MARKET LEADER ANDREAS HUCHLER

At the date of this test there was not yet a boxed version of Red Hat 7.1 available, so we got hold of the download version (two CDs). A Deluxe Workstation version (costing approx. £59.15) and a Professional Server version (costing approx. £147.89) will be available; the former comes with out-of-the-box support for laptops and multiprocessor systems and contains nine CDs and two manuals.

Installation

Red Hat 7.1 caused no problems at all during installation, thanks to its very good automatic hardware detection. Both the SCSI DVD drive and the USB mouse worked right from the start, and the nVidia graphics chip was recognised immediately. Red Hat's developers favour the GNOME desktop, so it is surprising that they have not yet integrated the new GNOME 1.4 desktop, together with the Nautilus file manager, into Red Hat 7.1. Red Hat continues to rely on the tried and trusted partitioning tools Disk Druid and fdisk. Since Linux is still in most cases operated in parallel with an existing Windows partition, the recommended automatic partitioning, which deletes all existing hard drive partitions, would only be sensible in the rarest of cases. Red Hat is also still bucking the general trend by doing without the journaling file system ReiserFS. The graphical installation program leaves the impression of being mature and clearly designed; thanks to its useful pre-sets it is enough for the inexperienced user to click on the Continue button, and the online help is available at all times.

Initial configuration

The GNOME desktop does seem really neat, though some guidance on which administration tools Red Hat provides would be helpful. There is a program icon on the desktop for configuring Internet access by modem, and under the menu item Programs/System there are more configuration tools. That an X11-based control panel exists as a central starting point for the configuration tools is something a newbie only discovers after logging into KDE. Apart from Linuxconf, you will find in the control panel some innovations, such as configuration of any ADSL modems present. Basic configuration steps can be dealt with easily in the control panel. On first login, Red Hat puts icons for floppy, CD-ROM and Zip drives on the desktop, and these also function immediately.

Expandability

You can install pretty much everything that can be tracked down with the ending i386.rpm. Red Hat continues to rely on the rpm front-ends GnoRPM and kpackage, and recommends membership of its own Red Hat Network. Every buyer of a full version receives free access to the RHN Software Manager for a few weeks. This is a big software pool, kept up to date by Red Hat with respect to new program releases and corrected bugs, and it considerably simplifies the task of system updating. After the free test phase expires, though, Red Hat asks the user to pay for this extra service. As a member one also pays another price: as a result of the relatively synchronous updates and bugfixes, the Red Hat systems registered in the RHN become more homogeneous and thereby, in future, more open to large-scale attacks from the Net. Although Red Hat has always championed GNOME as its standard desktop, a desktop user fares noticeably better by selecting the brand-new KDE 2.1.1 as standard desktop instead. Could this fact perhaps hide a smart move by the Red Hat strategists to bring about a significant increase in the number of RHN registrations through the GNOME 1.4 updates?
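For those who prefer the console to GnoRPM and kpackage, the standard rpm tool handles such packages directly; a minimal sketch, with an illustrative file name:

# install a freshly downloaded package, with progress hashes
rpm -ivh somepackage.i386.rpm
# upgrade an already installed package in place
rpm -Uvh somepackage.i386.rpm
# query the package database to confirm what is installed
rpm -qa | grep somepackage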

The distribution with the Red Hat is estimated at present to be the world's most frequently used Linux distribution. Linux Magazine has taken a closer look at the brand-new Red Hat 7.1.

Playing with fire

Among the most important innovations is the integration of the new Linux kernel generation (2.4.2). It is now possible for the user to swap the plugs of USB devices without rebooting. Owners of the latest graphics cards (except PowerVR chips!) will be glad that the current XFree86 4.0.3 has been included. In the security domain, too, Red Hat has included a new firewall configuration tool, with which new firewall rules can be defined relatively easily. As long as Red Hat's paid online RPM update service remains optional and does not become obligatory for systems to work correctly, the distribution will continue to find a following of enthusiastic supporters. ■

Red Hat 7.1 (Download Version)
+ Brand-new system together with USB hot-plugging
+ Relatively beginner-friendly installation/initial configuration
- GNOME 1.4 not yet integrated
- Online updates paid for after brief grace period



COVER FEATURE

LINUX MANDRAKE 8.0

Linux Mandrake 8.0 on test

THE INNOVATIVE SOCIAL CLIMBER ANDREAS HUCHLER

Linux Mandrake, a Red Hat offspring of French origins which two years back was still treated at best as a secret tip in the Linux communities, has recently turned into a real cult system for many users. Linux Magazine has looked at the brand new version 8.0 on your behalf. The new version came out just before we went to press, so only the download version was available for testing. The PowerPack Edition of Linux Mandrake, like the previous version, comes with not only the manual but also several additional CDs of free and commercial (demo) programs (including the full version of IBM's speech recognition software, ViaVoice) and costs about £40. The ProSuite Edition, which is also available, aims to be a professional server solution especially for small and medium-sized enterprises. At the time of going to press it was still not clear whether Mandrake will also be selling a standard version in Europe, which might be somewhat cheaper than the PowerPack Edition. The present success of Linux Mandrake can be attributed to two fundamental decisions by the manufacturer MandrakeSoft: the choice of Red Hat as the basic system, and the decision to place all the distribution-specific administration tools (including the manuals) under the conditions of the GPL. The decision in favour of (almost) 100% Red Hat compatibility means the user has access, apart from Mandrake's own rpm packages, to the whole range of Red Hat rpms widely available on the Web. The large following of subscribers to the GPL philosophy also contributes to the fact that an increasing number of users and developers are taking advantage of the comprehensive online offerings and the rapid availability of Pentium-optimised Mandrake rpms.

Installation

Thanks to the very good automatic hardware detection, both the SCSI DVD drive and the USB mouse were fortunately recognised immediately on our test system, so we were able to get started straightaway. The graphical installer has only changed slightly in appearance with respect to the previous version (7.2), but in terms of functionality it has been heavily revised. First of all, there is now a large question mark symbol which provides, after a mouse click, additional instructions for the user. Compared to the previous version, the developers have now managed for the first time - accepting a few limits on user control - to ensure that even a non-expert can put themselves in the hands of the 'recommended' installation class with a fairly easy mind. The installer no longer spoon-feeds the naive user who clicks on 'recommended' as much as it did in previous versions, but simply rushes them past a few queries, which may be somewhat confusing for raw beginners. Whether it was all that clever to leave practically the entire X11 configuration to the automatic hardware detection in the recommended mode remains to be seen in practice. Anyone who wants optimal resolution with maximum refresh rate on their own monitor after first login will presumably prefer the expert mode, in which the X11 configuration can also be performed manually. According to marketing reports, Mandrake 8.0 is also supposed to let 3D graphics enthusiasts enjoy 3D hardware acceleration "without additional configuration effort" for the first time. Unfortunately we were unable to try this out with the download version, which was fairly limited in terms of packages, but it must be assumed that this long-awaited feature will be an option in the PowerPack Edition. The picture has also changed when it comes to package selection. The pre-defined deployment scenarios are now differentiated considerably: apart from the main categories of Workstation and Server, there is now a whole range of specific scenarios such as Office Workstation, Games Station or Network Computer (client). This fine-tuning is certainly welcome in principle, but the package selection is somewhat overloaded as a result. Mandrake now offers, like SuSE, a summary after running through the hardware configuration, in which the user can see at a glance which hardware components have been successfully configured; it is also possible to jump back to the respective points. If one overlooks the still-not-quite-perfect intervention options in the recommended installation class, the Mandrake installer makes a good impression, thanks above all to the fine hardware detection.

Initial configuration

The Mandrake developers have considerably jazzed up the graphical configuration tool DrakConf for version 8.0, not only in appearance but also functionally. The central configuration tool is now called the Mandrake Control Centre and combines, under five main drop-down headings, just about everything that can be changed in a running Linux system with respect to the existing hardware and software configuration. The greatest gem, and so far unique to Linux, is surely the HardDrake hardware configuration tool, significantly refined compared to the previous version, which offers an overview of all the hardware components found in the system; when installing something like a new PCI card, it also sometimes offers the user manual driver selection. But the central configuration tool of Mandrake has much more to offer besides. The Linux start procedure (boot manager, system utilities, etc.) can be adapted to individual requirements at the click of a mouse. Obviously, the hobbyist administrator will also find graphical configuration tools here to help set up Internet access by modem, ISDN and even xDSL (although this was not tested). It is precisely in the domain of Internet connection via ISDN and xDSL that Mandrake usually left one high and dry in the past. But MandrakeSoft has now finally put its shoulder to the wheel and presents the user with an easy Internet configuration tool, together with a comprehensive provider database and a nice dial-up program. More advanced administration tasks, such as the configuration of an Internet gateway computer (DrakGW) or a personal firewall (tinyfirewall), are no longer a problem with Mandrake 8.0. System administration under Linux has never been so simple!

Expandability

In terms of system expandability, too, Mandrake puts the desktop user on the safe side. Mandrake is now drawing even with Red Hat 7.x, equally risking the rpm version leap to version 4.0 and the gcc change (gcc 2.96) which is hotly disputed in the developer scene. This means that practically every rpm package available on the Web ever built for Red Hat 6.2/7.x runs under Mandrake 8.0. Mandrake's own rpm front-end RpmDrake has, by the way, also been considerably jazzed up in appearance and, if required, will fetch security and package updates from the free Mandrake server (or a mirror). Obviously, it is also possible to read in and manage rpm packages from other source media with the package manager. As with the Debian community, at MandrakeSoft there are now three degrees of maturity: Cooker (in development), MandrakeFreq (mainly stable) and the official release. Anyone who does not want to wait for the next official version can get it online or on MandrakeFreq CD.
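Mandrake also ships a console package tool, urpmi, which resolves dependencies from the configured package media much as apt does under Debian. A quick sketch, with illustrative package and file names:

# install a package and everything it depends on from the configured media
urpmi somepackage
# plain Red Hat-style archives can still be handled with rpm itself
rpm -Uvh somepackage.i586.rpm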

State of the art as Linux desktop

A glance at the new features of the download version is enough to determine that MandrakeSoft has again succeeded in pulling off a surprise coup with the brand-new Mandrake 8.0. Mandrake 8.0 offers practically everything one needs as a desktop user for everyday administration work under Linux - and at the very latest level, with GNOME 1.4! ■

Linux Mandrake 8.0 (Download Version)
+ Very up-to-date and comprehensive system
+ Central X11 administration tool DrakConf
+ Easy expandability with RpmDrake
- Installer still has room for improvement




COVER FEATURE

SUSE 7.1

SuSE 7.1 Personal on Test

KILLER PACKAGE ANDREAS HUCHLER

The distribution from SuSE Linux UK Ltd is, to all intents and purposes, identified by many Linux newbies as the Linux operating system. We have taken a somewhat closer look at the latest Personal Edition on your behalf.

Figure 1: SuSE’s installation program shines, particularly because of its flexible selection options, such as here, the choice of the kernel and of the packages to be installed

Since version 7.0, SuSE has been splitting its distribution into two versions. The former full version has since been sold, together with a DVD, at the increased price of £49 as the Professional variant. So as not to lose the ordinary, price-conscious Linux user, SuSE also offers a trimmed-down variant (for the differences, see boxout) at a price of £29 as the Personal Edition for (purely) desktop use.


Installation

The better-than-average hardware detection meant the first CD booted up immediately, even from our SCSI DVD drive. The USB mouse, too, went to work without complaint once the graphical installer appeared. Basically, SuSE gives you the choice between a new installation and a (more time-consuming) upgrade of an existing old SuSE system. When setting up the data partition(s), even in the Personal Edition one can choose between the classic ext2 file system and the new journaling file system ReiserFS. The selection of packages offers enough flexibility both for newbies (with rough categories like 'Standard with Office') and for advanced Linux users (down to the selection of individual rpm packages), even if clarity does suffer somewhat as a result. There is also a choice of kernel: a refined and patched kernel 2.2.18, or the new (also patched) kernel 2.4.0 with all its advantages and perhaps some bugs that have not yet been corrected? It is a sound idea, but in practice presumably fairly unusual, for the boot manager to be installed by default on a boot diskette. Anyone who prefers to choose between the installed operating systems after switching on the computer, without inserting a Linux boot diskette, must therefore explicitly say so at installation (in a fairly fiddly way). Equally tiresome, but nevertheless sensible, is the fact that the installer compels you to enter at least one valid user together with the root password in an appropriately secure form before the installation can be continued. Overall the SuSE installation procedure is increasingly coming to resemble that of Windows - the positive side of this being far-reaching automation and a fairly comprehensible user dialog. Less worth copying are the over-vigorous warnings, together with the need for reboots even during the installation procedure. Relatively exceptional, though, is the fact that with SuSE, as the owner of a new 3D graphics card (in this case an nVidia GeForce 256), simply ticking 'Activate 3D acceleration' lets you enjoy the benefits of a (though not always completely stable) 3D hardware-accelerated X server. In all, despite menu guidance that sometimes takes some getting used to, SuSE's latest installation procedure is convincing, especially because of its clarity and error tolerance. Thanks to the Braille support, now even the blind can perform a SuSE installation on their own.

Initial configuration

The initial configuration of the main hardware components turned out to be a piece of cake. ISA cards, though, must still be set up manually, which might overtax a Linux newbie somewhat. Central configuration tools such as SuSE's YaST/YaST2 do have some disadvantages: whenever one has to leave the ready-made configuration menus for whatever reason, perhaps to get an exotic hardware component to work under Linux, one comes up against the limitations of distribution-specific configuration tools. Because of its built-in script automation, YaST(2) can soon overwrite configuration files which have been painstakingly edited by hand. Another disappointment was the announced USB support: for the USB ZIP drive there does exist (in kernel 2.4.0) a suitable kernel module in principle, but one searches in vain for a corresponding entry in /etc/fstab. Nevertheless, the new YaST2 Control Centre may be just the right thing for migrants from Windows, as a passable alternative to the Windows Control Panel.
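If you do not want to wait for SuSE to supply that entry, a line like the following can be added to /etc/fstab by hand. This is only a sketch, not SuSE's own entry: the device name and mount point depend on your setup (USB mass storage appears as a SCSI disk, and ZIP media conventionally carry their file system on partition 4):

/dev/sda4  /zip  auto  noauto,user  0 0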


Figure 2: With each new SuSE version, the X11-based configuration tool YaST2 replaces a bit more of the menu-based predecessor version

Expandability

On both binary CDs there is an acceptable repertoire of Linux application software, even if it sometimes still needs supplementing from other sources. Extremely tiresome when installing RPM packages later: the YaST2 package manager always asks for the first SuSE CD on every subsequent package installation, regardless of which CD the package to be installed is ultimately found on. Frustration is pre-programmed here! Otherwise, thanks to wide-ranging binary compatibility, the system is normally quite easy to expand with Red Hat 6.2-compatible RPMs and also, if necessary, by direct compilation of source packages.

A question of cost

SuSE 7.1 currently combines administrability, relatively low requirements and up-to-dateness better than most other common distributions. The central administration concept of YaST(2) also enables Linux newbies to become productive on the desktop as quickly as possible. On the other hand, as an advanced Not-Just-A-User, one feels increasingly restricted in a SuSE system in terms of design freedom. ■

SuSE 7.1 Personal
+ Very up-to-date system
+ Central configuration tool YaST/YaST2
- Tedious CD changing when installing packages later
- Relatively high price for a ‘trimmed down’ distribution

Which is it to be then: Personal or Professional?
The SuSE Professional Edition comes with the following additional features:
• Installation DVD
• Additional know-how manual (635 pages)
• New CUPS printer system
• Additional developer tools / autoinstaller
• C/C++ IDE KDevelop 1.3
• LDAP server
• Server tools
• IP video telephony
• Clustering
• Longer installation support
Advanced desktop users considering buying a new SuSE distribution should ponder whether these extra features of the Professional version are worth the extra cost of £20.

In the Office


Having tried all the different distributions in the office, it is fair to say that SuSE Professional is the version we use for the everyday production of Linux Magazine. The sheer number of packages provided (and so tested to work first time) is a little overwhelming at first, but means that the full system is always to hand. The YaST2 configuration interface does take some time to get familiar with, but is no hindrance. At a recent London computer fair, SuSE gave demonstrations: the system was installed half a dozen times by users new to Linux to show how easy it was to configure. One SuSE user did express concern about doing a minimal install, but a run-through at the show installed the system fine. As SuSE typically have a four-month period between launches, we guesstimate that, with a fair wind, their next version (7.2) will hit sometime in early July.


DISTRIBUTION-TEST

Debian 2.2 R2
Manufacturer URL: www.debian.org; supply source: Linux Emporium; full version price: £29; number of CDs: 6-9; CDs with free binary packages: 3; CDs with commercial binary packages: depends on reseller; CDs with source texts: 3; boot diskette included: not stated; manual: 343 pages, overall impression good-satisfactory.
Kernel 2.2.18pre21; XFree86 3.3.6; KDE: – (now included); GNOME 1.0.51; overall impression of the system: satisfactory-adequate.
3D graphics card activation: no; PCI SCSI card: yes; PCI ISDN card: no; external modem: yes; PCI soundcard: yes; PCI TV card: no; printer drivers: lpr; USB support method: kernel 2.2.18; USB hot-plug compatibility: no; USB wheel mouse: no.
Installation features: good; installation operability: adequate; standard desktop: console / GNOME; mounting of removable media: manual / automount; desktop clarity: good.
Automatic hardware recognition: no; central configuration tool: no; basic system administration: adequate; standard package manager: dpkg; package manager frontends: dselect / gnome-apt; CPU optimisation: i386; automatic resolution of package dependencies: yes; online update function: using apt; handling of binary packages: very good-good.
Free manufacturer support: mailing lists, IRC (unlimited); other support centres: Debian Consultants; start place for online updates: packages.debian.org; overall impression of support: very good.
Overall assessment (current newbie/user-friendliness, higher is better): 2.00. Remark: distribution updated by freelance developers.

easyLinux 2.2
Manufacturer URL: www.easylinux.com; supply source: EasyLinux.com; full version price: $49; number of CDs: 5; CDs with free binary packages: 2; CDs with commercial binary packages: 2; CDs with source texts: 1; boot diskette included: 1; manual: 345 pages, overall impression very good-good.
Kernel 2.2.16; XFree86 3.3.6; KDE 1.1.2; GNOME: not stated; overall impression of the system: adequate.
3D graphics card activation: no; PCI SCSI card: no; PCI ISDN card: yes; external modem: yes; PCI soundcard: yes; PCI TV card: partly; printer drivers: CUPS, lpr; USB support method: backport patch; USB hot-plug compatibility: no; USB wheel mouse: no.
Installation features: good-satisfactory; installation operability: very good-good; standard desktop: KDE; mounting of removable media: manual; desktop clarity: good.
Automatic hardware recognition: no; central configuration tool: eSystem; basic system administration: good; standard package manager: rpm 3.0.3; package manager frontend: kpackage; CPU optimisation: not found; automatic resolution of package dependencies: in the distribution's own packages; online update function: no; handling of binary packages: satisfactory.
Free manufacturer support: 90-day installation support; other support centres: on request; start place for online updates: ../download/update.php; overall impression of support: good-satisfactory.
Overall assessment: 2.58. Remark: version 2.4 is English only.

SuSE 7.1 Personal
Manufacturer URL: not stated; supply source: SuSE; full version price: £29; number of CDs: 3; CDs with free binary packages: 2; CDs with commercial binary packages: 1; CDs with source texts: 2; boot diskette included: not stated; manuals: 63 + 118 + 288 pages, overall impression very good-good.
Kernel 2.4.0 / 2.2.18; XFree86 4.0.2; KDE 2.0.1; GNOME 1.2; overall impression of the system: very good.
3D graphics card activation: yes; PCI SCSI card: yes; PCI ISDN card: yes; external modem: yes; PCI soundcard: yes; PCI TV card: no; printer drivers: LPRng; USB support method: kernel 2.2.18 / 2.4; USB hot-plug compatibility: no; USB wheel mouse: no.
Installation features: very good-good; installation operability: good; standard desktop: KDE 2.0.1; mounting of removable media: automount; desktop clarity: satisfactory.
Automatic hardware recognition: yes; central configuration tool: YaST / YaST2; basic system administration: good; standard package manager: rpm 3.0.5; package manager frontends: YaST(2) / kpackage; CPU optimisation: i386; automatic resolution of package dependencies: in the distribution's own packages; online update function: via YaST2; handling of binary packages: good-satisfactory.
Free manufacturer support: 60-day installation support; other support centres: also commercial; start place for online updates: ../de/uk/support/download/index.html; overall impression of support: good.
Overall assessment: 3.10. Remark: SuSE also sells the £49 Professional Edition. Special feature: support for the blind (Braille).

Mandrake 8.0
Manufacturer URL: www.linux-mandrake.com/; supply source: Linux Emporium; full version price: unknown; CD and diskette details: N/A; manual: 272 + 290 pages (previous version), overall impression good (previous version).
Kernel 2.4.3; XFree86 3.3.6 / 4.0.3; KDE 2.1.1; GNOME 1.4.0; overall impression of the system: very good.
3D graphics card activation: mostly; PCI SCSI card: yes; PCI ISDN card: yes; external modem: yes; PCI soundcard: yes; PCI TV card: sometimes; printer drivers: CUPS, lpr; USB support method: kernel 2.4.3; USB hot-plug compatibility: no; USB wheel mouse: yes.
Installation features: very good-good; installation operability: good; standard desktop: KDE; mounting of removable media: supermount; desktop clarity: good.
Automatic hardware recognition: kudzu; central configuration tool: DrakConf; basic system administration: very good-good; standard package manager: rpm 4.0; package manager frontend: rpmDrake; CPU optimisation: i586; automatic resolution of package dependencies: in the distribution's own packages; online update function: rpmDrake; handling of binary packages: good.
Free manufacturer support: 60 days MandrakeExpert installation support; other support centres: mailing lists, MandrakeExpert; start place for online updates: ../en/updates/; overall impression of support: very good-good.
Overall assessment: 3.38. Remark: only the download version was tested here, but the basic system should be largely similar to the PowerPack Edition. Special feature: 3D hardware acceleration; many graphics cards are supported automatically.

Red Hat 7.1 Deluxe
Manufacturer URL: www.redhat.com; supply source: LinuxLand, ixsoft; full version price: £59; CD and diskette details: N/A; manual: 163 + 485 + 9 pages (previous version), overall impression good (previous version).
Kernel 2.4.2; XFree86 4.0.3; KDE 2.1.1; GNOME 1.2.4; overall impression of the system: very good.
3D graphics card activation: sometimes; PCI SCSI card: yes; PCI ISDN card: yes; external modem: yes; PCI soundcard: yes; PCI TV card: no; printer drivers: LPRng; USB support method: kernel 2.4.2 (+ patches); USB hot-plug compatibility: yes; USB wheel mouse: yes.
Installation features: very good-good; installation operability: very good-good; standard desktop: GNOME 1.2; mounting of removable media: kernel autoloader; desktop clarity: good-satisfactory.
Automatic hardware recognition: kudzu; central configuration tool: control-panel; basic system administration: good; standard package manager: rpm 4.0; package manager frontends: gnorpm / kpackage; CPU optimisation: i386; automatic resolution of package dependencies: no; online update function: up2date (RHN); handling of binary packages: satisfactory.
Free manufacturer support: 60 days RHN, 60 days Internet; other support centres: also commercial; start place for online updates: ftp.updates.redhat.com (for a charge); overall impression of support: good-satisfactory.
Overall assessment: 3.15. Remark: only the download version was tested here, but the basic system should correspond to that of the Deluxe variant. Manufacturer updates and bugfixes after 60 days are charged for. Special feature: USB hot-plugging.

Weighted scores (weighting factor in brackets; higher is better):

                                    Debian 2.2 R2  easyLinux 2.2  SuSE 7.1 Personal  Mandrake 8.0  Red Hat 7.1
Manual (10%)                             2.50          3.50            3.50              3.0           3.0
Up-to-dateness of the system (20%)       1.50          1.00            3.50              4.0           4.0
Installation features (5%)               3.00          2.50            4.00              3.5           3.5
Installation operability (20%)           1.00          3.50            3.00              3.0           3.5
Desktop clarity (5%)                     3.00          3.00            2.00              3.0           2.5
Basic administrability (20%)             1.00          3.00            3.00              3.5           3.0
Handling of binary packages (10%)        3.50          2.00            2.50              3.0           2.0
Support (10%)                            4.00          2.50            3.00              3.5           2.5
Total score                              2.00          2.58            3.10              3.38          3.15



HP SENIOR STRATEGIST

INTERVIEW

Bruce Perens

HP SOURCE

Bruce Perens is Hewlett-Packard's Senior Worldwide Strategist for Linux and Open Source. One of the founding fathers of the open source community, he drafted both the Debian Social Contract and the Open Source Definition. His software includes Electric Fence, and he is credited in the films A Bug's Life and Toy Story.

LM What current software are you working on?
BP Primarily spreading the Open Source movement. I have promised to help Debian with the bootstrap system and to ensure that HP keeps ethical. I think I could do better than Python and Perl and so may write a language called ‘O’, but it may never be completed. I am also writing a tutorial series on Busybox.
LM What are the problems facing HP?
BP HP has a very able competitor in IBM. It is not so much challenges as opportunities, such as working with the community and the IA64 processor.
LM What wins have you had at HP?
BP Adopting the Open Source policy, and enabling hardware with open interfaces: HP has now released the Print Server Appliance 4200, which contains Samba and is IP click-and-print. HP is sponsoring the Open Source Development Lab, along with IBM, Intel and others, to the tune of $26M. This allows developers time on multiprocessor systems with load simulations that they would otherwise not be able to afford.
LM Where do you see the Open Source movement going?
BP IBM owns 10 per cent of software patents in the US, and it is easy to stop an individual. Ogg Vorbis made a format to circumvent the Fraunhofer codec; the developer is now being threatened and cannot afford a day in court. Businesses that plan to make big bucks out of Open Source should help the individual developers.

LM What do you hope to achieve in the coming year?
BP More Open Source, such as the Deskjet drivers. Because HP is cross-patented with other people, we are trying to remove those other patents so we will be able to release as Open Source, and not restrict the community with others' patents.
LM If I wanted to buy an HP computer today I could not buy one without Windows. Do you see this as a problem for HP?
BP No Linux on HP laptops this year, but in the server market HP has standardised on the GNOME desktop, as has Sun.
LM What is there left for HP to do?
BP Lots of stuff is left to do at HP. HP-UX is our enterprise offering and will be supported as long as customers require it. Personally speaking, I think Linux will be able to handle the enterprise market within three years.
LM Will Linux make it to the desktop?
BP We are already being used on the desktop by engineers, and office workers will soon be surprised.
LM What do you consider the most vital piece of software that needs developing for Linux?
BP Now we have office suites, a Quicken-like program is needed to help with home finance. Greater ease of use and ease of installation. A tax calculation program, too, but that would depend on each country.
LM What is holding back Linux?
BP With the current rate of acceptance, nothing stands in the way. ■



INTERVIEW


Rusty Russell

POPPING KERNELS

Paul ‘Rusty’ Russell, one of the leading lights of kernel development, recently undertook a whistle-stop tour of Europe to explain his latest projects. Linux Magazine caught up with him en route back to Australia.

Paul ‘Rusty’ Russell

Talking kernels in a Sheffield bookshop

Info
Rusty's talk in Sheffield: http://www.sheflug.co.uk/apr01.html
Rusty's diary: http://netfilter.filewatcher.org/diary and http://antartica.penguincomputing.com/~netfiler/diary
Rusty's Unreliable Kernel Hacking Guide: http://kernelbook.sourceforge.net/kernel-hacking.pdf
■

RICHARD IBBOTSON

Rusty was born in London and left for Oz when he was three years old. He has spent most of his life in Adelaide and still lives there with his parents. He became interested in computers at the age of eight, when his father studied them as part of his medical course. Rusty knew he wanted to be a programmer from the age of 10, and so, naturally, when he got to university he chose Electrical Engineering with Computing Science. After graduating, he took to programming and never looked back. Rusty went on holiday to Italy for four weeks before beginning his gruelling schedule, then on to Madrid for LinuxWorld, followed by a trip to Xuventude Galicia Net, a computer conference in Santiago. If you have a look at his diary on the Internet you will see that, as part of his itinerary, he went to the VA Linux offices in Amsterdam, where he was able to have a long talk with Wichert Akkerman, the developer who used to be in charge of the Debian project. Rusty says that this chat was the highlight of his tour; let's hope that all of us Linux users, and particularly the Debian fans, will benefit from this meeting. Two UK stops were included on the tour, one at the University of Aberystwyth and the other at Sheffield. We were extremely privileged to attend one of these presentations, held in Sheffield's Blackwell's bookshop, in which Rusty explained the netfilter framework that he has written for the 2.4 kernel. The lecture was well attended, with many people travelling from all over the country to hear him speak, and there were some heavyweight technical people in the audience, including attendees from the Manchester and West Yorkshire users' groups. Rusty opened by explaining that he has worked on ipchains as well as iptables. He is also responsible for producing, or working on, file hierarchy standard 2.2, network address translation in 2.4, the unreliable kernel hacking guides and kernel locking. If you're a kernel coder, it's extremely likely that you will have come across his work at some time. His explanation of netfilter and iptables was brilliant from beginning to end. The talk was well received, and those present showed their appreciation with warm applause.


Afterwards, I talked to him over a pint of Theakston's Old Peculier and asked him a few questions. Rusty explained that he started on the 2.0 firewalling code in Slackware at a time when he was working on his own as a UNIX consultant. In January 1997 he decided to go to a Usenix session. Linus Torvalds was there, along with Stephen Tweedie, Alan Cox and a few other Linux luminaries. Rusty was hooked and has worked on Linux kernel code ever since. He wrote the packet filtering code for earlier kernels and later became involved with writing code for network address translation. Rusty was attracted to kernel coding because, for him, it represented a fresh project and a means of self-improvement; issues of Internet ownership and control also loomed large in his reasons for getting involved. I asked him why he works in Oz and not somewhere else. He says that anyone who wants to be successful goes to Silicon Valley: they don't have as much talent as they would like over there, and so they are willing to pay people. He thinks that it's not too hard to telecommute, so he prefers Australia, where the scenery is great and the people and the beer are things he understands. His own kernel project has contributed to the growth of the Internet, which he can then use to work with people in many countries without actually travelling to them. We also discussed the controversial subject of documentation in Linux and agreed that someone ought to sort out the docs, although just who could do this no one really knows. For all of us mere mortals here in Sheffield it was something of a religious experience to see Rusty walk along the street from the pub and take the tram to Sheffield Midland Station so that he could catch a 747 to go to work. We hope he'll come back sometime. Richard is chairman and organiser of the Sheffield Linux User's Group. You can view their site at http://www.sheflug.co.uk ■



XIMIAN

INTERVIEW

Miguel de Icaza

DELIVERING INTELLIGENCE

RICHARD IBBOTSON

Miguel de Icaza began his life on the south side of Mexico City. He certainly didn't think then that he would eventually become one of the leading lights of the Open Source and Free Software movement of the early 21st century.

Miguel is the founder of the GNOME Foundation and a board member of the Free Software Foundation. He is presently CTO of Ximian, previously Helixcode, which Miguel co-founded with Nat Friedman; he has known Nat for a long time from the Linux.net IRC network. The idea of creating a company that would work on GNOME came from Nat in early 1999 (for those of us who don't know, GNOME stands for GNU Network Object Model Environment). GNOME is a sub-project of the GNU project.

Helixcode came into being to make GNOME more accessible and to extend the GUI environment for the Linux desktop. Miguel always likes to explain that user friendliness needs improving on the GNU/Linux desktop, and on other Unices as well; he likes to think that his own younger brother could use a Linux desktop without a problem. Last year, Helixcode was the company at the centre of the GNOME object model.

To contact Ximian in the States:
Ximian, Inc., 401 Park Drive, 3 West, Boston, MA 02215
General information: hello@ximian.com
Investor information: exec@ximian.com
Sales: sales@ximian.com
Distribution: distribution@ximian.com

Words and pictures by Richard Ibbotson, the Chairman of Sheffield Linux User's Group. You can view their site at www.sheflug.co.uk. Sheffield Linux User's Group is sponsored by SuSE Ltd at Borehamwood http://www.suse.co.uk.




Info
http://www.ximian.com
http://primates.ximian.com/~miguel
http://primates.ximian.com/~miguel/gnome-2.0
http://www.openoffice.org
http://www.sun.com/software/gnome
http://www.gnome.org
http://www.gnu.org
To contact the Red Hat labs, write to Elliot Lee at sopwith@redhat.com
■

Both Sun Microsystems and the Free Software Foundation were and are interested in their ideas. The Red Hat labs also did quite a bit of work with Helixcode; Elliot Lee is the person to speak to if you want to know more about that. Much more development work is in progress, and we expect more in the future. Sun Microsystems worked closely with Miguel on Star Office, and quite a few of the Star Office components use software based around the GNOME project. Sun have taken their Star Office suite and given it away as Open Office. This means that a free office suite is available whose code can be changed by anyone who wants to join the Open Office project. This will in fact preserve all of the ideas and principles that were part of the original Star Office project before Sun got hold of it. At the time of writing, Ximian GNOME has just come into being, and many of us expect great developments in GNOME in the future. One of the main objectives of the GNOME project was to provide a graphical user interface that would make Unix and Linux more accessible. Miguel has personally put his best efforts into popularising the Linux desktop and making Unix user friendly, rather than the kind of thing that only highly-educated technicians can understand.


A trip to the GNOME site will reveal applications such as GNOME Office, and others like Gernel, which makes kernel configuration a point-and-click experience for those of us who aren't Linux developers. A fine example of GNOME software is an application called Evolution; it looks a lot like Outlook Express, but without all that Microsoft nastiness included. Some Linux developers see the command line as the only way to work, but how will the end user understand anything about the command line? As Miguel explained to me over the phone from his office in Massachusetts: ”As things stand Unix is sucking very badly. We need to make it more attractive to the general user rather than leave it as a developer's environment where only academics may tread. Further development of Gnome components will lead to greater ease of use and even my own younger brother will be able to understand how to use Linux”. I also asked him about office suites: ”This is part of the Bonobo thing. In future both Open Office and Gnome Office will come together and be a part of the same project”. Ximian and the products it advertises are all set to become a major success. It has already been selected by Hewlett-Packard as the desktop software to come preloaded on all HP-UX workstations. Ximian also have a new finance officer, Tod Miceli, and have secured $15m funding for their projects. The CEO of the company is David Patrick, who has spent the past twenty years marketing and selling software; he is the best choice for the job of integrating free software into the American and international business community. Nat Friedman, one of the founders of Ximian, will become Vice President of Product Development, providing a more rounded structure to the company's management team. GNOME 1.4 has just been released in its final version; you might want to download it and try it. There are also official Ximian CDs available, in nicely finished jewel cases with the Ximian logo on the front. Where did the picture of the monkey come from? You can get those from Ximian as well. GNOME 2.0 is at the planning stage, and Miguel has published his own ideas about it on the Web for public consumption. GNOME 1.4 introduced a number of interesting new technologies; GNOME 2.0 looks as though it might repair some of the things that are broken just now and will most likely lead to something quite different, but similar. Miguel himself believes that we shouldn't break things that are not already broken. Miguel would like the following information to be made public: the Bonobo chimps, also known as Pygmy Chimpanzees, live in the Congo and are in danger of becoming extinct. I encourage you to visit http://www.gsu.edu/~wwwbpf/bpf/ for more information on them and the ways in which you can help save them. ■



COVER FEATURE

BACKUP PRINCIPLES

Data backup on the network

BETTER SAFE THAN SORRY

ALBERT FLÜGEL, OLIVER KLUGE

Even the most reliable hard disk will give up the ghost one day. Only by making regular backups can you protect your data against the worst case scenario. The following overview sheds some light on the various strategies.

Effective data backup requires sensible data management, especially in view of the explosive growth in the size of files. Modern data backup is much more than just copying data onto a tape cassette: it concerns not only the selection of backup software and hardware, but also the configuration of the data server and the behaviour of the users. Most users would rather not have to worry about technology themselves; they take the attitude that the administrator should handle those kinds of problems. But disk space fills up sooner rather than later, and co-operative users can contribute to clarity by tidying up, compressing and packing.
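For instance, a project that is finished for now can be packed into a single compressed archive before the next backup run. A quick sketch, with an invented path:

tar -czf project-2001.tar.gz project/ && rm -r project/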

Long term archiving

Offline storage of data should be distinguished from a backup. When it comes to backup, the usual assumption is that only data that is fairly recent (say, three months old) needs to be restored. This does not apply to archiving. Candidates for archiving are those files containing data that is not currently needed but will (or could) become important again at a later date. Archives – possibly existing on several media as clones – should be part of your standard repertoire when it comes to managing data on a computer network. This function does not necessarily have to be performed by the backup system in use; a tar or cpio on several redundant, labelled tapes is usually enough.

Hierarchical storage management

A very exciting technique, well worth discussing, is HSM, such as that employed by Veritas or SAM-FS from LSC. This involves copying data at a predefined interval from the hard disk onto slower and cheaper media, a step referred to as ‘archiving’. Data which is not accessed for a certain length of time is removed from the online medium (or ‘released’): the file system entry is retained, but the data blocks disappear. Later accesses to data which is no longer on the hard disk lead to staging. This means that the data is retrieved, and made accessible, completely automatically from the slower media, which can of course take some time. HSM cannot be realised without support from the file system, as the procedures described are intended to be invisible to the user processes and their system calls. HSM in its simplest form does not replace backup: if the data has gone from the online medium, it now exists in only a single copy, and if the tape on which it is stored fails, it will have to be restored from somewhere else. HSM systems therefore offer the option of producing several copies at once.

Backup scope

The results of your daily work must be backed up. But the backup capacities should not be stuffed full of things which are still available on CD or the Internet; this includes such things as operating systems. Admittedly, a computer environment will not normally be equipped with unaltered system installations: adaptations to the respective requirements are always necessary. Apart from backing up all data, the software should also be able to handle incremental mode, which means that only the files that have changed since the last backup are written into the backup. Often, additional backups of type Level-N are also possible, with N = 1, 2, etc. With a Level-N backup, all data which has changed since the last backup with the same level is backed up; as you might imagine, a total backup is Level 0 and an incremental backup is Level infinity. With a Level-3 backup, everything new since the last Level-3, Level-4 or incremental backup is backed up. The Afbackup software by this author evaluates this differently, so that higher levels remain open. Typically, a complete backup is done at the weekend, with incremental backups every night. Another option is a complete backup every first weekend in the month and a Level-1 backup on the other weekends, with incremental backups each night. The longer it has been since a complete backup, the longer restoring is likely to take. In principle, no backing up should be done while lots of people are working, since this represents a considerable load for the computers and the network concerned.
It may be desirable to back up several computers at the same time, but if the data is being sent to a single backup server or tape drive, the backup software must support this type of operation. If several computers hold a lot of data, a parallel start, especially of incremental backups, is very useful. Another example is that of many backup clients backing up to a central server via slow lines; in this case you only benefit from parallelisation if the flow rates can add up on the server. Tapes do break now and then, so quite a few administrators configure a new tape for each full backup. In this way, there is always a complete backup available which was made no more than two full backups ago, even if a single tape does fail. Multiple backups of the same data on various media can benefit the user beyond the higher level of redundancy: if one additionally keeps a full backup on disks (alongside backups on tape stored for a long time), the current files can be restored more quickly from the hard disks without giving up security.
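By way of illustration, the weekend-full, weekday-incremental scheme can be sketched with GNU tar and cron. The tape device, paths and times here are assumptions, and a real installation would rather use dedicated backup software:

# /etc/cron.d/backup (sketch): full backup Sunday night, incrementals Monday to Saturday
0 2 * * 0    root  rm -f /var/backups/home.snar; tar -cpf /dev/nst0 --listed-incremental=/var/backups/home.snar /home
0 2 * * 1-6  root  tar -cpf /dev/nst0 --listed-incremental=/var/backups/home.snar /home

Deleting the snapshot file forces a complete (Level 0) run; keeping it makes every following run incremental.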

Getting it taped

Considering current hard disk prices, it is worth thinking about storing backup data on hard drives. The throughputs achievable, even with slow disks, are higher than with the usual tape technologies. Nevertheless, there is a considerable price difference in favour of tapes: a DLT cartridge with approximately 35GB capacity without compression costs about £50, and you would not get a disk of that capacity for the same money. Tape changes are also easy to automate.

Tape technologies

Before buying a tape drive or a changer, there are some tough choices to make: many technologies court the buyer. The main ones are presented below. The choice between them should primarily be made on the basis of the amount of data to be backed up, rather than on the price of the drives and tapes.

Quarter inch cartridges

QIC now plays a very small role in systems management, but in private use there are still plenty to be found, because the drives are especially cheap and offer acceptable capacities for home users. QIC makes serpentine linear recordings: the tape is drawn at high speed past the head, and as soon as the tape reaches the end, the head is moved to the next track and the whole tape runs through again. So with economical technology you get both capacity and speed. But when buying such drives, make sure they can cope with read-after-write, for the sake of data security.

Figure 1: Overview of hierarchical storage management (archiving moves data from hard disk down to optical media and tape cartridge; staging brings it back)

Exabyte

Derived from Video-8 technology, these tapes were seen as susceptible to wear and tear due to the narrow tape guides, loose head contact and the resulting strain. In newer products these problems are supposed to have been corrected, but there are others. If you insert a tape into a drive of a different construction, reading does not always work; this happens even with drives from the same manufacturer, and it is nothing to do with the typical problem of correctly adjusting block sizes, which often crops up in newsgroups. If a drive does fail, a matching replacement should thus be within reach. Exabyte now achieves capacities of up to 60GB per tape (uncompressed).

Figure 2: Backup strategies (tape sets 1 to 4 rotate week by week; a full backup on Sunday is followed by incremental backups 1 to 5 from Monday to Friday)

DAT

The capacity specifications for DAT are usually worked out using unrealistically high compression rates; in practice, only the uncompressed value is relevant. Because of the comparatively large wastage caused by bad spots on the tape (drop-outs), the real quantity achieved is normally lower still. To make things harder, DAT drives to the DDS-3 standard usually recognise and report contamination of the head too late. A phenomenon which occurs increasingly often is that the markings on the tape are overlooked in a fast search; this can lead to data not being found or, in the worst case, to parts of the tape being overwritten unnoticed. From DDS-3 on, the head is cleaned automatically in the drive during relatively frequent use, though this should not result in any excessive wear. It is strongly recommended that you keep to the DAT cleaning intervals advised by the manufacturer in the accompanying documentation (but do not exceed them). With DDS-4, DAT can theoretically handle 20GB uncompressed.


Digital linear tape

DLT has been developed for high densities, low mechanical wear and high recording speeds.




There can be problems from time to time with the tape getting out of line: the take-up spool sits inside the drive, and the start of the tape can come loose as a result, rendering the tape unusable. The start of the tape contains information managed by the drive; if this cannot be evaluated, the drive will not even accept the tape when it is inserted. For this reason, Sony's AIT technology builds a writeable memory chip into the cassettes.

The way to data backup

Info
Website on afbackup, incl. general HOWTO & FAQ: http://www.afbackup.org and ftp://www.vic.com/af
Yellow Guide at Transtec: http://www.transtec.co.uk -> Guide -> Mass Storage
General details on backup and storage: http://www.backupcentral.com
Overviews of backup software for Linux: http://linux.tucows.com/conhtml/adm_backup.html and http://www.linux.org/apps/all/Administration/Backup.html
Legato homepage (product ”Networker”): http://www.legato.com
Info on Budtool (now also belongs to Legato): http://wwwftp.legato.com/Products/html/budtool.html
Info on BRU: http://www.estinc.com/bruinfo.php
Veritas homepage: http://www.veritas.com/uk/products/
Websites on ADSM from IBM: http://www.storage.ibm.com/storage/software/adsm/adsmhome.htm
EMC2: http://www.emc2uk.co.uk/products/networking/
Network Appliance: http://www.netapp.com/
■

One simple and effective variant is to connect the drive directly to the respective file server. This also means there is no load imposed on the network, and no security worries about data being overheard. But if the server goes up in smoke, the tapes inside it are lost too; this problem can be mitigated somewhat by regularly taking them out and storing them elsewhere. The crucial question is what periods without backup are acceptable. If you want to guarantee security even if the building collapses, online data and backup must be geographically separated. The administrator achieves this by means of backups over the network, or by using a suitable bus technology between computer and drive. The commonest way is to back up via a network to a computer which then acts as the backup server. If this is not to impose a heavy load on the network on which the users' computers operate, you could consider an additional network connection between the two computers. If security is an important aspect, all the typical problems of network services are relevant.
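In the simplest case, such a network backup is no more than a pipeline. A minimal sketch, with an assumed host name and tape device:

tar -cf - /home | ssh backupserver 'dd of=/dev/nst0 bs=32k'   # stream the archive to the remote tape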

Correct access rights

Can the backup data be read and written only by those authorised to do so? Are there back doors into the system resulting from the architecture? Can bugs (such as buffer overflows) lead to unauthorised access? In any case, the permissions of the devices in /dev must be checked. Normally everyone has write permission on tapes; even big-name backup products work like this, or give no instructions in the documentation. If it is not possible to limit permissions here without the backup software refusing to work, you must consider barring the backup server to logins by normal, potentially malicious users. Of course, this consideration does not apply only to backing up via the network.
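Checking, and if need be tightening, those device permissions is quickly done. A sketch assuming a SCSI tape drive:

ls -l /dev/st0 /dev/nst0        # who may read and write the tape?
chmod o-rw /dev/st0 /dev/nst0   # revoke access for ordinary users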

Storage area network backups

Another option has been becoming fashionable for some time: backup in a storage area network. SAN means that there is not only a connection between one computer and its mass storage, as on a SCSI bus; instead, several computers are networked with several mass storage systems.


This may even happen via several redundant paths. In this way, fast connections from all connected devices can be used as alternative routes, similar to the communication between computers in a LAN. Backup devices (usually jukeboxes) can be connected to a SAN; the backup of the data then runs not via the file server, but directly from the online mass storage to the backup system. In this data transfer, neither the file server nor the network outside the SAN is put under strain. But since most data is backed up from a file system, the controlling software is given an additional task: neither the mass storage nor the backup device knows the file system structure. This information is in the exclusive possession of the file system driver in the server operating system. If a file is backed up, the mass storage is told which blocks to send to the backup device; restoring is correspondingly more time-consuming. The hardware components involved in such installations come in rack format and the software is expensive; there is no way that someone who wishes to invest in such a solution will be able to avoid working out their own individual strategy. Here is a brief sketch of one other variant: there are devices (for example Celerra from EMC2, servers from Network Appliance or devices from Transtec) which combine mass storage and logic in one housing, so that on the network they appear as a pure file server (network attached storage). They typically offer no other services, nor can one log on to them. If their backup is not to run via the file service in the network, there is still the option of connecting drives and changers directly to these devices; backup software on the devices themselves and control software on a computer in the network (NetApp NFS server and Veritas NetBackup) then enable the backup. When backing up via NFS mounts, at least one read-only root export must be available at the time of backup, as otherwise read-protected data is not backed up. When restoring, root must even be able to write via NFS. Since a forged UDP packet with the sender address of the NFS client is all it takes to manipulate data on the NFS server, this is a potential security risk.
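Such a read-only root export might look like this line in /etc/exports; the host name and path are assumptions:

/home   backupserver(ro,no_root_squash)   # the backup host may read everything, but write nothing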

Handling the media

If the quantity of data to be backed up in one go is greater than the capacity of a tape, you ought to acquire a changer (stacker, jukebox or tape library). The simplest stackers have, for example, one drive and six compartments, or slots, for tapes that the robot can change. Large jukeboxes have a hundred slots and six or more drives, and frequently several loadports or loadbays as well, which considerably ease the administrator's work when loading the device. With respect to the backup software, to be on the safe side one should find out from the manufacturer whether the changer is supported. Usually, however, changers implement at least a subset of a standard protocol with which the hardware can be driven



via the SCSI bus. In terms of software, it is also possible to use the programs mtx or stc (for Solaris), which are available in source code.
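mtx talks to the changer through its SCSI generic device. A brief sketch; the device name and slot number are assumptions:

mtx -f /dev/sg1 status    # list drives and which slots hold tapes
mtx -f /dev/sg1 load 3    # load the tape from slot 3 into the drive
mtx -f /dev/sg1 unload    # return the tape to its slot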

Software

Freely available packages such as Amanda, Burt or Afbackup (by the authors) are just as interesting as commercial software. In principle, when it comes to choosing software, the same rules apply as with all other products: anyone who believes what a manufacturer says without having verified the facts in a test is taking a risk. You should always conduct a test installation, with marginal conditions that are as realistic as possible. The problems that really hurt only come to the fore in conditions of higher complexity, using combinations of features or in connection with other components. So far it has been tacitly assumed that certain functionalities will be present: for instance that a ‘verify’ (a comparison of the content of the backup with the file system) is possible, or that such a comparison takes place when archiving (when the tape is subsequently read). But this is not necessarily the case. One fairly expensive backup and archiving product reads the tape following a backup and sends the data over the network to the computer from which it originated; a comparison with the file system is not done, though. The decisive factor is an evaluation of how important each advantage or disadvantage of a product is for the purpose in hand. If security features are important, one should not shrink from using strace, tcpdump, lsof, truss, snoop or other tools on the respective system. The permissions with which the software is installed must also be tested. For example, if there is no Set-UID bit on programs which users can start (this does not necessarily have to be Set-UID root) and the shared version of libc is used (test with ldd), then internally implemented access restrictions are pretty certain to be worthless: they are really easy to get round by redefining functions such as getuid with the aid of the environment variable LD_PRELOAD.
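Two of those checks in practice; the program name is a placeholder for whatever client your backup package installs:

ls -l `which backup-client`   # an 's' in the mode bits means Set-UID
ldd `which backup-client`     # dynamically linked against the shared libc?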

Potential index problems

One typical feature of most products can turn out to be an Achilles' heel: so that specific data can be targeted for restore, the systems administer an online index. This stores the entire structure of the backed-up directory trees. With the appropriate program, users or administrators can navigate in the backed-up data as in a file browser and make a selection for restoring. If the same restrictions on rights are to be as effective here as when working in the file system, the rights, owners and ACLs also have to be held safely in the index. In addition, information has to be managed such as the date of the backup, the storage location of the data, and the flag showing whether the file system entry should be restored. Basically, a file system without data blocks is constructed here, but with additional information. The more entries there are in the file system to be backed up, the bigger the index. If the original directory contains only empty files or symlinks, this online index, which itself lives in a file system, cannot take up any less space than the original data. It is also subject to consistency requirements, like a file system: if a process which manipulates the index dies uncontrolled, the index can become inconsistent. Then it has to be tested and repaired, in a kind of fsck run; failing that, it has to be restored, or else you lose the option of navigating in the backup. Safe storage of the selection flags in the index is a problem too. This can mean that parallel restores cannot be run on the same client, although it is supposed to be theoretically possible: a selection in the restore front-end leads to another, previously made selection being cancelled, which shows up as parts of the first restore not being restored because the flags have been deleted in the meantime. ■




FEATURE


SECURITY WORKSHOP

Protecting Linux systems against attacks: part 2

CLOSE BULKHEADS!

MIRKO DÖLLE

After the mayhem we caused in part one, where we got rid of most daemons, we will now build a simple firewall that should insulate us against the last remaining few gaps.

Firewall: firewalls are used wherever private networks meet public ones, for example on company servers providing Internet access. Firewalls are meant to ensure that unauthorised access to the internal, private area is impossible. Depending on the complexity and size of the network, set up can take several days. However, firewalls are also sensible for domestic use if you want to protect your own computer against attacks from the Internet.

Masquerading: primarily used on servers providing Internet access for local networks. Masquerading assigns the server's IP address to all queries from internal networks. The replies are translated back, so that internally there is no apparent difference between masqueraded and unmasqueraded connections. However, the local machines are not accessible from outside, as their IP addresses are never revealed; queries can only be made to the masquerader's IP, and so end up on the server itself. Masquerading is commonly used for leased line or flat rate connections shared by more than one machine. Providers normally give out only one IP address per connection, which can only be used to address one machine. All other machines use private IP addresses; the masquerader attaches its own IP to their Internet queries and handles the delivery of the replies. ■

Assuming that you have switched off the most important (or most useless) services, as described in the last issue, we can now turn to protecting the remaining daemons. We will achieve this by means of a firewall that controls outside access. The subject of firewalls is notorious for being extremely complex - unfortunately with some justification - but for home use you'll be up and away with only a few lines. Administrators configuring large servers for companies or providers need to take into account many peripheral conditions and special services that are of no, or only minor, importance to home users. For our example, we are using a computer dialling into the Internet via an ISDN card. Our interface is ippp0 and the IP address assigned by our Internet provider is 192.168.1.1. Modem users simply need to leave out the ‘i’ in the device name; the modem interface is normally called ppp0. Network cards can also be used in the same way, substituting eth0. The procedure itself is always the same. The firewall acts as a filter between network devices, such as modems or ISDN and network cards, and the internal area. Which data ends up where is determined using filter rules.



There are four basic areas for these rules: all rules entered in the input section are applied to incoming data packets in sequence, like a chain, while the rules under output are applied in turn to outgoing data packets. The forward rules are used particularly for masquerading. In the fourth area it is possible to set up your own sections and rules; this is not normally required for home use, and we will deal with masquerading in a separate article. The standard kernel of most distributions already contains firewall support, so no recompilation is necessary. The required package, ipchains, is set up in virtually all standard installations; if not, it can be found among the network utilities and installed by hand. We want to close the bulkheads and only give access to a few selected services. There are fundamental disadvantages to this method, which we will discuss in detail when looking at the respective rules. You can examine the rules that have been set up at any point using ipchains -n -L. To start with, everything is permitted:

linux:~ # ipchains -L
Chain input (policy ACCEPT)
Chain forward (policy ACCEPT)
Chain output (policy ACCEPT)

Policy describes the basic attitude towards data packets. When all rules in the chain have been applied to the packet without it being re-directed somewhere else, it is accepted with ACCEPT or discarded with DENY.


Our aim is to deny everything that is not expressly permitted – therefore we will set the input policy to DENY:

ipchains -P input DENY

Rules are always processed in the sequence of their entry, so we need to specify what we will accept from ippp0 before discarding the rest. When a rule that obviously contains no errors doesn't work, it is usually due to an incorrect sequence; once a packet has been discarded, you cannot get it back in the next rule. Now nothing is working at all: all data is discarded, no matter whether it arrives via the network, ISDN, modem or locally. In order to be able to use all our local services, and to keep our graphical interface working, we must exempt ourselves from being discarded:

ipchains -A input -i lo -j ACCEPT

ICMP: Internet Control Message Protocol – used in case of unavailability to send an appropriate message to the originator of a query. For instance, ping sends small data packets with ICMP echo-request (request for return) to the destination, in order to receive the same data back via ICMP echo-reply (reply). This allows it to calculate the time lag between send and receive. ■

Figure 1: Colourful activities: most daemons can be recognised by the ‘d’ on the end, but they also include portmap, cardmgr and cron
Figure 2: Utilities which (almost) no-one needs: in /etc/inetd.conf, too, there are hidden daemons which are started completely automatically




The parameter -A specifies that we are adding a rule; input indicates the required section: all incoming data. Then follows the actual filter rule: -i lo applies to any data coming in via the loopback device, which can only be accessed by programs running locally on our machine and seeking a connection to other programs or services on our computer. Finally, with -j we stipulate what happens to the packets: they will be accepted. Sealing ourselves off completely doesn't have only positive effects. For instance, we would no longer receive messages when we cannot reach a server, yet these messages are very important for smooth Internet traffic. Consequently we will permit them, from any direction:

ipchains -A input -p icmp -j ACCEPT

The parameter -p icmp indicates the ICMP protocol, responsible for transferring these messages; -j ACCEPT again represents the processing: accept.

Clear nameserver access

Another very important service is the Domain Name Service, or DNS for short. The DNS servers, nameservers for short, handle the resolution of, for example, www.linux-magazine.co.uk to the server's IP address, in this case 195.99.156.130. Without this IP address you won't get anywhere on the Internet, so we must give access to our nameservers. You will need the script in Listing 1, which you should save as resolv-list in the directory /usr/local/bin. Please don't forget to make it executable with chmod a+x /usr/local/bin/resolv-list. resolv-list provides us with a list of the nameservers used, which we then make accessible in our firewall using the following commands:

for ns in `/usr/local/bin/resolv-list`; do
  ipchains -A input -s $ns 53 -d 192.168.1.1 1024: -i ippp0 -p udp -j ACCEPT
  ipchains -A input -s $ns 53 -d 192.168.1.1 1024: -i ippp0 -p tcp -j ACCEPT
done

Listing 1: /usr/local/bin/resolv-list

#!/bin/sh
if [ -r /etc/resolv.conf ]; then
  set -- `grep -i nameserver /etc/resolv.conf`
  while [ $# -ge 2 ]; do
    echo $2
    shift 2
  done
fi

Listing 2:

Chain input (policy DENY)
target prot opt    source       destination  ports
ACCEPT all  ------ 0.0.0.0/0    0.0.0.0/0    n/a
ACCEPT icmp ------ 0.0.0.0/0    0.0.0.0/0    * -> *
ACCEPT udp  ------ 192.168.2.1  192.168.1.1  53 -> 1024:65535
ACCEPT tcp  ------ 192.168.2.1  192.168.1.1  53 -> 1024:65535
Chain forward (policy ACCEPT)
Chain output (policy ACCEPT)


UDP: User Datagram Protocol – a connectionless protocol, which means that data packets are not acknowledged by the recipient, nor does the sender repeat them. It is used, for example, when querying DNS servers to find out the IP address associated with a host name. UDP is very fast, as no connection is established. UDP data cannot be sent directly through the Internet and is therefore normally wrapped in IP packets.

TCP: Transmission Control Protocol – a frequently used Internet protocol. It is often wrongly referred to as TCP/IP, even though these are two protocols (TCP and IP). TCP ensures, among other things, that data is assembled in the correct order.

IP: Internet Protocol – ensures the transfer of data packets on the Internet. This is where IP addresses come in, which uniquely identify sender and recipient. UDP, ICMP and TCP data packets are wrapped in IP packets and provided with the addresses of sender and recipient before being sent through the Internet. ■

The difference between the two ipchains lines is in the protocols specified with -p, in this case UDP and TCP. The nameserver in our example is 192.168.2.1; yours will be different, depending on your Internet provider. You can enter several nameservers – Linux can cope with up to three. The data source address is specified with -s; in our example the variable $ns was entered, followed by the port number, or service ID, 53. Finally we name the destination with -d, with the possibility of restricting the permitted range of port numbers: by entering 1024: we permit any port number from 1024 up to 65535. Ports below 1024 have a special status, but more about that later. If you now enter ipchains -n -L, you should see the list in Listing 2. Don't be put off by the ACCEPT all rule: even though it looks like everything is permitted everywhere, this is not the case. This output format does not display the device name to which a rule refers, and during set up we had specified the local loopback device with -i lo.

Access encouraged

We also want to permit known users to log onto our system. In order to stop user names and passwords from being captured, we will only allow the use of the encrypted Secure Shell, or SSH for short. We deliberately spared the relevant daemon, sshd, when we were killing daemons in the last issue. Access is given using the rule:

ipchains -A input -d 192.168.1.1 ssh -p tcp -i ippp0 -j ACCEPT

The parameter -i ippp0 makes the rule applicable to any data coming in via the ISDN card. If we had another network card with further Linux machines attached to it, no one could log on to the system from those, as the rule is restricted to the ISDN card and we are, by default, rejecting everything else.



This rule will admit any data packets destined for the SSH service of machine 192.168.1.1 and entering the system via the ISDN card ippp0. As the packet has now been accepted, no other rules will be applied. ipchains -n -L now gives us:

Chain input (policy DENY)
target prot opt    source       destination  ports
ACCEPT all  ------ 0.0.0.0/0    0.0.0.0/0    n/a
ACCEPT icmp ------ 0.0.0.0/0    0.0.0.0/0    * -> *
ACCEPT udp  ------ 192.168.2.1  192.168.1.1  53 -> 1024:65535
ACCEPT tcp  ------ 192.168.2.1  192.168.1.1  53 -> 1024:65535
ACCEPT tcp  ------ 0.0.0.0/0    192.168.1.1  * -> 22
Chain forward (policy ACCEPT)
Chain output (policy ACCEPT)


Replies to Netscape queries always originate from ports from 1024 upwards, and we still need to give access to these. However, we can restrict the whole thing a bit further: during surfing it is not necessary for anyone to connect to us, as we are querying the server and it returns the reply through the same connection. Incoming connection requests are therefore not accepted (! -y):

ipchains -A input -d 192.168.1.1 1024: -i ippp0 -p tcp -j ACCEPT ! -y

There is still one catch: the user's SSH client will normally try to open a second channel in the range of ports 600 to 1023 once it has logged onto the server. This is no longer possible, as everything up to port 1023 has been sealed off. For some helpful advice, see the SSH and Firewall box.

Web server access

If we want to make our Apache Web server accessible from outside, we require another ACCEPT rule:

ipchains -A input -d 192.168.1.1 http -p tcp -i ippp0 -j ACCEPT

As you can see, the pattern is the same; only the service entry has changed. The rule listing is extended by one line:

Chain input (policy DENY)
target prot opt    source     destination  ports
ACCEPT tcp  ------ 0.0.0.0/0  192.168.1.1  * -> 80

Ports and services

Anything that is not permitted is denied. At the moment, that includes anything that is not a nameserver reply or an SSH connection – even standard surfing. So we will have to consider what else we need to permit to enable normal operations, and this is not possible without some knowledge of ports. Behind the entries for services such as ssh or http in our examples lie port numbers; in the example of how to give nameserver access we actually worked directly with the port number, 53. Imagine a large block of flats in which all the letter boxes have been numbered sequentially: they all have the same address (IP), and letters can only be delivered correctly on the basis of the letterbox number (port number) or the name on the letterbox (service description). You can find out which service corresponds to which port number from the file /etc/services. Ports 0 to 1023 have a special role: these numbers are reserved for privileged services. The daemons behind them normally run with root privileges, and these ports are generally not available to normal users.
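Looking the mapping up is a one-liner in either direction:

grep -w ssh /etc/services   # which port does the ssh service use?
grep -w 53 /etc/services    # which services sit on port 53?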

Practical effects

To summarise: we are accepting SSH connections through the ISDN card, as well as requests to our Apache Web server. ICMP messages, DNS server replies and requested Internet data are also let through. On the other hand, any external connection attempt that is not routed to SSH or the Web server will simply be ignored. These settings will have only a minor impact on the user sitting at the machine. Even if the talk daemon has not been switched off (as discussed in the last issue), users can no longer be addressed from the Internet. External administration via swat or linuxconf is not possible, though it remains no problem from the user's own machine. The limitations: with IRC we can no longer send data via DCC or otherwise, and our FTP server is no longer accessible to outsiders. Table 1 is a list of permission rules you can build into a firewall to allow access for individual services.

Table 1: Service access rules

Nameserver:
ipchains -A input -d IP 53 -p udp -i Interface -j ACCEPT
ipchains -A input -d IP 53 -p tcp -i Interface -j ACCEPT

SSH access:
ipchains -A input -d IP ssh -p tcp -i Interface -j ACCEPT

Telnet access:
ipchains -A input -d IP telnet -p tcp -i Interface -j ACCEPT

Sendmail access:
ipchains -A input -d IP smtp -p tcp -i Interface -j ACCEPT

Apache Web server:
ipchains -A input -d IP http -p tcp -i Interface -j ACCEPT

FTP access:
ipchains -A input -d IP ftp -p tcp -i Interface -j ACCEPT
ipchains -A input -s 0/0 ftp-data -d IP 1024: -p tcp -i Interface -j ACCEPT

ICQ:
ipchains -A input -d IP 4000 -p tcp -i Interface -j ACCEPT

IRC with DCC:
ipchains -A input -d IP 1024: -p tcp -i Interface -j ACCEPT

The last rule is to be used with care, as it allows an external connection to be established on non-privileged ports. If this rule is implemented, no other rule for the TCP protocol and ports from 1024 upwards may be active.
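In these rules, IP and Interface are placeholders for your current address and your external interface. A minimal illustration with hypothetical values – substitute your own:

IP=192.168.1.1
INTERFACE=ippp0
ipchains -A input -d $IP smtp -p tcp -i $INTERFACE -j ACCEPT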


SSH and firewall

SSH will normally try to establish a second channel through a port between 600 and 1023. However, as we have prevented this with the firewall set up in this article, SSH would not be able to connect. There are two solutions: either call SSH with the parameter -P, or amend the rights of the SSH binary. Normally an SSH connection is established with root permissions in order to be able to use a port below 1024. Using the command chmod u-s `which ssh`, you can ensure that SSH will be started with your user rights in future – and will automatically use a port upwards of 1023 as the return channel.

Figure 3: Almost all utilities were superfluous: we need http-rman for the SuSE help system, swat stands in for the system administration program linuxconf of other distributions.


Automatic activation

A huge problem in building a domestic firewall is that your own IP address changes each time you log on – and consequently needs to be corrected in the firewall rules. Most firewall configuration tools make no provision for changes in the IP address and are therefore not suitable for home use. Ideally, the rules would be activated automatically after each login – with the correct IP, of course – and deactivated once you log off. The required scripts into which we can integrate our rules are called /etc/ppp/ip-up and /etc/ppp/ip-down. ip-up is called as soon as login has occurred, and ip-down once you have logged off. We are making use of the fact that parameter $1 gives us the modem or ISDN interface and $4 our assigned IP address. Since the lines for setting up and removing the firewall rules are almost identical, we will combine the rules in the file /etc/ppp/inet_chains, using the appropriate variables for the IP and interface used. You can see a relevant example on the CD under LinuxUser/firewall/inet_chains. There you will also find the access rules mentioned in Table 1, commented out with a hash (#) at the beginning of the line and therefore not active. Should you want to give access to individual services, you only need to remove the hash. By the way, the file /usr/local/bin/resolv-list from Listing 1 is no longer needed for this; inet_chains has its own function for the purpose. The call to inet_chains should be entered near the start, preferably in the second line, of /etc/ppp/ip-up and /etc/ppp/ip-down:

In /etc/ppp/ip-up:
test -x /etc/ppp/inet_chains && /etc/ppp/inet_chains up $@

In /etc/ppp/ip-down:
test -x /etc/ppp/inet_chains && /etc/ppp/inet_chains down $@
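The CD version of inet_chains is more complete; purely to illustrate the mechanism, here is a minimal sketch of such a script. Because ip-up prepends the keyword up, the interface arrives inside the script as $2 and the IP as $5; everything else below (variable names, the choice of rules) is illustrative and not the CD script:

#!/bin/sh
# /etc/ppp/inet_chains -- minimal sketch, not the full CD version.
# Called as "inet_chains up $@" from ip-up, so:
#   $1 = up/down, $2 = interface (ip-up's $1), $5 = local IP (ip-up's $4)
ACTION=$1
INTERFACE=$2
IP=$5

case "$ACTION" in
  up)
    ipchains -A input -d "$IP" ssh -p tcp -i "$INTERFACE" -j ACCEPT
    ipchains -A input -d "$IP" 1024: -i "$INTERFACE" -p tcp -j ACCEPT ! -y
    ;;
  down)
    # remove exactly the rules added above
    ipchains -D input -d "$IP" ssh -p tcp -i "$INTERFACE" -j ACCEPT
    ipchains -D input -d "$IP" 1024: -i "$INTERFACE" -p tcp -j ACCEPT ! -y
    ;;
esac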

Info

Firewall manual by Guido Stepken, with many examples: http://www2.littleidiot.de/firewall/
Notes and extensions by Dirk Haase for users of EasyLinux, covering the first part of Close bulkheads!: http://members.tripod.de/kridsoft/easyl/ha/ha005.html ■

Figure 4: Activation and deactivation is done differently from one distribution to another – here, for example, in linuxconf under Red Hat (left) and DrakConf under Mandrake.



Conclusion

In regard to standard installations, distributors have a lot of catching up to do. Only Mandrake possesses a useful mechanism that will switch off virtually any service at a paranoid setting. With most other distributions even security profiles are little help. Distributions especially aimed at beginners, starting with the SuSE 7.0 Personal Edition, ought to be better suited to their end users' requirements. It must be hoped that the next versions from the big distributors will take this on board. Nevertheless, no computer is really secure. Even if the possibilities described above provide you with reasonable external protection, one day the error that will invalidate everything may be found. And there is one thing you ought to know: the Internet is evil, and it gets everybody eventually. ■



OpenLDAP: Practical application

ORGANISING PRINCIPLE VOLKER SCHWABEROW

The Lightweight Directory Access Protocol directory service brings structure and order to the chaos of server administration. And with OpenLDAP and Linux, administrators don’t even need to incur any licence costs in the process.

The IT world has always been preoccupied with the subject of uniform, centralised user administration, and it is more topical than ever these days, thanks to developments such as Single Sign-On and Public Key Infrastructures. To ensure uniform user administration, administrators nowadays use NIS (formerly known as Yellow Pages). Should your requirements be more substantial, however, or if you would perhaps like to include applications in a centralised user data concept, only a scaleable solution will do. This will include facilities for data replication as well as for creating distributed architectures. LDAP (Lightweight Directory Access Protocol) provides the foundation for such a solution in its role as a central network information service. LDAP is integrated into NDS (Novell Directory Services) and Microsoft's Active Directory. It is an open standard for an information service based on a tree-like database structure. Compared to a normal database, its main advantage is attribute-related storage. LDAP developed from the X.500 DAP, but it uses the TCP/IP stack instead of the OSI stack. Its developers have tried to simplify the data structure compared to X.500, which means, for instance, that data is stored as plain text. This storage method also simplifies the interrogation of LDAP trees, as the client side does not have to deal with any complicated encoding. LDAP provides a link to X.500, and at the same time minimises the effort for networks and network software (clients). Version 1 of LDAP was created at the University of Michigan. Only since version 2 has it been possible to use LDAP in the classic client/server model without putting the main burden onto the clients. In the meantime there is already a white paper for version 3, and its essential principles are beginning to enter LDAP implementations, leading to improvements in the data model.

Data structure

In the LDAP data structure an object class defines a collection of different attributes which can be used to describe a directory entry. There are predefined object classes that can be used for defining locations, organisations or companies, people or groups. Object classes can be used to create entries. A typical entry in an LDAP tree looks something like this:

cn=Volker Schwaberow, ou=IT, o=MyCompany, c=DE

This is what is called a Distinguished Name (DN), or unambiguous name, in an LDAP tree (see Figure 1). The entry shows the attributes cn, ou, o and c. You will notice at first glance that LDAP is structured hierarchically, with the DN being read from right to left, similar to the structure of the Domain Name Service on the Internet. As has already been mentioned, attributes are used within a DN. Common attributes in the default LDAP schema are: Common Name (CN), Organisational Unit (OU), State (S) and Country (C). These attributes are each assigned to an object class through definition. Frequently used object classes are, for example, organization, organizationalUnit, person, organizationalPerson and country. The defined object classes determine what an entry can contain. The entry for a person in the LDAP tree can, for instance, contain the telephone number of the person, their


035openldap.qxd•

08.05.2001

9:04 Uhr

FEATURE

Seite 36

OPENLDAP

public key or, if required, a JPEG picture that can be displayed by an LDAP client. The possibilities are virtually endless, depending only on the respective application. If the solution to a requirement doesn't already exist as an attribute or a class, it must be implemented in the server's LDAP schema. User or organisation data can be set up in this schema using text files in LDIF format. LDIF (the LDAP Data Interchange Format) has a large variety of applications: data entry is one example, exporting existing LDAP trees into LDIF files is another. The LDIF format makes it easy to maintain LDAP data. A useful overview of LDIF can be found at Netscape; it relates specifically to the Netscape Directory Server, but also contains generally useful information.

After these LDAP basics, let's get down to business: OpenLDAP. This open source project emerged several years ago from a server project at the University of Michigan. OpenLDAP consists of a scaleable server with matching LDAP clients, and since version 2 it finally supports the protocols in the white paper for LDAP version 3.

Installation of OpenLDAP

Figure 1: The LDAP tree for the example as a diagram.


After downloading the current source distribution, there are some more requirements to meet before you can compile. One of these is a database compatible with LDAP's own LDBM. LDBM-compatible databases are, for instance, Berkeley DB2 or GDBM (GNU Database Manager). OpenLDAP can also make use of other backends. Since all distributions contain one of the two databases mentioned above, there should be no big problems here. Now you might as well unpack the sources:

tar xvfz openldap-stable-DATE.tgz

Change to the directory where you have unpacked the source and execute:

./configure --prefix=/usr/local/

If that has worked, execute:

make depend
make

Once OpenLDAP has been compiled, you ought to run the functionality tests. To do this, change to the directory tests below the source tree and start make. If the tests (database, server functionality, etc.) are completed successfully, you can install the OpenLDAP servers and clients with the usual

Configuration 1: slapd.conf

make install

# See slapd.conf(5) for details on configuration options.
# This file should NOT be world-readable.
#
include /etc/openldap/slapd.at.conf
include /etc/openldap/slapd.oc.conf
schemacheck on
referral ldap://myserver.mycompany.de/
pidfile /var/run/slapd.pid
argsfile /var/run/slapd.args
#############################################################
# ldbm database definitions
#############################################################
database ldbm
suffix "o=MyCompany, c=DE"
rootdn "cn=Manager, o=MyCompany, c=DE"
rootpw mypassword
directory /var/ldap/openldap-ldbm
defaultaccess none
access to attr="userpassword"
    by self write
    by * compare
access to *
    by self write
    by dn=".+" read
    by * none
access to *
    by dn="^$$" none
    by * read

command. The following is now located under /usr/local: slapd and the replication daemon slurpd, along with gateways to X.500 (fax, mail, etc.), are in libexec; the OpenLDAP clients can be found under bin. Here, the following three commands are of particular interest:
• ldapadd for adding entries to an LDAP directory
• ldapmodify for amending entries
• ldapsearch for searching the LDAP tree
These commands work locally as well as against a remote LDAP server.


Configuration of OpenLDAP

First of all, for simplicity's sake, move the directory /usr/local/etc/openldap to /etc/openldap so that the configuration files are in the right place. Now change to /etc/openldap, where you will find the following files for the supplied OpenLDAP clients:
• ldap.conf, basic client settings
• ldapfilter.conf, LDAP search filter configuration
• ldapsearchprefs.conf, other object-related filter settings



• ldaptemplates.conf, display-related client settings
The following files are for the OpenLDAP server:
• slapd.conf, configuration for the slapd daemon
• slapd.at.conf, predefined attributes
• slapd.oc.conf, predefined object classes
Firstly, open slapd.conf and amend it as shown in the Configuration 1: slapd.conf box. You don't need to touch classes and attributes at this point, but before you start working with OpenLDAP you should at least have a look at the default attributes and classes. Should you need to change the model, perhaps due to the incorporation of special server software (such as Netscape SuiteSpot products), you can set up additional attributes or classes in the slapd.conf file using include filename. Note also that your manager password should of course not be stored as plain text in slapd.conf as it is in the example in Configuration 1: OpenLDAP accepts SHA, MD5 or CRYPT as password encryption. The two include statements in the slapd.conf file are used for loading the standard attributes and classes. The database line is important: in it, an LDBM-compatible database is selected as the backend. The keyword suffix defines the DN for queries that can run against our server. The entry rootdn should be self-explanatory; this DN has all operating rights and is used by the administrator to bind to the LDAP tree for administrative operations. The directory line determines where slapd deposits its database. This directory needs to be created beforehand and must be assigned the file rights rwx for the user only (chmod 0700). The following lines contain the access rights or ACLs (Access Control Lists) for the LDAP tree. The default access is none. Each authenticated user is then given self write rights to the attribute userpassword; thus each user can change their own password within the LDAP tree. The last ACL permits anonymous binds, so that an address book application without a dedicated LDAP tree user can access the server. After you have implemented the basic settings for the slapd server, you should try to start it:

/usr/local/libexec/slapd -f /etc/openldap/slapd.conf

Now check whether the server has started correctly, ideally using ps -ax | grep slapd. By default, the LDAP service runs on port 389; that should also be documented in /etc/services, of course. Please note that slapd can be compiled with TCP wrapper support for additional security.
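If your OpenLDAP build ships the slappasswd utility, you can generate such a hash on the command line and paste its output into slapd.conf – the hash below is only a placeholder, not a real value:

slappasswd -h {SHA}
New password:
Re-enter new password:
{SHA}placeholder-hash-value

rootpw {SHA}placeholder-hash-value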


Configuration 2: MyCompany.ldif

dn: o=MyCompany, c=DE
o: MyCompany
l: Gelsenkirchen
streetaddress: Emscherstr. 41
postalCode: 45891
telephonenumber: 0209-4711
objectclass: organization

dn: cn=Manager, o=MyCompany, c=DE
cn: Manager
sn: Manager
objectclass: person

dn: ou=IT, o=MyCompany, c=DE
ou: IT
objectclass: top
objectclass: organizationalUnit

dn: ou=Finance, o=MyCompany, c=DE
ou: Finance
objectclass: top
objectclass: organizationalUnit

dn: cn=Volker Schwaberow, ou=IT, o=MyCompany, c=DE
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Volker Schwaberow
sn: Schwaberow
telephonenumber: 0209/4712

dn: cn=Bernd Schlaefer, ou=Finance, o=MyCompany, c=DE
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Bernd Schlaefer
sn: Schlaefer
telephonenumber: 0209/4713

First directory entries

Once the server is running you can make the first entries in the directory. To do this, transfer the contents of the Configuration 2: MyCompany.ldif box into a separate file with the extension .ldif. First, the organisation needs to be set up; in our example it is o=MyCompany, c=DE. Note that correct formatting of your LDIF file is crucial: it must be created exactly as specified in the example for the entries to work. For the sake of simplicity, save your initial LDIF in the /etc/openldap directory as well, in case you need it again at a later point. The finished LDIF file is then imported with the command line:

ldapadd -D "cn=Manager, o=MyCompany, c=DE" -W < LDIF-filename

After the manager account has been declared, two more organisational units are set up, the IT and Finance departments. Finally, two people are

Configuration 3: MyCompany_modified.ldif

dn: cn=Volker Schwaberow, ou=IT, o=MyCompany, c=DE
changetype: modify
replace: telephonenumber
telephonenumber: 0209/4716512


assigned to these departments: one is the author, the other a certain Mr Bernd Schlaefer. When the entries have been added you can perform your first directory search. This is done using the command ldapsearch. Assuming you want to search and list all entries in the directory, the correct command is:

ldapsearch -D "cn=Manager, o=MyCompany, c=DE" -b "o=MyCompany, c=DE" "(objectclass=*)"

The author

Volker Schwaberow is the Internet Security Engineer of Sozialwerk St. Georg e.V., a social service provider that is part of the German charity organisation Caritas. He deals with LDAP, PKI and SSO, amongst other things. He gained his first Linux experience in 1995, has been working with Linux on a daily basis since 1998 and prefers it to any bluescreen, in private as well as at work. He likes reading books, listening to music and programming in C/C++, Java, Perl and PHP.

Note that in large installations this should be used with care. Now search for all occurrences of cn beginning with the string Vol – that should return an entry. The command for this is:

ldapsearch -D "cn=Manager, o=MyCompany, c=DE" -b "o=MyCompany, c=DE" "(cn=Vol*)"

One thing that should be apparent from these two examples is that search filters require round parentheses. Of course, the author's telephone number or another value could change. In this case you should create an LDIF file along the lines of the example in the Configuration 3: MyCompany_modified.ldif box, which contains the author's data including the amended telephone number. This LDIF file is imported into the system using the following command:
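Filters can also be combined with the operators & (AND), | (OR) and ! (NOT) by nesting parenthesised expressions. A purely illustrative query against our example tree, finding all persons whose telephone number starts with 0209:

ldapsearch -D "cn=Manager, o=MyCompany, c=DE" -b "o=MyCompany, c=DE" "(&(objectclass=person)(telephonenumber=0209*))"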

ldapmodify -D "cn=Manager, o=MyCompany, c=DE" -W < LDIF-file

For a single entry this method does not exactly distinguish itself in terms of user friendliness, but it can be well worth it for large-scale changes. Debugging allows you to detect annoying error sources before modification.
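If your ldapmodify supports them (the OpenLDAP client does), the switches -n (only show what would be done, without modifying anything) and -v (verbose) are useful for precisely this kind of pre-flight check:

ldapmodify -n -v -D "cn=Manager, o=MyCompany, c=DE" -W < LDIF-file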

Directory interrogation by mail clients

After performing these three local standard operations on the directory, you should try to interrogate it using a mail client. Type the following URL into your browser:

ldap://myserver.mycompany.de/o=MyCompany, c=DE??base

Netscape will ask whether you want to add this server to your LDAP settings. Once it has been set up, the LDAP server can be interrogated via the Address Book. If addressing has been configured correctly, the entire LDAP directory will be searched for hits whenever a name is entered in the mail client's To field. After entering

ldap://myserver.mycompany.de/o=MyCompany, c=DE??sub

Listing 1: PHP demo

<?
// LDAP example by V. Schwaberow
// for Linux Magazine 2001
echo "<html><head><title>LDAP Test</title></head><body>";
// Establish a TCP connection with the LDAP database.
// This is the IP address of the LDAP server.
$connection = ldap_connect("192.168.10.248");
if ($connection) {
    // Anonymous bind, sufficient for queries according to our ACLs!
    // Amendments, however, require a valid DN.
    $result = ldap_bind($connection);
    // Search for all CN attribute entries in the LDAP tree.
    $search = ldap_search($connection, "o=MyCompany,c=de", "cn=*");
    // Array of search results.
    $entries = ldap_get_entries($connection, $search);
    echo "<table cellspacing=\"0\" cellpadding=\"0\" border=\"2\">";
    // Table loop
    for ($i = 0; $i < $entries["count"]; $i++) {
        $cn = $entries[$i]["cn"][0];
        $dn = $entries[$i]["dn"];
        // Our example entries carry telephonenumber, not mail.
        $phone = $entries[$i]["telephonenumber"][0];
        echo "<tr>";
        echo "<td align=\"right\">".$i."</td>";
        echo "<td>".$cn."</td>";
        echo "<td>".$dn."</td>";
        echo "<td>".$phone."</td>";
        echo "</tr>";
    }
    // Close the server connection.
    ldap_close($connection);
} else {
    echo "Connection to LDAP server cannot be established!";
}
echo "</table></body></html>";
?>



in the browser, the whole directory is represented as HTML.

Amendments using an editor instead of LDIF files

So what happens if you only want to amend a small entry? You would need a lot of patience to do everything through LDIF files. This is where LDAP editors come in useful; some recommendations are GQ for Gtk and Kldap for KDE. An LDAP browser allows the LDAP tree to be represented as a structural diagram, which simplifies amendments.

Programming languages and LDAP

LDAP support in various programming languages offers interesting possibilities. With a language such as PHP it is quick to create an extensive address management system for your home intranet. Listing 1 shows an LDAP tree listing via anonymous bind using PHP; that would be enough to create a simple email address management system for a user. The LDAP tree could even be administered via a PHP script: you could, for example, give another department general access to the telephone numbers in the LDAP tree. Perl also offers many ways of integrating LDAP into a script. The CPAN archive contains the Net-LDAPapi interface; however, the Perl module Net::LDAP, which can be found at SourceForge, is much more useful. You can find a short Perl example in Listing 2. There is, of course, also the possibility of programming LDAP bindings under Java or C/C++, in which case the author's sympathies lie with the Java variant.

Other possibilities and security aspects

In centralised network information services it is possible to bind standard services to authentication against the specified directory. This is done by exchanging the login PAM module: Linux server logins can then be verified against a directory. This solution also works for Apache, Squid, Qmail and other server services. The market is overrun with vendors selling such Single Sign-On solutions for a lot of money. However, this feature is also available free, for example in the Pluggable Authentication Module from PADL Software. It is open source, and performs better in everyday use than many an expensive SSO solution. OpenLDAP can also be used as a PKI (Public Key Infrastructure) server; the Oskar PKI project deserves a special mention in this context. You should already be familiar with the access control lists (ACLs), which restrict access to attributes according to various criteria. However, at the moment access to the directory service is completely unencrypted. That can be changed using something called an SSL wrapper. By default,


LDAP servers use port 389. Most clients also support SSL-encrypted LDAP. A frequently used SSL wrapper is, for example, Sslwrap.

The future

Looking at the most common current network administration methods, you will notice that there are too many different services keeping setups and user data locally on individual servers. Centralising this data would simplify things considerably. That also explains why enterprise products come complete with an LDAP interface. We can only hope that companies will recognise the potential of Linux, and open source in particular, in this area. The integration of directory services based on open source solutions can minimise costs and improve the stability and reliability of the system at the same time. ■

Listing 2: Net::LDAP

#!/usr/bin/perl
#
# Net::LDAP test for Linux Magazine
#
use Net::LDAP;

# New Net::LDAP object
$ldap = Net::LDAP->new('myserver.mycompany.de');
# Anonymous bind
$ldap->bind;
# The query
$state = $ldap->search(
    base   => "o=MyCompany, c=DE",
    filter => "(objectclass=*)"
);
# Entry output
foreach $buffer ($state->all_entries) {
    print $buffer->dump;
}

Info

IETF – Internet Engineering Task Force: http://www.ietf.org
RFC 1779 – A String Representation of Distinguished Names: ftp://ftp.isi.edu/in-notes/rfc1779.txt
RFC 1778 – The String Representation of Standard Attribute Syntaxes: ftp://ftp.isi.edu/in-notes/rfc1778.txt
RFC 1777 – Lightweight Directory Access Protocol: ftp://ftp.isi.edu/in-notes/rfc1777.txt
LDAP project of Michigan University: http://www.umich.edu/~dirsvcs/ldap/index.html
OpenLDAP project: http://www.openldap.org
Netscape hints on the LDIF format: http://developer.netscape.com/docs/manuals/directory/admin30/ldif.htm#1043950
LDAP browser/editor V2.8.1: http://www.iit.edu/~gawojar/ldap
GQ – LDAP browser for Gtk: http://biot.com/gq
Kldap – LDAP browser for KDE: http://www.mountpoint.ch/oliver/kldap
Net-LDAPapi – Perl API for LDAP access: http://search.cpan.org/search
Net::LDAP – Perl API for LDAP access: http://perl-ldap.sourceforge.net
PADL's LDAP PAM: http://www.padl.com/pam_ldap.html
Oskar PKI: http://oscar.dstc.qut.edu.au
Sslwrap: http://www.rickk.com/sslwrap
Jens Banning: LDAP under Linux, Addison-Wesley, ISBN 3827318130 ■



Image processing with Gimp, Part 2

UNDERSTANDING GIMP SIMON BUDIG

If you ask about image processing under Linux, you will more than likely be referred to Gimp. Gimp (Gnu Image Manipulation Program) is a very flexible program. But first of all, you have to learn how to handle this flexibility. This is the second part of a series in which we will look at various aspects of Gimp. In this part, we’ll be covering various tools and selections.

Plugin: a little program which is, as it were, 'plugged' into Gimp and adds image processing functions to it.
Pixel: an abbreviation of PICture ELement and the smallest unit of an image. Gimp processes pixel images almost exclusively – unlike programs such as Sketch, which construct images out of mathematically defined elements. ■

When you start Gimp for the first time, you will be met by a series of dialogs to help you create some important settings. At this stage the most important settings are on the fourth page. The size of the memory store is the maximum RAM consumption of Gimp. If you have plenty of RAM in your computer and are the main or sole user of the computer, you should consider allowing Gimp to use more than the preset 32MB RAM. About 75% of the available memory is a rule of thumb here. If Gimp should need more memory storage, it swaps image data to the hard disk. Especially if your home directory is integrated via NFS – thus over a network – you should change the settings for the swap directory and set, for example, /tmp or /usr/tmp. There is then no need to access the network to swap files, and the speed stays passable. On the next page you can set details for screen resolution. This is only important when you set great store by, for example, wanting to process scanned images at exactly their original size. After that, the Gimp start dialog greets you. You have to wait a bit longer when you start it for the first time, since Gimp collects data on all the installed plugins. From the next time you start, this data will be read from a single file - this goes considerably faster. The first thing you should do is position the window which now pops up sensibly on the screen (Figure 2). We prefer to place the window at the edge and leave the space in the middle for image windows. In any case, it’s worthwhile reserving a separate workspace or ‘virtual screen’ for Gimp. But this does not work in the same way from one


window manager to another. Oh yes: read the Tips of the Day – they are very helpful, especially for newbies. To repeat the first tip from Gimp: if you right-click in an image window, a comprehensive menu will appear. Unlike the menu in the toolbox, you can also save an image from here. To distinguish between the two menus, the location will always be given first in angle brackets, like so: <Toolbox>/File/Open and <Image>/File/Save.

Toolbox

Let's take a look at the central station of Gimp: the Toolbox. In Figure 3 you can see the toolbox, in which the most important painting tools are marked in red: the pencil, the paintbrush, the eraser, the airbrush and the ink tool. The two areas at the lower end of the toolbox indicate which colours and paintbrush are set, and offer rapid access to the corresponding selection dialogs. Open a new image using <Toolbox>/File/New; you can happily accept the presets, but if necessary you can also create a somewhat larger image, for example 500x500 pixels. With a click on the paintbrush symbol in the toolbox, you activate the paintbrush tool and can start painting in the image window. The paintbrush tool uses a really thick, round paintbrush to do this by default. Of course this is not suitable for all requirements, but Gimp offers a rich assortment of different paintbrushes (Figure 4). With a click on the small preview in the



lower right area of the toolbox, you can call up the paintbrush dialog. Here you can choose the paintbrush with which you want to work. When experimenting, you will also come across paintbrushes that behave somewhat differently to normal paintbrushes, as shown last month. With the Distance slider you can set how far apart the individual paintbrush images are to be placed.

Softly softly

The pencil tool behaves at first glance in a similar way to the paintbrush tool, the difference being that the paintbrush allows soft edges, while the pencil produces hard contours. This is especially important when you want to control images down to each individual pixel; normally, though, considerably better results are achieved with the paintbrush tool. The airbrush is comparable to the paintbrush tool, but applies the colour much more slowly to the image. If you hold down the mouse button over one spot for a long time, more and more colour will be applied – just like a spray can, in fact. From our present point of view the eraser is nothing more than a paintbrush tool which always


paints with the background colour. I will not go into it any further at this point. That should be enough to start with. The last of the painting tools is the ink tool, which simulates an ink pen. Unlike the other painting tools, it ignores the currently set paintbrush completely. On the other hand the thickness of the stroke depends on the speed. This tool takes on a new dimension when it is used together with a graphics tablet. All the painting tools are considerably more flexible than they may appear right now. After a double-click on a tool symbol, the tool options open. Here it is possible to set more precisely how the tool should act, especially how heavily the colour is applied and should be combined with the existing image. Try out all the options, to get a feeling for the possibilities. But don’t be surprised that the options for pressure sensitivity with a normal mouse are ineffective.

[left] Figure 1: Starting Gimp [right] Figure 2: First distribute dialogs...

[left] Figure 3: Gimp painting tools

Colourful

Now let's turn to colour selection. In the toolbox at bottom left you will see two colour areas: the foreground and the background colours. One of these two areas appears to be 'pressed in', and this

[middle] Figure 4: First paintings. [right] Figure 5: One of the colour selection dialogs




is the so-called active colour, which can be changed at various places. If you click on the active colour, a comprehensive colour selection dialog appears, which you can look at in Figure 5. You can switch between various types of dialog using the tabs. With a bit of background knowledge (see the Colour models box) you should soon be at home with the first dialog.

Selected

Figure 7: The selection tools

One area which is very important, especially for post-processing photos, is selections. You do not normally want to cover the entire image with an effect or colour correction; you have to somehow delimit the area to be processed – and the more flexible the tools for this are, the better. The basic tools are highlighted in red in Figure 7. From left to right: the rectangle and the ellipse selection tools, the freehand lasso, the magic wand and the 'intelligent' scissors. Other important tools can be found under <Image>/Selection/....

Colour models

If an image is to be coloured, you are definitely going to have to get involved with colour models. For Gimp, two colour models are decisive: RGB and HSV. We will also go into a third colour model (CMYK) at this point, which is unfortunately not yet supported by Gimp, but is very important in practice.

The RGB colour model is used by monitors to display colours. Physically, this is based on an additive colour system, which means colours are added to black (a switched-off monitor is always black) until, at the maximum, white is produced. In accordance with the perception in the eye, the colours red, green and blue (RGB) are used here. This is sufficient to show the majority of colours. The drawback is that it can be difficult to guess the right mix of the components in order to obtain a certain hue.

The HSV colour model goes a different route, which makes it easier to create slightly different shadings of similar hues. A colour is defined here by stating the hue, saturation and value. Usually the hue is specified by means of a colour circle – you can see this circle in the outer area of the 'triangle' colour dialog. If you adjust the saturation downwards, the colour becomes more and more grey; with the value you can darken the colour. In the 'triangle' colour dialog you can see the relationship between the colours: the side opposite the coloured corner has a saturation of 0, the side opposite the black corner has a colour value of 100% (Figure 6: the HSV triangle).

The CMYK colour model plays a central role in the printing world. The physical basis for this is the subtractive colour model, which means colours are gradually 'subtracted' from a 'white' sheet of paper until one arrives at black – so it is, as it were, the opposite of the RGB colour model. In this case, colours are composed out of cyan, magenta and yellow. But since this only works brilliantly in theory, in practice black (key) is also used, so as not to obtain merely a dirty brown as the darkest colour.

Neither the RGB nor the CMYK colour model is sufficient to reproduce all the colours which occur in nature on paper or monitor. In fact, in the CMYK model one is more likely to come up against barriers. Here, so-called spot colours are used to further expand the spectrum that can be shown. But that's a science in itself...


Select the rectangle selection tool and use the mouse to drag out a rectangle in the image window. A broken line (the so-called marching ants) now marks the selected area, which you can drag back and forth by clicking in the area and then moving the mouse with the mouse button held down. By clicking outside it you can anchor the area to the image again. When an area is selected, the painting tools can only change this area. Try it out: select an area and paint diagonal strokes over the image using the paintbrush tool. The strokes are only visible within the area. The ellipse tool works in a similar way. If you want to select perfect circles or squares, you can press the Shift key while dragging out the area. It is important that you only press the key after first clicking with the mouse. With the Ctrl key you define that the first mouse click marks the centre point of the ellipse/rectangle.

Combining selections

Of course, not every shape you want to select is perfectly rectangular or elliptical. To adapt the selection as closely as possible to the desired shape, you can combine the various tools with each other. If the Shift key is pressed at the start of the mouse click, the area will be added to the existing selection; with the Ctrl key the area will be subtracted. With both pressed at once, the intersection of the two will be formed. In Figure 8 a perfect circle is added to a selection. Gimp helps you remember the key combinations: a little plus sign shows that we are now adding to the selection. Please note that these two actions are really independent of each other: the combination with the existing selection is defined by the keys at the start of the click, the regularity of the selection by the keys at the end of the click. So if you want to subtract a perfect circle from the selection, press the Ctrl key, then the left mouse button, release the Ctrl key, press the Shift key, drag the circle to the requisite size and then release the mouse button.

A Button

Is your head buzzing by now? Sorry, but these basics are too important to cover only superficially. Perhaps we should now start a little project together, to see how something like this could look in practice. The aim is to paint a button for a website. So that it invites the viewer to click on it, it should look slightly three-dimensional. Open a new image, about 400x200 in size (<Toolbox>/File/New). Select a rectangular area in the middle (rectangle selection tool) and set a medium grey as the foreground colour (colour dialog). Select the fill tool (the colour bucket) and click in the selected area: the rectangle is filled with grey.



A click on the black and white symbol at bottom left of the two colour fields in the toolbox resets the colours to black and white. Select the airbrush and a soft, tapering paintbrush. With an opacity of about 50% (in the tool options, which pop up when you double-click on the airbrush), move along the lower and right edges of the selection. The selection prevents you from painting into the white area. After that, click on the double arrow at top right of the two colour fields to swap the foreground and background colours. With the white colour, repeat this on the upper and left sides of the rectangle.


Text

You have now created the slightly three-dimensional-looking basis for a button. All we really need now is a colourful caption. For this we can use the text tool. Click on the 'T' in the toolbox and then on the grey rectangle. In the dialog that appears, you can enter text and select a font type and size. Since we tend to have one-track minds, we have entered 'Gimp' here. Century Schoolbook, which comes with the freefont package in many distributions, has been selected as the font; in my case, the size is 64 point. If you now click on OK, the text appears in the image, although rarely at the spot where you want it. Move the mouse cursor onto the text (it turns into a little cross) and drag it to the right place. You will have noticed that the text is also selected. We can now take advantage of this: pick a nice bright colour and the paintbrush tool, then simply drag a couple of strokes diagonally over the text (Figure 9). Due to the selection, you are only painting on the text itself and can thus provide it with a decoration. Incidentally, you can undo the last steps with Ctrl+Z if you have made a mistake somewhere. The text is still a so-called floating selection, which means that it can be moved around with a selection tool without destroying the image information underneath it (our grey button). In order to anchor this floating selection to the image again, you can either press Ctrl+H or select the rectangle selection tool and click outside the text in the image. The selection borders disappear, and the text is fixed to the image.

The author Simon Budig has known Gimp since Version 0.99.10 and has recently tried out Version 0.54 just for fun. What a laugh. When he is not writing Gimp articles or giving lectures on Gimp, he is trying to complete his degree in mathematics at the University of Siegen.

Cropping

Now the button just has to be tailored to its final size and saved. To do this, select the cropping tool, which is represented in the toolbox by a scalpel and is found next to the magnifying glass. In a similar way to selecting a rectangle, draw a frame around the button. It's all right if it's a bit too big, as you can change the area by clicking on the four corner handles or by entering the co-ordinates numerically in the dialog box. In our case an automatic shrink is enough to adapt the box precisely to our button. The image size is changed with a click on Crop. Lastly, we save the image: with <Image>/File/Save as... the save dialog appears. Here you can simply enter a file name; depending on its ending, Gimp chooses the corresponding format. I would suggest at this point that you use the JPEG format with the ending .jpg, a quality of 0.90 and smoothing of 0.05. In the image window you can assess in advance how the image will look afterwards (the JPEG format creates visible defects in some circumstances). The other options are of a more technical nature and can be left at their default settings. Voila – our button is finished.

Outlook

We won't hide from you the fact that this is a very tedious way of creating such buttons, but you should by now have an idea of how to make simple things with Gimp. In the next installment we will look at additional features in connection with selections, and prepare scanned images. ■

[below] Figure 8: Here a perfect circle is being added to a selection [bottom] Figure 9: Two steps in one: Create text and colour it...




Streamers from £250 to £4000

COMPARISON OF TAPE DRIVES OLIVER KLUGE

Tape drives, so-called streamers, are still the backup medium of choice. Value for money and with a large capacity, they have maintained their position for decades. We tested a representative sample of a few of the current models.

In the lab, Linux Magazine used an IDE-Raid, in order to feed data to the streamers at the highest possible speed. The data content consists on the one hand of easily compressible files (sources from the latest SuSE distribution), and on the other hand

of hard-to-digest MPEG videos. The data mix on the lab Raid is a fairly good reflection of the reality of a corporate server. The capacity details of the drives are uncompressed, thus guaranteed, values. The usual 2:1 or even 4:1 assumptions of the manufacturers are unrealistic fantasy values, and under no circumstances should you rely on them.

Tandberg SLR-60


Figure 1: The Tandberg drive is robust and fast

Extending the reach of SCSI devices

There are many solutions for getting the data onto the backup device. As well as dedicated backup servers, there is also the option of running several SCSI devices on one server and extending the range of the SCSI interface so as to distribute the locations of the devices. One product this is possible with is the StorageNet SCSI Extender from StorageTek. With this device, the very short SCSI cable length can be extended to an impressive 20km if fibre-optic cable is used; with WAN networking it is even possible to achieve 200km. Both backups to streamer drives and disk or RAID mirrors can be run over the wide SCSI connections thus obtained – and mirrors accommodated in separate buildings even provide fire protection.
http://www.storagetek.com
Standards: SCSI, Wide and Ultra SCSI


Tandberg, with its SLR devices, relies on the tried and tested system of linear recording. The principle is identical to QIC, but as a result of refined mechanics and a fully automatic recording controller with read-after-write, Tandberg offers data security that is bang up to date. The drive (Figure 1) stands out because of its great robustness, and even the cassettes lack the fragility of DAT cartridges and will cheerfully put up with rough handling. The drive has a wide SCSI connection and an 8MB buffer.

Tandberg SLR-60
Capacity: 30GB
Backup rate: 210MB/min
Price: approx. £1200
http://www.tandberg.com

Tandberg SLR-60 Autoloader

The speedy Tandberg (Figure 2) also comes with automatic changers. The rig costs just under £4000, but for this you also get 180GB uncompressed



capacity. In the extremely long housing, tapes are inserted very practically with the aid of a cassette holder, so you can change a set of six tapes at a stroke. This turns the archiving of even huge quantities of data into affordable child's play. At twelve seconds change time, it doesn't take an eternity to change a tape either, and the backup rate is identical to the single drive.

Tandberg SLR-60 Autoloader
Capacity: 180GB
Tape change time: 12s
Backup rate: 205MB/min
Price: approx. £4000
http://www.tandberg.com

Ecrix VXA-1

VXA (Figure 3) from the US manufacturer Ecrix uses a new type of tape cassette. This contains a tape which is threaded into the drive and recorded helically, just as with DAT or Exabyte. The broad tape gives the impression of being markedly more robust than DAT media. The drive processor has to manage with an input buffer of just 512K – ordinary server hard drives have more to offer than this. In the test, though, the drive proved to be really fast: 201MB/minute is a decent figure.

Ecrix VXA-1
Capacity: 33GB
Backup rate: 201MB/min
Price: approx. £800
http://www.ecrix.com/

[left] Figure 2: The autoloader from Tandberg offers cassette holders

Ecrix VXA Autopak

Ecrix also manufactures VXA drives as autoloaders. The devices, called Autopaks (Figure 4), offer ample space: the tested device has 15 slots for cassettes, so the capacity is 495GB uncompressed. This means it can be used to back up even larger server environments. The speed is practically the same, but can be doubled by fitting a second drive in the changer. In the test it was 198MB/minute, as a result of the time it takes to change tapes.

Ecrix VXA Autopak
Capacity: 495GB
Tape change time: 13s
Backup rate: 198MB/min
Price: approx. £3500
http://www.ecrix.com/

[right] Figure 3: VXA uses new style cassettes and makes helical recordings

[above] Figure 4: Autopak is the name of the robotic changer from VXA. It offers a spacious 495GB [left] Figure 5: Travan offers acceptable speed at a low price

Travan NS 8

Travan Network Storage uses the Seagate NS-8 as the drive, which still shows clear signs of its Conner past (Figure 5). The Travan drive stands out because of its low price, but do not expect any three-figure

backup speeds for it. On the other hand, this drive has nothing whatsoever in common with the leisurely ways of earlier QIC floppy streamers. In the Linux Magazine test it bulldozes away at 39MB a minute, a figure at which an individual server can certainly be backed up to tape in an acceptable time. When installing, you need to take care that the computer is not too exposed, because the Travan cassette protrudes three centimetres out of the drive. If it gets pulled out during the save, that's the end of your backup. ■

Travan NS 8
Capacity: 4GB
Backup rate: 39MB/min
Price: approx. £250
http://www.seagate.com

Bigger libraries

Just before going to press, Linux Magazine was sent details of Storage Technologies' new tape library L20 (http://www.storagetek.com/products/tape/L20/). This device is conceived as the entry-level model in the L-series, and provides exceptionally high capacities. It starts off with 10 or 20 slots; even with these, a respectable 2TB can be safely stored uncompressed. The bigger models with up to 80 cassettes and a maximum of eight drives can even put away 8TB without compression. The speed is remarkable: according to the manufacturer it is 920MB per minute per tape drive for uncompressed files. These enormous quantities are managed via a built-in Web interface. The tape formats used are DLT 1, 7000, 8000 and Super DLT; LTO Ultrium can also be processed. The devices include a barcode reader as standard for secure identification of the tapes. Apart from the SCSI-3 LVD connection, the optional high-speed Fibre Channel interface is also interesting.




Arcserve versus Arkeia

THE DUEL OLIVER KLUGE

Backup programs for servers are complex products and are usually produced for specific domains of application. This article compares two well-known packages and introduces two alternatives.

Data produced using Linux is no less valuable than any other. Anyone who does not choose their software for backups with care will probably soon regret it. The following performance comparison of two well-known products should make it easier for you to choose.

Arcserve The HTML interface of Arcserve looks garish, but it is still nice to use. This makes it easy for the administrator to seek out special solutions

Arcserve from Computer Associates (CA) has been a fixture on the market for a long time now. First appearing under the Cheyenne label, it was the backup solution for earlier operating systems. Linux is now a worthwhile platform for the server specialists.

Arcserve makes some demands of the system: without a Korn shell and an Apache Web server nothing whatsoever will work. The latter is required for the Web-based GUI. What means considerable additional expense for an individual server becomes an advantage in a server cluster: administration from any point on the network is no problem, even without a client. CA uses Java applets for the display. The installation script tests for the correct installation of the necessary components. Despite the initially gaudy impression given by the GUI, with large icons for various objects and groups, Arcserve presents its many setting options clearly. With the aid of the large icons, often-needed functions can be called up rapidly and directly, which gives the administrator enough flexibility to construct a solution for new demands quickly.

Arkeia

Arkeia from Knox is seeking to become the market leader in this field under Linux. The program behaves very modestly during installation: it does its duty within 10 minutes without any great installation orgies – although this does not include the configuration, which takes somewhat longer. The installation procedure itself seems somewhat antiquated. Certainly this is no backup software for the home user, but nowadays even system administrators do not want to cut and paste the necessary installation command sequences out of a readme file into a shell. A small shell script would not be asking too much, nor would a KDE icon.



But the program immediately makes up for these minor inconveniences. The graphical speed control is handy, especially if the tape drives are far away in the server room. Arkeia is of modular construction right down to the last tiny detail and all elements of a back up can be combined into groups. In this way, an administrator can monitor all tasks which arise centrally: from the total back up of a workstation, via the incremental databank copy up to networked enterprise back up strategies. The extreme flexibility of the program does have the disadvantage, though, that solving a short-term problem which has just cropped up becomes fiddly because a lot of adjustment has to be done before the streamer starts to whirr. The hierarchies in Arkeia are somewhat unusual. Many options can be defined both in the corresponding element, as well as in the overriding group (the command as to whether subdirectories are to be searched or not, for example). This is a double-edged sword. It can lead to confusion if the program behaves unexpectedly, while on the other hand, it does allow for particular flexibility when adapting to specific corporate needs. ■

Arkeia offers many detailed setting options for system administrators


Veritas Net Backup Business Server

Just before going to press we received the new program from Veritas. The great strengths of this package are, firstly, the distinctly broad platform support and, secondly, the many optional agents. This means Oracle, Sybase and Informix servers can be backed up in the same way as Lotus Notes databases. With clients for all the latest operating systems, the package offers a good basis for central administration of locally performed backups too. http://www.veritas.com

Product Overview

                        Arkeia                  Arcserve
Manufacturer            Knox Software           Computer Associates
Sales                   SuSE                    CA
Telephone               020 8387 4088           0161 928 9334
Internet                http://www.arkeia.com   http://www.ca.com/arcserve
Price                   from £430               from £1000

Use
GUI administration      yes                     yes
Web interface           no                      yes
Central administration  yes                     yes
Command line            yes                     yes

Scheduler
Calendar planning       yes                     yes
Rotating jobs           yes                     yes
Prioritisation          yes                     yes

Devices
Automatic recognition   no                      yes (SCSI)
Barcode support         yes                     yes
Trailer management      yes                     yes

Extras
Virus testing           no                      yes

Quadratec Time Navigator

The name of the product is derived from a feature: the user can 'travel back in time' when restoring, and can thus see in advance the state of the data at any point in time. Time Navigator comes with a great deal of equipment and is designed for large corporate networks. Installation goes smoothly and quickly. The modern GUI design is striking, and is especially seductive because of its clarity. Although Time Navigator also offers an exceptional number of detailed setting and control options, it is still simple and logical to use, with a wizard to help if anything remains unclear. Wherever groups are formed (servers, drives, tapes and so on), Quadratec uses impressive icons. Users can at any time click on the information they need and click away the rest, thereby keeping an overview. Tracking jobs that run simultaneously or are spread out over time is also no problem with this GUI concept. The software appears expensive at first (from £3000), but Quadratec has a completely different price structure from the competition: the price scales linearly as the hardware and software structure (agents) grows. As a result, Time Navigator becomes comparatively cheaper for large installations. http://www.quadratec-software.com




OpenGL Course: Part 2

POINTS, LINES AND POLYGONS THOMAS G. E. RUGE, PABLO GUSSMANN

This part of the OpenGL course firstly concerns the basic graphical elements from which 3D objects are constructed. It will also explain how the objects created can be correctly lit.

Before we come to the basic elements, the coordinate system used by OpenGL must first be explained in more detail. This is a Cartesian coordinate system. The x and y-axes form a plane, something like the visible surface of a monitor. The z-axis adds the third dimension - spatial depth. In the case of our monitor this would now be the depth of the picture tube. A point P thus needs three values (x,y,z) in order to have a fixed position in our co-ordinate system.

Basic 3D elements

As demonstrated in the first part, the general command structure OpenGL uses to draw something looks like this:

Figure 1: The Cartesian coordinate system

glBegin(...);
glColor3f(...);
glVertex3f(...);
glColor3f(...);
glVertex3f(...);
...
glEnd();

glBegin(TYPE) tells the machine which basic element (also referred to as a primitive) it should draw from now on. A complete illustration of all OpenGL primitives can be seen in Figure 2. In OpenGL there are five different basic elements from which all objects must be composed: points, lines, triangles, quadrangles and polygons. Variations can be formed out of all these elements (apart from points), mostly simple continuations created by defining additional vertices. So a simple triangle can turn into a so-called GL_TRIANGLE_STRIP, or a quadrangle (GL_QUADS) can become a GL_QUAD_STRIP (see Figure 2). This has the additional advantage that the overlapping vertices in the composed element do not have to be loaded into memory and calculated twice. So if we want to draw four coherent triangles, it is sufficient to specify six points – saving three sides and six points.
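To illustrate the saving, here is a minimal fragment (a sketch only – the vertex coordinates are arbitrary) that draws four connected triangles from six vertices using a triangle strip:

/* Four triangles from six vertices: from the third vertex onwards,
   each new vertex completes one further triangle. */
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(0.0f, 0.0f, 0.0f);   /* v0 */
glVertex3f(0.0f, 1.0f, 0.0f);   /* v1 */
glVertex3f(1.0f, 0.0f, 0.0f);   /* v2 -> triangle v0 v1 v2 */
glVertex3f(1.0f, 1.0f, 0.0f);   /* v3 -> triangle v1 v2 v3 */
glVertex3f(2.0f, 0.0f, 0.0f);   /* v4 -> triangle v2 v3 v4 */
glVertex3f(2.0f, 1.0f, 0.0f);   /* v5 -> triangle v3 v4 v5 */
glEnd();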

Colours

A colour in OpenGL is normally based on the RGB principle: it consists of the components Red, Green and Blue, from which all visible colours from black to white can be mixed. Examples of OpenGL colours:

Green:  glColor3f(0.0f, 1.0f, 0.0f);
Violet: glColor3f(0.6f, 0.0f, 0.4f);
Black:  glColor3f(0.0f, 0.0f, 0.0f);
Grey:   glColor3f(0.4f, 0.4f, 0.4f);
White:  glColor3f(1.0f, 1.0f, 1.0f);

The first example program shows all 10 of the primitive types mentioned above.



The second example from the last part of the course serves as the basis for this. So it is again based on GLUT, the OpenGL Utility Toolkit.

Let there be light

Objects such as the teapot from the first part consist of triangles or polygons. To make them look more realistic, these must be lit, and thus appear brighter or darker depending on the angle formed between them and the source of light. OpenGL fortunately takes over this part of the maths for us, but it still requires additional information, in the form of normal vectors: a normal vector is a vector that stands perpendicular to a surface. Figure 3 shows a normal vector on an area. An area consists of at least three points. The vectors u and v are the vectors from P1 to P2, and from P1 to P4, respectively. It doesn't matter which point is used to form u and v: as long as the points are correctly oriented with respect to the area, the normal vector is always the same.

The cross product

The normal vector of an area can be calculated using the cross product:

vNorm.x = u.y * v.z - u.z * v.y
vNorm.y = u.z * v.x - u.x * v.z
vNorm.z = u.x * v.y - u.y * v.x

So that the normal vector also points in the right direction, the sequence of points (P1, P2, P3, ...) which define the area must again be consistent. But this leaves the normal vector only half-finished, because it still has to be normalised. This is necessary to ensure that all normal vectors have the same length, and looks something like this:

length = sqrt(vNorm.x * vNorm.x + vNorm.y * vNorm.y + vNorm.z * vNorm.z)
vNorm.x /= length
vNorm.y /= length
vNorm.z /= length

This should, of course, not be calculated anew for each frame, because the computing time taken would be enormous. The normal vectors should be calculated just once at the start of the program, because (in most cases) they do not change. Using

glNormal3f(vNorm.x, vNorm.y, vNorm.z);

these values are transferred to OpenGL, just as with colour values.
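A minimal C sketch of the whole calculation (the Vec3 struct and the function name are ours, not from the course listings):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Returns the unit-length normal of the area spanned by u and v. */
Vec3 surface_normal(Vec3 u, Vec3 v)
{
    Vec3 n;
    float len;
    n.x = u.y * v.z - u.z * v.y;   /* cross product */
    n.y = u.z * v.x - u.x * v.z;
    n.z = u.x * v.y - u.y * v.x;
    len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len;                    /* scale to length 1 */
    n.y /= len;
    n.z /= len;
    return n;
}

Called once per surface at program start, the result can then be handed to glNormal3f() every frame.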

Light in OpenGL Of course, in order to show illumination with normal vectors, you also need a source of light. For this we need a few details about its position and

[left] Figure 2: OpenGL primitives

colour values (the light source need not, of course, only give out white light). The following variables contain the necessary values for the position of the light source:

[right] Figure 3: Normal vectors

GLfloat LightPosition[] = { 0.0f, 0.0f, -1.0f, 1.0f };

The first three values specify the position and the fourth is a sort of switch, which should stay at 1.0. The next variables contain the values for the ambient, diffuse and specular components of the light:

GLfloat LightAmbient[]  = { 0.2f, 0.2f, 0.2f, 1.0f };
GLfloat LightDiffuse[]  = { 0.3f, 0.3f, 0.3f, 1.0f };
GLfloat LightSpecular[] = { 0.9f, 0.9f, 0.9f, 1.0f };

Listing 1, Primitives.c

The program is compiled with:

gcc -I . -c Primitives.c
gcc -o Primitives Primitives.o -L /usr/X11R6/lib/ -lGL -lglut -lGLU

The program is really very simple to explain: you select the type of primitive using keys 1..9. This is done in the callback function (see Part 1) for drawing. Primitives are always drawn with different colours. The case query in the keyboard callback sets the value for the primitive in the variable draw_type, which is then queried in turn in the drawing callback. The following commands from the program draw a red triangle:

glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f(-100.0f, 0.0f, -100.0f);
glVertex3f(-100.0f, 100.0f, -100.0f);
glVertex3f(0.0f, 100.0f, -100.0f);
glEnd();

Most routines have been taken over entirely or expanded from the last part of the course, and the program run has remained the same. During navigation it sometimes happens that drawn surfaces are not visible. This occurs when a surface is turned away from the onlooker, since the sequence of vertices of a polygon is normally defined uniformly, clockwise or anticlockwise. It is prevented with the command glPolygonMode(GL_FRONT_AND_BACK, GL_FILL), which declares both sides of the polygons as visible.




The six normal vectors used for the dice in Listing 2 (described below):

Forwards:   glNormal3f( 0.0f,  0.0f,  1.0f);
Backwards:  glNormal3f( 0.0f,  0.0f, -1.0f);
Right:      glNormal3f( 1.0f,  0.0f,  0.0f);
Left:       glNormal3f(-1.0f,  0.0f,  0.0f);
Up:         glNormal3f( 0.0f,  1.0f,  0.0f);
Down:       glNormal3f( 0.0f, -1.0f,  0.0f);

Info
OpenGL homepage: http://www.opengl.org
OpenGL and GLUT information: http://www.xmission.com/~nate/opengl.html ■

Here, the first three values specify the Red Green Blue (RGB) values at which the light should shine. The ambient component of the light is the part that comes from no particular direction. It arises, for example, when light falls into a space and the rays strike everywhere and are reflected until they no longer come from any definable direction and are only present as background light. The diffuse portion of the light comes from a specific direction and is reflected evenly over an area; areas which are tilted towards the light source appear brighter than those turned away from it. The specular part of the light also comes, like the diffuse part, from one direction, but is reflected unevenly over an area, creating bright spots of light on surfaces. These values are now allocated to the light source GL_LIGHT0 as follows:

glLightfv(GL_LIGHT0, GL_AMBIENT, LightAmbient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, LightDiffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, LightSpecular);
glLightfv(GL_LIGHT0, GL_POSITION, LightPosition);

The light source itself is switched on with glEnable(GL_LIGHT0); and with

glEnable(GL_LIGHTING); the lighting calculation is started by OpenGL. Now an area no longer appears just in its full colour, but brighter or darker, depending on how it stands with respect to the light. OpenGL provides a maximum of eight light sources, which can have different colours and positions and can be switched on or off. So that the light source also works on coloured areas (only very few are white), OpenGL still has to be instructed how to apply the light calculation to the colour values of areas:

glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
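Put together, and greatly simplified (our summary, not the exact formula from the OpenGL specification, which also folds in material colours and attenuation), the intensity calculated for an area is roughly:

\[ I \approx I_{ambient} + I_{diffuse} \cdot \max(N \cdot L,\ 0) + I_{specular} \cdot \max(N \cdot H,\ 0)^{shininess} \]

where N is the unit normal of the area, L the direction to the light source and H the half-vector between viewing and light directions. This is why the normal vectors above matter: the dot product N · L is what makes tilted areas darker.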

Below is a short sample program, which draws, lights and rotates a dice made of GL_QUADS. Using ‘l’ and ‘o’ the light source can be turned on and off. Rotation can also be modified: ‘s’ for stop and ‘g’ to continue rotating. ■

Listing 2, Light.c

The program is compiled with:

gcc -I . -c Light.c
gcc -o Light Light.o -lGL -lglut -lGLU

The sample program draws an illuminated dice and rotates it. The light source is situated behind the (transparent) onlooker and lights the dice from the front, so it is easy to see how the areas become bright and dark. The code for setting up the light source and the lighting, as described above, is in myInit(): first, values are defined for the position and the properties of the light source, and then they are assigned to it. The dice is drawn in DrawScene(). First the representational matrix is created and translated backwards using

glTranslatef(0.0f, 0.0f, -5.0f);

and then rotated:

glRotatef(rtri, 0.0f, 1.0f, 0.0f);
rtri += 0.1f;

The dice consists of six GL_QUADS and is so simple that normal vectors do not have to be calculated on a large scale: they are the simple vectors which point forwards, backwards, right, left, up and down (see the glNormal3f table above). Each GL_QUAD is assigned a different colour, so that the sides are easy to distinguish. Obviously, a dice is not exactly a complex object, but one with 20,000 polygons would be beyond the scope of any printed listing. The dice serves as the basis and can be expanded with a bit of effort. But the calculation of the normal vectors should not necessarily be undertaken manually; it should be automated, something we will come to in a later instalment of this course. The plan for the next part is to explain the world of matrices in more detail. Then it will be possible to program more complex procedures than a mere dice rotating about its own axis.
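As a hedged sketch of the pattern each face of the dice presumably follows in Listing 2 (coordinates are our own, for a unit-ish cube), the normal is set before the vertices so the lighting calculation knows which way the face points:

glBegin(GL_QUADS);
    glColor3f(1.0f, 0.0f, 0.0f);      /* a red face */
    glNormal3f(0.0f, 0.0f, 1.0f);     /* this face points forwards */
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f( 1.0f, -1.0f, 1.0f);
    glVertex3f( 1.0f,  1.0f, 1.0f);
    glVertex3f(-1.0f,  1.0f, 1.0f);
glEnd();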




MAIL SERVER

Installing a basic mail server.

POST MASTER COLIN MURPHY

Is there some element of your email software that bugs you? Do you get the feeling that you’ve lost control of how your machine exactly handles your email? Maybe you should consider, or reconsider, running your own email server and taking back control.

When it comes to email so many people, even those who should know better, make do with a monolithic email program, software which takes all of the responsibilities of creating, processing and displaying your email. This goes very much against the grain for a UNIX/Linux system, where modularity is the order of the day, where all of the separate processes needed for a particular service (in this case email) are handled by individual programs. This article will describe how you can set up a very basic email server for a dial-up Linux machine with no other network connections. We will have to assume that you have already configured your system to connect to the Internet. This is the simplest example for the most common of situations, where the basics can be learnt. From here you can further configure to suit your own needs. All this article can really hope to do is give you the impetus to start the ball rolling. There is an ever increasing range of software that you can use to run as your email server and this variety can be enough to dissuade people from ever starting what can be quite a simple and rewarding

task of running (or should that be tinkering with) your own mail server. It would be pointless for us to suggest which software you should choose, because that depends so much on your requirements, and these differ from case to case, even if only slightly. We've chosen packages here for demonstration purposes only; you really should do the homework, look at what's available and make your own choices for your own situation. Our choice of packages, Postfix and fetchmail, was made mainly on the grounds of availability (we expect most of the boxed set distributions to include them) and a balance between ease of configuration and power. You will also need a package to read and write your email with. For this you might like to reconfigure the software you currently use or, better still, experiment with a new package during a testing phase. We'll be using Kmail as an example here.

Postfix Postfix is an MTA (a Mail Transport Agent) with responsibility for moving mail around from place to place, most importantly, moving new email that you have created from your machine to the big bad world of the Internet. You may already have an MTA installed, if it is Postfix then all well and good. If it is



some other MTA, Sendmail being a likely candidate, some work will need to be done first. Use RPMs to install Postfix; if there is another MTA installed already, your package manager will complain and you will know what to uninstall beforehand. Postfix comes with lots of documentation, which you should at least look at, but don't be put off if it seems unclear: the most basic configuration, which we are dealing with here, doesn't require very much of it. The upshot of it all is that you need to add some lines to one or more of the Postfix configuration files. The most important file is main.cf, which will most often be found in the /etc/postfix/ directory. If it's not there, try running

locate main.cf

at the command line to get some clues as to where it might be hiding. With an editor, add the following details to main.cf (the end of the file will do fine):

relayhost = [mail.ispname.com]
defer_transports = smtp
disable_dns_lookups = yes

with mail.ispname.com in the first line changed to the address to which you currently upload your mail at your ISP. Details of what this is can be found by checking your ISP's support pages or by looking at the configuration details of your current email program; for example, in Kmail look under Settings/ Configuration/ Network/ Sending Mail. It will be something like post.demon.co.uk or smtp.uklinux.co.uk. Explaining what all this is:

1. relayhost is the name of your ISP's mail server, which we are going to take advantage of because, hopefully, your ISP is always connected to the Internet.
2. defer_transports is present because we are not always connected to the Internet, so we have to take responsibility for when our mail server should try to send its mail.
3. disable_dns_lookups is set because, not being connected to a local network, we are unlikely to


have our own local DNS server running, so looking for one would cause problems.

On occasion things go wrong, and it is usually better to know about it than to bury your head in the sand. When something does go wrong with email, the mail servers, your local one or those outside, will want to tell someone about it. They send email to the postmaster, a special user on the system. Obviously you are not going to want to log in as this special user just to wait for something bad to happen, so arrangements are made for the postmaster's messages to be sent somewhere more convenient, say, to your own login. This is done by setting up an alias in the file /etc/postfix/aliases, which you again need to edit, changing the postmaster entry from root to your most frequently used login name (a sketch of typical entries follows below). We should be safe in assuming that root is not your most frequent user. If your ISP supplies the facility to use an unlimited number of email addresses, and you have taken advantage of this, you may want to set up more aliases for those other addresses. If there is more than one user on your machine you should set up aliases for them as well, so that their post goes directly to their login account; unless, by some happy coincidence, their email user name (the bit before the @) matches their login name, in which case it happens automatically. Once you have edited the /etc/postfix/aliases file with your information, you need to create a database from it by running the command

postalias /etc/postfix/aliases

This database is required by Postfix, so even if you have decided not to make aliases for your users, you must still run this command as part of the configuration process. To make sure that Postfix looks again at the new configuration you need to restart it with the command

/etc/rc.d/init.d/postfix reload
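The alias entries themselves might look something like this (the login names are invented for the example):

# /etc/postfix/aliases (excerpt)
# mail for the postmaster and root lands in jane's mailbox:
postmaster: jane
root: jane
# an extra address supplied by the ISP:
sales: jane
# a second user whose email name matches their login
# needs no entry, but an explicit one does no harm:
colin: colin

Remember to run postalias again after every edit, so the database stays in step with the file.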




We just need to set up Kmail to pass any new email to the server that has been installed, so go back to the configuration screen and set Kmail to use /usr/sbin/sendmail instead of sending directly to your ISP's mail server (that's if you were using Kmail previously). The /usr/sbin/sendmail here is actually still part of Postfix; it's just a neat way of allowing Postfix to take over tasks that have been configured for the real Sendmail package. While you are still in the Kmail configuration screen you should also make sure you have sensible info set up under the identity tag.

Compose an email, either to yourself or, better still, to an email echo server like echo@tu-Berlin.de, which, on receipt of your message, just sends it back to you with all the message headers on display (useful for tracking down any unusual activity). Send it from Kmail, but do this while offline. To get Postfix to actually send the mail once you have made an online connection, you need to fire off the command

/usr/sbin/sendmail -q

which is fine for testing purposes, but would become a real pain if you had to type it every time you wanted to send some mail, which is why there is an automatic way. Back in your editor, add to or create the file /etc/ppp/ip-up.local with

#!/bin/bash
/usr/sbin/sendmail -q

and make sure that this file is executable with the command

chmod +x /etc/ppp/ip-up.local

So now, every time your Internet connection starts up, this script will be run and Postfix will be told to send its mail. If you can think of any other programs or utilities that you use online, you could also add them to this script.

Configuring fetchmail

Postfix and all the other MTAs can look after the movement of email, inward or outward, using the Simple Mail Transfer Protocol, but usually only for machines that have a permanent connection to the Internet. The majority of ISPs expect their dial-up users to retrieve email using a different protocol, usually POP3, though others are possible too, which is why we need to call upon fetchmail. Make sure you have fetchmail installed and, if you have it available, the stand-alone graphical configuration tool for fetchmail, called fetchmailconf, which takes away completely what little pain there might have been in configuring. At its most basic, fetchmail needs to know where to collect your mail from, so you will need the address of the mail server from which you will download your mail; details will be found on your ISP's support pages, or can be plucked from the configuration details of your current email program. For example, in Netscape you would look at Edit/ Preferences/ Mail Servers/ Incoming Mail Server. fetchmail also needs to know the user name and password that you use to log into your ISP.

You will need to configure fetchmail as root, so either log in as root or become a super user with the su command at the terminal prompt. To start fetchmailconf, if you have it, just type

fetchmailconf &

[right] Starting to configure
[below] Initially choose Novice




at the command prompt of your terminal. Choose the Novice Configuration option and enter the name of your ISP's mail server, then your user name at your ISP, then your password. You may also want to check the box to Suppress deletion of messages after reading initially, until you are confident that all is working, minimising the chances of losing any of your email. OK all of this information and save the configuration file. Go online, hit the Test fetchmail button and wait a little while. It will take a few moments for fetchmail to talk to your ISP's mail server, so the output in the fetchmail run window won't appear until your email has downloaded. Hopefully, you will see some output with

fetchmail: normal termination, status 0

near the end, meaning all went well, or status 1, meaning that you don't have any email to download, so send yourself some and try again. Anything else means you have a problem: look the status number up in the man page for fetchmail for clues.


Info Postfix http://www.postfix.org/ Fetchmail http://www.tuxedo.org/~esr/fetchmail# Alternative documentation http://www.redhat.com/support/docs/faqs/RHpostfix-FAQ/book1.html http://www.mandrakeuser.org/docs/connect/cmail.html ■

To configure fetchmail without the graphical configuration tool, edit the file /root/.fetchmailrc, while logged in as root, so that it reads:

poll pop.ispname.net protocol pop3 username "YourUserName" password "YourPassword"

changing the ispname, YourUserName and YourPassword parameters to your own details. Save it and then, at a command prompt, do

chmod 600 /root/.fetchmailrc

which will stop anyone other than the root user from looking at the file and seeing your password. To test it, go online and at a root command prompt enter

fetchmail -d0 -v --nosyslog

Just as with Postfix, you won't want to be messing around with running fetchmail from the command line every time you want to see if you have mail, so you need to add a line to the /etc/ppp/ip-up.local file:

fetchmail -d 600

This will poll your mail server every 10 minutes (600 seconds) to check for and download any new mail. You should also add to the file /etc/ppp/ip-down.local the line

fetchmail --quit

to stop fetchmail when you log off, otherwise it will start complaining about not being able to find a DNS server. ■

Enter the ISP details
[far left] Choose the protocol. [left] Is your password safe?




KYLIX

Kylix 1.0: Delphi for Linux

DEVELOPMENT-CAPABLE SEBASTIAN GÜNTHER

Now at last, after constant postponements, the first version of Kylix, Borland's Linux port of the Delphi development environment, has reached the UK. The conversion has only partly succeeded, and plenty of problems spoil the fun of working with what is truly a very powerful development tool.

In autumn 1999, when Borland announced a Linux version of Delphi, and later of C++ Builder as well, there was great astonishment, even among the developers in the company itself. After all, these products live very much by their visual nature and should therefore depend heavily on Windows. The launch was initially planned for one year after the announcement, but slipped by about six months in view of the high development effort. The recently completed US retail version can now show whether the development time was nevertheless sufficient for Kylix to follow in the successful footsteps of Delphi.

Windows past

The promise made by Borland was to offer the capabilities of Delphi under Linux; in a later version an equivalent to C++ Builder is intended to follow as part of Kylix. But what

does Delphi do now? In all product variants, the Delphi IDE simplifies the creation of graphical user interfaces (GUIs). Control elements are no longer created by function calls in the program code; instead, a window, dialog or form is drafted in a form designer, by selecting control elements from a component palette and determining their position and size visually by mouse click. A properties editor, which can be opened at any time as a free-floating window, displays all the properties of the currently selected components in the form of a table and enables direct manipulation of these properties by keyboard and mouse; the changes can be seen immediately in the designer. The more expensive versions of Delphi offer more besides pure form design (ordinary windows and dialog boxes are also referred to as forms): with the aid of database support, control elements can be linked to a field of a data table; the Web support allows the dynamic creation of Web content; and the support for COM/ DCOM/ActiveX and CORBA under Windows makes it possible to develop real multi-layer database applications.

Linux future So how does Kylix convert this to Linux? Firstly, the software package will have to be installed; this can be done using the very well-known installation



program from Loki Entertainment Software, either via Gtk-based graphical user guide or by command line. Overall, it performs well: Among other things, it checks at the start whether all the requirements are met. So a kernel from at least the 2.2 series must be used, Glibc from version 2.1.2 and Libjpeg from 6.2 are absolutely essential. A full installation takes up about 200MB on the hard disk, and this can only be reduced noticeably by doing without the online documentation, which would roughly halve it.

Old wine in new bottles

The long load times when you start the integrated development environment are an early clue: the IDE is not so much a newly developed Linux application; rather, the old, familiar Delphi IDE has been ported to Linux with the aid of the Wine library. Wine, the imitation of the Windows programming interface for Unix-type systems, does allow a very rapid conversion of Windows software to Linux (among other systems), but it brings a number of considerable disadvantages with it: long load times, high memory consumption, slow startup, a sluggish graphical user interface and font problems with many X11 installations. If possible, therefore, it is advisable to use TrueType fonts from an original Windows installation. Whether Wine is also responsible for the over-frequent crashes of the IDE is a question nobody can answer. After the start, in any case, four windows appear, floating on the desktop. The command centre, a long narrow window at the top edge of the screen, contains the menu bar, symbol bars and the component bar. Also opened: a form in design mode, a source text editor and the Object Inspector. All registered component classes, spread over several pages, are shown in the component bar in symbol form. This bar is important in connection with the designer, a sort of form designer: a component, for example a simple button, is selected from the corresponding category by a mouse click, and another mouse click in the form view inserts a new component of the selected type at the site of the click. Each component can later be moved by mouse, and it is easy to change its size directly. Clicking on a component in the designer selects it; the Object Inspector always displays the properties and event-handling routines of the currently selected component, presented in two columns: the first contains the names of the properties, the second the corresponding values. A click on a value makes it editable. Unfortunately Kylix can only show properties as pure text; a graphical representation of certain types of property, such as colour values, would surely be more user-friendly.

Four sections of the component bar are devoted to the purely visual components. Behind this is hidden, basically, all the types of control element already familiar from Windows: buttons, menus, symbol bars, but also complete dialog boxes for things such as file selection. Three additional sections serve as database support: Special database-capable variants of the normal control elements can be connected to a data source component.

[top] The IDE after creating a simple MDI application: As well as the command centre and an editor window, a form editor and the property editor can also be seen [above] The automatically-created basic framework of the MDI application is in fact ready to run. But even here the first errors crop up

Flexible database support with dbExpress

This data source forms the link between control elements and data set components: for example, during development of the application, should it ever become necessary to change from an SQL table to an SQL stored procedure, to do this it is only



necessary to specify the data source of a new data set component; all control elements connected with the data source then automatically access the new mechanism. Kylix offers several alternatives as data set components, but there have been some major changes in this area compared with Delphi under Windows: in place of the old BDE (Borland Database Engine) and MIDAS there are now a good half dozen new components named dbExpress. They use their own internal Kylix database drivers to execute commands. Drivers for the freely available databases MySQL and Borland InterBase are delivered as standard, as well as for the well-known commercial products IBM DB2 and Oracle 8i. Friends of older database systems seem at first to have been left out in the cold, because unlike Delphi, Kylix so far has no support for a general interface standard like ODBC or ADO, although this would be perfectly possible with UnixODBC, as StarOffice, for example, demonstrates. This gap will surely be closed very quickly by interested third-party suppliers. As data set sources, dbExpress offers the usual database objects: direct read and write access to tables, the result set of a stored procedure or the result of a manually coded SQL query. A data source can also be linked to a table to create a master-detail relationship: the table displays all the data sets whose value in a given field corresponds to the current value of a selected field of the master data source. A classic example: a master table contains customer data, while an invoice table acts as the detail table. The customer number of an invoice is linked with the customer number in the customer data table, and the master-detail relationship ensures that the detail data set always shows exactly the invoices of the currently selected customer. Also of interest are the so-called Client Data Sets. These enable the use of a simple database in memory, swapping to a file on the user's computer, but also complex mobile solutions, in which a client does not always have access to the database server on a network.

When editing, built-in programming aids such as code completion are extremely useful. Here you can see what happens if you hesitate after typing the dot: Kylix displays which elements the global application object possesses. You can now select a method, such as MessageBox, from the list

Rapid development for network and Internet

The last big area of the component bar concerns the development of network and Internet applications. As well as components which encapsulate TCP/IP or UDP/IP sockets, the Web dispatcher distributes HTTP queries to various data-producing components called producers. These HTTP queries can be differentiated according to the type of command (such as GET, HEAD, POST, PUT) and the address (URI). Since there are producer components that can create HTML pages or tables automatically from a database, whole Web servers can be created using Kylix. But equally, it is also possible to create just a CGI application or an expansion module for the Apache server. For further-reaching Internet and network support Borland supplies, along with Kylix, the new Linux version of the well-known open source component library Indy (formerly known by the name Winshoes). This allows access to practically all relevant Internet protocols: TCP/IP, UDP/IP, daytime and time servers, DNS, Echo, finger, FTP and TFTP, Gopher, HTTP, ICMP, POP3, NNTP, QOTD, SMTP, SNTP, Telnet and WhoIs. Raw sockets for communication under TCP or UDP are also supported by Indy. Servers for the corresponding protocols can also be realised easily, those supported being TCP/IP, UDP/IP, Chargen, daytime and time servers, DICT, Discard Protocol, Echo, finger, Gopher, Hostname, HTTP, IMAP4, IRC, Portmapping, NNTP, QOTD, Telnet, TFTP, IP Tunneling and the WhoIs service. 21 additional components provide helper functions, such as encoders and decoders for important codings like Base64 or UUEncode.
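As a hedged sketch of how such a component is used (class and method names are from the Indy library as we recall it from this era; signatures may differ between Indy versions, and the host name is invented), querying a daytime server could look roughly like this:

uses IdTCPClient;

procedure QueryDaytime;
var
  Client: TIdTCPClient;
begin
  Client := TIdTCPClient.Create(nil);
  try
    Client.Host := 'time.example.org';  { hypothetical server name }
    Client.Port := 13;                  { the classic daytime port }
    Client.Connect;
    WriteLn(Client.ReadLn);             { the server answers with one line }
    Client.Disconnect;
  finally
    Client.Free;
  end;
end;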

Development means more than just clicking A relatively large part of applications development does consist of clicking together existing components into data modules or forms and placing the corresponding properties in the object inspector. A great many assistants and special component editors continue to support the creation of complex applications. But at some point, glue code will have to be written, to bond, hold together or expand the structure. This is where the powerful source text editor and the object Pascal compiler come into play. For every object that can be created in the large designers (such as for forms or data modules), a unit is created automatically. In Pascal, larger applications are not simply distributed over several source text files, but a clear distinction is made between the main program and the add-on modules – the units. Each unit can be independently compiled and integrated into various applications at



the speed of light. The split into main module and units is, by the way, the main reason for the generally very short compile time of these sorts of Pascal compilers, as here it really is only the parts of an application which have actually changed that have to be recompiled.
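By way of illustration, a minimal sketch of such a unit (all names invented for the example); only what the interface section declares is visible to the main program and other units:

unit Greeter;

interface

procedure SayHello(const Name: string);

implementation

procedure SayHello(const Name: string);
begin
  { the implementation can change without forcing users of the unit to change }
  WriteLn('Hello, ', Name, '!');
end;

end.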

Editing at a higher level

The IDE creates the basic framework for units itself: for each form or data module, a new class is derived from the respective base class, and the components used all reappear as fields within the new class. The programmer can save even more typing work. The code written most often is code intended to respond to specific events, and the Object Inspector, as already hinted at, lists not only the properties of a component but also all its possible events. A simple double-click on such an event entry causes the IDE to insert an event-handling routine automatically into the source text of the corresponding unit; this then only needs to be filled with code. That is a real delight with the easy-to-use editor because, as befits a development environment of this class, it offers more than just syntax highlighting and adaptable keyboard layouts. If, for example, a certain keyword is typed in and then Ctrl+J is pressed, the editor recognises this as a template command and replaces the keyword with a more complex expression; thus forb plus Ctrl+J turns into a complete for ... := ... to ... do begin ... end block. Borland combines other programming aids under the name Code Insight, all based on an evaluation of the source texts during editing. Code completion becomes active after a short pause after entering a dot, or on pressing Ctrl+Enter. It shows a selection list of all the appropriate continuations at the current cursor position; for example, after the name of an object variable and a following dot, the list shows all properties and methods of this object. The editor recognises the type of a variable from its previous declaration. If a procedure or a function is called and the parameter list is to be entered, the IDE helps here again: a brief hesitation during input leads to a display of the declared parameter list, so there is no need to guess, or to look up the correct parameters in the documentation. And during actual programming the IDE provides a built-in symbol browser, the Code Explorer, which can display the structure of a module in real time.

Easy debugging During troubleshooting via the integrated debugger, the ToolTip support is useful as it is familiar from other development environments under Windows. If the mouse pointer in the editor

stops over a symbol name, the value of this symbol is calculated and displayed in a ToolTip. So in many cases it is no longer necessary to work laboriously over the additionally available expression evaluation or the watch list. The debugger turns out to be an indispensable tool during application development, it supports practically everything that could be expected of a modern debugger, including an in-built disassembler. For handling larger projects, which are spread over several applications or modules in the shared object format (.so), the IDE has a project manager. It combines all binary modules (files with executable code) into a project group, for which an individual makefile is created. On the other hand, in order to combine a group of components as smaller units into a SO-module, packages can be produced. A package combines several units with components and there is also a comment as to which other packages this package depends on. This technology makes it easy to use larger components from several applications in combination via a SO-file. The components supplied with Kylix are even installed in such packages. An application created with Kylix finds and loads the necessary package SOs at run time by itself – no registration in the system is necessary. New packages and components though, do have to be registered in the IDE, so that they can be implemented in applications. The online documentation leaves a mixed impression. Borland has licensed a tool here that makes it possible to show Windows help files under Linux. The documentation thus corresponds, in terms of structure, to that of Delphi. But the descriptions are not error-free or complete and generally the help texts could easily be a bit more comprehensive in many places. Also, many things

The IDE provides a powerful debugger. Since Kylix uses a real compiler, there is also a display of the CPU register and a disassembler




VisualCLX outside, Qt inside

The online help is based on the help files familiar from Windows, which have been extended according to the scheme which you may know from Delphi

are described only from a very high level of abstraction. Anyone interested in the internal method of working will not find much information.

Borland has developed a component library for Kylix, for use on several operating systems, called CLX (pronounced clicks). It is derived from Delphi’s VCL, the Visual Component Library, and also works with Delphi 6 on Windows. CLX is split into several parts: The BaseCLX contains general classes and routines (for file accesses or loading and storing components for example). This part is relatively independent of the operating system, as it largely relies on the underlying Run-Time Library, or RTL, which itself abstracts most of the functionality. The same holds true for NetCLX - the network components - and for DataCLX, as this is merely a link between dbExpress and VisualCLX. This ultimately contains all visible components, thus mainly control elements. It rapidly turned out to be a new wrapper for an old acquaintance: TrollTech’s Qt-Library, which is also the basis for the popular KDE-Desktop environment. The sense and nonsense of this decision may be disputable, because Qt is still far more than just a GUI library. When all’s said and done, the entire functionality of BaseCLX is also reproduced here one way or another, but, mainly for reasons of compatibility with Delphi, is not used by VisualCLX.

Visual development environments In the last few years a new method has been becoming ever more popular: Instead of writing software complete, line by tedious line manually, advanced development packages support or replace this process with a range of visual help programs. These attempt to reduce applications development to combining ready-made components plus a bit of classic code as glue. For example Microsoft, with Visual Basic, scored a direct commercial hit in this field, whereupon suppliers of countless additional components shot up out of the ground like mushrooms. But other firms too, such as Borland, were developing similar solutions at the same time. Borland was formerly mainly known for two products: the C/C++ compilers (starting with Turbo C) and Borland Pascal, the amalgamation and further development of the classics Turbo Pascal and Turbo Pascal for Windows. Visual Basic (VB), though, had to combat a number of deficits: For a long time, VB was not really a proper compiler, but the code was interpreted at run-time, which did not exactly have a positive effect on the execution speed. On top of this, the component model used was anything but fast or memory saving. The upshot was that VB applications on the computers of that time turned out to be very large, memory-guzzling and slow. But on the other hand application development was extremely simplified, which in many cases more than compensated for the greater demands on hardware. Borland read the signs of the times, and so, in a tour de force, Borland Pascal, a representative of the classic method of programming, was expanded into an easy development package for modern, graphical applications. Two things were necessary for this: Extending the language of Pascal for better support of objects and components - the language variant Object Pascal was created - and a highpowered integrated development environment (IDE) together with special support for component technology for rapid application development (RAD). The finished package with RAD-IDE finally came out under the name of Delphi and was now available only for Windows, while Borland Pascal also supported DOS. In parallel, a product was created, with C++ Builder, which on the basis of the same component library made it possible to work with the language C++ instead of Pascal.




Object Pascal

The variant of the programming language Pascal (originally created for educational purposes) on which modern Pascal compilers such as Delphi, Kylix or even Free Pascal are based was christened Object Pascal by Borland. In comparison with the classic ANSI Pascal standard it was mainly expanded by options for object-oriented programming (OOP). Borland in fact introduced objects with Turbo Pascal 5.5 more than ten years ago, but with Delphi the OO capabilities were considerably extended. An example demonstrates some of the new capabilities, using a class that generates random numbers:

program ClassDemo;

type
  TRandomGenerator = class
  private
    FMaxValue: Integer;
    function GetValue: Integer;
  public
    constructor Create;
    property MaxValue: Integer read FMaxValue write FMaxValue;
    property Value: Integer read GetValue;
  end;

constructor TRandomGenerator.Create;
begin
  Randomize;
  MaxValue := 10;
end;

function TRandomGenerator.GetValue: Integer;
begin
  Result := Random(MaxValue + 1);
end;

var
  RandomGenerator: TRandomGenerator;
  i: Integer;

begin
  // Create random number generator
  RandomGenerator := TRandomGenerator.Create;
  try
    RandomGenerator.MaxValue := 99;
    WriteLn('10 random numbers in the range 0..99:');
    for i := 1 to 10 do
      WriteLn(RandomGenerator.Value);
  finally
    RandomGenerator.Free;
  end;
  WriteLn('Done.');
end.

One of the extensions in Object Pascal is properties, which exist in addition to the normal methods and object fields (the variables within an object). A property has, like a field, a data type. For a property, though, no code is created, nor is memory space reserved in the object: the property is a virtual construction, which can be used in program code almost like a field. To give the property a meaning, the programmer states where it gets its value from on a read access, and what is to happen to the new value on a write access. In both cases it is possible, separately in each case, to define a field or a method as source or destination. In the example, the property MaxValue corresponds precisely to the internal field FMaxValue, while Value can only be read: each read access is identical to calling the method GetValue. Properties were mainly created for component-oriented programming, since run-time type information (RTTI) is created for all properties within a published section (an extended public section): at run-time, the list of all properties of a class can be queried. But even without RTTI, properties have a few advantages. One could easily extend the sample program so that a write access on Value leads to an initialisation of the random number generator with a specified value. In exactly the same way, MaxValue can later be changed to method access, without the rest of the application having to be changed. Compared to C++, it is noticeable that objects are always stored on the heap, so they cannot be placed on the stack and automatically constructed and destructed. But this is only a disadvantage in a few cases, for example when simple data structures (such as a rectangle structure: x1, y1, x2, y2) are to receive a simple OO wrapper. By way of compensation, a few of the complicated C++ peculiarities, such as default and copy constructors, could be dropped in Object Pascal; as a rule, that is, with storage on the heap, practically identical procedures are used in the various compilers. As can be seen from the example, Object Pascal also supports exception handling. The functional method is extremely similar to that of C++ or Java: if an error occurs (an exceptional condition), an exception object with additional information is created. Exceptions can be caught via a try/except block, while a try/finally block allows code to be specified that is also executed if an error condition arises within the try section:

try
  AnyFunction;
except
  on e: Exception do
    WriteLn('error: ', e.Message);
end;

This mechanism allows, for example, the reliable release of previously reserved areas of memory or objects; as is typical of compiled languages, Object Pascal uses no automatic memory management. Borland has also added a few more things to the Pascal language: AnsiStrings raise the maximum length of Pascal character strings beyond 255 characters and manage copies of these strings very efficiently (copy-on-write). Unicode strings can be stored in a WideString variable. The length of the new dynamic arrays can be defined and altered at run-time, if necessary with checking for correct array indices (range checks). Variables of the Variant type can store almost any other data type; this type was mainly introduced to support COM/ActiveX under Windows, but is also available, slightly limited, in Kylix (a Variant understandably cannot point to a COM object there). One innovation may be controversial: pointers no longer have to be dereferenced with ^ if the source text remains unambiguous. This surely has something to do with the trend, in languages like Java, for memory management to disappear completely into the background, and pointers just do not fit into this concept.
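To make the earlier remark about MaxValue concrete: switching the property from field access to method access needs only a setter, and callers are unaffected. A minimal sketch extending the class above (the setter and its range check are our invention):

  private
    procedure SetMaxValue(AValue: Integer);
  public
    property MaxValue: Integer read FMaxValue write SetMaxValue;

procedure TRandomGenerator.SetMaxValue(AValue: Integer);
begin
  { reject nonsensical ranges; FMaxValue is the field from the listing above }
  if AValue < 1 then
    AValue := 1;
  FMaxValue := AValue;
end;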




Interview with Jason Vokes, Director, Rapid Application Development at Borland

Linux Magazine: What made Borland decide to develop Kylix?
Jason Vokes: I regard the Linux operating system literally as a golden opportunity to reach new developers. We began with Delphi and ended up with a cross-platform system.

Linux Magazine: Borland wanted to introduce Kylix a year after the announcement. The deadline was passed by about six months. Why?
Jason Vokes: There were two main reasons for this. Firstly, there was no rapid application development environment. Under Windows, we were used to the tools and advantages available there. Here we had to start with rudimentary things such as gcc and gdb. Once our own debugger and our IDE were available, productivity increased. The second point was that the various Linux distributions all behave in different ways. It took longer than planned to complete it.

Linux Magazine: Was the porting of the IDE with the aid of Libwine, which has now been done, your second choice?
Jason Vokes: We originally only planned Delphi for Linux, but before we really started, we noticed that the market needs more, namely a cross-platform environment. That's why we developed the component library CLX.

Linux Magazine: Is it your intention that the Kylix IDE will one day also run with CLX? Is there a specific deadline?
Jason Vokes: The use of Libwine was a time-to-market decision. We wanted to be fast. In future there will of course be a complete CLX IDE. It is only internally that there are precise deadlines.

Linux Magazine: Have the stability problems with the IDE anything to do with Libwine?
Jason Vokes: These problems lie primarily with the Linux loader. Our developers have suggested numerous fixes to the Linux community and sometimes even implemented them themselves. Many were adopted and are available, and some have yet to reach the Linux distributions. When they do, stability will improve. This has nothing to do with Libwine.

Linux Magazine: When will the no-charge version of Kylix for the development of free software be available?
Jason Vokes: By the middle of this year. We are not announcing a specific date at this time, though.

Linux Magazine: Borland has made parts of CLX open source. Will there be other components, too?
Jason Vokes: No, there are no plans for that.

Besides, Qt is a C++ library, which cannot be used directly with the Kylix compiler: a C wrapper is needed, which repackages all the classes, methods and functions of Qt into plain C functions. These C functions can then be imported by a Kylix unit, so that ultimately VisualCLX can use Qt. This was certainly the fastest way for Borland to get Kylix ready for market, but it also means that visual Kylix applications need more memory and depend on countless libraries: starting with Kylix's Qt interface unit libqtintf, via Qt 2 itself and all sorts of X11 libraries, through to the C++ run-time library. So it will be especially interesting to observe how well Kylix applications operate under older or future Linux installations. To

complicate matters, Borland itself provides no support for the creation of installation programs, as is the case with Install Shield Express in Delphi.

Quo vadis Pascal?

Kylix is not completely without competition under Linux. Firstly, there are the C and C++ compilers with ever more powerful IDEs such as KDevelop. Secondly, in the server field the importance of compiled languages will certainly continue to fall as special scripting languages like PHP, and highly specialised visual development environments, gain ground. And finally, Borland should also keep an eye on the field of classical programming: Kylix is certainly



suitable for the creation of command line-based tools, too, and the good editor and debugger are a great help in this. Nevertheless, there is stiff competition in this field with the two free projects GNU Pascal and Free Pascal; in fact, the latter provides not only compilers for several operating systems, but overall comes with a considerably broader palette of additional units and C-Header conversions. It is only in the IDE field that Borland, despite the said problems, has a clear advantage.

Prices and licences

Borland is demanding truly beefy prices for Kylix, which are scarcely justified in comparison with the markedly more stable and more complete Delphi. Buying Desktop Developer would cost some £800, even though this version still lacks the full NetCLX for developing network and Internet applications; that is reserved for the Server Developer edition, which costs twice as much at about £1600. But Delphi offers considerably more in this price class, for example support for ActiveX (Windows-specific) or the not-insignificant CORBA architecture, which could also be used under Linux without any problem. And yet Borland has announced that from the summer a version which is free of charge (but not free), probably derived from the Desktop Developer release, will be on offer for the development of free software under the GPL licence, for download or on CD for about £80. CLX received a double licence for this: Borland's commercial No-Nonsense Licence and the GPL. This could have far-reaching consequences. Firstly, it is to be expected that a flood of programs under the GPL licence will descend on Linux, though it remains to be seen whether, in view of the problems mentioned with library dependencies, this will be a curse rather than a blessing. On the other hand, many component developers who want to port their products from Delphi to Kylix will also have to consider the use of such a double licence if they want to build up a substantial following of users.

Conclusion

Kylix is currently definitely the most comprehensive software for rapid application development (RAD) under Linux. It should give the operating system a bit of impetus, because developing applications has never been so simple. Ultimately, however, Kylix has to be described as a very hasty port of the old, familiar Delphi. The commercial version is anything but polished, and a few extra months for error correction really would have made all the difference. The main aspects of Delphi can also be found in Kylix, but behind the scenes the first version comes across like a botched job, which becomes apparent through the instabilities of the development environment and some of the errors in the CLX run-time library. The frequent crashes of the IDE and the CLX bugs are something Borland will eventually get to grips with. It would certainly be very helpful if the IDE were converted from Wine to CLX itself, but VisualCLX still lacks some urgently required capabilities for this, such as support for dockable windows. Borland should not take too long to come up with these improvements, though, because the free IDEs for C++ are getting better all the time and the teams of GNU Pascal and Free Pascal are not sitting idle. One major problem for Borland could be that the free developer community will not restrict itself to developing components and utilities for the commercial product Kylix. It will, because of the many Kylix bugs but also on principle, appreciate a free compiler and a free IDE more. And Borland cannot simply release both, as they are the essential foundation of the company's business. ■

The author

Sebastian Günther is technical director of Areca Systems GmbH in Munich, a service provider involved with networking, the Internet and, of course, Linux. For aesthetic reasons he is a great fan of the language Pascal for its own sake, and especially of the modernised variants. His first contact with Borland's Pascal compilers was with Turbo Pascal 5.5.



SCHEME


Scheme as a teaching device

LEARNING CURVE CHRISTIAN WAGENKNECHT & RONALD SCHAFFHIRT

We've already covered quite a lot of ground in just a few Scheme articles, dealing with fairly advanced topics such as first-class objects, macros and GUI programming, topics that are rather more involved in other programming languages. Scheme is unsurpassed in the level of abstraction it supports: due to very simple syntax on the one hand and semantically powerful language elements on the other, Scheme is excellently suited to formulating abstract concepts. For this reason Scheme is a favourite didactic vehicle in the teaching of students: "Represent it in Scheme and play around with the defined functions, then you will better understand what it's all about." In this article we introduce two examples of this didactic approach. The first concerns calculation with infinite objects; the second deals with Web programming. In both cases we express our ideas in Scheme and use interactive Scheme programming as a means rather than an end. We have used Chez Scheme for the implementation. It can be downloaded free from http://www.scheme.com/ as Petite Chez Scheme; the conditions of use are described there.

Calculating Infinity – Representing infinite objects with finite memory

The statement that computers can only handle finite objects is normally taken for granted. For instance, rational numbers are generally implemented as fixed-point numbers. However large the mantissa, a recurring decimal fraction like 0.333... will be ruthlessly truncated from a certain decimal digit onwards; the dots (or an overscore) in the finite notation indicate that the threes continue indefinitely. Due to truncation, not even rational numbers, let alone real ones, are represented adequately by computers. Consequently, you end up working with approximate values rather than the actual quantities. This means you are limiting yourself to machine numbers, for which some of the mathematical laws that apply to rational numbers no longer hold. Clearly, the capacity for infinite objects is limited by the always finite memory. Or perhaps not: the equivalent fraction 1/3, unlike 0.333..., offers a finite, "dot-free" representation of the same number. This observation leads to the idea of representing rational and even real numbers by the method used for their creation. In the case of 1/3: 'divide 1 by 3'.

A warning: the aim is not to perform an algorithmic operation, but rather to define a number through the (possibly non-terminating) operation employed in its creation. But how is a calculation with numbers represented in this way supposed to work? How much, for example, is the sum of two such numbers? Do the respective creation operations simply need to be added to each other in this case?

Data type "stream"

In the following text we will be introducing an abstract data type "stream" (a conceptually infinite list or sequence), which is characterised as follows: a stream is a pair whose first member is any Scheme object (such as a numeral) but not a stream, and whose second member is a stream. We access the first member with stream-car and the second with stream-cdr. The reader is strongly advised to refrain from questions regarding the implementation of language elements for streams at this point; let's just assume that everything we require is available (or built in). To create an actual stream we use a constructor, stream-cons. This expects two arguments: the two members of the pair to be created, as mentioned above. Let's look at two example streams. The first one generates the natural numbers:

(define integers
  (letrec ((integer-stream-maker
            (lambda (from)
              (stream-cons from (integer-stream-maker (+ from 1))))))
    (integer-stream-maker 0)))

Now we would like to look at 10 elements of this numeric sequence:

> (stream-print integers 10)

displays 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 on the screen.
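To tie this first example back to the introduction: the rule 'Divide 1 by 3' can itself be encoded as a stream, so that the recurring decimal 0.333... becomes an infinite stream of digits. The following sketch is our own illustration, not from the original article, and uses only the primitives introduced above:

(define one-third-digits
  ;; long division of 1 by 3: multiply the remainder by 10,
  ;; emit the next decimal digit, carry the new remainder
  (letrec ((digit-stream-maker
            (lambda (rest)
              (let ((n (* rest 10)))
                (stream-cons (quotient n 3)
                             (digit-stream-maker (remainder n 3)))))))
    (digit-stream-maker 1)))

> (stream-print one-third-digits 10)
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...

Only the digits that are actually inspected are ever computed; the 'infinitely many' threes exist merely as a promise.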




The second example concerns the Fibonacci number sequence, i.e. 1, 1, 2, 3, 5, 8, 13, 21, 34, ... The n-th Fibonacci number is defined by the following simple rule: the first two Fibonacci numbers are 1, and each subsequent number is the sum of the two previous ones. This is easily written as a Scheme procedure:

(define fib
  (lambda (n)
    (if (< n 2)
        1
        (+ (fib (- n 1)) (fib (- n 2))))))

Calculate (fib 29) and see how long it takes your computer to do this. Now let's define the (infinite) sequence of Fibonacci numbers using

(define fib-stream
  (letrec ((fib-stream-maker
            (lambda (from)
              (stream-cons (fib from) (fib-stream-maker (+ from 1))))))
    (fib-stream-maker 0)))

and then display the first 30 members of this sequence:

> (stream-print fib-stream 30)

As expected, this takes even longer than (fib 29) above. The result is:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040

If we now evaluate the same expression (stream-print fib-stream 30) again, the result is returned without any noticeable use of computing time. This is a welcome efficiency advantage, caused by the fact that elements of a stream are not re-evaluated once they have been calculated. Instead, Scheme 'remembers' already calculated stream elements.
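As an aside, the stream machinery itself offers a way around the expensive (fib n) calls: a Fibonacci stream can be defined by adding the stream to its own tail, so that each element is computed exactly once from its two memoised predecessors. This is a sketch of our own, not part of the original article; stream-map2 is a helper we introduce for it:

(define stream-map2
  ;; combine two streams element-wise with f
  (lambda (f stm1 stm2)
    (stream-cons (f (stream-car stm1) (stream-car stm2))
                 (stream-map2 f (stream-cdr stm1) (stream-cdr stm2)))))

(define fast-fib-stream
  ;; 1, 1, then the element-wise sum of the stream and its
  ;; own tail: 1+1=2, 1+2=3, 2+3=5, ...
  (stream-cons 1
    (stream-cons 1
      (stream-map2 + fast-fib-stream (stream-cdr fast-fib-stream)))))

With this definition, (stream-print fast-fib-stream 30) finishes in linear time even on the first call, because no Fibonacci number is ever recomputed.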

Evaluation concepts

Before we continue to work with streams we should look at the reason for this odd evaluation behaviour. By default, a Scheme expression in the format

(Operator Operand_1 Operand_2 ... Operand_n)

is evaluated according to the following rule: evaluate all elements of the list and then apply the operator to the operands. The sequence in which the individual parts of the expression are evaluated is not fixed; all that matters is that the operands are evaluated before the (evaluated) operator is applied to them. This is called applicative order evaluation, and it has efficiency advantages compared to normal order evaluation (which substitutes unevaluated operands and reduces the outermost expression first), as demonstrated by the following example:

((lambda (x) (x (x 5))) ((lambda (w) w) fib))
= ((lambda (x) (x (x 5))) fib)
= (fib (fib 5))
= (fib 8)
= 34

Normal order evaluation would first substitute the expression ((lambda (w) w) fib) into the body (twice, once for each x) and only then continue to reduce the resulting expression:

((lambda (x) (x (x 5))) ((lambda (w) w) fib))
= (((lambda (w) w) fib) (((lambda (w) w) fib) 5))
= (fib (fib 5))
= (fib 8)
= 34

This obviously leads to a loss of efficiency through multiple evaluation of one and the same part of the expression: in our example, ((lambda (w) w) fib) is evaluated twice. This advantage of applicative order evaluation has led to the strategy being built into all common Scheme systems. The efficiency benefits are rated more highly than correctness when reducing particularly unusual Scheme expressions such as

((lambda (x) 3) ((lambda (x) (x x)) (lambda (x) (x x))))

which are relatively rare. Here applicative order evaluation will not reach a result, even though normal order evaluation terminates: each x in the body 3 would be replaced by ((lambda (x) (x x)) (lambda (x) (x x))), and since the body 3 does not contain any x at all, there is nothing to do and the result is simply 3. Applicative order evaluation, by contrast, tries to evaluate the operand first and never terminates.

However, there is an efficiency problem with the standard Scheme evaluation itself that is tackled in other functional languages (such as Gofer) by a change in strategy. The value of the expression

((lambda (w x y z) w) 1 (fib 29) (fib 29) (fib 29))

is 1. The three elaborate calculations of (fib 29) are completely unnecessary. This leads to the idea of only ever performing an evaluation when the value of the expression in question is actually required - sometimes, as in the example above, it never is. This strategy is known as call by need. In contrast to the eager evaluation implemented in Scheme as standard, call by need is a delayed evaluation (lazy evaluation). The realisation of delayed evaluation in Scheme simply requires two (built-in) language elements: delay, to create a delayed expression (called a promise), and force, to force the evaluation of a delayed expression. Compare the following two versions of crazy

(define crazy
  (lambda (w x y z) x))

(define crazy-lazy
  (lambda (w x y z) (force x)))

> (crazy (fib 29) (fib 29) (fib 29) (fib 29))
832040
> (crazy-lazy (delay (fib 29)) (delay (fib 29)) (delay (fib 29)) (delay (fib 29)))
832040

and try to interpret the different computing times. There is another advantage to delayed evaluation: an expression that is forced with force is not re-evaluated, as we have already seen above when working with streams. A small experiment will emphasise this:

> (define x (delay (fib 29)))
> (force x)
832040
> (force x)
832040

The calculation of (fib 29) for the first force takes noticeably longer than for the second one.



Implementing language elements for streams

Everything is now ready for implementing the language elements used above to work with streams. Since streams are closely related to (always finite) lists, we shall use the analogy between the two:

                 list                  stream
structure        (head . <list>)       (head . <stream>)
first element    (car <list>)          (define stream-car car)
remaining part   (cdr <list>)          (define stream-cdr
                                         (lambda (stm) (force (cdr stm))))
constructor      (cons x <list>)       stream-cons
empty object     '()                   '()
predicates       list? null?           (define stream? pair?)
                                       (define stream-null? null?)

The definition of stream-cons poses a problem for us: the approach

(define stream-cons (lambda (head tail) ...))

is not much use, because when calling stream-cons, tail would also be evaluated instead of being delayed. We resolve this problem with the help of a macro:

(define-syntax stream-cons
  (syntax-rules ()
    ((stream-cons head tail)
     (cons head (delay tail)))))

Another useful language element for streams that we have already used above is

(define stream-print
  (lambda (stm n)
    (cond ((= n 0) (printf "...~%"))
          (else (printf "~s, " (stream-car stm))
                (stream-print (stream-cdr stm) (- n 1))))))

stream-print displays the first n elements of a sequence. If you are only interested in the n-th member, then

(define stream-n-print
  (lambda (stm n)
    (if (= n 0)
        (printf "~s~%" (stream-car stm))
        (stream-n-print (stream-cdr stm) (- n 1)))))

will come in handy.

Stream representations of real numbers

For an irrational number √a with natural a, an interval nesting with rational interval boundaries can be specified: a sequence of intervals [l_i, r_i] with l_i <= √a <= r_i whose lengths shrink towards zero. For example, √2 can be constructed using the split-half method: starting from the initial values l_0 = 1 and r_0 = a, take the midpoint m = (l_i + r_i)/2 of the current interval and continue with [l_i, m] if a < m², else with [m, r_i]. The procedure itvs deals with this interval nesting:

(define itvs
  (lambda (a)
    (letrec ((interval-stream-maker
              (lambda (left right)
                (stream-cons (cons left right)
                  (let ((middle (/ (+ left right) 2)))
                    (if (< a (* middle middle))
                        (interval-stream-maker left middle)
                        (interval-stream-maker middle right)))))))
      (interval-stream-maker 1.0 a))))

The (infinite!) interval nesting (itvs 2) therefore defines the real number √2.

> (define sqr2 (itvs 2))
> (stream-print sqr2 10)
(1.0 . 2), (1.0 . 1.5), (1.25 . 1.5), (1.375 . 1.5), (1.375 . 1.4375), (1.40625 . 1.4375), (1.40625 . 1.421875), (1.4140625 . 1.421875), (1.4140625 . 1.41796875), (1.4140625 . 1.416015625)

That makes sqr2 = √2, even though we only get approximate values (for a sufficiently large index i) when looking at the rational intervals with stream-print or stream-n-print. Terminating and non-terminating continued fraction expansions are also used for defining rational and irrational numbers, and there are other theoretical construction methods for defining real numbers. The important thing is that we can perform (exact!) calculations using the real number sqr2 = √2 in Scheme.

Calculating with stream-represented numbers

We are going to demonstrate this for the division of the irrational numbers √2 and √3. If √2 lies in the interval [l1, r1] and √3 in [l2, r2], then

l1/r2 <= √2/√3 <= r1/l2

applies. It is also possible to show that the interval lengths become as small as you want. These theoretical considerations lead directly to the following Scheme procedure, the result of which is again a stream:

(define itv/
  (lambda (stm1 stm2)
    (stream-cons
      (let ((head1 (stream-car stm1))
            (head2 (stream-car stm2)))
        (cons (/ (car head1) (cdr head2))
              (/ (cdr head1) (car head2))))
      (itv/ (stream-cdr stm1) (stream-cdr stm2)))))

As an example, we are going to calculate √2/√3:

> (define sqr2/sqr3 (itv/ sqr2 (itvs 3)))

and look at the twentieth interval:

> (stream-n-print sqr2/sqr3 20)
(0.8164958693703516 . 0.8164973191071839)

As you can see, it is possible to calculate with the defined infinite objects. We shall leave the sum to the reader as an exercise. If you are going to attempt exponentiation of these numbers, please bear in mind that the interval boundaries do not remain rational.
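If a single floating-point approximation is preferred over an interval, the midpoint of a sufficiently late interval will do. This little helper is our own addition, not part of the article's toolkit:

(define stream-approx
  ;; midpoint of the n-th interval of an interval stream
  (lambda (stm n)
    (if (= n 0)
        (let ((itv (stream-car stm)))
          (/ (+ (car itv) (cdr itv)) 2))
        (stream-approx (stream-cdr stm) (- n 1)))))

> (stream-approx sqr2 20)
; yields roughly 1.414213, since the twentieth interval is
; already less than one millionth wide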

Two final examples from the field of number sequences will illustrate how powerful this concept is. The first one concerns the set of prime numbers.

The sieve of Eratosthenes




The prime numbers can be defined as an (infinite) number sequence. Take the sequence of natural numbers starting with 2:

2, 3, 4, 5, 6, 7, ... = (stream-cdr (stream-cdr integers))

Output the first member of the sequence (2) and filter all of its multiples out of the remainder:

3, 5, 7, 9, 11, 13, 15, 17, ... = (filter-out (lambda (x) (divides? 2 x)) ...)

Output the first member of the result (3) and filter out its multiples in turn:

5, 7, 11, 13, 17, ... = (filter-out (lambda (x) (divides? 3 x)) ...)

and so on; the numbers output along the way are exactly the prime numbers.

(define divides?
  (lambda (t n)
    (zero? (remainder n t))))

(define filter-out
  (lambda (praed stm)
    (if (praed (stream-car stm))
        (filter-out praed (stream-cdr stm))
        (stream-cons (stream-car stm)
                     (filter-out praed (stream-cdr stm))))))

(define sieve
  (lambda (stm)
    (stream-cons (stream-car stm)
                 (sieve (filter-out
                          (lambda (x) (divides? (stream-car stm) x))
                          (stream-cdr stm))))))

The prime number sequence results from

> (define prime-numbers (sieve (stream-cdr (stream-cdr integers))))

Let's display the first 100 prime numbers on the screen:

> (stream-print prime-numbers 100)
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541
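Individual members can be picked out just as easily; counting from zero, the hundredth prime is the last one in the list above:

> (stream-n-print prime-numbers 99)
541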

Sequence of quotients of neighbouring Fibonacci numbers

If you calculate the quotient of any pair of neighbouring Fibonacci numbers, you will find that it seems to settle around a certain value, 1.61803... Before calculating the respective limit, the quotient sequence helps to establish the hypothesis.

(define fibquot
  (lambda (stm)
    (stream-cons (/ (stream-car (stream-cdr stm))
                    (fixnum->flonum (stream-car stm)))
                 (fibquot (stream-cdr stm)))))

> (stream-print (fibquot fib-stream) 18)
1.0, 2.0, 1.5, 1.6666666666666667, 1.6, 1.625, 1.6153846153846154, 1.619047619047619, 1.6176470588235294, 1.6181818181818182, 1.6179775280898876, 1.6180555555555556, 1.6180257510729614, 1.6180371352785146, 1.618032786885246, 1.618034447821682, 1.6180338134001253, 1.618034055727554
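The value the quotients settle around is the golden ratio (1 + √5)/2, which a one-liner in the Scheme listener confirms:

> (/ (+ 1 (sqrt 5)) 2)
1.618033988749895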

Summary

Infinite objects can be defined, stored and processed using streams. The components of these objects (intervals, sequence members, etc.) always form a potentially infinite but countable set; like the set of all integers, a countable set has at most as many elements as there are natural numbers. Uncountable sets are beyond this method: the entire set of real numbers, for example, cannot be enumerated by a stream or stored on a computer. That has given us some pretty abstract insights. The Scheme procedures that we developed and implemented helped us to put the facts in concrete terms and (hopefully) contributed to their understanding.

HTML programming with Scheme

Anyone who has ever created or updated HTML documents without a WYSIWYG editor, just working on the plain source text, will soon notice a certain lack of clarity, even within files they've written themselves, and begin to search behind the multitude of tags for the actual content they were meant to be updating. If there were an option of defining the content at the beginning of the document and of specifying the structure and formatting later on, this task would be much easier. But HTML is set in its ways and relatively inflexible. The Scheme HyperText Generator (SHTG) allows the generation of HTML documents using Scheme. This involves what is more or less a new Scheme-based scripting language: HTSS (HyperText Scheme Source) provides powerful language elements from the Scheme world, together with the option of adapting and extending it. Scheme language elements make it possible, for example, to organise the document contents in an abstract way to begin with and to deal with the translation into a concrete structure and formatting afterwards. In this way the source text remains clear, and changes to the appearance of similar elements only have to be made once, because they affect all of these elements. As you will see, HTSS enables you to build up a document description language which bears no relation to HTML, apart from the end result. The idea of creating HTML documents with Scheme comes from Kurt Normark, who presented his adaptation (LAML) in issue 8. In contrast to his approach of providing easy-to-use language elements for document generation, SHTG is aimed at the creative application of Scheme programming knowledge. In this article we are going to show how higher-order functions can be implemented with existing procedures, in order to demonstrate how powerful HTSS is.

SHTG

HTML is a language that describes the structure and formatting of content with tags and Cascading Style Sheets (CSS). The tags should be correctly nested and ideally result in an HTML tree. This hierarchical tag structure is represented by similarly nested Scheme procedures, from which SHTG (Scheme HyperText Generator) generates an HTML document. However, unlike HTML tags, Scheme procedures can be extended and redefined, which makes them considerably more powerful. The basic principle is to assign a Scheme procedure of the same name to each HTML tag (with the exception of the tag <map>, which becomes html-map, because a procedure called map already exists in Scheme). These procedures can now be used to generate HTML documents directly, or as a basis for implementing more powerful language elements, which constitute the main strength of this approach. Furthermore, tags can be adapted to language-specific requirements.




HTSS

The language used in connection with SHTG is called HyperText Scheme Source, or HTSS for short. To begin with, let's look at a simple example that only uses standard procedures. The HTSS source

(html
  (head (title "window-title"))
  (body `(bgcolor "#000000")
        `(text "#ffffff")
        "Content"))

corresponds to the HTML

<html><head><title>window-title</title></head>
<body bgcolor="#000000" text="#ffffff">Content</body></html>

That is more or less a 1:1 translation of a tiny HTSS document into an HTML file, which naturally doesn't show the real strengths of HTSS. You will already notice, however, that with HTSS there is no need to worry about closing tags: when a bracket is closed, so is the tag. There are exceptions where no closing tag exists (such as <img ...>), but HTSS takes these into account. Of course, tags can also contain attributes. While this happens in the form of <tag name=value> in HTML, in HTSS it is specified as follows: (tag `(name value) ...), where the value type can be symbol, number or string. The character before the list is not a quote but a backtick (the key to the left of 1), the short form of (quasiquote <list>). This allows us to evaluate variables within the list, which must be identified by a leading comma. Instead of the three dots, other attributes could of course follow in the same format. Once all attributes have been specified, the content follows, which must always be a character string. If further functions are nested within the structure, their return value will still be a string. Let's have a look at an example:

(define colour1 "#ff0000")
(define colour2 "#0000ff")
(define text1 "... red text on black background")

(html
  (head (title "window"))
  (body `(bgcolor "#000000")
        `(text ,colour1)
        `(link ,colour2)
        (big text1)))

As you can see, the colour specifications (colour1 and colour2) in the attribute lists of the body tag are marked with a comma in order to evaluate them within the list, i.e. to insert the respective definitions from the beginning of the document. That allows orderly global formatting changes if these variables are referenced several times in larger documents.

Defining your own functions

We are going to demonstrate the extendibility of HTSS by defining new procedures in an example:

;==================================
; define required procedures
;----------------------------------
(define webpage
  (lambda (t . k)
    (html (head (title t))
          (apply body k))))

(define chapter
  (lambda (x)
    (string-append (p _) (h1 x))))

(define picture
  (lambda (source text . size)
    (let ((width (if (> (length size) 0) (car size) #f))
          (height (cond ((= (length size) 1) (car size))
                        ((> (length size) 1) (cadr size))
                        (else #f))))
      (if width
          (img `(src ,source) `(alt ,text)
               `(width ,width) `(height ,height))
          (img `(src ,source) `(alt ,text))))))

(define section h2)
(define heading h3)
(define text p)

;===========================
; and this is the source text
;---------------------------
(webpage "My Homepage"
  (chapter "Introduction")
  (section "Who am I?")
  (heading "General")
  (text "My name is ... and I was born in ... ")
  (heading "Hobbies")
  (text "I am especially interested in ...")
  (section "What do I do?")
  (text "Within my ...")
  (picture "work.gif" "Me at work" 300 200)
  (chapter "My Projects")
  (section "Project 1: ...")
  (heading "Terms of Reference")
  (text "Drawing up ...")
  (heading "Preparation")
  (text "Before starting with ..."))

First of all, the procedure webpage receives the name of the page as its first parameter (t), which is then displayed as the window title. All remaining arguments are combined in the rest parameter (k). The first value is used to call the function title, which in turn is located within head. Then body is applied to the actual body of the text with apply. An interim evaluation step will clarify this. First, the call (webpage "My Homepage" (chapter ...) ...) is processed into:

(html (head (title "My Homepage"))
      (body (chapter ...) ...))

In the next step (chapter "Introduction") is evaluated. There should be a blank line before each new chapter, which we achieve using an empty paragraph. In HTML this would be <p> </p>, whereas in HTSS it simply looks like this: (p _). The underscore is a predefined variable for a non-breaking space. After this blank paragraph we want the actual chapter heading in the largest possible font size. Each of the procedures p and h1 returns a character string, just like all the other tag procedures. The same is true of the procedure chapter, of course, so it must first append the two strings with string-append:

(html (head (title "My Homepage"))
      (body (string-append "<p> </p>" "<h1>Introduction</h1>") ...))



We shall not go into details regarding the function for inserting images. Suffice it to say that it receives a path for the image, an alternative text and optional size information. The specification of only one number results in a square image of the appropriate size, while in the case of two values the first one is the width and the second one the height. The remaining functions are self-explanatory, since they correspond to their HTML counterparts. It should be clear by now that it is possible to ignore HTML itself entirely, as long as the relevant procedures are loaded. You could develop your own page description language and use only that. All you need to do is implement the appropriate language elements once (or have them implemented for you). Should you ever find that one is missing, it is easy enough simply to fall back on the standard HTML tags, which are also still available. Now we are going to demonstrate the special capabilities of HTSS by creating more powerful functions. Scheme offers a wide variety of opportunities that can be utilised in HTSS. Let's assume that we want to perform certain calculations with a sequence of numbers and display the result as a table on the WWW. In the example we are calculating the Fibonacci numbers (2nd column) from i (1st column) and the quotients of two consecutive Fibonacci numbers (3rd column). You already know the Fibonacci sequence from the first part of the article, when we were discussing streams.

;=======================
; required procedures
;-----------------------
(define fib
  (lambda (n)
    (if (< n 2)
        1
        (+ (fib (- n 1)) (fib (- n 2))))))

(define line
  (lambda (i)
    (tr (th (number->string i))
        (td (number->string (fib i)))
        (td (number->string (/ (fib i) (fib (- i 1))))))))

; called fib-table rather than table, so that the tag
; procedure of the same name is not shadowed
(define fib-table
  (lambda n
    (html (head (title "Table"))
          (body (div `(align center)
                     (table `(border 1) `(cellpadding 5) `(cellspacing 0)
                            (tr (th "i") (th "fib(i)") (th "fib(i) / fib(i-1)"))
                            (apply string-append (map line n))))))))

;=============
; "source text"
;-------------
(fib-table 1 2 3 4 5 6 7 8 9 10)

The procedure map applies the specified function (line) to the parameter list n. The return value of map is a list containing strings. They are no use to us as a list, however, as we can only work with the strings themselves. By applying string-append to the entire list, all the strings contained in it are concatenated and we receive the desired character chain with the results. This is done using apply. If we try this now, we will receive a table containing the values we were looking for. By making a few amendments, it is also possible to avoid hard-coding the calculation formulas as above and instead to include them in the call. That would already provide considerable functionality. Without going into the actual Scheme procedures in detail, the call could then look like this:

(calculate '((x . "Nett Price")
             ((* x 0.07) . "VAT (7%)")
             ((* x 0.16) . "VAT (16%)")
             ((* x 1.07) . "Retail Price (7%)")
             ((* x 1.16) . "Retail Price (16%)"))
           '(15.78 29.80 14.26 39.03 45.12 19.25 33.45 22.34 25.56))
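Recurring patterns such as the map/string-append/apply combination used above are natural candidates for reuse. A small helper of our own (hypothetical, not part of SHTG itself) captures the pattern once and for all:

(define html-join
  ;; render every element of lst with f and concatenate
  ;; the resulting strings into one
  (lambda (f lst)
    (apply string-append (map f lst))))

With it, the table body above could be written as (html-join line n) instead of (apply string-append (map line n)).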

Libraries

The required procedures can be collected in an extendible library and re-used later. This is not possible with the CGI variant, since all of your definitions only apply to the current session. It is different with the installable version, where functions can be combined in files and loaded as required. An example of such a library can also be found on the SHTG Web page.

Order and structure

In contrast to HTML, SHTG uses the bracket structure typical of Scheme. On the one hand this takes care of controlling the structures; on the other it makes them difficult to handle without the support of a special editor, e.g. one providing features such as highlighting matching brackets. Once you have about 20 brackets in a row, it becomes impossible to tell which tag is closed where. This is particularly annoying when you want to insert something; in HTML it is completely obvious from the closing tags. For this reason, SHTG offers the possibility of marking brackets appropriately, thereby considerably increasing the clarity of the source text. This is done by inserting the tag name as a symbol before the closing bracket. Symbols can be recognised by the apostrophe or backtick; we recommend the use of backticks, which we already know from the attribute lists. There is no syntax checking, however, so even nonsensical names will be formally accepted at this point.

(html (head (title "Main_Window")
       `head)
      (body (div `(align center)
                 (p "Table 1")
                 (table `(border 1) `(cellpadding 5) `(cellspacing 0)
                        (tr (td "field 1") (td "field 2"))
                        (tr (td "field 3") (td "field 4"))
                        `table)
                 `div)
            `body)
      `html)

Without the identification of the last four brackets, the character chain 'field 4' would be followed by six closing brackets and it would not be immediately obvious which one belonged where.

Style sheets

In order to apply a certain style to an HTML file you can use the font tag, or you can create a consistent document layout with style sheets. The W3 consortium recommends the latter, of course: should the entire layout of a finished HTML file that has been formatted with font need to be amended, each individual font tag has to be adjusted, whereas if the same file had been formatted using a style sheet, only the style sheet would need to be changed.



Furthermore, several files can access the same CSS, which means that a certain consistency is apparent within a project - one that is not only aesthetically pleasing, but also points to a connection between the contents. Nevertheless, there is still an orderly and standardised way of using the font tag in HTSS. Due to the ability to define new procedures, or tags, it is equally possible to attach certain style properties to these tags. Let's define a few new p-tags:

(define p-cn
  (lambda x
    (font `(face "courier new, courier, monospaced")
          (apply p x))))

(define p-blue
  (lambda x
    (font `(color "#0000ff")
          (apply p x))))

This variant is almost as consistent as style sheets and almost as easy to amend, but unfortunately not as powerful. Admittedly, it is possible to do a whole lot more with CSS. However, the aim of HTSS is not to replace style sheets - you can use style sheets just as easily in pages created with SHTG as in any other HTML document. A page created with SHTG contains its entire content and therefore also - hence the 'almost' - all its formatting. The only advantage becomes apparent when downloading such pages: if the CSS file for a downloaded Web page is missing or cannot be found, the page can sometimes look pretty bad compared to the original on the Web, something that won't happen with SHTG-generated documents. Why don't you experiment a bit for yourself!
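Because tag procedures are ordinary Scheme values, such style wrappers need not be written out one by one; a higher-order procedure can manufacture them. The following sketch is our own and uses only the font and p procedures shown above (the names are hypothetical):

(define make-styled-p
  ;; returns a p-like procedure whose output is wrapped
  ;; in a font tag carrying the given attribute lists
  (lambda (attributes)
    (lambda x
      (apply font (append attributes (list (apply p x)))))))

(define p-cn2
  (make-styled-p (list `(face "courier new, courier, monospaced"))))
(define p-blue2
  (make-styled-p (list `(color "#0000ff"))))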


Summary

You have learned about Scheme as a means of HTML programming with the capacity for wide-ranging extensions. With Scheme we were able to describe the structure of complex source texts in an orderly manner, without worrying about actually producing the final document. Given the importance and popularity of Web programming, this is also a way to generate interest in Scheme and its possibilities. With regard to teaching, SHTG also has the advantage of allowing students with Scheme knowledge to look at the paradigm of scripting languages from a structural perspective.

The authors

Professor Christian Wagenknecht teaches Theory of Information Technology, Programming Paradigms, Web Databases, Scientific Web Publishing and more in the Information Science and Technology faculty of the Technical University of Zittau/Görlitz. For over 20 years he has been studying the use of non-imperative programming (Logo, Scheme, Prolog, Smalltalk, Java) from a didactic perspective. He bought his first SuSE Linux distribution as a set of diskettes in 1993. Ronald Schaffhirt is one of Professor Wagenknecht's students and has developed the Scheme Web programming (CGI) support and SHTG. The resulting material will be included in the course "Programming Paradigms" (section: scripting languages). He is currently in his fourth semester of studying Information Technology, while continually developing SHTG on the side. ■





SOFTWARE


KIDS’ LINUX

Games for education. Games for fun.

YOUNG AT HEART RICHARD SMEDLEY

Children take PCs for granted. Those of us administering a child's GNU/Linux desktop don't have that luxury. Here we take a look at applications for younger computer users, with three games: Gcompris, Circus Linux and MathWar.

Children are more curious than adults, which makes the considerations different. If the PC is 'just' for the kids, they can play around and break the installation, as their parents have probably done in the past. However, if the machine is shared with adults, it must be locked down to protect work. Most parents would feel uneasy about letting their children loose on their favourite UNIX clone, so it's worth considering an extra machine exclusively for children. A 486 or early Pentium may cost only £50 or so, but it will be adequate for most younger children's needs; they can always use your PC for 3D games or resource-monsters like Mozilla. Most distro disks include applications for younger children. Oneko, Xpenguins, Gcompris, CircusLinux and Mathwar are available in Debian unstable. The next stable release of Debian will contain a special section just for young people - debian-jr - aiming to make Debian GNU/Linux appealing to people aged 2 to 99.

And the band played on...

Initially, the project is concentrating on making a distribution for 2 to 8 year-olds and those who will administer their machines. All the software concerned is also available for other Linux distributions as source tarballs or RPMs.

Now I understand

Gcompris (pronounced 'j'ai compris') is a skills-building game for children aged 3 and up. As well as typing, arithmetic and time-telling, it helps to build mouse skills. Gcompris aims to be a central user interface for many small educational applications, set out as boards within the game. The user manual even gives instructions on developing new boards. A mouse-skills board involves clicking on fish before they swim off the screen. My children found the Learning Clock board a little confusing, as the hands are the same length. Make the Puzzle is a jigsaw game featuring famous paintings (so you can educate your children in art history while they play). Typing skills and co-ordination are coupled with counting and arithmetic in a series of boards involving typing in the correct answer before the object falls to the ground - letters in the case of Simple Letters and whole words in Falling Words. Another board involves counting the spots on a die and typing in the number in time. These games keep children amused for hours, all the while developing their skills. Meanwhile, if you would like to sharpen your C skills, code a new board.



Oh what a circus

Circus Linux is the clown-jumping, balloon-popping game ported by Bill Kendrick. It is a clone of the Atari 2600 game Circus Atari: a clown is fired out of a cannon onto a see-saw, which bounces a second clown into the air to pop balloons. It accommodates one or two players and has different difficulty levels. Windows and Mac versions are available pre-compiled. Moving the see-saw accurately from the keyboard is extremely difficult; however, if your children are adept, they will quickly take to this addictive game, with its jolly circus-style music and sound effects. Very amusing - it makes all the young visitors to our house laugh. And as it is a great aid to improving hand-eye coordination, you can excuse the hours you may find yourself playing the game too.

Adding up to fun

MathWar is good for those just learning their sums. Numbered pairs of cards are presented along with a +, - or x operator, and you must submit the answer within a predetermined time. The computer may submit a guess itself if you take too long. Whoever answers correctly gets the points. If the computer guesses, you can decide whether the computer's guess is right for extra points. The game ends after a number of rounds (default 20). Set the levels so your child can just beat the computer if they like a challenge: easy configuration of the settings means that I can maintain a difficulty level that keeps my six-year-old daughter interested. An HTML manual with well-written, simple instructions is a delightfully surprising addition to any piece of software. Well done, Ken Sodemann. ■

Info
Gcompris: http://gcompris.sourceforge.net/
CircusLinux: http://www.newbreedsoftware.com/circus-linux/
Linux for 2-8 year olds: http://www.debian.org/devel/debian-jr/
Find RPMs of the games you want at http://rpmfind.net
All games were tested on a P233/32MB RAM/640x480 VGA running Debian GNU/Linux with a 2.2.18 kernel. ■

All in the mix [left] Gcompris [right] Catch a falling letter

[left] Make the Puzzle - a jigsaw with culture [right] Learning Clock - but which hand is which?





SOFTWARE


DESKTOPIA

Jo’s alternative desktop

ROOT-TAIL JO MOSKALEWSKI

Only you can decide how your Linux desktop looks. We take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys.

We all know the scenario: something in the system is hanging, and a tedious look into the log files under /var/log/ is necessary to track down the evildoer. Even when there are no acute problems, it is clearly better to be kept informed at all times, so as not to have to go troubleshooting in the log files when it's too late. With Root-tail you can obtain all the information you want from log files, without mouse clicks, directly on the desktop background.

Something special There are numerous tools which track log file entries and report their (hopefully) good news. But who wants to keep a window open on their desktop all the time, just to note the sign of life from the syslog every 20 minutes? Surely it would be much nicer to have this directly on the desktop: No separate window, which has to be closed, opened or moved around, and nothing to interrupt the beloved background graphics. Such a program does exist: Root-tail.

Off we go!

If your own distribution does not include a ready-made root-tail package, it will help to get the source code of the program from http://www.goof.com/pcg/marc/root-tail.html. The archive that you can get there is easy to install. The basic requirement is (as ever) the X includes, which in SuSE can be found in the xdevel package. If the X includes are on the hard drive, the first step, unpacking the archive, is soon done. After that, create a Makefile with the tool xmkmf, from which the following make reads off exactly what needs to be done; you don't need a lot of experience, as your system does everything for you. After that, a make install and make install.man follow, with which the completed program and its documentation are copied to the right places and the correct rights are set:

jo@planet ~$ tar xvzf root-tail-0.0.10.tar.gz
jo@planet ~$ cd root-tail-0.0.10
jo@planet root-tail-0.0.10$ xmkmf -a
jo@planet root-tail-0.0.10$ make
jo@planet root-tail-0.0.10$ su
Password:
root@planet ~# cd /home/jo/root-tail-0.0.10
root@planet root-tail-0.0.10# make install
root@planet root-tail-0.0.10# make install.man
root@planet root-tail-0.0.10# logout

Nothing happening?

The syntax to start root-tail is simple. As with almost all programs, you obtain information on it by starting it with the option --help, thus root-tail --help. Anyone who wants things a bit more precise and extensive should consult the man page, reached via man root-tail. If you want to place the popular log file /var/log/messages on the desktop, you might use the following command:

root-tail /var/log/messages

But since, as a prudent Linux user, you do not start an X session as the user root, a problem arises: root-tail answers with a curt "/var/log/messages: Permission denied". Quite right too, because these files are nothing to do with the normal user. So root-tail has to be started with root rights to do this.

Super

A good tool allowing a user to start a specific program with root rights is super, which should come with every distribution. super is configured in the file /etc/super.tab, in which we simply allow the user jo to start root-tail as superuser:

root-tail /usr/X11R6/bin/root-tail jo

The first word states which command is involved (a new name can also be invented here), followed by the command to be executed, and lastly the name of the user who is to be authorised to start the command as root (additional user names can easily be added, separated by commas). After this simple configuration, root-tail starts with root rights via the command



[left] Figure 1: Root-tail on the desktop

[right] Figure 2: root-tail -g 120x13+20+20 -color mistyrose1 -fn 6x13 -shade /var/log/messages /var/log/kern.log,red

super root-tail /var/log/messages

But here again, the desired log file still does not appear on the user's desktop; instead root-tail responds with "Error opening display (null)."

Accessing the X server

Even root cannot simply commandeer some other user's desktop as an output medium: the X server has its own access control. This is governed by the hidden file ~/.Xauthority, and as the user root can read every user's data, it can simply make use of the user's own credentials. The simplest way to do this is to set the environment variable XAUTHORITY:

export XAUTHORITY=/home/jo/.Xauthority

But that's not all: as root has not started an X server of its own, this permission is insufficient, because our program cannot make use of it until it also knows which display is to be used. Here again, setting a variable helps:

export DISPLAY=:0.0

Before all of this is pulled together, here is a little test, which illustrates the necessary basis and ought to function:

jo@planet ~$ su
Password:
root@planet:~# export XAUTHORITY=/home/jo/.Xauthority
root@planet:~# export DISPLAY=:0.0
root@planet:~# root-tail /var/log/messages

In this case, root-tail can be ended with the key combination Ctrl+C.

Puzzle

Since two variables now have to be set as root before starting root-tail, this can no longer be done with a single command. A little script helps here - although it is then no longer root-tail but the entire script that is started via super (the variables, too, must be set by root for them to be valid for it). It is best to make a file /usr/local/sbin/root-tail.username with the following content:

#!/bin/sh
export XAUTHORITY=/home/username/.Xauthority
export DISPLAY=:0.0
/usr/X11R6/bin/root-tail /var/log/messages

Since scripts called up via tools such as super can easily be misused through modification, the file should be readable and writable by root only (otherwise anyone who could change the file could start any program they liked simply by adding it and invoking the script via super!). Therefore type as root:

chown root.root /usr/local/sbin/root-tail.username
chmod 700 /usr/local/sbin/root-tail.username

And then adapt the configuration file super.tab:

root-tail /usr/local/sbin/root-tail.username jo

From now on /var/log/messages appears on the user desktop after entering super root-tail. Admittedly, because of the user rights and the access to the X server, there is a lot of work initially, but once you have understood and applied this, it's easy: super releases any program for one or more users, and a script allows graphical output. This method can be transferred to countless programs - to your favourite file manager, for example, which you can then place a second time, but with root rights, in the Start menu. It then starts without any additional password challenges whatsoever. And that's worth a bit of effort.

Personal edition

root-tail can obviously be adapted to your own requirements. The most interesting options are the following:

-g states where root-tail should appear on the desktop, and also defines the number of characters to be shown. A "-g 120x13+20+20" moves root-tail 20 pixels away from the edges of the desktop and sets its size at 13 lines, each with 120 characters. Another helpful option here is frame, which can be used for test purposes to display a frame until the optimal geometry specification has been found.

-color sets a standard font colour. Each log file can also be given its own, by specifying a colour name when calling up the log file: "/var/log/messages,green" (see Figure 2).

-font: by their nature, fonts with a fixed width are most suitable here, thus 5x7, 5x8, 6x10 and so on up to 12x24. Exactly which ones are available depends on the distribution being used. The tool xfontsel can help with the selection.

-shade gives the letters a shadow. ■

Note with respect to KDE 2 KDE 2 lays a frameless window over the entire desktop and thus covers everything another program paints onto the desktop. We do not know of any solution for using root-tail in conjunction with KDE 2.




SOFTWARE

OUT OF THE BOX

Mini-Distribution

POCKET LINUX CHRISTIAN PERLE

There are thousands of tools and utilities for Linux. Out of the Box chooses the pick of the bunch and suggests a little program each month that we feel is either absolutely indispensable or unduly ignored. To keep in line with the main focus of this issue we are bending our rule a bit this time round and devoting ourselves to one entire distribution - but one which nevertheless runs from a single diskette: HAL91.

Mini-distributions, even if they are often largely ignored in the shadow of their bigger siblings, are useful tools. The spectrum ranges from the special-purpose distribution, which system administrators like to use for diagnosing and correcting faults, to the almost-complete Linux desktop for older hardware (see the Other Mini-distributions box). HAL91 is specially conceived for somewhat older computers and is an ideal playground for all those who want to control their system without any graphical tools. This distribution can be used as an emergency system, too, or (with a little manual work) as a general installation diskette. The requirements to run HAL91 are correspondingly modest: it can be started on any processor from the 80386 onwards, equipped with at least 8MB RAM. It was also important to the developers not to use the higher-density diskette format, since a few drives have problems with it. HAL91 was developed by Øyvind Kolås and since January 2000 it has been undergoing refinement by myself (Christian Perle).

Higher density: formatting a diskette with more than the usual 1.44MB capacity (about 1.72MB).
Disk image: the image of a complete diskette as a file. With suitable programs, the diskette, including bootsector, can be recreated from it.
Bootsector: the first sector on a diskette or another data medium, which can contain executable code to start (boot) an operating system.
dd: this Unix command serves for direct reading and writing of block-oriented devices. The data read can, if necessary, be converted into another format.
BIOS: Basic Input/Output System. This minimal system sits permanently in the computer and makes it possible to load an operating system from diskette or hard drive. ■



What do you need?

On the HAL91 homepage, http://home.tu-clausthal.de/~incp/hal91/, you will find the disk image hal91.img. If you do not yet have Linux on your computer and have to install from DOS/Windows, you will also find the DOS program rawrite2.exe there.

How do you install it? Since we are dealing with a ready-made diskette image, all we have to do is write this on a formatted diskette. If you already have Linux up and running, the following dd command (to be entered by the root administrator) is sufficient: dd if=hal91.img of=/dev/fd0 Under DOS/Windows you should instead use rawrite2.exe: rawrite2 -f hal91.img -d a:

Shell: one of the most important components of every Unix system - the command-line-controlled user interface of the system.
Kernel: the core of the operating system, acting as the interface between the hardware and any processes running. It also provides multitasking and memory management. The real Linux is only the kernel.
ext2: the Second Extended Filesystem is the most commonly used file system under Linux. It provides files and directories with rights and assigns them to owners and groups.
Manpage: manpages (short for manual pages) are an online reference handbook for Unix commands, called up using the man command. There are no manpages included in HAL91, due to lack of space.
Mount: under Unix systems, data media are not assigned drive letters but mounted into the file system. A directory provided for this (the mountpoint) serves for access to the content of the data medium.
Shell script: a text file with commands which are automatically processed in sequence by the shell.
Nullmodem cable: a cable to connect two computers directly via the serial interface. Unlike in normal serial cables, the send and receive lines are cross-connected.
PPP: the Point-to-Point Protocol connects two computers via a serial line (modem or null modem) using the TCP/IP protocol.

Start me up

To boot from the diskette, you may need to change the boot sequence in the BIOS setup of your computer to A:,C:. With the HAL91 diskette in the drive, restart your computer. After about 60 seconds the mini-Linux is loaded and announces itself with a logo on the console (Figure 1). The diskette can be taken out of the drive after booting, because the system runs completely in RAM. As in the 'big' distributions, several virtual consoles are available in HAL91 (reached via Alt-F1 to Alt-F4). There is no need to log in, as a shell with root privileges is running on every console.

Especially with older computers, there is rarely anyone who actually remembers what hardware is inside them. HAL91 helps to identify many components: IDE hard disks, ATAPI CD-ROM drives, NE2000-compatible network cards and various PCI devices can be recognised and used. The commands to use are dmesg, which displays the kernel messages, and cat /proc/pci, which lists the PCI devices. Listing 1 shows an extract from the kernel messages on a Toshiba 100CS notebook. These messages tell us that an IDE hard drive (hda, the first device on the first IDE controller) has been found: a Toshiba with 518MB, split into 1053 cylinders, 16 read/write heads and 63 sectors per track. The next line shows that only one IDE controller (ide0) is present. There is also a diskette drive (fd0), connected to the controller FDC 0. The last line of the extract relates to the PPP support in the kernel.

Listing 1: Extract from the kernel messages
hda: TOSHIBA MK1924FCV, 518MB w/128kB Cache, CHS=1053/16/63
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Floppy drive(s): fd0 is 1.44M
FDC 0 is an 8272A
PPP: version 2.2.0 (dynamic channel allocation)

Figure 1: The HAL91 console



Other Mini-distributions
Of course, there are other mini-Linuxes apart from HAL91. Three specialised examples are worth mentioning: tomsrtbt (Tom's Root Boot disk) is especially good for use as a rescue system, muLinux attempts to squeeze as many applications as possible onto one diskette, and fli4l provides a complete ISDN router solution on a diskette. The sources are http://www.toms.net/rb/home.html for tomsrtbt, http://sunsite.dk/mulinux/ for muLinux and http://www.fli4l.de/ for fli4l.

What else?
The command e2fsck, for patching up ext2 file systems, is included, with which you might be able to resuscitate wounded Linux installations - you should, however, first read the associated manpage. So that you can also mount hard disk partitions, diskettes and CD-ROMs, the commands mount and umount are included, along with support for the file systems ext2 (Linux), vfat (for long filenames from Windows 95/98) and iso9660 (for data CDs).
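A minimal session might look like this; the device names and the mount point /mnt are examples that depend on your hardware and on the mount point you create:

# mount a FAT diskette, look around, unmount again
mount -t vfat /dev/fd0 /mnt
ls /mnt
umount /mnt
# mount a data CD (here: the master drive on the second IDE controller)
mount -t iso9660 /dev/hdc /mnt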

Small, but networked

HAL91 can also make contact with the world around it, to a limited extent. If the computer contains an NE2000-compatible network card, this can be configured using the commands ifconfig and route, in order to work on remote computers with telnet, ftp or ncp, or to transfer files. The program ncp has already been discussed in an earlier Out of the Box article (Linux Magazine issue 3, p.114). You can find an example of using ifconfig and route in the shell script init.net in the /bin directory. If there is no network card, you can still make use of a null modem cable: with the shell script ppp-nullmodem - which can also be found in the /bin directory - you can make a PPP connection between two computers running under HAL91. This involves swapping the IP addresses used in the script on one of the two computers, so that the pppd commands look as follows:

pppd /dev/ttyS1 115200 asyncmap 0 noauth persist local passive nodefaultroute 192.168.0.1:192.168.0.2

on one and

pppd /dev/ttyS1 115200 asyncmap 0 noauth persist local passive nodefaultroute 192.168.0.2:192.168.0.1

on the other computer.

Loop device: with the loop device it is possible to mount files like partitions. A syntax example reads: mount -t ext2 -o loop initrd /mnt - where the file initrd is mounted as an ext2 file system on the directory /mnt.
Router: a router is used for passing on (forwarding) IP packets towards specific destination IP addresses. From its routing table it knows which route a packet should take, depending on the destination address. ■
on one and pppd /dev/ttyS1 115200 asyncmap 0 noauth persU ist local passive nodefaultroute 192.168.0.2U :192.168.0.1 on the other computer.

An editor for all seasons A text editor is also part of the package with HAL91. The Unix standard editor vi was quite deliberately chosen for this. This may not be as easy to learn as other editors, but it has the most functions in proportion to its small memory space requirement. A good introduction can be found at http://www.infobound.com/vi.html. Experiment with this editor, it’s worth it! From this point of view, HAL91 is also a suitable system for dry runs with Linux. So long as nothing else is mounted, everything runs in the RAM and is, unbreakable. Finally, it is also possible to find out with no risk what happens if libc.so.5 is deleted.

For the advanced student Having once discovered this toy/tool, you may well want to make your own adaptations. HAL91 is based on kernel 2.0.36 and libc 5.3.12. To compile your own kernel for this distribution it is best to stay in line with the existing configuration, which is stored on the diskette as the file kconf. In any event, support for the Initial Ramdisk (initrd) should be part of the compilation. To swap programs you will have to change the content of the compressed RAM disk (initrd.gz). To do this, copy the file onto a hard disk partition and uncompress with gunzip, in order to then mount it via the Loop Device. Now you can potter about to your heart’s content in the file, deleting or adding programs. When you do though, bear in mind that the programs can, at the most, be linked to libc 5.x, libm 5.x or libtermcap. Also, after unmounting and re-compression using gzip -9 the file must not be too big for the diskette. That also sets the limits for HAL91: A graphical user interface with X11 thus falls victim to the requirements of space. ■



CONFERENCE


UKUUG

Take it on trust

TRUST METRICS LUKE KENNETH CASSON LEIGHTON

You are looking for a recommended financial advisor. Who do you trust to tell you which one is well informed and impartial: their clients or other advisors, your friends or an independent financial agency? Do you trust those recommended opinions? How do you evaluate those opinions, and what weight should you give them?

You may think word of mouth is enough, but does it quickly tell you everything you need to know? Can word of mouth be automated by a computer? Can word of mouth be digitally signed? The practice of finding resources for everyday business and personal issues is so commonplace that no-one has considered what it might be like to have the same capability available via the Internet. Trust Metric Evaluation is likewise a simple concept: it allows for the automated evaluation of people's opinions - a Web of Opinions - with far-reaching consequences for the day-to-day way in which we conduct our business across the world.

Trust Metrics are a means to evaluate a chain, or web, of opinions. Evaluation of a Web of Trust requires that you specify whom you trust implicitly for opinions. These become the centre of your web - the seeds. The seeds have specified their opinions of other people, or of the things that other people have done, said, written, performed and so on, and those people have in turn specified their opinions. Trust Metric Evaluation limits this chain of 'opinions of opinions', resulting ultimately in a means to provide an unbiased, verifiable and reasonably impartial appraisal. The only way for an individual to receive a better evaluation is to actually do something worthwhile, so that someone reasonably close to the centre of the Web of Trust expresses their opinion of them or their actions. Equally, if they do something contrary to the trust that has been placed in them, the opinion can just as easily be revoked...

Imagine that you require the services of a financial advisor. You have no idea how to go about this or who to trust. So, you go to myfavouritefinancialadvisors.com and lo and behold, they are running Trust Metric evaluations of financial advisors. Other financial advisors, their clients, and the Independent Financial Advice Bureaus of 15 US states and 10 separate countries across the world are involved with this site, expressing their opinions as to the reliability of the advice given by the financial advisors listed on the site. You conclude: "Hmm, I only really trust Independent Financial Bureaus, but there was a scandal with one of them recently, so I am not interested in their opinions." So you select seven bureaus you've heard of, and seven that you haven't, as the seeds for the evaluation you wish to perform. Setting these 14 bureaus at the centre of your Web of Trust, you ask the site to perform an evaluation. You ask it to list the top 100 financial advisors it can come up with that have had Reliability opinions expressed on them by at least two bureaus, four of their peers, and at least five clients. You wait a few seconds and, lo and behold, there are only 10 financial advisors that meet your exacting requirements. Well, that's good enough to start with. So, you start to explore these people a bit more, browsing their credentials online. Click, click - hmm, funny: five of them all seem to work for the same company. Ah, but wasn't there some sort of financial irregularity about that company in the news recently? Whoops, don't think I'll be using them! Ah yes - I see why they came up so high in my criteria: a number of their former clients have made use of this site to express their dire opinion of this company's activities. Oh dearie me, it looks like the bureaus haven't got round to revoking their certifications of these people yet. Ah well. Maybe they are trustworthy, but I'm not using them. Click - what about this one? He's a small-time financial advisor, but he has ratings from (click, click) five bureaus that say he gives sound advice, and some of his peers have also rated him as very good. Let's see - yes, they too are all rated by at least two of the original 14 bureaus I specified as the seeds, and he has reports of quite varying degrees from his customers. Yes, they're all pretty good, except for one client who says his advice was completely useless: must ask him about that if I ring him. Where's his telephone number (click)? Ah yes, here it is.

This is such an incredibly powerful and liberating example of the use of computing that it is in some ways quite frustrating to know that, though it is technically possible, Trust Metrics are only being used in experimental ways at sites such as advogato.org, skolos.org, sourceforge.net and a few others. The possible applications and potential of Trust Metrics are quite amazing. For example, they can be used as a search engine - one that you can actually trust, because it gives you an impartial amalgamation of other people's evaluations. And as if that isn't enough, where you absolutely have to know that the opinions being expressed are real and concrete, why not have the people who enter their opinions into the Trust Metric engine digitally sign those opinions? That way, any opinions that are not digitally signed - and verifiably so - can be automatically excluded when the Trust Metric evaluation is performed.

Combining Trust Metric evaluations with digital signatures leads to interesting possibilities. Imagine that you request a Trust Metric evaluation, but you do not really trust the computer performing the evaluation to give you the right results. You ask the engine to give you a digitally signed copy of the results, along with the original Certification Web from which it performed the calculation. You can then give that to another Trust Metric evaluation engine and ask it to double-check it! Not only that, but imagine that there is a certification type which can be applied to evaluation engines and which certifies how reliably those engines perform evaluations. This process of cross-checking could even be automated by the engines themselves, which would be essential in a distributed Trust Metric environment.

There are field-based military intelligence applications for Trust Metrics, too. Imagine that all sources assess each other as to the reliability of the information coming from their peers. A source out in the field is cut off from communication with their usual base, which they would normally use as the seeds for the centre of the Web of Trust. They still need some assessment of the sources available to them. So they select the closest and most trusted sources that they are still able to contact and ask for a Trust Metric evaluation of their immediate environment. Untrusted sources not linked to the trusted seeds via the Web of Trust are automatically excluded. Compromised sources which provide false information are soon discovered by their nearest peers, who act on that information and, upon discovering that a source has been compromised, immediately revoke their Reliability Certification, with the result that the compromised source is quickly excluded.

A slightly different version of this approach was, in fact, the original reason behind the development of Trust Metrics.

CONFERENCE

Metrics: to solve the problem inherent with trusting certificate authorities, and to provide a more secure, trustworthy and scalable way to handle DNS Domain Name Registrations and Updates. The problem at the moment is, can you really trust the Public Key Certificate Authorities, especially given that very recently, someone fraudulently obtained a Digital Certificate that allowed them to digitally sign ActiveX components as if they were Microsoft.com? ActiveX components are downloaded and run automatically on Internet Explorer - if they are signed by one of the Trusted Certificates. What alternatives are there? Trust Metrics. Bruce Sterling’s Science Fiction novel, Distraction, describes a reputation-based nomadic community that actually uses digitally-signed Trust Metrics in order to evaluate who should be given responsibility to lead the community. The better the individuals actually fulfill the role assigned to them, the more Trust Certifications they will receive by their community peers, and the more responsibility they gain. Abuse of the trust placed in them results in their certifications being revoked, and they are relieved of their position. The interesting thing is that, as mentioned in Bruce Sterling’s book, there is almost always more than one possible candidate for a particular leadership role, as recommended by the Trust Metric Evaluation. This makes people interchangeable, and therefore replaceable, and therefore less likely to abuse their position. Especially as the certification records are digitally signed forever. Bruce Sterling’s book also makes it clear how pointless it is for an opposing organisation to attempt to target, persecute and remove individual leaders from such a community, as alternative candidates for exactly the same job are just one or two steps down the Trust Metric list... The key strength of Trust Metrics is that they rely on peer-evaluation, as opposed to centrally, implicitly trusted evaluation. With centrally-controlled evaluation, trust begins to wear a little thin, and ultimately carries less and less weight as the size of the community the centrally-controlled authority serves grows ever larger: ironically, it becomes something of a contradiction in terms to trust a centralised Trust Authority. As the size of the community they serve grows, the trust required to bolster their position may lead the organisation to extreme measures that are way out of line, way out of proportion, which compromises their integrity and effectiveness but still maintains their position. We can see this quite clearly for ourselves out of the numerous over-bureaucratic or over-zealous orgranisations in the world that could be cited as perfect examples. With digitally signed Trust Metric Certifications, other than the limits of the capacity of the computers used to perform the evaluations, the ability to perform reliable evaluations scale as the size of the community grows to world-wide proportions, and you still get answers that you know you can trust. ■ 9 · 2001 LINUX MAGAZINE 69
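As the promised closing aside, the chain-following at the heart of this idea can be illustrated with nothing more than awk. This is emphatically not the metric that Advogato actually uses (that is considerably more sophisticated); it is a toy sketch under assumed inputs: a hypothetical seeds.txt holding one implicitly trusted name per line, and a hypothetical opinions.txt holding one 'truster trustee' pair per line. Anyone not linked to the seeds via the web is automatically excluded from the output:

awk 'NR==FNR { trusted[$1] = 1; next }    # first file: load the seed names
     ($1 in trusted) { trusted[$2] = 1 }  # later files: follow opinions one link outwards
     END { for (name in trusted) print name }' seeds.txt opinions.txt opinions.txt

Naming opinions.txt twice bounds the 'opinions of opinions' to roughly two links from the seeds (the exact reach within one pass depends on line order); a real Trust Metric engine would limit this depth explicitly and weight the opinions rather than treating them all equally. ■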


BEGINNERS

KOffice Workshop: Part 2

PRACTICAL EXERCISES WITH K. IN THE OFFICE
TIM SCHÜRMANN

While the last part of our KOffice Workshop concentrated on the simpler text functions of the KWord component, this time we'll be turning our attention to the more complex layout functions. Using the example of The Penguin Echo, we will delve more deeply into the handling of frames, which until now have been rather neglected. To make the material a bit less dry and dusty for you, this time we will also explain the way KWord works by means of a little example that you can participate in: the front page of the newspaper, The Penguin Echo, will be produced. This page will be given a big headline, two columns of body text and a little eye-catching graphic. Before the Workshop starts round two though, we would like to expressly remind you that KWord is still in development and therefore crashes (especially when using the layout and frame functions below) are not all that rare. You should therefore back up your documents at regular intervals and in general not entrust any important data to KWord (cf. the first part of our Workshop).

Like The Penguin Echo

Workshop summary
1. Word processing with KWord - Part 1: A business letter
2. Word processing with KWord - Part 2: A newspaper
3. Tables and diagrams with KSpread and KChart
4. Graphics and images with KIllustrator
5. Presentations with KPresenter


In KWord, all elements such as graphics, formulas, tables and even text are each filed in their own, appropriate, frame. This somewhat unusual method, which is unlike other word processing programs, is highly advantageous whenever you want to create a more complex document such as a club newsletter. With ordinary word processing programs, you may for example be faced with a problem if you want to insert a text box into your document later on. You can find lots of examples of these additional boxes in Linux Magazine. If you want to compare the working methods of KWord with those of an ordinary word processing program, you should, as a test, try to copy the box associated with this article, 'All that's left'. Under KWord, all you need to do for this is create a new text frame using Tools/Create text frame, drag this to the desired position and the right size and finally just enter the desired text. Unlike other word processing programs, under KWord the formatting and editing options available for this are not restricted. So all the functions addressed in the sample letter in the last installment (see Linux Magazine Issue 8) can be applied to any text in a frame. Even if KWord did cleverly conceal its working method in the first part of the Workshop, even there the text was entered into a text frame.


It is precisely when producing a newspaper or newsletter that it can be an advantage, instead of beginning with a single, large frame, to start with several smaller text frames arranged in columns. By providing suitable templates, here again KWord offers an ideal starting point. In order to be able to understand the Workshop, after starting KWord you should create such a multi-column document via File/New. To do this, on the Publishing list, select the template Simple Layout. As you can now see, KWord has created a new document with three text frames, whose arrangement already somewhat resembles the typical layout of a newspaper.

[left] Figure 1: Selecting the right template for our newspaper
[middle] Figure 2: KWord after starting the selected template
[right] Figure 3: Reduce the left frame to about this height

Mode confusion

As already mentioned in the last part, KWord works with two different input modes. In the so-called 'text editing mode' you can enter your text in the corresponding frame, while the 'frame editing mode' allows you to modify the layout of the document and thus to adapt the size and position of any frames. After starting, KWord goes by default into text editing mode, as can be seen from the switched-on top symbol in the toolbar on the left-hand side. If on the other hand the second symbol from the top is highlighted, the frame editing mode is active. Since in our first step we want to insert the title 'The Penguin Echo', you should change, using Tools/Edit frame, into the mode of the same name. Alternatively, a click on the aforementioned symbol in the toolbar will achieve the same thing.

Figure 4: The frame drawn up for our title

[left] Figure 5: Every newly created frame must be given a designation
[right] Figure 6: The finished title




In the frame

Figure 7: An example of the linking of two frames: The text in the frame on the upper left is now automatically continued in the lower frame

Shunting depot

The three text frames set up by the template already occupy the whole page, so at first there is not a spare inch of space left for our title. To make a bit of room, click on the top left frame with the mouse. This should now be highlighted, as can be seen from the eight little boxes round its edges. Now position the mouse cursor on the little box in the middle of the top edge. It is important that the mouse cursor takes on the form of a double arrow and not that of a cross, as otherwise you would shift the whole frame with the following procedure. Now press the left mouse button and hold it down. Move the mouse downwards, which will automatically reduce the size of the text frame. As you will note, when moved, the frame border always snaps into the position specified by KWord. Reduce the text frame until there is plenty of room above it for a header (roughly until the value 4.5 is reached on the left ruler). Repeat this process with the right, long box, so that its upper edge is at roughly the same height as that of the left box.

The next step is to create a new text frame for our header. To do this, select from the menu the item Tools/Create Text Frame, or click on the corresponding symbol in the toolbar. Now place the mouse cursor, which has in the meantime turned into cross hairs, in the top left-hand corner of the page. But when doing so, keep a bit of distance from the actual edge of the page. Then hold down the left mouse button and drag up a frame, as in Figure 4. As soon as you release the mouse button, a settings window opens. Each frame in your document is assigned a name, which the user can choose at will. Think of a suitable designation for our example such as 'Title frame' and enter it in the lower input box provided for this purpose. This name will be needed again later on in our Workshop series. Leave all the other settings on the individual listings of the window in the default setting and click OK. Change back to the frame editing mode and adjust the size of your frame in the now familiar way. If you do not like the frame, you can remove it again at any time, when it is selected, using Edit/Delete Frame. Now switch to text input mode and click on the frame just created. In the active box, enter the title 'The Penguin Echo'. Mark this text (for the exact method, see Workshop Part 1), increase its font size and then centre it. To achieve this latter step, you can either use the corresponding symbol from the symbol bar or go to the menu Format/Paragraph and there, on the Flows listing, select Centre. The font size should be set such that the text roughly fills the entire text frame.

Connecting

Once our title is in place, the rest of the frames should also be filled with content.

All that's left

In addition to the functions mentioned in the article, KWord also offers a few more design options. Here by way of example is a brief description of how to create headers and footers and the implementation of styles. To create a header or footer, select the menu item View/Header or View/Footer. KWord then creates, on the top or bottom edge of the page respectively, an additional text frame in which you can enter your header or footer text. So-called 'styles' are formatting templates that also exist in a similar form in other word processing programs. If your document contains repetitive, time-consuming formatting, you can save this as a style. To adapt the text you enter later to the desired layout, all you need do is activate the corresponding style. KWord comes equipped with a few styles of its own for various purposes. You can activate a style by selecting the desired template from the list at the far left of the associated symbol bar. To make a new style template for yourself, you must call up the menu item Extra/Stylist and then click on Add in the newly opened window. There, using the corresponding buttons, you can set all the text attributes which your style template is to include and click on OK. These should immediately be available for selection in the list mentioned above in the symbol bar.




As you have already seen with the title, you can select a text frame by simply clicking on it. Now try to enter a little text in each of the three text frames already made. You will not be able to do this in the long text frame on the right, because this has been linked by KWord with the bottom left text frame. This means that the text, if it is too long for the left lower frame, is automatically continued in the long, right-hand frame. Next, as an example, the left upper frame will be linked to the left lower frame. To do this, change to the frame-editing mode and mark the left upper frame by clicking on it with the mouse. Then select Edit/Reconnect Frame from the menu. In the window that pops up, you can change a few settings relating to the behaviour of the inter-linked text frames. For our example, accept all the defaults and change to the listing Connect Text Frames. Here you will see a list of all the text frames included in your document. The frame you have clicked on is highlighted in colour in this list. By the way, you will also find here the exact names you entered when creating the text frames. These designations make it easier to identify the link candidates concerned. The three pre-set frames from our newspaper example were created from the template and thus bear the standard names pre-set by KWord. Select the list entry Frameset 2 and click OK. The two frames should now be linked together in the same way as the left lower one and the right, long one previously (cf. Figure 9).


Bulleting

The content of our newspaper is intended to appear in the left upper box already set by KWord, in the form of a small bullet list. To do this, select the menu Format/Paragraph and (on the listing Numbering) the item Arabic Numbers. In the lower part, under Start at (1,2,...), enter the figure '1', as the result of which the first bullet point will start with the number 1 instead of 0. Click on OK and enter a few fictional items of content in the text box, which are intended to appear in The Penguin Echo. Whenever you press the Enter key while doing so, KWord automatically creates a new bullet point. If you have entered all the points, call up Format/Paragraph again and select, in the listing Numbering, the item No numbering. You can make a list considerably faster (but with fewer setting options) via the corresponding symbol from the symbol bar. Click on this once, and a list is created automatically. You can leave the bullet mode thus activated just as quickly by simply clicking again on the symbol.

[left] Figure 8: Two frames are linked together
[right] Figure 9: After the frames have been linked as described in the article, the text is continued in the respective following frame

Eye catcher

All we need now is a little graphic, placed exactly in the centre of the page, to round off the overall impression of our newspaper.

Points of view

Every application from KOffice is able to show different views of a document at the same time. To do this, the corresponding application must be running without the KOffice desktop. If so, select the menu View/New view. This opens a new application window with the same content. Via View/Split View you can now divide this window into two further windows. This makes it possible to view two different parts of the same document at the same time, and thus avoid all that fiddly scrolling. In the View menu there are, by the way, a few other sub-items with which you can control these views.




[left] Figure 10: Creating a bulleted list
[right] Figure 11: The completed first page of The Penguin Echo, with the text flowing round the graphics

The author

Tim Schürmann is a student of IT at the University of Dortmund and wonders why the Linux penguin does not have feathers, but a highly-polished exterior.

Give in to temptation and, in text input mode, insert a graphic via the menu item Insert/Picture in your text. Any graphic inserted in this way will be treated like a normal symbol in the text. This mainly means that you will find it difficult or even impossible to alter the position and the size of this graphic. Instead, create a graphics frame via the toolbar (fourth symbol from the top). Alternatively, the menu command Tools/Create Picture Frame also leads to the corresponding dialog window. After selecting the image file to be imported, the mouse cursor, as it did when the text frame was created, turns into cross hairs. Now click on the place in your document where you'd like the graphic inserted. As soon as KWord has placed the image in this position, you will notice that the object cannot be changed, either in size or position. KWord seems to be holding tight to this graphic. This behaviour is attributable to the still-active text-editing mode. To get back to frame-editing mode, click in the toolbar on the second symbol from the top or select Tools/Edit Frames from the menu. Now you can click on the graphics frame just created and modify its size and position. To change the size of the marked object, position the mouse cursor, exactly as with the text frame, on one of the little boxes until it turns into an arrow with two points. Hold down the left mouse button and then drag the object to the requisite size. The graphic can be moved to a different position in a similar way: place the mouse cursor on the object until it turns into a double arrow. Now, keeping the left mouse button pressed, you can bring the image to its new position.

Should the graphic overlap other text frames, KWord can allow the text contained therein to flow around the graphics object. You can activate this by pressing the right mouse button over the text frame in which the object concerned is located. In the context menu which appears, select Properties and then the listing Text Run Around. Here you can set how the text flows round the object.

With this sample newspaper, so too ends the presentation of the word processor KWord. As you have seen, the frame-based approach, which certainly takes some getting used to at first, is a good starting point for creating really complex documents in a relatively simple way. Next time we will be taking a look at the no less interesting spreadsheet, KSpread, whose range of functions is already a match for many professional programs. ■

Creating your own templates in KWord

Info

KDE homepage: http://www.kde.org/
KOffice homepage: http://www.koffice.org/
Workshop Part 1 in Linux Magazine Issue 8

Creating your own template in KWord by means of the corresponding assistants is almost child’s play. First make a KWord document in the usual way, containing precisely the content with which the template is later to be created. To do this you can use all the tools and functions available in KWord. Then select from the menu the item Extra/Create Template From Document. In the window now shown, give your template a name in the corresponding box. In the list below this you will find all the groups to which you can assign your template. When selecting the templates, the overriding group corresponds exactly to a listing on which later the individual, subordinate templates will be offered for selection. Via Add group you can create a new listing. If you have decided on a group, simply click OK. From now on, when creating a new document you will also be able to select your own template.

■


BOOKS REVIEW

LINUX IN NO TIME BY UTE HERTZOG
ALISON DAVIES

Info

Published by Prentice Hall
Priced at £19.99 ■

You've bought your new computer. You've taken the plunge and decided not to go down the Microsoft route, but instead you have got a copy of Linux. You need to install it and would like someone to hold your hand and talk you through it every step of the way. This is the book for you. The book assumes no technical knowledge whatsoever and even explains jargon that is in everyday use: mouse mat, menu and window are examples. Linux in No Time consists of a series of tutorials taking you from first putting the disc into the drive, through the process of starting up and configuring a Linux machine (in this case running Caldera OpenLinux). Later chapters include KDE applications and utilities; installing software; working with Star Office; the Internet and networking with Linux. Each tutorial is clearly set out, first telling you what it will cover and then going through it step by step with plenty of clear screenshots and an illustration of the mouse showing you which button to click on what. Even individual icons are clearly shown, so that there is no excuse for clicking on the wrong thing.

Chapter one deals with installation and covers partitioning, creating a boot disk, graphics settings and passwords. Chapter two starts up the newly-installed program and deals with the desktop, the mouse, windows and the help function. The next chapter continues with starting a program, virtual desktops and closing a program. Chapter four is starting to get more complicated and covers KDE configuration and modification of the desktop and windows, but its step-by-step instructions remain easy to follow. Chapter five is more technical and explains files and directories as well as the trash bin. Lots of tips and definition boxes make it easy to follow, if a little obvious. The next section goes further into KDE, formats text with the editor and tries out the paint package. It creates a diary as well as an address book. The chapter ends with some light relief by describing the games and toys on KDE, with brief descriptions of how to play some of the games. Chapter seven covers printing and system information. Chapter eight installs and uninstalls programs and introduces Star Office. Star Office is continued in the following chapter with tips on using the various packages. Chapter ten takes you onto the Internet using Netscape, and covers downloading, searching and email. Networking and using Samba are dealt with in chapter eleven. The book concludes with a section on troubleshooting, covering common problems that readers may come across. The appendix includes a list of Linux-compatible printers, as well as a list of websites, including the book's only mention of other Linux distributions. Most of the sites would be a little beyond what is needed by the book's target audience.

The most noticeable thing about Linux in No Time is how clearly it is set out; it leaves no room for error, and explains everything to the point of over-simplification. The screenshots show you exactly where you should be at any given moment and the mouse pictures show exactly what you should be doing. Instructions do not get more straightforward than this. It must be acknowledged, however, that most Linux users do not need quite such basic instructions and the style may grate with some. It is, perhaps, padded out with some unnecessary definitions, but on the whole it is a very attractive textbook. ■


BEGINNERS

The Answer Girl

SHOVELING DATA
PATRICIA JUNG

The fact that the world of everyday computing, even under Linux, is often good for surprises is a bit of a truism: Time and again things don’t work, or not as they are supposed to. Linux Magazine’s Answer Girl shows you how to deal elegantly with such little problems.

$HOME: The home directory of the respective user is stored in the environment variable HOME. With a $ before the variable name, you can reach its content, so echo $HOME outputs the home directory of the enquiring user on the command line.

Backup medium: Data carrier reserved for the recording of backup data; on a large scale this usually means magnetic tapes and hard disks.

Rotation: To face the risk of a total failure of the backup medium with equanimity, you should if at all possible use a different medium for each backup run. But since this is highly impractical (and data still becomes obsolete anyway at some point), you should use a number of media in rotation: for example, Monday is always tape 1, ..., Sunday is always tape 7. ■

You have probably heard more than enough of the well-meaning litany about making a backup. At work or at university there may be some justification for leaving responsibility for this tedious activity to system administrators, but what happens to your data at home? A tape drive is pretty rare at home, a backup on CD requires a CD burner, and if there isn’t a blank in the drive at all times, don’t even think about automation. Data backup on diskette? You might do that with the letters from the Inland Revenue, but hardly with your 100-page thesis and the exchange of emails with past loves, which has by now grown into several MBs.

Storage strategies

With the current size of hard disks, you can certainly spare the room for a dedicated partition and use it exclusively as backup space. Bad news if the hard disk goes off to the great cyber hunting grounds in the sky, but better than nothing at all. Better yet is a second disk - even with the six-year-old gigabyte from the cast-off computer you can go a long way. As long as the computer does not get stolen or go up in flames, this is not bad at all. But by no means should you underestimate the (safer) alternative of not mothballing the old computer in the first place, but turning it into your own personal backup cupboard. Of course this could also be a notebook, on which data worth preserving is always kept in a second copy. To do this will not need more than a little LAN. And yet, thanks to flat rates, ADSL etc., even an account at college or the Internet computer belonging to your partner is a suitable storage place for selected data. Those not wanting to back up their entire 2GB installation but wishing to stick to hand-optimised configuration files and the best of $HOME can presumably make a backup via ISDN or modem. In case of doubt there is still an update to SuSE or Red Hat waiting after the next disk crash. In the face of such heretical statements, any conscientious system administrator will of course scream blue murder, but hand on heart: you have still not got to grips with any proper backup software, have you? Even if you have, it is probably not backing up your home computer anyway. If you are one of the shining exceptions, the question arises: when did you last check whether your backup could actually be restored?



Backup or data reconciliation?

So what we are looking for is an alternative which may not be quite so secure, but on the other hand is easier to manage. Regular backups are usually performed as so-called incremental backups. This means that once (or better, at regular intervals) a complete security copy of all data is made, and between two full backups only the differences with respect to the previous version are saved in each case. Plus, the backup media are rotated, so if later on something breaks or otherwise turns out to be unusable, you will hopefully be able to fall back on the next oldest backup copy. Applied to our home data this would of course also be the ideal situation, but for rotation, several disks or even computers would be necessary - an impossible demand for backup on hard disk. The keyword 'incremental', on the other hand, certainly has its attraction for us - after all, we don't want to back up the data anew every time when it hasn't even changed. Anyone wanting to keep files ready in various processing versions cannot, however, solve this problem with an incremental backup; they would be better off using version control software such as cvs. We, for our part, can settle for a situation where the target system contains precisely the data which was in the source directories at the last data reconciliation - no more, but no less either. So what we want is a simple mirroring of the data, preferably via the network and, if at all possible, in such a way that the data (and especially the password) are encrypted. For a simple restoration of the data to be possible, there should not be any accumulation of files in the target directories which have already been deleted from the source directories. This means that before each data reconciliation we must be sure that all previous deletions were correct - that is the price to be paid for not rotating backup media. Your choice of which directories to back up should be determined by the following criteria: the capacity of the target system, the form of network connection between the two computers, and your personal evaluation of which data is actually worth backing up.

A question of software

If there is an FTP server running on the target system, you can of course use it, but this means transmitting password and data in clear text over the network. Also, FTP client programs aren't usually capable of transferring only the most recent data, or of automatically deleting data that no longer exists in the source directories. If you have to use FTP as the method of transfer, it's best to stick to the mirror functions that proper backup programs provide. The Secure Copy program scp, which comes with the Linux version of the SecureShell or its open-source counterpart OpenSSH, is certainly suitable for this. Here, all the data travels over the network encrypted. A secure shell server, the daemon sshd, should be on every Internet computer on which you do not wish to work only from the local console anyway. Nevertheless, some of the criticism of FTP clients also applies to scp: it should not be used for a data reconciliation. Anyone who has been involved with Unix for a while may recall that the unencrypted counterpart of scp is called rcp. Many people were irritated by the fact that this cannot perform a data reconciliation, and these included Paul Mackerras and Andrew Tridgell, the latter being better known from Samba. And because their rcp substitute (called rsync) can also perform an encrypted data reconciliation via ssh, it's worth a trip to http://rsync.samba.org/rsync/download.html if the distribution does not come with a suitable package.

Decrypting

A man rsync helps out initially: the SYNOPSIS chapter presents all combinations of data transfer options schematically:

rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
rsync [OPTION]... [USER@]HOST:SRC DEST
rsync [OPTION]... SRC [SRC]... DEST
rsync [OPTION]... [USER@]HOST::SRC [DEST]
rsync [OPTION]... SRC [SRC]... [USER@]HOST::DEST
rsync [OPTION]... rsync://[USER@]HOST[:PORT]/SRC [DEST]

As usual in the case of the Backus-Naur Form notation used in manpages, options in square brackets can be left out.

Client: 'Customer', making use of the services of a server. The term is used to refer both to the computer on which a client program is running and to this program itself. This means a computer can be both client and server at the same time.

SecureShell: A safe replacement for the traditional remote login or r-services Telnet and RSH (Remote Shell). A remote login, such as logging onto a distant computer, makes it possible, while working on a local computer, to access a computer connected via the network as if you were sitting right in front of it. To do this, one starts a remote login client on the local computer (such as telnet, rsh or ssh), which converses with the remote server (telnetd, rshd or sshd). With a secure shell connection, unlike Telnet or the r-services, all data is transmitted encrypted.

Console: The unit forming part of a computer, consisting of (local) screen and keyboard.

Samba: Windows computers can allow mutual access to their files and/or printers. The exchange of data is transacted according to the rules of the Server Message Block network protocol, where messages travel back and forth in blocks between server and client computers. Samba is software that implements this protocol and thereby also gives Linux and other Unix computers the option of allowing such SMB accesses and/or access to approved resources. ■



IP address: Unique identity of a computer on the Internet - either as a combination of numbers (in the current, commonest version 4 of the Internet Protocol, a maximum of four three-digit numbers separated by dots) or as text, consisting of domain and computer name, such as www.linux-magazine.co.uk. The conversion between numerical and textual IP addresses is handled by nameservers, also known as DNS (Domain Name System) servers.

Port: If all planes/trains arriving at roughly the same time at a large airport or station were to go to the same gate/platform, there would be rather a lot of collisions. A computer offering various services (server) is confronted with a similarly precarious position with respect to network traffic. This is why every server process (daemon) eavesdrops at a different 'gate/platform' - the port. When a daemon listens on a port which is reserved for its service, a well-known port, the client does not normally need to state a port number. But if the server uses a different port (a Web server using 8080 instead of 80), the client must be told of this explicitly.

~: The tilde is an abbreviation used by the shell for the home directory of the present user. If there is a username after the ~, it means the home directory of this user. ■

The three dots do not exactly correspond to a scientifically precise nomenclature, but they do make clear what the authors want to say: there can be more details of the type just described written here (for example additional options). Just as easy as decrypting the [OPTION]... placeholder as "any number of the options listed below in the section OPTIONS SUMMARY" is the demystification of [USER@] (a user name and a following @) and HOST: these are the optional specification of a user and the numeric or textual IP address of the remote computer respectively. With our expectations of file and directory transfer, the only way to interpret SRC and DEST is as source and destination files/directories. Since the GENERAL section notes that both lines with the double colon (::) require an rsync daemon, it is only the first three lines which are of interest to us:

1. Copy local files/directories into a directory on a remote computer.
2. Pack copies of remote data into a local directory.
3. Mirror local data in a different local directory.

Since we wish to transfer our data via the SecureShell protocol, the last option - that of addressing an rsync server on the port PORT - does not interest us either, so we can forget the last line.

Are you local?

We'll begin with the last and simplest case: the directory ~/article is to be copied as backup onto another partition mounted under /mnt/backup.

[trish@lillegroenn ~]$ rsync article /mnt/backup
skipping directory /home/trish/article/.

That was not exactly a rush of copying: there is not a single file in the destination directory /mnt/backup. Now we must take a look at the options:

Options
[...]
-r, --recursive    recurse into directories

Anyone wanting to copy entire directories together with their content should thus also specify a -r or --recursive at the same time:

[trish@lillegroenn ~]$ rsync -r article /mnt/backup

The disk noise does indicate that something is happening, but what?

write failed on article/LM/LM0501/ootb/gramofile-3.html : No such file or directory
unexpected EOF in read_timeout
unexpected EOF in read_timeout

A fast df (disk free) confirms our fears:

Filesystem  1k-blocks  Used    Avail  Use%  Mounted on
/dev/hda2   643959     610690  5      100%  /mnt/backup

The partition is full! So we first delete the failed backup with rm -rf /mnt/backup/article, completely and recursively. The thing to do now is to find out, using du, where the miscreants are hiding. To prevent the thousand sub and sub-sub-directories rushing right past us, we shall limit ourselves to the first two directory levels under ~/article:

[trish@lillegroenn ~]$ du --max-depth=2 article
[...]
1924    article/LM/LM0501
51      article/LM/LM0601
73270   article/LM
[...]
84049   article/designer/qt-designer2
234     article/designer/qt-designer1
100112  article/designer
[...]

The numbers in the first column, the size of the directory contents in Kbytes, are still extremely unclear. The miscreant is quite certainly more than 1MB in size, and luckily du has the option -m, with which the size details are stated in rounded whole MB. That leaves a whole series of zeroes for the directories that are smaller than 1MB. To see only the larger directories, we set awk to work:

[trish@lillegroenn ~]$ du -m --max-depth=2 article | awk '$1 > 1'
[...]
2       article/LM/LM0501
72      article/LM
[...]
82      article/designer/qt-designer2
98      article/designer
[...]

awk now filters out all du output lines in which the first column ($1) is greater than 1, and does not display the rest at all. In this way we have detected that the miscreant is ~/article/designer/qt-designer2, and as this directory contains only test software, we can also do without a backup of it. With the --exclude flag we now tell rsync that it should ignore all files containing a qt-designer2 in the path or file name. But this time we are more cautious and do a dry run first with -n (not actually to be executed):

[trish@lillegroenn ~]$ rsync -rn --exclude "qt-designer2" article /mnt/backup

The real run without the -n precaution option is causing problems again, though:

[trish@lillegroenn ~]$ rsync -r --exclude "qt-designer2" article /mnt/backup
[...]
skipping non-regular file article/designer/qt-designer1/qt-2.2.3/include/qxml.h

A look, using ls -l, at the suspect file brings the explanation:

lrwxrwxrwx 1 trish users 17 Dec 21 05:32 article/designer/qt-designer1/qt-2.2.3/include/qxml.h -> ../src/xml/qxml.h

The file in question is a link, which was simply not copied with the others.



But there is also a remedy for this: the rsync option -l, with which symbolic links are retained. The manpage, in the section USAGE, also kindly explains that the archive option -a simultaneously copies recursively, retains links and doesn't change attributes, rights, the owner details or any device files either. Exactly what we want for a backup! Quite incidentally, we also learn here about the verbosity option -v, which we shall also use from now on in our tests.

There is still one problem: if we don't make sure that files deleted in the source directory also disappear from the backup, multiple deletions will at some point fill up the backup partition. Quite apart from that, when a backup is really necessary, it is tedious clearing up all the files which had long since been thrown away, after playing back the data. The corresponding rsync option, which deletes everything at the destination site that no longer exists at the source site, is called --delete. So let's make a full backup, then rename a file from ~/article for test purposes and see what happens:

[trish@lillegroenn ~]$ rsync -av --exclude "qt-designer2" article /mnt/backup
[many files]
article/LM/LM0601/Answergirl_0601.html
[many files]
[trish@lillegroenn ~]$ mv article/LM/LM0601/Answergirl_0601.html !#:1_new
mv article/LM/LM0601/Answergirl_0601.html article/LM/LM0601/Answergirl_0601.html_new
[trish@lillegroenn ~]$ rsync -av --delete --exclude "qt-designer2" article /mnt/backup
building file list ... done
article/LM/LM0601/
deleting article/LM/LM0601/Answergirl_0601.html
article/LM/LM0601/Answergirl_0601.html_new
article/LM/LM0601/
wrote 43868 bytes  read 32 bytes  29266.67 bytes/sec
total size is 26953280  speedup is 613.97

rsync dutifully reports that it is deleting the file Answergirl_0601.html, which no longer exists in ~/article/LM/LM0601, from /mnt/backup/article/LM/LM0601 too, and instead is creating the new file Answergirl_0601.html_new. With !# we are telling the Bash to insert everything which has been typed on this command line so far (mv article/LM/LM0601/Answergirl_0601.html). Thanks to :1 we are somewhat more selective and tell the shell to restrict itself to word number 1 (that is, the first argument, article/LM/LM0601/Answergirl_0601.html). Anyone who likes to play safe and wants to retain a safety copy of all amended files (thus even the deleted ones) in the backup directory will presumably become familiar with the rsync option -b. This is by no means a substitute for version control, but could be of interest to more than just the nervous. By default, the backup files are given a tilde after the file name.
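To sketch that safety net with the flags already used above (an illustration only, not a recipe; the tilde suffix is rsync's default), such a belt-and-braces run might look like this:

[trish@lillegroenn ~]$ rsync -av -b --delete --exclude "qt-designer2" article /mnt/backup

Files which this run would delete or overwrite then survive in the backup tree with a tilde appended: Answergirl_0601.html~, for example.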


Off in the distance

We do not really need much more if we are limiting ourselves to the local mirroring of data. But it is always safer to have a copy on a different computer. If we recall the synopsis, this too is very easily realised by rsync: if the usernames are different on the source and destination computers, the latter must be stated, with a following @, before the address of the remote computer. There is also a colon at the end, after which the destination directory can be written - or nothing, if we are settling for the remote home directory:

[trish@lillegroenn ~]$ rsync -av --delete article pjung@backup.linux-magazine.co.uk:

Since there are hopefully no r-services running on the remote computer, there ought to be an error message. We're better off going via SecureShell at this point, provided there is an sshd running on backup.linux-magazine.co.uk. To get rsync to transfer via SecureShell, there are the options -e ("execute") and --rsh ("substitute for rsh"). The former wants the ssh command after a space, the latter wants an equals sign (--rsh=ssh):

[trish@lillegroenn ~]$ rsync -av --delete -e ssh article pjung@backup.linux-magazine.co.uk:

If your ssh command does not lie in the search path, you must of course state the full path, -e /usr/local/bin/ssh. So you don't want to place the article directory on the destination computer directly underneath pjung's home directory? Then we must also explicitly specify the destination parent directory, such as:

[trish@lillegroenn ~]$ rsync -av --delete -e ssh article pjung@backup.linux-magazine.co.uk:~/backup

The USAGE manpage section revealed, if you recall, that the data can be transferred compressed with -z. This certainly plays a role now that our data is going via the network, which is why we add this option before actually pressing the Enter key:

[trish@lillegroenn ~]$ rsync -avz --delete -e ssh article pjung@backup.linux-magazine.co.uk:~/backup
pjung@backup.linux-magazine.co.uk's password: [enter password]
building file list ... done

Better with script

Repeatedly typing in this whole rigmarole - well, we're much too lazy to do that. Anyone wanting to back up several directories or even individual files (such as the bookmarks of a Web browser) will be longing for a little script which, once written, can if possible even be processed automatically by a Cronjob.

rm -rf: One of the most notorious Unix commands of all: it deletes, without challenge (-f stands for force), an argument directory together with all subdirectories. Before you set this command off, then, you should be really sure that you have not included any typing errors: an rm -rf /mnt/backup leaves just an empty /mnt behind, if backup was previously the only directory entry in /mnt.

Path: The sequence of directories via which one must go if one wants to reach a certain file in the file tree.

Bash: The 'Bourne Again Shell' is used by most distributions as the standard command line interface. A shell accepts user inputs and transforms them so they turn into orders (program commands) for the kernel.

r-services: See the explanation of SecureShell in this article.

Cronjob: Task in a Cron table, which is executed by the Cron daemon at a specified time, repeatedly and automatically, without any action on the part of the user; cf. the manpages on cron(8), crontab(1) and crontab(5). ■




It is best if we write the files to be backed up as a list separated by spaces in a variable named BACKUPFILES, while the remote user name, the @, the address of the remote computer, the colon and the destination directory are easy to amend in BACKUPTARGET. For the script equivalent to the command

[trish@lillegroenn ~]$ rsync -avz --delete -e ssh article .netscape/bookmarks.html pjung@backup.linux-magazine.co.uk:~/backup


the variable contents therefore look as in Listing 1. But wait: why is the .netscape subdirectory now missing on backup.linux-magazine.co.uk in the ~/backup directory, so that bookmarks.html is suddenly present as ~/backup/bookmarks.html? Because we, as the rsync manpage shows, forgot the option -R (relative), which makes sure that on the destination computer exactly the same relative paths are installed as on the source computer.

No password

If, for example, one wishes to automate the data reconciliation using a Cronjob (cf. Crontables, LM Issue 6 p.108ff.), entering a password turns into a problem. It can be resolved, even if security fanatics might need to close one eye. The keyword is Public Key Cryptography: one has a pair of keys, of which one is kept secret and the other is publicly distributed. Authentication is only possible when both secret and public key come together. As we can see from the manpage on ssh, our chosen method of transfer supports this. What we have to do first is to generate the key pair for the computer executing the backup script.

Listing 1: Backup script

#!/bin/sh
# files and directories to be backed up, starting
# from the home directory
BACKUPFILES="article .netscape/bookmarks.html"
# Backup target
BACKUPTARGET="pjung@backup.linux-magazine.co.uk:~/backup"
cd    # Change to home directory
rsync -e ssh -aRvz --delete $BACKUPFILES $BACKUPTARGET

Replace the italic details, save the file, and use chmod u+x to give it the necessary execution rights for yourself. Then the script can be executed by calling up its name (if necessary with path details).

Reciprocal data reconciliation

Notebook owners often get annoyed about inconsistencies between the data stored on the desktop computer and the notebook. The solution sounds simple: a script as in Listing 1 is installed on both computers, and depending on which computer was last worked on, the data on the other computer is updated ... and, in the worst case, this overwrites a more recent version on the destination system with the old one. Here the rsync option -u (update only) can help. This ensures that files with a more recent time stamp on the destination system than on the source system are not overwritten. One important point here: the computer time on the two systems absolutely must be synchronised.
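By way of illustration only - a minimal sketch reusing the host and directory names from this article; running it once in each direction propagates whichever copy is newer, but it will not merge files edited on both sides:

[trish@lillegroenn ~]$ rsync -avzu -e ssh article pjung@backup.linux-magazine.co.uk:~/backup
[trish@lillegroenn ~]$ rsync -avzu -e ssh pjung@backup.linux-magazine.co.uk:~/backup/article ~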


We could almost have guessed it: the command for this is called ssh-keygen ("ssh key generation" - creating the ssh key).

[trish@lillegroenn ~]$ ssh-keygen
Generating RSA keys: ..................ooooooO.................ooooooO
Key generation complete.
Enter file in which to save the key (/home/trish/.ssh/identity): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/trish/.ssh/identity.
Your public key has been saved in /home/trish/.ssh/identity.pub.
The key fingerprint is:
f7:68:22:9f:a3:be:37:7c:7f:92:c2:fb:a1:86:ff:fe trish@lillegroenn.troll.no

Anyone wanting to save their secret key in the suggested file ~/.ssh/identity simply confirms with the Enter key; otherwise a file name, preferably with path, is necessary. It gets critical when it comes to the request for the passphrase: normally we would set one to protect the private key, but then we would have to enter it again each time - an infinite circle. That's why this time we are going to swallow the bitter pill and again type only Enter. Also, at the last request to repeat the (now blank) passphrase, it is still appropriate to enter nothing but Enter. Anyone finding the no-password key unsettling can still increase security by frequently generating and distributing a new key.

We shall now take the public key (saved with the ending .pub) and transfer it to the backup computer - via SecureShell, of course (thus with scp, or by copy & paste while logged on via ssh). It must in any case end up in the file ~/.ssh/authorized_keys there, like this:

[trish@lillegroenn ~]$ cat ~/.ssh/identity.pub | ssh -v pjung@backup.linux-magazine.co.uk "cat - >> ~/.ssh/authorized_keys"

This fiddly procedure, instead of a simple scp ~/.ssh/identity.pub pjung@backup.linux-magazine.co.uk:~/.ssh/authorized_keys, is necessary when ~/.ssh/authorized_keys is already accommodating other keys on backup.linux-magazine.co.uk, too. As the result of the double >, the standard input which the second cat outputs (symbolised by the -) is attached to the end of ~/.ssh/authorized_keys instead of overwriting it. And where does this input for ssh come from, which the latter passes on to the remote command to be executed, cat - >> ~/.ssh/authorized_keys? The pipe | is responsible for this, which shoves the output of cat ~/.ssh/identity.pub into the ssh command.

All that's left now is to test whether the script actually functions without a password. If so, there is no further obstacle to a backup Cronjob.
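To round things off, a minimal sketch of such a Cronjob, assuming the script from Listing 1 has been saved as ~/bin/backup.sh (the path and times here are hypothetical). An entry along the following lines, added with crontab -e, runs the reconciliation every night at 2:30 am:

# minute hour day-of-month month day-of-week command
30 2 * * * $HOME/bin/backup.sh

The command field is handed to the shell, so $HOME is expanded as usual. ■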


KORNER

K-splitter

ADMINISTRATIVE MATTERS
STEFANIE TEUFEL

Constantly entering the famous/infamous Linux rule of three of configure, make, make install can become very tedious. If you’d like to avoid this task, or provide your less skilled friends with a graphical user interface for compiling Tar-balls, Kconfigure is just the thing.

Linux rule of three made easy

The program, by Javier Campos Morales, provides configure-fatigued users with everything necessary to compile and install applications from source code, in the usual KDE look and feel. The latest version of the compiler aid can be found on the author's homepage at http://kconfigure.sourceforge.net/. The way kconfigure works is simple: open the program with a kconfigure in any terminal emulation of your choice, and marvel at the window as shown in Figure 1.
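For comparison, this is the manual sequence that Kconfigure wraps - a generic sketch with a hypothetical package name (the install step normally needs root privileges):

tar xzf someprogram-1.0.tar.gz    # unpack the Tar-ball
cd someprogram-1.0
./configure                       # adapt the build to your system
make                              # compile
su -c "make install"              # install as root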

After that, you have the choice as to whether you wish to use your graphical compiler assistant via the buttons or the menu bar. But first you must trawl, by clicking on the folder icon, through the unpacked source directory of the program to be installed, and there select the configure file. Unfortunately, unpacking the sources is not something kconfigure will do for you; here, the K-tool karchiver can help you along. If you want to give the configure command an argument such as --with-qtdir=/path/to/qt-directory, it is advisable to take the route via the menu bar. To do this, select the item Build/Configure with arguments.... In the dialog window which then plops open (Figure 2) you can enter the required options.

Figure 1: Compile me!

Figure 3: Once with and ...

Figure 4: ... once without errors

Figure 2: Always the right arguments



Figure 5: Identification, please

Figure 6: And off it goes


Anyone who is not really sure about the individual options can access help on them at any time by clicking on Build/Configure help in the top half of the kconfigure window. If you started the configuration process via the menu bar or by clicking on the Configure button, the program will immediately set to work without any further challenges. You can monitor progress in the upper half of the window, while error messages or warnings appear in the lower half (Figures 3 and 4). The commands make and make install are treated exactly like the configure command by kconfigure. With one small but special difference: before the actual installation, the application checks which user you have logged on as. If you are travelling as a normal user, kconfigure will first confront you with a dialog box, as in Figure 5, in which you must enter the root password before it continues.

[left] Figure 7: The big clean-up begins
[right] Figure 8: Better safe than sorry...

If you want to interrupt one of the commands you have given, all you need to do is click on the button Kill Process.

Cleared up

Anyone who has become too carried away by the simple handling of kconfigure may now break out in beads of sweat when taking a look at the amount of space occupied on their hard disk. If you are plagued by a bad conscience, you should risk a look at Kleandisk. The latest version of this easy-to-use disk cleaner can be found at http://www.casema.net/~buursink/kleandisk/. Contrary to normal practice, though, at this point you should download not the latest version, Kleandisk-2.0beta1, but its predecessor, Kleandisk-1.2beta2. The reason: the new version provides support for the first time for the removal of unused rpm packages. Unfortunately, though, this leads to problems with some versions of rpm, so that on various computers (for example on Red Hat 7.0) the program will not compile.

Call up kleandisk either via the K-button/Applications/Kleandisk or by entering kleandisk & in a terminal emulation. After that you will see a window, as in Figure 6. Click there on the button UDG Viewer in the Clean Up tab. The ominous abbreviation stands for User Defined Group. In the next window you can define the directory which kleandisk is to clean up for you, and also the file types which the program is to give their marching orders. kleandisk then sets about searching for the less useful files on your system and sooner or later presents you with its inventory in the lower half of the window, as in Figure 7. At that point I decided that I really do not need the core file indicated below, and informed the disk cleaner of this decision by clicking on the green box next to the core file. After that it is enough to click on the Cleanup button. kleandisk then begins to communicate cheerfully with you. It dutifully asks you, window by window (as for example in Figure 8), whether you want to move, delete or archive the selected files, whether you might perhaps prefer to make backups of the files to be deleted, and if so, where they should go. In the last step, you find out how much space you are saving overall as the result of making these decisions. To get rid of the files now once and for all, click on Finish, which lets kleandisk off the leash...


Figure 9: Everything legal?


Tar-ball: The program tar is an archiving tool which is well-known under Unix. A collection of files packed together with it into one file is usually called a tar-ball, and has the file ending .tar.gz or .tgz if it has been put together with tar and compressed with the program gzip.
core file: The last memory image of a crashed program is retained for posterity in files called core. Experienced programmers can find out the cause of the crash from these with the aid of a debugger, but for anyone else these files are simply a waste of space. ■
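Both terms are quickly demonstrated on the command line (a sketch; the file and directory names are only examples):

  tar czvf myfiles.tar.gz myfiles/    # pack a directory into a gzip-compressed tar-ball
  tar xzvf myfiles.tar.gz             # unpack it again
  rm core                             # reclaim the space wasted by a core file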

Permission granted

Wanting rights and getting rights under Linux as a truly multi-user system are two different kettles of fish. As you may already have noticed, your system differentiates very precisely as to who exactly can read, write or execute the diverse files and programs on your computer. And to avoid confusion, information is stored with each file as to whether the owner, group members or other users can read, write or execute the respective file. With the command chmod you can of course change these access rights at any time. A graphical front-end for this command, named kchmod, can be found at http://www.leeta.net/kchmod/, and with it the setting of access rights is twice the fun (Figure 9). Simply select, via File/Open, the file you want to edit and choose between the options on offer. Is the file to become writ(e)able, read(able) or executable? The choice is yours. After that, quickly save the change with File/Save and it's a done deal - if only it were always so simple to guard one's rights. ■
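For anyone curious what kchmod is doing behind the scenes, the same changes on the command line might look like this (a sketch; notes.txt is only an example file):

  ls -l notes.txt        # shows the current rights, e.g. -rw-r--r--
  chmod u+x notes.txt    # the owner (u) may now also execute the file
  chmod go-r notes.txt   # group (g) and others (o) may no longer read it
  chmod 644 notes.txt    # back to the starting rights, in octal notation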



K-tools

WELL PACKED STEFANIE TEUFEL

This month's installment of K-splitter covers Kleandisk, a tool which you can use to make a bit of space on your overstuffed disk. But it needn't come to that in the first place, which is why this article is devoted entirely to Karchiver, a program which will help you to compress data and files simply.

Maybe one or two of you have already worked with the forerunner karchiveur. But you should still risk an update. karchiver might have lost the u from its name, but in other ways it has only gained. karchiver 2.0.3 co-operates smoothly with KDE 2.0. Besides, with diverse wizards, a few little helpers have been brought in to make life even easier for you. And a lot has stayed the same: karchiver still turns working with compressed data - whether tar, gz, bz2 or zip files - into child's play. And in the new version you can use this tool to look at all these files, unpack and repack them. The latest Karchiver can be downloaded from http://perso.wanadoo.fr/coquelle/karchiveur_en.shtml.

The packages gzip, bzip2, unzip, zip, lha, rar and/or arj should also be on your computer. That's no problem anyway, since common Linux distributions always come with these on board. They just have to be installed.
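A quick way to check is from the shell (a sketch; rpm -q only applies on RPM-based distributions such as Red Hat or SuSE, while which works anywhere):

  rpm -q gzip bzip2 zip unzip    # reports the installed package versions
  which lha rar arj              # prints a path for each program it finds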

Packing

Start your graphical archiver by simply entering karchiver & in a terminal emulator, and off it goes. karchiver first supplies, in a separate window, some tips, which may or may not be helpful. If these bother you, you can quickly chase them away by deselecting the box Display tip of the day at next start. Admittedly, the introductory window (Figure 1) does not exactly look spectacular. But the first impression is misleading.

[left] Figure 1: karchiver says Hello [middle] Figure 2: Faster access thanks to the archive browser [right] Figure 3: Meaningful




[left] Figure 4: Unpack me! [right] Figure 5: Optional


In the new version, the so-called archive browser opens automatically, which helps you quickly select the tgz, zip etc. files on your hard disk (Figure 2). If you want to know more about the inner life of a compressed file, all you need to do is click on it in the archive browser. Alternatively, select File/Open in the menu bar and trawl through the old familiar KDE selection box until you reach the right file. Depending on the size of the archive, karchiver presents you, sooner or maybe a bit later, with the content of the file, including useful information such as the size, date and permissions of the individual files (Figure 3). Once invited into karchiver, it's entirely up to you to choose what you want to do with the archive. You can find the various options all neatly listed under the menu item Archive. Let's assume the file is to be unpacked.

gzip: This tool compresses the files you specify using Lempel-Ziv coding (LZ77). It automatically renames the packed file file.gz, normally retaining access rights and timestamps, but ignoring symbolic links.
bzip2: bzip2, like gzip, allows data to be compressed. Since its different algorithm can often achieve better compression, this program has increasingly been given preference recently. Files compressed with bzip2 can be recognised by the ending .bz2.
Compression level: Determines the quality and speed of compression; the lowest value, 1, produces a fast compression but bigger files. 9 is the maximum and leads to longer computing times but smaller (better compressed) files.
$HOME: The environment variable HOME contains the location of your home directory. The $ symbol in front allows access to the variable's content (e.g. within a shell).
Patches: Using so-called patch files, you can upgrade from one version of a program to the next. These are text files containing an exact description of the places at which the individual files of the source code must be altered. The prerequisite for patching a program is that the complete and unaltered source code of the respective previous version exists. The advantage of patch files: they are relatively small, and so save you the sometimes very large and thus expensive download of a new program version in which maybe only one file has changed. ■
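A few of these terms in action (a sketch; the file and patch names are made up for illustration):

  gzip -1 big.tar                          # fast compression, bigger file: big.tar.gz
  gunzip big.tar.gz                        # ...and back again
  bzip2 -9 big.tar                         # slow compression, smaller file: big.tar.bz2
  echo $HOME                               # prints e.g. /home/stefanie
  cd ~/src/program-1.0
  patch -p1 < ../program-1.0-to-1.1.diff   # apply a patch file to the old sources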

Simply select Archive/Unpack to, and you can immediately define, in a window as in Figure 4, where all the files, or only the files you are looking for, are to be unpacked. As soon as you have decided, click on the bold Unpack button, and off it goes. karchiver would not be a proper KDE program if it didn't offer even simpler methods. As is so often the case, these are revealed by the drag and drop ability of KDE applications. To create a new archive, select File/New or click on the page icon in the menu bar. Then simply drag the files or directories of your choice out of a Konqueror window into this empty archive. If you wish to add data to an existing archive, drag it in exactly the same way into the open archive.

Options

karchiver also proves to be flexible with respect to compression levels and lets you define, with the aid of the menu item Configuration/Settings, how thorough the programs it employs, gzip and bzip2, should be in each case (Figure 5). Under Tar you can specify the behaviour of the program of the same name in more detail (for example whether subdirectories are to be created or not), under Icons the icon size can be set, and Packer answers the crucial question: ”Have I really installed all the pack programs?” Under Directories you define in which directory ($HOME, the last directory used etc.) karchiver should unpack archives by default.

Cutting your cloth...

All this compressing may be very nice, but even so, disk space will run short at some point. Wouldn't it be fantastic if you could also trim bigger files so that they would fit onto completely normal diskettes? Then we could safely wipe them from the hard disk. The command line tool split does just that. So that you don't have to read up on its command syntax first, karchiver provides the Diskette menu item.



If you want to split a file into bite-sized, or rather diskette-sized, morsels, select Diskette/Split. Now simply specify, in the selection box that appears, the file to be split, and karchiver automatically parcels it out into morsels 1.4MB in size, plus any remainder. You can then calmly shovel each of these pieces, which are given the suffixes .01, .02 etc., onto a diskette. If you want to piece the data back together, choose Diskette/Combine instead.
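On the command line the same trick looks like this (a sketch; holiday.mpg is only an example, and note that split itself uses letter suffixes such as .aa and .ab rather than karchiver's .01 and .02):

  split -b 1400k holiday.mpg holiday.mpg.    # produces holiday.mpg.aa, holiday.mpg.ab, ...
  cat holiday.mpg.* > holiday.mpg            # joins the pieces together again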

Pure magic

The various wizards are a completely new feature of the latest karchiver release, with which karchiver takes you by the virtual hand and helps you to deal with your archives. For example, if the selected file contains the data necessary to patch a source code directory, simply let the appropriate wizard guide you step by step. Another task that can be dealt with by the wizard is that of completely installing a source text archive (meaning: unpacking everything and then applying the Linux installation rule of three - configure, make and make install).


And if you want to convert an archive into a different format, this is where to come. First, select the archive file you want to edit, and then click on the menu item Archive/Start wizard. A window appears, as in Figure 6, in which you can select which task karchiver (or rather its wizard) should tackle next with the corresponding file. As an example, let's convert a file into a different format. To do this, we click on the item Convert archive format. Now we need to activate the Next button to continue. In the window from Figure 7 we can now select which format our file should have from now on. How about a .zip archive for the ex-Windows users among us? After that, you have the option of giving the baby a new name. If you want to leave it with the old name, you need do nothing at all. To finish off, karchiver asks you if you want to delete the original archive (Figure 8). Decide for yourself. You should now find a file with the same basic name, but in the format and with the ending .zip, on your hard drive. ■
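Under the hood, such a conversion amounts to unpacking and repacking, which you could also do yourself - a sketch, assuming the wizard does something roughly equivalent (archive names are examples):

  mkdir tmp && cd tmp
  tar xzf ../archive.tar.gz    # unpack the old tar-ball
  zip -r ../archive.zip .      # repack its contents as a zip archive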

[left] Figure 6: Which wizard should it be? [right] Figure 7: Being well zipped is half the battle

K-tools

K-tools presents tools, month by month, which have proven to be especially useful when working under KDE, which solve a problem that is otherwise quietly ignored, or which are just some of the nicer things in life that - once discovered - you wouldn't want to do without.

Figure 8: Rather not delete it?



News from the GNOME garden

GNOMOGRAM BJÖRN GANSLANDT

GNOME and GTK have been attracting more and more followers in recent years. There are now programs for almost every task and new ones are being added daily. Our monthly Gnomogram column is the place to find all the latest rumours and information on the pearls among the GNOME tools.

Red Flag joins the GNOME Foundation

Red Flag Software, known for its distribution Red Flag Linux, which is also used by the Chinese government, has joined the GNOME Advisory Board and will be contributing to the translation of GNOME into simplified Chinese. There will, however, be no support from Red Flag for traditional Chinese, which is common in some Chinese cities and Taiwan.

Red Hat and Eazel co-operate

Figure 1: Tux, reduced to vectors

Since Nautilus is becoming part of GNOME 1.4, nobody is surprised that Red Hat is also supplying it. Eazel and Red Hat have, however, agreed to include additional official Red Hat packages in the Nautilus software catalogue and to integrate the Red Hat Network into Eazel Services.

This will mean it is possible to update one's Red Hat system easily via Nautilus.

Ximian and KDE

Anyone searching for 'KDE' does not usually expect to find Ximian, but thanks to Google's Adwords, a text-based advertisement scheme, that's exactly what did happen. Ximian had in fact registered this and similar keywords, such as Konqueror, for itself. After an open letter from some KDE supporters, Ximian withdrew the Adwords and published a reply of its own. A similar case, since resolved, occurred in 1999, when Martin Konold took control of gnome-support.de and referred all visitors to KDE. Fortunately, Ximian displayed somewhat more sensitivity when dealing with Hewlett-Packard, which, as announced in San Jose, will replace CDE at the end of 2001 with Ximian GNOME as its standard desktop.

New GNOME versions

After a bit of a delay, GNOME 1.4 was finished at last on 3 April. One of the reasons for the waiting period is the file manager Nautilus, which is the greatest innovation in GNOME 1.4. The rest of GNOME has also changed somewhat, but we won't give too much away here. The next big version leap is GNOME 2.0, about which Miguel de Icaza has already expressed some concerns. But these concerns are by no means damning - the timing of the GNOME 2.0 release, and what will be new about it, remains to be seen.

Sodipodi

Sodipodi can be used to create vector graphics in the SVG format (Scalable Vector Graphics). In vector graphics it is not the individual pixels and their colour information, but the forms themselves, which are stored. This has the advantage that images can be enlarged as much as you like without any loss of quality.



Of course, this process is not suitable for all types of image, but graphics with clear forms and simple colour gradients can be compressed very well in this way. SVG is XML-based, which is no great surprise, as the format was developed by the W3C. Since SVG files are, when all's said and done, nothing more than text files, they can easily be created dynamically, and it is possible to change them interactively via JavaScript. Unfortunately the standard is still at a very early stage and is not supported by most browsers. As well as rectangles, ellipses and free forms, Sodipodi also supports embedded graphics that do not exist in a vector format. All objects can be scaled and rotated as desired, although this rapidly makes imported graphics look pixelly. Free forms can be changed at so-called 'node points', or the line delimited by the node points can be automatically straightened, rounded off or otherwise edited. To simplify working with large documents, Sodipodi can combine several objects into a group, which can then be edited as a single object. Apart from the normal colour information, it is also possible to alter the transparency of an object using fill style, and in the same dialog the object can be provided with a border. With the aid of Gnomeprint, the graphics created can be printed very easily, and thanks to Bonobo it is possible to view the whole thing from other programs.
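Since an SVG file really is just text, you can create a minimal one entirely by hand (a sketch; the file name and shape are arbitrary):

  cat > circle.svg <<'EOF'
  <?xml version="1.0"?>
  <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
    <circle cx="50" cy="50" r="40" fill="blue"/>
  </svg>
  EOF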

Red Carpet

After a long wait and many promising screenshots, Ximian has finally released a beta of the new package management tool Red Carpet. A version number of 0.9 is in itself unusual for a first release, but since poor package management can lead to serious damage, Red Carpet has been very thoroughly tested in advance and, it is hoped, cleared of any serious faults. Software is combined in Red Carpet into so-called 'channels', to which the user can subscribe and which function similarly to an entry in Debian's sources.list. So, for example, Red Carpet offers a channel for the installed distribution, where it does not matter whether the system is based on RPM or dpkg. Of course it is also possible to install or remove packages from within Red Carpet. Unlike the old Ximian Updater, though, Red Carpet is able to resolve the conflicts this creates and, with the aid of GnuPG, to check package signatures. Like Evolution, Red Carpet can download the latest news from the Net, which is probably the prelude to Ximian's future source of money: services. It is to be hoped that these will not lead to conflict with Eazel, who also offer services and update software via Nautilus. Since Red Carpet, with the aid of GtkHTML, uses a lot of HTML, it would in any case have what it takes to become a sort of Internet portal. Whether this would make sense is another matter.
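For comparison, a Debian sources.list entry of the kind a Red Carpet channel resembles looks like this (a sketch; the mirror and distribution names are examples from the period):

  deb http://ftp.debian.org/debian potato main contrib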


URLs
www.redflag-linux.com
news.gnome.org/gnome-news/980366651
www.redhat.com/about/presscenter/2001/press_eazel.html
www.granroth.org/ximian.html
www.ximian.com/google.php3
www.ximian.com/newsitems/hp-partnership.php3
primates.ximian.com/~miguel/gnome-2.0
sodipodi.sourceforge.net
www.ximian.com/apps/redcarpet.php3
www.gnupg.org ■

Session management under GNOME

GNOME uses a session manager so that you can pick up your work where you left off and so that certain programs are called up directly on start-up. A session can be stored at any time under Configuration/Session/Save current session, or in the log off dialog; only GNOME programs are included in the session. Other programs can be entered in the control centre under Session/Start sequence. To edit the GNOME session, you should not touch ~/.gnome/session directly, but call up Configuration/Session/Session manager properties. The program then shows the session with all currently active programs. From here, you can change the sequence of the programs to be started and the start style. Most programs use the normal start style, which starts a program at each new session but allows the program to be closed within the session. The restart style, on the other hand, immediately starts a program again if it has been closed or has crashed during the session. Programs with the recycle style are not restarted at all, but merely given the opportunity to save their data. Last of all, there is a style for settings, and thus for programs that restore specified preferences at the beginning of a session. ■

The author Björn Ganslandt is a student and a passionate bandwidth squanderer. When he is not busy trying out new programs, he reads books or plays the saxophone.

Figure 2: Red Carpet in the Ximian Channel




Close enough to UNIX

POSIX COMPLIANT RICHARD SMEDLEY

Although GNU/Linux is the most popular free UNIX-like operating system (OS) on the block, it's not the only one. With so many interesting free OSs offering Linux-compatible programs, even the most penguin-fixated can choose alternative ways of doing things.

The ancient war between vi and emacs may still be raging, but the battle between BSD and System V has effectively been settled by POSIX. The POSIX guidelines give a standard for UNIX-like operating systems. Although few pay the certification fee, many OSs aim for POSIX compliance. This means that programs written for one UNIX-like system should compile on another with little trouble. Our new column gives Linux Magazine readers a view of some alternative OSs, most of which offer the Bash shell and other GNU tools, but first some history. UNIX grew quickly in the 1970s and early 1980s. This was largely due to its portability and the ease with which it could be enhanced. Another central factor was the open availability of the source code, which had been rewritten in C, the new high level language of choice. In the 1970s, AT&T was prevented from profiting from computer development by the US government, due to its telephone access monopoly. By the end of the decade several companies were making their own versions of UNIX, based on the AT&T code.

Success story

Dozens of different operating systems have been developed, but only UNIX has so many varieties. Four factors have facilitated this growth:
Portability: The first widely used operating system written in a high level programming language, making it easier to port to different hardware architectures
Modifiability: Written in C, modifications and enhancements are easily made
Open Source: Developed at AT&T Bell Labs, a non-profit research institution, enabling publication of source code
Open System: Designed as an open, modular system, with a host of features to assist with the development and integration of applications


Looking for a way to commercialise UNIX, AT&T established UNIX System Laboratories (USL) to develop a product version. This resulted in the 1983 release of System V Release 1 (SVR1), a new commercial baseline. The following year AT&T ended its monopoly control over telephone access and entered the computer business, marketing its own commercial UNIX and releasing SVR2.

BSD

Meanwhile, the open development of UNIX continued in academia. Bill Joy and Chuck Haley, of the University of California at Berkeley (UCB), started working with UNIX in 1975, leading two years later to the first Berkeley Software Distribution (1BSD). 2BSD followed the next year with a new full screen 'visual' text editor called vi. Work with DARPA and the American Department of Defence led eventually to 4.2BSD in 1984, with virtual memory and TCP/IP networking. The Berkeley Domain Name Server, included in the 4.3BSD release of 1986, expanded the number of sites able to implement Internet networking. Commercial uptake of BSD was strong; however, vendors needed to pay AT&T a licence for the System V code included in it. Licence costs increased, whilst many vendors only wanted the Berkeley code. In 1989, UCB published Networking Release 1, containing their TCP/IP networking system for the first time without any AT&T code and released under an open licence allowing free source code modification and distribution. The next release was a full rewrite of hundreds of AT&T utilities without any AT&T code. At the same time, groups such as X/Open and IEEE POSIX tried to prevent AT&T UNIX standard domination. In 1987, AT&T entered into an alliance with Sun Microsystems to develop a standard UNIX version.



Two years later they released SVR4, which integrated the System V and BSD UNIX baselines. Vendors of other commercial Unices reacted with alarm and united to form the Open Software Foundation (OSF). The UNIX wars effectively ended in 1993, when AT&T sold System V to Novell, who assigned the rights to UNIX to X/Open. In 1996 The Open Group was formed by the merger of OSF and X/Open. The Open Group now works with the IEEE on the POSIX family of standards.

POSIX

”The nice thing about standards is that there are so many of them to choose from.” - Professor Andrew S. Tanenbaum (among other things, the author of MINIX)

POSIX.1 (IEEE1003.1), published in 1988, set out a standard Application Programming Interface (API) enabling source compatibility amongst several UNIX and UNIX-like systems. Torvalds aimed for POSIX compliance from the earliest development of the Linux kernel. This enabled GNU tools and many applications from BSD and other Unices to be used. This same compliance means that today we can take many applications written for Linux and compile them for AtheOS, BeOS or OSX (Darwin). POSIX.2 (IEEE1003.2), which covers the shell and utilities, is an enhancement rather than a replacement of the original. Even though Linux is not certified as POSIX compliant, the aim of compliance, where appropriate, ensures that POSIX remains a meaningful standard.
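A minimal illustration of what that portability means in practice: a script which restricts itself to the standard shell and utilities should run unchanged on Linux, the BSDs, Solaris and other POSIX-like systems (a sketch, of course, not a compliance test):

  #!/bin/sh
  # count the lines in every .txt file in the current directory
  for f in *.txt; do
      wc -l "$f"
  done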

Info

The Portable Application Standards Committee of the IEEE develops the POSIX family of standards and can be found at http://www.pasc.org. ■

No alternative?

GNU/Linux continues to improve in scalability and performance and is a wonderful general purpose OS, which is also adapting well to embedded systems. However, one tool won't always be the best for every job. By choosing different design goals, other OSs are often better adapted in particular areas of performance. It will be the purpose of this column, over the coming months, to explore the potential of some of these alternative OSs.

A new desktop UNIX

Apple's decision to abandon Copland for OSX, and move to a FreeBSD core, named Darwin, running on the Mach microkernel, made many in the Linux world take notice. Apple's leadership in perceived user-friendliness of the GUI, combined with the robustness of UNIX, sounds like a winning formula to many. As the Apple developers and the open source community contribute bug fixes and improvements, OSX will be watched with interest. It could even mean Microsoft Office and Internet Explorer running on a desktop UNIX - an interesting thought, to say the least.

And then there were three

The common view is that FreeBSD is robust, NetBSD runs on every platform and OpenBSD is secure. FreeBSD vs Linux is certainly the new holy war for the UNIX community. In the commercial world, however, uptake of FreeBSD has been helped in part by the licence, which allows closing off of the source code into proprietary software - something which the GPL does not permit. For many, this difference is far more important than FreeBSD's different development model or, indeed, technical considerations. Whilst the BSDs (including Apple OSX) are the most obvious alternative to Linux, many smaller projects have considerable merit. A trawl of the Web reveals dozens of OS projects that are little more than an alpha kernel and a bootloader written in assembler, but there are many serious projects out there, some with impressive pedigrees (see the What's on the bootloader today box). Next month we start by examining real-time OSs.

”Those who don't understand Linux are doomed to reinvent it, poorly.” - Anonymous


What's on the bootloader today

Operating System: Comments
AtheOS: UNIX-like with consistent GUI, written from the ground up
BeOS: Awaiting release of hardware OpenGL and new network stack. Excellent multithreading and large file handling for demanding media apps. Proprietary, but binaries freely downloadable
Commercial Unices (eg AIX, HP-UX, Tru64 et al): Some good enterprise-level OSs, but mostly expensive and closed source
Darwin: Very interesting project, particularly the i386 port
eCos: Now managed by Red Hat. Supports many embedded platforms
FreeBSD: Powers Yahoo, Google and many other seriously busy sites without breaking into a sweat
GNU Hurd: Closer to usability than it was 10 years ago, but don't hold your breath
MINIX: No longer under active development
NetBSD: Runs on anything. Try it on your toaster.
Oberon: Small, modular OS, written in Oberon. Open source for non-commercial use.
OpenBSD: The only OS to have every line of code security audited. Secure out of the box.
Plan 9: Opened the source too late and failed to develop enough interest.
QNX: Very mature commercial real-time OS, available to download and as a single floppy edition
RTEMS: Real-time executive developed for the U.S. Army
Solaris: Solaris 8 binaries are available for download under a restrictive licence
V2_OS: Written in i386 assembler to be fast and light. Active development since open sourcing.






Feedback

Over the next few months we shall be featuring articles on the BSDs, OSX, QNX, AtheOS and microkernels. However, we welcome suggestions and input for coverage in the Freeworld column.





The monthly GNU Column

BRAVE GNU WORLD GEORG C. F. GREVE

Welcome to another issue of Georg’s Brave GNU World, where we reveal news about several projects which you may not have heard of yet.

GNU Pipo BBS

Those who believe Bulletin Board Systems (BBS), also often referred to as mailboxes, are dead, are mistaken. The GNU Project contains the GNU Pipo BBS, a BBS under the GNU General Public License. The ancestry of the GNU Pipo BBS reaches back, via YAWK (Yet Another Wersion of Citadel), to Citadel, although it is completely independent code-wise. In fact it was a disagreement with Kenneth Haglund, author of YAWK, over copyright problems that triggered the development of the GNU Pipo BBS. The original development team were Gregory Vandenbrouck and Sebastien Aperghis-Tramoni, who worked on the GNU Pipo BBS with help from volunteers like Sebastien Bonnefoy. After Gregory resigned, Sebastien Aperghis-Tramoni became the official maintainer of the project. The GNU Pipo BBS contains support for forums, direct messaging, mail, chat, Web access and bots. For the amusement of the users, the bots come in different personalities, like a parrot, a dog or a pseudo-user.

It's interesting to note that these juiced-up BBS systems might offer users a viable alternative to Web portals as a home base on the Net. The GNU Pipo BBS is ready for production use and is being used by the Atlantis BBS in Marseilles, France. But since Pipo contains a significant amount of old code, Sebastien plans a code freeze in order to revise it. The use of libraries especially is to be increased, since in some places the wheel has been reinvented - which is not good for the maintainability of the code. The only really weak point is the documentation. The system does have system messages in different languages, but the code still requires better comments. The homepage and the manual also need authors and translators.

Larswm

larswm is a window manager by Lars Bernhardsson that is interesting for several reasons. First of all, purists should expect to fall in love with it, because it is very simple and minimalistic in the way it looks and uses resources.



It is based solely on ANSI C with standard Xlib functions and completely avoids widget libraries like GTK+ or Qt. But more importantly, it offers an alternative to the familiar, Windows-like desktops. Even though these are widespread, the user interface is definitely an area where innovative concepts are refreshing. The Free alternatives like KDE or GNOME essentially limit themselves to imitating the Windows desktop, although KDE is much closer to the original than GNOME. This is not an argument against KDE or GNOME, because they make the shift to GNU/Linux much easier and open avenues that were previously closed. But GNU/Linux especially is a platform that is well suited to innovative user interfaces, and larswm gives new impetus, following its motto: ”Because managing windows is the window manager's job!” The desktop is split into two parts. The left part is bigger and normally contains a single window possessing the focus, which means that key presses and other input are directed into this window. The right side contains the rest of the windows as equally sized tiles, which is the reason larswm is called a 'tiled window manager'. The keyboard support is also very good - if you use only keyboard-driven applications, your fingers never have to leave the keyboard. larswm definitely takes time to get used to, but it does have a well-deserved group of fans, and everyone interested in alternative concepts should definitely give it a try. There is one problem with larswm, however. Since it is derived from 9wm, it was forced to use its rather ugly licence. This licence does speak of Free Software, but there are clauses that most likely make it incompatible with the (L)GPL. It is also legally weaker, as the right to modification is only granted implicitly - as is the protection of freedom. The project was officially finished in January 2001 by the author. larswm has been an experiment to try a new user interface concept. In the long run, he hopes to be able to replace all 9wm code with his own, so that larswm will become a truly independent window manager. This could also help in solving the licence problem. Additionally, Lars hopes to inspire other authors of window managers and to motivate them to implement similar concepts in their programs.

GNUstep

GNUstep is an object-oriented framework and toolkit for program development that is already successfully being used on many platforms. The function of a toolkit is to supply prefabricated components for the graphical user interface, so programs can be written faster and more effectively; also, programs based on a certain toolkit have a similar look and feel. Two classic examples of toolkits are GTK+ and Qt.


Some of the We Run GNU logos available.

GNUstep is based on the original OpenStep specification by NeXT, Inc. (now Apple), so it profits from years of professional experience, especially by NeXT Computer Inc. and Sun Microsystems Inc.; the API is very high-level and well-defined. By now there are several success stories where developers were able to write complex applications with GNUstep in minimal time. It is also very helpful that GNUstep provides high level APIs around some of the best Free Software packages, like gmp, OpenSSH and tiff. Additionally, it gives the term WYSIWYG new meaning, as GNUstep uses a common imaging model called Display PostScript, which is related to the PostScript printer language, for all graphical output. Although the GUI is still in the beta stage, it is ready for production use and people are successfully using it. Developers not afraid of something that is a little different from the rest should feel encouraged to give GNUstep a try. Currently, development is mostly undertaken by three to four people, with a group of 30 to 40 developers committing bugfixes, patches and comments. The libraries are published under the GNU Lesser General Public Licence; tools and isolated programs use the GPL. At the moment, development is focused on completion of the GUI and a port to MS Windows. Since GNUstep is API-compatible with MacOS X (Cocoa), it is already possible to develop programs for Unix and MacOS X in parallel. With a port to Windows, programs could be developed for all three platforms simultaneously. Also interesting is the GNUstep Web part, which uses a system similar to Apple's WebObjects and makes it easy to create dynamic Web pages with connections to databases. Even though this part is still rather new, it is already almost completely usable.



W3Make

The XML Web publishing system W3Make, by Stefan Kamphausen, is one of those small but rather useful projects. It should prove useful for maintainers of small to medium-sized Web sites. Many XML-based approaches, like saxon for instance, allow only a single input file, so automatic linking is lost. Thanks to W3Make, several XML source files can be piped through an XSL stylesheet with the help of saxon and written into several HTML output files. The central core is a GPL-licensed Perl script that parses W3Makefiles. As the name suggests, these are rather similar to the standard Makefile syntax, which allows you to use the Makefile mode of your favourite editor to edit them. The author himself is using it successfully for his employer's websites and his personal homepage. It is definitely ready for production use. What he would like to include in future releases is a link checker that will canonically detect relative, absolute and local links and transcribe one into the other. He also plans to start using the Perl XML::* modules instead of the saxon XSL parser. While making that shift, he is considering creating a plugin interface so it becomes possible to use DSSSL instead of XSLT.
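For a single file, the underlying transformation step W3Make orchestrates looks roughly like this (a sketch; the entry point shown is that of the Java version of saxon current at the time, and the exact invocation may differ with the version installed):

  java com.icl.saxon.StyleSheet page.xml style.xsl > page.html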

OpenWebSchool

Wilfried Romer and Hans-Peter Prenzel started the OpenWebSchool project in Berlin, Germany. The goal is to establish cooperation between elementary and high schools and make school resources available online. Based on the principles of Free Software, using the GNU General Public License and the GNU Free Documentation License, students of the higher grades create learning units for students of the lower grades and elementary schools. This allows the students of higher grades to gain experience in program development and Web programming. Thinking about pedagogical aspects when creating the units also helps students to reflect on their own way of learning. Additionally, the project introduces students to computers and the Internet via topics that normally have no direct connection with these areas. Students of the lower grades and elementary schools gain an interesting addition to their normal classes, which also helps to familiarise them with the medium. The website, the central point of the OpenWebSchool, already contains some lessons on different topics but, due to the nature of the project and its youth, it is of course not complete. There is a need for more developers, and the usability could also be improved. The OpenWebSchool is definitely a very promising project that will most probably see re-implementation in other countries.

An international cooperation, where students of one country create units in their native language to be used by students of other countries, seems to be the next logical step.

Free Software Foundation Europe update

As covered in issue five, a group of protagonists of Free Software is currently creating the European sister organisation of the FSF. By now the original team, consisting of Peter Gerwinski, Bernhard Reiter, Werner Koch and myself, has been joined by Frederic Couchet, Alessandro Rubini, Jonas Oberg and Loic Dachary; the next step to enlarge the team is already planned. The central point of our work in the past weeks has been finding the right organisational structure and realising it in the constitution. Since we consider transparency to be very important, we'd like to introduce some results at this point. At the centre of the FSF Europe is a central organisation, the so-called Hub, which provides the European coordination, the office and all tasks that can be centralised. Connected to the Hub are national organisations that work on the local tasks and provide local points of contact for politics and press. In order to be independent of populism, the membership policy of the FSF Europe follows that of the FSF: new members are appointed only by a majority of the current members. To allow better and closer work with volunteers than this model alone would permit, the local organisations, the so-called Chapters, are in close contact with societies which are open to everyone. Those organisations, called FSFE Associate Organisations, do a lot of the basic work and are in very close contact with the Free Software Foundation Europe. As it is possible to have Associate Organisations with different orientations, there can be several in one country. Very often, these Associate Organisations are also tied personally to the FSF Europe Chapters. A good example of this is France, where Frederic Couchet, as the President of APRIL, is also FSFE Chancellor, the highest representative of the FSFE in France. APRIL itself has been established in France for several years now and has been doing valuable work there. It has now joined the network as an Associate Organisation of the FSF Europe. In this way, existing local structures are protected and networked with each other through the FSF Europe. Additionally, this allows everyone to work closely with the FSF Europe. The personal structure is designed in such a way that all members of the FSF Europe are members of the Hub and meet once a year.



At these meetings, the guidelines binding all parts of the FSFE are discussed and decided. Every second year the Europe-wide positions of president, vice-president and the 'head of office', who is responsible for all office-related matters, are elected. The election of the local representatives, the chancellor and vice-chancellor, is carried out by the local Chapters at their yearly meetings. The responsibilities of the president and his deputy, the vice-president, are the political and public work on the European scale, the coordination of Europe-wide cooperation and, on demand, the support of the chancellors in their tasks. This structure has been written down in a constitution with the help of a lawyer and, at the time of writing, it is with the tax authorities in Hamburg, Germany, being checked for the granting of charitable status. After the last necessary steps have been performed to complete the legal founding, the main target will be the creation of the local organisations. The Germany, France, Italy and Sweden Chapters are already being prepared; Austria and the U.K. should probably not take too long either. Parallel to this, it will also be my task to introduce the Free Software Foundation Europe into discussions and speeches and to establish contact with local organisations and politicians. If you would like to meet me at one of these occasions, you can find my planned and confirmed dates on my homepage.


Info
Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
Homepage of the GNU Project: http://www.gnu.org/
Homepage of Georg's Brave GNU World: http://brave-gnu-world.org
”We run GNU” initiative: http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
GNU Pipo BBS homepage: http://www.gnu.org/software/pipo/Pipo-BBS.html
Larswm homepage: http://www.fnurt.net/larswm/
GNUstep homepage: http://www.gnustep.org/
W3Make homepage: http://www.skamphausen.de/software/w3make/
OpenWebSchool homepage (in German): http://www.openwebschool.de/
Free Software Foundation Europe homepage: http://fsfeurope.org/
Conference page - Georg C. F. Greve: http://www.gnu.org/people/greve/conferences.html ■

Enough for this month

That's it for this month. As usual, I'm asking for plenty of mail to the well-known address, and hope to receive interesting suggestions, ideas or project descriptions. ■


