Linux Magazine 15


COMMENT

General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
Web: www.linux-magazine.co.uk
Subscriptions: subs@linux-magazine.co.uk
Email Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk
CD: cd@linux-magazine.co.uk

Editor

John Southern jsouthern@linux-magazine.co.uk

Assistant Editor

Colin Murphy cmurphy@linux-magazine.co.uk

Contributors

Alison Davis, Richard Smedley, Richard Ibbotson, Jono Bacon, Jason Walsh, Jack Owen, Bruce Richardson, Steven Goodwin, Janet Roebuck, Kevin D. Morgan

International Editors

Harald Milz hmilz@linux-magazin.de
Hans-Georg Esser hgesser@linux-user.de
Ulrich Wolf uwolf@linux-magazin.de

International Contributors

Simon Budig, Björn Ganslandt, Georg Greve, Jo Moskalewski, Christian Perle, Stefanie Teufel, Oliver Kluge, Mirko Dolle, Andreas Jung, Patricia Jung, Anja Wagner, Carsten Zerbest

Design

Advanced Design

Production

Stefanie Huber

Operations Manager

Pam Shore

Advertising

01625 855169
Carl Jackson, Sales Manager cjackson@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt Osmund@Ohm-Schmidt.de

Publishing

Publishing Director

Robin Wilkinson rwilkinson@linux-magazine.co.uk
Subscriptions and back issues: 01625 850565
Annual Subscription Rate (12 issues): UK £44.91, Europe (inc Eire) £73.88, Rest of the World £85.52. Back issues (UK): £6.25

Distributors

COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE

Print

R. Oldenbourg

Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England.

Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example letters, emails, faxes, photographs, articles, drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.

Disclaimer Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.

Technical Support Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.

Current Issues

NOT A CD

When is a CD not a CD? The answer is when it contains the new stealth anti-piracy coding. The system, called Cactus Data Shield, has been trialled by Sony in both Europe and the US. It adds noise to the data stored on the CD; copying the disc results in a CD full of noise, which could potentially harm your speakers. The protection relies on the error correction algorithms within audio CD players, which treat the extra noise as though it were just a disc covered in fingerprints, whereas a computer CD drive, PS2, MiniDisc deck or even one of the new CD/MP3 portable players treats this extra code as read failure errors. This means you could walk into a shop, buy a CD, take it home and everything plays fine. Later, in your new portable player or on your computer, it fails. Put it back in your stereo and all is well. What would you do first: assume it has draconian copy protection, or that you have a faulty drive because you have worn it out? I wouldn't mind so much if the discs were clearly labelled as such. One does have a warning, but this is on the inside cover, and my local music shop is unlikely to let me open every CD just to check. With clear notices I would at least get to choose not to buy. A similar system from Macrovision, called SafeAudio, is also being trialled. Both of these copy protection systems can be simply circumvented by recording the analogue output of a CD player through a soundcard. Piracy is obviously wrong, but should fighting it harm my consumer rights? This is limiting what we can do with a legally bought product. What will they think of next – software that's not yours but only licensed? Happy coding!

John Southern, Editor

We pride ourselves on the origins of our magazine, which stretch back to the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of the knowledge and technical expertise of the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We're not simply reporting on the Linux and open source movement – we're part of it.

Issue 15 • 2001

LINUX MAGAZINE

3


NEWS

LINUX NEWS

IBM donates Eclipse to open source

IBM has created a new open source community, codenamed 'Eclipse'. The Java-based open source software, derived from WebSphere Studio Workbench, will enable developers to use software tools from many suppliers and to combine the business tasks used to create e-business applications, such as those for Web services. With more than 150 software tool suppliers working together, it will be available free of charge to developers. More than 1,200 individual developers from 63 countries have already participated. The community will be managed by a multi-vendor organisation and will include IBM, Merant, QSSL, Rational, Red Hat, TogetherSoft and others.

By using software tools that easily plug into the Eclipse software, developers can create higher-quality applications in less time and inherit technology developed by other vendors. Developers no longer need to create e-business applications in Windows and port them to Linux, since they can work directly on Linux. It will also enable the abundance of Windows software tools to be more easily supported on Linux, accelerating the establishment of Linux as an application development environment. Since Eclipse is open source, Linux developers can participate in the community evolving the software itself, as they do with Linux. http://www.eclipse.org

Mandrake to work magic with the Sims

Mandrake is to launch a gaming edition of its distribution. Expected to cost around £70, the Linux distribution will contain the 3D simulation game The Sims, from Electronic Arts. To achieve this, Mandrake has used TransGaming Technologies' portability technology layer, which lets cutting-edge games operate seamlessly on the Linux platform. The Sims is a popular virtual simulation game that lets players create a neighbourhood of simulated people known as "Sims" and control every element of their lives. Linux players will be able to fully interact with the Windows-based Sims world. They will be able to download Sims' household furnishings, clothes and accessories from the same sites as their Windows counterparts and then post their story lines on hundreds of existing fan sites. TransGaming's portability technology and subscription services also enable Linux users to play many other popular Windows games. The portability technology, developed in collaboration with the Wine community, allows games and business software applications originally designed for other platforms to function on Linux in a completely seamless and transparent manner.

The Sims on Mandrake Linux Gaming Edition

SuSE Linux Enterprise Server 7

SuSE Linux Enterprise Server 7 for IBM S/390 mainframes has been released. Based on the 2.4 kernel, it now supports S/390 servers as well as the IBM eServer zSeries z900, making it an ideal platform for running mission-critical applications such as the administration of complex e-business transactions. "By means of the consolidation of large server farms, customers enjoy a lower level of administration expenses due to uniform server structures," explains Dirk Ott, Head of IBM Linux Marketing, EMEA Central Region. "This not only reduces system administration costs during the integration of new Linux servers, it also offers great savings potential in space and energy costs." As the host operating system z/OS also supports HiperSockets, data exchange between SuSE Linux and z/OS is possible with maximum bandwidth and near-zero latency. The Logical Volume Manager allows the runtime integration of dynamically attached storage devices into existing virtual volumes of almost arbitrary size, delivering practically unlimited disk space without downtime. The rail freight company Transtar successfully runs a real-time tracking system on SuSE Linux Enterprise Server for S/390, providing online freight information to employees and customers. http://www.suse.co.uk/s390
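The kind of online growth the Logical Volume Manager makes possible can be sketched with a few commands. This is a generic LVM illustration, not taken from SuSE's S/390 documentation: the device and volume names (dasdc1, datavg, data) are invented for the example, and the sketch prints the commands by default rather than running them, since the real thing requires root and actual devices.

```shell
#!/bin/sh
# Generic sketch of growing a logical volume after attaching new storage.
# All device/volume names are hypothetical. By default RUN=echo, so the
# commands are only printed; set RUN= (empty) to execute them for real.
grow_volume() {
    run=${RUN:-echo}
    $run pvcreate /dev/dasdc1                 # prepare the newly attached disk
    $run vgextend datavg /dev/dasdc1          # add it to the volume group
    $run lvextend -L +2G /dev/datavg/data     # grow the logical volume
    $run resize2fs /dev/datavg/data           # grow the filesystem to match
}

grow_volume
```

In the dry-run form shown, the function simply lists the four steps in order, which is the whole workflow the article alludes to: prepare, extend the group, extend the volume, resize the filesystem.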



Texas Instruments licenses HyperTransport

AMD has announced that it has licensed its high-speed HyperTransport interconnect technology to Texas Instruments Incorporated (TI). HyperTransport is an innovative solution that moves information faster, enabling the chips inside PCs, networking and communications devices to communicate with each other up to 48 times faster than some existing bus technologies. Texas Instruments will use HyperTransport technology in devices in computers, telecommunications and other equipment. HyperTransport technology is a high-speed, low-latency bus that can interface with today's buses such as AGP, PCI, 1394, USB 2.0 and 1Gbit Ethernet, as well as next-generation buses including AGP 8x, InfiniBand, PCI-X, PCI 3.0 and 10Gbit Ethernet. "HyperTransport technology is being driven by the HyperTransport Consortium as a free and open industry standard aimed at removing system design bottlenecks and dramatically increasing the performance of next-generation computation and telecommunications systems. The licensing of this technology by an industry leader like Texas Instruments increases the momentum of HyperTransport as the future connectivity standard. It is gratifying to see a leader like Texas Instruments join the growing list of companies, such as Cisco Systems, nVidia, PMC-Sierra and SGI, who have incorporated HyperTransport technology into their future product plans," said Gabriele Sartori, president of the HyperTransport Technology Consortium. More information concerning the HyperTransport Consortium can be found at www.hypertransport.org.

da Vinci takes off

South Coast software development company da Vinci is combining its official launch with the announcement of significant new business at Southampton company iInventory. da Vinci will assist iInventory with the next phase of development of its LANauditor software, due for release in the first quarter of 2002, as well as a unique rocket direct mail piece. da Vinci, which was formed earlier this year to serve the growing demand for UK-based teams of software developers, has had a successful year, resulting in a 25 per cent increase in development staff. LANauditor automates hardware and software asset tracking for networked and standalone computers. "The iInventory project is very exciting for us, as it allows us to use our skills across several areas of expertise," comments Andy Eakins, Managing Director of da Vinci. "After locating several suitable partners in the US we were recommended to da Vinci by Apple itself, and we are very pleased to be able to source the development expertise required so close to home. iInventory approached us hoping for a Macintosh solution but, after seeing the experience that da Vinci could offer in other areas, extended their requirements to include Linux and Web development."

LynuxWorks unveils BlueCat 4.0

LynuxWorks Inc. has unveiled the latest version of its popular BlueCat Linux distribution. Using the new 2.4 Linux kernel and tool chain, the new distribution empowers developers to select the development platform best suited to their needs by providing support for multiple microprocessors. Offered in professional bundled solutions, BlueCat is packaged with a range of host/target combinations, commercial-grade tools, integrated development environments (IDEs) and support programmes. "BlueCat 4.0 professional tools bundles for Intel's microprocessor platforms provide developers a comprehensive embedded development environment," said Inder Singh, CEO of LynuxWorks. BlueCat tools benefit developers by providing unique gdb extensions for kernel debugging and a simple, time-saving graphical interface to trace, debug and tune kernels and application functions for increased performance. Simple utilities for controlling boot, flash programming, disk, connectivity and display functions enable the loading and use of tested application components such as boot routines, Web servers, application shells, demo programs and more. BlueCat 4.0 supports the Intel XScale microarchitecture, the Intel IXP1200 Network Processor and Embedded Intel Architecture. Other platforms will be supported later in the year. http://www.lynuxworks.com/products/whatisbcl.html


Oil rigs running on Linux

Varco International Inc. has announced it will standardise on MontaVista's Hard Hat Linux embedded operating environment for its full line of oil rig floor equipment controllers. Varco made the selection after the company had exhausted its search for an off-the-shelf next-generation industrial controller that would meet its complex needs and had tested several major embedded operating systems for its own custom controller product. Varco International produces equipment and automation systems for the oil and gas drilling industry. Varco's next-generation oil rig controller, its e-Drill product, promises equipment that has a far higher level of automation and functionality (through sophisticated robotic algorithms); is simpler to set up and maintain (through thin-client, Web browser-based "screens"); and leverages the Internet to allow remote diagnostics, monitoring and control of these systems. The first systems using MontaVista's Hard Hat Linux are operational today in the North Sea and the Gulf of Mexico. Many additional systems have already been shipped and will be commissioned later this year. Reliability is critical in these applications, with an operator's liability often exceeding a quarter of a million dollars a day for any downtime of this equipment. http://www.varco.com/edrill.htm http://www.mvista.com




Windows software on Linux

Lindows is a version of Linux specifically designed to run Windows applications. It uses Wine, the open source project that Corel used to port many of its applications during Corel's Linux venture. PCs running the Lindows OS require no additional software to run both Windows and Linux software. For more information see http://www.lindows.com/.

Red Hat expands its consulting groups

A core team of open source consulting and implementation experts joins Red Hat from VA Linux. The consulting professionals joining Red Hat will operate from locations around the US. "By supplementing our high-end networking and security experts with these experienced and talented open source engineers and consultants, Red Hat is well prepared to meet the needs of our current and future Global 2000 clients as they migrate from UNIX to open source computing," said Kevin Thompson, Executive Vice President and Chief Financial Officer of Red Hat. "In many cases we were already working with Red Hat Linux," said Marty Larsen, former Vice President with VA Linux and one of the experts now joining Red Hat, "so we know the tremendous benefits this powerful new way of computing can bring to enterprise-class companies in all types of business." The expansion of the consulting practice enhances Red Hat's ability to offer a full range and depth of services and support, including security analysis, network infrastructure analysis, cost-benefit analysis, and code and application analysis. Red Hat also gains additional expertise in infrastructure and large-scale implementation, as well as open source graphics engineering and consulting.

SNARE catches the market

SNARE (System iNtrusion Analysis and Reporting Environment) from InterSect Alliance Pty Ltd is a kernel module-based auditing system whose core goal is reducing the "cost of entry" for host-based intrusion detection and system auditing on Linux. This makes system event logs less of a chore and more of a resource. One of the key components that has been missing from the Linux operating system is a comprehensive auditing and event-logging facility. The lack of such security functionality, and the fact that it exists in commercial operating system rivals such as Windows NT and Solaris, has been reported as a significant reason why organisations and government departments have been reticent about taking up Linux, despite the significant cost savings that would otherwise have resulted. Hopefully, SNARE will go a little way towards removing such reluctance. For more information see http://www.intersectalliance.com/.


EMC Fastrax gets Time Navigator

Atempo, Inc. (formerly Quadratec Software) announced that its Time Navigator software platform for high-performance backup and restore now supports the EMC Fastrax data movement platform in an Oracle environment. Time Navigator for EMC Fastrax is one of the most advanced solutions on the market for backing up active online data to secondary storage. EMC Fastrax accomplishes this without consuming LAN or SAN (Storage Area Network) bandwidth or application processor resources to move the data. Time Navigator reduces backup administration and provides an advanced approach to restoring Oracle databases. The combination of Time Navigator and EMC Fastrax reduces the database backup window to its minimum, resulting in an optimal solution for large and critical database backup. This integration demonstrates EMC's commitment to leveraging open application programming interfaces and product strengths to deliver infrastructure solutions that help customers meet current business challenges, such as backup and recovery of critical business information. The product runs on IBM AIX, Sun Solaris and HP-UX. Benefits include server-less backup, maximising network performance with minimal impact on users and application performance. Time Navigator can seamlessly share the same library for backup and restore operations, for maximum hardware and software investment protection.

Trend Micro now protects Linux-based servers

Trend Micro has announced ServerProtect for Linux 1.0, which protects Linux-based file servers from computer viruses and other malicious code. It offers all the standard ServerProtect benefits, such as remote management via Web browser, real-time virus detection, time-controlled virus searches and flexible messaging. Trend Micro's MacroTrap is also an integral part of the antivirus solution, detecting and removing unknown macro viruses. The importance of the Linux operating system is steadily increasing in the corporate marketplace, with the open source environment considered to be particularly stable and secure. The Linux system is based on the philosophy that each user contributes to the improvement of the system according to his or her area of expertise. The disclosure of the source code means that errors and omissions can be identified and resolved by all programmers who are involved with the system. With ServerProtect for Linux 1.0, Trend Micro is now offering comprehensive protection from computer viruses to organisations that base their networks on Linux servers. As the use and popularity of the Linux platform expands, so too will the need for Linux-based virus protection. A test version of Trend Micro's ServerProtect for Linux 1.0 can be downloaded from http://www.trendmicro.co.uk for a limited period.



Embedded Linux apps in the palm of your hand

The Sharp SL-5000D is a PDA with a difference: it is the first PDA from a major power in consumer electronics to ship with the Linux operating system. "Sharp believes that consumers are just waiting for the kind of power and flexibility this operating environment brings to palmtop computing," said Steve Petix, Sharp's Associate Vice President, Mobile & IT Solutions Group. Sharp is accepting pre-orders from the developer community for the SL-5000D developer unit. The SL-5000D uses the Embedix Plus PDA solution, which contains Lineo's Embedix Linux; Trolltech's Qt/Embedded and Qt AWT GUI technologies; Insignia Solutions' Jeode PDA Edition; and Opera Software's embedded Web browser. Running on top of Embedix Plus is Trolltech's Qt Palmtop, an application environment based on Qt that provides a full range of applications for business productivity, handheld games, personal information management and synchronisation across multiple desktops. "The SL-5000D shows that when you combine slick hardware with a cool application environment, you end up with a product that pushes the limits of what a PDA can do," said Haavard Nord, Trolltech's CEO.

Zeus Web Server cuts hardware requirements in half

Zeus Technology Limited announced the launch of Zeus Web Server Version 4.0, which enables twice as many people to access a Web site simultaneously, and over a continued period, compared with competitive products. This is because Version 4.0 can handle more simultaneous connections from the Internet and responds to each individual connection much faster than competitors, enabling the number of hardware servers to be halved. Version 4.0 is also the world's fastest Web server when delivering dynamic content such as PHP – 45 per cent faster than the competition. Web servers are traditionally optimised to deliver large volumes of traffic out from the server to the Internet. The advent of Web services will require Web servers to handle huge volumes of inbound traffic. Zeus Web Server Version 4.0 has been optimised for traffic travelling in both directions, making it the only Web server designed for the next generation of the Internet. For a free 30-day evaluation see http://www.zeus.com/downloads/.

Rackspace managed hosting

Rackspace is launching its managed hosting services in the UK and across Europe. Operating from a new, state-of-the-art data centre outside London, Rackspace Managed Hosting deploys and manages entire Internet hosting platforms, ranging from single managed servers to complex managed server clusters that need advanced firewall security, load balancing and data storage. Rackspace is the second largest managed hosting company, with over 4,000 managed Web servers. "Business is booming," says Dominic Monkhouse, newly appointed Managing Director of Rackspace Europe. "Rackspace's sole business is doing managed, dedicated hosting. They have doubled their business over the past year while most hosting companies were retrenching or going out of business. I came from the hosting industry and had to compete with Rackspace's legendary Fanatical Customer Support". Monkhouse was previously the Managing Director of Interliant UK and joined Rackspace in September 2001. Rackspace boasts a fully redundant, Class A data centre in the UK and in Texas, USA, and offers a 99.999 per cent uptime guarantee. Another key to Fanatical Support is its 24-hour Rapid Deployment guarantee. Further information on Rackspace Managed Hosting is available from the company's Web site, which can be found at http://www.rackspace.co.uk

Keyhaven Systems team with LinuxIT for support

Keyhaven Systems Ltd, a UK specialist in networking, Internet and email solutions, has reached an agreement with LinuxIT for support of the Concera range of systems. Keyhaven manufactures the Concera, a rack-mountable unit intended to solve customers' IT problems by providing a fully featured server with secure Internet services and an easy-to-use interface based upon the Linux operating system. The Concera is designed either to integrate into an existing network or to operate on its own, providing for all the network's needs. Each Concera is pre-configured to meet individual customers' requirements before delivery. LinuxIT (http://www.linuxit.com) provides unlimited email, telephone and remote diagnostic support during either business hours or 24/7 for mission-critical systems. The LinuxIT-Concera support lines are manned by experts in the underlying Linux operating system and the range of applications preinstalled.

Hansa Business Solutions launches new software

Hansa Business Solutions has launched its next-generation suite of fully integrated CRM, financial and logistics software. Hansa Business Solutions is one of the few companies that can deliver true multiplatform capabilities to its customer base. In addition to the availability of its software on Linux, the company is also able to support implementations on Apple's latest OS X operating system, Windows, UNIX, IBM iSeries (formerly AS/400), IBM pSeries (formerly RS-6000) and mixed networks. In addition to the core financial and logistics functionality of Hansa Version 3.9, the new release incorporates a suite of integrated customer relationship management features, which includes an interactive task manager and calendar that can share relevant data with other employees and other modules of the software, such as purchase ordering, sales ordering and stock control. Alternatively, if companies do not wish to implement the full suite of Hansa software, they have the option of purchasing First Contact, a bundle of CRM modules that is sold as a stand-alone system.




K-splitter

POT-POURRI

Who says there's no place for gossip and scandal in a Linux magazine? K-splitter broadcasts news from the K-world and noses around here and there behind the scenes. The new season holds many a surprise for all KDE-ers, and Stefanie Teufel is at hand to introduce them all.

Grumble corner

If you've ever wanted to get rid of an annoying element in a KDE application, or know how your favourite tool could be made even more useful, you'll now get the ultimate chance thanks to the revived KDE Usability Study. The makers of the KDE project are collecting user reports at their Web site, http://usability.kde.org/, which highlight weak points or especially successful features of diverse KDE programs. There are already some evaluations on the site of consoles, Kicker, Quanta and Konqueror. Anyone wanting to contribute their views on greater user-friendliness in KDE should not delay. Head straight to http://mail.kde.org/mailman/listinfo/kde-usability/ and join the project's mailing list. Alternatively, the makers will also be glad to receive new reports of your experiences. The usability lists can be found at http://usability.kde.org/reports/maintainerlist.phtml.

Join up now – your usability study needs you

This season's fashions

The latest in chic for your KDE desktop is presented to you this season by Kristof Borrey. Instead of simply clicking together another KDE theme, he has a whole collection of bright, shiny, colourful icons waiting for you. Anyone wanting to give their desktop and Konqueror a whole new look can obtain the package Kicons_0.2.1.tar.gz from http://prdownloads.sourceforge.net/ktemplate/. Open the control centre and select Design/Symbols. Then click on the folder icon and, in the file browser which then appears, select Kicons_0.2.1.tar.gz. All you have to do now is press OK and click on Install New Design. Select the new entry Kicons in the Design tab to view the works of art, as in Figure 1.

Figure 1: Your country needs new icons
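If you'd rather skip the control centre dialog, KDE can also pick themes up from the per-user icon directory. The helper below is a sketch, not part of the Kicons package: it assumes the tarball unpacks into a theme directory of its own, and that ~/.kde/share/icons is where your KDE 2.x installation looks for user themes.

```shell
#!/bin/sh
# Hypothetical command-line install of an icon-theme tarball such as
# Kicons_0.2.1.tar.gz. The destination defaults to the usual KDE 2.x
# per-user theme directory, but can be given as a second argument.
install_icon_theme() {
    pkg=$1
    dest=${2:-$HOME/.kde/share/icons}
    mkdir -p "$dest" &&            # create the theme directory if needed
    tar xzf "$pkg" -C "$dest"      # unpack the theme where KDE looks
}

# Usage: install_icon_theme Kicons_0.2.1.tar.gz
```

After unpacking, the new theme should appear in the control centre's Design tab just as if it had been installed through the dialog.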



Black on white

All those who were unable to attend LinuxTag live now have the option of viewing an HTML version of Michael Goffioul's talk on the new KDE print system at http://users.swing.be/kdeprint/www/index.html. As chief developer of this system, which is being made accessible to a broad range of users for the first time in the newly published KDE Version 2.2, Goffioul has a thing or two to say about the new technology. Further information, including additional screenshots and photos, can be obtained from the KDE Print Web site at http://users.swing.be/kdeprint/. The author is also seeking support for the documentation of the individual components and will be glad to receive any offers.

If you missed the real thing, check out the Web site

Finding room

No matter how big your screen may be, once you've opened a couple of windows the desktop will already look much too small. With the increasing multiplicity of applications, space is also getting tighter in the KDE Panel. Luckily, the same thing happens to Karl-Heinz Zimmer, which is why he's thought up a little trick for getting on the Internet via an ISDN dial-in connection without all that tedious messing about with the mouse, and without burning up valuable panel space on the status indicator. You can now dial in at the press of a key, and the ISDN status indicator is replaced by a coloured panel background. To do this, download the script created by Karl-Heinz Zimmer (http://bugcops.org/downloads/isdn_on_off) and the background image (http://bugcops.org/downloads/Panel_Online_Background.png) onto your computer. Then copy the file isdn_on_off into the directory /usr/local/bin, or wherever else you store treasures like this. As root, ensure that the script can be executed globally:

chmod a+x /usr/local/bin/isdn_on_off

Figure 2: A menu item...


Figure 3: ... and a shortcut key for getting on the Net

For safety's sake, take this opportunity to check whether, as user, you have execution rights on /usr/sbin/isdnctrl. If not, correct this. Start KDE's menu editor from the K menu, under Install Control Bar/Menu Editor, and make a new entry under Internet. Enter the following values here: in the General tab, enter ISDN_On_Off as the name. For a comment, add something like "Activate and deactivate the ISDN connection"; if you're happy to do without a comment, you can always leave the box empty. For the command, enter isdn_on_off (Figure 2). Now switch to the Extended tab, where you should click on the Change button. In the dialog box which then appears (Figure 3), set the key combination to Shift+F12. Now when you press Shift+F12 you can get onto the Network of Networks and, after glancing at your latest telephone bill, hang up again in the same way. ■
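Zimmer's isdn_on_off script itself isn't reprinted here, but its job is simple: check whether the ISDN interface is up, then call isdnctrl to dial or hang up accordingly. A minimal sketch of such a toggle might look like the function below. The ippp0 interface name, the use of ifconfig to query its state, and the overridable variables are assumptions for illustration, not details taken from the actual script.

```shell
#!/bin/sh
# Hypothetical sketch of an isdn_on_off-style toggle; the real script by
# Karl-Heinz Zimmer is linked in the text. ISDNCTRL and IFACE may be
# overridden in the environment, e.g. for testing with a stub.
isdn_toggle() {
    ctrl=${ISDNCTRL:-/usr/sbin/isdnctrl}
    iface=${IFACE:-ippp0}
    # If the interface is already up, hang up; otherwise dial in
    if ifconfig "$iface" 2>/dev/null | grep -q "UP"; then
        "$ctrl" hangup "$iface"
    else
        "$ctrl" dial "$iface"
    fi
}
```

Bound to Shift+F12 via the menu entry described above, each press of the key would flip the connection state.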



GNOME NEWS

GNOMOGRAM

A GLIMPSE OF THE FUTURE

Björn Ganslandt looks at GNOME 2.0, Mono gleanings, Oregano, fractal landscapes with Terraform and a usability study of GNOME

GNOME 2.0 With GNOME 2.0 on the horizon, there is hard work going on everywhere at GNOME and, to some extent, the first fruits of this labour are ready to be admired. The control centre is set to become a standard part of Nautilus; systems not running Nautilus will still have a separate control centre window open. Anyone brave enough to compile this so-called shell at this early stage will be rewarded by an interface which has been given a complete facelift, and which, with its big icons, looks more like Windows than the old control centre. Another new feature enables applets to be started with the respective settings in their own windows. There have been a few changes under the bonnet, so now the control centre makes use of Ximian Setup Tools to archive settings and to allow a multi-step undo. This is also a very simple way to load specific local settings, such as those one might use on a laptop. The applets themselves have also been revised and some have been swapped into their own packet, named Control-Center-Plus. One change that has been long overdue is

The GNOME Control Center

the overhaul of the screensaver applet, which hasn’t run smoothly with the available screensavers for some time. To prevent this situation from repeating itself in a few months’ time, consideration is being given to combining the new applet with the official XScreenSaver demo. For all its lovely gtkhtml-based interface, the control centre is still highly unstable, and in addition to a new version of libcapplet it still needs some fairly exotic packages such as bonobo-conf and pkg-config, most of which can be found on the GNOME FTP server.

Another very nice, but on its own still completely useless, feature is the new GNOME file selector, which is of course based on Bonobo. This also uses a few widgets from Evolution, which can be found in libgal, and enables the directory view to be grouped according to file type. All users of Ximian GNOME or Windows should find the shortcuts to important directories familiar. The intuitive file name completion is practical for new users, and the dialog also notes when a file was last accessed or saved. By using GNOME-VFS it is possible to access remote files – such as those on a digital camera – just as one would access a hard disk with Nautilus.

Mono gleanings Ximian’s announcement that parts of Microsoft’s .NET are to be implemented as free software under the name Mono has struck a nerve in the Linux community: there followed numerous articles by incensed, and sometimes badly informed, authors. The main bone of contention was Microsoft’s Passport, a system intended to allow simpler authentication of users: personal data is managed centrally and, if necessary, passed on to sites such as Hotmail without the user having to type it in. It is understandable that not everyone wants their personal details to be controlled by Microsoft, but it was never Ximian’s aim to clone .NET completely. In the first instance, the C# programming environment contained in the .NET framework will be implemented. The report that Microsoft will be supporting Ximian in this implementation certainly does not mean that Mono will support Passport, much less that programmers will be forced to implement this system in their future work.

This report was followed by another in which it was rumoured that .NET (at least in the USA) is based on software patents, which could make Ximian’s efforts worthless. In respect of Passport, DotGNU – the second project involved with .NET – is more interesting, as this is where the possibility of decentralised authentication is being worked on. Since the cooperation with Portable.NET the project has also included a C# environment. The fact that both projects have been blessed by the FSF is due, among other things, to bad timing, and is not without precedent: the FSF also produced Harmony, a free replacement for the Qt GUI library. The Harmony project was launched to make it possible to include KDE (and any other free programs designed to use Qt) in wholly free operating systems such as GNU and Debian GNU/Linux, and was to be released under the GNU Library General Public License (LGPL).

Oregano Oregano is a program for drafting and simulating circuits. To this end it offers a wide variety of circuit elements, from resistors to transistor logic, and the resulting diagrams can, if required, be printed out for copying. Most interesting for the hobbyist is the possibility of simulating the circuit and plotting voltage or frequency at specified points as graphs. The actual simulation work is done not by Oregano itself but by SPICE. Despite its 30 years, SPICE is still one of the best programs for this task and offers analyses Oregano cannot yet visualise. Since neither GNU nor Open Source existed back in 1971, it’s not surprising that SPICE has no free licence by today’s criteria. There are a few projects based on SPICE, such as Al’s Circuit Simulator and Ng-spice, but these are either not fully compatible with the original software or not readily available, so no genuine alternatives exist.

Usability study of GNOME Sun has been asking users without any GNOME experience to perform certain tasks with the GNOME system, and has published the results at http://developer.gnome.org. The value of such a study is revealed by the fact that many of the suggested improvements are simple to implement but make a massive difference to the user. In order to clarify questions of usability before they are wrongly answered, there is also the GNOME Usability Project, which is working on interface guidelines. One problem when creating such guidelines is the great diversity of GNOME users: on the one hand, a few users felt baffled by the numerous options in the control centre; on the other, these options are what make the desktop so flexible. Against this background, the approach taken by Nautilus and Sawfish – classifying configuration options by user experience level – appears to be the right one.

Info

Sun report: http://developer.gnome.org/projects/gup/ut1_report/report_main.html
Gnome usability: http://developer.gnome.org/projects/gup/
Mono project: http://www.go-mono.com
Mono news: http://www.softwareuncovered.com/news/cgram-20010716.html#1
.NET patent news: http://www.zdnet.com/zdnn/stories/news/0,4586,2801560,00.html
DotGNU project: http://www.dotgnu.org
Gnome Control Center: http://ftp.gnome.org/pub/GNOME/unstable/sources/control-center/
Pkg-Config libraries: http://www.freedesktop.org/software/pkgconfig/
Oregano: http://oregano.codefactory.se/
Circuit Simulator: http://metalab.unc.edu/pub/Linux/apps/circuits/
Ng-spice: http://www.geda.seul.org/tools/ng-spice/
Terraform: http://terraform.sourceforge.net/
Ray tracing: http://www.povray.org

Fractal landscapes with Terraform As with so many programs, Terraform’s stated objective is to become the GIMP of its field – in this case editing fractal landscapes. Fractal landscapes, also known as heightfields, owe their name to the fact that a landscape shows similar characteristics however much a section of it is enlarged. In this respect they resemble fractals, which makes it possible to write algorithms that generate such landscapes. Terraform offers several algorithms at once, each delivering somewhat different results. A few filters can be applied to the generated landscapes which, depending on what you want, can smooth out the landscape, provide it with craters or raise the water level. In addition to the normal preview, Terraform provides several three-dimensional views, some of which can be moved around in space. To calculate a completed image, the program makes use of the ray tracer POV-Ray. The heightfield can also be exported in various formats and thus be inserted into other programs. Those who want to print out the heightfield can do so from Terraform through GNOME-Print.

A fractal landscape in the wireframe view



COVER FEATURE

Security: Bolting the door...

OPEN ALL HOURS

No computer is ever completely secure – there’s always risk. The type and extent of that risk can vary. Colin Murphy investigates the dangers

The view from the inside

What is more valuable, your computer or your data? No matter how good your hardware is, there is always the risk that it will stop working, failing and corrupting your data as it goes. It’s been said before and it will be said again: make backups of your data – to tape drives, to WORM drives – and buy hardware support contracts if need be. It’s not too hard to make a backup of your /home directory to a CD writer either. Along with the household insurance of a data backup, your data is also at risk from theft and malicious corruption. Theft may seem the more obvious danger, but corruption could do you just as much damage. What is more worrying is that either could easily happen on an unsecured machine. The simplest way for someone to steal your data is to physically take it. A chance thief will make off with a backup tape or removable drive. Even a whole machine – especially if it’s a nice shiny notebook – is very attractive to a passing light-fingered Fred. We can all remember the MI5 worker who lost his laptop in a tapas bar this summer. The only answer is lock and key and physical security. Even following all the best practices and procedures, such as restricting access and bolting the casing down, we are still left with the potential of cyber crime over the network, so loved by fiction writers and hacker/cracker wannabes. With broadband access in the home just starting to appear – be it ISDN, ADSL or cable modems – this is no longer just a corporate network administrator’s worry. More Small Office/Home Office users will be tempted to take on these new forms of Internet connection, and we all have to be wary of the potential threats. If your computer is connected to the outside world then there are always risks. Lots of companies make

HACKER/CRACKER The term hacker is almost always used out of context. Anything good can be a hack: fixing a piece of code to make your company more profitable or producing a meal from leftovers are both good hacks, and you could be proud to be called a hacker. Unfortunately the term is abused by the press and is usually taken to mean cracker – someone who breaks into computers. Even the term cracker can be subdivided, into those who break in for the mental challenge and those whose intent is malicious.


their income by helping you to reduce these risks in what is a complex system. Complex or not, it is only right that you take some precautions yourself, which will also give you the chance to discover more about your system.

Keeping up with the Joneses When you install a new distribution on a computer, it is reasonable to assume it is almost up to date. There is always a delay between the final collection of packages, QA testing, manufacture, distribution and finally sale before you get hold of it, so it’s wise to check the Web site for security updates as you install. This is where the major distributions gain an advantage: they have invested in networks to deliver upgrades and fixes, and some can make this seem almost automatic. Red Hat uses its Red Hat Network, Mandrake uses its local client MandrakeUpdate to look for updates and bugfixes, and SuSE has its security announcements in its support database. All distributions worth their salt, or your money, will also



ntop The first tool to look at is ntop, which shows network usage in much the same way as the popular Unix top command does for processes. Since ntop has grown so much in functionality, and can no longer simply be considered a network browser, the job has been split: the ntop engine captures packets, performs the traffic analysis and stores the information, while a simple Web interface presents the results. ntop must be run as root, or at least with root permissions. This is best achieved by a user with normal permissions switching to superuser mode with the su command. The user will need access to the root account, but this shouldn’t be a problem, because only system administrators should be playing with this anyway.

Firewall A firewall is nothing more than a piece of software that allows or blocks information passing through it according to simple rules. The rules can be changed, so care must be taken to ensure everything is double-checked. A firewall can form part of a workstation machine, but if your network is any more complicated it is often useful to have a dedicated firewall machine.

ntop running in a terminal

This is a far more secure method of access than setting the SUID bit on the executable, which would enable anyone to run the program – systems administrator or not. If ntop is started in a terminal you will be shown a display of network activity, much like top. Here you will see some basic information about what data is being sent where, and how much. ntop can also present you with much more information via a Web browser. Start ntop with:

ntop -i eth0 -w 888

ntop viewed through a browser

run mailing lists to advise you of any security issues. As new development is done, patches are released for your packages and some of these will have security implications. There is still the risk that a weakness has been identified and that the developers for your distribution have still to find out about it, which means you are your first line of defence.

Ring of fire Our first port of call is the firewall, a fundamental protection from network intruders. Like a condom, it gives you a sense of security; unlike a condom, its job is to deny penetration. It enables you to control which types of services you are happy for your machine to handle – not all are as secure as you would want. Most distributions will let you set up a personal firewall and configure the type of access you require, usually from their graphical configuration tools. Often

and point your browser to http://localhost:888. With ntop you will be able to sort through the network traffic by protocol and various criteria, look at network statistics, show the IP traffic and sort that by source or destination and much more. This will enable you to get a feel for the movement of data through your network.

Services Data coming into your system needs to be handled by the correct type of software – it would be pointless, for instance, for your email client to try to interpret data meant for a time server. The types of data coming into your machine are therefore divided into services. Ports Ports are the means by which your computer knows which service incoming data belongs to. There is a defined list of ports, together with their associated services, in the file /etc/services.
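The /etc/services mapping is plain text: one service per line, with the port/protocol in the second column. A small helper (the function name is ours) shows how it can be queried:

```shell
# Look up the port/protocol for a named service in an
# /etc/services-style file (service name, port/protocol, aliases).
port_of() {     # port_of <service> [<services-file>]
    awk -v s="$1" '$1 == s { print $2; exit }' "${2:-/etc/services}"
}
```

On a standard system, port_of finger would typically print 79/tcp.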

a firewall will have been set up during the initial installation, perhaps with a couple of questions about whether or not you are running, say, Apache as a Web server. If you are not running those services then their ports will be closed off. Configurations can range from allowing anything to connect – useful, maybe, if you are running a self-contained network that never has any access to the outside world and is only connected to another computer in the spare room, which you want to pass files around with ease – all the way to blocking all incoming and outgoing access without direct intervention. It is at this point we must consider just what is running on our systems. As Linux boxes are used to acting as both server and client at the same time, we must be careful about access rights. We also tend to have helpful system services (daemons) running in the background. These daemons wait and, when required, enable connections via ports – but unfortunately this opens huge access holes in the system. You can use




the process status utility:

ps -aux | less

to list which are running, though some services will only be running if something has already tried to open the port associated with them. Another utility, netstat, will show you what is listening on your box:

netstat -l

Some of these services may not even be wanted; others may be an outright security risk – telnet and finger are just two that spring to mind. If you are running Red Hat then you can use chkconfig --list, while SuSE users can use YaST: as root, go to System Administration/Change Configuration/Services started at boot. If you want a more hands-on approach you can configure access manually. The configuration takes place in one of two places, depending on which distribution you are using: inetd.conf or xinetd.conf:

Nmap The next program is Nmap, which enables you to run a port scan yourself. This is most useful to highlight which parts of your system are insecure. A port scan is what a cracker will use to find weaknesses in your system: a stream of data is sent to a range of ports, to see whether any of them reply. If they do, then they are prone to attack. Portscanning is a sign of attack, so you should not use this tool against networks that are not under your control. Should you, you are likely to find yourself barred from accessing wherever you scanned, and a complaint will be sent to your ISP, which could mean them withdrawing their service from you. Nmap (Network Mapper) is an open source utility for network exploration or security auditing. It was designed to rapidly scan large networks, although it works fine against single hosts, which is where you’ll probably use it. Nmap uses raw IP packets in novel ways to determine which hosts are available on the network, what services (ports) they are offering, what operating system (and OS version) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. Nmap runs on most types of computers, and both console and graphical versions are available: with Nmap you also get XNmap, which gives you all of the probing from the comfort of a graphical user interface. The primary goal of the Nmap project is to help make the Internet a little more secure and to provide administrators/auditors/hackers with an advanced tool for exploring their networks. The Nmap project also boasts a wealth of tutorials and help files to make sure you get the best out of its powerful features.

Nmap shows the list of open ports on a local machine


● /etc/inetd.conf will contain a list of services and their ports. You can switch off these services by commenting out – putting a ‘#’ in front of – the lines that concern you.
● /etc/xinetd.conf has a more complex configuration file structure, but you still just have to comment out the service lines that you don’t need.

These services are not running until some call has been made to the associated port. inetd, or its eXtended daemon cousin xinetd, will spot this call and then start the required service if it can see it in its configuration file. Commenting out lines like this makes them invisible to the daemon, but they are much easier to reinstate than if we had deleted the lines completely. If you have configured by hand you will need to restart inetd by using the command:

killall -HUP inetd

Good daemons to remove are the r* services, such as rshd or rlogind, as well as those just not used, like daytime. fingerd is also worth considering removing, as it gives out a lot of information to potential intruders.
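Disabling a service this way can also be scripted. This sketch (the helper name and the sed approach are ours) comments out the matching line and signals inetd to reread its configuration:

```shell
# Comment out a service in an inetd.conf-style file and ask inetd
# to reread its configuration. The file argument defaults to
# /etc/inetd.conf; pass another path to experiment safely.
disable_inetd_service() {   # disable_inetd_service <name> [<conf>]
    svc=$1 conf=${2:-/etc/inetd.conf}
    # Prefix '#' to lines starting with the service name.
    sed -i "s/^${svc}\([[:space:]]\)/#${svc}\1/" "$conf"
    killall -HUP inetd 2>/dev/null || true   # no-op if inetd absent
}
```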

From the outside looking in Now we have removed some daemons, we can add useful software. Not only should you make sure that your distribution is up to date with security packages, but that any third party software is too. Browse the Web and make sure you have the latest versions of the servers you want to run, such as Apache. The system is now more secure, but all is still not well. Very often, passwords are still sent in plain text. When you set up a connection across a network it is possible for someone to use a password-sniffing tool to listen in to whatever you type. To overcome this we can use ssh. sshd is the daemon that replaces the r* services we removed earlier and adds encryption to all the communications. For ssh you will need the following packages:

● Openssh-.rpm
● Openssh-server-.rpm
● Openssh-clients-.rpm
● Openssl-.rpm

To connect with ssh use the following command:

ssh -l username target.computer.com

Copy files by using:

scp /where/the/file/is.txt target.computer.com:/where/you/want/it/to/go



Info

Tripwire Prevention of a crack attack is important and you must treat it with utmost priority. But how will you know that you have succeeded or, more importantly, failed? Should someone sneak past your best efforts and manage to make themselves at home amongst your precious data and computer resources, it’s important to know that your defences have been breached. One of the biggest concerns after a breach is knowing whether your data is still accurate and hasn’t been tampered with. This is where Tripwire can help. Tripwire is a tool that checks to see what has changed on your system. The program monitors key attributes of files that should not change, including binary signature, size and expected change of size. The hardest part of doing this is balancing security, maintenance and functionality. Tripwire maintains a database of details of all the files you have configured it for, and compares these details against what is really in your directories. Should anything be different then

ntop: www.ntop.org
Nmap: www.insecure.org/nmap/index.html
Tripwire: www.tripwire.org
PortSentry: www.psionic.com/abacus/portsentry

The home of Tripwire

warning bells will sound – or at least a log file will be written. The important thing is that you know there has been a breach, and that you can no longer fully rely on all of your data.
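Tripwire’s core idea – a database of file fingerprints compared against the live system – can be sketched in a few lines of shell. Here md5sum stands in for Tripwire’s richer per-file signatures, and the function names are ours:

```shell
# Record fingerprints of files that should never change...
baseline() {            # baseline <dbfile> <files...>
    db=$1; shift
    md5sum "$@" > "$db"
}

# ...and later check the live files against the stored baseline.
check_baseline() {      # returns non-zero if any file changed
    md5sum -c "$1" >/dev/null 2>&1
}
```

Tripwire itself does far more (multiple hash algorithms, a signed database, a policy language), but the compare-against-baseline principle is the same.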

Portsentry PortSentry is the most powerful tool we will mention here, listening on TCP and UDP sockets to detect port scans against your system. PortSentry can be configured to run on multiple sockets at the same time, so you only need to start one copy to cover dozens of tripwired services. PortSentry will react to a port scan attempt by blocking the host in real time, through a range of configured options. The most useful is having PortSentry drop the illegal packet. Because the packet is dropped and forgotten about, no acknowledgement is received by the cracker who sent it, who therefore doesn’t know they have hit anything and remains none the wiser regarding you and your machine. PortSentry can also take full advantage of dropping the local route back to the attacker, using the Linux ipfwadm/ipchains commands, and/or dropping the attacker’s host IP into a TCP Wrappers hosts.deny file automatically, which will further strengthen your system. PortSentry will detect SYN/half-open, FIN, NULL, XMAS and oddball-packet stealth scans. These are much more obscure forms of port scanning and are not usually used by the average script-kiddie. Once a scan is detected your system will turn into a black hole and disappear from the attacker. This feature stops most attacks cold. PortSentry has an internal state engine to remember hosts that connected previously. This

PortSentry in action

allows the setting of a trigger value to prevent false alarms and to detect “random” port probing, which can happen as part of regular Internet life. PortSentry will report all violations to the local or remote syslog daemons, indicating the system name, the time of attack, the attacking host’s IP and the TCP or UDP port on which a connection attempt was made. When used in conjunction with Logcheck it will provide an alert to administrators via email. Here is your last line of defence: you must regularly check for discrepancies. If you have yet to set up something like Logcheck then you must go through the system log files on a regular basis to make sure that everything is still in order. Without this final effort, the whole exercise is worthless.
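To give a flavour of the options described above, here are a few lines adapted from the sample portsentry.conf shipped with PortSentry (treat the exact command paths as assumptions for your system; $TARGET$ is replaced by PortSentry with the attacker’s IP):

```
# Drop the route back to the attacker (the "blackhole" behaviour)
KILL_ROUTE="/sbin/ipchains -I input -s $TARGET$ -j DENY -l"
# Add the attacker to TCP Wrappers' deny list
KILL_HOSTS_DENY="ALL: $TARGET$"
# Number of connects from a host before a response is triggered
SCAN_TRIGGER="1"
# Block hosts detected scanning TCP and UDP ports
BLOCK_TCP="1"
BLOCK_UDP="1"
```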



FEATURE

Lego Mindstorm Dreaming of electric sheep

BRICKS AND PIECES Robotics is a minefield of a subject due to the many different avenues of exploration. Despite limited time and resources, John Southern bites the bullet and plays with Lego...

You can spend your time building hardware, or you can concentrate on the programming. You can even do it all virtually, using something like the University of West Florida’s robot modelling site. Hardware-wise, most of the kits available depend on the Parallax BASIC Stamp, which is a PIC microcontroller. Unfortunately it is controlled from Windows, though by the time this is published a new C environment should be available. The BASIC Stamp allows simple circuits to be built and controlled, giving rise to many third-party kits. What I wanted was something ready-made so I could play with the software. A wet Saturday afternoon meant a trip to the local Toys’R’Us to see what we could find to while away the afternoon. Lego

A simple rover

Mindstorm cried out, but we were cautious. Lego released Mindstorm a few years ago, inspired by the MIT programmable brick. The main programmable block (RCX), based on a Hitachi H8/3292 microcontroller, is available in three versions (1.0, 1.5 and 2.0). In the UK you can buy either the Lego Robotic Invention System 1.5 or 2.0. The difference lies not in the RCX brick but in the infrared controller: in version 2.0 the controller is USB, while in 1.5 it is serial. The most recent stock in the shop was version 1.5 with an RCX 2.0. The RCX version does not really matter, as it can be upgraded and, using LegOS, its firmware can be replaced. The first hour was spent just opening lots of bags of Lego and playing. Finally deciding to build a robot to follow a path, we faced our first challenge: without a Windows machine in the house the supplied software is of no use. We could spend Saturday night installing Windows, or turn to the Web for help. After just a couple of minutes on the Web we were faced with an array of choices. We could use the Lego RCX’s built-in software and run a Linux-based programming tool, or we could download a new programming language into the RCX and again control it from Linux. Sticking with the built-in firmware, we could then choose from a range of programming languages such as Forth or NQC (Not Quite C). We opted for NQC, as the Forth primer is somewhere upstairs and laziness had taken over. NQC is command line-based and the latest version (2.3r1) is a 188K download. The package contains a test file to check that the system is



Seek the light and avoid the cat

working and you have everything connected. It is probably worth downloading the NQC package just for this test, as it puts your mind at ease over the hardware. To control the RCX brick we first write our NQC program in a simple text editor. We save the file with a .nqc extension, and the command

nqc test.nqc

compiles the code. By adding the -d switch we can send the compiled code to the RCX brick:

nqc -d test.nqc

NQC follows syntax very similar to C or C++:

task main()
{
    SetSensor(SENSOR_1, SENSOR_PULSE);
    while(true)
    {
        if (SENSOR_1 == 2)
        {
            PlaySound(SOUND_FAST_UP);
            ClearSensor(SENSOR_1);
        }
    }
}

As can be seen from the above example no surprises appear in the coding. The real surprise is that the sensors and output ports (three of each on the RCX brick) are not just digital on/off but analogue and can be used to sense a range from 0 to 1023. This means with just a few logic gates we could expand the number of sensors, but that’s for another weekend. From the above example we can see that there is a sound generator on board. With a little work with the PlayTone command we can get the brick to sing, so long as we are careful to not let the buffer queue overflow (every eighth note we have to pause until the buffer is empty). Actually more time was spent on building the robots, due to the huge amount of parts in the box and the constant hunt for the correct brick shape. The robots can be initially built by using the easy to follow booklet supplied with the kit – just remember to allow more time to find all the parts. Having tested that the Linux box could control the RCX brick and had fun making little models dance, sing and even head towards the light (Warning: Cat owners should note that the robot is easily knocked over by an angry feline), we couldn’t resist updating the firmware. Our first choice was the pbFORTH but in practice we settled for LegOS. Both of these routes enable you to replace the firmware inside the RCX brick and thus give you far more control over the unit. To update the firmware we need to use a firmware

downloader; one was at hand in NQC, with the -firmware switch. LegOS gives much more control, and we can now program the tiny LCD screen. We wrote a simple C source file and compiled it. First mistake: you need to set the makefile’s TARGET variable, otherwise you will wonder where the .srec files go. Second mistake: make sure the serial port is correctly set, as the default is ttyS0. Third mistake: getting the error message “no response from RCX” does not necessarily mean the RCX brick is faulty or out of range – it may be that the IR control unit’s battery has finally died. With LegOS finally running, and after recompiling the code to remove errors, we find all sorts of extra functions. Motor speeds can be subdivided into 255 units. A brake function enables us to lock a wheel, while the off function lets it freewheel. The only disappointment with the kit was the number of pieces included. While it may have taken ages to find parts because they are so numerous, the number of pieces is cleverly limited to only just build the robots in the supplied booklet. Now serious consideration must be given to asking Father Christmas for more pieces. Who knows – maybe next time the smaller Lego Scout module may appear.

Info http://www.enteract.com/~dbaum/lego/nqc/ http://www.legOS.sourceforge.net/




EMAIL SERVER

SUSE EMAIL SERVER III

SuSE has recently produced a very glossy and highly desirable range of commercial products which neatly undercuts the prices of most other similar products. Richard Ibbotson takes a long hard look at the latest SuSE eMail Server

Why me?

What is it that the SuSE eMail Server has to offer you? Apart from the reliability of Linux software, and the present-day fact that there are not many viruses that will attack it, there is the extremely user-friendly Web-based interface that you can use to perform the many tasks required of a mail server on a daily basis in a busy commercial or academic environment. The eMail Server supports all of the usual Internet standards, such as IMAP, LDAP, POP3, TLS and SASL. All common email clients can be administered from a workstation connected to the server, which provides a central administration point for a commercial organisation. Dedicated workgroups, and all of the things that you might associate with proprietary software, are available; internal and external lists can also be set up and administered. What SuSE Linux UK Ltd has done is take some free software and write some high quality software to complement it. The end result is that the user only has to click on a Web page and fill in some easy to understand values for user and group configurations. SuSE has also populated the boxed product with some excellent manuals. The first part of the installation manual is easy to understand: if you can install a Microsoft product then you can use YaST2 and install the eMail Server. Many commercial and even Government organisations that we have spoken to have said that they want this kind of commercial product and are willing to pay good money for it. Quite a few of them also say that proprietary software is becoming prohibitively expensive due to the cost of licences, that Linux is a viable alternative and that they want more of it. The eMail Server III reviewed for this magazine was tested on i386 hardware, which is the kind of hardware you can find in most small companies worldwide. A 450MHz AMD K6-2 CPU and 128Mb of 100MHz RAM was the hardware used




Apache configuration.

User account control

to install the server into the i386 architecture. The installation was over in less than ten minutes, and after fifteen minutes an IBM notebook was connected to the networked server so that administration and configuration of the new accounts could begin. Configuration of a single account took only a few minutes, and we noticed that both the server and the Web browser we were using moved like lightning. This was also tested across the Internet over an ISDN link, with little or no loss of speed. Most commercial organisations use digital communications, so there should not be a problem with remote administration and configuration.

Reasons to use?

How does it work and what makes it better than some of the others? To be honest, it may not be better than some of the others, but there are a lot of people out there who do not want to hack at a command line in the middle of a busy schedule, or be involved in administering several hundred machines that crash all of the time. Let's face it: we've all had that problem at one time or another with internal or remote computers that need that daemon tweaked, or an account added or removed, on the one day of the year when everything else has gone wrong. To use the eMail Server for configuration and administration, all you need is a Java-enabled Web browser on any machine in your own internal network. You can also connect to the same machine with SSH if you prefer the command line. The eMail Server is basically a cut-down version of the ordinary SuSE distribution, so any secure session that you might wish to establish over an internal network, or across an untrusted network, is possible with the eMail Server. This means that an SSL connection can be made with Samba as well, if you

Configuring Fetchmail from the frontend.

Postfix configuration




Postfix mail queue

Creating a new user account to receive email

want to do that. Apache configuration can be done with the addition of a CA. After the initial login, the user or administrator is pointed by the graphical interface towards the first-time configuration of a single user or a complete group. There is also provision for browser-based configuration of Postfix, Procmail and Fetchmail. If you have hacked these on the command line as much as I have then you'll probably prefer a graphical approach. The actual Web forms that you are asked to complete vary in complexity and sophistication. If you are confused by first-time configuration the manual should help you out, and if that doesn't work then support by fax or email is easily obtained. The first part of the manual gives easy to understand graphical instructions on how to install. The later pages (starting at chapter five) explain the simple task of logging in as an administrator and configuring the server. This part also shows pictures of what you can see on the screen, so there shouldn't be any problems. There are also complete descriptions of how to use various mail applications with the eMail Server and how to

configure those as well. Finally there is a section on how to use the Arkeia backup software to make a copy of your mail folders, so that nothing is lost in the event of a disaster. Arkeia is included on the installation CD that comes in the box. As well as the installation manual there is also a cut-down version of the original SuSE manual – those who know it will be aware that the SuSE manual is one of the best books about Linux that has been produced. If you don't like paper, you can install the same documents from the CD onto your own local hard drive for further reading. If you are short on disk space you can instead connect via the Internet to the SuSE site, where you will find the same documentation as well as the SuSE support and hardware databases. A second CD contains the source code for the eMail Server, so if you are a developer, or if you just want to change the way the eMail Server runs on your network, you can do that.
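Behind any Fetchmail setup, including the browser-driven one described here, the same kind of run-control directives are at work. A hand-written equivalent might look like the sketch below – the hostname and account details are invented, and the eMail Server's frontend may well store its settings elsewhere:

```
## ~/.fetchmailrc -- collect mail from an ISP mailbox and hand it
## to the local MTA (Postfix, in the eMail Server's case)
poll pop.isp.example protocol pop3
    user "jbloggs" password "secret"
    smtphost localhost
```

With a file like this in place, running fetchmail polls the remote mailbox and injects anything it finds into the local mail system for delivery.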

What about viruses?

Amavis is included on the CD. There is a very good SuSE security team who are paid to look after you, and a security mailing list if you wish to discuss any security issues. You can get the kind of support for viruses and other security issues that will make sure your server runs for a very long time without interruptions and without intruders. If you don't like Amavis you will find commercial email virus scanners out there on the Internet, which you will have to pay for. If you want a reliable and virus-free mail server then the SuSE eMail Server is for you. You can find more information about the eMail Server by visiting the useful links.

The author

Richard is the Chairman and organiser of Sheffield Linux User's Group. You can view their Web site at www.sheflug.co.uk

Expert mode Postfix configuration




Agenda VR3 Linux PDA test report

PENGUIN IN YOUR POCKET

The recently released Agenda VR3 is the first of a new generation of PDAs based on the Linux operating system. Under its chic exterior lies a solid Linux core, but is its bite as good as its bark? Carsten Zerbst investigates

PDA Personal Digital Assistant or, in other words, an electronic organiser. The best-known PDAs are those from Palm, Handspring and Psion. Handspring bases its Visor on the Palm OS, which is licensed from Palm. In contrast, the PDAs from Psion have a keyboard and are significantly larger. Psion, however, recently announced its intention to discontinue selling its own devices to the end user.


Linux has acquired the reputation of being an ideal base for PDAs in the past few years. Of the many PDAs announced, only two candidates are presently available: LISA supplies Compaq's iPaq with Linux (Compaq itself only distributes it with the Windows CE operating system); and the Agenda VR3 – the first genuine Linux PDA – which has been available since July 2001. Agenda Computing presented the first production VR3 machines at the Linux Expo. They are marketed directly by Agenda at a price of around £200, though this will depend on exchange rates, shipping and local taxes. Included with the PDA you also get a cable to connect it to the serial port of a PC, a cradle, a headphone/microphone combination and a leather cover. A CD, which includes software for the

PC as well as clear operating instructions, is also included. Third party software is also now available – the Agenda Software Repository is a good place to look.

First impressions

The VR3 measures approximately 4.5in by 3in (11cm by 8cm) and thereby fits comfortably in the hand. The display cover is connected to the housing and can be folded to the rear. On the one hand it cannot be lost as easily as the cover of a Palm or Handspring Visor; on the other, it gives a somewhat awkward impression. Only time will tell if this is the optimal solution; in any case the cover can be easily removed. The supplied leather case provides space for the Agenda (with or without display cover), the stylus, as well as a few business cards.



The display has a viewable area of 2 1/8in by 3 1/4in (5.5cm by 8.5cm), which is over half an inch (1.25cm) longer than the Palm display. The main reason for this is that the VR3 uses the entire display surface for applications: the handwriting recognition and the keyboard are only displayed when required, and therefore do not permanently take up the bottom inch (2.5cm). The 160 x 240 pixel display represents 16 grey scales and can be easily read in direct sunshine or in bad lighting. A quick check of the schedule at night is possible as well, thanks to the internal lighting. As with all PDAs, the quite strongly reflective glass surface can be annoying. There is even a program that turns the display completely black so that it can be used as a make-up mirror – perhaps the mirror effect should be regarded as a marketing feature. The housing has six keys: two for up/down, two for left/right and two shift keys. The two large shift keys each operate two micro-keys and could therefore theoretically function as toggle keys. Unfortunately both micro switches are wired in parallel on the circuit board and thus can't be differentiated from each other. The arrangement of the buttons on the Agenda is equally suitable for left- and right-handers; however, both hands are normally required for operation.

Software

When switching the machine on for the first time, the Agenda displays its 'Booting' messages. Once the touch screen has been calibrated, an xdm is started for logging on. There are two users ready for selection: default, without a password, and root, with the password agenda. This procedure only has to be gone through once, as logging on again is not necessary. There is unfortunately no way to log out, so the data may be read by anyone at any time. The cornerstone of the user interface is the Launchpad, which starts all the graphical applications. This includes everything that one would expect on a PDA – not only the classic PIM applications, but also system programs and various games. Another practical feature is the status bar, which includes a combination of the time of day and a battery display. The status bar can additionally be used to switch between the different windows. One should not forget that the number of applications running at the same time is restricted by the available CPU and memory. The speed and performance of the Agenda in its as-delivered condition is nevertheless very disappointing. In this respect, an update to the new SNOW binary (described later in this article) is advised. Even after the update, the Agenda gives a somewhat lethargic impression – especially when starting applications. While, for instance, the equally expensive Palm PDA

Under the hood

Under the hood we find a 66MHz MIPS processor, as well as 16MB of Flash memory and 8MB of RAM. Internal expansion does not appear to be planned for. Available on the outside are a serial port, an IrDA port and a mini-jack for audio input and output. The VR3 lives on two AAA batteries, and therefore has no spare power to give away freely. With a small tool, the power management settings can be easily adjusted; this means lights out for the PDA after the pre-set time. The VR3 also has an ingenious power saving solution: removing and replacing the stylus in the PDA also turns it on and off. Nevertheless, the batteries do not last particularly long and were run flat after a week's intensive use. Agenda would perhaps have done better to select AA-size batteries, which are not much larger but have almost three times the capacity.

makes its applications immediately available, the Agenda is decidedly more hesitant.

Dates, addresses and more

Agenda did not go in for any major experiments with its main PDA applications; the scope and user interfaces of the programs are not dissimilar to the Palm's. In the schedule, dates can be entered with beginning and end times and an accompanying description. In no case should the description be forgotten, as dates without one are simply lost. The Agenda schedule can remind you about your dates with an alarm that features an adjustable pre-set lead-time. An inactive VR3 gives an acoustic reminder; otherwise it will give you a message the next time you switch it on. The reminder can also be set to go off daily, and different repetitions are possible for different dates. This also works when carrying out changes, for example altering one date can automatically alter similar dates. Contacts – your glorified address book – is organised with similar functionality: names are accompanied by addresses, which can be stored in different forms (postal address, telephone, email).

MIPS A widespread processor architecture. MIPS processors are produced by different manufacturers and used in all sorts of different devices, from high-end servers (for example from Silicon Graphics) all the way down to small, power-saving PDAs.

Flash A Flash memory system can be repeatedly written and read almost like normal RAM. The main difference is that it retains its data when the power is off. Another difference from RAM is that writing is comparatively slow and may not be arbitrarily repeated.

IrDA The abbreviation actually stands for the Infrared Data Association. However, it is also used for the standard for infrared ports determined by this organisation.




Calibration A touch screen reacts to contact, e.g. with a stylus. During calibration it is determined at which point the program surface responds to such a contact.

xdm The X11 display manager is the graphical log-on program into which the user enters his user name and password.

PIM Personal Information Manager. Most importantly, this includes the schedule planner, contact register and notebook.

Each entry can also be accompanied by remarks and notes. Addresses can be assigned different categories to better manage the entries; the categories can be created and named at will. The transfer of addresses by infrared (beaming) between the Agenda and a Palm poses no problems and can be accomplished in both directions. The VR3, however, receives the addresses without first asking the user. These two applications are probably the most important on a PDA. They are flanked by a pocket calculator, an expense book, a small word processor for notes, a To Do list and a world clock.

With stylus and keyboard

The acid test of any PDA is in its operation. Operating with a stylus is mostly easier than with a mouse; on the other hand, there are times when one yearns for a real keyboard. There are two possibilities for the entry of text: a virtual keyboard and handwriting recognition. Once the virtual keyboard is accessed from the icon below the display, you can start hitting your virtual keys. As mentioned, the Agenda comes with handwriting recognition as an alternative to the keyboard. As with all devices of this type, we cannot speak of true recognition of the written word – these machines only understand certain letters. As with the Palm, four input areas are available on the display. Small and capital letters, numbers and special characters are detected. The letters used here are similar to those used by Palm (in contrast to Windows CE), so no new writing style needs to be learned. The rate of handwriting recognition is slower than with the Palm; however, the VR3 displays the written letters, so you don't have to write blind.

Plus and minus

There are a few points of criticism to mention here. The somewhat tardy behaviour, as mentioned earlier, is rather perturbing. The Mail program is displayed in the Launchpad, but is not yet installed. Items from the To Do list cannot easily be transferred into the schedule as dates, and dates entered without a description disappear into Never-Never land. On the other hand, there are genuine pluses. The find program searches the data of the standard applications and displays the appropriate records. The audio input and output is not yet used to its full potential; however, Agenda has already announced that a dictation program is on its way. Additionally, there is a port of the Madplay MP3 player, although it must be noted that we could not get this to work during the test. In view of its mono output and small memory, the VR3 is in any case not an adequate substitute for an MP3 player.

Information

LISA: http://www.lisa.de/
Agenda Computing: http://www.agendacomputing.de/
Agenda Software Repository: http://www.supermegamulti.com/agenda/index.asp
Mirror: http://www.newbreedsoftware.com/mirror/
PPP connection and other software: http://www.agendacomputing.de/agendae/software-e/index-soft-e.htm
Andrej Cedilnik's page: http://www.csee.umbc.edu/~acedil1/agenda
Busybox: http://busybox.lineo.com/
Mailing list: http://lists.agendacomputing.com/
Developer page: http://dev.agendacomputing.com/
Community portal: http://www2.math.uni-potsdam.de/agenda/
Dawn: http://members.home.com/zakharin/Software/Dawn.html
vrflash: http://www.apex.net/~jeff/agenda-utils/
Agenda Wiki: http://agendawiki.com/
PMON: http://www.csee.umbc.edu/~acedil1/agenda/update.shtml
rsync: http://rsync.samba.org/
SNOW ABI: http://www.desertscenes.net/agenda/snow/

Looking to the future


The development of the Agenda is naturally an ongoing process. This includes both the kernel and the programs supplied with the PDA, and both can be brought up to date through the serial connection. Before you do this, however, you should first save your data onto the PC. The kernel and the programs (referred to as the rootdisk) can be downloaded from the Agenda homepage. There are currently two different types of binaries for the VR3, and these should not be mixed with each other. On delivery, Agenda uses normal binaries with ELF libraries. With this technique, all references to library functions have to be resolved at program start – and this costs computing time. In contrast to normal desktop computers, this is a major issue on the weaker PDAs. Statically including the libraries in all programs is, for space reasons, not an alternative.

A SNOW storm

Jay Carlson came up with the idea of using libraries with fixed, pre-allocated memory spaces, and to this end he created the SNOW ABI. This naturally requires more work when compiling, however the



success comes with the speed. A VR3 with SNOW binaries starts substantially faster, making this version the only really sensible choice for serious use. The only catch is that all programs, including the kernel, have to use SNOW. The open architecture provides the normal user with the possibility of independently developing software for this platform. The difference between the Agenda and a normal Linux PC is small (in contrast to the Palm) and development tools are available free of charge. For speed reasons, the decision between ELF and SNOW falls in favour of SNOW. The Agenda is also interesting for the commercial market as a base for mobile applications. What is missing at present are the hardware extensions. A modem, Ethernet card and keyboard are apparently on their way; a dream would be a jacket for PCMCIA cards (as with the iPaq from Compaq). Very nice mobile applications could be developed with this. On the other hand, the Agenda VR3 is inexpensive enough that it represents a genuine alternative to other embedded solutions. Two things at the top of the hardware wish list unfortunately rule each other out: longer battery life and higher processing speed.

All in all

Altogether, the Agenda VR3 is fun to use and takes good care of all the normal daily functions of a PDA. With its synchronisation with desktop programs, GNOME users at least can work from a single set of data. There are of course still a few things in this respect that need to be improved or corrected; for example, more consideration needs to be given to the KDE user. Recommending either Palm or Agenda is difficult. Even the smallest Palm performs normal functions faster. Next to its undisputed geek-appeal, the Agenda wins plus points with its seamless integration into the Linux landscape. Additionally, it offers (depending on your perspective) more possibilities than the Palm because of its many and varied ports. All in all, it is worth serious consideration if you can live with the somewhat slow processing speed.

ELF The current format for binary programs and libraries under Linux.

ABI Application Binary Interface. The generic term for the formats and procedures that find and connect libraries at program start.

The author

Carsten Zerbst is a research assistant at the TUHH. Apart from researching service integration on board ships, he is involved with Tcl in all walks of life. He is currently looking for new challenges in the Unix/Linux environment.




Usenet newsgroups and clients

READ ALL ABOUT IT

Usenet groups contain a wealth of information about subjects so diverse it can boggle the mind. Colin Murphy takes a look at the weird and wonderful world of Usenet and newsgroups

Paid-for NNTP servers

Should Usenet become a must-have facility for you, but your ISP still doesn't provide an acceptable server, then you have the option of paying for access to an NNTP server. There are many Usenet groups available and the majority of ISPs will only carry a subset of these, either for political or financial reasons. If your hunger for Usenet demands a 'full feed', then you might consider subscribing to one of the subscription services, such as http://www.supernews.com/ to name but one.

In Usenet forums, not only will you find discussions about food, television, beer and some of my other favourite subjects, you will also find technical and scientific information on nutrition, poor eyesight and alcoholism. Self-help groups and special interest groups can flourish via Usenet. You could have access to as many as 80,000 different groups. A group is defined by the topic it deals with, and these topics are nested in a hierarchical structure. For example, uk.comp.sys.sinclair is the Usenet group primarily given over to discussing all things related to Sinclair, be they C5 trikes, Black watches or ZX Spectrums. This is in the systems (sys) group, which in turn is in the computer (comp) group, and both are in the uk group, which is also known as the top-level domain. There are seven big top-level domains, such as comp, sci and news to name but three. There are many more top-level domains, with country-specific domains like uk, ie and za, company-specific domains

Free access NNTP servers

Sometimes your ISP's NNTP server may not quite fit your requirements; it may not carry the groups you are looking for, or it just may not be very well run. There are some free access NNTP servers, which may offer a solution, one of which is FreeNews (IP address 202.85.164.51).


like demon and blueyonder, and the marvellous alt domain, which stands for alternative and caters for a lot of what wouldn't fit neatly into other domains, or indeed into polite society. You won't need to be bothered with the vast number of groups available: you subscribe only to the groups you think you will be interested in. In a group, someone will post a message – a comment, a question, or just a piece of their mind. With luck, someone else will post a reply, and others will post replies to the replies, and these will hopefully have some relevance to the group subject, although quite often they don't. Just like email, these discussions start off with a subject line to give you some clue as to what they might be about. Usenet is open to all, which is both good and bad. It can be a most valuable source of information, but you cannot rely on the quality of that information straight off. It takes a little time, but after a while you learn whose views are worthwhile. Usenet is used for more than just messages; you can also find a wide variety of data files, such as software, graphics or audio tracks. These are to be found in binary groups. Often a discussion group will have a binary group attached, so comp.sys.psion will include a comp.sys.psion.binaries group where people can exchange files. Usenet can be accessed via a specialist client or



through Web interfaces, probably the best known being Deja News, which is now owned by Google. If you use a standalone client you will need access to an NNTP server. Luckily, most ISPs run their own servers, so this shouldn't be a problem. You will need to know the server's IP address, or at least its name, which will need to be entered into your client's configuration. How you use Usenet depends very much on what type of link you have to the Internet. If you have a permanent connection, then you will probably access the NNTP server directly. If you're not so lucky and rely on a dial-up Internet connection, you may want to consider running your own local NNTP server as well, using something like Leafnode. Some Usenet clients also have support for offline browsing built in.
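The nesting described earlier – sys inside comp inside uk – can be made concrete with a few lines of Python; this is a sketch for illustration, and the function name is our own:

```python
def hierarchy_levels(group):
    """Expand a dotted newsgroup name into its nested hierarchy
    levels, from the top-level domain down to the full group name."""
    parts = group.split(".")
    return [".".join(parts[:i + 1]) for i in range(len(parts))]

print(hierarchy_levels("uk.comp.sys.sinclair"))
# → ['uk', 'uk.comp', 'uk.comp.sys', 'uk.comp.sys.sinclair']
```

The same dotted convention is what lets a newsreader present tens of thousands of groups as a browsable tree rather than one enormous flat list.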

Mozilla

If you are already using Mozilla as your Web browser and email client, then there is very little you need to do to start using it as your Usenet client as well. From the main window select Edit, Mail/News account settings, New Account, and select a Newsgroup Account. All you need is the name of the Usenet server from which you are going to obtain your news; the rest you can make up. Make sure you are online, click on the server entry in the listing and select Subscribe to Newsgroups. If this really is the first time you have connected to this news server, then a list of newsgroups will be downloaded, which can take a few minutes, especially over a dial-up connection. Once it has downloaded, you can start to choose groups to subscribe to. You can browse through the list, which could take a while, or you can reduce it to a more manageable size by entering some keywords. Subscribing to a group will add it to your list. Click on a group in the list and the latest message headers will be downloaded; click on a header and you can read the body of the article. Articles that have follow-ups and replies will usually be nested in a tree structure, which is automatically opened if you start to read a message in that chain. You can reply

Saving time with Leafnode

Much of your time with Usenet will be taken up by reading text, which is a slow process at best. If you're using a dial-up Internet connection then this can be wasteful of your telephone resources. Ideally, you want all of the most recent texts from Usenet downloaded in one batch so that you can read them offline. Leafnode enables you to do just that, by downloading all of the new messages in a group that you have recently shown an interest in. These are then kept locally so that your client can access them. So, instead of configuring your News browser to contact the server directly, you ask it to look to your local machine.
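As a sketch of what such a setup involves – the server name here is a placeholder and the file location varies between distributions, so check the Leafnode documentation – the heart of a Leafnode installation is a small configuration file:

```
## /etc/leafnode/config (location may vary by distribution)
server = news.your-isp.example   ## the upstream NNTP server to fetch from
expire = 20                      ## days to keep threads nobody has read
```

Your newsreader is then pointed at localhost, and Leafnode's fetchnews program is run from cron or your dial-up scripts to collect new articles in one batch.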

Looking for groups to subscribe to

Any self-respecting Usenet newsreader client will enable you to search for newsgroups, either by looking just at the group names or, occasionally, by group description as well. On one server, the keyword 'Linux' produced a list of 367 different groups – not all will be active, not all will be in English and you can't even rely on all of them referring to Linux as we know it, but at least it's a manageable size. Groups that every self-respecting Linux Usenet reader should subscribe to include comp.os.linux.announce and uk.comp.os.linux.

and forward messages just like you would with email. You can now read your messages while online, but this is a time-consuming task, tying up the telephone line and possibly costing you a packet. You can configure Mozilla to download the headers and bodies by default; settings for this can be found in the offline menus. Should the newsgroup be very busy, you may not want to download all of it, in which case you can flag the message headers whose bodies you are keen to see and then download those in one batch. The offline features in Mozilla are improved from previous versions and are much better than those found in Netscape 4.x. If you are just interested in finding out about Usenet, and you already use Mozilla (or Netscape – the functioning of the two is very similar) for email, then using its News features would be a good place to start.

Knode

Knode comes with the KDE desktop environment and so will integrate seamlessly if that's the environment you are using. Knode, according to its Web page, is GNKSA compliant, but hasn't been

You won’t need to be bothered with the vast number of groups available, you subscribe only to the groups you think you will be interested in

Subscribing to Newsgroups

To help you limit the amount of News that you download, you subscribe only to the Usenet groups of interest to you. The newsgroups have descriptions, if your News server supports them, to help you further decide whether a certain group is dealing with your subject.

Mozilla – subscribing to some Usenet newsgroups.




Message headers

Usenet messages, much like email messages, come in two parts: the message header and the message body. Unlike email, which is hopefully addressed to you and of relevance to you, a Usenet message often may not be as relevant. Because of this, Usenet clients will usually download only the message headers first; you then decide which bodies are worth getting.
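That header/body split mirrors the structure of the articles themselves: as with email, the headers are separated from the body by the first blank line, which is why a client can cheaply fetch headers alone and defer the larger body. A minimal Python sketch, with an invented sample article for illustration:

```python
def split_article(raw):
    """Split a raw Usenet article into (headers, body) at the
    first blank line."""
    head, _, body = raw.partition("\n\n")
    headers = {}
    for line in head.splitlines():
        name, _, value = line.partition(": ")
        headers[name] = value
    return headers, body

sample = ("From: reader@example.invalid\n"
          "Newsgroups: uk.comp.sys.sinclair\n"
          "Subject: Spare ZX Spectrum keyboard membranes?\n"
          "\n"
          "Does anyone know a current source for replacements?\n")

headers, body = split_article(sample)
print(headers["Subject"])  # → Spare ZX Spectrum keyboard membranes?
```

Real articles can fold long headers over several lines, which this sketch ignores, but the principle is the same one your newsreader relies on.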

reviewed as yet. It has support for MIME and usefully supports multiple servers, which enables you to increase the groups you can access should you be unfortunate enough to have to use an NNTP server with restricted content. It will deal with images online, as can be seen in the screenshot, which is something that Mozilla doesn't yet support. Knode is designed to be used as an online browser only, so you either need a permanent Net connection or to run your own local NNTP server such as Leafnode. It has the full set of features you need to take full advantage of Usenet. Knode can sometimes download binaries posted in multiple parts, which may make them easier to deal with than in Mozilla. If you are interested in taking a lot of data files from Usenet then use one of the standalone programs like bgrab. You can score articles, which enables you to cut easily through the noise on Usenet groups and follow those articles that are of interest to you. You create a set of rules, which can follow articles, or follow your fellow posters if they always seem to have pearls of wisdom. By doing this, interesting messages can be highlighted so they are easier to spot. You are also given access to features such as cancelling and superseding articles. Because your words of wisdom are sent out to the world to be viewed by all and sundry, anything that might have been hastily said, or was even downright wrong, could haunt you for some time to come. Here you are given the functions needed to send special control messages to Usenet which enable you to cancel, or at least update, your previous posts.

Knode: Not all the graphics on Usenet are unsuitable for a family magazine.

Pan, with some messages highlighted for a future batch download
GNKSA

The Good Net-Keeping Seal of Approval 2.0 (GNKSA for short) is an independent set of criteria, which should be thought of as the minimum requirements to make an NNTP News client useful. A useful source of information: http://www.xs4all.nl/~js/gnksa/gnksa.txt

Pan

Pan is part of the GNOME project but can be used with other desktop environments so long as the correct libraries are installed, which in most distributions they are. Pan has good support for offline browsing, so if you don't have a permanent connection to the Internet and can't be bothered with setting up a local server like Leafnode, then Pan might be the choice for you. The Pan developers are proud, and rightly so, of their 100 per cent mark of approval from the Good Net-Keeping Seal of Approval evaluations team, making it the only Unix reader that can make this claim at the time of writing. Pan gives you many choices as to how you want to download your messages: you can download full bodies with headers on a per-newsgroup basis, or flag messages and download them in a batch for offline reading. You are given control of how and when Pan will try to make contact with the specified Usenet server, which is important if you have configured your system for dial on demand. Pan also offers a full range of filtering features (Bozo and Spam), with which you can easily avoid the more noisy and bandwidth-wasting participants of Usenet.

Info

Mozilla: http://www.mozilla.org/releases/
Knode: http://knode.sourceforge.net/
Bgrab: http://www.student.dtu.dk/~c960941/bgrab/
Pan: http://pan.rebelbase.com/



A little light music

STUDIO TIME

Making music with Linux is now becoming easier. Soon we could all be the next number one. Jack Owen looks at the possibilities of MIDI on Linux

Bored with the usual chart fodder, we decided to make a fortune with Linux by producing the next big song at home. Having listened to MTV, we feel we could do better. To make our masterpiece we could arrange for an ensemble of session musicians to perform the music while we record it. Getting everyone to play perfectly at the same time can be frustrating, so it is usually better to record individuals and then layer these tracks on top of one another. This allows finer control of variables such as volume, and the ability to cut and paste sections. However, we want to avoid expensive studio costs and session musicians, and are going to produce the sounds ourselves electronically. We could start by using editors to rearrange .wav files, but this would be very tedious. Better still would be to generate our own sounds with an attached music keyboard. A synthesiser is an electronic device designed to produce synthetic sounds. We could use frequency modulation to generate sound waves, as is done for some electronic music. A more popular method is to use recorded sounds of real instruments; the use of these samples is referred to as wavetable synthesis. Typically we may use a music

Figure 1: The Jazz++ sequencer


Figure 2: PianoWin

Figure 3: Random noise

keyboard to act as an input device. The keyboard acts as a controller to the sound generating hardware (your computer). Together the keyboard and the sound generator make up a synthesiser. We are not limited to just using keyboards, however. With the MIDI protocol we can connect many devices together to generate the sound output. However, we can go one stage further, as the computer can act as a sequencer. The sequencer enables you to take input data (from the keyboard or a program file) and rearrange it in whatever order you choose. It is usually capable of editing, rearranging and storing the data. It can then send the data as a finished arrangement to the sound generator to play. MIDI is a data communications protocol. This



Figure 4: Streaming Ogg Vorbis encoded BBC Radio 1

defines the rules by which electronic musical instruments communicate. Examples of these musical instruments (devices) are synthesisers, keyboards, effect processors, recording machines and sequencers. The MIDI specification defines the format of the signals flowing from one device to the other. Such signals are commonly referred to as MIDI messages, each carrying a MIDI event. An event might, for example, be "play note A at velocity 75", "stop note A" or "change sound to acoustic piano". First we need to generate some music samples. We can connect a music keyboard to a Linux box with a standard MPU-401 port (this is the joystick port on your soundcard). By using a program such as Jazz++ (see Figure 1) we can record whatever we play on the keyboard. If you do not own a music keyboard, or do not consider yourself an accomplished player, all is not lost. You can enter notes one by one with a note editor such as PianoWin (see Figure 2). This enables us to choose the note and its start and stop periods. Although this may seem a long process, we only have to enter a few samples, as later we will reuse and repeat the samples to build up the music track. If this is too much of a chore you could always opt for the randomly generated rhythm feature. At this stage we have a file that represents a sound track in the computer. Once we have made up our track we can change to the sequencer section, where we can start to lay down the tracks in whatever order we choose. We can modify the sound sample so the output sounds however we want it to, from a high flute sound to a series of screams. We can now repeat the process and produce as many of these tracks as we want. Using the editor feature we can then layer these

on top of one another. The resulting output file will now be our masterpiece. An example of the type of output that can be achieved with Jazz++ can be found on the CD in the music directory. Now we have made our tune we really need to let everyone know about it. Being good Open Source people we obviously want to release it as an Ogg Vorbis file. Ogg Vorbis files are similar to MP3 files except that they are free of MP3's patent and licensing restrictions. Until the end of the year the BBC are streaming Radio 1 and occasionally Radio 4 as Ogg Vorbis files. The streamed sound is not CD quality but is excellent for radio. You can, however, save your own track at a higher sample rate so that it is CD quality. To convert your tune to Ogg Vorbis you need to use the Oggenc tool from the Vorbis tools package. The command to use the encoding codec is:

Figure 5: Output from Lilypond

oggenc farbettr.wav

To play this use either the Ogg codec built into XMMS or the command:

ogg123 farbettr.ogg

This now brings us to publishing free music. The Electronic Frontier Foundation has published an Open Audio License and examples of music released under this license can be found at the Open Music Registry. Another free license system is the OpenMusic system. This produces two licenses depending on the level of use required. Sadly the Web site is currently down, but on the coverdisc we have included the track Penguin Planet by Void Main. The author Dennis Gustafsson is a strong supporter of GNU/Linux and you can visit his Web site at http://mp3.com/voidmain So now we have produced a top ten hit, given it away and everyone is talking about it. A local music society would like to perform it and asks for the score. Again you could write it out by hand or let Linux come to the rescue. With plain text input we can use the music typesetter Lilypond. A more recent program is Rosegarden, which also features a MIDI sequencer.

Info

http://www.gnu.org/software/lilypond/
http://www.all-day-breakfast.com/rosegarden/
http://www.openmusicregistry.org/
http://www.eff.org/IP/Open_licenses/eff_oal.html
http://www.jazzware.com

Figure 6: Rosegarden scripting



ON TEST

Hard disks

DISK PARADE

Looking through the manufacturers' information you notice one thing above all: manufacturers still haven't stopped thinking in terms of billions of bytes instead of actual gigabytes. By now the difference is a hefty seven per cent, which means so-called 100Gb disks actually provide a capacity of just over 93Gb. The manufacturers have also cranked up the caches. One immediate consequence of this is that write access is almost invariably intercepted by the cache. Our test results should therefore be taken only as guidelines rather than cast-iron speed values. All testing was carried out with default cache configurations. For a more reliable indicator of the speed of each medium you need to look to the read rates.

Although its little 40Gb brother IC35L040 is slightly faster, the L060 is more attractively priced. Access times are also well within acceptable limits at 12.7 milliseconds, making this hard disk quite enticing, especially with its low power usage of only 6.3 Watts (on average) and a tolerable noise level of 48.5 dB(A). Overall, this hard disk is an attractive mass storage device at a reasonable price.

Oliver Kluge and Mirko Dölle introduce 21 current hard disks in three categories: ATA IDE, SCSI and Notebook disks

100Gb hard disks

At the top of the 100Gb range, we've placed the brand new disk from Western Digital. Although its 100Gb capacity only equates to 93Gb in real terms, this is still a pretty tidy amount to be getting on with; enough to store over 1,000 CDs in MP3 format (more than most users actually own) or a fair few hours of digital video. Western Digital's WD1000 disk is not only big, it's also fast. A transfer rate of 38.6 Mb/sec is a good result for an ATA hard disk, which few drives achieve. At 48.1 Mb/sec, writing is not exactly slow either. The access time of 14.6 milliseconds (including operating system overhead and latency) is also among the better results in its class. To wrap things up nicely, power usage is low at an average of only 7.5 Watts. All in all, the price of £225 seems perfectly acceptable.

ATA IDE hard disks

IBM is the winner in this category. The IC35L060 disk stands out from the crowd during testing due to several characteristics. For one thing its read rate is good, a very respectable 38.1 Mb/sec.



Ultra SCSI hard disks

The ST373405LW hard disk from Seagate is something of a speed demon. Its test result of 53.6 Mb/sec may well make it a record breaker. Take the Ultra SCSI connection into the equation and you cannot help but conclude that this hard disk is almost crying out for database applications, which demand a lot of power from disks. The write rate is also very good at 41.7 Mb/sec, as is the access time at 15.6 milliseconds. With such a fast disk you'd expect energy usage to be a little higher, but at 9.7 Watts it's hardly excessive, even if the heat given off is starting to be noticeable. There is also perceptible operational noise, which is not exactly loud but somewhat persistent – hardly a problem in a server, however.

Notebook hard disks

Portable computers make their own demands on hard disks. One of the most important is power usage. At 2.5 Watts IBM's device is a bit hungrier than others in the test. However, this hard disk offers something few others do: more than 45Gb of storage capacity, which is an awful lot for a notebook. At this sort of size you can fit more on to a machine than just an ample operating system with lots of presentations and videos – you can take almost half a server with you as well. On the other hand, 17.2 milliseconds access time is a rather ordinary result for a hard disk in this category. Considering the performance on offer the price seems justified.

Technical data: ATA IDE hard disks

Model                   Price†   Capacity (mfr / lab)  Interface  Form factor  Read [Mb/sec]  Write [Mb/sec]††  Access [ms]  Power [W]  Noise [dB(A)]
IBM IC35L040            £90.00   40Gb / 38.3Gb         ATA-100    3.5 inch     39.2           36.8              12.9         6.3        48.3
IBM IC35L060            £135.00  60Gb / 57.2Gb         ATA-100    3.5 inch     38.1           36.8              12.7         6.5        48.5
Maxtor 4W100H6          £260.00  100Gb / 93.3Gb        ATA-100    3.5 inch     30.3           15.8              16.4         5.0        35.0
Maxtor 5T040H4          £120.00  40Gb / 38.1Gb         ATA-100    3.5 inch     37.2           17.8              13.3         6.2        33.0
Maxtor D540X-4K         £175.00  80Gb / 74.5Gb         ATA-100    3.5 inch     32.8           40.3              21.5         4.8        36.0
Seagate ST360020A       £120.00  60Gb / 57.2Gb         ATA-100    3.5 inch     29.1           37.5              21.7         5.2        38.1
Seagate ST380021A       £175.00  80Gb / 74.5Gb         ATA-100    3.5 inch     40.5           47.9              15.6         7.0        34.9
Western Digital WD1000  £225.00  100Gb / 93.1Gb        ATA-100    3.5 inch     38.6           48.1              14.6         7.5        35.1
Western Digital WD600   £140.00  60Gb / 55.8Gb         ATA-100    3.5 inch     32.2           38.5              16.8         6.1        38.8
Western Digital WD800   £175.00  80Gb / 74.5Gb         ATA-100    3.5 inch     30.2           38.7              14.4         7.5        37.1

Web sites: www.ibm.com, www.maxtor.com, www.seagate.com, www.wdc.com

(†) Prices are as a guide only and are not inclusive of VAT
(††) Write cache with default configuration



[Diagrams: Access times [ms] and Transfer rate read [Mb/sec] for each tested drive – the values are those listed in the technical data tables]

Technical data: Ultra SCSI and Notebook hard disks

Model               Price†   Capacity (mfr / lab)  Interface  Form factor  Read [Mb/sec]  Write [Mb/sec]††  Access [ms]  Power [W]  Noise [dB(A)]
Ultra SCSI
Fujitsu MAN3367MP   £290.00  36.7Gb / 34.2Gb       Ultra 160  3.5 inch     47.7           64.8              8.6          9.3        36.1
IBM DDYS-T18350     £140.00  18.4Gb / 17.1Gb       Ultra 160  3.5 inch     34.0           43.6              9.8          11.6       41.1
IBM DDYS-T36950     £290.00  36.7Gb / 36.7Gb       Ultra 160  3.5 inch     33.2           40.8              9.4          12.2       43.2
Seagate ST318437LW  £195.00  18.4Gb / 17.1Gb       Ultra 160  3.5 inch     37.1           46.7              13.4         6.6        36.1
Seagate ST318451LC  £195.00  18.4Gb / 17.1Gb       Ultra 160  3.5 inch     39.9           30.0              7.4          12.0       42.2
Seagate ST373405LW  £550.00  73.4Gb / 68.3Gb       Ultra 160  3.5 inch     53.6           41.7              9.3          9.7        39.1
Notebook
Hitachi DK23CA-30   £225.00  30Gb / 27.9Gb         ATA-100    2.5 inch     20.6           23.4              20.1         2.4        27.8
IBM IC25N030        £210.00  30Gb / 27.9Gb         ATA-100    2.5 inch     19.8           20.6              18.8         1.9        27.1
IBM IC25T048        £340.00  48Gb / 44.7Gb         ATA-100    2.5 inch     19.9           23.2              17.2         2.5        27.2
Toshiba MK2017GAP   £105.00  20Gb / 18.6Gb         ATA-100    2.5 inch     20.3           23.5              21.1         2.5        28.4
Toshiba MK3017GAP   £120.00  30Gb / 27.9Gb         ATA-100    2.5 inch     20.5           23.7              22.3         2.5        27.0

Web sites: www.fujitsu.com, www.ibm.com, www.seagate.com, www.hitachi.com, www.toshiba.com


ON TEST

Red Hat Linux 7.2

STANDARD BEARER Red Hat is often seen as synonymous with Linux, but does the latest version justify its position as the industry standard? Janet Roebuck takes a long hard look at Red Hat 7.2

The latest offering from Red Hat is based on the 2.4.7 kernel. The standard version costs £70.80 and comes with seven CDs. Of these, one is documentation and one has demo versions of two Loki games (Kohan and Rune). Red Hat itself comes on four discs, with the final disc offering StarOffice 5.2 in five different languages. The documentation CD again supports lots of languages, and as well as the expected HOWTO documents the two supplied booklets also have a Red Hat customisation guide. The only disappointment is that the Red Hat network guide is only viewable online, in either HTML or PDF format. The Professional version of Red Hat 7.2, which weighs in at £176.40, has four additional CDs and a DVD. The CDs cover two discs of applications, a Web server CD and a system admin's CD. Of the 2,000+ packages some big changes have been made. KDE 2.2 is shipped along with GNOME 1.4. The other big change is XFree86 4.1, which makes the most of new graphics cards such as GeForce3 and G450 chipsets and adds a 1400x1050 resolution. The biggest change, however, is that the new ext3 journaling filesystem is supported. Although other filesystems are available for Linux (XFS, JFS and ReiserFS), Red Hat supports and champions the ext3 system. This allows easy conversion without data loss from existing ext2 partitions. Transition is straightforward as in reality the new journaling system is an extra layer on top of the ext2 system. The gain in using journaling systems is most apparent when you

Figure 1: Firewall configuration


have a crash. Recovery of information is very quick. And although it is not designed for it, the journaled system allows for fast restarts on a laptop. Both Netscape 4.78 and Mozilla 0.9.2 are included as well as a host of KDE application upgrades.

Don't be put off

Documentation is limited to two small booklets, which means there's physically not a lot to show for the cost. But don't let that put you off. The first 150-page booklet on installation explains the standard install, a text install and a system upgrade. A nice appendix on disk partitions is well written in clear, logical English. The 200-page getting started booklet gets any new or intermediate user up and running with most tasks very quickly. Where Red Hat excels is with its support for GNOME. The latest version 1.4 is supplied with the Nautilus file manager. This always seems slow to start compared to the KDE desktop, which is due to the way Nautilus can be used to view files. The file manager has the ability to show information about a file, such as a thumbnail picture of an image, or to play a music file. This ability to drill down to information



Errata Alerts From: Red Hat Network Alert rhn-alert@redhat.com 31/10/01 00:51 Subject: RHN Errata Alert: New teTeX packages available To: me

Registering with the Red Hat Network

obviously hits performance, but with faster processors this is less of a worry than it was a few months ago. One of the major reasons for buying the Red Hat distribution is the Red Hat Network. Through the Web you can update your system and so always keep ahead in the security stakes. By following the graphical wizard you can register very quickly. This allows you to compare your system against the latest packages at Red Hat and upgrade if necessary. That's all very well if you are proactive, as it allows you complete control. You have a choice of how you update your system and even where to do so. What we like is the Red Hat Errata Alerts. These now pop into my email account and tell me whatever is of importance. They come in little groups and average about one a day. An example can be seen in the Errata Alerts boxout.

Red Hat Network has determined that the following advisory is applicable to one or more of the systems you have registered with the Software Manager service: Security Advisory – RHSA-2001:102-10 Summary: New teTeX packages available Description: A flaw has been discovered in the temporary file handling of some of the scripts from the teTeX set of packages. This can, under some circumstances, lead to a compromise of the groups that LPRng runs as. Several scripts used the current process ID as temporary file names and have now been altered to use the ‘mktemp’ program instead. Additionally, an insecure invocation of the ‘dvips’ program has been discovered in the print filter used for handling DVI files. This has been corrected to use the -R option. The temporary file-handling flaw affects Red Hat Linux 7.1 and earlier. The DVI print filter problem affects Red Hat Linux 7.0 and earlier. This vulnerability was discovered by zen-parse. Taking Action You may address the issues outlined in this advisory in two ways: – log in to Red Hat Network at https://rhn.redhat.com and from the listing showing under ‘Your RHN’ select the affected servers and download or schedule a package update for that system. – run the Update Agent on the affected machine. There is one affected system registered in ‘Your RHN’ (only systems for which you have explicitly enabled Errata Alerts are shown). Release Arch Profile Name 7.1 i586 my.system

No problems

The installation is, as expected, flawless, and we cannot help but judge all other distributions against Red Hat. That is not necessarily because Red Hat is the best, but because it is the standard. By being the standard it has to maintain its reputation. This is done admirably, as Red Hat is a good all-round system. The lack of cute penguin characters does not mean it is not fun for home users. The support and documentation available mean that developers will not miss out, and the support packages mean that any business would be happy. As a system to run straight out of the box we could not find anything wrong. It found all our hardware and worked without a flaw. The firewall configuration allowed some flexibility, as can be seen in Figure 1. We did like the new animated cursors, although how long they will amuse and not become an annoyance remains to be seen. The range of packages that come with modern distributions is now so large that no review could ever do more than scratch the surface. Fips is supplied to partition hard drives. KDE has some nice utilities supplied, such as Krayon for drawing pictures and Kugar for viewing XML data. I noticed all the usual packages and some that I had not yet

The KDE desktop and KOffice on Red Hat

played with, such as the wireless tools and the ReiserFS tools. Some complaints have been voiced that the buying public assume Linux only means Red Hat. This is good for Red Hat and means their marketing is reaching people. If they then buy Red Hat it is still Linux, and being a good product it will not turn them away. All in all, the product is solid and will encourage a new generation of Linux users.


BOOK REVIEWS

SAIR LINUX AND GNU CERTIFICATION

The Sair/GNU Linux certification program is intended as a vendor-neutral exam system to give Linux a more credible training path. To become a certified engineer you are required to pass the core concepts and practices exam plus three elective exams. This book covers the Apache and Web servers elective exam and is split into separate sections, which deal with distinct topics such as installation or security. Each section is given a chapter, which explains the basic information. There then follow detailed exercises to walk you through all the areas you need to cover, and practice questions and answers to prepare for the exam. At the end of the book you'll find a sample exam in multiple-choice format. The book covers Red Hat, Debian and

Slackware distributions and provides reference data on where to download utilities and files. As a study book the chapters cover enough information to complete the test exam supplied. As a reference book for Apache it does surprisingly well. Because the book is designed as a crammer, all the information is relevant, which gets you up to speed quickly. Some slight Americanisation of spelling is noticeable, but a nice glossary at the back is worth reading.

Author: Sair Development Team
Publisher: Wiley Computer Publishing
Price: £39.99
ISBN: 0-471-40537-X

A PRACTICAL GUIDE TO LINUX

Although it has been around since 1997, for all the new users of Linux out there looking for a good introduction to the subject (and anyone else who has not already come across it), there is still nothing better than A Practical Guide to Linux by Mark Sobell. The book has a foreword by Linus Torvalds, who recommends it to anyone interested in learning about Linux. The first part of the book is perfect for new users as it features an overview of Linux and then an extensive tutorial to guide the new user through starting to use the system. The latter part of the book is aimed at the more experienced user and goes into more detail on subjects


such as GUIs, networking and systems administration. The second section of the book describes various utilities, including examples to be downloaded from the Internet. There is a very comprehensive appendix, written as FAQs, to help the new user with troubleshooting. All in all, this is a classic textbook, clearly written and with much to recommend it even to experienced users, which has retained its place at the front of the field.

Author: Mark G. Sobell
Publisher: Addison-Wesley
Price: £32.99
ISBN: 0-201-89549-8


KNOW HOW

Apple Mac: Applications for PPC/Linux

FORBIDDEN FRUIT Okay, so you are using Linux on a Macintosh. The chances are that it’s not a server, so what do you do with it? Jason Walsh investigates

Last month we took a close look at GIMP, the GNU Image Manipulation Program, and suggested using it instead of Photoshop. This is fine, but unless you're in a design studio with many computers, you probably use your Mac for a whole lot else besides Photoshop. After you're finished tinkering with your photos in GIMP, what next? You could reboot into the MacOS, but this raises the question: why use Linux at all? Why not just stick with Photoshop on the Mac OS? Well, what else can you do in Linux on a PPC machine? This month we'll be taking a very quick look at some great PPC/Linux applications.

There’s more to PPC/Linux than just GIMP

Web browsing For all intents and purposes, UNIX is the Internet. Okay, so the majority of professional Web sites are still designed on Macs, but it’s UNIX that does most of the serving thanks to its rock-solid networking capabilities. You’d therefore expect that UNIX variants such as Linux would have plenty of Internet applications and you’d be right – even on a minority platform such as PPC/Linux. Here we’ll look at just one of the most popular Internet activities, Web browsing. After all, no matter what version of Linux you have installed on your Mac, it has come with plenty of email applications.

Opera

Opera Software has released its popular, lightweight Opera Web browser, version 5.0 final, for the PowerPC in free ad-supported or $25 shareware versions. Like its x86/Linux, Mac OS and Windows counterparts it's stable, quick and has a decent interface. http://www.opera.com/linux

Communicator remains a safe choice as it’s familiar, differs little from the Mac and Windows versions and interprets HTML well. http://home.netscape.com/download/0709101/10000-en—— _qual.html

Mozilla

Mozilla is the open source follow-up to Netscape. Abandoning the majority of the old code, Mozilla is a rewrite from the ground up and includes the famous Gecko display engine. It is a very serviceable browser, though the package size is enormous and you'll need a fairly hefty Mac if you want to see good results. Unlike its predecessor, it can thankfully use Netscape plug-ins. GNOME Office uses Galeon, a browser built on Mozilla's Gecko engine, as its standard Web browser. http://www.linuxppc.org/software/index/developers/mats/RPMS/ppc/mozilla/nightly/mozilla-pre0.9.3/mozilla-pre0.9.3-0.ppc.html

Netscape Communicator 4.x

Netscape is the obvious choice of Web browser, and was doubtless included with your distribution. Netscape does have its downsides though. It crashes frequently and is rapidly dating, but that's not where it ends. The PPC version of Netscape cannot use Netscape plug-ins. Bizarre, but true. Nevertheless, Netscape

30

LINUX MAGAZINE

Issue 15 • 2001

Konqueror

Part of the KDE environment, Konqueror is an excellent little browser and, unlike Netscape, it can use Netscape plug-ins. Hmm. This application is installed by default by most distributions so why not give it a try? If nothing else it's a cut above the previous KDE effort. http://www.kde.org



Productivity

The Mac is famed for kick-starting the desktop publishing boom in the 1980s, and rightly so. The intuitive GUI and applications such as MacPaint, PageMaker and even ClarisWorks allowed a whole new set of uses for desktop computers. So what if you're switching your Mac over to Linux, even part time? Do you have to sacrifice your productivity to play with this new operating system? Of course not. While there are no direct replacements here for the behemoths of publishing such as Quark XPress or Adobe InDesign, there are plenty of alternatives to AppleWorks and Microsoft Office, and if you take care you can get excellent results. Most Linux users on the Intel platform use either Sun StarOffice or Corel WordPerfect Office; unfortunately, neither is available for PPC/Linux. Both depend on x86-specific code, and Corel's effort even uses WINE emulation, so it is unlikely ever to make it to PPC/Linux.

OpenOffice.org

Progress. OpenOffice.org is the open source version of StarOffice and thankfully there is a build for the PPC chipset. It is easily comparable to Microsoft Office in terms of usability, features and also, sadly, bloat. However, if you need a professional office

suite for no cost, this is your best bet, though it is still under heavy development. OpenOffice.org consists of a word processor (Open Writer), a spreadsheet application (Open Calc), a vector illustration program (Open Draw) and a presentation application (Impress). The plan for this suite of applications is to integrate it with GNOME Office. http://www.openoffice.org/dev_docs/source/build_638c/build638c.html

AbiWord

This standalone word processor is the single application I use most regularly under Linux, and it will most likely remain so unless Nisus Software release a Linux version of their excellent Nisus Writer. It looks and feels a lot like the Windows version of Microsoft Word (though not at all like Word 2001 for the Macintosh) and though it has fewer features and virtually no documentation, it is perfectly usable and, most importantly, stable. AbiWord is the main word processing component of the GNOME Office suite, which is included with most distributions. http://www.abisource.com

Gnumeric Gnumeric is another GNOME Office application built using the GTK toolkit. This

Other picks

BOCHS
BOCHS is an x86 emulator which enables you to run DOS, Windows and x86/Linux, should you feel so inclined.
http://bochs.sourceforge.net/
http://www.bochs.org

xchat
A graphical IRC client, which has reached version 1.8.1.
http://www.xchat.org/files/source/1.8/

Knapster2 for KDE2
Knapster2 is a clone of the Windows Napster client and requires KDE. A pre-built RPM is available online.
http://prdownloads.sourceforge.net/knapster/knapster2-0.31.ppc.rpm

QCAD
This excellent 2D computer-aided design software can be recompiled to run on PPC/Linux. Instructions are available online.
http://www.resexcellence.com/linux_icebox/08-01-01.shtml

HotJava
Sun's Java-based browser runs just fine on PPC/Linux, though it does require a working installation of Java (obviously).
http://java.sun.com/products/hotjava/3.0/

Excel-like spreadsheet is a fairly robust program and has the vast majority of the features that normal users would ever want. Accountants may have to look elsewhere for the time being as, like many open source efforts, it’s not quite finished. Small businesses and home users will be right at home though. http://www.gnome.org/gnumeric

KOffice

Part of the KDE desktop, KOffice is most directly comparable to AppleWorks. That is to say, whilst very usable it doesn't have all of the functionality of Microsoft Office. This is less of a drawback than it sounds: MS Office is, frankly, overpowered for everyday use. KOffice, usually installed by default, isn't a resource pig, and this is a good enough reason to consider it. It offers word processing, vector illustration and spreadsheet facilities among many other features, all suitable for SOHO use. http://www.koffice.org

Applixware Applixware is a commercial office suite and as such is well supported and easy to use, but if you’re using Linux for budgetary reasons, forget it. This isn’t free in any sense of the word. http://www.applixware.com

Antiproductivity software

There are more than a few ways to waste your time on PPC/Linux. The SNES9x Super Nintendo emulator is available in pre-built form for PPC/Linux (http://www.snes9x.com/downloads.asp), as is Civilisation: Call to Power (http://www.lokigames.com/products/civctp/), which isn't even available on the MacOS. Loki have also ported the excellent war game Myth II: Soulblighter (http://www.lokigames.com/products/myth2/). Bungie's classic Marathon (http://www.uni-mainz.de/~bauec002/A1Main.html; http://source.bungie.org/) is also now open source and runs on PPC/Linux.


A little bit more This short article has really only scratched the surface of PPC/Linux software, but hopefully it’s given you a taste of what’s available. You may have to do a bit more digging than x86 users, but the software is available. ■



KNOW HOW

QT

GETTING STARTED WITH QT

Getting organised

To get us started this month we take a look at some of the layout classes Qt has available for helping create your interfaces. First, type in the following program and compile it:

Welcome to part three of our foray into the interesting world of Qt application development by Jono Bacon. This month we will take a long hard look at geometry classes for creating interfaces and examine how Qt deals with interaction with our widgets

1  #include <qapplication.h>
2  #include <qvbox.h>
3  #include <qpushbutton.h>
4
5  class MyClass : public QVBox
6  {
7  public:
8      MyClass();
9      ~MyClass();
10
11 private:
12     QPushButton * bobButt;
13     QPushButton * fredButt;
14     QPushButton * frankButt;
15     QPushButton * jimButt;
16 };
17
18 MyClass::MyClass()
19 {
20     bobButt = new QPushButton("Bob", this);
21     fredButt = new QPushButton("Fred", this);
22     frankButt = new QPushButton("Frank", this);
23     jimButt = new QPushButton("Jim", this);
24 }
25
26 MyClass::~MyClass()
27 {
28 }
29
30 int main( int argc, char **argv )
31 {
32     QApplication a( argc, argv );
33     MyClass w;
34     a.setMainWidget( &w );
35     w.show();
36     return a.exec();
37 }

In this snippet of code we create four QPushButton pointers on lines 12 – 15 (making sure to include qpushbutton.h on line three). The actual QPushButton objects are then created on lines 20 – 23. main() is much the same as in the previous code we have looked at. When you run the program you get something like in Figure 1. As you can see, the four buttons are lined up vertically in a nice neat fashion and they take up equal space in the window. Try editing out a button and recompiling, and you will see that the space is accommodated cleanly for each button again.

Figure 1: Four buttons equally spaced

So how does this magic work? Well, if you look at line five, you can see we inherit QVBox. QVBox is a class for arranging widgets in a vertical fashion and is very useful when you inherit from it as it will automatically arrange child widgets into a vertical layout. In this example we added push buttons as child widgets, but let's also look at combining QVBox with a QHBox (for horizontal layouts):

1  #include <qapplication.h>
2  #include <qhbox.h>
3  #include <qvbox.h>
4  #include <qpushbutton.h>
5
6  class MyClass : public QVBox
7  {
8  public:
9      MyClass();
10     ~MyClass();
11
12 private:
13     QHBox * hbox;
14     QPushButton * bobButt;
15     QPushButton * fredButt;
16     QPushButton * frankButt;
17     QPushButton * jimButt;
18     QPushButton * janButt;
19     QPushButton * aprilButt;
20     QPushButton * mayButt;
21 };
22
23
24 MyClass::MyClass()
25 {
26     hbox = new QHBox(this);
27     janButt = new QPushButton("Jan", hbox);
28     aprilButt = new QPushButton("April", hbox);
29     mayButt = new QPushButton("May", hbox);
30     bobButt = new QPushButton("Bob", this);
31     fredButt = new QPushButton("Fred", this);
32     frankButt = new QPushButton("Frank", this);
33     jimButt = new QPushButton("Jim", this);
34 }
35
36 MyClass::~MyClass()
37 {
38 }
39
40 int main( int argc, char **argv )
41 {
42     QApplication a( argc, argv );
43     MyClass w;
44     a.setMainWidget( &w );
45     w.show();
46     return a.exec();
47 }
48

In this example I firstly added qhbox.h as an include file on line 2. I then created a pointer to a QHBox object on line 13, and created the object on line 26. On lines 27 – 29 I created three more QPushButton objects (their pointers being declared on lines 18 – 20), but instead of setting the parent to ‘this’, I set it to ‘hbox’, which is the name of the QHBox object. By setting a widget’s parent to ‘hbox’, the widget is added to the layout manager specified by the parent and is organised by it. So when we create the QHBox object on line 26, it then houses the three new push buttons horizontally at the top of the vertical manager. This can all be seen in Figure 2.

Figure 2: Now with three new push buttons

Layout management is integral to Qt interface design. We will cover more on interface design in the next issue.

Connecting the pieces together

OK, so we’ve now come quite far. We have discussed widgets, layouts, parent/child relationships and written a couple of small programs. This is all fine and dandy, but our programs don’t actually do anything yet. For example, when I click on a button, I want something to happen. To do this there is a comprehensive framework built right into Qt called the signal/slot framework. This is a system of connecting widgets to functions, so that when the user does something, some functionality runs in response.

The way signals and slots work is that each widget (a graphical object on screen, like a button) has a number of signals. A signal is a function that is emitted when you do something with the widget. For example, to see the signals that are available for QPushButtons, we need to look at the QButton documentation (as QPushButton is a type of QButton and inherits it). We can see the following signals:

● void pressed()
● void released()
● void clicked()
● void toggled( bool )
● void stateChanged( int )

So when a user clicks on a QPushButton, the clicked() signal is emitted. We can then connect this signal to a slot. A slot is just a normal method that can do whatever is needed in response to the signal being emitted. So how does this work, you ask? Well, to explain, let’s look at some code to get us started. You will need to use multiple files for this code. Type the following code in:

myclass.h:

 1 #include <qapplication.h>
 2 #include <qhbox.h>
 3 #include <qvbox.h>
 4 #include <qpushbutton.h>
 5
 6 #ifndef MYCLASS_H
 7 #define MYCLASS_H
 8
 9 class MyClass : public QVBox
10 {
11     Q_OBJECT
12
13 public:
14     MyClass();
15     ~MyClass();
16
17 public slots:
18     void slotJim();
19
20 private:
21     QHBox * hbox;
22     QPushButton * bobButt;
23     QPushButton * fredButt;
24     QPushButton * frankButt;
25     QPushButton * jimButt;
26     QPushButton * janButt;
27     QPushButton * aprilButt;
28     QPushButton * mayButt;
29
30 };
31
32
33 #endif


myclass.cpp:

 1 #include <qmessagebox.h>
 2 #include "myclass.h"
 3
 4 MyClass::MyClass()






 5 {
 6     hbox = new QHBox(this);
 7     janButt = new QPushButton("Jan", hbox);
 8     aprilButt = new QPushButton("April", hbox);
 9     mayButt = new QPushButton("May", hbox);
10     bobButt = new QPushButton("Bob", this);
11     fredButt = new QPushButton("Fred", this);
12     frankButt = new QPushButton("Frank", this);
13     jimButt = new QPushButton("Jim", this);
14
15     connect( jimButt, SIGNAL( clicked() ), this, SLOT( slotJim() ) );
16 }
17
18 MyClass::~MyClass()
19 {
20 }
21
22 void MyClass::slotJim()
23 {
24     QMessageBox::information( this, "Woohoo!", "slotJim() has been called!\n", "Cancel" );
25 }

main.cpp:

 1 #include <qapplication.h>
 2 #include "myclass.h"
 3
 4 int main( int argc, char **argv )
 5 {
 6     QApplication a( argc, argv );
 7
 8     MyClass w;
 9     a.setMainWidget( &w );
10     w.show();
11     return a.exec();
12 }

You will need to run the moc tool on the header file if you are building this by hand. See the Qt documentation for details on this. Before we look at the code, let’s have a quick discussion of what moc actually is. moc (the Meta Object Compiler) is a little tool that converts some of the Qt signal and slot syntax into regular C++ code, and it also does some other nifty little things. You can see this syntax in the header file where you see ‘Q_OBJECT’ and ‘public slots:’. The ‘Q_OBJECT’ macro indicates you are using the Qt object model (the signals/slots framework) in this header file. Always put this at the top of any class that uses signals and slots. The ‘public slots:’ part of the code indicates that the following methods are slots that can be connected to signals. We have a single slot, slotJim(), which is a method like any other.

Now let’s take a look at line 15. This line is where the actual connection between the signal and slot occurs. It is in this format:

QObject::connect( object_that_emits_the_signal, SIGNAL( signal() ), object_with_slot, SLOT( slotname() ) );

We have the following code:

connect( jimButt, SIGNAL( clicked() ), this, SLOT( slotJim() ) );

First of all, we do not need the QObject:: prefix, as we inherit QVBox, which in turn inherits QObject down the line. We can see that the jimButt object (the button with “Jim” written on it) is the object we are connecting a slot to. We are dealing with the clicked() signal in this connection. We could of course use any of the other signals, but clicked() is a good one to start with. We then connect this signal to the slotJim() slot. We specify ‘this’ as the object with the slot, as it is our MyClass instance that defines slotJim().

You may have seen that some signals have a parameter, such as toggled( bool ). This signal is for when the button is a toggle button and you want to pass to the slot whether or not the button is toggled. To use signals that pass a parameter, your slot MUST accept the same parameter type. This may sound like a limitation, but in practice it really isn’t: it is due to Qt being type safe, which is a good thing. So, for example, you could have the following connection:

connect( toggleButt, SIGNAL( toggled( bool ) ), this, SLOT( slotIsToggled( bool ) ) );

You could then use the slotIsToggled( bool ) slot like this:

void MyClass::slotIsToggled( bool state )
{
    if( state == TRUE )
    {
        // do something
    }
    else
    {
        // something else
    }
}

Wrapping things up

Well, in this tutorial we have looked at layout managers, signals in widgets, signals and slots and a few other things. We are well on the way now to writing more comprehensive Qt applications. Next month we will build our first application based on this knowledge and use Qt Designer to develop our interfaces. Until then, I suggest you read through the Qt documentation and have a play with the different signals and methods available for widgets such as QPushButton, QLabel etc. Have fun!


KNOW HOW

GIMP WORKSHOP: Image processing with Gimp: part 8

COLOUR RUNS It’s best to quit while you’re ahead, so they say, which is why this is the last part of our Gimp Workshop, by Simon Budig.

Paths enable users to define the outlines of objects or simple figures. Gimp’s path tool is based on so-called Bezier curves, which may also be familiar from other graphics programs. A Bezier curve is defined by means of two support points and two control points. In Figure 1 you can see roughly how the various points affect the curve. By placing several of these segments one after another you can draw more complex figures. Incidentally, if the end point of such a path coincides with its starting point, this is known as a closed path.

Figure 2: The various path tool buttons

Figure 1: Various types of Bezier curve



Let’s get one thing out of the way before I start: I don’t like Gimp’s path tool. I find it counter-intuitive in comparison with other programs and sometimes limiting, but since it offers an important piece of functionality I will describe it here in detail.

Start Gimp and open a new image. Open the Layers, Channels & Paths dialog and select the Paths tab. Now activate the Bezier tool (the pen nib with a curve, on which a point sits) and click in the image. In the dialog there will now appear a new entry for the current path. The first support point has also appeared in the image window. If you hold down the mouse button you can drag the control point out from the support point. If you click once more in the image area the second support point appears and the Bezier curve between

them will become visible. You can now drag out the second control point. In this way you can very quickly copy the rough outlines of an image element. With another click on the start point, the path is closed.

Depending on where we click in the image, different things will happen. If the path is closed and we click inside it, the path is converted into a selection. We can now handle this as a normal selection. If we instead click outside, a new component of the path starts. In this way we can define a path covering several areas. Due to the crude data structure inside Gimp it is unfortunately not possible to have several open components, which would be useful for arrows and suchlike.

Be careful when you try to correct an existing path. You can in fact, as is the custom in other programs, drag the control points around with the mouse. (Normally Gimp will move two opposing control points symmetrically, but if you hold down the Shift key they can be moved independently.) However, in the case of support points this doesn’t work: instead of moving the point, the control points are dragged out again – I have ruined many paths myself in this way. To move a support point, you have to press the Ctrl key at the same time.

With the four buttons in the upper part of the path dialog (Figure 2), it is possible to toggle the tool



Figure 3: Frame the eagle with the path

between four operating modes. From left to right, these are the tools to:

● Create or continue a new path
● Add nodes to a path
● Delete nodes from a path
● Modify the nodes of a path

If we click outside the nodes with the last three tools, Gimp automatically switches to the first tool. It is possible to tell from the shape of the mouse pointer what will happen with a mouse click. With a bit of practice, it is possible to adapt the paths to a specified form. Click on the start point, drag the control point as appropriate, release the mouse button and click on the next support point. Using different combinations of the Shift and Ctrl keys, the position can now be adjusted and both control points can be placed independently of each other. You’ll simply have to forget about going back to correct nodes you placed earlier – or else you will have to manually switch to the modify path tool (the fourth button).

What are paths for? I would like to give two small examples. For the first we will use an example image of a bald eagle, the heraldic animal of the USA. After loading the image into Gimp we zoom in a little, so we can place the points more precisely. Select the path tool and click on the upper edge of the neck, on the left edge of the image. We have to do a bit of guesswork here, as the image is

Figure 4: The eagle turns into a logo

very dark. Now we create the path along the head. At the beginning we will have to place support points on the tips of the feathers. At the places where the plumage is more close-fitting, fewer support points are needed, since gentle curves can usually be approximated nicely using the control points. In Figure 3 you can see the finished path. As you can see, we have closed the path outside around the image. If the path is converted into a selection this guarantees that the selection also includes the lower left corner. Otherwise the start and end point would be directly linked to each other, which would mean leaving out a triangular area.

Now we have the outline of the eagle in a geometric form, we can use it to draw fairly graphics-orientated logos. Create a new layer with a white background and click on the third button under the path list. The path is now converted into a selection. Using the colour fill tool, which I will describe in detail below, we can now fill the form of the eagle with a colour fill. Deactivate the selection, select the paintbrush tool with any paintbrush from the paintbrush dialog and set red as the foreground colour. We have used the calligraphy brush. Using the fifth button under the path list, the path will now be followed by the current tool (Figure 4). In this way, you can make fairly graphic elements out of photos. You can also export a particularly successful path into a file, and later insert it into another image, via the pop-up menu in the path dialog.

A few more comments on paths: unfortunately paths are not scaled when you scale an image as a whole, but there is the option to change paths using the transformation tool. Click in the space to the left of the path preview image and an image of a lock appears. If you now rotate or distort the perspective of the image using the transformation tool (rotation, scaling, shearing, perspective) the preview grid also

Figure 5: Geometric path transformations






Figure 6: A box made from paths and fills.

shows a preview of the path (Figure 5). If the transformation is then applied, the paths marked with a lock will also be adapted. If you need to move a path as a whole, click any support point on the path with the Alt key held down and drag it into position. Like layers, you can also rename and duplicate your paths.

It can sometimes be very useful to convert a selection (to be more precise: its edge, as displayed by the marching ants) into a path. Simply click on the fourth button under the path list and wait a moment. The result is not always ideal, but it’s acceptable. By the way, you can also set the parameters for the optimisation: if you hold down the Shift key and press the button, a large dialog appears with lots of parameters. I don’t know exactly what you can set with all of them, but there are probably some image editing experts out there delighting in this option.

Paths can also be used to construct geometric objects, in particular polygons. The normal selection tools are restricted to ellipses and orthogonal rectangles. With paths you can create any polygon you like with a couple of clicks and convert it into a selection at the touch of a button. In Figure 6 you will see a box whose walls have been constructed using paths, then converted into a selection and filled with a gradient colour fill.

Gradient blends

Figure 7: The tool settings for the blend tool



Until now I’ve shamefully neglected the gradient tool, as I assume you’ve already tried it out a bit. However, since there are a few nice touches hidden here, I’d like to cast a bit more light on the subject. If you click in the image with the blend tool activated, drag the mouse a little way and release the mouse button again, a gradient fill will be painted from the foreground to the background colour. The artificial line indicates how “soft” the colour fill runs and in

which direction it is oriented. It’s not wildly exciting, but it’s certainly useful. Now open the tool settings by double-clicking on the tool icon (see Figure 7). From here you can set a multitude of options. Uppermost are the options which are present in all painting tools: the opacity and the paint mode. You may want to try out the Mode settings with another tool, in order to understand the various options, but Normal really is normal; it’s only very rarely that you’ll need other modes.

Below this it gets a bit more specific. With Offset you can set the percentage at which the fill really begins. For normal fills this scarcely matters, but have patience...

With Blend you can define which colours the colour fill should use. The top two entries blend between the foreground and the background colour, once in RGB mode and once in HSV colour mode. The HSV blends are usually more colourful, as they run along the colour wheel (see also Part 1) and so cover a broad colour spectrum. You can usually do more with the RGB colour mode. The entry FG to Transparent blends from the foreground colour towards transparency, so that, for example, the colour slowly fades towards the outside. With the last entry you can use the user-defined colour fill, which can be seen in the main toolbox at the bottom right. With a click on this preview (Active Gradient) you can access a selection dialog. Don’t worry, I’ll explain how you can define your own gradients.
The Gradient entry defines the form of the colour fill:

● Linear we have already met
● Bi-Linear reflects the colour fill again on the opposite side
● Radial paints the fill in a circle, with the length of the artificial line as the radius about the start point (the direction of the artificial line does not matter)
● Square is just a square where the end point of the artificial line lies on the outer edge of the square
● Conical arranges the colours of the fill like rays about the start point, the direction of the artificial line specifying the orientation. In the symmetrical case, the fill only paints over half the angle and is reflected on the artificial line, while the asymmetrical form paints over the whole angle
● The Shapeburst fills adjust their shape to the current selection. While angled treats all colours equally, spherical favours the first and dimpled favours the last part of the colour fill
● The Spiral fills are very useful for hypnotic eyes. These come in clockwise and anticlockwise forms. The artificial line is used to define the width of the spirals and the direction of the ‘nose’ at the midpoint

In Figure 8 you can see a brief overview of the various forms of fills.


KNOW HOW

Figure 8: Various forms of gradients: top: Linear, Bi-Linear, Radial, Square, Conical (symmetrical), bottom: Conical (asymmetric), Shapeburst (angular), Shapeburst (spherical), Spiral (clockwise), Spiral (anticlockwise)

The offset parameter means the colour fill does not begin immediately at the starting point of the artificial line. The best way to explain this is with a radial fill. If you use the artificial line to define a radius of 100 pixels (this will then reach the outermost colour of the fill), the offset parameter will define the radius of the innermost colour of the fill. With an offset of 30 per cent the fill would only start to run at a distance of 30 pixels from the midpoint. The inner area will be filled with the starting colour.

In the lower area of the dialog you can activate Adaptive Supersampling, which essentially boils down to antialiasing at sharp colour transitions. This does, however, increase computing time.

Gimp comes with a whole heap of useful gradients, but the chances are that when it comes down to that crucial moment, the right colour won’t be there. To remedy this, pull up the gradient selection dialog and click on the Edit button. In Figure 9 you can see the dialog which appears, which is divided into three areas. At the top left you can see the list of available fills, at the top right are a few basic operations and at the bottom is the editing window.

Under the gradient you will see a bar with black and white triangles. A gradient is composed of several segments, at whose end points you can define a colour in each case. The black triangles separate these segments. If you drag the triangles back and forth you will see how the fill changes accordingly. The white triangles move the focal point of the colour within a segment. Normally they stand in the centre between two segment end points. Segments can be selected by clicking in the area

Figure 9: The dialog for definition of gradient levels

between two black triangles. The pop-up menu which appears on a right-click always relates to the segment marked dark grey. You can extend the selection by clicking with the Shift key pressed. If you click on a triangle you can move it and adapt the gradient accordingly. By clicking in the dark grey region you can move the whole area; if you had pressed the Shift key at the start of the click, the white triangles to the left and right of the selected area would also have been altered.

You can access the pop-up menu via the right mouse button. As already mentioned, this relates to the area currently selected. You can now define the colour (and transparency) of the left and right corners. Frequently used colours (the adjacent colour of the next segment, the foreground colour and the colour of the other end) are immediately available; you can also save up to ten colours for rapid access in the menu. From this menu you can also access other functions which influence the details of your fill. You can define how the transition between the end point colours will occur, re-arrange the triangles, split up and delete segments and so on. Every colour can also be assigned a transparency. Gimp comes with some colour fills as standard; in the Flare Glow fills you can see how these can be used to best effect (Figure 10).

That’s it. I hope I have been able to help you understand the basics of Gimp. Of course, this Workshop could never cover all the functions in Gimp – it’s not without good reason that you’ll find inch-thick books dedicated to the program. Have a go and see what else you can get out of Gimp. I’m inviting you to send me your tips and tricks (sbudig@linuxuser.de) and if a suitable number can be collected, we’ll publish them here in this column. Happy Gimping! ■

Figure 10: Light effects with the Flare Glow fills

The author: Simon Budig is a maths student at the Uni-GH Siegen. He now uses nothing but Linux, and pounds Gimp into the subconscious of innocent victims. He was incited to do so within the Unix-AG, which carelessly allowed him to make contact with the developers of Gimp.




KNOW HOW

Migration: USER ADMINISTRATION

IN GOOD COMPANY Windows 98 was primarily designed for single-user machines, although it is quite possible to set up several user accounts. As Anja M. Wagner explains, Linux is basically a multi-user system, which is why user administration is more sophisticated and user-friendly than in Windows

The multi-user approach is already evident during the installation of Linux, when the system administrator, or superuser “root”, and at least one additional “normal” user must be created. In this workshop we are going to show you how to create additional users and groups and how to assign access permissions to files and directories, in short, how to administer Linux users. For the purposes of this tutorial we will be referring to SuSE Linux 7.2 Professional with a KDE 2.1.2 desktop.

The philosophy of Windows 98 is fundamentally different from that of Linux. The operating system is primarily designed for one user on a single-user machine. However, Windows 98 also offers the facility to set up multiple users. What is the point of that? Even a computer without a network connection could potentially be used by several people, for instance different members of a family or flatmates. In this case it is useful to be able to create a user account for each person. Each user can then create his or her own individual desktop and My Documents folder. However, under Windows 98 this folder is not protected against access by co-users, as it would be with Linux.

Under Windows 98, additional users are created in the Passwords section of the Control Panel, by enabling the option “Users can customise their preferences and desktop settings. Windows switches to your personal settings when you log on” via the

Figure 1: Under Windows, individual user profiles have to be enabled



Figure 2: All users happily united on one list

User Profiles tab. If the desktop icons, the Network Neighborhood contents, the Start menu and the program groups are to be included in the user settings, you also need to tick the relevant options on the same tab. After confirming with OK, you then need to open the Users section, which is also in the Control Panel. All existing users are listed under User Settings. By clicking on the New User button you can start a wizard, which will help you to create additional users. There is no point in setting up a password, by the way: it will not protect against access and manipulation by other users. In the step Personalized

Figure 3: A small selection of items for personalised settings




Figure 4: The file manager shows the file permissions for “tux”

Figure 5: Permissions for the system administrator “root” are a little more complex

Items Settings you can select to personalise favourites, downloaded Web pages and the My Documents folder. You can also choose between starting off with a copy of the existing desktop or with new, empty desktop items. Now every Windows user can log in with their own name and password at start up. The operating system will create the folder Windows\Profiles\username, in which the individual settings are stored.

Grass roots

Under Linux, two users exist right from the start: the system administrator “root” and a “normal” user, who we shall call “tux”. All users are assigned to a group by the system and can have different permissions. Only the superuser “root” has all permissions for all files and directories, i.e. read = r, write = w and execute = x. This is the reason why system files can only be amended when logged in as “root”, as only this user has the necessary permissions to make such changes. For directories, the execute permission “x” indicates the ability to access the directory at all, and “w” means being allowed to create new subdirectories and files. If even “root” does not have execute permission for a file then it is not an executable file; for instance, it could be a text or image file.

In SuSE Linux, to start off with, all normal users belong to the group “users”. Red Hat, however, creates a new group for each user, where the name of the group is the same as that of the user account. This is then the standard group for that user.

If you want to know the permissions for a directory or file, start the file manager Konqueror by clicking

on your home directory icon in the KDE panel. Your home directory under Linux is roughly equivalent to the My Documents folder under Windows. Activate the detailed list view under View/ View Mode on the Konqueror menu bar. The Permissions column contains a nine-part combination of the characters “r”, “w”, “x” and “-”. The first three elements show the permissions for the owner of a file or directory. The first triple “rwx” on a file created by “tux” and stored in his home directory indicates that the owner “tux” has all permissions for this file. The second triple shows the group permissions assigned to the file, in this case for the group “users”. Permissions for all other users can be seen in the last triple. A hyphen indicates the absence of a permission in the “rwx” sequence. For example, if “tux” was able to read and execute a file, but not to change it, the triple would be “r-x”.

To display permissions on the command line, you first need to open a terminal emulator window by clicking on the window and shell icon on the KDE panel. At the prompt, type “ls -l” and press Enter. The character in front of the familiar nine-character combination of r, w and x indicates the file type: a hyphen “-” represents a file, “d” represents a directory.

Admin or not admin

Returning briefly to the system login: the graphical login shows all existing users, in this case “root” and “tux”. You have to decide right at the start of a session whether you want to work with Linux as the administrator or as a normal user. You should only log in as “root” if you are intending to make system changes and if you know what you are doing; otherwise you might damage the system. Each user is identified by their username and a password.

We are assuming for this workshop that you have installed Linux on a single-user machine. Even if you are the only user of this machine, it still makes sense to create several “normal” users. This will give you more freedom to experiment with the uses and design possibilities of the graphical Linux interface. For instance, “tux1” could use a different KDE theme to “tux”, and “tux2” could default to using GNOME instead of KDE. We discussed how to customise the KDE graphical

Figure 6: File listing in the terminal window






Figure 7: The administrator “conducts” the system

Figure 8: This way to the user manager

Figure 9: The user manager clearly lists users and groups

Figure 10: User administration is accessed by button or menu

login in the workshop “Tailormade desktop, part 2” in issue 13 of Linux Magazine. In the field containing images for the different users, the system administrator is represented by a conductor. This normally needs to be activated after an installation. Log in as “root” and open the KDE Control Centre using the K icon on the panel. Select System/ Login Manager in the left column and then select the Users tab. By default, “root” is one of the users that is not displayed during graphical login. This is a security precaution, to make it harder and less tempting to log in as the superuser. Click on the entry “root” in the list of no-show users and then on the button with the double left chevron to remove it. The stylised conductor will now appear at the next graphical login, and you only need to click on it to enter “root” in the login field.

User administration is one of the classic tasks of a system administrator. Log in as “root” to create new users. Click on the K icon in the panel and select System/ User Manager. This starts a tool that makes administration tasks much easier. The user manager window is split into two halves. On the left, all existing users are listed with their login and their full names. At the end of the list is the “normal” user “tux” that you created during the installation. On the right are all existing groups with their group IDs (GID).

New user

Figure 11: A new user is born...

Figure 12: ...and equipped with vital information



In order to create a new user, click on the Add user icon or select User/ Add from the menu bar. A small window pops up and you are prompted to enter the new username. We are going to use the name “Tux01”. In the following step you will specify the properties of the new user. Enter the full name. Additional information such as the address can be entered in the text fields Office1, Office2 and Address. An important part is the selection of the login shell. A shell is basically the interface between the user and the operating system. It provides a command line on which you can enter commands. The login shell is the shell that the respective user is given to work with.

Figure 13: The login shell determines the user’s working environment on the command line

Normally this is the “bash” (/bin/bash) – “Bourne Again shell”, but there are others, for instance the C shell (csh) or the Z shell (zsh). Equally as important as the specification of the login shell is the creation of a home directory for the newcomer. The user manager creates this automatically. It is called “/home/username”, in our case /home/Tux01. The user ID (UID) is also assigned automatically by the system. You should not change this number, as this is how Linux recognises the user. In order to simplify the configuration task you should ensure that the options Create home directory and Copy skeleton are ticked. The second option provides the new home directory with a number of standard configuration files that are copied from the skeleton directory /etc/ skel/.

Passwords If you want to set a password for the new account, click on the button Set password and enter it twice. The groups to which “Tux01” belongs can be set on the Groups tab. More on this later. The primary group for a normal user (for SuSE) is logically called “users”. User properties are changed in User/ Edit, and if you want to get rid of an account, select User/ Delete. The user manager only saves your entries after confirmation once you exit the program. A new group is created in a similar manner: click on Group/ Add and specify a meaningful name. The

UID The user ID is a number between 0 and 65535, which the system uses to recognise and identify the user and to administer his or her account. The system assigns the UID automatically when a new user is created. This number should not be changed. GID The group ID is similar to the UID. It is also a number between 0 and 65535. Each new user is initially assigned to a primary group, in SuSE it is called “users”.


KNOW HOW

Figure 17: To avoid confusion the “Community” gets its own directory

Figure 14: The final goodbye: deleting a user

system also identifies groups by their group ID (GID). This number is assigned automatically by the system. We are going to use an example to explain how you can apply user and group administration:

Big brother Andrew, Colin and John share a flat. Andrew owns a computer and since he is a friendly sort of chap he creates user accounts for both of his flatmates. Each account has a password so that all directories and files in the respective home directories are protected from the curiosity of the others. He himself has, of course, all access rights, because he is also the system administrator “root”. Well, someone’s got to do the job. However, the three flatmates want to make certain files available to everyone. They all need to be able to read the cleaning rota and to enter who did the

Figure 15: A new group called Community is created

Figure 16: Members are assigned to the new group

cleaning and when. Andrew, alias “root”, therefore creates a group called “Community” with the user manager. He then amends the user groups to which Andrew, Colin and John belong. To do this, he first clicks on the new group, Community, on the right side of the user manager and then either clicks on the Edit icon or selects Group/ Edit from the menu bar. This opens the field Group properties in which he selects his flatmates on the left one by one and assigns them to the Community group by clicking on the arrow pointing to the right. Finally, “root” also becomes a member of the group.

Figure 18: Defining access permissions for the group

Community To avoid confusion, and to make sure that the common files can be accessed, “root” now creates a directory /home /Community, in which the files and directories to which Andrew, Colin and John all have access are going to be placed. He creates this directory with the file manager Konqueror. Permissions to the directory and its contents for the three are set by “root” through right-clicking on the directory and selecting Properties/ Permissions. At first only the owner has all r, w and x permissions. So that Colin, John and Andrew can also access Community, “root” puts three ticks in the Group line and changes the group from “root” to “Community” in the Ownership section. There is no point in ticking the option Apply changes to all subdirectories and their contents, because KDE will not save this yet. If one of the flatmates now puts a file into the directory that is meant to be accessible to everyone in the Community group, they then need to right-click and select Properties from the context menu and set the permissions, i.e. give read, write and, depending on the file type, execute permission to the Community group. If a member of the group amends a file, permissions do not have to be reset. Issue 15 • 2001

LINUX MAGAZINE

51


KNOW HOW

Organisation DIY package management in /usr/ local

STOW IT! The /usr/ local tree can easily become a tangle, but as Bruce Richardson explains this needn’t be the case. GNU Stow is a simple application, which keeps it organised by allowing each piece of software to be installed into a separate directory tree

Up the creek without a package Modern Linux distributions have sophisticated package management tools. Large and complex applications can be installed or removed with a single command or mouse click. Where you require nonstandard options a few tweaks to the source package will usually give you what you want. Package management tools do your housekeeping for you and (when the packages are built to a well-planned policy) ensure that the various components of your system interoperate and function consistently. Sometimes though, the app you want may not be available as a package, the source package may not be flexible enough or you may want to work with the latest cvs source. Whatever the reason may be, you find yourself working with the unpackaged source or binary files. For most Linux users, this is not a daunting task. Typically, you unpack the tar-ball into /usr/ local/ src and run through some variation of the following: # ./configure --some-option --some-other-option # make # make install At the end of which the application should be safely installed into the /usr/ local tree. That is easy enough. At this point, however, you should ask yourself some questions: ● Where did all those files go? I can’t use a package tool to get a simple list of what went where.

Further advantages Having un-stowed an application, there is no need to delete it. You could, for example, keep two or more instances of an application in /usr/ local/ stow (different versions, perhaps, or compiled with different options) and switch between them at will. Just install them to different target directories. The fact that Stow keeps applications in their own hierarchies makes them portable. You can quickly copy stowed apps between machines by making a tarball of the app’s directory tree and un-tarring it into the Stow directory of the target machine. Stow prevents applications from overwriting each other’s files. Before it creates any symlinks it checks to see if any proposed links would overwrite existing files. If a conflict is found, Stow does not proceed.

52

LINUX MAGAZINE

Issue 15 • 2001

● Will it be easy to cleanly uninstall the application? Even if I have a list of all the installed files, what steps do I need to take to uninstall safely? ● Am I sure the application installed itself nicely and didn’t break anything? This application wasn’t packaged for my system and the developers may not have been as careful as they should have been. These questions should worry you. The more applications you install like this, the harder it becomes to answer them. With unpackaged applications you have to do the housekeeping and maintenance yourself. This can turn into a nightmare. /usr/local/ – The Land Where The Wild Things Are. To quote the Filesystem Hierarchy Standard: “The /usr/ local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated”. That is to say, it is the place to install software, which is not part of the standard system. In practice this means software that has not been pre-packaged using your distribution’s packaging tools. The /usr/ local hierarchy is essentially a twin of the /usr hierarchy, with bin, sbin, lib (and so on) subdirectories. A tar-balled application will usually install itself entirely within this hierarchy, unless you specify some other location. Precisely what goes where (docs to /usr/ local/ doc or /usr/ local/ share/ doc?) varies according to the developer’s whim. If you are lucky, you may be able to place things exactly where you want them by passing the correct options to the configure script.

A simple solution Stow offers a way to organise the /usr/ local hierarchy, avoiding tangles and breakages. This is done by installing each application into its own corralled directory tree, and then creating symlinks to the application files. To install an application with Stow, follow this sequence: ● Create a destination directory for your new application. /usr/ local/ stow/ appname is traditional (and logical).


KNOW HOW

● Install the software into this directory in such a way that files which would normally go into /usr/ local/ bin are placed in /usr/ local/ stow/ appname/ bin, files for /usr/ local/ share go into appname/ share and so on. For tips on how to do this see the section called Installing to the target directory. ● Then simply do: # cd /usr/local/stow # stow appname In the third step Stow moves recursively through the appname/ tree. For each file in appname/ bin a symlink is created in /usr/ local/ bin, for files in appname/ doc links are created in /usr/ local/ doc etc. What was the point of that, you may ask? It’s more laborious than the usual method and the installed application doesn’t work any faster. The advantage becomes clear, however, when you come to uninstall the stowed app. Here’s the entire procedure: # cd /usr/local/stow # stow -D appname This removes all symlinks to the application. You are then free to delete the /usr/ local/ stow/ appname, knowing that you are deleting all the application’s files and only those files.

Installing Stow Stow is a Perl script and Perl is the only prerequisite. It should work with Perl 4 or Perl 5. A Stow .deb package is available as part of the standard Debian distribution. Mandrake is the only rpm-based distribution for which we could find a Stow package. On any other set-up you will need to download the tar-ball from ftp://ftp.gnu.org/gnu/stow. For the obsessive compulsives amongst you, it is possible to install both Perl and Stow as stowed applications:

Things to watch Stow attempts to create as few symlinks as possible. If it can link to a directory rather than the files within it, it will. So if the target directory contains a lib/ data directory but there is no data directory in /usr/ local/ lib, Stow will create a symlink to the data directory, thus importing all its contents with only one link. If you later use Stow to install another application which also includes a lib/ data directory, Stow will resolve the conflict by replacing the symlink with an actual /usr/ local/ lib/ data directory and then populating that with symlinks to both applications. Imagine what would happen, however, if you were to install the second application directly, not using Stow. The installation procedure would simply follow the symlink and install files directly into the first application’s target directory. If you ever un-stow the first application, some files belonging to the second application would be unstowed with it. To avoid this problem (a) make sure that /usr/ local contains all the standard setup directories (bin, sbin, share and so on) and (b) use Stow to install all local applications, if possible. Another gotcha is that ldconfig ignores symbolic links when scanning for libraries. If a stowed app includes libraries you may need to add some symlinks of your own in /usr/ local/ lib. Always un-stow an application before making any changes to the contents of the target directory. Re-stow after the changes are made. Otherwise you risk broken links. ● The app won’t be run from its target directory. It will be run from its apparent location, as created by the symlinks. ● If the app uses shared resources, it will look for them in the prefix tree. If you set the prefix to be the Stow target directory, the app won’t be able to find any shared resources because the target directory contains only files belonging to the application itself. 
Instead you must let the application think it will be installed to /usr /local (which is usually the default anyway) but divert the actual installation into the target directory. One way is to run # make install prefix=/usr/local/stow/targetdir

● Install Perl into /usr/ local/ stow/ perl ● Install Stow into /usr/ local/ stow/ stow ● Now simply:

rather than just

# cd /usr/local/stow # perl/bin/perl stow/bin/stow perl stow

but this will not work for every application. See the Compile-time and install-time section of the Stow documentation for a detailed discussion of this issue.

# make install

Installing to the target directory Many source tarballs are designed to be relocatable. This means that you can change the base directory – the “prefix” – into which the application is installed, usually by passing a --prefix=desiredlocation option to the configure script (/usr/ local is usually the default). You might think that this is how you should install to the Stow target directory, but you would be wrong.

Lastly... My experience has been that while veteran Linux users tend to be familiar with Stow, newer users have usually not discovered it. This is a shame as it makes the potentially dangerous wilderness of /usr/ local a much safer place. Properly used, Stow can help you keep your local hierarchy as tidy and well organised as the standard parts of your system. Issue 15 • 2001

LINUX MAGAZINE

53


KNOW HOW

Pre-emptible Linux

A REALITY CHECK Can Linux be a “real-time operating system”? Kevin Morgan investigates

R

eal-time is a term that characterises a particular application. Hard real-time means an application fails catastrophically if deadline requirements are not met. Soft real-time means an application suffers degradation in quality, but not catastrophic failure, if deadline requirements are not met. Both hard and soft real-time are clock time independent. Linux is capable of meeting a wide variety of realtime requirements, in terms of specific timing needs, addressed by specific levels of software (interrupt service routine versus user application level). Interrupt service routine software is delayed by interrupt off periods in the kernel, and the Linux 2.4 kernel has very short interrupt off timings, with none greater than 60 microseconds on an 800 MHz Pentium III class system. This level of performance meets the vast majority of real-time requirements for interrupt level software. This is particularly true given modern system designs, where extremely fast I/O response requirements tend to be serviced by dedicated hardware in the form of intelligent I/O controllers, dedicated micro-controllers or custom dedicated hardware. In the rare remaining cases where Linux interrupt

The patch specifics The pre-emptible kernel patch modifies the definition (implementation) of a spinlock, changing it from its symmetric multiprocessing (SMP) specific implementation to a pre-emption lock. In both cases, the locking function acts as a control on re-entrancy to a critical section of kernel software. Additionally, the pre-emptible kernel patch modifies the interrupt handling software to allow rescheduling on return from interrupt if a higher priority process has become executable, even if the interrupted process was running in kernel mode (provided the process is not in a critical pre-emption locked region). Spin unlocks are redefined to return the system to a pre-emptible state, and check if an immediate context switch is needed. Lastly, the kernel build definition for a uniprocessor target system is modified to include the spinlocks (implemented as pre-emption locks). Through these four basic changes, the Linux kernel becomes generally preemptible (with short non-pre-emptible regions corresponding to the spinlocked regions in an SMP kernel). Process level responsiveness is dramatically improved, both on average and in the worst cases.

54

LINUX MAGAZINE

Issue 15 • 2001

off periods cannot be tolerated, RTLinux and RTAI are available. These are sub-kernel technologies that provide simple multi-threaded interrupt handling environments for driver level software. These environments emulate (virtualise) interrupt management requests from Linux, and thereby reduce the worst case interrupt off timings for the driver software written for these environments from the 60 microsecond level down to approximately 10 microseconds. Modern real-time environments typically involve substantial control and monitoring software in the real-time control path. Such software resides at the user application level. For example, consider real-time control software written in Java, running on a JVM, an increasingly common design choice. Such a system would never be structured as driver level software. Response requirements for applications are directly tied to the kernel’s ability to pre-empt a running process and switch to a higher priority process (newly awoken) very quickly. The lack of kernel pre-emption in Linux means that long system calls can delay high priority user process execution for relatively long periods, running into the tens of milliseconds in a 2.4 kernel. There is now a kernel pre-emption patch that today reduces this time down to one to two milliseconds, with further improvements planned for the future. Whether an operating system capable of these levels of responsiveness guarantees is considered realtime or not is a positioning rather than a technical issue. This level of improvement in Linux moves it from “problematic” to “very acceptable” for the vast majority of applications that have real-time requirements (soft or hard).

Maintenance cost and longevity All of the changes for the pre-emptible kernel patch directly leverage the SMP spinlocks, which are themselves fundamental in Linux for symmetric multiprocessing. The code modifications in the preemptible kernel patch are thereby limited to the four areas (see The Patch Specifics). New kernel code that functions correctly in an SMP kernel requires absolutely no additional changes in the pre-emptible


KNOW HOW

kernel patch. Thus, maintenance of the patch against the evolving Linux base is low cost. Improvement in Linux process level responsiveness is a must requirement for many embedded system designers considering the use of Linux as an OS platform. Embedded developers have a simple choice: enable kernel pre-emption if needed by the demands of their responsiveness requirement, or continue to use non-pre-emptible Linux if sufficient as is.

Audio processing under load In order to achieve over 20x improvements in process level responsiveness, what level of throughput loss is acceptable? If throughput loss is less than two to three per cent, the cost is outweighed by the improvement in system responsiveness. This trade off does not have to be made if every ounce of throughput is critical, and process level responsiveness is not. Users can select pre-emption or not, as they see fit.

Official Linux kernel source Pre-emptible kernel technology (as a build option, similar to SMP) should be included in the Linux source code, as provided at kernel.org, starting with the 2.5 kernel base. It is a fundamental improvement in Linux, which has value to all Linux user communities (desktop, server and embedded), and should be provided with this central distribution. However, continued development and deployment of pre-emption technology in Linux will not slow down if the technology is not integrated into the official source tree. Many Linux technologies are available and in widespread use today that are not part of the source code, and may never be included. This is one of the key benefits of open source; new and innovative technologies can be developed and provided when necessary, with the best getting extensive usage and support. Independent of Linux 2.5 and beyond, Linux kernel pre-emption technology is available today as an open source patch and there will continue to be enhancements to this technology. In any case, certain embedded Linux companies are committed to the long-term availability and support of this capability and committed to providing this leading edge advancement across all major target platforms. Providing an alternative semaphore implementation that utilises priority inheritance is an improvement under design. Continuing to refine long spinlock held regions is an ongoing effort. Characterising throughput impacts (positively and negatively) on a number of workloads is under progress and will be shortly available. A number of application success stories will become public over the course of the next year as this technology is widely designed in and deployed by embedded system product organizations.

The impact of throughput At a simplistic level, changing a uniprocessor kernel to add internal re-entrancy management means “more code” and hence “more time.” Superficially, a preemptible kernel will have reduced throughput. At the heart of the throughput issue is the question of a balanced system design, and the overall design objectives. How important is a responsive Linux? In a world of streaming media, responsiveness is quite important. A demonstration of the pre-emptible kernel doing simple audio processing shows that even a trivial load on non-pre-emptible Linux causes user process delays that exceed the threshold of the human ear, and audio glitches are heard. With pre-emption enabled, these delays are vastly reduced, and no audible glitches are heard.

Throughput concerns Some oppose a pre-emptible kernel because of throughput concerns. Others oppose pre-emptibility because of concerns about growing complexity in the kernel. This argument is specious, because the preemption approach takes advantage of already required and in-place SMP locking. No additional complexity is created. All Linux kernel engineering must already take into account SMP requirements. Some oppose continued refinement of SMP locking to achieve better SMP scaling (on higher way SMP systems); such refinement has the beneficial side effect of also reducing pre-emption off periods in a pre-emptible kernel. Pre-emptibility on 2.4 already provides dramatic improvements in user process responsiveness, and while further improvement would be beneficial, the current level of improvement is already of tremendous value. Hence, the pros and cons of improving SMP scaling in Linux can be debated relatively independently of pre-emptibility improvement opportunities.

Responsibility to the community Embedded Linux companies have responsibilities to the open source and Linux communities, as well as to the embedded system product development communities. They have a responsibility to innovate and release innovations early and often, for public comment and contribution. They have a corporate responsibility to do their best to enable Linux to be a viable operating system platform for embedded system design and implementation. Their customers will also find significant value in the exercise of that responsibility, through the delivery of such product technologies as a preemptible Linux kernel. The Linux kernel community is large and diverse. In every technical area, there is lively discussion and debate. Pre-emptible kernel technology is no different. The embedded systems marketplace, and the Linux community itself, will eventually decide the relative merits of pre-emptible kernel technology. Issue 15 • 2001

The author Kevin Morgan is Vice President of Engineering at MontaVista Software. He has 20 years of experience developing embedded and real-time computer systems for Hewlett-Packard Co. Experienced in operating systems and development. Kevin was a member of the HP 1,000 computer software design team. While at Hewlett-Packard, he worked as an engineer, project manager and section manager spanning the development of five operating systems. Most recently serving as HP-UX Operating System Laboratory Manager, Kevin was responsible for overall HP-UX release planning, execution and delivery for Hewlett-Packard server computers.

LINUX MAGAZINE

55


INTERVIEW

ALAN COX

NUT-CRACKER Linux Magazine talked to him about his views on kernel development and what might be in store in the future

56

LINUX MAGAZINE

Linux Magazine What drives you to do all this good work? Alan Cox I enjoy it! LM How do you find the time to do it all? AC Large amounts of sleep. I work US hours so sometimes I don’t get up until midday. LM What do you consider to be the most vital pieces of software that are missing from Linux? AC Better calendaring software, certain groupware programs. The big ones are now starting to fall into place – office suites like StarOffice and the KOfficework. Especially with StarOffice going to OpenOffice. The Ximian guys are working with Evolution to create a complete clone of Outlook with all the features and then some. A pure mailer program is Sylpheed – It means “Wind Spirit” in Japanese. LM Are the software support models for companies correctly set up? AC Support costs real money. You can pay large amounts of money for complete enterprise-wide support or just the back-end part. It depends on what you negotiate. All the support is there in theory. The Red Hat guys think they are doing a wonderful job but you should really ask the customers. LM Development of the kernel does not use the CVS model. Why not? AC The kernel proper does not use CVS but some developers use some for their parts. The big problem with CVS is that it is not a good way for a single person to have an overview of everything going in and the right kind of quality control and auditing that we require. I believe that Linus is using CVS as he wants to see everything in his CVS tree. LM How does the kernel grow and develop? AC A directed explosion is the best answer. Development goes off at all kinds of angles from a huge number of people for a large number of reasons. Sometimes it is because they see a financial advantage – if we pay someone to do this, then we can sell that. Other times people do it because they like a piece of hardware or they have bought a USB device that is not supported and think that “this is cool and I want to use it under Linux”. 
Some people do it out of academic interest, researching a given topic to improve a piece of software. LM Does it come about that features are left hanging because everyone wants the feature Issue 15 • 2001

but no one gets around to doing it? How is this co-ordinated? AC It’s not coordinated. It does sometimes happen, but eventually it irritates someone enough that they fix the problem. It is not uncommon that we have a piece of hardware that someone has written a driver for, but it is not really being maintained or the person who wrote it did not care about certain machines that it did not work on. If there are enough people using it then someone will sit down and say “okay I need to fix this” and then do it. Sometimes it’s the vendors, because when they run their QA test there is a problem. Often it is the end users. For example, those people with very old Soundblaster CD drives, if we break something then they still fix the driver. LM Do you get much in the way of requests from users who are unable to contribute directly to the kernel development? AC We certainly get feedback saying it would be nice if Linux did this or if Linux did that. The vendors are very good at getting feedback - “we would install five hundred machines but...”. It can be very useful. LM Do you think that there are good lines of communication from end users to coders? AC Yes. To the extent that what matters is that the end user is prepared to do the work or that they are prepared to pay someone to do the work, and that is how things come around. LM With the continued growth of Linux, do you see any downsides? AC Having a larger mailbox is the biggest potential downside. In the early days the Linux kernel would get two or three messages a day. It has continued to grow with more people becoming involved and more happening in the kernel. I do not know how much longer that will continue to happen essentially we are running out of things that are important to add to the kernel. Most of the really critical things are now in user space. For example, the world of KDE and GNOME, 3D graphics and all those kind of things. LM What is in the next version of the Linux kernel? 
Is that the type of question that’s even polite to ask? AC It’s the type of question we don’t know the answer to! There are things people are contributing which look like obvious candidates: An improved input layer; the ability to have multiple monitors and


INTERVIEW

multiple consoles used more sensibly; plug two mice, two keyboards and two monitors into one Linux box; various pieces of filesystem stuff – XFS, JFS. Compaq are donating clustering code, which is a very interesting and exciting area. You would be able to have a room full of Linux boxes acting as one system, but on top of that, if you lose specific machines then the system continues without a glitch. This is very important for a lot of business applications. LM Filesystems, then. Do you have a favourite filesystem? AC Journaled filesystems are useful for lots of applications. It really depends on what kind of thing we’re trying to do. So, we have things like ext2 which migrated into ext3 – a journaled filesystem, which does exactly what every standard generic filesystems has done over a period of time. Reiser FS has done a lot of work on small files, on faster directory handling. IBM’s JFS is looking extremely fast – it is interesting for that. We will see, I think, over time, which one will work out for the best. It’s a bit muddy at the moment for some of them. LM In what way will that muddy water clear? Who will be the victor? AC I don’t actually think there will be a victor. Before, we had competing filesystems – ext fs, xiafs – that was a long time ago, then ext2 came along. Pretty much everyone ended up running the same filesystem after six to nine months, simply because it was the natural one to use. It was the one everyone else used and it seemed to work. I think the vendors will ship the filesystems that work. They do a lot of QA testing on that. In some cases we have specialist filesystems: JFFS for Flash devices is very useful for the iPAQ, but completely useless for your average hard disk user. LM What hardware do you recommend and do you get involved with hardware concerns? AC I try to pick devices with free drivers which work. Like everyone else, I’m trying to build myself a machine that works at a sensible speed or uses as little power of whatever. 
I play with a fair amount of stuff. I build machines that are all Athlon because that gives me the best build performance. For desktop machines I’ve been playing with the new VIA C3 chip – it doesn’t need a fan and is so much quieter, but it’s not a speed demon. I’ve not really played with the Pentium 4, some benchmarks but that’s as far as I’ve got. It’s the first generation of the chip and I think the real question is not how the P4 performs now but in twelve months time. LM The support from hardware manufacturers is growing but do you think anything can be done to encourage them more? AC Most of them make the openness decision based on business risk, or financial reward. So, for example, a lot of small USB vendors have no secrets. Some

hardware vendors like SiS, who are working very closely with the Linux community, are keen to give good support. We also have people like nVidia who are more worried about giving away secrets to rivals. They are worried that if they release their software technology then someone is going to use it and make the ATI Radeon run even faster than their card. You talk to these people and sometimes it makes sense, sometimes it doesn’t, but at least you understand their point of view. The other thing that has really helped is when people like Compaq and particularly Dell and IBM get involved, because when they are building a server they think Linux is going to be one of the supported systems. It means that they go to the hardware vendor they buy from and say: “If you have Linux support for this board then we will buy them in large numbers and sell them in our servers”. For many more conservative businesses, the fact that you can actually say, “If we do this piece of work for this approximate cost, we will get this money back”, obviously makes it easier for them. Many of them are generally uneasy about giving out documentation to you, as they are more used to a traditional business relationship. LM We were talking to someone at the Systems show who offers the service of a manager/coder interface. Is this common? AC It’s one of the jobs that LinuxCare have been doing. It is, to a certain extent, part of what Red Hat offer in doing direct device driver work. We will write you a driver, we will help you commit it to the mainstream kernel, if that’s the way you want to go, and we will convert the things the techies say into the things the management need to hear about: time scales and pricing. It’s no good asking your average programmer, “How long will this driver take?” – the answer is always far too short, so the project will overrun, and they are often far too vague. LM Do you use a desktop or are you a console man? 
AC Well, I mostly use X. I tend to use XFce for my desktop most of the time, then run mostly GNOME applications, sometimes KDE. You can set both GNOME and KDE up to look the same. In the desktop world I am very much an end user and, as far as I am concerned, it all looks the same. OK, some bits are Qt, some bits are GTK, but who cares? LM You have a very good line of communication to the community – tell us more about your diary. AC Well, the diary was originally set up pretty much for that purpose, because when I joined Red Hat one of the things Eric wanted, as my then manager, was a monthly summary of what I was doing. So I figured, seeing as this was open source, the monthly summary probably ought to be too, and that became the diary.

Issue 15 • 2001


LINUX MAGAZINE



INTERVIEW

Richard Morrell and SmoothWall

SMOOTHWALL Linux Magazine What is SmoothWall? Richard Morrell SmoothWall is a specialist version of Linux which has been carefully designed, secured and optimised in order to provide a network with all the functionality of a secure router and firewall, but at a fraction of the normal cost. It started out as a personal project. I didn’t want to, and couldn’t afford to, buy a WatchGuard box (£1,600 – £1,800). That’s a lot of money, and I have a young family so couldn’t afford it. I was involved with user groups in the Bay Area and Silicon Valley, where I was working with VA. I also discovered a young chap called Lawrence Manning and we spent more and more time together. In the end what we wanted to do was try and do all the development over the Web. It all sort of snowballed from there; we said, hmmm, this works, maybe we can elaborate on this and see if anyone else is interested. We decided that maybe we could make it into a project, because at the time I was involved in deploying SourceForge in the US. It was really quite an exciting time to be part of VA Linux. There was plenty of money about, plenty of bandwidth, and maybe some of that excitement rubbed off and we came up with SmoothWall. From there we came up with a logo and decided that if we were going to do this properly as a project we should come up with a brand before we came up with the product. With properly designed logos, proper domain names and proper architecture, we were ready to roll, should the project ever take off. SmoothWall became available on July 15 2000, on SourceForge initially, and I still remember the first 16 people to download it. It grew from there; after the first three or four weeks we had four and a half thousand users and I thought that I was completely out of my depth. At this point I was spending a lot of my work time on SmoothWall. A lot of VA corporate customers in the UK started to use it. 
That was because VA would send me to customers and they would end up talking about SmoothWall instead of VA. Then it appeared in Linux Magazine in the States and in Linux Journal. All the big magazines carried it as a coverdisc and we got prime billing, because at the time there wasn’t a huge amount of stuff going on in the UK. We were the only Linux project in Britain going, apart from woffle. As we started growing we began using more and more bandwidth, and costs started increasing. I’ve had to put the best part of £35,000 in just to keep it

afloat, which is a lot of money when you have a young family. Unfortunately, open source doesn’t pay the bills and geeks expect stuff for nothing. If you try to turn commercial, then they attack you, and they’ve attacked me like I’ve never been attacked before. I’ve received nasty mail, and even my son has suffered part of a death threat: some guy in America threatened to burn my house down, and told my son that. This is all because I want to take part of the project and make it non-GPL, because part of it will use non-GPL code that we couldn’t GPL even if we wanted to. The Linux community is full of wonderful people, but it also has its fair share of morons who haven’t got a clue. They sit in their bedrooms developing code and think that anything that involves a GUI or a browser is not suitable for public consumption. They take the GPL to its extreme. I know most of the Linux luminaries because of my time at LinuxCare and VA. I’ve been around long enough – I am Richard@linux.com. One of the biggest voices we have is Alan Cox, and if I have a problem with SmoothWall at one o’clock in the morning and I need kernel advice I know I can phone up Ted Searle, or anyone, and get advice. There aren’t many other projects that have the breadth of friends that we do. If I need bandwidth or testing I phone up Larry Augustin and it happens. Last night we were working on licensing issues and it was Chris DiBona and Joe Ruiner, and you can’t get much higher than that without going to Linus – that’s an advantage we have that other Linux projects in the UK just don’t. A lot of it is built on cronyism and that certainly helps. It’s hard graft, and when you are putting, very often, 21-hour days into open source, you can’t be a family man as well. So far, so good, though: we are up to 740,000 installs worldwide since 2000. We know this because each individual Smoothie calls home to register – we are very open about that fact – and we now have to support users in 107 countries. 
LM Is that the best way to describe how big the SmoothWall project is, by the number of installs? RM I prefer to use financial figures. We have over 70,000 SmoothWall installs that manage systems with more than 300 clients behind them. So, imagine that each such company would have had to buy a Cisco PIX box for $15,000 to $18,000 and instead

Richard Morrell tells us about SmoothWall and its development, and shares with us his views on Open Source software



Smoothwall on a chip.


they have replaced it with something open source. They don’t necessarily know it’s Linux and they don’t really care – it does the job. Now, if each of those PCs has an inherent value, then SmoothWall protects something in the region of $3.1 billion worth of hardware worldwide, and that’s just the corporate clients; that doesn’t include home use. That also doesn’t include the cable users. In the UK alone we have over 24,000 installs. Each one of those SmoothWalls calls home, and you can’t argue with the figures – they are in black and white, and we make those figures public. In the UK we’ve been logging since April this year; prior to that we didn’t bother because it was still just us having fun. We then realised that this could be commercial and we could be acquired. If you are acquired you need to be very adult and grown up, and you need to be able to prove to people what you’ve actually done. You can say “I’ve got 50,000 users”, but unless you can prove it, it doesn’t mean a thing. Now, if you look at the Web site you’ll see about three or four thousand people who have written back to say “Hey, we like this” and “Hey, we don’t like that”, but you will also see some really nice quotes. This is what really makes it worthwhile. It’s not the money – because we don’t get paid. It’s things like the government of Peru running SmoothWall instead of Cisco; colleges in the UK ditching Cisco and being able to spend that money on teaching budgets; hospitals in Australia; schools in China. You think “Cool!” – that’s the nice thing. You get the gimps and the gits who haven’t got a clue. They read part of the GPL and don’t understand what it means. If Richard Stallman says I’m GPL, I’m GPL. We regularly go to battle with the Free Software Foundation, who are a paper tiger with no teeth at the moment. LM How do you see the SmoothWall project developing? RM As of mid-November we became a limited company. 
We’ve put too much money into SmoothWall to keep it as a project. Just to keep it alive costs me around £4,000 a month. We use something like 1.4 terabytes of bandwidth a month on the Web site. We get 16.1 million Web page hits

a month – that’s a lot of hits and, unfortunately, that’s got to be paid for; you can’t get sponsorship for that. We are currently located in Raleigh, North Carolina, alongside Red Hat: same ISP, different boxes. We pay our bills; we have some sponsorship from High Speed Web.net, a dubious ISP in North Carolina, but I don’t care what their money is like – they pay for our hosting. And then we have Tucows mirror sites in 11 or 12 countries. We based the way that we’ve grown on what Jeremy Allison has done with Samba. We have learned to sit back and watch the failings of other projects. We are quite aggressive, and I think we are seen as quite rude and arrogant, but then there’s a reason for that – we’re not a Linux project. Over 70 per cent of our users don’t use Linux for their systems. Once SmoothWall is up, it’s up – it’s just a box. You could paint it pink and put a bow on it, it doesn’t matter; no keyboard, no mouse – it’s a device. SmoothWall enables someone to take some old hardware, a P133 or P100, and turn it into a box that would have cost them thousands of dollars. Now, we’ve got our knockers who say that the Linux Router Project does this, and yes it does, but they do it for the Linux community. How many Windows users use that project? None. There are hundreds of comments on the Web site from people that say “This is the first time I’ve used Linux, I didn’t know about it, thank you for making it so easy”. I go on what people tell me. LM What are the disadvantages with the way the project has grown – did it grow too fast? RM It didn’t grow too fast; it’s always been managed. The trick with an open source project is to plan the team. It’s no different from building a sales team or a management team. You’ve got to understand the strengths and weaknesses of the people involved. As project manager you’ve also got to be able to stand back and let people stand on their own two feet, without standing on their toes – too much. 
My gut feeling with SmoothWall is that we, deliberately, didn’t grow too fast. We could have grown to 50 or 60 developers by opening up a CVS tree, and we didn’t. A CVS tree is all well and good if you are running something like the Gimp, or if you are developing a multimedia application or a theme for KDE, where you need the input of designers and graphics bods, and people with a knowledge of X and KDE, from all over the world. SmoothWall is a secure system – whether it’s based on Linux or BSD or Mac, who gives a monkey’s, it’s secure. When we release the product we release the source – that is our definition of open source. If we had used open source ways of working, open source methodologies, we would have been dead 13 months ago. LM It’s obvious that you do spend a lot of time on this project.



RM A huge amount of time. I’m still on the IRC channels at 2am kicking and banning people who are moaning about the product. People say, why are you horrible to them, and I say it’s because they are not customers. If they are going to be customers then that is great, but don’t come in here moaning because you want a print server on SmoothWall. Linux gives people an opportunity we never had as developers six years ago. What SmoothWall should show people is that you can take a Linux distribution – and all we did was take one CD. We cut it from 650MB down to 40MB. Now, with a Linux distribution, you have all the ingredients you need. People make demands; they will say they want SmoothWall to handle multiple IP aliasing. They demand! Now, to me demand means I delete – I’m not interested. If people want to shout at me that’s fine; they’re not paying me. I do give people what they want, but I give it to them in a controlled manner. We have our updates programme, which works very well. You would expect the same of a commercial software organisation. The product works, it looks clean, it does what it says. The documentation is OK – it’s not brilliant, but then it is not a commercial product. Our commercial product does have brilliant documentation, everything you would expect of a professional product. But with the GPL project we try to give people value, for no money. With the project we have given people the opportunity to go away and think: if the SmoothWall team can do it as a hobby, imagine what could be done if someone really tried. SmoothWall has done a lot of good; we have raised a huge amount of money for charity. We have been earning money for a year now for something called the Dorothy Miles Cultural Centre in Fleet, Hampshire. We read about them in the papers, and what we do is provide them with facilities: if they need a copy of some software or some hardware, we will buy it for them. 
We have always encouraged people who want to give to charity on our behalf to give to Dorothy Miles; I think we’ve raised about £5,000 for them, which really makes me made up. We also sponsor a junior football team, which keeps you at ground level. What a lot of Linux companies have done is sponsor beer fests and geek get-togethers. LM Do you agree that there is a place for that? RM Oh, yeah, it’s very necessary. I was doing it a couple of nights back – I was geeking until 6am. But we’re having to move on. The Linux industry in the UK doesn’t realise that there is a demand for people with Linux skills; there are not enough Linux consultants spending enough time in reality to address that skills shortage. They are shooting themselves in the foot. SmoothWall is about trying to be good at one little thing. Don’t always try to be good at everything.

SmoothWall is about making something secure and keeping it secure; not trying to be too big for your boots. Take the product and polish it – if you polish it enough it shines. We’re not shiny yet, but we are getting there. It’s taken a long time. We are on our ninth release. LM With your aspiration towards a more commercial project, what complication does that give you with the GPL? RM Very few! My personal view of the necessity for the FSF to remain a fighting political force is marred by the fact that they are a force no more. They shout about how the GPL has never been challenged in court. I think they are probably quite thankful, because they wouldn’t have the money to defend it. LM Do you think the FSF could do more, or should have done more in the past? RM I had an article on SmoothWall last week saying: “We want to take SmoothWall commercial. What can the FSF do to help us?”. Unfortunately the FSF are not interested in helping good GPL projects go commercial. LM Why is that? RM A very honest answer is that they don’t have any money. To help people like this costs money. Now, if I wanted advice from the FSF I would be happy to pay for it, just like I would be happy to pay for a consultant from the bank. The problem is they don’t have the advice; they don’t have the finances to develop advice. Georg Greve works damn hard to give what help he can. But there are differences between Europe and America. He stands up for Richard Stallman, who sometimes can be a bit of a liability. Richard is a man with ideologies and I admire him for his persistence. There are people who are put on this planet to make a difference; Richard is one of them. LM How do you see online.smoothwall.org developing? RM That’s going to be a commercial subscription-based support service for people who download the free software. They can use that, or they can still use the more traditional forms of support like newsgroups and IRC. 
It costs money to run, but it is important because it puts people in touch, and it will foster new customers for us commercially. For us it is a move away from mailing lists that are becoming unmanageable. We’re getting 5,500 posts to some of the mailing lists, and I can’t cope with that – and if I can’t cope with that then God knows what kind of message it is sending to our customers. Flame wars are far too easy to start on mailing lists as well, and flame wars are so nineties!


SmoothWall: www.smoothwall.org



PROGRAMMING

C: Part 2

LANGUAGE OF THE C

A language so synonymous with computing history and Unix that its very name is the epitome of the elite. These articles for the beginner, by Steven Goodwin, teach you the fundamentals of ANSI C, as well as providing interesting snippets from under the hood of the compiler.

First a correction from last month’s article. In Table 1 we said a character has size 1, with a signed range of -128 to +127 and an unsigned range of 0 to 128. The unsigned range should have read 0 to 255.

Switched on Bach

The switch statement is a polite way of writing twenty ‘if-else if’ statements. It will evaluate its given expression and, depending on the result, will execute a single specific case statement. Should none of the given cases match our result we can (optionally) supply a default case. If the result does not match, and there is no default case, nothing happens and the code continues executing at the next line after the switch statement. Before this paragraph, you may have written:

if (iNumber == 0)
    printf("zero");
else if (iNumber == 1)
    printf("one");
else
    printf("Not a binary digit");

But don’t write that! Write this!

switch(iNumber)
{
case 0:
    printf("zero");
    break;
case 1:
    printf("one");
    break;
default:
    printf("Not a binary digit");
    break;
}

In all cases (pun intended!) the same expression

Express Yourself

One of Dennis Ritchie’s tenets for C was an ‘economy of expression’. Whilst this is true, the ‘rich set of operators’ he also endowed it with can provide hours of fun for the bored programmer! An expression consists of a number of terms that are evaluated when the program is run. Each term should be of the same type (int or float, say), but can originate from anywhere – there is no distinction between an integer variable and an integer constant. Or, for that matter, a function which returns an integer! So while an expression like

a = b * c;

is both valid and usual, ‘b’ could be a function that returns an ‘int’, allowing:

a = GetNumEntries() * c;

saving temporary variables for you, and reducing processing for the computer at run-time. Also, remember the function ‘LeaveGap’? The input parameter was an ‘int’, so an expression of type ‘int’ would work in place of an integer constant. This allows code like

LeaveGap(b*c);

or

LeaveGap(GetNumEntries() * c);

and, as if to complicate matters further,

LeaveGap(a = GetNumEntries() * c);

If the function returns a void, it doesn’t actually return anything, so you cannot assign it to a variable.

a = Banner(); /* ERROR: Banner returns a void */
Banner();     /* CORRECT: void functions can only be called like this */

The table below presents the basic set of expressions used in C. This is not a complete list – just enough to get you out of trouble, but not so many as to get you in! The term ‘ident’ indicates where a variable should be, whereas ‘exp’ can be replaced by a variable, a constant number, a function (with the appropriate return type) or (recursively speaking) another expression. It is this recursive nature of applying expressions in code that can make C very – how shall I put it politely – illegible! It is very possible, and easy, to include expressions inside expressions inside expressions. To make sure the compiler generates code to evaluate them in the correct order, you should use the brackets ‘(‘ and ‘)’. In future articles we’ll look at how C decides which order to evaluate expressions in. This is called precedence, and helps remove the unwanted clutter of brackets. However, if you need to know the precedence before you can understand the code, it’s too complex, and needs simplifying with brackets or separate lines!



-exp – Unary minus. Turns 4 into -4.
++ident – Pre-increment. Increments the variable by 1, then uses that value as the expression. a=1; b=++a; /* Here, a=2, and b=2 */
ident++ – Post-increment. Uses the value of the variable, and then increments it. a=1; b=a++; /* Here, a=2, and b=1 */
--ident – Pre-decrement. As pre-increment, but subtracts one.
ident-- – Post-decrement. Got the idea yet?!
!exp – Logical not. Turns a zero into a one, and any non-zero into a zero. C’s concept of true is anything non-zero, which is why code like ‘if (x != 0)’ is often written ‘if (x)’.
~exp – Bitwise not (ones complement). Flips each bit, turning 12 (1100) into -13 (1111111111110011).
exp * exp – Multiplication. Despite C’s low-level tendencies, there is no carry and no overflow with any mathematical operation.
exp / exp – Division.
exp % exp – Modulus (remainder). 10%3 is 1, for example.
exp + exp – Addition.
exp - exp – Subtraction.
exp >> exp – Bitshift to right. Only makes sense for integers. 8>>1 = 4. Traditionally a fast way of performing a divide by 2 (or a multiple thereof), although most modern compilers will optimise this automatically.
exp << exp – Bitshift to left. Similar to bitshift right, except this is akin to multiplying. One interesting use is ‘1<<x’, where ‘x’ is a bit number (0 to 31): the result is a number with only bit ‘x’ set. 1<<10 = 1024.
exp < exp – Less than. Evaluates to a 1 or 0 (like all similar operations).
exp > exp – Greater than.
exp <= exp – Less than, or equal.
exp >= exp – Greater than, or equal.
exp == exp – Is equal to. Evaluates to a 1 or 0. C uses the double equals to differentiate between equality and assignment, since both can occur in places marked for ‘expressions’. As this is one of the more common typos in C it is preferable to write ‘if (0 == iNum)’ instead of ‘if (iNum == 0)’. This way, should you accidentally omit one of the equals signs, the case of ‘if (0 = iNum)’ will be invalid, since zero can never be assigned to. On the other hand, ‘if (iNum = 0)’ means assign 0 to iNum, and evaluate (to 0 – i.e. false); therefore the ‘if’ branch never gets called.
exp != exp – Not equal to. Evaluates to a 1 or 0.
exp & exp – Bitwise And. Compare each bit, and only set the equivalent bit should both be set. E.g. 1&2=0, 3&1=1. Used for masking flags to see which are set.
exp ^ exp – Bitwise Exclusive Or. Compare each bit, and only set the equivalent bit if both differ. From the truth table: 0^0=0, 0^1=1, 1^0=1, 1^1=0. Very useful for flipping bits: x^1 swaps the least significant bit. It is also bi-directional: y=x^73; y^73=x.
exp | exp – Bitwise Or. Compare each bit, and set the equivalent bit if either is set. E.g. 1|2=3, 3|1=3. Used for setting flags.
exp && exp – Logical And. Evaluates to a 1 if both expressions are non-zero; otherwise, it’s a 0.
exp || exp – Logical Or. Evaluates to a 1 if either expression is non-zero.
exp1 ? exp2 : exp3 – Ternary, or conditional. Evaluates to exp2 if exp1 is non-zero, otherwise to exp3. Similar to an ‘if’, but because this is an expression it can be used in places where the ‘if’ (a statement) cannot (i.e. as a parameter to a function), and since it gets evaluated you can write code such as ‘a = x==0 ? 1 : 2;’ instead of ‘if (x==0) a=1; else a=2;’.
ident = exp – Assignment. Copy the value of exp into the variable. You can also link assignments, e.g. x=y=z. This is because ‘y=z’ is an ‘exp’, and ‘x=exp’. See also ‘Is equal to’.
ident += exp – Add, then assign.
ident -= exp – Subtract, then assign.
ident *= exp – Multiply, then assign.
ident /= exp – Divide, then assign.
ident %= exp – Modulus, then assign.
ident >>= exp – Bitshift right, then assign.
ident <<= exp – Bitshift left, then assign.
ident &= exp – Bitwise And, then assign.
ident ^= exp – Exclusive Or, then assign.
ident |= exp – Bitwise Or, then assign.
exp1, exp2 – Multiple evaluation. This evaluates both expressions, but does so as separate entities. If this is used recursively within another expression (i.e. x = exp1, exp2), then x is assigned the value of exp1.
sizeof(ident) – Size of type. Calculates the size of the variable given, in bytes. Can be used to validate the sizes of variables shown in Table 1. The evaluation of this expression is done at compile-time.

The ‘..., then assign’ expressions are a very usable feature of C. From the programmer’s point of view, it saves typing: ‘iCount = iCount + iNum;’ becomes simply ‘iCount += iNum;’. From the computer’s point of view, it only needs to find the memory location of ‘iCount’ once. Not much saving here, you might say, but if ‘iCount’ were a complex expression the savings would certainly mount up. Note that ‘iCount += 1;’ is equivalent to ‘++iCount;’, not ‘iCount++;’, since the latter must retain the original value of iCount to evaluate the expression correctly.


(iNumber) is compared, allowing the compiler to do more optimisations, the reader to gain a greater understanding, and the programmer less chance of making a mistake! In this example, the order in which the cases appear is of no consequence. The default need not appear at the bottom, either – it’s just a convention. However, this is not always the case (no pun intended!).

Layout

C is a free-form language, meaning that the layout is fairly unimportant. Whitespace (tabs, spaces and newlines) can appear anywhere within the source (except inside strings) and the compiler will blissfully continue without giving it a second look! Both pieces of code that follow compile identically.

iAverage=iTotal/iElements;

iAverage = iTotal / iElements;

This allows you to indent your code in a manner that is meaningful to you. There are a number of styles and guidelines available on the Web. None of them are ‘right’, in the same way there is no ‘right’ text editor! Avoid holy wars – find a style you like and stick to it. If you are working in a code shop, it is likely they will dictate style guidelines for you to follow. If you are maintaining code, then adopt the style of the original author.

C Dialects

For a language that is intended to be portable, there are a number of different versions. It is useful to know they exist, especially if you plan on writing code for more than one platform.

K&R The original. Rarely used nowadays.
ANSI C The most common, and the focus of these articles. GCC is largely compliant with ANSI C.
C99 A recently ratified update to ANSI C. This version supports single-line comments (à la C++) and dynamically sized arrays.
Small C A subset of ANSI C.
Objective C A superset of ANSI C, incorporating object orientation and message passing.

This article (and most code in circulation) conforms to the ANSI C standard. However, depending on the application, some code will use specific libraries that are not covered here in any depth. Examples of the more common libraries are curses, sockets and X. None of the functions they provide are part of the ANSI C standard, but because of the design of the language such libraries can be added at any time (even after the compiler has been written and shipped) without breaking existing code. These extensions usually ship with (at least) one header file and one library file. Also (as if to complicate matters), most compilers implement a number of extensions: features of the language that are not included in the standard, but added because the compiler programmers thought they were ‘a good idea’. I, personally, disagree with them. They encourage non-standard, non-portable code and tempt the unwary into bad habits, since the feature may not be implemented on the next platform (or even the next version of the compiler) they use. GCC, for example, supports nested functions.

Break On Through

The break statement above appears innocuously enough. It has a simple property, but with some nasty side effects (which we will come to later). Basically, it causes execution to jump out of the current statement – in this case the switch statement. If break is omitted, execution continues with the next statement in the switch – even if it is not part of the same case.

switch(iNumber)
{
case 0:
    printf("zero");
case 1:
    printf("one or zero");
    break;
case 2:
case 3:
    printf("two or three");
    break;
}

As you see, case 0 ‘drops through’ to case 1 because there is no break to stop it. Similarly, case 2 drops through to case 3 for the same reason (it is not necessary to have any code associated with a particular case). So, what is the price to pay for this rather groovy statement? Well, it can only switch on constant cases. That is, the value after the word ‘case’ must be constant: a number, or a single character (represented with ‘A’, remember). A case such as:

case iNumber2:
    printf("Both numbers are the same!");

is illegal! iNumber2 is a variable, and therefore not constant. The other problem is that strings cannot be compared with a switch (and we’ll find out why when we cover strings).

Hello Nasty The word break can be used anywhere a statement can. It jumps out of the current block (i.e. the switch) and continues with the next instruction. Some programmers introduce unwitting bugs by not realising what block it will jump out of. A break will only jump out of statement blocks created with:



switch
while
do...while
for

For example:

iVal = 0;
while (iVal < 100)
{
    iVal++;
    if (iVal == 10)
    {
        printf("Limit reached...");
        break;
    }
}

This while loop only iterates 10 times, because of the break statement. What is subtle is that the statement attached to the ‘if’ does not get considered as a block (it is not in the list above, note). Also, the break will only jump out of one block – to be specific, the current block. So if you have nested two loops and issued a break command inside the inner one, only it would terminate. This is most obvious when using for:

for(y=0; y<20; y++)
{
    for(x=0; x<32; x++)
    {
        printf("X");
        if (x == y)
            break;
    }
    /* break causes the code to jump here, and continue with the next value of y */
    printf("\n");
}

Layout (Case)

C is case sensitive. All the reserved words (like ‘if’ and ‘while’) must be written in lower case, as must the type names (‘int’, ‘float’, ‘char’ and so on). Variable names, on the other hand, do not need to be in lower case, but you should be consistent when naming and using them. ‘Num’ and ‘num’ are different variable names, and often cause problems, especially for non-Linux users who are used to the case-insensitivity of DOS and Windows. It is best to establish a style, perhaps using underscores instead of spaces, or capital letters to indicate new words.

The author Steven Goodwin celebrates (really!) 10 years of C programming. Over that time he’s written compilers, emulators, quantum superpositions, and four published computer games.

Issue 15 • 2001

LINUX MAGAZINE

67



Python: Distributed applications with XML-RPC

PUPPET OBJECTS

XML-RPC is a portable XML-based protocol for Remote Procedure Calls. In conjunction with Python it can be used for quick access to distributed applications. Andreas Jung explains how

These days many applications exist in a distributed environment. This means applications are running on several machines that communicate with each other and exchange data. There are two basic setups: on the one hand there are message-oriented processes for interprocess communication, like named pipes; on the other hand there are Remote Procedure Call (RPC) processes. RPC processes are used in Unix, for example, to implement various daemons and services such as the portmapper or NFS. The first standardised processes for providing services across the boundaries of individual machines were created in the 90s. The most important of these are CORBA (Common Object Request Broker Architecture) and its proprietary Microsoft counterpart DCOM. Both standards are very complex and therefore more suitable for large enterprise solutions. For the Java environment there is the RMI mechanism (Remote Method Invocation), which allows remote procedure calls between Java programs. Unfortunately RMI is not portable and cannot therefore be used from other programming languages. XML-RPC bridges this gap by offering the following two advantages: one, it is portable, i.e. it is independent of any particular programming language or operating system and can therefore be applied universally; two, it is simple to implement, because XML-RPC is based on the established standards XML and HTTP.

Listing 1: Client-server communication with XML-RPC

Client request to calculate the sum of 17 and 15:

<methodCall>
  <methodName>example.sum</methodName>
  <params>
    <param><value><int>17</int></value></param>
    <param><value><int>15</int></value></param>
  </params>
</methodCall>

Server response:

<methodResponse>
  <params>
    <param><value><int>42</int></value></param>
  </params>
</methodResponse>

Table 1: Data types in XML-RPC

Type              Description
int               whole number with sign, 32 bits in length
string            character string (typically with Unicode support, since XML explicitly demands it)
boolean           truth value, true or false
double            double-precision floating-point number
dateTime.iso8601  date and time
base64            base64-encoded raw data
array             one-dimensional array, in which the individual array values can be of any type
struct            a set of key-value pairs; keys must be character strings, values can be of any type

How XML-RPC works Communication between a client and an XML-RPC server always takes place via HTTP. The advantage of this is that it allows the use of existing components. Using HTTP also simplifies communication across firewalls and proxies. The client’s request to the server and the server’s response are encoded in XML. Listing 1 contains an example of this, showing a simple arithmetic calculation using integer data types. Table 1 shows all data types that can be passed between client and server.
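The encoding step of Listing 1 can be reproduced by hand. The sketch below uses the xmlrpc.client module of current Python versions (the successor of the xmlrpclib package discussed in this article) to build a request and decode a response; module and function names are today's, not those of 2001:

```python
# Sketch: the XML on the wire, encoded and decoded with xmlrpc.client
# (the modern descendant of xmlrpclib).
from xmlrpc.client import dumps, loads

# Client request for example.sum(17, 15), as in Listing 1
request = dumps((17, 15), methodname="example.sum")
print(request)  # a <methodCall> document like the one shown above

# Server response carrying the value 42
response = dumps((42,), methodresponse=True)
params, method = loads(response)  # method is None for a response
print(params[0])  # 42
```

Whitespace aside, the generated XML matches Listing 1.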

XML-RPC for Python

At the moment there are two XML-RPC implementations for Python. One is the xmlrpclib package by Fredrik Lundh, currently maintained by Pythonware. This package supports XML-RPC for server as well as client applications. The client-side components of the package are going to be integrated into Python and should make their first appearance in version 2.2. All examples in the following text relate to this package. The other implementation is the more recent py-xmlrpc project. Its authors, Chris Jensen and Shilad Sen, have re-implemented time-critical parts in C, with very good performance results. Unfortunately the documentation is still on the sparse side.

XML-RPC clients with Python

The Server object of the xmlrpclib module makes it easy to address XML-RPC servers from Python:

import xmlrpclib




server = xmlrpclib.Server("http://localhost:9000")
print server.example.sum(17,15)

The constructor binds the passed HTTP URL to the server object. In our example the XML-RPC server is running on the local machine on port 9000. The actual RPC call of the sum() method in the example class is similar to a local function call, with the restriction that keyword parameters such as sum(a=17,b=15) are not allowed. Some XML-RPC servers support the so-called Introspection API, which clients can use to obtain information about a server's method calls. All methods of this API can be addressed via the system object of the server instance (see Table 2).

XML-RPC server in Python

Creating an XML-RPC server with Python also involves little effort. The module xmlrpcserver provides all the important functions required. Listing 2 shows the body of such a server. The call() method of xmlrpcHandler is called by the underlying socket server for each incoming request and receives the name of the method to be called along with the arguments for the function call. The getattr() call checks whether the xmlrpcHandler class contains a method of that name, and returns a reference to this method if successful. The referenced method is then invoked with the appropriate arguments and its result is returned. Service methods, such as sum() in the example, are simply implemented as methods of the xmlrpcHandler class.
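For readers following along with a current Python, the same client/server pair can be sketched with the SimpleXMLRPCServer class that the standard library later grew (the xmlrpcserver module of Listing 2 was its ancestor). Port choice and method name follow the article's example; the threading is only there to make the sketch self-contained:

```python
# A runnable sketch of the article's client/server pair, using today's
# standard-library classes rather than the 2001-era xmlrpcserver module.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "example.sum")
port = server.server_address[1]  # port 0 above means "pick a free port"
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:%d" % port)
result = proxy.example.sum(17, 15)
print(result)  # 32
server.shutdown()
```

The dispatch-by-name step that Listing 2 performs with getattr() is handled here by register_function().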

Authentication for XML-RPC

The XML-RPC standard does not define any authentication processes. Instead, this is left to the transport protocol, HTTP. The most common method is basic authentication, in which the user name and password are transferred in the Authorization section of the HTTP header. Cookie-based authentication works in a similar way, but the information is stored in a cookie and then transferred. Unlike basic authentication this method is not standardised. If you want to be able to use basic authentication via XML-RPC you will need to extend the internal transport class, as shown in Listing 3. The new transport class BasicAuthTransport extends the HTTP header with the appropriate authorisation on each request. This is done by redefining the request() function of the base class. Applications can use the new transport class by passing an instance to the constructor of the XML-RPC server object (Listing 4).
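The header that BasicAuthTransport builds can be illustrated on its own. The sketch below performs the same base64 encoding step as Listing 3, with modern spellings (encodebytes rather than encodestring); the helper function is our own illustration, not part of any library:

```python
# Building the Basic auth header the same way Listing 3 does:
# base64-encode "user:password" and strip the newlines that the
# encoder inserts, which must not appear in an HTTP header.
from base64 import encodebytes

def basic_auth_header(username, password):
    creds = encodebytes(("%s:%s" % (username, password)).encode("ascii"))
    return "Basic %s" % creds.decode("ascii").replace("\n", "")

header = basic_auth_header("jim", "mypassword")
print(header)
```

This is exactly the value that ends up in the AUTHORIZATION line of the request.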

Conclusion Applying an XML-RPC interface to Python applications does not involve much effort. The essential part of the XML-RPC infrastructure remains

Listing 2: Server body

import SocketServer
import xmlrpcserver
import xmlrpclib

class xmlrpcHandler(xmlrpcserver.RequestHandler):
    def call(self, method, args):
        try:
            s_method = getattr(self, method)
        except:
            raise AttributeError, \
                "Server does not have XML-RPC " \
                "procedure %s" % method
        return apply(s_method, args)

    def sum(self, a, b):
        print 'Arguments:', a, b
        return a + b

if __name__ == '__main__':
    server = SocketServer.TCPServer(('', 8000), xmlrpcHandler)
    server.serve_forever()

Listing 3: Authentication with XML-RPC

import string, xmlrpclib, httplib
from base64 import encodestring

class BasicAuthTransport(xmlrpclib.Transport):
    def __init__(self, username=None, password=None):
        self.username = username
        self.password = password

    def request(self, host, handler, request_body):
        h = httplib.HTTP(host)
        h.putrequest("POST", handler)

        # required by HTTP/1.1
        h.putheader("Host", host)

        # required by XML-RPC
        h.putheader("User-Agent", self.user_agent)
        h.putheader("Content-Type", "text/xml")
        h.putheader("Content-Length", str(len(request_body)))

        # basic auth
        if self.username is not None and self.password is not None:
            h.putheader("AUTHORIZATION", "Basic %s" % string.replace(
                encodestring("%s:%s" % (self.username, self.password)),
                "\012", ""))
        h.endheaders()

        if request_body:
            h.send(request_body)

        errcode, errmsg, headers = h.getreply()

        if errcode != 200:
            raise xmlrpclib.ProtocolError(
                host + handler,
                errcode, errmsg,
                headers
            )
        return self.parse_response(h.getfile())




Table 2: Introspection API methods

Method                                     Description
server.system.listMethods()                returns a list of all methods of the XML-RPC server
server.system.methodSignature(methodname)  returns a list of signatures for a method name; for example, server.system.methodSignature('sum') returns [('int','int')]
server.system.methodHelp(methodname)       returns method documentation; in Python this is typically the documentation string of the function
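Table 2 can be tried out against Python's own standard-library server, which implements this Introspection API (a self-contained sketch using today's module names; the server is stood up only for the demonstration):

```python
# Querying the Introspection API via the system object, as in Table 2.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

srv = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
srv.register_introspection_functions()   # enables system.listMethods etc.
srv.register_function(lambda a, b: a + b, "sum")
threading.Thread(target=srv.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:%d" % srv.server_address[1])
methods = proxy.system.listMethods()
print(methods)  # 'sum' plus the system.* methods themselves
srv.shutdown()
```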

Listing 4: Calling the new transport class

import xmlrpclib
server = xmlrpclib.Server("http://localhost:9000", \
                          BasicAuthTransport('jim', 'mypassword'))
print server.example.sum(17,15)

hidden from the developer. This simplicity has contributed greatly to the popularity of XML-RPC, which has practically become a de facto standard.

New features in Python 2.2 Guido van Rossum and his team are currently working on Python 2.2, due for release at the end of the year. The second alpha release already offers some new features.

New division operator: //

The present division operator / returns an integer value when both operands are integers. This is inadequate for genuine floating-point arithmetic. From version 3.0 the standard operator / will return results as floating-point values, while the new operator // will be responsible for integer division. You can already use this new behaviour by adding

from __future__ import division

to your programs.
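Twenty years on, this is exactly how Python behaves by default, so the distinction can be demonstrated directly:

```python
# / is true division, // is floor (integer) division.
print(7 / 2)    # 3.5
print(7 // 2)   # 3
print(-7 // 2)  # -4, because // rounds towards negative infinity
```

Note that // floors rather than truncates, which matters for negative operands.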

Unification of built-in types and classes

Until now it has been impossible to derive your own classes from built-in types (lists or dictionaries for example). This limitation ends with 2.2. User-defined dictionary classes can now be derived as follows:

class MyDictionary(dictionary):
    def __getitem__(self, key):
        ...

Iterators

Iterators are closely connected to for loops. Sequence types (character strings, lists, tuples) used to be the only types through which iteration was possible within a loop. From 2.2 all objects can be used for iteration with for if they have implemented the new iterator interface. For example:

class count:
    def __init__(self):
        self.data = range(0, 100)
        self.n = 0

    def __iter__(self):
        return self

    def next(self):
        try:
            num = self.data[self.n]
        except:
            raise StopIteration
        self.n += 2
        return num

obj = count()
iter_obj = iter(obj)
for item in iter_obj:
    print item

In order to be able to do this, classes must implement the method __iter__(). An iterator object is created with the new function iter(). This calls __iter__() for the object and returns a reference to an iterator (normally the object itself). The for loop calls the next() function of the iterator until it raises a StopIteration exception.

Generators

The concept behind generators is closely related to that of iterators. They are basically functions which return a generator object when called and which provide data via the next() method. For instance:

from __future__ import generators

def numerator(N):
    for n in range(N):
        yield n

gen = numerator(100)
while 1:
    print gen.next()

The new command yield returns a value with each next() call. The local variables of the generator function are frozen, and processing continues at the same place on the following next() call.

The author

Python expert Andreas Jung currently lives near Washington D.C. and works for Zope Corporation (formerly Digital Creations) as a software engineer in the Zope core team.

Info

XML-RPC    http://www.xmlrpc.org/
xmlrpclib  http://www.pythonware.com/products/xmlrpc/
py-xmlrpc  http://sourceforge.net/projects/py-xmlrpc/


BEGINNERS

The Answer Girl

EGALITARIANISM

Have your Web links got split ends, simply because upper and lower case notation was ignored when they were produced? Patricia Jung shows us how to resolve this problem using a Perl script

At last, the commissioned Web site is finished. A sigh of relief in the office is closely followed by the sobering realisation that the Web designer has been working under Windows and has not been all that careful with upper and lower case notation of filenames, because on her Microsoft test computer index.html, Index.html and INDEX.HTML are all identical notations for a single file. Unix file systems on the other hand, such as the ones mainly used under Linux, ext2fs and ReiserFS, insist that a capital A and a lower case a are completely different things – even in file names. The site's relaunch still has to take place on time, and who wants to sit down and correct all the wrong A HREF details by hand across several dozen files? So the question arises as to how the whole deal can be dealt with automatically.

Defining the task

The task is by no means trivial because there is quite a bit to do. The first thing is to find all the references, pick out those which relate to local files, extract the corresponding filenames together with their paths, and check whether there is a file of this name at the appropriate place in the file system. If the designations of the file in the link and in the file system match, we need do nothing. If they are completely different, the best thing to do is to add a comment to the effect that this point will need reprocessing by hand. If the details differ only in upper and lower case lettering, we can adapt the filename in the link details. This does not look like something that can be solved simply with a couple of command line tools and a few pipes. Instead we will have to stick out our necks, do it properly and write a little script. A shell script, a sed script, an awk script... there are a number of options, but so that this article does not become overlong, let's agree on a Perl script. This is a good idea anyway, as Perl's regular expressions lighten the load a bit when it comes to search and replace operations. This would also work with sed, but since we have to check the presence of the files in the file system, sed could only cope with the aid of other shell tools. Perl has advantages here, since as a "real" programming language it also has functions for accessing the file system, and it is faster than a shell script. Awk comes into its own especially when working with columns, which in this case we do not want. It


The Answer Girl

The fact that the world of everyday computing, even under Linux, is often good for surprises is a bit of a truism: time and again things don't work, or at least not as they're supposed to. The Answer Girl in Linux Magazine shows how to deal elegantly with such little problems.

must be emphasised at this point that the choice of a specific tool for a specific task always depends on one's personal tastes. If you prefer Python or Tcl, that's perfectly all right. Unfortunately, Perl also has some drawbacks. Although, or more precisely because, there is masses of documentation – with manpages and tutorials on the Web as well as paper books – finding help on a specific question can be a highly time-consuming job. And since Perl wants to be "human" by allowing several notations for what in other programming languages is a fixed syntax, writing Perl code looks simpler at first glance, but reading is made more difficult when a Perl script originates from people with different Perl habits. Of course this versatility does not make learning Perl any easier.
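Since Python is explicitly named as a fine alternative, here is a sketch of the core of the task in Python: resolving each path component against the file system while ignoring case. The function name and exact behaviour are our own illustration, not the article's script:

```python
# Sketch: case-insensitive resolution of a link path, component by
# component, against the real file system (the article solves the
# same problem in Perl).
import os
import tempfile

def resolve_case_insensitive(root, path):
    current, corrected = root, []
    for part in path.strip("/").split("/"):
        matches = [n for n in os.listdir(current) if n.lower() == part.lower()]
        if len(matches) != 1:
            return None  # broken link, or ambiguous (Index.html next to index.html)
        corrected.append(matches[0])
        current = os.path.join(current, matches[0])
    return "/".join(corrected)

# demonstration in a throwaway directory
root = tempfile.mkdtemp()
open(os.path.join(root, "Index.html"), "w").close()
print(resolve_case_insensitive(root, "index.html"))  # Index.html
```

The Perl script developed below follows the same plan: split at /, compare each component case-insensitively, give up on missing or ambiguous names.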

Perls in action

All this lamentation is useless if the release date for the new Web site is imminent. So turn to your


BEGINNERS

favourite editor and create a new file. Let's call it cgks, as an abbreviation for "change upper to lower case notation" – who would want to call up a program starting with an "A", as in "Alter"? As in every script, the first line comes easily: it consists of a special comment stating which interpreter is to do its job here. Using which perl, we can find out in which path the Perl interpreter is located (provided it is installed and the corresponding directory is entered in the search path). Then we should have

#!/usr/bin/perl

sitting there. When developing programs it makes life easier if the interpreter points out the snares a bit more, rather than merely griping about real syntax errors. The manpage should come up with some information on this. First, though, man perl explains to us that the Perl manual "is split up to make accessing individual sections easier", which are reached as special manpages.

man perlrun    # Perl execution and options manpage

looks for the section we need, in order to find out more about options which make debugging easier. As a matter of fact, man perlrun explains an option -w (as in "warn"), which appears suitable for our protection.

What does perl -w do?

-w prints warnings about dubious constructs, such as variable names that are mentioned only once; scalar variables that are used before being set; redefined subroutines; references to undefined file handles, or file handles opened read-only that you are attempting to write to; values used as numbers that don't look like numbers; using an array as though it were a scalar; subroutines nested more than 100 deep; and innumerable other things.

Simply take everything

Now we want to edit lots of files – ideally, all those in the current directory. Yet this is something someone else should have done before us at some point. There are many Perl scripts in this world and on the Web, but those with comprehensible documentation are much scarcer. Searching the Web we find a script that converts international character entities in all HTML files in the current directory into real ISO characters (Figure 1) and, with $^I = ".bak"; in front, it even makes a backup file with the ending .bak. This last feature is one we first mark with a # at the start of the line so that the interpreter ignores it. Commenting out the line produces the nice side effect that in the meantime we are not writing any files at all, but are being shown the result on the standard output. In any case, it's much better for testing!

In Perl, simple (scalar) variables always begin with a dollar symbol, and a funny variable such as $^I must simply be something pre-defined. As a matter of fact, man perlvar explains that this switches in-place editing – thus the editing of a file which is currently being read – on or off. And the next line,

@ARGV = <*.html>;

looks like a pre-defined variable and an array because of the preceding @, thus a one-dimensional or multidimensional value field. @ARGV, the "argument vector", is one-dimensional and according to the perlvar manpage contains the command line arguments of the script. We can use this to define within the program which arguments it should actually be called with – obviously all files ending in .html. Perl makes provisions for the argument files to be opened and, via the handle <>, enables access to the data contained therein. So all we have to do is read off the content line by line, until there are no more lines:

while( $line = <> ) {
}

This is clearly a loop, which will run continuously as long as the condition in the round brackets applies. In Perl, it makes no difference whether we declare the variable $line needed for buffering first, with the my() function, or only allow it to arise where we need it. It's only in connection with object-oriented Perl programming that my() really becomes important, although it does no harm to get used to it right from the start. If we output the content of $line inside the curly brackets with

print $line;

our script should simply output the content of the .html files in the current directory line by line. We can test this in a directory that contains (not too many) HTML files. Since cgks presumably does not lie in the search path, we also state the path (perhaps with the dot as an abbreviation for the current directory).

pjung@chekov:~/answergirl$ ./cgks
bash: ./cgks: No such file or directory

No file or directory of this name? There's something




fishy going on here. We have in fact forgotten to give ourselves execute permission with chmod u+x cgks.

Pattern recognition

Since the script outputs the file content so nicely for us, we can now look for the links. Perl has the nice construct of "data to be edited by default", the data hiding behind the variable $_. When you are looking for something, there is no need even to state where to look, as long as you mean the content of $_. We want to edit the content of $line and therefore file it with

$_ = $line;

in the default. Are there links in this line? If so, the target follows an <A HREF= within double quotes ("). The end is shown by a >. (So as not to complicate the script unnecessarily, we shall assume that there are no line breaks within a character string.) As a regular expression this looks as follows:

<A HREF=\"(.*)\">

We have to escape the double quotes with a backslash, since Perl uses these to delimit string contents. In round brackets, we note the reference (either a URL or a local file specification), which is a sequence of characters of any length, or .* for short. Unfortunately, regular expressions have the habit of wanting to cover as much as possible. If, after the HREF, several "> appear on the line, the above regexp will save everything up to the last occurrence in the round brackets. We wean it off this greedy habit by placing the one-time-or-no-times character ? after the .*:

<A HREF=\"(.*?)\">

In order to look for it in the content of $_, we make use of the 'match' operator m/pattern/. We tell it that an href can also be written in lower case ("case-insensitive search") with the i-flag. We also want to collect all the links occurring on the line, and to do so we use the flag g (global):

@files = m/<A HREF=\"(.*?)\">/gi;

We store whatever ends up in the round brackets in an array variable named @files and go through it step by step:

foreach $file ( @files ){
}

To do this we file the respective current reference in the variable $file, which as "run variable" of the foreach loop lands automatically in $_.

Figure 1: A small Perl script

Local only

In order to change something solely when it is a reference to a local file, we check that its content does not begin with a protocol such as ftp or http (other protocols such as gopher can be ignored):

if ( ! /(ftp|http):\/\//i ){
}

The m of the match operator can be left out, and "ftp:// or http://" can be shortened to (ftp|http)://. In this case, the pipe symbol | serves as a logical Or. Since the forward slashes are already framing the pattern, we must escape them, and in order to ignore upper and lower case notation, we make use of the i-flag of the match operator. Lastly, the exclamation mark ensures that the condition is met precisely when no match is found.

If there is no protocol hiding in the reference, we try to open the file:

open( FILE, $file );

The first argument, FILE, is a so-called handle, and is a stand-in for the file whose name is hiding in $file. If we get the file open, we have nothing to correct for this link and can close the file again:

close FILE;

If, on the other hand, the opening goes wrong...

if ( ! open( FILE, $file ) ){
}

...we must try to find out the right file name. If that



in the link specification starts with a /, as an absolute pathname it does not relate to the root directory of the file system but to the corresponding document root on the Web server. We must first pin down this directory in a variable, which serves as the start directory for the files on the Web site:

$rootDir = "/home/pjung/LM/LM1001/answergirl";

This circumstance makes our task a bit harder: to find a file specified in the link which begins with /, it's best to look for it not in /, but in $rootDir. If there is anything to correct, though, $rootDir/corrected_name must not be written back into the link, but only /corrected_name. So what could be more obvious than saving the corrected name and the prefix we need to find the file in the file system separately? If one combines the contents of both variables, $prefix and $corrfil, with the dot operator . , we obtain the file details made to measure for the file system. If we take only $corrfil, we have the specification matching the link. So we first write a / in the variable intended to contain the corrected link:

$corrfil = "/";

We also note the root directory as prefix:

$prefix = $rootDir;

Using this preparation for the later combination in the corrected file name, we can do without the slash at the beginning (^) of the link details saved in $file. In order to formulate the condition under which $corrfil and $prefix, as just written, should be set, we therefore use not the match but the substitute operator s. We simply tell the script: "If, at the start of $file, you can replace a / by nothing, set $prefix and $corrfil as just discussed". Unfortunately, / is another case for the escape character. Luckily there is also an option of separating the patterns to be searched for and replaced from each other not only by /, but also by other special characters – for example, the dollar sign. This turns "search for / at the start of the string, and replace it with nothing" into not so much an escape orgy as a simple s$^/$$. In order to make this replacement directly in the variable $file, we say

$file =~ s$^/$$;

Formulated as a condition, this looks as follows:

if ( $file =~ s$^/$$ ){
}

If the link was not an absolute one, $corrfil remains empty. In the prefix, the dot is saved as a stand-in for the current directory, together with the separator slash:

else {
    $prefix = "./";
}

A HREF
In order to use a hyperlink on a Web site to point to another file or Internet resource, you have to write an anchor into the text. This is supplemented by the hyperreference, which states where the link is pointing, for example A HREF="http://www.linux-magazine.co.uk/". To prevent the reader of the page from seeing this specification in the text, it is placed in pointed brackets (<A HREF="http://www.linux-magazine.co.uk/">). This is followed by the text which should be clicked on in order to get to the reference. Last of all comes the end tag </A>, with which the anchor is completed: <A HREF="http://www.linux-magazine.co.uk/">Linux Magazine</A>.

Path
The road that paves the way, along directories, to a file. Absolute paths begin at the root point of the file system, indicated by /.

Pipe
A pipe through which the output of a command line program is transmitted. The end of the pipe serves as input for a second tool. Symbolised by a vertical stroke | : command1 | command2.

Regular expressions
Option used by various standard Unix tools to express patterns. A dot stands for any symbol you like, a letter for itself. If an asterisk follows, whatever is covered by the preceding pattern can occur any number of times, or even not at all. A question mark on the other hand means that whatever it relates to occurs precisely no times, or once.

Python
An object-oriented script language.

Script language
Programs written in script languages do not have to be separately compiled, but can be executed with an interpreter direct from the source code. Often (in Perl for example) the interpreter compiles an internal binary program to increase the speed of execution, although users will not usually notice anything in normal circumstances. Since there is no need to call up a compiler, interpreted languages are especially suitable for small programs which are quickly jotted down to solve a problem and are not intended to be used by third parties.

Tcl
A script language which is usually used in connection with the GUI toolkit Tk for writing graphical applications. It can also be used without Tk.

Search path
If one enters a command, the shell searches the directories saved in the environment variable PATH, in sequence, for an executable file of the same name. The first find is used; if the shell finds nothing, it outputs the error message command not found, even if the command exists elsewhere in the file system.

Bit by bit

If the link is ever broken, this can be due to the file name itself or else to a directory specified in the path. Consequently, we must swallow the bitter pill and check every component separated by / from root to tip. To do this, we divide the content of $file, which



may have been robbed of a leading slash, at the / characters into little bits, and save them in the array @parts:

@parts = split( /\//, $file );

The split() function needs two arguments: which string it should chop up, and the separator. Instead of a simple delimiter, the match operator comes into play at this point. Between its two / wings we set the slash / separating the directories, and so that this cannot be confused with the right wing /, a \ comes before it. We now take a close look at each particle in succession:

foreach $part ( @parts ){
}

cgks as a whole

#!/usr/bin/perl -w
$^I = ".bak";
@ARGV = <*.html>;
$rootDir = "/home/pjung/LM/LM1001/answergirl";
while ( $line = <> ){
    $_ = $line;
    @files = m/<A HREF=\"(.*?)\">/ig;
    foreach $file ( @files ){
        $corrfil = "";
        if ( ! /(ftp|http):\/\//i ){
            if ( ! open( FILE, $file ) ){
                if ( $file =~ s$^/$$ ){
                    $prefix = $rootDir;
                    $corrfil = "/";
                }
                else {
                    $prefix = "./";
                }
                @parts = split( /\//, $file );
                foreach $part ( @parts ){
                    if ( $part eq "." || $part eq ".." ){
                        $corrfil .= $part . "/";
                    }
                    else {
                        opendir( DIR, $prefix . $corrfil ) || last;
                        @selection = grep( /^$part$/i, readdir( DIR ) );
                        closedir( DIR );
                        if ( $#selection < 0 ){
                            $corrfil = ("<!-- " . $file . " not found! -->");
                            last;
                        }
                        elsif ( $#selection > 0 ){
                            $corrfil = ("<!-- " . $file . " not clear! -->");
                            last;
                        }
                        else {
                            $corrfil .= $selection[0];
                        }
                        $corrfil .= "/";
                    }
                }
                $corrfil =~ s+/$++;
                $line =~ s+$file+$corrfil+;
            }
            close FILE;
        }
    }
    print $line;
}


If the path component in $part is a dot for the current directory or (||) double dots for the superior directory, we do not need to check any notation and add the content of $part to the string already in $corrfil:

if ( $part eq "." || $part eq ".." ){
    $corrfil .= $part . "/" ;
}

Perl has two equality operators: one for numeric values and one for character strings. The latter is called eq ("equal").

$corrfil .= $part;

is an abbreviation for

$corrfil = $corrfil . $part;

With the append operator for character strings, the dot, we also insert a slash as directory separator. On the other hand, if we have a real file or directory name sitting in $part, there is more to be done. First, we try to verify what has been in $corrfil until now: with $prefix in front, we are dealing with a directory, which needs to be opened:

opendir( DIR, $prefix . $corrfil );

We will close it later using closedir( DIR );. But if we do not manage to open it, we can give up immediately and stop processing the @parts:

opendir( DIR, $prefix . $corrfil ) || last;

last leaves the active loop, so that we can continue with processing the next $file. If, on the other hand, we did open the directory $prefix . $corrfil and were able to "install" the handle DIR, it is best to use readdir( DIR ) to read out all the files in it. Is there a file or directory in there with the name which is saved in $part? On the shell we would use the grep command for this – and neatly enough, it's the same in Perl:

grep( /$part/i, readdir( DIR ) );

The pattern here is surrounded by the match operator – and of course the i-option must be there too, since the upper/lower case notation of the actual file can be completely different from $part. However, we have left one thing out: grep also finds matches in this version when the content of


BEGINNERS

$part is only a component of an existing file name or directory name. In this case the match must be specified precisely: we state it inclusive of beginning (^) and end ($) and save the result in an auxiliary array:

@selection = grep ( /^$part$/i , readdir( DIR ) );

If @selection now contains nothing – not even a zeroed element – we cannot create a corrected version of the link, so we misuse $corrfil for an HTML comment which says that $file could not be found. Processing then continues, with the aid of last, with the next element of @files:

if ( $#selection < 0 ){
    $corrfil = ("<!-- " . $file . " not found! -->" );
    last;
}
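The difference the anchors make can be seen in a self-contained snippet (the directory listing here is made up for the purpose, rather than read with readdir):

```perl
#!/usr/bin/perl -w
use strict;

# a made-up directory listing, as readdir( DIR ) might return it
my @entries = ("index.html", "Index.HTML", "old-index.html");
my $part    = "index.html";

# without anchors, a substring match also catches "old-index.html"
my @loose = grep ( /$part/i , @entries );

# anchored and case-insensitive, as in cgks: whole names only
my @exact = grep ( /^$part$/i , @entries );

printf "loose: %d matches, exact: %d matches\n",
       scalar(@loose), scalar(@exact);
```

Run on its own, the loose pattern finds three entries, the anchored one only the two that differ solely in case.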

Perl is endowed with a wealth of funny special character combinations, which provide you with some involuntary memory training. If one pinches the @ from an array and replaces it with a $#, one obtains ($) a scalar variable, in which (#) the index of the last array member is saved. If the yield saved in @selection was a bit too successful (we recall that directory and Directory can exist side by side on Unix systems without any problem), we turn $corrfil into a comment which says that there are several options:

elsif ( $#selection > 0 ){
    $corrfil = ("<!-- " . $file . " not clear! -->" );
    last;
}
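Both the eq comparison and the $# bookkeeping used in these branches can be tried out in a standalone snippet (nothing here is part of cgks itself):

```perl
#!/usr/bin/perl -w
use strict;

# eq compares character strings; == compares numbers
print +("10" == "10.0" ? "numerically equal" : "numerically different"), "\n";
print +("10" eq "10.0" ? "same string"       : "different strings"), "\n";

# $#array is the index of the last element, not the element count
my @selection = ();
print "no hit:    \$#selection = $#selection\n";   # -1, i.e. < 0
@selection = ("readme.txt");
print "unique:    \$#selection = $#selection\n";   # 0
@selection = ("Docs", "docs");
print "ambiguous: \$#selection = $#selection\n";   # 1, i.e. > 0
```

This is exactly why the script tests $#selection against 0: minus one means "not found", zero means one unambiguous hit, and anything greater means the name is not clear.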

Only if we find precisely one variant can we attach the zeroed element from @selection to $corrfil:

else {
    $corrfil .= $selection[0];
}

There is still a trailing slash to be attached, in order to prepare $corrfil for new sub-directory levels. Annoyingly, if $part contained the filename itself, this too now ends in a slash, although it goes no further. Despite how far we have come with our script, it's still only a script for ironing out a few upper/lower case notation errors – so we are going to take the liberty of an evil hack: we replace the slash at the end of $corrfil with nothing. And because it's so nice, we use the plus as separator for the substitution operator:

$corrfil =~ s+/$++;

Uh oh, we nearly forgot to correct the $line read out from the file, by replacing the old link $file with the corrected output $corrfil...

$line =~ s+$file+$corrfil+;

... and last of all, of course, print it:

print $line;

Now the great moment approaches: off to the test directory and let the script loose on the files in there without additional parameters, but perhaps better "piped" through less. Looking good? Then we'll just quickly remove the comment symbol from the line

# $^I = ".bak";

and already cgks is filing the converted files under their old names, with the original version kept as a backup file with the ending .bak to allow for a comparison.

Debriefing

There are always more elegant ways of knocking together a script. For example, a quicker variant for large quantities of data could be written which remembers each directory checked, so that it does not have to look at each directory again for each link. Far more critical for deployment is the fact that the proposed program gives up if the HTML anchor A HREF and the link source are on different lines. The alternative was either to take this handicap in stride or to build in even more complexity, especially in the regular expressions. Those who are only occasional users of Perl may not be aware that the Comprehensive Perl Archive Network (CPAN) stores any number of modules which, like similar libraries for C, C++ or Java, contain untold amounts of functions for all possible and impossible areas of use. In the case of our script, we could have saved ourselves the work with the regular expressions for the tedious recognition of links if we had used the HTML::Parser module, which is already installed by default in many distributions (in SuSE in the package perl-HTML-Parser; in Caldera in perl-modules). The major drawback with this module is that we are moving in object-oriented Perl areas and could thus be struggling with the problem of what a parser is. For Perl novices and programming beginners this is presumably too ambitious a project.
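Perl's $^I variable, which cgks sets to ".bak" for exactly this backup behaviour, can be tried out safely on a throwaway file. This standalone sketch creates its own test file first:

```perl
#!/usr/bin/perl -w
# standalone sketch: in-place editing with $^I, keeping a .bak backup
use strict;

open my $fh, '>', 'ipdemo.txt' or die "write: $!";
print $fh "old text\n";
close $fh;

{
    local $^I   = '.bak';          # edit in place, back up as ipdemo.txt.bak
    local @ARGV = ('ipdemo.txt');
    while (<>) {
        s/old/new/;
        print;                     # goes back into ipdemo.txt, not the screen
    }
}
```

Afterwards ipdemo.txt contains the corrected line, while ipdemo.txt.bak holds the untouched original, just as cgks leaves the unconverted HTML behind for comparison.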


K-tools

SCANDAL MONGERS

Fancy a bit of gossip? Stefanie Teufel shows you how, with the IRC client KSirc, you'll be kitted out for hour-long chats.

Anyone who thinks that only little old ladies and their friends gossip has certainly never looked in on a refined chat on IRC. Moreover, KDE wouldn't be KDE if it didn't send you on your way with the IRC client KSirc, an easy-to-use tool which makes the virtual coffee morning twice the fun. The best thing is that KSirc is part of the Network package, so you don't even have to install the program specially.

A smart suit

If you've called up your IRC client in the K menu via Internet/ Chat Program (KSirc) or, alternatively, entered ksirc & in a terminal window, the program switchboard immediately recommends a control server. Should this fail to occur, just select a server from the Connections menu. If you've read the title to this section then it should come as no surprise that you can smarten up the appearance of your IRC client. Under Options/ Colour setting there is a range of colours to choose from, and in Options/ Universal Character Set you can look for your favourite font. For those of you who are colour-mad, KSirc even supports background images. To use this feature simply activate the field background image under the menu item Options/ Colour Preferences, and locate the appropriate wallpaper.

K-tools

In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn't want to do without.

More important than all this graphical chitchat is the fact that you should now make up a nickname for IRC, if you have not already done so. Unlike Usenet, it is in fact the done thing in IRC to give oneself a pseudonym, by which one can be addressed by others. As soon as you click into an IRC network where someone is already using this pseudonym – known as a nick, in the jargon – you will get an error message and will have to select another nickname. Enter your desired name under nick in the tab Options/ Preferences/ StartUp (Figure 1). Under your real name, you can enter your true name, should you want to publicise it. But take care – even if you hide your real name here, finding it out would scarcely pose a problem to another IRCer, unless you have firewalled your computer in such a way that, even using utilities such as finger, not a word about you can escape it. If this is not the first time you have knocked around in IRC and you already know some other IRCers, and you want to be kept up to date on where they've got to, then enter their IRC nicks in the notify list underneath. When later on you go onto IRC and the person concerned (or another person who is already claiming this nick for himself) is there or joins, you will be informed of this by a Notify message. If you wish to underline this with a bleep, you can set this in the General tab under signal tone for messages.

Figure 1: A nick needs to be given some thought

Off to the Net!


KSirc presents you with a selection of the best-known IRC servers of the day, via Connections/ New server or the F2 key. These servers are all neatly ordered



IRC

The acronym IRC stands for Internet Relay Chat and refers to a loose collection of servers, which enables its users to meet in so-called channels, in order to talk via text and in real time. All those who have clicked into a channel can see everything other people there are saying. The serious drawback of IRC is that you can't exactly use it offline, so all too often it leads to your becoming BT's best customer. So remember your phone bill and do your chatting off-peak in the evenings...

Figure 2: Click on in!

according to the various IRC networks, so you can, for example, simply settle for an IRC server in the largest IRC network, IRCNet (Figure 2). Since almost all IRC servers can usually be addressed via port 6667, you can leave this setting as it is, although you may also find settings from 6665 to 7000. Now make sure you are connected to the Internet, so that when you click on the connect button a connection can be made to your selected server and all of the users thereon. A new window immediately pops up, in which you will first be overpowered by the welcome message from the server (Figure 3). It's worth reading this and complying with any codes of conduct included therein, so as not to attract the wrath of the IRC ops – the operators responsible for the IRC server. Repeated violations of the rules can make you a persona non grata, and in the worst case you can be denied access to the corresponding IRC network. Now you are in the big wide world of IRC, which you can clearly recognise by the fact that the server control window has filled with new life: a list of all IRC

Port

A dock for network connections. Ports are numbered, and many of these numbers have already been assigned a fixed service (well-known ports). For example FTP uses port 21, SSH port 22, TALK port 517 and so on.

Figure 4: Where am I?

servers can be seen showing you which you have made contact with; whether any of your favourite friends and enemies from the Notify list are romping around in the same IRC network; and indicating the channel into which you have clicked (Figure 4). If you enter a channel by clicking on its name in this drop-down menu, you get a main window divided into

Figure 3: Sometimes welcome greetings are a bit sad


Figure 5: In the IRC it can get pretty wild

three different areas (Figure 5). In the biggest of these you will see, apart from the messages from the server, the conversations of your co-chatters. If the exchange of words is rushing past you too quickly, you can look back at any snatches of conversation which you have missed by sliding the scroll bar on the right. If the pre-set 200-line buffer is not enough for you, adjust the number in the server control window under Options/ Personal settings/ General with the aid of the field marked Size of the buffer. The right-hand window shows all the users who are in the channel. The most important area is the inconspicuous line on the bottom edge: this is the command line, on which you type in all your actions and send them off with the [Enter] key. Please do not forget: IRC commands always start with a forward slash: /. For anyone who is a little uncertain, typing /help shows you all the available commands. Anything you don't mark with a / at the start of the line is visible to all channel occupants. Your own actions, and the

Table 1: Colour codes

[Ctrl+B]text[Ctrl+B]            bold text
[Ctrl+U]text[Ctrl+U]            underlined text
[Ctrl+R]text[Ctrl+R]            text with foreground and background colour reversed
[Ctrl+O]text[Ctrl+O]            normal text
[Ctrl+K][0]text[Ctrl+K][0]      white text
[Ctrl+K][1]text[Ctrl+K][1]      black text
[Ctrl+K][2]text[Ctrl+K][2]      blue text
[Ctrl+K][3]text[Ctrl+K][3]      green text
[Ctrl+K][4]text[Ctrl+K][4]      red text
[Ctrl+K][5]text[Ctrl+K][5]      brown text
[Ctrl+K][6]text[Ctrl+K][6]      violet text
[Ctrl+K][7]text[Ctrl+K][7]      orange text
[Ctrl+K][8]text[Ctrl+K][8]      yellow text
[Ctrl+K][9]text[Ctrl+K][9]      pale green text
[Ctrl+K][10]text[Ctrl+K][10]    cyan text
[Ctrl+K][11]text[Ctrl+K][11]    light cyan text
[Ctrl+K][12]text[Ctrl+K][12]    light blue text
[Ctrl+K][13]text[Ctrl+K][13]    pink text
[Ctrl+K][14]text[Ctrl+K][14]    grey text
[Ctrl+K][15]text[Ctrl+K][15]    pale grey text

public actions by the other channel inmates, are shown in the main KSirc window in various colours and marked with icons at the start of the line (Figure 6). Since life can be grey enough, KSirc allows you to dip into the paint pot to highlight your conversational contributions. For example, if you express your enthusiasm for KSirc with the exclamation [Ctrl+B]Ksirc[Ctrl+B] is [Ctrl+K][7][Ctrl+U]dead cool[Ctrl+U][Ctrl+K][7], you will see a bold 'Ksirc', 'is' in the normal font and an underlined, orange 'dead cool'. The control codes with which you can enclose emphasised words are listed in Table 1. Nesting is allowed here (as our example with [Ctrl+K][7] for orange and [Ctrl+U] for underlining shows). You can even change the foreground and background colour of your text at the same time. So that you don't have to remember the whole range of colour codes, as soon as you press [Ctrl+K] a colour selection window pops up, in which you need only pick out the foreground (top row) and background colour (lower row) with mouse clicks.

Figure 6: Go wild with colour

Cyber stalking

If you want to get a more detailed impression of your co-chatter, click on the respective user in the right-hand window and in the menu list select User/ Whois. You can check his or her availability via User/ Ping, and uncover the IRC client of your opposite number with User/ Version. If someone has particularly piqued your interest, you can avoid missing any of their utterances by following them inconspicuously with User/ Follow. KSirc then makes his or her actions stand out from the grey mass for your benefit. You can immediately cancel this snooping with User/ unFollow.

Some are more equal

By now you may have been wondering why some channel inmates are more equal than others, namely those whose nicks are marked out in bright red. This is how KSirc marks out the channel operators, or ops for short. The person who breathes life into a channel automatically gets more rights than all the other users who arrive later. As a channel op one can, for example, throw people out of the channel (User/ Kick in the channel window); ban them completely (User/ Ban); grant other channel users op rights (User/ Op); and withdraw this privilege again (User/ Deop). There are also actions in the user menu which only an op can perform: see the little box For OP only on the user menu tab under Options/ Personal settings. ■



WRITE ACCESS

Timely plea for help

As a Linux user who doesn't even have access to a Windows machine at home, I sometimes feel like a second-class citizen. With the PC world being so dominated by Windows, technical support for anything else is usually unavailable. I would very much like to upgrade my Internet connection – I am currently using a 56K modem, but I am unsure what will be compatible with my desktop system: ADSL, cable modem or whatever. It would help very much if you could do a roundup of what is available to me as a Linux user.
D Gates, London

More people are taking an interest in what Linux has to offer, and with this the need for mainstream support must also increase. We can only hope that this happens sooner rather than later. High speed Internet access is going to be tackled in full next month. As it stands, most cable modem installations seem to work without added complications, so long as you are familiar with accessing a network on your machine. ADSL uses hardware that is less than fully compatible with Linux systems, but drivers are available. Exactly what you need for ADSL depends on who your supplier will be.

SuSE LiveEval 7.3 coverdisc

I haven't had any success with the LiveEval of the SuSE 7.3 OS. I get as far as part six: "Finishing basic installation ...updating configuration (10 per cent)" and that's it. The Abort button doesn't work, so I'm left to turn the machine off and go into Windows, where I find one file (suselive) to be deleted. I'm running Windows 98SE with a 600MHz Athlon processor and 700Mb+ RAM, and I was hoping that a Linux platform would be more enjoyable to work with than a Windows system which has something of a mind of its own. The evaluation CD would have given me a taster of what to expect from Linux. It's unfortunate, and more than a little annoying, that the CD does not deliver what it offers.
A Beattie, via email

We have had similar comments from Windows 98 and Me users. Apologies to all those that have

Star letter

On the word of a friend I tried installing SuSE 6.4. I installed it on a spare Celeron 300MHz machine in my home and, just as a test, I decided to see how long it would be before the system crashed. I've only had to switch the machine off once in one year and two months, and that was to switch off the power to install a new socket in my house. The installation was every bit as easy as a Microsoft Windows install, and I would say the OS is much more configurable and fun to use. I have had an application crash on me once on Linux, but it never affected the OS. It has been quoted that "Windows Millennium Edition is actually the cheapest OS, coming in at £147.04". Linux is free if you can be bothered to download it. A standard distribution should never cost more than £80 from any distributor, and that comes with more software than you can shake a stick at. The only downside about Linux I would say is the lack of good games. So you see I am not a Linux fanatic!
Nicholas Herriot, via email

The ease and simplicity of a Windows install is often quoted as one of its features. As far as we've found, this just isn't the case. Once you've run the risk of the installation itself – holding it in check from overwriting partitions that you want to keep – you still have to gather together all of the driver disks to get the full support out of your peripherals. You don't suffer this with a Linux install. Not being able to download a distribution is no stumbling block either. If you are looking to get yourself set up at a rock bottom price and downloading 650Mb is not an option, then contacting your local Linux user group is bound to bring results.

suffered disappointment. We have been in contact with SuSE, and you'll be pleased to know that there is a workaround for this problem. Should YaST2 hang while writing the config files at '10 per cent', you should press the Alt+SysRq+S keys (all three keys simultaneously). The SysRq key may be marked as Print Screen on some keyboards. This will get YaST2 running again; the Alt+SysRq+S key press synchronises the mounted filesystems again. Full details of this problem can now be found on the SuSE Web site at: http://sdb.suse.de/en/sdb/html/cndegen_yast2_error_diagnosis.html

Contact us

You can write to us at Linux Magazine, Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Alternatively you can email us at letters@linux-magazine.co.uk


DESKTOPIA: Jo’s alternative desktop

ENTRANCE DOOR

Unfortunately the age of Star Trek is still a long way off. If your computer doesn't recognise you, either by voice or fingerprint, then xdm could be a timely interim solution. Jo Moskalewski explains how.

Even Ali Baba and his 40 thieves knew that without a password, there is no open sesame. Since then a lot of water has flowed under the bridge, and it's been a long time since a single magic phrase was enough to open the gates of our computers: every visitor is unique and has his own password – we certainly don't want to admit all 40 thieves at once, after all. Now, there are various ways into our own kingdom – a classic route finder for this is xdm, the X Display Manager.

Stone age flair

Many people might feel like biting into their keyboard in view of the fact that this is definitely the bedrock xdm being presented. Considering KDE's kdm offers more features and runs well, why would anyone want xdm? With equal justification, users of alternative interfaces (like XFce) may ask why they should waste disk space just for kdm, when in fact xdm starts quicker and, as a standard X tool, does not require any additional disk space. Apart from nibbling away at resources, the apparently practical features of kdm are debatable: on a system with, say, 50 or more users, clickable user selection does not really make sense. Anyone wanting to log in will have to look longer for their icon than they would take to type in their name. They still have to type their password afterwards anyway, by which time the rodent will have come to a sticky end. A system which does not even reveal the existing users is clearly more secure than one which simply lists them all in the first place.

Distribution sovereignty

Anyone who now reaches happily for their cover CD expecting a brand-new xdm is going to be bitterly disappointed. No, we haven't forgotten to tie up a little packet; it's rather that the good part does not come on its own. xdm is a component of XFree itself, so you will either find it has long since been installed on your system, or it's slumbering in a free-standing


xdm packet taken from stock by your distributor, on your distribution CD. In any case you've owned the X Display Manager which goes with your graphical user interface for a long time. Whether or not xdm is already installed, the simplest way to find out is by entering:

locate bin/xdm

at an input prompt. If there is an output, xdm is already there. If only the prompt appears again, the package manager and the distribution CDs will help.

deskTOPia

Only you can decide how your Linux desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys.

Open, sesame!

If xdm is installed, it should also show itself when you start the system. If it doesn't, then your distributor has presumably intended it for a different run level than the one into which you are booting. Each run level is a compilation of services that are started or stopped. Run levels 2 to 5 are intended for ordinary working (0 provides for a shutdown, 6 for a reboot, and single user mode, alias run level 1, serves unfortunate admins as a safety rope). You can change the run level with the tool init: for run level 3, enter init 3 at the input prompt (important: only root is authorised and able to use init). Once you have found the run level which provides a graphical login by means of xdm, you can enter it as the default in the file /etc/inittab. To do this, you need only alter the line

id:2:initdefault:

accordingly. SuSE users are better off using YaST for this, in which the graphical login can be activated.
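The edit to the initdefault line can be sketched on a scratch copy, so nothing real is touched (this assumes GNU sed for the -i option; the file name /tmp/inittab.demo is made up for the demonstration):

```shell
# work on a scratch copy, never on the real /etc/inittab while experimenting
printf 'id:2:initdefault:\n' > /tmp/inittab.demo

# switch the default run level from 2 to 3
sed -i 's/^id:[0-9]:initdefault:/id:3:initdefault:/' /tmp/inittab.demo

cat /tmp/inittab.demo
```

Only once you are happy with the result on a copy would you apply the same change, as root, to /etc/inittab itself.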



Manual labour...

... is required if you want to configure xdm (though you don't have to: it comes preconfigured). The configuration files are usually to be found under /etc/X11/xdm/ or /usr/X11R6/lib/X11/xdm/. It's worthwhile casting a glance into each of the files in there – the distributors often cook their own individual soups here, so it's only possible to give general indications. It may be that you have options not available to other users. The most exciting thing for the home desktop tinkerer may be the file Xresources, which is responsible for the display. Here, among other things, the pixel width of the frame, the colours, fonts and the greetings text can be altered to your heart's content. No less exciting is Xsetup. This bash script is run through as soon as xdm becomes active. It's therefore possible in here to provide xdm with a background image or a background colour, or to change the standard mouse pointer into an arrow. In the simplest case the first lines of this file could look like this:

#!/bin/sh
xsetroot -cursor_name left_ptr &
xsetroot -solid black &

Figure 1: Simple, proven and yet modern - xdm

But it's not only the simple settings that can be altered here: when you also start applications, new and interesting options open up.

Down with the mouse

If you run a minimal Linux installation you'll certainly miss having a shutdown button with xdm. The computer boots up independently until login, but there is no option of shutting it down from there just as simply. After all, shutdown -h now requires root, and you will scarcely want to let just anyone log in as root. Attentive readers of this column have long known that Ctrl+Alt+Backspace easily does away with the X server (and thus, too, xdm, where the latter is better stopped by Ctrl+R), yet an X server armed with xdm has several lives. On a smart system it accepts its death after three attempts at resurrection, but on some installations it would not spread its heavenly wings even after several weeks. Those of a less patient nature will therefore find a plain and easily adaptable Tcl/Tk script on the cover CD, which supplements xdm with a shutdown and a reboot button. The buttons place themselves automatically at the bottom right corner of the screen. Installation is pretty simple: just copy tkshutdown to /usr/X11R6/bin/, and add the following line to Xsetup:

/usr/X11R6/bin/tkshutdown &

In this way, you can also expand your display manager by adding a clock or the penguins as presented in previous "out of the box" articles – your imagination has no limits. Everything you start here will, by the time you log in for the first time, be taken over onto the user desktop. And so there is a file which is run through when leaving xdm: Xstartup. Since this (as with xdm) is executed by root, it cannot be used to define individual user settings. It does, however, offer a suitable way to sweep tkshutdown off the desktop: in Xstartup, enter

killall wish

and the shutdown button is done away with when you leave xdm. Why is wish sacrificed, when in fact the program is called tkshutdown? Well, the program is a simple script which wish starts as interpreter. The rug has to be pulled away from under wish, as it is in fact the active program.

Figure 2: Small expansion in Tcl/Tk

Labyrinth

It gets more confusing if one looks at the user equivalent of Xsetup, called Xsession. This file is called up after the user has logged in. What you enter here affects all users; if on the other hand you want to grant special settings to just one specific user (so that he can use a different window manager than the system default, for example), then you should create for them an executable file named .xsession in their home directory. Settings which you have previously activated in ~/.bash_profile are best taken care of in this file for the graphical login: your tried and trusted ~/.bash_profile no longer has any effect when you log in via xdm, because no bash is involved from booting the kernel to the start of your X session.


OUT OF THE BOX

AGAINST IT!

Do you get irritated by "dear colleague" emails laden with Word attachments? Christian Perle shows you how you can take a peek at the document without sacrificing memory to Word.

It's not always that easy to shake off the world of Windows. Or is it? With Antiword from Adri J. van Os it is possible – even in the text console – to display Word documents so they're easy to read.

Clear

Although Antiword can cope with a great many Word formats, it's still a very compact program at only about 100Kb. For the latest version (0.31), you can get the packed source text from http://www.winfield.demon.nl/index.html. To install Antiword you need to unpack the source and compile the program. You then need to copy the files into your home directory and into the /usr/local branch of the filesystem.

Word for Word

To test Antiword, I have put my head into the lion's mouth and created a brief Word document (tex_is_best.doc) with Microsoft Word 8.0. This includes headings at various levels, a list and a table. The original display can be seen in Figure 1. In order to feed Antiword with this Word document, enter in the shell:

antiword tex_is_best.doc > tex_is_best.txt

The > symbol causes the shell to divert the output of the program into the file tex_is_best.txt; otherwise it would just sail through to the console. Many documents can be converted with a simple shell script. In Listing 1 you can see the text output created by

Out of the box There are thousands of tools and utilities for Linux. “Out of the box” takes the pick of the bunch and suggests a little program each month which we feel is either absolutely indispensable or unduly ignored.

Antiword. What stands out is that the program identifies the headings as such and creates an appropriate numbering. The justified formatting of the first paragraph is also retained. The table representation could be better though – this is where the text-based Web browser w3m could be put to good use. Pure text data can obviously be better edited with standard Unix tools such as grep than cumbersome .doc files. In order to filter out all the lines containing the word TeX, you need only pass the Antiword output on to grep. In this case the pipe character | links the two programs: antiword tex_is_best.doc | grep -w TeX Apart from pure text, Antiword can also create data in the page description language PostScript, which can then be displayed or printed out using gv. The invocation looks like this: antiword -p a4 tex_is_best.doc > tex_is_best.ps This output format offers more options for text display – for example, font colours are retained (Figure 2).
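The simple shell script for batch conversion mentioned above could be sketched like this (a sketch only: it assumes antiword is on your path, and skips harmlessly when no .doc files are present):

```shell
#!/bin/sh
# convert every .doc in the current directory to a .txt of the same name
for f in *.doc; do
    [ -e "$f" ] || continue            # no .doc files: the glob didn't expand
    antiword "$f" > "${f%.doc}.txt"    # report.doc -> report.txt
done
```

The ${f%.doc} expansion strips the .doc suffix, so each output file sits next to its original with a .txt ending instead.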

With filters

If you've tried out all the different text-based mail programs, you should have finally found your way to the best – which is mutt – and you'll certainly want to call up Antiword directly from this program. In the .mailcap file (which you may have to create from scratch) in your home directory, enter:

application/msword; antiword %s

Figure 1: Original display in Word 8.0


Straight away mutt displays in the internal viewer all attachments with the MIME type application/msword, without you having to worry about



Text console

In addition to the graphical user interface X there are usually several consoles running on a Linux system in text mode. These are reached from running X using Ctrl+Alt+F1 to F6. You can get back to X with Alt+F7.

Source text

The form of a software package which can be read by humans. Translating it ("compiling") with a compiler turns it into an executable program.

Shell script

A text file with shell commands, which are executed automatically one after the other.

Home directory

The personal home directory of a user. This is where he or she ends up after a successful login, or with the command cd with no other parameters.

Quotes

If one replies to an email using the reply function, the mail program distinguishes the cited text from the text which you are writing by placing quotation marks at the start of each quoted line. Most mail programs comply with common sense on the Net, which prescribes the character string "> " (greater-than and space). Nevertheless many graphical mail programs do not provide these characters and mark out the quote by using a different colour or a different font.

Attachment

The optional file attached to an email. This cannot be transferred in binary form, because non-printing characters would get lost. Base64 has become the most frequently used transfer format for binary attachments.

MIME

Multipurpose Internet Mail Extensions. A method for specifying standardised file types. Some examples of MIME types are text/plain (pure text file without formatting) or video/mpeg (MPEG-compressed video stream). MIME is used mostly in mail programs and Web browsers.

macroviruses. The file manager Midnight Commander (mc) can be extended in a similar way. Select from its menu Command/ Edit suffix data, or use an editor to open the file ~/.mc/bindings. Now enter the following lines there, and save the change:

shell/.doc
	Open=antiword -p a4 %f | gv
	View=%view{ascii} antiword %f

If you press F3 when mc's bar cursor is over a Word file, the internal viewer shows the text output of Antiword. If instead you use the Return key, the file will be converted into the PostScript format and passed straight on to gv.

Specialities

A few of Antiword's useful options should not go unmentioned. With -L, the program creates the PostScript output in sideways format ("landscape"). This can make wide tables easier to read. The option -w col is relevant for the text output and limits the line length to col symbols. If you want to quote the content of the document in a mail, something like -w 75 would be advisable, as this leaves enough space for the quote characters. Text which has been made "invisible" by the Word function for hiding text is shown by the program when you use the option -s. Whatever else is still hidden in Antiword will be revealed to you by the manpage.

Manpage

Linux, like all Unix systems, has a sort of online reference manual for the installed programs. This aid is called up using man program name, e.g. man antiword.

Listing 1: Text output from Antiword

1 Why TeX is better than Word

This document describes in a few bullet points the advantages of TeX/LaTeX over Word and WYSIWYG word processors in general. Obviously, potential disadvantages are also pointed out. This document also serves as a demonstration of antiword, a Word filter for Linux and other Unixes.

1.1 The advantages

- Lower hardware requirements
- Input can be done using any text editor
- Document format portable over operating system boundaries
- Professional typesetting according to book printing rules
...

1.3 Systems supported

|               | TeX/LaTeX | Word |
|Linux/Unix     |     +     |  -   |
|TOS (Atari ST) |     +     |  -   |
|MacOS          |     +     |  +   |
|Windows        |     +     |  +   |

Figure 2: PostScript display with gv
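Putting the options from the Specialities section together, a typical Antiword session might look like the following sketch; the file name report.doc is invented for illustration:

```shell
# Text output with lines limited to 75 characters,
# ready for quoting in a mail
antiword -w 75 report.doc

# Landscape PostScript on A4 paper, useful for wide tables
antiword -L -p a4 report.doc > report.ps

# Also show text made invisible with Word's hidden-text feature
antiword -s report.doc
```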

Issue 15 • 2001

LINUX MAGAZINE

83


COMMUNITY

Internet

THE RIGHT PAGES

When we're not hard at work producing the magazine, we like to spend our time searching out software and news on the Internet. In the office we all have our favourite bookmarks. Janet Roebuck sifts through some of the latest finds that we feel are important and useful.

Linux Printing
http://www.linuxprinting.org
As the name suggests, at Linuxprinting.org you'll find all the information that you could require for printing under Linux. The site includes the HOWTO for printing along with a database of supported printers. There's a suggested printer help page if you are considering a new purchase, as well as a vendor scorecard. The Foomatic section will also help you get your printer up and running.

Humorix
http://i-want-a-website.com/about-linux/
Humorix offers the latest 'news' with an ever-so-slight Linux bias. Funny stories of Microsoft's supposed antics cosy up to the latest conspiracy theories (usually about Microsoft as well). Never fails to raise a laugh.

News
http://slashdot.org
http://www.kuro5hin.org
Along with Slashdot, Kuro5hin has been helping us keep news topics in focus. Both systems use a moderator who sorts out news stories and posts them onto the Web site. Here readers can make comments that then spawn their own comments.

Linux Online
http://www.linux.org
The most popular site for Linux users. Designed to act as a central source of Linux information and as a voice for the promotion and advocacy of the Linux operating system.

Linuxcare
http://linuxcare.com/
A professional company that helps others gain the most from Open Source technology. It's worth downloading the Linuxcare Bootable Toolbox if you have a CD writer, as this disc could save your system one day.

Linux Voodoo
http://www.linuxvoodoo.com/
Another essential resource site. The viewlets feature leads you to some nicely written tutorials.

SlickPenguin
http://slickpenguin.com
This site is devoted to helping out Linux users through sharing their experiences. It's worth checking out to save you time, and it's also worth posting your own comments to help out others.

Programming sites
http://leapster.org/linoleum
The best programming site we've found is Linoleum. Here you can pick up all those little tips and tricks to make your code smoother. The algorithm section saves you reinventing the wheel, while the widget sets get you going quicker.

Beowulf
http://extremelinux.esd.ornl.gov/
The Oak Ridge Extreme pages are a resource for clustering projects, with a builder's forum and whitepapers on building your own supercomputer. Well, we can all dream, can't we?

Tips
http://portico.org
This site has to be the best-kept secret on the Web. Its simple text layout means there's nothing to get in the way of the tips.

Linux PDAs
http://www.zaurux.com
If you run Linux on a PDA then this online community is a must-visit. Includes news, information and forums on mobile Linux devices.

Doom
http://jcomm.uoregon.edu/~stevev/LinuxDOOM-FAQ.html
The Doom FAQ for Linux. A quick guide to getting and setting up Doom under Linux.


COMMUNITY

The monthly GNU Column

BRAVE GNU WORLD

Welcome to another issue of Georg C. F. Greve's Brave GNU World.

Ganesha's Project
Ganesha's Project, named after the Hindu god of wisdom and prosperity, has been set up to help children of the Shree Bachhauli Secondary School in Nepal set up and administer a GNU/Linux network using donated computers. The idea for the project evolved during a two-month stay with Kuma Raj Subedi, who teaches at the Nepalese school.
The situation for children in Nepal is quite problematic. Having to work, they often cannot attend school regularly. However, without education they lack prospects for the future, and so their children will also end up having to work. Ganesha's Project tries to break this cycle by teaching children how to use computers, in order to enable them to participate in the information age and to keep them in school.
The first stage is to raise the required finances and computers in order to transport them to Nepal, where the network will be set up and the software installed. The first class of children will then be taught how to use the machines, so they can subsequently help other children use the computer pool. Besides elementary computer use, Web programming,


Ganesha’s Project aims to help Nepalese children set up a GNU/Linux network

databases, networks and graphics will also be covered. Besides financial aid, the project also needs network cables, computers, network cards, a video projector, printers and so on. English books about PHP, networking, MySQL, shell scripting and more would also be very useful. In our richer countries, computers are quickly outdated and get thrown away. Using them instead to give children anywhere in the world a better outlook for their future seems like a much better use to me. Of course, similar problems exist in many places; because of this, Ganesha's Project seeks to be a Free Software project in the sense of trying to inspire others to copy the concept and participate. It might be useful to collect all experience, operating procedures and ideas in a kind of project repository under the GNU FDL, in order to create a how-to that will enable others to start similar projects and so help people help themselves.


COMMUNITY

HTMLDOC in all its splendour

HTMLDOC
HTMLDOC bears some similarities to the Logidee project, because it also tries to make documents widely available. It is likewise released under the GPL and has been developed by a company, in this case Easy Software Products (ESP). HTMLDOC uses HTML as the source format for writing documents, which can then be used to generate indexed HTML, PDF or PostScript (Level 1, 2 or 3) files. Kurt Pfeifle considers the killer feature to be that links present in the HTML are preserved in PDF documents as hyperlinks. People who want to make use of this don't have to use the proprietary Acrobat Reader; they can also use the Free project xpdf, and there is justified hope that more Free projects will be available soon.
The "Linux Documentation Project" has been using HTMLDOC to convert its HOWTOs into PDF format for quite some time now, replacing the formerly used SGML-Tools, so it seems safe to say that HTMLDOC is ready for everyday use. The recently released version 1.8.14 added support for Acrobat 5.0 compatible files (PDF 1.4), which allows 128-bit encryption of documents. It also uses less memory, and some problems with displaying tables have been resolved. In terms of speed, HTMLDOC converts its current handbook (102 pages and 17 screenshots) in 4.0 seconds to PostScript and 6.2 seconds to PDF with maximum compression on Kurt Pfeifle's 500MHz Pentium III.
Another option available with HTMLDOC is remote access through proxies or secure/encrypted connections in order to convert Web pages into PDF. Thanks to bindings for Shell, Perl, PHP, C and Java, it can do this even as a "portal" that takes Web page addresses as input and returns ready-made PDF documents of the pages. An example of this can be found on the Easy Software Products home page.
When using HTMLDOC on a local machine, it can be controlled through a GUI based on the "Fast Light Toolkit" (FLTK) or via the command line. The latter also allows using it in batch jobs in order to automate the process, should this be desired.
These are just some of HTMLDOC's features, to convey an impression of what the project can do. The project is already very mature and allows not only defining special effects when turning pages in PDF presentations, but also the definition of title pages and background images, and the creation of "PDF books" from arbitrarily chosen Web pages. On top of this, HTMLDOC is remarkably portable: not only does it run on GNU/Linux, but also on IBM AIX, Digital UNIX, HP-UX, *BSD, OS/2, Solaris, SGI IRIX, MacOS X and MS Windows 95/98/ME/NT4/2000. Further plans for development include XHTML support and extended stylesheet support.
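As a taste of the command-line mode, here is a hedged sketch of typical HTMLDOC invocations; the file names are invented, and the exact flags should be checked against the HTMLDOC handbook for your version:

```shell
# Convert a single HTML page into a PDF, preserving its links as hyperlinks
htmldoc --webpage -f page.pdf page.html

# Build an indexed "book" with a table of contents from several chapters
htmldoc --book --toclevels 3 -f manual.pdf ch01.html ch02.html ch03.html
```

The same invocations can be dropped into a Makefile or cron job, which is what makes the batch mode mentioned above so useful.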

FLTK widgets in action

Logidee-tools
Another project this month is Logidee-tools, authored by Raphaël Hertzog and Stéphane Casset. The project's goal is to simplify the writing of courses and their conversion into print-ready documents and Web pages. The courses are written as XML documents, which are converted into presentations or complete training documents. To do so, Logidee-tools uses an XML DTD together with some XSL files and Makefiles; for XSLT processing, the project makes use of xsltproc from the GNOME project.
Logidee-tools' typical users could be anyone teaching courses or giving lessons. Professional trainers in particular should give this project a look, as it was specifically written to fit their needs. The project was originally created by the French company Logidee, which specialises in professional training for Free Software. When they realised that it might also be useful to others, Logidee-tools was released under the GNU General Public License and the GNU Free Documentation License. The documentation is still a weak point, however, as it is only available in French. An English translation is desired but not yet planned.
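The processing model boils down to a single xsltproc call per output format; the stylesheet and course file names below are invented placeholders, since in practice the Makefiles shipped with Logidee-tools drive this step:

```shell
# Transform an XML course into an HTML page with an XSL stylesheet
xsltproc -o course.html course-to-html.xsl course.xml
```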



COMMUNITY

GNU Passwords On Card
The GNU Passwords On Card (POC) project is a rather young addition to the GNU Project, by Henning Koester. The program, released under the GNU General Public License, offers the capability to administer passwords via smartcards. Its use should be rather obvious to every reader with more than five passwords, especially if some of the passwords are only used once or twice a year. Until now, many people either wrote down their passwords on pieces of paper, saved them on their hard disk or reused passwords in several places. Everyone knows these are things you shouldn't do, but what they don't know is how to solve the problem of memorising many passwords reliably. GNU POC offers a solution by saving the passwords, along with short descriptions of them, on a smartcard in encrypted form. Currently GNU POC only supports I2C memory cards, but the plan is to support as many cards as possible. One way of helping GNU POC is to provide other cards, so that support for them can be included.
The next project has been on my Brave GNU World wishlist for some time now, and I'm glad it finally worked out.

Sketch

Just a few examples of what you can achieve with Sketch

It is no exaggeration to call Sketch currently the most advanced Free Software vector-drawing program. The project was started in 1996 by Bernhard Herzog, who has been the central developer ever since. Sketch is now rather stable and supports several advanced features such as gradient fills, fading from one picture to another, transitions and masking. It is also possible to convert all vector objects, including text, into curves. Another fascinating feature is the ability to use pretty much any object as a "magnetic" guideline by moving it to the guideline layer; of course, this is in addition to the horizontal and vertical guidelines and the standard grid.
Sketch is already in real use, as the GIMP pin badges at the last GNU/LinuxTag prove. These were created by Simon Budig with Sketch, as was the poster of the first Libre Software Meeting in Bordeaux.
Sketch can easily be extended with Python scripts and plug-ins, and since Sketch itself is written in Python, all user scripts have full access to Sketch objects. New object types and import/export filters can also be added through plug-ins. Python was the language of choice for Bernhard Herzog because the object-oriented approach is a very natural fit for vector-drawing programs, and Python's flexibility makes experimenting with new concepts much easier than it would be in C or C++. Sketch therefore relies almost exclusively on Python, with only a few modules written in C.
Among Sketch's weaknesses are the limited text support and the lack of a way to enter coordinates and the size of objects directly by hand, although these problems will probably be solved in the foreseeable future. Right now Sketch is being migrated from Tkinter to GTK; completing this migration is the primary goal for the next stable version (0.8). The long-term goal is to make Sketch a complete vector-drawing program, able to compete with proprietary solutions.
In order to achieve this, the import/export filters still need to be completed and expanded, and the aforementioned text support needs to be improved. New features like transparency effects, vector fill patterns, CMYK and colour


COMMUNITY

management are also planned. So there's still quite a bit waiting to be done, and Bernhard welcomes any help. In his eyes the filters in particular are a good way to get into Sketch development, as they don't require complete knowledge of the Sketch internals. Furthermore there is documentation in French which should be translated into English, and help with the Web page is equally welcome.
However, voluntary work, which is more or less the classical way, is not the only possible way to support the development of Sketch. Bernhard Herzog works for Intevation, a German company specialising in Free Software. Even though Intevation tries to give Bernhard as much time as possible to work on Sketch during his regular hours, they cannot afford to have him work on Sketch full-time. Therefore Intevation has created an online account, which can be found via the Sketch home page, making it possible to buy time for Sketch development in US$10 steps. These payments should not be understood as donations, but rather as an investment in the future possibilities gained through Sketch. Similar approaches are often described as a "tipping culture": a voluntary payment of an acceptable amount, triggered by the understanding that this service should still be available tomorrow. So if you lack the time or the know-how to get actively involved in developing Sketch, you can let Bernhard Herzog do it for you by buying him time that he can spend on Sketch.
Should you ask for special things to be included in Sketch as a feature, Bernhard has requested that you also mention any possible patent problems. Adobe holds some US software patents regarding the transparency features of PDF 1.4 and some other parts of PDF >= 1.3. At the moment Adobe does not ask for patent fees, provided the algorithms are used for PDF processing. This may mean that Sketch cannot implement these features, as its main purpose is not PDF processing. It's also not clear whether the "Scalable Vector Graphics" (SVG) format poses patent-related problems for Free Software. So it may be that at least some features of Sketch may not be used for commercial purposes in the USA. The same will be true for Europe should these patents become valid here. If you haven't signed the Eurolinux Petition yet, you should do so as soon as possible, to support the movement against software patents in Europe.

Info
Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
Homepage of the GNU Project: http://www.gnu.org/
Homepage of Georg's Brave GNU World: http://brave-gnu-world.org
"We run GNU" initiative: http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
Ganesha's Project homepage: www.ganeshas-project.org
Logidee-tools homepage: http://www.logidee.com/tools
HTMLDOC homepage: http://www.easysw.com/htmldoc/
HTMLDOC PDF-O-Matic: http://www.easysw.com/htmldoc/pdf-o-matic.php
Fast Light ToolKit homepage: http://www.fltk.org
GNU Passwords On Card homepage: http://www.gnu.org/software/poc/poc.html
GNU software directory: http://www.gnu.org/software/
Sketch homepage: http://sketch.sourceforge.net
Intevation homepage: http://www.intevation.de
Eurolinux Petition: http://petition.eurolinux.org

Enough for today
Since the question is raised repeatedly, I'd like to point out that the Brave GNU World features all Free Software, whether it is part of the GNU Project or not; every type of Free Software project can get featured. Alright, that's enough for today. As usual, I'd like to ask for comments, questions, ideas and introductions to new projects by mail to the usual address. ■


