COMMENT
General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
Web: www.linux-magazine.co.uk
Subscriptions: subs@linux-magazine.co.uk
Email Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk
CD: cd@linux-magazine.co.uk
Editor
John Southern jsouthern@linux-magazine.co.uk
Assistant Editor
Colin Murphy cmurphy@linux-magazine.co.uk
Contributors
Philippa Wentworth, Alison Davis, Richard Ibbotson, Jason Walsh, Steven Goodwin, Janet Roebuck, Ruediger Berlich
International Editors
Harald Milz hmilz@linux-magazin.de Hans-Georg Esser hgesser@linux-user.de Ulrich Wolf uwolf@linux-magazin.de
International Contributors Thomas Drilling, Helga Fischer, Heinz Mauelshagen, Björn Ganslandt, Georg Greve, Jo Moskalewski, Christian Perle, Stefanie Teufel, Anja Wagner, Carsten Zerbest Design
Advanced Design
Production
Rosie Schuster
Operations Manager
Pam Shore
Advertising
01625 855169 Carl Jackson Sales Manager cjackson@linux-magazine.co.uk Verlagsbüro Ohm-Schmidt Osmund@Ohm-Schmidt.de
Publishing Publishing Director
Robin Wilkinson rwilkinson@linux-magazine.co.uk
Subscriptions and back issues: 01625 850565
Annual Subscription Rate (12 issues) UK: £44.91. Europe (inc Eire): £73.88. Rest of the World: £85.52. Back issues (UK): £6.25
Distributors
COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE
R. Oldenbourg
Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England.
Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, emails, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 14715678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.
Disclaimer Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine or any material provided on it is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.
Technical Support Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.
Current Issues
WE NEED A FEW FREE MINDS
The New Year has started well enough in Linux Land: the Christmas period saw a host of new projects and releases; some big corporations have recently chosen Linux as their corporate operating system; and new distributions have made Linux even easier to use. Maybe it’s too much winter weather but I have an uneasy feeling about some of the companies promoting Linux. My problem is with those that claim to be Linux friendly, but really just seem to be jumping on the bandwagon. How can I tell which are which? Well to me, the good companies provide source code and use open standards. They release products under the GPL and encourage development. I can see the need for proprietary code if you’re a business. I understand you need to turn a profit and I realise that times are hard for many companies. I do use proprietary code and I even like some of it, but I prefer not to use it. I go out of my way to find companies that support GPL and Open Source. The others seem to have missed the point. Linux is not just an operating system produced by a group of technogeeks – it’s a way of life. Being open means other people look at the code, the code improves and the product is better. No one has a reason to fear, as there is no unknown. I’ll put my money where the source is and wait for the others to see the light. Happy coding!
John Southern, Editor
We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux Community. We're not simply reporting on the Linux and open source movement - we're part of it.
NEWS
LINUX NEWS Tommy Hilfiger Chooses IBM and Linux The Tommy Hilfiger corporation has chosen IBM, eOneGroup and Linux for its new e-business infrastructure, as the company works to expand its presence among thousands of US specialty retailers, as well as its worldwide manufacturing facilities and employees. Tommy Hilfiger selected technologies from IBM, including IBM eServer products running Red Hat Linux, DB2 Universal Database, Java and eOneGroup’s suite of software products to create a new B2B portal (www.tommyb2b.com) that enables Tommy Hilfiger’s specialty retailers and sales force to view, via the Web, selected core and seasonal clothing products such as jeans and other inventory items in real-time,
and place, track and ship orders. The new technologies have also allowed the development of a business-to-plant Web site that links Tommy Hilfiger’s production facilities around the world, which is expected to speed design-to-product time and significantly decrease costs; and a virtual employee store that for the first time enables Tommy Hilfiger personnel to shop online around the clock. The Web infrastructure, designed and built by IBM business partner eOneGroup, includes IBM eServer xSeries servers running Linux to handle Web-based transactions, integrated online with IBM eServer iSeries servers running Java that are tied to existing wholesale and warehouse management systems.
Trolltech launches developer contest Trolltech is launching a worldwide developer contest to help generate applications for the Sharp Zaurus SL5000D – the first PDA from a major power in consumer electronics to ship with embedded Linux. “We thought this would be a great way to help introduce the world to the Zaurus, to embedded Linux, and to Qtopia, our newly-named application environment,” says Haavard Nord, Trolltech’s CEO. “These technologies give users the first viable ‘Third Way’ in PDA computing, and we are very happy to have been chosen by Sharp to provide the GUI framework, windowing system, and application environment for this device”. The contest, scheduled to run until 11 February 2002, will offer prizes to applications in five categories: games, entertainment/educational, business, system tools, and communications. The prizes include $10,000 cash, Sharp Zaurus SL-5000D PDAs, laptops, television and stereo equipment. For more information, including the entry form, contest rules, and judging criteria, visit http://contest.trolltech.com.
Ensuring service with Caldera The Life Insurance Corporation of India (LIC), the largest insurance company in India, has installed more than 6,000 Caldera UnixWare 7 servers to manage its records and operations across the country. LIC uses UnixWare 7 to link over 2,000 branch offices throughout India and to serve approximately 11.6 million customers. UnixWare 7 links LIC’s local area networks (LANs), metro area networks (MANs), wide area networks (WANs), interactive voice response system (IVRS), and other technologies. This enables every LIC branch office to act as a standalone entity with mutual access to all transactions, information and computer support for all policyholders. “LIC’s implementation of UnixWare 7 demonstrates the ability of Caldera products to handle a high load without shuddering,” said Ken Bergenthal, Caldera’s Managing Director for Asia Pacific. “It provides enterprise-class computing on Intel-based processors, delivering superior reliability, availability and performance 24/7, even in the most remote locations.” In addition, UnixWare 7 servers host the applications used to settle all claims of policy holders and agents’ commissions. This increases productivity, helping to manage more than one million transactions per day.
SuSE fills the board SuSE Linux has announced the appointment of a second Executive Board member and the completion of its management team. Gerhard Burtscher, an experienced IT manager, assumes the position of Chief Executive Officer and takes on the responsibility for the five newly created Business Units of SuSE. After having assumed CEO tasks during the refocusing of the company, CEO Johannes Nussbickel returns to his original function as CFO, heading Finance and Administration. Gerhard Burtscher has been active in the IT and telecommunications sector for more than 20 years. He began his career in the Marketing division of Nixdorf Computer AG and held executive positions at Texas Instruments, Digital Equipment, MIPS Technologies, Silicon Graphics, and Siemens-Nixdorf. The focal points of his activity were marketing, sales, and controlling. Since 1995, his main field of activity has been the establishment of European subsidiaries for international technology providers. During his career he gained extensive national and international experience as Managing Director and Business Unit Manager with these companies.
MontaVista appoints new VP MontaVista Software, Inc. has named Raymond Mak as its new Vice President of sales for Asia Pacific. Mak, who has more than 14 years of experience in managing IT business in Asia, will be responsible for MontaVista’s sales and operations in the region. Based in Hong Kong, Mak will expand MontaVista’s current sales personnel in Asia Pacific and will work closely with the growing network of MontaVista distribution channels in the region. The Hong Kong base will complement the company’s presence in Japan, where MontaVista already has an office in Tokyo, headed by Hitoshi Arima. Mak most recently served as Vice President of Asia Pacific at Caldera, where he managed Unix and Linux server software sales through various channel partners. Prior to Caldera, he was a sales director at Apple Computer for nine years, primarily developing and managing sales channels in Hong Kong and Macau.
Korean Air Sendmails with IBM
Sendmail has appointed C’QUEST International Asia Ltd. as its distributor in South Korea. C’QUEST will offer Sendmail, Inc.’s email solutions to service providers and enterprises in South Korea. The partnership of Sendmail, Inc. and C’QUEST has already resulted in a contract with Korean Air, the largest airline in Korea, who will implement Sendmail, Inc.’s solution for Linux on IBM’s eServer z900. “As the largest airline in Korea, we need a robust, highly reliable email system to manage our
communication across different departments and divisions,” said Mr. D.K. Shin, system technology team manager of Korean Air. “Sendmail solutions offer us a highly reliable, scalable and manageable email system at a much lower cost than before”. According to the performance tests conducted by Sendmail, Inc. and IBM, a single IBM z900 mainframe can support more than two million users, making it the largest single-server email system on Linux available in the industry.
BakBone gives support to MySQL databases BakBone Software has released a new NetVault Application Plugin Module (APM) for MySQL v4.0 databases. Available on Linux and Windows NT platforms, the MySQL APM provides NetVault customers with a high performance, reliable and easy-to-use hot backup and recovery solution for protecting corporate data. “The new MySQL APM is the latest addition to NetVault’s comprehensive line of add-on modules and is the first product on the market to provide fast, online backup of MySQL databases,” said Cali Viviano, Product Manager of BakBone Software. “This release represents an extension of NetVault’s ongoing commitment to provide application-specific support to the Linux community. NetVault offers hot application backup for more applications running on Linux than any other storage management software manufacturer”. The APM is a fully featured solution that includes superior device management and dynamic drive sharing capabilities for distributing data and maximising resource utilisation while increasing performance. Seamless precision integration with the native backup API allows NetVault to present a common, easy to use GUI for all file system and application specific operations.
LynuxWorks to license Apogee Aphelion IDE Apogee’s Aphelion Integrated Development Environment (IDE) is to be offered to LynuxWorks’ customers deploying Java applications on the LynxOS real-time or BlueCat Linux operating systems. The IDE gives developers access to state-of-the-art Java technology and translates to time and cost savings for OEMs who want application portability and code re-use options, as they develop and manage product life cycles. VeriFone Inc., a worldwide leader in electronic payment solutions and a major provider of point-of-sale terminals, will become the first company to use LynxOS with the Aphelion IDE to develop its new line of Java-powered Sapphire site controllers. Aphelion is a comprehensive IDE for developing high-performance embedded Java and Java/C/C++
applications with deployment on Sun’s Connected Device Configuration (CDC) and Connected Limited Device Configuration (CLDC) Java2 Micro Edition (J2ME) Virtual Machines. Developed for embedded systems, Aphelion removes the obstacles hindering the use of Java for embedded systems such as: large run-time footprint, slow runtime execution, and non-deterministic behaviour of Java applications when running on Virtual Machines.
iTS-LiNUX distribute Guardian Digital Guardian Digital, Inc. has announced a key partnership with Cheshire-based iTS-LiNUX. The terms of the partnership include exclusive rights to distribute Guardian Digital Internet server software. EnGarde Secure Professional is a comprehensive software solution that provides all the tools necessary to build a complete online presence. It offers unsurpassed levels of security, ease of use, and the most sophisticated Open Source Web-based management system available, capable of supporting thousands of virtual Web sites, email and DNS domains.
da Vinci takes off The software development company da Vinci, which was formed earlier this year to serve the growing demands for UK-based teams of software developers, has had a successful year resulting in a 25 per cent increase in development staff. Having carried out several development projects since its inception, da Vinci is now officially launching itself to an industry that seems to be crying out for a company that can develop complete software solutions for all platforms within budget and on time. The team behind the company has extensive expertise in developing software for Macintosh, Linux and PC, with each engineer having an average of 12 years’ experience. Earlier this month, da Vinci won a contract with iInventory for the next phase of development of their LANauditor software, due for release in the first quarter of 2002. LANauditor automates hardware and software asset tracking for networked and standalone computers. First launched in 1990, the product has a well established, worldwide user base.
SuSE 7.3 for the PowerPC The latest edition of SuSE is the most comprehensive Linux version for Motorola PowerPC processors on the market today. SuSE Linux 7.3 PowerPC Edition comes with 8 CDs containing a state-of-the-art Linux operating system and more than 2,000 applications. SuSE Linux 7.3 PowerPC Edition offers software for virtually any purpose, including image processing, security solutions, emulators, network and e-commerce tools. The improved KDE 2.2.1 desktop sets new standards for functionality, ergonomics, and user-friendliness. Furthermore, in the KDE control centre new arrangements have been made to organise the numerous applications in sections such as archiving, graphics, publishing, and network. The virtual machine MOL 0.9.60 (Mac on Linux) was also improved substantially. It allows the use of a network-capable MacOS under Linux in window or full-screen mode. Highlights such as RealPlayer 8, Opera 5.0, Samba 2.2.1a, and GNOME 1.4.1 complement the software equipment. “SuSE’s mission is to make Linux available for whatever system a user has,” explains Jasmin UlHaque, Commercial Director of SuSE Linux UK Ltd. “Now, Mac users can have all the benefits of the latest developments in Linux and still run their native Macintosh system and applications. We want everyone to experience Linux”.
Red Hat delivers for IBM
Red Hat is set to collaborate with IBM to deliver Open Source software solutions, services and support for the entire IBM eServer product line. Red Hat support for IBM eServer platforms includes:
● zSeries mainframes (64-bit kernel);
● iSeries integrated servers for small and medium business (32-bit kernel);
● pSeries UNIX servers (32-bit kernel); and
● xSeries Intel-based servers (32 and 64-bit kernel).
For the IBM eServer product line, Red Hat will provide a base solution offering that includes the Red Hat Linux operating system, product support services and professional services, as well as an upgrade offering. Red Hat will also provide a wide array of service upgrades.
Caldera evolves new Volution
Caldera has begun previewing a pre-release version of Caldera Volution Manager 1.1 and will begin shipping the new version during the first quarter of 2002. Caldera is extending the Webbased systems management capabilities of Volution Manager to support the latest versions of all major distributions of Linux as well as Caldera’s Unix products. In addition, Caldera is introducing several new systems management features in Volution Manager to help system administrators and solution providers save time, scale resources, and ease deployments in a cost effective manner. Caldera Volution Manager is a secure, Web-based, systems management solution that reduces the cost of deploying and managing established versions of Linux and Caldera OpenServer, UnixWare and Open Unix. Volution Manager does so by enabling secure, remote management, monitoring and updating of multiple systems through a browser.
Adaptec support Linux RAID Adaptec’s RAID driver has been embedded in the Linux kernel, allowing future versions of Linux operating systems based on the kernel to support Adaptec’s family of SCSI RAID and high-end ATA RAID PCI cards straight out of the box. Adaptec’s Ultra160 SCSI drivers are already embedded in the Linux kernel and the company plans to embed drivers for its leading Ultra320 products as well. According to research firm IDC, worldwide shipments of servers with Linux operating systems are expected to grow from 620,000 in 2001 to 1.8 million in 2005, a rise in market share from 13 percent to 28 percent. In the past, Adaptec’s RAID driver was maintained as a value-added feature by only certain Linux kernel distributors such as Mandrake and SuSE. Now that the driver is embedded in the mainstream kernel – kernel 2.4.10 – the new code, available at the Linux kernel FTP site, can be reviewed and maintained by virtually any kernel distributor. As part of that commitment, Adaptec will be providing source code for the OSM layer of the Linux drivers for Adaptec’s fibre channel host adapters. The company also plans to continue to add to the list of products it supports under Linux and open-source BSD operating systems, including FreeBSD, NetBSD and OpenBSD.
SavaJe selects Espial Escape
SavaJe Technologies has licensed the Espial Escape browser for inclusion with its new SavaJe XE operating system. Launched earlier this month, SavaJe XE is optimised to run Java applications efficiently on embedded and handheld devices. By selecting Espial Escape as the default browser, SavaJe has ensured its customers have a fast, compatible, and above all secure browser for their enterprise applications; and that they experience faithful rendering of even the most complex Web pages. SavaJe’s customers will also benefit from the reduced memory footprint of Espial Escape – a key requirement since SavaJe XE is designed to power the resource-constrained ARM-based handheld devices that are proving extremely popular in today’s enterprises.
Redmond Linux in action
A new distribution has been launched, under the name of Redmond Linux. The Personal Edition is designed with ease of use in mind: Redmond Linux loads pre-configured for Internet access, home or small office productivity, financial management, multimedia, entertainment, and more. The Network Browser icon helps bridge the gap between Microsoft and Linux networking by letting you access network resources by browsing to them in a similar manner to the way Microsoft’s Network Neighborhood works. Each networked Redmond Linux machine becomes part of a Windows network. Further information can be found at http://www.redmondlinux.org/home.php
GNOME NEWS
GNOMOGRAM
LOOKING FORWARD
Björn Ganslandt looks at GNOME Foundation, Accessibility in GNOME, New income for Ximian, Overflow, GDKXFT and Devhelp
Accessibility in GNOME
Behind the friendly word Accessibility lies the objective of making GNOME accessible not only to typical users, but also to those with special needs. As in the case of Usability, there exists an associated project, which is supported by Sun. The basis of the Accessibility framework is the Accessibility ToolKit or ATK, through which the data goes to an “Assistive Technologies Service Provider Interface” (AT SPI). This interface in turn serves diverse programs, one such concept being speech output for example. Since the definition of a GTK interface already includes any amount of usable information, only a small amount of information has to be passed on directly to ATK. Lots of information is read out by another library with a nice acronym, namely the GNOME Accessibility Implementation Library (GAIL), and passed on to ATK. This means that GNOME programs can be made substantially more usable by the disabled, without a great deal of additional expense. Such a library could obviously be written for any other toolkit such as Qt or Motif, for example. Another option might be bridges to existing accessibility frameworks, such as Java’s JA, enabling various programs to communicate via the same output device. Whether ATK will actually become established is of course another story, but it has at least made a good start.
New income for Ximian
Just in time for the LinuxWorld Expo, Ximian has decided that man cannot live by fuzzy monkeys and T-shirt sales alone, and has organised a range of new sources of income. Following the example of the common Linux distributors, there now exists both standard and professional versions of the Ximian desktop, which differ mainly in the support period and StarOffice. The two versions will cost $29.95 and $49.95 respectively, and can be downloaded – without support, naturally – free of charge. Red Carpet will also be making money in future. Faster downloads via Red Carpet Express will in future cost $9.95 per month, and with Corporate Connect the management and updating of software within larger groups becomes much simpler, from $2,500 plus an annual fee. So that Red Carpet does not remain limited to software from Ximian, a partner program has also been developed, which is an option Mission Critical Linux took advantage of immediately at the LinuxWorld Expo.
Libraries required
Overflow: LibXML, FFTW
Gdkxft: LibXft (should be included in XFree86)
Devhelp: GConf >= 0.12, GtkHTML >= 0.10.0, Gnome-print >= 0.29, LibXML >= 1.8.10
GNOME Foundation appoints Timothy Ney It’s been common knowledge for some time that the GNOME Foundation has had a candidate for the post of Executive Director – exactly who that candidate was has only recently been revealed with the appointment of Timothy Ney. With experience in the Free Software Foundation
and a few other non-profit enterprises, Ney appears to be the ideal candidate for the position. In future he will be concerned with operational activities and the representation of the GNOME project. Since his salary depends on successful fund-raising, Timothy Ney should have
sufficient motivation to organise as many donations as possible. This makes him the only non-elected director who gets paid for his work. Nevertheless, Ney comes highly recommended and will hopefully contribute to the strengthening of contacts between business and the GNOME Foundation.
Overflow
With Overflow, relatively fast flow-oriented programs can be developed, by networking various functions. In this it is similar to programs like Simulink and LabView, even if the range of functions might appear somewhat smaller by comparison. Since Overflow is part of the Open Mind Speech Project, these functions also encompass, in addition to fuzzy logic and neural networks, analyses for human speech. As with other programming languages, for the sake of clarity, it is also possible to define your own functions, which can then be used in the MAIN subnet. Although this is the main task of Overflow, the functions are not restricted to the analysis of data, but also enable graphical inputs and plots, and in future, image processing will also be possible. Once you’ve set up a network, this can not only be executed but also translated into C++ code and compiled. This also makes it possible to produce comparatively fast programs. An optimised fast-Fourier transformation is also critical for the speed of many functions, which is why Overflow relies on FFTW (Fastest Fourier Transform in the West). If you want it even faster, you might also want to get your hands on some processor-optimised offspring of FFTW. There are also plans to integrate Overflow into Piper, which will make it possible to run Overflow components on different computers. Nevertheless, Overflow will continue to be available in future as a separate program.
Figure 1: An audio effect in Overflow
Devhelp
The fact that every larger Linux library now comes with some documentation of its own is a promising development, but this can also lead to a certain amount of confusion. Devhelp has set itself the task of bringing order to the chaos of the various pieces of documentation. The program manages documentation files in books, for which metadata is stored in so-called spec-files. Thanks to Gnome-vfs, it makes no difference whether the actual content of the books is on your hard drive or on a network, although there are lots of books available for download at the codefactory. Of course it is also possible to search the books, and the search method should be familiar to all Emacs users. Since the target group of Devhelp and IDEs are intended to be roughly the same, it’s obviously a good idea to integrate the program into some common IDEs, which should occur in the CVS, at least in the case of Anjuta and Gide.
Figure 3: Devhelp can also download documentation from the Internet if required
Info
GNOME Accessibility Project: developer.gnome.org/projects/gap/
Ximian: www.ximian.com
Overflow: freespeech.sourceforge.net/overflow.html
Open Mind Speech Project: freespeech.sourceforge.net
Piper: www.bioinformatics.org/piper/
FFTW: www.fftw.org
gdkxft: gdkxft.sourceforge.net
Devhelp: devhelp.codefactory.se
Devhelp books: devhelp.codefactory.se/books.php
Gdkxft
A diverse range of patches for GTK+ 1.2 has existed for some time now, offering soft focus fonts and facilitating binary compatibility, for example. However, compiling GTK itself is not everyone’s cup of tea. For those unwilling to wait for GTK+ 2, there’s now a minimally-invasive method, in the form of Gdkxft. In order for this to work, Gdkxft sets the environment variable LD_PRELOAD and thus ensures that libgdkxft.so is loaded before any other libraries. This means that the library can intercept calls really intended for GDK, and display soft-focus fonts via Xft. Obviously LibXft is necessary for this but this ought to be included in XFree86. The configuration of Xft here is taken over by its own script gdkxft_sysinstall, which also installs its own GTK theme with soft focus fonts. Which fonts the library gives a soft focus can be specified in ~/.gdkxft, since Xft certainly can’t handle all fonts. Nor does Gdkxft cope with all programs, but with “gdkxft_sysinstall -u” and “make uninstall” the whole thing is also easy to remove again.
Figure 2: Soft edges, wherever you look
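The install script takes care of this automatically, but the mechanism can be sketched by hand. This is only an illustration, and the path to libgdkxft.so is an assumption that depends on where the library ended up on your system:

# load the interposing library ahead of GDK (library path is an assumption)
export LD_PRELOAD=/usr/local/lib/libgdkxft.so
# then start any GTK+ 1.2 program as usual, for example:
gimp &

Unsetting LD_PRELOAD again returns the programs to their normal, un-smoothed fonts.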
NEWS
K-splitter
DIY DIALOG
Stefanie Teufel takes a look into the KDE world and shines the spotlight on some useful DIY dialog boxes and inventive prizewinners
The winner is...
There’s been a lot of talk in the media over the last few months concerning IBM’s commitment to Linux. The Big Blue developer has not only lost its heart to a penguin, it is also interested in our favourite desktop. Who knows, perhaps this encouraged a few of you to enter your own theme in IBM’s KDE Theme Contest (http://www-106.ibm.com/developerworks/linux/library/l-kde-c/). The winners have now been chosen, and the overall winner is Matthias Fenner, whose theme can be seen in Figure 1. Naturally, this vividly coloured work of art is also available for download, at http://www-106.ibm.com/developerworks/linux/library/l-kde-c/988319063-79002-lms-kde2-theme.ktheme. The second and third places are also on display – just take a look at http://www-106.ibm.com/developerworks/linux/library/l-kde-c/theme2.html and http://www-106.ibm.com/developerworks/linux/library/l-kde-c/theme3.html, where the Happy People and Egypt Office themes can be admired. Apart from the prestige of winning the competition, the prizewinners are allowed to select a non-commercial open-source project, to which IBM will shortly be sending the prize money (1,000, 2,000 and 3,000 dollars). Which projects will benefit from this had not yet been decided as we went to press.
Figure 1: The winner on the KDE2 desktop
Another prize
These days, the developers of KDE pick up prizes the way dogs pick up fleas. With the addition of the Tuxie 2001, the KDE project has added yet another title to its list of honours. This time the award was for the Jack of all trades of the desktop environment – the Konqueror – which was voted tops in the category of Best Web Browser.
Listing 1: messageBox
#!/bin/bash
if test -z `dcop|grep kio_uiserver`; then
    kio_uiserver
fi
JOB_ID=`dcop kio_uiserver UIServer newJob 0 0`
dcop kio_uiserver UIServer messageBox $JOB_ID 5 "$1" Information a b > /dev/null
dcop kio_uiserver UIServer jobFinished $JOB_ID
Yet more programmers! Anyone in need of a little extra push to start programming, might find an incentive in the KDE programming competition. Pro-Linux, in co-operation with Ralf Nolden of the KDE Project and the KDE League, is organising a big competition for KDE programmers and all those who would like to be. All hobby developers can send in their new KDE programs, plug-ins, applets and suchlike. Portings of existing applications onto KDE will also be taken into account. Programs that have already been published are excluded from the competition, as are members of the KDE Project and Pro-Linux colleagues. If your fingers are already itching, before starting you should briefly note that participating programs must be published under an Open Source Licence and submitted in source code (as tar.gz archive). They must also run under KDE 2.2 and be compatible with a simple configure;make. Please send your entries to jury@pro-linux.de.
Picture gallery
Since only very few of us were able to experience KDE at the LinuxWorld Expo 2001 in San Francisco live, Rob Kaper at http://jadzia.nl.capsi.com/~cap/digicam/2001-09-01-lwce/ has set up a small picture gallery of the KDE stand and the exhibition for all KDE fans with a yearning for faraway places on the Net.
Glued to the TV Anyone who read this column back in issue 14, will be glad to hear that KwinTV has finally got a new maintainer. George Staikos couldn’t bear to see the project languishing in the doldrums and has personally taken over its further development. Although the new project does not yet have a comprehensive home page, at http://www.staikos.on.ca/~staikos/kwintv/ TV enthusiasts will in future be able to find frequent updates of the easy-to-use KDE TV program.
KDE answers back
So, you have a script that you want to use under KDE and would love to have a suitable information window with the customary look and feel to go with it? Then Karl-Heinz Zimmer has the answer for you. He has developed a shell program, which is superbly suited to firing off a KDE infobox after the end of a long script. Place the shell script from Listing 1 under the name messageBox in a directory which is in your search path, for example under /usr/local/bin, and make it executable with the command

chmod a+x messageBox

If you now call it up in a terminal emulation with a friendly messageBox “KDE Rocks”, you will get a dialog box as in Figure 2, which is just waiting for you to give your OK. If you’d like to take a completely innocuous look at what can be done using such dialog boxes, you should first put Listing 2 under the name dialoguequery in the search path. Before you start it in a terminal emulation with the command dialoguequery, you must obviously make it executable. When you have typed everything out correctly, you should be gratified to see dialog boxes as in Figure 3.
Figure 2: Your own dialogs are the nicest
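A typical use is to tack the script onto the end of a long-running job. The command below is only an illustration (the build target is an example, not from the article):

make bzImage && messageBox "Kernel build finished"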
Listing 2: dialoguequery
#!/bin/sh
if test -z `dcop|grep kio_uiserver`; then
    kio_uiserver
fi
JOB_ID=`dcop kio_uiserver UIServer newJob 0 0`
REPEAT=3

while test "$REPEAT" = "3" ; do
    echo "----- new round -----"
    if test 3 = `dcop kio_uiserver UIServer messageBox $JOB_ID 1 "Do you <b>really</b> want to do it?" "Short refer-back" "&Yes of course" "Er, &no, I'd rather not"`; then
        echo You have selected Yes.
    else
        echo You have selected No \(or pressed Esc\).
    fi
    if test 3 = `dcop kio_uiserver UIServer messageBox $JOB_ID 2 "Think about it, is this really <i>the way</i> you want to continue?" "Incredibly important question" "Of course I do" "&No I have lost my nerve"`; then
        echo You have selected Yes.
    else
        echo You have selected No \(or pressed Esc\).
    fi
    RES=`dcop kio_uiserver UIServer messageBox $JOB_ID 4 "<b>Risk of data loss</b><br>Shall we just quickly get everything <i>saved</i>?" "Incredibly important question" "&Yes, super idea." "&Nah, don't care."`
    case "$RES" in
        2) echo You have selected Cancel \(or pressed Esc\).;;
        3) echo You have selected Yes.;;
        4) echo You have selected No.;;
    esac
    REPEAT=`dcop kio_uiserver UIServer messageBox $JOB_ID 1 "Repeat?" "Question" "&Yes" "&No"`
    if test "3" = "$REPEAT"; then
        echo You have selected Yes, so repeat the whole merry dance.
    else
        echo You have selected No \(or pressed Esc\)
        echo "- that's quite enough of that. :-D"
    fi
    echo "---------------------------"
done
dcop kio_uiserver UIServer messageBox $JOB_ID 5 "<center>I<br>HAVE<br><b>F I N I S H E D</b></center>" Information a b > /dev/null
dcop kio_uiserver UIServer jobFinished $JOB_ID
Figure 3: Silly questions can be fun
FEATURE
The POV-Ray raytracing utility
PERSISTENCE OF VISION
Should you have an evening, weekend, or a whole year to fill, then you might just find raytracing a worthy pursuit. Colin Murphy finds out why
POV-Ray (Persistence Of Vision Raytracer) is one of the many utilities that you may have on your system without even realising it was there. As such, you may have never got to play around with it, missing out on the chance to while away the hours with nothing more than idle tinkering, a bona fide pursuit if ever there was one.
Raytracing
For the uninitiated, POV-Ray is a raytracing tool. Raytracing is a method that enables you to create stunning graphic images, be they abstract, geometric or photo-realistic. This will take a little time to learn, as it calls upon your imagination and requires a little patience. The first step on the road to creating your graphical masterpiece is to “describe” what you want to depict in your picture. This description comes in the form of a programming language, or uses an interactive modelling system, like a CAD package. Either way, you’ll need to specify what objects are in your imaginary world, what shape they are, where they are, what colour and texture they have and where the light sources are to illuminate them. Having done all of this, you feed it into your ray tracer then sit back and wait, which is a necessary evil, as the rendering can take some time. Without having some idea of how the images are built up in a raytracing description file, you won’t be able to fully appreciate what is going on under the
hood, so we will leave the CAD package-like development systems to one side and ‘play’ in text mode for the moment. POV-Ray is freely available for download. If you want the complete Linux distribution of POV-Ray 3.1g, including X Window and SVGAlib ELF binaries, documentation, sample scenes, and include files, download the povlinux.tgz file, which is 1.5Mb.

cd /usr/local/
tar -xzvf /download directory/povlinux.tgz

Read the README.linux file and if all is well

povray31/install

will call up the installation script. Using this method, I found that a ‘povray’ executable wasn’t created, but I did get x-povray and s-povray – you might like to set up a symbolic link to whichever you will use on your system (see the example below). I will just refer to the executable I use. There are two documents you should read to make most use out of POV-Ray: the README mentioned above and povuser.txt, which will also be in /usr/local/povray31/ if that’s where you uncompressed the files to. This second text file contains a beginners’ tutorial, a complete reference to the Scene Description Language, which you will use to code your efforts, and other information. This file is also available from the POV-Ray site in other formats, including HTML, which might make it more useful to some.
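For instance, assuming the binaries ended up in /usr/local/bin (your install location may well differ), a link could be created like this:

ln -s /usr/local/bin/x-povray /usr/local/bin/povray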
Create those images
To get you started, we’ll run through some basic examples. In the /scenes/ directory you’ll find some .pov files, which we’ll use as our examples to take a look at. The command

x-povray +i /usr/local/lib/povray31/scenes/incdemo/shapes2 +o /home/LinuxMag/shapes.png +W800 +H600 +D0
Figure 1: POV-Ray example image shape2. Well, we all have to start somewhere. On a 450MHz machine this took 15 seconds to render
will produce the output shown in Figure 1. The +i and +o switches define the input and output files; Issue 16 • 2002
the +W and +H define the resolution of the output file. The +D0 switch is a must for beginners, as it allows for some instant gratification that POV-Ray is doing something by printing a version of what it has rendered so far in an X window. This no doubt takes up valuable processor cycles, but while you’re playing with POV-Ray it can only help.
Figure 2: A more complex picture in only 500 more bytes, which only took another minute to render
Figure 3: A much more complex example of rendering, which includes reflections
Some code
This has come from a file less than a page long and we will run through just some of the example lines in that code. The camera section defines how the objects in the file are viewed:

camera {
  location <10, 10, -20>
  look_at <2, 0, 0>
}

Briefly, location <10, 10, -20> places the camera up ten units, ten units to the right and back twenty units from the centre of the raytracing universe (which is at <0,0,0>). By default +z is into the screen and -z is back out of the screen. Also, look_at <2, 0, 0> rotates the camera to point at those coordinates. The look_at point should be the centre of attention of our image.

light_source {<0, 1000, -1000> colour LightGray}

The vector in the light_source statement specifies the location of the light, set to some extreme values to make the shadows obvious. The light source is a tiny invisible point that emits light. It has no physical shape, so no texture is needed. There are two light sources referred to in the code listing. The remaining lines describe the construction of the objects in the image using Constructive Solid Geometry. POV-Ray allows us to construct complex solids by combining primitive shapes in a number of different ways. Full details of how this works can be found in the povuser documentation. Experimentation really is the order of the day. You should tinker with the parameters in your favourite text editor and render a new image from your code. Code can be built up very quickly – the image shown in Figure 2 is only another 500 bytes longer than the original example, which is almost an effort worthy of printing and mounting.
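If you’d rather start from something smaller than the bundled scenes, a complete scene needs little more than a camera, a light source and an object. The sketch below is not taken from the sample files – the file name and all of the values are purely illustrative – but it should render with the same switches used earlier:

cat > minimal.pov <<'EOF'
// a hypothetical minimal scene: one light, one sphere, a checkered floor
camera { location <0, 2, -5> look_at <0, 0, 0> }
light_source { <10, 20, -20> colour rgb <1, 1, 1> }
sphere { <0, 0, 0>, 1 pigment { colour rgb <0.2, 0.4, 1.0> } }
plane { y, -1 pigment { checker colour rgb <1, 1, 1>, colour rgb <0.5, 0.5, 0.5> } }
EOF
x-povray +i minimal.pov +o minimal.png +W320 +H240 +D0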
Graphical user interfaces
Working from the command line might be a little off-putting for some people, particularly for those that like a friendly environment. There are plenty of GUI interfaces for POV-Ray available, and you should be able to find one that works on your desktop. Povfront is one such front-end. It’s designed to run under any flavour of Unix using the GTK and glib libraries, it’s POSIX compliant and has been successfully tested on Linux Red Hat 5.2/6.0/6.1 and Mandrake 6.0/6.1. It aims to provide an easy way to launch POV-Ray rendering with a graphical interface, which provides all the available options – even the script-only ones. The only requirement is that you have the GTK+ (1.2 version) or later libraries installed.
Figure 4: Povfront will allow you easy access to all the switches you can use in PovRay, and then some
Support
There is a very active user base for POV-Ray and raytracing in general. There are plenty of sites on the Web offering tutorials and other documentation, as well as people’s own efforts in creating objects and textures for you to use in your work – there are even competitions to enter. Take a look at Figure 5 to see what you can achieve with some open source software and three days to render!
Info
POV-Ray homepage: http://www.povray.org/
Povfront: http://perso.clubinternet.fr/clovis1/
FEATURE
Connecting your Linux box to a Broadband network
NETWORKING Linux is not supported by most Broadband ISPs, but that doesn’t mean all is lost. Colin Murphy reveals why
Broadband A transmission medium capable of supporting a wide range of frequencies, typically from audio up to video frequencies. It can carry multiple signals by dividing the total capacity of the medium into multiple, independent bandwidth channels, where each channel operates only on a specific range of frequencies. That’s the technical definition anyway. For the purpose of this article broadband loosely refers to some means of Internet connection other than a standard dial up modem.
Wherefore art thou, cable? The two major players in the UK cable market are ntl and blueyonder. Due to the way the cable franchises were distributed, back in the dim and distant past, we ended up with a geographic monopoly. Through a process of mergers and acquisitions, the cable television and Internet networks have for the main part been left in the hands of these two companies, though there are one or two independent areas still around. Your particular local region will only be handled by the cable company that holds the franchise for that area, which doesn’t leave much room for competition. Cable is costly to install, which is the reason given for the way the franchise process was handled. It’s also the reason why you can’t expect cable to be available nation-wide. If you can’t get cable television in your area, then the harsh reality is that you won’t be able to get cable Internet access either. Even if you can get cable television in your area you still may not be in the clear: some of the older networks may not have been designed with aforethought and may not be capable of carrying broadband information. If this is the case then you’ll just have to wait until the cable system in your area has been upgraded. The only way to be sure is to contact the company’s Web site and go through their availability ritual, which is usually laden with Flash animations, to make it that much more of a pain to look at via a dial-up modem. There is one perceived problem with cable modem
Broadband availability Because of the lack of nationwide cable coverage, you’ll need to check to see if broadband cable is available in your area: blueyonder http://www.blueyonder.co.uk ntl http://www.ntl.co.uk More information about cable broadband in general, including availability, is available from: http://www.cablemodeminfo.com/linbasics.x.html-ssi http://www.by-users.co.uk/faqs/linux/index.htm
access, and that’s contention. The physical connection between your PC and the cable company’s network box in the street is shared by as many as 100 or so other users, which means you must share the available bandwidth. ADSL, which we will come onto in a moment, does not have this contention issue – a fact that is often used to claim superiority over cable. There are other factors that balance this out though, most importantly that the cable coax or fibre has a much higher bandwidth than the link used with ADSL. On the plus side, if you are lucky enough to have access to a broadband Internet connection, the connection to a Linux system is fairly painless. With both blueYonder and ntl networks you simply need to run a network cable to the back of your set-top box and set up the Ethernet network protocols. On installation, the engineer will want to register the MAC address of your network card, which you can easily look up by inputting the command ifconfig into a terminal screen. This will present you with a list of parameters for each network card. You will want to quote the HWaddr for the card you intend to connect your cable modem to (see the example below). Obviously, you might want to consider putting in a firewall to offer you a greater level of protection. Neither of the cable companies actively support Linux systems, but there are very active Linux support groups for blueyonder, though I can’t comment for ntl.
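For example (the interface name and the address shown here are placeholders, not real values, and your output will differ):

/sbin/ifconfig eth0 | grep HWaddr
eth0      Link encap:Ethernet  HWaddr 00:50:DA:12:34:56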
ADSL availability You will need to confirm that you are in an area that is covered by ADSL. If your current ISP offers ADSL then you can check availability through them, otherwise you should try http://www.btopenworld.com/broadband/ looking for their ‘Availability’ button. For more information about ADSL see http://www.adslguide.org.uk/links.asp, and for more information about the Alcatel modems in particular see http://www.linuxdude.co.uk/docs/Alcatel-Speedtouch-USB-miniHOWTO/speedtouchusb.html. Community support for Linux users can be found on the Web
xDSL broadband
ADSL is the most common form of DSL broadband in the UK at the moment, but it still cannot offer nationwide coverage. BT Ignite handles the ADSL connection but the service is sold through various other ISPs including BT’s own BTopenworld. You’ll need to be within 5.5km of the nearest equipped exchange, and there’s the catch, as not all exchanges are up and running yet. Once again, you need to check availability through the BTopenworld Web page, which is not offered in plain text.
ADSL under Linux
In the UK, broadband is supplied to domestic users with an Alcatel USB modem, limiting you to what type of equipment you can use. There is also a driver issue with these USB devices. Alcatel has produced drivers but they have received a mixed reception from Linux users. Work is constantly being done by Alcatel and other parties to try and improve this situation, but there is still some way to go before the matter is settled. The best way to stay up to date with the efficacy of domestic ADSL in this country is to keep an eye on the uk.comp.os.linux newsgroup. For businesses, or at least those that can write off the additional expense, there is a more business-orientated service, which will enable you to use a straightforward Ethernet connection.
HomeChoice HomeChoice are an ADSL service provider but with a difference. They’ve been set up to provide ondemand television, which is transmitted through ADSL cables, so you still need to be in an area where ADSL is available. The advantage with HomeChoice is they use their own set-top boxes to provide the television signals, but they also offer a serial port for data. There are no driver issues to worry about with a serial connection, as everything you need is provided in with the most basic Linux distributions. See their Web page http://www.homechoice.co.uk/ for information. There is further help available from http://www.maxuk.net/hc/faq.html.
Wireless
This developing technology falls into two categories: microwave and satellite. The microwave technology is available now in selected areas, from companies such as Tele2 and on a trial basis from ntl. The ntl trials have been given a high rating by Linux users. Much like with the ntl cable connection, this service uses a standard cable modem connected to the computer using an Ethernet card (10/100Mbps). USB connection is also available, but there is still no knowledge base as to how well this would work. Systems will be allocated a single public IP address via a DHCP server and ntl recommends that you take appropriate precautions (i.e. install a firewall, antivirus software, etc.) to protect your system from the risk of attack. Typical download speeds will be up to 512kbps, though actual speeds may vary depending on Internet traffic loads.
Satellite
Could the future lie in the stars? Perhaps it will, as this technology should see the end to regional problems. Broadband delivery systems have been supplied by satellite for some time – anyone who has used Sky Interactive will be able to vouch for that. Unfortunately, the return path is less than fast, and usually calls on the use of a 56Kb modem. I don’t think anyone is going to consider having this sort of system installed as a standalone Internet service. The future is changing though and two-way satellite connectivity is a reality – albeit a very costly one, at present. For the moment at least there’s no reason why you can’t use these types of services with your Linux boxes, as they use standard Ethernet connections to their modem devices.
FEATURE
Converting files to HTML
2HTML
HTML is the most important format on the Web but a lot of data is still created or is available in other formats, such as Office documents, tables, PDF or ASCII files. Hans Georg Esser looks at the available conversion methods
When considering the conversion of a diverse range of document types into HTML format, it’s worth examining not only how these conversions work, but also how good they are. Most Office packages (under both Linux and Windows) have an HTML export option, but the results vary widely and are often unsatisfactory.
Microsoft Word
There are several different possibilities when converting Word documents into HTML. Firstly, Word (2000) itself offers its own conversion function under File/Save as Web Page. HTML files created this way can be viewed in all Web browsers but they’re not particularly suitable for further editing due to the continual use of variously defined styles in the text. An example of this is the simple listing of individual items, which instead of being shown as:
<li>Text</li>

is presented as a line of the form:

<li class=MsoNormal style='mso-list:l0 level1 lfo1;tab-stops:list 36.0pt'>Text</li>

This solution may be practical if you merely seek a
LaTeX
The free text typesetting system LaTeX (http://www.latex-project.org/) has many friends, particularly amongst scientists, due to the fact that it enables the simple creation of complex formulas for seminars or theses. Thanks to LyX, you can create documents using a text-processing environment, which means you don’t need to learn the LaTeX syntax (http://www.lyx.org/). LaTeX uses its own mark-up commands, which have a certain similarity with HTML in terms of their structure. An automatic conversion from LaTeX files to HTML is not therefore a giant leap. A program that does this is latex2html (http://wwwtexdev.mpce.mq.edu.au/l2h/docs/manual/). This has an added function which is particularly useful for larger documents – you can select whether to create a single HTML file (option -split 0), or whether each chapter, paragraph etc. should be outputted to a separate file. The HTML code produced is very clean, and tables of contents, footnotes and cross-references are correctly converted.
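As a brief illustration (the file name is only an example), converting a LaTeX document into a single HTML page would look like this:

latex2html -split 0 thesis.tex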
quick means of placing a Word file on the Net, however it does require having both Windows and Word installed. For those of you who want to edit HTML files further, or don’t have access to Word, there are some alternatives. One of these is the program word2x. You can find the current version (0.005) on the Web at http://word2x.alcom.co.uk/. In our test, a Word 8 document could not be converted (the output was empty). The reason for this is that a special conversion tool, called wv (earlier referred to as mswordview), exists for the current Word 8 format. It can be found on the Web at http://www.wvware.com/. Once the wv tools are activated, the command

wvHtml test.doc test.html

begins file conversion. Unfortunately, the results of the conversion are even more disappointing than when using Word directly. The simple listed item given in the example above takes the following form:

<li><p><div align="left" style="padding: 0.00mm 0.00mm 0.00mm 0.00mm;">
<p style="text-indent: 0.00mm; text-align: left; line-height: 4.166667mm; color: black; background-color: white;">
Text
</p></div></li>
In addition to this, headings do not conform with the HTML standards of <h1>, <h2> and so on. The wv tools offer conversions into other formats besides Word, such as LaTeX, PostScript, PDF and more. However, even the LaTeX format generated by wvLaTeX could not be converted into useful HTML with latex2html (see the LaTeX boxout).
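Assuming the wv wrappers follow the same input/output pattern as wvHtml – which is an assumption, and the exact binary names can vary between wv releases – such a conversion would look something like this:

wvLaTeX test.doc test.tex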
StarOffice In a similar manner to Word, the text module of StarOffice also offers its own HTML export function. This produces quite useful HTML, and an export function also exists for converting files to Word
format. To export an HTML file, simply select File/ Save as and then select the file format HTML (StarOffice Writer).
Tables After text files, the most important Office documents are the products of the various spreadsheet applications such as Excel and StarCalc. As we did for text programs, we will first look at the export functions.
Microsoft Excel
The first program we put to the test was Microsoft’s Excel 2000. For the purpose of this test we created a simple table with columns and column totals, which was then saved as an HTML file. The result looked tidy enough in a Web browser and the formatting of bold and colour were preserved. The only downside was that the sum formula was lost (i.e. the result values were saved). As with Word, the outputted HTML file was very large – here producing an 8,194 byte file from 800 bytes of information – with most of its bulk arising from style declarations. The formulas were retained for the subsequent reimport into Excel. In this case the sum fields had the structure:

<td class=xl29 align=right x:num="41.96" x:fmla="=SUM(D5:D9)">41.96</td>

which is useful for other Excel users. One error during the conversion did come to our attention: while the number columns in Excel were right-aligned (format ###,##), they became left-aligned in the HTML file.
PDF as a picture?
The converter pdf2html takes the path of least resistance. Instead of analysing the data in a PDF document (PDF: Portable Document Format, a standard of Adobe) and converting this to HTML, it simply converts the individual PDF pages to .png pictures and produces HTML pages from these. This is fast and simple but it doesn’t permit any analysis of the data by the web page viewers. The program is found under http://atrey.karlin.mff.cuni.cz/~clock/twibright/pdf2html/. The similarly named tool pdftohtml (http://www.ra.informatik.uni-stuttgart.de/~gosho/pdftohtml/) takes another path. Here, the PDF data is analysed and is converted into an HTML text file. pdftohtml also detects and converts links in PDF files. Pictures are likewise extracted from the file and built into the appropriate place in the HTML file. The visual and layout quality is not amazing (any formatting information is ignored), but at least the created file can serve as a starting point for subsequent fine-tuning – the only problem with this is that the tool has a strange habit of only putting one word in each line in the HTML code.
StarCalc
Our next candidate was StarCalc, the spreadsheet tool from the StarOffice package. Converting the same table produced a somewhat smaller HTML file (3,572 bytes). Here, a table entry had the form:

<TD WIDTH=86 HEIGHT=17 ALIGN=RIGHT SDVAL="41.96" SDNUM="1031;"><B><FONT COLOR="#0000FF">41.96</FONT></B></TD>

The style specifications are dispensed with and, as a
Email
Why would anyone want to convert emails into HTML, you may ask? It is an interesting option that enables mailing list administrators to make postings public through a Web page, for example. The relevant package, Hypermail (http://www.hypermail.org/), converts an mbox-compatible mail file (like those produced by Netscape Mail, KMail, mutt and elm) into HTML. To create a new directory and fill it with the HTML files generated from the mailbox, simply use the commands:

mkdir /tmp/hypermail
hypermail -m ~/Mail/Incoming -d /tmp/hypermail

The program produces a separate HTML file for each individual mail, as well as additional overview files for sorting (according to thread, date, topic, author and attachments), and puts these in the indicated directory. Attachment data is moved out into a separate subdirectory, which enables fast access to all attachments – the HTML file of a mail with an attachment no longer contains the attachment itself; instead it contains
Hypermail produces an eye-pleasing overview page, in which the threads can be seen
a link to the file in the mail’s directory. This is also extremely practical for purposes of archive keeping. An example of the thread overview is shown above.
Figure 1: Excel original, Excel export, StarOffice export, xlHtml converted and xlHtml -nc converted, as displayed in Netscape
consequence, the format options for each field need to be indicated separately. The alignment of the number fields was, incidentally, correctly converted. StarOffice did not store the formulas, however, and with the re-import into StarCalc the formulas were thus lost.
xlHtml
Anyone who receives a table via email, and who has neither StarOffice nor Excel handy, will be pleased to know there is a tool that enables conversion to HTML without needing to start an Office package. This service is offered by xlHtml (http://www.xlhtml.org/), the current version being 0.2.8. The program can be compiled and installed in the normal way with ./configure; make; make install. Invoked as follows, it produces its output:
xlHtml test.xls > test.html

The resulting table only proves useful on closer inspection. Most fields appear as white text on a white background and are therefore illegible; only after changing the colours can anything be seen. The table content is, however, correct. The sums were correctly calculated, although a warning was given that they could possibly be incorrect – this warning can be suppressed using the parameter -fw ("suppress formula warnings"). The outputted HTML file, at 2,705 bytes, is even smaller than that produced by StarOffice. This is due to the minimal formatting, as seen here in the HTML for a table field:

<TD><FONT COLOR = "00FF00"><B>41.96</B></FONT></TD>

The syntax of the FONT attribute is incorrect; correct would be font color="#00FF00". As the colours were not correctly detected anyway (blue text became green, black became white), you may as well
ignore the colours completely – xlHtml provides the option -nc ("no colours") to do this, and the result is pure, simple HTML, which is very suitable for subsequent editing:

<TD><B>41.96</B></TD>

Another useful option is -te ("trim edges"), which gets rid of empty columns and lines
in the upper left-hand corner of the document. xlHtml is tuned closely to genuine Excel files: a pure Excel document was converted free of errors, whereas an Excel document produced by StarCalc supplied only zeros in the total fields. In Figure 1 you can see the different converted files as displayed in Netscape.
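Putting the options discussed above together, a typical invocation might look like this (option names as given in the article; other xlHtml versions may differ):

xlHtml -nc -fw -te test.xls > test.html    # no colours, suppress formula warnings, trim empty edges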
ASCII files
ASCII files occasionally crop up, usually in the form of longer HOWTOs or Read Me files. These contain no mark-up information at all about the document's structure. There are nevertheless tools that attempt the conversion to HTML. txt2html (http://www.aigeek.com/txt2html/) is a program that offers the possibility of indicating something about the structure of the text file on the basis of templates. t2t (http://216.254.0.2/~dogbert/t2t/) takes on the task of analysing tables in ASCII files. Many databases and spreadsheet programs offer the opportunity to export files in "Tab Delimited ASCII" format – such tables are easily processed by t2t.
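A minimal conversion might then look like the following – txt2html writes its HTML to standard output, so redirect it into a file (exact options vary between versions):

txt2html HOWTO.txt > HOWTO.html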
PostScript
Many documents can be found in PostScript format, and so there are also HTML converters for this class of document. One example is the Pscript Document Publishing System (http://www.ubka.uni-karlsruhe.de/~guenter/pscript/). One should not expect too much of the conversion, however: items such as header and footer lines or page numbers are simply carried over into the produced HTML. As with the PDF converters, formatting is also ignored.
Is anything better than nothing?
The results of these free conversion tools are not always convincing, and in many cases a manual rework with an HTML editor is necessary. What is crucial, however, is that the data can be converted into HTML in some form or other. A re-edit is surely easier than manually transferring the data and formatting the HTML from scratch. Good luck in generating your own Web content.
ON TEST
WIN4LIN 3.0 Operating Windows under Linux
GOOD THINGS ARE WORTH WAITING FOR Emulations offer the possibility of quick access to Windows applications. Thomas Drilling takes a closer look at the new Win4Lin version 3.0
When we first looked at NeTraverse's (formerly Trelos) Win32 emulator Win4Lin 1.0 last year, we were not overly impressed. Our tests of version 2.0 (available since the start of the year) in the May 2001 edition of Linux Magazine were also far from earth-shattering. When we learned of the release of Win4Lin version 3.0, we were once more keen to put it to the test, to see whether it lived up to the promising reports that accompanied its release. Right from the start we could see that this had the right stuff to make VMware more than a little anxious. Whether with Win4Lin, VMware Express or Wine, many manufacturers foresee an enormous demand for running native Windows applications under Linux. For this reason, they are making quite substantial technological investments. Although the emulation concepts mentioned above target slightly different fields of application, the prevailing view is that there are more than a few Linux users who want to (or have to) run Windows applications occasionally. Given the performance available on home computers today, the starting point for a practical and functional solution need no longer be a mere dual-boot installation. An emulator puts Windows programs into a virtual machine and places a safe and stable runtime environment at their disposal. In the inevitable event of a Windows program crash, at worst you need only restart the emulator.
New with Win4Lin 3.0
Both version 1.0 (tested in issue 3 of Linux Magazine) and version 2.0 suffered due to their unreasonably
complicated installation procedures, as well as their instability. This meant that normal, day-to-day work with the software was only really possible with restrictions. At first glance, the new version has a fresh look, with a graphical installation program as well as a set of detailed improvements concerning the hardware support of the host system (including SMP). The stability of the new version is impressive. The annoying quirks of the previous versions, such as the incorrect screen layout in the guest system window, have been done away with. Unlike VMware, Win4Lin operates with complete transparency with regard to the data exchange and
communication between the host and guest systems. The user need not worry about any subsequent configuration, such as the installation of a virtual graphic support program onto the guest system, for example. Win4Lin, in guest operation, directly supports most performance characteristics of the host hardware. An example here is the practical performance of the 2D graphics, which has a resolution that directly matches the abilities of the physical graphics card. If you want similar functionality in VMware, a toolbox must be first installed. The guest system’s FAT32 filesystem is transparently shown in the host file system, i.e. it is displayed relative to any chosen directory position in the filesystem. A further innovation of Win4Lin 3.0 is a complete reworking of the network modes. This allows the full functionality of a genuine Winsock network between host and guest as well as the implementation of a virtual private network (VPN) between the guest Windows system and the Linux environment.
Concept
For users who are new to Win4Lin, it is useful to learn a little more about the operation and conception of the software prior to installation. From the user's point of view, Win4Lin 3.0 is in principle a Windows emulator that runs under Linux, which means that one basically uses Win4Lin to enable the execution of Windows applications on the Linux desktop. The designation 'emulator', however, does not exactly hit the mark here, as neither Win4Lin nor its direct competitor VMware emulates the actual Windows 9x operating system. Rather, Win4Lin creates an environment within an X11 window, which makes the installation of Windows in an X11 or KDE window both possible and necessary. The Windows operating system is not provided and must be explicitly installed after the installation of the Win4Lin environment. The user will therefore need his or her own copy of a Windows CD and the appropriate user licence. While VMware could be designated a 'genuine' hardware emulator, one could call Win4Lin's concept a symbiosis between hardware emulation in the style of VMware, API emulation à la Wine and filesystem conversion. Regardless of how it is designated, Win4Lin is easier to use and more flexible than the competing products. Following the installation of the host system and the subsequent installation of the Windows guest operating system, Windows programs can be run by Win4Lin with breathtaking speed in an X11/KDE window directly on the Linux desktop. The real advantage is that Windows programs can save their data directly into the appropriate home directory of the Linux filesystem. This means that Win4Lin makes
Figure 1: Windows Me running under Win4Lin without restricting the KDE window
the user files available in parallel – both from a Win4Lin session and under Linux.
Core problems
Win4Lin displays enormous flexibility and performance, in particular concerning the above-mentioned filesystem transparency between Linux and Windows. This performance is based on a specially modified kernel from NeTraverse, and as such, the installation of the software is unfortunately not the most simple of affairs. The difficulty is that Win4Lin's specially adapted kernel has to be integrated into the system. Independent of whether you install Win4Lin directly from CD or with the help of the fully automatic Live-Installer (the latter is necessary if you use Win4Lin's 30 day download licence), there are three ways of putting a Win4Lin-specific kernel into operation. The following kernel versions are all available in the download section of NeTraverse's Web site and on the CD:
● NeTraverse-enabled kernels are pre-compiled kernels for the most popular distributions and kernel versions. In each case, the RPM archives contain a finished kernel image.
● NeTraverse pre-patched generic kernels are patched kernel sources. In each case, the RPM
Data exchange
Although a network socket implementation is not fundamentally necessary for pure communication between host and guest, Win4Lin includes flexible network functionality, so that other hosts in the LAN can communicate with a Win4Lin Windows session. This also means that the network services or devices of the host system are available from a Win4Lin environment. Another useful feature of Win4Lin is that the Windows CAB files from the installation CD need only be read in once during the Win4Lin installation and are stored directly in the Win4Lin shared directory in the host's Linux filesystem. You therefore do not need the CD to carry out a user-specific installation of any future Windows 9x session. It is, however, unclear whether this method may be linked with hidden licence violations.
Preparations
Figure 2: One of the substantial improvements is the graphical installer
archives contain a tar.gz archive, which in turn contains the patched kernel sources and modules.
● NeTraverse patches are genuine kernel patches in the *.patch format, which can be applied as diffs to existing kernel sources.
In the process of the installation discussed here, we will go through the simplest method first, i.e. the use of pre-compiled NeTraverse kernels. The NeTraverse server has the appropriate archives for the popular distributions of Mandrake, Red Hat and SuSE. After unpacking the archives, the prepared kernel images land in the /boot directory and can thus be referenced directly in the local Lilo configuration. However, this simplification brings with it a certain degree of inflexibility, i.e. the lack of kernel sources allows no further adjustments to the prepared kernel.
You can naturally create your own appropriate boot disk if the local Lilo configuration is to remain untouched. After this, install the Win4Lin 3.0 application itself. The package is present on the Win4Lin CD, or on the NeTraverse server, under /cdrom/Win4Lin/RPMS/i386/Win4Lin-5.2.0d-1.i386.rpm in RPM format. However, the installation requires correct licensing of the product. For this reason, the win4lin-install installation script must be used for the installation of the RPM package. Apart from the licensing considerations mentioned above, this ensures that the software is unpacked at the intended directory positions, i.e. /var/win4lin and /opt/win4lin. Win4Lin cannot be used without a valid licence file, and this also applies to the demo version. After correct registration, the licence file is sent as an email attachment; registration should be done prior to download. The licence file is a simple ASCII file and must be saved as /var/win4lin/install/license.lic before Win4Lin is started for the first time. The above-mentioned installation script applies the licence file automatically. If you have already installed the Win4Lin binary package Win4Lin-5.2.0d-1.i386.rpm with kpackage, you can copy the licence file to the designated position by hand.
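If you take the kpackage route mentioned above, the manual steps amount to something like the following (the package file name and paths are those quoted in the article and may differ for your release):

rpm -i Win4Lin-5.2.0d-1.i386.rpm                   # or install the package via kpackage
cp license.lic /var/win4lin/install/license.lic    # put the emailed licence file in place by hand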
Generic or prefabricated?
Another disadvantage of the prefabricated method is the fact that there is no guarantee that the kernel is up to date; this depends solely on the commitment of the good people at NeTraverse. At present, there is no prepared kernel for SuSE 7.2, although a SuSE 7.1 kernel (kernel 2.4.0) should be sufficient for normal users taking their first steps with Win4Lin. Those of you who need a newer kernel will have to use the more complicated method and turn to the generic NeTraverse kernels. The NeTraverse-enabled kernels available online are normally more up to date. Although there was still no current SuSE 7.2 kernel (2.4.4) available at the time of testing, there was a generic kernel of version 2.4.5. The kernel sources supplied by NeTraverse allow more flexibility than the NeTraverse kernel images, because the sources can be adapted to one's own needs. The most attractive option for users who want to be able to adapt their own kernel for specific functions is the NeTraverse Patches version. Users are thereby able to operate Win4Lin without compromises.
Installation overview
The installation of Win4Lin has always been relatively complicated due to the core of the product, i.e. the adapted kernel. In the new version, however, this has been decisively simplified. Installation now requires the following three steps:
Figure 3: With Win4Lin display settings are easy to adjust
Figure 5: Configuration selection
Figure 4: The installation is now menu driven and thus substantially simpler than in the previous versions
● Install a kernel adapted for Win4Lin, or integrate a present Win4Lin kernel into the existing Lilo boot management. Boot with the Win4Lin kernel. This step must be executed as root. ● Install the Win4Lin software, including licensing, with the help of the install-win4lin.sh script; read in the Windows CAB files from a Windows 9x installation CD; and then read in the Windows system files from a bootable Windows start disk with the help of the new win4lin-install graphical installation program. This step must be likewise executed as root. Please note: The new graphical installation program should not be mistaken for the installation script for the Win4Lin package. The win4lin-install program presumes that Win4Lin has already been successfully installed. ● Configure a user specific or system-wide Win4Lin session with the help of the menu-driven configuration tool winsetup. This is to be started as root for the system-wide part of the configuration, or user for the user specific part of the configuration. After completing these three steps, you can start a Windows session at any time as a normal user by entering win in the command line (in a terminal). When starting win for the first time, a Windows box will be started and the necessary Windows 9x installation will be initiated (if the Windows CAB files were correctly installed in step 2). You can likewise complete the installation of the Windows CD during step 3 in the system-wide section with the help of winsetup.
Installation assistance
NeTraverse offers a variety of installation options, up to and including a completely automatic online installation with download of the necessary kernel, program data and patches. For this reason, we cannot describe the installation in all its conceivable variants. All in all, NeTraverse seems to favour online
installation, since this gives the manufacturer the possibility of collecting registration data from the user. Apart from this, the installation script enables the analysis of one’s own system environment and thus the automatic supply of the suitable kernel version. For those of you who are seriously interested in carrying out the online installation, there is outstanding documentation to be found both online and offline. This provides detailed information about the operational sequence, so that you are completely informed as to what the installer does or which entry it expects at any given point during the installation process. We will now demonstrate the simplest and fastest method of Win4Lin installation – using a Win4Lin program CD. We carried this out using the scheme described above, without the help of the CD installation documentation. Our test computer had a freshly installed SuSE version 7.2 operating system, plus online updates from 01.09.2001. Since the Win4Lin CD does not contain a suitable precompiled kernel image for SuSE 7.2, we simply installed the kernel image 2.4 for SuSE 7.1, which Issue 16 • 2002
Figure 6: Configuration of special features is possible under the graphical interface
can be found on the CD as an RPM archive under /cdrom/LINUX/RPMS/kernel-Win4Lin2SuSE7.1_2.4.0-03.i386.rpm. The installation via RPM can be executed free of problems with kpackage. By this means a usable kernel image, win4lin, was put directly into the boot directory. This was then easily located and inserted into the Lilo configuration file /etc/lilo.conf. The Win4Lin kernel was subsequently booted without a problem.
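For reference, a Lilo entry for the NeTraverse image might look something like the sketch below – the image name matches the one described above, but the root partition is only an example and must match your system; remember to run /sbin/lilo afterwards to activate the new entry:

image = /boot/win4lin
    label = win4lin
    root = /dev/hda3        # example only – use your own root partition
    read-only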
Graphical installation
Figure 7: With the help of the menu-controlled tool winsetup a printer can be created for the guest system
Figure 8: Illustration of the drives under Windows and the paths in the filesystem
Only after all the preparatory installation work is completed can the aforementioned graphical installation program be put into action. Although newly developed for Win4Lin 3.0, the designation "installation program" is hardly appropriate here: the tool is less about the installation of the Win4Lin binaries and more about setting up the guest environment for Windows 9x. You will find the program on the Win4Lin CD under /cdrom/install-win4lin. Those of you who have two CD drives (for example an additional CD writer) can start the program directly from CD, i.e. the Windows CD can be read in from the CD writer. In the first step, the graphical installer reads the CAB files from the Windows CD and stores these in a prepared Win4Lin drive on the hard disk. In the next step, a bootable Windows start disk is needed, from which the Win4Lin installer takes over the Windows system files. The preset configuration of a Windows session is thereby concluded and Win4Lin is operational. In the following step you will install the Windows 9x operating system.
Preparing a Windows session Before Windows can be installed, a suitable system environment for the virtual system must be prepared. This involves fine-tuning the hardware emulation, and to do this, another configuration tool is used. The tool in question is the menu-driven winsetup application, and you can start this as either a normal user or as root from a terminal. This serves to set up the most important environment parameters for a local Windows session, such as the paths to the user directories and Windows system files, or the allocations of the machine files or the used devices of the host system. With the help of winsetup, you can also implement additional optimisation settings for the graphics support program. If you start winsetup as root, the settings will apply system-wide. If you want to implement a user-specific configuration, you will need to operate winsetup as a user. After this, you can start a Windows session at any time by entering win. When you call up win for the first time, it will initiate a normal Windows installation in the exact same way as with a native installation. But first back to winsetup. Configuration using winsetup is divided into a system-wide section (winsetup must be called up from root) and a user-
specific section. The latter helps each Win4Lin user configure his or her own personal Win4Lin session. In the system-wide section, the administrator can set up additional native DOS partitions or virtual DOS drives, such as the so-called 'Shared Drive' J:. More information on the drives provided by Win4Lin can be found in the following section of this article. Note that the system-wide section of winsetup offers yet another opportunity to install the Windows CD (CAB files). Before you start to install Windows, you should adjust the paths and mappings for the personal drives as well as the Windows drive (C:) in the user-specific section of winsetup. These are found below $HOME/win directly in the Linux filesystem. To check, you can also have a look at the default specifications. You can then initiate a Windows installation as a normal user within the Win4Lin session with win.
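In practice, the winsetup workflow described above boils down to a short sequence along these lines (a sketch of the steps rather than a literal transcript):

su -            # system-wide configuration must be done as root
winsetup
exit
winsetup        # user-specific configuration, run as a normal user
win             # the first start triggers the Windows 9x installation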
Hard drive letters
The C: drive, under $HOME/win in the Linux filesystem, is directly addressable. Make sure that the access permissions on this directory are correct for the respective users. If there is no longer enough space in the Linux filesystem for a standard Windows installation, this need not be a problem: Linux lets you spread the load with symbolic links to other partitions. A J: drive also exists by default. This holds the so-called DOS shared drive and the Win4Lin system files for all users. The J: drive also contains the previously mentioned Windows CAB files from the installation CD, so that individual users can carry out further Windows installations on their respective personal drives. This can be done at any time without needing access to the Windows CD. The position of the J: drive is /var/Win4Lin. Besides the two floppy disk drives A: and B:, you can also configure access to a Linux CD-ROM drive (allocated the letter N:) in the system-wide Win4Lin configuration. Apart from this, it is also possible to create DOS sessions or, if required, more native DOS partitions within the Win4Lin configuration. These DOS sessions are based on a so-called virtual device, similar to the VMware method. The respective mappings can be produced with winsetup. In the default setting, Win4Lin establishes the virtual DOS device D: with a rudimentary DOS 7.0 system.
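If space is tight, the symbolic link trick mentioned above might look like this – the target directory is purely an example:

mv ~/win /data/win4lin        # move the personal C: drive to a roomier partition
ln -s /data/win4lin ~/win     # leave a symbolic link where Win4Lin expects the directory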
Working with a Windows session
The Windows installation under Win4Lin takes place in exactly the same way as a native Windows installation. The simplest way to start it is to run winsetup as a user (not as root) and select the Personal Win4Lin-Session... menu item there. The personal drive is then booted with the help of the
Start button. This checks if Windows is currently installed, and if not, automatically continues with the introduction of the Windows installation. In practical use, Win4Lin 3.0 surprises with its considerable performance and, for an emulation, amazing stability. A complete Windows session is loaded and initialised with extreme speed – noticeably faster than with native Windows. Standard applications such as Microsoft Office 2000 can be operated with refreshing speed, even on a relatively slow 800MHz computer. The rate of data transfer is also impressive because Win4Lin can access physical devices block-by-block. This applies whole-heartedly to Win4Lin’s personal drives, which are physically displayed in the Linux filesystem. The performance is slower when using the virtual disk devices, which Win4Lin controls (as does its competitor VMware). This performance is however clearly faster than VMware and within the comfort level for practical use. Surprisingly, the extremely high performance doesn’t lead to any unpleasant side effects. Along with filesystem transparency, this is one of the primary development specifications at NeTraverse.
Figure 9: Classical Win32 standard applications such as Office 2000 run without a problem under Win4Lin
Summary
Win4Lin 3.0 gives Linux users the ability to access and operate classic Windows 32-bit applications such as MS Office or Intuit Quicken. Win4Lin could be a very useful tool for developers, or other users who are absolutely dependent on the fast and flexible availability of both operating system platforms (cross development, Web developers, system administrators and software testers, for example). Win4Lin 3.0 gives users functionality comparable to VMware Express, but is both cheaper and faster. Indeed, Win4Lin 3.0 (and also VMware Express) may have reached the limit of what is currently technically feasible with Windows emulation.
FEATURE
QUANTA+ WORKSHOP Web pages with Quanta+
HTML MAGIC
If you want a presence on the World Wide Web then you have two choices: spend money on commissioning someone or invest the time in doing it yourself. The HTML editor Quanta+ makes it much easier to opt for the latter. Helga Fischer investigates
When creating Web pages, experienced Linux users will of course head straight for vi or Emacs. If you don't have too much confidence in your knowledge of HTML you might look for an HTML editor with a graphical interface that will help you with your first steps towards your own homepage. One such program is Quanta+ (http://quanta.sourceforge.net/). Quanta+ integrates seamlessly into the KDE desktop environment and its look, feel and behaviour are similar to other KDE programs, so if you've already worked with KDE applications you'll immediately feel at home. Of course, the program also runs under other desktop environments and window managers. The development between different beta versions often results in marked improvements and it is therefore advisable to install whatever is the latest version. This article refers to version 2.0.pr2-60. Don't be put off by the beta status though, as this is a stable program. As is normal under Linux, the application is either started from an X terminal with quanta & or – where available – from a selection menu in your desktop environment.

The application
It may not look like it at first glance, but Quanta+ has turned into a proper little IDE, with correspondingly extensive help facilities and configuration options. After starting the program you are presented with a three-way view (Figure 1). On the left is the directory tree, on the right the editor window, and underneath you will see the message window, which can display all sorts of different messages. Both the tree view and the editor window offer various tabs through which you can browse by clicking on them. The tree view holds an overview of the HTML document to be created later, as well as the project view and an access point for a whole variety of manuals (HTML, Javascript, PHP4 and CSS) linked either to your local hard disk or to the Internet. HTML consists of numerous formatting commands that belong to different function groups. Correspondingly, the Quanta+ authors have grouped similar or related tag options together on tabs above the editor area. Clicking on these reveals the respective toolbars. The first tab, labelled Standard, contains the sort of formatting instructions you will be familiar with from word processing applications. If you position the mouse pointer on a tool icon for a moment, a tooltip will reveal the function of the respective button. The remaining tabs offer buttons for font selection, table and list creation, designing forms and also a group of functions that do not easily fit into any of the other categories, for instance a colour selection dialog.
Figure 1: Quanta+ overview
Figure 2: the Quick Start wizard in action...
Figure 3: ...and the result
First steps
On the Standard tab (Figure 1) you will notice an icon on the far left which does not really fit in with the other character and paragraph formatting tools. By clicking on this empty page with the magic wand you launch Quick Start, in which a wizard will lead you through the most important presets for your HTML document. This is a simple way of configuring the title, the background image or colour and the font and link colours (Figure 2). When you click on the OK button, a basic skeleton for your HTML document is loaded into the editor window and the cursor is placed between the body tags so that you're ready to go. Of course, these basic settings are not sufficient to create an attractive Web page, so you'll want to use other formatting options. There are two ways of working on HTML files. If you already know which element you need next, click on the corresponding icon. Quanta+ will create the start and end tag and place the cursor between them so that you can immediately enter your text in the right place. Alternatively you can write your text and decide afterwards how you want to format it. In this case, mark the appropriate areas and then click on the icon that represents the required formatting. Quanta+ brackets the marked text with the appropriate tag pair. The editor colours the individual parts of the document. These programmer colours (properly called syntax highlighting) help you to get your bearings in the source text and to correct errors. If any syntax elements do not change to the appropriate colour during entry then something is wrong. In this case, you need to have a closer look or to consult the online help facility. Whenever you want to admire the result of your efforts in the browser view, click on the eye icon in the Quanta+ toolbar or press F6. The application will then switch to internal preview mode (Figure 4). By pressing F6 again you return to code view. A preview is sufficient for a quick check on how your pages are coming along, but it doesn't
Figure 4: The browser view
necessarily guarantee the desired appearance in different browsers. For this you need to save the file and load it into the respective external browsers.
Project administration
Web sites rarely consist of only one file. Normally a number of different files are involved, making it easy to lose track unless you invest a little time in doing some admin work. Thankfully, Quanta+ offers tools suitable for project administration. The project basis is created using a wizard, which takes you through three steps. It can be found on the toolbar of the IDE under Project/New Project. The
vi and Emacs: the two best known command line text editors of the Unix world. Their operating philosophy takes a bit of getting used to, so they are not everyone's cup of tea, despite being very powerful.
IDE: Integrated Development Environment; an application that helps with the creation of software (or in this case Web pages) by providing a variety of functionalities under one roof. Standard features of an IDE are a "proper" editor with many menus and help functions, access to external programs and the ability to track errors.
PHP: PHP Hypertext Preprocessor; a server-side, versatile scripting language, which is embedded into HTML files.
CSS: Cascading Style Sheets; format templates held in a central location, which determine the appearance of the different elements of HTML documents.
Syntax highlighting: Programming and mark-up languages consist of predefined words. Good editors recognise these keywords and display them in colour during the editing process.
XML: eXtensible Markup Language; a special language that allows a unified description of documents and data structures. It is machine and human-readable and is a preferred file format for KDE projects.
Bugs: another word for errors. Back in the dawn of the computer age insects really could cause faults, which then had to be corrected manually. These days we are only plagued by software errors, but the name has stuck.
Figure 5: Setting project data
first step provides an entry form for general settings (Figure 5). Click into the first field of the entry form and give your project a name – the wizard will now fill in the lines below automatically. The second field, Project Destination, is used to specify the project directory. If you don’t want to save your files in a subdirectory immediately below the home directory, as per the default, then you need to select a different directory using the button with the three dots. Your choice of directory also determines whether you can include existing files in your project with the help of the wizard or whether you will start with an empty project. The third field is for entering the name of the project file. Here, Quanta+ stores any information it requires for administering the project in XML format. Under Project sources in the lower part of the wizard window you can determine whether the files you’ll be working on are already located on your machine or whether they will need to be downloaded from the Web first. For a new project, Add local files is the right choice.
Figure 6: Linking existing files
Figure 7: The key to publishing
Now click on Next. In a second step the wizard lets you link existing files to the project (Figure 6). By selecting the Insert files from Project directory option, all files from the project directory (entered in the previous step) and its subdirectories are listed in the lower window. Alternatively, the second check button is used for the specific selection of files according to their extensions. As shown in Figure 6, the standard Web file formats are predefined. As the field is not large enough to show all of them at once, the screenshot doesn’t tell you that this includes, of course, HTML files with the suffixes HTML, HTM, html and htm. The third check button, Insert files with the specific file mask, allows you to link files with other, userdefined extensions. With Next, the wizard will open up a gateway to the Internet for you (Figure 7). Should you be planning on making your homepage accessible to the public at large, you’ll need to transfer it to a computer that your audience can visit using a Web browser. You should be able to get the relevant access data from your service provider. Although it would be useful to enter the name of the target Web server at this point, in the version of Quanta+ tested such entries had no discernible
Figure 8: Complete overview
Figure 10: Editing fonts
Figure 9: Tunnel vision
effect. In order to initialise the project, skip this step and click on Finish. Now nothing stands between you and the development of your homepage.
Working on the project It’s worth remembering that Quanta+ distinguishes very strictly between files that are part of the project and other files that can also be accessed. So that you don’t lose track of them, Quanta+ provides a tree view and a project view in the left window of the IDE – you can toggle between them using tabs. The tab with the screen icon represents the tree view, while the icon with the three dice in red, blue and green stands for the project view. The project view only shows those files that are part of the project and ignores any other files, including the project administration file (which has the extension .webprj) while the tree view shows all existing files (Figures 8 and 9). Both views enable you to open a file and load it into the editor by double-clicking on it. The only file that can’t be opened this way is the administration file, myproject.webprj. You can add more files to an existing project at any time using Project/ Insert file(s). A new tab is created at the bottom of the editor for each file that is added. You can toggle between the individual files using the mouse. Should you find it awkward to reach for the rodent while you are writing you will soon come
to appreciate the key combinations Alt+right arrow and Alt+left arrow. As long as a file has not been saved it receives a temporary description. You should save all your files as soon as possible and give them meaningful names. Not only does that make it easier to keep track of them but it also enables you to link the documents using hyperlinks. To save a file, either use the menubar or the context menu of the right mouse button. In the dialog that follows, Quanta+ offers you the previously created project directory as the location for saving your file. Give the file a name and click OK. Confirm that you want to add the file to the project. If you answer No here, the file is not lost but stored separately so that you can continue to work on it. However, it will not be included in the project files. At this point Quanta+ changes the temporary name on the tab to the one you have chosen and it adds the file to the project view in the left window.
Error handling
Wherever there is work being done, gremlins won't be far away, leaving their calling cards in the form of bugs. Along with everyone else, HTML writers also suffer from the fact that computers have no imagination. Whatever task you give them, they perform it stolidly and without any feeling for accuracy. That is why a markup language like HTML has to follow very clear rules. Checking this syntax is the job of a syntax checker, which reads the code and lists any errors it finds. The Quanta+ syntax checker gets to work once it has been called via Tools/Syntax check or with the key combination Ctrl+P. Its output is displayed below the editor window. If you click on an error message in the debug window, Quanta+ will put the cursor right
Figure 11: Pest control black list
Figure 12: Going live
on the appropriate spot. In addition, the faulty section of the HTML source text is positioned in the middle of the editor window. This enables you to spot missing tags quickly and to complete the text. However, a syntax checker is not a panacea. Many errors remain hidden from it; often it will only indicate error sources indirectly at best. This is the point where you will need some imagination as well as experience for the debugging process. HTML normally makes things reasonably easy for you, though: faulty HTML produces a faulty display in the preview (and often in the browser as well).
Final upload
Despite the difficulties with the project wizard described above, it is possible to copy a complete project with all its data to the target server without much fuss. Simply open the input window for your access data using Project/Upload Project or F8. The project data is read in and you can then enter the destination of the homepage. Your password is only displayed as a series of asterisks. Normally port 21 is the right port for FTP file transfer. If there is an open Internet connection, clicking on the Upload button is enough to transfer the data to the Web server, where it can then be admired through a browser.
User-defined functionality Is the functionality offered by Quanta+ not enough for you? Then select the menu item Configure actions under Settings. This kicks off an extensive dialog, which not only lists the existing functions but also enables you to configure Quanta+ according to your requirements (Figure 13). On the left are the functions that have already been defined. If you highlight an item on this list, relevant information is provided on the right. Each of these functions belongs to one of three categories: Tag, Script or Text. Tag lets you define additional HTML tags, while the Text function lets you store frequently used text, such as a personal copyright notice. Since Quanta+ does not offer any facility for converting unusual characters into character entities, we are going to use the Actions configuration dialog to add this functionality to the application. Most Linux
Character Entities: specialised code for characters which ensures that they will be displayed correctly anywhere in the world.
recode: a command line utility that converts different character sets or formats into each other. In this particular case the character set ISO-8859-1 (Latin-1) is converted into HTML format, referred to as latin1..h4. The option -d means that only the special characters in the text are processed, but not the angle brackets of the HTML tags.
Figure 13: Confusing diversity
Figure 14: Icons help you get your bearings
installations contain recode, a command line application that seems to be made for this task. Click on the New button on the left (Figure 13). A blank function appears in the action list, symbolised by a blue dot. Enter a name in the upper form field (for example Recode) and select the Script tab. Into the blank text field, enter the command:

recode -d latin1..h4

This information did not appear out of thin air, by the way, but rather from a quick look at the recode manpage. This command line alone does not do the trick, however. The command has to know what it is going to be processing, what to do with the result and how to handle errors. You can control these actions on the Input, Output and Error tabs. Since we are going to process entire HTML documents we will specify current document as the input source. The Output tab offers even more options. We shall choose replace current document – after all, we want the edited version of the document to appear on the Web. In the selection dialog for the Error tab we are going to opt to have any error messages displayed in the Message Window. Clicking on OK would make our new function ready to use. However, the user cannot be expected to work out what the function is for from a blue dot. To fix this, click on the icon with the blue dot in the Actions configuration dialog to the left of the Text field. The selection dialog allows you to choose a more meaningful icon (Figure 14). Quanta+ provides its own icons. If these are not agreeable then a system directory such as /opt/kde2/share/apps/quanta/toolbar will offer additional choices.
To access these, click on Other icons/ Browse. Naturally, you are free to create your own icons with an icon editor. All we need now is a button, so that we can call the Recode function by clicking on the toolbar. Open a dialog window under Settings/ Configure Toolbars (Figure 15) to customise your Quanta+ toolbars. The upper drop-down menu gives you the choice of whether to insert the new tool button into one of the application toolbars or into one of the tabs above the editor window. Below this, you’ll find a split view with available actions listed on the left and those that are actually linked (and therefore visible) on the right. Arrows in the middle enable you to add or remove individual elements. Up or down arrows allow the individual elements to be re-positioned. Our Recode function fits in well with the Editor Toolbar, which up to now contains actions such as cut, copy and paste, undo and spellcheck. Locate our Recode script in the left window, then highlight it and move it onto the list of actions displayed on the toolbar by clicking on the blue arrow pointing right. If its position is not to your liking highlight the element again and move it to the required position using the up or down arrow. Click on Apply to transfer the icon into the toolbar. Quanta+ even ensures that your new tool is included in the list which lets you define keyboard shortcuts, found under Settings/ Configure key bindings. Experienced users, above all, will appreciate this facility for customised extensions. However, Quanta+ is worth a closer look for anyone in search of a bit of support from a modern GUI, although it will disappoint those hoping for drag and drop Web page creation.
Figure 15: Extension potential – the toolbar
FEATURE
CRYPTOGRAPHY Protection from prying eyes
BEWARE THE EYES OF MARCH
You may have something that you don't want every Tom, Dick or Sarah knowing about, so encrypt it. John Southern shows us how
If you consider your data to be sensitive – i.e. something that you may not want other people to view – then you need to think about encrypting it. A cryptosystem is a way of disguising a message so that only the intended recipients can view the true data. Only those in the know will be able to identify the false nose and wig and decrypt the message beneath.
Can you rely on encrypted text?
Public and private: public/private key encryption is based on the use of two keys. The public key is freely distributable and can be sent in emails, cut and paste, or saved to floppy disk and handed out; and is used, in part, to encrypt messages that only you will be able to decrypt. The other key is the private key, which should remain secret and should not be spread. This key should only be available to the keyholder. Someone sending you an encrypted message will use your public key and their private key.
No matter how securely you encrypt your messages there is no absolute guarantee that no one, other than your intended recipient, will get to the information they contain. With a little brute force, enough processing power and a lot of time, anything is crackable. All you need to know is the encryption algorithm. Once someone gets hold of the encrypted text they can find the guarded text through the lengthy procedure of trying every possible key. Back in the early '70s, it was agreed that a strong cryptographic algorithm was needed. Development started on DES – the Data Encryption Standard – which was based on an algorithm called Lucifer. DES has a staggering 2^56 (about 10^17) possible keys. In the mid '70s this was sufficient to thwart all but the most dedicated government agencies. As processing power has increased, however, so has the strength of brute force attacks, so the need for more key combinations is always growing. Given enough time all keys can be found, but information usually has a finite useful life, and so encryption only has to withstand the length of time that the information remains useful. As all information can be considered a string of numbers (ASCII symbols are just numbers), modern ciphers use mathematical functions to encrypt data. If the same key value is used to both encrypt and decrypt, then it is known as a symmetrical cryptosystem. One example of a symmetrical cryptosystem is ROT13. Here we let A=1, B=2, all the way to Z=26.
To encrypt we just move each letter along 13 places. HELLO WORLD becomes URYYB JBEYQ. Performing the ROT13 process again returns us to the original message. Many IRC systems include this as a quick method of disguising a message, until the recipient wants to pull the mask off. It's a good way to stop someone stumbling across the punch line of a joke or the spoiler to a film, for example. Asymmetrical cryptosystems are much more secure, and therefore more useful. These use mathematical functions that require two keys, which are not the same. Some ciphers work on a block of data, say one byte, with operations such as addition, transposition and multiplication, then move on to the next. A product cipher performs several block ciphers on each block. Feistel ciphers work on half of the cipher text, then swap the halves round before working on the next section. Lucifer happens to be a Feistel cipher. Triple DES works on 64-bit blocks of data, applying 56-bit keys three times. Asymmetric ciphers, by contrast, give rise to the public/private key system. By using an asymmetric key system one key will encrypt while the other decrypts, and vice versa. If you publish your public key and then encrypt a message with your private key, everyone can decrypt it (with your public key) and they know that only you could have encrypted it. This way you can also prove who you are. Similarly, they can encrypt a message to you which only you can read. One such cipher is RSA (US Patent 4,405,829). To test how strong this cipher is, RSA Data Security Inc. posts a series of numbers each month, and a cash prize is awarded to the first person to factor them.
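Incidentally, you don't need any special software to try ROT13 – the standard tr utility will do it, and running the same command on the output restores the original:

echo "HELLO WORLD" | tr 'A-Za-z' 'N-ZA-Mn-za-m'    # prints URYYB JBEYQ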
KMail ready to be configured with your newly created GPG keys
Remember: the security and secrecy of your data lie not only in the power of the encryption algorithm; you must also bear in mind the security of your machine. As such, you should be thinking about your system security in general. Basically, we recommend PGP and GPG, as we don't know of any cracks in the system. Encrypted data might be more than trivial to crack, but other means can be used to attack your security and peace of mind. Some time back, a Trojan horse was found which rooted about in systems searching for secret PGP keys and FTPed them away to some ne'er-do-well! The quality and integrity of your password obviously plays an equally important role.
How to encrypt under Linux
The two most frequently used means of encrypting files on Linux are GPG (GNU Privacy Guard, which is based on PGP – Pretty Good Privacy – but without the patent issues) and RIPEM. In this article we'll concentrate on GPG, which is shipped with all of the main distributions of Linux. First you need to generate two keys, public and private.
Generate your key

gpg --gen-key
Here you are asked for the type of algorithm you wish to use. Let's use the default DSA/ElGamal, because it's not restricted by patents. Next you are asked for a key length. The minimum is 768 and the DSA minimum is 1024, but let's think about this for a moment. The decision lies between security and the amount of time you want to spend encrypting messages. The greater the key length, the less likely your message will be cracked open. We will run with the default of 1024 bits. We are now asked for details such as our name, email address and a comment. Finally, we need to enter a pass phrase to help generate our keys. You'll need to make this something you can remember so that you can decrypt your files, but not something that can be easily guessed. Before starting to generate your keys, GPG will gather some random numbers from the system, so working with lots of windows helps to generate randomness.
Getting and using your keys
In order to have your key in a form that you can conveniently use, you'll need to export the key to a file:

gpg --export -ao public-key

The -a switch will produce your key in 7-bit ASCII, so it's easier on the eye, while the -o switch sends it to the file 'public-key' so that we have something to handle. When you receive someone else's key, you need to put it on your keyring. This is done with:

gpg --import public-key
Key distribution To make use of your public key, other people need to have a copy of it. This can be done by attaching the file in emails or by sending it on floppy disks. You can even use servers like http://www.keyserver.net/, which enables you to post your keys for other people to search for and pick up. There is a problem of trust with this however. Who is to say that someone with evil intent won’t give away public keys pretending to be you? The way around this is to use key signing. Your public key can be signed by other people who have verified it really is you, and who have been verified themselves. In practice you meet someone at a keysigning party for a quiet drink and exchange keys on floppies. Now their friends can accept your key because your friend says yes that really is you. In this way, a Web of trust builds up.
Locking the door
Now that we have our keys, and the public key of the person we want to send something encrypted to, we run the command:

gpg -e -r LinuxMag test.txt

To be safe we had better sign the file as well:

gpg -s data_file

and

gpg -d test.txt.gpg

will decrypt. There are GUI front-ends to help you through all of this (see http://freshmeat.net/projects/) but they are often not needed. The real trick is to set up your email client to encrypt automatically whenever you want – see the KMail screenshot for an example.
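A complete round trip for a single file might therefore look like the following, assuming the recipient's public key is on your keyring under the name LinuxMag (the file names are only examples):

gpg -se -r LinuxMag report.txt      # sign and encrypt in one step, producing report.txt.gpg
gpg -d report.txt.gpg > report.txt  # the recipient decrypts; the signature is verified at the same time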
Info
GPG HOWTO: http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto-1.html
Public key server: http://www.keyserver.net/en/
KNOW HOW
Apple Mac: Running Mac-on-Linux under PPC/Linux
MAC EMULATION
It’s been said many times that Linux has no applications. Jason Walsh looks at how to expand your productivity with Mac-on-Linux
Despite the availability of many professional desktop applications for Linux, such as StarOffice or Corel WordPerfect Office, rumours of the dearth of Linux applications still persist. In actual fact, there is a grain of truth in them, at least for users of non-standard versions of Linux such as PPC/Linux. Last month we looked at productivity applications on PPC machines, and before that at replacing Photoshop, but what if you really want to run Quark Xpress or need access to Photoshop's CMYK tools? What do you do if you need to use a particular application that doesn't exist under Linux and has no real equivalent? An awkward but useable solution is to boot into Windows on a separate partition or hard
drive. However, what do you do if you're running Linux on a PPC machine, such as a Macintosh? In this case, you can reboot into the MacOS (or AIX or BeOS, depending on your machine) but as with so many things in Mac land there is a more elegant alternative. Why not simply run your MacOS applications under Linux?

Illicit use of Mac-on-Linux
Users of non-MacOS PPC hardware are no doubt wondering whether they've just been given a Mac for free, or rather, can they boot MacOS using Mac-on-Linux on their IBM RS/6000, for example? Well, there really is no easy explanation for this. Legally, the answer is no. You must have a machine licensed to run the MacOS, whether it is an actual Apple Macintosh or one of the many clones that were produced in the mid-nineties by the likes of Motorola and Umax. From a technical perspective, it's a different story. Apple love standards, or rather they love helping to create them and then subverting them. Any tech-savvy Mac user will recall the acronym CHRP, or Common Hardware Reference Platform. This was a hardware standard developed by Apple and other tech companies, in order to replace the Intel x86 chipset. Unfortunately, not much ever came of it. However there are still some of these machines about, as well as some based on the PReP subset, and there's no technical reason why you couldn't run the MacOS on these systems in conjunction with Mac-on-Linux. This is because the MacOS no longer needs a hardware ROM in order to boot. Since MacOS 8.5 there has been a file lurking in the System Folder named Mac OS ROM. This file effectively replaces the physical ROM chip found in older (pre-G3) Macs, without which the OS refused to boot. Users of x86 systems, on the other hand, can forget about it. Mac-on-Linux is a PPC native application and requires one of the following CPUs to run: the PowerPC 601, 603, 603e, 604, 604e, G3 or G4.
Introducing Mac-on-Linux

If you're running MacOS X and need to run an application that hasn't been 'carbonised' (that is, an application that hasn't yet been ported from the old MacOS to OS X), the machine will boot the 'Classic' environment and then run your application. Essentially the Mac is emulating an older version of the operating system and running the application through it. This may be clever, but it's nothing particularly new. VMware and WINE enable Linux users to run Windows applications on their x86 systems; SheepShaver enables BeOS users to run the MacOS in a window; and any Power Macintosh (PPC-based Mac) runs old applications for the 680x0-based machines, using an invisible emulation process. Incredibly, even parts of the OS were run under emulation until the release of MacOS 8.5. Now Mac-based Linux users have a similar application, and best of all it's open source.
Mac-on-Linux
Compatibility Issues

Sadly it's now time to rain on your high-tech parade: Mac-on-Linux has some downsides. First of all, if you're using a Mac based on the PowerPC 603e, such as a Power Macintosh 4400 or many PowerBooks, you'll need to apply a kernel patch. Luckily, this is included in the RPM in the /usr/doc/mol-0.9.58/ folder along with the appropriate instructions for running it. Users of early Power Macintosh G4 machines also have a minor issue to resolve. MOL is incompatible with the
MacOS ROM file included on the MacOS 8.6 CD, which shipped with the original G4s. However, later ROMs, such as 1.6 and 1.8.1, are available from the download page: http://www.maconlinux.com/download.html MOL is also incompatible with many peripherals, such as SCSI scanners, some USB scanners, USB storage and so on. FireWire support is also patchy, as Linux currently has incomplete drivers. Depending on how you intend to use your Mac, sound may be a problem as
MOL simply doesn’t support it – nor does it support audio input or output. Lack of support for accelerated video is also a problem on a platform noted for its use in the creative industries. Finally Localtalk networking and PPP within MacOS require workarounds, details of which can be found on the MOL Web site. One final sad note is that MOL is not compatible with MKLinux, the only version of Linux that runs on some oddly configured older Power Macs, such as the Performa 5320.
Installing and running Mac-on-Linux

Installing Mac-on-Linux is easy – not quite 'Mac easy', but simple nonetheless. It's important to remember that you must be running a Linux distribution which uses the 2.2.10 kernel, or later. After downloading the RPMs, issue the following command:

rpm -i mol-version.ppc.rpm

Alternatively, those without the Red Hat Package Manager, or the brave, can download the source and compile it themselves. Next, invoke the commands below. This copies the MacOS ROM file from the MacOS System CD:

mount -t hfs /dev/cdrom /mnt
strip_nwrom "/mnt/System Folder/Mac OS ROM" /usr/lib/mol/rom/rom.nw

The above instruction assumes that you intend to use MacOS 8.6 or later. Should you wish to use an earlier version, you'll need to grab a copy of the ROM and convert it into a ROM image, using the ROM Grabber utility, which is available from the MOL downloads page: http://www.maconlinux.com/download.html This is reason enough to use a version of MacOS later than 8.5. The MacOS ROM file from MacOS 8.6 onwards will work on any PPC Mac. Finally, invoke the boot command from the bash shell:

startmol

The MacOS should now be booting. If, instead of booting, it displays a flashing question mark, this means that the MacOS cannot find a suitable partition to boot from. It is looking for an HFS partition with a working System Folder. If this
happens, you must configure MOL manually. Edit the /etc/molrc file and make the appropriate volume available.
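If MOL refuses to play at all, it is worth first confirming that the two basic prerequisites are in place – a new enough kernel and a correctly installed package. A quick sketch (the package name is an assumption based on the RPM file name above, so adjust it to match your download):

uname -r     # should report 2.2.10 or later
rpm -q mol   # confirms the MOL package is installed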
Two copies of MOL running
Performance issues

Running software through emulation or API layers will always cause some loss of performance. However, depending on your machine and what kind of application you want to run, it may prove to be worth it for the sake of convenience, particularly given how long the MacOS (and Linux) can take to boot. Some software is simply not designed to be started up for a bit of quick work. While you may want to quickly load an image editor to alter file formats, or boot up a word processor to fire off a letter, it is difficult to see why you would ever want to run the likes of Quark XPress for a few seconds. In cases like this you would probably be better served by rebooting natively into the MacOS.
Running MacBench
Users of MacOS X and Photoshop, which has not as yet been 'carbonised', will know that heavy-duty graphics manipulation under emulation is a pain. PPC/Linux users would be better served by rebooting to MacOS 9, or using the Linux native GIMP, which offers most of Photoshop's features at, wait for it, no cost. However, the performance tests for Mac-on-Linux are rather revealing. The Linux Icebox section of the famed Mac Web site, ResExcellence, found that Mac-on-Linux was only slightly slower than the OS X 'Classic' environment. I cannot compare like for like as I currently run PPC/Linux on an iMac G3/233 and OS X on a Power Mac G4/400, but I will state this:
MOL seems to be little or no slower than 'Classic', even on this older machine. If your MacOS requirements are more in the AppleWorks or MYOB accounting vein, then MOL is perfect.

CHRP: Common Hardware Reference Platform. A basic open platform developed by several hardware companies, including Apple, for producing machines that would run a series of operating systems, including Unix, MacOS, BeOS and, interestingly, Windows NT. This plan never came to full fruition as Apple effectively pulled the plug.
HFS: Hierarchical File System. The native disk format for MacOS. Also known as MacOS Standard.
HFS+: An improved disk format for PPC Macs. Also known as MacOS Extended.
MOL: Mac-on-Linux.
Power Macintosh: Macintosh computers that use the PowerPC processor. Earlier Macs used the Motorola 680x0 series, commonly referred to as 68k.
PPC: PowerPC. A chip series developed by Apple, IBM and Motorola. Used in Macs since the mid 1990s and in high-end IBM servers. To be used in the Nintendo Gamecube.
ROM: Read Only Memory (chip). A non-writeable area of computer memory. In this case it contains booting information and basic OS services for the Mac.
RPM: Red Hat Package Manager. An installation application for Linux.
68k: The Motorola 680x0 series of CPUs. Included the 68000, 68020, 68030, 68040 and 68060. Used in the original Macintoshes and also in the Amiga, Atari ST/TT/Falcon and the Japanese M680x0 computer.
Conclusions

All in all, Mac-on-Linux can only be a good thing. The performance loss your system will suffer when using it is minimal, to say the least. Mac-on-Linux opens up a whole world of applications to PowerPC Linux users and though the MacOS must still boot, not forcing you to halt Linux is a fantastic boon. Perhaps the best thing about MOL is that it makes Linux a true alternative to MacOS X for users of older Power Macs, which simply won't run Apple's next generation operating system. By offering similar features to the 'Classic' environment in OS X it enables users to have the power of Unix alongside the familiarity and legendary ease of use of the Mac. Without MOL, Linux would not compare to the functionality of OS X, but with it you can really get productive on your computer – after all, isn't that why you bought a Mac in the first place? However, the best news has been kept for last. Though I haven't personally tried it, MOL will apparently boot BeOS, and MacOS X compatibility is being worked on. Imagine that: a computer that can natively run MacOS 9.1, MacOS X, BeOS and of course Linux. Now that is a workstation. ■
Info
The Linux Icebox PPC site: http://www.resexcellence.com/linux_icebox/
The Mac-on-Linux site: http://www.maconlinux.org
SUSE EMAIL SERVER III

SuSE have recently produced a very glossy and highly desirable range of commercial products which neatly undercut the prices of most other similar products. Richard Ibbotson takes a long hard look at the latest SuSE eMail Server

Why me?

What is it that the SuSE eMail Server has to offer you? Apart from the absolute reliability of Linux software and the present day fact that there are not many viruses that will attack it, there is also the extremely user-friendly Web-based interface that you can use to perform all of the many tasks required from a mail server on a daily basis in a busy commercial or academic environment. The eMail server supports all of the usual Internet standards such as IMAP, LDAP, POP3, TLS and SASL. All common email clients can be administered from the workstation that's connected to the server, which provides a central administration point for a commercial organisation. Dedicated workgroups and all of the things that you might associate with proprietary software are available. Internal and external lists can also be set up and administered. What SuSE Linux UK Ltd has done is take some free software and written some high quality software to complement it. The end result is that the user only has to click on a Web page and fill in some easy to understand values for user and group configurations. SuSE has also populated the boxed product with some excellent manuals. The first part of the installation manual is easy to understand: if you can install a Microsoft product then you can use YaST2 and install the eMail server. Many commercial and even Government organisations that we have spoken to have said that they want this kind of commercial product and they are willing to pay good money for it. Quite a few of them also say that proprietary software is becoming prohibitively expensive due to the cost of licences. They also say that Linux is a viable alternative and they want more of it. The eMail Server III that was reviewed for this magazine was tested on i386 hardware, which is the kind of hardware you can find in most small companies worldwide. A 450MHz AMD K6-2 CPU and 128Mb of 100MHz RAM was the hardware used to install the server into the i386 architecture. The installation was over in less than ten minutes and after fifteen minutes an IBM notebook was connected to the networked server so that administration and configuration of the new accounts could begin. Configuration of a single account took only a few minutes and we noticed that the server and the Web browser that we were using both moved like lightning across the screen. This was also tested across the Internet with an ISDN link and little or no loss of speed was experienced. Most commercial organisations use digital communications and so there should not be a problem with remote administration and configuration.

Apache configuration
User account control
Configuring Fetchmail from the frontend
Postfix configuration
Expert mode Postfix configuration

Reasons to use?
How does it work and what makes it better than some of the others? To be honest it may not be better than some of the others, but there are a lot of people out there who do not want to hack a command line in the middle of a busy schedule or be involved in administering several hundred machines that crash all of the time. Let's face it: we've all had that problem at one time or another with internal or remote computers that need that demon tweak or an account adding/removing on the one day of the year when everything else has gone wrong. To use the eMail server for the purposes of configuration and administration, all you need is a Java-enabled Web browser on any machine in your own internal network. You can also connect to the same machine with SSH if you prefer the command line. The eMail server is basically a cut-down version of the ordinary SuSE distribution and so any secure session that you might wish to establish over an internal network or across an untrusted network is possible with the eMail server. This means that an SSL connection can be made with Samba as well, if you
want to do that. Apache configuration can be done with the addition of a CA. After the initial login the user or admin person is pointed by the graphical interface towards the first time configuration of a single user or complete group. There is also provision for browser-based configuration of postfix, procmail and fetchmail. If you have hacked these on the command line as much as I have then you’ll probably prefer a graphical approach. The actual Web forms that you are asked to complete vary in complexity and sophistication. If you are confused by first time configuration the manual should help you out and if that doesn’t work then support by fax or email is easily obtained. The first part of the manual gives easy to understand graphical instructions on how to install. The later pages (starting at chapter five) explain the simple task of logging in as an administrator and configuration. This part also shows pictures of what you can see on the screen so there shouldn’t be any problems. There are also complete descriptions of how to use various mail applications with the eMail server and how to configure those as well. Finally there is a section on how to use the Arkeia backup software to make a copy of your mail folders so that nothing is lost in the event of a disaster. Arkeia is the software that is included on the installation CD that comes in the box so that you can make your backups. As well as the installation manual there is also a
cut down version of the original SuSE manual. Those of us who know about this will be aware that the SuSE manual is one of the best books about Linux that has been produced. If you don’t like paper you will find that you can install the same documents into your own local hard drive for further reading from the CD. If you are short on disk space you can also connect via the Internet to the SuSE site where you will find the same documentation as well as the SuSE support and hardware database. A second CD contains the source code for the eMail server. So, if you are a developer or if you just want to change the way that the eMail server runs on your network you can do that.
Creating a new user account to receive email
What about viruses?

Amavis is included with the CD. There is a very good SuSE security team who are paid to look after you and a security list if you wish to discuss any security issues. You can get the kind of support for virus and other security issues that will make sure that your server will run for a very long time without interruptions and without intruders. If you don't like Amavis you will find email virus scanners out there on the Internet, which are commercial in nature and you will have to pay for them. If you want a reliable and virus-free mail server then the SuSE eMail server is for you. You can find more info about the eMail server by visiting the useful links.
The author
Richard is the Chairman and organiser for Sheffield Linux User's Group. You can view their Web site at www.sheflug.co.uk

Postfix mail queue
MIGRATION Working with windows in KDE
WINDOW VIEWS

Working with windows is an essential characteristic of graphical user interfaces – and under Linux this is just the same as it is in Windows. Anja M Wagner explains
There are lots of different window managers available under Linux to control the appearance and actions of the windows. In this workshop we'll explain the window configuration options for KDE, since KDE, which includes the window managers Kwin (version 2.x) and kwm (version 1.x), is very common and easy to use. For the purpose of this tutorial we'll be looking specifically at KDE 2.0.1 under SuSE Linux 7.1. It's not hard to migrate from Windows to KDE, as the ways in which windows operate are very similar in both graphical user interfaces. In KDE an application can be opened in its own window by clicking on one of the symbol buttons in the panel – comparable with the taskbar in Windows – or activating it via the menus. The active window in the foreground can be closed by clicking on the X button on the far right; maximised by a click on the button containing a little box; and minimised via the button displaying a horizontal line (see Figure 1). When maximising a window, KDE offers a simple option for maximising a window in the lengthwise direction only: hold down the Alt key whilst clicking on the maximise button. After minimising, a window shrinks down into a button in the window bar, which
is integrated into the right half of the KDE panel (see Figure 2). All it takes is a single click on this button to enlarge the window again. The size of the window can be smoothly altered by grabbing and dragging the edge or a corner of the window with the mouse cursor. In some applications, the KDE control centre for example, shrinking the window only works up to a certain point – namely only so long as the application can still be adequately displayed by the window manager.

Figure 1: Components of the desktop and windows
Differences

All this should be familiar to the average Windows user, but there are a few differences in the way things are done. Firstly, let's take the header of a window: this is where the name of an application and, where applicable, the name of the file opened in it are displayed. Double-clicking on this bar activates the "window winder", which causes the window to roll itself up into the header (see Figure 3). Another double-click unrolls the window again. This option makes it easy to get an overview of the desktop when there are lots of windows open at the
same time. On the far left in the header bar there is a small symbol button: a click on this opens the window menu.

Figure 2: Panel with window bar
Figure 3: Double-clicking on the header makes the window roll up into the header bar
Figure 4: The window menu as alternative to the mouse
Figure 7: There are three modes to choose from for positioning windows
Window menu

The window menu (see Figure 4) can also be opened via the key combination Alt+F3 or by right-clicking on the header bar. In this menu, in addition to the usual actions such as closing, maximising and minimising, it's also possible to change to another of KDE's virtual desktops. There is also the option of always showing a window in the foreground. If you activate this and later want to bring a different window to the foreground, you must first deactivate this option. On the header you'll also find a button with a drawing pin. This will be pointing horizontally if the window has just been opened (see Figure 5). The window is not "pinned on" and only appears on the desktop on which you have opened it. If you want a window to appear on all the installed virtual desktops, all it takes is a click on this button, and the window is "pinned down" (see Figure 6). If you close a pinned window in one of your desktops, this action will also apply to all your other desktops. Unlike Windows, KDE enables you to make and configure a maximum of 16 desktops. You can do this in the Look & Feel/ Desktop area of the KDE control centre. If you open several windows on one desktop, the window manager will arrange these so
they overlap. There are three possible modes for defining this arrangement. In the KDE control centre, select Look & Feel/ Window Behavior/ Actions. In the Positioning action field, a drop-down menu enables you to select whether the windows should be arranged in a smart, cascading or random manner (see Figure 7). With the Smart option, the window manager tries to keep the overlap as low as possible, so that as much of each window is visible as possible. This arrangement is the default, but does not differ substantially from cascading. Whichever arrangement mode you select, at some point it will become difficult to locate a window that is almost hidden. This is when the window bar on the right-hand side of the KDE panel can help (see Figure 9). Since KDE offers up to 16 desktops, the display of the window bar can be set such that either all windows are displayed on all desktops or only the windows on the currently active desktop. Both variants have their advantages and disadvantages. In desktops where a number of windows are open at the same time, clarity is likely to suffer (see Figure 9). If only the windows of the respective desktop are displayed, on the other hand, you must click through all the desktops until you find the window you're looking for. The configuration to display the window bar is done in the KDE control centre under Look & Feel/ Taskbar. Activate the option Display all windows. If a window is minimised, by the way, its title will be placed in brackets in the window bar.

Figure 5: When the drawing pin stands upright, the window is "loose"
Figure 6: Firmly "pinned" onto all desktops
Figure 8: When there are lots of windows, one can lose the overview
Figure 9: It's getting cramped in the window bar, too
Figure 10: Making space by moving

Panel
The panel (control bar) in KDE is located by default on the lower edge of the screen. In the left part there are important menu and program buttons. The K button at the far left opens a menu list and has the same function as the Start button in Windows. The window list is integrated into the right part of the panel.
More space for the window bar

To make more room in the panel for the window bar, you can shift the border of the window bar to the left. To do this, you first have to push together the panel's symbol buttons. When the central mouse button is pressed, the appearance of the mouse cursor changes, and you will be able to shift the symbol buttons (see Figure 10). Click again with the middle button on the two vertical lines in the panel, which separate the window bar, and drag the separating lines as far as possible to the left (see Figure 11). When a large number of windows are open, the easiest way to find a window is to select it from the window list (see Figure 12). This is reached by means of a symbol button in the panel. If, after the installation of KDE, this button is missing, press the right mouse button over a clear area of the window bar and select Add/ Window list. The Window list menu enables you to change the desktop and lists all open windows sorted by desktops. KDE wouldn't be KDE if there wasn't another way. Press Alt+Tab and a window appears in the middle of the desktop displaying an opened application with its name and symbol (see Figure 13). Hold down the Alt key and leaf through all the windows by pressing the Tab key. When the one you want is displayed, release Alt and the window will jump to the front. You can also
browse through your desktops in a similar way. Instead of pressing Alt+Tab, you just have to use Ctrl+Tab.

Figure 12: The window list makes it easier to get an overview

Virtual desktop
The graphical user interfaces of Linux have at their disposal more than one desktop, in order to provide sufficient space for lots of open windows. KDE offers four of these desktops as standard, and a maximum of 16. Every desktop can be separately configured. This improves the working conditions considerably, for example if one matches the desktop properties to the respective working area.
Focussing

As in Windows, a program window can be brought to the front by clicking on a visible part – this is called giving the window focus, as the active window is now the focus of all keyboard inputs. You can also configure the desktop to focus on a window through contact with the mouse cursor, rather than just clicking. To configure this in the KDE control centre, select Look & Feel/ Window Behavior/ Actions. Here you can set whether window content should be displayed when a window is moved or its size altered. If you have little in the way of computing capacity then you would be advised to deactivate both
options. You have already met the area of Positioning action.

Figure 11: A little clearer
Figure 13: Leafing through the windows with the keyboard
Figure 14: Mouse contact is all it takes to activate
Figure 15: Three buttons and any number of options

As standards for the activation of a window, there are several options available:
● Activation following a click is the default.
● Activation by mouse contact activates a window if the mouse touches any part of that window (see Figure 14). In addition to this, the option Automatically to the fore can also be selected. You can set how fast the window should come to the foreground by using the slide controller.
● If instead you select To the fore by a click, the window reacts to a click on any part of the window.
● Unlike the option Activate under mouse cursor: here you have to click on the title bar in order to bring a window forward. The automatic option does not apply in this case.
● Activation precisely under mouse cursor triggers the same actions as activation on mouse contact.

What happens if you click with one of the three mouse buttons on an active or inactive window? KDE offers a pretty bewildering range of possible combinations. These settings are made in the KDE control centre under Look & Feel/ Window Behavior/ Mouse Behavior. The default settings are largely similar to the action of the mouse under Windows. Left-clicking on the title bar of an active window brings this to the fore; a click on an inactive window activates it and again brings it forwards. If you've always wanted to do this with the right mouse button, then that's not a problem. For example, the left mouse button can move the window backwards and the middle key can open an actions menu (see Figure 15). There are almost no limits to the fun you can have experimenting.

It should also be pointed out that the edges of the windows are equipped with "magic" zones. These ensure that if a window is shoved near to the edge of the desktop, it "sticks" to the edge. The width of these zones can be varied between 0 and 50 pixels via a slide controller in the Magic Edges area. This is in the control centre under Look & Feel/ Desktop/ Edges. If you work with multiple virtual desktops it's possible to move a window from one desktop to another. You can do this by pinning on the window in one desktop, then deactivating the pin in the desktop of your choice. Another way of doing this is to right-click on the title bar or the edge of the window to be moved. Select the menu item On desktop and choose the desktop you want the window to be moved to. You will still need to change to that desktop, where you will find the window placed exactly the same as before. This also works in the opposite direction. If you're working on one desktop it's possible to fetch a window from a different one. Right-click on the button of the corresponding window in the window bar. (Obviously this only works if you've configured the window bar to display the windows on all desktops). From the pop-up menu select On desktop and choose your desired option. The desired window will now appear on your current desktop.
Shot down

As well as closing a window from the X button on the title bar, you can also press Alt+F4 or select the relevant option from the window menu. If a window refuses to close you can force it to do so, although this option should be saved for when there really is no other option. Press Ctrl+Alt+Esc and the mouse cursor will turn into a death's head. By left-clicking on a window you can now kill this window – this usually also ends the corresponding application at the same time. In order to get rid of the death's head mouse cursor simply press Esc.

In the next issue we shall be dealing with Konqueror, KDE's counterpart to the Windows Explorer. Just like Windows Explorer, Konqueror combines the functionality of a file manager and a Web browser all in one compact package.
A subjective view of the Linux market
THE LITTLE DIFFERENCES...

Rudiger Berlich investigates the origins of the various Linux distributions, and asks the question: "Are they really all the same?"
Let's face it – most Linux distributions are built from very similar components. Once you've installed them, you'll be able to configure them in such a way that a mere user will hardly be able to notice the differences. That's not to say there are no differences at all, only that they are smaller than one might think. Differences on a political and economic level are bigger. The decision about which Linux distribution to use should be based as much on criteria like the local market position or the availability of services and support, as on the technical merits of a particular brand.
The Free Software Foundation

The humble beginnings of Linux have their roots, not in 1991 with Linus Torvalds, but in the 1980s with Richard Stallman and the Free Software Foundation (FSF). At least, if you perceive Linux to be more than just the core operating system kernel, that is. The fact is that Linux wouldn't be what it is today were it not for the plethora of programs provided by the FSF. These include compilers, editors (yes, the famous EMACS editor) and many of the standard utilities available in a Unix system. Above everything else that Richard Stallman and the FSF have contributed, however, stands the GNU General Public License (GPL), which is the building block of Linux's success and is the core reason why there is an Open Source
movement today. Just as Linux would not have been possible without the FSF, it's only fair to say that the FSF and its goals wouldn't be as widely known and accepted today were it not for Linux.
The early distributions

Linux quickly became widely accepted thanks to its free distribution under the GPL and the then emerging Internet. At first, the standard method of installing Linux was the Linus Torvalds boot/root floppies – which required a lot of Unix expertise and was not suitable for a wider audience. Owen LeBlanc of the Manchester Computing Centre in the UK developed the first representative of what is known today as a Linux distribution: the MCC Interim Releases, which automated some of the tasks involved in installing a Linux system, such as copying software packages to your hard drive. Soon after this, Peter MacDonald brought the Softlanding Systems (SLS) distribution into existence. This was followed by the Slackware Linux distribution by Patrick Volkerding, which was in large parts based on SLS. It's worth bearing in mind that this all happened in 1992, barely a year after Linux began. Slackware was, and still is, semi-commercial – i.e. they fund their activities through the sales of Slackware on CD-ROM.
SuSE was founded in late 1992 in the Nuremberg area of Germany by four students of mathematics and computer science – Burchard Steinbild, Hubert Mantel, Thomas Fehr and Roland Dyroff. Other German distributions included LST (in Erlangen) and DLD (Stuttgart). SuSE was initially based on Slackware and incorporated various changes to make it more appropriate for the German market. Another Linux distribution – Jurix by Florian La Roche – was later incorporated into SuSE and the documentation was also translated into various languages, including English. SuSE is the oldest commercial distribution still available, and offers support for more hardware platforms than any of its commercial competitors. Ian Murdock began the non-commercial Debian distribution in late 1993, in an attempt to provide free alternatives to the emerging commercial Linux distributions. It's arguably the most well known example of a free (as in both "free beer" and "free speech") Linux distribution, although in March 2000 Ian Murdock began work with Progeny on a commercial variant of Debian.
The market leader
If you want to make a statement about market leadership then you really need to define what you mean by the term market leader:
1 You could define market leadership in terms of the market's perception of this topic. For example, you could survey 1,000 people and see which Linux distribution gets mentioned most often. The problem with this method is that many people will simply repeat their perception of the market situation rather than report what they actually use.
2 You could count the number of people that actually use a specific distribution. However this is also problematic as a single Linux distribution can be legally installed on any number of machines.
3 You could count the number of packages sold by a particular Linux distribution. This method suffers from the problem that few vendors may be willing to give you their exact sales figures (particularly if that vendor believes that they are not the market leader). Also, due to the ways that Linux can be distributed, not all of the installs of a distribution need to come about by someone purchasing the product.
Most vendors are the market leader in some way or another. Statements to this effect are printed all over press announcements and marketing material. What a specific vendor actually means by this varies greatly. If you follow method number one for the definition of market leadership, then you'll probably come to the conclusion that Red Hat is the market leader, at least outside of Germany that is. If you do the same survey in Germany – SuSE's home turf – the situation is very different. The mood will change again in France, this time in favour of Mandrake. There are independent surveys and online opinion polls that try to measure variables such as revenue, number of boxes sold or people using a specific Linux distribution. If you follow these (and thus use methods number two or three), the picture looks slightly different. Within Europe, the UK used to be a Red Hat stronghold. However, a recent survey put Mandrake in front, with SuSE pipping Red Hat to second place. Within Germany, SuSE is consistently rated number one – a 1999 survey by Deutsche Bank even rated SuSE as the worldwide market leader. In the US, surveys frequently rate Red Hat as the market leader, though a recent survey instead placed Mandrake at the fore. Mandrake appears to be gaining ground in many areas worldwide. Turbo Linux has proven very strong in the Asian markets, where Red Hat has also had its successes. Caldera seems to be forging its own path by building a very business-oriented customer base, rather than positioning itself as a consumer-oriented company – as is evident in its acquisition of SCO. It's difficult to draw any conclusion from all this other than that Red Hat, SuSE and Mandrake are arguably at the forefront of the Linux revolution. Where they exist in relation to one another is left to the judgement of the reader.

Red Hat and beyond

Bob Young and Marc Ewing founded Red Hat back in 1993. With the exception of Debian and its derived distributions, the Red Hat Package Manager is the standard amongst most Linux distributions. Red Hat was one of the first Linux companies to go public and it has subsequently bought various other companies. Among them are the German DLD distribution and Cygnus, the manufacturer of the embedded operating system, eCos. The Cygnus part of Red Hat's operations today contributes a significant proportion of its revenue stream. Red Hat initially made its Linux distribution available under the GPL, which enables other commercial vendors to build their own distributions based on Red Hat. Utah-based Caldera, Inc. was founded in 1994 by Ransom Love and Bryan Sparks. Caldera bought the German LST distribution, which today forms the company's German arm. In 2001 Caldera also purchased the assets of the Santa Cruz Operation, and so now owns the rights for SCO Unix and Unixware. (The only part of the former SCO organisation that remains independent is the Tarantella division). This development symbolises one of the effects of the Linux movement – a consolidation phase in the whole Unix industry. It is particularly noteworthy that Caldera has decided to branch out from being solely a Linux company – it now provides a customised version of UnixWare (now
called OpenUNIX 8) to the former SCO community. The UnixWare kernel now includes a "Linux personality", which in simple terms means that it's capable of running Linux programs. In fact, one large component of OpenUNIX 8 is a complete Linux distribution. OpenUNIX could therefore be described as a Linux distribution with a SCO Unix kernel. Turbo Linux was founded in 1992 under the name of Pacific Hitech. However, it was only later, as a Red Hat-based Linux distribution, that it became known throughout America and Europe. Turbo Linux recently attempted a merger with Linux Care, a company providing professional services mainly to the US market. Mandrake was founded in 1998 and is another of the "big" European Linux distributions. It quickly gained a large base of followers due to technical reasons and its unique distribution model – Mandrake is sold through the MacMillan publishing house in a franchise style agreement. Mandrake has recently gone through a mini-IPO, which has furnished them with 4.3 million Euros in additional funds, to help them through the hard months ahead. Although the company offers all the standard services, it's still regarded more as targeting the desktop user rather than businesses. Although Caldera, Turbo Linux and Mandrake were all initially based on Red Hat, they can today be thought of as wholly independent (though compatibility is being maintained). Caldera in particular is heavily based on the former LST distribution, to the extent that much of the development is being undertaken in Erlangen.

Standardisation efforts
Whilst technical differences exist between Linux distributions, these are becoming increasingly less important due to the ongoing standardisation efforts pursued by the Linux community. Some of this standardisation happens silently, such as the adoption of the RPM package format by most commercial Linux vendors. Others are a consequence of the fact that key components tend to be identical across different Linux distributions. Other efforts are more proactive and are steered by a committee of companies and private participants. Amongst these are the Linux Standard Base (LSB), which aims to promote a set of standards to increase compatibility, and the Filesystem Hierarchy Standard, which is consolidating the filesystem layout. On the training side, the specifications of the Linux Professional Institute could be regarded as a standard – particularly as most Linux vendors observe them. However, Red Hat has also brought out its own course material and test specifications, so the impact of the LPI is not as big as could be desired. Another standardisation that goes beyond the Linux market can be seen on the side of vendors of proprietary Unix systems.
The list goes on

After bringing its WordPerfect office suite to the Linux market, Corel later decided to test the waters with its own Debian-based Linux distribution. Having failed to generate much market awareness they now seem to restrict themselves to merely selling Linux software, such as WordPerfect and Corel Draw. Conectiva is a Brazilian-based distribution for the Latin American market and is mainly available in Portuguese and Spanish. Although relatively new, the distribution seems to be gathering significant momentum. There are literally hundreds of other Linux distributions, both commercial and non-commercial, but very few of these have gained a wider international acceptance. Many other companies are active outside the realm of Linux distribution creation. Examples include Linux Care, who started to provide professional services to commercial entities in 1998. VA Linux started in 1993 and for a long time positioned itself as the worldwide market leader in Linux hardware solutions. Following the decline of the whole IT industry in early 2000, VA has undergone extensive restructuring to become a services and software engineering company – based upon its development platform, SourceForge. VA Linux went public at the end of 1999 and alongside Red Hat has undergone the most successful IPO of the Linux industry. IBM has a special role to play in the Linux market, as this huge corporation has integrated Linux into its strategic planning and invests heavily in new technology. IBM's engagement has also marked a turning point in the adoption of Linux by large corporations. Other blue chip companies such as Compaq, Dell, Oracle and Fujitsu-Siemens also invest in and develop the Linux market.
Similarities

Linux distributions all use very similar components. To start with, in order to be called a Linux distribution the Linux operating system kernel must be included. While this particular piece of software (now totalling over a million lines of source code) is being developed and maintained with the assistance of Linux distributors (they employ some of the core developers, for starters) the kernel itself is not bound to any specific company. The final control rests in the
hand of Linus Torvalds, the original author of Linux. All Linux distributions will aim at providing the latest stable version of the kernel in order to remain competitive, particularly as driver support is usually provided by the kernel (with the exception of the graphics subsystem). All this means that, with certain restrictions, all recent Linux distributions compiled for the same architecture (Intel, for example) will be able to run the same programs. At the very least, installing and compiling a source package of another distribution on your preferred version will be possible, and in most cases you'll even be able to use the binary packages. On Linux a distinction is being made between the graphical interface and the underlying technical infrastructure. The part that communicates directly with the graphics card – and also has certain networking responsibilities – is called the X Window system. The arguments raised above for the Linux kernel apply just as well to the X Window system, which is called XFree86 under Linux. What this means is that commercial Linux companies will all tend to provide the newest version of the X Window system, and they also have the same hardware support for graphics cards. Today there are two major graphical interfaces on Linux that sit on top of the X Window system: GNOME and KDE. KDE is the older development (GNOME started a year later) but the two systems are arguably now on the same technical level. Both GNOME and KDE provide the functionality of a Window Manager (they draw a frame around a window and let the user move and resize it with the mouse, for example) but they also offer all the usual features of an integrated desktop environment (drag and drop, unified look and feel of all applications through the use of the same widget set, etc.). While some vendors, such as Red Hat, put more emphasis on GNOME, and others (SuSE and Mandrake) on KDE, every vendor will provide both environments.

So, all distributions are the same?

Although the similarities between the different distributions are strong, there are also some differences. In the following example, the package zsh, compiled for Red Hat 7.1, was chosen randomly from the Red Hat ftp server (well, it was the last one on the list, so choosing it was easy although maybe not all that random...) and an installation was attempted on a default SuSE Linux 7.2 system. SuSE is based on the RPM format and also uses the newest versions of the core packages (kernel, libraries), as does Red Hat. The first attempt to install the package results in the following message:
# rpm -i zsh-3.0.8-8.i386.rpm
error: failed dependencies:
libtermcap.so.2 is needed by zsh-3.0.8-8

libtermcap was subsequently installed from the SuSE CDs and a second attempt was made to install the software. The error message now encountered was:

# rpm -i zsh-3.0.8-8.i386.rpm
file /etc/zshrc from install of zsh-3.0.8-8 conflicts with file from package aaa_base-2001.5.15-2

It should be noted that zsh was not installed in the system before, and yet there was still a file /etc/zshrc. It is still possible to install the Red Hat zsh package, however you have to force RPM to overwrite the existing file or resolve this conflict in some other way, which can be dangerous. After the problem with /etc/zshrc was resolved, zsh ran without problems.
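Should you decide that overwriting the conflicting file really is acceptable, RPM can be told to carry on regardless. A minimal sketch, using the same package file as above – and to be used with care, since it genuinely will overwrite the existing /etc/zshrc:

# rpm -i --replacefiles zsh-3.0.8-8.i386.rpm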
The filesystem layout used to be a big problem in the interoperation of Linux distributions: for example, Red Hat places its start-up scripts under /etc/rc.d, while SuSE had for some time put its scripts under /sbin/init.d. Documentation will also find different homes. This is changing thanks to the efforts of the FHS (Filesystem Hierarchy Standard), but even when these differences still existed, vendors tried to make their distributions as compatible as possible. SuSE, for example, provided links in the appropriate places in order to make sure that Red Hat RPMs could be installed without problems. The example above illustrates the following points:

● Different vendors have different ideas about what software should be installed by default. This means that you can't always rely on the fact that all software needed by a particular package by vendor A is already installed in the system of vendor B. In the case of SuSE, libtermcap could be post-installed without problems from the SuSE CDs, which will be the usual situation.
● Different vendors might place certain files in different packages, which means you may run into some conflicts with already existing files, if you install a "foreign" package.
● The Red Hat zsh binary was dynamically linked with three libraries: libc, libtermcap and libnsl. All of these were available in the correct version on the SuSE system or could be installed without problems (in the case of libtermcap).

The last point is probably the most important. Even though there were some minor difficulties in installing the Red Hat package on the SuSE system, the necessary infrastructure was available or could be easily installed. Incidentally, similar problems to those described above will probably happen when you install a SuSE package on a Red Hat system.

When you have a look at the reviews of Linux distributions in computer magazines, you'll see a strong focus on the installation rather than the everyday usage of the tested products. This is unfortunate, as it is not the installation alone that determines the quality and long-term success of a Linux distribution. On the other hand, this is the area where you'll see the biggest differences between different distributions, although these differences are more on a cosmetic rather than a functional level. As the installation accounts for only an hour or so in the life cycle of a system that will ideally run for a year or longer, differences on this level are largely unimportant. Another difference in the various distributions is the configuration utilities provided. Of course, you can always configure your system manually, and none of the distributions restrict you to using their specific configuration utilities, though they certainly play a role as no mere desktop user can be expected to fathom the ins and outs of a Unix system.

Finally, let's have a look at the licensing side of the picture. The majority of components in a Linux distribution are covered by the GNU General Public License, which basically allows everybody to do (almost) everything with it, as long as he/she doesn't try to take this right away from anybody else.
Info
Rebel Code – Glyn Moody (Penguin Books)
Just For Fun – Linus Torvalds and David Diamond (Texere Publishing)
Occasionally, in-house developments of Linux vendors are covered by licenses other than the GPL, which then prevents the copying of the CDs. Generally, differences between different versions of a specific Linux distribution can be at least as big as the differences between the products of different vendors (on the same technical level), at least as far as programming libraries, compilers and kernel versions are concerned.
Looking into the crystal ball

Linus Torvalds once started an interview by looking into an imaginary crystal ball, which subsequently broke. He commented on this with "Crystal Ball? Eno Crystal Ball". 'Eno' refers to the way kernel error messages are labelled. He then had to replace it with an imaginary "I can't believe it's not crystal" ball made out of plastic. It gave less accurate results, but didn't break. Looking into the future isn't easy, especially when taking into account the current market conditions. Some vendors will emerge from the current recession stronger than they were before, as they absorb market share from other vendors that didn't make the grade. It's evident that the core business of Linux vendors is shifting away from pure operating system design, to providing services and software for the Linux operating system. This development has already been evident over the past couple of years. Standardised systems increase the available market segment for everybody. Only a marginal amount of money will be earned in the future through the sales of Linux distributions alone. Already the technical differences between the distributions are small, and people debating the question "which particular brand of Linux should be used" should put at least as much emphasis on local market conditions and the availability of services and support for that brand as on the increasingly unimportant technical differences. The key message here is that differences between Linux distributions on the same technical level are smaller than differences between different versions from the same vendor. It should also be kept in mind that no one can be interested in a market that is dominated by just one Linux vendor. We're not yet in a situation where Linux has gained the majority of the market share in the computer market. Linux has a stronghold in the server market, but it's still weak on the desktop. With the advent of more handheld devices and the immense improvements taking place with regards to Linux's graphical interface, a large increase in its share of the desktop market is not unlikely – we should keep in mind that Linux still only "owns" six per cent of this segment. It is in the interest of all of us to join forces, not to encourage fragmentation. ■
LEARNING THE UNIX OPERATING SYSTEM

This, the fifth edition of this handbook, has been fully updated from the last edition nearly four years ago. It now covers the pico text editor, pine email, the lynx Web browser and two interactive chat programs, and includes a chapter on networking. Other sections have been updated to cover newer systems and the glossary and index have been expanded and improved. The book concentrates on teaching basic commands to give the user a good grounding on using their Unix system without overwhelming them with detail. Exercises and examples are included, as is a quick reference card, to pull out and keep by your computer. The book is a very handy guide for new users of Unix; it covers all the most frequently used areas and explains clearly and concisely what to do. It only touches the surface of what's possible, but for those who want to go further
into the subject there are plenty of weightier tomes out there to help. It doesn't cover any systems administration and assumes that there is a competent systems administrator around to ensure that all is running as it should be. The authors assume no prior knowledge of computing so that many sections may seem a bit too basic but, all the same, it is a useful quick reference guide to basic Unix commands and is easily navigated without getting confused by all the detail that so many user guides are full of.

Author: Jerry Peek, Grace Todino & John Strang
Publisher: O'Reilly
Price: £13.95
ISBN: 0-596-00261-0
LINUX SYSTEM ADMINISTRATION
In contrast to the previous book, this book assumes that you want to be your own systems administrator and it sets out to give you the knowledge to do just that. It opens with a useful comparison of the different distributions with pros and cons and contact Web sites, and continues with ideas of where to find help, from the manual pages of the program to Usenet groups. The book goes into great detail right from the start. It tries not to assume any particular distribution is being used though this means that some explanations are, perhaps, a little more complex than they might otherwise be. Having said that, the book is very user friendly and talks you through installation and use in a straightforward and entertaining manner. Some chapters contain sections on specific distributions; each
chapter ends with a list of resources to help with aspects covered in that chapter, usually Web sites but also books. All the major topics are covered in depth with an emphasis on the necessity for security and there are plenty of tips and tricks to make your life easier. Whether you are administering a company system or simply setting up your own at home this book will guide you through the maze and help configure your system, as you want it. Some chapters are more readable than others, but on the whole this is a very accessible work on a complex subject.

Author: Marcel Gagné
Publisher: Addison Wesley
Price: £34.99
ISBN: 0-201-71934-7
LVM: Enterprise computing with the Linux Logical Volume Manager
VARIABLE DIVISION

A Logical Volume Manager (LVM) makes it possible to adapt disk capacity to dynamically changing requirements while the system is still in use. Heinz Mauelshagen explains why LVM is indispensable for business-critical applications
One of the most important requirements in the field of professional IT is to be able to reconfigure computer systems online and without halting operations. In this regard, logical volume management plays a major role. The advantages are obvious: time and costs are saved, as backup and restore tasks are dispensed with, and applications don't have to be interrupted, so there are no expensive system stoppages. This is achieved by decoupling block devices and physical disk partitions. The latter, as physical storage media (Physical Volumes, PVs for short), form the lowest level of a three-level architecture. One or more PVs are combined on the second level into virtual disks (Volume Groups, VGs). The full storage capacity (minus a small metadata portion per PV) can be assigned to virtual partitions in the third level (Logical Volumes, LVs). The LVs are addressed as regular Linux block device files, so that any filesystems can be set up on them (see Figure 1).
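To get a feel for how these three levels map onto the command line, here is a minimal sketch of creating and later growing a volume with the LVM user-space tools; the device names, sizes and mount point are only examples, and the exact filesystem-resizing step depends on the tools shipped with your distribution:

pvcreate /dev/sda5 /dev/sdb5        # turn two partitions into Physical Volumes
vgcreate vg00 /dev/sda5 /dev/sdb5   # combine them into a Volume Group
lvcreate -L 2G -n data vg00         # carve out a 2GB Logical Volume
mke2fs /dev/vg00/data               # the LV is an ordinary block device
mount /dev/vg00/data /data
lvextend -L +1G /dev/vg00/data      # later: add another gigabyte to the LV
# the ext2 filesystem must then be resized to match (resize2fs, or e2fsadm to do both steps at once)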
HP-UX as godfather When LVM is used, physical disks can be added to a running system and their capacity assigned to existing volumes. After many years of experience with the commercial LVM variants of HP, IBM, Sun and Veritas, my fingers were itching to extend Linux with LVM functionality. The LVM project started in February 1997, and version 0.1 was launched in July 1997. After some wide-ranging functional expansions over the past few years, version 1.0 was released in August of this year. Since the LVM in HP-UX displays a highly intuitive command line interface and
Figure 1: The three-level memory architecture of the Linux LVM
56
LINUX MAGAZINE
Issue 16 • 2002
thereby considerably cuts down on the cost of learning, my own implementation is largely based on it. Incidentally, LVM was originally developed by IBM and adopted by the OSF (now the Open Group). The LVM implementation in HP-UX is based on this. To be able to use the logical block devices provided by the LVs under Linux too, and to set up filesystems on them, an extension of the Linux kernel is necessary. This is done by means of a device driver.
Queuing magic In this article, I will limit myself to the principle and the application of LVM under Linux 2.4. If anyone would like to delve somewhat deeper, I would advise studying the source code of the kernel and of Linux LVM – assuming you have a knowledge of C. Hence this article will also make references to these sources. Under Linux 2.4, in addition to elementary functions such as open, close, read and write, a block device driver registers a make request function, which is invoked by a central function of the Linux block device layer (see /usr/src/linux/drivers/block/ll_rw_blk.c; functions ll_rw_block, submit_bh and generic_make_request) before an I/O request is placed in a device-specific queue. Queues serve the best possible processing of I/O requests by holding them for a (very) short time (device plugging), in order to put them into the best possible sequence before they are passed on to the device (such as an ATA adapter) for processing (device unplugging). Since LVM has to convert between logical and subordinate (physical) devices, it implements a remapping driver. This contains its own make request function and registers it, so that before submitting a request (a call to generic_make_request in /usr/src/linux/drivers/block/ll_rw_blk.c) it can perform manipulations on its administration data. The administration data relevant to us is in the buffer_head structure, which is set up by the kernel for each buffer containing I/O data. All I/O data
LVM application
Figure 2: Example of the mapping of an LV onto two different PVs
buffers in the disk cache – which the kernel maintains and dynamically adapts in size for performance reasons – have a buffer_head structure. Linux 2.4, unlike version 2.2 and its predecessors, now only performs caching in a page cache and uses buffer_head only at the interface with the block device layer, whose central function is ll_rw_block. buffer_head (see /usr/src/linux/include/linux/fs.h) contains, in addition to several other members, a real sector address – b_rsector – and the address of a real device – b_rdev. After a logical volume (LV) is opened by mke2fs, a number of read and write accesses follow, so as to save the Ext2 filesystem structures on it. The ll_rw_block calls executed at this point lead directly to the invocation of the LVM driver’s make request function, which is called lvm_make_request_fn and is defined in /usr/src/linux/drivers/md/lvm.c. The function invoked by lvm_make_request_fn, lvm_map, requires a table in which the addresses of the devices – namely those of the physical volumes – and the sector addresses on them are listed. To avoid the need for a table entry for every individual sector – which would end up as a gigantic table – a number of consecutive sectors are combined into physical extents (PEs) and assigned one to one, as logical extents (LEs) of the same size, in the logical address space of the LV. The mapping table thus contains an entry for each assigned PE, describing the address (b_rdev) and the real start sector (b_rsector) on the respective PV (Figure 2).
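As a rough illustration of the principle – this is a simplified sketch, not the actual driver source, and the table and field names other than b_rdev and b_rsector are invented for the example – the remapping amounts to rewriting those two fields before the request is handed on:

/* Sketch only: the real lvm_map() in /usr/src/linux/drivers/md/lvm.c
   does considerably more work (locking, snapshots, statistics). */
struct pe_map { int pv_dev; unsigned long pe_start; };      /* hypothetical */
extern struct pe_map mapping_table[];                       /* hypothetical LE -> PE table */
extern unsigned long sectors_per_pe;

struct bh_sketch { int b_rdev; unsigned long b_rsector; };  /* stands in for buffer_head */

static void remap_sketch(struct bh_sketch *bh)
{
        unsigned long le     = bh->b_rsector / sectors_per_pe;  /* which logical extent */
        unsigned long offset = bh->b_rsector % sectors_per_pe;  /* offset within it */

        bh->b_rdev    = mapping_table[le].pv_dev;               /* real device: the PV */
        bh->b_rsector = mapping_table[le].pe_start + offset;    /* real sector on that PV */
}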
Once all the LVM elements (PV, VG, LV, LE and PE) have been pre-set, it’s the turn of the application itself. Similarly to PVs, whose names are defined by the device files issued by Linux (such as /dev/ hdb2), VGs and LVs also receive a name when they are remade. VG names appear in the form of subdirectories in /dev/ and LV names appear as block device files in the VG subdirectories. The user interface of the Linux LVM is implemented as a CLI (Command Line Interface) with 35 commands, which correspond to the three levels of the memory architecture. All commands for manipulation of the PVs begin with pv; all those for the VGs with vg; and those belonging to LVs with lv. Since almost every level is involved with creating, removing, displaying, extending, reducing, renaming, scanning or changing attributes, most of the command names are produced from a combination of the prefixes pv, vg or lv with these abilities (Table 1). In addition to these, there are commands for backing up and restoring the metadata stored on the PVs; to move VGs from one system to another; to combine two VGs into one or to split up one VG; to move LEs or LVs of assigned PEs; and to change the size of an LV including the Ext2 filesystem (Table 2). Don’t be scared off by the number of commands at this point, since only three commands are necessary to create the first LV: pvcreate, vgcreate and lvcreate. There are manuals available for all the
Table 1: Basic LVM commands
pvcreate     Create a PV
pvdisplay    Display the attributes of PVs
pvscan       Scan for existing PVs
pvchange     Alter attributes of PVs
vgcreate     Create a new VG
vgremove     Remove an empty VG without LVs
vgextend     Extend a VG by additional PVs
vgreduce     Reduce a VG by empty PVs
vgdisplay    Display the attributes of VGs
vgrename     Rename a VG
vgscan       Scan for existing VGs
vgchange     Change attributes and activate/deactivate VGs
lvcreate     Create an LV
lvremove     Remove inactive LVs
lvextend     Extend an LV
lvreduce     Reduce an LV
lvdisplay    Display the attributes of LVs
lvrename     Rename an LV
lvscan       Scan for all existing LVs
lvchange     Change attributes of an LV
Table 2: Extended LVM commands
pvdata            Debug displays of the attributes of PVs
pvmove            Move LV data online
vgcfgbackup       Perform back up of metadata of VGs
vgcfgrestore      Restore metadata on PVs of a VG
vgck              Check consistency of metadata of VGs
vgexport          Log off a VG, in order to move its PVs to another system
vgimport          Make moved VG known to the destination system
vgmerge           Combine two VGs into one
vgmknodes         Remake the device files of VGs
vgsplit           Split one VG into two
e2fsadm           Change size of LV and Ext2 file system
lvmchange         Reset LVM
lvmsadc           Collect statistical data
lvmsar            Display collected statistical data
lvmcreate_initrd  Create initial RAM disk to boot with root file system on LV
lvmdiskscan       Scan for devices supported as PV
commands. To get you started, there is a basic introduction with a list of all the commands (man lvm).
Fdisk indispensable To avoid unintentionally overwriting a partition already in use with pvcreate, partitions must be set via fdisk to the type reserved for LVM, 0x8E; only then can pvcreate be used on them. It is in any case advisable to create at least one partition, even if the whole disk is to be used as a PV under LVM. The advantage is that the disk then appears under /proc/partitions and is displayed in fdisk, so it cannot later be mistaken for unused space if one invokes fdisk -l, for example. The disadvantage – that one sector is given up to the partition table – is an acceptable price to pay.
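By way of illustration – the device name here is only an example – preparing a partition for LVM then takes just two steps:

fdisk /dev/sde     # in fdisk: t (change type), select the partition, 8e (Linux LVM), w (write)
pvcreate /dev/sde1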
which creates an LV named melv (my first LV) with 100Mb. This LV has the device name /dev/mevg/melv. Via mke2fs /dev/mevg/melv a filesystem can now be installed, which can then be mounted as usual in any directory of your choice, just like a normal partition-based filesystem. These first few steps do not yet display any particular strengths, since all we have done is create a virtual partition on a physical one. If the LV becomes too small and there is still free capacity in the VG, we can expand it without re-installation or a reboot. This is done with the command lvextend -L+200M /dev/mevg/melv which adds a further 200Mb to the 100Mb. Since the filesystem stored in the LV is not (yet) automatically expanded at the same time, a filesystem command has to take over this task. If one uses Ted Ts’o’s resize2fs, the filesystem must not be mounted; on the other hand, Andreas Dilger’s ext2online is capable of expanding Ext2 filesystems in the mounted condition – providing you have the necessary kernel patch for this: resize2fs /dev/mevg/melv Both tools are supported by the e2fsadm program supplied with Linux LVM, which executes lvextend and resize2fs together when resizing Ext2: e2fsadm -L+200M /dev/mevg/melv Figure 3 shows the main inputs and outputs for the
Simple practical examples If you have created /dev/sde1 as described and have set the type to 0x8E, you can use pvcreate /dev/sde1 to create a first PV, then with vgcreate mevg /dev/sde1 a first VG named mevg (my first VG). If this is successful, vgcreate automatically loads the necessary metadata into the LVM driver, so that subsequently the mapping tables of existing LVs are available, or tables of newly created LVs can be loaded. Seen another way, vgcreate creates our first virtual disk, which (still) contains a single physical disk partition, and activates the VG for further use. The first LV is created with the command lvcreate -n melv -L100 mevg
Figure 3: Installing and expanding a filesystem
Info
Alessandro Rubini & Jonathan Corbet – Linux Device Drivers (O’Reilly)
LVM HOWTOs: http://tech.sistina.com/lvm/doc/lvm_howto
LVM homepage: http://www.sistina.com/products_lvm.htm
Figure 4: Instead of calling up lvextend and resize2fs separately, one can also use e2fsadm
little sample session. The last two commands (lvextend and resize2fs) can be replaced by calling up e2fsadm (see Figure 4). Since expanding or newly creating LVs can easily make our VG reach the limits of its capacity, it’s possible to add additional disk space after installation. New PVs are created as described above. If /dev/ sdb1 is available as an additional partition, the new PV is initialised via
relocated to PEs of other PVs without data loss, with the aid of pvmove. After that, the free PV can be removed from the VG with vgreduce mevg /dev/sde1 to add it to another VG whose capacity has now become too small, for example. Another instance when pvmove may be used is to replace partition A on a disk, which is too small or too slow (/dev/ sde) by a larger or faster one (B on /dev/ sdb). Provided there is a free connection for B, one would first add this to the VG, in order to then move all data with
pvcreate /dev/sdb1

pvmove /dev/sde1 /dev/sdb1

The VG mevg is then expanded using:

vgextend mevg /dev/sdb1

Then the extra storage space is immediately available for the creation or expansion of LVs. If it is not possible to install additional hard drives in advance, all the steps for expanding the VG can be done without rebooting. Owners of Hot-Plug SCSI do not even have to reboot after installing new SCSI disks.
from A to B. If only A and B are contained in the VG, it is superfluous to specify the device file of B (/dev/ sdb1), since apart from this, no other destination PV exists. Figure 5 shows the expansion and reduction of the VG together with the relocation of data with pvmove. You’ve been introduced to some of the standard applications of the Linux Logical Volume Manager. For more in-depth information, the LVM-Howtos and the general guide over at http://www.sistina.com are highly recommended.
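Put together, a disk replacement of the kind just described could look like the following sketch (the device names are examples only):

pvcreate /dev/sdb1            # initialise the new, larger disk as a PV
vgextend mevg /dev/sdb1       # add it to the volume group
pvmove /dev/sde1 /dev/sdb1    # relocate all extents from the old PV, online
vgreduce mevg /dev/sde1       # remove the now empty PV from the VG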
Removing disks If you want to remove PVs from a VG, these must be empty. In other words, none of their PEs must be assigned to any LVs. This can be checked using the instruction: pvdisplay -v /dev/sde1
Figure 5: pvmove relocates data onto other disks
The mapping of a specific LV can be found with:
lvdisplay -v /dev/mevg/melv

The author
Heinz Mauelshagen is the author of the Logical Volume Manager for Linux. He works at Sistina Software, Inc., which specialises in file system development; in addition to LVM, they also maintain GFS.
If PEs are occupied, but sufficient capacity is free on other PVs of the same VG, the PV can be emptied with pvmove /dev/sde1 (the option -v displays the relocation of the individual LEs). Data in assigned PEs can be
PROGRAMMING
Scientific visualisation with VTK and Tcl
EYECATCHER
Graphical visualisation of complex data is no problem with the VTK library. This turns mountains of multi-dimensional data into clear images – and as Carsten Zerbst explains, it can be programmed using Tcl/Tk
As the saying goes, a picture is worth a thousand words, and with good reason: visualisation is helpful in understanding complex correlations. For 2D data there are several tools available under Linux, such as Gnuplot, Grace or SciGraphica. For anyone who needs more, a suitable free software solution is also available in the form of the Visualization Tool Kit (VTK). With its many display variants, VTK produces meaningful 3D graphics from multi-dimensional measurement data or calculations. VTK is not an independent tool like Gnuplot, but rather a class library that handles both visualisation and image processing. This library, written in C++, can also be used with Tcl, Python and Java, and its vast array of options offers solutions for any requirements. The sources are available at http://public.kitware.com and SuSE also includes VTK as an RPM. The demo programs and documentation files are also recommended. The library, language bindings and other files amount to a whopping 800Mb. If your space is limited, however, you can restrict yourself to the languages you really need and install only a selection of the associated files. The many demo programs are worth their weight in gold and you can often solve your own problems just by looking through them. The VTK book by the library’s authors is also recommended for anyone seriously interested in the subject. VTK originated in the medical science division of General Electric but has been under Open Source for many years. Consequently it has a lively user community with an active mailing list, and Sebastian Barré has compiled a useful selection of links. Some of the algorithms contained in VTK are patented, however: in their FAQ the authors point out that commercial use may incur license fees.
Figure 1: Possible topologies of structured points. Each point relates to one or more measurement values that apply to this location
Another field
Visualisation is based on data from calculations or measurements available for individual structured points. In VTK these points are called a dataset. The topology (arrangement) of these points can take various forms, depending on the methods used for the measurements or calculations. Finite element method calculations normally use an unstructured mesh, while computer tomographs measure points on a fixed grid. Figure 1 shows some of the common variants. Data exists for each of these structured points, these can be scalar values (such as temperature, density), vector values (displacement, flow speeds) or tensors (tension). Visualisation is much more than just the representation of forms. While CAD systems simply need to represent the geometry, things are more complicated when visualising measurements. Here, the aim is to render a mass of data in a suitable format through the use of various techniques. This means that the character of the data will sometimes have to be changed completely in order to achieve a comprehensible result. We will try to clarify this with the help of some examples. The magnification function (also referred to as amplitude frequency) of a simple oscillator depending on harmonisation and absorption (Figure 2) is a pretty simple task for VTK. Here an area is mounted and coloured in according to the formula.
Tcl update Another two months have passed in Tcl land. The current release 8.3 fixed some errors in the Mac port and then 8.3.4 came out at the end of October. Apple is supporting Jim Ingham in a native port of Tk to Mac OS X; the result is going to be available in 8.4. Although the developers are adding more and more functionality to 8.4 they seem to be lacking the will to finally release it. Aside from a lot of background work, the Tcl Core Team (TCT) has decided to integrate two new widgets: a frame with label by Peter Spjuth and a paned window by Eric Melski. Both widgets will be contained in Tcl/Tk 8.4. Although the new version is entirely useable by now, it is officially still in its alpha stage. Thanks to CVS access at SourceForge, the wait for the next edition has lost some of its horror, but a crowd-pleaser in form of the completion of 8.4 would be even better. Away from those little steps another
Figure 2: The magnification function of a simple oscillator. The colour (and height) represents the extent to which the oscillator reacts to a stimulus, depending on its harmonisation and absorption
decision is much more important: TIP (Tcl Improvement Proposal) number 50 has been signed off by the TCT. This is the proposal to deliver the OO extension [incr Tcl] together with the core release. That makes classes and inheritance available as part of the normal Tcl distribution from Tcl 8.4 (similar to Tk at the moment). A set of additional widgets, such as calendar, a progress display and many others, is also part of [incr Tcl]. Thread extensions are already available prior to the release of 8.4. Tcl itself has long been thread-enabled but this was mainly used from C, in applications with embedded interpreters. At SourceForge you can now get extensions, which enable the script side to use several threads as well. For Tcl this marks another step away from its father John Ousterhout, who thinks that threads are generally a bad idea. Whatever you think of .NET, the SOAP protocol is getting more and more support
Figure 3: The Tux Racer course as a grid: for each of the 80,000 structured points information exists on the height of the landscape at this point
Off the straight and narrow The Bumpy Hill course from the Tux Racer game is a bit more demanding. It is based on a bitmap with a height profile, which provides height information for every point of the landscape. In the first step the profile is represented as a grid (Figure 3). This grid contains 80,000 rectangles, far too many to handle easily. This is a common visualisation problem, the original volume of the data is simply too large. In Figure 4 the number of elements has been reduced significantly, although the visual appearance remains much the same, thanks to intelligent decimation techniques. In our last example Figure 5 shows a human skull. This is based on a file with density values from a
under Linux as well as elsewhere. TclSOAP can be used to develop servers and clients for the SOAP protocol. The extension uses TclDOM and TclXML and has been written entirely in Tcl. TclSOAP 1.6 implements SOAP 1.1 and fulfils the Userland test suites. Csaba Nemethi has introduced new versions of his Tcl extensions. All three packages have been written in Tcl and can therefore be used without compiling independent of platform. The widget callback and multi entry packages help with those niggling little user entry problems. These packages can reduce possible entries to a particular format (date, for instance), Ethernet address or telephone number. The multi column listbox is a feature that is sadly lacking in Tk at the moment. It is easily adapted to your own requirements, be they sorting functions or special colours, right down to individual cells.
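For readers who have not met [incr Tcl] before, here is a flavour of what its integration will mean – this fragment is purely illustrative and simply assumes the Itcl package is installed:

package require Itcl

itcl::class Counter {
    variable count 0
    method bump {} { incr count }
    method value {} { return $count }
}

Counter c
c bump
puts [c value]   ;# prints 1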
Figure 4: The Tux Racer course, but this time represented only by a few triangles. The graphic appears almost as detailed, although it contains significantly less information.
computer tomograph. Using the marching cubes method, a surface consisting of triangles is calculated from these. The surface covers the points whose density corresponds to that of bones. This changes the topology of the data fundamentally: the structured (3D) density value grid is turned into an area consisting of polygons.
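The decimation mentioned above is itself just another pipeline object. A sketch of how it might be wired in – the class and method names are those of the standard VTK distribution, the reduction factor is chosen arbitrarily, and iso is assumed to be a polygon-producing filter such as the vtkMarchingCubes object of Listing 1:

vtkDecimatePro deci
deci SetInput [iso GetOutput]     ;# polygonal data from an earlier filter
deci SetTargetReduction 0.9       ;# try to remove 90 per cent of the triangles
deci PreserveTopologyOn

vtkPolyDataMapper map
map SetInput [deci GetOutput]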
The toolbox To be able to keep track of all these conversions and renderings, VTK splits them into separate steps, with each step implementing its own object. The idea behind this is a flow of data from source to rendering. The individual objects modify this stream of data until the dataset (measurement values) finally
Figure 5: The surface of the skull was calculated from density values measured with a computer tomograph. While the data is provided in form of a volume grid, the surface consists of polygons.
Listing 1: Pressure distribution

#!/bin/sh
# \
exec vtk "$0" "$@"

# read in data
vtkStructuredPointsReader reader
reader SetFileName "press03"
reader SetScalarsName "pressure"

# create surface for values of 1.0
vtkMarchingCubes iso
iso SetInput [reader GetOutput]
iso SetValue 0 1.0

# map model to graphical primitives
vtkPolyDataMapper isoMapper
isoMapper SetInput [iso GetOutput]
isoMapper ScalarVisibilityOff

# the actor
vtkActor isoActor
isoActor SetMapper isoMapper

# colour surface seagreen
set isoProp [isoActor GetProperty]
# X11 colour sea_green_light
$isoProp SetColor 0.1255 0.6980 0.6667
$isoProp SetAmbient 0.4

# create renderer and window
vtkRenderer ren1
ren1 AddActor isoActor
ren1 SetBackground 1 1 1

vtkRenderWindow renWin
renWin SetSize 600 480
renWin AddRenderer ren1

vtkRenderWindowInteractor iren
iren SetRenderWindow renWin

# start representation
renWin Render
Figure 6: Example output: pressure distribution in high seas. The image shows the wave surface, which is situated where the pressure is at 1hPa.
turns into the required rendition. It is the vast number of possible intermediate steps and their interrelations that make VTK so powerful. How these VTK objects can be used in Tcl is again best illustrated using an example. It is based on a file containing pressure values in high seas. The aim is to calculate the wave surface and to render it as a simple image. As a first step, the measurement values must find their way into VTK. The data can be read with one of
Info
VTK sources: http://public.kitware.com
Manual: ftp://public.kitware.com/pub/vtk/nightly/vtkMan.tar.gz
S. Barré: http://www.barre.nom.fr/vtk/links.html
VTK pipeline: http://brighton.ncsa.uiuc.edu/prajlich/vtkPipeline/
TclSOAP: http://tclsoap.sourceforge.net/
Csaba Nemethi: http://www.nemethi.de
Thread Extension: http://sourceforge.net/projects/tcl/
John Ousterhout: http://home.pacbell.net/ouster/threads.ppt
Will Schroeder, Ken Martin and Bill Lorensen – The Visualization Toolkit (Prentice Hall, 1997)
the various reader classes. Apart from ones for VTK’s own format, these also exist for many other 3D formats (e.g. 3D Studio, VRML, PLOT3D, BYU, SLC, STL) as well as for bitmap formats. The measurement values are normally in the form of a grid. The location of the wave surface is where the pressure is 1hPa, and only this surface is to be displayed. This is achieved using the marching cubes method, which calculates triangular surfaces from fields of scalar values. Listing 1 shows in detail how the VTK pipeline is constructed; the result can be seen in Figure 6.
From grid to wave VTK is going to calculate the wave surface from the raw data contained in the file press03. The conversion is handled by the object iso, which is of the type vtkMarchingCubes and which implements the algorithm of the same name. This object is allocated the output of the read object reader as input using the method SetInput. All VTK classes use this mechanism for implementing the stream of data. Once the area has been calculated it needs to be mapped to graphical primitives (for example points, lines or triangles) for rendering. This is done using mapper objects, which represent the geometry of the model. The mapper isoMapper is linked to the output of the iso object and thus integrated into the data stream. Both geometry and colour are then represented by an actor. To give the whole thing a maritime touch the area is going to be displayed in sea green. Part of every actor is a property object for colours and ambience. This object can be returned using the method GetProperty, and is to be stored by our example program in the isoProp variable. The method SetColor then sets the colour as requested, while the SetAmbience method changes the lighting characteristics. There are still two steps missing from the pipeline before we get to rendering: a window which brings the graph to the screen and before that an object that calculates what is to be rendered. Calculation is handled by the render class, and an object from this class is linked to the actors. Here, the procedure is slightly different from before: instead of linking input and output, the AddActor method tells the renderer which actor to render.
VTK, windows and Tk Now we just need the output window. VTK recognises several window classes, in this case we are using vtkRenderWindow. You can also zoom in on or out of graphs and turn or move them within the window using the vtkRenderWindowInteractor object. Apart from vtkRenderWindow there is also a vtkTkRenderWidget. This behaves like a normal Tk
Figure 7: Decimator is a display utility for 3D formats. It uses Tcl/Tk for its GUI and VTK for 3D rendering
Figure 8: vtkPipeline illustrates the data flow of our example program. On the left it shows the objects and their relations, on the right the objects’ properties
widget; it allows you to construct the entire user interface with Tk and to use 3D visualisation as a sort of 3D canvas. A good example of the integration of Tk and VTK is Decimator, a utility for displaying 3D formats like BYU, STL and Wavefront (Figure 7). Now that the entire pipeline from file to window has been linked, renWin Render starts the data flow. Instead of a confusing mass of data we are presented with the desired image. The vtkPipeline utility can prove very useful: it represents the pipeline structure graphically and provides a good insight into its interdependencies and processes (Figure 8). The reader transfers the raw data via the connection vtkTemp0 to the iso object. This is the object selected for rendering in our program – consequently the right half of the window shows the name of its class (i.e. vtkMarchingCubes) and its properties.
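A minimal sketch of the vtkTkRenderWidget embedding mentioned above – it assumes the ren1 renderer from Listing 1 has already been built, and that the script is run by the vtk interpreter with Tk loaded:

vtkTkRenderWidget .rw -width 600 -height 480
pack .rw -fill both -expand yes
[.rw GetRenderWindow] AddRenderer ren1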
Conclusion Once you have familiarised yourself with its pipeline structure and its many options VTK offers a pretty quick route to producing images. In combination with Tcl, visualisation programs can be created quickly, and with user-friendly interfaces to boot, thanks to Tk.
The author Carsten Zerbst works for Atlantec on a specialised PDM-System for the ship-building industry. Apart from that he devotes his time to the general application of Tcl/Tk.
PROGRAMMING
C: Part 3
LANGUAGE OF THE ‘C’ In this, the third part of our C language tutorials, Steve Goodwin looks at more complex data types, extending our temperature conversion idea
There is an African tribe with a very basic number system. It is simple, elegant, and only has three numbers: Ock, far and rup, which stand for ‘one’, ‘two’ and ‘many’. Like them, we have a variable to store one piece of information. If we want to store two items, we need two variables. But for an arbitrary number of items we need something else. Perhaps an array. Perhaps a structure. So we’ll look at both.
Array of light There is no explicit data type for arrays. How could there be? Every variable in ‘C’ must be given a predetermined type. How would we know if our array stored ints, shorts, floats or doubles? Arrays are therefore declared by appending square brackets to the variable name (which does have a known type), giving it a dimension. float fSwapVar; /* a ‘normal’ variable */ float fTemperateEachHour[24]; /* an array */ We can then use these variables, thus: fSwapVar = fTemperateEachHour[8]; fTemperateEachHour[8] = fTemperateEachHour[20]; fTemperateEachHour[20] = fSwapVar; Simple, eh? Now for the drawbacks. The array cannot be passed as parameters into functions (I’ll spill the beans on pointers that get around this in a later issue!) and each element in the array must be of the same type, as determined by the syntax above. Also, it’s not possible to change the type or size of an array: fTemperateEachHour will always hold 24 floating point numbers. Now, next week, until the end of time and the day after.
On a boundary Each element in the array can be referenced with a unique index. The index is an expression (a constant number or variable) which sequentially references each element in order, from zero to the number of elements less one. In our example, that is zero to 23, inclusive. The last entry is 23, and the first is zero. Not one. Zero. Never one. Zero. Got that? Good! When I said ‘can’ refer to, it would be better to say ‘should’. There is no bounds checking in C. That is, the compiler never validates the array index to make sure it lies within the legal range. This makes it possible to write data to (and read from) fTemperateEachHour[24], fTemperateEachHour[-1], or fTemperateEachHour[7456925]. This is not an error but a design feature of the language, since the lack of bounds checking promotes fast execution times. This can cause bugs, which only show up at runtime when we try to read or write elements in an array that do not exist (see Memory access). The problems originate from the fact that although fTemperateEachHour[24] does not exist, it is unlikely to cause a segmentation fault, since it’s probably the memory location of another variable. The size of the array must be declared as a constant, integral number – variables are not permitted in ANSI C (C99 allows them, as do some compilers, but as non-standard extensions). int iMaxHoursInAYear = 366*24; float fTemperatureEachHourForAYear[366*24]; /* OK – size is constant */
Table 1
Example 1
How we declare the struct:
    struct {
        float fMultiplier;
        float fAddition;
    } Centigrade2Fahrenheit;
How we declare the variable:
    (we’ve already declared one – Centigrade2Fahrenheit – however the structure has no name so we cannot create another variable with the same type without re-declaring the whole structure)

Example 2
How we declare the struct:
    struct sConversion {
        float fMultiplier;
        float fAddition;
    };
How we declare the variable:
    struct sConversion Centigrade2Fahrenheit;
    (when declaring ‘sConversion’, we must precede it with the keyword ‘struct’)

Example 3
How we declare the struct:
    typedef struct sConversion {
        float fMultiplier;
        float fAddition;
    } Conversion;
How we declare the variable:
    Conversion Centigrade2Fahrenheit;
    (by telling the compiler we wish to define a type, it doesn’t need to be told Conversion is a structure explicitly, as in Example 2, as this name is now in the compiler’s symbol table*)
    struct sConversion Centigrade2Fahrenheit;
    (an alternative based on Example 2)
float fTemperatureEachHourForAYear[iMaxHoursInAYear]; /* Bad – size is not constant */ Arrays can be of any dimension. We’ve already seen a 1D array, with one set of brackets. Now let’s briefly see a 2D array, with two sets of brackets: float fTemperatureEachHourThroughoutAYear[366][24]; fTemperatureEachHourThroughoutAYear[0][23] = 4.5f; /* Jan 1: 11 pm */ fTemperatureEachHourThroughoutAYear[365][0] = 4.7f; /* Dec 31 (or 30!) at midnight */ It is theoretically possible to create a ten-dimensional array but no one outside the asylum has done so!
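As a small illustration of working through such an array inside a function – the temperature value used here is obviously made up:

float fTemperatureEachHourThroughoutAYear[366][24];
int iDay, iHour;

for(iDay = 0; iDay < 366; iDay++)
{
    for(iHour = 0; iHour < 24; iHour++)
    {
        /* low bound inclusive, upper bound exclusive, as recommended below */
        fTemperatureEachHourThroughoutAYear[iDay][iHour] = 15.0f;
    }
}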
In the beginning? Arrays, like variables, can be initialised when they are declared, but as they contain multiple values, you must initialise all of them – or none of them – using a comma separated list, enclosed between braces. Each value must be a constant, although some compilers permit variables to be used.
float fTemperateEachHour[24] = { 20, 18, 17, 16, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 27, 26, 25, 24, 23, 22, }; There is one deliberate non-mistake above! The comma after the last number – it ‘can’ be there! It doesn’t have to be, but it can. This feature was incorporated into the language to make it easier to write tools that output C source code, as they can blindly add a comma after ‘all’ numbers. The practise is also recommended for human programmers to ease maintenance!
Automatic for the people If you are willing to type in the initial values for an array (as opposed to reading them from a file, as we’ll examine in a later issue), then C supports another feature to help you: it will automatically count the elements, and declare an array of the correct size. The following are therefore equivalent: int iList[5] = { 1,2,3,4,5, }; int iAutoList[] = { 1,2,3,4,5, }; Naturally, you can only omit the size if you include
data (otherwise, how is the compiler to know how much space it needs?). When reading from such an array, the data must either describe itself, or you’ll need to know its size. iSizeOfWholeArray = sizeof(iAutoList); iSizeOfEachElement = sizeof(iAutoList[0]); iNumOfElements = iSizeOfWholeArray / iSizeOfEachElement; for(i=0;i<iNumOfElements;i++) { printf(“element %d is %d\n”, i, iAutoList[i]); } This also demonstrates the convention of a loop counter; using the low bound inclusive, and the upper bound exclusive. It stops gatepost, or off-byone errors, and is encouraged.
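Pulled together into a complete (if contrived) program, that counting idiom looks like this:

#include <stdio.h>

int main(int argc, char *argv[])
{
    int iAutoList[] = { 1,2,3,4,5, };
    int iNumOfElements = sizeof(iAutoList) / sizeof(iAutoList[0]);
    int i;

    for(i = 0; i < iNumOfElements; i++)
    {
        printf("element %d is %d\n", i, iAutoList[i]);
    }
    return 0;
}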
Centigrade2Fahrenheit.fAddition = 32; fFahrenheit = (fCentigrade * Centigrade2Fahrenheit.fMultiplier) + Centigrade2Fahrenheit.fAddition; As with arrays, this could be initialised with: Conversion Centigrade2Fahrenheit = { 9.0f/5.0f, 32, }; Naturally, each element in the structure is set up with its corresponding element in the list. As with arrays, you must provide data for each element in the structure, or none at all. The compiler will not guess on your behalf.
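For completeness, here is one way the Conversion structure from Table 1 (Example 3) could be used in a whole program – the value being converted is just an example:

#include <stdio.h>

typedef struct sConversion {
    float fMultiplier;
    float fAddition;
} Conversion;

int main(int argc, char *argv[])
{
    Conversion Centigrade2Fahrenheit = { 9.0f/5.0f, 32, };
    float fCentigrade = 100;
    float fFahrenheit = (fCentigrade * Centigrade2Fahrenheit.fMultiplier)
                        + Centigrade2Fahrenheit.fAddition;

    printf("%f C is %f F\n", fCentigrade, fFahrenheit);
    return 0;
}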
Happy together Architecture and morality Abstract Data Types (ADTs) include a number of differently named elements (which can be of different types) encapsulated within a single logical block. For example, a payroll package might group ‘name’, ‘date of birth’ and ‘employee number’ into an ‘employee’ structure. Although C doesn’t hide the elements of the structures, it is good practise to keep their handling functions together in the source. There are three basic methods for declaring structures. See table 1 on previous page: Each structure consists of normal variable declarations. You may include as many as you like here (within reason), and you may nest other structures inside this as deep you like (again, within reason). C’s rule of ‘declare-before-use’ permeates everything, including structures. You must have declared the structure before you can create variables with it – if its name’s not down (in the symbol table) it’s not getting (compiled) in – and the compiler stops! The name of each structure element (fAddition, for instance) only exists during compilation. It is not included in the executable (except as debugging information). Sorry, but I didn’t write the language! Technically speaking, new types cannot be created. Instead, the compiler simply creates a new name for an existing type. In this case – a structure.
Don’t stop the rock Structure elements are accessed with the ‘.’ (dot) notation, regardless of how it has been created. As with arrays, a variable inside a structure is like referring to one anywhere else.
Centigrade2Fahrenheit.fMultiplier = 9.0f/5.0f; 66
C is a very simple language: it has very few rules and even fewer instructions, making it almost trivial to combine different types. So an array of structures would be: struct { float fMultiplier; float fAddition; } ConversionList[3] = { { 5.0f/9.0f, (-32*5)/9.0f, }, /* Fahrenheit to Centigrade */ { 9.0f/5.0f, 32.0f, }, /* Centigrade to Fahrenheit */ { 1.0f, 273.15f, }, /* Centigrade to Kelvin */ };
From here, we could declare an array of temperatures, and convert them from Centigrade into Fahrenheit and Kelvin. In the best traditions of technical papers – “this is left as an exercise for the reader”!
Sonata for flute and strings in C# minor The biggest drawback with types in C (especially in the sysadmin field) is the lack of support for built-in strings. Or rather, a built-in string type, because strings do exist in C. Their implementation comes in two parts – data and processing. The data is stored as an array of characters, which, in addition to the usual array methods, can be manipulated with functions in libc. Let’s start with the storage.
Hold the line We’ve spotted the char datatype before, without ever really looking at it. That’s because I’ve been saving it for somewhere special. Namely, here. A char (pronounced char – as in lady, or car – as in automobile) can hold a letter from the standard ASCII character set, from 0 to 127. A char can be either signed or unsigned and so it is not portable to use extended ASCII characters from 128 onwards. So, if a char can hold one letter, an array of chars can hold a word, a whole sentence, or, providing the array was big enough, a whole book! There is no difference between an array used to hold a string, and one holding data of another persuasion. It still has (unchecked) bounds, its contents cannot be passed to functions as a parameter, and each value is stored in the array sequentially from the first element. However, as a special requirement, strings always end with zero. This is called the null terminator (written as either 0 or \0), and tells the string functions where to stop processing. For you, this means your array must be large enough for the string and the null terminator. Index Value
0   ‘s’
1   ‘t’
2   ‘r’
3   ‘i’
4   ‘n’
5   ‘g’
6   0
7   unknown
We can manually create a string using the array initialisation code we’ve already learnt: 1 #include <stdio.h> 2 3 int main(int argc, char *argv[]) 4 { 5 char MyString[] = { ‘s’, ‘t’, ‘r’, ‘i’, ‘n’, ‘g’, ‘\0’}; 6 7 printf(“str=%s”, MyString); 8 return 0; 9 } However, there is another way, and we’ve been using it for the last three months! Notice whenever we print a string to the screen (like “str=%s” above), we use double quotes (“). That isn’t just a convention, that’s a requirement! The double quote is shorthand telling the compiler to build an array of characters, and automatically add a null terminator to the end. This creates a string which printf can take and process as normal.
7 8 9
printf(“str=%s”, MyString); return 0; }
Sometimes, you will see this listing written with: 5
char *MyString = “string”;
This is not identical. There is one incredibly subtle difference. Here, the string data (“string”) is created as part of the code in exactly the same way that “str=%s” is in line 7. And because it’s part of the code – it isn’t part of the data. So, MyString[4] = ‘d’; will core dump as it tries to modify the code, where it wouldn’t, in the first two listings. Obviously, it is perfectly valid to use it as a read only string.
The string library Manipulating strings in C is a time consuming and thankless job. Every time you want to join, format, split or change a string you must make sure all the arrays you are using are big enough for the largest string you need (because there’s no bounds checking, and no simple way of growing an array once created). So how do you know what the largest text string is? You don’t! Ever! Strings can be overwritten so easily it isn’t even funny any more. The buffer overruns you might have read about in security bulletins happen for this reason. It is a very weak area of C programming. Being a library, you need to include the header file describing the functions, and a library to link in the code. Well, the string library is part of libc, and the header file is just: 1
#include <string.h>
The library provided for string manipulation is good enough for production work, but requires effort on your part to stop the overruns (which is why most programmers have their own string library). For the purposes of our examples, all arrays here are 80 characters, and we’ll assume that no string will be exceed that, so we can concentrate on the functionality.
We built this city 1 2
#include <stdio.h>
3 4 5 6
int main(int argc, char *argv[]) { char MyString[] = “string”;
There are four main functions for constructing strings. In all cases, the strings given can be variables or double quoted constants, and the first string is the destination (or target) that gets written into, whilst the second (the source) is read from, and left untouched in all cases (i.e. it remains constant). Issue 16 • 2002
if (strcmp(szMyName, “Susan”) != 0) printf(“My name is not Susan!”); strcpy(szMyName, “Steven”); strncpy(szMySurname, “Goodwin”, 79); strcat(szMyName, “ “); strcat(szMyName, szMySurname); sprintf(szInfo, “Todays average temperature was %f\n”, iAverage); The strcpy is probably the most widely used. It copies string data from the source to the destination. It does not know how big the destination buffer is, and will continue copying until it finds a null terminator in the source, which is the last character it will write. strncpy is the function that should be the most widely used! It works the same as strcpy, but will copy – at most – 79 characters (in this example) to stop you overrunning string bounds. However, it only adds the null terminator if the source string is less than 79 characters. This can cause problems if you then try to read from the string (since it never terminates, C does it’s little trick of trampling over memory). It is recommended you manually terminate such strings: strncpy(szMySurname, “Goodwin”, 79); szMySurname[79] = ‘\0’; /* Single quotes indicating a character literal */
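Declared in full – with generously sized buffers, in the spirit of the 80-character assumption made earlier, and with an example value for the average – those calls could sit in a program like this:

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char szMyName[80];
    char szMySurname[80];
    char szInfo[80];
    int iAverage = 15;                    /* example value only */

    strcpy(szMyName, "Steven");
    strncpy(szMySurname, "Goodwin", 79);
    szMySurname[79] = '\0';               /* terminate manually, as recommended */
    strcat(szMyName, " ");
    strcat(szMyName, szMySurname);
    sprintf(szInfo, "Today's average temperature was %d\n", iAverage);

    printf("%s\n%s", szMyName, szInfo);
    return 0;
}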
Quite simple, these ones! strlen calculates the number of characters in the string (excluding the null), while strcmp does a case sensitive comparison between two strings, returning zero if they are the same. It does more than a direct comparison, though. If the first string is ‘less than’ (i.e. first in the alphabet), the return value is -1, if it is greater (i.e. later), it returns 1. Although both equate to true (i.e. non-zero), it is better to explicitly write ‘!=0’ to remind you of the other possibilities. There is also a case insensitive comparison with the function stricmp. Additionally, to find out if the first 3 characters are the same, there’s strncmp, another ‘n’ function, which takes an extra third parameter indicating the number of character to check. There are a number of other string functions not covered here, with interesting names like strchr, strtok and strstr. They can be found in /usr/ include/ string.h, and will be understood after our lesson on pointers, which, since I can see the bottom of the page approaching, will have to wait until next month!
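Two quick examples of those comparisons, inside a program that includes string.h – the strings themselves are arbitrary:

if (strcmp("Apple", "Banana") < 0)
    printf("Apple sorts before Banana\n");

if (strncmp("Goodwin", "Goodman", 4) == 0)
    printf("The first four characters match\n");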
The second line treats the string like the normal array it is. This means we can mimic the LEFT$ function of BASIC (which takes the N left-most characters), for example, by writing: szMySurname[4] = ‘\0’; strcat performs concatenation: it searches for the end of the target string, and bolts the source to the end. Again, no bounds checking is done (observe the lack of numeric parameter). Notice the first strcat example we place double quotes around the space. Although a space can be denoted with the character constant ‘ ‘, we are actually dealing with strings. And strings must have a null terminator, so we place the space inside quotes to produce a char array, as opposed to a char. sprintf is a fun one. It acts like the ‘printf’ we’ve seen throughout this series. Although instead of writing the text to the screen, it writes it into a buffer. It’s great for formatting output, and converting integer values into text strings.
Too much information The two oft-used functions for learning about our strings are: iLengthOfString = strlen(szMyName); 68
Memory access When your program is running (usually in user space) it has access to some memory. When Linux loads your program, your variables (the program’s data segment) are placed into an area of memory, which has read/write access. The code of your program is also placed into memory – but this memory can only be read. If you try writing data into this code memory you will cause a segmentation fault, or core dump. If you try reading from, or writing to, memory owned by neither code nor data, it will core dump. With the code we’ve seen so far, arrays are the only thing that can write information outside the data memory that we’ve been given to work in, and that’s what causes the problem with fTemperateEachHour[7456925]. If you happen to own this memory location too, however, the program will change whatever data is there. However, since this is usually an unrelated variable, the program will have completely unpredictable results, which is obviously a bad thing.
BEGINNERS
Dr. Linux
ALWAYS UP TO DATE
Self built Mozilla.
Even large software projects always want you to install the newest version, but updates can easily become an annoyance. Helga Fischer helps to get installation problems with KDE and Mozilla under control
Q
I have downloaded the Mozilla source text, mozilla-source-0.9.3.tar.gz. I now wish to compile and use it, however I cannot find any information on what to do.
Dr. Linux: First a warning: the compilation process requires you to have a lot of free hard disk space (depending upon which options you choose, at least 800Mb, or preferably 1Gb) and, depending on processor performance, it may take a long time. A 400MHz processor takes about two and a half hours to finish producing Mozilla. So before you start make sure you have sufficient resources. Unpack the source file with tar -xzvf mozilla-source-0.9.x.tar.gz, and then change into the mozilla directory in the console. Like most software you can compile, the Mozilla distribution comes with a configure script, which checks your system for installed libraries and other programs and then produces the makefiles. So that you’re not disappointed, you should give configure some options. With configure --help you get a very detailed listing of the possibilities available. Unfortunately it does not tell most people which options they actually need, as it is aimed at those who do a lot of software compiling. With the following example configuration you will normally achieve the target result:

./configure --prefix=/opt/mozilla \
  --disable-debug \
Log File, into which a utility program (here the Web server) enters accesses of all kinds and in the case of an error, the assumed cause. Log entries are mostly found in /var/ log and they help during the error correction, and in addition warn of unauthorized accesses, so that one can take counter measures. HTTP-Request A browser’s request to the Web server to send to it a given Web page. The transmission protocol, and thus the language, in which the browser and Web servers converse is called HTTP (HyperText Transfer Protocol).
Dr. Linux Complicated organisms, which is just what Linux systems are, have some little complaints all of their own. Dr. Linux observes the patients in the Linux newsgroups, issues prescriptions here for the latest problems and proposes alternative healing methods.
  --without-debug-modules \
  --enable-crypto \
  --enable-chrome-format=jar \
  --with-x \
  --with-gtk \
  --disable-ldap \
  --enable-mathml \
  --enable-svg

The first option, --prefix, gives the installation directory that configure should arrange for the necessary files to be copied to. In the above example we can see that we are going to install Mozilla and its files to /opt/mozilla; alternatively we could have used /usr/local/mozilla. If you create the directory first with mkdir, then you are prepared for all eventualities. Since you’ll probably not wish to debug the Mozilla code, switch the corresponding options off (--disable-debug and --without-debug-modules).
Red Danger
Q
I have the Apache Web server running on my computer. In its log I find the following outputs :
[12/Sep/2001:16:36:40 +0200] “GET /default.ida?XXXXXXXXXXXXX...XXXXXXXXXXXXXXXXXXXXXX%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801%u9090%u9090%u8190%u00c3%u0003%u8b00%u531b%u53ff%u0078%u0000%u00=a HTTP/1.0” 404 281
Figure 1: A built Mozilla
This way Mozilla becomes slimmer and faster. The further options activate the Personal Security Manager (--enable-crypto) as well as the Mozilla mechanism that enables different themes to be used (--enable-chrome-format=jar). --with-x ensures Mozilla will run under the X window system, and --with-gtk binds Mozilla to the GTK class libraries, which determine its appearance and behaviour. LDAP support is still an experimental feature and therefore should be switched off (--disable-ldap). The flag --enable-mathml teaches Mozilla a specification language with which mathematics can be displayed on Web pages. SVG (Scalable Vector Graphics) is another vector diagram format that Mozilla, owing to --enable-svg, can display. The backslash character at the end of the lines tells the shell to continue with the next line as though they were all on one long line. It takes a while for the configure to run. Afterwards you call make and let the computer do the work. You can find the results of all your efforts under /Workdirectory/mozilla/dist/bin. Much of this directory is symbolically linked, referring to the necessary files within the Mozilla data trees. Change into this directory, and copy the files to their final destination with cp -r * /opt/mozilla Some of you may be missing the last step of the compilation process: make install is not needed here. Change to a normal user (not root) in the installation directory /opt/mozilla, and start Mozilla for the first time with ./run-mozilla.sh. This small shell script creates the working environment for the browser and then calls it. If everything functions here, you can then delete the work directory and start the program from now on with /opt/mozilla/mozilla. In order to fix the program call in the KDE2 Startmenu, select K/System/Menu editor (Figure 2).
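To recap the whole procedure in one place – the paths match the example above, and the copy step assumes you are root:

tar xzvf mozilla-source-0.9.3.tar.gz
cd mozilla
./configure --prefix=/opt/mozilla ...   # options as listed above
make
cd dist/bin
cp -r * /opt/mozilla
/opt/mozilla/run-mozilla.sh             # first start, as a normal user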
Dr. Linux: These log entries are caused by the Code Red worm and are harmless for Apache servers. These servers can ignore it. Code Red is a virus, which spreads over the Internet by HTTP-Request via a loophole in the security of Internet Information Servers (IIS), a type of Web server from Microsoft. The actual damage Code Red does consists of producing unnecessary network traffic and infecting computers, which can then be abused to attack other computers in such a way that the attacked computers fail and cannot offer their services any longer. Manufacturers of antivirus software and universities offer further information on this topic, for example http://linuxpr.com/releases/4067.html.
Figure 2: Mozilla in the K-Menu!
In the left-hand menu tree, choose Internet and click on the toolbar above New Item. In the right window you can now enter a name for the menu entry, a comment, as well as the command to start the application with which Mozilla is called (in our example it is /opt/ mozilla/ mozilla). You leave the
makefile A process specification for the compiler. This file determines the relationships between the source, object and executable files. Debug Error detection in a piece of software. With small programs we can still use hand and eye, but for larger software projects the additional Debug codes are given and used in compiling. LDAP Lightweight Directory Access Protocol. A possible means to test on-line directories (similar to telephone directories).
Type in the menu option as Application. To finish this task you need to assign a meaningful icon by clicking on the one present and choosing another from the lists presented. Click on Apply and the task is inserted into the K Menu, so that Mozilla is now just a mouse-click away. Alternatively – and somewhat easier – Mozilla is also available as ready compiled tar files to install. Here unpacking and copying to the final destination are sufficient. Additional information can be obtained on the developer page under http://www.mozilla.org/.
KDE wedges
Q
The new KDE installation ran error free. Unfortunately, the Desktop environment no longer starts. What could be the problem? Dr. Linux: First, look in the /tmp directory for the subdirectories mcop-username and ksocket-username, as well as for files whose names begin with dcop*. Delete them, and try to start KDE again. If that fails, look in your home directory for .DCOP* and .MCOP* files, and remove these. If the start problem still remains, then your personal KDE configuration data has possibly been damaged. This error occurs if you executed the update while logged in as that user; those who were not logged in are not affected by this. First protect your settings by making a copy. They are in the .kde2 directory in your home directory; move them with mv .kde2 .kde2.old
The dot before the directory name is not a typing error – it protects and hides the directory against inadvertent deletion with rm *. After this you should be able to restart KDE – however, everything has been reset back to the default values, so you will need to redo any adjustments. In doing this KDE creates a new .kde2 directory. You can now copy item by item from your old backed up copy of the .kde2 directory to the new one.
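Assuming your username is in $USER, the clean-up described above can be done in one go – adjust the paths if your files are named differently:

rm -rf /tmp/mcop-$USER /tmp/ksocket-$USER
rm -f /tmp/dcop*
rm -f ~/.DCOP* ~/.MCOP*
mv ~/.kde2 ~/.kde2.old    # only if KDE still refuses to start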
Konqueror conquers Java
Q
When using Konqueror to surf the Web, sections of the window remain grey. I also get a message that a program has been loaded but after a long wait nothing happens. Is it Konqueror or the Web page that is at fault? Dr. Linux: The problem arises from the fact that the relevant Web pages contain a Java applet. We first need to check that you have Java installed and that
Figure 3: The Java configuration of Konqueror
execution is also permitted by the Java plug-in. You can check the configuration under Settings/Configure/Browser, then the Java/Javascript tab. This tab page will let you configure either specific sites or global behaviour. With Enable Java globally you are simply allowing the execution of Java programs in the browser. The other options here are a little more complicated. If you want to restrict Java to a few specific Web sites, just your bank or building society sites for example, don’t use the global activation option; instead add the sites to the Domain-specific list with the Add button. You can then input a site and choose to accept or even reject Java applets from that address. You should now check that the Path to JDK is correctly set to wherever you have your Java runtimes loaded. If you should find that Java still doesn’t function despite being activated, it can be for two reasons: either the version that you are using has become outdated, or you are using a previously loaded version. Konqueror only works with the latest versions of JDKs (Java Development Kits). If necessary, you can install a new version of the JDK from http://www.blackdown.org/ or http://www.ibm.com/developer/java/ and place it in a new directory such as /opt/jdk-1.3. By setting the path in Konqueror to this new directory you can have a new version that only affects Konqueror and nothing else, such as StarOffice. Now you just need to click the Apply button to implement the changes. More information about configuring Konqueror and Java can be found at http://www.konqueror.org/konq-java.html. *A wildcard character, which the shell interprets as any amount of (or zero) characters.
OUT OF THE BOX
QUESTION AND ANSWER
The dialog program has been around for a long time, and is an integral part of almost every Linux distribution, but it has been living an undeservedly shadow existence. At best, one or two of you may have come across it when configuring the Linux kernel with make menuconfig, but the kernel dialog is a specially adapted version, which is not compatible with the normal program.
Do you write shell scripts in which you want to ask for user input? Christian Perle takes a look at the dialog tool, which provides a wide variety of input mechanisms

Out of the box
There are thousands of tools and utilities for Linux. “Out of the box” takes the pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.

Figure 1: Make menuconfig

Seek a dialog
The program has recently acquired its own homepage at the URL http://www.advancedresearch.org/dialog/. The present dialog maintainer, Vincent Stemen, created this site; the original author, Savio Lam, has meanwhile moved on to other projects. You can find out whether you need to install the program with the which dialog command. If no output is supplied, then dialog is not installed. If, on the other hand, the words /usr/bin/dialog appear, you will find the program in the /usr/bin directory.

Installation
With YaST(2), rpm, dpkg, apt-get and co. you can also install dialog as an rpm or .deb package which comes with your distribution. If you nevertheless want to compile the program yourself, proceed as follows:

tar xzf dialog-0.7.tar.gz
cd dialog-0.7
make
chmod 755 dialog
chmod 644 dialog.1
su (enter root password)
cp dialog /usr/local/bin
cp dialog.1 /usr/local/man/man1
exit

Both permission changes with the chmod command are necessary because, by default, the group to which the file belongs obtains write rights.

Yes or no
For a quick test, enter the command dialog --yesno "Do you play an instrument?" 15 60. A box should appear, 60 characters wide and 15 lines deep, with the question text and two buttons, Yes and No (Figure 2). With the cursor keys and Tab you can toggle back and forth between the buttons, and your selection can be confirmed with Return. The buttons can also be selected directly via the highlighted letters Y and N. You can also leave the box without making a selection by pressing Esc. dialog returns the selection you have made in the special shell variable named ?. This variable – which can be interrogated with echo $? – basically contains the numeric return value of the last shell command. In the case of the yes/no box, 0 means yes, 1 means no and 255 means the box was left without making a selection.
Figure 2: In dialog
Issue 16 • 2002
LINUX MAGAZINE
73
Listing 1: Shell script with dialog

#!/bin/sh
dialog --backtitle "Quiz" --title "music question" \
  --yesno "Do you play an instrument?" 15 60
ans=$?
if [ $ans = 255 ] ; then
  echo stopped
  exit
fi
if [ $ans = 1 ] ; then
  dialog --backtitle "Quiz" --title "challenge" \
    --msgbox "Well then go and learn one!" 15 40
  exit 0
else
  dialog --backtitle "Quiz" --title "Details" \
    --radiolist "Which instrument do you play? \
You can only choose one." 16 60 5 \
    "Violin" "(bowed-string instrument)" off \
    "Guitar" "(plucked-string instrument)" on \
    "Piano" "(keyboard instrument)" off \
    "Trumpet" "(brass instrument)" off \
    "Bass" "(bowed-string instrument)" off 2> /tmp/dialog.sel
  instr=$(cat /tmp/dialog.sel)
  rm /tmp/dialog.sel
  if [ -z $instr ] ; then
    echo stopped
    exit
  fi
  dialog --backtitle "Quiz" --title "Quiz ends" \
    --msgbox "So you can play $instr. Well listen to this \
then... sounds atrocious! ;-)" 16 40
fi
Embedded In order to use dialog to the full, you embed it into a shell script, which does various things, depending on the return value. Listing 1 shows one example. A series of further options has been added to this.
Figure 3: A radio list
So the respective boxes are kitted out with suitable headings using --backtitle and --title. The return value of the first box is stored in the variable ans and evaluated with if constructs. The option --msgbox makes dialog display a simple report with no alternative choice. The flag --radiolist, on the other hand, displays a list of which only one element can be selected with the space bar (Figure 3), similar to the station buttons on a radio (hence the name of the option). The selection of a radio list element is not returned numerically, but as text on the standard error channel (stderr). Accordingly, this has to be diverted into a temporary file, which is done with the construct 2> /tmp/dialog.sel. The content of this file is then read into the variable instr. If this is empty (which can be checked using -z), the selection was interrupted with Cancel or Esc. Otherwise, its content is shown in a further msgbox and the script ends.
What's the option? --file is a relatively new option in dialog, which provides for easy file selection. The shell script in Listing 2 shows a sample application, although this will only run with a new version of dialog, not with version 0.62 or older, which is what current distributions install. It shows a file selection dialog, with which one can browse through the filesystem starting from
Listing 2: File selection with dialog

#!/bin/sh
dialog --backtitle "Open text file" \
  --title "select file" --clear \
  --file $HOME 15 62 0 2>/tmp/dialog.file
file=$(cat /tmp/dialog.file)
rm /tmp/dialog.file
if [ ! -z $file ] ; then
  echo $file contains $(wc -l < $file) lines and \
    $(wc -c < $file) characters.
fi

Figure 4: File selection
one's own home directory. If one ends the dialog by selecting the OK button, the number of characters and lines of the last file selected will be displayed. The --file option is followed by the start directory for the file selection (in this case the home directory of the current user, as saved in the environment variable HOME). The next two values define the height (15) and width (62) of the box. The following value specifies the mode for the box, possible modes being listed in Table 1. The selection of an existing file (mode 0) is shown in Figure 4. In a similar manner to the previous example, the return value is first saved in a temporary file and read into the variable file. If this variable is not empty (which is checked using ! -z), then the script uses the command wc to output the number of lines and characters in the file selected.
RTFM dialog can also be used as a simple pager to read text files. To do this, use the option --textbox. For example, to read the file /etc/services, enter in the shell: dialog --title /etc/services --textbox /etc/services 18 70. The cursor keys, Page Up, Page Down and the space bar can be used to navigate in the text. With / and ?, you can search forwards or backwards respectively. Reference to additional useful dialog options, such as checklists or input boxes, can be found in the manpage, which you call up with man dialog.
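As a taster of those further options, an input box works along the same lines as the listings above. The following is just a sketch (box title and sizes chosen arbitrarily); as with the radio list, the text typed in lands on the standard error channel:

dialog --title "Greeting" --inputbox "What is your name?" 8 40 2> /tmp/dialog.name
name=$(cat /tmp/dialog.name)
rm /tmp/dialog.name
echo "Hello, $name"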
Kernel The operating system kernel forms the interface between hardware and running processes. It also provides multitasking and memory management. The actual Linux is only the kernel.
URL Uniform Resource Locator. The unique address of a resource on the Internet. The URL also specifies the transfer protocol, for example http://www.google.com or ftp://ftp.kernel.org/pub/.
.deb The package format of the Debian distribution. Such packages can easily be installed and uninstalled with the package manager dpkg or the easy to use front-end apt.
rpm With the Red Hat Package Manager (which is also used by SuSE) software packages can be neatly installed and uninstalled. The associated package format is also called RPM.
Shell One of the most important parts of every Unix system – the command line-controlled user interface.
RTFM Read The Fine Manual, the discreet reference to the fact that there is documentation available to read.
$ To find out the content of a shell variable, put the operator $ before the variable name.
Pager Program for page-by-page display of a file. Common pagers are more and less.
Table 1: Modes for --file
Mode   Meaning
0      Selection of an existing file
1      Selection of an existing directory
2      Input of an existing or non-existent directory
3      Input of an existing or non-existent file
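For example, mode 1 turns the same box into a directory chooser. Again this is a sketch only, and like Listing 2 it needs one of the newer dialog versions that know the --file option:

dialog --title "select directory" --file $HOME 15 62 1 2> /tmp/dialog.dir
dir=$(cat /tmp/dialog.dir)
rm /tmp/dialog.dir
echo "You picked: $dir"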
Box 1: Shell scripting For those who have not yet had any dealings with shell scripts, here are a few explanations of the listings in this article. Shell scripts are text files containing sequences of commands, which are executed by the shell one after the other when the file is called up. You can also bind the execution to conditions (the if command) or repeat parts of the script (the while and for commands). Variables are useful for storing values, and they are created by simply writing down a name and assigning it a value after an equals sign. You can find out the content of a variable by placing a $ in front of the variable name. In the case of if constructs, the actual condition is often formulated with the test command, which can also be written as [ for short. Simple comparisons can be done with =, but checks of files and character strings are also made available by options.
So [ -f foo ] checks whether foo is a regular file; [ -z $bar ] checks if the content of the variable bar is empty, and [ $a -gt $b ] finds out if the content of a (interpreted as a number) is greater than (gt) that of b. An if query for the shell must always end with an fi. In between there can also be an else branch, in which alternative commands are specified that come into play when the if condition is not met. The notation bla=$(command) first executes the command in the brackets and then uses its output at this point in the rest of the command line, so that the output is assigned to the variable bla. This mechanism is referred to as command substitution. To make over-long lines in shell scripts easier to read, you can write a backslash (\) before the end of the line. By doing this, the shell knows that the next line is to be regarded as a continuation of the current one.
On the other hand, if several commands are to be placed on one line, these must be separated from each other by semicolons. This is done, for example, following an if test. If the following keyword then were on a line of its own, though, no semicolon would be necessary. Apart from the standard output and the standard error output, which normally end up on the screen and can be diverted into a file with > or 2>, the shell can also use the standard input channel. This is normally linked to the keyboard, but command < file ensures that the command processes the data in the file. This is how the wc command in Listing 2 receives the lines and characters to be counted, from the file whose name is saved in the file variable.
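Pulling these pieces together, a tiny stand-alone script might look like this (purely illustrative – the threshold of 10 is arbitrary):

#!/bin/sh
# command substitution: count the entries in the current directory
count=$(ls | wc -l)
# a test written as [ ... ]; -gt compares numbers, as described above
if [ $count -gt 10 ] ; then
  echo "This directory holds $count entries - quite a lot."
else
  echo "Only $count entries here."
fi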
BEGINNERS
K-TOOLS: Qtella 0.2.2
BARTERING If you’re on the hunt for MP3s then Qtella makes it happen. Stefanie Teufel dips her toe in the file sharing waters
Napster is dead – long live its successors! One of the best-known alternative peer-to-peer projects glories in the name of Gnutella. Although the name has a lot in common with a popular nutty spread, what lies behind it is something completely different – a whole network of linked computers (“peers”), which exchange and answer search queries between themselves. Hits are sent back via the same route as the search query, but the data exchange is always done directly. A search query always contains a TTL specification (Time To Live), which is reduced by one at each intermediate station. This is to prevent the Net being swamped with a flood of endless search queries. One thing you need to be clear about before telling the whole world about your musical tastes: anonymity is only guaranteed with Gnutella as long as no data is exchanged. If a data exchange does take place the IP address of the user is passed on.
Figure 1: All options to hand
K-tools In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn’t want to do without.
Tasty morsels Believe it or not, Gnutella was originally developed in order to swap sumptuous recipes. Nowadays, MP3s appear to be Gnutella’s greatest culinary delicacies. The legal situation with respect to downloading and passing on MP3 files has still not been completely clarified. At the moment, the prevailing opinion is that downloading MP3 files for private use is legal, at least so long as you own the original recording, but
this does not apply to the provision of music in MP3 form to the unknown general public (unless one has a licence for this). With Qtella we want to present a program that, as an extremely easy to use client for Gnutella, makes finding music files, images and so on twice as productive. If you want to plunge into complete download heaven, don't delay: download the latest version of the program from the project's homepage at http://www.qtella.net/. As well as a tarball, Mandrake and Red Hat users can also download the program as a suitable rpm package. SuSE users and the rest of the gang will unfortunately need to compile the program themselves. All you need to do, though, is unpack the sources, then run ./configure and make, and finally, as root, make install.
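Spelled out, the build might look like this – a sketch that assumes the downloaded tarball is called qtella-0.2.2.tar.gz (the actual file name may differ):

tar xzf qtella-0.2.2.tar.gz
cd qtella-0.2.2
./configure
make
su -c "make install"    # or: su, then make install, then exit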
Made to measure Before venturing into the configuration of the program, first start your new Gnutella client. To do this, simply enter qtella & in a terminal emulator. After going online, all you need do is select the menu item File/Connect, and the search for your favourite file can begin. Qtella very kindly connects itself automatically to a server, which acts as host, where it snaffles a list of Gnutella addresses and uses this to connect to the Gnutella network. Now take another look at the configuration menu, as this is where one or two useful features can be activated and undesirable properties can be switched off. To do this, click on the Configuration tab (Figure 1). Since it has always been better to give than to receive, first enter the directories you would like to share with other users in the box marked Shared Directories. Any directories that you open up for sharing can be easily removed from this list by using the Remove button. Under Download Directories you can specify – neatly divided according to completed and interrupted downloads – the directories on your home hard disk in which you want to hoard newly won treasures from the Gnutella network. If you can't even be bothered with incomplete files, Qtella can also be configured, with the aid of the On Exit item, so that interrupted files up to any size you choose (remove interrupted downloads with size <=:) or all incompletely downloaded files are automatically deleted from the disk when you shut down the program. Anyone who never gives up hope in such situations can of course attempt to resume the transfer at any time. In the Unfinished Downloads area you can define how your client should act in such cases; it's your choice whether you start a completely new download or resume an old attempt. In the second case you define whether your criterion is based on
IP address An IP address currently consists of a code number of four sets of digits, each from 0 to 255 inclusive, separated by dots (192.148.0.195, for example). This means every individual Internet computer has its own unique address. So that you don’t have to recall such blocks of numbers, these IP addresses are converted into alphanumeric designators such as www.linuxmagazine.co.uk. Even a PC that’s only connected to the Internet for some of the time needs an IP address. Some Internet providers assign their customers a fixed IP address. Big providers and online services, which give lots of customers access to the Net, often keep a whole pool of Internet addresses. Instead of assigning each participant a fixed address, when a customer dials in he is allocated any address which has become free from the pool, which means it is assigned dynamically. MP3 Actually MPEG 1 Audio Layer 3. A method which reduces the audio data in CD quality with negligible loss to about one eleventh of the original size. Firewall Technology in the form of hardware and/or software, which controls the dataflow between a private and an unprotected network (such as a LAN and the Internet) or protects an internal network from attacks from the Internet. To do this a firewall compares, for example, the IP address of the computer from which a received datapacket originates with a list of permitted senders – and only their files are allowed to pass.
the size of the files, the download host or both. It is also possible to specify how to deal with downloads which use file names already in use on your hard disk. Do you want to write over the existing files (overwrite), download and rename (download and rename file), or stop the download completely instead (abort download)? You can define this in the Existing Files box.
Seek and you shall find The configuration work is now done, the network connection has been made (Figure 2) – but where
Figure 2: Only connect
Figure 3: Full match
are the files? On one of the connected computers, if you're lucky. In order to track down the files you're looking for, Qtella provides the helpfully titled Search box. Given the breadth of files being shared across the network, it can be helpful to restrict your search to a specific file type. In the second pull-down menu, instead of Any Type, select the desired file type – Music, Images or Video. You can also set the minimum speed you expect from your counterpart, using the Min Speed pull-down menu.
Red, yellow, green and grey If your search query is successful, Qtella displays a window as in Figure 3. The length of the bars reflects the connection speed to the host and the colours, the status of the connection. Red stands for a host that is “closed”, yellow signals that it is behind a firewall. Green means the host is open, and grey leaves you without any further information. If you have found the file, together with a suitable host, select it using the mouse and then click on the Download button. You can find out whether you have been successful by clicking on the Downloads tab. Here you can monitor status, progress, download rate and time remaining live (Figure 4). Anyone who subscribes to the Winston Churchill school of thought, and only trusts statistics which they themselves have falsified, will be duly disappointed by the Statistics tab, where Qtella lists everything important and unimportant to do with your Qtella sessions. See for yourself how long you have spent so far on the Net, which files you are sharing, how many downloads there have been and so on.
Figure 4: Are you making progress?
Figure 5: Bean counting, Qtella style
BEGINNERS
DISTRO WARS or ‘Not LinuxWorld Ireland’
The LinuxWorld Ireland conference, which was scheduled for the end of November, was cancelled due to the current world situation. However, after a big discussion on the Linux Beer Hike list about LinuxWorld IE, the general consensus was that the opportunity for a good drinking session should not be missed. So it was that at the end of November, John Hearns and I found ourselves in Dublin to attend the community-based replacement event, “Distro Wars”. We spent the first day doing the tourist things in Dublin, whilst the others visited Newgrange, an important Irish prehistoric site. After a tour of Dublin's more recent historic landmarks, we then had a very welcome hot chocolate in Bewley's Café on Grafton Street. The following morning John looked a little rough – understandable, given the previous night's sampling of the Black Bush with Liam Bedford and the LBW crowd. A 10am lecture at Trinity College made for an early start, so we booked a taxi.
Great turnout On the way we spotted two familiar figures trudging along, so the taxi screeched to a halt and Alan Cox and Telsa Gwynne were bundled in. After a long journey round the back lot of Trinity, we finally found the lecture hall where Alan was scheduled to give a talk. The talk was on Free Software, and covered a range of topics such as DMCA, DECSS, software patents and the GPL license. Around 60 people turned up to hear the talk – proof that Linux really does come from the grass roots. After the lecture, we took a quick trip to see the Book of Kells and the superb Long Room library at Trinity (http://www.tcd.ie/Library/kells.htm). We then retired to the pub for some food and discussions. Liam had reserved a space in Messrs Macguire, a brew pub on the Quays beside O’Connell Bridge. The area was on the third floor and there were no lifts for my wheelchair. John soon returned with three Linux volunteers, however, and together all four carried my chair up the stairs. We sampled some of Messrs Macguire’s finest beers, and even Tux was seen sampling a glass of stout (see the photograph). John also found himself leading a Birds of a Feather session on clustering and Grid computing. That evening ILUG had arranged a booze-up in a brewery – well, we had to prove that Linux geeks
The LinuxWorld Ireland event may have been cancelled, but that didn't stop Linux users from jetting off to the Emerald Isle. Philippa Wentworth was in attendance
Penguins having fun at the party
could organise one! We went to the Dublin Brewing Company for a tour of the brewing process and all the beer you could drink for £20. The small drinking area of the Brewing Co. was full of Linux people sampling the brew and eating Indian takeaways. We joined up with our two Bavarian friends, Kerstin and Philip Weinbrenner. The Bavarians were a bit ‘green’ in the selection of Indian food, and so hadn’t learned to err on the side of caution – so they got a very spicy meal. For the next morning, ILUG organised ‘Distro Wars’ – a paintball match between champions of competing distributions. We plumped for the safer option and had a walk around Georgian Dublin instead. On the walk we encountered two Linux friends – Yalla and Edwin. We took them along to see the Dublin Post Office in O’Connell Street, which still has the bullet holes. That afternoon, we went off to see a film at the Savoy cinema, which has Ireland’s biggest screen. After the film we said our goodbyes, hugged and promised to meet again. Then we had to dash for the airport. On the flight home, Aer Lingus were marvellous – even though we were the last aboard and they had to call us on the Tannoy! I got to sit in business class and got a bottle of champagne for John. We arrived to a rainy city airport, and a quick cab ride took us home.
Info
ILUG – Irish Linux Users Group: http://www.linux.ie/
You can read more about 'Distro Wars' on Telsa's trip diary: http://www.linux.org.uk/~telsa/Trips/dublin.html
LinuxWorld Ireland A 'real' LinuxWorld Ireland is going ahead next year. The dates are 9-11th April 2002 – see http://www.linuxworld.ie. So put the date in your diaries, and come over the Irish Sea in April for three days of Linux, Guinness and good craic!
BEGINNERS
DESKTOPIA: Xscreensaver 3.33
BREAK IN TRANSMISSION If you're going to take a break, it's worth protecting your screen from interlopers. Jo Moskalewski takes a look at the ins and outs of Xscreensaver
desktopia Only you can decide how your Linux desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys.
Screensavers are undoubtedly very popular. While many home and office users vie for the most attractive, one question needs to be raised – what's the point of them? Screensavers were originally intended to prevent the contents of the desktop being permanently burned onto the terminal screen. However, with any halfway modern monitor this is hardly something to be afraid of. Should the period of non-use last as long as that, it would surely be a better idea to simply turn off the monitor for the next few days.
Coffee break The other reason for using a screensaver, which is as valid as ever, has nothing to do with the lifetime of the monitor. It's much more likely to be that of protecting the desktop content from the gaze of the curious. This is then called locking the screen. When this happens, the screensaver can only be escaped by entering a password. Anyone who wants to do this will activate their screensaver manually when leaving the workstation, instead of letting it make an appearance automatically after all too long a period of inactivity. Lastly, today's screensavers are also more to do with satisfying the play instinct: the function of saving is questionable and can be realised equally well with the mains switch or an energy-saving mechanism. Anyone who wants to protect their desktop from others is really more in need of a manual start, including a password challenge. If you only really value the original function (or simply want a rudimentary energy-saving mechanism), then you would do well to use xset.
Black, no sugar
Figure 1: Standard screensaver, configured via xset
The simplest screensaver is a part of XFree itself. It offers a simple X logo animation or else simply turns off the desktop. You can control it with a range of arguments, so it's possible to make the screen go blank with a simple xset s 300 after five minutes (300 seconds). Table 1 sheds some light on the main functions. Anyone who tries to use the keyboard to protect the screen manually with xset s activate will soon
notice that this is not necessarily rewarded with success. The keyboard command itself deactivates the saver it has just called up. The remedy lies in a sleep 1s && xset s activate, so that the keyboard has a full second to come to rest. If you have set the option noblank and have first set the background with xsetroot, you will instead be faced with the mouse-grey spotted X standard background that xsetroot set (Figure 1). Users of the K Desktop Environment cannot join in with all these games: here it is not the actual root window which is used as the desktop background; instead, a frameless window covers everything that's happening on it. So an xsetroot -solid blue merely changes the desktop behind the KDE desktop.
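If you use the manual-activation trick described above often, a tiny wrapper script saves typing – a sketch only, name and location are up to you:

#!/bin/sh
# blank the screen on demand; the pause lets the keyboard settle so the
# key release does not immediately wake the saver again
sleep 1
xset s activate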
Milk and sugar If you also want a feast for your eyes at the same time, and are not averse to a goodly portion of comfort, you may wish to check out the Xscreensaver package at http://www.jwz.org/xscreensaver/. In most cases, however, this has probably already been installed as part of your standard installation. Xscreensaver is not what an exiled Windows user may understand by the term screensaver. Rather than choosing a single screensaver to be displayed, Xscreensaver lets the user select the desired ones from a list of countless options. These are then continually alternated between – if you select only a single one, then that is what will always appear when the screensaver is active. The concept is modular, so any graphics demos you like can be integrated into the package, as long as they are able to redraw the X root window. The basic package comes with no fewer than 123 demos. These are intended not only as eye candy, but also as a feast for scientists who want to have one or other
Figure 2: Splashscreen at the start
of the famous mind games paraded before them. Some of these will just be one big yawn for the average home user, some take the modern PC to the absolute limits of its performance, and some require features that may not be present in your home Linux system. Fortunately, with xscreensaver-demo you can rifle through the long list of demos. Even after filtering out the unsuitable graphics, there ought to be enough left over to suit just about any taste.
Cream buns You can personalise each individual graphics demo, and thus the screensaver itself, via command line options. Click on the Documentation... button to read the information provided for each one. There is an introduction like this for each of the 123 screensavers, and you can leave it again by pressing q. It's not only the list of available screensavers that can be viewed and configured here. In the second tab there are options with which you can tweak and experiment to your heart's desire (Figure 4). The changes made under the Screensaver Options tab don't take effect until the xscreensaver daemon has been restarted. This item can be reached via File/Restart Daemon. As xscreensaver-demo merely edits a configuration file, the xscreensaver program itself carries on taking care of functionality regardless (and unnoticed in the background). Once started, it knows nothing about the changes and has to be told about them explicitly with a restart.
Cakes Anyone who wants to control the daemon from the keyboard can do so using the xscreensaver-command tool. You can find out which options this command understands by calling it without any parameters. If the brief instructions that are then output are not enough for you, the manpage (man xscreensaver-command) offers more comprehensive information. The -activate parameter may be of particular interest. If you include the command xscreensaver-command -activate in your start menu (or if you create a desktop icon with this command, for example), then you can activate the screensaver directly with it. If instead you start it with -lock, then the screen will also be locked immediately. If troublemakers turn up, they will be challenged to enter the password. Only once the password is correct can the user return to the X session and carry on working. This leaves the question of how xset, or the daemon, is launched automatically. If the window manager in use offers a so-called Autostart function, then you simply enter xscreensaver (or xset) there. Regardless of which window manager is in use, there is another solution: on start-up, the X server looks in the user's home directory either for the
Table 1: xset's screensaver options
Command            Function
xset s             reset to standard settings
xset s 600         activate after 10 minutes
xset s blank       use blank screen
xset s noblank     display X logo instead of blank screen (cf. Figure 1)
xset s 600 300     move X logo to new position every 300 seconds (provided this is set)
xset s off         deactivate screensaver
xset s on          switch on screensaver
xset s activate    activate screensaver immediately
xset q             display current settings
Figure 3: Selection menu for the graphics demos
Figure 4: Configuration menu
file .xinitrc or for .xsession. If it finds one of these, the X start mechanism finds out from this file how the user wants their X session set up. One possibility would be the following structure:

xsetroot -solid "#102040" &
xsetroot -cursor_name left_ptr &
xset s off &
xscreensaver &
evilwm
In this example, the desktop background colour is set first. Thanks to the concluding &, the second xsetroot command doesn't have to wait until the previous one has finished, but can start immediately and change the mouse cursor into an arrow. Next, the standard screensaver is deactivated. After that, the xscreensaver daemon starts and finally a window manager (in this case evilwm – discussed in issue 14 of Linux Magazine). The window manager is not sent on its way with a concluding &, because as soon as it exits the whole X session is meant to end – and with it, all programs which were started by it.
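If you prefer locking to mere blanking, the xscreensaver-command options mentioned above can be wrapped in the same way – a one-line sketch suitable for a desktop icon or menu entry:

#!/bin/sh
# lock the screen immediately; the password will be required to get back in
xscreensaver-command -lock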
Figure 5: Unlocking a locked screen
COMMUNITY
Free World Want to know more about BSD?
ALL POWER TO THE DAEMON Richard Ibbotson takes a look at FreeBSD, an advanced operating system for many different architectures, and some of its fellow BSD Unix counterparts
There are no doubt many people out there who will get a bit bored of Linux and decide to try out one of the versions of BSD Unix to see what they have to offer. Those of us who have been around Free software for a long time will be aware that there are some Linux and BSD people out there who exist simply to have a go at each other. By and large this situation is thankfully something that your average computer user doesn't come across. If you want to try one of the BSDs, for either desktop use or for that all-important firewall you need to protect your internal LAN, then feel free to go ahead and try something out. As the OpenBSD people will probably tell you, Unix would seem to be about experience and knowing what to do rather than paper qualifications. The only way to learn with this software is by doing, and not by going to a university. You might well ask the question: what can I do
with BSD that I can't do with Linux? In fact, quite a few BSD users ask this of Linux users. BSD is generally credited with being more useful for firewalls and security applications than Linux, although in recent times there is some doubt as to whether this is still true. Linux developers have been heard to say that the BSD developers they have encountered were much too pedantic for their tastes. Over the past two years BSD, and the X servers that it uses, have become much more sophisticated from the point of view of desktop use. If you like to use KDE2 on your Linux desktop or notebook, then you can have the same thing on any BSD system you might want to install and use. Hardware like scanners and ADSL or ISDN cards is supported, and there's lots of documentation and support from the many mailing lists for people who are having a particular problem with one or more aspects of BSD.
FreeBSD Traditionally FreeBSD is the version that most people start out with. There is a massive pile of documentation and quite a few lists to subscribe to, so that if this is your own choice you can get the right kind of help and support delivered into your mailbox at a frightening speed. If you look at the FreeBSD Web site and compare it with the other BSD sites you will notice that the graphics and layout of the site proclaim its popularity. Over in the States both students and academic staff are actively engaged in a passionate argument about which software to use. This can occasionally break out into a riotous assembly and FreeBSD can be frequently found at the centre of that argument.
Where can I get FreeBSD from? If you can’t download FreeBSD then you can get hold of it from many other places – search the Net and you should find a supplier. The best version to get hold of is the official boxed set. This comes with an
NetBSD running KDE
FreeBSD 4.3 with Windowmaker 0.64
excellent book entitled The Complete FreeBSD. With the book in front of you – and the large mound of documentation from the FreeBSD site – you should be able to carry out a first time configuration.
NetBSD NetBSD appears to be very fashionable just now. It runs on most types of hardware, so if you want something like Windowmaker running on your Amiga then why not try it out? A quick look at the left-hand side of the NetBSD Web site will reveal just how many different computers you can run NetBSD on. Its clean design and advanced features make it excellent in both production and research environments. The BSD codebase can be traced back to the early 80s at the University of California, Berkeley, and has been open to public scrutiny ever since. NetBSD continues this tradition and works ever harder to promote clean design and functionality over hype. NetBSD is being used at NASA's Numerical Aerospace Simulation facility for a reason: their main platforms are Alpha systems with lots of RAM and disk space (terabyte and up), and they need a good, stable codebase on which they can build custom projects. NetBSD was also the first free OS to make a Y2K statement.
OpenBSD If you are a choosy system administrator who's seen it all, then you'll probably already know to use OpenBSD if you're going to use BSD at all. The authors of OpenBSD like to make sure that you know that OpenBSD has not suffered a remote hole in the default install for four years. Having made it past this kind of shocking news you can then, as a newbie, sample the delights of the OpenBSD community first hand and try out the ample and well catered for mailing lists
OpenBSD running the Gimp
which will provide you with the information you will need to produce your first working installation. The experienced person may find a snag or two on the way but these people are catered for with some of the more advanced mailing lists. OpenBSD emphasise portability, standardisation, correctness, proactive security and integrated cryptography. The current release is OpenBSD 2.9, which started shipping 1 June 2001. OpenBSD contains OpenSSH, which supports SSH1 and SSH2!
Want to know more? If you want to know more about some simple aspects of BSD, why not follow our newbies guide over the next few issues. Hope we see you again next month.
Info
FreeBSD                  http://www.freebsd.org
Resources for newbies    http://www.freebsd.org/projects/newbies.html
NetBSD                   http://www.netbsd.org
OpenBSD                  http://www.openbsd.org
Daemon News              http://www.daemonnews.org
Linux Emporium           http://www.linuxemporium.co.uk
COMMUNITY
Internet
THE RIGHT PAGES
When we're not hard at work producing the magazine, we like to spend our time searching out software and news on the Internet. In the office we all have our favourite bookmarks. Janet Roebuck sifts through some of the latest finds that we feel are important and useful

The Linux Kernel Archives
http://www.kernel.org This is the primary site for the Linux kernel source. It also has a nice deposit of free software not necessarily for Linux.

Linux Dot Com
http://www.linux.com A good portal site for all things Linux-based. The news section is from NewsForge. It has a nicely laid out Learn section with some good articles for all levels.

Linux Documentation Project
http://www.linuxdoc.org LDP's goal is to create the canonical set of free Linux documentation. It includes all the HOWTOs, manpages and guides.

The Linux Counter
http://counter.li.org The Linux Counter Project is a non-profit organisation dedicated to monitoring the numbers and distribution of Linux usage. The more people that register the more we can see demand and present to companies. Statistics can be sorted by country.
Linux World http://www.linuxworld.com LinuxWorld delivers hands-on technical information and real-world cases of Linux in the enterprise. It has features and assists for making purchasing decisions. LinuxWorld caters for enterprise technologists in need of products and consulting services that will solidify open source computing within their companies’ infrastructure.
Linux Central http://www.linuxcentral.com A well-produced Linux portal with the emphasis on selling you Linux-related products.
Linux International http://www.li.org Linux International is a non-profit association of groups, corporations and others that work towards the promotion of growth of both the Linux operating system and the Linux community.
Linux Apps http://www.linuxapps.com A quick way to find a new application. The site is split into categories with a search function and applications can be sorted by either name or date.
Start here http://www.linuxstart.com Linux Start is another search engine with a strong Linux focus. Applications are split by categories and the Web site is very clean and useable with easy to follow directions.
TuCows http://linux.tucows.com The TuCows sites are enormous, with mirror sites everywhere to help reduce download times. The Cream of the Crop feature is a quick way to keep up with the latest and greatest utilities.
UK portal http://www.linux.org.uk As Alan Cox maintains this site it is worth a visit. The portal engine provides all the news, but the best part is the diary. Actually there are two diaries and both are worth reading.
Fast search http://www.google.com/linux Direct your usual search by typing linux in the path name. This also works for mac and bsd.
Power PC users
http://www.linuxppc.org If you are using a PPC machine then this is your first port of call. The recent update list keeps you abreast of what has been ported and what's being worked upon.

Windows user?
http://www.linuxnewbie.org If you're moving from Windows to Linux then this is the place for you. It has its own brand of help files written specially for newbies.
Planet Linux http://www.linuxplanet.com The LinuxEngine is a nice search facility on this site. Along with the reviews section it’s well worth a visit.
Programming news http://www.linuxprogramming.com Linux Programming is the portal site for all the news from the Web’s other programming sites.
Games galore
http://linuxgames.com Want to waste some time and need a new game? Then this is the site for you. Better yet are the HOWTOs for setting up a LAN party.

Routers
http://master-www.linuxrouter.org:8080 The Linux Router Project is small enough to fit on a single 1.44MB floppy disk, and makes building and maintaining routers, access servers, thin servers, thin clients, network appliances, and typically embedded systems next to trivial.
COMMUNITY
The monthly GNU Column
BRAVE GNU WORLD
GNU grep
Welcome to another issue of Georg CF Greve's Brave GNU World. After introducing a classic, this issue will soon become relatively technical, but it should be interesting even to not-so-technical readers
Bernhard Rosenkraenzer is the new maintainer of GNU grep, so I'd like to take this opportunity to write a little bit about the project. Most GNU/Linux users probably already know GNU grep and use it on a daily basis, but there may be some who are not yet familiar with it, so I'll give a short introduction to its functionality. GNU grep searches files or standard input for certain patterns – usually text strings – and outputs lines matching or containing this pattern. It is also possible to search multiple files in order to determine which files contain the pattern. Typical uses range from processing texts and finding passages to narrowing the output of other programs down to the relevant information. Grep is definitely one of the standard commands of any Unix-like system, and GNU grep contains not only the standard features but also options like recursive search of directories or word-based matching. Since it has been in use for many years on a multitude of systems, there can be no doubt about its readiness for daily use. I'm not quite sure about the exact age of GNU grep, but the ChangeLog goes back to 1993 and the first copyright notice by the FSF originates in 1988, which is about three years before the creation of the Linux kernel. This is interesting not despite but because of the age of the project, because it shows how lively Free Software is – even when it reaches such a high age. During those years, over 80 people contributed to GNU grep and development still continues. On the ToDo list are some small changes for complete POSIX.2 conformance and a few new options. Among them are, for instance, a --max-count option, a switch for PCRE (Perl Compatible Regular Expressions) and a highlighting option for colour-marked output. Even such an old, stable and widespread package needs help. In particular, users willing to test development versions for portability beyond GNU/Linux and FreeBSD, and to help with bug-hunting, are very welcome. Also, the current maintainer only speaks
English, German and French, so he cannot truly determine whether the multibyte support is actually working. Users in regions that use multibyte encodings should therefore feel especially encouraged to participate in the testing.
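By way of illustration, here are a few everyday invocations using standard GNU grep options (the file names are just placeholders):

grep -n 'pattern' notes.txt    # print matching lines with their line numbers
grep -rl 'TODO' src/           # recursive search: list files under src/ that contain TODO
grep -w 'main' *.c             # match 'main' only as a whole word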
QTreeMap The QTreeMap project by Alexander Rawass implements Treemaps in a Qt-Widget under the GNU Lesser General Public License. Deep hierarchies or trees are usually displayed as a kind of structure that can be (un)folded via mouse click. Treemaps offer the capability to display these complete hierarchies at a single glance. The principle is understood more easily when giving an example, so I’ll explain it with the operation of KDirStat, which uses QTreeMaps in order to visualise the hard disk usage. QTreeMaps are displayed in rectangular areas. The total area of the rectangle represents the total size of the partition to be visualized. Directories and files are displayed as areas proportional to their size. A directory structure using a third of the available space on a partition would also get one third of the available display area. A subdirectory of this structure containing half the size of the whole structure would get half of its display size and so on. Treemaps are especially useful in situations where size-dependent hierarchies have to be displayed, like
Squarified Treemaps with Fast Bump shading
filesystems, network traffic or content/organisation management. QTreeMap supports classic Treemaps, quadratic Treemaps and different colouring schemes (also based on regular expressions). The generated Treemaps can be loaded and saved as XML and saving them as bitmap is also possible. According to the author, the special problems of QTreeMap are the usual KDE-specific problems as well as lack of the Cushion algorithm, which is present in Sequoia View, the proprietary program inspiring QTreeMap, but which could not be implemented because Alexander wasn’t able to figure out the mathematical concepts for it. QTreeMap was written in C++ after Alexander experimented with the algorithms in Python. According to him, there are two similar projects, but both of them are written in Java. As the projects KDirStat and KProf prove, QTreeMap is usable, so interested developers should take a look.
Dap Another new member of the GNU Project is Dap by Susan Bassein. Dap, which stands for Data Analysis and Presentation, is released under the GNU General Public License. Dap provides basic functions for data management, analysis and graphical visualisation as they are commonly used in statistical consulting and education. Also it is useful for managing sets of data; Susan herself uses Dap to do her taxes and prepare the payroll for her employees. The program is written in C and users with C experience should have no problem using Dap after studying the examples provided with the package. Since the GNU Project already had one statistical package called R, it now provides two alternatives. Whilst R is object-oriented, Dap follows the procedural approach. Users of the proprietary programs S or S-plus will probably prefer R. Former users of the non-Free SAS package will quickly feel at home with Dap. Dap is also more memory-friendly: while R reads the whole file into memory first, Dap works line-oriented which makes it suited for very big sets of data. Problems mentioned by Susan are that Dap has fewer statistical tests than R and was never optimised for speed. Fixing these weaknesses as well as expansion and improvement of its functionality is the goal of further development. Despite these problems Dap has been used for about three years now, so it has been thoroughly tested and can be recommended to interested users.
GNU ccRTP The Goal of the ccRTP project is to implement the RFC standards for the Realtime Transport Protocol (RTP) which allows transport of time-dependent data
GNU UnRTF GNU UnRTF is a recent addition to the GNU Project by Zachary T Smith. It enables people to transfer documents from the Rich Text Format (RTF) into other formats. RTF is often used as a transfer format by Windows users, but other text processors also use it for saving text with formatting information. Thanks to this project it is now possible to convert these documents into pure text, HTML, LaTeX and PostScript. So anyone who uses RTF themselves, or is dependent on people who do, should profit from this project. The license of the project was always the GNU General Public License, but it is possible that some people already got in touch with the project under its old name, rtf2htm. The current focus of development is two-fold: the character conversion routines and the output to LaTeX. Additionally, it is planned to support more target formats in the future. Regardless of the ongoing development, the project is definitely ready for use.
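Used from the command line, a conversion might look something like the following – a sketch only, with a made-up file name, and assuming the --html and --text switches of the unrtf versions of that era:

unrtf --html letter.rtf > letter.html    # convert an RTF document to HTML
unrtf --text letter.rtf > letter.txt     # or to plain text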
(like audio or video) over a network. The streams are transmitted in packets containing time information to enable correct assignment on the receiving end. Typically this is done with UDP packets, because then network problems do not block the transmission, which would destroy the synchronisation. ccRTP implements this in an object-oriented way as a C++ class library under the GNU General Public License. The authors, David Sugar and Frederico Montesino Pouzols, want ccRTP to become the most versatile, efficient and standards-compliant RTP implementation, and they have taken several steps in this direction so far. GNU ccRTP already supports multicasting as well
ccScript GNU ccScript is, just like Common C++, Bayonne, ccRTP and ccAudio, a project under maintenance by David Sugar, who gladly stated this makes him a PentaMaintainer. The ccScript C++ class library under the GPL provides a virtual machine (VM) for real-time applications in state-transition, event-driven systems. This assembler-like scripting language is being used by GNU Bayonne and other parts of the GNUComm project to script the user interaction. As can easily be understood, a defined execution speed is very important for real-time environments, which is exactly what ccScript has been written for. Any operation in ccScript is deterministic; the only exceptions to this rule are operations where this is impossible, like database lookups and the like. Because of this it cannot deal with complex or open-ended expressions. Even if it provides general functionality and macros, ccScript should not be confused with projects like Guile or Tcl, because they are more versatile but do not have matching real-time capabilities. As the next steps in ccScript development, the syntax will be restructured and clarified a bit; also, more parts of the language should become available as loadable modules. This project also needs help with its documentation – so if you feel you want to do this, please do.
as point-to-point transmission, multiple input streams and the Real-Time Control Protocol (RTCP). Additionally, the transition to IPv6 has been prepared, and even real-time packet filtering and mixed-mode data streams are possible, which allows features like RFC 2833 signalling inside a data stream, for instance. The high transmission rates required for video data, as well as partial packet reconstruction and Class Of Service routing, are possible, so the library can already be used for clients and servers. Work is currently being done on the Resource reSerVation Protocol (RSVP) and completion of the RTCP support. The next step will be proper project documentation, which is lacking right now. Help from real-time specialists and good authors of documentation would be very welcome. The latter in particular are harder to find than most people would assume.
GNUComm Other projects pursue the goal of furthering the GNUComm meta project, which is about creating a complete and flexible communication environment based on interactive components. One part of the GNUComm meta project is Bayonne, the telephony server. One of the focal aspects of GNUComm development is interoperability and integration with the GNU Enterprise project. GNU Enterprise works on a complete solution in the field of the so-called Enterprise Resource Planning (ERP) and will bring Free Software into this largely
Info
Send ideas, comments and questions to Brave GNU World    column@brave-gnu-world.org
Homepage of the GNU Project                              http://www.gnu.org/
Homepage of Georg's Brave GNU World                      http://brave-gnu-world.org
"We run GNU" initiative                                  http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
GNU grep homepage                                        http://www.gnu.org/software/grep/
GNU UnRTF homepage                                       http://www.geocities.com/tuorfa/unrtf.html
QTreeMap homepage                                        http://qtreemap.sourceforge.net
KDirStat homepage                                        http://kdirstat.sourceforge.net/kdirstat/
KProf homepage                                           http://kprof.sourceforge.net
Dap homepage                                             http://home.earthlink.net/~sbassein/public
GNUComm project homepage                                 http://www.gnu.org/software/gnucomm
GNU Bayonne homepage                                     http://www.gnu.org/software/bayonne
GNU Enterprise homepage                                  http://www.gnuenterprise.org
GNU ccRTP homepage                                       http://www.gnu.org/software/ccrtp
GNU ccAudio homepage                                     http://www.gnu.org/software/ccaudio
GNU ccScript homepage                                    http://www.gnu.org/software/ccscript
GNUTLS Logo Contest                                      http://www.gnu.org/software/gnutls/logo-contest
Overview of GNUComm
proprietary area; more information can be found in issue seven of Linux Magazine.
GNU ccAudio The ccAudio project was also started by David Sugar. As the name suggests, its goal is the creation of a general purpose library for manipulating audio data on hard disk and in memory. Like ccRTP, ccAudio is implemented as a C++ class library under the GNU General Public License and is, since it originates in the GNU Bayonne project, a part of the GNUComm meta project. Currently ccAudio supports accessing audio data on the hard disk through libsndfile and other libraries, and provides basic signal/audio processing facilities. It treats audio data as discrete samples and can deal with RIFF headers and such. David considers the treatment of audio data as sets of samples rather than binary buffers in byte format to be especially notable. It is also aware of different sample encoding formats, endian ordering and multiple channels. Possible platforms for ccAudio are Unix-like systems as well as Win32, so developers in this area should feel encouraged to take a look at it. Further development will aim towards better support for dynamically loadable software codecs and making more built-in audio codecs available. David is also considering including Fourier transforms (FFT) and different forms of audio mixing and transformation in ccAudio. Someone with more experience in digital audio/signal processing would be very helpful for this.
Enough Enough said for this month's issue. As a last thing I'd like to point out that the GNU Transport Layer Security Library (GNUTLS) is looking for a new logo and has started the GNUTLS Logo Contest. If you'd like your artwork to become part of the GNU Project, this is a good opportunity. As always, I'm asking for lots of ideas, suggestions, comments, criticism and news of new projects by mail. ■