Relaunch
COMMENT
New World
We pride ourselves on the origins of our publication, which come from the early days of the Linux revolution.
Dear Linux Magazine Reader, Welcome to the new and improved Linux Magazine! The magazine you are holding in your hands is the result of months of planning and our response to your wishes as expressed in the recent reader survey as well as to general changes in the Linux market. As the acceptance and professional use of Linux increases and the market matures, the service that the Linux Community expects from our magazine changes. We felt it was time to relaunch the magazine to better meet those needs. Our goal is to continue in our tradition of providing high-quality, useful Linux information while adapting the magazine to meet higher design and technical standards. Here is an outline of the most important changes that take place with this issue:
Coverdisc The reader survey showed us that the coverdisc was becoming less and less important. We therefore decided to reduce distribution costs by removing the CD from newsstand copies. We will, however, still produce a monthly CD that will be included in subscription copies free of charge.
Lower Price The cover price has been lowered from £4.99 to just £3.99. In addition, the subscription prices have been reduced significantly, especially for readers outside the UK.
Our sister publication in Germany, founded in 1994, was the first Linux magazine in Europe. Since then, our network and expertise has grown and expanded with the Linux community around the world. As a reader of Linux Magazine, you are joining an information network that is dedicated to distributing knowledge and technical expertise. We're not simply reporting on the Linux and Open Source movement, we're part of it.

Advanced Content The average reader is increasingly knowledgeable in Linux and therefore interested in a higher technical content level. We have responded to this interest by choosing articles with a more advanced level and by adding several new themed sections. We've added the News subsections Insecurity and Kernel to inform you about security holes in common Open Source software and the latest kernel developments. The new Sysadmin section covers system and network management issues, with a strong focus on practical use. Further, the old section Beginners is now called Linux User. The point is that we are delivering useful information on the practical use of Linux, not an introduction for newbies.

Expanded Geographic Coverage Although Linux Magazine was originally started as a UK-only publication, it has gained quite an international following. To better serve all readers and to reflect the international aspect of the Linux Community, we are expanding our geographic coverage in both content and distribution. To ensure that everyone can get the magazine delivered to them quickly, we have implemented a much more convenient subscription system with expanded options.

New Design and Layout We felt that it is important for the cover design and layout of the magazine to emphasize the practical use of the content. Therefore, the new design is cleaner, better structured, and true to the motto “form follows function”.
Updated Website The Linux Magazine website has also been updated to reflect the new look and changes in the magazine. Take a look at the new website at www.linux-magazine.com. With all these changes, it's easy to feel that nothing has stayed the same. On the contrary, the most important aspect has not changed: our goal to bring you the Linux information that you want and need to improve your work with Linux. We would like to thank all readers for their valuable feedback, and the entire Linux Magazine team for the hard work they have put into this relaunch. We are very interested in your opinion on the new Linux Magazine and your suggestions for the future. Have a good look through the magazine and send your comments to edit@linux-magazine.com. Enjoy!
John Southern, Editor
Hans-Jörg Ehren, Project Manager
Brian Osborn, Business Development Manager
www.linux-magazine.com September 2002
NEWS
Software
Software News
Qt C# bindings reach new version
Qt# has released version 0.4. This is a collection of cross-platform C# bindings for Trolltech's Qt GUI toolkit that is aimed at both the Mono and Portable.NET projects. There is now some preliminary support for Microsoft .NET along with object tracking, support for events and multiple custom slots. Interested parties should visit qtcsharp.sourceforge.net.
Virtual Atoms With the ability to edit molecules and compile under Qt 3.x, the KMovisto molecular viewer is developing fast. KMovisto imports Gaussian 94 or 98 files and can handle XYZ files. For viewing images, KMovisto exports to POV-Ray files for rendering high-quality presentations. If you need to produce a 3D presentation, KMovisto can export VRML output for browser manipulation.
Sight for sore eyes KDE 3.1 Alpha, a new development branch, has been announced. This release sports everything from wonderful new eye candy to tons of popular new features, including new and exciting “Easter eggs” (aka bugs) just waiting to be discovered. The alpha shows off tabbed browsing, ToolTip file previews and enhanced file downloads in Konqueror. Those wanting a more stable life should be using KDE 3.0.2. For more info see dot.kde.org/1026405852.

Calling home KSetiwatch is now at version 2.50. In an attempt to eliminate all sorts of configuration problems with the latest release, an updated source package has been made available. This package should now be preconfigured for KDE3 and configurable with older versions of autoconf. If you had problems with the previous package, you can get the new one from ksetiwatch.sourceforge.net.

Quick Fish Sherman's Aquarium is a Window Maker and Gnome applet. It draws an aquarium with some randomly positioned fish. The fish are drawn by Jim Toomey, the author of the “Sherman's Lagoon” comics. The temperature scale on the right side shows the CPU load. It can also be configured to display the time and show the status of NumLock, CapsLock and ScrollLock. It can be downloaded from aquariumapplet.sourceforge.net.
SVG in GTK+ With the release of Gnome 2, the developers are now setting their sights on features for Gnome 2.2. A suggestion has been made to incorporate the SVG graphics support library (rsvg) into GTK+. One obvious advantage would be the use of scalable interfaces on high-resolution displays rather than the standard bitmap themes. See developer.gnome.org/news/summary.
O'Reilly releases Ant “I have to confess that I had absolutely no idea that Ant, the little build tool that could, would go as far as it did and make such a mark on the Java developer community,” said James Duncan Davidson, the creator of Ant, in the new book “Ant: The Definitive Guide”. “It might be that the key to Ant's success is that it didn't try to be successful. It was a simple solution to an obvious problem.” In 1998, frustrated by his efforts to create a cross-platform build of Tomcat using the tools of the day (GNU Make, batch files, and shell scripts), Davidson threw together his own build utility on an aeroplane flight. Named Ant because it was a little thing that could build big things, James's quick-and-dirty solution to his own problem of creating a cross-platform build has evolved into the most widely used build management tool in Java environments.
GnomeICU 0.98.3 Released GnomeICU 0.98.3 has been released. This is the latest version of GnomeICU for the Gnome 1.4 desktop platform, and probably the last release for this platform. You should only get it if you are still running Gnome 1.4 instead of Gnome 2, as a Gnome 2 preview release is coming soon. With this release you can now create new users perfectly. User info retrieval and its UI have been fixed, user authorization support is improved, server-side contact list support is stable, and the user interface has been cleaned up.
Ant: The Definitive Guide Complete Build Management for Java By Eric M. Burke, Jesse E. Tilly; ISBN 0-596-00184-3; 288 pages; £24.95; www.oreilly.co.uk
O'Reilly releases XML Schema Many developers see W3C XML Schema as the principal language for defining the content and structure of XML documents, while others resist the specification as unnecessarily complex, preferring tools such as DTDs, Schematron or RELAX NG. Eric van der Vlist, the author of the newly released “XML Schema: The W3C's Object-Oriented Descriptions for XML”, approaches this controversy with a sober and objective view: W3C XML Schema, he says, is both essential and potentially dangerous for XML. “XML Schema is the most complex specification ever published by the W3C,” van der Vlist says. “The technology itself is complex, and the specification was written in a way that's very difficult to read. Many experts lack the objectivity necessary to show the limitations and pitfalls of the technology. My book is an honest attempt to provide a description of W3C XML Schema that is neither bashing nor praising.” Involved in developing ISO standards as the editor of the Document Schema Definition Languages Part 5 specification, which describes “Object-Oriented XML Schema languages”, van der Vlist is an XML consultant and developer, creator and chief editor of XMLfr.org, as well as a regular contributor to XML.com and xmlhack.com.
Helping others Following on from the letter 'Computing For The Disabled' on page 17 of issue 2, Barry Coates, an Access Technology Trainer at the RNIB, kindly wrote in with some helpful news. The first link he gave was http://www.braille.uwo.ca/speakup/, which is helpful, certainly if the person in question is primarily a speech user. It does mean no X Windows – not too big a deal, given that neither the KDE nor the Gnome environment is fully accessible from the keyboard – but you still have full access to the substantial number of command line applications, especially Lynx, Pine, EMACS and the like. Basically, the Speakup project provides a customised version of Red Hat for download, and, if the PC user has an external synthesizer, this is probably the most accessible Linux system around for now. Speakup support has also been bolted on to Zipslack, the 100MB version of Slackware. This is called Zipspeak, and again it will run with an external synth such as a Dectalk or an Apollo. Last but not least, there is also the Emacspeak project, which again might be worth checking out: • www.linux-speakup.org/ftp/disks/slackware/zipspeak • www.cs.cornell.edu/Info/People/raman/emacspeak/emacspeak.html He is hopeful that with Gnome 2 and, importantly, the Section 508 legislation in the States, the incentives will be there to put accessibility and usability issues higher on the agenda!
XML Schema – The W3C's Object-Oriented Descriptions for XML By Eric van der Vlist, ISBN 0-596-00252-1, 400 pages, £28.95, www.oreilly.co.uk

Presenting projects Agnubis is the GNOME Presentation Program, comparable to such programs as Microsoft PowerPoint or Corel Present. It has been developed and designed for the GNOME 2 platform and is created to integrate well with the rest of the components in the GNOME Office suite. The Agnubis team is currently working hard to get the first release (0.1) out. The application is now starting to take shape in the CVS tree. For more information see www.gnome.org/projects/agnubis.
Apple iPod support for Linux tex9 announced the public release of software that provides full operation of the Apple iPod on the Linux operating system. tex9's software is a plugin for the xtunes program (see www.tex9.com/software/xtunes.php), a popular free software product from tex9, and now links the Apple iPod to Linux. The plugin allows simple drag-and-drop of songs and playlists from the xtunes library onto an iPod. It is currently priced at $10.00 per copy and can be purchased at www.tex9.com.
Kylix 3 by Borland Borland has launched Kylix 3, a Rapid Application Development (RAD) solution for C++ on Linux. Linux developers can quickly create GUI, database, Web, and Web Services applications in C++. Kylix 3 includes expanded support for Web Services development and extends RAD for Linux to the more than two million developers in the C++ community worldwide. “With Kylix 3, Borland has brought industry-standard, powerful enterprise tools to the Linux development community. The addition of C++ support in Kylix means that any class of Linux application, from GUI to database to Web Services, can be created quickly,” said Jeff Bates, co-founder of Slashdot.org and director of OSDN Online.
NEWS
Business
Business News
UK Government support for Open Source Software The UK Government has announced its long-awaited policy statement on the use of Open Source software within government. The announcement comes after a similar statement from the EC, which published a paper on the pooling of Open Source software by administrations and declared they should share costs on an open source licensing basis, to cut eGovernment IT costs. Graham Taylor, Programme Director for OpenForum Europe, said: “The statement is essential and very welcome but it must be seen as a first step. Now is the time to rapidly develop best practice and ensure the UK can play its full part in the European opportunity. We are keen to bring the wide experience of our partners to assist in accelerating the use and adoption of OSS.” OpenForum Europe was launched in March this year and includes amongst its members major suppliers and user groups. It enables government and private sector users to assess impartially the true viability and cost of ownership of the OSS business model, and gain real evidence of the commercial benefits that can be achieved. ■
SuSE groupware server supports 6,000 clients SuSE announced the release of an updated version of the SuSE Linux Groupware Server which supports up to 6,000 clients per server. The interweaving of the Linux operating system with the newly released Lotus Application Server 5.0.10 makes the SuSE Linux Groupware Server the most powerful Lotus solution for Intel and AMD 32-bit processors. The SuSE Linux Groupware Server combines the stability and security of the SuSE Linux Enterprise Server operating system with the functionality of Lotus Domino. With over 85 million Notes users, Lotus is undisputedly number one in both the groupware and messaging markets. SuSE Linux Groupware Server is built on the SuSE Linux Enterprise Server 7 operating system with the new kernel 2.4.18. Lotus Application Server 5.0.10 provides efficient tools for document management, workflow management, messaging, and scheduling. Furthermore, the Lotus Application Server constitutes a flexible basis for the development of custom web and messaging applications. As part of ongoing system maintenance, SuSE delivers all relevant patches, fixes, and updates for the server operating system in a quality-assured and well-documented form. SuSE Linux Groupware Server helps implement infrastructure inexpensively, reliably, and securely on a long-term basis; certifications of leading hardware and software providers keep their validity even after updates. The recommended retail price of £2,239 plus VAT for one server includes extensive documentation, 30 days of product support and 12 months of system maintenance. For more details see www.suse.co.uk/uk/products/suse_business/groupware_server/index.html. ■
Manufacturing management with vision Caliach, a developer of manufacturing management software systems for small and medium-sized enterprises, is maintaining its reputation for innovation with the launch of version 1.10 of its ERP software Caliach Vision on Linux. Caliach Vision brings Linux users a fully integrated, state-of-the-art system which is easy to use and can be maintained without dedicated IT staff. It features a class-leading user interface and integrated Internet capabilities, and provides all that is required to automate the management of a manufacturing business. Caliach's road to Linux began when it needed to replace an Apple Mac server with a Linux version. Managing director Chris Ross wanted the Mac server for development work and also wanted to experiment with the Linux platform, especially as Omnis Studio, on which Caliach Vision is based, has introduced its own Linux development suite. So he called in Paul Nash, a Linux developer, to swap the server over to run the Mandrake 8.0 version of Linux. Paul transferred file serving, the Sendmail mail server and other functions including an FTP server (for customers to download updates), web server, print server (on a mixed Apple and Windows network) and Majordomo (for users' mailing lists). The server also runs programs such as Analog, for gathering web statistics, and a daily back-up to a removable disk. In addition, Caliach decided to load Linux on to a number of PCs in its training suite on a dual-boot basis with Windows. Once Linux was up and running, Paul assisted Caliach with getting to grips with it, although he says this did not take too long: “Chris picked up Linux very quickly and he obviously feels very comfortable with its reliability, the low hardware requirements and flexibility. Chris has developed the Linux version in response to customer enquiries and Caliach Vision represents another killer application by providing a comprehensive ERP suite on Linux.” Caliach Vision has been fully tested at Caliach's training centre using different versions of Linux including SuSE 7.3, Red Hat 7.2, and Mandrake 8.0. A fully functional Linux version of Caliach Vision is available for download from the Caliach web site at www.caliach.com. ■
Three year support The Danish subsidiary of Telia, Telia Connect, has agreed a long-term maintenance contract with SuSE for its IBM zSeries G7 mainframe systems. In 2001 Telia Connect moved from a farm of 70 Unix servers to a single IBM S/390 to handle over 400,000 customers' Internet accounts. By using the virtual machine capabilities of the zSeries, each customer has their own SuSE Linux Enterprise Server to operate on. As SuSE Linux Enterprise Server allows modification of the network and hard disk configuration while the system is active, Telia Connect customers are provided with almost unlimited virtual server capacity with zero downtime. “The combination of IBM zSeries machines and SuSE Linux Enterprise Server not only meets our high standards with respect to stability and one hundred per cent availability,” explains Arne Larsson, CEO of Telia Connect. “In SuSE Linux Enterprise Server we have also found an operating system that allows us to react flexibly to our customers' demand for performance. Through the system maintenance and the production support, SuSE Linux AG gives us the security we need in order to meet our customers' expectations regarding the quality of our services.” ■
In tune with Red Hat Agnula turns to Red Hat Linux to develop a new distribution aimed at both professional and amateur musicians. Red Hat, Inc. is directly involved in Project Agnula (A GNU/Linux Audio distribution), which is being subsidised by the European Community, and will use it to create a new Open Source distribution based on Red Hat Linux. With its numerous professional and multimedia audio applications, the package will be distributed free of charge under the name ReHMuDi, which stands for Red Hat Multimedia Distribution. Entirely designed within the concept of Free Software, it will be easy to install and update, and will also offer all the tools required for musical creation, production, compilation and distribution. This ambitious project, scheduled to run for 24 months, is being implemented in close collaboration with acoustics and musical research centres as well as the Free Software Foundation Europe. By developing a release specifically designed for professionals in the music industry, Red Hat wants to enable authors and composers, as well as amateurs, to free themselves from technological and cultural constraints. By giving more freedom to artists, the company's aim is to expand the global nature of music even further and to extend the concept of Open Source Software to Open Source Music. The distribution will be available as a download and on CD-ROM. The first beta version will be released on the Internet by the end of 2002 at www.agnula.org. ■

Arkeia demonstrates new backup solutions Arkeia Corp., a supplier of enterprise network backup software, will be showcasing a new beta version of its Arkeia 5 backup software and Virtual Backup Server solution. Arkeia 5 features smooth network integration, including automatic hardware detection, as well as a greatly improved user-friendly interface, and is now scalable from small and medium-sized businesses through to large enterprises. Its new modular plugin structure can simplify specialised tasks, such as backup for open file databases and online servers. To simplify data security and reduce costs, Arkeia 5 can channel data from multiple servers onto a single tape library. Arkeia's Virtual Backup Server solution provides a new array of local and remote data protection services especially adapted to ISPs, telecom specialists and cable operators. By using Arkeia's Virtual Server capacity, ISPs can now offer complementary services to current clients, plus the ability to attract new business from expanding companies that want to avoid the expense of their own hardware purchases. ■

Xandros brings life to Corel Linux Global Partners (LGP), a New York based software investment firm with financial holdings in eight Linux desktop and server application companies, launched Xandros in August 2001, and Xandros acquired Corel's Linux division late that same month. Xandros is developing a customized Debian-based Linux distribution that is derived from version 3.0 of Corel Linux. It will support both the KDE and Gnome desktop environments. In addition to the features that Linux users expect, Xandros plans to distribute significant additions and enhancements. Xandros is also creating an enterprise management solution that will reduce the total cost of ownership. The solution is complete “off the shelf”, but Xandros can customize and integrate the products and provide additions to legacy systems as needed. Xandros will also offer a support package. ■
Caldera partners with Conectiva Caldera International, Inc. announced a new comprehensive partnership with Conectiva, Inc. that will expand Caldera's presence in Brazil. Under the terms of the agreement, Conectiva's sales force and reseller channels will sell all of Caldera's products and services. Specifically, Caldera will provide the Volution family of messaging and systems management solutions, as well as the company's Open UNIX and OpenServer UNIX solutions, in Brazil, with future possibilities of extending the relationship to the rest of the Latin American market. In addition, Conectiva and Caldera will partner to provide customer support, training and professional services for the companies' mutual customers. “Expanding our relationship with Conectiva was a natural extension of the UnitedLinux initiative,” said Darl McBride, president and CEO of Caldera. “With 70 per cent of the Linux market share in Latin America, Conectiva is in a better position to more economically and efficiently service the business customers in this important market. This will also facilitate business with customers and partners who need to seamlessly deploy business solutions throughout the Americas by facilitating a united product, service and support model.” ■
Ensim software gives partners more control Dedicated Servers announced that all of its Windows and Linux servers will incorporate Ensim Corporation's WEBppliance software as standard. As part of its ongoing drive to provide a high-quality and innovative service that not only meets but anticipates its clients' needs, the large investment in Ensim's software is intended to allow customers even greater control of their servers. The wide range of features included with WEBppliance is of interest to resellers of hosting services, web designers and systems integrators, as it is designed to allow these businesses to manage their clients effectively and, in turn, allow their clients to administer their sites with ease. WEBppliance's customisation settings let partners offer private-label hosting with specially tailored control panels and layouts to help reinforce their own brand image and place their clients' focus solely on them. Support calls can be reduced by directing customers to context-sensitive online help pages via the control panel; monitoring and managing bandwidth usage is also simplified through the tools provided; and the software greatly simplifies the configuration and deployment of new customer accounts. For more information see www.dedicated-servers.co.uk ■

SixTRAK IPm Open Controller The SixTRAK IPm is the ultimate process controller with the power of open Linux software. Its powerful communications and advanced programming capabilities make it the perfect solution for process control, SCADA, or DCS applications. Though it is built on the open Linux operating system, no knowledge of Linux is required in most applications, allowing you to get all the benefits of open Linux with no extra effort. The SixTRAK IPm is 100% compatible with SIXNET's legacy SixTRAK Gateways; all existing ISaGRAF user programs will load and run in an IPm without any changes. IPm is the SIXNET trademark for a large family of flexible automation solutions based upon open-source Linux software. SIXNET has upgraded the firmware in its RTU and process controllers to run on a remarkable embedded Linux subsystem. Now, in addition to all the powerful features of the pre-integrated SCS (Scalable Control Systems) technology, the systems have the power of open source Linux. These new capabilities include a web server, advanced Ethernet services and the ability to add your own application programs or share in the wealth of free software provided by the extensive Linux community. ■
NEC UK launches Data Storage Division NEC UK has announced the launch of a new Data Storage Division and unveiled a range of floppy disk drives, optical and tape backup solutions, including new two-terabit capacity tape backup devices. The company has also announced its channel marketing strategy and future product roadmap. NEC Corporation has a 40-year pedigree in the design, manufacture and marketing of high-performance, high-reliability storage products, both in Japan and in other European markets. The company has now established a new UK operation to serve the storage market. NEC tape drives use Linear Tape Open (LTO) technology to ensure high-speed backup, accuracy, longevity and reliability. All solutions are automated for maximum ease of use and are field-upgradeable for future technologies. NEC already has Mount Rainier-capable optical disk drives; Mount Rainier provides OS support for simple dragging and dropping of data. In addition, NEC's new DVD-RW drives will be among the first in the UK capable of writing both +RW and -RW formats. Other new developments will include 50-Gigabit blue laser devices for data-intensive users; combination DVD-RW/CD-RW drives for the portable market; and Network Attached Storage (NAS) and Storage Area Network (SAN) solutions for large businesses. ■
NEWS
Insecurity
Insecurity News Security flaw hits Windows, Mac, Linux Systems that use Sun Microsystem's XDR software are vulnerable. This applies to MS Windows, Apple Mac OS X and Unix based systems. It is possible to gain root access and so take control of your system. The XDR library is available and used across a range of operating systems, so the flaw is not limited to any one OS in particular and even extends to Kerberos authentication systems. The problem is widespread because it affects some implementations of XDR (external data representation) libraries, used by many applications as a way of sending data from one system process to another, regardless of the computer system's architecture. The affected libraries are all derived from Sun Microsystem's SunRPC remote procedure call technology, which has been taken up by many vendors. The
CERT Advisory CA-2002-24 CERT has received confirmation that some copies of the source code for the OpenSSH package have been modified by an intruder and contain a Trojan horse. The following three files were modified to include the malicious code: openssh-3.4p1.tar.gz openssh-3.4.tgz openssh-3.2.2p1.tar.gz . These files appear to have been placed on the FTP server which hosts ftp.openssh.com and ftp.openbsd.org on the 30th or 31st of July, 2002. The OpenSSH development team replaced the Trojan horse copies with the original, uncompromised versions at 13:00 UTC, August 1st, 2002. The Trojan horse copy of the source code was available long enough for copies to propagate to sites that mirror the OpenSSH site. The Trojan horse versions of OpenSSH contain malicious code that is run when the software is compiled. This code is used to connect to a fixed remote server on port 6667/tcp. It can then be used to open a shell running as the user who compiled OpenSSH. ■
Red Hat
Mandrake
Util-Linux
Apache
The Red Hat Network site at rhn.redhat. com/errata/RHSA-2002-132.html reports a vulnerability in the util-linux package. This security error was discovered by the BindView RAZOR Team. By exploiting the flaw it is possible to allow a local user to use privilege escalation when the ptmptmp file is not removed properly when using the chfn utility. The util-linux package contains a host of utilities such as fstab, mkfs, and chfn. Because setpwnam.c inadequately locks a temporary file that is used when it is making changes to /etc/passwd, a race condition could be used by the exploiter to elevate his privileges on the system, and so compromise security. This new vulnerability is not limited to the Red Hat distribution alone.
PHP errors A vulnerability has been discovered in PHP versions 4.2.0 and 4.2.1. It is feared that this vulnerability could be used by a remote attacker to execute arbitrary code
12
Computer Emergency Response Team (CERT), a security network based at Carnegie Mellon University, warned that systems using the affected code should immediately apply patches or disable the affected services. Cory Cohen and Jeffrey Havrilla from CERT report that the integer overflow in the library can lead to buffer overflows these in turn can allow unauthorised users to compromise the system by either executing other program code, taking data or taking down a system. With the Kerberos 5 administration system an unauthorised user could take control the Key Distribution Center authentication functions. The Kerberos development team at MIT has issued a warning and patch on their website. Patches are also available from CERT, Apple or the Linux distributors' websites. ■
Trojan OpenSSH
or crash PHP and/or the web server. The vulnerability occurs inside the portion of PHP code, which is responsible for the handling of the file uploads, specifically multipart and form-data. However, by sending a specially crafted POST request to the web server, an attacker could now corrupt the internal data structures used by PHP. In this way an intruder could cause an improperly initialized memory structure to be freed. In most cases, an intruder could then use this flaw to crash PHP or the web server. Under some circumstances, an intruder then might be able to take advantage of this flaw and execute arbitrary code with the privileges of the web server. This vulnerability was discovered by the e-matters GmbH team and this is described in great detail in their security advisory. Stefan Esser of e-matters GmbH has indicated that fortunately intruders cannot execute code on any x86 systems. The vulnerability is not limited to just the Red Hat distribution alone. ■
September 2002 www.linux-magazine.com
Apache
A Denial of Service vulnerability in the Apache webserver was discovered by Mark Litchfield. While investigating this problem, the Apache Software Foundation also discovered that the code for handling invalid requests that use chunked encoding may allow arbitrary code to be executed on 64-bit architectures. All versions of Apache prior to 1.3.26 and 2.0.37 are vulnerable to this problem.
Msec
The Mandrake Linux security tool, usually called msec, has a reported potential security vulnerability: the utility restores default permission settings during a periodic system audit, the default being mode 755 for each user's home directory. CERT/CC does not believe that this behaviour represents a security vulnerability, as it is accurate, maintains the configured security policy and is consistent with the product documentation. ■
Insecurity
Debian dietlibc
An integer overflow has been discovered in the RPC library used by the dietlibc package, a size-optimized libc. The RPC code is derived from the SunRPC library. By exploiting this flaw it is possible to gain root access through any software linked against this library code. The problem has been fixed in version 0.12-2.2 for the current stable Debian distribution (woody) and in version 0.20-0cvs20020806 for the unstable distribution (sid). Debian GNU/Linux 2.2 (potato) is not affected, since it does not contain the dietlibc packages. The vulnerability is not limited to the Debian GNU/Linux distribution; it can occur on any system on which the package has been installed.
tinyproxy
The authors of tinyproxy, a small HTTP proxy program, have found a bug in the way it handles certain invalid proxy requests. In some cases an invalid proxy request may cause an allocated memory block to be freed twice, which in turn could lead to the execution of arbitrary code. This problem has been fixed in version 1.4.3-2woody2 for the current stable distribution (woody) and in version 1.4.3-3 for the unstable distribution (sid). The old stable distribution (potato) is not affected by this problem. This vulnerability is not limited to Debian GNU/Linux alone.
super
The super package, which can be used to give certain system users access to particular users and programs, has been found to contain a format string vulnerability. By exploiting this bug a local user could possibly gain root access. The problem has been fixed in version 3.12.2-2.1 for the old stable distribution (potato), in version 3.16.1-1.1 for the current stable distribution (woody) and in version 3.18.0-3 for the unstable distribution (sid). The vulnerability is not limited to Debian alone. ■
NEWS
gallery
A vulnerability has been discovered in the gallery program, a web-based photo album toolkit. It is possible to pass in the GALLERY_BASEDIR variable remotely and thereby execute commands under the uid of the web server, compromising the system. This has been fixed in version 1.2.5-7 of the Debian package and in upstream version 1.3.1. The vulnerability is not limited to Debian alone.
mm
Sebastian Krahmer and Marcus Meissner have discovered, and fixed, a temporary file vulnerability in the mm shared memory library. The error can be exploited to gain root access on a machine running an Apache that is linked against this library, provided shell access to the user “www-data” is already available (which could easily be obtained through PHP). The problem has been fixed in upstream version 1.2.0 of mm, which is being uploaded to the unstable Debian distribution as this advisory is released. Fixed packages for potato (Debian 2.2) and woody (Debian 3.0) are also available. The vulnerability is not limited to Debian alone.
libapache-mod-ssl
The libapache-mod-ssl package provides SSL capability to the Apache webserver. Recently a problem was found in the handling of .htaccess files, allowing arbitrary code execution as the web server user (regardless of ExecCGI / suexec settings), DoS attacks (killing off Apache children) and takeover of Apache child processes, all through specially crafted .htaccess files. More information about this vulnerability can be found at online.securityfocus.com/bid/5084. The error has been fixed in the libapache-mod-ssl_2.4.10-1.3.9-1potato2 package (for potato) and in the libapache-mod-ssl_2.8.9-2 package (for woody). The vulnerability is not limited to Debian alone. ■
wwwoffle (SuSE reference SuSE-SA:2002:029)
The WWWOFFLE (World Wide Web Offline Explorer) program suite acts as an HTTP, FTP and Finger proxy to allow users with dial-up internet access to do offline WWW browsing. The parsing code of wwwoffled that processes HTTP PUT and POST requests fails to handle a Content-Length value smaller than -1. It is believed that an attacker could exploit this bug to gain remote wwwrun access to the system wwwoffled is running on. As a temporary measure, the wwwoffle daemon can be disabled (as root) with: rcwwwoffle stop.
bind, glibc
A vulnerability has been discovered in some resolver library functions. The affected code goes back to the resolver library shipped as part of BIND4; code derived from it has been included in later BIND releases as well as in the GNU libc. The bug itself is a buffer overflow that can be triggered if a DNS server sends multiple CNAME records in a DNS response. The bug was fixed for the gethostbyXXX class of functions in GNU libc in 1999. Unfortunately, similar code exists in the getnetbyXXX functions of recent glibc implementations, and that code is enabled by default. These functions are, however, used by very few applications, such as ifconfig and ifuser, which makes exploits less likely. Until glibc patches are available, you should disable DNS lookups of network names in nsswitch.conf: simply replace the line containing the “networks:” tag with the line networks: files. If you have configured name-to-network mappings via DNS, copy this information to /etc/networks. The resolver bug is also present in the libbind library included in BIND, which is used by the bindutil package. ■
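The nsswitch.conf workaround can be applied mechanically. Here is a hedged sketch that operates on a sample copy in /tmp rather than the real /etc/nsswitch.conf (which you should back up before editing):

```shell
# Create a sample nsswitch.conf to work on (a stand-in for the
# real /etc/nsswitch.conf; the two lines shown are typical defaults).
printf 'hosts: files dns\nnetworks: files dns\n' > /tmp/nsswitch.conf

# Replace whatever follows the "networks:" tag with "files" only,
# disabling DNS lookups of network names.
sed -i 's/^networks:.*/networks: files/' /tmp/nsswitch.conf

grep '^networks:' /tmp/nsswitch.conf   # prints: networks: files
```

Note that the hosts: line is left untouched, so normal host name resolution via DNS keeps working.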
INFO www.cert.org/ www.vulnwatch.org/ rhn.redhat.com/errata/ www.linux-mandrake.com/en/security/ www.suse.de/uk/support/security/index.html
Kernel
Zack’s Kernel News
Doorway to BitKeeper
Pavel Machek has set up a CVS gateway to BitKeeper, so developers who want to use only free software can use CVS to communicate with the BitKeeper trees maintained by Linus and others. Ever since Linus began using BitKeeper to organize development, the kernel developers have been split into two camps. One camp feels that BitKeeper solves a lot of problems and is a good thing to use, especially as there is no free alternative; the other camp feels that Linus, as spokesman for the entire community, should not compromise the ethics of free software by giving such a central role in kernel development to a commercial product. The most visible advantage of BitKeeper is that each new kernel release is now accompanied by a complete description of the patches that went into it. But many people feel that this and other advantages are outweighed by the fact that BitKeeper is a commercial, closed source product. Various alternatives to BitKeeper have sprung up recently, but none of them has achieved BitKeeper's technical maturity, and Larry McVoy (BitKeeper's owner) predicts that it will take years to develop a free alternative. So Linus and a number of other kernel developers continue to use it. ■
INFO
The Kernel Mailing List comprises the core of Linux development activities. Traffic volumes are immense and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls who take on this impossible task is Zack Brown.
Quick freeze
A 2.5 feature freeze is planned for October 2002, as decided at the recently held Linux Kernel Summit. A code freeze will follow, in which only bugfixes will be accepted into the kernel; and finally, 2.6 will be released, amid joy and jubilation around the world. That is the plan. The reality, however, will almost certainly prove somewhat different. Earlier transitions from unstable series to stable releases have all taken much longer than anyone expected, and this has been recognized as a problem for years, not just in kernel development, but in many other large open source projects as well. In open source, there are many developers working at all times to add features, rewrite various existing portions of the kernel, port the system to other architectures, and so on. As long as the development series is in full swing all is well. These developers may work and work at their own pace, concluding their work when the time is right. But the 2.6 kernel cannot be released until all the various tendrils of development have been brought together, at least somewhat, or the system would not work at all. It’s quite common for kernels in the development series to be broken and not even to compile successfully. This is because work in one area may be at a
particularly invasive stage, while a new release is required in order for other developers to continue merging their work. Before the transition to a stable series, however, all the developers must bring their portions of the code to roughly equal status, as complete and as stable as they can get them. Naturally there is always a big push to get just one more feature in before the deadline, and in earlier years Linus would often make such exceptions, which then needed time to stabilize, during which other people would protest the exclusion of their own patches. If 2.5 does successfully freeze in October, it will be the shortest development cycle on record, and will indicate a shift in the way it has been handled. ■
Our regular monthly column keeps you up to date on the latest decisions and discussions, selected and summarized by Zack. Zack has been publishing Kernel Traffic, a weekly digest of the kernel mailing list, for several years now; even reading just the digest is a time-consuming task. Linux Magazine now provides you with the quintessence of Linux kernel activities straight from the horse’s mouth.
Joining up Filesystems
Filesystem capabilities are making progress. There has been partial support since 2.2, with several individuals and groups working on the problem ever since. Now it looks as though complete support may arrive within the 2.5 time frame. The 2.5 Virtual Filesystem (VFS) has supported extended attributes (EAs) since 2.5.3, and the plan is to implement POSIX capabilities within the EA framework. The Linux Security Modules (LSM) project, however, has been coming at the problem from the opposite direction, implementing capability support without yet handling the link between the EA framework and capabilities. It seems that all that remains is to meet in the middle, which does not seem such a long way off. POSIX capabilities allow root privileges to be split up into atomic privileges that may be granted or withheld individually: a given program may run with the capability to delete a file on the system, but not to modify it. Extended attributes are a general-purpose method of storing metadata within an inode on disk. Each attribute is composed of a name and a corresponding value, stored with the file.
As with many new Linux features, capabilities, extended attributes and Access Control Lists have been extremely controversial at times. The kernel developers tend to take a unique approach to all aspects of system design; it is not unheard of for them to reject accepted standards if they feel a better solution is available. As a result, the question of whether to add a particular feature like capabilities often boils down to details of implementation and behaviour that may not have been envisioned by its original designers and developers. ■
Time for a change
Ever since Ingo Molnar wrote his fast new process scheduler, there has been a tremendous push to see it included in the 2.4 kernel. But for six months the maintainers have resisted including it. A number of Linux vendors have included the patch in their distributions with no problems, so most Linux users in the world have probably been using it for some time; among the kernel developers, however, there is reluctance to accept such an invasive change into the 2.4 series, which is supposed to be kept as stable as possible. Any patch that modifies a component as fundamental as the scheduler would make the kernel less dependable, because it would be less well tested. Ingo's hope is that the patch will receive more testing, and be shown to be truly stable, before being included in the stable kernel series. Some developers feel that the patch should wait for the 2.6 series; they point out that the default scheduler currently in use in the 2.4 series is perfectly usable and does not need to be replaced. It is impossible to think about this issue without recalling Linus' decision to replace the entire Virtual Memory subsystem early in the 2.4 series. That was an invasive change on the order of replacing the scheduler, and it was met with harsh criticism and much bitterness. In addition, the VM subsystem at that time still had many problems and was improving only very slowly. In the event, Linus' decision to replace it led to a more robust system, though many developers felt he should not have taken such a big risk. ■
Temporary unstable fix
NEWS
The 2.5 IDE disk code is being entirely rewritten for 2.6, and is currently in a broken state, which has been causing delays in other areas of development. Some developers attempting to test their own work on recent 2.5 kernels have been unable to do so, because IDE support has been removed during the extensive changes. Recently it was reported that system lockups and even data corruption could result from testing the current 2.5 IDE code. While this is not uncommon for a development series, it has caused some frustration among various developers, and recently inspired Jens Axboe to port the 2.4 IDE code up to 2.5 and maintain it as a separate patch. He did this in order to be able to test his own projects, but thought other folks might find it useful. In fact, many people were overjoyed by this development. Some developers had been too frightened even to try any 2.5 kernels, but with IDE temporarily patched up, they felt they could tentatively begin doing 2.5 work. Jens was careful to add in his announcement that the IDE maintainer was doing a very good job, and that Jens' patch was simply a temporary expedient until the real IDE code stabilized. It is this tactful acknowledgement that probably prevented an angry flame war. The IDE rewrite has been controversial because it has had to get worse before it could get better. Most large rewrites have either not entailed long periods of breakage, or else have involved less central systems, whose breakage would not inconvenience too many developers doing other work. ■
Stable Detection
Hardware detection is reaching stasis. In the desire for a fully developed plug-and-play system, the question occasionally comes up of how to automatically detect all hardware currently installed, and how to detect hardware that is hot-plugged into and out of a running system. Current kernel policy is to detect all hardware that it is possible to detect, but not to make assumptions about the ways that hardware will be used. For instance, it would be a security risk for the kernel to automatically mount all filesystems it detected at bootup. The decision of when to mount a filesystem is left to the administrator; even though most Linux systems mount their filesystems at bootup, this is controlled by user-level configuration, not by the kernel. The situation is made more complex by the fact that Linux runs on a great variety of systems, which do not all support the same kinds of hardware detection. Some systems, such as s390, s390x, x86 and ia64, are now able to hot-plug CPUs in and out of the system at will, while others show no promise of such a thing. For a long time, developers despaired of ever being able to hot-plug regular PCI cards, until Compaq demonstrated a Linux system capable of this in January 2001. Within a couple of months, patches for this were in the mainstream kernel sources. But it remains dangerous to hot-plug certain pieces of hardware: on some hardware, plugging a mouse or keyboard into a running system may break those components. There is nothing the operating system can do about such situations, because the problem occurs at a more fundamental level. ■
2.0 marches on
Someone recently suggested dropping support for the old 2.0 kernels. David Weinehall, the 2.0 maintainer, said he would continue to patch 2.0 bugs as long as people continued to send fixes to him. He said, and Alan Cox (2.2 maintainer) agreed, that maintaining these kernels did not drain development effort from more current projects, primarily because there was so little required to maintain them. David predicted that so little work needed to be done on 2.0 that he would probably only release two or three additional versions, and will almost certainly stop with 2.0.42. Amusingly, Mikulas Patocka recently refused to take over as maintainer of the 0.01 tree. Last September, while playing around with the earliest version, Mikulas discovered a bug and posted a fix on linux-kernel. A lot of folks' eyes popped out of their heads over that one, and Linus offered to let Mikulas be the official maintainer of that tree. Then it was Mikulas' turn to have his eyes pop out of his head, and sadly, he refused the honor. Maintainership has often been delegated based on interest: Alan became 2.2 maintainer primarily because he insisted on producing patches for it; David became 2.0 maintainer because he objected when Alan decided to stop maintaining 2.0 himself. As far as I know, Marcelo Tosatti (2.4 maintainer) is the only person to actually go through a selection process. ■
Letters
Letters to the editor
Write Access
Your views and opinions are very important to us. We want to hear from you, about Linux related subjects or anything else that you think would interest Linux users! Please send your letters to: Linux Magazine Stefan-George-Ring 24 81929 Munich Germany e-mail: letters@linux-magazine.com Please tell us where you are writing from.
Image access
Q I have been given a new camera for work. The camera is an HP PhotoSmart 215, which is not listed as supported under gPhoto. Can I extract the images using Wine or Lindows? Alan Slater, by e-mail
A It may be possible to use Wine, or to run an emulator such as VMware, to access the HP software for image extraction. However, a quick web search turned up a page where someone has just the same camera: www.sonic.net/~rknop/linux/hp215.html We then took a trip to our local PC store and purchased a Dane-Elec PhotoMate Combo USB Compact Flash reader. By simply plugging this device into the USB port and mounting it as a SCSI drive we were able to browse the camera contents:
mount /dev/sda1 /mnt/flash
cd /mnt/flash
ls
Figure 1: mc copying from the Compact Flash to a hard drive
We then used Midnight Commander (mc) to copy the files and viewed them with Electric Eyes or other image software. ■
No Change
Q I have set up a new Linux machine and want to change the IP address. I use
ifconfig eth0 192.168.0.1 netmask 255.255.255.0
This works fine until I next reboot, when the previous IP address reappears. How do I write a script so I do not have to keep typing this? Erik Bildt, Tromso, Norway
A Rather than write a new script you just need to change your current boot script. On both Mandrake and Red Hat Linux distributions the file is /etc/sysconfig/network-scripts/ifcfg-eth0 and can be changed with either the DrakConf utility or the Network Configuration tool. On the SuSE Linux distribution the file is found in /etc/sysconfig/network/ifcfg-eth0 and is best changed by using the YaST2 / Network Basic / Network card configuration option. ■
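For illustration, a static configuration in the ifcfg-eth0 file mentioned above might look like the following sketch. The variable names follow the Red Hat initscripts convention; SuSE's file uses a similar but not identical format, so check your distribution's documentation:

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth0 (Red Hat/Mandrake).
DEVICE=eth0        # the interface this file configures
BOOTPROTO=static   # use the static address below, not DHCP
IPADDR=192.168.0.1 # the address from the question
NETMASK=255.255.255.0
ONBOOT=yes         # bring the interface up at boot
```

After editing the file, restarting the network service (or rebooting) applies the new address permanently.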
Multiboots
Q I run a mixed OS desktop. I have installed SuSE 8.0 but BootMagic does not see it and Linux will only boot off a diskette. How should I reinstall BootMagic? Jeff Millese, Gatineau, Canada
A When you installed the SuSE Linux distribution you told it to place the boot loader on a floppy disk, so BootMagic, when it starts, cannot see the Linux loader. Start your Linux with the floppy disk and then use the YaST2 / System / BootLoader configuration option to put the boot loader onto either the boot sector of the boot partition or the boot sector of the root partition. Do not put it onto the Master Boot Record: that is being used by BootMagic (which is now sold as part of the new PartitionMagic program under the PowerQuest brand name) and you do not want to overwrite it. Now rerun BootMagic and it should see the Linux loader on the hard disk. Those of you who do not want to pay for a system loader program can always download XOSL (eXtended Operating System Loader) from http://www.xosl.org/ although this is still under development. ■
Figure 2: SuSE Linux’s YaST2 boot configuration utility lets you save the boot loader where you want.
Missing in action
Q I have just upgraded my Red Hat Professional Linux and, although it works smoothly, I have no Linuxconf. This is a key tool for controlling my system. What should I do? C Singer, by e-mail
A Quoting from the RH 7.1 manual: “One of the most powerful tools you can use for system administration is Linuxconf. You can use Linuxconf for adding and manipulating accounts, monitoring system activities, controlling the way your system starts, and more.” So what have they done with it? In RH 7.3 the package was finally removed, after being deprecated in 7.2. This was due to many users complaining that it was not functioning for them, and we now have a more tightly integrated utility in ServiceConf. Those who just cannot live without their beloved Linuxconf can still download it from www.solucorp.qc.ca/linuxconf ■
Too many man pages
Q I know that Linux comes with lots of documentation; sometimes I think there is too much. If I forget the name of a command I can waste ages flicking through man pages trying to find it. Is there an easier way to look them up? Mark Day, by e-mail
A The man pages are a fine resource and everyone really should take some time to make sure they know how to make full use of them. The information in a single man page is broken up into sections, like 'synopsis' and 'description'. With some command line switches added to your man command, you can search through this information. For example, if you know there is a command to back up an ext2 file system, but you are unable to remember its name:
man -k ext2
will list all of the man pages that have 'ext2' in their description text. ■
Accessing all areas
Q I am forgetful. Just the usual, but it is annoying. I download a file and save it. Fine, you say, but if I do it under Windows how do I get at it under Linux when I reboot, rather than downloading it again and filling up my hard disk? Steve Hall, Bilston, UK
A You need to mount the Windows partition as another Linux device. Then you can access all the files and data. Our Filesystem table (/etc/fstab) file contains the line
/dev/hda1 /mnt/windows vfat noauto,user 0 0
Figure 3: Red Hat Linux's ServiceConf tool for all your control needs
Now use the mount /mnt/windows command to gain access to the FAT32 Windows partition on hda1. The noauto option means that the partition is not mounted by default when you boot the Linux system; the user option means ordinary users can mount it, not just root. The two 0s mean that the dump command will not attempt to back up this partition and that fsck will not check it. Looking at it from the other side, you can always access data stored on your Linux partitions from Windows by using the Explore2fs utility. ■
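To recap the layout of an fstab entry, here is a small sketch that splits the example line above into its six fields (the device and mount point are of course system-specific):

```shell
# The six whitespace-separated fields of an /etc/fstab entry:
# device, mount point, filesystem type, options, dump flag, fsck pass.
entry="/dev/hda1 /mnt/windows vfat noauto,user 0 0"
set -- $entry   # word-splitting assigns the fields to $1..$6
echo "device=$1 mountpoint=$2 type=$3 options=$4 dump=$5 pass=$6"
# prints: device=/dev/hda1 mountpoint=/mnt/windows type=vfat options=noauto,user dump=0 pass=0
```

A dump flag of 0 excludes the partition from dump backups, and an fsck pass of 0 tells fsck never to check it at boot.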
COVER STORY
Red Hat intro
Red Hat’s future
Red Hat Unwrapped
When Bob Young and Marc Ewing founded Red Hat back in 1995, their goal was, above all, to make people think “Red Hat” when they thought Linux, just as they thought “Heinz” when they thought about ketchup. The Red Hat was intended as a symbol, a synonym for Linux and Open Source. Of course this is not quite what has happened: an operating system is not as simple as tomato ketchup, whether viewed from the technical or from the marketing perspective. What is left over from the Young era is the strong business orientation; Red Hat has never been a Geek-to-Geek business. The commitment to Open Source and the GPL is mainly responsible for the benevolent attitude the Community has shown, despite some technical and strategic decisions emanating from the main company headquarters in Raleigh that might be considered questionable.
Although it was only just released in time to make this issue, the Advanced Server distribution from Red Hat, the US market leader, is the subject of an exhaustive test. Its aim: to find out whether Red Hat purchasers are in good hands. BY ULRICH WOLF
Red Hat and the Consumer
Red Hat has concentrated almost entirely on the business-to-business market for the past three years, working on the premise that profits in that area are easier to come by than on the OTC market. But a larger user base is important to the long-term success of a software product. No problem, anyone can download ISO images of the current distribution. End users who insist on buying a box are being asked to spend more each year, although the added value has stayed more or less at the same level. Until about two and a half years ago fresh Red Hat boxes were available for around £30 to £40; now even the Personal box with its lean content costs up to £50. The increase in price for the Professional version, which now weighs in at around £180, has been even more alarming. New Professional and Personal versions appear twice a year on a fairly regular basis.
COVER STORY
Red Hat Advanced Server test . . . . . . . . . . . . . . . . . 20
We put Red Hat’s flagship product through a series of grueling tests to see if it lives up to its promise.
Scott Harrison Interview . . . . . . . . . . . . . . . . . . . 24
Scott is the director of Red Hat’s Northern Europe division. We managed to ask his views on the future.
Red Hat Network
The Red Hat Network is the central instrument for keeping a Red Hat system reliably up to date. A basic subscription now costs US $60 a year. For this amount the customer is regularly notified of patches and can apply them via a GUI. Preferential access to the ISO images is part of the package, which is useful at times when the FTP servers are feeling the strain. The Enterprise version comprises genuine system management features, allowing you to group servers by task, manage the privileges of multiple administrators, set up local proxy servers in enterprise networks and so on. Those wishing to subscribe to the Enterprise version of Red Hat Linux will have to pay about US $240 annually. It is interesting to note how the Red Hat Network has been hardened over the
years. Anonymous access was originally available via the up2date update agent; mandatory registration for the service was introduced in 2001. The demo service level still allowed you to use up2date to keep your Red Hat system up to date without paying the registration fees, but this backdoor has since been closed. Purchasers of the Personal and Professional distributions now get time-limited access to the basic version of the Red Hat Network. For companies and private users, Ximian's Red Carpet is an alternative that works with various distributions, offers more packages and is often much more up to date.
Training and Certification
Red Hat has managed to establish the RHCE (Red Hat Certified Engineer) as a quasi-standard for Linux certification and training. In contrast to the LPI (Linux Professional Institute) multiple-choice tests available previously, the RHCE exams are practically oriented: the test candidate is required to solve the types of problems that real-world administrators are faced with. The all-inclusive system administration courses cost around £1,600, or the exam alone £480; students and teachers at recognized schools can apply for a 50 percent reduction. Courses are also available for other Red Hat products such as the PostgreSQL-based Red Hat Database, the Webserver, the Online Shop Interchange, or the Embedded Tools for the eCOS realtime operating system.
Red Hat does not automatically lead you to thinking about embedded systems. However, the former Cygnus developers are responsible for a major portion of turnover. The Services division for Embedded Systems has returned a double-figure million dollar sum each year. The Sony Playstation 2, which was developed using Red Hat’s compiler technology, is one of the prestige projects in this area. For embedded systems with a small memory base Red Hat does not use Linux but eCOS. The eCOS operating system is, however, also Open Source.
Red Hat and the Software Patents
Red Hat is planning to compile a portfolio of software patents for strategic reasons. At the same time the company intends to continue actively opposing software patents. A welcome side effect of this is that analysts tend to rate an enterprise by the amount of “intellectual property” it has collected, and patents are the units used to measure intellectual property. The fact that the principle of free software is diametrically opposed to this, and that intellectual property is impossible to quantify, does not seem to trouble Red Hat too much. Red Hat has promised not to pursue patent infringements if these occur in a Freeware context. This promise was formulated in the classical form of a legal statement of intent, but it will prove difficult to hold Red Hat to it, should the company decide to renege on this promise some time in the future.
Forthcoming
From a business point of view Red Hat is certainly an attractive proposition, especially in comparison with the other Linux and Open Source enterprises that started up in the 90s. Expecting profitability, however, does not seem very realistic at present. Although Red Hat has returned neutral operating results in some quarters, the company is still struggling with a problem common to many enterprises that went on shopping sprees during the New Economy boom: write-offs on goodwill and intangible assets related to acquisitions. These figures may ruin the balance sheets for some time to come. Red Hat closed the last financial year, which ended in February 2002, with a loss of US $140 million on a turnover of only US $79 million. The most recent turnover figures show no signs of a drastic slump, and Red Hat does seem to be riding out the storm prevailing in the IT sector more comfortably than other software companies. Times of crisis may also be a good opportunity for value-for-money Open Source solutions. On the server side of the business, competition from other Unix systems still plays a much more important role than the Microsoft landscape, and although Red Hat is a player on the mainframe Linux scene, the company does not seem to have risked its neck as much as its Linux rival SuSE, for example. More recently, it seems that Red Hat’s focus may be shifting to the corporate desktop, and there have been some announcements concerning “advanced”
COVER STORY
(that is more expensive) desktop and workstation distributions that would seem to confirm this trend. When it comes to database systems Red Hat is apparently incapable of steering a straight course. On the one hand Red Hat has its own Open Source based Red Hat Database product in its portfolio, and this product is probably equal to most tasks. On the other hand Red Hat seems to be assisting the database giant, Oracle, in an attempt to open some doors to the large enterprise market. The logical result of this is an unusual hesitance to market their own product. Despite their efforts to make their name synonymous with both Linux and Open Source in general, Red Hat will have to accept being measured on the strength of their Linux distributions. Their prices would seem to indicate that they are by no means lacking in self-confidence. ■
What ever happened to… Bob Young: The charismatic Red Hat founder and former used automobile salesman, Robert F. Young, was one of the first to discover the commercial potential of free software way back in 1995. After handing over his office to Matthew Szulik in 2000 he was still actively involved with the Red Hat board, but has gradually sold a major part of his Red Hat shareholding. He is now the owner of a small business called Lulu Enterprises that organizes technically oriented events whose fun factor and proximity to the participants distinguish them from traditional events. The first “Lulu Tech Circus” will be taking place September 27 through 29 in Raleigh, North Carolina, on Red Hat's home territory and the site of a large IBM branch. Colin Tenwick: While he was responsible for Red Hat's European activities, Tenwick was never short of a catchy phrase, much in the tradition of Larry Ellison or Scott McNealy. But he did not stick to this position for long. He is now CEO of the Stepstone online career and recruitment service, which is also known for its disposition to loud marketing – although this line of business does require a certain amount of discretion at times.
www.linux-magazine.com September 2002
Advanced Server test
Red Hat Advanced Server 2.1
Advanced Level

Remember Red Hat for Oracle or Red Hat for SAP? These were both available as separate products, certified by the appropriate ISVs (Independent Software Vendors). To prevent this list from running into the middle of next week, the marketing guys in Raleigh have come up with the Advanced Server. At least 20 ISVs have given the go-ahead, with the list including major players like the IBM Software Group, application server specialist BEA, and SAP. Additionally, Red Hat is asking the hardware manufacturers to climb on board. As the SuSE Linux Enterprise Server proves, certification justifies a much higher selling price in its own right. But in contrast to their competitor, Red Hat have added some technical enhancements and are pushing the product’s scalability on SMP machines, cluster support, load balancing via Piranha, and high availability. In order to do justice to Red Hat’s technical claims we decided to focus our activities on setting up a cluster to provide high availability with two node failover. The availability of the distribution itself was not too hot: although we waited until well after our editorial deadline, Red Hat was unable to deliver a boxed product to our test lab. So our test is based on the CDs we created from the ISO images that Red Hat finally managed to upload to our FTP server.
Red Hat’s latest flagship goes by the name of Advanced Server. Approved by major software publishers and equipped with enterprise features such as high availability and clustering, the Advanced Server targets the more demanding customer, but despite the version number 2.1 it is quite obviously a newcomer. BY MIRKO DÖLLE, ULRICH WOLF & ACHIM LEITNER
Installation
The installation procedure for Advanced Server is very similar to the procedure already used in Professional 7.3. Red Hat uses the same GUI installation program for both distributions. The welcome page now additionally offers the Advanced Server option, in contrast to the various server and workstation variants available in the Professional edition. Advanced Server is an international version providing multi-lingual support, although the documentation is entirely in English. The character based installation does not seem to be any different from the
Figure 1: Two node failover cluster configurations require a dual channel SCSI RAID or fiberchannel solution that can be accessed simultaneously by both nodes.
Professional 7.3 distribution, although you may discover one or two issues (as we did) if you need special keyboard layouts.

Setting up a firewall on a cluster is more complex than on a single machine. You cannot perform the installation using the defaults (medium security level, allow no services or just DHCP), because the defaults will interfere with the cluster configuration. We recommend omitting the firewall installation at this step and manually adding customized rules for the cluster at a later stage.

Red Hat distributions still use the Gnome desktop, although a KDE option is available. But production systems will tend to be managed remotely, and that makes the GUI redundant. To install a text-based environment, you simply disable the Gnome package during the installation.
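For the record, the kind of hand-written firewall rules we have in mind might look like the following sketch. This is purely illustrative: the node address and the decision to trust the partner node wholesale are our assumptions, not Red Hat’s recommendations. With IPTABLES left at its default the script only prints the commands (a dry run); point it at the real iptables binary to apply them.

```shell
#!/bin/sh
# Hypothetical post-install firewall rules for one cluster node.
# IPTABLES defaults to "echo iptables", so running this is a dry run;
# set IPTABLES=/sbin/iptables to actually load the rules.
IPTABLES="${IPTABLES:-echo iptables}"
PEER=192.168.1.2                                  # the other cluster node (assumed address)

$IPTABLES -A INPUT -s "$PEER" -j ACCEPT           # trust all cluster-internal traffic
$IPTABLES -A INPUT -p tcp --dport 80 -j ACCEPT    # the clustered Apache service
$IPTABLES -A INPUT -p udp --dport 2049 -j ACCEPT  # the clustered NFS service
```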
Hardware en masse

The documentation describes a two node failover as a typical setup for Advanced Server 2.1, so we decided to base our test on this scenario. The cluster for our test system comprised two dual Athlon machines running at clock speeds of 1.533 and 1.666 GHz respectively, both equipped with an Adaptec 29160 U160 SCSI controller. We installed the Red Hat system on the internal hard disks of both machines. There were no complaints regarding hardware, although a Promise Fasttrack 100 RAID 0 system was recognized as two separate disks. This meant having to break up an existing RAID array or
replace it with a software RAID array. And there was a slight APIC issue with the Asus A7M266-D board in the first machine. The kernel kept on crashing during initialization, but the “noapic” boot parameter soon sorted that out.
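Boot parameters like “noapic” are typically made permanent in the boot loader configuration. A sketch of the relevant lilo.conf stanza follows; the kernel image name and root device are assumptions for illustration:

```
# /etc/lilo.conf fragment (image name and root device are assumptions)
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    read-only
    append="noapic"
# run /sbin/lilo afterwards to activate the change
```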
Two Channel SCSI

We stored data for the cluster services on an Easy-RAID X12 by Starline Computers [2]. This SCSI / IDE RAID system (see Figure 1) features a dual channel SCSI host controller and twelve 120 GB drives, although we used only the first four. When we attempted to mount the total capacity of 1.44 TB, we could not access the device: Linux complained about read errors on “/dev/sda”. To allow both machines simultaneous access to all the partitions on the RAID system we then configured the four disks as one large share. Red Hat supports fiberchannel systems, which you would need to configure for parallel access. NAS systems are not currently supported, and the cluster configuration will not talk to network drives.
Red Hat’s “Cluster Manager Installation and Administration Guide” [4] recommends the use of a power switch to completely power down a faulty machine in case of node failure. The idea is to prevent the shared RAID system from freezing. APC kindly provided us with a Master Switch AP9212 (Figure 2), which features eight switchable outlets. We attached the power switch to the network, leaving the serial port unused. However, we found the cluster software was unable to control the power switch correctly: instead of powering off a failed machine (Immediate Off), the cluster merely emitted an Immediate Reboot signal, causing the failed machine to power off for a few seconds before powering on again. Depending on the BIOS configuration the computer may attempt to restart, and in this case a damaged SCSI controller could lead to the RAID system freezing. Since the software will not transmit a second signal, this would take the whole cluster down.
Cluster Installation

The configuration of the cluster software with the “cluconfig” console tool is detailed in the Cluster Guide. Although the software has outstripped the guide in some places, this should not give the administrator too much of a headache. You should, however, be cautious about following all the sample configurations without considering your options. The Cluster Guide recommends activating the “Relocate when preferred member joins the cluster” option for an Apache configuration on page 126, but fails to mention that relocating will drop any current sessions. This causes active downloads to fail when the primary node rejoins the cluster after a failure.
Figure 2: The Master Switch AP9212 can switch eight power circuits individually. Administrators can access the integrated management software via the built-in web server, SNMP, or telnet.
Figure 3: In case of interrupted network services, the services cannot be transferred and therefore fail.
The nodes use quorum partitions to exchange status information; no details of the format are documented. The partitions, which are about 10 MB each and accessed as unbuffered raw devices, store status information on the cluster and its active services. You need to use separate RAID partitions for your data to provide the redundancy for individual services. The node that owns a service will mount the partition assigned to that service.
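Red Hat systems of this era bind raw devices at boot time via /etc/sysconfig/rawdevices, and cluconfig asks for the resulting raw device names. A sketch of what the quorum bindings might look like; the partition names, and the assumption that a primary and a shadow quorum partition are bound, are ours:

```
# /etc/sysconfig/rawdevices fragment -- bind the quorum partitions on the
# shared RAID to unbuffered raw devices (partition names are assumptions)
/dev/raw/raw1  /dev/sda1
/dev/raw/raw2  /dev/sda2
```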
Interrupted Connections

We used an Active-Active configuration for our cluster, comprising one machine with an NFS drive as its primary node, and the other with an Apache web server. In this constellation one machine would take over the service that had failed on the other machine. Failover means restoring the services of the failed node as quickly as possible, but it does not necessarily mean that active connections will be kept. Our clients could only continue working unaffected by the failure if they were using connectionless protocols (such as NFS). When a node fails over, the IP address of the cluster service is assigned to the other machine. The address is then bound to the network device responsible for the subnet by IP aliasing. This means that the hardware address of the cluster service will change to match, and needs to be accounted for when configuring switches or routers, and also that redundant services will need an IP address of their own.
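The takeover step just described boils down to attaching the service address as an alias on the surviving node’s interface. A minimal sketch follows; the interface name and address are assumptions, and with IFCONFIG left at its default the command is only echoed as a dry run:

```shell
#!/bin/sh
# Sketch of the IP-aliasing step a failover performs (hypothetical values).
# IFCONFIG defaults to "echo ifconfig" for a dry run; on a real node you
# would use /sbin/ifconfig instead.
IFCONFIG="${IFCONFIG:-echo ifconfig}"
SERVICE_IP=192.168.1.100        # floating address the clients connect to

# Bind the service address to the subnet's device as alias eth0:1; from
# now on the service IP answers with this node's hardware address.
$IFCONFIG eth0:1 "$SERVICE_IP" netmask 255.255.255.0 up
```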
Figure 4: Both servers have redundant connections to the disk system(s), but the Red Hat Linux Cluster Manager controls access.

Hidden Heartbeat

You will need to configure at least one heartbeat channel for the cluster operations. The nodes use the heartbeat channel to check how the other nodes respond, if a node fails to update the timestamp on the quorum partition. The heartbeat channel is unused in the current 2.1 version of Advanced Server. Only the status output from “clustat” or “cluadmin” (Figure 5) shows you if the heartbeat channel is online or offline. You cannot define any actions for these cases, and there was no sign of scripting access. Red Hat has stated that this feature will be available in the following version.

Relocate or bust

The cluster software does not offer any options for launching customized actions on failure of a service or device. You can use the status function in the service’s init script only to implement a verification function. The cluster software calls the init script with the “status” flag set at predefined intervals, and the Cluster Manager determines whether to restart the service based on an analysis of the return value. The administrator can specify what details the status check covers. The Apache script, for example, checks whether the daemon is running. But don’t expect a failed status check to launch a “relocate”. If the script detects
Figure 5: The cluster status can be queried using “clustat” or interactively using the “cluster status” flag with cluadmin. The service section shows how the services are distributed across the cluster nodes.

Figure 6: Although only the network connection to the second cluster node has failed, the power switch status is unknown. This effect also occurs if you have not configured a power switch.
an error condition that would necessitate switching to a backup system, you have to launch this action using the “cluadmin – service relocate service” syntax. The cluster server handled a total node failure gracefully; depending on the service they were using, the clients simply had to repeat a file transfer process. But a partial failure caused a whole bunch of unanticipated problems. Although the affected node did a clean reboot after disconnecting the SCSI subsystem, there seems to be no way to deal with a disconnected network cable. Although the heartbeat channel and the SCSI connection were both active, the missing network link between the two nodes meant that it was impossible to relocate a service to a backup machine: “cluadmin” kept on reporting errors (see Figure 3). Figures 7 and 8 show our attempts to relocate the service via the console. While we were searching for the cause of this problem with “tcpdump”, we noted that “clupowerd” continually talks to its neighbors via TCP/IP port 4004. The daemon seems to be responsible for power switches and that would explain the “unknown” status in Figure 6, where the network connection is down. While relocating a service we noticed some traffic between the nodes on port 4002, i.e. the port the Cluster Service Manager “clusvmgr” listens on. It seems that service relocations are negotiated via this connection, and that means a failure is inevitable if the network connection is down. We will need to check the sources to be sure, though, because we could not
find any man pages for the Cluster Tools, or any documentation anywhere else for that matter. Even the “--help” switch only worked on rare occasions. So Red Hat’s failover solution only works as advertised in case of total system failure, and that is not our idea of a high availability solution. The remedy would seem to be a script that uses a power switch to power a node off. Or, as a colleague put it: “All we need is someone to watch the machine and blast it with a shotgun if something goes wrong.”
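The status verification described under “Relocate or bust” is easy to picture as code. A minimal sketch of an init-script status hook follows; the pidfile path is an assumption for illustration, and the Cluster Manager only ever sees the exit code:

```shell
#!/bin/sh
# Minimal sketch of the "status" verification the Cluster Manager calls
# at predefined intervals. Exit code 0 means healthy; anything else makes
# the Cluster Manager restart the service (but, as noted, not relocate it).
PIDFILE="${PIDFILE:-/var/run/httpd.pid}"   # hypothetical pidfile path

status() {
    [ -f "$PIDFILE" ] || return 1              # daemon never started, or crashed
    kill -0 "$(cat "$PIDFILE")" 2>/dev/null    # is the process still alive?
}

case "${1:-}" in
    status) status && echo "running" || { echo "stopped"; exit 1; } ;;
esac
```

How thorough the check is remains up to the administrator; the Apache script mentioned above goes no further than verifying that the daemon process exists.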
Conclusion

Advanced Server 2.1 is a tried and trusted solution that is in line for certification by hardware and software manufacturers. If you need this and are also a faithful Red Hat customer, the Red Hat flagship is your only option. However, the high availability features were not convincing: the cluster can only manage two nodes, and despite the additional hardware resources required it seemed incapable of dealing with error conditions other than the total failure of one server. ■
Red Hat Advanced Server 2.1 Scope: 4 CDs, 2 manuals Support: 12 months Red Hat Network and maintenance Basic: 12 months support for installation and configuration Standard: 12 months all-in support, 4 hour response time (weekdays) Premium: 12 months all-in support, 1 hour response time (24x7) Price: US $800 (Basic), US $1,500 (Standard), US $2,500 (Premium)
INFO [1] Red Hat: http://www.redhat.com [2] Starline Computer: http://www.starline.de/produkte/easyraid/easyraid_x12/easyraid_x12.htm [3] Easy-RAID X12: http://www.phertron.com/products/easyraid_x16/erx16_fc.htm [4] Cluster-Guide: http://www.redhat.com/docs/manuals/advserver/RHLAS-2.1-Manual/clustermanager
Figure 7: Following a failure of the network connection to “lab2”, there was not even a manual option available for relocating Apache to a running machine.

Figure 8: When a service needs to be relocated, the cluster software obviously attempts to contact the other node via Ethernet. A faulty route could take the cluster down.
Scott Harrison interview

Crystal Gazing

We managed to catch up with Scott Harrison, Red Hat’s director for Northern Europe, working out of Guildford in the UK. Scott has brought 15 years of enterprise account management skills to Red Hat, having worked for Sybase and Powersoft before that. BY COLIN MURPHY

Q: What are the main opportunities for Red Hat in the coming year?

A: We always knew that the market for Linux was getting bigger and bigger; the question was how we could make money from that growth as a Linux provider. You can see how the hardware vendors benefit, and the big software vendors too. A lot of commentators would ask “How do Linux providers make their money?” They could see how money was made through selling boxes, training and support, but it was hard to see how Linux providers could really be successful while, at the same time, remaining true to the Open Source ethics that they had built their businesses on. That was always the dilemma we faced: being true to the open source community and, at the same time, being true to the shareholders. Especially for Red Hat, as a publicly listed company, we have a responsibility to be financially successful. Quite a challenge. The thing that has changed has been adoption by enterprise customers. Historically, the enterprise market we have addressed has not been handled with an enterprise capacity, both by us as a vendor and in the way customers have used our technology, which often has been in the back room, undercover, without any official endorsement, as mail servers, DNS servers, web servers, edge-of-the-network type services. The customer could go down to PC World, buy one of our packages and install it on as many servers as they wanted. Many companies would also put some people on Red Hat training, but the majority of the machines being installed would not need mission-critical, 24/7 support; the customers would back up their own machines, and so Red Hat wasn’t seeing any support revenues.

Q: How does a company like Red Hat scale its business around this type of business practice?

A: We kept thinking that these servers must be more critical; if it is a big name company, they must need support. It proved that they didn’t. The servers were being installed by the backroom guys, who were taking the decision to do a Linux install for their own peace of mind, to make their own lives easy. Often the best option for them was to build a cheap old Intel box with Red Hat, configured as a Samba server or whatever was needed. The reliability of the server meant that it just hummed away and no one ever noticed it, because it was never a problem. This was happening with our smaller users and our enterprise market; that was the nature of the Open Source business. The thing that has changed for us has been the downturn in the economy. We’ve seen some of the investment banks and even retail banks taking a much closer look at the cost of infrastructure when putting new applications into their mission-critical computer systems. The demands for these systems are fundamentally different to the peripheral servers and, more importantly, they are prepared to pay, on a per-server model, for the services they now require.

“The main thing required by a Unix Infrastructure Manager is stability.”

Q: Can you explain how those demands change?

A: There are a few key differences. One key thing is the support by independent software vendors and their certification around the platform, and part of that is how we behave around our product. This has been one of the challenges as we started to engage with the more mission-critical enterprise market. We started to get feedback from them, and when we would approach the ISVs for certification, we would also get requests. The nature of the request was very common: they would all say that the Linux companies were producing technology at such a fast pace that they couldn’t keep up. They couldn’t, and never would want to, update their systems every six months, the average turnaround for the release of a Linux distribution, which was seen to be needed because of the development of Linux. This didn’t matter to the backroom guys, who could take the decision to upgrade their distribution only if it offered something they needed for the server tasks they were running. A Unix Infrastructure Manager has a far different set of criteria. If he is responsible for the ‘gate’ that every server has to step through in order to be deployed in the datacenter as either an application or database server, he will want to go through a checklist of items to make sure it won’t cause a problem. Linux has only been on the outside of that ‘gate’ up until now. Now we have the sponsors in these big companies saying that they want to take Red Hat servers, and the ‘gatekeepers’ have been saying “No – not unless you can show that you satisfy all of the things on my checklist.” The main thing that is required by the ‘gatekeeper’ is stability, of the version and of the certification of that version. The ‘gatekeeper’ will only be looking for major updates every 2–2.5 years. The idea of having to do an upgrade every 6 months is just too unpalatable.
Q: How do you provide for this stability at Red Hat?

A: We now produce two product lines. The standard package, Red Hat 7.3 at the moment, will in turn move to version 8 and 8.1, 8.2, etc. Usually there is a major version number jump after x.2, so the next version after 6.2 was 7.0. This will continue to be revised every six months or so, as has been the case in the past. The major change in version number means that there has been a major and fundamental change to the technology in that package. This means that by the time version x.2 has come out, it will have been tested and bug-fixed the most and will be the most stable and secure version. New and separate from this is the Advanced Server, which is only the first in a line of Enterprise products. This is based on the standard package of Red Hat 7.2, and it is our expectation that the next version of the Advanced Server will be based on 8.2, three release cycles, or 18 months, away. More importantly for our customers, we have committed to supporting the Advanced Server products for a minimum of 3 years. As a result, this will also guarantee the support and the certification from the 15 or so ISVs that we work closely with: Oracle, Veritas, BMC, and IBM Software, etc., who have committed to the platform and their certification for it. After all, they had the same problem keeping up with us with their certification schemes. This is where we see the future of our business, because it is customers like these that have been paying us the largest amount of money.

Q: Have there been any advantages to the technology in Red Hat through this certification process?

A: Yes, because of the relationships built, we are getting much more feedback from the ISVs. We now have dedicated people in Red Hat who act as a conduit for the ISVs. The ISVs no longer just tell us that a product is certified; they make suggestions as to how it could work better with their product. A case in point is Lotus Domino and the number of concurrent users at any one time. 18 months previously the maximum supported was 50 concurrent users, but once we started getting feedback, this changed to 400 and then to 7,000 concurrent users very quickly. These were all little changes, but without the feedback, we would have been none the wiser.
Q: What is in the Advanced Server and how will you be charging for this new service?

A: The makeup of Advanced Server includes new technologies, like clustering, thanks to having on board the developers who originally worked on Convolo Cluster, which became Mission Critical Linux. We have also backported some of the version 2.5 Linux kernel functionality, including asynchronous I/O, with the help of Red Hat kernel engineers like Alan Cox. This helped to dramatically improve the performance of products like Oracle 9i, to the point where Oracle 9i on Advanced Server is posting some industry-leading performance benchmark figures. We are no longer in catch-up mode with the Unix vendors in terms of performance; we have overtaken them and left them way back.

We will provide Advanced Server to the market in a slightly different way to our other products. We will ask people to sign and accept an agreement with us for per-server installations, for which we will charge an annual subscription, which includes the provision of the media for installation and access to the Red Hat Network. The Red Hat Network is the exclusive way customers will be able to update their servers. Accepting the agreement means that the customer can only use our product on the servers they have paid for. Added to this are three categories of support. The Basic package, which includes the installation media and some basic installation support, is US $800 per server per year. Adding support during normal business hours on to the basic package will cost US $1,500 per server per year, and for 24x7 support, for those with critical systems, it’s US $2,500 per server per year.

We have developed a model that means people can derive value on each server annually, which includes the mechanism for updates and management through the Red Hat Network. They will now also be able to get the type of support that the enterprise market needs, including the service level agreements. We must make per-server revenue to recover the costs to us of all the development work, like the cost of the ISV engineers that have helped to gain the certifications. Because of the certification implicit in Advanced Server, if the user has a bug or a problem with one of the ISV applications that they are using, that vendor will be able to offer support because they know they are dealing with a known and recognized system. If they are asked to solve a problem on an unlicensed machine, they will be much more reluctant, because they have no idea what is really on the machine, so the problem could be coming from anywhere.

“For the moment we will hedge our bets on StarOffice, but we will most likely push forward OpenOffice.”
Q: Do you still see a market for the desktop for Linux?

A: Yes, Red Hat do still see a future for Linux on the desktop. As a Linux company we have been big supporters of things like the Gnome project, partly fearing the way that KDE suggested they wanted to go proprietary until they saw sense. But now we have two healthy desktop environments. We are also part of the Eclipse project, which helps with development and helps to produce the toolsets that developers need to produce applications for the desktop. Eclipse also has support from IBM with some of its Java tools. We have been doing quite a lot to support the desktop behind the scenes, mainly from within the Open Source development side of Red Hat. We haven’t yet felt that we can generate worthwhile returns by producing a desktop-specific product, so we have spent a lot of our time pushing the Advanced Server. But now a lot of the companies that have accepted the Advanced Server model are coming to us for desktop solutions. Some of our big customers have actually said that their ultimate goal is to have a Microsoft-free system. So we are now looking with them for solutions to corporate needs on the desktop. They are finding problems with the chore of licensing issues and the need to run bloated software just to send an email or write a few letters. 80% of the needs for the corporate desktop are available now; there are those power users of Excel or Powerpoint, for whom StarOffice doesn’t quite meet their needs, that will always be a stumbling block. We have not actively tempted that market; we have waited for them to come to us. When they are looking for an alternative, they are much more likely to pass over some of the shortfalls that the Linux desktop might have, rather than us pushing the product to say it is a complete replacement for the Microsoft desktop. That way they are much more open minded. Maybe in the next six months you should see a formal commitment to a workstation / desktop product in the Enterprise line, which will be for customers who are prepared to pay for us to behave in a different way: getting certification for customers, getting libraries for developers, etc. The big problem most people have is the lack of a Domino Notes client, and there is increasing pressure being put on Lotus by their customers to provide it. Desktop migration can only be helped by the stance that Microsoft have adopted, treating their customers as buckets full of money that they can just dip their hands into. The way they can just change a line in their licensing means they can just hoover up more money. People will start to look for viable alternatives.

“We see the competition not being the other Linux distributors, but Sun and Microsoft.”
Q: Are there any big stumbling blocks that will hold people back from migrating to Linux?

A: The one key thing that is stopping the massive wave of movement away from Windows is application availability. This is the thing we found on the server platform, which is why we started our ISV program, so they could standardize on one of our products for their server applications. Windows must have the largest number of applications written for it of any OS. As it stands today, I can’t see the average home computer user taking to Linux unless they are really enthusiastic.

Q: Have OpenOffice and StarOffice 6 made much of a difference to aid in the migration from Windows?

A: I personally am still a user of StarOffice 5.2, so I can’t comment on usability. But some of the things that Sun are doing around StarOffice 6 are one of the reasons why we are putting support into OpenOffice. I understand that it is the corporate decision makers that are behind the way StarOffice 6 has been licensed. We are going to put all of our effort and support behind the development of OpenOffice. We don’t believe that Sun are going the right way about StarOffice 6. I have recently heard from some reports that Sun hope to make 60% of their revenue from software sales and that a large proportion of this will be from sales of the StarOffice suite. That’s just becoming another Microsoft, and we don’t think that that is the way it should go. The concern is that this is the thin end of the wedge, because once you have a license, and you have established that it is a licensed product, then it is just a question of how much. The danger is what would happen if Sun created a large user base: with people locked into the product, they become easy targets for exploitation. The hard thing for any company that hasn’t built their business model on open source is getting their head around Open Source. Red Hat, from day one, launched its business as an Open Source company, while Sun is, fundamentally, a proprietary company. Sun have seen that healthy software companies derive much greater profits than healthy hardware companies and that software is probably a good place to be. Now they are looking for the ‘killer application’ on Linux to improve their position further. Our view is that this is not the way to drive it forward; what will happen is Scott McNealy will become a mini-Bill Gates. For the moment we will hedge our bets on StarOffice, but we will most likely push forward OpenOffice. As long as Sun continue to behave in a constructive way, then all well and good. The great thing about Open Source is that it does keep people honest. But there is enough skepticism about Sun to keep the OpenOffice development very healthy and lively.
Q
What pressures can you put on companies like IBM to help bring
Red Hat interview
forward new versions of Java that will work with code compiled with GCC 3.1 ? Being part of the community we do a couple of things. We encourage these people to bring out code that will support it. Under our Red Hat Advanced Server guise, the ISVs that we have a close relationship with have accepted the responsibility to bring out code that we can certify against. The other thing that we do, especially with companies like IBM and Dell, is to ask them to release non-critical software, things like drivers, that we can look at, to detect any problems. They understand the need for a working and compatible driver to make sure they can sell the hardware that will rely so heavily upon it. We are encouraging the hardware and software vendors to release as much as they can, so that it can be better scrutinized by all of the Open Source community. So we bring pressure to bare.
A
Q: Do you think that this will increase, that these companies will accept more of the Open Source ethic?
A: They will release some of it. I think the challenge they face is that they are a bit embarrassed by their code, because a lot of it gets rushed out and is quite badly written, especially when compared against the flowing, self-documented code that the Open Source community manages to generate. We even know of some cases where they have re-written the code before releasing it to the community, just to make it more presentable.
Q: Sun are bringing out their own version of Linux. What will this do to the market?
A: Our understanding, from what we've heard and been told, is that Sun are taking a version of Red Hat and just making a few modifications to it. If Sun are going to contribute resources and efforts and become a part of the open community, then we welcome that. For any hardware vendor that produces their own distribution, is that going to work in the long run? IBM must have had thoughts about bringing their own distribution out; I think the reason they didn't is because of the very things that were not in place, and are only now addressed by our Advanced Server products: a common platform that will be certified by multiple vendors, including all of the hardware vendors. If Sun bring their own version out they won't benefit from this. What they will have is Sun's version of Linux, which is a Red Hat derived product. I haven't seen an open commitment from Sun to say that they will do a lot of the development work, which would suggest that all they are going to do is leech off the work that Red Hat has done with the community. Unless they are prepared to put in, they are not going to be seen as a contributor and part of the open community. If they just take out, and link to proprietary hardware, then that is not going to be seen in a very positive light, by Red Hat or by anyone else. Time will tell. The sensible thing for them to do would be to work with the Linux distribution companies and say "we want to certify our Intel or Sun based hardware against this, so work with us"; that way they would get the certification. For Sun, as a hardware vendor, to say that they are going to bring out their own version of Linux flies in the face of what is happening and evolving in the open community today. Sun is a company that is thrashing around trying to figure out what has gone wrong, having had a terrible time in the last couple of years. Part of their reaction to seeing the way that Linux is going is to say that if we can't beat them, we'd better join them. But if they are going to be taken seriously, they have got to join it in the right way.
Q: What about UnitedLinux? How will this change the Linux world?
A: Our view of UnitedLinux is that it was a reaction to Red Hat Advanced Server, crystallizing the thought among the other Linux distribution companies that they were not being taken seriously and not getting certified by application companies. Our conception of Red Hat Advanced Server came about through discussions with enterprise users, like investment banks, and the ISVs that they use. With the virtue of being early with our enterprise customers, who forced us to have those discussions, we started to develop our thinking about the enterprise products. There was a year's worth of discussion and work before we announced the Advanced Server. Once this process had started, the ISVs saw that this was a far better model for them to work with. So, when SuSE came along to get their latest version certified, just six months on from the previous version, I think the likes of Oracle and Veritas gave them the cold shoulder and told them to work in a similar way to us. Our understanding from the ISVs was that they were saying that Red Hat Advanced Server would be the only platform they certify against, because it's the only platform that is acting in an enterprise fashion. Some of the smaller Linux distribution companies, like TurboLinux, might not ever get certified again, because the returns on investment were so small; it just wasn't worth their time to go through the certification process. SuSE, while being the next most respected distributor of Linux, realized that they didn't have the coverage in areas like the AsiaPac market, so partnering with Caldera, Connectiva and TurboLinux would boost that coverage. By bringing them together and unifying the code base, they now have something presentable to offer ISVs for certification. What we have is two enterprise versions of Linux, both now acting in a very similar way.
Q: Do you see this as competition to Red Hat?
A: We see the competition as being not the other Linux distributors, but Sun and Microsoft. We are very open about all of the various Linux options and invite customers to take a look at the other Linux vendors so that they can make their own minds up. There is some healthy competition between the Linux companies, but we all realize that if we scrap about in the Linux market that is available to us at the moment then we will not get anywhere. If we were to do that we would be missing the point; it's the massive piece of pie that's owned by Sun and Microsoft that we need to focus on, so that we all benefit. ■
www.linux-magazine.com September 2002
REVIEWS
Sun Interview
Sun’s new Linux strategy
Dawn of an era

Jack O'Brien, manager of the Linux business office at Sun Microsystems, is the man behind Sun's new Linux strategy. BY COLIN MURPHY
Q: What can you tell us about the new hardware for Sun's BigBear Linux?
A: The product's name is going to be the LX50. It will be a 1U form factor, dual-processor server, and there will be more than one configuration offered. It will be based on the x86 architecture, so it uses Intel chips in the product.
Q: Is there a reason why it is limited to just the one type of hardware?
A: This is just the first announcement of our first x86 product; it's by no means the only product that we have under development. We acquired the Cobalt Networks organization almost two years ago now, and the company has been shipping Linux appliances based on the x86 architecture for almost four years. The company is arguably the most successful Linux systems company ever, having shipped well over 100,000 units. We have a couple of different products in that portfolio: the web-serving product "RAQ", used by lots of Internet Service Providers and telecommunications companies. We also have a 'customer premises appliance', the Qube, a neatly packaged internet server. We also have the Sun Cobalt Control Station, used in managing large installations, usually made up of RAQ servers. All of these are based on x86 and do use the Linux OS.

Q: Is it true that the OS with the LX50 is really just a standard Red Hat 7.3 that you have 'tinkered' with?
A: Tinkered is not how I would describe it. The strategy that we settled on was to be as standard as possible with everything else that is out there. Linux has appropriate momentum of its own, and we have just fallen right into place with that. We are Red Hat compatible; we have been running Red Hat applications on our systems and haven't found any problems. We have some Sun management tools, but we will be following all of the standard management interfaces. The LX50 uses the standard RPM installation tools and we support all of the Intel hardware and software protocols, like IPMI. The interfaces will be familiar to anyone who has administered a Linux system on x86 before.
Q: How closely does Sun work with the Open Source community?
A: We work very closely with the Open Source community. We are one of the biggest contributors of code to the community. We have supported products like OpenOffice, NetBeans, the Grid Engine product and Gnome. We have contributed the NFS file system and funded the NFS v4 port to Linux. We also funded the Blackdown Java project. There are lots of others too. We plan to ramp up our efforts with Open Source even more. Most of our Linux engineers from our Cobalt division are guys that are very well connected to the Linux and Open Source community. These guys lead some of the major driver and module projects that are being developed.
Q: How much of the LX50 will remain proprietary?
A: None of it will remain proprietary. We are following standard industry Linux, and that includes all of the practices that come with it. We will make all of the code available and anyone can run it on suitable hardware. We will play by the rules.
Q: Will Java ever be made open source?
A: Ha ha! I cannot answer that. Ha ha...
Q: Will the Sun Fire servers become more compatible with products like the LX50 due to the work that has been done with LinCAT?
A: We have a clear strength with the compatibility issues. LinCAT is a product released around March of this year. We have a very comprehensive Linux/Unix compatibility strategy. We want to be aggressive in the entry server market. By doing this we can bring some impressive technology to bear. First: our world-class Unix, Solaris, available on SPARC and x86. Second: our enterprise-ready, standard Linux for x86. Third: great Linux-to-Unix compatibility. There are a couple of layers to how we ensure this happens. Most important is the Java layer. Writing to Java, the J2SE edition from the Sun ONE stack, is our way to ensure true cross-platform independence, adhering to all the standards like XML and SOAP. As the industry matures, developers are writing for these APIs, which are just one layer of abstraction above the operating system. We will focus a lot of attention on making sure this becomes a standard. We will also make sure that we have full application compatibility between Linux and Solaris. What we also do is ensure good API, or source, compatibility between Linux and Solaris. We will build into Solaris APIs compatible with those in Linux, so that recompilation stops being an issue. LinCAT is a code analysis tool. It allows developers to check for differences or problems that might occur and helps create code that is source compatible. We also have a package that ships with Solaris for x86 called lxrun that allows you to run the same binaries on the different platforms. We have a lot of engineering underway to ensure this compatibility. ■
Sun LX50
A new day with the Rising Sun

Scott McNealy announced that Sun Microsystems will bring their commercial systems know-how to the low-cost, entry-level server market with a new Linux-based system that brings together industry-leading software and system design. BY COLIN MURPHY

At a press conference in San Francisco, Sun unveiled the LX50, an entry-level server using the x86 architecture. The system will be pre-loaded with Sun and Open Source software, with a choice of Sun's new enterprise-ready Linux or the Solaris operating environments, and a full suite of support services. The Sun LX50 includes fully integrated infrastructure software, improved manageability, and the 7x24 support and professional services that most vendors' entry systems lack. The enterprise-ready LX50 server aims to lower the cost of ownership and to help fill the security and stability gap that other Windows and Linux systems leave wide open. The new system tries to satisfy customers' needs to deploy infrastructure applications, such as Web serving, firewall/VPN and streaming media, through low-cost, scalable hardware and Open Source software.

A Software-Rich System

The Sun LX50 is the first Sun system to feature Sun Linux 5.0, the company's enterprise-ready Linux operating system optimized for a 32-bit, x86 system. Sun Linux 5.0 is based on the 2.4 Linux kernel and optimized for the Sun LX50 with a strong focus on stability, security, ease of installation and set up, and remote manageability. Sun Linux, similar to Solaris, includes optimized and tested drivers. It easily integrates with Sun's Java technology and the Sun ONE platform and, as expected, is supported by Sun's own support services. The Sun LX50 includes some valuable software that aims to put the new server into a league of its own for edge applications, compute farms, high-performance technical computing or custom application deployment. Software applications include: Java 2 SDK Standard Edition, Sun ONE ASP for Linux, Tomcat (JSP), MySQL (database), Apache (web server), WU-FTP (FTP), Sendmail (email server), BIND (DNS server), Sun Grid Engine and Sun Streaming Server.

Reliable Hardware

The Sun LX50 is powered by either single or dual 1.4GHz Intel Pentium processors in an industry standard, 1 3/4 inch (1U) high rackable server. This new server can be managed remotely with ease using the Sun Cobalt Control Station, which gives the Sun LX50 horizontal scaling capacities on a massive scale. The browser-based Control Station is designed specifically for large-volume server deployments. It monitors system health, evaluates performance, and can determine the hardware inventory as well as managing software provisioning. To integrate with other system management tools along with the Sun Management Center, the Control Station uses standard SNMP interfaces. The Control Station is aimed at customers who need to control and manage racks of LX50 servers out of the box.

As the new system was announced, Sun also gave information on a new suite of training, support and consultation packages for both Sun Linux 5.0 and Solaris on the x86 architecture. These cater for everything from the replacement of parts to problem solving and the creation of software patches. Sun is aiming to be a complete system provider for all the customer's needs for its Linux 5.0 and Solaris systems. The global service portfolio includes mission-critical operating environment support, a 3-year hardware warranty, an online support center, Web-based and instructor-led training, and integration and consulting services.

Partner Support

Because Sun has used an open-standards-based architecture, Independent Software Vendors (ISVs) will find it easy to add and integrate their products. This in turn will increase the applications for the LX50 while in return giving the ISVs a new low-cost server platform for their growing customer base. ■

Figure: Sun's LX50, an entry-level server using the x86 architecture

INFO
The Sun LX50 starts at US $2,795 for a system with one 1.4GHz CPU, 512MB memory and a 36GB SCSI disk. A system with two 1.4GHz CPUs, 2GB RAM and a 36GB SCSI disk sells for US $5,295. The Sun Cobalt Control Station starts at US $4,999. For more information see www.sun.com.
Gnome 2 & Ximian
What is Gnome?
Gnomeward Bound!

There are those who can, quite happily, do all of their day-to-day tasks working from nothing more than the command line: starting and stopping services, monitoring processes, reading and writing email and even creating text documents. It's a case of what you are used to. The desktop, for a lot of people, is their interface to the computer, rightly or wrongly. Taking such a fundamental position, it's not too surprising that great respect and consideration is given over to deciding 'the best' desktop to use. The Linux community is blessed by having a choice; the users of other, lesser systems are not so lucky. There are two major players, KDE and Gnome, as well as others, such as Enlightenment, which deserve a mention but are rarely seen as headline items in the packaging of the various distributions. Here we are going to look at the Gnome desktop, the default desktop for Red Hat installations, and some of the products that go with it.
In this article we present highlights of the Gnome desktop and of Ximian, the company that grew up around it. BY COLIN MURPHY
Gnome – the desktop
Figure 1: The default GNOME desktop from a SuSE 8 distribution
In Linux's formative years, the KDE desktop was the leader of the pack. Based on the Qt toolkit, by TrollTech, development for KDE required a more restrictive license than the GPL. Red Hat did not like the restrictive licensing and refused to ship KDE with its products, deciding instead to ship with a young upstart of a desktop system, Gnome. All this was back in 1998; much has changed since, especially with the licensing issues for the Qt development tools, but Red Hat have stood by their decision and remained loyal to Gnome, sharing much of the responsibility and development. The development ball was set in motion, but it was the creation of 'The Gnome Foundation' in 2000 that really gave it momentum. It is made up of organizations and industry leaders, including IBM, Sun, Compaq and the likes of Red Hat and VA Linux, who pledged allegiance and support to the Gnome project.
"Gnome continues to gain momentum; we needed a forum where the developers and corporate partners can come together to coordinate the continued development of Gnome. The support of these industry leaders will help us to achieve our dream of building a free, easy-to-use desktop environment that will be used by millions of people," said Miguel de Icaza, founder of the Gnome project. The development of the desktop alone is not really enough; you need programs and applications to run on that desktop. Gnome can boast a broad range of applications that are designed to run on top of the desktop. They work with other desktops too, but some of these are so ubiquitous that you can easily forget their pedigree and what the 'g' in front of their name really stands for. Even if you do not, by default, use Gnome, it is very likely that you will have used at least one of these applications.
Galeon

Galeon is a web browser, and only a web browser, something the developers are very proud of. They value the principles of simplicity and compliance. It calls on the Mozilla rendering engine, but gives a much cleaner user interface, for both clarity and speed.
Grip

The Gnome ripping tool, used for taking tracks from audio CDs when you want to convert them into MP3 or Ogg Vorbis files for more convenient management on a computer. The Gnome site describes it as a mature product.
Nautilus

This is the file browser that you will most likely use to look through and manage your files on the desktop. Customizable and themeable, this underpins the resolve of the Gnome developers to produce a coherent desktop. It has real power in the way developers of third-party applications can add to the range of features. In one instance, Nautilus has been configured to act as a front-end for the gPhoto program, which is another Gnome application.

Figure 2: When you need to take tracks from an audio CD, Grip is the tool to use

Figure 3: Highlighting its power and flexibility, here is Nautilus being used as a front end to gPhoto
GnPan

gnPan is a network news reader, similar in design to programs like Agent and Gravity on MS Windows. Recent developments in gnPan have included the facility to download multi-part messages, including those encoded with 'y-enc' as well as the more traditional but bandwidth-wasting 'uuEncode'. GnPan is also the only Unix newsreader to get a perfect score on the Good Net-Keeping Seal of Approval report.
Gnome 2

Gnome is about to make a big change. Gnome 2 is available for download but, so far, no major distribution has included it. Because of the long development cycle that the Gnome project has built around itself, the development team have been able to include a range of features that will improve the usability of the product, and also its performance. Time and effort have also been spent on improving ways to help with the development of applications, by building a powerful new framework for the developers to work in.
The graphical element of the desktop has had much work done on it. One of the most criticized points about the Linux desktop is the lack of antialiased fonts, which, in some cases, can make text unreadable, and certainly unpleasant to look at. Gnome 2 will put an end to this, but not at the expense of performance: the user will be able to configure how the antialiasing works and even limit its effect to certain sizes of fonts. Gnome does have an annoying flicker, especially noticeable when dragging a side bar; this is to be cured in Gnome 2. Enhancing the desktop icons has improved both their readability and their functionality, giving visual clues to the status of the task attached to each icon.

Eye candy, the somewhat unkind name given to the efforts of the developers to give their product some instant wow! factor, is often belittled by the power user, who might see this as a waste of effort, even at the expense of performance. Eye candy does play an important part in the initial acceptance of a desktop, especially for new users. Gnome 2 will have more power to produce eye candy, if the user wishes it. Images will now be placed onto the desktop backgrounds with full alpha channel support; this will enable the use of transparent, and semi-transparent, icons and other desktop features.

There will be some people who might bemoan the loss of some function or feature, but the developers realized that the rot was starting to set in and decided to put their foot down before it went through the floor. You will now be able to see, at a glance, all of the necessary features that make a desktop worthwhile, rather than having to hunt and pick through myriad features, many of which you didn't even know the purpose of, let alone use. With the realization of how cluttered Gnome had become for the user, they realized just who they were developing for. To prove Gnome's usefulness, it had to be used, not just developed. With this new approach in mind a set of Human Interface Guidelines was drawn up, with the aim of making sure that the features and applications were consistent in their approach. For details see developer.gnome.org/projects/gup/hig/draft_hig/index.html

The more invisible the user interface is, the more productive it will be: the user can get on with their work rather than get on and work the interface. They have also realized the need for the graphical shell, where everything a user needs to do can be done in the one place.

Speed, or the lack of it, has been another criticism of Gnome, along with it being a memory hog. Work has been done on this too, a boon to users of lower-spec systems. It has been reported that Gnome 2 can be installed and run on a machine with a P2-233 processor and as little as 96MB of RAM. It should be noted that the minimum suggested requirements for Gnome 2 are a P400 processor with at least 128MB of RAM.

The usability features are being made as widely accessible as possible, with Gnome drawing up another policy from the Gnome Accessibility Project. Accessibility means enabling people with disabilities to participate in substantial life activities that include work and the use of services, products, and information. Gnome Accessibility is defined as the suite of software services and support in Gnome that allows people to utilize all of the functionality of the Gnome user environment.

The last major feature to mention is the inclusion of the XML libraries, libxml2, providing access to a complete and standards-compliant markup language. Because of the many and fundamental changes that need to be made to have Gnome 2 running on existing installations, people may prefer to wait until their distribution releases an upgrade. It is possible for a user to try out the system for themselves, but it is not a 'friendly' upgrade path. The level of dependency on other packages and on new libraries proves to be the hardest hurdle to clear. Once you find a decent set of installation instructions you will have something to follow, and, thankfully, karubik.de/gig/2.0 has now appeared.

Improvements for Gnome 2

MENUS & PANEL
• Menus scroll when they get too big
• Smarter menus accommodate diagonal mouse movements

DIALOGS
• File selector will retain the original file name when changing directory
• New Run Program dialog with command completion
• Text fields include right-click menus for cutting, copying and pasting text

THEMES
• New stock icons and color palette
• Support for theming of stock icons
• CD Player and login screens are now themeable

APPLICATIONS
• Redesigned and easier-to-use Search Tool
• Brand new lightweight help application, Yelp
• Control center applications for controlling Gnome 2 properties have been greatly simplified and reduced in number

Figure 4: Galeon with its clear and open user interface

Figure 5: gnPan, Usenet news browser, viewing images made from multi-part messages

Figure 6: Gnome 2 desktop in action
Ximian background

Ximian grew from the success and the popularity of the Gnome project. Lots of talented architects and engineers were drawn to the efforts of this Open Source project, with its obvious commitment to Open Source ideals. The project was now developing more than just the desktop; more developers were coming on board, producing essential desktop productivity applications for UNIX, Linux and other free systems. The creation of Ximian, in October 1999, offered a unifying framework for the developers to build under, in order to help improve the tight integration between the desktop and applications. It has also helped bring about other open source products, a development tool base for Gnome, and a range of user services to aid in installation and upgrade. Ximian has also allowed others in the industry to adopt Gnome more readily as their desktop of choice. August 2000 saw partnerships form with companies like Sun Microsystems, IBM, Red Hat, HP, SuSE and others with the creation of the Gnome Foundation. This now allows for full co-operation and communication between some of the biggest players in the market. In turn, this allows for stronger and more robust development of core technologies vital to make Linux a competitive product. This is best highlighted by the recently announced Mono Project, a community initiative to develop an open source, Linux-based version of the Microsoft .NET development platform.

Range of products

Gnome still remains the core product of Ximian. Under the Ximian umbrella it can boast more than 500 software developers, of whom more than 100 are full-time, paid employees. The unpaid volunteer force is another example of the project's appeal to the Open Source community. Ximian engineers sit on the board of directors of the Gnome Foundation, and Ximian has a seat on the project's Advisory Board as well. The Advisory Board is comprised of leading computer manufacturers and software vendors, including IBM, Sun, HP, Red Hat, SuSE, and others.

Evolution

Ximian Evolution is an application offshoot from the Gnome tree. Ximian Evolution is a very powerful information management application suitable for both personal and workgroup use. It allows you to bring together all of your day-to-day information needs: email, calendaring and meeting scheduling, contact management and online task lists. Evolution has a good set of features which will help you to organize your personal data into a convenient form. One unique feature is vFolders, virtual folders with which you can create and save powerful contextual views of your email messages. The value of your information is only realized when you manage to pass it on to someone else, so, with Evolution, you have powerful collaboration software that connects Linux and UNIX users to the more popular corporate communications architectures. You will find support for all of the most useful communication standards, allowing you to exchange data with users on different platforms. These include SMTP, SMTP/Authorized, POP, IMAP and others. Migration is a big issue, and if you want to start using Evolution, you will want to make sure you can get at your old emails and the like. You will find that you can import mailboxes created with Netscape, Outlook Express, UNIX mbox, Eudora and other email managers. There is also support for peer-to-peer calendaring, where you and your colleagues can share dates and times in a seamless fashion. This even works with other applications on different platforms, so long as they conform to the iCalendar standard, which happens to include Microsoft Exchange and Lotus Notes. Addressbook details are handled by the popular LDAP protocol, with support for vCard as well. This should ensure that exchanging personal information with other users is easy.
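To give a feel for that vCard support, here is a minimal vCard 3.0 record of the kind an address book like Evolution's can import or export; the name, email address and phone number are invented for the example:

```
BEGIN:VCARD
VERSION:3.0
FN:Ada Lovelace
N:Lovelace;Ada;;;
EMAIL;TYPE=INTERNET:ada@example.org
TEL;TYPE=WORK:+44 20 7946 0000
END:VCARD
```

Any vCard-aware mailer or organizer should accept a record like this, which is what makes exchanging contact details across platforms practical.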
Connector

Ximian Connector builds on top of the usefulness of Evolution by giving you access to that most important of facilities: connection to Microsoft Exchange 2000 servers. With Ximian Connector installed, Ximian Evolution will function exactly like an Exchange 2000 client, but without the crashes. This will enable your users to access their email, personal and group calendars, address books and task lists from existing company Exchange 2000 servers. You therefore have a migration route should you be looking to move away from the Microsoft desktop while still relying on access to Exchange.
Apt4rpm

A recent development that takes the worry and stress out of installing RPMs which, on installation, complain that they need some dependency fulfilled. Apt4rpm will quietly go away and resolve all of these dependency issues, downloading the required packages if need be. For details please see linux01.gwdg.de/apt4rpm

Figure 7: Ximian Evolution
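To sketch how this looks in practice: apt4rpm points Debian's apt tools at an RPM repository via `rpm` lines in the sources list. The host and component names below are placeholders for illustration, not a real mirror; consult the apt4rpm site for the repository list appropriate to your distribution.

```
# /etc/apt/sources.list -- hypothetical apt4rpm entries
rpm     http://mirror.example.org/apt redhat-7.3/i386 base updates
rpm-src http://mirror.example.org/apt redhat-7.3/i386 base updates
```

Once a repository is configured, `apt-get update` followed by `apt-get install <package>` fetches the package and any RPMs it depends on, much as apt does on Debian.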
Figure 8: Ximian Connector showing folders
Red Carpet

The developments for Linux come thick and fast. New versions and new applications appear daily. The good side to this is that we get lots of things to play with; the down side is that it can be a nightmare to administer, even if your only concern is keeping up with the security patches. Ximian Red Carpet is a software management tool that will simplify and even automate all of the challenges faced in managing a Linux system, including version control, system updating and package conflict resolution. If you have a fast connection to the internet then Red Carpet Express would be of interest to you. Here you are provided with high-bandwidth access to Ximian applications plus third-party software for faster and automated installations and updates. This is most appealing to update freaks and to lesser mortals who still like to see new applications as quickly as possible.
Figure 9: Ximian Connector calendar
The needs of the corporate user are somewhat different, with practical management systems being a priority. Red Carpet CorporateConnect provides a centralized updating function. This also includes the facility for administrators to distribute their own in-house applications to their users, with speed and efficiency, through the Red Carpet interface. From an on-screen list of recently updated or added packages, the user can choose which ones to take, as well as having the option of removing any that are no longer serving their purpose. Red Carpet then controls and monitors the installation and manages any package dependencies that might arise as a result. Red Carpet offers various third-party channels from which to take software updates, in association with the Ximian Red Carpet Partner Program.
Ximian Desktop
With Ximian Desktop, you receive a tightly integrated package of Gnome applications with the Gnome desktop. It is a complete desktop package, and should be considered as a business solution for those looking for a unified corporate workstation. In the standard edition, which is available for download or for purchase on CD for convenience, you
Figure 10: Red Carpet manages the installation and updating of software
receive all of the applications that make for a productive workstation. Office products such as AbiWord and Gnumeric cater for your word processing and spreadsheet needs, Evolution provides your email and information management and Galeon allows you to browse the internet. For the corporate user, Ximian also provides a Professional Edition, which, most notably, includes StarOffice 6.0 from Sun. Support packages and structures are also available, ranging from web based ‘community’ support to ‘Corporate Gold Support’, which includes telephone support and software maintenance agreements.
The future
The Ximian Desktop, which grew up from the Gnome project, makes for a complete desktop solution – something some Linux critics would have you believe does not exist. Red Hat and other distributions are now realizing that Linux can offer a competitive and complete desktop solution, especially for their corporate customers, and they are just starting to commit themselves to the support and further development of this as a market leading product. ■
Links to the projects
Gnome & Gnome 2: www.gnome.org
Ximian: www.ximian.com
Galeon: galeon.sourceforge.net
gRip: www.nostatic.org/grip
Nautilus: nautilus.eazel.com
gPhoto2: www.gphoto.org
GnPan: pan.rebelbase.com
KNOW HOW
Alien
Debian Goes Extraterrestrial

Alien is a Perl program and requires Perl version 5.004 or better. You can call perl --version from the command line to discover what version is installed on your machine:

huhn@transpluto:~$ perl --version
This is perl, v5.6.1 built for i386-linux
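If a script needs to verify this prerequisite itself, dotted version strings can be compared with GNU sort -V. A small sketch follows; the helper name and the sample versions are illustrative and not part of alien:

```shell
# Compare dotted version strings using GNU sort -V.
# meets_minimum VERSION MINIMUM succeeds when VERSION >= MINIMUM.
meets_minimum() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# alien requires Perl 5.004 or better:
if meets_minimum "5.6.1" "5.004"; then
    echo "ok"
else
    echo "too old"
fi
```

Running the sketch with the versions shown prints ok, since 5.6.1 sorts after 5.004 in version order.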
Alien is a program designed for converting packages in third party formats to the format required by your own distribution for installation purposes.
The tool runs on most major distributions and can handle various package formats. In this month’s article we will be looking into the topic of converting “alien” software to known package formats with Debian. BY HEIKE JURZIK
To create RPMs, you will obviously need to install the Red Hat Package Manager ([1]). If you use apt to install Alien, any dependent packages will be installed at the same time:
Press the [y] key to confirm and Debian will get on with the job. Just one more note before we get down to the nitty gritty: Alien is still under development (this includes the latest version 8.12), so occasional errors may occur. Before you start converting really important packages such as init or libc with this tool, it is a good idea to find out whether your Debian version already offers the software you need in Debian package format.
From .rpm to .deb

A Debian package (.deb suffix) contains a range of information about its dependencies
transpluto:~# apt-get install alien
Reading Package Lists... Done
Building Dependency Tree... Done
The following extra packages will be installed:
  debconf-utils debhelper html2text librpm4 rpm
The following NEW packages will be installed:
  alien debconf-utils debhelper html2text librpm4 rpm
0 packages upgraded, 6 newly installed, 0 to remove and 156 not upgraded.
Need to get 1320kB of archives. After unpacking 4260kB will be used.
Do you want to continue? [Y/n]
on other packages. This feature ensures perfect integration of the new software, and allows you to remove it cleanly from your system, if required. However, the format is not used by any other distributions (except distributions based on Debian). Similarly, you will experience some difficulty if you try to install RPM formatted packages on Debian. Of course, there is normally no need for this, as Debian includes a variety of packages, and most new programs are quickly made available in the .deb package format. But if you do happen to need to install a third party RPM package, you can rely on alien for support. The simplest syntax for alien on the command line is alien package.rpm. You will need to have superuser (root) access
to convert a package; if not, the following error message will be displayed:

Must run as root to convert to deb format (or you may use fakeroot).
After successfully completing the conversion, alien issues the following message:

transpluto:~# alien mypackage.rpm
mypackage.deb generated
Before installing the package, you can check where its components will be stored by typing dpkg -c mypackage.deb. dpkg --info mypackage.deb provides details of characteristics such as the version number, dependencies, or even a description of
the software. If you intend to install the package, you may want to ensure that the installation will succeed under real conditions. To test this, type dpkg --no-act -i mypackage.deb, and the system will let you know if it finds any dependency issues. Everything OK? Next time you can omit the --no-act option and install the package without prior checks. If you are sure that you want to install the package without a prior check, you can set the -i flag (or the long form: --install) when converting the package:

transpluto:~# alien -i mypackage.rpm
Selecting previously deselected package mypackage.
(Reading database ... 53783 files and directories currently installed.)
Unpacking mypackage (from mypackage.deb) ...
Setting up mypackage (1.0.3-1) ...
No errors occurred during conversion and installation, but you may still want to
ensure that Debian can handle the third party package. To do so you can type dpkg -s mypackage. The command line output should be something along the lines of Status: install ok installed. By the way: you can use dpkg to deinstall any packages you have installed. If you use the -P option (abbreviation for --purge), you not only deinstall the software but also remove the configuration files completely. A simple command such as dpkg -r mypackage (for --remove) will only remove the package, leaving all the settings under /etc intact.
And vice versa?

Of course, you can use alien to create RPMs from Debian packages. To do so, use the --to-rpm parameter:

transpluto:~# alien --to-rpm mypackage.deb
mypackage.rpm generated
You can now install this package on those distributions based on RPM. If errors occur when you call rpm -i mypackage.rpm, the error may be to do with unresolved dependencies.
In contrast to Debian based systems, where apt will automatically perform a complete installation of the required packages, you will need to install the RPM packages manually (see [2]). Alien will also allow you to create tgz packages (parameter -t or --to-tgz) for Slackware, or pkg packages (parameter -p or --to-pkg) for Solaris. In addition, alien will not only run on Debian, but there is a version for RPM based systems. This tool goes by the name of alien-extra – for information and binaries see [4]. Of course, conversions of this kind can cause issues. You will often need libraries and discover that you either have the wrong version or do not have the library at all. So it makes sense to first check and see if the package is included in your own distribution before you start installing “alien” software on your machine. ■
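The conversion targets listed above map directly to alien's output flags. A tiny sketch of that mapping as a shell helper — the function itself is purely illustrative and not part of alien, which simply takes these flags on its command line:

```shell
# Map a target distribution family to alien's output flag.
# Illustrative helper only; the flag names match alien's documented options.
alien_flag_for() {
    case "$1" in
        debian)    echo "--to-deb" ;;
        redhat)    echo "--to-rpm" ;;
        slackware) echo "--to-tgz" ;;
        solaris)   echo "--to-pkg" ;;
        *)         echo "unknown"  ;;
    esac
}

alien_flag_for slackware   # prints --to-tgz
```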
INFO [1] http://www.rpm.org/ [2] http://rpmfind.net/ [3] http://www.kitenet.net/programs/alien/ [4] ftp://ykbsb2.yk.psu.edu/pub/alien/
POV-Ray
Front-Ends for POV-Ray
Well rendered

POV-Ray, the “Persistence Of Vision Raytracer” [1], is a unique ray tracing program. But its command line interface has caused users occasional headaches, as the parameters are numerous and quite confusing. The following article introduces a couple of tools that help you harness the power of POV-Ray by prompting you for critical parameters and launching the rendering process at the press of a button. Two of the candidates offer far more than just simple front-end functionality and provide you with a WYSIWYG display for creating complex scenes on screen. The other programs will unfortunately mean you have to tackle the POV-Ray scene description language. Box 1 describes how to install the raytracer. We tested all our candidates with POV-Ray 3.1 and the brand new POV-Ray 3.5.
POV-Ray gives you the power to create beautiful virtual worlds and the right front-end makes the tool easy to use, at the same time reducing development time and increasing the fun factor. BY FRANK WIEDUWILT
The Candidates

Our test candidates are Peflp and tclPov, which were programmed in Tcl/Tk, and their GTK counterparts gPov, PovFront, and Truevision. The KPovmodeler tool rounds off the field. Some of these front-ends are no spring chickens. Only KPovmodeler and PovFront are still being regularly updated, but bugfixes for all the programs mentioned here are published at irregular intervals. These tools aim to save you typing in the POV-Ray commands, either by prompting you for them in a dialog or by means of sliders; however, the functionality offered by the individual programs varies. Peflp, gPov and tclPov only offer common options. If you intend to delve deeper into POV-Ray's treasure trove, you might like to consider PovFront instead. KPovmodeler and Truevision stand up to comparison with Windows front-ends for the ray tracer. The decisive factor is their ability to help the user to compile scenes without prior knowledge of the POV-Ray scripting language.
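Whatever the GUI looks like, each front-end ultimately assembles a POV-Ray command line from the options you click together. The switches used below (+I, +O, +W, +H, +A) are real POV-Ray options, but the wrapper function is a hypothetical sketch of what a front-end generates, not code from any of the programs reviewed:

```shell
# Build the switch list a front-end might pass to x-povray.
# $1=scene file  $2=width  $3=height  $4=anti-aliasing threshold
povray_args() {
    echo "+I$1 +O${1%.pov}.png +W$2 +H$3 +A$4"
}

povray_args scene.pov 640 480 0.3
# prints: +Iscene.pov +Oscene.png +W640 +H480 +A0.3
```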
Peflp

Peflp, the “POV-Ray Front-End For Lazy People”, requires both a pre-installed
GLOSSARY
Box 1: Installing POV-Ray
WYSIWYG: The “What You See Is What You Get” model requires that the on-screen display closely resembles the final (printed) output.
Home directory: A directory used for storing files and configurations for a user. The shell can abbreviate the home directory to ~.
Tcl/Tk: Tcl is a scripting language that (in combination with the Tk GUI toolkit) can be used for developing GUI programs.
GTK: A C program library containing GUI elements, originally written for the image processing tool Gimp.
Path: The PATH environment variable contains the directories in which the OS will look for programs or scripts, allowing the user to omit the path.
Figure 1: Peflp
Tcl/Tk environment and the ImageMagick tool collection to convert pre-rendered scenes. Installation is simple, just expand the peflp074.tgz archive and su to root to copy the program file peflp to a directory in your path, for example /usr/local/bin. If after typing peflp & the program complains about not being able to find the defaultstamp.gif file, simply copy this file from the peflp archive to the ~/.peflp/stamps directory (you may need to create the directory prior to this step). When you first launch the program, you are also prompted to enter the path to POV-Ray. The POV-Ray file you want to render is one of the most important parameters, of course. Additionally, you can choose the
Figure 2: Peflp rendering a file
resolution (Figure 1) and opt to use Anti Aliasing or define quality features for the image you are creating. The option Mosaic Preview allows you to view a low resolution image of the result that is gradually enhanced. You can choose Settings / General Options to define the editor that you want to use to edit your POV-Ray files when you click Edit. You can also specify the program you want to use for converting the rendered images and the directory where you will be saving your work. It is also useful to render a small thumbnail, or Stamp. Peflp offers this option in the lower part of the screen. This allows you to gain a first impression of the lighting and composition without having to wait while a full-scale image is rendered. You can use the preview to select a part of the image that you want to render in a higher resolution – just click on Partial to do so.
The archive containing the current POV-Ray version 3.5, povlinux.tgz, is available on the project website [1]. Use the following syntax to expand the archive:

tar -xzvf povlinux.tgz

or alternatively use a program such as ark or guitar. After changing to the directory created by this process, povray-3.5, call the install script by typing ./install – you need to be root to run the script. This copies the required libraries to /usr/local/lib and the executables s-povray and x-povray to /usr/local/bin. s-povray uses the svga library and does not need an X Window system; x-povray was designed for use with X. Finally, working with user privileges, copy the file povray.ini from /usr/local/lib to .povray.rc in your home directory:

cp /usr/local/lib/povray.ini ~/.povray.rc
Peflp is a program without a lot of bells and whistles that does exactly what it promises: it simplifies the POV-Ray interface. However, this is not the program to choose if you want to fine tune a rendered scene; the feature set is far too small.
tclPov

The archive file, tclPov-0.4.1.tar.gz, is only a few kilobytes in size. It contains an installation script, install.sh, that you can run (as root) in the directory where you expanded the archive. You will be prompted for an installation directory (/usr/local/bin/tclpov makes sense). The files you need to run the program are then copied to the directory you specified.
Figure 3: Tclpov
You can launch the program using the following syntax:

/usr/local/bin/tclpov/tclpov &

If the bad interpreter message is shown, this means that the program could not find the Tcl interpreter wish8.3, which it expects in /usr/local/bin. Just type which wish8.3 to quickly discover where wish8.3 is hiding. Then, working as root, create a suitable link, such as:

ln -s /usr/bin/wish8.3 /usr/local/bin/wish8.3

Some users might find the tclPov GUI slightly over the top (see Figure 3), but the (albeit simplistic) editor that you can use for scene “programming” fills most of the screen. And the Syntax Highlighting feature should make your job easier. Use the Options menu to select settings (such as the resolution and anti-aliasing) for the rendering process. This menu also lets you convert the rendered image into a variety of graphic formats.

GLOSSARY
Syntax Highlighting: This refers to an editor highlighting commands, comments and any variables in a program language by using various colors.

A short online help describes how to use the program and explains all of the settings you need for POV-Ray.

gPov

gPov from the gPov-0.1.2.tar.gz archive is also by the developer of tclPov. The installation of this C program keeps closer to home ground: after pre-installing both the GTK library and the accompanying header files, run the make command in the gPov-0.1.2 directory to create the new program. While logged on as root, simply call make install. You can then launch the program by typing gPov & (Figure 4). Again an integrated editor is available, but in this case do not expect too many features, not even syntax highlighting. You are prompted for the rendering parameters, but only for critical values such as the quality, anti-aliasing and image size. You can save any files you create in jpeg, bmp, png, or gif format. The program dropped to the bottom of the division due to its instability, occasionally crashing with a memory access error after clicking on a button.

Figure 4: gPov

PovFront

Following this disappointment, it is time for a far more fully featured program: PovFront. In addition to the Gimp toolkit, PovFront requires the libgtkglarea library. After expanding the source archive (povfront-1.3.5.tar.gz), use the ./configure syntax in the newly created povfront-1.3.5 directory and then type make in a shell to compile the program. In our tests the program was far more stable when compiled without GNOME support. To do this you need to set the --disable-gnome flag when launching the configuration script:

./configure --disable-gnome

You can use the third command in this group, make install, to finally install the program – you need to be root to do so. The povfront & syntax calls the GUI shown in Figure 5. PovFront provides far more settings than any other program we have looked at so far. The options are available via tabs in the lower part of the main screen. The Output tab allows you to select the image size, the output format and the section that you want to render. Use the Quality option to select the quality and color depth for the image and to select or deselect anti-aliasing. Use Library to define the paths to the libraries which contain the elements that POV-Ray will need to access. Click on the Render button to start the ray tracer. A separate window allows you to view the current state of the image. You can click on Abort the last job to cancel the last rendering job, while Job control opens a window with a list of the jobs POV-Ray is currently performing. What is missing is an integrated editor

Figure 5: PovFront
Figure 6: Main working screen of KPovmodeler
that would allow the user to open POV-Ray files for easy editing. PovFront was not exactly stable during our tests and often issued memory access errors and crashed when we clicked on certain buttons.
KPovmodeler

The tools we have covered so far were useless without POV-Ray scene files being created elsewhere. KPovmodeler goes a few steps further, providing both a front-end for the POV-Ray command and allowing you to compile scenes. The program website only offers the sources at present – you will need to compile the KDE software yourself. You will need a current version of Qt 3.0.x and the kde3-kdelibs package, as well as the OpenGL, glut, glx, and glu libraries plus headers. Before compiling, you need to expand the source file archive. Type tar -xzvf kpovmodeler-0.2.tar.gz to do so, then change to the directory, kpovmodeler-0.2, and type the following commands in this order: ./configure, make, and (again as root) make install. After completing these steps you can run the program by typing kpovmodeler in a shell (Figure 6).

The upper screen area of the program contains some toolbars that provide access to functions for scene creation and rendering. Below the toolbars and on the left you will see a tree view showing the objects that belong to the current scene. The area on the lower left allows you to edit the selected element. The lower right area provides four different views of the current scene. KPovmodeler is not just for editing existing POV-Ray files, it also helps you be creative while defining new scenes. All of the major geometric elements are available. Various surfaces and structures can be applied to them. A selection of backgrounds is also available. After you define a scene you can click on View / Render in the menu to start rendering the image with POV-Ray. The results will be shown in a separate window (Figure 7).

The KDE modeler provides a useful menu item, Settings / Configure KPovModeler, where you can define individual settings; this is where you will want to define screen colors and starting sizes for the objects you will be inserting. The program is suitable for creating complex scenes and provides you with a useful interface for inputting objects – even going through the list of features is beyond the scope of this article. You can create and modify scenes without writing a single line of POV-Ray code. Although the version we tested was only 0.2, the program was extremely stable and did not crash once during our test series. The authors are looking for help with the program documentation. As soon as
Figure 7: KPovmodeler has completed the rendering process – the output window shows the results
this becomes available, KPovmodeler may be the solution for visual rendering tasks on Linux.
Truevision

Truevision, which requires GTK and GNOME, is already firmly established in this niche. The program website contains a source archive (truevision-0.3.10.tar.gz) that you can expand using the following syntax:

tar -xzvf truevision-0.3.10.tar.gz

As usual, follow the installation trinity, ./configure, make and make install, in the expanded source directory to compile and install the program – you will need the header files, of course. After following these steps, you can type truevision & to launch the program. The graphic user interface contains no unpleasant surprises, providing a menu and toolbar in the upper area and various views of the model on the lower left. The dialog boxes for inserting and editing individual objects are available via the tabs on the lower right (Figure 8). The Create tab leads to a tree view of the
available objects. Here you can click on Create to insert the selected object. Materials provides you with a list of the predefined materials, where you can opt to create your own surfaces or select an available patina. The Edit page shows you the characteristics of the selected object and provides ample opportunity for fine tuning. There are fewer textures available than in KPovmodeler, but you can still create interesting models. Editing individual objects means toggling back and forth between various tabs, and that can be extremely time consuming if you are working on a complex scene. Again, Truevision turned out to be somewhat unstable and crashed when we tried to save a scene. That meant all our work had been to no avail, as the program was unable to load the files we had saved up to that point. Unfortunately, the program also lacks online help or a manual. Although you can create and edit scenes, this program is so complicated that the initial learning
Figure 8: Main Window in Truevision
curve will be very steep without external assistance.
Summary

For producing the occasional POV-Ray scene, Peflp is definitely a good choice, as it is stable, clear and easy to use. PovFront offers all the settings that a POV-Ray user could possibly wish for, but the program tends to crash at irregular intervals, which makes productive use impossible. gPov and tclPov both have an integrated source editor that you can use for scene creation; however, our impression was that both programs need some polishing. KPovmodeler and Truevision are equally suited to scene compilation. Your learning curve will be fairly steep, as they both lack a manual. KPovmodeler was just a nose in front in the stability stakes and is thus highly recommended. ■
INFO
[1] POV-Ray: http://www.povray.org/
Figure 9: Complex scenes comprising a large number of elements can be created in KPovmodeler and rendered in POV-Ray
Table 1: Overview of POV-Ray Front-Ends

| Program              | Peflp                 | tclPov                    | gPov                    | PovFront                       | KPovmodeler         | Truevision                   |
| Author               | Xavier Bourvellec     | Chris Hammer              | Chris Hammer            | Philipe P. E. David            | Andreas Zehender    | Vincent le Prince            |
| License              | GPL                   | GPL                       | GPL                     | GPL                            | GPL                 | GPL                          |
| Website              | mogzay.multimania.com | www.nasland.nu/tclpov.php | www.nasland.nu/gpov.php | perso.club-internet.fr/clovis1 | www.kpovmodeler.org | truevision.sourceforge.net   |

Installation
| Source tgz           | x       | x       | x       | x                         | x                 | x                            |
| rpm                  | -       | -       | -       | -                         | -                 | -                            |
| deb                  | -       | -       | -       | -                         | -                 | -                            |
| Additional Libraries | Tcl/Tk  | Tcl/Tk  | GTK     | GTK, optionally GNOME 1.4 | Qt, KDE 3.0.x (1) | GTK, libgtkglarea, GNOME 1.4 |
| Interface language   | English | English | English | English                   | English           | English/Other                |

Functionality
| Integrated Text Editor           | - | x | x | - | - | - |
| Integrated Preview               | x | - | - | - | - | - |
| Store and Convert Finished Image | x | x | - | x | x | x |
| Graphics Editor                  | - | - | - | - | x | x |

Help
| Online Help | - | x | - | - | - | - |
| Manual      | - | - | - | - | - | - |

(1) KDE Version 3.1 will include KPovmodeler as part of the kdegraphics package.
DHCP
The first three articles in this series covered all the basics of configuring Linux boxes as network hosts. The emphasis throughout has been on using command line tools and editing the text configuration files. If you have read all three articles you will know that network configuration on Linux boxes is actually very simple. However, as your network grows it becomes an increasing chore just to keep all those configuration files up to date, particularly if you make significant changes to the structure of your network. Add a nameserver to your network, or change the IP address of a gateway router, and you will be faced with the task of editing the configuration files on each and every network host. Life would be much easier if only it were possible for your computers to fetch their configurations from a central source. Not only would it make the job of setting up new machines easier but you could make network design changes centrally and have them propagate to all your machines. Laptops, PDAs and other transient devices could be connected to your network and configure themselves automatically. The good news is that it is possible, using the Dynamic Host Configuration Protocol. This article will show you how to set up DHCP both on the server and client sides and how you can update your DNS information dynamically when IP addresses are assigned through DHCP.
Linux Networking Guide: Part 4
DHCP A simple guide to configuring Linux networks from the command line. This final article in the series shows how to use DHCP to configure network hosts dynamically. BY BRUCE RICHARDSON.
Overview

When a dynamically-configured host is first connected to a network it has no IP address, nor any notion of the local subnet address, netmask etc. So it sends a broadcast request for configuration details. If there is a DHCP server on the local subnet it sends a reply, allocating the host an IP address and passing on any other network configuration parameters
Finding a Network Card's MAC Address

Many NICs come with their MAC address printed on a label on the card. Another way to find the MAC address is to configure a network interface for it (by, say, getting it to take a dynamic address from DHCP) and then running ifconfig. In the details returned by ifconfig, the MAC address for each configured card is given in the HWaddr field.
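The HWaddr field can also be pulled out of ifconfig's output programmatically. A small sketch follows; the interface line used here is sample text in the old-style ifconfig format, not output from a real host:

```shell
# Print the HWaddr (MAC) field from old-style ifconfig output on stdin.
get_mac() {
    awk '/HWaddr/ { print $NF }'
}

sample='eth0      Link encap:Ethernet  HWaddr 00:08:20:81:77:82'
echo "$sample" | get_mac
# prints: 00:08:20:81:77:82
```

In practice you would pipe the real command through the filter, e.g. ifconfig eth0 | get_mac; note that newer tools such as ip link format their output differently.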
that it has been supplied with. The dynamically-configured host is now ready to participate normally on the network. The server can pass on more than just the parameters of the host’s network interface. It can store a wide range of network related information, including the local domain name, addresses of DNS servers, routers, WINS servers and much more. It is entirely up to the DHCP client software how much of this is then used to configure a host.
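On the client side, the ISC dhclient can be told which of these parameters to request and apply. Below is a hedged sketch of an /etc/dhclient.conf fragment; directive support varies between client versions, and the hostname is a placeholder:

```
# Hypothetical /etc/dhclient.conf fragment (ISC dhclient).
send host-name "marx";
request subnet-mask, broadcast-address, routers,
        domain-name, domain-name-servers;
```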
Leases

When a DHCP server assigns an IP address to a host it is not a permanent allocation. The server has available a
pool of addresses to which it grants leases. A lease has a set lifespan and must be renewed before it expires if the host is then to retain the same IP address. Typically, DHCP client software will attempt to renew a lease once it is halfway through its lifespan and will repeat the attempt at regular intervals until it is either successful or the lease expires, after which time a new lease must be requested. This leasing model allows you to have a pool of addresses smaller than the total number of hosts to be connected, if you know that only a certain fraction of those hosts are likely to be connected at any one time.
It is also possible to associate, using DHCP, a fixed IP address with a particular hostname and/or network card (which latter option, since network cards tend not to flit from device to device, has the effect of associating the IP address with a specific piece of hardware). So if the DHCP client specifies a hostname in its request or if the request originates from a network card with a specific MAC address, then the associated fixed address may be returned.
Choice of Lease Lifespan

DHCP client software may specify a lease lifespan when requesting a lease, but the server has both a default lifespan setting for requests that don't specify one and a maximum setting that overrides any request for a greater span. The values you assign to these settings will have a significant effect on your network. Firstly, a shorter lifespan means more frequent renewals and so more DHCP-related noise on the network (as well as making your network more vulnerable to a failure in your DHCP server). Secondly, it is only when renewing a lease that the client software checks for other network configuration details. So if you set a seven day lease and then give a new set of name server addresses to the DHCP server, it will typically take at least three and a half days for that information to propagate throughout the network. Sysadmins are typically faced with a tension between optimum network performance and propagation speed, which only time, experiment and experience can resolve.
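The propagation delay follows directly from the renewal rule: clients typically make their first renewal attempt at half the lease lifespan (the T1 timer). A quick sanity check of the seven-day example; the variable names are just for illustration:

```shell
# A 7-day lease, with renewal first attempted at half the lifespan (T1).
lease=$((7 * 24 * 3600))   # 604800 seconds
t1=$((lease / 2))          # worst-case wait before a client renews
echo "$t1"                 # prints 302400, i.e. 3.5 days
```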
The Science Bit

DHCP requests and replies are sent using UDP. The server listens on port 67, the client on port 68.
Some Drawbacks to DHCP

When using DHCP on your network, certain questions arise about reliability and security. On the reliability side, if your DHCP server fails then your entire network may just grind to a halt. MS Windows workstations are particularly bothersome in this situation and will assign themselves a new address on a reserved subnet, a feature called Automatic Private IP Addressing.
More seriously, the DHCP protocol makes no provisions for security. When a DHCP client sends a broadcast request it accepts the first reply it gets. A malicious person could then subvert hosts on your network by connecting a laptop running its own DHCP server. This isn’t quite as calamitous as it sounds, since queries are only sent out by newly connected hosts or those which haven’t been able to renew an existing lease. Your only protection is to run some kind of Intrusion Detection Software such as Snort. One particular possible security hole arises with fixed IP address assignments (as described in the Section called Leases). If no MAC address has been associated with the assignment then the DHCP server has no way of verifying that the requesting host has any right to the hostname it specifies. So it is wise always to specify a MAC address where practical. Because of these issues, it isn’t wise to have your servers configure their network interfaces through DHCP. Leave that for your workstations and configure your servers manually. Otherwise, your entire network will be vulnerable to attack.
The DHCP Server In this article I will be using the server software that is available from the Internet Software Consortium, which is overwhelmingly the most commonly used on Linux. It should be available as one of your distribution’s core packages or you can download the source from the ISC website [1]. On the website you will be able to find source for versions 2.x and 3.x. Examples given here will work with either.
Configuration The DHCP server has one configuration file, whose default location is /etc/dhcpd.conf (you can specify a different file at runtime by passing a parameter on the command line). An example is shown in the DHCP Server Config boxout. As you can see, each line is terminated by a semi-colon, and sub-options are contained within braces. First come the global options. The lease lifespan (measured in seconds) has here been set to one day. There follow settings for the local domain name and DNS servers. The subnet-mask global option provides a default for any subnet which does not have a netmask specified in its own declaration.
KNOW HOW
DHCP Server Config
# /etc/dhcpd.conf
#
# Option definitions common to all supported networks...
default-lease-time 86400;
max-lease-time 86400;
option domain-name "example.org";
option domain-name-servers 192.168.10.1, 192.168.10.5;
option subnet-mask 255.255.255.0;

# Options for each subnet
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.101 192.168.10.200;
    option routers 192.168.10.1;
}
subnet 192.168.11.0 netmask 255.255.255.0 {
    range 192.168.11.51 192.168.11.90;
    range 192.168.11.200 192.168.11.254;
    option routers 192.168.11.1;
}
subnet 192.168.12.0 netmask 255.255.255.0 {
}

# Options for specific hosts
host marx {
    hardware ethernet 00:08:20:81:77:82;
    fixed-address 192.168.10.51;
}
host engels {
    fixed-address 192.168.10.52;
}
KNOW HOW

www.linux-magazine.com September 2002

Next we have some subnet declarations, each giving specific options for a subnet to which this machine is connected. The first two declarations each allocate a range of IP addresses to the pool for that subnet and also give the router IP address. The third subnet declaration is empty, indicating that the server will not respond to requests from that subnet. Important: there must be a subnet declaration for each subnet for which the host has a configured network interface, unless the server was set at runtime to listen only on specific interfaces (see the Section called Running the Server). In the latter case there must be a declaration for each specified subnet. Finally, some host declarations, which specify fixed IP addresses for particular hosts. The first declaration specifies a MAC address and so will allocate the IP address to any request coming from that network card, whether or not the request includes the "marx" hostname. In contrast, the second declaration means that 192.168.10.52 will be assigned to any request including the "engels" hostname, even if an existing lease has already been granted to another machine using the same name. Caution: any fixed IP addresses assigned in host declarations must not be from within ranges that have been assigned to subnet pools.
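That caution is easy to check mechanically. The following short script is not part of the dhcpd toolset, just an illustrative sketch: the pool ranges mirror the example config, and it tests whether a fixed address falls inside any of them.

```python
import ipaddress

def in_any_range(ip, ranges):
    """True if `ip` lies within any (low, high) address range, inclusive."""
    addr = int(ipaddress.ip_address(ip))
    return any(int(ipaddress.ip_address(lo)) <= addr <= int(ipaddress.ip_address(hi))
               for lo, hi in ranges)

# Pool ranges taken from the example dhcpd.conf
pools = [("192.168.10.101", "192.168.10.200"),
         ("192.168.11.51", "192.168.11.90"),
         ("192.168.11.200", "192.168.11.254")]

print(in_any_range("192.168.10.51", pools))   # marx's fixed address: safe (False)
print(in_any_range("192.168.10.150", pools))  # inside a pool: would clash (True)
```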
Running the Server You can launch the server directly from the command line, as in this example: /usr/sbin/dhcpd -cf /etc/dhcp/dhcpd.conf eth0 eth1
In this case the daemon has been told to use an alternate configuration file and to listen only on interfaces eth0 and eth1. In practice, however, it is best to stop and start the daemon using the init scripts provided with the package. On Debian, for instance, you would restart the daemon thus: /etc/init.d/dhcp restart
If you wanted to pass extra parameters to the daemon you would have to edit /etc/default/dhcp. If you are using another distribution, please consult your distribution’s documentation for details. The daemon must be restarted for any changes to the configuration file to take effect.
Pump Config File
# /etc/pump.conf
device eth0 {
    nodns
}
script /usr/local/sbin/dhcp

The Lease File The DHCP server keeps a record of the current leases in a text file (on Debian this is /var/lib/dhcp/dhcp.leases, with a backup called dhcp.leases~). The daemon reloads it on start-up and will fail if it can't find it (which can happen if the daemon fails at a crucial point). If this happens, copy the backup file back to dhcp.leases and restart the daemon. Each record in the leases file records the start and end date/time, MAC address, hostname (if given) and IP address. This can be of use either to other applications or to your own scripts. One example is given in the Section called Dynamic DNS Updates.
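As a sketch of the kind of script just mentioned, the following extracts those fields from lease records with regular expressions. The field names follow dhcpd's lease syntax; the sample record itself is invented for illustration.

```python
import re

def parse_leases(text):
    """Extract IP, start/end times, MAC and hostname from dhcpd.leases content (simplified)."""
    leases = []
    for ip, body in re.findall(r'lease\s+(\S+)\s*\{(.*?)\}', text, re.S):
        entry = {"ip": ip}
        for key, pattern in [("starts", r'starts \d ([\d/]+ [\d:]+);'),
                             ("ends", r'ends \d ([\d/]+ [\d:]+);'),
                             ("mac", r'hardware ethernet ([0-9a-fA-F:]+);'),
                             ("hostname", r'client-hostname "([^"]*)";')]:
            m = re.search(pattern, body)
            if m:
                entry[key] = m.group(1)
        leases.append(entry)
    return leases

sample = '''lease 192.168.10.101 {
  starts 2 2002/09/03 10:00:00;
  ends 3 2002/09/04 10:00:00;
  hardware ethernet 00:08:20:81:77:82;
  client-hostname "marx";
}'''

print(parse_leases(sample))
```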
Configuring the Client For a Linux box to configure its network interfaces using DHCP, it requires a DHCP client. The two most commonly used are pump, a simple client developed by Red Hat, and dhclient, a fully featured client from the Internet Software Consortium. Both work in the same simple way: when the client is run it sends out a series of broadcast requests until a valid reply is received. The client then configures the network interface and other parameters specified by the DHCP server, after which it runs as a daemon in the background, sending renewal requests as necessary. Both of the clients can be configured further to specify how they use the data returned to them by the DHCP server and to run a script on the granting or renewal of a lease.
pump pump doesn't support the full range of configuration options that can be passed through DHCP and isn't as flexible as dhclient, but it is adequate for most set-ups. It is the default DHCP client for many distributions and there should be a package available for yours. Once installed, configuring an interface using pump can be as simple as this: /sbin/pump -i eth0

This sets pump to manage eth0. As soon as it successfully obtains a lease it will configure the interface. You can modify pump's behaviour by passing it further command line options
or by editing its configuration file, /etc/pump.conf. The example file shown in the Pump Config File boxout tells pump not to rewrite /etc/resolv.conf if it receives DNS configuration information with the lease for eth0, and to run the user-written script /usr/local/sbin/dhcp whenever a lease is granted, renewed or released. The script is passed the action ('up', 'renewal' or 'down'), the IP address and the interface name as parameters.
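A minimal hook script along those lines might look like this. It is a hypothetical sketch, not part of pump: only the three action names and the parameter order come from pump's documented behaviour, and the messages are invented.

```shell
#!/bin/sh
# /usr/local/sbin/dhcp - sketch of a pump lease-event hook.
# pump calls it with: $1 = action ('up', 'renewal' or 'down'),
#                     $2 = IP address, $3 = interface name.
handle_lease_event() {
    case "$1" in
        up)      echo "lease granted on $3: $2" ;;
        renewal) echo "lease renewed on $3: $2" ;;
        down)    echo "lease released on $3" ;;
        *)       echo "unexpected action: $1" >&2; return 1 ;;
    esac
}

if [ $# -gt 0 ]; then
    handle_lease_event "$@"
fi
```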
dhclient Using dhclient to configure an interface is just as simple as pump: /sbin/dhclient eth0
You can also modify dhclient's behaviour by editing /etc/dhclient.conf, though in most cases it will function perfectly well without a configuration file. dhclient's configuration options are much more complex and flexible than pump's, as we show in the dhclient Config File boxout. The global options specify firstly that dhclient should try to obtain a lease for 60 seconds before giving up, and secondly that it should wait a further 30 seconds before trying again. The interface declaration sets options for dhclient to use when obtaining leases for the eth0 interface. In this case, dhclient should identify the hostname as "marx", request an hour-long lease and add 127.0.0.1 to the list of name servers it receives from the server. The request option specifies what information dhclient should ask for, and the require option tells dhclient to reject entirely any response which doesn't include a subnet mask and a list of name servers. It is possible to have dhclient run user-defined scripts when either obtaining or renewing leases, but as this is a more complex affair than with pump you should read all the man pages that come with the dhclient package before you attempt it. Your distribution will have a dhclient package and you can also get the source code from the ISC website (see the Info boxout at the end of this article).
Doing it the Easy Way Thankfully, you rarely need to bother with any of the above complexity, nor with running the DHCP client yourself. In all the major distributions you simply
dhclient Config File
# /etc/dhclient.conf
timeout 60;
retry 30;
interface "eth0" {
    send host-name "marx";
    send dhcp-lease-time 3600;
    prepend domain-name-servers 127.0.0.1;
    request subnet-mask, broadcast-address, routers,
        domain-name, domain-name-servers, host-name;
    require subnet-mask, domain-name-servers;
}
have to specify in the network config files that an interface should use DHCP. When the network interface is brought up (for example using the ifup command), the networking scripts will use whichever of the clients is installed. In the Example Interface Config Files boxout you can see an example of how this is done on Debian and Red Hat.
Example Interface Config Files
Debian config file:
# /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp

Red Hat config file:
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp

Dynamic DNS Updates Historically, one drawback to configuring network hosts dynamically has been that their details are not stored in DNS. The DNS standard now, however, includes a mechanism for sending updates to a DNS server. This makes it possible to update your DNS records to reflect the leases given out by your DHCP server. In configuring your network for DDNS it is possible for the DHCP server and/or the client to send the updates to the name server, and to use secure keys for protection. For simplicity's sake, this example allows only the server to send updates and does not use security.

Configuring Bind The first step is to configure your name server to allow dynamic updates. For BIND 8.x, this can be done as shown in the BIND Configured for Dynamic Updates boxout. In this case, the BIND config file from the previous article in this series has been modified to allow updates to the main domain. The alteration is simply the two allow-update directives, which tell the respective forward and reverse zones that they should accept updates from 192.168.10.6 (the address of the DHCP server).

BIND Configured for Dynamic Updates
# /etc/named.conf
options {
    directory "/var/cache/bind";
};
zone "." {
    type hint;
    file "/etc/bind/db.root";
};
zone "internal" {
    type master;
    file "db.internal";
    allow-update { 192.168.10.6; };
};
zone "0.0.127.in-addr.arpa" {
    type master;
    file "db.root";
};
zone "10.168.192.in-addr.arpa" {
    type master;
    file "db.10.168.192";
    allow-update { 192.168.10.6; };
};

Configuring DHCPD 3.x If you want the DHCP server to do the updates itself, you need version 3.x. You should add the following options to dhcpd.conf (altering the values to suit your own network):

ddns-domainname "example.org";
ddns-update-style "interim";
deny client-updates;

zone example.org. {
    primary 192.168.10.1;
}

zone 10.168.192.in-addr.arpa. {
    primary 192.168.10.1;
}

The primary setting gives the IP address of the name server to send the updates to.

Working with DHCPD 2.x If you have DHCPD 2.x then the DHCP server itself cannot perform updates. There are alternatives, however. Stephen Carville has written some perl scripts that can be used to monitor the dhcpd.leases file and send updates using the nsupdate binary from the BIND package [2].
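Such scripts drive nsupdate with a small command script. A manual update of this kind looks roughly as follows; the hostname and addresses here are illustrative, only the command keywords (server, update delete, update add, send) are nsupdate's own.

```
server 192.168.10.1
update delete laptop1.example.org. A
update add laptop1.example.org. 86400 A 192.168.10.103
send
```

Feeding these lines to nsupdate (for example via its standard input) deletes any old A record for the host and registers the new address with the name server given in the server line.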
Endnotes This article, the last in the series, has shown you how to set up a DHCP server, how to configure workstations using DHCP and how to set up dynamic DNS updates. The four articles should provide you with all the information you need to set up simple Linux networks. Hopefully, they also provide enough tasters to make you ambitious to try greater things with your systems. ■
INFO [1] ISC DHCP tools: http://www.isc.org/products/DHCP/ [2] Stephen Carville’s DHCP-DNS: http://www.heronforge.net/~stephen/DHCP-DNS/dhcp-dns.html [3] Secure DDNS HOWTO: http://ops.ietf.org/dns/dynupd/secure-ddns-howto.html
SYSADMIN
Charly’s column
Quotatool: The Sysadmin’s Daily Chores
Part Taker

From time to time you might hear that disk quotas are outdated, but views like this normally come from people who have never had the dubious pleasure of maintaining a server with a few hundred users that collect MP3 files. BY CHARLY KÜHNAST

A disk quota restricts the hard disk space on a (file) server for a user or group. There are two distinct limits: the soft limit, which may be exceeded for a certain time period, and the hard limit, which may not be exceeded under any circumstances. When a user hits this limit, the system will refuse to perform write operations in the restricted area, instead presenting the user with a quota exceeded message. The period during which overusage is tolerated is referred to as the grace period. If you want to activate quotas, you must first ensure that your kernel supports them, i.e. the kernel must be compiled with quota support enabled. In addition, you will need to set the usrquota or grpquota parameter for the corresponding mountpoint in /etc/fstab, for example:

/dev/hda4 /home ext3 defaults,usrquota,grpquota 1 1

The following tools are used for quota administration: quota, quotaon, quotaoff, quotacheck, repquota and edquota. Too complicated? Sure, but luckily there is an easier way: Quotatool [1]. Quotatool version 1.2.1 is currently available for Linux, Solaris and AIX. A tarball of the tool is available, and after expanding it, compilation should be child's play if you just keep to the standard ./configure; make; make install. And this is how you set a quota for the user hugo's home directory:

quotatool -u hugo -b -q 50M -l 70M /home

The -b parameter indicates a block limit. You can replace -b with -i to set an inode limit. The -q 50M entry sets the soft limit to 50 Mbytes, and -l 70M sets the hard limit to 70 Mbytes. Finally, you need to indicate the mountpoint that will be the root for this limit. You can apply quotas to complete partitions (for example /dev/hda4), or alternatively refer to the corresponding mountpoint, that is, /home in this case.

Setting Group Quotas
Follow the same pattern to set group quotas. You can use the following syntax to apply the same limits defined in our previous example to the members of group users:

quotatool -g users -b -q 50M -l 70M /home

And this is how you define the grace period:

quotatool -u -b -t "1 week" /home

The parameter -t "1 week" is critical. The following alternative units of time are available: sec, min, hour, day, and month. You can only set two grace periods per type (user quota and group quota): one for the block limit and one for the inodes. So you will not be able to define different grace periods for two members of a group whose home directories are in the same partition. And that is why you do not need a user name for the -u parameter in the previous example – the time limits you set apply to everyone. However, the grace period can be extended on request. Imagine that the user Hugo needs an extension of the soft limit, because the limit of one week is too short. After bribing the Admin (with a large drink – Admins are only human after all), Hugo might ask his resident connoisseur to type the following:

quotatool -u hugo -b -r /home

and thus extend the grace period. This does not mean a week's extension, but simply resets the grace period. The user will be happy and the Admin can take care of more important business – like finishing off that drink, for example. ■

THE AUTHOR
Charly Kühnast is a Unix System Manager at the datacenter in Moers, near Germany's famous River Rhine. His tasks include ensuring firewall security and availability and taking care of the DMZ (demilitarized zone). Although Charly started out on IBM mainframes, he has been working predominantly with Linux since 1995.

INFO
[1] http://devel.duluoz.net/quotatool
Debian APT
Debian's Advanced Package Tool

Packman

APT is a powerful front-end for the Debian GNU/Linux package manager dpkg. This article shows you how to use APT for daily tasks. BY MARTIN LOSCHWITZ

Although Debian's package management program dpkg is quite powerful, at times it provides its users with too little support – just like rpm. For example, the Debian package manager does not resolve dependencies between packages, and you will not find a mechanism for automatically updating previously installed software. The first attempt to resolve these issues dates back to Debian version 0.93 R6, with dselect providing an interactive front-end for dpkg. dselect later developed into a genuine all-round tool that resolves package dependencies, updates pre-installed packages (if required) and generally provides a whole bunch of useful functions. But as the functionality and thus the number of packages in Debian increased, dselect became increasingly complex – with usability issues even for experienced users. Obviously an alternative was required, a tool that could perform the same tasks as dselect but still provide ease of use. Version 2.1 (alias Slink) of Debian saw the introduction of APT, the Advanced Package Tool. Just like dselect, APT is a front-end for dpkg and should not be seen as a replacement for the Debian package manager. In contrast to dselect, you can use APT to manage dpkg from the command line. The tool provides a home for a collection of useful programs designed to handle multiple tasks: apt-get installs and removes packages, automatically resolving any dependencies, while apt-cache is ideally suited to browsing package lists, and apt-zip allows you to keep computers that are not attached to the Internet up to date without too many headaches. This article first looks into the basic functions of APT, and shows you how to use apt-get and apt-cache for your daily
work. It goes on to introduce three more programs – auto-apt, apt-file, and aptitude – that are not part of the APT package itself, but extremely useful add-ons. Table 1 shows an overview of the most important functions of all the programs belonging to the APT suite.
Package Sources Just like dselect, APT refers to an internal database of information on the whole range of packages that are available for the system. With respect to the package database, apt-get is probably the most important tool in the whole APT group: it ensures that your package lists are kept up to date. But apt-get can do far more than simply maintain the package database – the tool is also an intelligent download manager, an all-round talent that downloads packages from given addresses, ensures that any dependencies are complied with and hands the packages over to the Debian package manager, dpkg, where they will be installed. You will need to perform some simple configuration tasks to maximize the power of apt-get. Use the /etc/apt/sources.list file to enter the source paths of the so-called index files that apt-get will later use to generate the index database. To edit the file you can use:
• a normal text editor (vim, emacs, nano),
• the apt-setup program, or
• the apt-cdrom program (if working with CD-ROMs).
The apt-setup menu is shown in Figure 1. When prompted, you can say yes to both of the questions on non-free and contrib software. According to Debian, a software program is non-free if it contravenes the Debian Free Software Guidelines [1]. The contrib category is reserved for programs that are free themselves, but depend on software from the non-free category.
Figure 1: Defining source paths for installation packages. apt-setup helps the root user to create the APT configuration file /etc/apt/sources.list.
If you intend to use CD ROMs as your installation source, you simply have to place the CDs into your CD ROM drive, ensuring that you keep to the correct sequence, type apt-cdrom add as root, and then let the program take care of all the remaining steps.
Sources List You may not be able to avoid some manual editing of the sources.list file. Before you start, you should familiarize yourself with the syntax (see Listing 1 for an example). Each line conforms to the following syntax: Type URI Distribution [Cat1] [Cat2] ...
Listing 1: sources.list Example
deb ftp://ftp.uk.debian.org/debian stable main non-free contrib
deb-src ftp://ftp.uk.debian.org/debian stable main non-free contrib
deb ftp://ftp.ticklers.org/debian-non-US stable/non-US main contrib non-free
deb-src ftp://ftp.ticklers.org/debian-non-US stable/non-US main contrib non-free
deb cdrom:[Debian GNU/Linux 2.2 r0_Potato_-Official i386 Binary-1 (20000814)]/ unstable main
deb http://security.debian.org stable/updates main contrib non-free

There are two types available: deb (for pre-compiled packages) and deb-src (for the package sources). The Uniform Resource Identifier, URI, contains the home directory of the distribution, and also a reference to the position of the required files, the options being: a local file system (file), a CD (cdrom) or the Internet via http or ftp. The Distribution column is intended for the path to the source relative to the base directory you supplied. Normally this will be a directory called stable alias potato (the current stable Debian release), testing alias woody (the current test candidate for the next stable release – refer to the box "Stable, Testing and Unstable") or unstable alias sid (the developer version). Users who prefer a secure system are recommended to use the list entry for security.debian.org. This ensures that security patches are automatically applied when you update a package. The system administrator can then define either a single category or multiple categories of distribution packages that can be selected on installation (for example main for the packages belonging to the basic distribution, or the contrib and non-free categories referred to earlier). After modifying /etc/apt/sources.list to reflect your requirements, you can use the apt-get update command to update the package database. If the file contains ftp or http URIs, the local package database will be synchronized with the package lists on the official Debian servers. If you are performing a CD-ROM-only installation, then you will not need this update. If you need to use a proxy server to update your package database via the Internet, the http_proxy and ftp_proxy environment variables will make your life easier. The download protocol
in sources.list defines which of these variables you will need to set. If your proxy server's address is 192.168.0.20 and the port number is 8080, then type in the following to set the http_proxy variable in bash or zsh:

export http_proxy="http://192.168.0.20:8080"
If you are using an FTP proxy, then you simply assign the value to the ftp_proxy variable instead of http_proxy. Of course, http_proxy and ftp_proxy are not only used by apt-get update; they apply to any other task that requires apt-get to access the Internet. As previously mentioned, apt-get is an excellent choice for installing and removing software. To install a package, the root user simply types the apt-get install packagename command. apt-get will then check whether the required package is in the package database, and whether the dependencies with respect to pre-installed packages can be respected. If required, missing packages will be downloaded from the medium specified in /etc/apt/sources.list and passed to the package manager (dpkg) for installation. Listing 2 shows such an example. If the recently installed package (libdbmusic0-dev in our example) does not fulfill all of your expectations, the root user can use apt-get with the remove keyword to remove the rogue package and any that depend on it. If you use apt-get remove packagename to deinstall a package, you may find that a few files belonging to the package are not deleted. Files in the /etc directory are tagged as config files, for example, and are not removed by a simple deletion; this avoids destroying any of the settings you may have made. To make sure that a package has been completely removed from your computer system, you will need to use apt-get with the --purge option. The complete syntax used for deinstalling the libdbmusic0-dev package is thus apt-get --purge remove libdbmusic0-dev.
Listing 2: Installing libdbmusic0-dev with apt-get <0>minerva[1003]:~# apt-get install libdbmusic0-dev Reading Package Lists... Done Building Dependency Tree... Done The following extra packages will be installed: libdbmusic0 The following NEW packages will be installed: libdbmusic0 libdbmusic0-dev 0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 273kB of archives. After unpacking 1110kB will be used. Do you want to continue? [Y/n] y Get:1 ftp://ftp.ticklers.org unstable/main libdbmusic0 0.2.0-2 [260kB] Get:2 ftp://ftp.ticklers.org unstable/main libdbmusic0-dev 0.2.0-2 [13.1kB] Fetched 273kB in 4s (59.9kB/s) Selecting previously deselected package libdbmusic0. (Reading database ... 68160 files and directories currently installed.) Unpacking libdbmusic0 (from .../libdbmusic0_0.2.0-2_i386.deb) ... Selecting previously deselected package libdbmusic0-dev. Unpacking libdbmusic0-dev (from .../libdbmusic0-dev_0.2.0-2_i386.deb) ... Setting up libdbmusic0 (0.2.0-2) ... Setting up libdbmusic0-dev (0.2.0-2) ... <0>minerva[1004]:~#
SYSADMIN
Changing Versions Without Headaches One apt-get feature that you may well be interested in is the ability to upgrade the current distribution on your own machine using the apt-get upgrade option. A variant of this functionality (apt-get dist-upgrade) will be particularly convenient when the next Debian release, version 3.1 (alias Sarge), becomes available, allowing you to avoid the trouble, time and frustration often involved in installing new versions. To update your musty Debian Potato installation to Woody, you must first edit /etc/apt/sources.list. You need to replace each stable or potato with woody; for example, the previous entry

deb ftp://ftp.uk.debian.org/debian stable main non-free contrib

would need to be changed to

deb ftp://ftp.uk.debian.org/debian woody main non-free contrib

for the update. If you prefer to use the most current packages and are prepared to take the risk of possible instability, you can even type sid to replace woody.
for the update. If you prefer to use the current packages and are prepared to take the risk of possible instability, you can even type sid to replace woody. The next step is to synchonize the local package database with the package lists on the official servers using apt-get update. Of course, all of these steps will work just as well if you are using CD ROMs. In this case you can simply use the apt-cdrom command to insert the package list contents on the CD ROMs into the local package database and then launch apt-get update. After updating the package database, type the apt-get dist-upgrade command. apt-get will then compare the versions of the pre-installed packages with the new versions in the package database. If the program discovers that the new package database lists a more recent version than the one actually installed on your machine, it will automatically download and install the package. You may notice that the command apt-get dist-upgrade has downloaded some packages, but dpkg cannot install them correctly, since some packages that they depend on are missing. In this case,
repeatedly launch apt-get dist-upgrade until the installation has completed for all the packages on your list. But make sure you have enough free space in the /var partition on your hard disk: apt-get downloads packages to /var/cache/apt/archives, but does not delete them after completing its task. To free up the disk space used by packages you no longer need, the root user can type the apt-get clean command. For users who prefer to keep a stable system instead of moving to Debian unstable, but still require a few packages from the unstable version (because the Woody version is too old, for example), apt-get versions after Debian Woody offer a special function. Just enter the following line in /etc/apt/apt.conf:
and add suitable deb and deb-src entries for unstable in /etc/apt/sources.list, and you will be able to install Sid packages in Woody by typing apt-get -t unstable install packagename. You can use the inverse functionality: The entry APT::Default-Release "unstable";
and add suitable deb and deb-src entries for unstable in /etc/apt/sources.list, and you will be able to install Sid packages on Woody by typing apt-get -t unstable install packagename. You can also use the inverse functionality: the entry
APT-Cache apt-cache is the second most important program after apt-get in the APT program group. It provides the user with direct access to the package database and is thus extremely useful. You can type apt-cache search identd to browse the package database for any packages containing the identd keyword as part of their name or the package description, for example. If the results contain a package that seems interesting you can use apt-cache show packagename to display the package details. If you need information on a package’s dependencies, or require a list of packages that depend on the current package, you can type apt-cache showpkg packagename.
Where are you? For practical purposes you will normally want to use apt-cache and the search
TABLE 1: COMMAND LINE REFERENCE
Command – Function
apt-get update – Updates the local package database
apt-get upgrade – Updates all the packages on a system if the current release includes an update
apt-get dist-upgrade – Similar to apt-get upgrade but will install or remove packages to satisfy dependencies
apt-get install packagename – Installs a package
apt-get remove packagename – Deinstalls a package
apt-get --purge remove packagename – Deletes a package completely
apt-cache search packagename – Searches the package database for a package
apt-cache show packagename – Displays information for a selected package
apt-cache showpkg packagename – Displays details on the dependencies for a selected package
auto-apt search file – Searches for a file in the package database
apt-file search file – Searches for a file in the package database and returns more detailed information
keyword to browse the package list. This is hardly surprising, as Debian GNU/Linux Woody comprises almost 9,300 packages, which makes it impossible to remember every individual package. But searching with apt-cache has its limitations. If a program's ./configure script complains about not being able to find glut.h, apt-cache search glut.h will not provide you with any results, and you will be no closer to home than you were previously. Two additional programs are available in this scenario: auto-apt and apt-file. Neither program belongs to the official APT package, and you will need to use apt-get install auto-apt or apt-get install apt-file to install them separately.
Just like APT, both auto-apt and apt-file use internal databases that are generated after installing the package. To do so, both programs use individual Contents-* files that depend on the architecture of your machine and are available in the debian/dists/distribution directory on any Debian server. To generate the internal package database, you will need to launch both apt-file and auto-apt with the update argument. You are also recommended to update the internal databases of both programs at regular intervals to achieve the best possible results for search operations. The search operation is launched in a similar way to apt-cache, i.e. it uses the search keyword. To find the glut.h file
Stable, Testing and Unstable Debian GNU/Linux comes in three different flavors: "stable", "testing" and "unstable", aka "potato", "woody" and "sid", which are the names of the current distributions. "stable" refers to the current stable release of the Debian Linux distribution. Currently, this is Debian GNU/Linux 2.2, codename "Potato". In Debian GNU/Linux's case, stable means that no new packages will be incorporated into the distribution, with the possible exception of security updates. Debian GNU/Linux 2.2 is already fairly old, and thus only recommended for system administrators who need an absolutely reliable system. However, this will mean doing without some current software such as XFree 4.* or an SSH daemon with built-in SSHv2 support. "unstable" is the part of the Debian GNU/Linux project that will appeal to experienced users and in fact anyone who would like to live "on the bleeding edge". The codename for the current "unstable" version is "Sid". Any new packages will always be placed in "unstable" first and then make the transition to the "testing"
September 2002 www.linux-magazine.com
distribution 10 days later, unless a critical error ("Release Critical Bug") becomes apparent. Debian "unstable" offers both advantages and disadvantages: on the one hand you will always have access to brand new packages; on the other hand the packages in Sid have not been tested, and this can mean a program behaving badly or not running at all. You may not want to run Sid on a production system for this reason. "testing" is the part of the Debian GNU/Linux project that is due to go "stable" in the near future. Its current codename is "Woody", and when "Woody" is released as the stable version its version number will be 3.0. It is quite possible that this release will occur on or around the publication date of the current issue. As previously mentioned, "testing" mainly comprises packages from "unstable" that have proved themselves well behaved for a while. Most "Woody" programs work quite well by now, so you should not experience any issues running them on your workstation or multimedia machine.
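Which of the three flavors your system tracks is controlled by the entries in /etc/apt/sources.list. As an illustration only (the mirror URLs are examples, not taken from the article — pick a mirror near you), a machine following "woody" might use:

```
# /etc/apt/sources.list -- example entries; mirror URLs are illustrative
deb http://ftp.debian.org/debian woody main contrib non-free
deb http://security.debian.org/ woody/updates main
```

After editing the file, run apt-get update so APT refreshes its package lists against the chosen distribution.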
with auto-apt, you will thus need the auto-apt search glut.h command; apt-file search glut.h performs the same task in apt-file. The output formats of the two programs are completely different: apt-file simply returns the names of any packages found to contain the file:

minerva[1004]:~# apt-file search glut.h
libfltk1-dev
glutg3-dev
cint
In contrast, auto-apt returns the complete path to the files and shows the package category, although this does not always provide for easy readability (Listing 4).

minerva[1001]:~# auto-apt search glut.h
usr/include/GL/fglut.h devel/glutg3-dev
usr/include/GL/glut.h devel/glutg3-dev
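Both tools get their answers from the distribution's Contents-* index, which simply maps file paths to section/package names. To make the mechanism concrete, here is a sketch using a small hand-made stand-in for that file (the real Contents-i386 lives on any Debian mirror); the lookup is little more than a pattern match over the index:

```shell
# Build a tiny fake Contents-style index (path -> section/package).
# A real Contents-i386 file from a Debian mirror has the same two-column shape.
cat > Contents-sample <<'EOF'
usr/include/GL/glut.h    devel/glutg3-dev
usr/include/Fl/glut.H    devel/libfltk1-dev
usr/bin/ncftp            net/ncftp
EOF
# What 'apt-file search glut.h' boils down to: match the path column
# and report the owning package.
grep 'glut\.h' Contents-sample | awk '{print $2}'
# -> devel/glutg3-dev
```

This also shows why the databases need regular update runs: the search is only as fresh as the downloaded index.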
User Friendly Dselect
Many users like the way dselect looks but are at odds with its complexity. Others prefer the simplicity of apt-get and apt-cache, but would like the functionality of a program like dselect that allows them to interactively select and install packages. aptitude is an approach to resolving this issue and provides a dselect-style front-end for apt-get and apt-cache (see Figure 2). But in contrast to the former, the tool is easy to use and supports the arrow and function keys. If you run aptitude in an X11 terminal emulator, you can simply use your mouse to click on the buttons. Additionally, the intuitive handling and the menu layout enhance aptitude's user-friendliness. The program is not part of the Debian GNU/Linux standard offering, meaning that you will need to run apt-get install aptitude to install it. After the initial program launch, you might want to press [F10] and then press [U] to ensure that aptitude updates its package database. After completing these steps you can use the cursor keys, or the mouse on X11 systems, to toggle between the five menu items Installed Packages, Not Installed Packages, Obsolete and Locally Created Packages, Virtual Packages and Tasks.
SYSADMIN
Obsolete?
Installed Packages shows the packages currently installed on your machine. Not Installed Packages shows all the packages available in the database but not installed on your system. If you then select Obsolete and Locally Created Packages, aptitude lists packages that are installed on your machine but do not appear in the APT database. These may include packages that were formerly part of a Debian installation but have since been removed, because either the author or the package maintainer decided to ditch the package.

Figure 2: Aptitude – the better alternative to Dselect

Virtual Packages are a speciality of Debian: you can use them to group the various programs that fulfill the same task under a generic heading. For example, the mail reader mutt needs a local mail server in order to transfer mail. The maintainer would normally have to decide which mail server software mutt should depend on, and this would annoy any users who prefer exim to sendmail or vice versa. So, instead of making exim mandatory, the maintainer can use a trick and specify a dependency between mutt and the virtual mail-transport-agent package. The database knows that a mail server must be installed to fulfill this condition.

The Tasks packages are of little interest nowadays. They were used to group Debian GNU/Linux 2.2 programs that performed certain tasks; for example, there was a package called task-x-window-system that contained XFree86. This type of package more or less died out when Debian GNU/Linux Woody was released.

If you intend to use aptitude to install a package, you may first want to find the package name in the list of Not Installed Packages. Place the selection bar on the package name to highlight the package, then press the [+] key. The package name should now be selected (shaded). You can use this to select multiple packages for installation. Then press the [G] key (that is, "get") to display an overview of the packages you will be installing in the next step. Press the key again to download the selected packages. If any dependencies are discovered where the corresponding packages have not been pre-installed, aptitude will find the packages and pass them to apt-get.

aptitude is also well suited to deleting packages. Locate the entries for the packages you want to delete in the list of Installed Packages, then press the [-] key once and the [G] key twice. If any of the installed packages depend on the ones you delete, aptitude will automatically remove them.

But aptitude offers more than just installing and removing packages. Just press the [H] key for a complete overview of the available commands. The documentation and howtos are available under /usr/share/doc/aptitude.

Future Trends
Although the APT programs perform well at present, the developers responsible for dpkg and APT have big plans for the future. Interaction between the two is due for enhancement via a tailor-made communication layer. A protocol layer of this type would also mean performance gains. Let us not forget that work is in progress to enhance user friendliness. It remains to be seen if that will actually mean a GUI. The prospect is exciting. ■
INFO [1] Debian Free Software Guidelines: http://www.debian.org/social_contract [2] APT Howto: http://www.debian.org/doc/manuals/apt-howto/index.en.html
NcFTP
Convenient File Transfer

NcFTP is a small compendium of programs designed for easy command line based file transfers. It includes:
• the NcFTP client, an FTP browser
• NcFTPGet and NcFTPPut, non-interactive file transfer programs
• NcFTPLs, a tool for displaying the directories on the server via FTP without using an interactive shell
• NcFTPBatch and NcFTPSpooler
All the programs within this collection work hand in hand. NcFTP is, of course, available for various Unix platforms, but it can also be used on a variety of other operating systems. The current version is 3.1.4 (dated 2nd July 2002), although some distributions still include the older 2.4.3 version, where you might notice a few differences compared to the current version. If you intend to work your way through the commands and parameters discussed in this article you should consider updating to 3.x.
The client is launched by typing ncftp at the command line. Instead of the standard prompt, you will now see ncftp>. Any other commands are then typed here. You can use the open command or the shortcut o to open a new connection to an FTP server:

ncftp> open ftp.debian.org
Connecting to ftp.debian.org (208.185.25.38)...
raff.debian.org FTP server (vsftpd)
Logging in...
Login successful. Have fun.
Sorry, I don't do help.
Logged in to ftp.debian.org.
ncftp / > _
As you can see, we have now logged on successfully and are looking at the FTP server’s root area. The current FTP server allows so-called anonymous logins, that is, we are now logged on to the server via the anonymous guest account. Most servers prompt you for your email address as your password for the guest
FTP (“File Transfer Protocol”) is an Internet protocol for exchanging data between two hosts. As a standard command line tool ftp offers only limited functionality. The easy-to-use NcFTP client not only displays the status of your downloads but offers a number of useful additional features. BY HEIKE JURZIK
account. Do try to be polite and enter your email address for use in statistical evaluation. You may find that anonymous access is denied. In this case you will see an error message such as "Can't set guest privileges" or "User anonymous access denied". If you happen to have an account on the current server, you can log on using open -u username hostname and type the corresponding password. You can also save time by using the -p (for "password") parameter. In this case the complete command is as follows:

ncftp> open -u username -p password hostname
Quite a few well-known shell commands (cd, pwd, ls) work as you would expect
them to, although they have been reimplemented within the program. You can type help to display an overview of the available commands. ls displays a list of files and directories for the current server: ncftp / > ls debian-archive/ debian/ debian-cd/
If you want to change the directory to debian-archive/, you just need to type the command cd debian-archive. The prompt changes to ncftp/debian-archive > to show you the current working directory. If you want to view the README file in this directory, you do not need to transfer it to your own computer. Instead, you simply type less README.
File name completion is one of the more useful features, and one that you are probably familiar with if you use bash. To launch the command we just discussed, less README, you can simply type less R [Tab], as the current directory contains only one file name starting with a capital "R". Unique file names are then automatically completed. To do so, NcFTP uses the dir command to create a list of files that is then parsed during the completion process. Depending on your network performance and the number of files in the current directory, this may take a while to complete. You can also use the get command to transfer files to your own computer. A progress indicator shows the progress of the download. As you cannot enter any additional commands while you are downloading, NcFTP provides the bgget command, with "bg" denoting "background". You can even log out without interrupting the download. The same feature is available for the put command, which is used to upload a file from your computer to another host. The alternative command, bgput, runs the upload process in the background. You can use the jobs command to check all the jobs currently waiting in the background. To start the queued downloads or uploads, use the bgstart command. The following message is displayed:

ncftp / > bgstart
Background process started.
Watch the "/home/huhn/.ncftp/spool/log" file to see how it is progressing.
default NcFTP setting for file transfers. In other words, the data you transfer will not be changed in any way. ASCII mode, however, will correct the end-of-line characters that vary between operating systems (you can change modes using the type ascii and type binary commands). The good news is that, on Linux, you will probably not need this mode.
I want it all!
The get command has an even greater set of features: you can transfer multiple files and even use wildcards. Suppose you are interested in two files on the FTP server, README and README.old; you can either get README README.old or use get README*. Additionally, you can use the -z flag to get a file and save it locally under a different name. To copy README.old from the remote server to README.alt on your local machine, you would type the command get -z README.old README.alt. NcFTP attempts to restart aborted downloads. If a file transfer happens to be interrupted, the program attempts to retrieve ("reget") the missing parts of the download when re-launched. You can use the -f flag to disable this behavior. You can also append data to an existing file. To do so, use get -A file.log ("append"), which will download the file and append it to a local file with the same name. One extremely practical feature is NcFTP's ability to transfer complete directory structures recursively. get -R directory copies the directory plus any subdirectories and any files stored in
them on to your local machine. NcFTP also provides a few additional "l"-commands. The "l" means "local" in this case and allows you to run a few commands on the local machine without leaving NcFTP. lcd changes the local directory; once again, file completion is available. lls calls the /bin/ls command on your local machine, allowing you to quickly check the contents of the local directory before downloading to it from the server. lmkdir (i.e. "make directory") allows you to create a new subdirectory without having to quit NcFTP. When you attempt to terminate the connection to the server using close, or to quit NcFTP using quit, you will see the following prompt:

You have not saved a bookmark for this site.
Would you like to save a bookmark to:
ftp://ftp.debian.org/debian-archive/
Save? (yes/no)
If you type “yes” to confirm, the program will suggest a name for the server (“Enter a name for this bookmark, or hit enter for 'debian':”) and save the bookmark. To maintain your own bookmarks simply type bookmarks, which will launch the simple editor. In contrast to the earlier versions of this program, the new version encrypts your passwords in ~/.ncftp/bookmarks, instead of saving them in clear text. ■
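The non-interactive siblings NcFTPGet and NcFTPPut, listed at the start of this article, cover the same ground from scripts and cron jobs. The following transcript is a sketch of typical usage; host names, account names and paths are placeholders, not taken from the article:

```
# Fetch a remote file anonymously into /tmp; like the interactive
# client, -R transfers whole directory trees recursively.
ncftpget ftp.example.org /tmp /pub/README

# Upload a local file into a remote directory with a real account;
# the password is prompted for unless supplied with -p.
ncftpput -u username ftp.example.org /incoming report.txt
```

Both programs take the remote host first, then the local (ncftpget) or remote (ncftpput) target directory, then the files to transfer.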
A quick look at the generated logfile shows the following:

2002-04-24 14:23:07 [026858] | Cmd: RETR file.tar.gz
2002-04-24 14:23:07 [026858] | 150: Opening BINARY mode data connection for file.tar.gz (2706 bytes).
2002-04-24 14:23:07 [026858] | 226: Transfer complete.
2002-04-24 14:23:07 [026858] | Succeeded downloading file.tar.gz.
The second line contains the message: “Opening BINARY mode” – which is the
Figure 1: Maintaining bookmarks in NcFTP
PROGRAMMING
Perl tutorial
Perl: Part 5

Thinking in Line Noise

In this month's article we examine how to create user-defined functions, then test and apply the finished functions in several separate scripts by creating libraries. BY FRANK FISH

(Photo: Walter Novak, visipix.com)

User defined functions are an invaluable development tool enabling sections of code to be reused many times. Shrewd use of user functions can create generic functions that reduce repetition of code whilst increasing legibility and maintainability.

Functions

A function is a collection of statements that can be grouped together to perform a single task. The function can contain numerous calls to other Perl functions or user-defined functions, including itself. This allows functions to act as black boxes that can be used without knowledge of how they operate. We declare a function by specifying its name and listing the statements as we would in a normal Perl script. In the example below a function is being declared:

# Example 1
sub log_error {
    print STDERR "Oops!\n";
}

This example will write the message "Oops!" to wherever the standard error file handle has been pointed. The keyword 'sub' denotes that a function is about to be declared; next comes the name of the function, and directly after it the code block, which is enclosed in the curly braces. We will see later that there are several other arguments that can be passed. This form will now be sufficient to make a practical error logging function.

# Example 2
sub log_error {
    my @call = caller(1);
    print STDERR "Error: line $call[2]" .
        " of file $call[1]" .
        " in the function $call[3]\n";
}

In example 2 we use the Perl function 'caller' to give information on the calling subroutine where the error occurred. caller returns an array of data pertaining to where it was called. The most common elements are listed below (the array is indexed from zero):
0. package
1. file name
2. line number
3. subroutine / function

Items 1-3 should be familiar to you by now; packages will be discussed in the future. More information on the 'caller' function can be found by using the 'perldoc -f caller' command from the shell prompt.

Invoking a user-defined function

As with most (possibly all) things in Perl, "There is more than one way to do it", and it follows that there are numerous ways of calling a user-defined function. The three most common methods are listed below. Each of these methods has its own implicit characteristics.

# Example 3
log_error if $error == 1;

Example 3 calls the function 'log_error', provided that the function has been declared beforehand.

# Example 4
my $error = 1;
log_error() if $error;

Example 4 calls the function, regardless of where in the script the function was declared. The parentheses indicate that the preceding word is a function; they are also used to pass values to the function, which is covered later.

# Example 5
&log_error if $error;

Example 5 calls the function regardless of whether it was defined before the code section or not. Using & is similar to using '$', '@' or '%': it clearly identifies the label as a function. However, there are side-effects to using &, discussed later in this article.

Parameters

Parameters make user-defined functions very powerful and flexible. Arguments can be passed to user-defined functions in the same fashion as to the predefined functions of Perl. Values are copied to a function using parentheses, and the contents of the parentheses are passed using the default array '@_'. This is the same variable that can be used throughout the program; however, the value is stored elsewhere for the duration of the function and restored at the end. This concept is called scope and is explained in more detail later. Using parameters to provide values to a function enables the function to exist as a stand-alone piece of code:

# Example 6
my $error_message;
my $file = '/etc/passwd';
sub log_error {
    my @call = caller(1);
    print STDERR "Error at line $call[2] of file $call[1]" .
        " in the function $call[3]\n";
    print STDERR $error_message if defined($error_message);
}

if (-x $file) {
    $error_message = "$file is executable";
    log_error;
}

It seems comical to use this method: what would happen if you forgot to reset '$error_message'? You would give the wrong error message, which would be extremely misleading, putting you in the position of debugging your debug code. Modifying the previous example, we can give details of the cause of an error as parameters to the function:

# Example 7
sub log_error {
    my $error_message = shift;
    my @call = caller(1);
    print STDERR "Error at line $call[2] of file $call[1]" .
        " in the function $call[3]\n";
    print STDERR "$error_message\n" if defined($error_message);
}
my $file = '/dev/thermic_lance';
unless (-e $file) {
    log_error("$file doesn't exist");
}
If you were particularly lazy you could then create a function to check for the existence of a file:

# Example 8
sub exist_file {
    my $file = shift;
    unless (-e $file) {
        log_error("$file doesn't exist");
    }
    return 0;
}

The function in example 8 calls another user-defined function, the code for which was shown previously. The code will now give a standardized explanation of the error that occurred, in a standard format, using another user function to perform part of its task. The concept of splitting work among several user-defined functions is called abstraction and has many benefits. An obvious one is that if you wanted to add a time-stamp, you would only need to add the time-stamp code once and all existing calls to 'log_error' would reap the benefits.

Default Parameters

A function does not mind how many parameters are passed to it by default. As with standard arrays, if you try to access an element that has no value, the value returned will be 'undef'. It is possible to make Perl strictly adhere to set function arguments, as we will see.

# Example 9
sub error_log {
    my $error_message = shift || 'no message provided';
    my @call = caller(1);
    print STDERR "Error at line $call[2] of file $call[1]" .
        " in the function $call[3]\n";
    print STDERR "$error_message\n";
}

In example 9, if a parameter is not passed then the default value reads 'no message provided', so the error returned could be:

Error at line 20 of file script.pl in the function test
no message provided

The '||' operator is usually seen in conditional expressions but in Perl it's equally at
home in ordinary lines of code. It has a low order of precedence. The line my $error_message = shift || 'no message provided' is therefore interpreted as: set $error_message to the value of the next element of the array @_; if there are no more values, set it to 'no message provided'.

Many Happy Returns

All functions return a value. The value that a function returns can be the result of an operation or a status flag to show success or failure of an operation. The following example shows the result of an operation:

# Example 10
sub circ_area {
    my $radius = shift || 0;
    my $PI = 3.14;
    my $area = $PI * $radius * $radius;
    return $area;
}
my $radius = 3;
my $area = circ_area($radius);
print "Area of a circle $radius in radius is $area\n";

The result of the user-defined function is returned directly; the essence of the function is to return the data. In large systems a function's return value is used to convey success or failure of the function, which is extremely useful in tasks that use many sections.

# Example 11
sub get_index($$) {
    my ($line, $index) = @_;
    my $status = 0;
    if ($line =~ /^(\w+)!/ && $1 ne '') {
        $$index = $1;
        $status = 1;
    } else {
        log_error("No index on line: $_\n");
    }
    return $status;
}
my @lines = (
    'fred!fred bloggs, 12 Fictional Place, Some Town',
    'jdoe!john doe, Flat 184A 23rd Street, Some City',
    '!bad line',
    'another bad line.'
);
for (@lines) {
    my $index = '';
    # pass the line and a reference to index.
    my $status = get_index($_, \$index);
    print "The line '$_' has an index $index\n" if $status == 1;
}

Example 11 will find the indexes for an array of items and return a status for each line; this status can then be used to decide if it is possible to continue with the process. It is worth noting that by default Perl functions return the value of the last statement in the function. It is not uncommon to see subroutines that don't have 'return …' as the last line, but rather a variable, function or value by itself just before the function declaration ends:

# Example 12
sub get_index($$) {
    my ($line, $index) = @_;
    my $status = 0;
    if ($line =~ /^(\w+)!/ && $1 ne '') {
        $$index = $1;
        $status = 1;
    } else {
        log_error("No index on line: $_\n");
    }
    $status;
}

While it is only necessary to use a return statement to explicitly leave a subroutine early, it is good form to give an explicit 'return' statement anyway.

Something greatly frowned upon in some programming disciplines is having more than one exit point to a function. Since Perl acts as both a programming and a scripting language, the popular interpretation of this rule is to bend it and use the scripting ethos of "exit early". Example 13, below, is in the "exit early" programming style:

# Example 13
sub circ_area {
    my $radius = shift or return 0;
    my $PI = 3.14;
    my $area = $PI * $radius * $radius;
    return $area;
}

It is a foregone conclusion that a radius of zero will produce an area of zero, so rather than calculate this result as we did before, we return the result immediately. Since the rules of geometry are unlikely to change in the working life of this code (and perhaps even before Perl 6 is released), such an action can hardly be seen as cavalier. We can return half-way through an assignment due to two key features of Perl: the 'or' operator has a lower precedence than the assignment operator, so the assignment binds first, and, more importantly, Perl is a tolerant, stoic and syntactically gifted language.

Scope

In example 6 we called a function and it accessed a variable declared outside of the function. We assigned a particular message to the variable '$error_message' and this was used in the function 'log_error'. The variable we used is called a global variable; that is to say, its name can be used anywhere and its value can be read or written from anywhere within the program. The very fact that globals can be altered from anywhere is the biggest argument against using them: they lead to messy, unmaintainable code and are considered a bad thing.

#Example 15 (Does not compile)
#!/usr/bin/perl
use strict;
use warnings;

my $global = 3;

sub functionX {
    my $private = 4;
    print "global is $global\n";
    print "private is $private\n";
}
functionX;
print "global is $global\n";
# This line will fail as
# $private doesn't exist
# outside of the function
# called functionX.
print "private is $private\n";

Use of global variables is best avoided; they should only be used to declare constant values that will remain for the duration of the program. Better, even then, to make use of the fact that functions are always global, so no one can revise the code and knock out a global variable:

sub PI() { 3.14 }

Even using functions to make constants, it is wise to pass the values as parameters into the function, in case the function is placed directly into another script where the constant has not been defined. Any function that can stand alone can be unit tested, and its functionality vouched for.

My, Our and Local

There are three different ways to declare a variable in Perl, each affecting different aspects of the scope. As a rule 'my' is always used; failure to use 'our' or 'local' in the correct manner is considered an unforgivable sin.

'my' is the safest variety to use: it creates a variable and destroys it when it is no longer referred to, using the Perl garbage collection. Any variable declared using 'my' within a scope (a looping structure or code block) exists only while that scope is active, and its value is reset on each iteration.

for my $x ( 0..9 ) {
    my $y = 0;
    print "Coordinate ( $x, $y )\n" while $y++ < 3;
}

This can be especially useful in nested loops, where the inner variable is automatically initialized each time.

{
    my $v = 1;    # $v is 1
    print "$v\n"; # prints 1
}
# $v no longer exists.

'local' hi-jacks the value of a global variable for the duration of its scope. This occurs at runtime rather than at compile time and is referred to as dynamic scope.

our $v = 5;
{
    local $v = 1;  # $v is 1
    print "$v\n";  # prints 1
}
# $v is 5 again.

As a rule 'local' should be avoided in preference to 'my'. It is primarily used to alter global variables for short spaces of time. Even then it is worth noting that any function calls made within this scope will also see the locally set value. If in doubt consult 'perldoc -f local', but remember 'my' is almost always what you want. 'our' allows the use of a variable within the lexical scope without initializing the value. 'our' becomes useful when we make packages, which we will investigate in the future.

VARIABLES IN PERL
keyword   value    name
my        scoped   scoped
local     scoped   global
our       global   scoped

Function Oddities

It is possible to establish a required set of values that the function must receive, or make it fail to compile and exit with a runtime exception. This can be desirable in some cases and allows greater freedom in our use of user-defined functions. We can declare the prototypes at the start of the code and then define the code block later in the program, in case we wish to use the extra features of prototyping to increase legibility of the code or force certain uses.

sub foo;   # Forward declaration.
sub foo(); # Prototype.

Using '&' does allow a programmer to overrule any prototyping on a function. A full description of prototyping, with its strengths and weaknesses (Perl is, after all, a weakly typed language), will appear in the future.

sub debugm($) {
    print STDERR "\n\n****\n$_[0]\n***\n\n";
}
# Automatically uses $_
debugm;

# This only prints the first
# parameter but ignores the
# function prototype.
&debugm('beware', 'of this');

Online References
M-J. Dominus' excellent website:
perl.plover.com/FAQs/Namespaces.html
perl.plover.com/local.html

Further Reading
Perl docs: perldoc perlfunc

Perl Documentation
Perl has a wealth of good documentation that comes with the standard distribution. It covers every aspect of the Perl language and is viewed using your system's default pager program. The pages of perldoc resemble the man pages in that they cite examples of use and give pertinent advice. There are a great many parts to the Perl documentation. To list the categories available, type the following command at the shell prompt:

perldoc perl

This displays a page with two columns: the left hand column lists the mnemonic title while the right column shows a description of the topic:

perlsyn    Perl syntax
perldata   Perl data structures
perlop     Perl operators and precedence
perlsub    Perl subroutines

Simply type perldoc and the mnemonic for that subject on the command line:

perldoc perlsyn

This shows the Perl syntax information. ■
Tclhttpd
Web applications with the Tcl web server
Delivery Service

Programmed in Tcl, the Tclhttpd web server is an ideal platform for advanced web applications that can also profit from the ease and speed of development that Tcl offers. Tcl library functions and numerous extensions are available, including a library that takes the hard work out of generating HTML code. BY CARSTEN ZERBST

(Photo: Deutsche Post World Net)

A number of techniques are now available for web applications in Tcl. In addition to CGI scripts, Tcl modules for Apache or fully featured application servers, you might like to take a look at Tclhttpd. The web server is a 100 per cent Tcl code development with a long history, and this is reflected in the current version. The Tcl web server's functions provide an ideal platform for advanced web applications. This article demonstrates various approaches to creating HTML pages and to processing requests. Tclhttpd's origins go back to 175 lines of Tcl that Brent Welch wrote in the mid 90s. The codebase has since grown to something in the region of 12,000 lines, and this does not include the extensive Tcllib library. This stable codebase supports speedy deployment in various areas. Tclhttpd can:
• Serve static web sites
• Run Server Side Includes
• Link individual URLs, even whole directories, or various MIME types with Tcl scripts
• Embed Tcl code in HTML
• Read and write cookies
• Manage sessions
• Authenticate users
• Evaluate forms
• Upload files to servers
• Support email
The development activities were never intended to rival the king of the hill, Apache. If you are having to contend with several hundred requests per second, Tcl modules such as mod_tcl [5] or mod_websh [6] for Apache are definitely a better bet. But if you are looking to develop web applications for small to medium volume web sites, Tclhttpd will provide you with a solid base from which to work.

Suitable for projects of all sizes

But Tclhttpd does not need to hide its light under a bushel – after all, it does host http://www.tcl.tk, and this website has to cope with a considerable volume
of traffic. Other reference applications include a global network for meteorological data from airports, and the Medusa Project [4], which accesses a large-scale database. But if you talk to the users, you will normally find that Tcl has mainly been used for internal projects. Sourceforge hosts the Tclhttpd source files [1]. There are two versions: the all-inclusive variant tclhttpd-3.2-dist, including Tcl, Thread and Tcllib [3], and the current version tclhttpd-3.3.1. The
September 2002 www.linux-magazine.com
The older package's advantage is ease of installation. The following steps are all that you need to get a web server up and running:

# tar -xzf tclhttpd3.2-dist.tar.gz
# cd tclhttpd3.2-dist/tclhttpd3.2
# make
# make install
# cd bin
# wish httpd.tcl
Running with 256 file descriptor limit
httpd started on port 8015
Just launch your browser and point it at http://localhost:8015 to view the sample files that show you some of the package's capabilities. You can configure the server via the tclhttpd.rc file; the following listing contains an example of some of the options available.

# Sample configuration
# httpd running as user 500 in group 100
Config uid 500
Config gid 100
# httpd listening on port 8015, normal hostname
Config host [info hostname]
Config port 8015
# Custom scripts in .../custom directory
Config library [file join [Config home] .. custom]
# HTML files in /usr/local/httpd/htdocs
Config docRoot /usr/local/httpd/htdocs
# Do not create threads
Config threads 0
Config main [file join [Config home] httpdthread.tcl]
# Logfile: /usr/local/httpd/log
Config LogFile /usr/local/httpd/log
Config LogFlushMinutes 0
When you start out on a development project, it makes sense to use the content in the sample directory, so you can leverage the control panel and statistics features. The control panel reads variables from browsers or reloads the libraries, and both of these functions are very useful when it comes to debugging. Tclhttpd can create web page content dynamically at runtime, and it supports various approaches to doing so. The easiest way is to configure a Direct_Url: the server will then pass requests for the URL to the configured Tcl procedure. In contrast to a CGI script, the server does not spawn a new process but runs the procedure directly in its own server process. This allows you to use variables from the server for counters, or to open database connections.
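As a sketch of the counter idea just mentioned (the /hits.html URL, the procedure name and the variable name are all hypothetical, not from the distribution), a handler could keep state in a server variable between requests:

```tcl
# Hypothetical counter handler - a sketch only.
# Because Direct_Url runs the procedure inside the server
# process, the global variable survives between requests.
Direct_Url /hits.html hits

set hitCount 0

proc hits {args} {
    global hitCount
    incr hitCount
    return "<html><body>This page has been served $hitCount times.</body></html>"
}
```

A CGI script could not do this without external storage, since each request would start a fresh process.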
Direct Url Dynamics
The next listing shows a simple example.
PROGRAMMING
The Direct_Url /listing2.html listing2 command assigns the URL http://localhost:8015/listing2.html to the listing2 procedure. The Tcl procedure creates and returns the required page. The variables available in the script, for example env, are interesting, as they are used to store information on the current client connection. The html::tableFromArray command formats the content of the global variable, producing the result shown in Figure 1.

Direct_Url /listing2.html listing2
proc listing2 {args} {
    puts stderr $args
    set html "<html>"
    append html "<body>"
    append html [html::tableFromArray ::env "border=1" *]
    append html "</body></html>"
    return $html
}
The script must be stored in the contrib directory in order for the server to find it. Tclhttpd reads all the scripts in this directory automatically at startup. During the development phase the lib directory is also useful. Scripts stored in this directory still need to be loaded explicitly in the main script, but this allows you to reload them later from the control panel with the Reload Source function, which provides a facility for on-the-fly code modification. If an error is discovered in a script, Tclhttpd displays the debugging information directly as an HTML page for easy viewing.
Elegant Templates
Figure 1: The procedure detailed in Listing 2 outputs the content of the global Tcl variable ::env. The variable contains entries that you should recognize from CGI scripts, such as HTTP_USER_AGENT.
Templates are a more elegant solution than using Direct Url and comprise an HTML document with embedded Tcl code. The Tcl elements are encapsulated in brackets – the return value of the function that immediately precedes the closing bracket is passed to the HTML page. You can use the value of the variable in the whole template and not only in the Tcl elements. If you want to work with templates, you will need to place a copy of the .tml file and the libtml directory taken from
the distribution in the htdocs directory. The sample template in the next listing first defines a variable and then formats the content of the env array as an HTML table, which is inserted into the page at that point. Lower down in the template, the variable's content is integrated into the page using $later. The last section of the template contains the date of the last modification. A template can therefore encompass a varying mixture of scripts, variables and HTML.

<html>
<head>
<title>Simple Template</title>
</head>
<body>
A simple template.
[
set later "watch this"
html::tableFromArray ::env "border=1" *
]
<p>
$later
<hr>
Tclhttpd automatically updates the saved version if the template is newer than the saved results, or if the browser calls the template directly. As you can see in the following listing, the [Doc_Dynamic] command can be used to suppress the caching functionality. Templates provide an easy migration path that you can use to gradually upgrade static web sites with dynamic functions. In addition to simple counters, navigation toolbars would be obvious candidates, since a single procedure could create them for the whole site. htdocs/libtml/sunscript.tcl contains a sample of source code from the former Sunscript page.
Interactive Templates
Last change [clock format [file mtime $::env(PATH_TRANSLATED)]]
</body>
</html>
Web pages with user input are the next hurdle for a web application to overcome. Interactions of this kind basically comprise two elements: an HTML page that provides the user interface in the browser, and a script running on the server that evaluates the input. It makes sense for a single template to both create the form and evaluate it, allowing the template to return a modified version of the form in case of input errors. In case of valid input, the template then transfers the browser to a different page.
Templates use the .tml file suffix and are stored just like normal HTML documents in htdocs. When a browser requests listing3.html, the server first runs the listing3.tml template and then returns the result. Tclhttpd additionally writes the result to listing3.html on the hard disk and uses it for any further requests. This kind of caching is particularly practical for templates that either perform some complex calculations or contain slow database queries.
<html>
<head>
<title>Entries</title>
</head>
<body>
[Doc_Dynamic]
[
if {![ncgi::empty project]} {
    Doc_Redirect [ncgi::value project].html?[ncgi::query]
} else {
    set message "no project selected"
}
]
<hr>
$message
<form action=$page(url) method=POST>
Text: <input type=text [html::formValue text]>
<br>
Project: [html::radioSet project { } {
    "Project 1" project1
    "Project 2" project2
}]
<p>
<input value="Send" type=submit>
</form>
<p>
Input was:
[html::tableFromList [ncgi::nvlist] "border=1"]
</body>
</html>

The sample script in the listing above shows how a template can handle a form. Although the template is short, it provides heaps of functionality. First, the [Doc_Dynamic] command prevents the server from caching the template, which would make no sense at all. The next Tcl block handles the data input using a few functions from the Ncgi package in Tcllib. For example, ncgi::empty checks whether an entry for the project field in the form exists. In this case the request is passed via Doc_Redirect
TABLE 1: INSTRUCTIONS FOR HTML AND CGI
Instruction                                Meaning
Html
html::h1 Title                             Produces a heading; also html::h2 and html::h3
html::tableFromArray ArrayName             Produces an HTML table from a Tcl array
html::checkbox Name Value                  Produces a checkbox
html::textInput Name Parameter             Produces a text input
Ncgi
ncgi::cookie Cookie                        Returns a list of values for Cookie
ncgi::setCookie -name Name -value Value    Sets a cookie
ncgi::empty Name                           Indicates whether an input value is present
ncgi::value Key                            Returns the CGI value identified by Key
ncgi::nvlist                               Returns the whole query as a name,value list
ncgi::query                                Returns the raw query data
Figure 2: Mozilla displaying the content of a cookie that originated in the template of the "cookie" listing above. The cookie was set by the server at 192.168.42.150, is called test and contains the value 42.
Figure 3: The googbar looks like the Google home page, but in fact it is a Tcl program that redirects search operations directly to Google.
Figure 5: dndspy (bottom) shows the MIME types and actions allowed with drag objects. Grabbing a message (flying page) from Evolution (top).
to the HTML page for this project. If this does not occur, a message is stored in the variable message and written to the page by $message. The middle section of the template contains a simple form, whose input is returned to the template by the use of HTTP POST. The form contains a text input box and radio buttons, which have been assigned to project1 and project2, and are labeled Project 1 and Project 2. At the end of the script the user input is formatted. This is then output so that it can be used for debugging purposes.
When a user first opens the listing4.html page, no project has been selected. In this case clicking on the Send button will return the user to the same page. Any entries in the text boxes are retained thanks to html::formValue text. Users must first select a project before the input validity check can redirect them to another page. The Html and Ncgi packages comprise solutions for numerous tasks that involve forms and their evaluation. Table 1 contains some practical functions. The complete documentation for the packages is available in Tcllib [3].

Figure 4: The BWidgets demo application shows the capabilities of this toolkit. The widgets are written in Tcl and only use Tk.

Data crumbs
Having to retype data is clumsy, and not all browsers can or want to store user input. Although it is easy to store data on the server, it is by no means trivial to assign it to a specific user. HTTP is stateless, and requests are normally independent of any previous requests. Cookies provide a solution to this known problem: the server asks the browser to store some data (a cookie) on the client. If the user revisits the site, and has not deleted the cookie in the meantime, the server can read the cookie it asked the browser to store.
The template in the next listing first checks the browser for the answer cookie. If the cookie exists, it outputs the value; if not, the template attempts to write a cookie. Apart from the server that sets the cookie values, the browser itself can also read the cookie, as you can see in Figure 2. Cookies can often be troublesome, especially from a legal point of view (data protection and so on); they also pose a security risk. You will want to avoid using them as a building block of your application logic.

<html>
<head>
<title>Cookie</title>
</head>
<body>
<h1>Cookie</h1>
[
Doc_Dynamic
if {[string length [Doc_Cookie answer]] > 0} {
    set html "Cookie answer is [Doc_Cookie answer]"
} else {
    Doc_SetCookie -name answer -value 42 -path $page(url)
    set html "Set Cookie answer=42"
}
]
</body>
</html>
Login for Web Applications
Although open networks are a good thing, you might need to implement access restrictions for part of your web site. In addition to the .htaccess files you will be familiar with from Apache, Tclhttpd supports authentication by a Tcl procedure. A file called .tclaccess, which is stored in the individual directories, takes care of this. You need to set the variables realm and callback here.

set realm "tickle"
set callback aok?

proc aok? {sock realm user password} {
    if {[string match $password tickle]} {
        return 1
    } else {
        return 0
    }
}
Figure 6: Toucan, a GUI developer interface for Palm programs. The IDE and any software created with it are based on Tcl.
The callback variable contains the name of the Tcl procedure responsible for authentication. The browser displays the content of realm as text during the login dialog. This data is also passed as an argument to the callback procedure. Our example accepts users that supply the password tickle. A real application would check the password against a user entry in a file, database or LDAP directory. After logging in, the user name is stored in the ::env(REMOTE_USER) variable.
Sessions are a more advanced variant. Here, Tclhttpd uses a separate interpreter for each session. This separates the data belonging to different sessions, while retaining the data for each individual session.
More Info The capabilities described in this article cover only a small portion of this web server’s total functionality. The sample files bundled with the Tclhttpd package
Tcl News
Next release anticipated
Tcl 8.4 will be leaving beta in the autumn. Although I have not noticed any errors for a long time now, the Tcl core team prefers to wait and deliver an absolutely perfect version. A whole bunch of new applications in and for Tcl are available right now. The BWidgets and additional GUI elements such as Tree and Combobox – which will be featured in the next issue of TCL – are now available in version 1.4 [3]. The demo in Figure 4 might whet your appetite.
Ora Tcl and Tcl XML
There is also news on two packages which were covered in our last TCL article: the latest version of the Ora Tcl [9] database extensions supports the full range of Oracle 9i features, and the Tcl XML package on Sourceforge [10] now comprises xmlgen, providing a new approach to generating XML and HTML. xmlgen provides a language map between Tcl and XML: instead of using elements and attributes, you can now work on a higher level with application objects.
additionally show you how to upload files and use image maps. You can obtain further information from [2]. Amongst the other interesting snippets, you will note an excerpt from a book by Brent Welch, Tclhttpd's author. Sourceforge also offers a mailing list that provides competent answers to complex issues, but still finds time to deal with beginners' questions. In some cases you may not need to go to the trouble of developing a new application yourself. The Infocetera [12] site provides a complete groupware application, including a calendar, address book, room planner, task planner, and other modules. The whole application is based on Tclhttpd. Our next TCL article will climb out of the murky depths of server applications and onto the pages of Linux Magazine: we will be looking at Tk and the BWidget set. The image in Figure 4 should whet your appetite for this widget set, which provides capable new widgets using only the Tk standard widgets and pure Tcl code. ■
INFO
[1] Tclhttpd home page: http://sourceforge.net/projects/tclhttpd/
[2] Information on Tclhttpd: http://www.tcl.tk/software/tclhttpd/
[3] Home pages for Tcllib and BWidgets: http://sourceforge.net/projects/tcllib/
[4] Medusa project: http://ciheam.maich.gr/medusa/
[5] Tcl Apache module: http://tcl.apache.org/mod_tcl/mod_tcl.html
[6] Websh: http://websh.com
[7] Googbar: http://www.geddy.hpg.ig.com.br/software/googbar/
[8] Toucan: http://home.attbi.com/~maccody/
[9] Ora Tcl: http://oratcl.sourceforge.net
[10] Tcl XML: http://tclxml.sourceforge.net
[11] Tk DND: http://www.iit.demokritos.gr/~petasis/
[12] Infocetera: http://www.infocetera.com
Lots of Little Tools
Apart from all these server-oriented treats there is news on several smaller tools. Toucan, a developer interface for Palm programs [8], requires only a minimal hardware platform. Both the developer interface and any applications developed with it are based on Tcl (Figure 6). The Googbar (Figure 3), which can launch Google searches [7], has an extremely small memory footprint. Tkdnd, a drag & drop extension for Tk [11], needs even less space on screen. It uses the XDND protocol supported by Gnome and KDE applications and also runs on Windows. dndspy (Figure 5) is also included. This program shows data traffic in the DND protocol.
THE AUTHOR
Carsten Zerbst works for Atlantec on the PDM ship building system. He is also interested in Tcl/Tk usage and applications.
PROGRAMMING
C tutorial
C: Part 10
Language of the 'C'
Precedence solves one of the great mysteries in programming: does 5*2+3 equal 13 or 25? To find out the answer, and why, we asked Steven Goodwin to explain this and the other finer points in C. BY STEVEN GOODWIN

Which comes first in the sum, the multiplication or the addition? By running the example through any nearby compiler you'll see the answer is 13. But is that always true? Or is it just gcc? Without giving too much of the plot away – it is always true! It has to be true, otherwise the compiler would be compiling another language rather than C!
All The President's Men (sorry!)
Simply put, precedence is a set of rules built into the language (which all compilers must therefore follow) that indicate which parts of an expression get evaluated first, and which happen later. Table 1 is listed from the high-priority operators which occur first, like the brackets (naturally, since their purpose is to group things together), through the mid-level operators (multiplication and addition), down to assignments. You will also notice that some groups (such as the arithmetic ones) are split in half. This indicates that while multiplication, division and modulus (remainder) all have the same precedence level, addition and subtraction are slightly lower. So 5*2+3 will be 13, because 5*2 (=10) is done first, followed by 10+3. We could have been explicit by writing (5*2)+3, but this is overkill since we know the basic rules. The order itself has been well chosen, as 99% of the expressions you write will fit the precedence order naturally, without explicit bracketing. This can be seen through an example.

c = szSentance[iFirstLetter=0];
if (c >= 'A' && c <= 'Z')
    printf("Starting with upper case is good\n");
The square brackets keep the assignment internal to itself, so it cannot affect anything else. As assignment has low precedence, any expressions we evaluate with it always occur on the right-hand side of the
equals. The conditionals (>= and <=) bind tighter than the &&, so both individual cases are checked separately and then ANDed together. Most other languages have a set of precedence rules similar to C, with some minor variations, so understanding one is good grounding for the others. The most frequent problem caused by precedence is with the bitwise AND (&). Since it is often used as a test ('is bit 4 set?', for example) one might normally attempt to use code such as:

if (c & 0x7f != 0) /* Don't do this! It doesn't work!!! */
    printf("Success!?\n");
By referring to the table again, you should be able to see why this doesn’t work. Looking at both operators (& and != ), we see that the not equals has the
higher precedence, and so is done first. It is this result that is then ANDed with c in the test. Since 0x7f is never equal to 0, it evaluates to true (represented as 1 – see Truth or Dare, later), and the test actually checks whether the least significant bit is set. This determines if a number is odd or even and will, quite literally, work half the time! I recommend knowing the basic rules from this table, but not memorising all of it slavishly. My reasons are two-fold. Firstly, you should never need to: even the dullest pub conversation cannot be lightened with a 'did you know' session on operator precedence! (I know – I've been there!) Secondly, if you write code that relies on obscure precedence rules it will not be easily understood, and will be almost incomprehensible to anyone who has not memorised them. And since any program will be read more times than it is written, this is a very bad thing. Not to mention the
problems you can get yourself into if you misquote a precedence rule and spend an hour looking for a bug that could have been avoided by using brackets.
Same Size Feet
Operators like * and / are in the same group. This means they have exactly the same precedence, and C will evaluate them from left to right (according to the associativity of the operator). This can become a problem when mixing different operators (with equal precedence), so bracketing should be used to state the intention:

ans = 10*x / 5*y; /* Careful - layout can confuse! */
is actually the same as:

ans = 2*x*y; /* Acts like ( ( (10*x) / 5 ) * y ) */

not:

ans = (10*x) / (5*y);

This is a good case where explicit bracketing will actually help to clarify the meaning, and not clutter the code. The order in which the component expressions are evaluated is determined by the compiler, and not by the language. In our previous example it doesn't matter if (10*x) is worked out before (5*y), since we get the same answer. The compiler is then free to optimise the order to suit the target platform, but in cases like:

iTotalDishes = CountRiceDishes() + CountNoodleDishes();

either function could be the first called, so you can not make assumptions as to which it is (even if you know!), or change global variables from inside those functions that the other relies on. The same is true with function parameters; either could be evaluated first, and so the behaviour is said to be undefined. We'll cover the definition of this later.

CalcTotalDishes(CountRiceDishes(), CountNoodleDishes());

Similarly, the following code is also ambiguous because of ++iDiners. The increment can happen at any time before the sequence point (the semi-colon, remember), so the GetDinersWantingRice function could receive one of two values – creating an ambiguity we should avoid. You may know in which order gcc does it; however, relying on such behaviour is bad programming practice and to be avoided at all costs!

iFractionOfRiceEaters = GetDinersWantingRice(iDiners) / ++iDiners;

TABLE 1
Group                Operator                           Description                                       Associativity
Reference            ()                                 Function call, bracketed expression               Left to right
                     []                                 Array element
                     .                                  Structure member
                     ->                                 Indirect structure member
Unary                +                                  Unary plus (as in +5)                             Right to left
                     -                                  Unary minus (as in -5)
                     ++                                 Increment (pre & post)
                     --                                 Decrement (pre & post)
                     ~                                  One's complement (bitwise NOT)
                     (type)                             Type cast
                     !                                  Logical NOT
                     sizeof                             Size (in bytes) of variable or structure
                     *                                  Indirect reference (as in *ptr)
                     &                                  Address of variable
Arithmetic           * / %                              Multiplication, division, modulus (remainder)     Left to right
                     + -                                Addition, subtraction
Bit shift            << >>                              Bit shift to left, bit shift to right             Left to right
Comparisons          < <= > >=                          Less than (or equal), greater than (or equal)     Left to right
                     == !=                              Equal to, not equal to
Bitwise operators    &                                  Bitwise AND                                       Left to right
                     ^                                  Bitwise XOR (exclusive OR)
                     |                                  Bitwise OR
Logical constructs   &&                                 Logical AND                                       Left to right
                     ||                                 Logical OR
Conditional          ?:                                 The ternary operator, or conditional expression   Right to left
Assignment           = *= /= %= += -= <<= >>= &= |= ^=  e1 op= e2; is equivalent to: e1 = (e1) op (e2);   Right to left
Comma                ,                                  Multiple evaluation                               Left to right
The other major case where precedence rules need to be followed is in macros. We shall look at this in a later issue.
Truth or Dare
It is sometimes a great concern of new programmers (in all languages) as to the value of 'true'. We want to know the truth! Over the years, different languages have used different values for 'true': 1, -1, any non-zero number. In C the value of 'true' is 1; the concept is anything non-zero! This means that any expression (such as 'a > b' or 'a != b') which can be 'true' or 'false' will evaluate to the number 1 or 0, respectively. Any time a number is used in a conditional statement, like 'if (a)' or 'while (a)', any non-zero value is treated as true, and zero is the only false case. True can only be considered as 1 in native expressions like greater than, or not equals. Functions, such as isalpha (see part 7, Linux Magazine Issue 20, p62)
return a truth concept (i.e. non-zero), but not necessarily 1. For this reason, a truth comparison should always be made implicitly.

if (isalpha(cInput)) /* this works */
    printf("%c is an alphabetic character.\n", cInput);
An explicit test should not be used.

if (isalpha(cInput) == 1) /* this won't */
    printf("%c is an alphabetic character.\n", cInput);
Now we can handle the truth, let’s see another way to use it.
Lazing on a Sunday Afternoon
Like precedence, lazy evaluators are one of the language features that require an understanding of the spirit of the law, and not just the letter. Lazy evaluators feature in languages other than C, but (in the spirit of the column!) I shall concentrate on their use within C. A lazy evaluator, as the name suggests, will do as little work as necessary to get the job done! So, if an expression like:

if (a && b && c && d)
presents itself, we know through simple logic that should 'a' be false, the entire expression must also be false. As C also knows this, it will evaluate 'a', realise it is futile to consider looking at 'b', 'c' or 'd', and stop, leaving them unevaluated. If the expressions were functions, they would remain uncalled, and increments would not happen. If 'a' is true, however, the evaluator will continue to check the other expressions, exiting at either the first falsehood it finds, or when it gets to the end of the expression and can proudly announce that the whole expression is true! Code like this can often save space by reducing the number of nested checks. For example:

int IsTableFull(struct sTABLE *pTable)
{
    if (pTable) /* make sure the table exists, and protect against NULL pointers */
    {
        if (pTable->iSize == MAX_SIZE)
            return 1; /* a 'true' value */
    }
    return 0; /* 'false' */
}
This routine is not uncommon, and a classic example of where lazy evaluation would help. We need to check the pTable pointer and the iSize value, so we could write:

int IsTableFull(struct sTABLE *pTable)
{
    if (pTable && pTable->iSize == MAX_SIZE)
        return 1; /* a 'true' value */
    else
        return 0; /* 'false' */
}

C will never try to look at pTable->iSize if pTable is NULL, since it will have already terminated its evaluation, and so is safe. Similarly, we can work the same magic with OR.
C will never try to look at pTable->iSize if pTable is NULL since it will have already terminated its evaluation, and so is safe. Similarly, we can work the same magic with OR. if (a || b || c || d)
Here, the moment an expression is true (be it 'a', 'b', 'c' or 'd') the whole thing must be true, using a similar process of logic as above. Again, C works through them from left to right, as with AND. The two cases of AND and OR are the only times when you can guarantee the order in which the expressions will be evaluated. With the cases we saw earlier, of addition and multiplication, it is up to the compiler to choose the order. But here, because it must obey the rules of lazy evaluation, the order will always be left to right.
Leader of the Pack
Up until now we haven't tried mixing types to any degree. There are a couple of reasons for that. First, with the examples we have been doing, it is not necessary. Secondly, it is preferable (from a general coding standpoint) to deal solely with the same type in any particular expression and convert (if necessary) once the task has been completed. This helps improve speed and readability. Like precedence, there is a set of rules in the language that
help produce more optimal code. These rules automatically change types within your code so calculations can be done more efficiently. You should be aware of these to improve your understanding of C. Collectively they are known as the rules of promotion. In an expression such as a+b+c, the compiler will promote each variable to a type suitable for evaluation. It does not change the variable itself, just the way in which it is handled when computing a+b+c. Changed, but to which type? Well, any chars and shorts are instantly promoted to an int for the purpose of calculation, since int is defined to be the natural type for the target processor. Which, as we've seen, is 32 bits on an x86 machine. Even amongst integers, however, there is a pecking order! An unsigned integer in the expression will cause any of its signed counterparts to be upgraded to unsigned status for the length of the equation. This can cause problems, since an expression like 'iFragCount < iBestFragCount' can never be true if iFragCount is unsigned and iBestFragCount is zero, especially since the compiler will not warn you when this happens. This can cause a great deal of grief since the bugs happen so rarely; but it is one of the best arguments for maintaining type consistency throughout the program, and especially within expressions. Moving on, the type long can hold a greater range of numbers than int, so any long numbers will promote everything else to long. Don't worry – nearly there! Despite all these conversions, everything will still get promoted to float should there be any floating-point numbers present. Likewise, any double precision floating point numbers (doubles) will promote their friends to doubles also. Everything promotes upwards to the 'largest' type. To use a colloquialism – they are largin' it! This promotion only works on the right hand side of the equals sign, I'm afraid.

x = a + b + c;
Here, a+b+c may all get promoted to floats or doubles while working out the answer, but if x is only a short, that answer will be truncated (in the same manner as casting) when it gets assigned. This should be obvious since the user has
C tutorial
specified the type of x, and the compiler cannot arbitrarily change it just because the answer doesn’t fit! This rarely causes problems under Linux; but it can on older Unix systems, where an integer is 16 bits, with expressions such as the following:

long x;   /* this is usually 32 bits */
int a, b; /* on old Unix systems, these might be 16 bits */

a = b = 1000;
x = a * b;

Here, although 1000*1000 is 1,000,000 and the long has enough bits to hold it, the 16-bit integer types that are performing this sum can not. So we would need to manually promote one of the integers to a long; that way the normal rules take over – promote the other variable to a long – and perform the calculation using 32 bits, giving it enough precision to get the correct answer:

x = (long)a * b;

Highway 61 Revisited
If you have been reading any source code recently you may have ‘discovered’ some new data types. Namely,

long int
short int

I’m sorry to disappoint you, but these are actually quite ordinary! A long int is the more formal name for a long, whilst short int is the same as short. This stems from the time when a variable did not need to be given an explicit type, and would default to an integer. As a consequence, typing long was the same as long int, since the int part was already implied.

/* An example of old code declaring an integer */
iAnImplicitIntegerVariable; /* notice the lack of type */

Although I personally do not use this form, there is nothing wrong in doing so. Oh, and if you’re thinking of trying it – it will still work as a global variable (with a warning), but not as a local variable. Either way, it’s old and archaic. And like most old things – it smells! So leave it alone!
Boom Shak A Lak ASCII is a very good method of storing data from your program. Whether you use XML or a flat text file, having your data open enough to be interpreted by other programs is an obvious plus that Linux has thrived on for many years. It is unlikely, therefore, that you will want to create binary files for your data. However, in some instances, most notably graphics, binary data is unavoidable. As is the portability problem of endian-ness. Take a four-byte integer, such as:

int iValue = 0x12345678;
/* hex numbers make this easier to follow
   since the value splits nicely into 4 bytes */
This will be stored in four consecutive bytes in memory – but those bytes could be 12,34,56,78 or 78,56,34,12. The x86 architecture uses the latter, and is called little endian. You can always verify this for yourself with the following code:

char *p = (char *)&iValue;
/* use character pointers to read bytes */
printf("%x %x %x %x\n", *p, *(p+1), *(p+2), *(p+3));
Heaven Knows I’m Miserable Now The three phrases that should strike fear into the heart of any programmer are 'implementation defined', 'unspecified' and 'undefined'. When a programming manual, library readme, or code says that the output is 'undefined for this case' it has a very specific meaning. All three mean your program will not (always) work as expected if you ignore their advice, but for different reasons. Implementation defined means it is up to the compiler vendor to pick a method, document it, and stick by it – at least within the current version. They are, however, free to change the behaviour between releases. Unspecified means the compiler writers know what will happen, but haven’t documented it. Undefined means that anything can happen. And it means anything. The results need not adhere to logic, the expression in question, or even the day of the week! Suffice to say, you should never write code that relies on, expects, or follows any of these criteria.
PROGRAMMING
Looking back to our graphics example, if the width of the image has also been stored in little endian form, we have no problem. However, if it was stored in big endian, we could read in our number (0x12345678 is a bit wide for an image, but bear with me!) and find the size was actually 0x78563412. Certainly this is not what is intended! In the real world this situation would be known to us ahead of time (when we are reviewing the file format specification, for example) but our target machine would not. We would then have to check the endian-ness of the machine, and swap the byte order if it failed to match. Two useful functions in this case would be:

int IsLittleEndian(void)
{
    int iValue = 1; /* Simplified version of our test above */
    if (*(char *)&iValue == 1)
        return 1;
    else
        return 0;
}

int SwapInt(int iOriginal)
{
    int iNew;
    iNew  = (iOriginal << 24) & 0xff000000;
    iNew |= (iOriginal <<  8) & 0x00ff0000;
    iNew |= (iOriginal >>  8) & 0x0000ff00;
    iNew |= (iOriginal >> 24) & 0x000000ff;
    return iNew;
}
Because of Intel’s dominance a lot of binary formats are based on little endian, so those running on x86 will have fewer problems than those on, say, PowerPC or Mac architectures. So including endian-specific comments and code is advisable, but difficult to test without having an appropriate machine. Gaining experience in byte swapping on an Intel platform is easy, since the MIDI file format (amongst others) uses big endian numbers. This will demonstrate how much (or little, depending on your view) work is required. The work itself, however, is left as an exercise for the reader! ■
LINUX USER
The Answer Girl
Keyboard Wizardry Is caps lock getting on your nerves? Still looking for a good way to put your Windows keys back to work? Along with other definitive questions… BY PATRICIA JUNG
A
new job, a new computer or just a keyboard that bites the dust after years of faithful service – this is a situation that everyone has to face some time: getting on top of a keyboard that will surely be supplied with Windows keys nowadays. Even if you swap the Windows keys, that will not affect their functionality, or lack of it, depending on your distribution.
The Answer Girl In the world of everyday computing, even under Linux, there are often surprises:Time and again things do not work, or at least not as they’re supposed to.The Answer Girl shows how to deal elegantly with such little problems.
An unused Windows key might be regarded as a slight blemish, but the caps lock key is a downright nuisance that will no doubt cause you to inadvertently SHOUT at your innocent computer from time to time. So why not put the loudmouth to work? You might consider converting the key to a second left shift key, as Caps Lock is often hit by mistake instead of left shift. Some users simply shift the caps lock functions from the Caps Lock key to the left control key. And you can also consider an individual option – but let’s first look at how you can accomplish that. Unfortunately we are not looking at a single solution because the two user interfaces common to Linux, i.e. the character-based console and the X Window
GUI have separate methods of defining keys. If you have previously been required to install a Linux distribution with foreign keyboard mappings or had to add non-standard characters, you will no doubt already have guessed that this task involves tinkering with two different sets of control levers.
Keyboard Assignments without X Most systems load a keyboard definition file shortly after booting. So, if we can find the corresponding command, that should provide us with an answer to this problem. There are at least three different ways to do this: • Feed your favourite search engine with keywords such as keyboard, Linux,
assignment and keytable.
• Search the boot scripts in the init.d directory (normally /etc/rc.d/init.d or /etc/init.d) for a command that should include the word key.
• Search your whatis database for the corresponding command.
The first option is a question of personal preference. Whether the second option will be successful or not depends on your current distribution. Take SuSE 7.2 for example, with its penchant for scripts so complex that normal users have no idea what to look for. You might try the following command:

trish@linux:~ > grep keys /etc/init.d/*
[...]
/etc/init.d/kbd: rc_status && retmsg="`loadkeys $KEYMAP 2>&1`"
[...]
and quickly conclude that the /etc/init.d/kbd file (that is, “keyboard”) is responsible for loading the keytable. But if you just happen to look at this script without really knowing what you are looking for, you will probably feel slightly lost. Depending on which you use, Debian (/etc/init.d/keymap.sh), Red Hat (/etc/init.d/keytable) or Caldera OpenLinux (/etc/rc.d/init.d/keytable), the search results should be far clearer and indicate that the command loadkeys is what you are looking for. You can then search the whatis database using the apropos command or type the following:

trish@linux:~ > man -k keys
loadkeys (1) - load keyboard translation tables
GLOSSARY
Shouting: Whether in email or IRC dialog, most people intuitively view text passages in CAPITALS as the visual counterpart of shouting.
Keyboard Layout: This refers to the way the keys are organized on the keyboard. If you look at an English, Scandinavian, Polish, … keyboard, you see that the top row of letters begins with the keys [Q], [W], [E], [R], [T], and [Y] (this layout is thus referred to as “qwerty”). French keyboards have the [A], [Z], [E], [R], [T], and [Y] keys (“azerty”) in the top row instead.
Listing 1: Excerpt from uk.map
Listing 2: Keycodes for Windows Keys
# uk.map
[...]
include "qwerty-layout"
[...]
#            Normal   Shift   AltGr   Ctrl
keycode  1 = Escape   Escape
[...]
keycode 54 = Shift
keycode 56 = Alt
keycode 57 = space
keycode 58 = Caps_Lock
keycode 86 = backslash bar bar Control_backslash
keycode 97 = Control
and view the corresponding man page to confirm your suspicions. We find that this is the console command for changing the keyboard assignments, which are stored in so-called map files under /usr/lib/kbd/keymaps (SuSE 7.2, Red Hat), /usr/share/keymaps (Debian) or /usr/share/kbd/keymaps (Caldera, SuSE 8.0). But don’t expect to find the map files directly in this subdirectory – instead they are nicely organized by computer architecture (i386, sun, mac etc.) and keyboard layout (qwerty, azerty etc.). The map files stored in these subdirectories (i386/qwerty/uk.map.gz) are gzipped text files that can be viewed with the zless command (Listing 1 shows an excerpt). As you would expect, the # character at the start of a line indicates a comment that has no effect on the functionality. The line reading include "qwerty-layout" is interesting – instead of defining everything from scratch, you can include pre-defined keymaps. Individual key assignments consist of a so-called keycode on the left of the equals sign, and up to four function values on the right: for the key on its own or in combination with the Shift, AltGr and Ctrl keys.
How to Stop Your Console Shouting We already know the keycode for Caps Lock (keycode 58), so we can quickly redefine its function. To do so, we create a file called personal.map, and add an entry to assign the Shift function to the key with keycode 58, leaving
trish@linux:~ > showkey
kb mode was XLATE
press any key (program terminates 10s after last keypress)...
keycode 28 release
[Right_Win_Key]
keycode 125 press
keycode 125 release
[Left_Win_Key]
keycode 126 press
keycode 126 release
[Menu Key]
keycode 127 press
keycode 127 release
the other assignments as defined in uk.map:

include "/usr/share/kbd/keymaps/i386/qwerty/uk.map.gz"
keycode 58 = Shift
(You may need to change the path to uk.map.gz on your machine.) We can now test the new keyboard assignment on the console by typing:

trish@linux:~ > loadkeys personal.map
Loading personal.map
As you can see, the keyboard mapping seems to be working perfectly: Caps lock now performs exactly like the Shift key.
Making Use of those Windows Keys Before we can use the Windows keys on the console, we need to find out their keycodes. Luckily, the loadkeys man page contains an example of the corresponding command showkey in the LOAD KERNEL KEYMAP section. If you now type...

trish@linux:~ > showkey
kb mode was RAW
[ if you are trying this under X, it might not work
  since the X server is also reading /dev/console ]
KDSKBMODE: This operation is not permitted
... the command will not run inside an X terminal: This shows that the keyboard assignments for X are independent of the console. If you try the same command on the console, however (Listing 2), the keycode for the return key (28) will be displayed before we press any other keys. showkey not only shows us the keys we press, but also the keys we release. After compiling the required codes, you need to be patient: Pressing Ctrl-C or Ctrl-D will not quit the program; you just have to wait for 10 seconds.
Now it is simply a matter of giving the keys with keycodes 125–127 something useful to do. And why not have the left Windows key do something similar to its original job, i.e. launch the graphical user interface (although this option does not make much sense for people who use the GUI login, as X is already running in this case). The right Windows key could then display the date and time, and it might be appropriate for the Menu key to display the last 20 commands. You could then choose a command number, add an exclamation mark (and possibly edit the line) and then launch the command.
These are simple tasks that can be launched from a prompt using the startx, date and history 20 commands. The real question is, how can you map all these commands to the appropriate keys? Again the loadkeys man page is a big help. The section following LOAD KERNEL STRING TABLE details how to assign symbols for the non-existent function keys F100, F101, and F102 to the corresponding keycodes:

keycode 127 = F102

You can now assign a string for this key, for example:

string F102 = "history 20"

Figure 1: Assigning keyboard shortcuts in KDE 2.1.2
After loading the modified keymap, you can simply hit the Menu key to write the string history 20 in the command line. Just press Enter to confirm, and launch the command. We would prefer not to have to hit the Enter key; in other words, when we hit the Menu key, we want to actually launch the command history 20. The character that the Enter key writes is an end of line. We could solve this issue by including the character in our string. Thinking about outputting text strings to the command line brings the echo command to mind, and we can use \n (“newline”) to enter a new line:

trish@linux:~ > echo -e "foo\nbar"
foo
bar
GLOSSARY case: To translate the algorithm “If the content of variable 1 is 'start' or 'reload', do this, and if the variable contains 'stop' do that” to valid Bash syntax, you need the following:
case $1 in
  start|reload) this
  ;;
  stop) that
  ;;
esac

Standard Runlevel: The runlevel that a Linux machine boots to by default is defined by the number in the “id:5:initdefault:” line in /etc/inittab. Runlevel 5, shown in our example, is an operating mode that allows multiple users to work simultaneously (multiuser level), where an X server is automatically launched and networking is permitted. The machine will shut down in runlevel 0, and reboot in runlevel 6. Runlevel 1, the single user mode, is reserved for system maintenance – where only basic services are launched and only the user root can perform necessary maintenance tasks. Any other runlevels differ from distribution to distribution and may be individually defined.
Link: Another name for an existing file. Symbolic links can be created using the ln -s file secondname syntax.
Figure 2: Assigning actions to keyboard shortcuts in KDE 3.0
Listing 3: personal.map
Figure 3: Hotkeys for Actions in KDE 3 are defined here
# Our Code Map is based on
include "/usr/lib/kbd/keymaps/i386/qwerty/uk.map"
# (this path refers to SuSE 7.2 and may need editing)
Figure 4: To modify an existing scheme, simply keep the original name
The plan seems to work fine! string F102 = "history 20\n" in the keymap (which you will need to reload after editing using loadkeys) means that:

trish@linux:~ > [Menu]
926 history 20
[...]
942 echo -e "foo\nbar"
943 vi personal.map
944 loadkeys personal.map
trish@linux:~ > !943
really does launch vi with the personal.map file.
# Reassign CapsLock to Shift
keycode 58 = Shift
# Right Windows key launches X
# (path for startx may need editing!)
keycode 125 = F100
string F100 = "/usr/X11R6/bin/startx\n"
# Left Windows key outputs the date and time
keycode 126 = F101
string F101 = "date\n"
# Context Menu Key outputs the last 20 commands
# Shift or CapsLock and Menu Key outputs the current directory
# content. Type q to quit.
keycode 127 = F102 F103
string F102 = "history 20\n"
string F103 = "ls -Al | less \n"
Box 1: .Xmodmap by GUI Using xmodmap to approach the goal of a “generic”keyboard layout for X will certainly not appeal to everyone. If you are lucky, you may find a program called xkeycaps pre-installed on your machine that helps add some GUI to the grind. If not, you can download it from [1] or from the CD which accompanies the subscription magazine, use tar -xzvf xkeycaps-2.46.tar.Z to unzip it and then cd xkeycaps-2.46 to change to the directory containing the sources.
Reassigning Keys on Booting Since keyboard mappings assigned via loadkeys are retained after logging out, you will want to define a single keymap for all users of a system. It also makes sense to load a modified version of a tried and tested map that suits your individual needs (use the examples in Listing 3 for reference) when you boot your machine. To this end root zips the individual personal.map using gzip and stores it in the system’s keymap directory. However, if you are using SuSE (up to and including 7.3) and modify the KEYTABLE entry in /etc/rc.config, then the distributor has a surprise for you. After performing standard mapping, SuSE loads a few additional key assignments including some for the Windows keys (as you can ascertain by typing dumpkeys | less). As the SuSE init script, /etc/init.d/kbd, is the only script that performs keyboard assignments, we recommend entering the loadkeys personal.map.gz command here. Although this file is somewhat cryptic, it
Since the archive offers neither a configure script nor a makefile, but simply an Imakefile, you will find the README file quite useful. You can use xmkmf to create a makefile from the latter, which you can then compile using make. The make option -n (which incidentally has the same meaning in xmodmap) does not mean what it says, so we do not need root privileges to check what would happen if we installed the tool:

pjung@chekov:~/software/xkeycaps-2.46$ make -n install
if [ -d /usr/X11R6/bin ]; then set +x; \
else (set -x; mkdir -p /usr/X11R6/bin); fi
install -c -s xkeycaps /usr/X11R6/bin/xkeycaps
echo "install in . done"

If we really want to copy the new xkeycaps binary to the /usr/X11R6/bin directory (which we can create if needed), root must issue the install command make install (and, to additionally install the man page, make install.man). Launching the program xkeycaps & in an X session displays a virtual keyboard (Figure 7) that can be more closely specified using the “Select Keyboard” option. The miniature keyboard image in the Keyboards column is provided for comparison. The standard US keyboard is best described by the entry 105 key, wide delete, tall Enter, and the appropriate layout is defined in the Layouts: column as XFree86; US. You are then required to confirm your selection by clicking on ok, and the miniature keyboard is then modified to match. If you now right click on a key image, you can select Edit KeySyms of Key and then define the required mapping interactively. It is useful to know that the middle key on your mouse can be used for speed scrolling in the dialog box shown in Figure 8.
An entry for KeySym 1 maps a single key; KeySym 2 assumes that the Shift key is pressed simultaneously, and KeySym 3 refers to the [AltGr] key. When selecting options you should be aware that you can only output characters if the corresponding character sets are available: characters from character set Latin 1 are safe enough, but Cyrillic or Arabic characters may lead to weird substitutions. When you have finished your configuration tasks, click on Write Output (Figure 7, upper left) to write the complete layout, or on (Changed Keys) to write just the reassigned key mappings, to a file called ~/.xmodmap-machinename that you can call via xmodmap in your X initialization file.
does adhere to the syntax of a start stop script, where a case construct defines the action to perform when the script is called using the start stop argument, or something similar. In our case, we can just ignore the mess and insert our loadkeys line before the ;; that marks the end of the start branch.
Other distributions, like Red Hat, are easier on the system administrator and offer a script named /etc/rc.d/rc.local that is called right at the end of the boot procedure after loading all other system settings. The end of this file is the ideal place to load an individual keymap.
And of course you can save an init script of your own with the following content:

#!/bin/sh
loadkeys personal

(the suffix map.gz is normally optional and some distributions require you to omit it) in the appropriate init.d directory, find the appropriate standard runlevel in /etc/inittab and create a Link for it in the appropriate rc?.d directory (i.e. rc2.d for runlevel 2). The name of the link must start with an S (for “start”) and a fairly high number. Thus, a link called S100keymap that points to the init script will be called after any links starting with S01 through S99.

Keyboard Mappings for KDE
No matter what settings you choose for the console, they will not affect the way the keyboard reacts in an X window session. If you are simply interested in the three Windows keys, the first place to look would be the KDE Control Centre, which allows you plenty of leeway to assign specific actions to keyboard shortcuts via LookNFeel / Key Bindings (KDE 2.1.2, see Figure 1) or Look & Feel / Shortcuts (KDE 3.0, see Figure 2).
It makes sense to use one of the predefined schemes, preferably the one that most closely complies with your way of working. You then mark the action that you want to assign to a hotkey (or keyboard shortcut). Then select Custom (KDE 3.0) or Custom_Key (KDE 2.1.2) in the lower frame of the dialog box Choose a Key for the Shortcut Action.
KDE 2 does impose a few limits on your creativity. If you are not satisfied with a single key (click on the stylized key button in Figure 1 and press the desired key on your keyboard), you can only use combinations with the Alt, Ctrl and/or Shift keys.
Users who tend to hit the Menu key by mistake will not want to assign the action Dropdown Menu to the Menu key, as shown in Figure 1, but can continue to strain their fingers by typing Shift-Ctrl-Alt-Menu to open the K Menu.
Now click on Save scheme... to assign a name to the new configuration and avoid modifying one of the existing schemes by mistake. Then click on the Apply button to make your new keyboard mappings available for use.
KDE 3 is somewhat more flexible. If you click on one of the stylized keys in the dialog box shown in Figure 2, a slightly cryptic dialog box appears (Figure 3). If you intend to assign a single key to the selected action, remove the checkmark from Multi Key, and you will only be allowed to press a single key on your keyboard (the Menu key, for example) to perform an action such as Dropdown Menu. If you are defining a combination of keys, you must first select Multi Key and then put some thought into the keys you intend to press in order to avoid finger strain. You can use the Alternate option to define a second hotkey for the same action.

Listing 4: Keyboard Mappings in XF86Config
# Definition for a PC-Keyboard with 104 keys and British layout.

# XFree 3
Section "Keyboard"
Protocol "Standard"
XkbRules "xfree86"
XkbModel "pc104"
XkbLayout "gb"
XkbVariant "nodeadkeys"
EndSection

# XFree 4
Section "InputDevice"
Driver "keyboard"
Identifier "Keyboard[0]"
Option "Protocol" "Standard"
Option "XkbKeyCodes" "xfree86"
Option "XkbModel" "pc104"
Option "XkbLayout" "gb"
Option "XkbVariant" "nodeadkeys"
EndSection

Figure 5: KDE 3 integrates the xmodmap output into the standard KDE control panel
Figure 6: Move the Cursor into the Black Square and hit a Key

If you change the keyboard shortcuts for a pre-defined scheme, you will notice that KDE 3 immediately activates and selects the New Scheme option. To assign a name to the new scheme, simply click on Save (Figure 2). However, you still have to take the roundabout route via New Scheme and the Save button, even if you want
to modify an existing scheme. In this case you keep the original name in the Save Key Scheme dialog box (Figure 4), instead of supplying a new name, as you would do to create a new scheme. Back in the Control Center, simply click the Apply button to apply the new or re-defined scheme to KDE 3.

No Caps Lock in X
But all of these modification options will not help you at all if you use GNOME, XFce or a standalone window manager, as these settings do not apply outside of KDE. Additionally, you cannot re-define the Caps Lock key using the methods we discussed previously. The solution to this problem is to change the foundation, i.e. the X Window system. The basic configuration file XF86Config (normally to be found in the /etc or /etc/X11 directories and sometimes carrying a 4 suffix in XFree 4) is the place to look for the basic settings, such as the keyboard type and the mappings. The syntax for the entries in these files differs between versions 3 and 4 of XFree86 (Listing 4).
But like so many other global system settings, the keyboard mappings set for all X users are by no means a holy cow. You can use:

trish@linux:~ > apropos key | grep -w X
xmodmap (1x) - utility for modifying keymaps and pointer button mappings in X
setxkbmap (1x) - set the keyboard using the X Keyboard Extension
xkeycaps (1x) - graphically display and edit the X keyboard mapping

to discover an extremely interesting entry: xmodmap is used to define individual keyboard layouts. (Refer to Box 1 for details on xkeycaps.) Typing xmodmap on the command line shows you the keys used as modifiers. You can see the output from xmodmap in the Shortcuts Modifier Keys tab under XModifier Mapping in the KDE 3 Control Center (Figure 5). To shift between the two basic settings (i.e. between capitals and non-capitals in the case of letters), you can use the left (Shift_L) and right (Shift_R) Shift keys. The Caps Lock key locks the keyboard in the Shift position (lock), and so on. To counteract this effect we must now remove Caps_Lock from the list of lock modifiers. After referring to the xmodmap man page, we try the following syntax:

xmodmap -e "remove lock = Caps_Lock"

Figure 7: Using xkeycaps to create .Xmodmaps
And as you will see, Caps_Lock is now missing in the output from xmodmap:

xmodmap: up to 3 keys per modifier, (keycodes in parentheses):
shift Shift_L (0x32), Shift_R (0x3e)
lock
[...]
GLOSSARY
grep -w: This command searches the data stream piped (|) to it, or a file supplied as an argument, for the search key (in our example X), provided it occurs as a single word.

Figure 8: You should restrict your keyboard mappings to displayable characters only
If we now add Caps_Lock to the shift list, using xmodmap -e "add shift = Caps_Lock", a quick test shows that we have now tamed the beast, which acts just like a Shift key in any application running on the current X Server. A call to xmodmap thus shows:

shift Shift_L (0x32), Shift_R (0x3e), Caps_Lock (0x42)
A New Home for Braces and Brackets
Redefining the Windows keys means setting our sights a little lower than what we have previously seen in KDE. Opening menus, changing virtual desktops and the like belong to the Window Manager’s realm. We can tell the X Server to output braces with the Windows keys on our keyboards in combination with the AltGr key, whereas with Shift-Windows we can produce left and right brackets.
To do this, we first need the keycodes for the Windows keys. This is done using the simple xev program (Figure 6) referred to by the xmodmap man page, provided you are not confused by the fact that the program, run in an X terminal, shows not only keypresses but also mouse events. Listing 5 shows sample output for pressing (KeyPress event) and releasing (KeyRelease event) the left Windows key (keycode 115) and the equivalent key on the right (keycode 116), as long as the mouse focus does not move outside of the black square in the xev window.
Now we only need the symbols for the braces and brackets. Since these are already assigned to [AltGr-7] ({), [AltGr-8] ([), [AltGr-9] (]) and [AltGr-0] (}), it should be no problem to query xmodmap for them. xmodmap -pke | less (-pke stands for “print keymap expression”) should do the trick:

keycode 16 = 7 ampersand braceleft seveneighths
keycode 17 = 8 asterisk bracketleft trademark
keycode 18 = 9 parenleft bracketright plusminus
keycode 19 = 0 parenright braceright degree

That looks a lot like the map file for the console and shows you how to assign an X task to a keycode. The third entry following the equals sign appears to be the name of the brace or bracket (“brace” – curly brackets, “bracket” – square brackets). This being the case, the following syntax:

xmodmap -e "keycode 115 = braceleft bracketleft"
xmodmap -e "keycode 116 = braceright bracketright"

should help us achieve our goal.
Saving the Changes Of course you could place all of these xmodmap commands in your personal X startup file ~/.xinitrc and/or ~/.xsession. But the syntax shown on the xmodmap man page…
xmodmap [-options ...] [filename]

... suggests that you might like to place your keyboard assignments in a single file (Listing 6). The man page suggests ~/.xmodmaprc, and distributions such as SuSE take care of parsing .Xmodmap in your home directory while starting X via xmodmap. If you look at the man page, you may note that comments are not indicated by a hash sign #, but by an !.
You can now use an xmodmap ~/.Xmodmap entry in ~/.xinitrc (if this is not the default setting for your Linux distribution) to take care of loading the appropriate key maps on starting X with startx. If you use a GUI login, you will need to place this command in ~/.xsession instead. As a KDE user you will notice that the Windows keys are no longer available for KDE. They have been re-defined to output braces and so it is not a good idea to use them as hotkeys. ■

INFO
[1] xkeycaps: http://www.jwz.org/xkeycaps/
Listing 5: xev Output of Windows Keys

KeyPress event, serial 28, synthetic NO, window 0xe00001,
  root 0x2c, subw 0xe00002, time 1060529679, (30,37), root:(34,57),
  state 0x0, keycode 115 (keysym 0xffe7, Meta_L), same_screen YES,
  XLookupString gives 0 characters: ""
KeyRelease event, serial 28, synthetic NO, window 0xe00001,
  root 0x2c, subw 0xe00002, time 1060529913, (30,37), root:(34,57),
  state 0x40, keycode 115 (keysym 0xffe7, Meta_L), same_screen YES,
  XLookupString gives 0 characters: ""
KeyPress event, serial 28, synthetic NO, window 0xe00001,
  root 0x2c, subw 0xe00002, time 1060531499, (30,37), root:(34,57),
  state 0x0, keycode 116 (keysym 0xff20, Multi_key), same_screen YES,
  XLookupString gives 0 characters: ""
KeyRelease event, serial 28, synthetic NO, window 0xe00001,
  root 0x2c, subw 0xe00002, time 1060531733, (30,37), root:(34,57),
  state 0x0, keycode 116 (keysym 0xff20, Multi_key), same_screen YES,
  XLookupString gives 0 characters: ""
Listing 6: Personal ~/.Xmodmap

! Do not lock on pressing Caps_Lock
remove lock = Caps_Lock
! ... use Caps_Lock as an additional Shift key instead
add shift = Caps_Lock
! left Windows key = {, plus [Shift] = [
keycode 115 = braceleft bracketleft
! right Windows key = }, plus [Shift] = ]
keycode 116 = braceright bracketright
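Before loading a map file of this kind it can be worth checking what xmodmap will actually see. The sketch below (the file name and path are just examples, not part of the setup above) recreates the keycode lines from Listing 6 and filters out the ! comments; the commented-out xmodmap call at the end is the step that needs a running X session:

```shell
# Recreate the active part of Listing 6 in a scratch file.
cat > /tmp/Xmodmap.test <<'EOF'
! left Windows key = {, plus [Shift] = [
keycode 115 = braceleft bracketleft
! right Windows key = }, plus [Shift] = ]
keycode 116 = braceright bracketright
EOF
# xmodmap treats lines starting with ! as comments,
# so only the keycode lines are active expressions.
grep -v '^!' /tmp/Xmodmap.test
# xmodmap /tmp/Xmodmap.test   # load it (requires a running X session)
```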
September 2002 www.linux-magazine.com
Ktools
LINUX USER
KDE Standard Editor Kate
Let’s Edit

Thanks to syntax highlighting and similar tricks, editing with KDE’s “Swiss Army Knife” Kate can be fun. The editor can even cope with simple programming tasks. BY STEFANIE TEUFEL & PATRICIA JUNG
Linux editors are ten a penny. Whether command line or GUI based, the wide selection ensures that everyone will find something to suit their individual needs. Even if you have a personal favorite, it may be worth your while to take a second look at Kate (“KDE’s Advanced Text Editor”). One of Kate’s major features (and this is true of every KDE Kernel application) may well be the program’s perfect interaction with the desktop. You can concentrate on the job in hand and simply forget about trivial tasks, such as how to open files. Simply drag a file from the desktop, an FTP site you are accessing in Konqueror or even from a local directory currently on view and drop it into an active Kate window – and you are up and running. You can then view the open file and start the work that you have in hand.

No need to worry about installing this text tool as Kate belongs to the basic KDE package kdebase. You might also like to install the kdeaddons package, which will give you access to the various plugins. But you need to launch the program before you can load a file. You can either launch Kate via the Start menu (Office / Editors / Kate) or the command line. The latter option is particularly useful for opening files directly from the Web. In this case you can connect to the Internet and use your favorite terminal emulation (how about Konsole?) to type a line such as the following:

kate http://linux01.gwdg.de/U
~steufel/index.html &

This should give you access to the index file on Stefanie Teufel’s home page, as you can see in Figure 1. Of course you will be working with a local copy of the HTML code. If you need more information on Kate’s command line prowess, you can satisfy your curiosity by typing kate --help. The kate --help-kde command is a nice feature that tells you how to set the caption of the window (--caption) or even customize the icon of the program.

A Question of Taste

You can use the Settings / Configure Kate menu to tell Kate all about your personal preferences, in regard to color schemes for editing files, indenting and similar actions. Figures 2a and 2b show the dialog boxes for KDE 2 and 3. The General Options section does exactly what its name suggests. If you want to restore the frames and views exactly as you left them in the previous editing session (Restore View Configuration), and optionally open the last file
KTOOLS
In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn’t want to do without.
you edited when you relaunch Kate – this is where you will find all the options that you need. If you use KDE 3 you can additionally decide to edit any files you wish to open in a single (Kate MDI) window or separate (Kate SDI) windows. If you are not sure what all these options mean, you can simply right click on the text. The drop-down menu that then appears shows a single “What’s This?” help item, as you can see in Figure 2b.

Figure 1: Out of the Internet into the Editor

The next major item, Editor, affects the way content is displayed inside the Kate windows, allowing you to set the Colors and Fonts. However, you can also use this option to set your preferences for one of Kate’s highlights – color display for source code that can be specially adapted to suit a large number of programming languages, such as C/C++, Java, Python, Perl or HTML. And it is this Syntax Highlighting feature that makes Kate a fairly useful programming tool.

Highlights Cheaper by the Dozen

You can use Highlighting to define how and what Kate highlights. The dialog box contains two tabs: Defaults (or Default styles in KDE 3) and Highlight Modes. The first of these options allows you to modify the appearance of common elements and select a color scheme and typeface (bold, italic) for comments, strings and data types in your program code. Take your time with these settings – careful planning can make your life easier when programming or designing web sites by providing you with a clear overview of the source code.

The Highlight Modes tab (see Figures 3a and b) allows individual settings depending on the selected programming or markup language. But don’t panic – you do not need to individualize all the available options. Kate will simply use defaults for any elements that you have not individually configured. Start by selecting the programming or markup language in the drop-down menu Highlight at the top of the dialog box whose elements you need to redesign. Kate immediately shows the file suffixes and MIME types that will be assumed to be written in this language. Then use Item Style (Context Style for KDE 3) to select the element types for the current language whose aspects you would like to change. If you are running KDE 2 you can use the drop-down menu under Item in the Item Style area (Figure 3a); KDE 3 offers a table whose cell contents can be modified by mouse click (Figure 3b).
Figure 2a: Getting sorted: The Kate Configuration Menu for KDE 2 …
If all these options are giving you a headache, it might be useful to take a look at an example. To change the appearance of HTML comments in Kate, select HTML in the drop-down menu Highlighting. For KDE 2 you need to select Comments in the Element drop-down (Figure 3a). Now click on the color buttons to define a color for displaying HTML comments and check the typefaces you require. KDE 3 also offers the same settings, but in a more organized fashion (see Figure 3b) allowing selections within the drop-down menu for individual lines. You can also switch highlighting modes on the fly via the Document / Highlight Mode menu item. The developers even went to the trouble of categorizing the modes by main areas of use such as Sources for the classical programming languages like C, C++ and Java or Markup for markup languages like HTML or XML.
Those Configuration Options Keep on Coming

Like the Editor / Edit option in the Kate configuration dialog box (Figure 4; the corresponding dialog in KDE 3 has simply dropped one or two confusing options). In addition to the standard functions such as setting automatic line wrapping or the stripping of whitespace at the end of lines, Kate also offers a few gimmicks such as the Smart Home item. If you activate this function, the cursor moves to a position in front of the first non-whitespace character, instead of moving to the beginning of the line.
Figure 2b: … and KDE 3 is nicely organized
Daily Business

Working with Kate is not much different from working with any other editor. You can modify, insert and copy to your heart’s desire. If you want to save the file you are working on or possibly print it, check out the File menu. If you need to copy, insert or search for something, check out the Edit menu instead (see Figure 9). This menu also allows you to access a specific point in the current document (Goto line...) or to backtrack to a former version using Undo. This is the place to look for those search functions. One of Kate’s special features is the Find Next option that allows you to continue to search for a search key you entered previously. The corresponding function Find Previous does the same thing, only this time in the opposite direction.

If you stumble across a few typos while browsing a document, you can use Kate’s integrated spellchecker to check those documents with purely textual content (Figure 10). The tool you need for this job is accessed via the Edit / Spelling... menu in KDE 2 and via Tools / Spelling... in the Kate version for KDE 3.

It is just as easy to specify the format when saving the current document – the options are Macintosh, Unix or DOS-Text. The difference between these three is the end-of-line character, which is either CR, LF or both. Just let Kate know how you want to handle this point via Document / End of Line.

Points of View

One of the major reasons for using Kate is the number of options for displaying or accessing files. This immediately becomes apparent if you open a file – as mentioned at the outset of this article. The left-hand side of the Kate window provides two tabs with other variants (Figure 5): a file list showing the currently opened files and the file selector that allows for easy browsing of the file system and helps you to locate the required files (Figure 6). Incidentally, in KDE 3 you can click on the thicker border of these cards (just like in other windows and the toolbox), drag them out of the Kate window and drop them on the desktop as a separate window.

Just as in Konqueror you can split the application windows in Kate to form multiple editor sub-windows. Split views (see Figure 7) are available via the menu items View / Split Vertical and View / Split Horizontal; a nice feature if you need to compare or transfer text between multiple documents. The green dot in the status bar shows the window containing the cursor. To close a window you simply click on the square icon in the tool bar.

If you need access to the command line while editing, you will appreciate the fact that you can embed a terminal emulation window in the editor. To do so, you just select Settings / Show Console (or Show Terminal Emulator in KDE 3) in the menu, and Kate will immediately display an integrated console like the one in Figure 8.

GLOSSARY
CR/LF: “Carriage Return” (originally referred to the carriage on a typewriter) and “Line Feed” are the two ASCII characters that are embedded in a file to indicate the end of a line. They are normally invisible in text editors apart from the obvious effect of starting a new line.

Figure 3a: Defining the appearance of comments in HTML documents with KDE 2
Figure 3b: KDE 3’s configuration dialog is much tidier, giving a cleaner layout overall
Figure 4: Line Wrapping and Brackets
Figure 5: You can choose between files already opened…
Figure 6: … or files at other locations on your machine

One of Kate’s useful features is the option of bookmarking the documents you are working on. Bookmarks in the text allow you to jump quickly to a predefined point. First, select View / Show Icon Border in the menu. You can left click on this column to attach a paper clip (Figure 11). Your bookmarks are then listed with the line number and a text excerpt under Bookmarks in the menu, where you can select them directly.

Figure 7: Different viewpoints in split view mode
Figure 8: Kate and Konsole

Just like most KDE programs Kate allows you to assign keyboard shortcuts for various actions and activities. In KDE 2 you simply select Settings / Configure Key Bindings… and in KDE 3, Settings / Configure Shortcuts. KDE 3 additionally – and confusingly – offers another setting in the configuration menu (Settings / Configure Kate). The Editor / Keyboard item can only be used to configure keyboard shortcuts for cursor movements in the current document, whereas Shortcuts refers to the application itself (opening and closing files, spell checking and so on). If you happen to be too lazy to define your own preferences, you can refer to Table 1 for an overview of the default keyboard mappings.
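The end-of-line formats offered under Document / End of Line can also be handled outside Kate. A minimal sketch with standard tools (the file names are made up for the demonstration): deleting the CR characters turns a DOS-Text file into a Unix one.

```shell
# DOS-Text ends each line with CR+LF, Unix format with LF alone.
printf 'one\r\ntwo\r\n' > /tmp/dos.txt     # CR+LF endings: 10 bytes
tr -d '\r' < /tmp/dos.txt > /tmp/unix.txt  # LF endings only: 8 bytes
wc -c /tmp/dos.txt /tmp/unix.txt
```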
Plugged in

Plugins make Kate a special editor. The developers decided on this design to make the editor as

Figure 9: Some items in the Edit menu for KDE 2 have found a new home in KDE 3
Figure 10: Now, how do you spell that?
Figure 11: Virtual bookmarks
Figure 12: Choosing a plugin
quick as possible and to maintain a small footprint on the one hand, and to provide as much functionality as possible for the users on the other hand. This allows the users freedom of choice on the amount of ballast that they want to load when launching Kate. You can select the plugins Kate should load for you by accessing Settings / Configure Kate / Plugins / Manager (Figure 12). If this window is empty, you will need to load the plugins by installing the kdeaddons package. To load a plugin (provided it has been installed and is recognized by Kate), you simply click on the plugin’s name in the Available Plugins section, and then on the right arrow button. The plugin immediately appears in the Loaded Plugins area on the right. To remove a plugin simply select its name and click on the left arrow button. To immediately activate a plugin, simply
click on the Apply button; you will be able to work with the selected plugins immediately. If a plugin offers some other additional configuration options, as in the case of the Insert Command plugin, you can select these options as a subitem of the Plugin configuration item. The subordinate item for the plugin, Insert Command, allows you to set the number of commands that Kate will place in history. The plugin itself inserts shell output in the current document. You can launch the plugin via the Edit menu. Just select Insert Command to display a dialog window like the one in Figure 13, where you can enter the desired command and the directory where you want it to run. Kate will take care of everything else, as you can see in Figure 14. If you intend to use Kate for editing HTML documents, you might also be interested in the Kate HTML Tools. This plugin provides the additional HTML Tag functionality via your Edit menu. If you select this item, a window like the one in Figure 15 appears, where you can type the name of an HTML tag (such as title). Kate will then take care of placing the necessary start (<title>)
and end tags (</title>) at the current position in the document. HTML authors will be pleased that the developers have assigned the keyboard shortcut [Alt -] to this function. KDE 3 users will discover that the XML plugin has already been integrated, but this is the only new plugin available in this version. C and C++ programmers will find the “Kate OpenHeader” plugin that adds the item OpenHeader in the file menu useful. Selecting this item opens the .cpp or .c file for the header file that is currently being edited and vice-versa. The Kate Project Manager adds a menu called Project which you can use to maintain simple programming projects, while the Kate text filter filters and thus
Figure 15: Creating HTML tags made simple
replaces the text selected by reference to a shell command entered via the new menu item (Edit / Text filter...). But take care using this plugin in KDE 3; when we forgot to select a text passage to filter, Kate crashed when running on SuSE 7.2. Kate does not have a lot to offer in the way of third-party plugins at present, although a single plugin for Python programmers is available from [1]. The trick is to keep your eyes open. ■
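The text filter plugin described above is, in effect, the familiar shell pipe applied to a selection: the selected lines go to the filter command’s standard input and are replaced by its output. The same round trip on the command line, with sort standing in for an arbitrary filter command:

```shell
# Three "selected" lines piped through sort - what the plugin
# does to a text passage inside Kate.
printf 'pear\napple\nquince\n' | sort
```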
Table 1: Default Keyboard Mappings

[Ins]  Toggles between Insert and Typeover modes. In Insert mode the editor writes the text entries at the current cursor position and moves the text right of the cursor to the right. In Typeover mode the characters right of the cursor are overwritten by the new text entries.
[Left arrow]  Moves the cursor one character to the left.
[Right arrow]  Moves the cursor one character to the right.
[Up arrow]  Moves the cursor one line up.
[Down arrow]  Moves the cursor one line down.
[Page up]  Moves the cursor one page up.
[Page down]  Moves the cursor one page down.
[Backspace]  Deletes the character left of the cursor.
[Home]  Moves the cursor to the start of the line.
[End]  Moves the cursor to the end of the line.
[Del]  Deletes the character right of the cursor (or the selected text).
[Shift-Left Arrow]  Selects the text one character to the left.

Figure 13: The Insert Command plugin dialog box in KDE 2
[Shift-Right Arrow]  Selects the text one character to the right.
[F1]  Help (not pre-defined in KDE 3).

Figure 14: Inserting the output from a configure script in a Kate document

INFO
[1] Third-party plugins for Kate: http://www.kde.org/kate/3rdparty.html
[F7]  Show Console.
[F6]  Show Icon Border.
[Ctrl-F]  Open Search dialog box.
[F3]  Continue search forwards.
[Shift-F3]  Search for previous instances of last search string.
[Ctrl-C]  Copy the selected text to the clipboard.
[Ctrl-N]  Create a new document.
[Ctrl-P]  Print current document.
[Ctrl-F4]  Close current editor window.
[Ctrl-R]  Search and replace.
[Ctrl-G]  Goto line…
[Ctrl-S]  Save current document.
[Ctrl-V]  Paste content from clipboard.
[Ctrl-X]  Cut selected text and copy to clipboard.
[Ctrl-Z]  Undo last step.
[Ctrl-Shift-Z]  Redo last step.
xlhtml
No doubt you have been through this scenario time and time again. A friend or someone from the office sends you a CD list, club statistics or even a recipe with the message: “You probably use Excel, so I’m attaching the file in .xls format.” Now, as a Linux user you might not use Excel at all (or not feel like rebooting), and you may not want to launch a processor beast like StarOffice – so you have a problem. Other aggravating situations may occur when you attempt to access statistics placed on the Web by local authorities or government, as they too will often rely on “standards” à la Redmond. Sooner or later, when the pressure starts to build, you will probably start looking for a suitable tool – you could try Freshmeat for example, my personal favorite site for Open Source software. And that is where you will find xlhtml, a tool written by Charles Wyble that looks just the job for Excel-plagued Linux users. The program converts Excel spreadsheets to HTML, allowing you to view them in a standard web browser.
Out of the box
eXcellent

It has been quite a while since we introduced antiword – a filter for Word documents – in this column. A similarly useful piece of software for Excel spreadsheets has been sorely missed so far. Enter xlhtml to close that gap. BY CHRISTIAN PERLE
A Trip to Chicago

Xlhtml maintains a homepage at http://chicago.sourceforge.net/xlhtml/. And that is the place to download the source archive that you will need to install the software. Other pre-requisites are of course the GNU C Compiler and the usual suspects such as make and the glibc-dev package. The installation is a matter of a few simple steps:

tar xzf xlhtml-0.5.tgz
cd xlhtml-0.5
./configure
make
su
(Input the root password)
make install ; exit

Instead of the last line you can also use the tool discussed in a recent “out of the box”, checkinstall [1]:

checkinstall ; exit

OUT OF THE BOX
There are thousands of tools and utilities for Linux. “Out of the box” takes a pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.
You can then use your distribution’s package manager to remove the program, if required. After completing these steps, you will discover that xlhtml has been placed in /usr/local/bin, a directory that should be mapped in your PATH variable.
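Whether /usr/local/bin really is in your search path is easy to verify; command -v reports where the shell finds a program (sh is used below because xlhtml may not be installed on your machine):

```shell
# The directories the shell searches, one per line ...
echo "$PATH" | tr ':' '\n'
# ... and where a given command is found within them.
command -v sh
```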
Trial Run

You will need an Excel document to test xlhtml. If you happen not to have one available on your hard disk, you can use the Test.xls file from the xlhtml archive.
This spreadsheet uses all the special effects currently supported, for example colors, fonts and various attributes. Type the following to start your first test

xlhtml Test.xls > Test.html
and view the HTML file Test.html created by this command in a browser of your choice. Figures 1 (Excel for Windows) and 2 (the xlhtml view in Netscape) show a test performed with a simple Excel file which includes some cell errors. But not only GUI web browsers can be used to view the spreadsheet created by
Figure 1: The spreadsheet viewed on a hostile operating system
Figure 2: The converted spreadsheet in Netscape
xlhtml. Even console based programs of the same ilk, such as w3m or links can show their prowess, when confronted with documents of this type (Figure 3).
The cutting table and third-party formats

If you are only interested in a specific part of the spreadsheet and you know the row and column references, you can use the -xr (“extract row”) and the -xc (“extract column”) options to pass instructions on to the program. There is also an -xp (“extract page”) option that allows you to select specific pages in the document. To extract rows 2 through 5 and columns 0 through 2 from the Book1.xls spreadsheet
and view the results directly on the console, you can type the following (if you have installed w3m):

xlhtml -xr:2-5 -xc:0-U
2 Book1.xls | w3m -T U
text/html

This command assumes that xlhtml will output the HTML page to standard output and the w3m browser will read directly from standard input. The pipe character (“|”) is used to redirect the output to the following command. You use the -T option to give w3m details on the data stream format – text/html refers to HTML in this case.

Xlhtml offers additional output format options. The -xml option converts Excel to XML (“Extensible Markup Language”), -csv creates “Comma Separated Values”, and the -asc option creates pure ASCII text. The last two formats are only available with the -x options. If you want the content of the spreadsheet called cdlist.xls in a text-only format, you type:

xlhtml -asc -xp:99 cdlist.xls

Since we want to read the whole document, the value for the -xp argument must be larger than the actual number of pages.

Midnight Commander can do

Just like the Word filter antiword [2], we can use xlhtml as a filter for the built-in file manager, Midnight Commander (mc). To do so, you simply add the following two lines to the ~/.mc/bindings file:

shell/.xls
View=%view{ascii} xlhtml %f U
| w3m -T text/html -dump

To view Excel spreadsheets in mc, you simply select the file and press the [F3] key.

INFO
[1] CheckInstall: “Say Hello Wave Goodbye”, Linux Magazine Issue 22, p78
[2] Antiword: “Against It!”, Linux Magazine Issue 15, p82

Powerpoint

xlhtml’s author is a very hard-working guy: The source archive for xlhtml not only contains the Excel converter, but also a program called ppthtml. This can convert Powerpoint files to HTML. To call the program, you can use the following syntax:

ppthtml powerpoint_file.ppt > U
html_file.html
But don’t expect too much – the current version merely extracts the text. ■
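The row and column extraction that -xr and -xc perform can be mimicked on plain text with standard tools. The sketch below uses a hypothetical CSV stand-in for a spreadsheet, not a call to xlhtml itself: sed picks the row range, cut the column range, and the pipe passes standard output to standard input exactly as described above.

```shell
# A made-up four-row, four-column table as a CSV stand-in for Book1.xls.
printf 'r%d,a%d,b%d,c%d\n' 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 > /tmp/book1.csv
# Rows 2-3 of the file, first three columns of each row.
sed -n '2,3p' /tmp/book1.csv | cut -d, -f1-3
```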
Figure 3: Spreadsheet view in w3m on a character based console
GLOSSARY
Freshmeat: A major resource for current projects in the Open Source area, at http://freshmeat.net/ on the Web.
HTML: “HyperText Markup Language”, the markup language originally developed by CERN for World Wide Web pages. So-called tags mark specific text passages as titles, lists, tables and so on.
make: A program for organizing source code compilation. The make configuration file (Makefile) can contain information on dependencies for the individual program modules.
PATH: This variable consists of a list of directories separated by colons. The shell will search these directories for any commands typed without a path. Thus, top will be located in /usr/bin/top.
Standard input, standard output: A large number of command line programs allow you to leave out the name of the input file. In this case the program reads from standard input, which is normally the keyboard. If you leave out the name of the standard output file, many programs will default to standard output, that is, display the output on your terminal. Output can be redirected into a file using the > character or to another command using the | character.
DeskTOPia
Thinking back to my long buried Windows roots, I still get a headache when I think about that OS’s reputed user-friendliness. Installing new software meant that all kinds of file suffixes were suddenly mapped to the new program, ignoring the configuration work I had previously put in. I no longer have to suffer this torment, thanks to Linux – nothing moves on my machine nowadays without me knowing about it. And I was prepared to pay the price for this peace of mind and tackle that “one-off” configuration work. Now, unfortunately there seems to be a hitch with that “one-off” bit. Although mapping file types has always been a bit scary, if I need to tell five applications what to do with a certain file type, I have to configure all five individually. And that to my mind is four times as much work as I should be doing. Additionally, most programs insist on a one-to-one relationship between file type and application. But what happens if I just want to view an image instead of working on it (as would normally be the case)? Or what happens if I want to use a different browser to view an HTML file? The solutions tend to break at this point.
Jo’s Alternative Desktop: Launcher
The Right Type

Tired of telling hordes of applications which third-party programs to use when opening various file types? Launcher can help you solve this problem. BY JOACHIM MOSKALEWSKI
Simple & effective

This is a problem that affected most of us, but not someone as clever as Ethen Gold, who hit on the idea of mapping all the file types in all these programs to a single application. This tool would then be responsible for deciding what to open a certain file type with. This still means careful configuration work, but restricts that work to a single site. And you also have a way of enhancing applications that previously insisted on a one-to-one relationship between file type and the application. The program enhancement kit you are looking for is available both on the Web (at http://thaumaturgy.net/~etgold/software/launcher/) and on the CD with the subscription issue, and there it is called Launcher. However, in contrast to MS
DESKTOPIA
Only you can decide how your desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colorful, viewers and pretty toys.
Windows, it does not use file suffixes to identify file types but MIME types, which are more common on the Internet and in Linux territory. The applications themselves are responsible for discovering the MIME types of the files. To do so, they either use their own test procedures or rely on the file command. This has the advantage that files are still recognized after renaming them. Of course it still makes sense to keep the .html suffix for an HTML file, but it is nice to know that it is not entirely necessary.
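The file command’s content-based detection is easy to observe (this sketch assumes the file utility is installed, which is the case on practically every Linux system): the misleading suffix below does not fool it.

```shell
# An HTML file saved under a suffix that gives nothing away.
printf '<html><head><title>t</title></head><body>hi</body></html>\n' > /tmp/page.dat
# file inspects the content, not the name, when reporting the MIME type.
file -b --mime-type /tmp/page.dat
```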
Installation

Be bold – there are no strings attached to the launcher-086.tar.gz archive. You need the Tcl/Tk interpreter, wish, which you will no doubt have installed previously,
and the file and make commands – all of which are just part and parcel of your standard Linux machine’s setup. Start by expanding the archive using the following command:

tar -xvzf launcher-086.tar.gz
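What those tar flags do can be tried safely on a scratch archive (the names below are invented; this is not the real launcher archive): c creates, t lists, x extracts, z handles the gzip compression and f names the archive file.

```shell
# Build a small gzipped archive, list its contents, then unpack it.
mkdir -p /tmp/demo-src && printf 'hi\n' > /tmp/demo-src/README
tar -czf /tmp/demo.tar.gz -C /tmp demo-src
tar -tzf /tmp/demo.tar.gz                  # list without extracting
tar -xzf /tmp/demo.tar.gz -C /tmp && cat /tmp/demo-src/README
```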
To perform the actual installation (using make install) you need temporary root privileges; use su to acquire them. Make sure you drop your root privileges using exit after installation. You can then supply a peppering of configuration files from your user account:

make installfileslocal
This copies the required files to your home directory. It is possible to create a
system wide configuration (see the INSTALL file in the archive for details), but not for individual users. After completing the installation, you will want to define the launcher as the default applink for all of your programs. The quickest way to do this is to use wildcards (normally a global wildcard * will represent any string), accounting for all file types with a single entry. However, there is an issue here: Some programs require individual syntax. Thus an entry without a file type accounts for all file types in the XFce file manager, XFTree: a single line containing () (launcher) () in the ~/.xfce/xtree.reg is all you need to hand responsibility for all file types to the launcher.

Just ask

Click on a file in a file manager that you have configured as described to start the launcher, which will then locate and start an appropriate application based on its MIME type. For multiple mappings the launcher will prompt you to decide which application should be started (Figure 1). The factory defaults may not comply with your personal preferences. To solve this, launcher offers a fairly substantial configuration tool, which you can access via the launcherconfig command. You can just as easily right click on the dialog box to choose between multiple application mappings. So, if you happen to disagree with the selection, just right click.

Rules and regulations

The configuration tool offers you some general options (shown in Figure 2, Launcher Config). Three buttons are available for clicking: all affects how multiple selections are handled – if this button is active, the whole file list is passed to your program en bloc. If you deactivate all, the launcher will start the appropriate program for each file. You can use the default button to suppress the prompt asking you to choose one of the available alternatives; the launcher will then automatically start the default application without asking. The last button in this bunch – nowait – is applicable when the all option has been deactivated: In this case the launcher does not wait for the current process to terminate before passing the next file to an application, and proceeds with the next file, launching the next program as applicable.

Figure 2: Launcher Config

Announcements

If you now jump from the Launcher Options to the Handler Definitions, you arrive at a configuration dialog box that you can use to edit your Handlers. The dialog box contains a list of applications that you may assign to the current file type (Figure 3). Working with this dialog box is not entirely intuitive. If you want to add a new application, you first have to click on Set, then enter a mnemonic name for your application (XEmacs), then the real program name (xemacs) and finally the syntax (xemacs %s). You can use %s instead of a file name and %d instead of a directory. You will need to click on Add before your entry is added to the list.
Figure 1: Launcher between Gimp and QIV
Mapped
That still leaves us with the Type/Handler Mappings (Figure 4), where you can put the applications you have previously defined to their required tasks. If you select several applications for a single MIME type, you should note the default and possibly modify the setting. This does not mean that the default application will appear first in future operations; the option is only applied if you select default in the general options. Watch out for the stumbling block while defining MIME types. Let us assume that you have an image file in PNG format called graphic.png. To be consistent with the presets in the launcher configuration (e.g. image/gif) you would expect the MIME type to be image/png. But instead the launcher will
Figure 3: Handler Configuration
Figure 4: Who’s playing whom?
present you with the whole handler list – the new MIME type would seem to be totally ineffective. The command below should help on the subject:

jo@planet ~> launcher U
--showtypes graphic.png
graphic.png: image/x-png
So now you know that you need to define the image/x-png MIME type for PNG images. It is hard to find fault with this extremely practical tool, but the current version still cannot process MIME type definitions containing wildcards. That means you will not be able to configure all image files simply by defining an image/* MIME type, even though the configuration file may suggest something different. ■
GLOSSARY

MIME: The abbreviation for “Multipurpose Internet Mail Extensions”. If an attachment is sent by mail, the mail program states the file type. After evaluating the MIME header, the receiving mail application does not need to perform content analysis to know whether a text document, a sound file, an archive or something completely different is coming your way. MIME types are additionally used for the dialog between web servers and browsers, and are often found when Linux applications need to interface with third-party programs.
www.linux-magazine.com September 2002
COMMUNITY
Brave GNU World
The monthly GNU Column
Brave GNU World In this monthly column we bring you the news from within the GNU project. In this issue we will look at ways to beat oppressive regimes, FSF Europe's activities, developing with BASIC, auditable accounting and system monitoring. BY GEORG C. F. GREVE
Since we seemed to have concentrated on creative methods of wasting our time in the last issue, this issue we will be dealing with some more work-related aspects. We will start with a groupware solution.
Minkowsky

Stefan Kamphausen, author of the Brave GNU World logo, has pointed out to me a groupware solution by his colleague Rüdiger Götz. The program has a calendar, address and task management and is therefore dedicated to managing space and time. Following the humour of physicists, as this program manages “space-time” it has been named after the Minkowski diagrams used in the Special Theory of Relativity and is therefore called Minkowsky. [5] The program started life at the end of 2000, when the company employing both Stefan and Rüdiger was looking for a
groupware solution. Faced only with the alternative of Outlook, Rüdiger decided to write Minkowsky, and the company has been successfully using it since February 2001. Minkowsky allows access rights to be fine-tuned by the administrator in order to give secretaries or other co-workers in related groups the possibility to access appointments, and to allow for better co-ordination. From experience, it is the increase in co-ordination and communication within a group that makes Minkowsky special. Minkowsky is oriented towards groups working on a single LAN. It is based on C++/C with Tcl/Tk/Tix and does not require an additional database, which can be an advantage in some situations. Being Free Software under the GNU General Public License, Minkowsky also secures the independence of companies using it in
Figure 1: Minkowsky showing a group view and single calendars
this rather crucial area. Following the first public release in May 2001, the release process is now approaching the first stable version. Further plans include the stabilisation of communication between client and server, PDA synchronisation, a port to Mac OS X (Minkowsky has so far only been developed on GNU/Linux) and of course the search for and fixing of bugs. Rüdiger would welcome help with a more stable communication layer, an English translation and synchronisation with Palm handhelds.
Webminstats

Webminstats [6] allows monitoring of multiple relevant system parameters through a web browser. Since browsers are usually available on all platforms, such projects are usually very popular with administrators of (heterogeneous) networks. David Bouius began working on Webminstats in August 2001. Towards the end of 2001 he started receiving support from Eric Gerbier, who took over the project when David lacked the time to maintain it. According to Eric Gerbier, who answered the Brave GNU World questionnaire, Webminstats offers several advantages over similar projects. It is, for instance, much faster than a classic of this genre, MRTG [7], because unlike Webminstats, MRTG also creates graphs that are usually not needed. As the name implies, Webminstats is based on the Webmin [8] project, which allows web-based administration of Unix systems. This allows Webminstats to share Webmin's access control features and also makes it browser-configurable. The Webminstats backend is based on the RRDTool (Round Robin Database
Figure 2: Webminstats showing a daily CPU usage chart.
early this year, Pascal Conrad has started to close one of the most important gaps in Free Software: professional analytical accounting. Because his last employer did not see the benefits of, or have the appropriate understanding and appreciation for, Free Software and GNU/Linux, Pascal decided to provide the LinCompta project to the community under the terms of the GNU General Public License. During its brief history, the project has already made remarkable progress – it has a very usable graphical interface that most users find easy to understand. Programming languages used in this project include C with GTK+/GNOME support and Pascal. It uses MySQL as the backing database. The project currently lacks a way to print data, and only French is currently supported. The highest priority on the task list is translation to both English and Russian. Any help with this aspect or with the web page will be gratefully accepted. Should the project see enough interest, Pascal plans to expand it with other aspects of business accounting, so if you would like to see such projects come along, you should probably try to support Pascal by testing, translating or programming.

Figure 3: Webminstats showing the ftp/http module

Figure 4: French accounting with LinCompta

GNUnet

Many months ago, Brave GNU World presented some background on the FreeNET project by Ian Clarke, which had the goal of creating a decentralised network that would make central control and censoring impossible and also allow data to “wander” through the network. Faced with the increasing attempts to censor the internet, file sharing services like Napster have a problem: their reliance on a central reference point. The idea of such peer-to-peer networks is common knowledge today. With GNUnet [11], a project by students of Purdue University, such a network has now also become part of the GNU Project. Let me try to give a short introduction for those who have not yet come into
Tool) [9] by Tobi Oetiker, which provides a faster and more flexible re-implementation of the storage and display capabilities of the already mentioned MRTG project. Since it does not provide data-collection and frontend features, RRDTool is not a replacement for MRTG; MRTG can, however, use it as its database. These database capabilities are also used by Webminstats. For collecting data, Webminstats provides 9 modules, which allow monitoring of CPU load, disk space, IRQs, internet (FTP/HTTP), mail (sendmail, POP, IMAP), memory, processes and the number of users, with a time resolution of one minute. With the help of Webminstats, Eric has been able to find and fix a problem with his web server. By knowing the exact time of the crash, and with the user module providing information about a new logon immediately before the crash, he was able to narrow down the possible problems, which in turn made it much easier to find that specific bug. Webminstats was written in Perl and the Bash shell and is released as Free Software under the GNU General Public License. New modules will expand the functionality with firewall monitoring capabilities, and it is planned to customise it for other Unix systems. On top of this, the team has also considered adding “alarm messages”. Help is requested in the form of attractive icons for modules, customisations for other languages and operating systems, as well as any new features required.
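The round-robin principle that gives RRDTool its name is simple: a fixed number of slots are reused, so the database never grows and the oldest samples are silently overwritten. A toy sketch of the idea (an illustration of the principle only, not RRDTool's actual storage format):

```python
from collections import deque

# A toy round-robin store in the spirit of RRDTool: a fixed number of
# slots, so disk usage never grows and old samples are overwritten.
# (Illustrative only -- RRDTool's real on-disk format differs.)
class RoundRobinStore:
    def __init__(self, slots):
        self.samples = deque(maxlen=slots)

    def update(self, value):
        self.samples.append(value)  # oldest sample drops out when full

    def average(self):
        return sum(self.samples) / len(self.samples)

store = RoundRobinStore(slots=3)
for load in [0.5, 1.2, 0.9, 2.0]:   # four updates into three slots
    store.update(load)
print(list(store.samples))  # -> [1.2, 0.9, 2.0] -- the 0.5 was overwritten
```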
LinCompta

By beginning work on the LinCompta [10] project
contact with such a system. Most data is stationary, and with the URL it can be mapped to a certain host. This allows censoring – by blocking access to that one computer – and also reveals information about the provider of that content. This is problematic in countries such as China, where access to any media that is not controlled by the Chinese government is restricted and providers of critical information have to expect sanctions. Networks like FreeNET or GNUnet undermine this by making extensive use of cryptography and adding anonymity, which protects the provider and makes a physical localisation of any unwanted information impossible. You would use such networks whenever privacy is more important than efficiency. Unlike other anonymous networks, GNUnet allows a form of accounting, which ensures that nodes providing more to the network will receive better connectivity. Exclusive consumption (“freeloading”) is possible, but it has to take whatever capacity is “left over.”

Figure 5: GNUnet running a search for GPL and finding the document

As we mentioned before, the GNUnet project originates at Purdue University, where it began as a cryptography project of some students. By the way: their biggest problem was to convince their professor that this project had to do with cryptography. They are now giving their first appearances at crypto-conferences, and GNUnet is in beta test state, so that should not be a problem anymore. The authors see the advantages of anonymity as a feature of the GNUnet project, which they believe to be more effective than the methods employed in other networks. They are also proud of their “deniability”, which provides protection against black sheep within the network. And finally, GNUnet allows searching for “natural” strings, instead of the random hashcodes used by FreeNET, for instance. As far as they know, GNUnet is the only entirely decentralised network offering these capabilities. The authors see its biggest weakness in the lack of enthusiasm to program a GUI. The currently available GTK+ based GUI works, but it is not very comfortable. Further development is concentrating on porting it to more platforms. It runs on
Figure 6: Gambas is not just Super Basic
GNU/Linux and BSD, and work is being done on versions for Solaris/OS10 and Win32. Plans for the future include transport mechanisms other than UDP. They have thought about using steganography to hide data in pictures in order to bring the network through the Chinese wall – sorry – firewall. Also, expansion of the network beyond filesharing – to transport email, for instance – might be possible. Help is very welcome in the form of more nodes running GNUnet, help with the Win32 port, documentation, web pages, creation of graphics and so on.
Gambas

Gambas [12] is an acronym for “Gambas Almost Means BASic”, which gives us a hint about the type of the project: it is a graphical development environment based on a BASIC interpreter with object-oriented extensions. Benoit Minisini, the author of this project, drew inspiration from Java and Visual Basic. The project hopes to create an environment in which graphical programs can be assembled efficiently and with a shallow learning curve. Benoit found Java too complex and Visual Basic too buggy for this task; also, Visual Basic only runs under Microsoft Windows. He also wanted a language that would secure freedom in terms of choice of desktop (KDE or GNOME) as well as license. Therefore he published Gambas under the GNU General Public License. The project has seen about three years of development, using C for the interpreter and compiler, C++ for the Qt bindings and Gambas itself for the graphical development environment. Benoit aims for the best syntactic coherence and compactness possible; building the interpreter without the Qt component should save about 200k in size. This should make it relatively easy to port Gambas to embedded environments. Thanks to its modular structure, the currently used Qt-based GUI component can easily be replaced by one based on GTK+. Further targets are the creation of a good debugger as well as a database component. It will probably take some time until Gambas is truly a complete programming language/environment, but it is certainly possible to speed up the process through
deal with a new program and new syntax. Those who have not yet found their way out of Make are given another great opportunity here.
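The fingerprint detection that Cook uses to avoid unnecessary recompilations can be sketched as follows: rebuild only when the content hash of a source file changes, not merely its timestamp. This is illustrative Python, not Cook's actual machinery; md5 is simply a convenient hash for the sketch.

```python
import hashlib

# Sketch of fingerprint-based rebuild avoidance: a target is rebuilt
# only when the content hash changes, not when the timestamp does.
# (Illustrative only -- Cook's real fingerprint mechanism differs.)
def fingerprint(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def needs_rebuild(source: bytes, last_fingerprint: str) -> bool:
    return fingerprint(source) != last_fingerprint

src = b"int main(void) { return 0; }\n"
fp = fingerprint(src)
print(needs_rebuild(src, fp))                       # -> False: content unchanged
print(needs_rebuild(src + b"/* comment */\n", fp))  # -> True: content changed
```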
INFO

[1] Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
[2] Home page of the GNU Project: http://www.gnu.org/
[3] Home page of Georg's Brave GNU World: http://brave-gnu-world.org
Free Software for Europe
[5] Minkowsky home page http://www.r-goetz.de/minkowsky/en/
[6] Webminstats home page http://webminstats.sourceforge.net
[7] Multi Router Traffic Grapher (MRTG) home page http://people.ee.ethz.ch/~oetiker/webtools/mrtg/mrtg.html
[8] Webmin home page http://www.webmin.com
[9] “Round Robin Database” (RRD) Tool home page http://www.caida.org/tools/utilities/rrdtool/
[10] LinCompta home page http://lincompta.tuxfamily.org
[11] GNUnet home page http://www.gnu.org/software/GNUnet/
[12] Gambas home page http://gambas.sourceforge.net
[13] GNU Make home page http://www.gnu.org/software/make/
[14] Cook home page http://www.canb.auug.org.au/~millerp/cook/
[15] Free Software Foundation Europe http://fsfeurope.org
[16] Recommendation by the FSF Europe for the 6th framework programme http://fsfeurope.org/documents/fp6/
help. What Benoit needs most right now are people trying Gambas in order to give him feedback. Once the component interface has been finished, Benoit plans to write some proper documentation for it, so adding Gambas components will be an easy task for everyone. When replying to the Brave GNU World questionnaire, Benoit added the following little story that he would like to share with the Brave GNU World readers: One day, when Benoit tried reinstalling Windows, he decided to reformat the partition under MS-DOS. Unfortunately the drive letters were inverted between Windows and MS-DOS, so he ended up deleting the wrong hard drive – which of course he did not have backups of. Having involuntarily gained 30GB of free disk space, he then attempted to fiddle around with another, recently released, proprietary operating system, which really did not appeal to him. So he thought: why not format that other hard disk from here? One mouse-click later his GNU/Linux “/home” partition no longer existed. Of course Gambas was on this partition, and of course there were no backups. By sheer luck there was still a one-month-old copy of Gambas on the Windows partition that he had tried to format initially. His advice to all readers: save important things everywhere. Be paranoid!
Even though the importance of backups is certainly widely known in theory, this little experience report may trigger some readers to back up the last three years of work. Of course one could also think that you should simply keep away from proprietary operating systems. :-)
Cook

Brave GNU World in issue 20 illustrated the weaknesses of the Make program [13]; this issue will introduce another alternative: Cook [14] by Peter Miller. Peter Miller, who is also the author of the Aegis project, began working on a Make replacement as early as 1988. He chose C as the programming language, and Cook is published as Free Software under the GNU General Public License. Advantages of Cook in comparison with Make include the possibility to do parallel builds; recipes can have hostnames connected to them in order to run them on specific machines; dependencies can be resolved; recipes have optional conditions to fine-tune their execution; and much more. Those who read the features about GNU Cons and SCons will be interested to hear that Cook also supports detection of modifications by fingerprints, to avoid unnecessary recompilations. The transition is made easier with a make2cook program, although this of course does not remove the necessity to
At the end of April, the FSF Europe [15] issued a recommendation for the 6th European Community framework programme, which has kept me pretty busy as the president of the FSF Europe. On the grounds of a very lively and strong Free Software developer and user community in Europe, the FSF Europe suggests that the European Union should capitalise on this and place an emphasis on Free Software in all aspects of the 6th framework programme, and also make explicit calls for Free Software in some areas. Some reasons for this recommendation were increased sustainability for public funds, securing the democratic tradition in Europe, strengthening regional and trans-regional markets, independence from American oligopolies and intensifying European research. For these reasons the recommendation is supported by companies and educational establishments throughout Europe. On the list of supporting parties you will find, among others: Bull (France), the TZi of the University of Bremen (Germany), the Centro Tempo Reale (Italy), MandrakeSoft (France), the FFS (Austria), Ingate Systems AB (Sweden) and Eighth Layer Limited (UK). The complete recommendation and list of supporting parties can be found on the FSF Europe home page. [16]
Closing Enough Brave GNU World for this month, I hope to have given some inspiration and as usual hope for many ideas, suggestions, comments and project introductions at the usual address. ■
THE AUTHOR
[4] “We run GNU” initiative http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
Georg C. F. Greve, Dipl.-Phys., has been using free software for many years. He was an early adopter of GNU/Linux. After becoming active in the GNU Project he formed the Free Software Foundation Europe, of which he is the current president. More information can be found at http://www.gnuhh.org.
COMMUNITY
Free World
The monthly BSD column
Free World

Absolute BSD

Michael W. Lucas has written the Absolute BSD book. Aimed at answering all of a sysadmin's questions, from installation to performance tuning, it goes into depth on the management of BSD-based servers. A tight focus on network management and troubleshooting gives needed depth to the available BSD documentation. Absolute BSD goes beyond explaining what to type to make things work. Instead it takes the reader into why and how things work, and takes BSD system administrators that much-needed next step forward. And as no technical book is ever perfect, there is a website set up in case any errata are required. For more details see www.AbsoluteBSD.com.
Welcome to our monthly Free World column where we explore what is happening in the world of Free software beyond the GNU/Linux horizons. This month we will look into security and releases. BY JANET ROEBUCK
Virtually there

Running OpenBSD on VirtualPC is taking some steps closer to becoming a reality. Peter Bartoli is continuing to work on maintaining the OpenBSD components for installation. Due to the restricted memory available, a slimline kernel which Peter has produced is occasionally needed, along with some modified drivers that make use of all the virtual hardware. The system will now install on VirtualPC, which opens up the possibilities for its use. These could range from sandbox testing of security modules to the joy of just having multiple copies running. For more information see slagheap.net/openbsd.
More graphical fun

The G.O.B.I.E project is aimed at adding a graphical installation to the OpenBSD OS. The project has been developed in the spirit of OpenBSD, which means that the installation stays as close as possible to the text-based one. The G.O.B.I.E team wishes to add some value to the product by developing installation modules for known servers such as Bind, Sendmail, Inn, Apache, etc. The team is composed of 6 students from a French computer school. The G.O.B.I.E. project is developed using C and GTK. It uses the XFree86 server version 4.2.0. They are re-coding most of the tools, such as xinit and fdisk. The XServer is loaded into a RamDisk and then uses a generic configuration file offering a screen resolution of 800x600 at 8 bits. The first release will be available for download at www.gobie.net.

Figure 1: Adding users to OpenBSD with GOBIE
NetBSD support for new Opteron processor

Wasabi Systems, the leading provider of the NetBSD operating system, has announced NetBSD support for AMD's upcoming eighth-generation AMD Opteron and AMD Athlon processors, the culmination of a development process that began with Wasabi's port to a simulator of AMD's new x86-64 architecture one year ago. NetBSD is a powerful open source operating system derived
from BSD Unix. NetBSD is used in place of many high performance Unix applications on the x86 platform. With the x86-64 architecture, customers will soon have an even higher performance 64-bit migration path at an attractive price. NetBSD for x86-64 runs both 32- and 64-bit binaries. Wasabi plans to release the software, including support for SMP, binary compatibility with both Linux and Solaris/x86, and enhanced datacenter functionality, in time for the commercial release of x86-64 hardware from AMD. With systems based on future 64-bit AMD Opteron processors, Unix customers will be able to benefit from enhanced 32-bit performance of their existing NetBSD applications, with the option of deploying applications based on the x86-64 architecture when available.
Hiding the problem

On BugTraq it was announced that Symantec have acquired SecurityFocus. While this is good news for SecurityFocus, it does raise some interesting questions about security issues. Symantec aim to keep both the BugTraq and the SecurityFocus mailing lists open. In their own FAQ they say: “We observe a 30-day grace period after the notification of a security advisory to give users an opportunity to apply the patch. During this grace period, we provide our customers significant information about the vulnerability and the fix, but not step-by-step instructions for exploiting the vulnerability. We do not provide detailed exploit code or provide samples of malicious code except to other trusted security researchers and in a secured manner.” This means that you cannot test your systems using exploit code pulled off the SecurityFocus site unless you are a member of their ‘trusted security partners’. In effect they have closed down the vulnerabilities from the community. Both VulnWatch (www.vulnwatch.org) and PacketStorm (packetstorm.decepticons.org) still remain, giving full disclosure.
Twice the power Chuck Silvers has reported success at implementing dual processor usage on the NetBSD Mac-PPC port. The work was carried out and completed on a dual G4 processor Apple.
NetBSD release

The NetBSD Project has announced on OSNews.com that Release 1.5.3 of the NetBSD operating system is available. The future 1.6 release is also currently in beta development. NetBSD 1.5.3 is a maintenance release for users of NetBSD 1.5.2, 1.5.1, 1.5 and earlier releases, which provides security and performance updates relative to 1.5.2. Some new device drivers have also been added.
_switch.c and proc.h, plus a one-line change to kern_exit.c. The sysctl variable kern.scheduler controls the scheduler in use; you can switch between the two schedulers by changing the value of this sysctl variable. The “old” scheduler (Feedback Priority) should be as robust and stable as always, while there are still a few things to clean up in the Proportional Share scheduler, such as the case of SMP support in the kernel.
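The idea behind a proportional-share scheduler can be illustrated with a toy ticket model: each runnable task holds a number of shares, and over many scheduling decisions each task receives CPU time roughly in proportion to its shares. This is a deliberately simplified userland sketch; the in-kernel scheduler works quite differently.

```python
# Toy illustration of proportional-share scheduling: each task holds
# a share count, and over many decisions each task should win CPU time
# in proportion to its shares.  (Purely illustrative -- not the
# algorithm used in the kernel patch discussed above.)
def pick_next(tasks, ticket):
    """tasks: list of (name, shares); ticket: 0 <= ticket < total shares."""
    for name, shares in tasks:
        if ticket < shares:
            return name
        ticket -= shares
    raise ValueError("ticket out of range")

tasks = [("editor", 1), ("compiler", 3)]
total = sum(shares for _, shares in tasks)   # 4 tickets in total
wins = [pick_next(tasks, t % total) for t in range(8)]
print(wins.count("compiler"))  # -> 6: three quarters of 8 decisions
```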
Package Information

At the beginning of August the number of packages in the NetBSD Package Collection stood at a high of 3076. FreeBSD ports for the same period reached 7365. The collection can be found at www.netbsd.org/Documentation/software/packages.html#searching_pkgsrc. The most recently added packages are: Asnap (screen capture), ssh2 (secure shell), geoslab (TrueType fonts from Bitstream), Xvattr (modify XV attributes), Pscan (security scanner), psi (Jabber client), neon (HTTP and WebDAV client library), matchbox (window manager for small displays) and acroread (viewing PDF docs v5.06). The most recently updated packages are: Vice (C64 emulator), gdb (debugger), gmplayer (MPEG 1/2/4 video decoder), uvscan (DOS virus scanner), squidGuard (filter for squid), pth (portable thread library), zebra (routing daemon), nmap (scanner), apache6 (web server) and isect (middleware). The three packages uvscan-dat, buildlink-x11 and pth-syscall have all been retired.

No Logo

The FreeBSD Foundation logo contest has ended. It received a total of 106 entries. Unfortunately, there was no entry that the foundation thought spoke to them as the logo to use. As a result the FreeBSD Foundation will remain logoless.
Telling the truth

The OpenBSD site has updated its tagline. On www.openbsd.org, it now states “One remote hole in the default install in nearly 6 years!”. This update came after the OpenSSH hole was found. It is still a very impressive record, with the full list of patches and errata to be found at www.openbsd.org/errata.html.
eZine news

DaemonNews have released their August eZine for BSD users. The free eZine can be found at ezine.daemonnews.org/200208. It includes the regular columns with the Answerman and a review of VicFUG 2002 in Australia. The HOWTO article is about backing up FreeBSD with SMBFS commands to transport the data to network shares. ■
New scheduler

Luigi Rizzo has released the first version of the new Proportional Share scheduler at info.iet.unipi.it/~luigi/ps_sched.20020719a.diff. He has now tested it on a diskless system running full X and applications with no harmful side effects. The only 3 files modified are kern_synch.c, kern

Figure 2: The latest eZine from Daemon News
Subscription CD
Subscription Disc
LINUX MAGAZINE
Subscription CD Issue 23 September 2002
The CD ROM with your subscription issue contains all the software listed below, saving you hours of searching and downloading time. On this month’s subscription CD ROM we start with a full distribution and a professional firewall before we move
Highlights
Plus
Crux 0.9.3 Small lightweight i686-optimized distribution
Development Tools
Games
System Utilities
Networking Software
SuSE Firewall on CD (unsupported evaluation version) – Full bootable CD with SuSE's firewall product
Apache 1.3.19/2.0.39 – Tons of modules for every Apache extension
GCC 3.1 – Latest stable release, along with documentation
on to the latest and most requested programs and utilities.
Nmap 3.0 Latest release of the network scanning software
Crux 0.9.3
Games
An ISO image of a complete small, lightweight i686-optimized distribution. CRUX is targeted at experienced Linux users. The primary focus of this distribution is “keep it simple”, which is reflected in a simple tar.gz-based package system, BSD-style initscripts, and a relatively small collection of trimmed packages. The secondary focus is the utilization of new Linux features and recent tools and libraries. CRUX also has a ports system which makes it easy to install and upgrade applications.
Auriferous – Find the gold in the caves
Entombed – Maze game
Fly – 3D space combat
Netwalk – Connect the terminals to the server
Trailblazer – Move the ball to the end of the path
TuxPaint – Drawing for the young
Multimedia

Apollo – Mpg123 front-end
GnuSound – Sound editor
GQView – Image viewer
Kover – Make CD inlays
MagicPoint – Presentation software
WhiteDune – Virtual reality model viewer
SuSE Firewall on CD Protect yourself, your IT infrastructure, and your mission-critical data! A fully bootable CD ROM with the last release of SuSE’s professional firewall product. This is an effective and reliable security service for your system. The SuSE Linux Firewall checks, monitors, analyzes, and logs ongoing data transfer, thereby providing a maximum level of security. Prevent attacks before they cause unexpected damage.
Apache 1.3.19/2.0.39 The Web-Server, which drives the Internet. Tons of modules for every extension you might need. Apache is a powerful, flexible, HTTP/1.1 compliant web server, which implements the latest protocols, including HTTP/1.1 (RFC2616). It is highly configurable and extensible with third-party modules and can be customised by writing 'modules' using the Apache module API.
Net

Cherrypy – Develop web services
fft – Display the route packets take
Fnord – Tiny web server
Proftpd – Secure FTP daemon
Icrzo – A Swiss army knife for web developers
Pabache – Perl web server
Security

Ettercap – LAN sniffer
MailScanner – E-mail virus scanner
TurtleFirewall – Firewall
Virge – C mail scanner
Zorp – Proxy firewall suite
Fiaif – Intelligent firewall
System

LinuxConf – Admin tool
Leka – Rescue floppy
Dar – Disk archive
DateShift – Change the system clock
e2undel – Recover deleted data
Webmin – Web based admin
GCC 3.1
Development
The latest stable release of the free compiler, along with the documentation. “GCC” is the common shorthand term for the GNU Compiler Collection. Compilers for several languages – C, C++, Objective-C, Ada, FORTRAN, and Java – are integrated, and GCC can compile programs written in any of these languages.
CRM114 – Controllable regex mutilator
Gambas – Almost BASIC
J – Java text editor
Leakbug – Memory leak tracer
Poe – Perl Object Environment
Nmap 3.0 The latest stable release of the network scanning software. Reworked in many places and offering more functions than ever. Network Mapper is an open source utility for network exploration or security auditing. It was designed to rapidly scan large networks, although it works fine against just a single host. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (ports) they are offering, what operating system they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.
Subscribe & Save

Save yourself hours of download time in the future with the Linux Magazine subscription CD! Each subscription copy of the magazine includes a CD like the one described above free of charge. In addition, a subscription will save you over 16% compared to the cover price, and it ensures that you'll get advanced Linux know-how delivered to your door every month. Subscribe to Linux Magazine today! Order Online: www.linux-magazine.com/CustomerService/Subscribe Use the order form between pages 66 and 67 in this magazine or download it from: www.linux-magazine.com/CustomerService/Orderform