Welcome
COMMENT
Fireworks

Dear Linux Magazine Reader,

November is here and fireworks abound, not just in organized displays but in the world of Linux and computing.

The Fritz chip is causing one of the biggest bangs. Named after the US senator Fritz Hollings of South Carolina, it is already on sale in the form of Atmel's secure processor, the AT90SP0801. This is Palladium in action, and it is still one to watch as the dominant market players try to lock out all free and open source technologies.

The Indian Times reports that the Indian Department of Information Technology is in talks with IBM and HCL over setting Linux as the standard within all of the sub-continent's educational institutions. While this may just be a ploy to gain a better Microsoft licensing deal for the Department of Education, if it is true and they follow through, Linux could become as dominant there as it is in China. Having the majority of Asia's software and hardware engineers developing Linux could only lead to more innovation and better products for the community as a whole. Development in China is now so advanced that they can aim for other markets: RedFlag Linux of China is now aiming at partnering with multimedia providers to produce set-top boxes and car voice systems. Soon – Linux in an appliance near you.

On a lighter note, Transgaming has not made much noise about its ongoing achievements. As you may remember, Transgaming is developing a version of Wine that aims at getting all MS Windows games running on Linux. By subscribing you get to vote each month on the next improvements that the team will work on, which certainly caters for their paying users. Now at version 2.2, the number of working games is advancing at a rapid pace. It is quickly turning one of my Linux boxes into a dedicated games console, which is very embarrassing when it is supposed to be for serious development.

Knoppix 3.1 is certainly a bright light. This single-CD, Debian-based distribution is available either by download, if you have the bandwidth, or from one of the usual vendors. Put the CD in the machine and reboot: running from the CD, it detects all of your hardware and leaves you running a KDE desktop so quickly and without any intervention that you will wonder why all operating systems cannot do this. Once you get over the shock of the easy setup you can start to explore all the software supplied on the single disk. My favourite use is as a check for hardware: throw in the disk, and if it works you know it is your own configuration files that are wrong; if it does not work then the chances are it is the hardware and not the setup. It is also great for copying configuration files for your own systems.

Happy Hacking,

John Southern, Editor

We pride ourselves on the origins of our publication, which come from the early days of the Linux revolution. Our sister publication in Germany, founded in 1994, was the first Linux magazine in Europe. Since then, our network and expertise has grown and expanded with the Linux community around the world. As a reader of Linux Magazine, you are joining an information network that is dedicated to distributing knowledge and technical expertise. We're not simply reporting on the Linux and Open Source movement, we're part of it.

Linux New Media Awards 2002

An international jury has recently chosen the winners of the Linux New Media Awards 2002. The winners are:
• Mobile Devices – Sharp Zaurus
• Network Hardware – Axiom AX 6113
• Hardware – Pioneer DVR-104
• Distributions – Debian
• Development Software – GCC
• Office Packages – OpenOffice
• Internet Applications – Mozilla
• Databases – PostgreSQL
• Newcomer of the Year – Gentoo Linux
• Companies – IBM
For the full story, see page 88.
NEWS
Software
Software News

Listen to the Music!

Version 1.3 of Gnomeradio, the GTK+ based FM radio tuner, is out. It works with every radio card that is supported by video4linux (http://www.exploits.org/v4l/). You can use remote controls via the optional LIRC (Linux Infrared Remote Control, http://www.lirc.org/) and record radio as wav or mp3. The user interface is available in several languages including English, Danish, German, French, Spanish, and Italian. The main changes in the new version are the use of GConf instead of a gnome1-style configuration file and additional translations into Belarusian, Czech, and Brazilian Portuguese. ■ http://mfcn.ilo.de/gnomeradio
KDevelop – Picture perfect

The KDevelop Project was founded in 1998 to build an Integrated Development Environment. Available under the GPL, it supports KDE/Qt, GNOME and C++ projects. Now the team have released the first alpha release of the 3.0 version, codenamed "Gideon". "This version features a rewritten code base utilizing plug-ins", claim the people behind the scenes. New in this release is support for other languages such as Java and C. Other highlights are an application wizard for easy creation of KDE 2&3, Qt 2&3, GNOME, and terminal C/C++ projects. An internal debugger, an HTML-based help system and CVS support complete the picture. KDE's own news site http://dot.kde.org writes: "Gideon brings out the best in what an Integrated Development Environment should be". The new KDevelop requires KDE greater than 3.0.2 (or 3.1) and Qt greater than 3.0.3 (or 3.1) and can be downloaded from the project's homepage. ■ http://www.kdevelop.org
GIMP 1.3.9 released

The GIMP ("GNU Image Manipulation Program") project has announced another release in their development tree, GIMP 1.3.9. It introduces mostly minor bugfixes and some new functions. But be warned: the GIMP 1.3 series is for developers only and is not intended for end users. The development team say that all work on GIMP 1.4 (the future stable version) is done on the 1.3 series and that it is therefore mainly made for developers who want to work on the software. If you want to install a working version of GIMP, you should stay with the latest stable release, 1.2.3. The latest release is available via anonymous CVS. ■ http://www.gimp.org/devel_cvs.html

The Chameleon strikes back
Geeko, the green SuSE mascot, starts its world conquest tour with the new 8.1 release. The latest distribution comes with KDE 3.0.3 and the new GNOME 2.0 desktop. OpenOffice 1.0.1 offers TrueType support and improved import filters – the right way to convince people that an office suite can provide access to all functionality and data without having to cost a fortune. In the multimedia section you'll find the new GPhoto 1.2 and the Ogg Vorbis 1.0 encoder and player. It is YaST2 ("Yet another Setup Tool") that really makes SuSE. This core administration component includes many functions that help with installing and configuring the OS. SuSE claim that even first-time Linux users should be able to "complete the installation in less than 30 minutes". Its updated hardware detection handles USB 2.0 and Firewire devices. Using a new service called YOU ("YaST Online Update"), you can install updates from FTP, HTTP and other sources. ■ http://www.suse.com See also p32 for our review of SuSE 8.1.
Privacy for the Masses

The GnuPG team proudly presents version 1.2.0 of the GNU Privacy Guard, a new stable release of GNU's tool for secure communication and data storage. This complete and free replacement for PGP can be used for encrypting data and for creating digital signatures. GnuPG comes with an advanced key management facility and is said to implement "most of OpenPGP's optional features, has somewhat better interoperability with non-conforming OpenPGP implementations and improved keyserver support." You can download GnuPG 1.2.0, or a patch to upgrade from version 1.0.7, from one of the mirror sites listed at http://www.gnupg.org/mirrors.html#ftp. To check the integrity of the version, the team recommend either verifying the supplied signature (if you already have a trusted version of GnuPG installed), or checking the MD5 checksum. ■ http://www.gnupg.org/gnupg.html
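On the command line, those two checks might look like this – a sketch only; the file names are illustrative, so substitute whatever archive and signature you actually download:

# Compare the printed checksum with the one published by the GnuPG team
md5sum gnupg-1.2.0.tar.gz

# Or verify the detached signature shipped alongside the archive
# (this requires an already-trusted GnuPG installation)
gpg --verify gnupg-1.2.0.tar.gz.sig gnupg-1.2.0.tar.gz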
Like a Phoenix from the Flames

The Mozilla-based stripped-down browser, Phoenix, has reached release version 0.3, "Lucia". The idea is to have a reduced browser without the mail, news, composer and IRC functions. On their website the developers state in the FAQ that they "want to have fun and build an excellent, user-friendly browser without the constraints." Phoenix uses less memory than Mozilla and is therefore a lot faster, especially on startup. To try it out, the project provides binaries for Windows and Linux. In addition you can download the latest nightly builds, which are intended for testing purposes and may have bugs. For upcoming features, take a look at http://www.mozilla.org/projects/phoenix/phoenix-roadmap.html. There is help with the installation at http://www.mozilla.org/projects/phoenix/phoenix-release-notes.html#install. As the website says: "Use at your own risk", to explore strange new websites, to seek out new bugs and new features, to boldly go where no one has gone before… ■ http://www.mozilla.org/projects/phoenix/
SpamAssassin 2.42 set loose

This mail filter consists of a set of Perl modules and uses a wide range of tests to identify spam, including header and text analysis. In addition, some well-known blacklists like http://www.mail-abuse.org/ and http://www.ordb.org/ are supported. The spam detection works like this: typical advertising expressions like "Try it out for FREE!" or "To be removed from our list please reply with…" are recognised, and there is a big set of more sophisticated patterns. After identifying a mail as spam, it is tagged for later filtering. SpamAssassin works perfectly with Procmail. Using a Procmail rule, incoming mail is piped through the spamassassin program, and then all tagged messages can be filtered into a separate folder, as shown in the sketch below. Even if you use a graphical mail client like KMail you don't have to do without the spam killer. The website http://kmail.kde.org/tools.html provides help with the configuration. If you upgrade from a version previous to 2.40 you should read the release notes carefully, since SpamAssassin no longer comes with code to handle local mail delivery. Other changes include some bugfixes, some updates to the spamd daemon and new documentation. SpamAssassin's website offers useful tips & tricks, documentation, FAQs and HOWTOs for setting up SpamAssassin under various environments. ■ http://spamassassin.org
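As a minimal sketch of that setup – this assumes a standard Procmail installation, and the folder name "spam" is purely illustrative:

# ~/.procmailrc: pipe every incoming message through SpamAssassin
:0fw
| spamassassin

# SpamAssassin tags spam with an "X-Spam-Status: Yes" header;
# deliver anything so tagged to a separate folder instead of the inbox
:0:
* ^X-Spam-Status: Yes
spam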
Happy Birthday, OpenOffice

Just in time for the second birthday of the project, a new release has seen the light of day – OpenOffice 1.0.1 is out. The OpenOffice team say that only minor bugfixes made their way into the release: if you have had no problems using OpenOffice.org 1.0, or if your problems are not mentioned as fixed in the release notes, there is no need to upgrade. The new documentation now contains a detailed guide in PDF format which helps with the single user or network installation. French, German and Italian translations are available and other languages are in preparation. ■ http://www.openoffice.org
NEWS
Business
Business News

Virtual Servers aid Travellers

Mobil Travel Guide has picked IBM, which will provide the Mobil Travel Guide with large-scale mainframe computing and storage infrastructure on demand over the Internet. Under a 5-year agreement, IBM will provide the travel guide service with on-demand access to Linux-based server processing, storage and networking capacity from IBM e-business hosting centers in the United States. Instead of the physical Web, database and application servers they currently rely on, the guide will tap into "virtual servers" on IBM zSeries mainframes and Enterprise Storage Servers running SuSE Linux, paying only for the computing power and capacity that they require, and so matching costs with revenues as seasonal demand changes. The guide will utilize the IBM computing resources to support the expansion of a new Web-based service – Mobil Companion – which offers customized service for travelers. The Mobil Companion travel program targets upscale leisure travelers with benefits that include state-of-the-art Web-based travel planning, 24-hour en-route travel support services, and preferred rates from hotels, restaurants and other travel service providers. ■ http://www.ibm.com/news/us/2002/10/042.html
Linux Minicluster by Sandia National Labs

Parvus Corporation has helped Sandia National Laboratories to develop a portable Linux cluster computer system based on commercial-off-the-shelf (COTS) PC/104 technology. While cluster computers typically combine multiple desktop-sized PCs to work in parallel on problems too large for a single computer, Sandia's Minicluster makes use of embedded PC components to achieve a high-performance, low-cost parallel processing system. Computers such as the Minicluster could potentially be used to demonstrate a wide variety of scientific and business applications, including weather prediction, human genome analysis, pharmaceutical design, aircraft and automobile design, seismic exploration, artificial intelligence, data mining, and financial analysis, among others. "By utilizing parvus Corporation's components, packaging, and systems integration services, Sandia was able to quickly complete the Minicluster project," said Mitch Williams, engineer at Sandia's Embedded Reasoning Institute (ERI). "Our design successfully integrates all standard features of normal-sized rack-mounted clusters, including individual keyboard, video and mouse access to every node, while minimizing size to barely over a foot tall and five inches wide." The unit incorporates a Linux OS, four PC/104 processor nodes, a 10BaseT private network, power supply, KVM switch, and an external PCMCIA wireless connection. ■ http://www.parvus.com/parvemp/LinuxC.htm http://eri.ca.sandia.gov
Huge Benefits through Open Source

By replacing its back-end IT infrastructure with open source software, a London-based construction firm has gained huge benefits in reliability, scalability and performance, plus more than halved its annual Microsoft licensing fees – saving over £200,000 this year and about £100,000 each year from now on. Sirius IT, who implemented the solution, states that by running Open Source Software on Linux, yet keeping Microsoft Office on each desktop, its client, Killby & Gayford Group, has not only radically increased its business competitiveness but has also drastically lowered its operational costs. Killby & Gayford Group now use an entirely open source solution to run their complex network, spread over two sites. Sirius IT claims Killby & Gayford is one of the very few examples of a fully integrated open source server/Microsoft desktop infrastructure anywhere in the world. Measured benefits Killby & Gayford have gained are:
• Massive reduction in software licensing costs due to the replacement of Microsoft server software with open source alternatives.
• Zero user retraining costs – end users continue to use the familiar Microsoft Office suite on their desktop.
• Simple, powerful, secure, single sign-on – staff can now log on to the network, from any desktop, using any operating system, and use any service.
• Vastly increased security, plus total back-end immunity to viruses.
• Radical improvements in file access and print serving speeds, across the network.
• Rock solid network stability – uptimes now measured in years, rather than weeks.
■ http://www.siriusit.co.uk/technical/casestudies/kg.html
UK Free Software Network

UKFSN is an ISP with a difference – all of the profits from the operation will be donated to fund Free Software projects in the UK. Profits from the service will be paid to the Association of Free Software, which has agreed to distribute all the money raised. ■ http://www.ukfsn.org/
MontaVista increase Protocol Support

MontaVista Software has formed a partnership with Hughes Software Systems in which the SS7 protocol stack and wireless GPRS/3G stacks will be ported to MontaVista's Carrier Grade Edition. This will give MontaVista support for more network equipment providers, including those in the 3G and 2.5G mobile phone network markets. The range of supported protocols is broad and should help MontaVista in dominating the market. All the HSS protocol stacks are available in both portable source code and binary form on a variety of hardware platforms, along with API documents and user manuals. ■ http://www.hssworld.com
Opera first

Following on from last month's news item about Mozilla and the Optimoz project, we have been contacted by Opera, who point out that they have had mouse gestures in their web browser for a long time, both on Windows and, most importantly, in the Linux version. ■ http://www.opera.com/whyopera/
Qtopia Upgrade

Trolltech have announced the release of Qtopia 1.6 Beta. Qtopia is based on the embedded port of Qt. Version 1.6 has better integration for Microsoft Outlook synchronisation and an improved media player allowing skinning and better song control. The development environment has upgraded support for Asian character input and backup facilities using TCP. The product is still free to Open Source developers and can be found on Sharp Zaurus SL-5500 and SL-A300 PDAs. ■ http://www.trolltech.com/products/qtopia/index.html
HP set the Benchmark

HP announced that HP ProLiant servers have delivered the first Linux TPC-C benchmark results running Oracle 9i Real Application Clusters using Red Hat Linux Advanced Server. Demonstrating the cost and maintenance benefits of running Linux-based hardware and software in enterprise operating environments, an 8-node cluster of HP ProLiant DL580 servers using Intel Pentium III Xeon processors and the HP StorageWorks MSA1000 storage system achieved 138,362.03 tpmC (transactions per minute) at a cost of just $17.21/tpmC with Red Hat Linux Advanced Server. TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry. The complete benchmark results and test details are available to read at http://www.tpc.org. Information about additional HP ProLiant servers certified for Red Hat Linux Advanced Server is available at http://hardware.redhat.com/hcl/.
SuSE plays in the SAP League

SuSE Linux has become a SAP AG technology partner. This allows SuSE's large enterprise customers, like Siemens Business Services, who run several mySAP Web Application Servers, to benefit from the partnership with SAP. The partnership enables companies all over the world to use mySAP.com on SuSE Linux Enterprise Server while taking advantage of services and support from SAP. Since May 2002, mySAP.com has been available on SuSE Linux Enterprise Server for IBM eServer zSeries. This is the first certified offer of a scalable and reliable e-business collaborative 64-bit solution that can run in an LPAR (logical partition) or on z/VM (virtual machine). Additionally, engineers of both companies are working together to integrate mySAP.com into the new 64-bit Itanium Processor Family (IPF). ■ http://www.suse.de/us/company/press/press_releases/archive02/suse_sap.html
Instant Offices chooses Rackspace

Rackspace Managed Hosting has been chosen by Instant Offices to host its online business community for the serviced office space marketplace. Instant Offices is an online network of serviced office operators providing a comprehensive directory and database of serviced office space throughout Europe. This allows users to view properties from their desks, make comparisons between office spaces and negotiate better deals. The service is free because the serviced office operators, in partnership with Instant Offices, pay a commission for the successful introduction of new business through the network. Growth in demand for Instant Offices' offering, and the introduction of new services like virtual tours – 360 degree online tours of available office space and facilities – meant that Instant Offices needed to ensure it had high levels of Internet connectivity to support its growing business community. Instant Offices' server is managed by Rackspace at its Data Centre in West Drayton, near Heathrow. The Data Centre offers customers burstable bandwidth to address rapid or continuing increases in the demand for data from their housed server. Rackspace provides round-the-clock monitoring, security and technical support, backup power supply and high-speed, redundant network connectivity to ensure uninterrupted high performance and data integrity. ■ http://www.instantoffices.co.uk/
Insecurity
NEWS
Insecurity News

dvips
dvips contains a flaw allowing print users to execute arbitrary commands. The dvips utility converts DVI format into PostScript(TM), and is used in Red Hat Linux as a print filter for printing DVI files. A vulnerability has been found in dvips, which uses the system() function insecurely when managing fonts. Since dvips is used in a print filter, this allows local or remote attackers who have print access to carefully craft a print job that would allow them to execute arbitrary code as the user 'lp'. A workaround for this vulnerability is to remove the print filter for DVI files. The following commands, run as root, will accomplish this:

rm -f /usr/share/printconf/mf_rules/mf40-tetex_filters
rm -f /usr/lib/rhs/rhs-printfilters/dvi-to-ps.fpi

However, to fix the problem in the dvips utility as well as removing the print filter, we recommend that all users upgrade to these errata packages. This vulnerability was discovered by Olaf Kirch of SuSE. Additionally, the file /var/lib/texmf/ls-R had world-writable permissions. ■ Red Hat reference RHSA-2002-192-18
Mozilla

Updated Mozilla packages are now available for Red Hat Linux. These new packages fix vulnerabilities in previous versions of Mozilla. Mozilla is an open source web browser. Versions of Mozilla previous to version 1.0.1 contain various security vulnerabilities. These security flaws could be used by an attacker to read data off the local hard drive, to gain information that should normally be kept private, and in some cases to execute arbitrary code. All users of Mozilla should update to packages containing Mozilla version 1.0.1, which is not vulnerable to these issues. ■ Red Hat reference RHSA-2002-192-13
bugzilla

The developers of Bugzilla, a web-based bug tracking system, discovered a problem in the handling of more than 47 groups. When a new product is added to an installation with 47 groups or more and "usebuggroups" is enabled, the new group will be assigned a groupset bit using Perl math that is not exact beyond 2^48. This results in the new group being defined with a "bit" that has several bits set. As users are given access to the new group, those users will also gain access to spurious lower group privileges. Also, group bits were not always reused when groups were deleted. This problem has been fixed in version 2.14.2-0woody2 for the current stable distribution (woody) and will soon be fixed in the unstable distribution (sid). ■ Debian reference DSA-173-1

Security Posture of Major Distributions

Debian
Security sources: Info: www.debian.org/security/, List: debian-security-announce, Reference: DSA-… 1)
Debian have integrated current security advisories on their web site. The advisories take the form of HTML pages with links to patches. The security page also contains a note on the mailing list.

Mandrake
Security sources: Info: www.mandrakesecure.net, List: security-announce, Reference: MDKSA-… 1)
MandrakeSoft run a web site dedicated to security topics. Amongst other things the site contains security advisories and references to mailing lists. The advisories are HTML pages, but there are no links to the patches.

Red Hat
Security sources: Info: www.redhat.com/errata/, List: www.redhat.com/mailing-lists/ (linux-security and redhat-announce-list), Reference: RHSA-… 1)
Red Hat categorizes security advisories as Errata: under the Errata headline any and all issues for individual Red Hat Linux versions are grouped and discussed. The security advisories take the form of HTML pages with links to patches.

SCO
Security sources: Info: www.sco.com/support/security/, List: www.sco.com/support/forums/announce.html, Reference: CSSA-… 1)
You can access the SCO security page via the support area. The advisories are provided in clear text format.

Slackware
Security sources: List: www.slackware.com/lists/ (slackware-security), Reference: slackware-security … 1)
Slackware do not have their own security page, but do offer an archive of the security mailing list.

SuSE
Security sources: Info: www.suse.de/uk/private/support/security/, Patches: www.suse.de/uk/private/download/updates/, List: suse-security-announce, Reference: suse-security-announce … 1)
There is a link to the security page on the homepage. The security page contains information on the mailing list and advisories in text format. Security patches for individual SuSE Linux versions are marked red on the general update page and comprise a short description of the patched vulnerability.

1) Security mails are available from all the above-mentioned distributions via the reference provided.
pam

Paul Aurich and Samuele Giovanni Tonon discovered a serious security violation in PAM. Disabled passwords (i.e. those with '*' in the password file) were classified as empty passwords, and access to such accounts is granted through the regular login procedure (getty, telnet, ssh). This works for all such accounts whose shell field in the password file does not refer to /bin/false. Only version 0.76 of PAM seems to be affected by this problem. This problem has been fixed in version 0.76-6 for the current unstable distribution (sid). The stable distribution (woody), the old stable distribution (potato) and the testing distribution (sarge) are not affected by this problem. ■ Debian reference DSA-177-1
tkmail

It has been discovered that tkmail creates temporary files insecurely. Exploiting this, an attacker with local access can create and overwrite files as another user. This has been fixed in version 4.0beta9-8.1 for the current stable distribution (woody), in version 4.0beta9-4.1 for the old stable distribution (potato) and in version 4.0beta9-9 for the unstable distribution (sid) of Debian. ■ Debian reference DSA-172-1
Heartbeat

Heartbeat is a monitoring service that is used to implement failover in high-availability environments. It can be configured to monitor other systems via serial connections, or via UDP/IP. Several format string bugs have been discovered in the Heartbeat package. One of these format string bugs is in the normal path of execution; all the remaining ones can only be triggered if Heartbeat is running in debug mode. Since Heartbeat runs with root privilege, this problem can possibly be exploited by remote attackers, provided they are able to send packets to the UDP port Heartbeat is listening on (port 694 by default). Vulnerable versions of Heartbeat are included in SuSE Linux 8.0 and SuSE Linux 8.1. As a workaround, make sure that your firewall blocks all traffic to the Heartbeat UDP port. ■ SuSE reference SuSE-SA:2002:037
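On a host running iptables, that workaround amounts to a single packet filter rule – a sketch, assuming Heartbeat is still listening on its default port:

# Drop all inbound traffic to Heartbeat's default UDP port (694);
# adjust the port number if your configuration uses a different one
iptables -A INPUT -p udp --dport 694 -j DROP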
HylaFAX

HylaFAX is a client-server architecture for receiving and sending facsimiles. The logging function of faxgetty prior to version 4.1.3 was vulnerable to a format string bug when handling the TSI value of a received facsimile. This bug could be used to trigger a denial-of-service attack or to execute arbitrary code remotely. Another bug in faxgetty, a buffer overflow, can be abused by a remote attacker by sending a large line of image data to execute arbitrary commands too. Several format string bugs in local helper applications were fixed too. These bugs cannot be exploited to gain higher privileges on a system running SuSE Linux because of the absence of setuid bits. The hylafax package is not installed by default. A temporary fix is not known. Please download the update package for your distribution, then install the package using the usual rpm command "rpm -Fhv file.rpm" to apply the update. ■ SuSE reference SuSE-SA:2002:035

apache

A number of vulnerabilities were discovered in Apache versions prior to 1.3.27. The first is regarding the use of shared memory (SHM) in Apache. An attacker who is able to execute code as the UID of the webserver (typically "apache") is able to send arbitrary processes a USR1 signal as root. Using this vulnerability, the attacker can also cause the Apache process to continuously spawn more child processes, thus causing a local DoS. Another vulnerability was discovered by Matthew Murphy regarding a cross-site scripting vulnerability in the standard 404 error page. Finally, some buffer overflows were found in the "ab" benchmark program that is included with Apache. All of these vulnerabilities were fixed in Apache 1.3.27. ■ Mandrake reference MDKSA-2002:068
drakconf

Errors were discovered in the Mandrake Control Center that prevented users running under the nl_NL, sl, and zh_CN locales from starting the program. The error generated would be shown as "cannot call set_active on undefined values" on line 423. ■ Mandrake reference MDKA-2002:012
tar

A directory traversal vulnerability was discovered in GNU tar version 1.13.25 and earlier that allows attackers to overwrite arbitrary files during extraction of the archive by using a ".." (dot dot) in an extracted filename. ■ Mandrake reference MDKSA-2002:066
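To see the kind of archive entry involved, note that GNU tar will store a ".." path verbatim if asked to, and a vulnerable version then follows it on extraction. A harmless demonstration, with illustrative file names:

# Create an archive whose member path climbs out of the extraction
# directory; -P tells tar to keep the leading "../" verbatim
echo test > ../escape.txt
tar -cPf evil.tar ../escape.txt

# Listing the archive shows the traversal path a victim would extract
tar -tf evil.tar    # prints: ../escape.txt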
NEWS
Kernel
Zack's Kernel News

The Kernel Mailing List comprises the core of Linux development activities. Traffic volumes are immense, and keeping up to date with the entire scope of development is a virtually impossible task for one person. One of the few brave souls to take on this impossible task is Zack Brown. Our regular monthly column keeps you up to date on the latest decisions and discussions, selected and summarized by Zack. Zack has been publishing Kernel Traffic, a weekly digest of the list, for several years now; even reading just the digest is a time-consuming task. Linux Magazine now provides you with the quintessence of Linux kernel activities, straight from the horse's mouth.

Replacement for TCP

Linux 2.5 will soon support SCTP (Stream Control Transmission Protocol), a general purpose networking protocol that attempts to solve some problems encountered with standard TCP (Transmission Control Protocol). David S. Miller recently promised to merge the existing SCTP patch into the main kernel source tree. There are several reasons why users have been looking for a replacement for TCP. While TCP controls the order of data transmission, applications that do not require strict data ordering must suffer unnecessary delays as TCP blocks to ensure proper ordering. In addition, TCP is more vulnerable to denial-of-service (DoS) style attacks. SCTP attempts to answer these and some other drawbacks. SCTP was originally developed by the IETF (Internet Engineering Task Force) SIGTRAN (signal transport) working group to transport SS7 (Signalling System 7) over IP, but it may also be used as a general purpose protocol. ■

Hyperthreading

Recently, there has been a big push to support hyperthreading in the 2.4 and 2.5 kernels. Hyperthreading is a bit like the opposite of Symmetric Multiprocessing: instead of using multiple CPUs as one, hyperthreading treats a single CPU as many, with some interesting performance boosts. Currently the only processor that officially supports hyperthreading is the Pentium 4 XEON, but it is rumored that other P4s can turn on hyperthreading in their BIOS. Hyperthreading made its first appearance in the Linux kernel in November of 2001 in kernel 2.4.14, under the name SMT (Symmetric Multithreading). At the time, very few developers knew what to make of it, and there was much speculation. Intel claimed a 30% performance boost under their unreleased, proprietary benchmarks, but this was generally taken with a pinch of salt. At the time, no available hardware supported hyperthreading. Only when the P4 XEON came out was there a possibility of wide-scale testing and development of this feature. In recent weeks, many large patches have appeared, and seem to be making their way into the main kernel tree. Ingo Molnar made a big splash with his patch to integrate hyperthreading with his new scheduler code.

One problem with SMP systems is that if more than one OOPS occurs simultaneously, they could overwrite each other, destroying the evidence needed to debug them. David Howells has been working on a patch to force all OOPS output to wait its turn before dumping to the screen. There is still some question as to whether his implementation is quite right; but it seems clear that he is on the right track, and that this code will be a welcome addition to the main tree. OOPS reports contain essential information about what the system was doing just before a crash. When decoded by the ksymoops program, an OOPS can provide developers with a valuable clue in hunting down and fixing an elusive bug. There are a number of problems with trying to capture OOPSes, and developers are always trying to expand their possible options. The main problem is that the system has crashed, and so there are only a limited number of behaviors that can be counted on. The OOPS code must do its best to generate a useful OOPS report in an environment in which much of the system may not be operational. There have previously been patches to dump OOPS output to a file, to send it over a serial port or even across the network; now there are patches to deal with multiple simultaneous OOPSes. ■

User mode linux in kernels

Jeff Dike's User-Mode Linux has finally made it into the official 2.5 kernel tree. UML is a patch that allows the Linux kernel to run as a user process, creating one or more virtual computers running simultaneously on a given system. UML recently became self-hosting, meaning that users may run UML from within a running UML process. Kernel version 2.5.35 is the first to contain the full incorporated UML patch. There are many uses for this feature. Because UML is a user process, it becomes possible to test each new kernel version as a UML invocation, without the risk of crashing the whole system. This saves developers the time that would otherwise be wasted rebooting the computer after each failed test. Another use for UML is in clustering. It has long been recognized by top developers that extending SMP to more than a few processors will result in tremendous complexity in the kernel's locking mechanisms. To avoid this, a number of alternatives have been actively pursued for some time. One is the idea of SMP clusters, widely promoted by Larry McVoy; another is that UML may be a natural way to bridge multiple systems. Jeff has reported some success with his initial experiments, but the final direction of Linux clustering beyond SMP remains to be seen. ■
khttpd webserver is out

The controversial khttpd web server has been removed from the 2.5 kernel tree. Khttpd was written in response to the 1999 Mindcraft benchmarks that showed MS Windows serving web pages faster than Linux under certain conditions. Although most Linux developers dismissed the benchmark as highly slanted, they were forced to admit that under the conditions of the test, MS Windows did out-perform Linux. To counter this, Linus Torvalds accepted the khttpd web server into the main kernel tree. This caused much violent protest, as a web server does not properly belong in kernel space. Linus felt that it was important to beat the Mindcraft benchmark, however, and so the patch stayed. In recent months, however, a new user-space web server, Tux2, has consistently out-performed khttpd, making khttpd's presence in the kernel superfluous. Khttpd has also been unmaintained for some time, making the decision to eventually remove it somewhat easier. Unfortunately, Tux2 is plagued by intellectual property disputes that no one seems inclined to fight over. Among other things, these disputes prevent the Tux2 web server from replacing khttpd in the kernel. Some may argue that this is not a bad thing, but the fact remains that there are still many open questions surrounding a viable Linux webserver. One thing is certain: khttpd is gone. ■

XFS journaling filesystem

The journaled filesystem XFS has finally made it into the official 2.5 kernel tree. This has been a controversial project, with many folks arguing for XFS inclusion for a long time, and others saying the code was not ready yet. Kernel 2.5.36 is the first to contain XFS. SGI has been the main developer of XFS, and has been pushing for inclusion in the main tree for some time. Linux distributions such as Mandrake, SuSE and Slackware have come bundled with the XFS patch for some time as well. Linus Torvalds had refused to do the merge in the official tree because he felt there were certain implementation details that would have a negative impact on the rest of the system, and he wanted SGI to fix those details before he would accept their patch. Ext3, ReiserFS, and JFS (from IBM) are examples of other journaling filesystems that have previously been accepted into the main kernel tree. Journaling filesystems track all disk writes, and make sure that the filesystem is never in an inconsistent state. This means that a system crash will not require running fsck to bring the filesystem back into a usable state. Assuming all user data has been synchronized to disk, it is possible, with a journaling filesystem, to turn off the power without fear of losing data. While the ext2 filesystem remains the default on most Linux systems, it is only a matter of time before a journaling filesystem supplants it. ■
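For anyone trying XFS out, creating and mounting a filesystem follows the usual pattern – a sketch, assuming the xfsprogs userspace tools are installed and that /dev/hdb1 is a scratch partition whose contents you can afford to lose (both assumptions, not taken from the article):

# Create an XFS filesystem on the spare partition (destroys its data)
mkfs.xfs /dev/hdb1

# Mount it; thanks to the journal, no fsck run is needed after a crash
mkdir -p /mnt/test
mount -t xfs /dev/hdb1 /mnt/test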
Netware filesystem sold to the Canopy Group

Timpanogas, a long-time contributor to the Linux kernel, has sold its intellectual property, including the Netware Filesystem, to the Canopy Group. Jeff V. Merkey, head of Timpanogas, would not specify which Timpanogas Linux project, if any, would continue under the new management. Jeff and his company have been fairly controversial since they first became involved in Linux kernel development years ago. For a long time Jeff was regarded by many as something of a crackpot, but he managed to gain some measure of recognition for his technical skill and his ability to gain useful information from recalcitrant companies. Andre Hedrick, the Linux kernel IDE maintainer, worked for him briefly at Timpanogas, but left after an apparent falling out between them. The Canopy Group appears to be some sort of incubator of open source and Internet infrastructure companies. Their list of companies includes Linux Networx and Trolltech. It remains unclear how the Canopy Group intends to make use of the Timpanogas intellectual property. ■
COVER STORY
Focus on Wireless Networking
Wireless LANs

There is an alternative to drilling holes in walls, fitting hundreds of yards of cable duct and crimping network cables: wireless networks can be set up to connect desktops and laptops throughout the whole house to the internet without all that expensive building work. BY DANIEL COOPER AND BENEDIKT HEINZ
If you are looking for stress- and drill-free networking, wireless LAN is the magic word. Insert a few cards, set up a couple of access points and your network is up and running – no need to ask the landlord or the owners' committee. Whether you need to cover just your own flat or the whole floor, bridge the network to the house next door, or even to the corner of the street, the range will depend on your location, aerial, the data rate, and frequency – page 19 and the following pages will give you an overview of the current standards, interoperability issues, and potential gains. Setting up a wireless LAN for Linux is not as easy as one might wish. Manufacturers fail to provide driver support, and there are no Linux-based administration tools for access points or access routers. It is a familiar scenario – the community once again has to produce drivers with little help from the industry. Refer to the article starting on page 25 for details on the driver requirements for your cards and tips on the installation procedures.
Page 21 provides details on a few select WLAN components for which we were able to locate Linux drivers, or that provide browser-based configuration facilities. This section also features a global first: a combination of network storage, 4-port switch, wireless access point and DSL router. The main problem with wireless networks is the fact that they are so easy to sniff. An experienced hacker may need only a few minutes to crack the hardware-based encryption facilities. This prompted us to include a step-by-step guide to secure communications in wireless LANs (page 28), with tips on protecting yourself against attacks. ■
Cover Story

Technology Review ..............19
Read all about the wireless protocols and technologies involved.

Wireless LAN Hardware ....21
If you want to know what the requirements for a wireless LAN are, then read on.

Wireless Drivers .....................25
A guide to the configuration work you need to get a wireless LAN card up and running.

OpenVPN ....................................28
Use an encrypted tunnel to protect your data and provide a secure solution.
Wireless Networking
The Wireless Jungle
Wireless data communication comes in all shapes and sizes – some of the technologies are over a hundred years old. Read on to learn all about the protocols and technologies involved. BY HOLGER LUBITZ
Wireless data transmission is certainly nothing new. The first morse code was exchanged between two stations at the end of the 19th century, and morse code is generally regarded as the oldest known data encoding technique. Although at the time usage was restricted by fragile technology and immense running costs, which meant that the technological potential was exploited mainly for military purposes, today's inexpensive equipment makes wireless communications a real prospect for home use. Although a few years have passed since wireless LANs were introduced to the consumer market, the race for an exclusive standard is still on, with various technologies competing for custom and bandwidth. While Bluetooth has more or less established its position as a kind of wireless USB, with low transmitting power, range, and bandwidth, "real" WLAN solutions can, with some careful planning, be used to cover greater distances and are approaching the bandwidths normally expected in wired environments.
Wireless – but how?

The advantages of wireless solutions are self-evident – no need to install wiring, and laptops can access the network directly, wherever they are. The disadvantages are not quite as obvious: wiring allows exclusive communication between the nodes on a network, but wireless solutions need to take other users of the same frequency into consideration. That means not only unintentional WLAN disruptions due to other users in the same waveband, but also intentional misuse of the WLAN by outsiders. The current encryption services are easily exploited, and if you do not enable encryption at all, the network is available to anyone within range. License-free radio transmissions are only possible within the so-called ISM wavebands (Industrial, Scientific, Medical). The ISM frequencies 900 MHz, 2.4 GHz and 5 GHz are relevant to our discussion. The 900 MHz waveband would provide the best range due to its comparatively low frequency; however, it is not commonly available for use, having already been assigned to mobile radio communications in Europe, and there are very few products available, as mobile telephones dominate this frequency range in the USA. The higher the frequency, the worse the propagation characteristics of an electromagnetic wave. Although you do not explicitly need visibility between two nodes at WLAN frequencies, even thin internal walls can subject the GHz frequencies to noticeable attenuation. Theoretical ranges of several hundred meters thus tend to drop to 20 or 30 meters if walls or other obstacles are in the way. However, distances of up to 50 km can be bridged using directive antennas and avoiding obstructions. You will need to pay attention to the physical environment when positioning your access points – an access point in the cellar will not be much use to you if you are on the second or third floor, especially if reinforced concrete was used for flooring. Instead, you might prefer to look for a position in the center of your house or apartment, even if this means some additional wiring to reach this point. If you intend to use the wireless LAN in the open, you should position your access point on the roof or in a window to provide maximum range. Access points are normally configured to be omnidirectional – that is, they transmit in every direction. However, you can use special antennas to provide a directive element, although you might prefer to have this work done by a specialist who has access to the measuring equipment required to achieve maximum performance. If you are the do-it-yourself type, you should at least make sure that the cable used to attach the antenna is as short as possible.
Standards

The 802.11 protocol family has been standardized by the IEEE, the Institute of Electrical and Electronics Engineers (say: "i triple e"). The original 802.11 standard that dates back to 1997 can be regarded as a predecessor to today's WLANs. It envisaged data transfer rates of 1 or 2 Mbit/s in the 2.4 GHz frequency range, and replaced many of the older proprietary technologies. However, as wired networks were still a lot quicker, the market demanded higher data transfer rates and got them – although this meant a return to proprietary solutions. To avoid uncontrolled developments, two new standards, 802.11a and 802.11b, were introduced in September 1999. Products based on 802.11b work in the same waveband as 802.11, but use a different modulation technique to achieve data transfer rates between 5.5 and 11 Mbit/s. But where the USA allows a maximum transmitter power of 1 watt, a restriction of 100 milliwatts applies in Europe – that is enough for LANs, but fairly ineffective if you need to bridge greater distances. There was some delay before the first products for 802.11a were introduced. This standard means an excursion to the 5 GHz frequency range (5.15-5.35 and 5.725-5.825 GHz). But again, only the USA allowed the use of this range for wireless LANs. ETSI, the standards body responsible for Europe, had instead reserved this range for the HiperLan and HiperLan/2 (High Performance LAN) technologies. These frequencies had already been assigned in many national frequency usage plans and had to be reassigned. Market restrictions and low demand meant that the first 802.11a products did not have any noticeable impact on the market until this year.
Bandwidth for the Masses

Some European countries have started to liberalize the market for 802.11a products. In Germany, the Regulating Authority for Telecommunications and Post (RegTP – similar to the RA in the UK) now permits use of the 5150-5350 and 5470-5725 MHz frequency ranges, without explicitly restricting these wavebands to a specific technical standard. However, transmitter power is restricted to 200 milliwatts at the lower end of the scale (indoors) and to 1 watt at the higher end of the scale (including outdoor use). Although HiperLan/2 does offer lower latency as a wireless ATM, and has superior facilities for guaranteed bandwidth, the protocol overheads are too high for use in pure IP environments. Also, it is cheaper to produce hardware for 802.11a than for HiperLan/2. The bandwidth available in the 5 GHz waveband allows for a larger number of independent channels, and OFDM modulation permits higher data transfer rates. 802.11a provides eight different data rates between 6 and 54 MBit/s depending on reception quality. The disadvantage is that the higher frequency means a shorter range and consequently a higher concentration of access points. This concentration increases if you intend to use all the available channels, as you will need to resort to lower-powered antennae in this case. However, research indicates an approximately 300% improvement in bandwidth for 802.11a installations compared to 802.11b. The 802.11g standard is new and has not yet been ratified. It envisages a combination of the old 2.4 GHz waveband with OFDM (which was not permissible in the original standard), and is thus capable of achieving up to 54 MBit/s, although the restriction to three independent channels still applies. If multiple users require improved data transfer rates, 802.11a still seems to be the better solution. But 802.11g comes into its own when individual nodes in an existing 802.11b installation require increased performance. It might make sense to combine both technologies – look out for dual-band access points that support both standards. One further advantage of 802.11a is the fact that the 5 GHz waveband has not noticeably been occupied by other products so far. 2.4 GHz standards can expect interference from Bluetooth, and even microwave ovens, that transmissions in the 5 GHz waveband are not currently subject to. The proprietary protocols of the early days continue to lose ground. One notable example is OpenAir by Proxim, which goes back to pre-802.11 days. OpenAir uses a frequency hopping protocol and simple modulation techniques to achieve data transfer rates between 0.8 and 1.6 MBit/s. Cheap to implement, but the performance could hardly be described as earth shattering. HomeRF by Diamond is also aimed at providing low-cost hardware, but with far inferior performance at only slightly lower prices it has lost out to 802.11b in the States, and has had virtually no impact on the European market.
Interference

As previously mentioned, the frequency ranges are available for public use, and interference due to microwaves is commonplace in the 2.4 GHz waveband. WLAN cards thus implement various error avoidance and correction techniques to guarantee error-free transmission despite interference. A WLAN card does not transmit at a fixed frequency, instead using multiple wavelengths or continually changing frequency within a waveband. This technique, referred to as spread spectrum, allows the transmitter to avoid frequency ranges suffering from interference, or at least mitigate the effect. There are two variants: direct sequence modulates the data with a high frequency code. This requires more bandwidth but makes it easier to filter out interference in individual ranges when the same code is used to demodulate the transmission. Frequency hopping divides a waveband into multiple narrow channels and switches channels continually. If there is interference on one channel, you would not normally expect similar interference when hopping to the next channel. Error correction takes care of any remaining mistakes. Although there are numerous error-correcting encoding procedures, they are inefficient within the context of wireless transmissions. Instead, error recognition is the key, and defective packets are simply retransmitted. Of course this will have a noticeable effect on the available bandwidth if a large number of errors occurs. The bandwidths quoted for WLAN products should thus be understood as referring to the maximum gross bandwidth in perfect conditions – practical experience shows that these values are almost impossible to achieve.
The Future

IEEE 802.11b is the de facto standard for today's wireless networks, although the next few years may see it being replaced by 802.11a at 54 MBit/s – the equipment is already on the manufacturers' shelves. However, the introduction of 802.11x should prove to be a more significant innovation for home users. This means devices that adhere to the 11 MBit/s 802.11b standard but provide far superior encryption to WEP-128. Expect the first generation of equipment at the end of this year. ■
Stepping Up to a Wireless LAN

A LAN Solution for Major Tom

It is so practical to be able to use laptops all over the house without a tangle of cables, or simply not to need to wire up your children's bedroom to get them on the net. If you want to know what the requirements for a wireless LAN are, read on. BY DANIEL COOPER

There are two basic methods of implementing wireless networks based on IEEE 802.11: Ad hoc mode means that the computers communicate directly, but they must be able to talk to each other constantly to avoid disruptions. You can connect a maximum of 16 computers in this way. The hardware requirements are also quite simple – in fact you only need a WLAN card in each computer. In access point mode (AP mode), also referred to as a managed infrastructure, one or more access points are used as hubs: the wireless computers transmit data to the access point, which in turn relays the data to the intended recipient. This means that the computers only need to connect to the nearest access point to log on to the wireless LAN. The access point assumes a role similar to that of an Ethernet switch. Only a couple of years ago access points really could not do anything more fancy than moving data from one WLAN card to another, but now there are devices with integrated network ports, Ethernet switches, or even DSL routers.
Attenuation

You will need to invest a lot more money in hardware to implement an AP network in comparison to an ad hoc network. Of course, each wireless host will still need a WLAN card, but you have the added expense of an access point or access router. For large houses, or houses that are well screened, you may even require multiple access points. The farther apart two WLAN computers are, and the more walls or buildings there are in the way, the poorer and slower the connection will be – until it finally collapses. Even the temperature, humidity, and the weather can affect the connection quality. Any statement on possible distances would therefore be inconclusive, which led to our decision to avoid quoting figures here. As a basic guideline, access points should be mounted in a high and unobstructed position in the middle of the area you intend them to service.
Illusions of Security
Basically anybody in the proximity of your premises can sniff your wireless LAN, and even log on to your network.
The WEP (Wired Equivalent Privacy) encryption standard was thus introduced to prevent misuse. Originally, a 40 bit key (WEP-40) was used, although today's devices nearly all use 128 bit (WEP-128) keys. However, WEP-128 uses 24 bits for the so-called initialization vector (IV), which is simply incremented for each packet, and that leaves only 104 bits for the secret key. That does not mean you should disable WEP-128 – at least it will keep the script kiddies at bay. Choose your key carefully – the sequence should be randomly generated, if possible. You will need 13 bytes (104 bits) for WEP-128. You will definitely want to avoid using a password for a Windows client as your WEP key. The password is not used directly but truncated to 24 bits in the case of WEP-40, which corresponds to a mere 16.8 million combinations. Considering the fact that a laptop with a 1 GHz CPU can try out about 170,000 keys per second, you do not need to be a genius to work out that your network will be compromised in a matter of minutes. It is preferable to generate the 40 or 104 bits randomly. The following call will provide you with 14 bytes in hexadecimal notation. Now all you need to do is choose 13 of them and use them as your WEP-128 key:

dd if=/dev/urandom bs=14 count=1 | hexdump | cut -c 9-
Control Mechanisms
If you want to set up another hurdle for the attacker to take, you can implement an access control list (ACL) on your access point – in our test, the only box offering this feature was the Tellus TWL-R410. This would mean that instead of any card being allowed access, only the hardware addresses (MAC addresses) in the list are allowed to log on to the access point. Unfortunately, this mechanism is also relatively simple to sidestep, as some card drivers (albeit only via patches in some cases) allow you to edit the MAC address of the card. The attacker only
needs to sniff the address of a card with access privileges and spoof that card's MAC address.
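To make the weakness concrete, the following is a hypothetical sketch of the spoofing just described – the interface name and address are placeholders, and, as noted above, not every card driver will accept the override:

ifconfig wlan0 down
ifconfig wlan0 hw ether 00:04:DB:A5:72:E0   # a sniffed, permitted address
ifconfig wlan0 up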
We tested a selection of today's wireless LAN products. You can refer to the article "Driver Safari" on page 19 for a description of installing the drivers and details on the configuration of the cards in our test.

Actiontec Wireless USB Adapter 802UI3
The Wireless USB Adapter by Actiontec is particularly useful for desktops or servers. It saves you sacrificing a valuable PCI slot for the adapter you would otherwise require to run laptop cards on your desktop – and even then the position of the wireless LAN card, i.e. under your desk or somewhere in the corner, would not be ideal. You can use a USB extension lead to attach Actiontec's USB adapter at a distance of up to six meters from the last USB hub, and even wall-mount the adapter using the brackets supplied. If you do not feel like drilling holes in the wall, you can always use the sticky pads that complete the package. The USB adapter is the same size as a standard PC extension card, about one centimeter high, and is attached to the USB port by means of a special lead. The device speaks IEEE 802.11b, uses 128 bit encryption, and requires the prism2_usb module from the "Next-Generation" driver package (see page 19). There is one issue with the USB adapter: If you remove the prism2_usb module without removing usbcore, and then detach the USB adapter, you can expect your kernel to crash. You should also avoid unloading the prism2_usb module and reloading it without reloading usbcore. Practically speaking this means first detaching the device itself, and then removing the drivers – although this is the opposite
of what you would normally do. This driver quirk is no big deal in a static environment, as you will not be detaching the USB adapter regularly. The "Driver Safari" on page 19 describes how to add the Actiontec Wireless Adapter to your boot scripts without risking any kernel lockups. At around £71 the Actiontec wireless USB adapter is cheaper than most other laptop WLAN cards with a PCI adapter, and it is far more flexible for both desktop and server use because of its USB connectivity. That is why this product gets the Editor's Choice award.
D-Link DWL-500 PCI Adapter (Elito-Epox EWL-PCA)
If you want to use a WLAN laptop card in a normal PC, you need a PC Card adapter for the PCI bus. The answer is to add a typical laptop CardBus controller – in the case of the DWL-500 by D-Link (around £115) that means the Ricoh RL5c475. Running this device on Linux is simple, as the PCMCIA Card Services support almost all the known chipsets of the WLAN cards. If you have not already installed the pcmcia package, you will need to do so. There are no further configuration steps, as any cards inserted (wireless LAN, or a compact flash module on an adapter card) are recognized and configured just as they would be on a laptop. However, do not be surprised when you install your next Linux distribution – Linux will assume that your computer is a notebook if a CardBus controller is detected, and this can cause SuSE to install a modified KDE desktop with a battery display.
3Com X-Jack Wireless LAN Card
The 3Com 3CRWE62092A Wireless LAN Card with X-Jack antenna is the smallest competitor in our test. The retractable antenna is this card's strong point: it makes the 3Com X-Jack, as the card is normally referred to despite the complex model code, the same size as a standard Type II card.
The X-Jack supports IEEE 802.11b with 128 bit encryption, just like the other candidates in our test. The advantage is self-evident: You do not need to remove the card while transporting your laptop in a bag or case – on the contrary, the card will still work with the antenna retracted, although reception may be poor in this position. Take care not to bend the antenna when retracting it – only the outside edges of the card lid have been
smoothed; the inside edges are razor-sharp and shave the plastic coating off the antenna. You can plainly see the plastic shavings and cuts in Figure 1. The 3Com card uses the Poldhu chipset – the driver installation is detailed on page 19. The WLAN card costs somewhere in the region of £100, depending on your dealer, so do your homework before you buy – but that still makes it 20 percent more expensive than its competitors. The extremely practical (and patented) antenna makes this card stand out from the field and is a must for notebooks in daily use. And that's why the 3Com product also gets the Editor's Choice award.
Linksys WPC11
The Linksys WPC11 card is based on the Prism 3 chipset, which requires the "Next Generation" driver package. Installing the card was no problem. The Linksys was also the only card in the test with two LEDs on top, one for transmit activity and a second for link status. If you are working in access point mode, the LED flashes on and off to indicate that the access point is out of range. The Linksys is a solid workhorse, and the price, £70, is reasonable.
ZcoMax AirRunner XL325H
The AirRunner from ZcoMax is based on the Intersil Prism 2.5 chipset and as such requires the "Next-Generation" drivers. There is the option of an external antenna, which connects via one of the reverse MMCX sockets. We found this a well-built product. The transmit power was a respectable 100 mW, although the XL325HP model is reported to deliver 200 mW. Again there are two LEDs, for power and transmit. Cost: £90.
Figure 1: Cigar cutter included: The upper and lower halves of the lid are so sharp that they strip the plastic coating off the antenna – you can plainly see the plastic shavings and cuts

Tellus TWL-R410 Wireless AP SOHO Router (Elito-Epox EWL-R410)
The TWL-R410 by Tellus is a combined DSL router, 4 port switch, wireless access point and modem interface. The device, which costs over £200, attaches a laptop with a wireless card to the Internet via DSL and/or an external modem. The TWL-R410's network ports allow you to attach up to four computers or network printers, and, depending on the configuration, the access router can be used for seamless access from the wireless to the wired LAN – or it can be used to masquerade easily between the two environments. The TWL-R410 is easy to configure via the Web frontend provided (Figure 2). One interesting feature is that you can attach a serial modem in addition to DSL or network access, thus permitting Internet access if the DSL link fails. The access router can be secured via a list of permitted or denied hardware addresses and by means of 128 bit encryption. Of course hardware addresses can always be spoofed, but at least it is an additional hurdle for the attacker to take. The Tellus TWL-R410 seems to be an extremely well-engineered product. We particularly liked the idea of using an external modem as a backup line for a DSL connection – without the administrator needing to get involved, of course.
Figure 2: The Web frontend for Tellus' TWL-R410 provides access to a wide range of settings, without being too cluttered
Figure 3: The Web frontend for the NAS-101RW is well-structured. If configured correctly, this all-rounder can completely replace a server
IEI NAS-101RW Wireless NAS Access Router
Shortly before this issue went to print, a brand-new product arrived at our offices, IEI Electronics' NAS-101RW. This device can do more or less everything apart from making the coffee and sandwiches: It is at the same time a 4 port switch, a wireless access point, a DSL router, a network bridge, and a network storage device, and has a small footprint to boot. In other words, the NAS-101RW can assume the role of a server in a small network. A Web frontend (Figure 3) is supplied for configuration tasks, just like with the Elito-Epox EWL-R410. You can either answer ten questions, or take a more modular approach via a complex menu. The front panel contains a display and various buttons that allow you to query the device's status and perform simple network configuration tasks. The network drive can be assigned to various user groups, and Windows clients, Macs, Novell and of course Linux clients can access it via NFS, HTTP, and FTP. The NAS-101RW even provides a complete user management module – incidentally, a look inside the router revealed an embedded Linux system. Unfortunately, the NAS-101RW's only protection against attackers is 128 bit encryption – a list of permitted hardware addresses is sadly lacking. However, access to the Web frontend is not as permissive as on the R410 router by Elito-Epox – you need a password for more or less everything. We appreciated the ease of configuration and the intuitive frontend provided by the NAS-101RW. The price (over £650) and the two fans were less to our liking. But still, the NAS-101RW is a viable alternative to a traditional server in small network environments.
Conclusion
Today's wireless LANs come in all shapes and sizes. Purchasing prices range from £150, for two simple WLAN cards, to £1000 for an access point with network
drives and a handful of WLAN cards. You can take an easy entry approach to wireless networking – start off with two WLAN cards, one in your desktop and the other in your notebook, and then add access points or access routers as your budget allows, until all your computers are on the wireless LAN. Standardizing on the IEEE 802.11b protocol should ensure that wireless LAN devices have no problem talking to one another, now or in the future. ■
INFO
Howtos and drivers: http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux
Actiontec cards: http://www.actiontec.com/UK/
D-Link PCI Adapter: http://www.mobtech.co.uk/ecbmob/itm00959.htm
3Com X-Jack card: http://www.dabs.com/3com/3com.asp?s=404
Linksys card: http://www.dabs.com/linksys/linksys.asp
ZcoMax card: http://www.zcomax.co.uk
Tellus Router: http://www.uk2.21store.com
NAS router: http://www.iei.com.tw, http://www.nasgenie.com
COVER STORY
Drivers for Wireless LAN Cards
Driver Safari
The distributors still have not discovered wireless LAN – most distribution CDs include the tried and trusted PCMCIA Card Services. That means a lot of manual configuration work to get modern wireless LAN cards up and running. BY DANIEL COOPER

It would seem that the major Linux distributors have never heard of wireless networks. Nobody is currently offering configuration or setup programs, not to mention actually recognizing wireless LAN cards during the installation procedure. Driver support is in a sorry state too, with SuSE restricted to the PCMCIA Card Services [1] and the PCMCIA and PC Card drivers included in the 2.4.18 kernel. Unfortunately, these drivers do not support common chipsets such as Prism 2.5 or 3, and even 3Com cards normally require updated drivers. Enter the driver safari across your system and back, if you want to use a current card.

Drivers for the 3Com X-Jack
The Poldhu chipset, which is used by the 3Com X-Jack card (amongst others), requires a separate driver, which is available from [2], although it only runs on the newer 2.4 series of the Linux kernel. As is the case for the Next-Generation modules, you will need the sources for the current kernel, although you can do without the sources for the PCMCIA Card Services. After expanding the package, launch the ./Configure program (with a capital C for a change). You are then prompted for the kernel and module directories, as well as the configuration directory for the PCMCIA Card Services. You can use the default options in most cases. You can then type make all and make install to compile and install the sources. Finally, copy the newly created configuration file poldhu.conf to /etc/pcmcia.
3Com Configuration The /etc/pcmcia/wireless.opts file is used to configure the cards – as it is for PCMCIA Card Services drivers. The file assigns values to variables, as in variable=val or variable=“val” and is loaded when the card is inserted. If you want to use multiple cards, you can restrict a configuration block by referring to the hardware address (MAC address) of a wireless LAN card or series (or part of it). The hardware address (which you can display in hex or other formats by typing ifconfig) is an ID with a length of 6 bytes. The first three or four bytes are usually sufficient to specify the manufacturer and series.
Let's take a look at the hardware address 00:04:DB:A5:72:E0 as an example, where the first three bytes (i.e. 00:04:DB) provide sufficient ID. The configuration block for a card in /etc/pcmcia/wireless.opts starts with the entry "*,*,*,*)", and the block is terminated by a double semicolon ";;". You can copy this block to a position directly below the line with the double semicolon and then edit the upper block: The four comma-separated asterisks match any hardware address of any card, so this block will be executed for any card – unless another block has been processed previously. So let's change the beginning of the upper block to "*,*,*,00:04:DB*)", taking care to use capital letters for all of the hexadecimal characters where necessary. The upper block is now processed for any card whose hardware address starts with 00:04:DB. As a maximum of one block can be processed, the second block will only be used for hardware addresses outside of this range, as a kind of standard configuration. We can now set variables within the blocks. ESSID contains the name of your wireless LAN – put some careful thought into choosing this name, as wireless devices with the same (E)SSID automatically belong to the same network. You might want to use your own phone number or part of your name. Use MODE to select the operating mode: "Ad hoc" refers to a network without access points that provides a direct connection, "Managed" means that the nodes in the wireless LAN connect to an access point and use the AP as a kind of wireless hub (or switch). You can use the FREQ variable to set the channel frequency, although – as this is somewhat complex – you will probably prefer to use the CHANNEL variable and simply choose a channel between 1 and 14. The KEY variable contains the encryption key for WEP as a hexadecimal sequence, such as "0102-0304-0506-0708-090a-0b0c-0d". If you are implementing 128 bit encryption, you will need to enter 13 bytes of this type – the three "missing" bytes are used for the "Initialization Vector" (IV), which is then used for every packet transmitted,
thus providing an actual key length of only 104 bits.
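Putting the pieces together, a hypothetical excerpt from /etc/pcmcia/wireless.opts might look like the following – the MAC prefix, network name, and key are placeholders for your own values:

# Block for cards whose hardware address starts with 00:04:DB
*,*,*,00:04:DB*)
    ESSID="homenet"
    MODE="Managed"
    CHANNEL="11"
    KEY="0102-0304-0506-0708-090a-0b0c-0d"
    ;;
# Fallback block for any other card
*,*,*,*)
    ESSID="homenet"
    MODE="Ad hoc"
    ;;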
Next Generation Wireless
You will normally need the "Next Generation" driver package (linux-wlan-ng, [3]) from [4] to operate more recent wireless LAN cards. As most major distributions do not automatically include this package at present, you will probably need to perform a time-consuming manual installation. We used the current stable version 0.1.14 ([5]) for our test. You will also require the sources for your kernel, which will at least need to be set up using make xconfig or make menuconfig. In SuSE's case the kernel sources are not the same as the distribution kernel, so that will mean recompiling the kernel in most cases – but don't expect that to work if you use /boot/vmlinuz.config, the configuration file for the default kernel. For SuSE 8.0 you will need to enable support for WAN devices (Wide Area Network, located under "Network device support", "Wan interfaces"), and disable the emulation of other processor architectures (under "Binary emulation of other systems"). If you have the time and skill, you might like to take this opportunity to modify the kernel to suit your requirements. Type the following to start re-compiling the kernel:

make dep modules modules_install bzlilo
The finished kernel is then installed automatically and LILO is run. In the case of PC Card (PCMCIA) wireless cards you will definitely require the sources for the PCMCIA Card Services from [6] or the source code package provided by your distributor. Type make config to configure the Card Services, then preferably make all to recompile, followed by make install to perform the installation. After completing these preliminary steps, you can set up the linux-wlan-ng package using make config. You can select drivers for the PCMCIA Card Services, PCI adapters, PCI cards, and USB adapters. The cards discussed in this article require either Card Services
drivers or USB adapter drivers. You can accept the defaults for the remaining prompts in the configuration dialog. Now launch the compiler by typing make all. Unfortunately, there is no single solution if something happens to go wrong. We performed tests with the 2.2.20, 2.4.18 and 2.4.19 kernels, and version 3.1.33 of the Card Services, with PCMCIA support disabled at kernel level. Providing no errors occur on compilation, you can install the modules and configuration files by typing make install. You will then want to call depmod -a.
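To recap, the whole procedure condenses into a short command sequence – a sketch that assumes the version 0.1.14 tarball from [5] and that your kernel (and, for PC Cards, the Card Services) sources are already prepared:

tar -xvzf linux-wlan-ng-0.1.14.tar.gz
cd linux-wlan-ng-0.1.14
make config       # answer the prompts; the defaults are usually fine
make all          # compile the drivers
make install      # install the modules and configuration files
depmod -a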
Card Configuration
Card Services drivers are configured via the /etc/pcmcia/wlan-ng.opts file, any other drivers via /etc/wlan.conf. The format of both files is similar, and both contain variable assignments, just like /etc/pcmcia/wireless.opts. To assign various blocks for different hardware addresses, follow the procedure detailed for the 3Com card. The files are adequately documented, so let's concentrate on other points. The configuration files are loaded immediately on inserting or attaching a wireless LAN card. Ensure that the variable WLAN_ENABLE is set to "y" to allow the card to be set up. The "WEP" section contains the encryption settings. To enable encryption, you will need to set dot11PrivacyInvoked=true and dot11ExcludeUnencrypted=true, which ensures that your WLAN card will always use encryption. To enable 128 bit encryption – and this is strongly recommended, even in SOHO environments – you need to set PRIV_KEY128=true and enter the key in hexadecimal notation for the dot11WEPDefaultKey0 variable. The key must comprise exactly 13 bytes in hexadecimal notation, as in 01:02:03:04:05:06:07:08:09:0a:0b:0c:0d, for example. The last three sections allow you to choose the type of network. If you set IS_ADHOC=n, you will need an access point (AP) to connect the wireless computers. To use an access point you will need to enter the name of your SOHO network as the variable DesiredSSID in the "Infrastructure Station Start" section.
Ad hoc mode (IS_ADHOC=y) is normally the cheaper variant and allows you to connect up to 16 computers directly, without an access point. The disadvantage is that all of these computers need to "talk" to each other, i.e. the two computers furthest apart will still need a direct wireless connection. The network name (SSID) is also used to identify the network in ad hoc mode. You can also use the CHANNEL variable to select one of the 14 available channels. Since the upper channels may be partially occupied by Bluetooth devices, you will want to select channel 7 or lower. You may need to reduce the data transfer rate (from the maximum 11 MBit/s) for computers working in ad hoc mode that are some distance apart – the lower the data transfer rate, the greater the maximum transmission distance. You can use the OPRATES variable to define the various transfer rates in units of 500 kBit/s – i.e. 22 means 11 MBit/s, whereas 2 means a rate of 1 MBit/s.
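As a summary of this section, the settings might combine into a block like the following in /etc/pcmcia/wlan-ng.opts (or /etc/wlan.conf) – a sketch for access point mode; the SSID and key are placeholders:

WLAN_ENABLE=y
dot11PrivacyInvoked=true
dot11ExcludeUnencrypted=true
PRIV_KEY128=true
dot11WEPDefaultKey0=01:02:03:04:05:06:07:08:09:0a:0b:0c:0d
IS_ADHOC=n                # set to y for an ad hoc network
DesiredSSID="homenet"     # the name of your wireless LAN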
Troubleshooting Driver Conflict Issues
The driver packages store a list of all supported PC Card and wireless chipsets in separate configuration files under /etc/pcmcia/*.conf. Unfortunately, some contradictory entries may lead to obsolete drivers being loaded on occasion. You can type

grep -e manfid -e version /etc/pcmcia/config /etc/pcmcia/*.conf | sort +2 -3 -d | less
to sort the entries in the configuration files by manufacturer ID, displaying the filename at the start of each line. If a card does not work properly with the loaded driver, despite having been correctly recognized and despite apparent support, you should look for duplicates. Just comment out the offending line to quickly find the right entry, but note that the PCMCIA Card Services will need to be relaunched after every change by typing /etc/init.d/pcmcia restart.
Network Configuration
So far, we have concentrated on configuring the WLAN cards themselves, and not looked at the network settings, which are defined for all network PC
Cards in the centrally held /etc/pcmcia/network.opts file. As we have seen in the context of /etc/pcmcia/wireless.opts, you can also create blocks for individual cards or models in network.opts by assigning values to variables – follow the same scheme here. You can leave the INFO variable blank, although a sensible value will not do any harm. If there is a DHCP server on your network, you can set DHCP="y" to have the card set up the required network environment automatically – depending on your server, you may need to set DHCP_HOSTNAME to the domain name for your network. In the case of static IPs, set DHCP="n", enter an IP address for the card in the IPADDR variable ("192.168.2.2", for example), then enter the subnet mask as NETMASK ("255.255.255.0") and the base address of the network as NETWORK ("192.168.2.0"), finally enter the broadcast address ("192.168.2.255") and, if required, the IP address of your DSL router or Internet gateway as GATEWAY. You may not have access to the same name server on your wireless LAN as on a wired Ethernet; in this case you can use the variables DNS_1 through DNS_3 to specify the addresses of the name servers responsible for your wireless network. These variables are set immediately on inserting the wireless LAN card. Any other variables are not of interest for normal network operations.
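A hypothetical static-IP block in network.opts, using the addresses from the example above – the gateway and name server values are placeholders, and the BROADCAST variable name is an assumption following the usual network.opts scheme:

*,*,*,*)
    INFO="Wireless LAN, static setup"
    DHCP="n"
    IPADDR="192.168.2.2"
    NETMASK="255.255.255.0"
    NETWORK="192.168.2.0"
    BROADCAST="192.168.2.255"
    GATEWAY="192.168.2.1"    # assumption: the address of your router
    DNS_1="192.168.2.1"      # assumption: your name server
    ;;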
USB Adapters – a Special Case
In the case of USB adapters, such as the Actiontec Wireless 802UI3, the configuration file /etc/wlan.conf is used for basic device configuration; however, the applicable script, /etc/init.d/wlan, is not automatically launched when you attach or enable the wireless device. There is an easy workaround for this issue: change the order of the boot scripts in /etc/init.d. For SuSE 7.3 this means renaming the symbolic links S06hotplug to S05hotplug and K17hotplug to K18hotplug, and also renaming S05network to S06network and K18network to K17network – all of which can be found in the /etc/init.d/rc3.d and /etc/init.d/rc5.d directories. You will also need to create two new links by typing ln
-s ../wlan S05wlan and ln -s ../wlan K17wlan in both directories. This ensures that the network configuration is launched after loading the USB Hotplug Manager and the WLAN setup – i.e. the wireless adapter is configured just like any other network device via the distribution tools, the only difference being that the wireless adapter is called "wlan0" instead of "eth0". The order is reversed when you shut down the system: first switch off the network, and then unload USB. You also need to add the line alias wlan0 prism2_usb to the file /etc/modules.conf. We would also recommend adding the sleep 1 command to line 2 of the /etc/init.d/wlan file. The USB Hotplug Manager had not completely initialized on our test system, and this led to the driver setup for the WLAN failing. We found that this issue was successfully resolved by adding the sleep command.
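Condensed into commands, the SuSE 7.3 reshuffle described above might look like this – treat it as a sketch, check the exact link names on your system first, and repeat the steps for /etc/init.d/rc5.d:

cd /etc/init.d/rc3.d
mv S06hotplug S05hotplug     # hotplug now starts before the network
mv K17hotplug K18hotplug
mv S05network S06network
mv K18network K17network
ln -s ../wlan S05wlan        # WLAN setup runs between hotplug and network
ln -s ../wlan K17wlan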
Future
Setting up a wireless LAN is a task that involves a lot of manual steps at present, as none of the major distributors provides the required modules or even configuration tools for the job in hand – and that can be a big issue for newbies. We can only hope that future distributions will be better equipped, and not simply continue to ignore wireless LANs. ■
INFO
[1] PCMCIA Card Services homepage: http://pcmcia-cs.sourceforge.net
[2] Poldhu driver for the 3Com X-Jack: http://www.xs4all.net/~bvermeul/swallow/poldhu-0.2.12.tar.gz
[3] Linux WLAN Project: http://www.linux-wlan.org
[4] "Next Generation" driver packages: ftp://ftp.linux-wlan.org/pub/linux-wlan-ng
[5] linux-wlan-ng-0.1.14 driver package: ftp://ftp.linux-wlan.org/pub/linux-wlan-ng/linux-wlan-ng-0.1.14.tar.gz
[6] PCMCIA Card Services 3.1.33: http://pcmcia-cs.sourceforge.net/ftp/pcmcia-cs-3.1.33.tar.gz
[7] Homepage for the Swallow/Poldhu drivers: http://www.xs4all.net/~bvermeul/swallow/
[8] Howtos and drivers: http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux
COVER STORY
Secure WLAN Networks via Encrypted OpenVPN Tunnels
Secure Tunnels
Wireless networks may be practical, but they are also quite dangerous. Integrated WEP encryption is no real problem for attackers, who can snarf and manipulate data or even inject packets. An encrypted tunnel that uses OpenVPN to protect your data provides a secure solution. BY ACHIM LEITNER, DANIEL COOPER, OLIVER KLUGE
Wireless LANs allow attackers to war drive their victims' premises and grab all the data packets travelling across the network, with a little help from the WLAN cards installed in their laptops. You can compare this with a victim installing a network socket at the nearest bus stop and hoping nobody will bother plugging in to it. In urban areas the risk is extremely high, even for private users. Wardrivers constantly search for WLANs that allow them to gatecrash Internet accounts, snarf data, or hack into large enterprise file servers, possibly causing denial of service conditions. Whereas an attacker would need access to a network socket or the wire to hack a wired LAN, a WLAN pays little attention to walls and fences. To provide a modicum of protection even the earliest wireless LANs used the "Wired Equivalent Privacy" approach. WEP aims to provide a level of security equivalent to that available in wired environments and uses its own encryption algorithms to do so. Key lengths of 40 bits were originally envisaged, but today's devices use 128 bits. Unfortunately, the algorithm used here is quite weak: 40 bit keys can be cracked in a matter of minutes, and even 128 bit keys will tumble within a few days. In other words, WEP provides very little protection.
Encryption
A Virtual Private Network (VPN) that receives the traffic, encrypts it, transmits it across the wireless LAN, and decrypts it on the other side is normally the best solution. A VPN uses the traditional
WLAN, but looks like an additional network – a virtual one – from the client’s point of view. Figure 1 demonstrates this principle using the OpenVPN[1] VPN package. The laptop and the desktop are connected via a WLAN and can reach each other’s true IP addresses on the wireless LAN. The VPN assigns an additional IP address to both the laptop and the desktop. The VPN encapsulates any data sent to the virtual addresses and trans-
mits it to the real address of the host at the other end of the connection. The host on the receiving end will then decapsulate this traffic and treat it as though it had arrived via its virtual IP, thus creating a tunnel between the laptop and the desktop. Additional firewall rules ensure that both computers only accept data arriving through the tunnel. So any packets an attacker injects into the WLAN have no chance of getting through.
Figure 1: The Virtual Private Network is tunneled along a path that starts and ends at the real IP addresses of the laptop and desktop computers
The VPN uses cryptographic techniques to protect the tunnel. In contrast to the insecure WEP technology, tried and tested algorithms are used to provide a high level of security here. The tunnel thus protects any data sent through it from uninvited guests, at the same time ensuring that nobody can spoof a legitimate laptop and transmit data through the tunnel – the tunnel walls are solid.
OpenVPN
The VPN principle has been implemented in various protocols, products, and projects. OpenVPN is a stable and simple implementation that does without manipulating the kernel or the IP stack.
Installation
To install OpenVPN you will probably want to download the source package, openvpn-1.3.1.tar.gz, from [1], and then unpack, build, and install it (you need root privileges):

tar -xvzf openvpn-1.3.1.tar.gz
cd openvpn-1.3.1
./configure --disable-lzo
make
make install

Note that we used the --disable-lzo flag with configure in order to disable compression. However, you can optionally install the LZO library [3]. You will definitely need the OpenSSL library and developer files. SuSE users require two separate packages, for example: openssl and openssl-devel. Installation is easier for Debian users – just type the following to install OpenVPN:

apt-get install openvpn

The OpenVPN developers also provide RPM packages for Red Hat 7.2 and 7.3.
At both ends of the tunnel it collects traffic destined for the other end, encrypts this data using a locally stored key, and transmits the packets secured in this way through to the other end of the tunnel. The receiving end decapsulates the transmission and checks its origin. Only data secured with the correct key (i.e. the secret common to both ends) will be decapsulated and forwarded – any other data is rejected. This allows you to tunnel data packed in secure containers through a maze of insecurity. The following example assumes that the wireless network is attached to wlan0. The desktop is also equipped with a traditional, wired network interface card, referred to as eth0. This network provides access to other computers in the local network and to the Internet.

First Steps
If you have not already installed it, you will need to install the OpenVPN package first (see the "Installation" box). The simple procedure described below assumes static IP addresses – so your computers will need fixed addresses that do not change after every reboot. The procedure is more complex if you use a DHCP server to assign dynamic addresses. OpenVPN does not modify the kernel, instead using the TUN/TAP driver [2] to ensure the forwarding of data packets. This step is quite simple, as the required kernel module has been part of the major distributions' kernel trees for some while now. The next step is to load the module. Ensure that you have superuser (root) privileges and type the following command:

modprobe tun

In order to provide secure functionality OpenVPN will need keys. The simplest case assumes that both computers will be working with a shared secret. The command

openvpn --genkey --secret secret.key

will create a key and store it in the secret.key file. Only the two computers involved should know this key, which should be readable for root only – anyone who knows the key can easily crack the tunnel. The key also needs to be copied to the second computer – make sure that this step is secure. Somebody might already be listening in on your wireless network, so why not use a floppy, which you would then reformat. If you have already installed a program such as OpenSSH, PGP, GnuPG, or similar, you can also use that to transfer the key.

Digging the Tunnel
Now let's get that tunnel up and running. For this step OpenVPN will need the (static) IP address of the target computer, the name of the tunnel device (tun0 by default), and both virtual IP addresses for the VPN.

TUN Device
The tunnel device is available in the current kernel, and from [2] for older versions. If you want to compile the current kernel yourself, you will find the TUN module under "Universal TUN/TAP device driver support" in the "Network device support" section of make xconfig. You can compile and install this module individually at any time without needing to replace the entire kernel. After configuring the kernel simply type:

make modules
make modules_install

You will now need to create the device file, /dev/net/tun. If the /dev/net/ directory does not exist, type mkdir /dev/net/ before creating the device:

mknod /dev/net/tun c 10 200
And don't forget the file with the key, of course. The commands on the laptop are as follows:

openvpn --dev tun0 \
  --remote [Real_DesktopIP] \
  --ifconfig [Virtual_LaptopIP] [Virtual_DesktopIP] \
  --secret secret.key
You need to be superuser (root) to run this and any following commands. The commands for the desktop are as follows (the IP addresses just need to be rearranged, of course):

openvpn --dev tun0 \
  --remote [Real_LaptopIP] \
  --ifconfig [Virtual_DesktopIP] [Virtual_LaptopIP] \
  --secret secret.key
You can use more or less any IPs for the virtual addresses; however, they will need to be private addresses. Your virtual addresses should be in a different block from your real addresses to allow simpler routing – the real network should be easy to distinguish from the virtual network.
Address Assignments
As a practical example, let's assume that the real IP address 172.16.0.1 has been assigned to the WLAN adapter in the laptop, and that the desktop answers to 172.16.0.2. The VPN will need to use addresses in the private address space – for example 10.0.0.1 as the virtual IP address for the laptop, and 10.0.0.2 for the desktop. In this case, the command for the laptop is as follows:

openvpn --dev tun0 \
  --remote 172.16.0.2 \
  --ifconfig 10.0.0.1 10.0.0.2 \
  --secret secret.key
Figure 2: Firewall rules can prevent outsiders entering your WLAN. Only the OpenVPN tunnel is allowed to transmit on the WLAN interface
And for the desktop:

openvpn --dev tun0 \
  --remote 172.16.0.1 \
  --ifconfig 10.0.0.2 10.0.0.1 \
  --secret secret.key
You can then use ping to test the connection. On the laptop, ping 10.0.0.2 should do the job and demonstrate that the virtual IP address of the desktop is reachable. If everything turned out OK, you can now launch the OpenVPN daemon, allowing OpenVPN to run in the background and use syslog for logging. Use the --daemon flag when you launch OpenVPN to do so, but make sure that you supply the absolute pathname for the file containing the secret key.
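For example, the laptop-side call from above might then become the following – a sketch; note the --daemon flag and the absolute path to the key file (the path itself is an assumption, store the key wherever suits you):

openvpn --dev tun0 \
  --remote 172.16.0.2 \
  --ifconfig 10.0.0.1 10.0.0.2 \
  --secret /root/secret.key \
  --daemon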
On the Right Track
The tunnel is up and running, and traffic is travelling happily back and forth – but your laptop and desktop still need to know what types of packets you want to allow through the tunnel. If you use the virtual IP address of the other end of the tunnel for the commands involved, this should be no problem. The OpenVPN call will define the route to
use exactly this address. Any other addresses will be routed past the tunnel, just as they were previously. The route from the desktop to the laptop will work perfectly, provided you use the new virtual address when you want to talk to the laptop. The real addresses assigned to the WLAN adapters in the laptop and the desktop only serve one useful purpose now: they are the endpoints of the tunnel. However, they will no longer be accessed by normal connections. You will need to put a few finishing touches to the route from the laptop to the desktop and thence to the other computers on your local network and the Internet, as the default route needs to be redefined. The following commands allow the laptop to direct all of its traffic through the tunnel:

route del default
route add default gw 10.0.0.2
Of course, the default route does not apply to packets destined for the real WLAN IP address of the desktop (172.16.0.2). And this is a good thing, as the tunnel is bound to this address. So now the desktop just needs to know that it may need to forward some of the packets that it decapsulates. Use the following command:

echo "1" > /proc/sys/net/ipv4/ip_forward

GLOSSARY
Private address: Normal, public IP addresses are globally unique, and need to be so for packets to find their way to a target. In contrast, private IP addresses are valid only on local networks and are not routed on the public Internet. This allows multiple networks to use the same private addresses. Various IP address blocks have been reserved for this purpose: 10.x.x.x, 192.168.z.z, and 172.16.y.y through 172.31.y.y.
Routing: Path selection for IP packets. Linux uses a routing table to select an interface that will permit a packet to get closer to its final target. Stand-alone computers do not have many options: 127.0.0.1 uses the loopback device, lo, and everything else is transmitted via the default route, eth0, or similar. Routers with multiple network adapters need to make more complex decisions.
Fireproof
That nearly completes the job at both ends. Both the laptop and the desktop are using the tunnel; your traffic is
secure and nobody can listen in. However, it is still possible to inject packets, and this would allow an attacker to hijack your desktop's Internet connection. Even if you have a flat rate, you will probably want to avoid giving bandwidth away. Network services provided by clients and servers (such as Web, SSH or FTP servers) are vulnerable from within the WLAN. And if you run an internal network there is another danger to consider: Any packets injected into your WLAN will sidestep a firewall positioned between the Internet and your internal network. However, you can modify your firewall configuration [4] to remedy this situation. The OpenVPN distribution also contains a sample script for your firewall. However, you will need to add a few additional rules for your WLAN tunnel combination. Figure 2 shows where you should apply these rules. OpenVPN uses UDP to transmit encrypted packets to port 5000 at the other end of the tunnel, and uses the WLAN to do so. This means you will need to allow UDP port 5000 on your wlan0 interface. The following commands allow you to receive data:

iptables -A INPUT -i wlan0 -p udp --dport 5000 -j ACCEPT
iptables -A INPUT -i wlan0 -j DROP
The last line prevents the computer from receiving any other data via the WLAN. The first ingress rule could be even stricter and use -s real_IP to define the IP addresses from which traffic is allowed to originate – in this case the real IP address of the other end of the connection, that is -s 172.16.0.2 on the laptop. You will also need to restrict transmitting and forwarding of traffic:

iptables -A OUTPUT -o wlan0 -p udp --dport 5000 -j ACCEPT
iptables -A OUTPUT -o wlan0 -j DROP
iptables -A FORWARD -i wlan0 -j DROP

The endpoints of the tunnel only forward packets that originate from known partners who have access to the correct (secret) key. This means you can trust packets that originate from a tun device, and will want to accept and handle them. You will also want to enable traffic through the tunnel. Use the following commands to enable incoming and outgoing traffic:

iptables -A INPUT -i tun0 -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT

This completes the configuration for your laptop. The laptop is not attached to any other networks, and thus does not need to forward any traffic. The desktop will still need a forwarding rule and should also use masquerading to allow the laptop to send its data onward to the outside world:

iptables -A FORWARD -i tun0 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Limitations
One hitch with the method described in this article is the fact that you can only use it to secure PCs and laptops. It will not work for a network printer with a WLAN interface: WLAN-aware printers only provide WEP encryption, and often only WEP-40. At first sight it might seem fairly useless to misuse a printer, as attackers would have no way of collecting their printed output. But it is still a chink in your security armour. The network is only as secure as the computers attached to it. If an unauthorized person can access the OpenVPN laptop, she automatically has access to the key, and thus to your LAN. Wireless devices are thus particularly prone to theft. The passwords you select for the services on offer in your WLAN are also important. You might find it annoying having to type those passwords, but having an intruder is definitely a lot more troublesome. ■

General Terms
Access Control List (ACL): (in this context) A list containing the non-editable hardware addresses (MAC addresses) of the cards allowed to log onto the network – normally stored on access points and access routers. However, there are some techniques that allow you to spoof other hardware addresses, and this prevents the ACL from providing any real protection for your network – although it certainly is another hurdle the attacker will need to take.
Station (STA): Any WLAN device, i.e. cards, access points or access routers.
Wired Equivalent Privacy (WEP): Uses encryption technologies to achieve a security standard for data transferred via wireless LAN – which can otherwise be sniffed by anybody interested in doing so – equivalent to the standard achievable in "wired" networks. The WEP-40 (40 bit key length) and WEP-128 (104 bit key length) algorithms are somewhat trivial, however, and can be cracked within minutes. This means paying particular attention to other security measures in wireless networks, such as ACLs, for example.
Access Point (AP): A central node in a wireless network. A participating node transfers data to the AP, which relays it to the receiver. Today's APs normally have an Ethernet port allowing them to be connected to a wired network.
Basic Service Set (BSS): A group of stations (STA) with the same identification (BSSID).
Independent Basic Service Set (IBSS): Also referred to as an ad hoc network, where the participating hosts transmit data directly to each other without accessing a central node. There is no easy way of connecting a wireless ad hoc network to a wired network.
Distribution System: Connects multiple wireless (BSS) and/or wired networks to form an ESS.
Extended Service Set (ESS): A group comprising multiple wireless networks (BSS) with the same (E)SSID that together form a larger, logical network.
(Extended) Service Set ID ((E)SSID): The ID or name of a network.
Basic Service Set ID (BSSID): The hardware address (MAC address) of the central node in a network. In the case of ad hoc networks, this is the address of any given participant; in networks with access points (APs), it is the address of the AP.

INFO
[1] OpenVPN: http://openvpn.sourceforge.net
[2] TUN/TAP drivers: http://vtun.sourceforge.net/tun/
[3] LZO library: http://www.oberhumer.com/opensource/lzo
[4] Marc André Selig: Paketfilter-Firewall, LinuxUser 05/2002, p. 30.
REVIEWS
The latest from SuSE
SuSE Linux 8.1
Whole version number releases are often frowned upon, because they will usually contain new technology still trying to bed itself comfortably into a distribution – which is why it is always a relief to find a version x.1 released. SuSE 8.1 is everyone's chance to see how the technological developments have managed to settle in.
So, what do you get? A boxed set of SuSE Linux will give you a complete, easy-to-use desktop system. The Professional package will also give you access to a wide range of server-side applications like Apache. SuSE take delight in holding the user's hand throughout the installation process, which, in the simplest of cases, means that you only have to touch the keyboard two or three times throughout the whole of the install procedure.
Walkthrough of an install
SuSE will automatically shrink the size of a FAT32 file system, so for anyone doing their first Linux install from
SuSE Linux 8.1
Personal Box Set (£39.99): 3 CD-ROMs, 1 quick-install poster, 1 manual (User Guide), 60 days of installation support
Professional Box Set (£59.00): 7 CD-ROMs, 1 DVD, 2 manuals (Administration Guide, User Guide), 1 quick-install poster, 90 days of installation support
Professional Upgrade (£39.00)
Making it’s break for stability, SuSE have released their Linux version 8.1. Read on to see what goodies await, for both new and seasoned hands. BY COLIN MURPHY Windows 95, 98 or Me you will find nothing complicated to do. Users of XP, or anything else that has left them with an NT file system will have slightly more work to do. In an ideal world, XP will not have been installed on your machine. You now have the chance to reserve some, maybe lots, of space for your SuSE install, using tools like fdisk of Partition Magic. If this is not the case, the easiest option would be to add a new hard drive to your machine. Some of the latest partitioning applications, like Partition Magic 7 will allow you to resize NTFS partitions, but since this is not included with the SuSE package, the extra cost in software would have nearly bought you the new hard drive in any case. Once you have selected your language, the installation procedure takes a look at your hardware and creates a proposal, listing section headings like Timezone, Partitioning and Software. Selecting one of these section headings now allows you to amend some of the proposals made for that part. So, if you were to select the Partition label, you would now have the chance to change from SuSE’s default configuration, of having just some swap space and a single Linux ReiserFS partition for ‘/‘, ‘/home’, ‘/var’ and everything else! I very much like to keep a separate ‘/home’ partition, but then I get to play with lots of different distributions, so it makes my life much easier to tinker with the settings. Should you still be finding your Linux feet, sticking with the defaults should get you a running system, but, at some point, you must give in and have a tinker too! Installation of SuSE 8.1 can be very quick, maybe no more than 30 minutes on a modern machine, but you really should put aside 2-3 hours to do this for
the first time. This is not because it is complex to do; you will want the extra time just to go through the thousands of software packages that you might want installed on your system, which are not included by default! By selecting the Software heading from the proposal screen you will get the chance to control the amount and type of software installed. The default system will give you a graphical desktop courtesy of XFree86 version 4.2 and KDE version 3.0.3, which will leave you with a fine, workable system – but how will you know what you are missing if you don't go looking?
Hidden treasure
The Software Selection screen allows you to view the numerous applications by 'Selection' groups, like Games and Multimedia, or by 'Package' groups, like Documentation and Productivity. The most important 'Package' group must be 'all', where you will be able to see all of the packages included with the distribution. Not all of these packages can be installed at the same time; some will conflict with others. To help you through this there is the 'Automatic Dependency' checking utility, which is switched on by default. This does add 2-3 seconds to the selection of a package: each time you select a package, the utility will consult its dependency database. This is fine for the odd package that you might want to add, but a real pain if you are adding lots, or nearly all. Luckily this function can be turned off, and you are given another button with which to check for problems when you want. Should there be problems, you can resolve them through a series of check boxes.
Once you are happy with the proposed install, you are just a couple of mouse clicks away from making it happen. The only thing left now is to swap the CDs in the drive, and the installation screen gives you an approximate countdown both for the entire install and for when the next CD will be called for. Now you know whether you will only have time to make some tea or an entire lunch before the next CD is required. Of course, if you are using the DVD that comes with the Professional boxed set then you will be saved even from this. You are now prompted for a root password for your system and then given the chance to create some less godly users for everyday use. The 'graphical interface' section of the YaST installation tool now kicks in, allowing you to set up your monitor and graphics card to the resolution you prefer to run at. You also get the chance to configure the keyboard and mouse, as you would expect, but more of a surprise is the opportunity to set up graphics tablets and touch screen monitors, should you have them. You can now configure your LCD monitor to run in portrait mode if you so wish – see Figure 1. The rest of the hardware now gets a going over: sound cards, printers, modems and other network devices, scanners and digital cameras. Hardware was once a big burden with Linux installs, but so long as your equipment is neither too old nor bang-up-to-date bleeding edge you shouldn't have a problem. Once this is out of the way you can consider yourself installed. This new version now supports USB 2.0 and FireWire devices. SuSE have decided to move to grub as their boot loader, which should not
Figure 1: Now you can configure graphics tablets within the YaST configuration tool
cause anyone any problems. Grub is a modern boot loader; anyone who has compiled their own kernel but forgotten to update their lilo configuration – the older boot loader system – will very much appreciate grub, which won't let you down in this way.
Some of the other changes
The SuSE Professional box set comes with 7 CDs and 1 DVD, two books – the 'User Guide' and the 'Administration Guide' – and 90 days of installation support by email, fax and phone. The Personal box set has just 3 CDs, the 'User Guide' and 60 days of installation support. If you choose the Personal box set you will also miss out on features such as project management, scientific software and the integrated IP video telephony. Apart from now using grub, SuSE have moved over to CUPS, away from LPRng, as the default printer spooler. You can easily opt out of using CUPS via the YaST2 installation tool, which is the recommended way of changing the system configuration and of keeping your system up to date with YOU, the YaST Online Update. The other important change to the distribution is SuSE's decision to drop StarOffice 5.2 in favour of OpenOffice 1.0.1, now a mature product which has just celebrated its 2nd birthday, as shown in Figure 2.
Figure 2: OpenOffice 1.0.1 now takes the place of StarOffice 5.2
pages and goes into much greater detail, explaining how to administer the vital services that keep a Linux system ticking over. While this manual is only available in printed form with the Professional boxed set, it is included in electronic form on the CD with the Personal set. SuSE 8.1 is built on kernel 2.4.19 and uses glibc 2.2.5 and gcc 3.2 in the compilation of its software. Apart from the default choice of ReiserFS, you can also select from Ext3, JFS and XFS for your choice of journalled file systems. SuSE is now optimised for Intel Pentium and AMD Duron and Athlon processors, so 8.1 is no longer going to help you build a firewall on that old 486 in the loft. Memory is always a crunch point with Linux – the more the merrier – and SuSE recommend having 128MB available. You could get a minimal system onto a 400MB drive, but this would be a shame, because you have as much as 6GB of software to choose from. You can spend ages just going through the games. If you are a regular SuSE Linux user, you will know all of this; what you may not know is that you also have the option of upgrading your SuSE 8.0 Professional boxed set with the SuSE upgrade package. With this, you get all of the disks from the 8.1 Professional boxed set but no manual. SuSE makes for an excellent all-encompassing Linux system, giving you all of the server applications needed to handle email and other Internet tasks, if you take the Professional boxed set. If your needs are not quite so high, then the Personal set will give you a desktop system with a very capable office suite. ■
REVIEWS
Computer Algebra and Technical Computing with Maple 8
The Mathematician's Apprentice
Whether you need to perform calculations to a specific degree of accuracy, or to work without numbers at all, based purely on symbols, computer algebra systems are capable of both tasks. They display the results in numeric formats, as formulae or as 3D graphics. Modern programs such as Maple 8, which we will be discussing in this article, are well suited to technical computing tasks. BY HOLGER PERLT
Computer algebra systems (CAS) are amongst the most interesting and sophisticated programs around – and not only in the eyes of the mathematician. CAS are completely different from numerical programming languages such as Fortran, C/C++, or Pascal/Delphi. The latter are designed to work with numbers as solutions to equations or relationships, and use exact calculation procedures or approximations to this end. The precision of the results will depend both on the procedure and on the type of number you are working with (integer or floating point). Thus the results of a procedure involving floating point numbers will commonly be an approximation, with speed being the main advantage of this kind of computational processing. CA systems use both symbols and numbers and are capable of representing numbers to any given degree of accuracy. 1/3 is not simply 0.333333333… to a CAS, but the fraction 1/3. This has far-reaching consequences for solving algebraic problems, and an equally dramatic effect on processing times. Fractions of this type prove a headache in the case of algorithms where numerators and denominators can reach considerable dimensions. But at least the results will be accurate.
Calculations with an Arbitrary Degree of Precision
Numeric languages will tend to truncate the results radically after every processing step to achieve a specific or the maximum available level of precision. Rounding or truncation errors can add up to produce completely misleading results. Although it must be said that this will hardly affect problems with an average level of complexity. The Maple worksheet in Figure 1 provides an example showing the kind of processing steps that will lead to issues with floating point numbers. The ability to perform calculations on the basis of symbols is another
important feature: The results of a calculation can be symbols (user-definable parameters, for example) and need not necessarily contain only numbers. Symbols are often critical to interpreting the results. They provide us with deeper insight into fundamental theories than a purely numeric program ever could. Over the years a number of CA systems have emerged to form two distinct groups.
Large and Small
Small-scale CA systems are often designed for special areas and will work quickly and efficiently within these areas. Additionally, they are mostly freeware or shareware by-products of university research – or at least not too expensive. This group includes MuPAD [1], Fermat, Cocoa, Singular, Form, and Reduce, although MuPAD is a borderline case, as regards its functionality and marketing. Large, universal CA systems often provide useful numeric functions and professional graphics as well as symbolic (algebraic) processing features. They are also capable of creating quality documentation, and can make best use of web technologies. Developers will commonly refer to these products as systems
for technical computing, rather than Computer Algebra Systems. Products of this kind are normally beyond the scope of university groups. At some stage in product development a business enterprise is formed to take care of development and coordination. Maple, Mathematica, Macsyma, or Axiom are examples of large-scale CAS. Maple (Waterloo Maple) in particular, and also Mathematica (Wolfram Research) have achieved a high level of market penetration thanks to their well organized marketing and sales structures. The target groups for these software systems are universities, colleges and technological enterprises, with a large range of applications. Prices at the top end of the scale tend to prevent use in schools, although this is where systems of this type could be most useful. The user base figures for Maple in Germany and Austria show this situation clearly (see Table 1).

Table 1: Maple Users Target Groups
Target group                  Per cent
Universities and Colleges     60%
Industry                      20%
Research Institutes           10%
Schools                       10%
Source: Scientific Computers, Aachen, Germany
Maple 8 Feature Overview
Universal CA systems have been moving towards more complete solutions in recent years, with development work concentrating on numeric operations and graphics, in addition to symbolic operations. The aim is to allow the user to solve a complex problem using the following steps, and without needing to switch to another program:
• Draft an algorithm
• Test the algorithm
• Apply the algorithm to a problem
• Document the solution professionally
The requirements for these steps differ. The first two depend on a high level of flexibility and transparency, since sophisticated algorithms and functions are involved. The user will probably want to test the draft algorithm in every imaginable scenario – and this is often impossible with purely numerical procedures.
The third step requires enormous processing power, and is often the Achilles heel of the CAS. Since a CAS is not a compiler language, its numeric processing speed will be slower than that of C/C++ or Fortran. The last point requires the features of a top-notch word processor – and a lot of work has gone into this area over the last few years. This concept becomes evident when you consider the fact that a user can spend a whole session within a sheet or notebook, using these formal pages to author program code and documents, and perform calculations. This is also where the symbolic, numeric, and graphic results will be available. You could write whole books with this user interface. Purists may tend to stick to the command-line version – especially if they only need to compute some results.
Central Features
Maple’s central features include the symbolic (algebraic), numeric, and graphic modes. However, assessments should be based on the symbolic mode, as this is the primary benchmark for a CAS. Symbolic mode can be further broken down into the following:
• Basic operations, substitutions and simplifications
• Analysis (Calculus)
Maple 8 offers the standard you would expect from other CA systems in this area. Let’s look at the simplification of expressions as an example: The function has a set of algorithms that have been enhanced over the years. Using efficient routines for simplification is critical to solving complex problems. Maple is specifically capable of taking assumptions concerning value ranges and other conditions into consideration when performing simplification tasks.

Figure 1: This Maple worksheet demonstrates the difference between exact and approximate representation of numbers. Rounding of preliminary results often impacts the final result

Calculus is concerned with solving equations and (partial) differential equations (DE), integrating and solving problems involving limiting values. This is the core issue for a large number of scientific and technical users. Almost any problem you look at within these fields will produce a differential equation. If you have access to a generic symbolic solution, you can investigate the problem at hand from various viewpoints – and this is ideal for technicians, scientists, or students.

Figure 2: Maple 8 can display complex 3D graphics: plot3d shows a Mandelbrot set for a given set of values

Figure 3a: A mass attached to a spring is pushed. How will the mass react? Maple finds the general, symbolic solution

Numeric Mode
Providing a numeric mode cannot be considered a traditional task of symbolic calculation. The idea of developing a system for universal use in various areas of science and technology certainly provided ample incentive for the development of this mode. Maple entered into a strategic alliance with NAG to avoid losing out to tried and tested numeric routines. All the most important routines providing numeric solutions for problems in the area of linear algebra derive from the well-known NAG program library. This allows the CA system to solve standard problems involving vectors or matrices at an acceptable speed. Maple differentiates between two types of decimals: Software floating point numbers refers to the standard representation of decimals, and allows an arbitrary level of precision for numerical tasks, independently of the computer involved. Precision is defined by the Digits variable. In contrast, hardware floating point numbers depend on the machine’s internal facilities. Calculations with this type of notation are often quicker; however, their accuracy depends on the processor used. Floating point calculations are invoked by the evalf instruction, which causes Maple to perform the operation in software floating point mode. Additionally, Maple uses different keywords for some symbolic and numeric functions that basically solve the same problem. As different routines need to be accessed, it often does not make sense to run a symbolic calculation for a purely numeric problem, and to then substitute numbers when the result is known. It is a lot quicker to use equally efficient, numeric algorithms.
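The Digits mechanism is easy to try out interactively. A short session sketch (the number of digits shown follows the Digits setting; the output layout is approximate):

> Digits := 30:
> evalf(Pi);
        3.14159265358979323846264338328
> evalf(1/3);
        0.333333333333333333333333333333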
The counterpart for hardware floating point calculations is evalhf. However, this command has several restrictions in comparison to evalf. A numerical solver for partial differential equations was introduced in Version 8. This involved considerably enhancing the functionality of the pdsolve function.
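As an illustration of the symbolic and numeric solvers working side by side, here is the classic harmonic oscillator – a hedged sketch using the standard dsolve calls (output formatting is approximate):

> ode := diff(x(t), t, t) + x(t) = 0:
> dsolve(ode, x(t));                 # general symbolic solution
        x(t) = _C1*sin(t) + _C2*cos(t)
> sol := dsolve({ode, x(0) = 1, D(x)(0) = 0}, x(t), numeric):
> sol(1.0);                          # evaluate the numeric solution at t = 1

The numeric variant returns a procedure that can be evaluated at arbitrary points in time.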
Graphics
Maple 8 offers a variety of graphic display features to suit all needs. However, if a feature you really need does happen to be missing, you can add your own Maple code to customize the program. The graphic routines include:
• Basic library routines, such as plot or plot3d
• Special graphic routines in the plots and plottools packages, including animated graphics
• The DEtools package for solutions to differential equations
• stats for statistical data
The last two packages are particularly valuable if you are required to complete complex tasks. If you have ever had to program variable graphic objects, you will appreciate the ease with which Maple can display high quality results: Figure 2 shows a Mandelbrot set, constructed using one of Maple’s basic graphic routines. Version 8 includes a new package that demonstrates the concept of a unified approach to problem solving with Maple in the context of a large documentation base: Student Calculus1 is aimed at first year students. The package includes a wealth of problems and solutions based on the analysis of the functions of a variable, prepared and presented to suit the requirements of a student entering higher education.
Technical Computing in Action
Pictures 3a to 3c demonstrate Maple’s uniform approach. Each picture shows a worksheet, where the user can enter texts, and define the problem to be solved. Sheets are extremely informative, interactive documents that can be exchanged across platform boundaries. Other users can repeat calculations at any time, and add their own modifications. This is useful for engineers wanting to exchange ideas, and of course for teachers and lecturers wanting to map out complete maths or physics courses. Figure 3a shows how a basic differential equation is defined and solved algebraically. The code that needs to be run is shown in red, and Maple’s answer in blue. Entries are made just like a user would make them on paper – the only exceptions being the cases where statements are terminated or the values substituted. This display principle makes using the program a lot easier – and Maple always attempts to display the results in a fashion that allows for the maximum in readability. Figure 3b shows two manipulations intended to characterise the behavior of the symbolic solution, one for smaller periods and the other for larger time values. Complex results will often prohibit a simple evaluation of their outcome. Finally, Figure 3c shows a numerical solution to the differential equation in Figure 3a. Here, Maple allows the user to display the results as a graph, which is extremely useful. To do so, you simply add the routine from the DEtools package and let Maple take care of the rest.
Figure 3b: The generic solution for an oscillating spring is quite clear. However, Maple can reduce the formula even further

Figure 3c: The spring’s behavior becomes more apparent, when Maple displays the motion as a graph. This requires a numeric solution
Maple’s Structure
Maple comprises three components: the kernel, the program library and the user interface. The kernel was programmed in C and is responsible for low-level operations. These include arithmetic, file input and output, executing the Maple programming language, and the efficient execution of basic mathematical operations (for example, the derivation of polynomials). The program library comprises almost every mathematical function. It is written in the Maple language and parts of it are loaded by the kernel, when needed. The user interface comprises both a GUI and a command-line version. Third-party programs can also use the Maple routines and provide their own user interfaces. The technical computing program, Matlab, is a good example of
this. A recent addition, Maplets, even allow you to program a GUI of your own. Maple distinguishes five internal functional groups:
• Evaluators
• Algebraic functions
• Algebraic auxiliary functions
• Data structure manipulators
• General auxiliary functions
Evaluators are responsible for various kinds of calculations. These include statements, algebraic expressions, boolean expressions, naming conventions, floating point calculations with arbitrary precision or hardware floating point arithmetic. The algebraic functions include basic functions, such as diff (derivations), divide (division of polynomials) and coeff (which calculates the coefficients of polynomials). The algebraic auxiliary functions cannot usually be called directly, but are instead referenced by functions of the two preceding groups. They include simplifications of expressions and arithmetic packages. Data structure manipulators can be applied both to mathematical objects and to data structures. They include op (selects the operands for an expression), and length (which ascertains the length of an expression). The final group – general auxiliary expressions – is at the base of the hierarchy. It takes care of storage, internal input/output management, and program exceptions. The Maple user has access to more than 3000 commands, from Maple routines, through auxiliary functions, to evaluators. This makes Maple’s feature list one of the fullest among programming systems of this kind.
Physics and Computer Algebra
One area of application for CAS is the theory of elementary particles. The complex rules of perturbation theory can be described in the programming language of a CAS. This allows physicists fundamental insight into the complex world of subatomic interaction. This field provided considerable impulses toward the development of CAS in the 70s. The Dutch physicist, J. Vermaseren, has made considerable contributions towards the development of an efficient CAS geared to the requirements of investigations into the perturbation theory. This program is FORM. Stephen Wolfram and Mathematica are also prominent examples for the symbiosis between physics and CAS. At the same time mathematicians have been working on developments to allow the use of computer algebra for applied research in the fields of Group Theory and differential equations.

Adept Scientific plc
Stand Alone Commercial Version: approx £1,300
Student Version: approx £125
Special licenses for research, universities, and schools are available; individual pricing on request
http://www.adeptscience.co.uk
Programming with Maple
Maple’s own programming language allows the user to write complex programs, with a clear structure. If you are familiar with C, Fortran, or Pascal/Delphi, you should have no difficulty in mastering Maple. The Maple language is even educative for beginners: If you are familiar with Maple, you will soon come to terms with numeric programming languages. As Maple is not a compiler language, you can test your program code immediately after writing it. This is particularly useful
for beginners. Procedures, modules and packages are some of the central aspects of the Maple language. This kind of structuring is essential to more complex applications. But Maple leaves virtually nothing to be desired. Modules and procedures are even platform independent. Packages are sets of procedures and data that permit calculations in specific fields. Figure 2 shows a simple example of a procedure in the Maple programming language.

Tests and Benchmarks
It is not easy to evaluate the capability of a CA system, and the reviewer’s subjective viewpoint is often apparent. But experts from various universities have put some thought into this matter and come up with three major benchmarking areas:
• Solution of algebraic and transcendental equations
• Solution of differential equations
• Calculation of integrals
Nearly every research task will boil down to one of these areas sooner or later. Current versions are occasionally benchmarked along these guidelines and the results are available on the Web. CAS developers do take this seriously. Michael Wester [2], Laurent Bernardin [3], Hans-Gert Graebe [4], and Stefan Steinhaus [5] are probably the most highly regarded benchmarkers. Without looking at each of the test results individually, one can still say that Maple performs extremely well in all tests. This is true of the major problem areas. Maple users have access to a state-of-the-art tool that will allow them to solve the most complex of problems in a majority of cases. Kamke’s Manual of Ordinary Differential Equations provides an almost classic test suite for ordinary differential equations. It comprises nearly every kind of standard DEQ occurring in applied mathematics. As E. S. Cheb-Terrab [6] reported, Maple 7 was capable of solving 1273 of the 1316 examples – that is, a grand total of 96.7 per cent. Maple has always played a leading role as regards solutions for standard and partial differential equations and this also applies to algorithms for solving equations.
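To give a flavour of the language, a comparable small procedure – an illustrative sketch, not the code from the figure – might look like this:

sum_of_squares := proc(n)
  local i, s;
  s := 0;
  for i from 1 to n do
    s := s + i^2
  od;
  s   # the last evaluated expression is the return value
end proc;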
THE AUTHOR
Dr. Holger Perlt is a physicist who has worked in the field of theoretical, elementary particle physics. Dr. Perlt started using computer algebra in the late 70s. He has spent the past few years working on the implementation of modern approaches to self-learning optimization in software for complex technological processes.
Open for Other Languages
It often makes sense to combine Maple with other programming languages – and this is possible in both directions. You can use the CodeGeneration function to translate native Maple code to its counterpart in the numeric compiler languages C, Fortran, or Java. Maple can also process code compiled in other languages, provided it is accessible as a library. You will need a shared library for Linux: libXYZ.so. You can then use the Maple define_external function to call the external routine just like you would call a native Maple procedure. This may allow quicker processing or permit the use of special numeric algorithms. Maple 8 sees the introduction of a new feature – the Maplet. Maplets are graphic user interfaces that run within a Maple session. They allow the user to combine packages and procedures with interactive windows and dialogs, thus producing a tailor-made desktop. Unfortunately, this feature is only available within a Maple session. As the name suggests, the Maplet package is based on the Java
Runtime Environment. Version 8 introduces enhancements for processing XML files with more flexible functions. Users must bear in mind that Maple uses its own conventions.
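As a sketch of the CodeGeneration route mentioned above (the generated C is typical, but may differ in detail from what your Maple version emits):

> with(CodeGeneration):
> f := proc(x) x^2 + sin(x) end proc:
> C(f);
double f (double x)
{
  return(x * x + sin(x));
}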
User Community
As Maple has been around for several years now, a large user community has grown. Also, a large number of books dealing with special interest topics in science and technology have been published – not to forget the numerous procedures and packages that users can download from the Internet free of charge. Waterloo Maple provides a lot of support here (of course, it is in their own interest to do so): The company has set up a so-called Application Center. The website at [7] provides users with hundreds of sample solutions, Maple worksheets, and program code for dozens of fields. You will also find links to innumerable books, reports, and articles that refer directly, or indirectly to Maple. This makes life easier for newcomers, but even experienced users continually discover new applications. ■
INFO
[1] MuPAD 2.0: http://www.mupad.de
[2] Michael Wester, “A Critique of the Mathematical Abilities of CA Systems”: http://math.unm.edu/~wester/cas_review.html
[3] Laurent Bernardin, “A Review of Symbolic Solvers”: http://www.inf.ethz.ch/personal/bernardi/solve/
[4] Hans-Gert Graebe, “About the Polynomial System Solve Facility of Axiom, Macsyma, Maple, Mathematica, MuPAD and Reduce”: http://dol.uni-leipzig.de/pub/1998-11/en
[5] Stefan Steinhaus, “Comparison of mathematical programs for data analysis”: http://www.scientificweb.de/ncrunch/
[6] E. S. Cheb-Terrab, “Comparison of Performances in solving ODEs using Maple 7 and Mathematica 4.1”: http://lie.uwaterloo.ca/odetools/comparison.html
[7] Maple Application Center: http://www.mapleapps.com
KNOW HOW
GNU m4
Creating HTML pages
m4 is a macro language used for text processing. It simply copies its input to its output, while expanding built-in or user-defined macros. It can also capture output from standard Linux commands. BY STIG BRAUTASET
We will cover the basics of m4 by separating out content and layout of HTML web pages. The techniques shown are not by any means restricted to HTML, but are applicable elsewhere as well.
Before we begin…
Very basic knowledge of HTML would be beneficial, but should not be necessary to read and comprehend the material covered in this article. It is m4’s capabilities as a macro processor language the author wishes to communicate; HTML was chosen because it is something most people should be at least vaguely familiar with.
What is m4?
m4 is a macro processor used by Sendmail and GNU Autoconf, etc. Sendmail relies on m4 for creating its (in)famous sendmail.cf configuration file while Autoconf uses m4 to create the
configure script familiar to anyone installing software directly from source. As mentioned earlier, m4 allows us to easily define our own macros, and this is the feature we will focus on. We will learn how to hide layout specifics, longwinded code or arcane syntax behind our own simple macros.
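A one-line session shows the principle (GNU m4 reads from standard input when no file is given):

$ echo 'define(GREET, Hello from m4)GREET' | m4
Hello from m4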
How m4 can help you write HTML
There are no (to the author’s knowledge) ready-made macros for writing HTML, so we will have to write them ourselves.
At this point you may ask “What’s the point then? I may as well be writing the HTML code in the usual way, instead of using this m4 mumbo-jumbo!” The astute reader would, of course, be 100% correct in this observation. However, read on as the benefits will be explained in the next couple of paragraphs. Consider a snip of HTML you write a lot; a simple example is the code for creating links. Chances are it will be like this:
<a href="http://www.w3.org">www.w3.org</a>
Naming conventions
When splitting the content from the layout of web pages, the author prefers to call the common macro file html.m4. The content of each page goes in a file with the ending “.mac”, e.g. index.mac and so on. When explaining this, however, we develop our macro and .mac files bit by bit, so we need to refer to several different files. Thus it is convenient to name them first.m4, first.mac and so on. See the sidebar “Joining the content and layout” for how to create the resulting HTML file from the macros and the content.
Now, what if we instead define a simple macro that will allow us to write
__link(www.w3.org)
and let m4 do the tedious job of filling in the necessary bits? That’s less than half the number of characters already. Additionally, observe that we only had to write the link name once, so there is less chance of us spelling it wrong. Another example is if we have a note on our page telling visitors the date of the last update. It is tedious work searching through all the files you have changed, searching for and updating date stamps. Instead we can simply define a macro named, say, __today that will expand into today’s date. The search-and-replace business will then be taken care of for us automatically. How to do this will be shown later; we need to take care of the basics first.
Getting your hands dirty
As mentioned earlier, m4 allows us to define our own macros with ease. The command to let us do this is cunningly named define. Here’s how to define a macro to let us use the link shorthand above:
define(__link, <a href="http://$*">$*</a>)
The part before the comma (but inside the parentheses) is the macro name, and the part after the comma is the macro
body. We will refer to the whole line as the macro definition. The definition line above in English: “Define a new macro named __link. Everywhere where this macro occurs, substitute the macro name for the body of the macro, but substitute the macro’s arguments (whatever is inside the parentheses following the macro name) for “$*” wherever “$*” occurs in the macro body.” We will store the macros we write ourselves, such as the above, in a file called html.m4. See the sidebar “Comments: documenting our macros” for details about how we can mix comments and macros in this file. It’s worth noting that macro names do not have to start with two underscores. It is just a convention, because we need to make sure that we do not pick a string that naturally occurs in the text. Otherwise we may get spurious replacements of the macro.

Joining the content and layout
Here’s how you create the resulting index.html from the macro definitions in html.m4 and the content in index.mac:
$ m4 html.m4 index.mac > index.html
This invokes the m4 processor with two arguments. The m4 command will take the macro definitions it finds, do the necessary substitutions and output the result on its standard output. However, we make use of the shell’s redirection facilities to make the output go to a file instead of the screen (if this makes no sense to you, just tag along and follow the directions, but you should consider reading up on shell basics). Now open index.html in a browser, and voilà! We have a web page!

Comments: documenting our macros
If a macro is not self-explanatory, we would like to put an explanatory comment alongside the macro definition. m4 naturally allows us to do this; it provides the built-in dnl which reads and discards all characters, up to and including the first newline.
dnl I am an example comment.
dnl I am highly unhelpful,
dnl but 100% correct.
Using dnl as part of a string does not exhibit this behaviour.
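You can watch the definition at work straight away – assuming html.m4 so far contains just the __link definition shown earlier (the “-” makes m4 read the piped text after the macro file):

$ echo '__link(www.w3.org)' | m4 html.m4 -
<a href="http://www.w3.org">www.w3.org</a>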
Multiple arguments and quoting
The define command we used above expects two arguments; the new macro name, and what to replace the macro name with. Considering again the example with the __link macro above, what should we do if we don’t want to use the URL as the visible, “clickable” link? We could simply create a new macro that takes several arguments, and invoke it thus (in context this time, just to show that it is possible): “Here is __link2(www.w3.org, a link to w3.org). It is an informative website.” Here’s how to define such a beast:
define(__link2, <a href="http://$1">$2</a>)
The only change is that we now have “$1” and “$2” instead of “$*”. “$1” and “$2” refer to the first and the second argument of our macro. The arguments are separated by the comma character. So, Sherlock, what now if we want to create a macro that can take an argument with a comma in it? That, my good Watson, is simple. We just have to quote the argument. When you quote something, everything between the quote-characters will be treated as a single argument, even if it consists entirely of a string of, say, 90 commas. The default opening quote is a “`” (back-tick) and the default closing quote is a “'” (single quote). We can now invoke __link2 with a comma in the second argument thus:
__link2(www.gnu.org, `www.GNU.org, the home of much software')
The second comma is now quoted, so the macro is indeed invoked with only two arguments. Note that only one layer
of quotes (the outermost) are stripped by m4, so the apostrophe in the following invocation will not yield an error:
__link2(www.gnu.org, `GNU, RMS's pet hobby-horse')
The author usually changes the default quote characters into “{” and “}” for readability and ease of typing. The command for changing the quote characters, with an appropriate comment attached (see the “Comments: documenting our macros” sidebar), is:
changequote({,}) dnl change the quote characters
changequote takes two arguments, the new opening and closing quotes respectively. It can be called at any time,
and can even be called several times from the same file. The effect is immediate, but only for this invocation of m4. The author strongly advocates that you stick to one set of quotes, as it quickly becomes rather hairy having to remember which quotes go where. The quoting “characters”, by the way, need not be single characters; you may use “{([whoopee->” as your opening quote if you wish. Neither is there any need for the closing quote to correspond logically to the opening quote. It is just a convention, and makes the macros easier to read 3 months hence.
changequote({,}) dnl change quote character
dnl create a link with the link name specified specifically
define(__link2, <a href="http://$1">$2</a>)
With a html.m4 containing the definitions shown above we can invoke our __link2 macro thus:
__link2(www.w3.org, {w3.org, a site well worth reading})
Enough basics, let’s do some real work
With HTML, there’s always a lot of stuff that needs to be set up at the top of each page. If you have more than, say, 2-3 pages that have a similar layout (but with optionally different <title> tags etc.) then you will probably want to create a macro for all this stuff. We will consider the sample HTML page shown in listing 1, and see how we can get a similar result using our newfound macro-skills. After creating the necessary macros, listing 2 shows the content of the file first.mac. This is the mixture of HTML and macro calls that together with our macro definitions enables us to produce the resulting HTML in listing 1. We already know how to create the __title and the __link macros, the only new addition is the macro __today mentioned above. This macro uses m4’s capability to call standard Linux tools, and puts the output of the said command (“date” in this case) into the text. Listing 3 shows the full content of first.m4. This contains all the macro definitions required by first.mac which is found in listing 2.

Listing 1: sample.html
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="description" content="Sample HTML page">
<meta name="keywords" content="gnu m4 html">
<meta name="author" content="Stig Brautaset">
<title>sample html page</title>
</head>
<body>
<p>Hello, World</p>
</body>
</html>

Listing 2: first.mac
__title({Hello World, HTML version})
<h1>Hello World</h1>
<p>Look at this link: __link(www.w3.org)</p>
<p>Last updated: __today</p>
</body>
</html>

Listing 3: first.m4
changequote({,}) dnl change quote character
dnl two macros for link-creation.
define(__link, <a href="http://$*">$*</a>)
define(__link2, <a href="http://$1">$2</a>)
dnl abstract away all the layout cruft at the beginning.
define(__title, {
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="description" content="Sample HTML page">
<meta name="keywords" content="gnu m4 html">
<meta name="author" content="Stig Brautaset">
<title>$1</title>
</head>
<body>
}) dnl the __title macro ends here
dnl use built-in 'esyscmd' to call the standard Linux 'date'
dnl utility and have its output replaced with the '__today'
dnl macro name. The date will be on the form "Sun 16 Jun 2002"
define(__today, esyscmd(date '+%a %d %b %Y'))

More abstraction
Looking at the code in listing 2, you may not want to write the closing </body> and </html> tags either, and indeed you don’t have to. m4 allows macros to be nested, thus we can use a macro within another macro. The result is shown in listing 4. Witness that the __title2 macro takes two arguments, the first being the page title and the second being the full page body. Be careful when you go to these lengths of abstraction though, as it is easy to miss out the closing “})” at the end of the file if you do extensive updates. The change to listing 3 to facilitate this is shown in listing 5.

Listing 4: second.mac
__title2({Hello World, HTML version}, {
<h1>Hello World</h1>
<p>Look at this link: __link(www.w3.org)</p>
<p>Last updated: __today</p>
})

Listing 5: second.m4
changequote({,}) dnl change quote character
dnl two macros for link-creation.
define(__link, <a href="http://$*">$*</a>)
define(__link2, <a href="http://$1">$2</a>)
dnl abstract away all the layout cruft at the beginning.
define(__title2, {
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="description" content="Sample HTML page">
<meta name="keywords" content="gnu m4 html">
<meta name="author" content="Stig Brautaset">
<title>$1</title>
</head>
<body>
$2
</body>
</html>
}) dnl the __title2 macro ends here
dnl use built-in 'esyscmd' to call the standard Linux 'date'
dnl utility and have its output replaced with the '__today'
dnl macro name. The date will be on the form "Sun 16 Jun 2002"
define(__today, esyscmd(date '+%a %d %b %Y'))
More advanced macros
Up till now, we have only looked at fairly simple search-and-replace macros. These work fine, but consider if we have a collection of pages, with a common menu. We could put the whole menu in a macro of the type we have used before, but then the pages would include a link to itself as well as all others, and this is not very elegant. A solution, of course, is to just cut-and-paste the menu in to the individual files and change each file to not make a link to itself. This, however, is very tedious. The solution? Use m4’s built-in conditionals. In each source file we define a macro that identifies that file. In index.mac we define, say, __index. The built-in conditional “ifdef” can then use these macro definitions to decide whether to take special actions on this file. The menu could then be something like this:
define(__menu, {
<p>MENU<br>
ifdef({__index}, index, __rlink(index.html, index))
<br>
ifdef({__pics}, pictures, __rlink(pics.html, pictures))
</p>
})

Listing 6: third.mac
define(__index) dnl allows conditional processing of the page
__title2({Hello World, HTML version}, {
<h1>Hello World</h1>
__menu
<p>Look at this link: __link(www.w3.org)</p>
<p>Last updated: __today</p>
})

Listing 7: third.m4
changequote({,}) dnl change quote character
define(__link, <a href="http://$*">$*</a>)
define(__link2, <a href="http://$1">$2</a>)
define(__rlink, <a href="$1">$2</a>)
dnl abstract away all the layout cruft at the beginning.
define(__title2, {
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="description" content="Sample HTML page">
<meta name="keywords" content="gnu m4 html">
<meta name="author" content="Stig Brautaset">
<title>$1</title>
</head>
<body>
$2
</body>
</html>
})
dnl use built-in 'esyscmd' to call the standard Linux 'date'
dnl utility and have its output replaced with the '__today'
dnl macro name. The date will be on the form "Sun 16 Jun 2002"
define(__today, esyscmd(date '+%a %d %b %Y'))
define(__menu, {
<p>
ifdef({__index}, index, __rlink(index.html, index))
<br>
ifdef({__pics}, pictures, __rlink(pics.html, pictures))
</p>
})

Using tidy to clean up the mess
If you’ve opened any of the HTML files you’ve created from the macro and content files, you’ve probably found that there’s a lot of unnecessary white-space in them. This is OK, since excessive white-space is simply ignored by web browsers. If you’re a pedantic zealot like the author, you’ll want your source to be beautiful on its own as well. Enter “tidy”, an HTML validating, correcting and pretty-printing program. Simply invoke tidy on your HTML files thus:
tidy -im first.html
and the file will be audited and printed prettily. See the INFO box for where to get tidy.
The two ifdef lines are new to us. They first check whether a certain macro name is defined (observe that the first argument of ifdef has to be quoted). If the macro name is undefined, the third argument will be input into the text, and the second argument will be ignored. The use of the __menu macro is shown in listing 6. The __rlink macro is also new. Its name stands for relative link in that it does not prepend http:// to the link. It is shown in listing 7, which is the final macro listing file.
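To check what __menu expands to for a given page without building a whole file, you can feed m4 a two-line test on standard input – a quick sketch, assuming the macros live in third.m4 as in listing 7:

$ printf 'define(__index)\n__menu\n' | m4 third.m4 -

The page that defines __index gets the plain word index in its menu, while the pictures entry comes out as a relative link.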
Summary
We have seen how to use m4 to create macros to help us maintain HTML pages. We went from very simple one-line substitution macros, like __link and __link2, to bigger but still very simple macros of the same type, like __title. From there, we went on to using m4’s built-in ability to capture the output of system commands when we created the __today macro. Lastly we used m4’s built-in conditionals to create a __menu macro that expands into a different menu on each page. ■
INFO
[1] GNU m4: more information about the m4 macro processor can be found at http://www.gnu.org/software/m4/m4.html
[2] HTML tidy: get your HTML cleaned up and validated: http://www.w3.org/People/Raggett/tidy/
THE AUTHOR
Stig Brautaset, born in Norway, is the founder of the Linux Society at the University of Westminster. He is currently in his last year of a BSc Artificial Intelligence degree. His interests largely revolve around computer programming – from e-mail spam filters to games. Regularly spending much time on IRC, he can be found there under the nick “Skuggan” or “Skugg”.
KNOW HOW
Graphic Scripting
Netpbm Tools and Shell Scripts
Animation on Demand
Editing images does not mean you will automatically need a mouse. The filters included in the Netpbm package and similar tools can be used in shell scripts to automate various steps. BY CHRISTIAN PERLE

Pixel based graphic formats have been around for some years. Consequently, there has been a similar demand for suitable conversion programs. If you want to tackle this problem Unix style, that is using individual filter programs for the command line, you will need to write exactly (n-1)*n filters for n graphic formats. If you decide to use an interim format instead, you will only need n filters to convert various graphic formats into the interim format and another n to convert the interim format back to an original graphic format. Jef Poskanzer started working on pbmtools in 1989 with this in mind. Up until 1994 new formats and effect filters for the interim format were added step by step to the collection now known as netpbm. Since 2000 the Netpbm project has seen a return to more active development, and this is now hosted by the Sourceforge project.

GLOSSARY
Sourceforge: A Web service for Open Source projects including developer forums, version control, download areas, and various other resources at http://sourceforge.net/.
Povray: A freeware raytracing (3D graphics) program that runs on various operating systems – such as Linux. The Povray homepage is available at http://www.povray.org/; the subscription CD includes a tutorial in HTML format.
Anti-aliasing: Automatic smoothing of lines in high contrast images. Prevents an image from appearing “over-pixeled” and simulates a higher resolution.

Bit, Grey or Pix?
To be more exact, Netpbm does not offer a single interim format but three: “Portable Bitmap” (PBM), “Portable Greymap” (PGM) and “Portable Pixmap” (PPM). The PBM format recognizes only occupied (black) and vacant (white) pixels and thus requires one bit per pixel. The PGM format can store only greyscales and will normally require eight bits per pixel (256 greyscales). The PPM format requires 24 bits per pixel (eight bits each for the base colors red, green, and blue), allowing 16.7 million colors (“true color”). “Portable Anymap” (PNM) refers to any interim format. tgatoppm, giftopnm, or g3topbm are examples of the filters used to convert external formats to the interim format. ppmtogif, pnmtotiff, or pnmtops are examples of filters for the opposite direction. Additionally, there are some filters that are applied only to the interim formats. ppmtopgm converts images to greyscales, pnmsmooth applies a soft focus effect and pgmnorm is used for normalizing greyscales.

Source Material
Of course, raw material is required for command-line based image processing. But your computer can also take care of this task using the freeware raytracer, Povray. The scene description file glass.pov from Listing 1 causes the program to trace an image with five glass balls on a checkered background. You can use the clock variable to move the background and in turn produce a simple animation. Now let’s feed this to Povray version 3.0 or 3.1. We want the program to create a 320 by 240 pixel image using anti-aliasing, with output in PPM format:

povray +i glass.pov +w320 +h240 +a0.1 +fp +v
Figure 1 shows the results as stored in the glass.ppm file.
Filters in Chains
The following steps show how to use filter commands to modify an image. The following command converts the image to greyscale:
ppmtopgm glass.ppm > glass.pgm
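To get from the interim format back to a format a web browser understands, one of the pnmto* converters goes on the end of the chain. A quick sketch with a hypothetical output name, assuming your netpbm installation ships pnmtojpeg:

ppmtopgm glass.ppm | pnmsmooth | pnmtojpeg > glass_soft.jpg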
Figure 1: Glass balls as a test image

Listing 1: glass.pov
// Glass ball animation
// (C) 11/2002 Christian Perle (POVaddict) / Linux Magazine

// Camera
camera {
  location <0, 0, -10>
  direction <0, 0, 4>
  look_at <0, 0, 0>
}

// Lighting
light_source { <10, 10, -10> color rgb<1, 1, 1> }

// Declaration of glass ball
#declare GBall = sphere {
  <0, 0, 0>, 0.5
  scale <1, 1, 0.5>
  finish { phong 0.7 reflection 0.1 refraction 1 ior 1.33 }
}

// Five colored glass balls
object {
  GBall
  translate <-1, -0.6, 0>
  pigment { rgbf<1, 0.7, 0.7, 0.7> }
}
object {
  GBall
  translate <0, 0, 0>
  pigment { rgbf<.7, 1, .7, .7> }
}
object {
  GBall
  translate <1, 0.6, 0>
  pigment { rgbf<0.7, 0.7, 1, 0.7> }
}
object {
  GBall
  translate <1, -0.6, 0>
  pigment { rgbf<1, 0.7, 1, 0.7> }
}
object {
  GBall
  translate <-1, 0.6, 0>
  pigment { rgbf<0.7, 1, 1, 0.7> }
}

// Checkered pattern in background
plane {
  <0, 0, -1>, -4
  pigment {
    checker color rgb<0.5, 0.5, 0.5>, rgb<1, 1, 1>
    translate <-clock, clock, 0>
    scale 0.4
  }
  finish { ambient 0.4 }
}

Just like the filters in the Netpbm
package, ppmtopgm sends the results to your standard output, which can be redirected to a new file called glass.pgm using a greater than sign “>”. Figure 2 shows the results. In order to avoid creating additional temporary files for the following filtering steps, we now make use of the fact that the Netpbm tools can read from standard input. This allows us to link a series of filters using pipes in the shell:
ppmtopgm glass.ppm | pgmoil -n 5 | pgmtoppm Blue-White > oil.pgm
The output from ppmtopgm is sent directly to pgmoil. This filter adds an effect to the image, making the contures appear to melt just like in an oil painting. The pgmoil option -n 5 tells the filter to
apply the oil effect to fields measuring 5 by 5 pixels. To bring back some color to the image, we now add another filter to the chain. pgmtoppm converts the greyscales to blue and white, as is shown in figure 3.
Learning to Run
Of course, there is nothing wrong with calling individual filters in the command-line, but you will probably need a shell script to make use of the power of most command-line tools. Shell scripts allow you to send a whole collection of image files through the same chain of filters, or simply convert them to a different format. So now all we need is a whole bunch of images to experiment on. Let’s ask our old friend Povray to help us out with that animation feature we just talked about.
You can use the following command:
povray +i glass.pov +w240 +h180 +fp +a0.1 +kfi00 +kff49 +kc
to have the raytracer produce an animation sequence with 50 images. The images are stored as glass00.ppm, glass01.ppm, through glass49.ppm. Correspondingly, the options +kfi and +kff refer to the numbers of the first and last images. The option +kc shows that this is a cyclical animation. The next task is to create an animated GIF image from the individual images
and incorporate the GIF in a website. To prevent the GIF image from becoming too large you might decide to scale down the image to 100 by 75 pixels. The shell script mkgifanim (Listing 2) takes care of this task and goes on to call the whirlgif tool that assembles the individual GIF images to an animated GIF. In the for loop, the variable f parses any file names that match the shell expression (see box below) glass??.ppm. Within the filter chain pnmscale is used to scale down the individual images to a width of 100 pixels. The height is calculated automatically to retain the original proportions. In the next step ppmquant reduces the number of colors in the GIF to a maximum of 256. Finally, ppmtogif writes the GIF image itself. The script uses the current value of the variable f to construct a file name, removing the .ppm suffix and adding .gif as the new suffix. The following whirlgif call will run the animation, g_anim.gif, in an infinite loop (using the -loop option), with an interval of 8 milliseconds between the images (-time 8). Figure 4 shows the animation – of course you can only run the animation if you purchased the flipbook plug-in for this issue. But seriously folks, check out the subscription CD for the file, which you can view in your web browser or xanim.

Listing 2: mkgifanim
#!/bin/bash
for f in glass??.ppm
do
  pnmscale -w 100 $f | ppmquant 256 | ppmtogif > ${f%.ppm}.gif
done
whirlgif -o g_anim.gif -loop -time 8 glass??.gif

GLOSSARY
Standard input, standard output: Many command-line programs allow you to omit the name of the input file. In this case the program reads from standard input, which will normally mean the keyboard. If you omit the name of the output file, most programs will use standard output, that is, display the results on your terminal.
Pipe: The pipe character “|” (representing a stylized pipeline) connects the standard output of a program to the standard input of another program. This allows you to use multiple programs in a single processing step.
Shell script: A file containing shell commands that are processed automatically. Repetitive tasks are often best accomplished using automated shell scripts.

Figure 2: Everything turned grey all of a sudden

Figure 3: Blue and white colouring with oil effect

Animated Effects
In addition to GIF there are a few patented animation formats, such as MPEG and FLI. You can use xanim to view the latter. The mkedge animation in Listing 3 requires ppm2fli, which is not a Netpbm tool. This script also processes all the individual images in the glass ball sequence, converting them to greyscale (ppmtopgm), creating lines for the edges in the image (pgmedge) and normalizing the brightness (pgmnorm). The resulting images are named glass00.pgm through glass49.pgm. Since ppm2fli expects a list of the individual images in a file, you will need to run ls to create this list, before you launch ppm2fli. You will also need to use the option -g to tell the tool the image format in use.

Listing 3: mkedge
#!/bin/bash
for i in glass??.ppm
do
  ppmtopgm $i | pgmedge | pgmnorm > ${i%.ppm}.pgm
done
ls glass??.pgm > frames.list
ppm2fli -g240x180 frames.list edge_anim.fli

Listing 4: mkoverlay
#!/bin/sh
for i in glass??.ppm
do
  ppmtopgm $i | pgmedge | pgmnorm > temp.ppm
  pnmarith -add temp.ppm glass00.ppm > ${i%.ppm}.overlay.ppm
done
ffmpeg -an -i glass%02d.overlay.ppm -b 768 g_overlay.mpg

You can use the ffmpeg tool to create a further animation – as the name would suggest, an MPEG. You can apply the
filter chain used for mkedge first, however, you will need an additional step (pnmarith) to add the original image glass00.ppm pixel by pixel. This creates an interesting overlay effect. When ffmpeg is called in mkoverlay (Listing 4), the expression for the input file (glass%02d.overlay.ppm) is expanded by ffmpeg itself. You might like to perform some experiments of your own, and read the man pages, to familiarize yourself with the range of filters provided by the Netpbm tools. The man pages for pbm, pgm, ppm, and pnm contain an overview. ■
Shell Patterns
The shell recognizes various expressions for file and directory names, and expands them before running the current command. The most important examples are:
• the question mark ?, which represents exactly one character.
• the asterisk *, which is a wildcard for any number of random characters (even zero).
• character sets in square brackets []. Exactly one character of the set in the brackets must occur at this position. There are various notations, as is evidenced by the following examples: The expression lx[acE].txt matches the names lxa.txt, lxc.txt, and lxE.txt. graph[a-z][0-9][0-9].jp? matches the names graphi50.jpg, grapho01.jpg, and graphx55.jpe (amongst others). The dash within the brackets indicates that a complete character range is to be matched, for example lower case letters a through z, or numbers between 0 and 9. The expression [^abc][xyz].b* matches both wy.b and 3x.ball (amongst others), but does not match cz.bmg or rx.img. The ^ character after the opening bracket indicates a negation, meaning any characters apart from those listed.
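You can test which names a pattern matches straight from the shell with ls – here assuming the glassNN.ppm frames generated above are in the current directory:

ls glass[0-4]?.ppm    # all frames from glass00.ppm to glass49.ppm
ls glass?[05].ppm     # only frames whose number ends in 0 or 5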
Figure 4: Animation
KNOW HOW
LyX Workshop
LyX Workshop, Part 1
Taking it Easy LyX provides comfortable word processing features for the LaTeX typesetting system allowing even beginners to create high-quality documents. BY ANDREAS KNEIB
LyX’s roots reach back way down through the years of computer history. In March 1978 Donald E. Knuth [1] wrote the first lines of a typesetting program called TeX. This program was actually designed to improve the layout of his book “The Art of Computer Programming”, as the layout was a source of concern to the esthete, Knuth. The name of the layout system itself indicates the author’s desire for perfection and art within a program. TeX represents the Greek letters Tau Epsilon Chi, and is thus pronounced tech and not the expected tecks [2]. The ancient Greeks, and Aristotle in particular, understood “techne” as artistry and applied knowledge, and this
was exactly what Knuth was aiming for. The typesetting package provided the author with the potential to add commands to the text to produce an attractive appearance. Unfortunately, the commands involved are complex, and can prove to be too much of a challenge to the average writer [3]. This motivated Leslie Lamport to create the TeX add-on LaTeX in 1985. Like HTML, LaTeX instructions comprise a markup language – that is, a descriptive language that shows the interpreter how to portray a document. In contrast to HTML, the interpreter is not a browser in this case, but Donald Knuth’s layout system, which formats the text on the guidelines of LaTeX macros,
and creates a DVI file as a result. If you have installed the LaTeX package, you will normally find an introductory file on this subject, called l2kurz.dvi, on your hard disk. The macro command set developed by Leslie Lamport may have simplified working on the text itself, but the user still needed to be familiar with LaTeX syntax. In 1995, ten years after Lamport created his add-on, Matthias Ettrich produced a frontend that enhanced the user-friendliness of the layout program, and LyX was born. Matthias Ettrich originally designed this program within the context of a student project.
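To make the markup idea concrete, this is roughly what minimal LaTeX source looks like – a hand-written sketch, not output taken from LyX:

\documentclass{article}
\begin{document}
\section{A heading}
Some text with \emph{emphasis} and a footnote.\footnote{Set by LaTeX, not by hand.}
\end{document}

The author marks up the logical role of each element; TeX decides fonts, numbering and spacing.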
At first sight, LyX looks like yet another editor with its menu bar, but on closer inspection you will see that Ettrich has managed to combine the ease of use of a modern word processor with the perfection of TeX layouts – and that is no mean feat. In 1997, working with Matthias Kalle Dalheimer [4], Ettrich ported his program to the KDE environment, and decided to name this branch Klyx. Ever since then, work on the program code for LyX has been the responsibility of Lars Gullik Bjønnes. The primary difference between LyX and Klyx is the user interface. While LyX is based on the GUI toolkit Xforms, Klyx uses the Qt library. Both versions are widely compatible with one another. If you are interested in downloading the latest version of LyX, try http://www.lyx.org.

Figure 1: A document in the DVI preview
LyX the Editor
You will discover all the options a traditional text editor offers you in LyX. For example, you can search and replace words, insert tables, perform spell checking or cut and paste with your mouse. But there are many features of programs such as Microsoft Word & Co. that you will not need when working with the LaTeX frontend. For example, the program does not use tabs or additional newlines to increase the whitespace between words and paragraphs respectively. The editor does not show line or page wrapping and will not try to influence you with regard to hyphenation. LyX is a visual word processor that uses LaTeX as a print system, and is thus restricted to the limitations of the macro package. This is one of the reasons why you should not expect the WYSIWYG
Figure 2: The document from Figure 1 in the WYSIWYM view of the editor
behavior of other editors. For example, if you want to know what your document will look like in the finished version, you will have to view the DVI. These limitations are at the same time the program’s greatest advantages: While you are writing you can concentrate entirely on the content of your document, leaving the typographic niceties of paragraphs, headlines or footnotes to your computer system. You only need to tell the editor what paragraphs to indent, or format as footnotes or lists. So there are no headaches as regards the layout. If you tell the frontend what part of the text you would like to format as a numbered headline, it will then choose the correct font, typeface, and number. Complicated manual text formatting procedures, such as “bold, 16 point, centered”, are a thing of the past. And this is why Matthias Ettrich refers to his program as a WYSIWYM editor. WYSIWYM is the acronym for “What you see is what you mean”. That is, you will not see an exact representation of your text as the printer will output it. What you will see is the logical structure of the document as you intended it, with highlighting, headings, or lists. LyX also includes a whole range of additional features, allowing you not only to view the current document in DVI format, but also to convert it to either ASCII or HTML. The program also allows you to create tables of content as well as glossaries. For authors who use mathematical expressions, LyX offers a WYSIWYM style formula editor that you will not
need a mouse to use. Additional features worthy of note include the ability to incorporate and scale Postscript graphics in your documents, and almost unlimited Undo/Redo functionality.
Documentation and Help
Before we take our first look at the user interface, let’s first take a moment to investigate the Help button at the end of the menu bar. The Help option provides access to a wide range of LyX documents. The help texts start with an introduction, include a tutorial with exercises, and culminate with a reference manual. Life is made easier by the Content menu, which allows you to perform searches in the documentation. If you would prefer to start with an introduction, you might like to try the Introduction, Tutorial and FAQ section.
The Interface
When you launch the program for the first time, a menu with the items File, Edit and Help appears – not many items so far. But after creating a document by choosing File/New, additional menu items are added to the list, such as
GLOSSARY
DVI (Device Independent): The content of this file is composed in a device independent language. You will need a program like xdvi or kdvi to display the file.
WYSIWYG: What You See Is What You Get – i.e. the printer will output what you see in the program window.
Insert, Layout, Display, Navigate and Documents. The File menu provides exactly what you would expect: various items that allow you to create, open and save files. The Version control item is new, and allows you to identify and select several versions of the same document. The Import and Export items are used to convert ASCII, LaTeX, or PDF documents to and from LyX format. LyX documents are easily identified by the .lyx file suffix. However, the editor will not convert documents itself, preferring to delegate that task to external programs. The Edit menu also provides most of the functionality you would expect from a normal text editor. One exception is the Table item that allows you to edit tables. The Math Panel popup contains the whole gamut of non-standard characters from Greek letters to root signs; just click on a symbol to easily insert it into your new document. You can select Spellchecker to let the Ispell program loose on your document, and use Floats & Insets to open and close footnotes, margin notes, and tables. The last two items in this menu, Preferences and Reconfigure, are used to define settings in the editor’s preferences file, which is normally stored in ~/.lyx/lyxrc. The Inset button provides access to a list of text markers that you can add to your document, allowing you to insert margin notes, images, and whole files. The Layout menu is somewhat more advanced. You can use Fonts to specify the typeface and emphasis of the selected text. The Paragraph Layout popup is used to select the alignment of the text or specify page breaks. You can use the Document option to define the appearance of the whole document. Five registers are available for use in this section, allowing you to specify the document class (various classes are available, ranging from letter to book), the language for automatic hyphenation or the paper size. View opens a menu that allows you to view your text as a PDF document, in Postscript or HTML format. This menu also allows you to launch the DVI viewer, which will show you what the printed version of your document will look like. The Update function does what you would expect from a WWW
browser. If a document has been modified, it is reloaded by the viewer. The Table of Contents menu is also quite useful, and provides you with an overview of the images, content, algorithms, or tables in your document. The next option in the menu bar allows you to navigate your document. You can navigate the body of your text, the notes it contains, or the errors. As we have already paid a visit to the LyX help menu, our last port of call is the Documents menu. This provides quick access to the documents that you currently have open, allowing you to toggle the current document. The toolbar underneath the menu bar allows you to call commonly used functions with just a single click. A small bubble help window shows you the title of a button when you move the mouse over it. Several key functions, such as Print, Select or the Math Editor can be found here. The pull-down box at the left end of the toolbar allows you to select paragraph environment types. The environment list depends on the document class defined in Layout/Document. For example, if you choose the article class, you have access to paragraph formats for the title, headings, and author that are not available in the letter class. Instead, the letter class comprises entries for the address, telephone number, and references that are not available for articles. We will be taking an in-depth look at paragraph environments in the next part of our workshop.
Templates
The /usr/share/lyx/templates directory contains a range of LyX files that you can use as templates. The quickest way to access a template is to choose File/New from template. This is your opportunity to test what you have learnt about word processing with LyX so far. Start by selecting the dinletter.lyx template from the templates directory. A sample letter appears in the editing area of the program. Use data of your own to replace the default text, including the angled brackets. The arrows at the end of the first lines in the Letterhead and Address environments indicate two paragraphs without any vertical whitespace between them.
Figure 3: The Layout/Document Window
Press [Ctrl+Return] to use this format. If your letterhead contains more than the three lines allotted in the template, you can adapt it to suit your needs. After completing the letter you can now preview the finished item. To do so, use the View menu, as previously described. If you are satisfied with the results, and would like to keep this document as a template for future letter writing, the right place to save templates of your own making is the ~/.lyx/templates directory. Make sure that you use a descriptive name that will allow you to find the template easily in the future.
Quo vadis?
In part two of this workshop we will be taking a closer look at the configuration file, ~/.lyx/lyxrc, getting to grips with paragraph environments and text classes, and looking into command shortcuts. Additionally, we will be answering a few questions, such as "How do I create a table of contents?", and "What are margin notes, references, or footnotes?" ■
INFO
[1] http://www-cs-faculty.stanford.edu/~knuth/index.htm
[2] Donald E. Knuth: Tau Epsilon Chi, a system for technical text
[3] http://www.ibiblio.org/pub/packages/TeX/info/german/texbuch/
[4] http://www.linux-magazin.de/ausgabe/1998/06/KLyx/klyx.html
[5] http://www.linux-user.de/ausgabe/2000/10/085-klyx/klyx.html
Charly’s column
SYSADMIN
The Sysadmin’s Daily Grind: Rinetd
Man in the Middle
No matter if you're talking about the protagonist in a mediocre spy movie or a server, you will probably prefer to use a man in the middle, rather than look danger in the eye. BY CHARLY KÜHNAST
The man in the middle of a network will normally be a proxy. Proxies elevate traffic to the application level where they verify, cache and manipulate it. If you do not need this extended functionality, you might consider using a simple redirector, such as rinetd [1]: Rinetd accepts connections on a specified port and relays them to a pre-defined port on another host. Since there is no need to elevate the traffic to application level, this method is quick and easy on your resources. Rinetd is available for Linux and Windows; the Linux version is a tarball that weighs in at a mere 35 Kbytes, and can be easily extracted and installed using the typical »make; make install« procedure. The redirection rules are stored in the »/etc/rinetd.conf« file, which is not installed automatically – you will have to take care of that yourself. To provide a simple example, let's construct a redirector for a web server. We want to redirect the server with the IP 10.0.0.1 to the server at IP 10.0.0.2. The web server is listening on port 80 on both systems. The line in »rinetd.conf« will read:

10.0.0.1 80 10.0.0.2 80

Of course, you can use names instead of IP addresses. If the server at 10.0.0.1 has
more than one IP address, and I want the redirection to apply to port 80 for any of the other IPs, there is no need to add a redirection rule for each IP. Instead you can simply type:

0.0.0.0 80 10.0.0.2 80

This redirects port 80 for every IP address the server owns to 10.0.0.2.
Allow and Deny Rules
To prevent every connection being redirected, I can use the »allow« and »deny« rules to specify the customers allowed or not allowed to use the redirector. The rules preceding the first redirection rule in »rinetd.conf« are global, that is, they apply to all the redirections defined in the file. For example:

allow 192.168.0.*
10.0.0.1 80 10.0.0.2 80
10.0.0.1 22 10.0.0.2 22
1.1.1.1 3128 1.1.1.2 8080
This configuration allows redirection of connections that originate in the 192.168.0.* network. However, if you want to apply this restriction to the first rule only, you must insert the »allow« rule after the redirection rule:

10.0.0.1 80 10.0.0.2 80
allow 192.168.0.*
10.0.0.1 22 10.0.0.2 22
1.1.1.1 3128 1.1.1.2 8080
In this case the last two rules apply to connections from everywhere, but the first rule rejects any connection attempts that do not originate in the 192.168.0.* network.
If you want to know what »rinetd« is up to, you will have to convince the program to write to a logfile, by adding another entry to »rinetd.conf«. The entry will be as follows:

logfile /var/log/rinetd.log
The additional »logcommon« line makes »rinetd« write its logs in Common Logfile Format (CLF), which is also used by Apache and Squid (if so configured). This has the added advantage that many programs designed for evaluating logfiles can be used here, since practically any reporting tool can handle CLF files. While the first report is being generated, you could always watch a mediocre spy film. Who knows – you might learn something. ■
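Pulling these directives together, a complete »rinetd.conf« might look like this – a sketch assembled from the rules above, not a listing printed in the magazine:

# /etc/rinetd.conf – hypothetical complete example
allow 192.168.0.*           # global rule: applies to all redirections below
10.0.0.1 80 10.0.0.2 80     # bindaddress bindport connectaddress connectport
10.0.0.1 22 10.0.0.2 22
logfile /var/log/rinetd.log
logcommon                   # write the log in Common Logfile Format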
INFO [1] Rinetd home page: http://www.boutell.com/rinetd
THE AUTHOR
Charly Kühnast is a Unix System Manager at a public datacenter in Moers, near Germany’s famous River Rhine. His tasks include ensuring firewall security and availability and taking care of the DMZ (demilitarized zone). Although Charly started out on IBM mainframes, he has been working predominantly with Linux since 1995.
SYSADMIN
OpenSSH from the Administrator’s Perspective – Part II
Tunnel Vision
The Secure Shell protocol is not only used to provide secure shells, but also to forward other types of TCP connection through a safe tunnel. But you need to get the key length and software version right to ensure that SSH is really safe – and there are quite a few pitfalls to watch out for when using SSH across firewalls. BY ANDREW JONES
The Secure Shell, SSH – the name promises safety, and has every right to do so. We introduced you to several secure services in the first part of this series [1]. One of the most interesting features is the facility to provide secure tunneling for any TCP protocol. We will be concentrating on that aspect of SSH in this part of the series. First a word of warning to underline our statement in the intro: The SSH package is only secure if you use an up to date software version – vulnerabilities have been discovered time and again in OpenSSH. The developers have always resolved them in a timely fashion (see http://www.openssh.com/security.html), but obsolete SSH servers are still a security risk. You can use the scanssh [2] tool to search for obsolete SSH servers. The tool will scan individual hosts or complete subnets for SSH servers, and output the version number. Just pass the IP address
and the corresponding subnet mask as arguments when launching the tool. The syntax for a single host is as follows:

scanssh 192.168.10.3/32
Figure 1 shows an example for a complete subnet, where the output contains the version number of the installed servers. Figure 2 shows scanssh investigating individual hosts, and shows that OpenSSH and Ssh.com both use their own software on their webservers. Scanssh does not require root privileges – our only quibble is the fact that the tool does not use host or domain names, and can only locate SSH servers listening on port 22.
How long does an RSA key need to be?
There was a big scare with respect to the security of SSH and other crypto programs in the middle of March this year. Surprisingly enough, it was not caused by a software vulnerability. It is claimed that
1024 bit RSA keys can be broken within a reasonable period and using affordable resources. To be more precise, the target was a PGP key, but the problem also affects SSH. Dan Bernstein, the author of the Qmail mail server, published his research into highly specialized parallel computers, designed for factoring integers, in the autumn of 2001 [3]. Afterwards, a discussion on the possible requirement to withdraw 1024 bit PGP keys ensued on the Bugtraq mailing list. Crypto guru Bruce Schneier added a few clarifying statements to settle this issue ([4],[5]). According to Bruce, the following key lengths can be considered as secure until the year 2005:
• Private persons: 1280 bit
• Corporate: 1536 bit
• Government: 2048 bit
Longer RSA keys are just a waste of time, according to Schneier. If you want to take Schneier's advice, but have been using a default 1024 bit RSA key for SSH so far, you will need to update your key. The following command creates a new RSA key for Version 2 of the protocol:

ssh-keygen -b 1280 -t rsa -f ~/keynew/id_rsa -C "1280 bit key for webmaster"
Figure 1: The scanssh tool searches for SSH servers, here in the 192.168.10.0/24 subnet, and outputs the exact version
Figure 2: Scanssh investigating the SSH versions installed by the OpenSSH Project and SSH Communications Security on their own web servers
-C "1280 bit key for webmaster"
The RSA keypair (id_rsa and id_rsa.pub) with a key length of 1280 bits is written to the ~/keynew/ directory. You can use the -f flag to specify a target directory if you want to avoid overwriting the existing keys under ~/.ssh/. The -C option adds a comment to the public key; this is used only to distinguish the key more easily, and has no influence on functionality.
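To start using the new keypair you still need to install the public half on the server – a sketch, where the account and host names are placeholders rather than details from the article:

# Append the new public key to the list of keys the server accepts
cat ~/keynew/id_rsa.pub | ssh login@server.example.com "cat >> ~/.ssh/authorized_keys"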
Tunneling: Forwarding TCP Ports
In addition to its original task of allowing secure remote logins, SSH can be used to secure almost any other protocol. Port forwarding allows you to relay TCP ports through the secure SSH connection. In this scenario SSH plays a similar role to a proxy, receiving connections at one end of the SSH channel and relaying them to the server at the opposite end. SSH can perform two port forwarding variants: local port forwarding and remote port forwarding. Local port forwarding is what you will need in most typical circumstances. In this case, a connection that reaches a local (client-side) port is forwarded across the secure SSH channel to a port on a remote server. You could also describe this technique as an egress tunnel. The syntax for this command is quite simple:
ssh login@remote_host -L local_port:remote_host:remote_port
You can use forwarding to open up a secure POP3 connection to your mailbox, for example – in Part 1 of our series on OpenSSH [1] we already mentioned the potential vulnerability of POP3. After all, the POP client transmits the POP password to the server in the clear, which makes it easy to steal the password off the wire. To avoid this, you can of course tunnel the POP3 connection through SSH, even if your provider does not offer POP over SSL:

ssh kh@pop.remote.com -C -L 25025:pop.remote.com:110
Now, if we are so bold as to telnet localhost 25025, we can view the banner issued by the remote POP3 server. It works – and you don't need to be root. All you need to do now is to set the POP client to localhost and port 25025, to allow it to poll mail as usual. Figure 4 illustrates this procedure: The SSH command opens a normal SSH connection to the server, pop.remote.com, and also opens the tunnel. This forward will then remain active while you are logged on.
Figure 3: The Web-based administration program, Webmin, provides a module for configuring SSH server. However, you will need to put some thought into this (see boxout)
If a POP3 client (or a telnet command) now requests port 25025 on the client (i.e. on localhost), the SSH client will answer the connection request. SSH opens port 110 server-side and forwards any data. You can also use a similar forward to secure the connection to a Webmin server (see the boxout "Configuring SSH Servers via Webmin"):

ssh kh@admin.remote.com -C -L 33337:admin.remote.com:10000
Now the browser can talk to the Webmin server via the tunnel on https://localhost:33337/. Lots of TCP based services can be forwarded and tunneled in this way – SMTP, IMAP, LDAP, or NNTP, but not FTP. FTP uses both a control channel and a data channel, whose ports are negotiated within the control channel. So, although it is trivial to secure the control channel, the data channel will still be in the clear. SSH provides scp and sftp as replacements.
Forwarding for Arbitrary Hosts
The kind of forwarding we have looked at so far relied on the hosts at both ends of the SSH connection having the application client and server software installed. But all of the programs involved – the application client, the SSH client, the SSH server and the application server – could equally run on a host of their own. So forwarding can involve up to four hosts for a single instance. This kind of off host forwarding can be used to create unusual network connections and SSH tunnels; however, keep security in mind when you are planning practical implementations. For one thing, only the connection between the SSH client and the SSH server is secured, and an attacker with access to the local port, but not to the target port on the server, can always use the tunnel to access a service that would normally be inaccessible. To mitigate this danger, OpenSSH by default only allows connections from the local host to the forwarded port, although you can use the -g switch to change the default behavior. A sensible, practical application for off host forwarding would be a connection to a server where the user does not have an SSH account. In this case the user will need
an SSH server with a secure connection to the POP3 server in the vicinity of the target server. This might be the case if both servers are in the demilitarized zone behind a firewall, but the user requires remote access to the network:

ssh kh@ssh.remote.com -C -L 25025:pop.remote.com:110
The forward is illustrated in Figure 5: An SSH tunnel is established between the client and ssh.remote.com. The mail client connects to its local port 25025. This connection is accepted by the SSH client, and the SSH server then provides the counterpart on port 110 between ssh.remote.com and pop.remote.com. Only the connection between the client and the SSH server is encrypted; a standard TCP connection is established between the SSH server and the POP3 server. From the viewpoint of the POP3 server, the connection originates from ssh.remote.com and not the client.

Reverse Forwarding
Remote port forwarding is the exact opposite of local port forwarding: The connection request is for a port on the host running the SSH server. Data is forwarded via the SSH channel to the client, where it is sent to an arbitrary port. You could also regard this as an ingress tunnel. The syntax is as follows:

ssh login@remote -R remote_port:local_host:local_port

To determine what kind of port forward you need, look at the scenario from the viewpoint of the TCP client application. If the TCP client application is local to the SSH client machine, local forwarding is the right option. If it is running on the remote SSH server machine, you should opt for remote port forwarding.

Not Always All Ports
OpenSSH permits TCP forwarding by default, and allows any free local and remote ports above 1024. Root is additionally permitted to forward local privileged ports below 1024. A user with a genuine SSH login can also achieve the same goal without any support from SSH, using Netcat (nc), for example. To do so, the user would need to connect a Netcat server and a Netcat client via an SSH shell pipe. The AllowTcpForwarding no directive in the server configuration file, sshd_config, is thus only partially effective.
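To illustrate the point – this construction is our own sketch, not taken from the article, and the host names gw and ws are reused from the firewall example below – such a shell pipe could be spliced together with a named pipe, no -L option required:

# Hypothetical workaround: relay local port 2000 to ws:80 through a plain
# SSH shell, using two netcat processes and a FIFO for the return path.
mkfifo /tmp/backpipe
nc -l -p 2000 </tmp/backpipe | ssh login@gw "nc ws 80" >/tmp/backpipe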
Through the Firewall
One of the more interesting tasks for TCP forwarding involves transparently tunneling protocols through a firewall which permits SSH. A homeworker might need access to data stored on an Intranet web server, for example, although the server is only accessible on the company's internal LAN. A firewall prevents access from outside, but permits SSH logins on the gateway. Let us assume that the following computers are involved:
• Home desktop hd
• Office desktop od
• Gateway gw
• Internal web server ws
The user runs the following command on his home desktop:
ssh gw_login@gw -L 2001:ws:80

Figure 4: Local forwarding means that SSH will forward a connection that enters the client on port 25025 through the tunnel to the server, where it reaches its target, port 110
This opens an SSH session to gw, and at the same time forwards the local port 2001 to TCP port 80 (HTTP) on the internal web server ws via the SSH channel. This assumes that port 2001 on the local machine has not already been assigned to another service. Now the LAN web server can be accessed from the home desktop using the following URL: http://localhost:2001. This variant is risky. Any user logged on to hd can use the open port, provided the tunneled session to gw is active. If the user also used the -g flag, port 2001 on hd will also be accessible to other hosts. If you cannot trust your users, you should be careful here, otherwise you might find them poking holes in your firewall. But it would be wrong to blame SSH for this: Any connection that goes through your firewall can be misused to tunnel other protocols.
Listing 1: Allowing SSH to the Firewall
01 # SSH-Port
02 export SSH="22"
03 [...]
04 # Drop-Policy
05 $IPTABLES -P INPUT DROP
06 $IPTABLES -P OUTPUT DROP
07 $IPTABLES -P FORWARD DROP
08 [...]
09 # Rules for SSH access to the gateway
10 $IPTABLES -N ssh_gate
11 $IPTABLES -A INPUT -p tcp -m state --state NEW -d $EXT_IP --dport $SSH -j ssh_gate
12 # Gate should permit outgoing and ingoing SSH (to the LAN)
13 $IPTABLES -A OUTPUT -p tcp -m state --state NEW --dport $SSH -j ssh_gate
14 $IPTABLES -A ssh_gate -j ACCEPT
15 [...]
16 $IPTABLES -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

Figure 5: SSH can also relay TCP connections to a server running on a machine without the SSH daemon. The connection between ssh.remote.com and pop.remote.com is not secure in this case.

SSH on SSH
Keeping to our home office example, let's assume that an employee would like to be able to log on to her office desktop
using her home desktop. An SSH connection in an SSH tunnel provides an elegant and secure solution:

ssh gw_login@gw -L 2002:od:22
ssh od_login@localhost -p 2002
The first command opens up a tunnel from the local port 2002 to the gateway gw, which forwards this connection to the SSH port, 22, on od. The second command uses this tunnel to connect to port 2002 on localhost (option -p), thus creating an SSH on SSH connection. Alternatively, the homeworker could log on to gw and move on to od from there. This solution would mean the user storing her SSH key on the gateway, enabling a forwarding agent, or using a normal password. The SSH on SSH method avoids this. The gateway has no access to the data being forwarded: hd is directly connected to od via the tunnel, and this means that the user will be working with her account on od. From the viewpoint of the tunneled connection it does not matter whether NAT (Network Address Translation) is involved; even multiple NAT will not cause any problems.
A Backdoor to Your Own Network
Let's look at another example that seems to be more complex at first: The user does not have a login on the gateway, and the firewall prevents her from connecting to the internal network. In this case remote port forwarding can provide a backdoor to the corporate network. The home desktop will need access to the Internet, and must be able to accept external SSH logins. The user must know
the external IP address of her home desktop, but this should not be too difficult to determine, even for a dynamic IP address, in the light of services such as DynDNS. The user then enters the following command on her office desktop:

ssh hd_login@hd -R 2003:od:22
Instead of terminating this login, the user then leaves the tunnel open (see Figure 6). Once home, she can use the tunnel to log on to her office desktop:

ssh od_login@localhost -p 2003
If the corporate gateway does not permit outgoing SSH connections for some reason, the user can simply have her SSH server on hd listen to a permitted port; port 80 looks promising in this case. This just goes to show how easy it is for users to poke holes in your firewall, if they really want to, of course. As soon as you open any port, users
can tunnel through it. Of course, this normally means contravening corporate regulations, so if you want to keep your job, you should be very careful about tunneling, and seek prior authorization from your admin. In the context of port forwarding the options -N and -f can be quite useful: -N prevents SSH from running commands server-side, and allows only the specified ports to be forwarded. -f sends the SSH client into the background after authentication has been completed, i.e. after the user has entered her password or passphrase.
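For example, the POP3 forward shown earlier could be set up as a pure background tunnel like this – our own combination of the options just described, not a command from the article:

# Open only the tunnel – no remote shell – and drop into the background
# once the passphrase has been entered.
ssh -f -N -C -L 25025:pop.remote.com:110 kh@ssh.remote.com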
Configuring SSH Servers via Webmin
The first article on OpenSSH [1] discussed the configuration of sshd amongst other things. If you prefer GUI based admin tools, you can use the corresponding Webmin module [6]. Webmin writes modified settings directly to the server configuration file /etc/ssh/sshd_config. Figure 3 shows you what Webmin's SSH module looks like. If you intend to use Webmin, you should be aware that this tool consists of a large number of Perl CGI scripts that are accessible on port 10000 (TCP and UDP) of the Webmin server. To achieve a modicum of security, you will need to enable SSL encryption in your Webmin configuration; this will ensure that your login, password, and the changes you make in Webmin are not transmitted in the clear. Also be aware that the Webmin distribution uses a 512 bit RSA key and a self-signed certificate for SSL. Of course, the certificate is not assigned to your own server. But the fact that anybody downloading the package will be aware of the purportedly secret key is probably worse. In other words, it does not really matter that the key length is insufficient. You would need your own SSL key and your own server certificate, or an SSH tunnel, to provide genuine security.

Figure 6: SSH tunnel: od first connects to hd and then opens a tunnel via reverse port forwarding, allowing hd to open a second SSH connection in the opposite direction to od

Special Cases: X11
X11 forwarding involves a special kind of SSH port forwarding. X11 always uses a network protocol. Even if the graphic output of a program running on the local machine is displayed on a local monitor, data have to be transferred between the client and the server. The X11 server is responsible for the screen display in this case, and it also
reads keyboard and mouse input. X11 clients are programs that use X11 for their input and output. X11 servers normally listen on port 6000. If a computer has more than one screen, keyboard, and mouse, additional X11 servers will use ports 6001 upward. The client program reads the environment variable $DISPLAY to discover what server it should display on. If you can access an X11 server, you can display an X11 client on that server, however, you can also grab screenshots or sniff keyboard events. So without additional security measures X11 would be a security nightmare. But rest assured, X11 uses an authentication system of its own. MIT Magic Cookies are the most common implementation in this area. Since you need authentication, a port forward alone is not sufficient for X11. So SSH provides a mechanism that allows you to relay the graphic output of a remote computer to your local display. This mechanism handles X11 authentication, sets the $DISPLAY variable when you log on, and forwards the connection through the tunnel. Several conditions must be met. The configuration file for the remote SSH server, sshd_config, must contain the lines X11Forwarding yes and a directive of the type X11DisplayOffset 10. On the SSH client side, you will need to run an X11 server and enable X11 forwarding, for example, by using the SSH option -X or ForwardX11 yes in /etc/ssh/ssh_config or ~/.ssh/config. The profile files on the remote computer, for example, ~/.profile or
~/.bashrc can prove to be another pitfall. Some of these scripts attempt to set the $DISPLAY variable without being aware of SSH. They may even overwrite the correct settings, and this could cause some surprises if the X11 client talks directly to the X11 server and simply ignores the tunnel, although SSH and X11 forwarding have been enabled. After fulfilling the conditions for X11 forwarding, you can run any X11 program on the remote computer. The SSH tunnel forwards the display to the local display and encrypts the data transmission. When dealing with SuSE servers with YaST 2, or Mandrake hosts with DrakConf, admins can use this method for secure remote administration via an SSH tunnel.
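A quick illustration – a sketch in which the user and host names are placeholders: once the conditions above are met, a single command is enough:

# Start a remote X11 client; its window appears on the local display,
# with all X11 traffic encrypted inside the SSH connection.
ssh -X user@remote.example.com xterm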
Configuring a Firewall for SSH
We have already mentioned how a user can undermine a firewall using a tunnel. But no security conscious admin would want to attach her computer to the Internet without a firewall. The firewall is often the Internet gateway for an internal LAN configured with private IP numbers (RFC 1918). Our task is to configure the firewall to allow an SSH login on the firewall host, and to provide access to the servers in a DMZ or on the LAN from that point. Listing 1 shows how you can use the firewall subsystem of the Linux 2.4 kernel to do so; it illustrates only the relevant sections of the iptables rules. This set of rules uses a DROP policy for INPUT, OUTPUT and FORWARD. By default, the kernel will not permit any IP
packets to enter or leave the computer, and will not forward any IP packets. Interfaces, IPs and ports must be specified explicitly – i.e. the basic principle, "anything not explicitly permitted is denied", applies. This policy will not even allow connections to the host loopback device without explicit permission. An INPUT rule allows SSH connections to the gateway via the external interface. An OUTPUT rule allows SSH logins via the gateway to computers on the LAN or in the DMZ. These rules do not permit you to log on directly to any internal computer. The last line in Listing 1 allows the kernel to recognize the packets belonging to a permitted connection and also permit them. This kind of statefulness became available with the network stack of the 2.4 kernel. For more detailed information you might like to refer to the commented iptables scripts produced by Bob Sully [7], or to man iptables, and the iptables HOWTOs [8]. ■
INFO
[1] "Out of Sight: OpenSSH from the Administrator's Perspective", Linux Magazine Issue 24
[2] Scanssh: http://www.monkey.org/~provos/scanssh/
[3] Daniel J. Bernstein: "Circuits for Integer Factorization: A Proposal": http://cr.yp.to/papers/nfscircuit.ps
[4] http://www.counterpane.com/crypto-gram-0204.html#3
[5] http://www.counterpane.com/crypto-gram-0203.html#6
[6] Webmin SSH module: http://www.webmin.com/download/modules/sshd.wbm
[7] IPtables scripts by Bob Sully: http://www.malibyte.net/iptables/scripts/fwscripts.html
[8] HOWTOs for IPtables: http://www.digitaltoad.net/docs/iptables-HOWTO-1.html
THE AUTHOR
Andrew Jones is a contractor to Linux Information Systems AG (http://www.linux-ag.com) in Berlin. He has been using Open Source software for many years. Andrew spends most of his scarce leisure time looking into Linux and related Open Source projects.
LDAP-Clients
SYSADMIN
Comparison of LDAP Clients under Practical Conditions
Admin's Little Helpers
Without a well planned management concept and suitable administration tools any LDAP directory is surely heading for chaos. This article investigates the options for optimizing administrative approaches using only freeware tools. BY VOLKER SCHWABEROW
After setting up an LDAP directory in a productive environment, one of the first questions that arises will probably concern options for effective management. As a system administrator you may soon find yourself demoted to the personnel department's gopher when user phone extensions or addresses need to be modified. That is definitely not what you had in mind when you applied for the admin job – and probably not what your employer had in mind either. You will often find admins purchasing expensive toolkits in order to cope with the administration of a directory service, but why buy tools if you are working with an Open Source directory service such as OpenLDAP [1]? This is the question we will be attempting to answer in this article, using several Open Source solutions as examples. Toolkits must be able to provide several fundamental capabilities:
• Delegation: The tools will need the ability to delegate management of the directory service, and any tasks this involves, to administrative roles.
• Usability: The toolkit must be usable by inexperienced users. You should not need to be an expert to change a record.
• Management: The solution must be easily maintainable for the sysadmin and provide understandable core functions.
You will also need to decide who will manage what records in the LDAP directory. One possibility would be to allow the users to keep their own records up to date. However, the administrator could just as easily assign this task to trained staff, possibly from the personnel department. Whatever approach you take, you will need to ensure that the procedures you implement allow you to maintain consistency for new and existing data.
A toolkit is no improvement if it allows users to let the LDAP directory go haywire, preventing even the administrator from keeping track of the status of the records. And this is why the introduction of an LDAP toolkit should be modelled on your administrative processes. If it is not, the introduction of an interface for data maintenance is doomed to failure. It should be obvious that models of this kind do not lend themselves to quick and dirty implementations.
Web Interfaces
The Web interface is a typical method for administration, the advantage being that users can access their own records no matter what platform they use. Web interfaces provide a similar level of functionality to native applications, since the functionality of a fat client can be implemented almost entirely using Web programming languages. Gonicus – a company that rose like a tiny phoenix from the ashes of ID-Pro – recently placed a tool called Gosa on their website [3]. Gosa is a Web application based on the PHP [2] programming language. Although you can transfer the administration of multiple network services to a directory service, you may not be able to use a common interface. This is the gap that Gosa attempts to fill. Gosa was developed as an add-on to Gonicus' own thin client project, Goto. Gosa's strength is user and group management for Posix accounts, Samba, Squid and Qmail. If you are not using Samba or Qmail in a given network, it does not make much sense to install and run Gosa. Additionally, an nss_ldap server link should be in place.
Installation and Configuration of Gosa
After downloading the current Gosa package from the FTP server, you will probably want to use the /opt directory to expand the package. You will find a tar-gzip file containing schema for Gonicus' own applications in /opt/gosa/contrib. You will also need to expand the schema and copy the Gonicus directory that this action creates to /etc/openldap/schema. The qmail.schema file contains the schema for the QmailLDAP interface that also needs to be moved to the /etc/openldap/schema directory. At this point the *.schema files need to be imported into your OpenLDAP server's schema. To do so, use an include statement in the slapd.conf file. Listing 1 shows you the order in which to add the schemas. If you are adding additional schemas to your LDAP server, you will probably want to change this order.
The next step is to define the Access Control Lists recommended for Gosa (see Listing 2) in the slapd.conf file. Pay attention to the comments in the lines starting with hash signs. The ACLs must reference the distinguishing name of the LDAP admin account.
Finally, you will need to set up a Posix account for the administrator of your directory. If you have already set up some Posix accounts on your directory server, you can simply point to the distinguishing name of an existing account. If not, use a short LDIF file to set up the Posix account on your directory server, as follows:

dn: uid=myadmin,dc=myname,dc=com
objectClass: top
objectClass: posixAccount
homeDirectory: /root
userPassword: secret
loginShell: /bin/false
uid: admin
cn: admin
uidNumber: 501
gidNumber: 501

Use ldapadd -x -D "cn=Manager,dc=domain,dc=com" -W -f filename.ldif to add this to the directory and complete the configuration steps for your directory server. Our last step is to install the PHP scripts that will install Gosa on the web

Figure 1: Gonicus' freeware tool, Gosa, immediately following installation
Listing 1: Schema File Order
01 include /etc/openldap/schema/core.schema
02 include /etc/openldap/schema/cosine.schema
03 include /etc/openldap/schema/inetorgperson.schema
04 include /etc/openldap/schema/nis.schema
05 include /etc/openldap/schema/misc.schema
06 include /etc/openldap/schema/qmail.schema
07 include /etc/openldap/schema/gonicus/gohard.schema
08 include /etc/openldap/schema/gonicus/goto.schema
09 include /etc/openldap/schema/gonicus/goaccount.schema
10 include /etc/openldap/schema/gonicus/gofirewall.schema
11 include /etc/openldap/schema/gonicus/gofax.schema

Figure 2: The Gosa user interface from the viewpoint of the administrator
server. You can either create a relative link in the Apache configuration file, httpd.conf, or copy your Gosa root to the root directory of your web server. Now, Gosa should really be ready to go at this point – if it wasn’t for those pesky bugs and issues.
First Impressions Spoilt by Weaknesses
For example, when you modify a user, the corresponding LDAP record is first deleted, and then reinstated. This is by no means a perfect solution, because an error could mean losing the account entirely. The PHP programming language actually includes a statement for just such a task, ldap_modify, but Gosa does not use it. The Gosa developers' solution for checking privileges is also slightly cumbersome. To check the role assigned to a user, the program attempts to add a user account called admincheck to the directory when the user logs on. If this works, the user is an administrator from Gosa's point of view. If it does not work, possibly because the account already exists, you may find your admin account being degraded to a normal user – this is an unnecessarily complicated and dangerous system.
Conclusion: Gosa is headed in the right direction, but the project itself is tied to other projects under development by Gonicus. Users who prefer modular software may be disappointed by this product, as it can hardly be classified as a stand-alone toolkit. There are several issues involving the Gosa installation
Listing 2: ACLs for Gosa
# DN must reference the DN of the Directory Management
# Account.
access to attribute=deliveryMode
    by dn="cn=Manager,dc=myname,dc=com" write
    by self write
    by * read

# DN must reference the DN of the Directory Management
# Account.
access to attribute=mailForwardingAddress
    by dn="cn=Manager,dc=myname,dc=com" write
    by self write
    by * read

# DN must reference the DN of the Directory Management
# Account.
access to attribute=mailReplyText
    by dn="cn=Manager,dc=myname,dc=com" write
    by self write
    by * read

# The DN can point to an existing
# POSIX object in this case, Admin for example.
# This DN is used to manage the Gosa solution itself.
access to *
    by dn="uid=myadmin,dc=myname,dc=com" write
    by * read
Figure 3: It is easy to install an additional Webmin module
Using Webmin Plugins to organize LDAP The well-known Webmin [4] tool offers a variety of administrative functions for Linux servers. Several third-party modules are available to enhance the functionality of Webmin and one of them is the LDAP Users Admin Module [5]. The ldap-users module is easy to install
Figure 4: A sample ldap-users configuration
www.linux-magazine.com November 2002
59
SYSADMIN
LDAP-Clients
via the active Webmin interface. To do so, just run Webmin Configuration in the main menu and then call Webmin Modules (see Figure 3). After selecting Install Module from File you can then configure the module – configuration normally takes place immediately after selecting the new module. You can start using the user interface immediately after these steps (see Figure 4). The interface provides quick and easy access to the attributes of any Posix object, and also allows you to create new users, although these features are unfortunately not available for groups. In addition to the LDAP user administration you can also use Webmin to administer the OpenLDAP server. The plug-in is called OpenLDAP, for want of a better name, and is available from [6]. The module is also available for older OpenLDAP versions. The module for version 2 is called openldap2-X_X.wbm. The installation procedure is similar to the one used for the LDAP Users Admin Module, and can be accessed via Webmin’s Servers menu. In addition to configuring Access Control Lists you will hopefully be able to modify and create object classes in future versions. An option for maintaining server attributes is now available. Conclusion: It is easy to configure an OpenLDAP server using Webmin and LDAP modules. You can also delegate daily administrative tasks. If you already use Webmin, you will immediately feel at ease with the LDAP modules, as the look and feel of other modules is apparent.
One important pre-condition for all the programs described in this article is the ability to delegate administrative functions via Access Control Lists on the directory server itself. In the case of OpenLDAP the access rule is recommended for this purpose, as it can be defined for any user attribute. The following listing provides an example based on OpenLDAP: # All users are allowed to # maintain their own records # Other users have read-only # access. access to * by self write by * read
LDAP Browsers/Editors Figure 6: The LDAP Browser/Editor by Jarek Gawor provides a range of features comparable to commercial LDAP-Clients
Traditional: Native LDAP Client Programs Besides the Web browser based administration you can also opt for the traditional method and try a native graphic interface. Native tools may be quicker in comparison to generic solutions, but you will have the disadvantage of having to select a single operating system. (Neither of these statements applies to Java programs, of course.) There are several Linux applications of this type that permit more or less professional administration of your LDAP directories.
Figure 5: OpenLDAP Webmin modules for administering OpenLDAP servers
60
November 2002
www.linux-magazine.com
If users in large companies are to be allowed to maintain their own data, a user client platform dependent interface can cause you headaches. Java based GUIs are one solution. Although the language is not exactly famous for its graphic output speed, it will at least run on most platforms. The LDAP Browser/ Editor [8] by Jarek Gawor, who works for Chicago University, is just one example of a Java program. The current version of the tool, 2.8.1, requires the Java Runtime Environment 1.2.2 or newer, and is fairly stable on systems with at least 128 mbytes of RAM. The flexibility of the Java GUI is comparable to that of commercial LDAP clients – which makes this tool a must for people who normally use native LDAP client software (see Figure 6).
Figure 7: The Gnome Directory Administrator Tool includes wizards
The interface allows easy moving, manipulation and copying of directory objects. In addition to standard functionality, the LDAP Browser/Editor can also export data to the LDAP Data Interchange Format, LDIF. This allows you to export a complete LDAP tree in a matter of seconds. This approach is useful for creating backups and equally so for migration tasks. The interface uses templates to create new entries. The administrator can use a simple template to define the required attributes, and that can be a big help if you are defining custom objects. Conclusion: A universal and OS independent interface, such as the browser programmed by Jarek Gawor is a good thing to have around for those daily maintenance tasks, although performance could be better. Most administrators will appreciate a quality tool such as this – especially as it comes for free.
Directory Administrator A number of LDAP GUIs are available for the Gnome desktop. One of them is the Directory Administrator, which is mainly suitable for user administration. The website [9] offers RPM binary archives for Mandrake 8.2 and Red Hat Linux 7.3, so you will need the source archives for any other OS. After expanding the archive you can follow standard procedure to compile and install the sources: ./configure; make; make install. A wizard is available to the admin user
on initially launching the program (Figure 7) and can be used to create a connection profile. If everything works out okay, you will be able to view the users and groups stored in your directory (Figure 7). The interface can also perform tasks such as assigning users to groups, and it is extremely flexible with respect to storing user accounts and groups in OU hierarchies. Conclusion: If you are managing the users and groups of a department in an organizational structure, the Directory Administrator is a good choice. The tool will perform tasks such as creating users, or defining a user’s Samba shares, quickly and easily.
GQ and KdirAdm There are numerous alternatives to the tools already discussed. GQ [9] for example, an LDAP client for the Gtk environment which is comparable to the LDAP browsers already discussed in most respects as it allows you to manage the objects in your directory. Another lookalike is KdirAdm [10].
Conclusion: useful for simple cases Depending on their quality and the individual application you have in mind, the Open Source directory management tools that we have introduced in this article may (or may not) be useful to an administrator. Many of the tools tend to introduce too many levels to what are in effect simple administrative concepts.
THE AUTHOR
LDAP-Clients
SYSADMIN
Volker Schwaberow is a technology consultant for RAG INFORMATIK GmbH in Gelsenkirchen, Germany, and started looking into Linux and associated topics in 1995.The author’s hobbies are reading, listening to music and programming in C/C++, Java, Perl, and PHP.
It is also vital that you restrict access to the directory via ACLs. As an administrator you can base your choice of tool on the task in hand, although this may make life difficult for you when you first attempt to draw up an implementation plan. Statistical evaluation is probably the best way to handle this. Statistics will help you determine whether you can safely delegate a wide range of administrative tasks. As in many other cases the approach, and the results, will only be as good as your advanced planning. Your only option at this stage will normally be to create a list of mandatory administrative tasks and check whether one of the tools we discussed is suited to them. A combination of tools may even prove to be your best option for simplifying your daily workload. You might consider using a Web interface that allows your users to modify their own personal data and passwords, but provide a native graphic frontend for the administrator. ■
INFO [1] OpenLDAP: http://www.openldap.org [2] PHP: http://www.php.net [3] Gonicus: http://www.gonicus.de [4] Webmin: http://www.webmin.com [5] LDAP Users Admin: http://ldap-users.sourceforge.net [6] OpenLDAP Webmin Module: http://gaia.anet.fr/webmin/openldap [7] Directory Administrator: http://diradmin.open-it.org/index.php [8] LDAP Browser/Editor: http://www.iit.edu/~gawojar/ldap [9] GQ for Gtk: http://biot.com/gq/ [10]KDE Directory Administrator: >http://www.carillonis.com/kdiradm [11] RFC1779 – A String Representation of Distinguished Names: ftp://ftp. isi.edu/in-notes/rfc1779.txt [12] RFC1778 – The String Representation of Standard Attribute Syntaxes: ftp://ftp.isi.edu/in-notes/rfc1778.txt Figure 8: Making the Administrator’s life simple: All the groups and users are
[13] RFC1777 – Lightweight Directory Access Protocol: ftp://ftp.isi.edu/ in-notes/rfc1777.txt
visible at a single glance with Directory Administrator
www.linux-magazine.com November 2002
61
PROGRAMMING
C: Part 12
Language of the 'C'
Following on from last month's article, Steven Goodwin looks at how the make utility can be used to improve the development process. BY STEVEN GOODWIN
In preparing this section, I asked twelve different programmers for the best way to write a make file. I got twelve different answers! Writing makefiles, like code, novels or music, is a uniquely individual experience. There is no right or wrong way – whatever works (and is readable!) can be considered a 'good makefile'! The method we're using here is fairly 'traditional' and shall be developed from first principles, so you can see each step in the process.
Make
So first off: what is a makefile? And what is make? Well, make is a utility that helps reduce development time by allowing us to only rebuild parts of the project (using gcc) that need it; if you have not changed 'converter.c' or included header files, why would you want to spend time compiling it when the result will be the same as it was last week?! Conversely, if you've changed a header file that is used in four places,
you would want to rebuild those source files to reflect the changes. A makefile (by default called Makefile – the capital M is important!) describes each component of the project, how they should be built, and what constitutes them being 'out of date'. The watchword here is dependency. If we have a two file project where converter.c includes converter.h, then we can say converter.c is dependent on converter.h. If converter.h changes, it stands to reason that converter.c must also have changed in some way, and so needs to be re-built. We can build a makefile to describe this (Listing 1). We can then build this program by typing:
make

If you had not named your file Makefile but listing1make, for example, then you will need to use the -f flag:

make -f listing1make

Listing 1: Makefile
1 convunit: converter.c converter.h
2 	gcc converter.c -o convunit

Line 1 describes a target. The text to the left of the colon dictates what we want to produce (an executable file called convunit in this case), whilst the right hand side lists the dependent files we have to use in order to build it. Our makefile is effectively saying that should converter.c or converter.h change, then convunit will be out of date and needs to be rebuilt. Each subsequent line after a target that begins with a tab (and only a tab!) holds the command, or commands, that we must execute in order to produce the target file. It stands to reason, therefore, that those commands must produce the target file in some manner. You can include as many commands as you need;
semi-colons let you put two or more commands on a line, while the backslash is available for line continuation if required. This is sometimes necessary, since each line executes in its own shell, and you might need to include several commands. The following would fail if each instruction was placed on a different line:

main: source/converter.c
	cd source; gcc converter.c
make will execute each command in sequence until none is found (i.e. the line does not begin with a tab) or an error occurs. At this point it will stop trying to build that target and exit. To suppress these errors, start each command with a minus sign and it will continue with the next instruction (we'll see where that is used later). Also, as each command is echoed to the screen, you may wish to stop this by using the @ prefix:

convunit: converter.c converter.h
	@echo "Now compiling converter.c ..."
	gcc converter.c -o convunit
Temporary Like Achilles
This makefile can be improved however by building object files, and not executables. Object files are compiled versions of source code (each can consist of one or more 'C' files), which lack the essential ingredients that make them executable (like access to glibc, and a place to start, for instance!). This not only makes them smaller, but also does not tie them in to any particular executable. They can be built individually, and then linked together with other object modules to make one executable. For projects with several source files, this also means that updates can be built with just one compile and one link, which is much more efficient than several compiles, and one link. Object files (by convention) use the .o extension, which is usually pronounced "dot oh!".

Listing 2: Makefile
1 convunit: converter.o
2 	gcc converter.o -o convunit
3
4 converter.o: converter.c converter.h
5 	gcc -c converter.c -o converter.o

Here, we are nesting targets (Listing 2). In this example, convunit is dependent on converter.o (an object file), which in turn is dependent on the two files converter.c and converter.h. Should any of these files change, convunit will be re-built. We can place the targets in any order we choose; however, the first target given (convunit) is the one built by default and so should be the main executable. Looking to the bigger picture, we have already split our project into modules (see last month's Linux Magazine issue 24) and have five ready-made targets (core, config, process, output and debug) that map nicely onto five object files. From this we can build a complete makefile for the project (Listing 3):

Listing 3: Makefile
1 convunit: converter.o config.o process.o output.o debug.o
2 	gcc converter.o config.o process.o output.o debug.o -o convunit
3
4 converter.o: converter.c converter.h config.h output.h process.h debug.h
5 	gcc -c converter.c -o converter.o
6
7 config.o: config.c converter.h config.h process.h
8 	gcc -c config.c -o config.o
9
10 process.o: process.c converter.h process.h
11 	gcc -c process.c -o process.o
12
13 output.o: output.c converter.h output.h process.h
14 	gcc -c output.c -o output.o
15
16 debug.o: debug.c converter.h debug.h process.h
17 	gcc -c debug.c -o debug.o

These last two examples make use of the '-c' option of GCC, which indicates we want to only build an object file, and not a complete executable. Our first invocation of make will build five object files (the .o files from lines 4, 7, 10, 13 and 16) and one executable (line 1); our second will build none! It will spot that the file convunit is newer than all its dependencies (converter.o
and config.o, process.o, output.o and debug.o) and report that it is "up to date". Whenever a file changes, only the necessary dependencies will be rebuilt. This is determined by looking at the date stamp of the files in question. You can test this by typing:

touch output.h
make
This will then build core.o and output.o (since they are the only targets that depend on output.h) and re-link a new executable with the 3 old, and 2 new, object files. It is very rare to include header files like stdio.h and stdlib.h in the dependencies list. This is because they are standard headers, and changing the function prototypes or macros here would require a change in the glibc libraries also. That last happened many years ago with the switch from version 5 to 6, and required a complete recompile of all system and user software.
Showroom Dummies
To ease the task of maintenance, make supports macro substitutions which you can use to save re-typing repetitive command line switches. This is especially useful for changing compiler and linker options as one macro can replace everything in one go. Macros are, by convention, always upper case and defined as a 'name=
substitution' pair. They are used with the $(NAME) syntax and are substituted automatically before executing any build command. This way, any errors are explained with real commands and parameters, instead of macro names that may be quite complex and obtuse.

CC = gcc
CFLAGS = -Wall

converter.o: converter.c converter.h
	$(CC) $(CFLAGS) converter.c
A number of macros exist by default (type "make -p" in a shell to find out which) but these can still be changed if necessary. There are also a number of standard macros that you will see, so you should become at least comfortable with them (see Tables 1 & 2). Macros can also be set from the shell, by giving the 'name=substitution' pair as an argument to make:

make CFLAGS=-Wall
Table 1: Conventional Macros
Macro   Description                              Example
CC      Name of the C compiler                   gcc
MAKE    The make utility                         make
AS      Assembler                                as
LD      Linker                                   ld
FC      Name of the Fortran compiler (really!)   f77

Table 2: Common Macros
Macro     Description
TARGETS   The names of the targets being compiled
SOURCES   Those files to be compiled
LIBS      Directories for other libraries
INC       Directories for other header files
CFLAGS    Compiler flags
LFLAGS    Linker flags

Table 3: Special Variables
$@    Name of the current target
$$@   As $@, but only available on the dependency line
$?    Files that are newer than the target, and so need building
$%    Member files of library files
$<    $? for suffix rules
$*    $@ for suffix rules. The file's suffix is omitted, however.

Box 1: Targets
.SILENT:    Does not echo any command executed. Equivalent to prefixing each command with an @.
.IGNORE:    Ignore any errors from the commands. Equivalent to - on each command.
.PRECIOUS   Does not remove the target file after an error.
.DEFAULT    Tries to build this if the given target doesn't exist.
.PHONY      Indicates that these targets do not really compile into programs. Used for cases like 'clean' and 'install', in case there's a file called (say) 'clean' in the current directory that could confuse the situation.
Indicates that these targets do not really compile into programs. Used for cases like ‘clean’and ‘install’,in case there’s a file called (say) ‘clean’in the current directory that could confuse the situation.
?Member files of library files?
$<
$? for suffix rules
$*
$@ for suffix rules.The files suffix is omitted, however.
64
November 2002
The implied dependencies of an executable (converter, in the case above) is its equivalent .o file, and anything else given on the right hand side of the colon. That is – its usual dependencies. For advanced work, it is possible to create your own implied dependencies; they are called suffix rules.
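Suffix rules themselves are a topic for another day, but to give a flavour, here is a minimal sketch – using only the classic suffix-rule syntax, nothing specific to our project – that teaches make to build any .o file from the matching .c file:

.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c $< -o $@

As Table 3 notes, the $< and $* variables only carry a meaning inside suffix rules such as this one.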
Time After Time
In addition to macros, there are a number of special variables with a similar appearance to macros, as both start with a '$' symbol. When building makefiles they can be used to enhance error messages, or to provide parameters to other programs. They also work inside quoted strings:

converter.o: converter.c converter.h
	@echo "Trying to build $@ (because $? are too new!)"
	$(CC) $(CFLAGS) -c converter.c

For a list of these special variables, please refer to Table 3.
Box 2: Useful compiler flags
-D_DEBUG_FLAGS   Automatically defines the macro '_DEBUG_FLAGS' for the source code.
-g               Include GNU debugging information in the executable. This allows you to use gdb to step through the program one line at a time.
-c               Compile and assemble, but don't link, i.e. create the object file.
-o converter     Specify the output file.
-Wall            Specifies the warning level. 'All' is best.
-O3              Specify the optimisation level. 0 is off (debug), 3 is the highest. Using -Os will optimize for space, instead of speed.
-fPIC            Switch specific flag options. Here, PIC tells gcc to produce position independent code (if possible). The option name is case insensitive. Used to produce libraries that would work in more than one place.
-I /usr/local/apache2/include   Also search the named directory for header files. Same as the INC common macro. Note: There is no space between the flag switch and the parameter, except with 'I'.
Shoot That Poison Arrow
When make is run without arguments it will look for the first target in the makefile and try to build it. If the makefile contains more than one project, you should create an extra target named all, which is dependent on each of the other targets. This way, every project will get built with a single call to make. You can also build a specific target by including it as an argument:

make testbed
make config.o
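A minimal sketch – assuming a second, hypothetical program called testbed is built from the same makefile – shows that all is just a target with no commands of its own:

all: convunit testbed

Placed first in the makefile, all becomes what a bare call to make builds; it is also a natural candidate for the .PHONY list discussed below.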
Now, most Linux users who build from sources are familiar with the trio of ./configure, make, make install. If both the above sentences are true, then 'install' must be the name of a target. Funnily enough, it is! The 'install' target often includes commands to copy configuration and executable files to the appropriate place. These targets, however, are phony – they don't really produce a file – and as such need to be indicated by adding a .PHONY line to the makefile (see Listing 5 and Box 1: Targets). We can use this knowledge to enhance our makefile by adding clean and install. Notice that in the case of clean we ignore all errors, and with install we suppress the echoing of the command; installing will also require superuser privileges. In these cases no dependencies are given, meaning the instructions are executed every time that particular target is called. This produces a complete makefile, ready for use!
Listing 4: Makefile
1 # We're now using implied dependencies!
2 convunit: converter.o config.o process.o output.o debug.o
3 	gcc converter.o config.o process.o output.o debug.o -o convunit
4 converter.o: converter.c converter.h config.h output.h process.h debug.h
5 config.o: config.c converter.h config.h process.h
6 process.o: process.c converter.h process.h
7 output.o: output.c converter.h output.h process.h
8 debug.o: debug.c converter.h debug.h process.h

Listing 5: Makefile
01 CFLAGS = -Wall
02
03 converter: converter.o config.o process.o output.o debug.o
04 converter.o: converter.c converter.h config.h output.h process.h debug.h
05 config.o: config.c converter.h config.h process.h
06 process.o: process.c converter.h process.h
07 output.o: output.c converter.h output.h process.h
08 debug.o: debug.c converter.h debug.h process.h
09
10 clean:
11 	-rm *.o converter
12
13 install:
14 	@echo "Copying conf file to /etc"
15 	cp convert.conf /etc
16
17 .PHONY: clean install

There's a guy works down the chip shop?
As time goes on, and projects change, the makefile will become outdated. We'll
need to add more targets, change dependencies, or remove old files. Doing this manually can become a bind, so there are a number of tools to help you, such as mkdepend, mkmkf and makedepend. We shall look at the latter. As the name suggests, makedepend will build a list of dependencies for the files specified on its command line. So, assuming all our source files are in the same directory (and it contains no rogue files from other projects), we can type:

makedepend *.c
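The appended block takes roughly the following shape – this is an illustrative sketch only, as the exact entries and header paths depend on your system, and the real output is considerably longer:

# DO NOT DELETE

converter.o: converter.h config.h output.h process.h debug.h /usr/include/stdio.h
config.o: converter.h config.h process.h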
And a complete list of dependencies (including things like stdio.h and stdlib.h) will be built, stored in the makefile. And in the correct format! Makedepend does a couple of clever things here. First off, it makes a back-up of your original makefile and calls it 'Makefile.bak'. Then it appends the dependency information to Makefile. What is clever here is that a second call to makedepend will not re-append the same data. Even in a small project such as ours, makedepend can add 50 or more lines to the makefile. How does it know? Well, it adds a comment marked 'DO NOT DELETE' before the appended text. If this already exists, makedepend removes the text below it, and adds the new information. Naturally, calling makedepend without arguments will not find any dependencies and thus produce an empty block at the bottom of the file. This is still useful, as it makes the makefiles small enough to fit in a magazine! And as long as we add the dependencies back to the makefile before trying to compile, all is well! ■
THE AUTHOR
The language of 'C' has been brought to you today by Steven Goodwin and the pages 62–65. Steven is a lead programmer, currently finishing off a game for the Nintendo GameCube console. When not working, he can often be found relaxing at London LONIX meetings.
Perl: Part 6
Thinking in Line Noise

This month we introduce some of the more powerful idioms and features of Perl and show why it's still one of the hackers' languages of choice. BY DEAN WILSON AND FRANK BOOTH

While the aspects of Perl that have been covered in this series so far are enough to start you on your way to becoming yet another Perl hacker, they have been the basics of the language, and offer little that other languages do not – albeit in far fewer lines of code.
Nested Data Structures
Perl facilitates complex data structures in several ways; by far the most readily understood are the "hash of hashes" and "list of lists": these are nested data structures. It is also possible to have "lists of hashes" and "hashes of lists". What we mean when we say a "list of hashes" is that the data structure is a list that contains hashes as its elements. One example might be a list of people and, for each person, a list of their top five favourite shell commands:

Terry: rm -rf*, chmod 777, kill -9, ln -s, reboot
Billy: vim, df -h, ls -lah, ps -eaf, mutt
We could write that very quickly in Perl, using nested data structures. In this instance a hash of lists appears to be the most sensible, as the users' names will be unique identifiers and the top five commands have no other significance but the order in which they occur. Using a hash structure for the names and a list for the commands for each user, we can access the information in an intuitive fashion and pull out details such as the favourite command used by Billy. We'll show ways of obtaining this data later. First we'd better store the data. One way of writing this in Perl would be:

my %commands = (
    Terry => [ 'rm -rf*', 'chmod 777', 'kill -9', 'ln -s', 'reboot' ],
    Billy => [ 'vim', 'df -h', 'ls -lah', 'ps -eaf', 'mutt' ]
);
Within the code sample above, the only unfamiliar symbol should be the square brackets; these are used to denote anonymous lists. Anonymous lists are arrays without a name. A clue to this is the square brackets '[]', usually seen when accessing elements of an array:

print "$some_array[4]\n";
So it's not really counterintuitive that square brackets be used elsewhere for arrays. Using this philosophy, can you guess what sigils we use to create an anonymous hash? We use '{}' curly braces, as we use curly braces to retrieve a value from a hash:

print "$some_hash{four}\n";
Or if you prefer:

print $some_hash{'four'} . "\n";
Returning to our list of users' favourite commands again, when we need to reference the data inside our nested structures, we need a means of specifying the element inside the parent data structure we want. We can access '%commands' in the normal fashion:

# returns a string akin to 'ARRAY(0x1ab54d0)'
print "$commands{'Billy'}\n";
But this returns a string that looks like "ARRAY(0x1ab54d0)", which actually tells us a lot, but not what we wanted. The uppercase 'ARRAY' tells us that the returned value is of an array reference type, and the characters within the parentheses tell us where Perl stores the reference. To access the data from the list within the hash, we use the arrow
operator '->', which enables us to access the list within the hash:

$commands{Billy}->[0];
This will return the first item from Billy's list of shell commands. For the purposes of this exercise, we'll say that the list places favourite items first. Just as we can make a hash of lists using anonymous lists, we can make an array of hashes too – see the sketch below. You may by now be asking yourself if there are any other things that can be made anonymously.
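For instance, here is a minimal sketch of an array of hashes (the people and data are made up for illustration). Note that Perl lets you omit the arrow between adjacent subscripts:

my @people = (
    { name => 'Terry', shell => 'bash' },
    { name => 'Billy', shell => 'zsh' },
);
print $people[1]->{name}; # prints Billy
print $people[1]{name};   # exactly the same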
Anonymous Subroutines
It's the Perl way: if you've got anonymous hashes and anonymous lists, then what about the functions and the scalars? Perl provides them too. An anonymous function seems like an odd thing to have, until you get sufficiently lazy; then you find yourself using them.

my %func = (
    stdout => sub { print @_ },
    log    => sub { print LOG @_ },
    stderr => sub { print STDERR @_ },
    not_stderr => sub {
        print @_;
        print LOG @_;
    },
    not_stdout => sub {
        print STDERR @_;
        print LOG @_;
    },
    all => sub {
        print STDERR @_;
        print LOG @_;
        print @_;
    }
);

# Print to all bar stdout:
&{ $func{not_stdout} }( 'hello', 'world' );

# Print lots of times to each:
&{ $func{$_} }( 'hello', 'world' ) for keys %func;

The last command seems somewhat nonsensical: as it's a hash data structure, there is no telling what order the elements will emerge in when using the keys command, which could cause problems. If you require order, use an array. The following example will produce a set of error levels, increasing in urgency.

my @warn = (
    sub { print STDERR @_ },
    sub { print LOG @_ },
    sub { print @_ },
    sub {
        die "I can't function under these conditions: ", @_, "\n"
    }
);

sub notify {
    my $error_level = shift || -1;
    &{ $warn[$_] }(@_) for ( 0..$error_level );
    -1;  # the return value
}
This example will report messages back according to the error level. If the error level was 1, it would write to STDERR and the log file. At level 3, it would write to STDERR, LOG and STDOUT, and then finally stop the program with a final message. It does this by looping through the array of anonymous subroutines. There are a few things that happen implicitly, that we'll examine now:

my $error_level = shift || -1;
When variables are passed into a function they're passed in as an array ( @_ ). The 'shift' operator removes the first item from an array; inside a function its default is '@_', so if no array is specified, that is what it operates on. If '@_' is empty, shift returns a false value and -1 is placed in the variable instead; why will become clear soon.

&{ $warn[$_] }(@_) for 0..$error_level;
This line uses Perl's 'for' looping construct to iterate over the first part of the line. In this instance it will repeat for every number from 0 to $error_level, and the value for the current iteration will be put into the default variable '$_'. Since the 'for' loop occurs at the end of the line, the loop condition doesn't need braces. It is worth noting that the range operator ( '..' ) will not work backwards: it won't count from 10 to 1 using 10..1, it will merely skip the entire loop as having failed on the first attempt. The first part of the line calls the function from the array '@warn'; the element it references is the value of '$_', and the parameters the function is passed are the remainder of the parameters passed to the notify function. The ampersand '&' denotes that the thing in the array is a function; otherwise Perl would expect a normal scalar value and would interpret the function as such.
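Putting it all together, a hypothetical call (the message text is invented, and the LOG filehandle is assumed to be open) might read:

# level 1: prints to STDERR and writes to the log
notify( 1, "disk space is running low\n" );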
References
References are scalar variables used to point to anonymous data types and functions. In all the above examples we've relied on the containing data structure to ensure we look at the data we meant to, or call the function we intended. We can just as easily use a scalar variable to do the same task.

my $array_reference = [ 1, 2, 3, 4, 5 ];
my $hash_reference = {
    beef    => 'corned',
    cabbage => 'over-cooked'
};
We refer to the elements within the reference using the arrow operator '->', or by prefixing the reference with an extra '$' sigil:

$array_reference->[0];
$$array_reference[0];
$hash_reference->{beef};
$$hash_reference{beef};
We can refer to anonymous functions:

$func = sub { print "foo\n" };
&$func;
We can refer to scalar values too:

$func = \'3.14';
print $$func;
Here we've prefixed the variable with an extra sigil that acts as a data type constraint. Putting the wrong type in a data type constraint will result in the program concluding rather sooner than you'd hoped; if you don't know what type of data to expect, try something like this:

sub handleref ($) {
    my $reference = shift or return;
    $_ = ref( $reference );
    /SCALAR/ and return $$reference;
    /ARRAY/  and return join( ', ', @$reference );
    /HASH/   and return join( ', ', keys %$reference );
    $reference;  # not a reference: return the value itself
}
This program uses the 'ref' function to determine the data type of a reference. 'ref' returns one of a number of possible values including the more common: SCALAR, ARRAY, HASH or ''. The last value indicates that the parameter sent was in fact not a reference at all. What the code does is define the response taken when passed different data types; here is a list of the input types and resulting output:

DATA TYPES
Type       Action
SCALAR     Return the value.
HASH       Return a joined list of keys.
ARRAY      Return a joined list of values.
NOT A REF  Return the value itself.

'ref' is extremely useful when using generic data structures that can nest any type of data, with no defined limit to the depth of nesting, as it allows fully automatic determination of a reference's type. You may want to create a reference to an existing structure, to enable access from a function, or to link to a dynamically structured list. We use the '\' backslash operator to take a reference to a value:

# Makes a reference to a scalar
my $foo_ref = \$foo;

# Makes a reference to @foo called $foo_arrref
my $foo_arrref = \@foo;

# Make an array of references.
my @list_of_arrays = ( \@foo, \@bar, \@baz );

# this can also be written:
@list_of_arrays = \( @foo, @bar, @baz );
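Getting at the data again is just dereferencing as before; a brief sketch, assuming @foo from above holds at least one element:

print $list_of_arrays[0]->[0];    # first element of @foo
print ${ $list_of_arrays[0] }[0]; # the same, in sigil form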
Here be Dragons
Closures are one of the more complex features of Perl, in that they build upon previous knowledge and require a grasp of a number of the language basics, such as scope and pass by reference, before they become readily comprehensible. However, like most magic, you don't need to understand it to wield it. A closure is a function that exploits both the lexical scope it is declared in and Perl's garbage collection algorithm to preserve a variable beyond its expected lifetime.
We've not yet discussed Perl's garbage disposal routine in any depth, as it is unobtrusive and it rarely falls to the programmer to know or care what it does and how it works. It tidies up after us and ensures that the memory no longer used in our programs is released. The garbage collector in Perl works on a very simple (in theory) principle known as reference counting. Whenever a new variable is created it starts off with a reference count of 1, and each time a reference to that variable is taken the count increases by one. Each time a reference to the variable falls out of scope the reference count decreases by one, and when no more references point to it (i.e. the reference count is zero) the variable is 'reaped' by Perl's garbage collection and the memory it used is released automatically – no explicit 'malloc' and 'free' for us! To clarify how closures work let's look at what we know. We know that a variable declared in the scope of a block only exists for that block…

{
    my $count;
    print "$count\n";
}
# this line fails compilation
# as $count is not visible
print "$count\n";
We also know that a function is global regardless of where it is defined:

{
    sub phrase {
        return ' I can be called anywhere ';
    }
}
print phrase(); # this works.
So what happens when we mix the two?

{
    my $count = 0;
    sub set($)   { $count = shift }
    sub incr()   { $count++ }
    sub getcnt() { $count }
}

set(5);  # sets the count to 5
incr;    # adds 1 to count.
# this prints 6
print getcnt(), "\n";
We get a variable named '$count' that exists only for the functions 'set', 'incr' and 'getcnt'; any other attempt to reference the variable will fail. This gives us a "tamed" global variable that has limited ways of being altered while also providing some data encapsulation: a global variable we can manage. There are instances when global variables need to be used, and there are instances when you can use a closure instead to make the code a little safer and avoid another global. If you think this looks a little like very primitive Object Orientation (OO) then you may not be surprised to know that these principles will stand you in good stead when we get to Perl's OO facilities. While the above is a useful application of a closure, it is not the most common use of closures. In the example below we use an anonymous subroutine to create a bespoke function. This is probably the most popular and often seen use of closures within Perl.

sub hello($) {
    my $message = shift;
    return sub { print "Hello $message\n"; };
}
This is a customisable function. A closure can be created by calling the function like so:

my $std    = hello('world');
my $song   = hello('dolly');
my $phrase = hello('nurse!');
We can call all the separate closures using the ampersand symbol, to signify it's a function, and the variable that holds the reference to the anonymous subroutine. So:

&$std;    # will print: Hello world.
&$song;   # will print: Hello dolly.
&$phrase; # will print: Hello nurse!
These rather trite examples serve only to illustrate the basics of how closures work but hopefully they will whet your appetite for the advanced potential uses they provide once you have made it past the initial hurdle and understand how they work.
Data::Dumper
Once you've started to use more complex references you'll inevitably want to view the contents of a complex data structure. While your first instinct may be to 'unroll' the structure with a number of loops, a better approach would be to use a module from the Perl core (it's installed by default) called 'Data::Dumper'. We'll show uses of Data::Dumper here without explaining all the details behind using modules, as a gentle introduction. A full explanation will be covered in a future column. 'Data::Dumper' is a module that is capable of serializing Perl data structures so they can be printed to screen or even written to a file while remaining valid Perl code. The last point is an important one that warrants a deeper explanation: the stringified version of the data structure is still valid Perl code. This allows it to be used in an 'eval' to recreate the structures in the current application, and even to be read in from a file and used as a simple persistence layer. The example 'simple_dump.pl' below shows a rudimentary use of 'Data::Dumper' to print a hash containing hash references. Although the example may look slightly contrived, the principles can still be applied to larger code, such as a function passing back a complex hash ref of configuration settings – for example from an 'ini' file style configuration.

#Example: simple_dump.pl
use Data::Dumper;

my (%config, $config_ref);
%config = (
    email => {
        workdir => '/home/dwilson/work',
        logdir  => '/var/log/perlapps/examples/email'
    },
    news => {
        workdir => '/home/dwilson/work',
        logdir  => '/var/log/perlapps/examples/news'
    }
);

$config_ref = \%config;
print Dumper($config_ref);
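Running this prints a representation along the following lines (the layout is Data::Dumper's own, and since hash keys are unordered the email and news blocks may swap places):

$VAR1 = {
          'email' => {
                       'workdir' => '/home/dwilson/work',
                       'logdir' => '/var/log/perlapps/examples/email'
                     },
          'news' => {
                      'workdir' => '/home/dwilson/work',
                      'logdir' => '/var/log/perlapps/examples/news'
                    }
        };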
This example shows a simple use of Data::Dumper's procedural interface to print the representation to the console. The first line imports the 'Data::Dumper' module and allows any of its exported functions to be called. We then create both a hash and a scalar, and immediately put some sample data in the hash. It's useful to note how the hash of hashes is built up manually, as the 'Data::Dumper' representation is remarkably similar. The line following should now be familiar, as we take a reference to the hash. Finally we make use of Data::Dumper with the exported 'Dumper' function. If you run the code you'll see how closely the output resembles the original code. The 'Data::Dumper' module itself can be used in either a procedural or object orientated (OO) fashion, allowing it to fit inconspicuously into the surrounding code, as all good third party modules should. The example below uses the OO interface and requires only minimal changes:

#Example: simple_dump_oo.pl
#above here we would create the hash
my $dumper = Data::Dumper->new([$config_ref]);
print $dumper->Dump;
We start the 'simple_dump_oo.pl' example with the same set up code used in the 'simple_dump.pl' example. The code changes begin in the last few lines as we create an instance of the Data::Dumper class and pass in the reference we would like to have it work on; notice the square brackets used to wrap the reference, as Data::Dumper's constructor expects its first argument (a second, optional argument is allowed) to be an array ref. Once we have a variable holding the object we then call the 'Dump' method and get the same on screen information we did with the procedural version. Now that the basic use of Data::Dumper has been shown, we move on to some useful options that can be configured to customize how Data::Dumper represents its output. These options are
set differently depending upon the way in which you are using the module; for the moment don't worry about their purpose, but rather how they are set. For the procedural version:

$Data::Dumper::Useqq = 1;
$Data::Dumper::Varname = 'foo';
These configuration settings are global, so it is prudent to limit the scope the changes affect by using them within a separate, often anonymous, block; this is best done using 'local':

{ #start anonymous block
    local $Data::Dumper::Useqq   = 1;
    local $Data::Dumper::Varname = 'foo';
} # changes are lost when the code reaches here.
The options are set using methods in the OO style of use, and look like this:

$dumper->Useqq(1);
$dumper->Varname('foo');
When the settings are changed via methods they do not require jumping through hoops to limit the scope of the change, as any change applies only to the one object:

my $dumper_cust = Data::Dumper->new([$config_ref]);
$dumper_cust->Varname('foo');
print $dumper_cust->Dump;

my $dumper_raw = Data::Dumper->new([$config_ref]);
print $dumper_raw->Dump;
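The only visible difference between the two dumps is the variable name prefix; schematically (the structures' contents are elided here for brevity):

$foo1 = {
          'email' => { ... },
          'news' => { ... }
        };
$VAR1 = {
          'email' => { ... },
          'news' => { ... }
        };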
When the second Data::Dumper instance ('$dumper_raw') prints its output it will use 'VAR' instead of 'foo'. Now we have covered setting the values, it is useful to know that the methods also act as accessors: if you call one with no parameters it returns the current value:

my $prefix = $dumper_raw->Varname();
# prints 'default prefix is VAR'
print "default prefix is $prefix\n";
While the default settings are often enough, you may occasionally need to tweak the settings to suit the use the module is put to. Two of the more useful settings are $Data::Dumper::Indent and $Data::Dumper::Useqq, or in OO parlance $OBJ->Indent and $OBJ->Useqq. The first of these two, '$Data::Dumper::Indent', controls the general human readability of the output structure, from the minimum value of '0', which strips out all but the essential white space, leaving the output as valid Perl code but not easily human readable, through to a maximum value of '3'. The default value is '2', and this causes the output to have newlines, nicely lined up entries in hashes and the like, and sensible indentation. While a value of '2' is often enough, if you are dealing with a large number of complex arrays then it is worth at least considering a value of '3', as its main benefit is to put out the array index along with the data, allowing quick visual look-ups at the cost of doubling the output size. In practical terms it is often enough to leave the setting at its default value, but if you are using Data::Dumper to serialize the structures to disk then you can get away with a lower level, as it only needs to be machine readable. The second of the more useful options is '$Data::Dumper::Useqq', which causes the data to be put out in a more normalized form: white space is represented as metacharacters ('\n', '\t', '\r' instead of literal white space), characters that are considered unsafe (such as '$') are escaped, and non-printable characters are printed as quoted octal integers.

#Example: multi_oo_escape.pl
use Data::Dumper;

my %chars;
%chars = (
    # one tab and one space (a literal tab in the original)
    whitespace => "\t ",
    unsafe => '$',
    # literal carriage return
    unprintable => '^M'
);

my $dumper = Data::Dumper->new([\%chars]);
$dumper->Useqq(1);
print $dumper->Dump;
In the 'multi_oo_escape.pl' example above we have one of each type of character used as values in a hash, which we then pass as a reference to the Data::Dumper constructor. We then set 'Useqq' to a positive value to turn it on, and then call Dump, getting output like this:

$VAR1 = {
          "unsafe" => "\$",
          "unprintable" => "\r",
          "whitespace" => "\t "
        };
Notice that the unprintable carriage return (generated in vi using CTRL-V and then return) is printed as '\r', the tab is printed as '\t', and the single dollar is escaped to prevent it from having any special meaning. The downside to the additional functionality of 'Useqq' is that it incurs a performance penalty: most of Data::Dumper is implemented in C (using XSUB), whereas this function is implemented in pure Perl. Now we have covered the basics and the more useful of the features Data::Dumper provides; if you want to carry on experimenting with it you should look at perldoc Data::Dumper.
They think it's all over…
The use of references is often the difference between an easy to follow and maintainable piece of code and a tangled mess of line noise, and remains one of the more important areas of Perl 5 syntax to understand. Fortunately the best documentation on references (although the examples are quite terse) is included in the Perl distribution itself. A good place to start is with perldoc perlreftut; it's a lighter read than the others and has a number of easy to follow examples. Once you have the basics down you can either go for the in-depth details with perldoc perlref, or go for more example code and explanations in perldoc perllol, which focuses on arrays of arrays. There are more varied examples in the data structure cookbook, perldoc perldsc. A good final note is the reference page for Data::Dumper itself, possibly the best way of viewing or debugging references: perldoc Data::Dumper. ■
KTools
KStars
The Sun, the Moon and the Stars
No need for a trip into hyperspace, when the KDE planetarium brings the stars to your living room. Take your computer on a trip through the night skies of the Indian summer with KStars! BY STEFANIE TEUFEL
Twinkle, twinkle little star… Of course everyone knows this lullaby, but do you know the exact positions of Mars, Saturn, or Jupiter? No? In that case, you are either a candidate for your neighborhood observatory, or you might like to try KStars, KDE's desktop planetarium. KStars is included in the kdeedu package. To launch the program, just click on Educational / KStars (the Desktop Planetarium) in the K menu of KDE 3. You will be rewarded with a top notch astronomy program that identifies over 40,000 stars and 13,000 other objects, and is capable of displaying the night sky as visible at any point on the globe. This may take a while to load, but KStars does at least let you know what progress it is making. The program design and content are quite intuitive. Figure 1 shows the night sky in the Cassiopeia constellation. KStars displays the stars in realistic colors and with their true relative luminosity. The developers have also labeled the brightest stars. Deep sky objects (that is, objects more distant than the nearest stars, such as galaxies, nebulae and star clusters) are indicated by colored symbols. The info-bar at the top of the screen
KTOOLS
In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn't want to do without.
shows the current time and date on the left, the coordinates in the middle, and the current geographic location on the right. KStars uses the status bar at the bottom of the screen to additionally display the name of the object you have just selected, and the coordinates of the current mouse cursor position.
Where am I?
The standard home position is Greenwich in England, the site of the Royal Observatory, which is well-known as the prime meridian. If you want to look at the night sky above your home town, select Location / Geographic… in the menu to open a configuration window just like the one in Figure 2. The developers provide a list with the longitude and latitude of over 2000 towns in the top right corner. If your home town happens not to be on the list, you can always try a city close by. Every city is represented by a dot on the world map, and immediately marked by a red crosshair when you select a list entry. Instead of spending time scrolling through the list, you can type the first letter of your home town in the filter box, reducing the search scope. If your home site is missing from the list, you can always add the coordinates in the lower area of the coordinate window. All the fields in this window, except State/Province, are mandatory, and must be filled in before you can click on Add to List.

Figure 1: Visiting Cassiopeia
Figure 2: The Night Sky above Cologne

Late for an important date?
When you launch the program, it synchronizes its internal clock with your system clock in order to display your chosen constellation in realtime. However, you can select Stop Clock under the Time item to stop the program clock or make it run more quickly or slowly. You can use Time / Set Time… or click on the timer in the toolbar to change the date and time.

Wandering Stars
So, who needs Star Trek to explore the final frontiers, when you can boldly click, or press the arrow keys? If you also
hold down the [Shift] key, you can double your warp…, oops, your scrolling speed. Should you happen to stumble across an interesting bit of the heavens, you can use the plus and minus keys to zoom in or out. As an alternative, you can also click on the "Zoom In" and "Zoom Out" buttons in the View menu. You can use [-] to zoom out until you see a green arc like the one shown in Figure 3. This represents your local earth horizon.

Figure 3: Somewhere Beyond the Horizon

The curved white line in Figure 3 represents the celestial equator, an imaginary line that divides the skies into the northern and southern hemispheres. The brown line, which is almost invisible in Figure 4, represents the ecliptic, that is, the path the sun appears to follow in the course of the year. KStars shows you the whole spectrum of celestial objects – stars, planets, planetary nebulae, and galaxies. You can click on a specific object to identify it – the name immediately appears in the status bar. If you then right click with your mouse, you can use the menu that
then appears (Figure 4) to query the object type, and download a razor-sharp image of the object from the celestial atlas, The Digitized Sky Survey, by clicking on Show 1st/2nd Gen DSS Image. You can see an image of the star Ras Alhague, Alpha of the constellation Ophiuchus, in the opener. You can use the Add Link… option to add websites with more information or insert additional images. KStars even allows you to verify links by clicking on the Check URL button. The program automatically loads your additions when launched, saving them in the myimage_url.dat
and myinfo_url.dat files in the ~/.kde/share/apps/kstars directory. If you become tired of just aimlessly roaming around the virtual heavens, you can use the Location / Find Object… menu item to search for a specific object. The “Find Object” window (Figure 5) includes a list of all the named objects in the KStars database. Many of them are listed by their catalog entry only, such as NGC 3077, but you will also find some well-known names, such as Cassiopeia.
On Board Information Sources
Of course, not everybody is a natural born astronomer. So do not panic if the terms used in this article are a mystery to you. The makers of KStars have also given this issue some thought. "The AstroInfo Project", a useful feature of the KStars manual, provides you with a series of short articles that explain the most important concepts and terms in the field of astronomy. Some articles even include exercises that can be completed using KStars. The developers are keen on expanding this section and actively encourage interested users to contribute to the scientific database. ■
INFO
[1] KStars: http://kstars.sourceforge.net
[2] Fixed Stars: http://www.winshop.com.au/annew
[3] AstroInfo Project: http://astroinfo.sourceforge.net
Figure 4: Identifying Celestial Objects
Figure 5: Seek and You Will Find
Out of the box
GWhere
A Break for the Disk-Jockey
The GWhere CD Indexer is just what the doctor ordered for those of you suffering from the "can't quite remember what Linux Magazine subscription CD the xyz tool was on" syndrome. BY PATRICIA JUNG
Twenty subscription CDs, a heap of MP3 disks, and a backup of your second computer – that's quite an impressive collection of CDs, but if you do not remember what disk the file xyz is stored on, you have a problem: insert the CD, launch find, remove the CD if you draw a blank, and start again… You would not want to have to repeat all those steps, unless you are looking for a really important file. "Where there's a will, there's a way", Sébastien Lecacheur thought, and so he wrote GWhere (http://www.gwhere.org/), a little GUI tool that indexes data CDs, floppies, or Zip disks, allowing you to search the file tree without needing to sign up for a degree in disk-jockeying. If you know what medium the required file is stored on, you merely need to locate that particular medium. The current version of GWhere, 0.0.25, is available as a binary for RPM based distributions, but you might prefer to compile the source code stored in GWhere-0.0.25.tar.gz. Version 1.2.0 or better of GTK, and the matching gtk-dev(el) package, must be pre-installed to do so:

tar -xzvf GWhere-0.0.25.tar.gz
cd GWhere-0.0.25
./configure
make
checkinstall
OUT OF THE BOX
There are thousands of tools and utilities for Linux. "Out of the box" takes a pick of the bunch and each month suggests a little program, which we feel is either absolutely indispensable or unduly ignored.
If checkinstall [1] is not available, you can use make install instead. If you want to define the root directory for the installation yourself, you can call configure with the --prefix=directory flag.
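For example – the target directory here is purely illustrative – you could install into your home directory with:

./configure --prefix=$HOME/gwhere
make
make install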
First Things First…
Type GWhere & (paying attention to the case) in an X terminal session, and then wonder what to do about the more or less empty window that you are confronted with at this point. Neither the Help nor the File menu give you any clues as to what to do with the heap of CDs that you wanted to index. Users who require menus in any language apart from English will need to set the Locale correspondingly:
export LANG=fr_CA; GWhere &
will use French Canadian, for example. If you want to define French Canadian as the target language for GWhere, but not for the shell, you can put this command in parentheses:

(export LANG=fr_CA; GWhere &)
The main indexing function turns out to be accessible via a tab labelled Management (Figure 1). You can use the drop-down menu Choose Volume to locate the mount point for the CD, insert the CD, and then click on the Browse Volume button at the bottom of the window. If the medium is not mounted at this point, GWhere will return an error message. To avoid this, you can check the Automount checkbox under Options before clicking on the browse button. GWhere prompts you for a Catalog Name at first. This name has nothing to do with the CD and is simply a general heading for all the CD indices stored in a single file. As GWhere can only display and manipulate a single index file in the current version, this heading is not really important – just enter mycds, or cdindex, or something similar. However, you will want to put some thought into answering the next prompt, which refers to the Volume Name (Figure 1). This should ideally provoke a reaction such as "Oh yes, that's the CD with…" from the user. All that fancy search functionality is useless if you are unable to physically locate the CD that the GWhere search results refer to.

Figure 1: The Volume Name is important to the success or failure of a search operation

GWhere zooms through the indexing process in next to no time, and if you have additionally selected Eject Volume if possible, will automagically open your CD-ROM tray after finishing the index.
GLOSSARY
; (Semicolon): A semicolon between two commands on the command line has the same effect as pressing [Enter]: after processing the first command, the second is processed. This allows you to enter a series of short commands in quick succession.
Parentheses: Commands in parentheses are not processed by the current shell, but call a new subshell (which is automatically closed after processing). This allows you to use environment variables to provide a command with its own environment. In our example, the value for LANG only affects the subshell in which GWhere was launched, but not the current working shell. So, in this case, only GWhere would be expected to speak French; other programs called in the current shell (and the shell itself) will continue to use the default setting for LANG.
Figure 2: The dialog for adding keyword categories is not exactly intuitive
Categorized and Described
If you have had more than your fair share of CD disk-jockeying, you will probably want to save the index using File / Save in the main menu. While you are waiting, you might like to define some keywords for your CD collection under Action / Edit Categories (Figure 2). Type the keyword in Category Name, and add a few explanatory comments in Description, before clicking on Add to add the entry to the Category List. Editing categories you entered previously can be more challenging. To do so, you first select the list entry and then click on the Update button.
This toggles the Add button in Figure 2 to Update. Click on this button when you have finished editing the keyword file. Clicking on the Catalog tab not only reveals the current collection of indexed CDs, allowing you to navigate them, but you can also add metadata. Right click with the mouse to open a menu, allowing you to add a keyword and a description for every directory and file via the Properties dialog box (Figure 3).
Figure 3: If required, you can add a description and a keyword to every file

The Joy (and Pain) of Search Ops
Bear in mind the program's current state of development before you get too enthusiastic about entering metadata. Although the Search tab (Figure 4) theoretically allows you to search by description only, this feature does not seem to be available at present – neither are the functions for searching by media name (Disk), or by keyword (Category). If you prefer not to search by full name, you can activate the Regular Expression feature. This allows GWhere to find any files containing the png string, such as libpng.so.2 or top-bg.png. If you are only interested in PNG images with the png or PNG suffix, you
can use a dollar sign to indicate the end of the string and search for png$. If you also select Upper/Lower Case, GWhere will respect the case of your search string: a capital letter will then find only those files that have a capital letter at the appropriate position in their names. The program has an annoying habit when a search fails to find a result: there is no progress indicator and no status message, so you have no way of knowing whether the search is still in progress or has failed. This kind of inconsistency in the user interface certainly keeps you on your toes, but in our opinion GWhere is indispensable for anyone just starting to lose track of their CD collection. It is a pity that GWhere can only maintain a single index file, as you really do need two catalog files to avoid mixing up your MP3 collection and your Linux Magazine CDs. But on the upside, you can always run multiple parallel GWhere processes if you require. ■
INFO
[1] Christian Perle: "Say Hello Wave Goodbye", Linux Magazine Issue 22, p78.
Figure 4: Searching for PNG files
deskTOPia
Jo’s alternative Desktop: UDE
All together
And the last will be first – as we learned from the Bible. With this in mind a new desktop environment has wended its way to Linux land, and its patron saint, the "Ultimate Window Manager", is now ready for a test run on your desktop. BY JOACHIM MOSKALEWSKI
Once upon a time the world was full of competing computer systems. Some of them lived in the land of Atari, others were friends of the Commodore, and another little group kept a big Apple company. All of them were convinced they were doing the right thing – and you know what? They were right. But one fine day, Bill the demagogue took a journey. He visited numerous countries, collecting on his way, from each country, a few subjects who had made life worth living in their old homelands. He invited them to perfect his own country. And finally in 1995, the gates to the country of "Windows 95" were opened. The lure of this country was heard in countries far afield, and thus more and more users rushed in to see this country's promise for themselves. And the applause was so loud that they stayed.
Strange Lands
Of course, there were consequences: the inhabitants of this country have been conditioned ever since. If a system does not offer known elements, such as a taskbar or start menu, window buttons or desktop icons, these poor users are incapable of imagining that they could operate the system. However, there are still some users who are open to change, and since you would seem to be one of them – after all you are a Linux geek who reads deskTOPia – I would like to introduce you to a radically different concept from the one you are used to the major players providing. I'm talking about UWM, the slightly different Window Manager of the "Unix Desktop Environment" UDE. The documentation starts with the words: "Starting UWM for the first time you might recognize that it doesn't only look different from other window managers but also behaves not quite the way most of you would first expect such a system to do. This fact alone might be a reason for some people to throw UWM away and go back to a conventional windowing user interface. Others might start thinking – Some of them might get used to it."

DESKTOPIA
Only you can decide how your desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colorful, viewers and pretty toys.

Mice for Power Users
UWM was not developed with the aim of attracting users on account of its ease of use, but aims to provide more power to the user after you have mastered the first few steps. After all, you do not stay a newbie for ever, and as a power user you will soon be looking for ways of making your life simpler. In contrast to other Window Managers also designed with this clientele in mind, UDE is clearly mouse oriented. If you are looking for a keyboard-driven GUI, the current version of UDE is not where you want to be.

From the Archives
Installation should be fairly painless on any Linux distribution. The developers provide both RPMs and Debian packages, as well as the normal source archives, at http://udeproject.sourceforge.net/. The packages available from this site are also on this issue's subscription CD. You will definitely need to resort to the sources, if the pre-compiled packages do not work for you. This should not prove too much of an obstacle: UDE merely uses your X server's functionality and
thus does not require any specialized packages. Users of older distributions may need to install a separate package to support XPM images. You will need the make tool, the gcc compiler, and the developer package for the X Window System (but you will probably already have installed these items, if you occasionally need to install graphics software off the Internet).
Starting Blocks
Before you can really profit from the UDE that you just installed, you will first need to take another hurdle: launching uwm instead of your original Window Manager when your X Window System starts. Unfortunately, the way to do that differs from distribution to distribution. As a rule of thumb: if you launch the graphic interface manually, by typing the startx command, the ~/.xinitrc file is parsed. If you log on in graphic mode, this will be the ~/.xsession file. If you can't find these files, you can simply create them. The contents of both files are identical. Since the Window Manager, UWM, is a core component of the UDE environment, you simply need a
Figure 1: A Start menu, but not a bar
Keyboard Shortcuts
It is not as if the creators of UDE think that mouseless operations are entirely irrelevant. There is even supposed to be a rudimentary – albeit non user definable – keyboard layout. But the author's attempts to get this running on his own system were a miserable failure. But still, if you want to try your luck, you can refer to doc/ude0.2.8/html/node11.html for the theoretical keyboard layout.
single call to uwm – the program will take care of everything else itself. The Listing shows an example of an X start file:

.xinitrc/.xsession
#!/bin/sh
LANG=en_US; export LANG
exec uwm

Don't forget to set the access privileges with chmod 700 .xinitrc or chmod 700 .xsession. Otherwise, the script will not run on some distributions. The next time you start your X Window System, you can expect to see the Unix Desktop Environment.

Clean
UDE displays a completely empty desktop at first. No buttons, no menus, and no icons saying "Click me!". The UDE developers insist that this is intentional, as your desktop should normally be filled with applications and not with a Window Manager. UDE's control elements are hidden behind your mouse buttons, so look forward to some finger fuddling exercises. You simply press the right mouse button to open the start menu (Figure 1) and release the button when you have found the application you want to launch. The option for launching multiple applications with a single mouse action is neat: keeping the middle button on your mouse held down, click on an application you want to launch. Then move to the next entry and again left click to run it. Windows are not surrounded by frames – there is merely a heading inside (!) the program window (Figure 2). If you move the mouse to the area with the heading, the heading will simply disappear, reappearing only when the window drops out of the mouse focus.

Figure 2: The Title Inside

In Focus or in the Foreground?
When multiple applications begin to fill up your available desktop, you will notice that the interface provides for only "sloppy focus" behavior. This is where the window you last moved the mouse across will automatically react to any keypresses (focus), but the window will remain in the background – hidden by other applications – until you raise it into the foreground. Focus and raising are two separate concepts in UWM. A window is raised if you left click its frame, and the focus automatically switches to that window. If you use the middle mouse button instead, the window disappears behind all the other windows – without losing the focus. The following approach is slightly more complex, but more comfortable in the long run. Click on the window you want to modify with the center key and hold the key down. Now click on the left key to raise the selected window above the others. If you simply press the center or right mouse key, the window drops down the order. It often makes more sense to "lower" the current, and undesired, window than to raise the desired window. (If your mouse does not have a center button, you will not be able to use this function – and many others; UDE definitely requires a three-key mouse.)

GLOSSARY
Window Manager: This program is responsible for window dressing and window functionality – in short, for anything that needs to be drawn around an application. Window Managers often include a Start menu or allow you to set the desktop background. In contrast, a Desktop Environment not only contains a Window Manager but also influences your applications.
X Window System: Provides Linux systems with a graphical user interface. Even desktop environments such as KDE or GNOME run as applications on this interface.

Push!
While you hold down the middle mouse button and change the order of your windows,
you can simultaneously rearrange them. You automatically drag the window as long as you hold the mouse key down. All this functionality certainly takes some getting used to … When you are placing applications on the desktop, it is quite useful to dock windows by moving one window so close to another that it snaps to the window frame pixel for pixel. UWM positions new windows wherever there is enough room for them: If there is not enough room on the desktop, you are asked where to place the new window.
Sweet
If you left click a window frame, the Honeycomb, which comprises a range of functions, becomes available (Figure 3). Your mouse pointer is surrounded by six icons that replace the normal window buttons. The table below provides you with details on their assignments. So to close a window, you just left click a window frame, move the mouse up a bit, and then let go. That just leaves the function of the right mouse key when you click a window frame: you can change the window size by right clicking.

Figure 3: Window buttons in the honeycomb

Honeycomb Assignments
Comb         Function
Up           Close window (regular)
Upper right  Kill window (kill including safety prompt)
Lower right  Sticky window, also opens the Workspace menu
Down         Maximize window or reset to original size
Lower left   Lower window – hide it behind other windows
Upper left   Shrink window

Variety of Features
After using the Honeycomb to shrink an application window, you will certainly miss the program icon to restore the window. Well.., there really are no icons! But you will be okay, if you remember to use the left and center mouse keys on the desktop. Press the center key to open the so-called Windows menu (yes, they really do spell it with an s…). At this point it is fairly obvious that UWM can handle virtual desktops, like the majority of X11 Window Managers: each workspace provides a submenu that allows you to view and activate the visible and minimized applications the workspace contains. Besides the virtual desktops, the Windows menu also contains an entry for Sticky Windows. This submenu contains the applications that you have designated as sticky windows via the Honeycomb. If you switch to an application on another virtual desktop, the sticky window simply stays with you. In other words these windows are omnipresent in all your workspaces.
UDE à la Carte
If you want to configure your desktop, you should copy the files under /usr/local/share/ude/config/* to ~/.ude/config/*:

mkdir -p $HOME/.ude/config
cp /usr/local/share/ude/config/* $HOME/.ude/config/

(The path refers to a standard installation of the source code.) UWM will first try to locate its configuration files in ~/.ude/config/uwmrc. If it cannot locate them, the Window Manager then queries the global settings in /usr/local/share/ude/config/uwmrc. You can edit the copies in your own home directory to suit your needs. However, an error in the configuration file can lead to UWM refusing to launch. To remedy that situation, (temporarily) change to another Window Manager, or repair the configuration file on the character-based console. If you don't have a lot of modifications to lose, you can simply remove the damaged file and/or copy it again. The copies represent a complete and documented basic configuration: the developers have added a helpful comment for each entry in these files. The supplied uwmrc comprises only links to other configuration files. If you prefer to do so, you can type the content of the external configuration files here; however, it does make sense to sort the configuration parameters by topic and store them in separate files, when you consider the sheer number of available options.

Virtuality
The default configuration first calls the uwmrc-ws.hook file, which configures the virtual desktops – three are defined by default. Each workspace is then processed individually. By the way, the workspace numbers start at 0 and not at 1. The last valid workspace is always the default workspace, that is number 2 in the original configuration. Workspace specific colors are somewhat daunting at first glance. They normally take the form 113;140;118, that is, numeric RGB values between 0 and 255 for the colors red, green, and blue. However, instead of this notation, you can also use the hexadecimal values you may recognize from HTML (such as #667788), or even choose self-explanatory colors such as black or plum4 using the xcolorsel tool.
Background

If you want to prevent UDE from installing a monochrome desktop background, you can comment out the ScreenColor line by placing a % at the start of the line; UWM then ignores this entry. If you change your mind later, all you need to do is remove the comment character. To place an image on your desktop, you can simply use one of the innumerable Linux tools available for this purpose. One example is display, which belongs to the ubiquitous ImageMagick package. The command

display -geometry 1280x1024! -window root imagefile &

will zoom your image to 1280x1024 pixels and place it on the desktop background. However, this will not permit you to display different backgrounds on your individual workspaces. You can add this command to StartScript, the autostart file for your Unix Desktop Environment. Make sure you terminate every command in this file with an &, as UWM will otherwise wait for the command to complete… By default the file contains a call to xterm with an informational text; if this starts to get on your nerves, you might like to delete the offending line. The counterpart of the StartScript is the StopScript, which is run when you quit the desktop and which, in the default configuration, unmounts a number of standard devices.

GLOSSARY
Virtual Desktop: If your desktop is full of applications, you may have to resort to a virtual desktop, or workspace. Only one workspace is visible on the desktop at any time. If you switch to another desktop, the applications in the original desktop are kept and reappear when you switch back.
Xresources: The traditional method of configuring an application's appearance. This only works if the toolkit the application is based on respects this method (which Qt and GTK+ do not); multi-application configurations are also possible.

Frames

The next configuration file, uwmrc-layout.hook, is responsible for the appearance of the windows and menus: you can adjust frame widths, 3D effects, and fonts. This area also includes the appearance of your applications, and for this reason you will find a reference to urdb in this file. This "UDE Resource Database" contains workspace-specific Xresources (and thus the first clear indication that UDE is intended to be an Environment) – see Figure 4a–c: Workspace specific themes courtesy of urdb. If you have defined X resources of your own, or prefer to use your system's settings, you can disable this line by prepending a % sign.

appmenu is also called in uwmrc-layout.hook. This file describes the start menu available via the right mouse button, with entries taking the form

ITEM "Name";"Command";
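By way of illustration, a couple of entries of this type might look like the following – the menu labels and commands are my own examples, not part of the supplied configuration:

ITEM "XTerm";"xterm";
ITEM "Mozilla";"mozilla";

After editing appmenu, the Restart UDE entry described below picks up the new menu without you having to close any applications.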
You can use uwmrc-behaviour.hook to influence your desktop’s behavior. For example, if you set TransientMenus to 0, your start menu will not simply disappear if you forget to hold down the mouse button. You can additionally adjust automatic window positioning in this file.
uwmrc finally refers to uwmrc-user.hook – a non-existent file, as it is reserved for the (few) variant settings required by individual users. But as you have already stored a whole set of configuration files in your home directory, you will not need this entry. After saving the changes you made to the configuration files or the start menu, you can restart UDE – without needing to terminate any applications – using the Restart UDE entry in the context menu for the left mouse button. UDE revives some forgotten Unix concepts: it utilizes all three mouse buttons, supports and uses Xresources, and allows the use of a text editor for configuration tasks. The developers have shown that an innovative concept is still possible – and the mouse jockeying involved is definitely revolutionary. ■
THE AUTHOR
Jo Moskalewski missed out on the purported debauchery of student life and is now a master craftsman. He spends most of his free time trying to save the world, or just meeting up with friends and sun worshiping. On those rare occasions when he takes time out to relax, you will normally find him basking in the euphonic experience of a loudspeaker system that he built himself.
Dr. Linux
Safe and Sound

Is your CD image file or floppy in good working order? Doctor Linux can help you find out. BY MARIANNE WACHHOLZ
Before burning the ISOs I just downloaded off the Internet to CD, I would like to check whether the files are 100% error free. What Linux program can I use to do so?

Dr. Linux: One possible way of verifying an ISO image is to mount the file. If this works, you can assume that the data was transferred correctly during the download – though this is not a 100% guarantee that the file is error free, only that you have a working copy of what was available. So-called loop devices allow the superuser, root, to insert files into the directory tree as if they were storage media such as hard disks or floppies. You can use mount with the -o loop option to do so. Make sure that you change the syntax in the following example to reflect your own directory names!
Doctor Linux Complicated organisms, which is just what Linux systems are, have some little complaints all of their own. Dr. Linux observes the patients in the Linux newsgroups, issues prescriptions here for the latest problems and proposes alternative healing methods.
perle@linux:~/iso> su root
Password: Your_Root_Password
linux:/home/perle/iso # mount -o loop -t iso9660 /this/is/my.iso /mnt/
SuSE users tend to mount in /media or a subdirectory below this level:

mount -o loop -t iso9660 /path/to/my.iso /media/
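In command form, checking the files and winding the session up again might look like this (the next paragraph explains the individual steps; /media is the mount point from the SuSE example, and the ls is just one of many ways to browse):

ls -lR /media | less    # take a look at the image's contents
umount /media           # remove the image from the directory tree again
exit                    # drop the superuser status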
The complete content of the ISO image is now available below the directory supplied in the last argument, the mount point – exactly as it would be if you later burned a CD with this content and mounted it. After verifying the files you can remove the image from the directory tree by typing umount mountpoint, and then drop your superuser status by typing exit.

If you want to ensure that an ISO file you downloaded to your home PC is error free, you can compare the checksum of the original file with the checksum of the file you downloaded. The checksum is a numerical value calculated by an algorithm from the bits that the file contains. There are various programs for calculating checksums, such as sum or cksum. As they all use different algorithms, the checksums they create are not compatible, so you will need to use the same tool to create your checksum as was used for the original file. At present you will almost always discover that the MD5 algorithm [1] has been used to checksum a downloadable file. The md5sum program is included as a standard component of all known Linux distributions. md5sum creates a 128 bit value for any file. Checksums are typically stored in files ending in .md5 or .md5sum on FTP or web servers (Figure 1):

ddee9456051785ebdd92f3d28a033e61  gentoo-ix86-1.2.iso
MD5 checksum files thus contain only a few bytes – in contrast to ISO images – and it makes sense to save them in the same directory as the file used to create them. Sometimes administrators will collate several checksums into a single file, or add them to existing files such as Readme.txt or the like. It is more or less impossible to generate the same checksum for two different files with md5sum. Even the slightest change to a file – and this could be caused by a transmission error – will immediately lead to a different checksum, as the bits will now differ. The uniqueness of checksums, which are often referred to as fingerprints, is used by administrators to discover system manipulations caused by files that have been injected or exchanged. How can you verify a checksum that you have just downloaded? To see how this works, let us look at the process using an ISO file from Gentoo Linux [2] as an example:
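Creating such a collection yourself is a one-liner with GNU md5sum – the file name checksums.md5 here is an arbitrary choice of mine:

md5sum *.iso > checksums.md5
md5sum -c checksums.md5    # verify the whole set in one go later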
perle@linux:~/iso> ls -l
total 16540
-rw-r--r--  1 perle users 16908288 Jun 23 13:48 gentoo-ix86-1.2.iso
-rw-r--r--  1 perle users       54 Jun 23 13:49 gentoo-ix86-1.2.iso.md5
The ISO image and the corresponding MD5 file are both stored in the current working directory (in this case, ~/iso). We now pass the checksum file to the /usr/bin/md5sum program with the -c ("check") flag set. If everything turns out okay, the answer will be a simple Ok:

perle@linux:~/iso> md5sum -c gentoo-ix86-1.2.iso.md5
gentoo-ix86-1.2.iso: Ok
If the file, however, fails the test, md5sum issues a warning:

perle@linux:~/iso> md5sum -c gentoo-ix86-1.2.iso.md5
gentoo-ix86-1.2.iso: Error
md5sum: Warning: 1 of 1 calculated checksums did NOT match
GLOSSARY
ISOs: Popular expression for files whose file system follows the system independent ISO 9660 standard, which is used for burning CD ROMs.
Mount: Storage media are inserted into the Linux file system tree by means of the mount command, which requires root access. Before removing a mounted CD or floppy from the drive, you will need to issue the umount command. Access to hard disk partitions can also be controlled in this way on Linux. The sysadmin can use the /etc/fstab file to allow unprivileged users to insert or remove certain media, such as CD ROMs for example.
su: You can use the "su username" command to assume the identity and rights of the selected user in the shell. After entering the correct password, you remain in the current directory, but with the privileges of the superuser, for example, and can carry on working with root privileges.
Figure 1: Knoppix ISO files [3] and their md5sum files
e2fsck: This command-line tool verifies, and if needed (and possible) repairs, Extended 2 file systems. Ext2 was long the standard file system type for most Linux hard disk partitions, although many distributions have now switched to more modern file systems, such as the successor Ext3 or ReiserFS.
If the MD5 value is included in a readme file, you need to create the checksum yourself before you can compare it: pass the name of the ISO file to md5sum and check the result visually:

perle@linux:~/iso> md5sum gentoo-ix86-1.2.iso
ddee9456051785ebdd92f3d28a033e61  gentoo-ix86-1.2.iso
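If you would rather not compare two long hexadecimal strings by eye, you can also feed the published value to md5sum by hand. GNU md5sum accepts a checksum list on standard input if you pass - as the file name; note the two spaces between checksum and file name:

echo "ddee9456051785ebdd92f3d28a033e61  gentoo-ix86-1.2.iso" | md5sum -c -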
If your distribution happens not to include md5sum, you can download it from [4]; type md5sum as a search key. The program is part of the Text utilities package. Additional information is available from the GNU project at [5].
Error Free?

I have a few older floppies that I would like to use as "single floppy Linux"
Listing 1: 1.44 MB Floppy with Bad Blocks

perle@linux:~> su -c "/sbin/badblocks -s -v /dev/fd0"
Password: Your_Root_Password
Checking for bad blocks in read-only mode
From block 0 to 1440
Checking for bad blocks (read-only test): 12
13
14
15
done
Pass completed, 4 bad blocks found.
Listing 2: Major and Minor Numbers for Floppy Devices

perle@maxi:~> ls -al /dev | less
[...]
brw-rw----  1 root disk 2,  0 Jun  6 17:13 fd0
brw-rw----  1 root disk 2, 36 Sep 24  2001 fd0CompaQ
brw-r--r--  1 root root 2, 60 Jun 28 14:28 fd0H1722
brw-rw----  1 root disk 2,  4 Sep 24  2001 fd0d360
brw-rw----  1 root disk 2,  8 Sep 24  2001 fd0h1200
[...]
Listing 3: Ostensible Bad Blocks Due to an Incorrect Floppy Device

perle@linux:~> su -c "/sbin/badblocks -s -v /dev/fd0 1722"
Password: Your_Root_Password
Checking for bad blocks in read-only mode
From block 0 to 1722
Checking for bad blocks (read-only test): 1440
1441
1442
[...]
1720
1721
done
Pass completed, 282 bad blocks found.
Listing 4: Checking a 1722 kB Floppy with badblocks

perle@linux:~> su -c "/sbin/badblocks -v /dev/fd0H1722"
Password: Your_Root_Password
Checking for bad blocks in read-only mode
From block 0 to 1722
Pass completed, 0 bad blocks found.
versions or boot disks. How can I ensure that there are no bad blocks hiding on these disks?

Dr. Linux: You would normally want to use the /sbin/badblocks tool to ensure that there are no bad blocks on the disk. This tool is part of a collection designed for verifying, maintaining and creating file systems on (almost) any Linux system. Maintenance programs, such as e2fsck, can process the output from badblocks. As you would normally only want to use floppies that are free from errors (although there might be some strange reason for using a damaged disk), we are not going to look into possible repair procedures in this article. If you want to test a floppy with badblocks, it must not be mounted in the Linux directory tree. As superuser privileges are required to access the floppy, you may need to prefix the badblocks call with a call to su using the -c flag. This ensures that only the ensuing command, which must be enclosed in quotes, is executed with root privileges. You can use the badblocks option -s ("show") to display which block the program is currently processing, and the -v flag ("verbose") will keep you up to date on the program's activity. But you are still on the safe side if you leave out these options, as a message is given whenever a bad block is found. According to the man page, you need to specify the number of blocks to check, but since floppies are verified by reference to the corresponding device file, you can leave this parameter out:

perle@linux:~> su -c "/sbin/badblocks -s -v /dev/fd0"
Password: Your_Root_Password
Checking for bad blocks in read-only mode
From block 0 to 1440
Checking for bad blocks (read-only test): 16/ 1440
In our example the program has just reached block 16 of 1440. If the result is negative – that is, there are no bad blocks – the program will report back with:

Pass completed, 0 bad blocks found.
Box 1: Floppy Disk Device Types

The manpage for fd ("floppy disk") devices specifies over 30 different device files that can be used to access floppy drives, including some fairly obscure 5.25 inch drives. The following short excerpt shows just a few of the various possibilities; n refers to the drive number: 0 for the first drive, 1 for the second, and so on.

3.5 Inch High Density Devices:

Name       Capac.  Cyl.  Sect.  Heads  Minor Base #
fdnH360     360K    40     9      2      12
fdnH720     720K    80     9      2      16
fdnH820     820K    82    10      2      52
fdnH830     830K    83    10      2      68
fdnH1440   1440K    80    18      2      28
fdnH1600   1600K    80    20      2     124
fdnH1680   1680K    80    21      2      44
fdnH1722   1722K    82    21      2      60
fdnH1743   1743K    83    21      2      76
fdnH1760   1760K    80    22      2      96
fdnH1840   1840K    80    23      2     116
fdnH1920   1920K    80    24      2     100
The program only prints this summary if you specified the -v option. If the result is positive, the bad blocks are listed – refer to Listing 1, where blocks 12 to 15 are reported. In this case the program reports: Pass completed, 4 bad blocks found.
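If you have a whole stack of floppies to test, a little shell loop saves some typing – a quick sketch; press Ctrl-C to stop:

while true; do
  echo "Insert the next floppy and press Enter"
  read dummy
  su -c "/sbin/badblocks -s -v /dev/fd0"
done

Since su prompts for the root password on every pass, you may prefer to run the loop from a root shell instead.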
Figure 2: Gentoo Linux desktop
The Floppy Format Odyssey

My floppies have been through it all – all those attempts to increase their capacity by using different formats. But badblocks always shows an incredible number of bad blocks in this case, although none are displayed if I stick to the standard 1440 kB format.

Dr. Linux: If the device file you supply does not match the low level format of the disk, badblocks will return gibberish. Additionally, assigning the wrong device file can endanger your hardware. Avoid using device files that are inappropriate for your hardware type! The various floppy formats are accessed via the device files in the /dev directory. They all have the major device number 2; the minor device number represents a floppy format for this type of hardware (see Box 1). If you list the /dev directory, you are shown the major and minor numbers of the devices instead of file sizes. Listing 2 shows an example; you should not assume that it will be identical to the device files on your own system. The kernel relies on this information to recognize the format when a floppy is opened, and passes it on to the relevant programs.

Let's look at an example to see how badly a verification with badblocks can go wrong if you supply the wrong device file. A floppy has been low level formatted using the fdformat program, and now has a capacity of 1722 kB:

perle@linux:~> su -c "fdformat /dev/fd0H1722"
Password: Your_Root_Password
Double sided, 82 tracks, 21 sectors/track, total capacity: 1722kB.
Formatting ... done
Checking ... done
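An aside before we continue: the file system tools mentioned earlier tie in neatly at this point. If you are going to put an ext2 file system on the freshly formatted disk anyway, mke2fs (from the same toolbox as e2fsck) can run the bad block check for you in the same step via its -c option – a sketch, to be run as root:

/sbin/mke2fs -c /dev/fd0H1722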
Now you take the floppy out of the drive and insert it again to suggest to the kernel that a new medium has been inserted. When verifying the disk with badblocks, you mistakenly refer to the device as /dev/fd0, although you supply the correct number of blocks to be processed. As a result 282 bad blocks are shown – blocks 1440 through 1721. That is, the blocks that exceed the capacity of /dev/fd0 (1440 blocks in the case of high density floppies) (Listing 3). If you now choose the right device file, the floppy passes the test, as you would expect – refer to Listing 4 for details. ■
GLOSSARY
Low level format: This does not mean writing a file system (minix, msdos) to the disk, but defining tracks and sectors. Disks with "raw" formats of this type can be written to using tar or dd.
Major and Minor Numbers: When a program accesses a device file, two numbers are passed to the kernel to identify the request. The major number typically refers to a particular kernel driver and the minor number to the device for which access is required. This is why all the device files for the serial port have the same major number, but different minor numbers. In short, the kernel uses the major number to pass the request to the appropriate driver, and the driver uses the minor number to determine the device that needs servicing. There are a few exceptions, but normal Linux users will rarely come across them.

INFO
[1] RFC 1321: http://www.fourmilab.ch/md5/rfc1321.html
[2] Gentoo Linux: http://gentoo.org/
[3] Knoppix Download: http://download.linuxtag.org/knoppix/
[4] GNU Software: http://www.gnu.org/directory/
[5] Text Utilities: http://www.gnu.org/software/textutils/textutils.html
Distributed Computing on Linux
Hertz Donors

A variety of projects with completely different goals are currently competing for the use of the latent processing power of home PCs. This article provides an overview of the more interesting efforts. BY BJÖRN GANSLANDT
Even though the daily blurb from various computer and chip manufacturers might suggest that your computer needs ever more power, you will in fact very rarely need to tax your CPU to the limit. The average PC has only a moderate load most of the time. In addition to CPU cycles, most PCs have some bandwidth to spare, allowing them to coordinate processing tasks with other PCs. In this way, millions of PCs can be linked up to work in parallel on jobs that would normally require a supercomputer at an exorbitant asking price. Nearly all of the distributed computing projects discussed in this article are available for Linux in the form of tar.gz archives and can be unpacked in the usual way:

tar -zxvf archive.tar.gz
Since the client source code is typically not included, there is no need to compile it – just launch the client instead. But you could even leave that task to another program called cron – refer to the Cron Setup inset for more information.
Cylon Radio

Probably the most famous distributed computing project, with over 3.9 million users, is SETI@Home [1]. SETI@Home has set itself the daunting task of searching for intelligent, extraterrestrial lifeforms, no less. To this end, the SETI@Home client analyzes a 100 second recording with a bandwidth of 10 kHz from a radio telescope in Puerto Rico, seeking signs of intergalactic radio transmissions. As the telescope rotates in relation to possible extraterrestrial radio sources, the client searches for a signal that matches a Gaussian beam pattern. Additionally, the software has to consider Doppler effects, recognize pulsed signals, and come to terms with the increasing number of terrestrial transmissions. Due to the enormous number of volunteers, SETI@Home can evaluate each package more than once, and thus eliminate errors or attempted manipulation. And thanks to the sheer bulk of data evaluated, it is also possible to filter out radio signals that occupy constant positions in the sky – unfortunately most of these permanent signals turn out to be terrestrial.

Figure 1: Xsetiathome visualizes the search for extraterrestrial radio signals

The SETI software normally runs in character-based mode on Linux, but you can stipulate the -graphics option to relay your results to the GUI version, xsetiathome. If you want to run SETI@Home permanently as a background task, you might like to try the -nice 19 option, which reduces the client program's priority. Why "nice"? Well, the program gets out of the way if other programs need more CPU cycles. Apart from xsetiathome there are a variety of other programs that convert the results to graphics, allowing you to insert them into the KDE panel. Try Freshmeat [2] if you are interested in finding a few.
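Putting those options together: assuming the client executable unpacked from the tar.gz archive is called setiathome (the exact name may vary with the version you download) and sits in the current directory, a typical permanent invocation would be:

./setiathome -nice 19 &    # run in the background at the lowest priority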
Crack the Code

Distributed.net [3] are currently working in parallel on several mathematical tasks, and in contrast to SETI@Home they can point to a number of problems they have solved in the past: the project was able to crack DES and CSC encrypted messages in record time. Currently Distributed.net are taking part in the RC5-64 competition – in contrast to the earlier competitions, the participants are required to test up to 2^64 keys, compared to 2^56 previously, and that certainly requires an enormous amount of processing power. You are more likely to be struck by lightning while winning the national lottery than to find the right key with your first guess. However, you do get to keep US $2,000 of the prize money if you are the lucky finder. The rest of the prize money – which was sponsored by RSA – goes to Distributed.net, which is a non-profit organization, and, if you are a member of one, to your local Distributed.net group.

The second active project at Distributed.net is the search for an Optimal Golomb Ruler with 24 or 25 marks: a set of non-negative integers such that no two distinct pairs of numbers from the set have the same difference. An Optimal Golomb Ruler is the shortest Golomb Ruler possible for a given number of marks. OGRs have many applications, including combinatorial functions, and in the field of interference phenomena.

Figure 2: Any possible distance between two numbers on a Golomb Ruler must be of a different length

Distributed.net also offers a console-based client, one that does not impact your bandwidth or CPU cycles as heavily as SETI@Home. You can either configure the program on first launch or use the -config option manually. The configuration options allow you to set the priority for various projects and change the size of the work packages. You can also use the -install option to automatically add the program to /etc/init.d/ and assign the appropriate runlevels, allowing it to be launched whenever you start your computer – until you remove it with the -uninstall option, that is.
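For example – assuming the client binary is called dnetc, the name the Linux client shipped under at the time:

./dnetc -config     # open the interactive configuration menu
./dnetc -install    # start automatically at boot; undo with -uninstall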
More Maths: Prime Numbers

GIMPS [4] is another mathematical project, and this one is looking for prime numbers – more precisely, prime numbers of the form 2^p-1, where p is itself a prime number. Primes of this type are referred to as Mersenne primes, named after the French monk and mathematician Marin Mersenne. The Electronic Frontier Foundation has put up a prize for the first prime number with at least 10 million digits, but GIMPS is not the only project seeking large prime numbers, and you need a lot of CPU cycles to find, or verify, one.

The ECCp-109 project [5] has entered yet another cryptography competition with slim chances of prize money. In contrast to RC5-64, this competition is not about symmetrical algorithms but an asymmetrical (Public Key) algorithm based on elliptic curves, where both a public and a private key exist. Encoding algorithms based on elliptic curves have the advantage of shorter keys and higher speeds when compared with traditional techniques like RSA or ElGamal, as used by PGP or GPG; however, more research is required on this subject.
Power Chess with Clusters

The success of computers such as Deep Blue or Deep Fritz (which has been battling it out with the reigning (BGN) chess world champion Wladimir Kramnik in October, assisted only by a team of eight professors) has shown that computers with a certain amount of processing power are extremely difficult to beat. The Chessbrain project, which is quite recent, is looking into the prospect of a powerful distributed chess computer (see [6]) and has recently reached the first of four designated development stages. Chessbrain will not become a really powerful competitor until it reaches phase 3 – the work currently in progress primarily concerns the distributed infrastructure. One of the most fascinating aspects of this project is the use of the SOAP protocol to transfer data to the clients, the so-called PeerNodes. As SOAP can now be processed by Flash MX, Chessbrain not only offers the PeerNode software, but also various viewers, based on Flash or PHP for example, that allow you to view the current game. However, you will need a flat rate if you intend to sign up for this project, as the PeerNode continually accesses the server.

Figure 3: Predicting protein structures from distributedfolding.org

Cron Setup
Cron can be used to launch and terminate other programs at pre-defined times. You can use crontab -e to define tasks for the daemon. This command loads the editor defined in your $VISUAL or $EDITOR environment variable – use export EDITOR=editor if you want to change this setting. The following entries launch a program at 8.00 pm every day and terminate it at 9.00 am. The >/dev/null 2>&1 string sends the program's output and error messages to the null device; cron would otherwise want to email this output to the user. If you use the @reboot parameter instead of specifying a schedule, cron will launch the designated program when you reboot your system – type man 5 crontab for additional information on the crontab format.

00 20 * * * cd mydirectory; ./CLIENT >/dev/null 2>&1
00 9 * * * killall CLIENT
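As a concrete example of the @reboot variant, this hypothetical crontab line starts the same client whenever the machine comes up:

@reboot cd mydirectory; ./CLIENT >/dev/null 2>&1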
Proteins

Supercomputers are also important to medicine and play a vital role in the field of genetic research, such as the quest to decode the human genome. Research into proteins and the corresponding genetic sequences is the aim of two partner projects: Folding@Home [7] and Genome@Home [8], which recently united to form a single client. Folding@Home is specifically concerned with the folding process of proteins, whose decoding could mean a breakthrough both in medicine and in nanotechnology. Genome@Home works with known protein structures and attempts to calculate appropriate synthetic genetic sequences, allowing genetic researchers to gain a better understanding of natural genetic sequences. Although Genome@Home has been integrated into the Folding@Home client, you can specify a Genome@Home team number (over 100,000) to work exclusively on the former project. Additionally, the original client is still available as Genome@Home Classic. However, before you can run the integrated client, you will need to make it executable by typing:

chmod +x FAH3Console-v312-Linux.exe

Another project working on proteins is Distributed Folding [9]. The procedure here differs from that followed by Folding@Home – the focus of Distributed Folding is on predicting protein structures, rather than on folding. The folding process is particularly relevant to diseases such as Alzheimer's or Creutzfeldt-Jakob, which may occur in the context of proteins whose folding characteristics deviate from the norm.

Although most of the projects discussed so far are available as (more or less attractive) Windows screensavers, Linux users normally have to be content with boring text-based interfaces that only occasionally issue a cryptic comment on the progress they are making. The Electric Sheep screensaver [10], which was inspired by Philip K. Dick's book "Do Androids Dream of Electric Sheep?" – as was the movie "Blade Runner" – is a notable exception. Of course the computers don't count normal sheep; instead they use their processing power to create animated, fractal flames. In contrast to the other projects discussed, Electric Sheep provides both the source code and RPMs, allowing you to install the screensaver directly into your GNOME control center. If you do not use GNOME, you can type the following line to add Electric Sheep to the "programs:" section of ~/.xscreensaver:

"ElectricSheep" electricsheep \n\

The animations are sent to a central server that then returns them as MPEG videos to Electric Sheep screensavers all over the globe. Unfortunately, the volume of traffic involved restricts usage to those fortunate enough to have a DSL flat rate or a similar Internet link. ■

INFO
[1] http://setiathome.ssl.berkeley.edu
[2] http://freshmeat.net
[3] http://distributed.net
[4] http://www.mersenne.org/prime.htm
[5] http://www.nd.edu/~cmonico/eccp109/
[6] http://chessbrain.net
[7] http://folding.stanford.edu
[8] http://gah.stanford.edu
[9] http://www.distributedfolding.org
[10] http://www.electricsheep.org

Figure 4: Electric Sheep calculates fractal flames
Norwegian Award for Linux in schools
BY PATRICIA JUNG

Scandinavia's Open Source scene has a new prize to celebrate: for the first time, the Norwegian Unix Users Group (NUUG) and Oslo University College "Høgskolen i Oslo" awarded the Norwegian Free Software Prize. After a ceremony held in Norway's capital Oslo on October 7th, the lucky winner, "Skolelinuxprosjektet", took home a cheque for NOK 30,000 (approximately EUR 4,115). The winning project brings together contributors from all over Norway and develops a Linux distribution for Norwegian schools, aiming at easy installation and maintenance as well as availability in the two Norwegian literary languages (Bokmål and Nynorsk) and the Saami language. Amongst the nominees were well-known Unix "old-timers" like IETF's Harald Alvestrand, Stig S. Bakken of The PHP Group and GNUS guru Lars Magne Ingebrigtsen, as well as young DeCSS hacker Jon Lech Johansen and Qt company Trolltech AS. Coding was not the only merit that counted: three of the 34 nominees, namely Gaute Hvoslef Kvalnes (who is also a member of Skolelinuxprosjektet), Kjartan Maraas and Roy-Magne Mo, have been translating KDE and GNOME, respectively, into either Bokmål or Nynorsk. Last but not least, the Norwegian Secretary of Labor, Victor D. Norman, was nominated due to his decision to no longer extend the state's purchasing deal with Microsoft. ■
http://www.nuug.no/prisen
http://skolelinux.no

Second Luxembourg LinuxDays
BY ANNETTE MERISTE

The second Luxembourg LinuxDays took place at the beginning of October, this year's venue being the IST (Institut Supérieur de Technologie). The conference was organized by a group of scientists from the Henry Tudor Institute in cooperation with the Linux User Group for Luxembourg and sponsored, among others, by the Ministry of Economy and Linux Magazine. Luxembourg's Minister of the Economy, Mr. Henri Grethen, opened the series of lectures, which comprised a total of six topics. The Cluster Track included a number of interesting talks, with Hubert Feyrer from NetBSD giving a lecture on a cluster project for rendering video material; the sight of a cluster comprising 45 computers was quite impressive. As usual, the Debian booth attracted quite a lot of attention. Andreas Tille from Debian also held two talks on the Project Track, one of them on Debian-Med, a project initiated to promote the use of Open Source Software in the area of medicine. But the highlight of the first day had to be the social event that took place subsequently in Luxembourg city center.

The second day saw the focus switch to security, embedded Linux and projects. The Security Track provided a platform for topics such as penetration testing, kernel security and high speed packet filtering. The LinuxDays culminated in a closing keynote by Jon 'Maddog' Hall, the President of Linux International. In addition to a look back at the annals of history, Jon also presented a number of projects that relied on Open Source Software to guarantee cost-efficiency, and discussed the potential for integrating free software into business models. He placed particular emphasis on the distribution of Open Source Software in government and provided some useful insights on putting its advantages across to a government audience. One thing is for certain: the organizers definitely achieved their goal of promoting Linux amongst Luxembourg's enterprises while simultaneously keeping track of the latest Open Source projects. ■
http://www.linuxday.lu
UK’s largest Linux Exhibition and Conference
A chance to meet
BY COLIN MURPHY

The 9 and 10 October saw London's Olympia exhibition hall once again playing host to Linux Expo UK, sponsored in part by Linux Magazine. The Linux market is still in a state of flux – the same state it was in last year. But last year Linux had much hype to live up to, and the Linux Expo event failed to draw the crowds most had hoped for. This year's event was smaller, taking the first floor of Olympia hall 2, and was run in conjunction with WebSolutions Expo on the ground floor. This was an interesting combination, because it highlighted the growing number of Web Application providers that are starting to use Linux for their day to day business, with some even considering themselves to be on the wrong floor once the expo had started. One such company was Jool Ltd, whose MD, Anjula Perera, took time to tell me about his range of Linux powered servers and the success they had using their smallest server to power a major application: running the network services of the Labour Party conference in Blackpool. Their Kwartz servers, which stand out from the crowd because of their unusual 270x190x160mm form factor and brushed aluminium and Perspex finish, were able to cope with over 2,500 transactions through the Oracle application being handled.

While the event was smaller, there was a definite buzz of excitement over the two days. As was the case last year, space was set aside for the Open Source and community element that Linux relies on so heavily. The Debian team must have given away lots of Knoppix 3.1 disks, which will prove to be an excellent introduction for those who wanted to see some of the power that a Linux distribution can put forward with the minimum of fuss, as it is able to boot and run completely from the CD-ROM – no installation to a hard disk is necessary. Sharp had pitched in with the Greater London Linux User Group to help pass on their new Linux PDA, the SL-5500, at a considerable discount, while the guys on the Lonix stand showed off their true colours by spending most of the two days playing Unreal Tournament 2003 and inviting passers-by to join them for drinks after the show. The developers from the Rosegarden project were on hand, showing how Linux is capable of making it in the music studio with their composition and sequencing applications.

Helping friends

The show had attracted some of the biggest players in the IT industry, with Sun and HP both showing off their ranges of Linux servers and applications. Their large stands helped to accommodate some of the other vendors: Sun had given over room to people like SuSE and SCO. Some people may find it hard to imagine why competing companies, at least for the moment, are prepared to share stand space. I see it in an opposite light and find it refreshing that partnerships and strategic alliances can stand together. Enterprise Management Consulting told us about their new development, 'The Linux Centre', a purpose built call centre to house up to 40 technical support staff. Technical support is one of the major issues that seems to hold back prospective migrations. Initiatives like 'The Linux Centre' must add weight to the total solutions that corporate businesses demand from their systems, proving that Linux really can be considered as an alternative.
Business case

The Expo organisers had a fiendish plan for rolling conference talks, which fell into three tracks. The first of these took place in the 'Enterprise Linux Case Study Theatre', which allowed the senior IT decision makers – those with the cheque books – to evaluate the possibilities Linux might offer their business operations. The 'Product Education Theatre', the second of the tracks, gave vendors the chance to speak to groups of interested punters, pitching their products. Practical, hands-on advice and help was available from the third track, made up of the user groups and developer community. This seemed to work well, but many of the big name vendors had also set up facilities for their close partners to do the same as part of their own stands. This made it difficult to catch all of the presentations one might have wanted to, but it did alleviate the desperate crushes experienced last year in the all too small theatre. No one seemed disappointed to have attended the event; there was a real buzz, and people thought they were on the crest of something big. I'm looking forward to the next expo. ■
The monthly GNU Column
Brave GNU World

In this monthly column we bring you the news from within the GNU project. In this issue we will look at mapping, both on earth and in space, along with on-line archiving. BY GEORG C. F. GREVE

Welcome to another issue of Georg's Brave GNU World. Although earth may be mostly harmless, sometimes it is quite easy to get lost on it. But fortunately there is GpsDrive.

GpsDrive

As the name suggests, GpsDrive [1] by Fritz Ganter is a Free Software navigation system under the GNU General Public License which uses the satellites of the Global Positioning System (GPS). Through a GPS receiver, GpsDrive obtains the current position and displays it on an automatically chosen map at a user-selected scale. The maps can be loaded either directly off the internet or through a proxy, even from map servers like Expedia or Mapblast. GpsDrive supports route planning through waypoints, which can be read from a file or entered dynamically with the mouse. Routes can also be recorded and played back, so it is possible to record ways you have taken and pass them on to friends – something already being used for bicycle tours, for instance. To avoid having to stare at the screen all the time, GpsDrive also supports spoken output in English, German and Spanish through the Festival [2] speech synthesis software.

Given that development on GpsDrive only began in August 2001, making the project just one year old, the list of features is quite amazing. One of the most unusual ones is clearly the "friendsd" server, which allows friends to share their positions, so that each can also display the positions of the others. GpsDrive was written in C with the GTK+ toolkit, and even though it is already quite stable, it is still under development. Points of interest for future development are real street navigation and also speech input. It works with all Garmin GPS receivers which allow for serial output, as well as GPS receivers supporting the NMEA protocol, and is usually used on laptops, where it has been tested under GNU/Linux and FreeBSD. But of course PDAs especially would be interesting platforms for such applications, and owners of the Compaq iPAQ and the Yopy may be happy to hear that GpsDrive has been used successfully on both. Although GpsDrive has already been localized for 10 languages, translation into further languages in particular is an area in which Fritz seeks help, to make his project accessible to as many people as possible.
Figure 1: GpsDrive running on an iPAQ

GNU SpaceChart

GNU SpaceChart [3] by Miguel Coca, a relatively new package of the GNU Project, also helps with keeping your orientation, although its practical application would be the planning of intergalactic bypass roads. In fact it was an interest in science fiction stories and their "original locations" that made Miguel work on SpaceChart. GNU SpaceChart is a program for star cartography that is not restricted to displaying two-dimensional images of the night sky or some constellations; rather, it visualizes the position of stars in space, as can be seen in the screenshot (Figure 2). The user can look at the sun or another star from a large distance and, through tunable filters, determine which kinds of stars are displayed. To increase the 3-D impression, stars can be connected with lines and rotated. For Miguel, this is one of the major advantages of SpaceChart compared to other Free Software programs, because they do not give him the same three-dimensional feeling. The programming language used for SpaceChart is C with the GNOME libraries, and it is published under the terms of the GNU General Public License. This choice makes it fast, and makes it possible, for instance, to display all stars within 50 light years of the sun and rotate them smoothly in real time. Other components of GNU SpaceChart are data files created automatically from astronomical catalogs by a Perl script, and documentation, most of which has been contributed by Robert Chassell, who is also the most active beta tester and who (according to Miguel) has a never-ending supply of new ideas for further improvement. The main audience for this project are readers and authors of science fiction stories who would like to have a better idea of how stars are distributed relative to each other. But Miguel would also like feedback from "real" astronomers, to tell him how GNU SpaceChart might become more useful to them. Help is welcome in the form of code, testing and documentation, of course.

Figure 2: SpaceChart showing constellations
GNU EPrints

Christopher Gutteridge of the University of Southampton is working on GNU EPrints [4], a project to create online archives, with support from Mike Jewell. Especially in the scientific field, literature research is an incredibly important part of the work, and publications are only useful if they can be found. Making this easier is the goal of GNU EPrints, although it can in principle be deployed in any situation where the articles or documents of a research area, project or institution are to be archived. Professor Stevan Harnad, the political force behind GNU EPrints, drew his motivation for the project from the idea of re-establishing science's unencumbered access to its own results, and also of giving financially weaker institutes and countries the chance to participate in the scientific exchange.

Besides being Free Software under the GNU General Public License (GPL), GNU EPrints also offers the advantage of being geared towards supporting different languages from the start. Web pages can be provided in different languages, and it is also possible to select languages per field. This has already found practical application when some French archives were required to have abstracts in English and French simultaneously. And EPrints isn't restricted to European languages: thanks to Unicode, almost anything should be possible.

EPrints was written with an object-oriented approach in Perl, keeping it as understandable as possible, because the design philosophy assumes that it can never be perfect, so it will require changes to adapt it to the local situation. To do this, EPrints employs the concept of "Hooks", which call custom scripts that do useful things. This makes for a highly customizable system, which sometimes creates the problem of finding the right option or understanding the different functions. In order to set new users on the right path, HOWTOs are provided for frequently arising questions and needs.

In real-life deployment, the technical side is the minor problem, as far as the author's experience is concerned. Arriving at an archive policy or agreeing on the structure is much more difficult. There are places where it took several months and committees to determine the structure of an archive that now contains 20 entries. This once more demonstrates that social problems cannot be solved with technology. In these cases, Christopher Gutteridge uses "carrots and sticks" as the adequate tools. But once there is agreement on the structure, and once users have been educated to provide sufficient amounts of metadata, GNU EPrints can provide an extremely valuable tool. Since it fulfils versions 1.1 and 2.0 of the Open Archives Initiative (OAI) [5] standard, it is even possible to share archive metadata with other archives, so entries can be searched across multiple online archives simultaneously. According to Christopher Gutteridge, he doesn't really need help at the moment: the code base seems to be sufficiently stable, and thanks to external funding, good documentation is currently under development.
GCron

GCron [6] will replace the currently used Vixie Cron within the GNU system, because Vixie Cron has not been maintained since the early nineties and has developed several security problems, which the different GNU/Linux distros try to address with their own house patches. Thanks to gcron, this will hopefully soon become unnecessary. Even though cron is clearly one of the "classics" of any Unix system, some readers may not have heard about it yet, so a brief introduction might be useful: cron is a program which allows the execution of programs, i.e. scripts, at specific times (week days, times, dates, and so on). This allows periodically necessary tasks to be automated, for instance. Cron is used for system maintenance tasks on almost all installations of Unix-like systems. Ryan Goldbeck now works on gcron, a security-aware new implementation, which will then be used on all GNU/Linux distributions. The first goal is completing support of the POSIX standard and making the files backwards-compatible with Vixie Cron to allow for a painless migration. Afterwards, GNU/Hurd specific extensions and additions for better information about the executed programs, such as running time or resource usage, are planned. It would also be possible to include a better means of controlling system resource usage by the executed programs. It is not very surprising that gcron is published as Free Software under the GNU General Public License; C is being used as the programming language.

Goodbye, and thanks for all the fish! That's it for the "A Tribute to Douglas Adams" issue – Adams died too young, little more than a year ago. And as usual, I'm asking everyone not to be shy in providing ideas, comments, questions, inspiration, opinions and information about interesting projects to the usual address [7]. ■

INFO
[1] GpsDrive home page: http://gpsdrive.kraftvoll.at
[2] Festival home page: http://www.cstr.ed.ac.uk/projects/festival/
[3] GNU SpaceChart home page: http://www.gnu.org/software/spacechart/
[4] GNU EPrints home page: http://www.eprints.org/
[5] Open Archives Initiative (OAI) home page: http://www.openarchives.org
[6] GCron home page: http://www.gnu.org/software/gcron/
[7] Home page of Georg's Brave GNU World: http://www.brave-gnu-world.org
Send ideas, comments and questions to Brave GNU World: column@brave-gnu-world.org
Subscription CD

On this month's subscription CD we start with the latest distribution to hit the servers. Alongside the full distribution we have included all the files that we mention in the magazine, in convenient formats.
KDE 3.0.4

KDE 3.0.4 is the third generation of KDE's free, powerful desktop for Linux. KDE 3.0.4 is available in 51 languages – including the addition of Basque for the first time. KDE 3 ships with the core KDE libraries, the base desktop environment, and hundreds of applications and other desktop enhancements from the other KDE base packages (PIM, administration, network, edutainment, development, utilities, multimedia, games, artwork, and others). KDE 3.0.4 provides various service enhancements over KDE 3.0.3, which shipped in mid-August 2002, as well as two security corrections (the personal web server (KPF) may permit a remote user to retrieve any file readable by the local KPF user, and the PostScript/PDF viewer (KGhostview) may execute arbitrary code placed in a PS or PDF file). KDE, including all its libraries and applications, is available for free under Open Source licenses. Features include:

• Konqueror is KDE's next-generation web browser, file manager and document viewer. Widely heralded as a technological breakthrough for the Linux desktop, the standards-compliant Konqueror has a component-based architecture which combines the features and functionality of Internet Explorer/Netscape Communicator and Windows Explorer.
• Konqueror supports the full gamut of current Internet technologies, including JavaScript, Java, HTML 4.0, CSS-1 and -2 (Cascading Style Sheets), SSL (Secure Socket Layer for secure communications) and Netscape Communicator plugins (for playing Flash, RealAudio, RealVideo and similar technologies).
• In addition, KIO's network transparency offers seamless support for accessing or browsing files on Linux, NFS shares, MS Windows SMB shares, HTTP pages, FTP directories and LDAP directories. The modular, plug-in nature of KDE's file architecture makes it simple to add additional protocols (such as IPX or WebDAV) to KDE, which would then automatically be available to all KDE applications.
• Besides this exceptional compliance with Internet and file-sharing standards, KDE achieves exceptional compliance with the available Linux desktop standards. KWin, KDE's re-engineered window manager, complies with the new Window Manager Specification. Konqueror and KDE comply with the Desktop Entry Standard. KDE generally complies with the X Drag-and-Drop (XDND) protocol as well as with the X11R6 session management protocol (XSMP).

UDE

The UDE Project is creating a new window manager which will grow into a complete GUI in future. The project does not use any special GUI libraries such as Qt or GTK+; it just uses the standard Xlibs (which also makes UDE faster).
GWhere

GWhere allows you to manage a database of your CDs and other removable media. With GWhere it's easy to browse your CDs or to make a quick search without needing to insert each of your CDs in the drive.
Rinetd

Rinetd redirects TCP connections from one IP address and port to another. This makes it practical to run TCP services on machines inside an IP masquerading firewall.
LyX

LyX is an advanced open source document processor that encourages an approach to writing based on the structure of your documents, not their appearance.
Graphic Scripting

From our article starting on page 44, we have included the test graphic so you can work through the examples, along with the utility for making the FLI/FLC animation files.
Distributed Computing

From the article starting on page 84, we have included the files for the Distributed Folding project as well as the Mersenne prime and ECCp projects. Last but by no means least is the Electric Sheep project, which can produce stunning fractal animations and still images.
Subscribe & Save

Save yourself hours of download time in the future with the Linux Magazine subscription CD! Each subscription copy of the magazine includes a CD like the one described here free of charge. In addition, a subscription will save you over 16% compared to the cover price, and it ensures that you'll get advanced Linux know-how delivered to your door every month. Subscribe to Linux Magazine today! Order online: www.linux-magazine.com/Subs or use the order form between p66 and p67 in this magazine.
Linux New Media Awards 2002
Simply the Best

During the last year there has been lots of movement in the Linux Community. Linux New Media AG invited several editors and a jury of authors, developers and leading members of the Open Source Community to choose the most significant Linux products and projects of the year 2002.
For the first time the Awards weren't a "Reader's Choice" selection, but rather an "Editor's Choice". In addition to the editors of the German LinuxUser, the German Linux-Magazin and Linux Magazine, a jury of widely known and respected people participated in the voting. Everything was handled by email: first, the jury collected the nominations for several categories, such as "Network Hardware", "Distributions", "Office Packages", "Development Software", "Internet Applications" etc. A few days later the jury members selected their personal top three choices from each of the groups. After collation and calculation it is now time to congratulate and award the winners.

The Winner takes it all

In the hardware section the Sharp Zaurus, Axiom AX 6113 and Pioneer DVR-104 hit the top spots. For the distributions, it is no surprise that Debian won the race. The jury honoured the work of the many key developers who have worked so hard on the free operating system over the past years. The GCC ("GNU Compiler Collection", renamed from "GNU C Compiler" in 1999) won the award for the best development software. As in 2000, OpenOffice won an award. In the "Office Packages" category, the office suite prevailed against the other competitors with a sensational vote of 47.8%.
Mozilla conquers all

Mozilla triumphed over the Mutt email client and the Konqueror browser in the "Internet Applications" group. In the "Databases" category, PostgreSQL overtook MySQL for the first time; the swap may have come about due to MySQL lacking some features. The "Special Award" for "Newcomer of the Year" goes to Gentoo Linux, followed by Ogg Vorbis. Gentoo Linux is a BSD-style, ports-based distribution that allows you to build all packages specifically for your machine. Ogg Vorbis is a completely patent-unencumbered lossy audio codec with an excellent psycho-acoustic model. In the "Linux Companies" category, no specific product was nominated. Instead, the achievements (financial or conceptual) of a particular company were open to selection. IBM won the first prize as the company that has done the most to promote Linux during the last year. ■
2002 Winners

Hardware

Mobile Devices
1. Sharp Zaurus                 44%
2. Compaq iPAQ                  31%
3. Gmate Yopy                   25%

Network Hardware
1. Axiom AX 6113                34.9%
2. Itranator                    30.9%
3. Equiinet                     23.3%

Hardware
1. Pioneer DVR-104              23.9%
2. ATI FireGL 4                 23.1%
3. Fujitsu Siemens Memorybird   14.6%

Software

Distributions
1. Debian                       28.8%
2. Knoppix                      25.7%
3. SuSE                         13.1%

Development Software
1. GCC                          25.5%
2. KDevelop                     15.2%
3. Eclipse                      13.6%

Office Packages
1. Open Office                  47.8%
2. KOffice                      11.3%
3. Star Office                  10.1%

Internet Applications
1. Mozilla                      29.4%
2. Mutt                         17.2%
3. Konqueror                    16.7%

Databases
1. PostgreSQL                   39.6%
2. MySQL                        33.7%
3. DB2                           9.5%

Special

Newcomer of the Year
1. Gentoo Linux                 24.4%
1. Ogg Vorbis                   24.4%
2. Video Disk Recorder          17.2%

Linux Companies
1. IBM                          33.5%
2. O'Reilly                     15.6%
3. Red Hat                      11.0%
The 2002 Jury Bernhard Bablok, Java expert, has been writing for Linux-Magazin for several years. He’s a software developer at the Allianz Insurance Company. Fionn Behrens is a true game junkie, and writes about his experiences with the Linux gaming arena for Linux Magazine and LinuxUser. Frank Bernhard is a security expert. He was one of the first people who dealt with dedicated firewall systems using Linux. Simon Budig is one of the people behind the GIMP (GNU Image Manipulation Program). He is a passionate supporter of Open Source. You may not be able to understand 90% of what he says, or even his kernel code, but you’ll definitely recognize Alan Cox as one of the first and most active hackers in the community. Matthias Kalle Dalheimer, founding member of the KDE project, now lives in his favourite country Sweden and works at his own company Klarälvdalens Datakonsult AB. Mirko Dölle dismantles every computer he can get his hands on, and publishes his confessions as the hardware expert for Linux Magazine and LinuxUser. Michael Engel is a Power PC expert, in particular on running Linux on that platform. Hans-Georg Eßer, as the editor-in-chief, has been responsible for LinuxUser since its first days. He also is the author of several Linux books. Nils Färber writes articles for LinuxMagazin from time to time, and likes dealing with Linux on non-X86platforms. Björn Ganslandt is a fan of GNOME and takes an active interest in the software available for it. He regularly publishes articles about his experiences in LinuxUser.
Bdale Garbee works for Hewlett-Packard, where he is developing a Linux distribution for them. He also holds the position of Debian Project Leader. Johnny Graber helps run http://www.linux-community.de, and moderates articles on that site. Georg C. F. Greve is the president of the Free Software Foundation Europe (FSFE), and writes our monthly column "Brave GNU World". Andreas Grytz works as a news researcher for Linux-Magazin. He also writes articles for LinuxUser and the community forum http://www.linux-community.de. The "Linux Evangelist" Jon "maddog" Hall preaches for the free OS all over the world. He's the Executive Director of Linux International and one of the community's most outspoken voices. Andreas Huchler works as a freelancer for Linux-Magazin and LinuxUser. He writes mostly about new software. Patricia Jung is the deputy editor-in-chief of LinuxUser. In her free time she runs a Linux mailing list for women (lynn@lists.answergirl.de). Jan Kleinert is the editor-in-chief of Linux-Magazin. Harald König is one of the XFree86 developers and likes to work miracles on exotic hardware, preferably split over multiple displays. Michael Kleinhenz helps organize LinuxTag, one of Europe's largest Linux events. Michael Kofler works as a full-time writer for Addison-Wesley and has published several books about Linux, MySQL and Maple. Some of his works have been translated into several languages. He's the author of one of the standard Linux books: "Linux – Installation, Configuration, Use". Charly Kühnast takes care of the servers at the KRZN computer centre and publishes his useful sysadmin tips & tricks in Linux Magazine.
Achim Leitner, head of Linux New Media's competence center "Network & Security", oversees all articles in this field. Sebastian Raible just finished school. He helps run the http://www.linux-community.de/ website. Christian Reiser enjoys testing the Linux compatibility of various hardware and publishes articles to share his experiences. Daniel Riek is a board member of the LIVE ("Linux-Verband e.V.") organisation and speaks out against the dangers of software patents. Michael Schilli is a Perl guru and regular contributor to Linux-Magazin, with columns on Perl programming. He currently lives in America. Tom Schwaller, now Linux IT architect & Linux evangelist at IBM, was editor-in-chief of Linux-Magazin for several years. Tim Schürmann, office software expert, frequently writes reviews for LinuxUser of the latest office suites for Linux. John Southern is editor-in-chief of Linux Magazine and one of the organizers of the Greater London Linux User Group (GLLUG). Marianne Wachholz is a real free software enthusiast and writes articles on that topic for LinuxUser. Max J. Werner moderates threads on the forum http://www.linux-community.de/. Ulrich Wolf is deputy editor-in-chief of Linux-Magazin and a regular contributor to Linux Magazine. Oliver Zendel is chairman of LinuxTag, one of the first large Linux events organized by the community.