COMMENT
General Contacts
General Enquiries 01625 855169
Fax 01625 855071
www.linux-magazine.co.uk
Subscriptions subs@linux-magazine.co.uk
E-mail Enquiries edit@linux-magazine.co.uk
Letters letters@linux-magazine.co.uk
Editor
John Southern jsouthern@linux-magazine.co.uk
Staff Writers
Keir Thomas, Dave Cusick, Martyn Carroll
Contributors
Alison Davis, Andrew Halliwell, Colin Murphy, Richard Smedley
International Editors
Harald Milz hmilz@linux-magazin.de
Hans-Georg Esser hgesser@linux-user.de
Bernhard Kuhn bkuhn@linux-magazin.de
International Contributors
Gregor Anders, Bernhard Bablok, Fionn Behrens, Klaus Bosau, Mirko Dölle, Friedrich Dominicus, Thorsten Fischer, Björn Ganslandt, Dirk Gomez, Georg Greve, Patricia Jung, Heike Jurzik, Markus Krumpöck, Jochen Lilich, Micheal Majunke, Jo Moskalewski, Christian Reiser, Marin Schmeil, Daniel Schulze, Stefanie Teufel, Marianne Wachholz, Joachim Werner, Ulrich Wolf
Design
vero-design Renate Ettenberger, Tym Leckey
Production
Hubertus Vogg, Stefanie Huber
Operations Manager
Pam Shore
Advertising
01625 855169
Neil Dolan, Sales Manager ndolan@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt Osmund@Ohm-Schmidt.de
Publishing
Publishing Director
Robin Wilkinson rwilkinson@linux-magazine.co.uk
Subscriptions and back issues 01625 850565
Annual Subscription Rate (12 issues): UK £44.91, Europe (inc Eire) £73.88, Rest of the World £85.52. Back issues (UK) £6.25
Distributors
COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE
R. Oldenbourg
Linux Magazine is published monthly by Linux New Media UK, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England. Copyright and Trademarks (c) 2000 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example letters, e-mails, faxes, photographs, articles, drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 1471-5678
INTRO
Current issues
IN THE LINUX GARDEN

It's been a hectic time recently, with the release of the new kernel (now up to version 2.4.1pre8), Gimp making it to version 1.2, and KDE 2 becoming available. With things appearing to get better on the desktop front, and more hardware manufacturers realising that Linux is emerging from an obscure techie's system into the mainstream, the future is rosy. Or is it? Always in the background is the possibility that some groundbreaking new technology that you just have to own will be proprietary, and so kill the whole movement.

Such a thing is rearing its very ugly head right now. Recently there has been a meeting of the ATA standards committee. As you might know, these are the industry people who decide what the standards will be for future hard disks and storage media. No problem, you might have thought: just make them bigger and faster to cope with ever increasing files! After all, digital camera pictures are getting bigger as resolutions improve, and my collection of MP3s grows larger every day. I'm always running out of space. But it's here that the problem lies. The ATA committee just does not like the fact that I can copy MP3s or other file formats that may be copyrighted. Its latest proposal is that encryption should be employed at the hardware level: a file would be spread across the hard disk, and an encryption key would be needed to access it.

So you could easily copy a text file to disk, but MP3 files would be rejected, making it a streaming-only technology. The record industry could finally collect all those hard-earned revenues that it should rightfully make, but it also means that I won't be able to copy the self-made MP3 of my daughter laughing. With encryption at the hardware level, only certain OSes would be given the decoding information: those considered legitimate. It's not known whether Linux would make this list. Direct disk access would not be possible, so RAID systems that use IDE disks would no longer work, for example.

Andre Hedrick, the 'ATA Linux dude' on the committee, is having some success organising the counter-offensive, so all is not lost. It may be that the encryption is only put on removable media such as Flashcards, memory sticks and microdrives. The future still looks rosy, although perhaps with the occasional thorn. Happy coding!
John Southern, Editor
Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.
Disclaimer
Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it, or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.
Technical Support
Readers can write in with technical queries which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.
We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We're not simply reporting on the Linux and open source movement - we're part of it.
6 · 2001 LINUX MAGAZINE 3
NEWS
SourceForge
SourceForge.net, an ASP for open source developers, now supports more than 100,000 registered users, according to an announcement from the Open Source Development Network (OSDN), a division of VA Linux Systems dedicated to promoting Open Source software development. With a user base that has grown by about thirty percent a month since its launch a year ago, SourceForge.net's base of supported projects includes MySQL, Tcl, Python, XFree86, KDE and Squid.
John ”Tiberius” Hall, vice president of strategic planning at VA Linux Systems said, ”The rapid growth of SourceForge.net during its first year of service demonstrates the growing popularity of the Open Source development model across a wide range of platforms, as well as the popularity of the SourceForge toolset... By working with ibiblio, we hope to further improve SourceForge.net’s ability to catalogue, archive and distribute Open Source software and documentation.”
Paul Jones, director of ibiblio.org, which provides free information including software, music, literature, art, history, science, politics and cultural studies, added, "The compatibility of the goals and key directions of SourceForge.net and ibiblio, as well as complementary research and service interests, make this relationship a great boost for the Open Source Community. Our relationship with SourceForge.net will help us bring the state of contributor-run archives and libraries to a high level very quickly. There are no limits in sight." ■
SuSE Linux 7.1 is out
SuSE’s help system has been revised, making it even more user-friendly
SuSE Linux is delivered in two versions – Professional and Personal. The Professional edition contains seven CDs and a DVD, as well as four comprehensive manuals. The Personal edition comes with three manuals and three CDs. Retail prices should be around £29 for the Personal Edition and £49 for the Professional (including VAT). SuSE also offers 60 days' installation support for the Personal Edition and 90 days for the Professional.

Love it or hate it, SuSE has once again increased the amount of software, to over 2000 software packages in the case of the Professional Edition. Additionally, you will find a series of bundled commercial applications, both demos and full versions. In SuSE Linux 7.1, customers have the choice between the proven kernel version 2.2.18 and the brand new kernel version 2.4. Along with this kernel version come many additional plug and play, USB and power management features. Professional users will appreciate the support for up to 64 GB of
main memory and the much-improved scalability of this kernel version. Thanks to the hardware detection that attempts to identify both your graphics card and monitor, the configuration of the X Window system is a matter of just a few mouse clicks. Instead of the old version 3.3.6 for ancient graphics cards, you can also choose XFree-4.0.2 as your default X server, including the latest drivers for 3D hardware acceleration for a series of common accelerator cards based on the Nvidia GeForce (256/2GTS/2MX) and ATI Rage128/Rage128Pro. During installation, it is now possible to resize an existing FAT32 partition (Windows 95/98/Me) with the help of a graphical front end to GNU parted, if a Windows partition covers the complete hard drive.

The newly released KDE2 desktop environment (version 2.0.1) largely improves the usability of Linux as a desktop platform. Along with the basic environment, SuSE delivers a multitude of different KDE applications which are integrated into the desktop, and the KDE desktop has been customised to SuSE's environment. Also included for the first time is the KOffice suite. Although KDE is the default desktop, SuSE Linux comes with a very wide variety of other window managers and desktop environments to choose from, including the very popular GNOME desktop environment.

Among the software offered, the StarOffice set of productivity tools is a long-time favourite (word processor, spreadsheet, presentation graphics and much more), now at version 5.2. StarOffice is even able to handle the import and export of MS Office documents, spreadsheets and presentations. There have been stability issues with StarOffice in the past, but with the present version these seem to have gone.

SuSE Linux 7.1 might be seen as "just" an upgrade of version 7.0. But with its plethora of new features and updated basic software (kernel 2.4.x, XFree-4.0.2, KDE-2.0.1), it feels like a major release, even without a ".0" version number. ■
Virus Warning
A new Internet worm that targets computers running Red Hat Linux is being described by some virus experts as the first successful attack on the Linux operating system, which is widely considered to be one of the best protected platforms around. To penetrate computers that have Red Hat Linux 6.2 or 7.0 installed, the Ramen worm exploits three security loopholes, in in.ftpd, rpc.statd and LPRng. These were identified and closed by developers last summer. The loopholes are buffer overflows, which could enable attackers to send executable code to the remote system and run it without the user's authorisation.
The Ramen worm infects a system by sending data to a target computer which overflows the system's internal buffer. This enables the worm code to gain root privileges. It then initialises the command processor that executes the worm's instructions. Next, the worm creates the /usr/src/.poop folder and launches the lynx Internet browser, with which it downloads the worm's archive RAMEN.TGZ from a remote system. After this, Ramen opens the archive and executes its main file START.SH. The worm changes the content of INDEX.HTML files found on the system, so that affected pages display the following message: "RameN Crew Hackers loooooo00000000000ve noodles". There is no additional payload.

Denis Zenkin, head of corporate communications for anti-virus software developer Kaspersky Lab, said that to date he has received no reports of Ramen in the wild. However, he added, "It is important to emphasise that the breaches exploited by the Ramen worm are also found on other Linux distributions, such as Caldera OpenLinux, Connectiva Linux, Debian Linux, HP-UX, Slackware Linux and others. This particular worm is triggered to activate only on systems running Red Hat Linux. However, it is probable that in the future other modifications of Ramen will successfully operate on other Linux platforms. Therefore we recommend that users immediately install patches for these breaches, regardless of the Linux distribution you use."

Info http://kaspersky.com ■

Kompany Gold
The Kompany.com has released a GUI IDE to help KDE developers working on C++ applications. KDE Studio Gold is a commercial release of the Kompany's Open Source KDE Studio project, with added features and full documentation. As well as standard features such as tools for code completion, dynamic syntax highlighting and popup function parameter lookup, the IDE offers simplified debugging and Kbabel translation tools. The product features integrated support for Trolltech's Qt Designer files. KDE Studio Gold is available in either the Standard or the Professional edition, both of which can be downloaded or bought as physical media. Both editions include the same core features and documentation; in addition, the Professional Edition bundles Qt, KDE, libc and kernel documentation within the help system. Prices start at around £15 for the Standard edition bought as a download during the presales period. Currently the Kompany is offering the KDE PowerPlant at fifty percent off for customers who buy another product.

Info http://www.thekompany.com/products/ksg/ http://www.thekompany.com/products/powerplant ■
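The filesystem indicators reported for Ramen make a simple host check possible. A minimal sketch: the worm's working directory and the defacement string are taken from the report above, while the function name and everything else are purely illustrative, not part of any vendor's scanner.

```python
import os

# Indicators reported for the Ramen worm: the working directory it
# creates, and the string it writes into defaced index.html files.
WORM_DIR = "/usr/src/.poop"
DEFACEMENT = "RameN Crew"

def check_host(index_files, worm_dir=WORM_DIR):
    """Return a list of findings suggesting a Ramen infection."""
    findings = []
    if os.path.isdir(worm_dir):
        findings.append("worm working directory present: " + worm_dir)
    for path in index_files:
        try:
            with open(path, errors="ignore") as f:
                if DEFACEMENT in f.read():
                    findings.append("defaced page: " + path)
        except OSError:
            # Unreadable file: skip it rather than abort the scan.
            pass
    return findings
```

Run against the index.html files under your web roots; an empty list means neither indicator was found (which, of course, does not prove a clean system).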
Linuxcare
Linux and open-source professional services provider Linuxcare has unveiled its comprehensive managed services solution for Linux. Linuxcare Managed Services are subscription-based and target the Internet service provider (ISP) market. As well as providing monitoring and systems administration, the services offer ISPs a quick way of deploying Linux-based, value-added services to their end-customers. Linuxcare Managed Services include optimisation of Internet servers, systems management, proactive remote monitoring, and remote software updates and repairs across all major distributions of Linux. Lori DeMatteis, product line director, hosting services, Digital Island, a Linuxcare customer, comments, "Linuxcare's services and expertise have helped to maximize the reliability and performance of our Linux servers. Now, Digital Island is able to offer the same service level agreements for Linux that we have traditionally offered to our customers on Solaris systems."
”Managed services are the logical next step for Linuxcare, and they have been part of our plan from day one,” said Arthur F. Tyde III, Linuxcare co-founder and CEO. ”We’ve assembled the best and brightest talent from the open-source community, and Linuxcare Managed Services can scale to make our services widely available to partners and end customers. We understand the need for more than basic managed services around Linux. Linuxcare is a full partner in mitigating customer risk so that they can quickly reap the dramatic rewards that open-source software affords.” ■
Embedded servers
Coventive Technologies, which provides embedded Linux solutions for information appliances, has announced the creation of a new business unit focussed on the needs of the server appliance market. The business unit will build on the company's server appliance initiatives completed last year, which include a total server appliance solution delivered to motherboard manufacturer Gigabyte. Roland Chen, formerly Coventive's director of sales, will be general manager of the Server Appliance Business Unit. Server appliances are non-programmable, network-enabled, sealed servers, pre-configured for the needs of a specific application or to fulfil a certain function, such as that of file/print server or email server. However, Coventive finds that server appliance makers and designers are not traditionally experts when it comes to implementing product features with embedded software. With its new business unit, the company plans to help by providing them with its expertise in the embedded Linux field: open source solutions and joint engineering services for appliance manufacturers, as well as its embedded software technology, thinnest kernel, application development and multilingual interfaces. Randy Tan, chief executive of Coventive Technologies commented, "We are very excited about entering into the promising new server appliance market. The key to powering all post-PC era products, be they information appliances or server appliances, will be the embedded software, and Coventive will be at the forefront of that business, fully utilising the unique advantages of open source software." ■
Embedded support
LinuxDevices.com has launched its free technical support forum for the Embedded Linux Community. The Embedded and Real-time Linux Technical Q&A Forum is staffed by a team of volunteers from the embedded and real-time Linux companies, including Century Software, Coventive, Esfia, FSM Labs, K Computing, Lineo, LynuxWorks, MontaVista Software, OnCore Systems, Red Hat, RedSonic, and TimeSys.
LinuxDevices.com founder Rick Lehrbaum said, ”Developers using Linux in embedded and real-time system applications are encouraged to submit their technical questions to our Embedded Linux Q&A Forum, where they will be promptly answered by a panel of experts. This supported technical Q&A forum is one more clear indication of the unique collaborative spirit that pervades the Embedded Linux community.” ■
Motorola choose CollabNet
Motorola has teamed up with CollabNet, an open source collaborative software development solution provider, to offer free web-based project hosting, a collaboration environment and code storage. "Motorola and CollabNet are facilitating the application development process by offering a complete platform for global, collaborative development," said Bill Werner, corporate vice president of Motorola and general manager of the company's iDEN Subscriber Group. "By supporting Open Source principles, this program encourages developers to work together on projects and leverage one another's creativity, regardless of their location." Bill Portelli, president and chief executive of CollabNet added, "Motorola turned to CollabNet for its expertise in building collaborative environments for developer communities based on Open Source principles. This is a key example of how developers can benefit from using the concepts of Open Source development." The initiative is part of the Motorola iDEN developers support programme. The iDEN programme aims to support developers working on applications for Motorola's forthcoming Java 2 Platform, Micro Edition (J2ME) technology-enabled multiple communications and computing handsets. The programme provides them with help in building applications and getting them to market. This help includes tools, technical support, marketing, distribution, e-commerce backend processing and end user support.
Info www.motorola.com/idendev ■
Hammer support
AMD has said it is to work with Virtutech on a software solution to support developers working on software for AMD's next-generation Hammer family of processors. The Hammer processors will be AMD's first 64-bit processors and will be available in the first half of 2002. AMD and Virtutech have developed a tool, codenamed "VirtuHammer", to enable software developers to write and test 64-bit programs in readiness for the launch of the processors. Using Virtutech's Simics software, a computer with a standard x86-compatible processor can simulate the operations of a 64-bit Hammer processor-based computer. This enables developers to test and debug their
64-bit software for AMD's next-generation processors using current technology. AMD is now delivering VirtuHammer simulators to targeted software partners, to ensure they have the resources, time and support required to develop 64-bit operating systems, tools and applications for the Hammer family of processors. Fred Weber, vice president of Engineering for the Computation Products Group at AMD said, "Virtutech's excellent simulation technology, coupled with the powerful AMD Athlon processor, creates a high-performance tool that can help developers bring 64-bit software to market supporting AMD's Hammer processors. The developer community has expressed tremendous interest in the x86-64 instruction set and the Hammer family of processors, with more than 100,000 users having visited the x86-64.org website since the specification was released last August." Virtutech chief executive Peter S. Magnusson said, "We're excited about adding support for the Hammer processor family to our Virtutech Simics simulation platform. AMD's x86-64 technology is a well-designed and smooth enhancement of the x86 legacy instruction set. We were able to create a prototype Hammer processor-based version of our existing x86 simulator in just a fraction of the time that adding a new 64-bit instruction set would have taken." ■
Real-Time tools
Real-time software development solution provider Real-Time Innovations has announced a new release of its visualisation tool for monitoring and analysing embedded and real-time Linux applications. StethoScope v5.3 for Linux enables developers to watch any variable or memory location in their system while it runs. Part of RTI's ScopeTools visualisation toolkit, the StethoScope software helps speed up development cycles by enabling developers to understand and debug complex real-time code.
Stan Schneider, president and chief executive officer of RTI, says, "Embedded developers face huge challenges working with systems that require small footprints, speed, reliability, and short development schedules. Proven tools like RTI's ScopeTools help. StethoScope, a long-time member of the ScopeTools suite, provides developers with the insight they need to test, debug and improve their embedded applications. StethoScope analyses data activities on production Linux code running at full speed with virtually no impact." Chris Conrad, principal engineer at RTI customer Veraxx Engineering, adds, "We use StethoScope to help us streamline our real-time software development process. It's been great for monitoring and analysing our system as it runs. As Linux developers, we see specialised analysis tools like StethoScope as invaluable for getting the system out on time and within specification."
Info http://www.rti.com ■
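Watching a variable while a program runs, as StethoScope does, can be sketched in miniature with a background sampling thread. Everything here (the class name, its API) is invented for illustration and has nothing to do with RTI's actual implementation:

```python
import threading
import time

class MiniScope:
    """Toy variable watcher: a background thread samples a zero-argument
    probe function at a fixed interval, recording (timestamp, value)
    pairs much like an oscilloscope trace. Illustrative only."""

    def __init__(self, probe, interval=0.01):
        self.probe = probe
        self.interval = interval
        self.samples = []            # list of (timestamp, value) pairs
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run)

    def _run(self):
        while not self._stop.is_set():
            self.samples.append((time.time(), self.probe()))
            self._stop.wait(self.interval)  # sleep, but wake early on stop()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

A real tool reads target memory directly instead of calling into the program, which is how it avoids perturbing code running at full speed; a probe callback is the closest portable analogue in a few lines.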
Crusoe Award
The Crusoe processor from mobile computing solutions developer Transmeta Corporation has been awarded Best of Show in the Innovations 2001 Computer Hardware and Software category at the Consumer Electronics Show (CES) held in Las Vegas. Transmeta introduced its software-based Crusoe processor last year. It was on display at the show along with a number of new products which use it. One such product is the Net Display Module, the newly announced integrated mobile computing display from Philips Components. Jim Chapman, senior vice president of sales and marketing at Transmeta said, "This award is an acknowledgement of the innovation that Crusoe is enabling in these emerging mobile computing products." ■

Certified RAID
Networking storage technology company LAND-5 has certified Linux-based PCI RAID controllers, from computer interface solutions provider ICP Vortex Corporation, for network-attached storage. The RAID controllers use LAND-5's Linux-based iceNAS graphical software interface. From this interface, users can implement and manage up to 20TB of RAID storage. LAND-5 now plans to market the ICP RAID controllers as part of its NAS product suite. AnnDee Johnson, ICP's vice president of marketing and sales, says that ICP's relationship with LAND-5 has helped strengthen both companies' product offerings. "The combination of their advanced software and storage products with our SCSI and Fibre Channel RAID controllers offers a unique NAS solution for the enterprise market. Together, we provide affordable, scalable storage for mission-critical applications that fully leverage the strengths of Linux." Al Kernek, vice president of marketing for LAND-5, says, "ICP is one of the few RAID controller manufacturers that has excellent Linux drivers and supports integrator access to the RAID software." He adds, "By mapping their extensive RAID functionality into our iceNAS software interface, we've made it easy for users to take full advantage of ICP's fast performance. Mission-critical RAID storage can be set up in just minutes."
Info http://www.icp-vortex.com http://www.land-5.com ■
Web Hosting
Global Web hosting provider Sphera Corporation has partnered with Dialtone Internet, a provider of Linux dedicated hosting and co-location solutions, to deliver a dedicated Linux Web hosting server line. AutoHost is targeted at Web hosting providers and runs on Sphera's HostingDirector 3.0, a software platform that aims to simplify and automate mission-critical Web hosting business operations. The solution has enabled Dialtone Internet to offer its Web hosting customers a time-saving way of automating administrative tasks such as adding sites and applications, as well as setting up web infrastructures and provisioning accounts. Dialtone Internet chief executive Al Albarracin is impressed with HostingDirector as a high-end hosting solution. "It is the only product we found that allows our customers to streamline management tasks without any specialised equipment or a steep learning curve. HostingDirector dramatically enhances Web hosts' ability to provision proven, optimized software on demand to their end-users, while controlling costs. Sphera can have a significant, positive impact on the hosting providers' bottom line." Sphera chief executive Tamar Naor says that Dialtone Internet's customers stand to benefit from the solution. "Both Sphera and Dialtone Internet understand the challenges faced by Web hosting providers, and we've developed solutions that face them head on." He added, "HostingDirector's highly scalable architecture will give Dialtone's AutoHost customers the cost-effective solution they demand for their growing businesses, making this an ideal partnership for both companies."
Info http://www.sphera.com http://www.dialtoneinternet.net ■
Python's BlackAdder
Open source and commercial Linux software producer the Kompany.com has announced the release of BlackAdder, its Windows/Linux GUI development environment for Python. BlackAdder features a visual design environment together with debugging, syntax highlighting and ODBC interfaces. It comes with more than 50MB of HTML documentation to provide a platform for developing Python applications. Using BlackAdder, developers can organise their Python scripts and GUI designs into projects. The development environment also features an editor that includes highlighting of Python keywords and code folding. An interactive Python interpreter (v2.0) enables developers to execute any Python commands while their applications are still running. BlackAdder features Trolltech's Qt windowing toolkit v2.2.3 and a GUI designer, offering the features of Trolltech's Qt Designer, that generates Python code. The environment provides ODBC database connectivity in the form of Python's mxODBC multiplatform interfaces created by eGenix. BlackAdder is available in both Personal and Business editions. The Personal edition is targeted at home hobbyists. Both editions provide the tools to create high-quality professional applications; the restriction on the Personal edition concerns the ability to develop proprietary applications for resale. Chief executive Shawn Gordon explained, "The Python scripting language is an important enabling platform for us in all of our products. During the course of creating widget and scripting front ends for Python we realised what a great opportunity we had for taking advantage of the multi-platform nature of Python to build a comprehensive tool for creating Python-based applications." He added, "What is especially exciting for our customers is that BlackAdder not only runs on Linux and Windows, it generates applications that will run on either system as well.
This protects your investment in development by enabling you to develop and deploy your applications almost anywhere.” ■
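The keyword highlighting an editor like BlackAdder's performs can be sketched in a few lines using Python's own keyword list from the standard library. The `<b>` markup convention is invented for illustration and is not how any particular IDE stores its highlighting:

```python
import keyword
import re

# One regular expression matching any Python keyword as a whole word.
KEYWORD_RE = re.compile(r"\b(?:%s)\b" % "|".join(keyword.kwlist))

def highlight(line):
    """Wrap Python keywords in <b>...</b> markers, the core of editor
    keyword highlighting. A real editor also tracks strings and
    comments, so that keywords inside them stay plain."""
    return KEYWORD_RE.sub(lambda m: "<b>%s</b>" % m.group(0), line)
```

For example, `highlight("for x in data: return x")` marks up `for`, `in` and `return` while leaving the identifiers untouched; the word-boundary anchors stop it matching keywords embedded in longer names.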
President and Chief Executive of Linux NetworX, Glen Lowry
LifeKeeper on NetWorXs
Enterprise application reliability solutions provider SteelEye Technology has announced a strategic OEM agreement with large-scale Linux cluster systems solutions provider Linux NetworX. Under the agreement, Linux NetworX will offer SteelEye's LifeKeeper for Linux Next Generation Enterprise Reliability platform in the solutions it provides to its e-business customers. LifeKeeper for Linux maximises the availability of clustered Linux systems and applications by monitoring system and application health, in order to ensure clients have uninterrupted access to their data, no matter whether it exists on the corporate intranet, extranet or Internet. The solution enables lost application services to be automatically restored, either by restarting them locally or by relocating them to another server in the cluster. Jim Fitzgerald, chief executive of SteelEye said, "Linux NetworX is one of our key OEM partners and we are very excited about the customer benefits of this alliance. With LifeKeeper for Linux, Linux NetworX can offer customers a scalable, economical solution for protecting their business-critical environments, with assurance of global availability and support." Glen Lowry, president and chief executive of Linux NetworX explained that the auto failover and recovery capabilities of LifeKeeper matched Linux NetworX's customers' demands for system reliability. "Linux NetworX customers demand turnkey high performance Linux clusters with unmatched system reliability - SteelEye's LifeKeeper helps us deliver highly available Internet solutions."

Evaluate and inform
UK open source services and information provider SlashTCO has announced a new partnership to deliver online and offline training in Linux applications. SlashTCO has partnered with Runaware, an evaluation service provider for both software vendors and consumers. The alliance between the two companies aims to foster awareness of the Linux operating system and offer online and offline training in Linux products. Runaware's role in the partnership will be to generate web site traffic and Professional Services customers for SlashTCO. SlashTCO, meanwhile, will provide access to Linux research, support and training materials to Runaware users. The two companies will cross-link their sites, effectively providing one-stop destinations for Linux evaluation and training. Duane Mayes, chief executive of SlashTCO said, "Runaware's evaluation service, in combination with SlashTCO's support and training services, will allow potential customers the chance to fully discover the benefits of open source software by freeing them to find the right software application and get the support and training they need." Rod Plummer, VP Strategic Alliances at Runaware commented, "As a full-service software evaluation marketplace, it is important for us to give the software consumer and corporate buyer the most complete information on all types of software available. With SlashTCO we will be able to present Linux products in a way that will fully educate and inform our users."
Info www.linuxnetworx.com www.steeleye.com ■
Info http://www.slashtco.com/ http://www.runaware.com/ ■
BriQ built
PowerPC product designer and manufacturer Total Impact has announced a new Linux-based compact PowerPC network appliance computer. The new product is called briQ and is a customisable computing appliance designed to handle performance-hungry applications. The briQ is designed to provide a versatile platform for a wide range of applications and products including firewalls, routers, security devices and web servers. It features PowerPC G3 or G4 processors, comes with 1MB of L2 cache and a 100MHz 64-bit system bus, and can support up to 512MB of SDRAM and a hard drive with up to 40 Gigabytes of space. The briQ measures a compact 5.74 inches wide, 1.625 inches tall and 8.9 inches deep and weighs approximately 1.85 pounds, providing developers and OEMs with a production-ready computer engine with a footprint the size of an industry standard CD-ROM drive. Pricing for development systems starts at $2,500. Brad Nizdil, President of Total Impact, believes the briQ’s standard form factor will help to establish it as a computer industry standard. He expressed enthusiasm about the possibilities offered by the new device: ”The PowerPC and Linux have enabled us to develop the most powerful network appliance in the smallest footprint available. We are very excited about the briQ’s potential and have already received commitments from several OEMs.” ■
Info http://www.totalimpact.com ■
NEWS
WebSphere
IBM has announced a new version of its WebSphere personalisation software, which now supports Linux. WebSphere Personalisation for Multiplatforms v3.5.2 runs on the Linux, AIX, HP-UX, Solaris, Windows 2000 and Windows NT operating systems. WebSphere can be used by businesses to create personalised marketing messages for online customers. The software can help businesses to tailor Web pages to appeal to customers and gain insight into customer preferences. Ed Harbour, director of marketing for the WebSphere software platform at IBM, said the WebSphere software enabled customers to deploy the Web as a powerful business tool to engage customers. He added: ”IBM’s WebSphere Personalisation allows businesses to develop, deploy and update personalised data quickly on various platforms, including Linux - the fastest growing operating system in the industry.” In Personalisation v3.5.2 the rules-based personalisation capabilities have been extended, enabling the personalisation strategy to be based on keystrokes made during a visitor’s tour of the website. Businesses can gain insight into the visitor’s interests by monitoring which site content has been viewed or what the visitor has placed in the shopping cart.
Info http://www.ibm.com/websphere ■
Groupware Server
Open source solutions provider SuSE Linux has teamed up with Lotus to deliver the SuSE Linux Groupware Server, which incorporates Lotus Domino version 5.0.5 on the SuSE Linux Enterprise Platform. This version of Domino supports XML, Java Servlets, Java Beans and Java APIs for accessing Domino services, enabling it to deliver the functionality of the Lotus Domino messaging and web application server on the Linux operating system. The server offers multi-level security functions, with personalised data access based on individual and group access rights. Other features include tools for groupware, workflow, messaging and scheduling, as well as a platform for web and messaging application development. Message integrity is supported by S/MIME. Its visual development toolkit enables existing applications to be integrated with customer service, customer relationship management and sales support, in addition to travel and expense calculation. The server comes with licences for ten Lotus Notes clients, and SuSE offers sixty days of installation and configuration support.
Info http://www.suse.de/en/produkte/solutions/groupware_server/index.html ■
COVER FEATURE
WEB APPLICATION SERVER: INTRODUCTION
Web Application Server
SHIFT OPERATION JOCHEN LILLICH
Web application servers accelerate the development of more powerful Web applications and simplify their operation. What do application servers do, how do they work, and how can one decide on a specific one?
Overview
• Zope for example: page 16
• ArsDigita Community System: page 23
• Java-based servers: page 26
• jBoss, a free EJB server: page 28
• Commercials: Websphere and Weblogic: page 30
• Cocoon/XML: page 32
”Up to date”, ”interactive” and ”individual” are the requirements for a modern web launch. Static HTML pages don’t cut it, not by a long way. ”Just in time” means a page is only compiled at the moment a visitor invokes it. Behind the classically-presented content such as text and graphics, there is also program logic which takes care of producing the pages. Web development is teamwork. For a clear division of responsibilities, content, layout and functionality are managed separately. To separate out the layout, templates are usually used, into which the content is then inserted at specified places. Editors do not have the option of altering the layout according to their own taste, but concentrate solely on content. And developers care about program logic and nothing but. Common scripting languages such as PHP, ASP or Perl do not offer this clear separation, because the Web logic is embedded in the HTML pages. This is where web application servers come in. They are also referred to as middleware - they sit in the middle, between the Web server, which accepts the calls from visitors, and the database, which manages the components of the Web site. The term for this is ”three-tier architecture”. At least 50 different products crowd the application server market. Amongst the proprietary servers, the best-known are BEA Weblogic, the Oracle Application Server and IBM WebSphere. In the Open Source world, however, things are really moving now. In particular Zope, Enhydra and Midgard have already made names for themselves. A selection of other OSS application servers can be found in the Info box.
How does it work?
So how does this type of application server (from now on called ”AS” for short) work? As usual, it all
starts with the visitor’s web query to the site. Through its coupling with the AS, the Web server recognises that the query belongs to the Web application and passes it on. From the URL, the AS sees which page, and thus which part of the application, should be invoked. This page must now be produced. To do this, the layout and the program logic of the page are needed. The layout is either statically assigned or selected dynamically on the basis of criteria such as browser model, day of the week, time of year or stored user preferences. By executing the program logic, the content elements are combined into a complete page in the selected layout. The result is usually an HTML page, which is sent back to the enquiring browser. In future the data could also be supplied in a more powerful format such as XML (more on this later).
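The flow just described - the Web server hands the query to the AS, which picks a layout, runs the page’s program logic and combines the two into finished HTML - can be sketched in a few lines of Python. All names here (render_page, TEMPLATES and so on) are illustrative and not the API of any real application server:

```python
from string import Template

# Layout tier: templates with placeholders, kept separate from the logic.
TEMPLATES = {
    "default": Template("<html><body><h1>$title</h1>$body</body></html>"),
    "mobile": Template("<html><body>$body</body></html>"),
}

def page_logic(query):
    # Logic tier: assemble the content elements for the requested page.
    return {"title": "News", "body": "Welcome, %s!" % query.get("user", "guest")}

def render_page(query, layout="default"):
    # The AS combines the selected layout with the generated content.
    content = page_logic(query)
    return TEMPLATES[layout].substitute(content)

html = render_page({"user": "visitor"})
print(html)
```

The point of the separation is that editors touch only the content, designers only the templates, and developers only the logic.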
High demands
Once a website is big enough for it to be worth using an AS, the size of the operation also imposes greater demands on the system: an AS must be capable of processing a page query at the same rate at which a simple static HTML file would be served. Nothing is more annoying than potential customers who leave the site because of long waiting times. An AS system must also run stably and reliably. The proven components Linux and Apache deliver the goods in this respect. The best solution for both problems, though, is called clustering.
XML - at last
In Web applications, data often come together from very different sources: texts in ASCII form or as word processing documents, graphics with additional information, customer data from the merchandise information system and so on. So it was not long before the lack of a standard, flexible data format became intolerable. The remedy arrived in the form of the Extensible Markup Language - XML. Anxiously awaited as a saviour by data gatherers, XML is now broadly implemented for the platform- and application-independent representation of information - but mostly only on the provider side. As soon as browser technology is able to present XML data agreeably with the aid of stylesheets, it will be able to show off its advantages on the user side as well.
Spoilt for choice
How then do you choose an application server? In the case of large projects, the software must support distributed development, as far as possible with version management and access protection (locking). As a programming language, Java is preferred, along with Perl and PHP, but other languages like Tcl or Python have also become favourites with Web developers. For many applications, development is greatly accelerated if one can make use of flexible tools and ready-made application modules. Components for easy database access, connection pooling, session management and browser interaction can be found in the form of Enterprise Java Beans, or as Perl modules in the CPAN. If many data sources, and thus also many data formats, are used, open interfaces and standards such as XML are one criterion. Flexible and powerful data administration must be possible at all times. The fact remains that in the field of Web applications a great deal is still in flux. An application server is a puzzle made of so many individual pieces, themselves subject to heavy modification (XML, Java, etc.), that a system with open interfaces and standards conformity has the best chance of survival. Which is the best argument for using open source software here, too. ■
The author
Jochen Lillich is technical director of the Linux system house TeamLinux in Graben near Karlsruhe. When he is not sitting at the keyboard of a computer, he is usually sitting at that of a piano.
Info
[1] XML W3C Draft: http://www.w3.org/XML/
[2] Aquarium: http://aquarium.sourceforge.net/
[3] Ariadne: http://www.muze.nl/software/ariadne/
[4] Enhydra: http://www.enhydra.org/
[5] lxpServ: http://www.commandprompt.com/products_lxp.lxp
[6] Midgard: http://www.midgard-project.org/
[7] rmdms: http://rmdms.sourceforge.net/
[8] Zope: http://www.zope.org/
[9] http://www.linuxde.org
[10] Application Servers and Linux: http://www.linuxplanet.com/linuxplanet/reports/1146/1/
[11] CPAN: http://www.cpan.org
[Figure: Web application servers usually have a three-tier architecture]
WEB APPLICATION SERVER: ZOPE PRACTICE
The free Web application server Zope in use
VILLAGE GOSSIP ON THE NET JOACHIM WERNER
A little-known niche product a year ago, Zope has now become a serious competitor to PHP, Perl and even Java in the field of Web applications. But many outsiders are now wondering, not without some justification, what it’s all about, this ”weird hybrid” which is so hard to classify into one of the usual categories such as ”Web server”, ”application server” or ”scripting language”.
If you ask true ”Zopistas” what Zope really is, they sometimes just start to stutter; otherwise they launch into a very long lecture. The author is in fact himself infected with ”Zopitis”, but nevertheless, on the following pages we will attempt together to understand the Zope phenomenon using a sample application. Whether you, the reader, allow yourself to catch the bug at the end, or prefer to stay with Perl, PHP or Java, is for you to decide. But first, a little look back.
In the beginning was an aeroplane
The year is 1996. The scene is a return flight from a training course on CGI-based programming, taken over at short notice on behalf of a colleague. Jim Fulton, the chief programmer of the small American software firm Digital Creations, started to ruminate. What he had acquired in a few days of self-study for the course on Web programming simply did not fit into his ”world view”. The link between CGI scripts and HTML pages is more than a little tenuous, and advanced functionality is hard to implement. He was especially bothered by the fact that the path via CGI scripts is not ”transaction-secure”: when, for example, just a single step in a log-on procedure fails, it is very time-consuming to cancel the whole log-on. There had to be another way: above all object-oriented, and naturally the whole thing should be written in Fulton’s grassroots programming language, Python. That is how the idea was born. But it then took until autumn 1998 before Zope received its present
name and - on the advice of the financial management of Digital Creations - was let loose on the world as open source software.
”Villages onto the Net” Initiative
Let’s get back to the present, to a little village in Bavaria called Wolterdingen, which is not quite as out in the sticks as one might think, because the village council passed a resolution that the future could no longer be ignored and an Internet presence was necessary - everyone else has got one now, after all. And as the people of Wolterdingen wanted to get a bit ahead of the competition from neighbouring villages, it had to be a ”dynamic” Web presence. All the societies and clubs in the village should be able to publish articles themselves or announce events on the Web. The villagers enquired whether something like that could be done. What they learned at their next village meeting made their ears ring: a couple of confirmed Linux hackers in the village swore by a self-built solution in Perl, naturally under Linux and with the Apache Web server. An SQL database was also needed as backend, either MySQL or PostgreSQL - something the two geeks were not totally agreed on. Then there was the IT consultant, who works during the week at a large software house in Munich. He swears totally by Java, plus a reasonable application server with ”EJB”, ”RMI” and naturally ”JDBC”. Without a database, of course, here again nothing would happen, but it should be a ”proper” one, from Oracle, or at least Sybase. The local PC dealer would
have just liked to point out that it would also be possible to create really great Web sites with Windows and Active Server Pages too, but he didn’t get a word in edgeways. In the furthest corner of the hall however, a cluster had formed around a young woman who had brought her notebook with her. On the screen could be seen a Web site which came close to the proposals of the villagers. ”How did you get that so fast?”, asked the village council leader incredulously. When he heard the reply, his jaw dropped: ”I downloaded this Zope from the Internet yesterday evening. There is a module called Squishdot. I just had to install it and adapt the colours and the logo a bit. And the Web server is already there!”
Zope: The all-in-one solution
[Figure: The Management Interface - all about Wolterdingen on the Net at a glance]
Let us leave the Wolterdingers and take a look at the feature list of Zope (details can be found in the box ”Zope at a glance”). Zope comes, unlike most other Web tools, as an ”all-in-one” solution. The binary distribution even comes with an adapted version of the programming language Python, on which Zope is largely based (a few performance-critical parts are coded in C). The so-called ZODB, the Zope Object Database, serves as the database. As the name says, this is a truly object-oriented database, with which objects and their status variables can easily be serialised and stored. For SQL fans a Python-based test database is included which, although not suited to larger
projects, serves well when it comes to experimenting, and is fully integrated in Zope. As already mentioned, the Web server is included in the form of ZServer, an expanded version of the Python server Medusa. As well as the Web protocol HTTP (including the PUT command for uploading web pages), ZServer also understands FTP and even the WebDAV protocol, which is already supported as standard under Windows in the form of so-called web folders, though it is scarcely known on Linux. This makes it child’s play to load an existing web page, or a folder full of graphics, into the Zope object database. The best is yet to come, though: Zope has a simple and fast Web front-end, by means of which almost all functions of the platform can be administered.
One platform, many ways
To install under Linux, simply download the binary distribution from the Zope server www.zope.org and execute the install script in the folder which is created after unpacking. Zope is then started with the start script and, if nothing has been altered in the basic settings, can be contacted at http://localhost:8080. Naturally the start can also later be automated via a script. Apart from the binary distribution, one can also select the source distribution, which has the parts of Zope programmed in C in source code. Source installation is also governed by a script, and if Python and the GNU C compiler have been correctly pre-installed there is no problem running it under Linux. The source distribution is platform-independent and should run on all commercial Unix systems (including MacOS X) and of course the various BSDs. Naturally, one of the simplest ways to obtain Zope is in the common Linux distributions. But the versions which come with SuSE or Red Hat Linux, for example, sometimes look a bit past their best and thus cannot be unreservedly recommended. For the sake of completeness, the Windows binaries are worth a mention; these could also be of interest for die-hard Linux users, because many people who have been using Linux on their server for a long time may still be running Windows on their notebook. Since Zope applications are completely platform-independent, there is nothing wrong with running a small test server on the Windows computer and then simply exporting the results to the Linux server.
[Figure: Simplified Zope architecture - HTTP, FTP, XML-RPC and WebDAV requests arrive at ZServer and are passed to the Publisher; database adapters (DAs) lead to SQL, LDAP and other sources, the ORB to distributed objects, and the ZODB is reached via ZEO clients and a ZEO server]
Back to Wolterdingen, the village with the big plans for the Web: after the Wolterdingers have settled on Zope, a few more decisions have to be made. Because for Zope the same is true as is often said of Perl: there is more than one way to get things done. In the first place, Zope can be deployed ”out-of-the-box” as a pure content management system. Via the Web interface, easily called up by attaching /manage to the appropriate URL of the website, new managers and users, sub-domains (”folders”) and documents can be created, and images and HTML pages set up. If errors occur at this stage, it’s not a big problem. Firstly, one can make changes in a so-called version. These can then only be seen by those who are also logged onto the version; the visitor to the Web site does not see them until the version is released. Secondly, all actions can be reversed until the administrator ”packs” the database. This is when, depending on the setting, any objects no longer used, whose last modification was more than ten days ago, are deleted. Some objects even offer a History function, with which one still has access to all old revisions of the document or the method. Then, for example, one can make last week’s version into the latest one again, because since then a bug had slipped in. But many users, particularly those without a programming background, will miss some comfort features of professional content management systems, such as more complex release workflows or an inbuilt WYSIWYG editor. Web pages must either be processed in what are usually somewhat small ”textarea” boxes in the browser, or created locally with an editor and then uploaded via FTP or WebDAV. Under Windows, many people swear by Web editors like GoLive. The next step is to provide the pages with a consistent standard_html_header and standard_html_footer. This makes it very easy to achieve a consistent design for all the pages. The navigation menu for the Web site is usually also packed into the header and contains the necessary code to generate on each page a menu or a directory line (Home > Clubs > Rabbit breeders).
Headers and footers, together with other dynamic page elements, are integrated into the pages as tags following the pattern ”<dtml-var name>”. This scripting notation, which is in principle similar to PHP, is called DTML (Dynamic Template Markup Language). Originally it was only intended to include pre-produced modules written in Python in the sites and to realise simple logic with if-elements and loops. In the meantime, though, DTML has become an almost complete programming language with, admittedly, often highly individual syntax.
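To make the idea of ”<dtml-var name>” tags concrete, here is a toy substitution engine in Python - a deliberately simplified imitation of what DTML does, not Zope’s actual implementation:

```python
import re

# Replace every <dtml-var name> tag in a template with the value of the
# corresponding variable. Real DTML also offers if-tags, loops and much
# more; this sketch handles only simple variable insertion.
def render_dtml_vars(template, variables):
    def lookup(match):
        return str(variables[match.group(1)])
    return re.sub(r"<dtml-var (\w+)>", lookup, template)

page = "<h1>Welcome to the <dtml-var title_or_id> site</h1>"
rendered = render_dtml_vars(page, {"title_or_id": "Clubs"})
print(rendered)  # → <h1>Welcome to the Clubs site</h1>
```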
Acquisition: If you haven’t got one, get one!
One particularly useful, but at first also confusing, characteristic of Zope is the so-called ”acquisition” of methods and objects, which needs a bit more explanation. Put simply, all elements of a Zope site are in fact objects. A folder, for example, is an object container, which can contain other objects. A DTML method may look like a simple HTML document containing DTML tags, but towards the folder in which it is located, it behaves like a method which can be applied to that folder. A simple example would be a folder with the title ”Clubs” and the ID clubs_folder on our Wolterdingen Web site, which contains a method view:
<dtml-var standard_html_header>
<h1>Welcome to the <dtml-var title_or_id> site</h1>
<dtml-var standard_html_footer>
If one now goes with the browser to http://www.wolterdingen.de/clubs_folder/view, the header and footer ensure that the standard layout of the Wolterdingen site is used and the following text can be read: Welcome to the Clubs site. If the ”Clubs” folder had not defined a title, <dtml-var title_or_id> would automatically use the folder ID (the ”file name” of the folder, as it were), in this case clubs_folder. Similarly to other web servers, incidentally, Zope automatically searches for a method ”index_html” when http://www.wolterdingen.de/clubs_folder/ is entered, and displays this (in Zope-speak one would say that Zope ”renders” the index_html method). The fact that index.html in Zope is called index_html is, incidentally, due to the fact that the dot in Python (and most other object-oriented languages) is used as a separator between object and method. But Zope can also easily render files with a dot, which is important for example in the case of images (”logo.gif”), because many browsers would not otherwise co-operate.
[Figure: The Control Panel - Zope can be administered completely via the Web]
Now let’s place, say, the rabbit breeders’ club and the gardening club under the clubs folder. Now, at last, acquisition comes into play: http://www.wolterdingen.de/clubs_folder/gardening/view now displays Welcome to the Gardening site - without the view method having been copied into the corresponding folder! Zope simply searches in the next highest folder for a corresponding method and uses it. This means you can do fantastic stuff: the Gardening Club can now use the standard
header and footer. At some point, the committee decides that instead of the standard layout for Wolterdingen, it would prefer to have a couple of green ivy twines along the side. Nothing could be easier: just copy the standard_html_header via copy & paste in the Web interface into the Club’s folder and make the corresponding changes there. Then all you need do is upload the necessary graphics and the whole thing is done. From the next call-up, the view method will use the header of the Gardening Club whenever it is invoked from that folder. The administrator of the Wolterdingen site does not, of course, want every club Web master making changes willy-nilly all over the site. So she can create so-called User Folders, which define users for the folder in which they are located and for all sub-folders and objects stored therein. This procedure allows roles to be defined for any kind of user, e.g. Club member, Committee or Guest. These roles can then be linked with individual rights in a very finely-tuned way. So a committee can be given the right to add new users and delete them for their club. The User Folders can, by the way, also be replaced by
Zope service providers:
beehive elektronische Medien GmbH: http://www.beehive.de
iuveno - Smart Communication AG: http://www.iuveno.de
Lightwerk Premium Internet Solutions: http://www.lightwerk.de
A selection of Zope products and modules:
Database adapters (DAs) and Z Object Database implementations
Zope Interbase Storage: ZODB implementation for Interbase from Borland
Zope MySQL Database Adapter: MySQL DA
DBMaker Database Adapter: DBMaker DA
Ultraseek DA: DA for the search engine Ultraseek
Zope ODBC Database Adapter: ODBC DA for Windows
Z Solid Database Adapter: DA for the commercial DB from Solid
ZPoPyDA: latest PostgreSQL DA
ZRadius: DA for the Radius protocol
Zope Sybase DA: Sybase DA
Berkeley Storage: ZODB implementation for the Berkeley DB
GV Interbase Database Adapter: Interbase DA
ZopeLDAP: adapter for querying LDAP from Zope
SAPDB-DA: SAP DB database support (still in development)
User Folder implementations
Generic User Folder: general implementation, which can be used with various DAs
NTUserFolder: User Folder for NT domains
smb User Folder: imports SMB/Samba users into Zope
MySQL user folder: User Folder for MySQL
LDAP User Folder: authenticates Zope users against an LDAP directory
Zope applications
Squishdot: Web log a la Slashdot
Portal Toolkit: trial toolkit for community portals
Zwiki Web: Zope version of the web community tool Wiki Wiki Web
ZopeGUM (Zope Grand Unified Messenger): groupware tool a la MS Outlook
Other
ZpdfDocument: generates PDF documents from Zope
DTML-Tex Product: generates TeX documents from Zope
Etailer: e-commerce shop
Zope Cascading StyleSheets: style sheets easily processed via forms
Zope Internet Explorer Editor Demo: HTML WYSIWYG editor; sadly, only for MS Explorer
zCommerce: e-commerce shop
ZDP-Tools
Active Images
GDChart Product
Emarket: e-commerce shop
Local File System: access to the file system via Zope (e.g. for uploads/downloads over the Web)
Site Access 2: enables virtual hosting with Zope
ZSwGenerator: generates Flash code from Zope
PHP Object: generates PHP code from Zope
plug-ins and users imported automatically from an LDAP directory or from a Samba server.
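The acquisition mechanism described earlier - look for a method in the object itself, then in each containing folder in turn - can be sketched in plain Python. This is an illustration of the idea only, not Zope’s actual implementation, and all class and method names here are ours:

```python
# A folder knows its parent and its own methods; acquire() walks up the
# containment hierarchy until it finds the requested method.
class Folder:
    def __init__(self, name, parent=None, methods=None):
        self.name = name
        self.parent = parent
        self.methods = methods or {}

    def acquire(self, method_name):
        folder = self
        while folder is not None:
            if method_name in folder.methods:
                # Run the acquired method in the context of the object it
                # was called on, so title/ID lookups use the right folder.
                return folder.methods[method_name](self)
            folder = folder.parent
        raise AttributeError(method_name)

root = Folder("clubs_folder", methods={
    "view": lambda ctx: "Welcome to the %s page!" % ctx.name,
})
gardening = Folder("gardening", parent=root)

# The gardening folder has no view method of its own, but acquires one:
print(gardening.acquire("view"))  # → Welcome to the gardening page!
```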
Applications out of the construction set
Apart from the aforementioned plug-ins, the Zope website contains a multiplicity of more or less useful expansion products for Zope. A large group of these are the database adapters, about which more later. There is also a large number of smaller modules for creating charts, advertising banners, navigation menus and other elements. But what interests the people of Wolterdingen are the ready-made ”Zope applications”, of which Squishdot is only the best known. With Squishdot it is possible to create so-called ”web logs”, a la Slashdot, without the trouble of programming. The Web master only has to complete a few configuration forms and upload the necessary graphic elements, and there it is: an individualised web log with complete functionality. Many more interesting Zope applications are planned for the next few months: groupware tools following the example of Microsoft Outlook, toolkits for constructing portals for the Internet and/or for company intranets, a ”proper” content management system which can be used without HTML or DTML know-how, and more besides.
ZClasses - A (Z-)class for all seasons
If we take a closer look at the Wolterdingen club pages, one wish comes to the fore: wouldn’t it be nice if there were a simple model, a ”template”, with which the Web master could simply add a new club with all the necessary elements, such as its own calendar of events or a member directory? No problem: for this Zope has ZClasses - Zope expansions, programmable via the web front-end, which can then simply be selected from a menu just like the products downloadable from the Zope website. In most cases a corresponding form then has to be completed, for example to specify the administrator for the new club or to make the first layout adjustments. Anyone who is now wondering how to find their data again in these ZClasses without SQL can find the answer in the ZCatalog, a search engine integrated into Zope. It is fairly powerful and fast, with automatic index generation, though at least in older Zope versions it still had a few little bugs, such as failing to interpret German umlauts correctly.
Perl and Python in close harmony ...
Sometimes - often fairly quickly - one reaches a point with Zope’s DTML scripts where there is a keen desire for a ”proper” programming language. Zope has several options for this. Firstly, you
can call up ”External Methods” written in Python from Zope. These simply have to be placed in the Extensions folder of the Zope installation and activated in the Web interface. Secondly, it has now become possible to create small Python Scripts via the Web front-end, though their functionality is somewhat limited on security grounds. Both options will - perhaps even by the time this article comes out - also be realised for Perl. This means that in future Zope will be able to tap the almost inexhaustible resources of existing Perl code, such as the excellent Wwwlib, even if some Python fans will certainly turn aside with a shudder. With external methods and scripts, almost any additional functionality required can be integrated into Zope, because behind a small method there can also be a ”wrapper” for an entire program library (e.g. the PIL library for image processing or the powerful XML tools from Python). But the royal road to Zope programming is found in the so-called ”Zope products” - new Zope modules, written completely in Python. We have already looked at these from the point of view of the user: Squishdot is a Zope product. Because there are now very good instructions and aids, such as complete code templates, programming Zope products is simpler than you might think. Anyone who already has the corresponding know-how in Python - and that’s not so hard to acquire - will often achieve an objective
A look into the Zope crystal ball
It is pretty certain that Zope will not be getting Java support in the near future. In return, Zope will soon be fully compatible with the new SOAP protocol, which, although promoted among others by Microsoft, is nevertheless an open and highly promising standard for distributed applications. Work is also proceeding at fever pitch on improving the existing XML support. The first two fruits of this work are HiperDOM - a new XML-based template concept comparable to the XMLC approach from Enhydra - and Parsed XML, the new XML rendering engine for Zope. Internally, there will at first be only small changes. The porting to Python 2.0 and the ability of Zope to handle Unicode character sets are as good as finished. This Unicode support is an important building block on the way to full multilingual capability for Zope. At present there are efforts, mainly in Europe, to translate the web interface of Zope; a German version is already available. In the next step the project will also be carried over to application level, to simplify work on multilingual websites. Another interesting product is the ZPatterns framework. Unfortunately, ZPatterns has previously only been properly understood by an illustrious circle of Zopistas. The underlying concepts for reusability of functionality and clean abstraction between data and views of that data are, however, ingenious. The majority of activity in the next few months is expected to be in Zope-based applications: for example, work is proceeding at a feverish pace on a groupware system, a Content Management Framework and a Portal Toolkit. Last but not least, various projects are also underway which aim at a sort of IDE (Integrated Development Environment) for Zope. The most ambitious of these was ZopeStudio, a development platform based on the new features of the Mozilla browser.
Unfortunately, because of various technical problems, the project has for the time being been put on ice at the Mozilla end.
6 · 2001 LINUX MAGAZINE 21
COVER FEATURE
WEB APPLICATION SERVER: ZOPE PRACTICE
faster than with ZClasses and DTML. Python simply has a clearer, shorter syntax and is easier to read. This does mean, however, that team programming over the web is only possible to a limited extent: Zope's Versions and Undo are of course not available for external files. For this, though, one can simply turn to tried and tested programmer's tools such as CVS.
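An External Method of the kind described above is just an ordinary Python function. A minimal sketch (the file name, function name and greeting logic are illustrative; in a real installation the file would live in the Extensions folder and be registered in the management interface):

```python
# Hypothetical Extensions/greeting.py - a minimal External Method sketch.
# Zope binds the calling context to the first argument when the method
# is invoked through the Web; it is unused here.

def greeting(self, name="World"):
    """Return a greeting string, callable from DTML or directly via a URL."""
    return "Hello, %s!" % name
```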
The author: Joachim Werner is the father of a two-year-old son who, unfortunately, can already say "mobile" but not yet "Zope". As founder and CEO of iuveno Smart Communication AG he sadly all too rarely gets round to looking at picture books with his son. His largest project at present is the complete conversion of a university to a Zope-based content management system.

Info
[1] The Zope homepage (downloads, manuals): http://www.zope.org
[2] "Zope Newbies" (daily news about Zope): http://weblogs.userland.com/zopeNewbies/
[3] EuroZope (the European Zope community): http://www.eurozope.org
[4] Zope Documentation Project: http://zdp.zope.org
[5] The Zope book from Digital Creations (published by O'Reilly); download at: http://www.zope.org/Members/michel/ZB/
[6] beehive elektronische medien GmbH (pub.): Zope; The Open Source Web Application Server, dpunkt.verlag, planned to appear in April 2001, ISBN 3-932588-93-2 ■

Hello World! - integrating Zope
One point which we have so far only touched on in passing is the integration of Zope into existing system landscapes. The reason for this is simple: because Zope comes with everything needed for the average web application, there is no need at the beginning to worry much about SQL databases or alternative web servers. In most cases, though, it's not all plain sailing: often, existing SQL databases need to be brought onto the web. Zope is well equipped for this too. For many commercial and most common open source databases there are so-called database adapters (DAs), though their quality does vary. The DAs maintained by Digital Creations itself for Oracle and Sybase are especially stable, but the PostgreSQL and MySQL drivers also leave very little to be desired. MySQL, though, has only recently started to provide substantial support for rolling back half-finished transactions, and the corresponding DAs were not yet ready when this article was written. This means that when using MySQL you lose part of Zope's terrific transaction functionality.

Apart from SQL, other database interfaces are supported by Zope in one way or another, such as LDAP, the Lightweight Directory Access Protocol, or the free Berkeley DB. In fact, practically any data source for which there is a Python interface can be integrated into Zope very easily via a small wrapper. CORBA, the Common Object Request Broker Architecture used e.g. in the GNOME project, is not yet supported by Zope, although there have been plans to do so for years now. Zope has yet another goody ready here, one which could in future be even more important than CORBA: Zope supports XML-RPC (XML Remote Procedure Calls). XML-RPC is a protocol which is actually very similar to HTTP, just expanded by a few extra commands; instead of a request for a Web page or the corresponding reply from the Web server, the body of the message carries method calls and their results.
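This wire format is easy to inspect with Python's standard library (shown with the modern xmlrpc.client module; the era-appropriate module was called xmlrpclib, and the method name and arguments below are made up):

```python
# Marshal a method call into the XML document that travels in the HTTP body,
# then unmarshal it again - the round trip an XML-RPC client and server make.
import xmlrpc.client

body = xmlrpc.client.dumps((6, 7), methodname="multiply")
print(body)  # an XML <methodCall> document

# The receiving end turns it back into Python values.
params, method = xmlrpc.client.loads(body)
```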
So one could, for example, very easily pass a database query to a Zope server from another Zope server, or from any XML-RPC client at all. With XML-RPC, Zope can thus practically be completely remote controlled! A spicy remark in passing: the new SOAP protocol, which Microsoft introduced in the framework of its ".NET" initiative - and which, by the way, is open and free to use - is based on XML-RPC. SOAP can also be integrated into Zope very easily. Which means that Zope servers could very
soon play an interface role between Linux and the Microsoft world.
Apache? That works, too

Even though Zope comes with its own web server, it can sometimes be useful to rely on existing server infrastructure. In principle Zope can be operated with most common Web servers in two ways: either as a CGI script, using the FastCGI and Persistent CGI variants, or via a "proxy" configuration, in which calls to a specific address are simply passed on by the Web server to Zope's ZServer. The box "Zope and Apache" goes into more detail about how this proxy mode is used with the Apache webserver.
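In proxy mode the Apache side amounts to little more than a pair of mod_proxy directives. A sketch (the path and port are assumptions; ZServer's HTTP port is commonly 8080):

```apache
# Forward everything under /zope/ to the ZServer running behind Apache,
# and rewrite redirects coming back so clients keep talking to Apache.
ProxyPass        /zope/ http://localhost:8080/
ProxyPassReverse /zope/ http://localhost:8080/
```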
Zope Enterprise Objects: high availability for the poor

One last time, back to that village in Bavaria: say the villagers of Wolterdingen were to stumble across a hot spring. If "Wolterdingen Spa" were then flooded with enquiries from visitors, the single Zope server they operate might soon become too small. But there is a remedy for this, too: with "Zope Enterprise Objects" (ZEO), distributed clusters of Zope servers can be constructed - www.zope.org is one such cluster. ZEO allows the operation of, in theory, as many Zope servers as required against a central Zope database server. The Zope servers each have their own cache and can bridge brief interruptions of contact with the central server. Zope father Jim Fulton is already working on making the database server redundant as well. Then Zope could finally become a player in the super-league of scalable high-availability solutions.
And where are the Java Beans?

To become a "proper" application server, all Zope now obviously lacks is support for Java Beans, or indeed Java at all. But this is not going to happen any time soon, since the people at Digital Creations are certain that anyone who has worked with Python will only build web solutions in Java at gunpoint. Quite simply, because Python scripts and modules in Zope can be tested immediately after saving, without compilation, development is far more productive than with Java.

To finish, there is still the answer to the question which you have probably been asking all along: "Does the little village of Wolterdingen actually exist?" Yes, it does. And the good people of Wolterdingen have also decided on a Zope-based Web presence, which has turned out very nicely and with which they are, so far, very happy. Whether the village meeting actually proceeded exactly as described by the author, or whether he dreamed a bit of it up, shall remain a secret. ■
The ArsDigita Community System
DIGITAL LOCAL CALLS DIRK GOMEZ AND MARTIN SCHMEIL
ACS is a tool for creating and running online communities - a specialised form of Web application server. The original system is designed to run on Oracle and the AOL web server; in addition there is OpenACS for PostgreSQL and Apache.
The ArsDigita Community System (ACS) is something out of the ordinary. It performs the typical tasks of a Web application server, but is not marketed as such; its focus is the construction and running of Web communities. These include not only the typical open communities on the WWW: in particular, ACS is intended to enable project-oriented teamwork by work groups scattered all over the world.
When someone goes on a trip...

ACS, with its approximately 50 modules, is the most comprehensive free standard Web software package. It has its origins in a trip lasting several weeks. In 1993 the MIT lecturer and photography fan Philip Greenspun travelled to Alaska in his minivan. Every week he wrote a chapter on his experiences and published it on the Internet. Shortly after he returned, Mosaic, the first widely-used Web browser, came out and the Internet went
multimedia. Greenspun added pictures to his travel reports and built a system to make replying to the many e-mail messages he received easier. Over the next three years he expanded the original photo.net software into community software. The history of the system is well documented in "Philip and Alex's Guide to Web Publishing", in which Philip addresses all the techniques necessary for the construction of a Web service. The book is still an interesting read today, as it conveys an overview of the "big picture".
The heirs of the past - Oracle and TCL

Philip Greenspun decided to make use of the following technologies: the NaviServer (later renamed AOLServer), Oracle and TCL. The following arguments were in favour of the AOLServer: most Web servers use CGI scripts to display dynamic content, and each access to a page requires a program
to be started and, if applicable, a database connection to be opened. The AOLServer had a TCL interpreter integrated right from the start, allowing multithreaded page generation within the Web server. When an AOLServer starts, the database connections are made and then kept open for the whole run time - opening them is otherwise a very time-consuming operation. For some years now, incidentally, the AOLServer has been free and open source.

Greenspun decided on Oracle mainly because of the following advantages:
• Oracle uses internal version management, so that read transactions never have to wait for write transactions and vice versa.
• Oracle has a complete programming environment within the database server. Software running within the database itself does not need to transfer any data between database and Web server.
• Oracle is extremely stable and very powerful.

In 1997, Greenspun founded ArsDigita with some of his students. They took the source code which had evolved over three years and over the summer hacked the ArsDigita Community System together. ArsDigita placed the ACS under the GPL, and anyone interested can download the system from http://arsdigita.com.
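The connection-keeping trick that made the AOLServer fast can be sketched in a few lines of Python (the pool class and the connect callable are illustrative, not AOLServer code):

```python
# Pay the expensive connection cost once at start-up; afterwards each
# request just borrows an already-open connection and returns it.
import queue

class ConnectionPool:
    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # open all connections up front

    def acquire(self):
        return self._pool.get()         # blocks if every connection is busy

    def release(self, conn):
        self._pool.put(conn)
```

A request handler then brackets its database work with acquire() and release() instead of opening a fresh connection each time.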
OpenACS complete with free software

Because the classic ACS is indeed free but requires Oracle as its database, a few former colleagues from ArsDigita breathed life into the OpenACS project and ported the kernel and modules to PostgreSQL, the only free database at that time.

[Photo: Philip Greenspun, founder of ArsDigita, with Alex]

In the meantime
work has been proceeding in a one-man project to port ACS to Interbase; from this project, incidentally, an Interbase driver for the AOL web server has just come out in version 1.1. OpenACS is now available as a complete RPM package for Red Hat, and work is going on to package it for other distributions. Installation takes only about ten minutes. To make ACS run under Apache as well, a module was developed which emulates the AOLServer API under Apache; the OpenACS RPM can therefore use Apache as its Web server.

Up to version 3.4 the ACS grew organically and essentially represents a collection of loosely-connected tools. Version 4, which came out recently, brings a generation change: the kernel of the system has been completely redesigned and a large part of the application logic has been relocated into the database, so that TCL is now merely the glue language. OpenACS is still based on version 3.2. As soon as the database layer is implemented under Postgres, the porting cycles will presumably become considerably shorter. But as long as there is no large developer community, OpenACS can be expected to continue trailing a long way behind the classic ACS: there are about 180 developers working at ArsDigita.
The end of a toolbox

Earlier versions of ACS were more of a programmer's toolkit, which considerably speeded up the creation of Web sites. Version 4 is the first fully-integrated product; an out-of-the-box installation can, with a bit of configuration via the Web browser, already run as a simple Web site. The package manager can be used to download applications from the Web, install them and mount them in the site map at one or more URLs. In this way, a module can be used as often as required. All modules mentioned below are available for both ACS and OpenACS, or are currently being ported.
The modular construction remains

There is a refined privilege system, which recognises users, user groups and user groups nested within each other, and allows secure web applications with finely-grained access rights. Very large, hierarchical Web applications can thus be created and administered in the Web browser. The Content Repository provides an API which encapsulates the administration of content of any type. Versioning, categorisation, permissioning and workflow are handled by the corresponding functions, so that developers can concentrate on creating user-oriented applications. One example of an
application of this layer is the Content Management System of ACS. The workflow module has functions which considerably aid the development of process-oriented applications. Internally, workflow processes are modelled as Petri nets; the system is generic and can be used by a developer with relatively little effort. Two sample applications are the workflow module itself and the ticket-tracker module. Additional service modules are notification (e-mail alerts on objects), messaging (for example for creating Web boards or e-mails), LDAP authentication and the templating system. These layers of abstraction considerably ease the adaptation of the ACS to individual settings, and adapted systems profit from further advances in the kernel modules.
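The Petri-net idea behind the workflow module can be illustrated with a toy marking in Python (a sketch of the general technique, not of the ACS data model; the place names are invented):

```python
# A transition is enabled when every input place holds a token; firing it
# consumes those tokens and produces tokens on the output places.

def fire(marking, inputs, outputs):
    """Fire one transition if enabled; return True on success."""
    if not all(marking.get(p, 0) > 0 for p in inputs):
        return False
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] = marking.get(p, 0) + 1
    return True

# e.g. an article moving from 'draft' to 'review' in an editorial workflow
marking = {"draft": 1}
fire(marking, ["draft"], ["review"])
```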
A module for every function

The application modules can be divided into five categories:
• Collaboration: since the ACS grew out of a Web forum oriented towards collaboration, the collaboration modules are the most important and the largest ones. Address book, bulletin board, bookmarks, calendar, chat, file storage, Intranet, ticket and WimpyPoint (Web-based presentations) form the backbone of most ACS-supported Web sites.
• The Publishing modules support and simplify the administration of content: adserver, banner ideas, display, dynamic publishing system, FAQ, general comments, general permissions, graphing, news, poll, prototype builder and spam.
• The Personalization modules make it possible to adapt a website to users or user groups: portals, user groups, user session tracking and member value.
• The Site Management modules, audit and site-wide search, support auditing and categorised site-wide searching.
• The Transaction modules mainly encompass the e-commerce module, which makes it possible to set up an Amazon-type shop, and the classified ads.
Documentation is everything, but there's more

Open and free software includes good documentation, which is why ACS comes with manuals for programmers in the form of HTML files. The documentation standard prescribes that the requirements of a module must be described in it. The bulletin boards of ArsDigita [7] and OpenACS [8] are another good source of information for web and database developers. The ArsDigita Systems Journal, on the other hand, is a more general online magazine on the subject of Web-based information systems. Also to be found there are online editions of the three computer books written by Philip Greenspun: the already-mentioned Philip and Alex's Guide to Web Publishing, SQL for Web Nerds and TCL for Web Nerds. At irregular intervals, events on web development with ACS take place in Germany (see the ArsDigita Web site). Also interesting - especially for small communities - is the OpenACS hosting offer from the firm Furfly.

ArsDigita has been working flat out for some months now on a Java version of ACS which runs under Apache; the commercial market is crying out for it, and the firm is hoping for a considerably stronger presence in the free software scene. An alpha version was released in mid-November, and the final release should be available for download by the time this issue comes out. The latest information on this can be found in the developer zone of the ArsDigita Web site. ■
Info
[1] ArsDigita homepage: http://www.arsdigita.com
[2] Philip Greenspun: Philip and Alex's Guide to Web Publishing (Morgan Kaufmann; ISBN 1558605347)
[3] OpenACS: http://openacs.org
[4] Interbase variants of ACS: http://acs.lavsa.com/acsinterbase/
[5] OpenACS packages for download: http://openacs.org/software.adp
[6] Content Management System: http://cms.arsdigita.com
[7] BBoard of ArsDigita: http://www.arsdigita.com/bboard
[8] BBoard of OpenACS: http://openacs.org/bboard
[9] Philip Greenspun's books online: http://www.arsdigita.com/books/
[10] ACS hosting: http://openacs.furfly.net/services.html ■
The authors
Dirk Gomez has been a self-confessed fan of Philip and Alex's Guide to Web Publishing for years, and has been developing database applications for almost ten years. Martin Schmeil has for the past four years been a Sybase/Oracle DBA and developer in a large multimedia agency. He publishes the (non-profit) yellow pages of the punk-rock "BDEBL" in print and soon (perhaps with ACS) online.
Java-based Application Server
JAVA ISLAND DUEL
BERNHARD BABLOK AND ULRICH WOLF
Java application servers are as common as espressos in a coffee bar. We present a brief overview of some of the available servers and give some tips on selection criteria.

It all depends on the service
Unfortunately, "application server" is not a protected trademark, which means that nowadays any application which can create dynamic websites calls itself an application server. Yet the presentation layer is not a core task of an application server, especially since an HTML interface is only one of many options for the user interface in modern systems. In my opinion the following definition is more apt: an application server provides a generic processing environment for application logic.

The services offered by an app server represent an important evaluation criterion for selection. The strengths and weaknesses of the various application servers are a result of their respective origins: database manufacturers who are leaping onto the e-commerce bandwagon naturally provide powerful solutions for the persistence layer, while an ORB manufacturer shines in the field of distributed objects. Important services provided by app servers are:
• Resource management, e.g. connection pooling of database connections
• Persistence of objects
• Authentication, authorisation, secure connections
• Support for distributed objects, for instance by means of RMI or CORBA
• Finding objects (naming)
• Support for (distributed) transactions

Application servers which meet the J2EE specification (Java 2 Enterprise Edition) must provide corresponding standardised implementations of the listed services. In particular they must supply a complete Enterprise JavaBeans (EJB) container; details of EJB containers can be looked up in the Coffee-Shop column. Anyone who has to access data stored on a mainframe will in any case also need corresponding support from the application server.
Management of application servers

Smooth operation of critical business applications depends not only upon these services, but above all
on the management of the server in day-to-day operation. If the server sits in a DMZ, a nice CORBA-based admin console will be no help at all (unless HTTP tunnelling of the CORBA connections - usually unspeakably slow - is possible). Automated server management is possible via integration with SNMP-based tools or proprietary solutions (such as Tivoli). Syslog support is sadly uncommon, although no platform-specific code would be necessary for it. The same goes for log files in which messages carry numerical IDs, which considerably simplifies parsing compared to purely text-based logs.

Another important criterion is platform availability and the consumption of resources. Pure Java implementations run, theoretically anyway, on all platforms, but sometimes the devil is in the detail: occasionally installation fails on Linux simply because a corresponding installation routine is missing. If individual parts of the app server are not implemented in Java (e.g. parts of the CORBA services), platform-specific versions are necessary. Consumption of resources can admittedly be managed with correspondingly large machines and the aforementioned load balancing. But a complete development environment (with server, IDE and database) should still run at a reasonable speed on an individual PC. This is particularly important for mobile developers, who rarely have 512MB of memory available on their notebook.
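The point about numerical message IDs is easy to see in code: a monitoring script can dispatch on a stable number instead of pattern-matching free text that may change between releases. A sketch with an invented log format:

```python
# Parse lines of the (made-up) form "TIMESTAMP [ID] free text" - only the
# numeric ID needs to stay stable for downstream tooling to keep working.
import re

LINE = re.compile(r"^(?P<ts>\S+) \[(?P<id>\d+)\] (?P<text>.*)$")

def parse(line):
    m = LINE.match(line)
    if m is None:
        return None
    return m.group("ts"), int(m.group("id")), m.group("text")
```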
Frameworks and development tools

Anyone who develops applications for app servers wants to have as little as possible to do with the details of the implementation. Appropriate development tools simplify and accelerate the process here; interfaces to existing IDEs such as Borland's JBuilder should also have their place in the decision-making process. In the commercial domain there is also the requirement that standard logic, such as address management, should not have to be constantly reimplemented. Either the manufacturer of the app server supplies a framework along with it, or it keeps to
existing standards, so that the integration of solutions from third party suppliers poses no problems. An advantage of following standards is that one is not tied to the manufacturer of the server.
Brief overview of major servers

The following sections give a brief overview of the principal application servers. For reasons of time it has not always been possible to install every product and research all its features, so it was sometimes necessary to rely on information from the manufacturer; these details should be verified before use. If a product made no statement on Linux compatibility, it did not enter into our selection (unless it was explicitly a pure Java server). Since many servers run under Unix derivatives such as Solaris or AIX, the picture might look very different in a few months or even weeks.
Enhydra Enterprise

The future version 4 of the open source Enhydra Enterprise application server (at the moment the fourth alpha version is being tested) offers, firstly, complete support for the J2EE standard and, secondly, through various working groups, a range of tools which greatly simplify the development of HTML/XML-based applications. Sub-components are taken over from other open source projects; for example, JOnAS is included in Enhydra Enterprise. Enhydra Enterprise will be presented in more detail in a Coffee-Shop column. Version 3.1 of the "Enhydra Application Server" (without "Enterprise") implements much of the functionality addressed above and has already been introduced in the Coffee-Shop (see [14]).
JOnAS Application Server

JOnAS came about as the result of a joint venture between various firms, with the participation of France Telecom, and implements the current EJB 1.1 standard. The ObjectWeb group, which maintains JOnAS and the projects connected with it, Joram and Jonathan, consists of developers at French universities and telecommunications firms. Its objective is to build a platform for distributed objects for the telecommunications industry.
jBoss Application Server

jBoss (also open source) was originally called EJBoss, but the abbreviation EJB is a registered trademark of JavaSoft, so the name was simply shortened. The jBoss Web site states that some 500 developers world-wide are supporting the further development of jBoss. Version 2.0, the current release, provides an EJB 1.1 standard environment, and support for the EJB 2.0 specification is being developed for the next release.
jBoss is really compact and comes either standalone or bundled with the servlet engines Tomcat and Jetty. Its greatest shortcoming is the lack of documentation, but this is being addressed at top speed: a lot can already be seen online, though at present it is not yet possible to download the complete documentation. The project is presented in more detail in the next article.
Exolab Application Server

The Exolab app server has not yet reached a usable state. The aim is to implement a server which supports the EJB 2.0 specification right from the start. Chief architect of the server is Richard Monson-Haefel, the author of the classic Enterprise Java Beans.
The author Bernhard Bablok works as a systems programmer in the field of systems-management. When he is not listening to music, cycling or hiking, he spends his time in the world of object orientation.
Orion Application Server

The application server from the small Swedish firm Evermind Data is fast and, in comparison with other commercial products, a bargain: each production installation costs US$1500, while for development and non-commercial use the server is free. Orion, in the current version 1.2.9, already supports parts of the future EJB 2.0 standard, including Message Driven Beans, and parts of the not-yet-finalised servlet standard 2.3 are also implemented. The Orion application server comes with its own Web server, but it can also be persuaded to work with Apache.
Silverstream Application Server

The stock-market-listed software house Silverstream, from Massachusetts, supplies a complete e-commerce suite, of which the Silverstream application server is a central component. Currently the software is available in version 3.5; version 3.7 (at beta stage at the time of writing) was scheduled for release at the end of 2000. Worth noting is the integration of various IDEs, including Macromedia DreamWeaver.
Inprise Application Server

Inprise maintains one of the more traditional entries among Web application servers; integration with JBuilder goes without saying here. Its roots lie in VisiBroker, from Visigenic, one of the first Object Request Brokers (ORBs); Borland bought Visigenic in 1997. The Inprise application server is written in 100 per cent Java, installation routines are currently available for Red Hat 6.0, and it supports the common J2EE standards such as EJB 1.1.

At this point we could go on forever. Nearly every company above a certain size which has nailed its colours to the mast of system integration has a Web application server in its portfolio; the Info box lists a few beyond those discussed here. Two commercial products with very high market shares, namely IBM WebSphere and BEA WebLogic, are presented in somewhat more detail from page 56. ■
Info
[1] Lutris Enhydra Enterprise: http://www.enhydra.org
[2] IBM WebSphere Application Server: http://www4.ibm.com/software/webservers/appserv/
[3] BEA WebLogic Application Server 5.1: http://www.bea.com/products/weblogic/server/
[4] Silverstream Application Server: http://www.silverstream.com/
[5] Inprise Application Server: http://www.borland.com/appserver/
[6] Orion Application Server: http://www.orionserver.com
[7] JOnAS Application Server: http://www.objectweb.org/jonas/
[8] jBoss EJB Server: http://www.jboss.org
[9] Unify eWave Application Server: http://www.unifyewave.com/overview.htm
[10] Exolab Application Server: http://openejb.exolab.org
[11] Oracle Application Server: http://www.oracle.com/ip/deploy/ias/
[12] IONA iPortal Application Server: http://www.iona.com/products/
[13] JRun Application Server: http://www.allaire.com/products/jrun/
[14] Coffee-Shop: Im Zeichen des Otters [Under the sign of the otter], Linux-Magazin 08/2000, p. 128 ff.
[15] Richard Monson-Haefel: Enterprise Java Beans, 2nd edition, O'Reilly 2000 ■
Insight into the jBoss Project
SCALING THE HEIGHTS
DANIEL SCHULZE
jBoss is a free project for an Enterprise JavaBeans application server. The extensive and lively developer scene, and the consequent orientation towards modern Java technologies such as JMX, could help jBoss scale the heights of its class.
The proposition sounds fascinating: the basis for applications no longer consists of the operating system, but of a more abstract, clearly-drawn, well-documented and component-oriented platform - an Internet operating system, or J2EE, as Sun Microsystems unpretentiously calls it. And now this new infrastructure, just like Linux, is to be made available free to everyone who needs it. Mesmerised by this vision, and for the fun of programming, Marc Fleury (then employed at Sun as a Java evangelist) founded the EJB Open Source Server (EJBOSS) project. Very quickly, high-calibre coders such as Richard Monson-Haefel, contributing to both design and code, got the young project off the ground. The project was given an additional impetus when Rickard Öberg came across it and added the latest available Java technologies, which lent jBoss, in its current second generation, standard-setting characteristics.

JMX - guarantee of simple interfaces and expandability

The technical kernel consists of a JMX (Java Management eXtension) server. JMX, a management API introduced by Sun as an official J2EE component, will soon become the standard for the management of distributed Java applications, so
that commercial suppliers are following suit: BEA, for example, is integrating JMX into WebLogic. The advantage of this technology is a modular structure with clear interfaces, making it simple to integrate additional components and extensions. What originally began as an EJB container is now well on the way to becoming a proper J2EE server. jBoss is developed 100 per cent in Java, has an extremely small memory footprint - which makes it ideal for embedding into other systems - and is licensed under the LGPL. At present there are JMX wrappers for two Web server/servlet containers (Tomcat 3.2, Jetty 3). There is also an integrated JMS (Java Messaging Service) implementation and an integration module for Castor, a free JDO implementation. jBoss speaks SOAP, and there is even an integration module for the well-known commercial O/R mapping tool Cocobase.

The secret of this rapid success lies in the way in which Fleury manages the community 24 hours a day, seven days a week, and makes sure that no-one loses focus. There is a very friendly, helpful atmosphere on the mailing lists; anyone can contribute code, and there is constant encouragement to do so. So a frequent reply to queries as to when a feature will become available is:
"When you start your IDE and begin to make them happen!" Many developers integrate their own projects into jBoss; examples worth mentioning include SpyderMQ (JMS) and Minerva, the implementation of a database connection pool. The main focus of the whole project is ease of use; the motto: hide as much complexity from the user as possible.
Simple usability - the project's main objective

This requirement has brought with it, among other things, features like dynamic proxies and hot deploy/hot redeploy. Anyone who wants to deploy an EJB archive under jBoss does not need to know how to create skeletons and stubs or which of them must then go into which class path: simply copy the archive into the deploy folder and voilà - beans deployed! If one of the two available servlet engines is integrated, Web archives (.war) or entire enterprise archives (.ear) can also be started simply by drag and drop. At present, though, .war and .ear integration is not yet 100 per cent J2EE-compatible; there are still problems in the JNDI and security integration.
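The copy-and-go behaviour described above boils down to watching a directory. A polling sketch (the directory layout and callback are illustrative; the real server of course also unpacks and wires up the archive):

```python
# Report each archive that newly appears in the deploy directory, once only.
import os

def scan_deploy_dir(path, seen, on_deploy):
    """Call on_deploy(name) for every not-yet-seen .jar/.war/.ear file."""
    for name in sorted(os.listdir(path)):
        if name.endswith((".jar", ".war", ".ear")) and name not in seen:
            seen.add(name)
            on_deploy(name)
```

Run in a loop with a short sleep, this is the essence of a hot-deploy scanner: dropping a file in triggers deployment, and already-deployed archives are skipped.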
Unpack and go - installation is no problem

Ease of use begins from the moment the server starts. Download jBoss from the Internet - no matter whether from CVS or as a binary distribution - simply execute sh run.sh (in the case of CVS, sh build.sh first) and it runs. An additional feature accompanying the distribution is EJX, a GUI-based XML file editor for simple assembly of EJB archives and the metadata they need (ejb-jar.xml). This tool is being completely revised and will reappear in a few weeks with renewed sparkle.

The EJB container itself currently supports EJB specification version 1.1. It integrates a transaction monitor, JAWS (a persistence manager) and a JAAS (Java Authentication and Authorisation Service) compatible security layer. In principle any relational database can be used for the persistence layer (BMP/CMP), as long as a JDBC driver is available for it. The distribution already includes two pre-configured pure Java databases (Hypersonic, InstantDB). Mapping data for the following additional databases is available for JAWS, so that they can be integrated with ease: Oracle, PostgreSQL, Pointbase, Solid, MySQL, MS SQL Server, DB2/400 and SAPDB.
Those who keep on reading are smarter Despite simple installation and what is, in principle, simple usability, some questions remain unanswered. It is also clear that programmers are reluctant to write documentation. Efforts are being made to increase the supply of documentation and especially to keep the existing documentation up to date, but the rapid progress of the project means this is not always so simple. If the documentation provided on the Web site, the Getting Started guides and the Howtos do not come up with the answer to a certain problem, the best source of information is the jBoss mailing list. Here an answer can be found to seemingly every question, in most cases actually from the author of the code. There is no smartly designed ring binder to look things up in – this is open source! Frequently asked questions are: Can I implement jBoss in a production environment? Do any benchmarks exist? How does the server scale? When will such and such a feature become available? Now, jBoss can be (and is) used in production environments, for example: http://www.liquidwit.com. It may be a little early yet for company-critical environments, but in a month or two this hurdle may be overcome on the basis of the present momentum of the community. A minus point for use in larger distributed systems is the transaction manager, which currently supports neither distributed transactions nor two-phase commit (committing data to more than one database simultaneously). The design of the container itself scales superbly. No matter how many clients are connected at the same time, all queries run via a central, very fast container invoker, from where they are passed on to the corresponding beans. The container itself is stateless; all necessary data is extracted from the incoming client invocation.
The author Daniel Schulze is actually studying computer science in Leipzig, but at present he is living and working in San Francisco at Olliance, an open source consultancy. He became a jBoss developer as the result of work experience at Teksel.
The future – Clustering, Jini, EJB 2.0 Developers are now working hard on plans for clustering. In the good old jBoss tradition, you can look forward eagerly to some tasty technical morsels – one thing which can be disclosed is that Jini will play a leading role in this. And of course EJB 2.0 is also in the pipeline; message-driven beans are said to be already running under laboratory conditions. For CMP 2.0, though, there are no volunteers as yet. In areas such as configuration tools and more fault-tolerant application deployment, as well as the aforementioned clustering and EJB 2.0 plans, there is still a lot to be done. But in general the direction is right, the momentum of the community is great enough to sustain the driving force for development, and by now everyone knows how successful open source projects can be. ■ 6 · 2001 LINUX MAGAZINE 29
WEB APPLICATION SERVER
Brief introduction to IBM Websphere and BEA Weblogic
HEAVY INDUSTRY BERNHARD BABLOK
Technologically, Websphere and Weblogic have little in common. While IBM uses tried and tested technology, BEA sparkles with the latest standards. But both have very high market shares.
Everyone knows IBM, but sometimes even IT insiders don’t know the first thing about BEA Systems. Together these two companies hold a commanding position in the market for web application servers. BEA was only founded in 1995, but since then it has grown like wildfire. The company’s success was based in the first instance on the transaction platform Tuxedo, while Weblogic is relatively new to their range.
Websphere: Mixture with Apache IBM’s WebSphere application server is currently available in version 3.5, but for Linux it so far remains at version 3.02 (the version for IBM’s mainframe operating system OS/390 is no more advanced either). Websphere comes in three versions (Standard, Advanced and Enterprise), and the range of functions varies accordingly. While the Standard version comprises a Web server, servlet engine and JSP environment, the Advanced Edition also offers an EJB run-time environment. A comprehensive CORBA solution is, however, reserved for the Enterprise Edition. This is mainly implemented in C/C++, so that both integration with Java and porting onto various hardware platforms are difficult. The web server in Websphere essentially consists of Apache (the Apache licence allows this), while the servlet and JSP engines are both proprietary. This is also where the problems with Websphere start. The standards are supported only in completely obsolete versions: for the Servlet API this is only version 2.1, and for JSPs versions 0.91 and 1.0. With the EJB specification it’s even worse: while many servers are already implementing parts of the ”Public Final Draft” of version 2.0 of the EJB specification, Websphere users still have to settle for version 1.0. The list of woes gets even longer under Linux, since support for Java 2 only made an appearance with version 3.5.
Resources guzzler The resource consumption of Websphere is enormous. As a minimum requirement, 256 Mbyte is recommended; for a stand-alone developer solution including development environment (VisualAge) and database (DB2), 512 Mbyte is just the beginning. According to reports from users, the stability of version 3.5 is poor compared with 3.02. With all these rightly criticised points, one wonders why Websphere is used at all. One aspect which should not be overlooked is the fact that Websphere is a product for big business, and that’s where, as a rule, the people sitting on decision-making committees have grown up with IBM as a reliable partner. The risk that IBM’s application server will disappear as the result of a takeover is relatively low at this point (so something like the case of GemStone, which was bought up by Brokat, is not going to happen). But notwithstanding these ruminations, which prove the excellent marketing of IBM, there are also definite technical reasons for implementing Websphere. Websphere’s strengths are its integration with the development environment ”Visual Age for Java”, with various code generators and a good debugging environment. Even if the latest EJB specifications are not supported, IBM has implemented its own proprietary additional system, which provides the developer with a
powerful persistence framework, which can also generate the corresponding database tables. Equally, the reverse path from existing tables is possible. The last point leads to another good argument for Websphere, namely the integration of legacy applications on mainframe systems like CICS or IMS. Other commercial servers, like BEA’s Weblogic Server, though, offer similar features via add-on components. Version 4 of Websphere has already been announced. Here, at last, the latest standards should be implemented, as should the remaining APIs from the J2EE specification, in particular JavaMail. It is to be hoped that by that time a current Linux version will also be available.
BEA Weblogic Application Server BEA’s Weblogic Server is available for download from BEA’s Web site as a ”Public Beta 2” of version 6. Unfortunately, here again the Linux user will have to settle for the older version 5.1.4. This version can also be downloaded for evaluation purposes after registration. It is understandable that not too many platforms are supported during a beta phase, which is why a few other Unix dialects are also missing here in comparison with the long list of platforms supported by the latest version. The Weblogic Application Server enjoys wide distribution, and the high version number also shows that a great deal of practical experience of the product has already been incorporated. Version 6 can be described as ”state of the art”: the complete J2EE specification is implemented, and cluster-capable applications can be managed via modern Web-based tools. The cluster architecture (both on the Web and on the object server side) guarantees a high level of scalability and availability. Even the EJB 2.0 specification is supported. The Weblogic Server supplies a Web container (for HTML/XML pages, Servlets and JSPs), an EJB container (as a run-time for the Enterprise JavaBeans) and the services necessary in the business domain. Apart from its own web server, existing servers such as Apache, IIS or Netscape can also be integrated. And integration into development environments such as WebGain Studio (formerly known as Visual Cafe) from Symantec, VisualAge or JBuilder is also possible. Security with respect to the ”outside” is provided via SSL connections, and on the ”inside” firewalls do not hinder the management of the server, as corresponding HTTP/HTTPS tunnels are available. Furthermore, add-on modules are available, for example for mainframe access, the personalisation of Web sites or access to standard applications. ■
COCOON/XML
Serving XML with Apache Cocoon
BRING ON THE DANCING GIRLS... MARKUS KRUMPÖCK
Everyone has heard of XML, but how can you use it and how do you transfer data to clients that do not speak XML? That is the topic of this article, in which we will be looking into the Apache Group’s Web publishing framework.
Normally, the first questions to be asked are: ”Why XML? Just because it happens to be hyped?” and ”What do all those abbreviations mean anyway: XML, XSL, DTD, XSLT etc.?”. Everyone is talking about XML – at least about its look and feel – but nobody quite seems to know what it is used for. XML is short for ”eXtensible Markup Language”, with the emphasis firmly placed on ‘extensible’ in contrast to HTML, which (although it has grown with each consecutive version) still comprises a fixed set of permitted tags. So you can define your own tags. Additionally, XML files comprise only textual content and logical characteristics or, more simply, meaning. XSL (eXtensible Stylesheet Language) or XSLT (eXtensible Stylesheet Language Transformations) are required for output. Often this will be performed by using HTML tags within XSL
documents. Again the approach differs from HTML documents, where content and formatting instructions are combined in a single file. Developing Web pages in XML is normally more time-consuming than using HTML, as you need one document each for content and for output via XSL. You will often see a separate document with tag definitions, known as a DTD (Document Type Definition). One advantage of applying logical tags to data is that it allows applications to browse and parse the documents. At the same time the data and its display characteristics are stored separately, allowing different people to work on a document’s content or appearance. Unfortunately, there are very few XML/XSL editors available at present, which leaves you with very little choice but to fire up your favourite text editor. At the time of writing, XML looks like it might just become THE standard data exchange format. Now you might ask what tools you can use to display XML or XSL documents, considering that current browsers provide only limited support (Mozilla) or refuse to comply with the specifications (Internet Explorer). This is the point at which Cocoon enters the scene:
Why Cocoon?
Cocoon is a publishing framework for Web content that is currently under development by Stefano Mazzocchi and others as part of the Apache XML project. Simply put, Cocoon brings XML functionality to the server: it processes XML/XSL documents, allowing them to be displayed by any client. The client does not even need to be a browser – it could be a mobile phone, for example. By defining different stylesheets for the same content you can change the way a document is displayed by the client. You might need to do this to provide better support for browsers or simply to display content in a different format (XHTML, HTML, XML, WML or PDF, for instance). As browsers begin to provide native support for XML/XSL, in future there will be no need to perform conversions of this kind: the data can be displayed natively without the intermediate step of converting to HTML. One further advantage is that the same data can be displayed ‘on the fly’ in multiple formats, which would be extremely difficult to achieve and support without XML/XSL. Separating content (stored in XML documents), business logic (stored in eXtensible Server Pages) and layout (client-based presentation) – and Cocoon does provide support for this – makes it easier to develop and support complex Web projects. These three areas can be managed independently by different people – no more stepping on each other’s feet, provided you keep to the pre-defined interfaces, that is. By the way, science fiction fan and Cocoon author Stefano Mazzocchi was inspired by the movie Cocoon, which was shown on TV while he was working on an idea for an XSL rendering servlet. You may recall that in this movie senior citizens were wrapped in a kind of cocoon before emerging to a new life, just like butterflies.
XML 101
XML documents always begin with <?xml version=”1.0”?> followed by any number of processing instructions (PIs) that use the syntax <?target instruction?>. Instructions tell the application how to parse, i.e. process, the document content: for example, <?cocoon-process type=”xsp”?> or <?xml-stylesheet href=”sample.xsl” type=”text/xsl”?> are typical PIs. The XML declaration and PIs are the only tags that do not need to be terminated by an end tag. This is followed by the root element that comprises all other elements. XML documents must be well-formed, that is, you need to pay attention to proper nesting (the first tag to be opened is the last to be closed). EVERY tag must be terminated by an end tag (in contrast to HTML, where tags such as <hr>, <br> or <p> are often used without an end tag). However, you can define empty tags, that is, tags that do not have any text content but consist entirely of attributes (similar to the <img src=”pict.gif”> tag in HTML). To avoid having to type tags of this type twice, a tag can be self-terminating, for example: <author name=”Markus Krumpöck”/>. You should also be aware that XML documents are case-sensitive. A well-formed XML document can also be valid, although this is not mandatory. A document is said to be valid if it complies with a Document Type Definition (DTD). DTDs contain structural definitions for a document – for example, they might define what kinds of elements can be used in a document. In addition, a DTD provides guidelines for nesting elements, stipulates the attributes associated with certain elements and specifies the valid attribute values. You can specify a DTD within the XML document itself, although it is more common to refer to an external file. In the future, DTDs may be replaced by XML Schemas.
Functionality
XML documents are processed using an XML parser (Cocoon uses Apache Xerces, developed as part of the Apache XML project) and placed in an internal tree structure known as the Document Object Model (DOM). The data structure can be accessed by one of a whole collection of processors (XSP processor, SQL processor, LDAP processor and DCP processor among others) that processes and manipulates the data following the logic of the host application. Following this step, an XSLT processor (Apache Xalan, which was named after a rare musical instrument) is used to generate output for the clients.
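The well-formedness rules described above can be checked with any XML parser. As an illustrative sketch – using the JAXP API bundled with modern JDKs, not the exact parser classes of the Cocoon 1.x era – the following Java program accepts a properly nested document and rejects an HTML-style unterminated tag:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class WellFormedCheck {
    // Returns the root element name if the document is well-formed,
    // or null if the parser rejects it.
    static String rootElement(String xml) {
        try {
            DocumentBuilder db =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = db.parse(new InputSource(new StringReader(xml)));
            return doc.getDocumentElement().getTagName();
        } catch (Exception e) {
            return null; // not well-formed
        }
    }

    public static void main(String[] args) {
        // Properly nested and terminated: well-formed
        System.out.println(rootElement("<page><title>Hello</title></page>"));
        // Missing end tag, as HTML would allow: rejected
        System.out.println(rootElement("<page><br></page>"));
    }
}
```

Note that a self-terminating empty tag such as `<author name="X"/>` passes the same check, as the XML 101 box explains.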
Installation Before you install Cocoon, you must ensure that you have a functional servlet engine. You could use any servlet engine, such as Tomcat. Both the Cocoon manual and Java and XML, Chapter 9 (which can be read online), contain further details. For the purpose of this test we used SuSE 6.4 with JDK 1.1.7 and Apache JServ 1.1, although similar results can be obtained with other distributions. First and foremost, download the 2.5 MB Cocoon package (1.7.4) from the Apache XML Project Web site. This package contains classes for Xalan, Xerces and FOP to avoid inconsistencies caused by newer versions of these programs being developed separately from the Cocoon project. (Note: the Cocoon installation guide contains an error at this point, referring to .../cocoon/BIN/cocoon.properties instead of .../conf/cocoon.properties.)
Pic.: Cocoon following successful installation
/etc/httpd/jserv/jserv.conf:
Action cocoon /servlet/org.apache.cocoon.Cocoon
AddHandler cocoon xml
After running the commands in the Installation script box and modifying the files shown in the Configuration files box, you need to create a directory for Cocoon to store its Java classes in – the user ID under which the web server is running must have write access to the directory – and point to the directory in your configuration files. When I tried to use the directory specified in the default configuration, ./repository (relative to the web server root), I kept getting ”Can’t create store repository: //./repository. [...]”, although I had assigned global read and write privileges. I finally used an absolute path name for the directory and renamed it to temp to resolve this problem:

mkdir -p /usr/local/httpd/htdocs/temp
chmod a+rwx /usr/local/httpd/htdocs/temp

/usr/local/java/cocoon/conf/cocoon.properties:
processor.xsp.repository = /usr/local/httpd/htdocs/temp

In /etc/httpd/httpd.conf the line

LoadModule action_module /usr/lib/apache/mod_actions.so

should not contain any comment characters – this is the default for SuSE 6.4. After restarting Apache (/sbin/init.d/apache restart) you should be able to type the URL http://localhost/Cocoon.xml into any browser to display the output shown in the picture. After using the command

cp -R /usr/local/java/cocoon/samples/ /usr/local/httpd/htdocs/

to copy the sample files that accompany the Cocoon package to the htdocs directory on your Web server, you can also load the page http://localhost/samples/index.xml for an overview of the features that Cocoon has to offer.
HelloWorld Let’s look at an example – the ubiquitous HelloWorld – to explain things. The example is taken from the sample files that accompany the distribution. hello-page.xml and hello-page-html.xsl are just two plain, old XML and XSL documents, except for the special processing instruction (<?cocoon-process type=”xslt”?>) in the XML document that tells Cocoon to pass the document to the XSLT processor, that is, to transform the document. The output format is specified in the stylesheet using: <xsl:processing-instruction name="cocoon-format">type="text/html" The XML document also contains a sample DTD that was embedded in the document (refer to the lines starting with <!DOCTYPE page [ through ]>). You can generate client-specific output using different stylesheets, the advantage being that you
032cocoon.qxd
31.01.2001
11:10 Uhr
Seite 35
COCOON/XML
MAIN FEATURE
Installation script

tar -xvzf Cocoon-1_7_4_tar.gz
cd cocoon-1.7.4
mkdir -p /usr/local/java/cocoon
cp -R * /usr/local/java/cocoon
cd /usr/local/java/cocoon/lib
ln -s xerces_1_0_3.jar xerces.jar
ln -s xalan_1_0_1.jar xalan.jar
ln -s fop_0_12_1.jar fop.jar
Configuration files

/etc/httpd/jserv/jserv.properties:
wrapper.classpath=/usr/local/java/cocoon/bin/cocoon.jar
wrapper.classpath=/usr/local/java/cocoon/lib/xerces.jar
wrapper.classpath=/usr/local/java/cocoon/lib/xalan.jar
wrapper.classpath=/usr/local/java/cocoon/lib/fop.jar

/etc/httpd/jserv/zone.properties:
servlet.org.apache.cocoon.Cocoon.initArgs=properties=/usr/local/java/cocoon/conf/cocoon.properties
hello-page.xml

<?xml version="1.0"?>
<?xml-stylesheet href="hello-page-html.xsl" type="text/xsl"?>
<?cocoon-process type="xslt"?>
<!DOCTYPE page [
<!ELEMENT page (title?, content)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT content (paragraph+)>
<!ELEMENT paragraph (#PCDATA)>
]>
<!-- Written by Stefano Mazzocchi "stefano@apache.org" -->
<page>
  <title>Hello</title>
  <content>
    <paragraph>This is my first Cocoon page!</paragraph>
  </content>
</page>
hello-page-html.xsl

<?xml version="1.0"?>
<!-- Written by Stefano Mazzocchi "stefano@apache.org" -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="page">
    <xsl:processing-instruction name="cocoon-format">type="text/html"</xsl:processing-instruction>
    <html>
      <head>
        <title><xsl:value-of select="title"/></title>
      </head>
      <body bgcolor="#ffffff">
        <xsl:apply-templates/>
      </body>
    </html>
  </xsl:template>
  <xsl:template match="title">
    <h1 align="center"><xsl:apply-templates/></h1>
  </xsl:template>
  <xsl:template match="paragraph">
    <p align="center"><i><xsl:apply-templates/></i></p>
  </xsl:template>
</xsl:stylesheet>
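The transformation that Cocoon performs on hello-page.xml can be reproduced in miniature with the XSLT engine built into the JDK (a Xalan derivative). This is an illustrative sketch, not Cocoon itself: the stylesheet below is a stripped-down, hypothetical variant of hello-page-html.xsl with the Cocoon-specific processing instruction and formatting attributes omitted.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class HelloTransform {
    static final String XML =
        "<page><title>Hello</title>" +
        "<content><paragraph>This is my first Cocoon page!</paragraph></content></page>";

    // Stripped-down stand-in for hello-page-html.xsl: title becomes <h1>,
    // paragraphs become <p>.
    static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "<xsl:output method='html'/>" +
        "<xsl:template match='page'><html><body><xsl:apply-templates/></body></html></xsl:template>" +
        "<xsl:template match='title'><h1><xsl:apply-templates/></h1></xsl:template>" +
        "<xsl:template match='paragraph'><p><xsl:apply-templates/></p></xsl:template>" +
        "</xsl:stylesheet>";

    static String transform() throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(XML)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform());
    }
}
```

The `content` element has no template of its own, so the built-in rule simply recurses into its children – the same default behaviour the sample stylesheets rely on.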
.../conf/cocoon.properties:

##########################################
# User Agents (Browsers)                 #
##########################################
browser.0 = explorer=MSIE
browser.1 = pocketexplorer=MSPIE
browser.2 = handweb=HandHTTP
browser.3 = avantgo=AvantGo
browser.4 = imode=DoCoMo
browser.5 = opera=Opera
browser.6 = lynx=Lynx
browser.7 = java=Java
browser.8 = wap=Nokia
browser.9 = wap=UP
browser.10 = wap=Wapalizer
browser.11 = mozilla5=Mozilla/5
browser.12 = mozilla5=Netscape6/
browser.13 = netscape=Mozilla
clean-page.xml

<?xml version="1.0"?>
<?cocoon-process type="xslt"?>
<?xml-stylesheet href="page-xsp.xsl" type="text/xsl"?>
<page>
  <title>First XSP Page</title>
  <p>Hi, I’m your first XSP page ever.</p>
  <p>I’ve been requested <count/> times.</p>
</page>
page-xsp.xsl

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xsp="http://www.apache.org/1999/XSP/Core">
  <xsl:template match="page">
    <xsl:processing-instruction name="cocoon-process">type="xsp"</xsl:processing-instruction>
    <xsl:processing-instruction name="cocoon-process">type="xslt"</xsl:processing-instruction>
    <xsl:processing-instruction name="xml-stylesheet">href="page-html.xsl" type="text/xsl"</xsl:processing-instruction>
    <xsp:page language="java" xmlns:xsp="http://www.apache.org/1999/XSP/Core">
      <xsp:logic>
        static private int counter = 0;
        private synchronized int count() {
          return counter++;
        }
        private String normalize(String string) {
          if (string == null)
            return "";
          else
            return string;
        }
      </xsp:logic>
      <xsl:copy>
        <xsl:apply-templates/>
      </xsl:copy>
    </xsp:page>
  </xsl:template>
  <xsl:template match="title">
    <xsl:copy-of select="."/>
  </xsl:template>
  <xsl:template match="p">
    <xsl:copy>
      <xsl:apply-templates/>
    </xsl:copy>
  </xsl:template>
  <xsl:template match="count">
    <xsp:expr>count()</xsp:expr>
  </xsl:template>
</xsl:stylesheet>
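Stripped of its XSP wrapping, the business logic in the logic sheet above is just an ordinary synchronized Java counter. A standalone sketch of the same technique:

```java
public class HitCounter {
    private static int counter = 0;

    // Synchronized, as in the XSP logic sheet: concurrent requests
    // must not lose or duplicate increments.
    static synchronized int count() {
        return counter++;
    }

    public static void main(String[] args) {
        System.out.println(count());
        System.out.println(count());
    }
}
```

As with servlets, the value lives only in memory: it resets whenever the servlet engine is restarted.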
can concentrate your efforts on maintaining a single content file while simultaneously generating HTML pages for multiple browsers. This allows you to support various vendor-specific HTML, DHTML or JavaScript versions, supply a WML page if the client happens to be a WAP-compatible mobile phone, or even a PDF document, as we will see in the following section. The default stylesheet in our example is specified as follows: <?xml-stylesheet href="hello-page-html.xsl" type="text/xsl"?> in our sample document. You could follow up this line with a sequence like: <?xml-stylesheet href="hello-page-ie-html.xsl" type="text/xsl" media="explorer"?> to assign your own template for Internet Explorer, or use the line: <?xml-stylesheet href="hello-page-wml.xsl" type="text/xsl" media="wap"?> to define a stylesheet for a WAP mobile phone. You will find a range of values for the media attribute in .../conf/cocoon.properties. The list is extensible; however, you must pay attention to the order of the list entries, as MS Internet Explorer (for example) uses the banner "Mozilla/4.0 (Compatible; MSIE 4.01; ...)" and would be recognised as Mozilla if you had omitted an entry to check for the MSIE string. The client media type is read from the HTTP request header. You can therefore use a simple CGI script that outputs environment variables to discover the exact syntax (this is pre-configured for SuSE 6.4). Stylesheets are fairly powerful tools when you consider that they can be used not only for simple formatting tasks but also to embed logical constructs that allow you to output entries complying with a given search pattern. A brief explanation is all we have space for: a stylesheet comprises multiple templates that begin with <xsl:template match=”....”> and end with </xsl:template>. The value of the attribute match defines the tag in the attached XML document that the template is applied to.
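The order-sensitive matching of the browser.N entries can be pictured with a small sketch. This is hypothetical code, not Cocoon's actual implementation: it walks an ordered substring table and returns the first media type whose pattern occurs in the User-Agent string.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MediaResolver {
    // Ordered substring -> media-type table, modelled on the browser.N
    // entries in cocoon.properties; insertion order is preserved and
    // matters, exactly as in the file.
    static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("MSIE", "explorer");
        RULES.put("Lynx", "lynx");
        RULES.put("Mozilla/5", "mozilla5");
        RULES.put("Mozilla", "netscape"); // catch-all must come last
    }

    static String mediaFor(String userAgent) {
        for (Map.Entry<String, String> e : RULES.entrySet())
            if (userAgent.contains(e.getKey()))
                return e.getValue();
        return "unknown";
    }

    public static void main(String[] args) {
        // IE announces itself as Mozilla-compatible; the MSIE rule,
        // checked first, still wins.
        System.out.println(mediaFor("Mozilla/4.0 (Compatible; MSIE 4.01)"));
        System.out.println(mediaFor("Mozilla/5.0 (X11; Linux)"));
    }
}
```

Swapping the MSIE and Mozilla rules would misclassify Internet Explorer as Netscape, which is precisely the ordering pitfall described above.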
Starting at the root element (<page> in our case), the XSLT processor replaces the content of the XML document with the HTML tags defined in the appropriate template. If the processor finds an occurrence of the <xsl:apply-templates/> tag, the children of the root element are also parsed, then their children, and so on until all the XML tags have been replaced. <xsl:value-of select=”title”/> is used to output the value of the XML tag <title>. XSL allows for a variety of more complex operations, such as for loops and if constructs. You can define
the output order, or output only selected elements, and perform many other useful tasks that are unfortunately beyond the scope of this article. However, I would like to make the following observation on stylesheets before we move on: because XSL documents need to be well-formed, just like XML documents, the HTML tags in the stylesheet also have to be well-formed. In fact, you will not actually be generating an HTML page, but an XHTML page. Tags such as <hr> for a horizontal line or <br> for a line break must appear in their syntactically correct form <hr /> or <br /> in stylesheets. Our example contains an occurrence of this, as you can see by referring to the paragraph tag, <p>, whose end tag HTML will often omit. You also need to pay special attention to metacharacters such as the ampersand (&). This is normally used to reference so-called entities, that is, text constants that are used repeatedly. XML contains pre-defined entities to resolve this problem: &amp; for the ampersand, &gt; and &lt; for greater than and less than, &quot; for double quotes and &apos; for single quotes. For longer text passages whose syntactical validity does not need to be checked, we recommend that you place the lines between <![CDATA[ and ]]> – CDATA is short for ‘character data’. The W3C web site contains the full specifications of XML and a draft for XSL, although these documents are by no means easy reading and thus not recommended for beginners. If you are looking for a tutorial that is also applicable to Cocoon, take a look at ‘Java & XML’.
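Escaping the five metacharacters is easily automated. The sketch below is a hypothetical helper, not part of Cocoon; the one design point worth noting is that the ampersand must be replaced first, otherwise the & characters introduced by the later replacements would themselves be escaped.

```java
public class XmlEscape {
    // Replace the five XML metacharacters with their pre-defined entities.
    static String escape(String s) {
        return s.replace("&", "&amp;")   // must run first
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&apos;");
    }

    public static void main(String[] args) {
        System.out.println(escape("Fish & Chips < 5"));
    }
}
```

For long passages, wrapping the text in <![CDATA[ ... ]]> as described above avoids the need for any escaping at all.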
FOP XML can (at least in theory) be converted to any format and not only to text-oriented formats, such as HTML or WML. The Formatting Objects Processor (FOP) is required for this task. The processor currently supports conversion to PDF format. Just like in the previous example, you simply need a different stylesheet to convert the content of an XML document into a PDF document. However, we will not be discussing the exact syntax of this special stylesheet at this time. For further information, please review the examples in the Cocoon sample files or take a look at the W3C Consortium web site, where you will find the complete XSL-FO draft.
XSP Now that we have seen how to define static pages – with the exception of one or two logical constructs that can be defined in XSL stylesheets – it might be a good idea to find out how to generate dynamic output. In the case of Cocoon, the XSP processor (eXtensible Server Pages) is
Information
Brett McLaughlin: Java and XML – Web Publishing Frameworks: http://www.oreilly.com/catalog/javaxml/chapter/ch09.html
Cocoon: http://xml.apache.org/cocoon/
W3C: XML Specification: http://www.w3.org/TR/REC-xml
W3C: XSL Working Draft: http://www.w3.org/Style/XSL/
MyXML: http://www.infosys.tuwien.ac.at/myxml/
■
The Author Markus Krumpöck is a student of Information Technology. Web site programming is his major field of activity. He spends most of his free time playing clarinet. You can contact Markus at markus.krumpoeck@gmx.at
responsible for this task. Code segments are stored in so-called logic sheets, in contrast to Java Server Pages, where code is embedded in normal HTML pages. This means creating a third file type in addition to XML and XSL documents, but it also means genuine separation of content and business logic. This in turn provides for ease of maintenance, since the files can be managed independently. To demonstrate this point I have abbreviated one of the files in the Cocoon sample file compendium. The listings clean-page.xml (content), page-xsp.xsl (business logic) and page-html.xsl (layout) are no more than a simple counter that outputs the number of hits for a page, clean-page.xml. Using a technique commonly seen with servlets, the number of hits is stored in memory. On initial access to clean-page.xml, Cocoon generates and compiles the Java source, which is stored in the repository defined in the configuration file, in the same (relative) subdirectory as the corresponding XML document. If you store the XML/XSL documents in the directory /usr/local/httpd/htdocs/samples/xsp/, then the corresponding Java source and classes will be stored in the directory /usr/local/httpd/temp/_usr/_local/_htdocs/_samples/_xsp/. At first sight this may seem somewhat complicated; however, if you are working on a large-scale project, you will begin to appreciate the strict separation with its beneficial effects on maintenance tasks. If you are only interested in creating a quick and dirty prototype or demonstrating navigational or layout features, you may prefer to use CGI or PHP.
page-html.xsl

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="page">
    <xsl:processing-instruction name="cocoon-format">type="text/html"</xsl:processing-instruction>
    <html>
      <head>
        <title><xsl:value-of select="title"/></title>
      </head>
      <body>
        <p><br/></p>
        <center>
          <big><big><xsl:value-of select="title"/></big></big>
          <xsl:apply-templates/>
        </center>
      </body>
    </html>
  </xsl:template>
  <xsl:template match="title">
    <!-- ignore -->
  </xsl:template>
  <xsl:template match="p">
    <xsl:copy>
      <xsl:apply-templates/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
Summary This article introduced you to Cocoon’s XSP processor; however, Cocoon comes with a variety of other processors, such as the SQL processor or the LDAP processor. If you require more information on this topic, please refer to the comprehensive documentation included in the Cocoon package. At the time of writing, Cocoon2 is under development, although an official release is not yet available and the web site does not provide any clues as to when it might appear. Nevertheless, I would like to introduce some of Cocoon2’s features at this point: Cocoon 1.x is based on DOM1, that is, documents are placed in a hierarchical tree and stored completely in memory after parsing. This can be a problem, especially in the case of large-scale documents. This problem has been resolved by using SAX (Simple API for XML) in Cocoon2. SAX parses a document sequentially and triggers the appropriate event when a tag is encountered. In other words, only part of the document – or none of it – needs to be held in memory. DCPs (Dynamic Content Pages) have also been dropped in favour of eXtensible Server Pages. Additionally, a lot of effort has been put into optimising performance – and you have to admit that Cocoon 1.x is not exactly quick if it needs to process a number of requests simultaneously. At this point I would also like to draw your attention to another interesting project that also deals with XML content generation and similarly envisages separating content, logic and style: MyXML. This project was launched by, and is under the supervision of, Clemens Kerer, Engin Kirda and Roman Kurmanowytsch of the Institute for Distributed Systems at the Vienna Technical University. A template engine is used to generate Java classes or HTML files from XML/XSL files, allowing them to be integrated in your own servlets.
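The DOM-versus-SAX distinction can be made concrete with a small sketch using the JAXP SAX API found in modern JDKs (an illustration of the technique, not Cocoon2's actual code): the handler reacts to start-tag events as the parser streams through the document, without ever building a full in-memory tree.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SaxCount {
    // Stream through a document and count elements; only the current
    // event is ever in scope, not the whole document.
    static int countElements(String xml) throws Exception {
        final int[] count = {0};
        SAXParser p = SAXParserFactory.newInstance().newSAXParser();
        p.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
            @Override
            public void startElement(String uri, String local,
                                     String qName, Attributes atts) {
                count[0]++; // one event per start tag encountered
            }
        });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countElements("<page><title>Hi</title><p>x</p></page>"));
    }
}
```

For a multi-megabyte document, this event-driven style is what lets Cocoon2 avoid the memory cost that the DOM-based Cocoon 1.x pays.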
This technique has one major advantage over Cocoon: it allows you to implement business logic within traditional servlets and does not require logic sheets where Java and XML mingle in a single file. This also allows you to continue using existing servlets, only exchanging those parts where HTML code is output for classes generated by MyXML. Classes and HTML files are generated offline, that is, after modifying an XML or XSL file you must manually launch a Java program that automatically creates the required classes or HTML files. Of course you do not need to re-compile the classes each time you modify the XML content, and you can also define dynamic classes (for database applications, for example). The comprehensive online documentation contains further details. MyXML has already been used to implement a major commercial Web site and has stood up well to field testing so far. ■
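As a footnote to the SAX discussion above: the event-driven model is easy to illustrate. The sketch below uses Python's standard xml.sax module rather than the Java API Cocoon actually uses, but the model is the same: the parser walks the document once and fires a callback for each element, so the whole tree never has to sit in memory, unlike with DOM.

```python
import xml.sax

class TagCollector(xml.sax.ContentHandler):
    """Records element names as the parser encounters them.

    With SAX only these callbacks run; no document tree is built."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def startElement(self, name, attrs):
        self.tags.append(name)

# A toy document shaped like the <page> files used in the article.
doc = b"<page><title>Hello</title><p>World</p></page>"
handler = TagCollector()
xml.sax.parseString(doc, handler)
print(handler.tags)  # ['page', 'title', 'p']
```

For a multi-megabyte document the difference to DOM is dramatic: memory use stays constant regardless of document size, which is exactly the Cocoon2 argument made above.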
ON TEST
USB MULTIMEDIA
Multimedia Devices with USB connection
MULTIMEDIA IN SERIES CHRISTIAN REISER
Philips DSS 330
When you unpack this sub-woofer satellite system, a great big logo boasts: 600 Watt PMPO (Peak Maximum Power Output). On taking a closer look at the instructions it emerges that this boils down to just 50 watts: 25 for the sub-woofer and 12.5 for each satellite. This is adequate for almost any normal application, but it shows once again how much influence the marketing people have and how seriously big numbers should be taken. Unlike the Yamaha boxes introduced below, the Line-In input here is not adjustable: the incoming signal is always turned up to the maximum. Anyone plugging in a device during operation therefore genuinely risks damaging the loudspeakers. Another unpleasant side effect is that the boxes hiss slightly, though only the owners of whisper-quiet computers will notice this. The only adjustment option on the device is the sub-woofer volume. The cable remote control appears under Linux as a keyboard with just two keys: louder and quieter. The other buttons – Power and Surround – are hard-wired and function without additional software aids. Under Linux, apart from volume, it is also possible to adjust balance, treble and bass.
Philips DSS 330: Mini-USB keyboard for remote control is included
Whether it’s a loudspeaker, radio or MP3 player – most USB devices currently available on the market can also be connected with Linux – but there are also some black sheep. More about that in this test report.
Multimedia under Linux? A few years ago that would have been unthinkable. The domain of Unix was always the server field, where it is enough for the computer to have one input/output port: the network connection. But recently a lot has happened and, mainly as a result of the popularity of Linux in the private domain, a large market has sprung up. Unfortunately this has not yet been recognised by all the hardware companies. For precisely this reason, it is still advisable to find out before buying new hardware whether the device you want will also function under Linux, because otherwise you might be in for some nasty surprises, as we found in our test. You can find out more about the latest USB driver support under Linux at http://www.qbik.ch/usb/devices/ and at http://www.geocrawler.com/lists/3/SourceForge/4563/0/.
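Whether the kernel has recognised a USB device at all can be seen in /proc/bus/usb/devices, which several of the tests below consult. A minimal sketch of such a check in Python follows; the device entry is an invented sample (vendor ID and product string are made up), but the T:/P:/S: line layout follows the kernel's usual format:

```python
# Sketch: list the Product= strings of USB devices the kernel has found.
# Parsing runs on a captured sample here, since /proc/bus/usb/devices
# only exists on a system with USB support loaded.
SAMPLE = """\
T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  2 Spd=12  MxCh= 0
P:  Vendor=04b4 ProdID=0001 Rev= 1.00
S:  Product=USB Speakers
"""

def find_products(devices_text):
    """Return the Product= strings of all devices listed."""
    products = []
    for line in devices_text.splitlines():
        if line.startswith("S:") and "Product=" in line:
            products.append(line.split("Product=", 1)[1].strip())
    return products

print(find_products(SAMPLE))  # ['USB Speakers']
```

On a real system you would read the text with open("/proc/bus/usb/devices").read() instead; an empty list then means the device has not been enumerated, which is exactly the failure described for the Yamaha top model below.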
Teac PowerMax Traveller
This product, advertised with the USB logo, cheered up the tester enormously: there is certainly a USB cable leading from the computer to the end device, but at the end of it the connector finishes in a two-pin plug for a 5-volt power supply. So these loudspeakers are simply misusing the USB connection as a replacement for a (presumably too expensive) plug-in power supply unit – the sound has to be conducted to the boxes via a conventional sound card and an appropriate analogue audio cable. Accordingly, no new device is found when plugging into the USB connector. The USB controller does, however, monitor power consumption and switches off if appropriate. The USB logo on the packaging is thus slightly misleading. It is only apparent from the product
description printed on the side that these boxes are not USB devices in any real sense. With an appropriate sound card, of course, the speakers functioned under Linux with no problems.
Yamaha YST-M45D
These boxes, in a classic design, go with any monitor. With respect to the racket... er, music output, they are perfectly adequate as computer boxes – but don't plan to use them as a substitute for a stereo system. Apart from music transmission via USB you can also feed in two analogue signals via a pair of 3.5mm jacks. All connectors and settings are combined on the right-hand loudspeaker; from there, only a short cable runs to the left-hand box. The loudspeakers do not have an internal power supply, so a transformer unit always has to be left lying around. Under Linux the tested product behaved in exemplary fashion – after loading the driver module (see box) you can get started straight away. In terms of adjustment options the mixer offers, apart from the volume control, bass and treble amplification – all working well. These parameters can also be controlled directly on the device, which is very practical, as physical access is usually quicker.
D-Link DRU-R100
For anyone who does not yet have a radio, this external device is perfect. For anyone else it's a nice toy, or a nice ornament for the desk. The device does its duty happily under Linux too – but unfortunately only with a kernel of the 2.4 series, since the necessary driver is not included in the backport. Tuning is done via the video4linux interface, so the videodev module must be loaded; after that only the dsbr100 module itself has to be loaded – that's it. There are a large number of programs under Linux to control it, including special versions for KDE and GNOME, though all of them can only alter the frequency, since the volume control sadly did not work in any of the programs tested. The most astonishing thing about this device: the sound is not transported via USB. There is a 3.5mm jack for this, which has to be plugged into the Line-In input of the sound card or of the boxes.

Teac PowerMax Traveller: Despite the USB logo on the carton, these are not USB loudspeakers – the connection to the computer is used only for power supply

Yamaha YST-MS35D
The smallest sub-woofer satellite system in our test uses exactly the same external power supply as the YST-M45D, and again there are two analogue 3.5mm jack inputs. But when it comes to the type of construction, this is the only external similarity between the two systems. Treble and bass controls were left out on the small one; instead there is a control for setting the amplification of the bass box. The power switch and the volume control are, practically, placed on the right-hand satellite. The whole thing is slightly reminiscent of Bose systems. In technical terms both systems are pretty similar: this three-part system with adjustable feet functions just as reliably and offers under Linux exactly the same adjustment options as its big brother presented above: treble, bass and volume control.

Small but wow: in the Yamaha YST-MS35D it's not just the sound that's as smooth as silk

Yamaha YST-MS55D
Unlike its smaller brothers, the Yamaha top model declined to co-operate at all. Not even the standard initialisation could be completed (no entries in /proc/bus/usb/devices). The only way to coax a sound from the device was with a sound card and an analogue connection cable. But then the powerful bass really makes itself felt: an 80 watt amplifier makes for good pitch, and the two-way satellites show off their quality too. None of this is any help, though: at present there is sadly no USB support for these boxes.

USB loudspeakers under Linux
There is a kernel module under Linux which is responsible for the integration of USB loudspeakers (audio.o). The USB sound output slips seamlessly into the existing collection of sound card drivers, which is why the module soundcore is also needed (it is loaded automatically by modprobe). The advantage of this driver solution is that application programs normally have nothing else to worry about: all mixers, play programs and other speaker functions operate immediately after loading the module. The module is also included in the USB backport patch for kernel 2.2 and runs stably there – so in all mainstream distributions (such as the SuSE 7.0 used in the test) a modprobe audio should suffice to be able to use the loudspeakers without restriction. For this to happen automatically when required, you will still need to add the following lines to the file /etc/modules.conf:

alias char-major-14 soundcore
alias char-major-116 snd
alias sound-slot-0 snd-card-0
alias snd-card-0 audio

Make sure char-major-14 does not appear in any other line. If necessary, comment such a line out by simply putting # before it.

WinTV USB
With this device, apart from listening to the radio, it should also be possible to watch television. Unfortunately here again there is no Linux driver available yet. But there is some hope for all those who have already bought a WinTV: there was a report on the USB mailing list recently about a driver at the development stage. Anyone interested can contact jig@satec.es or Jorg (heckenbach@fgan.de).

Webcams under Linux
Everything went wrong with the Webcams in this test. Of the four test samples which reached the test lab, not a single one worked at first. Georg Acher, one of the USB kernel developers, took on the Logitech QuickCam Express after an interview (see Linux Magazine November 2000, page 49), and lo and behold: there is now a driver to support some of the functions of this camera. It has been created entirely through time-consuming reverse engineering, as Logitech itself will not issue any specifications. For this reason the driver is not yet one hundred per cent stable. For those who already own this model, though, it's certainly worth a test.

Exemplary: All functions of the Yamaha YST-M45D boxes worked straight off and without any problems

Philips
We had high hopes for the Webcams from Philips, because this is the only manufacturer who will reveal any information. Of the Webcams requested, however, only the new ToUcam Pro reached our offices – a shame, because this is precisely the device which is not yet supported by Linux. Evidently none of the USB developers was prepared to sign the Non-Disclosure Agreement required to obtain from Philips the specifications necessary to create a Linux driver. So far only Nemosoft Unv. (a pseudonym) has agreed to do so and is programming drivers for the models PCA645VC, PCA646VC, PCVC675K (Vesta), PCVC680K (Vesta Pro) and PCVC690K (Vesta Scanner). They are in fact only available as ready-compiled modules (http://www.smcc.demon.nl/webcam), but according to reports on the USB mailing list they work without any problems. Apparently there is still work to be done with the ToUcam (PCVC 740K): Nemosoft has announced that it will be meeting representatives of Philips about this matter.
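The /etc/modules.conf lines listed in the "USB loudspeakers under Linux" box can also be added by a short script, which avoids typos and duplicate entries. This is a sketch only, not part of the original article: the target path is deliberately a stand-in file so it can be tried safely before touching the real /etc/modules.conf.

```python
# Sketch: append the alias lines for USB audio to a modules.conf-style
# file, skipping any that are already present. Point TARGET at a copy
# before trying it on the real /etc/modules.conf.
TARGET = "modules.conf.test"   # stand-in for /etc/modules.conf

ALIASES = [
    "alias char-major-14 soundcore",
    "alias char-major-116 snd",
    "alias sound-slot-0 snd-card-0",
    "alias snd-card-0 audio",
]

def add_aliases(path):
    """Append any missing alias lines; return the lines actually added."""
    try:
        with open(path) as f:
            existing = f.read().splitlines()
    except FileNotFoundError:
        existing = []
    missing = [a for a in ALIASES if a not in existing]
    with open(path, "a") as f:
        for line in missing:
            f.write(line + "\n")
    return missing

added = add_aliases(TARGET)
print("added %d line(s)" % len(added))
```

Running the script a second time adds nothing, so it can be re-run harmlessly. Checking that char-major-14 does not appear in any other line, as the box advises, is still left to the reader.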
D-Link
Nor did we have any luck with D-Link. Instead of the requested Webcam DSB-C300 we got the DRU350 which – you've guessed it – was not supported by Linux.
Personal JukeBox PJB100
The Personal JukeBox, an MP3 player, was originally developed by Compaq and is now being produced under licence by a firm specially founded for the purpose. Inside this box is a 6GB hard disk, which provides for about four days of uninterrupted listening pleasure (manufacturer's specification: 100 hours at 128 kbps); the hard disk can be replaced by an even larger model if required, and theoretically any commercial 2.5" disk should fit. The player obtains its songs exclusively via the PC; there is neither an analogue nor a digital sound input. The headphones supplied fit very well, thanks to their sophisticated construction, yet do not press too hard on the ears, and they can be folded up for transport. The only thing missing: the player sadly has no cable remote control. On the other hand, the rest of the accessories are worth a mention: a plug-in power supply, a cigarette-lighter adapter and a cassette adapter.
Not even powerful basses can help here: with USB the Yamaha YST-MS55D boxes won’t make a sound
With the D-Link DRU-R100 Radio only the frequency can be adjusted by USB. The sound has to be tapped by analogue means
Data swap
Transfer of the digital audio data is of course done via the USB bus. The appropriate kernel module has to be downloaded from http://phobos.fs.tum.de/pjbox/ or http://crl.research.compaq.com/downloads/register.cgi?download=Linux+Jukebox, and it only works under 2.3 or 2.4 kernels. As a front-end there is either the command line program supplied, pjb, or alternatively pjbmanager for GNOME (http://mews.org.uk/pjb) or the Jukebox Manager for KDE (http://sourceforge.net/projects/jukeboxmgr). Unfortunately not one of these programs can read the files back. The JukeBox is operated via six keys: Start, Stop and four cursor keys. With the up/down keys it is possible to navigate the menu; with right/left the selected settings are changed. The user can toggle between various playback modes (album/artist, genre), and the normal functions, such as random selection and repeat, are also there. The run time is stated as 10 hours. If the bit rate deviates from 128 kbps, this is shown in the display.
Only worked with extra help from a USB developer: the Logitech webcam
At about 100 hours of playing time, the Personal JukeBox PJB100 MP3 player leaves very little to be desired – not even in connection with Linux

Rock steady
The hard disk, by the way, could not be ruffled even by vigorous shaking; the player should thus be suitable for jogging. In any case, the disk does not run constantly: it only spins up briefly, loads the song into RAM and switches the motor off again immediately. This process has just one small drawback: if you select a new song during playback, it takes three to four seconds before anything happens. At the end of a song this pause does not occur, as the hard disk spins up again before the playing time runs out.

Conclusion
Although there now exist a considerable number of Linux drivers for USB multimedia devices, blindly purchasing such a product is still a game of chance. Only when it becomes normal for hardware manufacturers to develop the corresponding Linux drivers in addition to Windows drivers (and to make this clear on the product packaging) will the purchase of such devices become much simpler for the average Linux user. In the next issue we will be concentrating on USB printers and scanners. ■
FEATURE
HARDWARE
Linux-compatible hardware components
PLUGGED IN PENGUIN BERNHARD KUHN
Driver or no driver? – that is the question for the newcomer to Linux who wants to treat their computer to the free operating system. The driver coverage for PC components is certainly very extensive. But simply buying a piece of hardware blind can lead to great annoyance. This can be avoided.
Posting: an article published in a newsgroup. OS: short for Operating System ■
Until a few years ago, there were very few manufacturers willing to offer Linux drivers for their hardware. For this reason, a powerful army of volunteer programmers has been beavering away at device support – provided the manufacturer will release the information necessary for the programming. But with a growing market share, manufacturers can no longer ignore the wishes of their customers, and there is a clear trend towards factory-supplied Linux drivers. This development, praiseworthy in itself, does however have its dark side: the drivers are often only available as proprietary solutions in binary format. This means that Linux distributors have little chance of adapting the driver to their product. Moreover, the manufacturer's driver programmers seldom have sufficient time to upgrade and improve the software as appropriate. The result: the drivers (such as those from nVidia) are often under-developed and the stability of the operating system suffers considerably. This is why users should opt for a device whose driver is available in source code. Most users don't even know where to start with this. But the few who, out of pure curiosity, dare to descend to programmer level usually make a valuable contribution in the form of bug-fixes or at least very thorough bug-reports, which help the driver developers to get the piece of software into a stable
condition. Generally, therefore, when buying new hardware for a Linux system: first look on the Internet to see if the device you want will also do its duty under Linux. There are plenty of informative starting points; Table 1 shows a few revealing sources for the respective driver requirements. If you don't find your device here, don't give up straight away, but ask your favourite search engine. If this too fails to find what you want, you can turn with confidence to the Linux hardware newsgroup comp.os.linux.hardware. Before sending a posting, check whether the question has not recently been posed and answered.
Linux inclusive
Internet searches are very time-consuming, though – and time is money, as we all know. So a few PC manufacturers in Germany have (in some cases for a considerable time) been including well-specified computers with pre-installed Linux in their range of products. With a system like this you can of course expect all the integrated components to be completely compatible with the operating system delivered. For good reason, the complete-system manufacturers very rarely use no-name products for
their Linux packages, as the big-name component makers can no longer ignore the ever-growing Linux market. Obviously the manufacturers also want to get in on the act, even if only by providing information on the hardware-related register programming, so that driver development by third parties is possible. After all, one does have a reputation to lose (as did the company Adaptec, at the mention of whose name die-hard Linux cognoscenti still wrinkle their noses).
False economy No-name Chinese and Taiwanese products for the end-user mass market, such as those found piled up high and sold cheaply in supermarkets, do make new computers cheaper than ready-made Linux PCs, but they are certainly not as compatible with the free OS, since this is where the card manufacturers often play dumb and display no inside details about their cards – in the false assumption that this will give them a competitive advantage in the tough world of market shares. Anyone who has already got a supposed bargain on his desk, and is now having to buy Linux-compatible components as well – because the multimedia hardware which came with it is refusing to do its duty – will very soon recall the saying: "You get what you pay for".
Cheaper?
A little tip for hardware specialists who want to build their system themselves: apart from a look at the support databases or the Web sites of the respective hardware projects (see Table 1), one can of course also take a peek at the Web pages of the Linux PC integrators. This, after all, is where the individual components of systems which have undergone the acid test are listed. Adding hardware yourself is of course not prohibited – whether do-it-yourself or by commissioning the PC fixer around the corner. In the latter case, though, the price difference from a ready-made Linux PC may turn out to be so small that at best the proximity of a more or less competent dealer speaks in favour of this solution.
Coming to terms with it
Appropriate local specialist shops are also a good starting point for those customers who are uncertain but willing to buy. Many dealers' technical departments have already seen the way the wind is blowing and can help you out with good advice. If the dealer does not know whether the hardware is Linux compatible, it is not uncommon for an eight to 14-day returns policy to be granted on request, in case the new hardware components cause problems in your home computer.
Yet other shops are leading by good example and are already referring in their (online) price lists to the impeccable implementability of their products in combination with Linux.
PCI vs. ISA
So far we have been assuming that a brand new computer with Linux is to be used alone or (more realistically) in parallel with another, also very commonly found, operating system. Here, when buying new, the reins are in your hands. But quite often Linux newcomers (e.g. after buying a PC supermarket bargain) want to upgrade their old second computer into a Linux box. In computers of the older type of construction, Plug & Play ("Plug & Pray") cards or ISA cards with mechanical jumpers often scrape a living. Linux does in fact tolerate almost all (PnP) network cards, although old (PnP) sound cards often run only in Soundblaster compatibility mode, more rarely also in Windows Sound System mode. For really exotic graphics cards there are no Linux drivers (or better: X servers). The integration of PCI cards into the Linux system is far less problematic than that of ISA bus cards. PCI cards can announce their demands and capabilities to the PC BIOS (incidentally, a technique that already existed 16 years ago with the Amiga). ISA cards, on the other hand, have to be probed for automatic recognition. This can lead to crashes and, in the worst case, to loss of the file system. For this reason, with such hardware components at least the I/O base address should be specified when loading the driver module.
Autodetect
PCI/AGP cards have a unique vendor and device identification number (ID). The installation programs of the distributors can therefore very easily assign a PCI card to its kernel driver and as a result automatically configure the detected hardware during the Linux installation. In the case of old ISA bus cards, or brand new PCI or AGP cards (which are not yet entered in the assignment table), the associated driver must be specified manually with one of the configuration utilities provided (e.g. linuxconf). Many Linux newcomers have no end of trouble with this, as the drivers often have names which superficially have nothing to do with the product name or the chip on the card. Two examples: the driver tulip.o must be used for network cards with the DEC 21143 Ethernet chip. To set this up, one uses a configuration tool or manually enters the line alias eth0 tulip in the file /etc/conf.modules (eth0 stands for Ethernet device 0). The Creative Labs PCI128 is downwardly compatible with the PCI64V – the latter has a component with the designation ES1373.
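What the installers do with these IDs can be sketched in a few lines: a lookup table maps a vendor:device pair to a module name. The table below is a toy excerpt assembled for illustration; the 21143/tulip pairing follows the example in the text, and the numeric IDs are given as assumptions, not taken from the article (real distribution tables hold thousands of entries).

```python
# Toy sketch of what a distribution installer does: map a PCI
# vendor:device ID pair to a kernel driver module. IDs are assumed
# values for illustration only.
DRIVER_TABLE = {
    ("1011", "0019"): "tulip",    # DEC 21143 Ethernet (see text)
    ("1274", "1371"): "es1371",   # Ensoniq chip on Soundblaster PCI cards
}

def pick_driver(vendor, device):
    """Return the module name for a PCI ID pair, or None if unknown."""
    return DRIVER_TABLE.get((vendor.lower(), device.lower()))

print(pick_driver("1011", "0019"))  # tulip
```

With the module name known, the installer can then write the matching alias line (here alias eth0 tulip) into /etc/conf.modules, exactly as described above; an unknown ID is what forces the manual configuration step.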
GLX vs. DRI: ‘Direct’ means the graphics output is faster
At present, though, there is only the module es1371.o for the SB PCI64, which can however be used for all three Soundblaster cards mentioned – although this does not exhaust all the functions of the cards completely. Apart from the free sound drivers (Table 1: ALSA and OSS/Free) there are also commercial ones (4Front's Open Sound System).
Protocol: A standardised language, with which computer and (service) programs in a network communicate. The network in this case consists of just one computer. One then talks of a Loopback. GUI-Toolkit: GUI-toolkits provide programmers with (user)elements for programs with graphical user interfaces. This means the toolkit used (as well as the cleverness of the programmer) decides how user-friendly a program will be. Multihead-Support: Support for several monitors on one computer, which show different parts of the image created by the graphics card Patch: Software repairs, which correct errors or add on a functionality. When the Linux operating system kernel is fixed up with patches this allows the use of hardware which was previously not supported by the stable user kernel. ■
Win or lose Hardware with the prefix or suffix win in the product name (e.g. WinModem) is generally very difficult to persuade to co-operate with Linux, as their manufacturers prefer a different OS. Praiseworthy exception: For Softmodems based on the Lucent chip there is a stable kernel driver. Generally, new consumer devices which formerly had their functionality reside on the chip are increasingly having it realised by cheaper software. This means accepting a higher processor load because of the extra burden imposed on digital signal processing. Since this often means using US-patented algorithms and/or the (sometimes bought in by manufacturers) copyrighted source code, things are looking pretty bad for Linux hardware drivers.
Graphics cards (2D) In the case of Linux the graphics card driver is very different from the kernel drivers for SCSI, Ethernet or Sound. For the screen output, there really are no
proper drivers which would be part of the operating system, but a powerful program – the X server – which accepts graphics commands via the X11 protocol and then writes directly to the registers of the graphics card. Application programs send their drawing requests to the X server using an X11-based GUI toolkit (e.g. Qt or GTK). As a result the application (the X client) can run on a powerful application server, while only the display is handled separately on an (old and) weak client computer – on which an X server is running; the confusion of terms is now complete. This architecture also has the advantage that the complex and error-prone graphics driver runs in so-called user space and can profit from the memory protection mechanisms of the processor. An X server will at worst cripple the console. On a free-standing home computer even this is fatal, but on a network server at least the Web/FTP/mail/file services would continue to run as needed. An X server that runs amok can usually be restarted by remote control – one simply logs in to the problem computer using ssh or telnet. A faulty sound or network driver in kernel space, on the other hand, can freeze up the whole computer. But this happens only very rarely with Linux (in drivers at the test stage). For almost all current and older graphics cards there are appropriate X servers. It is only in the case of brand new products that one ought to make enquiries first about downwards
Table 2: Five commandments for buying hardware
1. Before buying, check for Linux compatibility on the Internet (see Table 1)
2. Consider buying a complete PC system with factory-installed Linux
3. Negotiate with the dealer before buying to agree he will take it back if it does not work properly
4. Setting up cheap no-name components often costs heavily in terms of time and patience – so avoid it!
5. Supermarket bargains usually include one or more Linux-incompatible components or onboard elements
compatibility. This means the latest tool may not be perfectly refined, but at least you get more than just a text console. If there is no free XFree86 server to support your special card, you could try contacting third-party manufacturers such as Xi Graphics or Metro Link. These supply the commercial X server families Accelerated-X and Metro-X – for the average user these are certainly not cheap, but on the other hand they are equipped with multihead 3D support (e.g. for the Matrox G400 DualHead) and other extras. If all else fails with a brand new graphics miracle, you can at least usually use the free framebuffer X server. This X server attaches directly to the VESA 2.0/3.0 BIOS functions of the card, so even resolutions of up to 1280x1024 at a 76Hz screen refresh rate are possible – but unfortunately not accelerated. The KDE window manager option "Move window with content" should therefore be deactivated.
Rosy future Certainly, the hardware manufacturers mainly want to make a fast buck from the Windows user, but Linux has also already captured a solid market share. In
particular, in the professional network and server field, Linux drivers are now almost always immediately available when new items of hardware come out. As the desktop steadily improves, Linux is also starting to be of interest for the consumer market, which will obviously spur the hardware manufacturers increasingly into action with their own drivers – just as has already happened with ATI, Creative Labs, nVidia and Hewlett-Packard. ■

Table 3: Rules of thumb for hardware support
1. Linux runs in principle on 386SX processors and above with at least 2MB of main memory. For Mandrake Linux, a computer in the Pentium class with 64MB (better 128MB) of main memory is recommended. Motherboards with onboard components are to be avoided, unless it is known that all of them are supported one hundred per cent by Linux.
2. For almost all (RAID) SCSI, network and graphics cards (2D) there are suitable Linux drivers – though sound cards should be enjoyed with care.
3. USB devices can currently only be operated under Linux with patches – the Linux newcomer should steer clear of these. USB support will not be integrated until kernel 2.4 (coming in the first quarter of 2001).
4. SCSI scanners and CD burners are very well supported by Linux. But in the case of the parallel port or ATAPI versions, the chances look a bit slim.
5. Brand-new, megacool hardware is highly unlikely to function with Linux, or only to a very limited extent. But drivers are often available two to three months after the market launch.
Table 1: Where to start looking on the Internet
Source – Driver – URL

Graphics
XFree86 Project – free X server ("graphics driver") – http://www.xfree86.org/
DRI project – 3D acceleration for XFree86-4.0 – http://dri.sourceforge.net/
Utah-GLX project – 3D acceleration for XFree86-3.3.6 – http://utah-glx.sourceforge.net/
Xi Graphics – commercial X server – http://www.xig.com/
Metro Link – commercial X server – http://www.metrolink.com/
Nvidia – Detonator driver for Nvidia chips – http://www.nvidia.com/Products/Drivers.nsf/Linux.html
3dfx – Glide driver for Voodoo and Banshee chips – http://linux.3dfx.com/
Matrox – 3D and multihead drivers for Matrox cards – http://www.matrox.com/mga/support/drivers/files/linux_03.cfm
Linux3D – Daryll Strauss' links collection on 3D graphics card drivers – http://www.linux3d.org/hardware.html

Sound
4Front Tech Open Sound System – commercial sound card drivers – http://www.4front-tech.com/linux.html
OSS/Free – free sound card drivers – http://www.opensound.com
ALSA Project ("Advanced Linux Sound Architecture") – alternative free sound card driver project – http://www.alsa-project.org/

Multimedia
Multimedia4Linux – Holger Klemm's infosite for TV and framegrabber cards (etc.) – http://www.multimedia4linux.de/
Video4Linux – drivers for TV, framegrabber, MPEG and DVD cards – http://www.linuxvideo.org/

Peripheral devices
SANE Project ("Scanner Access Now Easy") – scanner software and drivers – http://www.mostang.com/sane/
Ghostview – printer drivers – http://www.cs.wisc.edu/~ghost/printer.html
Linux-USB project – Linux drivers for USB devices – http://www.linux-usb.org
Linux Winmodem Support Project – drivers for Lucent Winmodem chips – http://www.linmodems.org

CD burners
gcombust – burner software – http://www.abo.fi/~jmunsin/gcombust/
xcdroast – burner software – http://www.xcdroast.org/

Compatibility lists
SuSE hardware database – various, for SuSE Linux 7.0 – http://cdb.suse.de/
FEATURE
HARDWARE IN DETAIL
Components for the Linux-PC
RUNS WITH LINUX BY GREGOR ANDERS
Linux has taken the step of putting the system onto the desktop. The most important features of modern Windows PCs are thus now available under Linux. Whether USB or hardware-accelerated 3D graphics – the constant rebooting into the other system, just to play a game for a short while or to read images off a digital camera via USB, has finally come to an end.
Graphics Power

Accelerated graphics with OpenGL support is integrated into XFree86 4.0, so even inexperienced users can enjoy 3D under Linux without patching the sources and recompiling. Whether nVidia's flagship GeForce2, Matrox's G400 range, ATI or Voodoo cards: drivers for these chipsets are already included in the standard installation of the latest distributions from SuSE, Red Hat or Mandrake. nVidia goes its own way in this matter and offers closed-source drivers, which realise almost every dream of the ambitious gamer. A set of installation instructions is included in the package. Those finding this too complicated can plump for the standard nVidia drivers from XFree 4.0.1, but will then have to do without ideal performance. Note, though, that these drivers will not co-operate with cards with NV1 or 128/128ZX chipsets. For these older chips there are open source drivers (see Table 1 in the previous article). 3dfx, ATI and Matrox are already receptive and have immediately provided developers with the necessary information to support their cards completely under Linux. At this point, though, it should be mentioned that many older graphics cards will no longer function with the new X servers.
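To see whether hardware acceleration is actually in use, glxinfo (part of the GLX utilities) reports the DRI status. A minimal sketch, assuming an X session may or may not be running:

```shell
# Sketch: check whether direct rendering (hardware 3D) is active.
# glxinfo needs a running X session, so fall back to a note otherwise.
if command -v glxinfo >/dev/null 2>&1; then
    status=$(glxinfo 2>/dev/null | grep 'direct rendering')
    status=${status:-"no GLX info (is X running?)"}
else
    status="glxinfo not installed"
fi
echo "$status"
```

On a correctly configured XFree86 4.0 installation with a supported card, the reported line should read "direct rendering: Yes".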
Voodoo graphics cards have the advantage that they also benefit from chip acceleration under XFree 3.3.6 (3D exists, however, only in full-screen mode). The open-source Glide library – the basic driver for Voodoo chips – may be old, but this also makes it more refined. If performance were the only criterion, then graphics cards with nVidia chips would come out near the top. But the stability of this proprietary driver still leaves a great deal to be desired. Anyone wishing to risk a little 3D gaming will probably be unmoved by this, but in professional 3D applications the X server tends to come to a standstill now and then – especially if there is a parallel I/O-intensive process running. Anyone who would nevertheless like to settle on an nVidia-based graphics card should take a look at the GeForce256 variants, as these currently have the best price-performance ratio. It doesn't always have to be the top model. The Matrox G400 DualHead version allows two monitors to be connected to one graphics card, and thus practically doubles the desktop. A driver is also available (see Table 1), and it comes with 3D acceleration (although only in single-head operation). There is not yet any 3D support for ATI RageFury graphics cards with Rage 128 or Rage 128 Pro chips. For this reason, in the case of professional
3D Power for the penguin: For almost all current graphics cards there are powerful drivers available. nVidia chips are especially fast, but sadly not particularly stable: The Voodoo driver is much more robust.
applications, display errors are the order of the day. With 3D games these cards are usually fast enough and attain roughly the same level of performance as Voodoo and Matrox chips. For ATI's flagship Radeon, X server specialists Xi Graphics have recently brought out an alpha test version of an X server with fast 3D architecture (X Direct Access, XDA for short), which functions similarly to the Direct Rendering Infrastructure (DRI) of XFree86-4.0. In the next kernel, 2.4, incidentally, AGP support will be an integral part – and there is a backport patch for kernel 2.2, which all the mainstream distributors have included in their kernels. Don't expect too much from AGP, though: most games scarcely exhaust the now usual 32MB of graphics memory, so AGP accesses to textures in main memory are not even necessary. In scientific/professional applications this can of course look very different. A little tip for owners of an Athlon mainboard with VIA KT133 chipset: If the agpgart module will not load, the entry
well known user environment from Redmond, but with gcombust and XCDRoast there are two options for avoiding the mess of command switches of cdrecord. An exhaustive comparison of 16 different CD burning programs can be found at http://sites.inka.de/~W1752/cdrecord/frontend.de.html. Unfortunately, in some circumstances getting an ATAPI CD burner to perform may be somewhat more complicated than a SCSI device (it is distribution-dependent), as the cheaper burner has to be addressed via a SCSI emulation layer in the kernel. Hence this tip to all newbies: allow plenty of time for reading and testing. One place to start is the CD-Writing HOWTO, found at http://www.ibiblio.org/pub/Linux/docs/HOWTO/CD-Writing-HOWTO.
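For reference, a sketch of the raw command line that front-ends like gcombust and XCDRoast wrap. The device address and directory are examples, and the commands are printed rather than executed so the sketch is safe to run as-is:

```shell
# Sketch of the raw commands the GUI front-ends wrap. The SCSI device
# address is an example (find yours with 'cdrecord -scanbus'); 'echo'
# keeps the sketch harmless - drop it to burn for real.
dev=0,0,0                  # example bus,target,lun address of the burner
iso=/tmp/backup.iso
echo cdrecord -scanbus                     # list recognised burners
echo mkisofs -r -J -o "$iso" "$HOME/docs"  # build an ISO9660 image first
echo cdrecord dev="$dev" speed=4 "$iso"    # then write it to disc
```

For ATAPI burners the same commands apply once the SCSI emulation layer mentioned above is active.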
With the Siemens DVB card in the digital video recorder
Linux supports all SCSI and ATAPI CD burners almost without exception
options agpgart agp_try_unsupported=1 in /etc/conf.modules or /etc/modules.conf (depending on the distribution) can truly work miracles.
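Putting that tip into practice is a one-liner. A sketch, using a scratch copy so nothing on a live system is touched (the real target file varies by distribution):

```shell
# Sketch: append the agpgart workaround to the module-options file.
# A scratch copy is used here; on a real system conf would be
# /etc/conf.modules or /etc/modules.conf.
conf=/tmp/conf.modules.demo
: > "$conf"                       # start from an empty demo file
grep -q '^options agpgart' "$conf" || \
    echo 'options agpgart agp_try_unsupported=1' >> "$conf"
cat "$conf"
# afterwards, as root on the real file: modprobe agpgart
```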
Flicker and static At a time when 17 inch monitors represent the lower end of what is acceptable, more and more users are employing their PC as a TV set. In this category Hauppauge WinTV variants are the commonest. But also Terratec TERRA TV+ or miroVIDEO PCTV pose no problems with setting up and use.
Hot disks CD burners have long since found their way into the free system. Certainly, ease of use when creating a CD on the command line is not as great as with the 6 · 2001 LINUX MAGAZINE 49
USB? That’ll do nicely
The author Gregor Anders is employed at the computer centre of the University of Cologne at the Help desk. When he is not helping students with their day-to-day Windows problems, he programs Web sites in Perl and PHP.
For a long time, Linux users had no alternative to the chaos of serial and parallel cables. But more and more often, customers buying hardware find that the device they want is only offered as a USB variant – and nobody wants to wait longer than necessary just because, for lack of support, they have to fall back on the slow serial variant. USB is marching into Linux with the launch of kernel 2.4, which has been in the test phase for a long time. For the current user kernel (2.2.17) there is a backport from the 2.4 sources (this should become an integral part of 2.2.18). All current mainstream distributors have integrated this patch into their kernels, so Linux users can use a large number of USB devices out of the box: the hardware range supported extends from USB mice and keyboards via USB digital cameras up to USB scanners and printers. At http://www.linux-usb.org/devices.html there is an exact listing of all currently supported USB devices. To be on the safe side, when buying a USB mouse or keyboard make sure it comes with a PS/2 adapter – just in case.
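With a patched 2.2 kernel (or 2.4) and usbdevfs mounted, the kernel's own view of attached devices can be read directly. A sketch:

```shell
# Sketch: read the kernel's own list of attached USB devices. This
# requires usbdevfs mounted on /proc/bus/usb; otherwise report that
# instead of failing.
if [ -r /proc/bus/usb/devices ]; then
    usbinfo=$(cat /proc/bus/usb/devices)
else
    usbinfo="usbdevfs not mounted (try: mount -t usbdevfs none /proc/bus/usb)"
fi
usbinfo=${usbinfo:-"no devices listed"}
echo "$usbinfo"
```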
Sound for all
Whether scanner, printer or input devices: Linux supports all kinds, but a look at the Linux USB homepage can save a lot of bother
Most sound cards no longer cause any problems for Linux, thanks to the ALSA project and the OpenSoundSystem, but there can be complications with a few onboard variants. Generally you should make sure your PC is as free as possible from onboard components, to avoid unnecessary problems. A later upgrade to a faster graphics card or better sound card is very hard to do with onboard components and may even be impossible. The distributions detect both Intel's AC97 sound chip and the various SoundBlaster versions during installation and integrate the necessary driver right away. Unfortunately, support for so-called WinModems is less advanced. At present only devices with Lucent chips work. It is advisable to avoid these and choose an external modem, even if this is usually dearer. For those who would like to try anyway, the recommended site is http://www.o2.net/~gromitkc/winmodem.html.
DVD, the Cinderella

DVD support under Linux is still in its infancy. Development is being thwarted by patent problems, which make it impossible for Open Source developers to write Linux DVD software without breaking the law: the DVD algorithms are protected and cannot be used without a licence. One place to start, for those who want to try anyway, is the site http://www.linuxdvd.org.
Scanner, printer & co.

Using scanners (parallel, SCSI or USB versions) is, thanks to the SANE project, no longer a magic trick – but before buying you should find out whether the model of your choice will function with Linux (see the SANE homepage in the table). The use of printers under Linux, however, does often pose a problem: GDI printers oriented to Windows will usually only run under Linux with a great deal of effort (if at all). And the free drivers available for Linux are very often of much lower quality than their commercial Windows counterparts. This is because those drivers often use complicated and/or patented processes for colour balance. Lexmark is one of the first manufacturers to supply its printers in the SOHO (Small Office/Home Office) sector with high-quality Linux drivers.
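Whether a connected scanner is recognised can be checked from the command line with SANE's own tools. A sketch (scanimage ships with the SANE package):

```shell
# Sketch: ask SANE which scanners it can see. 'scanimage -L' lists the
# recognised devices; cover the case where SANE is not installed.
if command -v scanimage >/dev/null 2>&1; then
    scanners=$(scanimage -L 2>/dev/null)
    scanners=${scanners:-"no scanners found"}
else
    scanners="scanimage not installed"
fi
echo "$scanners"
```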
Massive mass storage

Hard disks are getting bigger and cheaper all the time. Cheap (E)IDE disks still have a bad reputation in the professional field, although thanks to UltraDMA 33/66/100 and 7200 RPM, IDE-based mass storage is not far behind its SCSI counterparts in terms of performance. But there is a good reason for the reputation: there are sometimes considerable stability problems in DMA mode – especially with newer motherboard chipsets, whose Linux UDMA drivers are still marked as experimental. In the case of an office workstation, though, it usually does not matter if the hard disk is only operated in the slower PIO mode. Software developers, on the other hand, quickly learn to appreciate smooth continuous operation while I/O-intensive applications are running, and prefer to rely on a SCSI host adapter – where DMA is guaranteed. Anyone brave enough, incidentally, can try to optimise the hard disk accesses of IDE devices: with hdparm -d1 -c1 the hard disk is put into DMA mode and 32-bit I/O accesses are activated. ■
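Spelled out, that tuning step looks like the following sketch. /dev/hda (the first IDE disk) is an assumption, and it must run as root:

```shell
# Sketch: enable DMA and 32-bit I/O on an IDE disk, then benchmark.
# /dev/hda is an assumption (the first IDE disk); act only if present,
# and remember that -d1 can destabilise chipsets whose UDMA driver is
# still experimental.
disk=/dev/hda
if [ -b "$disk" ]; then
    hdparm -d1 -c1 "$disk"   # -d1: DMA on, -c1: 32-bit I/O on
    hdparm -t "$disk"        # rough sequential-read benchmark
else
    echo "no IDE disk at $disk"
fi
```

Running hdparm -t before and after the change shows whether DMA mode actually helps on your chipset.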
052handspring.qxd
02.02.2001
18:19 Uhr
KNOW HOW
Seite 52
CONNECTIVITY
Using the USB cradle
CONNECTING A HANDSPRING VISOR JOHN SOUTHERN
Backups are a vital necessity for all systems. This feature will show you how to connect a Handspring Visor to your Linux box and so back up your data.
Palm Pilots have been around for quite a while now and come in many forms, ranging from the original Pilot to modern Palm VII machines. Originally developed for 3Com by Jeff Hawkins, Palm has become a successful division in its own right. Not completely satisfied with 3Com, three of the original team left and set up their own company, Handspring, to supply Palm devices more cheaply and with expansion facilities. The main differences between Handspring devices and those of Palm are that they have an expansion slot, called a Springboard, and that they use a USB connection when connecting to your PC. A USB docking cradle is supplied when you buy the device. However, the supplied CD-ROM only comes with Win 9x or Mac drivers. Win NT users need to buy a serial docking cradle, but more of that later. Under Linux it is possible to connect a Handspring Visor using the USB cradle, but some work must first be done.
The first thing to do is to obtain a copy of the Handspring Visor mini-HOWTO written by Ryan VanderBijl, available from http://www.calvin.edu/~rvbijl39/. The Visor Linux USB project can be found at http://milosch.net/visor/. Read these to familiarise yourself, then on to the task at hand. Firstly, make sure your distribution of Linux contains the Visor module. This is certainly present in Mandrake 7.2. To check whether the module is present, open a console under the root login and run the following:

modprobe visor

If the module is not present, you will get the answer back that the system cannot locate the module visor. If the module is present on your system, the command just returns to the prompt. Once you have a distribution with the Visor module present, we need USB support in the kernel. If you are running the new 2.4 kernel this is included, but if you have an older system then you may require a backported
version. Kernel 2.2.18 now incorporates USB support as modules. We need to determine which type of USB host controller is present. Start up a console and type:

lspci -v

This command lists all PCI devices with their details. The line we are interested in comes after "USB Controller... Flags...": the next line is either "I/O ports..." or "Memory at...". The former indicates a UHCI controller, while the latter indicates an OHCI controller, typical of add-on USB cards. With an OHCI controller you will need a recent kernel (2.4.0-test12 or later). Using make xconfig, compile the following into the kernel:

CONFIG_USB
CONFIG_USB_DEVICEFS
CONFIG_USB_UHCI or CONFIG_USB_OHCI
CONFIG_USB_SERIAL
CONFIG_USB_SERIAL_VISOR

If you want to make the system hotplug compatible, add CONFIG_HOTPLUG. Use the following line to build your new kernel image (note: && is a useful way to chain several commands in one command line entry):

make dep && make bzImage && make modules && make modules_install

Before rebooting you need to modify the /dev entries. Create a device entry for raw device USB0 (unbuffered character special file, major number 188, minor number 0) and another for raw device USB1 (major number 188, minor number 1):

mknod /dev/ttyUSB0 c 188 0
mknod /dev/ttyUSB1 c 188 1
chmod 666 /dev/ttyUSB*
cd /dev
ln -s /dev/ttyUSB1 pilot

If you want to use ColdSync then you also need to map in Palm with:

ln -s /dev/ttyUSB1 palm

Within /etc/fstab, using a text editor, add:

none /proc/bus/usb usbdevfs defaults 0 0

Now reboot. To test the system we will use pilot-link, which can be found at ftp://ryeham.ee.ryerson.ca/pub/PalmOS/. Start a console window and, with the Visor docked, press the HotSync button, then type:

pilot-xfer -b visorbackup

This will make a full backup of the Visor into the directory visorbackup. If you only want to back up certain databases, use the option -f and the database name, such as:

pilot-xfer -f AddressDB

We have safely backed up the data and can stop worrying about losing all the work that went into creating the Visor databases. Now we can look at what software is available on your Linux machine to use the data. We could import the databases into a text editor, but they are not very readable.

[below] Pi-address: Full control of the Addressbook
[bottom] J-Pilot: All four main programs in one package
Pi-address The first I would recommend is pi-address. This is available at ftp://ftp.belug.org/pub/user/mw/pilot/. By opening the backed up database we have full access and control of the Addressbook.
J-Pilot

Next is J-Pilot, which is conveniently placed in the KDE menus under Applications/Communications; it is available from http://jpilot.org/. When first using this package, do not be surprised if no data is visible, as it looks in ~/.jpilot/ for the databases. J-Pilot is not just for the Addressbook database: it can also handle the Datebook, To Do lists and Memos.
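Getting the backed-up databases to where J-Pilot expects them is a simple copy. A sketch (visorbackup is the directory created by pilot-xfer earlier; the .pdb file names are examples):

```shell
# Sketch: copy pilot-xfer's backup into J-Pilot's data directory.
# 'visorbackup' is the directory created by the backup earlier; the
# database file names are examples - adjust to what your backup holds.
mkdir -p "$HOME/.jpilot"
for db in AddressDB.pdb DatebookDB.pdb ToDoDB.pdb MemoDB.pdb; do
    if [ -f "visorbackup/$db" ]; then
        cp "visorbackup/$db" "$HOME/.jpilot/"
    fi
done
```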
Gnome-Pilot

This package contains a daemon that monitors for any Palm device as it connects. It is available from http://www.gnome.org:65348/gnome-pilot/.
Xcopilot This package is now known as POSE (Palm OS Emulator). It is a Palm emulator that runs under X. It is available from http://www.palmos.com/dev/tech/tools/emulator/. To run this program you need a copy of the ROM image from the Visor. Instructions for extracting this are available at http://www.thehaus.net/AltOS/PalmOS/htvisorrom.shtml.
[left] Kpilot: Memo screen
[right] PilotManager: Simple configuration

Kpilot
Kpilot is again software to replace the Palm Desktop software. It is now up to version 3.2.1 and is available at http://www.slac.com/pilone/kpilot_home/. The
software uses conduits, which allow Kpilot to exchange data with other programs such as Korganiser.
PilotManager

Written in Perl, this is again a graphical program suite. It uses a HotSync daemon and, through conduits, is capable of many data exchange formats. Many conduits have been written, including Syncmidi (used to change the Datebook alarm), SyncBBDB (syncing the Addressbook with emacs' BBDB database) and MALsync (an interface for the AvantGo system). PilotManager can be downloaded from http://www.moshpit.org/pilotmgr/.
ColdSync

This is a console-only program that takes the pain out of syncing the Visor. A FastSync facility can be used that transfers only changed files, and conduits make it extendable. Version 1.4.6 is current. ColdSync can be downloaded from http://www.ooblick.com/software/coldsync/. To perform a backup with ColdSync use the following command:

coldsync -mb visorbackup -p /dev/ttyUSB1
Linux Palm desktop

This is an Open Source project to add Palm OS connectivity to Applixware via Shelf. It is available at http://shelf.sourceforge.net/.
Finally, using the serial cradle

The Handspring Visor serial cradle is sold separately and, apart from the connector, looks identical to the USB cradle. With pilot-link the command is simply:

pilot-xfer /dev/tty0 -f AddressDB

This is not as quick as the USB method, but if you also run NT you have no choice but to use the serial cradle. ■
055distributed.qxd
31.01.2001
13:53 Uhr
Seite 55
FILESHARING
KNOW HOW
Clients for file sharing
UNFAMILIAR TERRITORY TOBIAS FREITAG
While market researchers are arguing whether unlimited exchange of MP3 files is pumping up sales of CDs or spells the end of all sound media, new services, servers and clients are blithely emerging regardless. Others die off or are assimilated by traditional economic forces. Let’s now take a look at the individual species.
Sold down the river?

The firm Scour has now gone out of business. The financial burn-out to which so many start-ups have fallen victim forced the firm first to "restrict its product range to Web site service". It had built up a network where users were not limited to swapping just MP3 files. Now it has been bought up by CenterSpan Communications. The new owners plan to re-open the service to the general public from March 2001 – for a membership fee. At Napster, too, the songs will soon cost money, after the Bertelsmann Music Group (BMG) entered into a strategic alliance with the file-sharing pioneer. Nevertheless, the possibility of free use will not disappear, and not only pieces from the BMG catalogue will be accepted on the network. Whether the start-up will survive this balancing act unscathed is something we will find out in the coming months.
Anyone can get involved in the worldwide exchange of MP3 files with Napster and Gnutella – all you need is the right software. In this article we are going to pick out one or two clients from each file-sharing network and put them under the microscope.

Firewalls

A couple of things cause the same problems with all services: the resumption of interrupted downloads is something only Scour could manage (and even then not quite perfectly), and because many users are hiding behind a firewall, many downloads do not work. The reason is instructive: if participant one is protected by a firewall and so is participant two, neither can make a connection to the other, since each incoming connection to the Napster port is blocked by the firewall. There are three possible remedies. Either you have administrator rights on the firewall and allow incoming connections on ports 6666, 7777 and 8888 through to the computer. Or you use a client function that filters out results from hosts behind firewalls. The third possibility is to run a SOCKS proxy on the firewall, which enables clients to make connections through the firewall from outside. Only very few clients are set up for this, however; for Gnutella, for example, not a single one is known yet.
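The first of those firewall remedies can be scripted. A sketch assuming a 2.4-era iptables firewall ('echo' keeps it harmless to run as-is; drop it, and run as root, to apply):

```shell
# Sketch, assuming an iptables firewall: allow incoming Napster data
# connections on the ports named above. 'echo' keeps this harmless to
# run as-is; drop it (and run as root) to apply the rules.
ports="6666 7777 8888"
for port in $ports; do
    echo iptables -A INPUT -p tcp --dport "$port" -j ACCEPT
done
```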
Gnapster

Gnapster is certainly the most refined client among the classic MP3 swap systems. It used to be a bit prone to hanging (and/or crashing), but is now stable as of version 1.4 and also offers a few features:
Connection to the Napigator Web site, where an up-to-date list of OpenNap servers is hosted; a chat window and a log window; and a browse feature allowing all files released by a user to be searched. On starting, the program not only connects automatically to the last used server, but can also go automatically to previously defined chat rooms (the default is the #Gnapster forum). In the list of co-chatters the program displays, apart from the name, the number of shared files and the connection rate. Whether the rate is accurate is another matter, because as with all Napster and Gnutella clones this figure is usually defined by the user himself. By and large the program gives a solid impression, even if instabilities occurred now and then in the past. The latest version, 1.4.1a, displayed no problems during a load test.
Knapster Since we have dealt, up to now, solely with Gnome sites it is time to see what the KDE developers have achieved. In Knapster the chat function is much more strongly constructed than in its Gnome competitors. With one click on the button the latest list of all channels is downloaded and displayed.
Add-on tools

If you compile the source code of Dewrapster, it is also possible to swap films, images and programs with a Napster client. The listed programs, though, are usually just cracked Windows programs. The program is a fairly adventurous hack, because the firm Napster originally conceived its service for MP3 swapping only. With its increasing popularity, however, the requirements of users rose too, and two hackers programmed Wrapster, a tool which packs any files you want into an uncompressed Zip archive and gives it an MP3 header. The original Napster software is thus outwitted, and at the same time a format for the other clients was created. Both founders have now withdrawn from the project and, fittingly, are giving away the source code. Sadly it cannot be downloaded from their site, nor does it appear to be anywhere else on the Internet.

[left] One lonely voice in the Gnapster channel: nothing but log-in messages
[right] The channel list in Knapster: clear in terms of both layout and the number of chatrooms.
Napfinder

With the aid of the command line tool Napfinder, all OpenNap servers registered with Napigator can be searched for a user or (more likely) for a file. A test search for the indie band Tocotronic, which produced at most 50 hits with normal clients, collected more than 3600 hits within five minutes with the aid of this tool. But the hit list is not easy to evaluate. A function to download the found files directly has not been implemented, and will no longer be added by the original authors: they have declared the task completed and the project ended.
Gnutella Systems like Napster or Scour have one crucial disadvantage compared to a network like Gnutella: The central point where the index of all files is located is a Single Point of Failure. It can be crippled by power failures, network breakdowns and lawyers. Gnutella, Mojonation and others, on the other hand, are based on decentralisation. But this, too, can have drawbacks. In the case of Gnutella transmission problems are also growing as it becomes more popular. The hordes of modem users have turned into a bottleneck, because in the Gnutella network each client has to do the same work, whether attached to a 2Mbit dedicated line or a 28.8 Kbps modem.
Solution in sight But now someone has come up with a solution for this problem, too: The network specialists at Distributed Search Services have developed a Java program intended to take over most of the network traffic with a high-speed connection to the Net. The so-called Reflector network node acts as a relay station for slow modem access, thus solving the problems of both parties. Firstly, the modem users are screened off and the speed of the network is no longer affected. Secondly, they profit from the direct connection to the Reflector and its file index. Next we shall take a look at a Java client, which may not be GPL software but at least is available as freeware to everyone for download.
[left] The protective haven for modem users: the Reflector separates out part of the network and transmits only relevant data to the clients behind it [right] Small, strong, black: Gnut may not look very imposing, but can do more than many of its graphical brothers
LimeWire

This Java client provides everything one misses in the native C and C++ programs: resumption of interrupted downloads, high stability, even a Family Filter, which can shield children from too much sex and violence. It can also keep different data formats separate from each other and search for them individually. Making several search requests at once is no problem: for each new search a tab is added to the results list, with hosts behind a firewall highlighted in red. The program can punish those who only download files but offer none themselves by denying them access to its user's own files. The Linux version of the client is – just like the versions for other operating systems – always up to date. But there are also the typical problems of a Java application: long load times, clumsy handling and high demands on CPU power.
Gnut

Gnut is the complete opposite. As a lean command line program, it starts and is ready to use on an average computer in half a second. It automatically connects to the network and can independently manage the list of known Gnutella hosts. It is fairly straightforward to use for a command line program: find or search starts a search for the word that follows, and the search can be stopped by pressing any key.
Technically, the Mojonation client is a proxy server written in Python, which is why getting up and running is not as simple as with other clients. Once the package has been downloaded from the homepage and unpacked, environment variables still have to be set and the proxy server and the so-called broker started. Only then can the program interface, which is installed on the proxy, be accessed using a Web browser. The broker is the critical program: it handles the transactions of Mojo with the central bank, OLWA. You get no money for uploading files. On the contrary, it costs Mojos to transfer files, and even inquiring whether a certain piece of music is located on a server draws one or two Mojos out of your account. Only those making disk space and computing time available can earn money. For this you can offer up to four different services on your computer: for example the Content Tracker, which maintains a searchable index of all files registered with it, or the Block Server, which stores files – chopped into pieces and encrypted as blocks – on the hard disk. The Publishing Agent puts new content onto the network and also charges for this. The most lucrative service is likely to be the Relay Server, since it integrates all users sitting behind a firewall into the network and is consulted every time such a user contacts the network. This service can only be offered by those not sitting behind a firewall themselves.
Outlook So much for our panorama of the distributed landscape. Characterised by continuing change and a wealth of ingenuity, it will certainly come up with many more interesting programs and ideas in future. ■
Info
Napigator: OpenNap server list http://www.napigator.com
Dewrapster source code: http://woggo.webfreekz.com/users/theo/dewrapster.c
Napfinder homepage: http://napfinder.sourceforge.net
Reflector homepage: http://dss.clip2.com
LimeWire homepage: http://www.limewire.com
Mojonation homepage: http://www.mojonation.net
Filesharing portal: http://www.zeropaid.com ■
Mojonation

Mojonation is one network which has dedicated itself completely to capitalism. Each transaction, each search query costs Mojos (the imaginary currency in Mojoland). But don't worry: while the project is in the beta phase, everyone making an email enquiry is credited with 10 million Mojos. The Mojonation project is the first product from the firm Autonomous Zone Industries (AZI), which wholly owns the firm Evil Geniuses for a Better Tomorrow Inc. Its Web site is tellingly called www.mad-scientist.com.
Get rich quick – at least in Mojos
058tripwire.qxd
02.02.2001
13:48 Uhr
KNOW-HOW
Seite 58
TRIPWIRE
Tripwire – A situation report, part 2
SAFETY FIRST! KLAUS BOSAU
The second article in the series is concerned solely with the configuration of Tripwire, a special kind of monitoring tool. Using the example of the widely-used Academic Source Release (ASR), we explain the syntactical characteristics of the configuration file, and the important instrument of the selection mask.
The only configuration file is tw.config, the power unit of every Tripwire installation. For simple and rapid adaptation to platform-dependent specifics of the file system, the configuration file takes the form of a list. Each entry concerns exactly one object and follows the simple form:

[!|=] object [selection mask] [# comment]

As objects, entire directories or individual files are permitted; a directory stands for its entire content. Be careful: file system boundaries cannot be crossed. For example, if /usr and /usr/lib are mount points for two further partitions and the entire content of /usr is to be monitored, both paths must be listed separately.
Bangs are good for objects which are constantly changing

Tripwire monitors each object found in tw.config, unless a preceding "!" (bang) expressly prohibits this. This exclusion marker is provided for non-critical objects like /dev, whose monitoring would waste computing time. But beware: frequent use of the exclusion marker increases the risk of uninvited guests slipping in unnoticed! For directories, therefore, there is another option: "=" monitors the I-node of the directory itself, but not its content (the I-nodes and data zones of its entries). This resource-saving long leash tightens
up in the event of an access to the content, but Tripwire then shows neither the objects concerned nor the type of modification. This is practical for objects such as /tmp or /var/spool/mail, which are constantly changing in normal operation.
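A minimal tw.config fragment illustrating both markers might look like this (the paths are examples; selection masks are covered next):

```
/etc                # monitor /etc and everything below it
/usr/bin            # likewise for the system binaries
!/dev               # exclusion marker: skip the device files entirely
=/tmp               # watch only the directory I-node, not its contents
=/var/spool/mail    # likewise for the ever-changing mail spool
```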
Select flags mark out more concrete properties

A far more refined selection is possible with select flags. These seventeen (!) flags are represented by individual letters or numbers,
KNOW-HOW
each assigned to a concrete property of the object. The spectrum of selectable properties derives primarily from the range of stored data, and thus from the structure of the underlying file system. For the Linux platform the situation is clear, because the ext2 file system, defined back in 1995 by Rémy Card, has established itself as more or less the standard (for now). The 128-byte I-node offers nine ext2-specific properties, which the ASR comments on, on request, in the reference database. Version 2.2.1 in fact
Figure 1: Indices
Figure 2: The output format of the list command
knows eleven properties. Figure 1 shows that this means all the main fields of the I-node are captured. The wallflowers flags and file/dir ACL, which until now have had no practical benefit, are proposed as interesting candidates for future expansions. All ext2-specific select-flags of a Tripwire protective shield are summarised in Table 1, together with their respective meanings. Figure 2 shows the relationship to the output format of the list command. Version 2.2.1 has five further ext2 select-flags (24 with Windows NT); their usefulness is, however, limited, as they are almost identical to the well-known select-flags of the ASR. The object characteristics presented so far are mainly suited to early detection of modifications to the file system that are unintentional or induced by malfunctions. Targeted, subtle attacks can only be warded off to the extent that the intruder is prevented from attaining root privileges – and thus access to the data zones. Insiders will know, or guess, that this
endeavour will still be keeping zealous administrators of Unix-type operating systems busy a hundred years from now. No practically usable file system can ever really achieve this. Ambitious attempts at a solution collide at this point with the limits imposed by the resource hunger of cryptographic methods: a ”high-security operating system” with the performance of a pocket calculator is hardly acceptable. Certainty about the integrity of an object can be achieved only through a direct ”survey” of the data zones by an effective signature function. Algorithms such as SHA and Haval (see below) are not deceived even if an intruder has full access to the object and unlimited time to cover up. In the ASR there are eight common signature functions to choose from for this; in Version 2.2.1 there are four. As each function has been granted its own select flag, the administrator can react very flexibly to special requirements when configuring. These
Table 1: The ext2 select-flags of the ASR and what they mean

p (st_mode) – Access rights and modes of execution (SUID bit, SGID bit (!) and ”text” bit).
i (st_ino) – Number of the I-node: the I-node number of an object is not altered by normal read/write operations. If such an inconsistency is found in the integrity report, this suggests that the object concerned has been deleted and replaced by a forgery with the same name.
n (st_nlink) – Number of hard links and/or subdirectories: a special field of the I-node, the so-called links count, specifies in the case of a directory the number of associated subdirectories, and in the case of a file the number of links associated with the I-node. In the latter case the counter goes up whenever a hard link to the associated data zones is produced. If a hard link to /etc/passwd is created with ln /etc/passwd /home/hacky, the corresponding counter in the I-node of /etc/passwd increases by one; in the next integrity test the file would thus be shown as ”changed”.
u (st_uid) – User ID: user and group ID do of course make superb targets for attacks of all kinds.
g (st_gid) – Group ID.
s (st_size) – File size: a fully usable indicator, since it is not always easy to modify a configuration file in such a way that the file size is retained and yet the desired effect is achieved.
a (st_atime) – Date of last access: just reading a file is enough to update this sensitive entry in the associated I-node. Deploying this select-flag in combination with signature monitoring therefore makes little sense, as computing the signature obviously requires the file to be read. The access timestamp can be made visible using ls -l --time=atime.
m (st_mtime) – Time of last modification: this field is only updated when the relevant file has been modified and saved again. The modification timestamp is something every Linux user is familiar with from directory listings created with dir or vdir.
c (st_ctime) – Date of last status change, i.e. of the last write access to the I-node: a status change occurs, for example, when the access rights of a file are changed. The I-node timestamp can be fetched with ls -l --time=ctime.
t (2.2.1, Object Type) – File type (file, directory, symbolic link).
d (2.2.1, Device Number) – Partition type: partitions are given a special identification number (”magic number”) on installation, which gives information about the type of formatting. The select-flag ensures that, apart from other characteristics, the identification number of the partition from which the I-node of the respective object stems is also recorded in the reference database.
l (2.2.1, Size) – ”Logfile”: indicates that the size of the respective file can only grow in regular operation. Unlike s, which reports any change in file size, a message is only issued here if a decrease is detected. (A typical candidate would be /var/log/messages.) The ASR makes this functionality available only as a template (”>”) in connection with other select-flags.
r (2.2.1, File Device Number) – Major device number: this property is declared only for device files and designates the number of the device driver belonging to the associated I-node. If the /dev directory is listed with ls -l /dev, the major device number (and any existing minor device numbers) are shown instead of the file size.
b (2.2.1, Blocks) – Block count: the number of data blocks occupied by the zone pointers of the I-node. The size of an ext2fs block is specified when the partition is created (typically 1024 bytes).
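The I-node fields behind the n, a, m and c select-flags can be watched directly with standard tools. A small sketch using GNU coreutils on scratch files (the file names are invented here, deliberately not the real /etc/passwd):

```shell
cd "$(mktemp -d)"           # scratch directory
echo secret > passwd        # stand-in for a monitored file
ln passwd hacky             # hard link: the links count in the I-node rises
stat -c '%h' passwd         # prints 2: the number of hard links (GNU stat)
cat passwd > /dev/null      # merely reading the file updates the atime
chmod 600 passwd            # changing the access rights updates the ctime
```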
arise from the importance of the object, the available computing power and the individual requirement for system security. Table 2 provides an aid to decision-making: it lists the most important characteristics of the individual candidates together with recent findings from the domain of cryptography.
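What such a ”survey” of the data zones amounts to can be tried out with the digest tools from GNU coreutils, which implement two of the hash functions discussed here. This is a stand-in sketch, not Tripwire itself, and sha1sum computes SHA-1, the successor of the SHA listed in Table 2:

```shell
# Signatures of a short message with two common hash functions.
printf 'abc' | md5sum     # 128-bit MD5 digest
printf 'abc' | sha1sum    # 160-bit SHA-1 digest
# A minimally altered message yields a completely different digest:
printf 'abd' | md5sum
```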
Optimal Mixture is in demand
The selection mask, i.e. a complete description of all the interesting properties of an object, comes about
in its simplest form through grouping the select-flags into character strings such as ”+ug-a”. In this example, user and group identification of the owner are monitored, but not the time of last access. In fact the example also includes all other properties, because the ASR basically treats anything undefined as selected. Equivalent notations for ”+ug-a” are accordingly ”+pinugsmc123456789-a” and ”-a”. If it is really only the user and group identification of the
Table 2: The Arsenal
(select-flag / algorithm / throughput in MB/s on a P/200 / estimated security / special features)

1 – MD5 – 7.2 MB/s – *****
The Message-Digest 5 algorithm, developed by the crypto-pope Ronald Rivest, corrects weaknesses in MD4. The number of rounds (four instead of the former three) and the quantity of additive constants (one for each of the 64 part-steps) were altered. This greatly protects the algorithm against analytically supported attacks, but at the expense of processing speed. The euphoric evaluations by leading cryptographers in the past appear to be in need of revision in the light of more recent findings. So far it has not been possible to erode the effectiveness of the hash function, but collisions – as previously with MD4 – have been found for the compression function (an essential partial structure of the hash function); this will be dealt with at length in a later instalment. MD5 is currently the most used hash algorithm, yet its future looks bleak: leading cryptographers are now declaring that future attacks will have good chances of success!

2 – Snefru – 1.4 MB/s – ****
The ideal pyramid was eventually built by the pharaoh Snefru's successor Khufu, whose Great Pyramid at Giza was the finest and most successful. The algorithm conceived by Ralph Merkle at the Xerox Palo Alto Research Center (PARC) did not quite match up to the high esteem enjoyed by its famed namesake: by April 1990 a keen student had managed to dethrone the previously popular two-step version, pocketing a prize of 1000 dollars as a result. PARC now recommends the 8-step variant. Since to date every attempt to defeat the 4-step version used here, with its 128-bit signature format, has failed, security may well still be within acceptable limits. One very real drawback, though, is the comparatively low data throughput.

3 – CRC-32 (also 2.2.1) – 9.3 MB/s – **
See the explanation under CRC-16.

4 – CRC-16 – 16.2 MB/s – *
Both of these robust and fast cyclic-redundancy-check (CRC) algorithms are actually intended to identify transmission errors caused by hardware. The simplest variant of such a checksum function is realised by successive XOR linking of all the words in a message. The signature size of only 16 or 32 bits rules out any use for large or important files. Since a forged file must, however, come not just with the appropriate signature but also with the corresponding functionality to be of any use, it is certainly worth the risk of using them for less critical objects.

5 – MD4 – 14.4 MB/s – ***
Introduced in 1990, MD4 was very popular because of its rapidity on RISC processors. In 1998 came the sobering-up: a slightly modified version proved to be reversible. MD4 is now seen as defeated and should therefore no longer be used for the protection of more important objects. Collisions for MD4 can be created artificially on an ordinary commercial PC in a few seconds – which impressively clarifies the relevance of this consideration.

6 – MD2 – 0.3 MB/s – ****
Unusually slow, because it was designed solely for old-fashioned 8-bit processors, while MD4 and MD5 can exhaust a full 32 bits and thus the capacity of most current processors. Although MD2 is the oldest of the three Message-Digest algorithms from RSA, there has until now never been any question about its effectiveness. The only finding of a cryptanalytical nature concerns a slightly modified version: collisions could only be created artificially when, in the so-called padding (to be dealt with at length in a later instalment), the insertion of the message length was omitted.

7 – SHA (also 2.2.1) – 5.4 MB/s – *****
The Secure Hash Algorithm of NIST is, like most hash algorithms, structurally similar to MD4. In 1994 it was superseded by SHA-1 on the grounds of an undocumented weak point. There are persistent conjectures that the National Security Agency (NSA) thereby secured itself an access mechanism to external data material. This would obviously only work as long as the weak point remained secret and was not disclosed by over-zealous cryptographers – a hypothesis to which the author of this article, in view of the paltry supply of information, does not wish to subscribe. TSS seems to share this view, since SHA is retained unaltered in the current version 2.2.1. The large 160-bit signature nevertheless makes SHA a good choice – even for security-critical objects. Even NASA prefers this algorithm in its Tripwire installation.

8 – Haval (also 2.2.1) – 10.7 MB/s – ****
Haval was created in 1992 at the University of Wollongong by Yuliang Zheng. It is the only candidate to offer both a variable signature size (128, 160, 192, 224 or 256 bits) and a variable number of work steps (three, four or five). The message is split into blocks of 1024 bits, which are then processed in three, four or five cycles respectively by the compression function, so a total of 15 different variants of the algorithm are available for practical applications. In the Academic Source Release the four-step variant with 128-bit signature format is used. My evaluation with respect to security may have to be revised upwards: the unconventional structure is a lucky fluke, because it makes the algorithm immune to ordinary attacks, which are based almost without exception on MD4 methods.
Figure 3: An example of the configuration file tw.config

#
# Tripwire config-file
#
/                     R     # All objects under `/' are monitored.
/usr                  R     # Entry necessary if second hard drive assigned.
/boot                 R     # Ditto, as own partition.
!/dev                       # Not interesting!
=/tmp                       # Monitor directory only, but not content.
=/proc                      # Also sufficient in the process file system.
=/home                      # Private!
/etc/ppp/pap-secrets  R-m   # Timestamp not important, as frequent access.
/var/log              L     # Log files.
/var/log/messages     >     # Steadily growing file.

# ”@@include” inserts external text into ”tw.config” at run time. All
# host-specific properties could be described in a separate file.
@@include /root/tw.host-special

# Here a variable selection mask ”@@var” comes into use, whose respective
# value can be specified using the command line option ”-Dvar=...”.
# In the integrity test or update the same option must always be selected
# as at initialisation. The counterpart to ”-D” also exists:
# with ”-Uvar” a definition formulated in ”tw.config” can be cancelled.
# (If ”var” has not been specified on the command line,
# ”E” is used here.):
@@ifndef var
@@define var E
@@endif
/opt                  @@var

# The macro ”@@ifhost” is certainly the easiest tool for adaptation to
# different computer architectures. In this example, one and the same area
# of the file system is, depending on the computer, dealt with differently
# by Tripwire. (For this the environment variable HOSTNAME, which is
# evaluated at run time, must be correctly set.):
@@ifhost babyboy.mamabear.org || babygirl.mamabear.org
@@define TEMPLATE_S N
@@else
@@define TEMPLATE_S E
@@endif
/var/Honeypot         @@TEMPLATE_S   # Naturally only relevant for ”Bear cubs”!

# The content can also be structured with ”@@define”. Complex configuration
# files can be made much clearer with this:
@@define private E
@@define critical R-12+78
@@define secret N-a
/home/Helga           @@private
/home/Axel            @@private
/root                 @@critical
/sbin                 @@critical
/etc/inetd.conf       @@critical
/etc/hosts.allow      @@critical
/root/banking-details @@secret
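To make the ”-Dvar” comments concrete, a session might look roughly like the following. This is an illustrative sketch only: the option names are taken from the comments in Figure 3, while the exact spelling of the initialisation and test invocations may differ between Tripwire versions.

```
tripwire -initialize -Dvar=R   # build the reference database; /opt fully monitored
tripwire -Dvar=R               # later integrity test: the same -D definition as at initialisation
tripwire -Uvar                 # cancel the definition from tw.config; /opt falls back to ”E”
```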
owner which are to be scanned, this should be expressed as ”+ug-pinsamc123456789” or ”-pinsamc123456789”. In the manpage of tw.config a corresponding indication has simply been omitted. For users who are less obsessed with detail, Tripwire provides pre-defined selection masks, so-called templates; Table 3 contains these standard cases. Combinations of templates and select-flags such as ”N-a” or ”E+7” are also permitted, so the cryptic-looking character strings are markedly simplified with a template: our ”user and group identification” example is thus reduced to ”E+ug”. The selection mask can also be left out completely; then the standard template ”R” for ”read-only” comes into play. But beware: the important access timestamp is thereby excluded from the check! The optimal combination of the individual elements follows from the function of the respective object and the general requirement for system security. Resource use can, despite deliberate optimisation of the source code, turn out to be critically high; assembler inlays were out of the question in Tripwire on grounds of portability. If Tripwire is running as a background process this does not usually matter – on computers with sparse resources, though, it becomes a burden. In this case it may be necessary to fall back on less computing-intensive signature algorithms. I would recommend replacing the (now out of date) template ”R” with a self-defined selection mask. A good compromise with respect to security and data throughput is ”R-12+8”.
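Put as tw.config entries, the notations discussed above might read as follows (the paths are invented for illustration):

```
/usr/bin/ssh       +ug-pinsamc123456789   # user and group ID only, written out in full
/usr/bin/scp       E+ug                   # the same selection, abbreviated with a template
/usr/local/bin     R-12+8                 # the compromise recommended in the text
```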
A central configuration file on the Net
Professional users will appreciate the ability to use a single configuration file on several computers of varying architecture at the same time. Tripwire has a single-stage preprocessor for this purpose, which
Table 3: The templates of the ASR

template        definition
R               +pinugsm12-ac3456789
L               +pinug-sacm123456789
N               +pinugsamc123456789
E               -pinugsamc123456789
>               +pinug-samc123456789
Device (2.2.1)  +pugsdr-intlbamcCMSH
interprets special keywords such as @@include, @@ifhost and @@define. This greatly eases the use of Tripwire in large heterogeneous environments. In such a network it is conceivable, for example, that the configuration file could be kept on a single computer and made available to the other computers only on request. Existing configuration files could be merged into a single one, with the respectively valid variants then being determined by the enquiring computer at run time. In corporate networks with ten or more computers this saves the administrator a lot of work! Of course, this only makes sense if the environment variables of the enquiring computer cannot be manipulated!
An example clarifies the grey theory
Enough abstraction! Figure 3 shows a (made-up) example of tw.config which, for better understanding, presents selected elements from the fund of options sketched in this article. I hope this little introduction to configuration has sparked some interest in the inner life of the Filesystem Integrity Checker. The next article in the series has the same ambition: it offers a fascinating look into the unfathomable depths of the signature function, and interesting new features of Version 2.2.1 will also be presented. ■
Info
[1] The ext2 filesystem overview: http://ftp.iis.com.br/pub/Linux/system/filesystem/ext2/Ext2fs-overview-0.1.ps.gz
[2] Snefru and accessories (Xerox): ftp://arisia.xerox.com/pub/hash
[3] National Institute of Standards and Technology: http://www.first.org
[4] Tripwire site of NASA: http://lheawww.gsfc.nasa.gov/~srr/tripwire.html
[5] Yuliang Zheng's homepage: http://www.stcloudstate.edu/~bulletin/ee/index.html ■
Application:
R – (R)ead-only: files which, although generally accessible, can only be read (the standard)
L – (L)og file: user directories and files which are subject to constant modification
N – ignore (N)othing: the full program; this selection mask is also ideal as a starting point for users' own definitions
E – ignore (E)verything: for inventory; only added or deleted objects are reported
> – growing file: files which constantly grow in size but are not allowed to shrink
Device (2.2.1) – files which Tripwire must not open in the integrity test (these include all device files)
064joeeditor.qxd
31.01.2001
16:16 Uhr
KNOW HOW
Seite 64
TEXT EDITING
Using the text editor Joe.
JOE COOL
ANDREW HALLIWELL
Getting started
Whenever the subject of text editors comes up in a group of Unix or Linux users, an almost religious debate commonly arises, as people gravitate to one of two factions: the advocates of VI (and its clones) and those of Emacs. The arguments on the side of VI point out that VI is a standard editor found on every Unix system and is incredibly powerful, with syntax highlighting and the ability to be very productive. The argument on the side of Emacs is that it is easier to use and has a method of configuration and extension so powerful that it can be used to create programs within Emacs itself (using the Emacs LISP interpreter). In fact, Emacs could almost be considered an operating system in its own right. There are drawbacks to both of these editors, however. VI has an extremely steep learning curve: someone who just wants to enter an editor, type what they need and correct spelling without having to fiddle with different modes is out of luck with a moded editor. Emacs can do almost anything, including run mail and newsreaders – but at a cost, and in Emacs' case the cost is size. While VIM (one of the more popular VI clones, standing for VI iMproved) takes only approximately 700K, Emacs takes up over 22MB of disk space. There are many other editors available, however, and one of the most flexible alternatives to the two previously mentioned is Joe's Own Editor, commonly known simply as Joe. Joe is a modeless editor with multiple personalities: five different personalities are built in (see Table 1). All of these personalities exist within the same executable, which is less than 200K in size, and are activated by symbolic links to the main executable joe.
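The symbolic-link mechanism is easy to see in isolation: a program can inspect the name it was invoked under and change its behaviour accordingly. The stub script and its messages below are invented for illustration; the real joe binary performs the equivalent dispatch internally:

```shell
mkdir -p /tmp/joedemo && cd /tmp/joedemo
cat > joe <<'EOF'
#!/bin/sh
# Dispatch on the invoked name, as Joe does with its five personalities.
case "$(basename "$0")" in
  jstar) echo "WordStar personality" ;;
  jmacs) echo "Emacs personality" ;;
  *)     echo "standard Joe personality" ;;
esac
EOF
chmod +x joe
ln -sf joe jstar      # same executable, second name
ln -sf joe jmacs
./joe                 # prints: standard Joe personality
./jstar               # prints: WordStar personality
```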
Joe comes with simple built-in help. This is activated by pressing ^K H and occupies the top area of the editor screen. Help comes in six sections, which are accessed by using ESC . and ESC , to move forward or back through the screens. The different help screens explain the following functions: basic editing commands such as cursor movement, block highlighting and manipulation, load, save, insertion of text files, search and replace, and spell checking; window manipulation (the editor screen can be split into multiple windows to allow the viewing or editing of more than one file, or part of a file, at once); miscellaneous commands such as scrolling, macros and bookmarking; programming commands, such as commands to parse errors, indent and search by code-block separator or tag file; and advanced search and replace, where the help clarifies Joe's regular expressions. The final help window gives the e-mail address to which bug reports should be sent.
Basic editing commands
Joe uses three methods of command access: Ctrl, Ctrl+K and ESC. Basic text navigation can be performed using the cursor keys, Page Up, Page Down, Home and End. These basic navigation commands are duplicated with Control key combinations so that they also work on non-PC keyboards and terminals. With these commands it is quite simple to use this editor for basic text manipulation. You will also want to be able to load and save text files, and to insert them into the body of the document. As well as inserting text, it is desirable to be able to delete it. Deleting a larger block of text is covered under block manipulation below; deleting words, lines and parts of lines is covered here. On a PC keyboard, Backspace and Delete will delete the character before the cursor and the character under the cursor respectively; again, Delete is replicated as a Control key. When you have inserted into (or deleted from) an existing paragraph, the command Ctrl+K+J will reformat the text.
Occasionally you may want to delete, copy or move a large block of text – for example, when posting a reply to a newsgroup or e-mail, to delete large blocks of quoted text which are not relevant to your reply. Joe can do this quite simply using its block manipulation commands. If no block is selected when the indent/outdent commands are used, the paragraph the cursor is currently on will be marked as a block.

A simple example of the filter command can be seen by making a block of the above key definitions and using the shell command wc. This replaces the block with the text below:

10 74 418

These figures describe 10 lines, 74 words and 418 characters. This is by far the most powerful command in Joe, as it taps into the full power of Unix. With it, you can create shell or Perl scripts to do complex things and use those as filters for Joe.

Everyone makes mistakes once in a while, be it deleting a block of text by accident or replacing all occurrences of a word with something nonsensical. Joe is quite capable in this area, as it has multi-level undo and redo commands.

Search and replace functionality is one of the most useful features in a text editor. In Joe this is also powerful, and is activated with the following commands:

Ctrl+K+F  find text
Ctrl+L    find next occurrence
\^        match the beginning of a line
\$        match the end of a line
\<        match the beginning of a word
\>        match the end of a word
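The wc filter example is easy to reproduce outside Joe – piping any block of text through wc yields the same lines/words/characters triple that Joe splices back into the buffer:

```shell
printf 'one two\nthree four five\n' | wc       # lines, words, characters
printf 'one two\nthree four five\n' | wc -w    # words only: 5
```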
Spell checking No editor in this day and age would be up to much if it had no spell checking capabilities. There are two spell check commands in Joe. Check a single word (if you are unsure how one is spelt) and check the entire document. Joe calls on ispell to perform all its spell-checking functions. This means that ispell can be replaced by other spell checkers as it suits the user, as long as a symbolic link is used to make the new spell checker assume the name ispell.
KNOW HOW
Joe configuration options
Joe in the standard personality
Pressing Ctrl+T brings up a horizontal menu of configuration options at the bottom of the screen. This can be navigated using the left and right cursor keys, and an option is activated by pressing Enter. The menu can be dismissed by pressing Esc. In Rectangle mode, instead of selecting a continuous block of text, a rectangular area of text is selected. This is useful for editing tabulated data. When a block is moved or deleted in this mode (in conjunction with overtype mode), the area is replaced by spaces rather than the text being deleted, to maintain table integrity. In insert mode the block is removed and text to the right of the block falls back to fill the hole. In Anchor mode, Ctrl+K+B marks the start of the block, and the end of the block follows the cursor. This covers most of the editor's simpler commands, but there are many more that are beyond the scope of this article, including commands that deal with code editing, editing multiple files, macros, shell commands and more. ■
Table 1 jpico: Mimics the pico editor usually supplied as part of the PINE package. It is however much more powerful than pico, as it still holds all the features of Joe. jstar: This imitates the old WordStar editor that was widely used in the DOS era. jmacs: This copies Emacs. It doesn’t support ELISP or any of Emacs’ more unusual properties, but can be used comfortably by Emacs users when Emacs isn’t available. rjoe: This is restricted Joe. It can be used in environments where you wish to limit what the users of your system can do. rjoe can only edit the filename(s) supplied in the command line, which means in a menu-based shell, it can be used to prevent the user from editing configuration files in their home directory, but still be able to reply to e-mail/news. Joe: This is the personality that is most widely used and the one that will be covered in more depth here.
066Scheme.qxd
31.01.2001
11:50 Uhr
PROGRAMMING
Seite 66
PROGRAMMING WITH LISP
INTRODUCING SCHEME, THE SIMPLEST LISP LANGUAGE
FRIEDRICH DOMINICUS
In this series of articles we would like to introduce Lisp, one of the oldest language families, but one that is by no means ready for the scrap heap.
Lisp – sounds familiar? If your preferred text editor is Emacs, you will have come across at least one example of Lisp. Should you have had a more detailed look at Emacs Lisp, you may have got the impression that this is a very substantial language. But don't be fooled: this substance is not intrinsic to the language itself, but results from a feature that is characteristic of all Lisp dialects. As Paul Graham puts it: "Lisp is a programmable programming language." The language core of all Lisp languages is relatively small, and that of Scheme is certainly the smallest. The Scheme standard (IEEE Standard 1178-1990, reaffirmed 1995) is probably the shortest for any language. Interestingly, that of another Lisp variant, Common Lisp, may well be one of the longest. It is therefore safe to say that the Lisp family offers something for everyone – a statement with which you will hopefully agree after reading this series of articles. As an introduction to Lisp programming I would like to start with its smallest exponent, Scheme.
Some Scheme history
There is an article on the Internet which illuminates the history of the Lisp family. The text comprises more than seventy pages, which is beyond the scope of an article like this; here, therefore, is a summary of the information about Scheme which can be found there. Scheme was developed by Gerald Jay Sussman and Guy L. Steele in the mid-seventies as an implementation aimed at helping them to understand a theory by Carl Hewitt. Sussman and Steele are defining personalities in the development
of the Lisp languages. Sussman is the co-author of one of the most highly acclaimed books on programming, Structure and Interpretation of Computer Programs, in which Scheme is used to illustrate different aspects. Reading this book is highly recommended. Steele is the author of the standard work on Common Lisp, Common Lisp: The Language. This book was one of the starting points for the standardisation of Common Lisp and can therefore be found on the shelves of any Lisp programmer. It is not a textbook but a reference manual, more than 1000 pages long; its index alone is longer than the complete standard for Scheme. Scheme was initially developed as a playground for programming experiments. Extensive experimentation led to a number of different implementations of Scheme being developed, including some commercial versions. To get an overview, start here: ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/misc/scheme_2.faq. The origin of the name Scheme has its own anecdote: Sussman and Steele were very pleased with their toy actor implementation and named it Schemer, in the expectation that it might be developed into another AI language in the tradition of Planner and Conniver. However, the ITS operating system had a 6-character limit on file names, so the name was truncated to simply Scheme – and that name stuck.
Why Scheme?
Scheme, like Common Lisp, offers the opportunity to try out any programming paradigm. Scheme allows imperative, functional (one of its strengths) or, with extensions, object-oriented work. Many
Scheme systems contain an OO system, as does Common Lisp, whose CLOS is one of the most flexible. Scheme does not force its own scheme of things on you, which should please Linux fans especially.
Different Schemes
As previously stated, there are a number of implementations. The following is a short list of notable free Scheme implementations – obviously without any claim to completeness:
• MIT Scheme. Probably the most mature Scheme going (the current version is 7.5, and unlike Windows software they have been counting steadily upwards from one). Forms the basis of teaching at MIT, together with the book by Sussman mentioned above. Very substantial (probably the Scheme equivalent of Common Lisp) with interesting extensions and applications: object systems, graphics, an Emacs clone (Edwin) that uses Scheme as its extension language, and so on.
• Guile. The standard scripting language of the FSF. Scheme is used here especially as an extension language; there are even discussions underway to replace Emacs Lisp with Guile. The window manager SCWM uses Guile as its extension language. Guile is also very suitable for use as a Unix scripting language. As is often the case with FSF favourites, you either like Guile or avoid it.
• Elk. An implementation developed especially for embedding in C or C++. The idea is certainly attractive: instead of creating a special language for each tool, you use the full functionality offered by Scheme.
• Scsh. My personal favourite when it comes to shell programming. In many cases where I once used shell scripts or Python scripts, I now use Scsh. It is remarkable how cleanly the sometimes unconventional shell syntax has been converted to Scheme.
• Kawa. A Scheme interpreter written in Java, which also translates into Java byte-code.
• DrScheme (or MzScheme). Almost as substantial as MIT Scheme and designed especially for teaching. DrScheme is a comprehensive Scheme development environment with very helpful extensions and the best documentation. I will therefore mainly be using DrScheme.
# for DrScheme
PT_HOME=/usr/local/lib/plt
export PATH=$PATH:$PT_HOME/bin
export MANPATH=$MANPATH:$PT_HOME/man

You should now be able to start DrScheme by typing drscheme. I would also recommend the installation of MrSpidey, a static debugger. However, this does not have to be done immediately, as DrScheme offers a more agreeable way of installing additional packages: when the Help Desk is open (menu Help->Help Desk) and you encounter a link which informs you that the required package has not yet been installed, you are given the chance to connect to the DrScheme Web site and install the desired software. It is, of course, advisable not to install the software as root, and it is certainly also a good idea to put the packages onto your hard disk first rather than blindly accepting anything going. However, I made things easy for myself by installing the entire software under an unprivileged account, so I regarded the risk of possible data loss as not very high. When you call DrScheme using drscheme, you will see the following opening screen:

Figure 1: DrScheme Start Window

The upper part of the window is an editor, where you can enter Scheme programs. These definitions can be saved if required. DrScheme uses the extension .ss, but .scm
Installation of DrScheme • 1. Download the software from http://www.cs.rice.edu/CS/PLT/ • 2. cd /usr/local/lib • 3. tar -xvzf plt.i386-linux.tar.gz • 4. cd plt • 5. Execute ./install. • 6. Set an environment variable $PT_HOME and add $PT_HOME/bin to your path. You can add the following to your .bashrc or .zshrc: 6 · 2001 LINUX MAGAZINE 67
066Scheme.qxd
31.01.2001
11:51 Uhr
PROGRAMMING
Seite 68
PROGRAMMING WITH LISP
is also common. When you click on execute the program text is transferred, and you can call the procedure you have defined above from the prompt in the lower part of the window. You could of course just go straight into the lower part and get started. On pressing return your entry is evaluated and a result is returned. You will probably be familiar with this process from your favourite scripting language. Should the buttons Analyze and Step not be available to you then the corresponding packages have not been installed and you can upgrade via the Help Desk. Also helpful is Check Syntax, which lets you perform syntax checking. DrScheme offers several language levels; in figure 1 the most advanced level has been set. At this level you are able to use the graphics toolbox. A graphical user interface created in this way can be used anywhere there is an implementation of DrScheme. It is therefore possible to develop on Linux and, should someone insist, to run the programs written there on FreeBSD, Solaris, Macintosh and, last but least, Windows. As you can see, it is a real cross-platform development tool.
First steps Try some entries in the lower part of the window. Please note that all Lisp languages use prefix notation. The sequence is always (procedure name parameter1 parameter2 ) Just give it a go: > 1 1 > (+ 1 2) 3 > "Hello" "Hello" > (/ 1 3) 1/3 > (* (expt 2 32) 2) 8589934592 > a reference to undefined identifier: a > ‘a a You can already notice some interesting features of Lisp languages. Numbers can be of any length and among the numeric data types offered by Scheme are fractions. The arithmetic should not surprise you, but possibly the reaction to the entry of ‘a might. The behaviour when entering a is likely to be familiar to you, a is seen as a variable, if nothing has been assigned to it you get an error message about undefined identifiers. The behaviour of ‘a may surprise you. The ‘ represents (quote, that means the following variable is seen as a symbol. In this case the symbol is called a, i.e. if symbols are quoted they evaluate to themselves. Other elements that evaluate to themselves are numbers, characters and character strings. How do you assign something to a variable in Scheme? By using define. Note that parentheses must be opened before using define, hence: 68 LINUX MAGAZINE 6 · 2001
> (define a 1) What happens to a is not shown, so we’ll ask: > a 1 Scheme is a dynamically typed language, therefore variables do not have to have their type explicitly declared. If you now enter: > (define a "Hello") > a "Hello" > a becomes a string. The value of a variable can be determined using something called predicates. The names of the predicates are pretty obvious: string? for a string, number? for a number. The question mark is optional, but the convention is to append it: > (string? a) true > (number? a) false > (define a 1) > (string? a) false > (number? a) true >
Data types What data types does Scheme provide? They are quickly listed: numbers (including ones of arbitrary size, fractions and floating point numbers), characters, strings, lists, fields, symbols, procedures and macros. You may ask yourself, is that all? Basically yes. DrScheme contains a construct for defining structures, but this is already an extension which is not included in the standard. Talking of standards, the current standard is called R5RS and can be found easily using the Help Desk. Call the Help Desk, click on Manuals and select Revised(5) Report on the Algorithmic Language Scheme, then you can immerse yourself in the standard. Or better still: follow the link http://www.schemers.org to documentation about Scheme, print the report and treat yourself to a pleasant evening’s reading.
First procedures After this long introduction we should look at how procedures are defined and called. For this we will be using one of Scheme’s central data structures, the list, and calculating the sum of all elements of a list. Our first attempt looks like this: (define (my-sum-1 a-list) (cond ((null? a-list) 0) (else (+ (car a-list) (my-sum-1 (cdr a-list))U ))))
066Scheme.qxd
31.01.2001
11:51 Uhr
Seite 69
PROGRAMMING WITH LISP
define (my-sum-1 a-list) defines my-sum-1 as the procedure name, with a parameter being expected. The name a-list implies that we are dealing with a list. This is how procedures are defined in Scheme, they are called in a similar manner, by enclosing the procedure and its parameters in parentheses. Let’s have a look at the implementation: there we find (cond, this is the Scheme name for a multiple conditional or case differentiation. The case differentiation starts with a termination condition, the only one in this example, ((null a-list?. null? checks whether the list is empty. If yes, then 0 is returned, otherwise the following is executed: (+ (car a-list) (my-sum-1 (cdr a-list))). A procedure call is always enclosed in parentheses, which means the following are procedure calls: +, car, my-sum-1 and cdr. You can assign (almost) any name to a procedure in Scheme. There are certain conventions for different types of procedures, such as the question mark in predicates. What is being added? Something called (car a-list) and also something else. The significaance of (car a-list) an (cdr (pronounced could-er) originates in the history of Lisp and is a relic from its beginnings: car stands for contents of the address part of the register and cdr for contents of the decrement part of the register. To translate: car denotes the first element of a list, and cdr all elements of a list apart from the first one, that is to say, the rest of the list. Since these names are anything but mnemonic for anyone who is not a Lisp expert, we will instead define: (define first car) (define rest cdr) Now car and first are synonyms, as are cdr and rest. This shows how simple Scheme is, it does not matter to define whether it is dealing with numbers, strings or procedures. The syntax is uniform and as you can see from the following example, procedures really are first-class citizens. Let’s re-write the procedure: (define (my-sum-2 a-list) (if (null? 
a-list) 0 (+ (first a-list) (my-sum-2 (rest a-list))))) In this case (if was used instead of (cond. This (if works differently from, for example, the one in Python. There is no explicit else keyword, instead everything is controlled by parentheses. The 0 after (null? a-list) is the "then" part and (+ represents the else branch. Should there be several expressions in one branch you would need to use (begin BLOCK). Let’s have a closer look at the solution. As you can see it is recursive. This is usual in Scheme, and Scheme provides support for effective processing. The optimisation conditions are not right, however. You can check this in the following way: set the language level to Beginner and write the procedure into the upper window, along with a procedure call to it:
PROGRAMMING
(define (my-sum-2 a-list) (if (null? a-list) 0 (+ (first a-list) (my-sum-2 (rest a-list))))) (my-sum-2 (list 1 2 3)) Highlight the text and click on ”Step”. Now you can see step by step how the result is calculated. You will notice that all recursive calls are executed first and that results are only calculated upon return from the recursion. If you tried this with an extremely long list it could lead to a stack overflow. However, Scheme would not be Scheme if there was not a more elegant solution: tail recursion
Tail recursion The Scheme standard explicitly demands the optimisation of tail recursion, and any compliant implementation of Scheme must support it. What is tail recursion? Tail-recursive programs do not have to execute the entire recursion, but instead calculate interim results for each step, which are then used in the next recursive call. The recursion does not build a stack frame for the recurring processes, instead for example, a jump instruction can be used. Let’s put the procedure into a tail-recursive format: (define (int-my-sum acc a-list) (if (null? a-list) acc (int-my-sum (+ (first a-list) acc) (rest a-liU st))))) (define (my-sum-3 a-list) (int-my-sum 0 a-list)) Normally you should define (int-my-sum internally as (my-sum-3. This is not accepted in the beginner levels of DrScheme, at a higher level however, the following is not a problem: (define (my-sum-3 a-list) (define (int-my-sum acc a-list) (if (null? a-list) acc (int-my-sum (+ (first a-list) acc) (rest a-liU st)))) (int-my-sum 0 a-list)) This shows how easy it is to shift Scheme code. Internal definitions can be applied at a higher level or a previously global procedure can be used internally. No special provisions have to be made in order to do this. Should you ever have problems with a long procedure you can simply export it bit by bit, test its parts separately and then put them back together once you have finished. This uniform syntax is a curse as well as a blessing. Without the support of an editor you are likely to despair of the parentheses. But on the other hand it shows how elegant the syntax can be. Please expand the procedure again and look at the step by step processing of the program. You will see that in this solution results are calculated at each step. The recursive call is the last one in this procedure and represents the tail. As stated above, in Scheme this 6 · 2001 LINUX MAGAZINE 69
066Scheme.qxd
31.01.2001
11:51 Uhr
PROGRAMMING
Seite 70
PROGRAMMING WITH LISP
can (and must) be replaced with an iterative solution. Although I am not a Scheme guru, I would like to say a few word about programming style. Scheme programmers will certainly prefer the last version, they can regard this approach as a design pattern. acc stands for accumulator and is a sort of informal standard for a variable that accumulates values and is used to return a value at the end of the recursion. You don’t need to feel tied to acc, but it pays in the long run to adopt customs that have developed over time as it makes things easier for the programmers that come after you.
Other solutions We have already found three different solutions to the same problem. Now I would like to demonstrate some other elements of Scheme programming using different solutions: • letrec • named let • iterative solutions Solutions with (letrec are very similar to solutions with internal procedures. A (letrec solution looks like this: (define (my-sum-with-letrec a-list) (letrec ((my-int-sum (lambda (acc a-list) (if (null? a-list) acc (my-int-sum (+ acc (first a-list)) (rest a-liU st)))))) (my-int-sum 0 a-list))) Which one you prefer is a matter of taste, to me the solution using an internal define is simply
Information Structure and Interpretation of Computer Programs (SICP for short); Harold Abelson and Gerald Jay Sussman; MIT Press; second edition 1996 ON LISP, Advanced Techniques for Common Lisp; Paul Graham; Prentice Hall, 1994 Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp; Peter Norvig; Morgan Kauffmann publishers; 1991 Common Lisp the Language; Guy L. Steele jr.; Digital Press; second edition 1990 Writing GNU Emacs Extensions; Bob Glickstein; O’Reilly & Associates, 1997
Online: ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/Evolution-ofLisp.ps.gz MIT Scheme:ftp://ftp.cs.indiana.edu:/pub/scheme-repository/imp SCWM window manager: http://scwm.mit.edu/scwm/ Guile homepage: http://www.gnu.org/software/guile/guile.html Elk: http://www-rn.informatik.uni-bremen.de/software/elk/ Scsh: http://www-swiss.ai.mit.edu/ftpdir/scsh/ Kawa: http://www.gnu.org/software/kawa/ DrScheme: http://www.cs.rice.edu/CS/PLT/ Scheme: http://www.schemers.org
70 LINUX MAGAZINE 6 · 2001
clearer. In the last solution you can see the first use of (lambda; this defines an anonymous procedure, in this case the name is set to (my-intsum by using letrec. The recursive call occurs in the else part of the case differentiation and is used in the same way as in the solution with an internal define. I would like to come back to the solution using the internal define. I thought it would be clearer to start with the simplified format. Actually, a solution with an internal define would look like this: (define my-sum-2-revealed (lambda (a-list) (define my-int-sum (lambda (acc a-list) (if (null? a-list) 0 (+ (first a-list) (my-sum-2-revealed (rest a-list)))))) (my-int-sum 0 a-list))) There are situations in which you can only use this version. However, I find the previous definition clearer and use it as often as I can. Using anonymous procedures with lambda is the central mechanism for all calculations within Lisp. If you are interested you should have a look at the basics of the lambda calculus sometime.
Named let – The last explicitly recursive solution: (define (my-sum-with-named-let a-list) (let loop ((acc 0) (al a-list)) (if (null? al) acc (loop (+ acc (first al)) (rest al))))) This is a very elegant solution in my opinion. Let’s go through it step by step. You are already familiar with the procedure definition, but (let loop is new. This is what is called a named let, loop is simply a label or a name to which you want to refer. The introduction of loop variables is very readable: 0 is assigned to acc and the initial list to al. During a recursive call both variables change as follows: the value of the first list element is added to acc and within the list we move forward by one element.
”Iterative” solutions The quotes around iterative are deliberate, because we are not necessarily dealing with real iterative solutions, although it does look that way. It is simply expected that these are quasi-iterative and therefore efficient solutions. Here is an iterative solution: (define (my-sum-iterative-without-body a-list) (do ((acc 0 (+ acc (first al))) (al a-list (rest al))) ((null? al) acc))) There’s dense code for you. All the work is done in the loop header, there is no loop body. C programmers should get a feeling of déjà vu at this
066Scheme.qxd
31.01.2001
11:51 Uhr
Seite 71
PROGRAMMING WITH LISP
point. So what is happening in detail? First, 0 is assigned to acc, then in each iteration the value of the first element of the remaining list is added to acc. The initial list is assigned to the variable al and then in each iteration the remaining list is assigned to it in turn. The termination condition is ((null? al). If this is true, the accumulator acc is returned. Finally, I would like to show you a solution that is more similar to Pascal or Eiffel. In this case we move the update from the loop header to the loop body. The solution looks like this: (define (my-sum-iterative-with-body a-list) ; now a bit more pascal-ish or eiffel-ish ;-) (do ((acc 0) (al a-list)) ((null? al) acc) ; body starts here (set! acc (+ acc (first al))) ;; attention set! is a destructive update,hanU dle with care (set! al (rest al)))) Not much has changed. The loop variables are initialised, but the update now takes place in the loop body rather than in the loop header. This is the first time we have come across a destructive
PROGRAMMING
operation. Up to now none of the procedures had any side-effects. In Scheme, procedures without side-effects are good form. You ought to try to adhere to this yourself. Destructive variants are indicated by the suffix !, to show that these methods should be used with caution. In principle (set! variable new-value) has exactly the same effect as an assignment. (set! acc (+ acc (first al))) is the equivalent of acc = acc + first(al) in Python.
Where do we go from here? In these examples we have looked at the basic elements of Scheme programming. In subsequent articles on the subject of programming in Lisp languages I would like to introduce you to other elements, for example local variables, list operations, higher-order functions, scoping, OO programming and similar issues. I will also show you different Scheme implementations and discuss their strengths and applications. As always I would welcome any comments, suggestions or questions. My email address is: Friedrich.Dominicus@inka.de ■
AD?!
6 · 2001 LINUX MAGAZINE 71
072gnomeprog.qxd
31.01.2001
12:18 Uhr
PROGRAMMING
Seite 72
GNOME
GNOME and Perl
PEARLS BEFORE DWARVES THORSTEN FISCHER
Everyone knows by now that script languages are full programming languages. But only a few people build complete applications with graphical user interfaces out of them. Equally few people therefore are aware that this can be done much faster than with established languages.
One of the greatest strengths of Perl is the CPAN, the Comprehensive Perl Archive Network. This world-wide network was established to give Perl programmers access to the truly vast number of modules for their language. As well as small libraries and useful trivia there are also wrappers around fully developed tools for programming graphical user interfaces. Of course Gtk+ and GNOME have to be in on the act, and hence there is Gtk-Perl by 72 LINUX MAGAZINE 6 · 2001
Kenneth Albanowski. You can get your hands on the necessary module with the simple installation procedure, to which one has become accustomed by Perl via the CPAN, as follows: # perl -MCPAN -e shell cpan shell – CPAN exploration and modules U installation (v1.59) ReadLine support enabled cpan> install Gtk-Perl
072gnomeprog.qxd
31.01.2001
12:18 Uhr
Seite 73
GNOME
Gtk-Perl is presently available in version 0.7004 and contains support for GNOME. The version of CPAN is, in the circumstances, not the latest; which is why it is worth taking a look at the Gtk-Perl Homepage and seeing if there is anything new.
First steps Normally, I prefer Python, for running up little Gtk+- or GNOME programs, as Perl is set aside on my system for administrative tasks, and therefore it seems a good idea to start with a little Hello frog program to get used to the language. Listing 1 shows an example for such a program. The associated screenshot can be seen in Figure 1. Listing 1: hellofrog.pl 1: #!/usr/bin/perl -w 2: 3: use strict; 4: use Gnome; 5: 6: my $APPNAME = ‘Hello froggy!’; 7: 8: init Gnome $APPNAME; 9: 10: my $app = new Gnome::App $APPNAME, $APPNU AME; 11: 12: my $button = new Gtk::Button "Hello frogU gy!"; 13: $app -> set_contents ($button); 14: 15: show_all $app; 16: 17: main Gtk; So far, no surprises for either Perl or GNOME programmers. The module is integrated in line 4 and the application is then initialised in line 8. The application is an in-house widget in GNOME which on the other hand takes on diverse additional tasks. So for example it is easy for several instances to create one and the same application. This is reflected in line 10 in the creation of this widget. In the same way, a button is created and inserted into the window and then displays the entire application. In line 17 the program loop gtk_main gets control. Up to now this looks fairly similar to a Hello World in C, with the difference that Perl again exercises its magic, to demand considerably fewer keystrokes from the programmer. Incidentally, if a more alert reader who has already browsed through the material wonders why the name of the program does not appear in the title line of the screenshot, although it ought to be placed there by line 10: this is due to my misconceived Sawfish theme.
PROGRAMMING
Signals and events Gtk+ is an event-based widget set, and events certainly get the worst of it here. If an event occurs, a corresponding signal is emitted, to which a function defined by the programmer can react. Such a function is known as a callback. If the program in Listing 1 is to be ended, for example the insertion of the following code between lines 12 and 13 is standard: If the button as so the control, sends out the signal ‘clicked’, the said anonymours function should be executed, which then leads to the ending of the program. Take note that the following cannot function: signal_connect $button ‘clicked’, Gtk -> maiU n_quit; It’s good form to write individual callbacks for each signal, especially for those which end the program, since this leaves options for clearing up – for example one can construct a routine end and reference it as follows: sub end { print "Bye-bye froggy!\n"; Gtk -> main_quit; return 0; } # ... signal_connect $button ‘clicked’, \
Getting dolled up In a GNOME application, three things form part of the standard equipment: a status bar at the bottom end of the program, a menu at the top edge and under this a toolbar, providing access to the most frequently used functions. With Perl these things can be added quickly and easily, as can be seen from Listing 2, which took only a short time to develop; admittedly, with a little cut & paste, but that’s what GPL code is there for, after all. This code is already a great deal larger, but it also achieves a great deal more. Firstly of course the initialisation, and this time I am also giving it a version number. In lines 10, 16 and 23 the callbacks are defined, and I will go into this more later. In line 33 and the following lines the toolbar is placed. A list of lists has to be given to the function; but since there is no such thing as two-dimensional arrays in Perl, one has to make do with the corresponding notation. The reader who is so inclined should again watch for the invocation in line 47, in which the callback is defined in case the respective button on the toolbar is pressed. In the example I have used only so-called ‘Stock Pixmaps’, which are pre-installed in GNOME. The menus are created from the same pattern. Line
Figure 1: Hello froggy! Hello Hello froggy!
signal_connect $button ‘clicked’, sub { Gtk -> main_quit; print "Bye-bye froggy!\n"; return 0; }; 6 · 2001 LINUX MAGAZINE 73
072gnomeprog.qxd
31.01.2001
12:18 Uhr
PROGRAMMING
Seite 74
GNOME
80 refers to the first interesting callback: The Aboutbox, which can also be seen in Figure 3. The callback invocation brings the information dialog onto the
screen – well, it’s not really a dialog, just an OK button, but anyway. Line 86 shows an element which ought to be included in every GNOME application: a
Listing 2: bookman.pl 1: #!/usr/bin/perl -w 2: 3: use strict; 4: use Gnome; 5: 6: my $APPNAME = ‘Book manager’; 7: my $APPVERSION = ‘0.1.0’; 8: init Gnome $APPNAME; 9: 10: sub end { 11: Gtk -> main_quit; 12: parint "Bye bye.!\n"; 13: return 0; 14: } 15: 16: sub infobox { 17: my $about = new Gnome::About $APPNAME, U $APPVERSION, 18: ‘(c) 2000 Thorsten Fischer’, [‘ThorsU ten Fischer@mapmedia.de>’], 19: ‘Gtk-Perl sample code for Linux MagazU ine.’; 20: show $about; 21: } 22: 23: sub select { 24: my ($clist, $row, $column, $event, @datU a) = @_; 25: my $text = $clist -> get_text ($row, $cU olumn); 26: print "The selection was made in line $U row, column $column.\n"; 27: print "Content: $text\n"; 28: } 29: 30: my $app = new Gnome::App $APPNAME, $APPU NAME; 31: signal_connect $app ‘delete_event’, \ 32: 33: $app -> create_toolbar ( 34: { 35: type => ‘item’, 36: label => ‘Open’, 37: pixmap_type => ‘stock’, 38: pixmap_info => ‘Open’, 39: hint => ‘Open book list’, 40: }, 41: { 42: type => ‘item’, 43: label => ‘Exit’, 44: pixmap_type => ‘stock’, 45: pixmap_info => ‘Quit’, 46: hint => "Quit $APPNAME", 47: callback => \ 48: } 49: ); 50: 51: $app->create_menus ( 52: { 53: type => ‘subtree’, 54: label => ‘_File’, 55: subtree => [ 56: { 57: type => ‘item’, 58: label => ‘_New’, 59: pixmap_type => ‘stock’, 74 LINUX MAGAZINE 6 · 2001
60: pixmap_info => ‘Menu_New’ 61: }, 62: { 63: type => ‘item’, 64: label => ‘_Quit’, 65: pixmap_type => ‘stock’, 66: pixmap_info => ‘Menu_Quit’, 67: callback => \ 68: } 69: ] 70: }, 71: { 72: type => ‘subtree’, 73: label => ‘_Help’, 74: subtree => [ 75: { 76: type => ‘item’, 77: label => ‘_About...’, 78: pixmap_type => ‘stock’, 79: pixmap_info => ‘Menu_About’, 80: callback => \ 81: } 82: ] 83: } 84: ); 85: 96: 87: $appbar -> set_status (‘Welcome!’); 88: $app -> set_statusbar ($appbar); 89: 90: my $sw = new Gtk::ScrolledWindow undef, uU ndef; 91: $sw -> set_policy (‘automatic’, ‘always’); 92: 93: my @list title = (‘Author’, ‘Title’, ‘ISU BN’); 94: my $liste = new_with_titles Gtk::CList (@U list title); 95: $liste -> signal_connect (‘select_row’, U \); 96: 97: my @book1 = (‘Larry Wall, et al’, ‘progrU amming Perl’, 1565921496); 98: my @book2 = (‘Helmut Herold’, ‘Linux Unix U System programming’, 3827315123); 99: my @book3 = (‘Wiglaf Droste, Gerhard HensU chel’, ‘The Mullah from Bullerbü’, z38940135U 24); 100: $liste -> append (@book1); 101: $liste -> append (@book2); 102: $liste -> append (@book3); 103: 104: for (my $i = 0; $i < $liste -> n_columns; U $i++) { 105: $list -> set_column_width ($i, $list -> U optimal_column_width ($i) + 5); 106: } 107: 108: $sw -> add ($list); 109: 110: $app -> set_contents ($sw); 111: $app -> set_default_size (640, 480); 112: show_all $app; 113: 114: main Gtk;
072gnomeprog.qxd
31.01.2001
12:18 Uhr
Seite 75
GNOME
PROGRAMMING
status bar, which consists under GNOME of an actual status bar for messages and a progress bar which can display the progress of actions. Since in the example, both are meant to be present, the necessary flags are set to 1 instead of 0. The second parameter is the ‘Level’, at which the interaction takes place. In this case it is at a user-defined setting. These settings can be adjusted in the GNOME control centre.
Callbacks make you feel at home in C A GtkScrolledWindow is now stuffed into the application, into which a GtkCList with illustrious contents migrates: From line 97 the list is filled with data, in this case with a few very nice books. Messrs. Droste and Henschel will surely be pleased at being included with Wall and Herold in a ‘List of nice books’. In line 104 a small loop runs, which calculates the best size for displaying the field contents. After this the whole mess is actually displayed and the program started. The callback in line 23 for selection in the list writes the data transferred in its own local variables. The structure of the callbacks for individual signals follows those in C, so that in Perl, too, one is forced to know these precisely, so as to be able to distribute the values correctly. The relevant literature does however provide references. Figure 2 now shows the completed program. ‘Book manager’ is perhaps a bit too high-faluting for a simple GtkCList, which more or less randomly contains a list of books, but it is only meant to be an example.
Data on the list It is unlikely that one would want to prepare his data so that it fits into the list, only then to have to again tediously cobble it together for the rituals of a selection. For example: If I have an object in Perl having the properties of a book, then I would like to see this object associated with the entry in the line, without having to expend any effort on having to reconstruct the object purely from the entries. In order to realise this, lines with data can be connected. This is done as follows: $list -> set_row_data (0, $object);
Here the first line assigns the object $object. This assignment is retained, even if the position of the line changes as the result of a sorting. Through a variant of this function, if the line has been destroyed – perhaps it has just been deleted – a callback function can be invoked which is given the data, and thus can then proceed as required:
Figure 3: Information about the book manager.
$list -> set_row_data_full (0, $object, U &function);
Info
Conclusion The nice thing about a GNOME program in Perl is that unlike C, no complete code tree has to be delivered. Basically a single file is sufficient, which contains the script or the program respectively. The code trees, as used for C programs, mainly serve to configure the source code appropriately for the respective platform. For the code presented here this has already been done, when Perl and Gtk-Perl were installed, so this step is dropped and the data volumes remain more manageable. Also the syntax leans heavily on the normal Gtk+ and GNOME under C, so the methods on objects all have matching designations. Support for Perl by Gtk-Perl is not however restricted to simple GUI functions, but also extends to image processing by Imlib and GdkPixBuf, to the processing of Glade data and also to more exotic widgets like GtkHTML and GtkGLArea. Coding examples are on the Web – for real this time. Happy Gnoming! ■
Gnome Website: http://www.gnome.org/ CPAN: http://www.cpan.org/ Gtk-Perl Homepage: http://projects.prosa.it/gtkperl/ Thorsten Fischer: GUIprogramierung mit Gtk+, SuSEPress 2000 Code examples from the article: http://www.derfrosch.de/weic hewaren/linux-magazin.html ■
The author Thorsten Fischer is a student of computer science and Media consultancy at the Technical University of Berlin, his book ”GUI-programierung mit GTK+” was published in October just in time for the book fair by SuSE-Press. He also works as a developer for Mapmedia in Berlin.
Figure 2: Book manager 6 · 2001 LINUX MAGAZINE 75
076serial keys.qxd
31.01.2001
15:27 Uhr
PROJECT
Seite 76
SERIAL REMOTE CONTROL
Remote controlled: A computer without a keyboard
RAPID SWITCHING MIRKO DÖLLE
Whether it’s a fileserver or printer
Everyone who has a server parked somewhere in the building certainly knows this problem: To change a CD you have to log on from a workstation, unmount, then go with the CD to the server and after changing it, back again to the workstation to mount. Or the 486 still in use as a printer spooler has to be powered down for the weekend – the fact that it has not yet been powered down is of course something you don’t notice until after shutting down the last workstation. After a short time, monitor and keyboard get back together again so you don’t have to keep running back and forth between workstation and server. Viewed in the cold light of day, there are only a few actions such as shutdown,(un)mounting the CD drives or starting/continuing a backup, which you constantly come up against on the server. On the other hand the serial interface has various status lines which can be queried without going to a lot of trouble. A combination of the two produces a simple and very cheap remote control.
spooler: processors which have only one special function really need neither a monitor nor a keyboard – except that now and then a CD has to be changed or the system shut down at the weekend.
Inputs and outputs A glance at Table 1 shows that in total, we have five inputs and three outputs. At first the send and receive lines (RXD, TXD) will remain unconnected. To connect one of the inputs we need a voltage. We could get this externally via a power supply or internally via a hard disk power point, but it is critical. The serial interface reacts sensitively to too high a voltage and in particular to too high a current. Since components nowadays are housed on the motherboard, is it not all that easy to exchange a damaged serial interface. Which is why we are going to use the outputs as power supply for the inputs. So there are four inputs left, one freely connectable output, for example for an LED, and a second, conditionally connectable output.
76 LINUX MAGAZINE 6 · 2001
Wiring the inputs and outputs The first output RTS will be used for a yellow LED, and this is connected with a series resistor between output and earth. We can use this for status messages or as acknowledgement for an action (on/off/flashing). The series resistor should be chosen such that not more than 20mA current flows, usually 2.2 kOhm. The second output must be activated if it is to connect the inputs – So it is perfect as a systemstatus display. To do this we exploit the fact that the low-level of the serial interface is not earth (GND), but a negative voltage. If a series resistor and a twopole dual-LED are placed between the output DTR and earth (polarity reversal of the LED causes it to change colour between red and green), then on red the buttons are inactive (because of negative voltage). The outputs are both low when the device is switched on, so at first the LED is red. After starting the control program (for example via an init-script) DTR is then set to high, the LED turns green and the inputs can be connected. The four inputs CD, DSR, CTS and RI are connected, via four keys and a common series resistor of 10 kOhm with the output DTR. To avoid unintentional shutdown by touching a key, two red keys have been connected in series and linked to RI in the sample construction in Figure 1 – so both have to be pressed. The keys serve at the same time as a holder for the two LED’s and for the sake of security are as far from each other as possible.
Control program

As described above, activating the output RTS switches on the yellow LED. The control program published on our FTP server uses the LED as acknowledgement that a keypress has been detected and the associated action triggered; it is turned off again at the end of the action. If an error arises during this, the
SERIAL REMOTE CONTROL
yellow LED flashes. The dual LED at DTR acts as the readiness display: at the start of the control program it is switched to green, and at the end of the program to red. The inputs are queried in a loop every 75ms. A key is regarded as pressed when the inputs remain unchanged during three runs (i.e. 225ms). The 75ms interval is chosen somewhat arbitrarily, as a compromise between CPU load and reaction speed. The program can distinguish between individual keys and key combinations, and at present two actions are implemented: if both red keys are pressed (input RI high), the yellow LED is set and the system is powered down (halt). If one or more white keys are pressed, /dev/cdrom is unmounted and ejected, or inserted and mounted on /cdrom. For anyone for whom this algorithm for ejection and insertion is too imprecise, two separate keys can also be used. In total, seven actions, including combinations, can be triggered with the white keys; the red keys should be reserved for shutdown. The C listing of the control program is provided with (hopefully) adequate comments, and even without a more precise explanation of the source it should not be hard to adapt to local circumstances. Connected to a serial extension cable and with appropriately modified key configurations, you could even adjust the sound card volume and control mpg123, should you ever leave the desk to listen to music.
Construction

It is not absolutely necessary to use the circuit board layout from Figure 2 – in principle you can simply screw key switches and separate LED holders into a drive cover plate and wire them up. The layout is intended for the key switches listed in the component list, with and without LED holders, which can be obtained from places such as Maplin Electronics (http://www.maplin.co.uk). The only important thing is to make sure of the correct polarity of the two LEDs – and especially to watch for short circuits! The serial interfaces are usually located, together with the PCI bus and PS/2 ports, in the southbridge directly on the motherboard. It would be tragic if this were to give up the ghost, so it is better to use a separate multi-IO card. This also avoids problems with the cable run, because you can come in via the PCB headers. It is also important to use the series resistors – the interface cannot usually handle more than 20mA per output. Connection is made either using a ribbon cable to a slot plate or through the casing aperture (ATX motherboards) from the outside, or via a PCB header directly on the plug of the motherboard or the interface card. Beware: the configuration of the PCB header varies, and should be looked up in the motherboard manual.
PROJECT
Figure 1: Sample construction with five keys
Figure 2: Component diagram and circuit board layout
Future

By using a simple multiplexer it is possible to install seven white keys instead of three, though multiple combinations are then no longer possible. It is also possible to add a display for additional information via an interface (which is why we have kept the RXD and TXD lines free), but this takes much longer and is more expensive, so it is being saved for another article. With the serial remote control described here, many everyday problems can be solved for which one would otherwise have needed a monitor and keyboard. ■
Circuit board layout at original size
Table 1: Pin configuration of the serial interface
9-pin  25-pin  Direction  Signal  Designation
1      8       input      CD      Carrier Detect
2      3       input      RXD     Receive Data
3      2       output     TXD     Transmit Data
4      20      output     DTR     Data Terminal Ready
5      7       –          GND     Ground
6      6       input      DSR     Data Set Ready
7      4       output     RTS     Request To Send
8      5       input      CTS     Clear To Send
9      22      input      RI      Ring Indicator
Connection configuration of the sample circuit board (soldering lugs)
Pin    Signal  Component
1+2    GND     –
3+4    CD      white key switch, left
5+6    DSR     white key switch, middle
7+8    CTS     white key switch, right
9+10   RI      red key switches (series connection)
11+12  RTS     yellow LED
13+14  DTR     red/green LED

Component list
Quantity  Description
3         key switch, white
2         key switch, red, with 3mm LED holder
1         LED, 3mm, yellow
1         mini bi-colour LED, 3mm, red/green
2         resistor, 2.2 kOhm
1         resistor, 10 kOhm
SERIAL DISPLAY
Build your own LCD displays
FLACHMANN’S BEAT
BY MICHAEL MAJUNKE
When it comes to servers it is often not worth connecting a monitor if administration is done remotely, as there will only be status messages. Following on from our serial keys project, this time we show you a serial display with which you can keep an eye on your server.

A look at the circuit board with the components in place. The display was fastened to the circuit board with spacer pins so that the casing can be fitted later.
Anyone who runs a server or takes part in projects such as SETI@home knows the problem: to see any information, however small, a monitor is needed. Usually just a couple of lines are really of interest, and they could be read quickly off a display. Since smaller LCD displays are no longer that expensive and almost every computer is equipped with a serial port, such a display can be added relatively easily. The finished device can display 64 alphanumeric characters and has a further five LEDs for status messages. The cost comes to about £20 to £30, depending on the casing and display type selected.
The circuit

The most important thing in the construction of the device is of course the display. Based on the display area needed we decided on a 16x4 display; naturally other formats can also be selected. The built-in LCD controller used in this project must be a Hitachi HD44780 or compatible. The controller is driven via a parallel interface, so an interface is needed for
connection to the serial port. This interface must also be capable of performing control tasks: passing received data on to the display, receiving and processing control codes, driving the LEDs and more besides. For this, a small microprocessor, programmed as required, is ideal. The main criteria for the choice of processor: easy to program, a sufficient number of inputs/outputs, and not too expensive. The choice came down to processors from the firm Atmel, which offer precisely what was required. Their great advantage is that they can be programmed in-circuit and that free programming software is available. For the prototype an AT90S2333 was used, having 20 I/Os, 2K of program memory, SRAM, EEPROM, timer, UART and an A/D converter. The Atmel 2333 is still around in large numbers but is an obsolete model and is no longer in production. Its big brother, the AT90S4433, has even more memory and is therefore dearer, but is fully hardware- and software-compatible with the 2333. The supply of CPUs should thus be assured for the next year, even if now and again one may have to put up with delivery bottlenecks.
For operation we need a working voltage of 5 volts and a quartz crystal, plus a few resistors which have to be soldered on so that the processor can be programmed in-circuit. You can find out in detail how this self-build programmer works at http://www.rowalt.de/mc/. For the data channel we use the RxD and TxD lines to the serial port, which remained unused in the serial keys project. Because of the different voltage levels, an appropriate level adjustment is necessary. At this point we rely on the standard MAX232 component from Maxim. It is relatively simple to integrate into the circuit and in addition it protects our sensitive interface ports from over-voltage. Last of all, a power supply is needed for the circuit, rated at a maximum of 300mA at 5 volts. For this we use the popular µA7805 voltage regulator, which should really be fitted with a small heat sink. Anyone planning to fit the device in the PC case can also draw current directly from the PC power supply and so save a few components. A completed layout in Eurocard format can be found on the FTP site at ftp://ftp.linux-magazin.de/pub/listings/magazin/2001/02/Seriellesdisplay/. To make the circuit board we advise using the photo-transfer process: first print the layout with a high-resolution printer, then copy it twice onto acetate. The two acetate sheets
are placed precisely one on top of the other, producing adequate blackening for the exposure. Because of the relatively broad tracks, undercutting of the tracks ought to be easily avoided. In any case, it won't do any harm to make a visual inspection of the completed circuit board after it has been treated with solderable lacquer. Anyone who does not wish to do the etching and drilling themselves can turn to the firm Kernel Concepts (http://www.kernelconcepts.de), which first collects orders and then, once a certain number have come in, has complete circuit boards manufactured. Whether Kernel Concepts will also be offering construction kits was not clear at the time of going to press. After assembling the circuit you should check the closely spaced solder joints in particular for short circuits; no calibration is necessary.
Programming

Once the circuit has been successfully completed, it is time to program the processor for our purposes. A range of options is open to us here: for the purists, assembler; for C programmers, various C compilers; and for high-level language
Fig. 1: Circuit diagram
enthusiasts, Basic, Pascal, BASCOM and many more. It is best to start with an assembler, which is the best way to learn the features of the processor. We used the Linux assembler AVRA, which after downloading is installed as follows:

tar xzvf avra-0.5.tar.gz
cd avra-0.5
cp Makefile.linux Makefile
make
su
make install

Since AVRA is compatible with the Atmel assembler, you can use Atmel's own instructions. You can find the complete assembler instructions for the processor on the Atmel home page. Here is a simple example of LED control, which switches on the five LEDs of the display:

; small assembler example
; Control the LEDs
.DEVICE AT90S2333         ; Define processor
.EQU PORTD_DDR = 0x11     ; Name of port output direction
.EQU PORTD_D = 0x12       ; Name of port output data
.DEF WORK = r16           ; Name of register R16
ldi WORK,0b11111100       ; LED ports to output
out PORTD_DDR,WORK        ; output DataDIR Port D
ldi WORK,0b00000000       ; LED on (neg. control)
out PORTD_D,WORK          ; output data DataPort D
The finishing touch of an aluminium casing is not that easy to achieve; on the other hand, it looks a lot better than the bare circuit board
The command ldi loads a register with a constant, which is only possible with registers R16 and above. With out, the contents of a register can then be written to the I/O address space. This is of course just a simple example.
The source code must now be translated for transfer to the processor. This is done using the command avra name.asm, which gives us a HEX file containing the program code for our processor. Anyone programming data for the EEPROM in their program will also receive an EEP file, which must be transferred later along with the HEX file. The actual task of transferring data to the processor is handled by the utility SP12, which serves as the basis for all write and read operations on the processor. SP12 is available as source code and as a binary, and comes with a comprehensive set of instructions. In order to use SP12, after unpacking it must be initialised using sp12 -i. Then our HEX file can be transferred with the following command line:

sp12 -T1 -wpfc name.hex

Anyone who also has to transfer data for the EEPROM must extend the line by -wefC name.eep, which makes SP12 write this file into the EEPROM address space. By now we should have our first program in the processor, and if everything has been done properly, five lit LEDs should be visible.
Software

As the basis for your own projects, you will find a working sample program on our FTP server. It allows the display to be controlled via the serial interface, with any control codes that appear in the text to be output being evaluated. Such a control code is introduced by the start symbol ”~”, followed by the command code with the
parameters. If pure text is sent to the display, it is simply output, with the blank areas of the display ignored. Two examples show how data output can be done under Linux:

echo ~B1~C >/dev/cua1

first switches on the lighting (~B1) and then clears the display (~C).

echo ~P206Hello~L01 >/dev/cua1

sets the cursor to line 3, column 7 (~P206 – as usual, the internal count begins at zero), writes ”Hello” starting at that position and then switches the first LED on (~L01). You will find an overview of the control codes in the annex to the program. The method of operation is as follows: when a symbol is sent to the processor via the serial interface, it triggers an interrupt. The interrupt starts a routine which stores the received symbol in a buffer. The main program continually checks whether there is a symbol in the buffer. If so, the symbol is read from the buffer and evaluated, whether it is a code symbol or simple text. Depending on the evaluation, either the text is output or the command is executed. Since it is possible for the display buffer to become full, XON/XOFF flow control (alternatively CTS) is programmed in. This stops the data stream from the computer and only resumes it when sufficient space has been freed in the buffer.
To help with program analysis, the source text is exhaustively commented, so you will certainly be able to add your own extensions quickly.
Use as status display

One area of application for the self-built display is status messages of all kinds, from average load via free memory or hard disk space to call-number display from the ISDN log. You might also like to know whether a daemon has been started, how many users are currently logged on, or how far a program has got in a calculation. All this can be output to the display automatically with cron and a few basic Linux commands such as cut, grep or echo. The following example from /etc/crontab displays the load of the computer:

* * * * * root echo -n ~P001`cut -d” ” -f1-3 /proc/loadavg` >/dev/cua1
Large reserves

The circuit has deliberately been equipped with large reserves, so as to be able to take care of future, more complex tasks. Anyone using the more expensive Atmel 4433 will even have twice the reserves at their disposal. It would thus be possible, for example, to implement data transfer from the display back to the computer via the still-free inputs and outputs. Anyone who would like to build in automatic brightness control can easily do so by
Component list
No.  Type                    Value
1    electrolytic capacitor  470µF, 16V
4    electrolytic capacitor  10µF, 16V
2    tantalum capacitor      0.1µF, 16V
2    capacitor               27pF, ceramic
1    quartz crystal          8MHz
1    FET P2N40 or similar    min. 300mA
5    LED, green              max. 20mA
1    LED, red
1    resistor                8.2 Ohm
3    resistor                510 Ohm
1    resistor                150 Ohm
1    resistor                5.1 kOhm
1    resistor                39 kOhm
1    resistor                10 kOhm
6    resistor                1 kOhm
1    potentiometer           10 kOhm, pre-set
1    voltage regulator       µA7805, 1A
1    ATMEL 2333 or 4433
1    MAXIM 232
1    display 16x4
1    plug connector, 16 pin
2    socket, 25-pin D-Type
1    power supply connector
1    IC holder, DIL 16 pin
1    IC holder, DIL 28 pin
LCDs via the parallel port

Nils Färber reports on the use of a parallel port display: for anyone who finds controlling an LCD module with the aid of an additional microcontroller circuit board too fiddly or time-consuming, it is possible to achieve the same result with a simple cable solution at the parallel port.
connecting a light-dependent resistor (LDR) to one of the A/D converter inputs. But with any expansion the programming effort also increases – so integrating an AVR C compiler is definitely worthwhile for such purposes.
Principle
The author: Michael Majunke works as a communications electronics engineer and mainly spends his spare time tinkering on the computer and programming. To balance this he likes listening to e-music and still takes photographs the old-fashioned way with a film camera.
Most commercial alphanumeric LCD modules are based on the HD44780 LCD controller from Hitachi, or compatibles. This can be controlled very easily via its parallel input. To do this, four or eight data lines (for four- or eight-bit data mode) and two control lines are necessary. Parallel ports on PCs have eight data lines and eight additional control lines – more than enough to control the LCD modules. The supply of power to the modules requires a bit of ingenuity. Since the parallel port works with 5 volt TTL levels and the display needs exactly 5 volts, the operating voltage could be obtained here; but to protect the output drivers of the port, it is better not to do so. For an internal installation the PC power supply can be used; if connected externally, the display can be powered via the joystick port or an external power supply.
Hardware
Info
HITACHI display controller: http://semiconductor.hitachi.com/
Maplin Electronics: http://www.maplin.co.uk
Home page of the firm ATMEL: http://www.atmel.com
In-circuit programming of ATMEL processors: http://www.rowalt.de/mc/
Home page of the firm Maxim: http://www.maxim-ic.com
AVR C compiler overview: http://www.omegav.ntnu.no/~karlto/avr/ccomp.html
AVRA assembler: http://tihlde.org/~jonah/el/avra.html
Programming ATMEL processors with SP12 under Linux: http://www.xs4all.nl/~sbolt/espider_prog.html
LCD parallel port drivers: http://sourceforge.net/projects/lcd/ ■
As a rule, LCD modules have a 16 pin connector, which almost always has the same configuration. For connection to the parallel port, a 25-pin D-type plug and a length of 14-core cable are needed. If the joystick port is to be used for the power supply, another 15-pin D-type plug will be needed. The connection is described in Table 1.

Table 1: Wiring of parallel port/LCD display
PC pin  LCD function  LCD pin
18-25   GND           1
–       +5V           2
–       DRV           3
18      R/W           5
1       EN            6
2       DB1           7
3       DB2           8
4       DB3           9
5       DB4           10
6       DB5           11
7       DB6           12
8       DB7           13
9       DB8           14
14      RS            4

This configuration was selected so a ribbon cable could
be connected almost 1:1. Those using the joystick port to supply power will obtain 5 volts from pins 1, 8, 9 or 15, and ground from pins 4, 5 or 12. Now all that is missing is the contrast control at pin 3 of the LCD module. Depending on the type of module, things can now become a little more difficult. With some modules this pin can be bridged directly to earth, which gives maximum contrast and delivers a very good result on many displays. If the maximum contrast is too dark, a voltage divider must be connected here, i.e. a potentiometer in the range 100 to 500 Ohm, with one side at +5V, the other at earth and the middle tap to pin 3 of the LCD. The contrast can then be adjusted by this means.
Driver

The current version of the driver for Linux kernels from version 2.0 to version 2.4 can be found at http://sourceforge.net/projects/lcd/ and can be downloaded there directly from FTP or CVS. Before compiling, take a look at the Makefile in the driver/ directory, because this is where a few adjustments can be made for your own system. A subsequent make creates the driver module lcd.o, which can then be loaded with insmod ./lcd.o. In the driver/ subdirectory there is also the small script mkdevice, which creates the appropriate device under /dev/. The driver can also be given additional parameters when loading, described in Table 2. The two waiting times t_short and t_long are the intervals for which the driver waits for the LCD display. If the output appears garbled on the display, these times should be increased.
Application

If the cable connection is correct and the driver module has been loaded, the display is initialised and the cursor will be visible in the top left corner. A simple echo ”Hello” > /dev/lcd will make the text ”Hello” appear on the display. Apart from normal text output, the driver also supports a few escape sequences, which are comprehensively described in the README file of the driver. These include: clear display, move cursor up and left, cursor on/off and self-definable graphic symbols. The latter are especially interesting for displaying bar graphs. ■
Table 2: Parameters of the LCD module
Parameter  Meaning                           Default
io         I/O address of the parallel port  0x378
cols       number of display columns         20
lines      number of display lines           4
t_short    short waiting time                40
t_long     long waiting time                 100
Rebel Code: Linux and the Open Source revolution by Glyn Moody
REBEL YELL
BY ALISON DAVIES
In Rebel Code Glyn Moody weaves the disparate strands of the stories around the Open Source phenomenon into a cohesive and exciting narrative. In essence, it is the story of how, in less than ten years, Linux went from being a project to fill some spare time in a student's Christmas holiday to a system capable of rivalling Microsoft's domination of the software world. The book examines the beginnings of the free software movement, going back to 1984 and Richard Stallman's GNU project, and follows how developments from that affected the growth of Linux and vice versa. It traces the history behind all the major projects connected to the development of Linux, such as Apache, Perl and Sendmail, and examines the personalities and the anecdotes surrounding them. It is this that makes Rebel Code such an entertaining read. We are told about the reasons behind the development of the various distributions, both existing and no longer used, and how fear of fragmentation has prevented them from forking too far apart. As an interesting aside, Moody reveals the reasons behind many of the often idiosyncratic names, such as Debian deriving from its creator Ian Murdock's own name and that of his wife, Deb. Glyn Moody has drawn from a variety of sources, including interviews with figures in Open Source who have been reluctant to speak before, to give a clear picture of what happened in those hectic years. He examines how the Internet had such a bearing on the development of Linux and how in turn Linux assisted in the growth of the World Wide Web as we know it today. There is much about the various types of licensing agreements, from the original copyleft designed by Stallman to the various modifications made to allow commercial use of the programmes while still allowing the code to be freely accessible.
In the chapter Trolls versus Gnomes he follows the development of graphical desktops for Linux and the rivalry between the supporters of KDE, built on the then-proprietary Qt toolkit, and the free software Gnome, and how the company TrollTech was finally persuaded to adopt the GNU General Public Licence, as drawn up by Stallman.
Moody explains all the terms and acronyms that can sometimes put people off reading about the computer industry, yet the book contains enough detail to be interesting even to readers who already know much of the story. After covering the early history of Linux, he also conjectures about might-have-beens: what would have happened if Netscape had bought Red Hat in 1998? He devotes a chapter to the struggles in 1999 between Microsoft and the Linux community over the comparison between Windows NT and GNU/Linux with Samba: the battle for fair testing and the admission that Linux needed further work before it could outperform the Redmond giant. Arguments and counter-arguments over the 'weaknesses' of Linux (as perceived by Microsoft and documented on their website in October 1998) are traced. This is followed by a brief description of how Linux has improved since then, and a mention of the WINE project to allow unmodified Windows applications to run on a GNU/Linux system. Moody then heads into the future with the Intel Itanium chip and 64 bit processing. In the last part of the book he describes how Linux has been taken up by all the major hardware companies and looks at the way forward with the use of Linux in modern embedded technologies. He describes how Linux is being taken up around the world in markets as yet unexploited by Microsoft, such as China and India, and how Mexico has started a project to install Linux in schools, hopefully encouraging a new generation of hackers. Above all, this book is about the conflict between big business and the profit motive, and the belief that the chance to develop and improve software is a freedom that should be available to all. The question of whether in the long term such altruistic motives can win over the desire for commercial success he leaves unanswered – only time will tell, but in the meantime the story behind those principles is a very enjoyable and optimistic read.
In the spirit of Open Source, Penguin Books UK has made the first three chapters of Rebel Code available for free at http://www.penguin.co.uk ■
Welcome to the LinuxUser section of Linux Magazine. The LinuxUser pages, as well as containing articles for beginners, also serve as an invaluable source of information about the latest Linux software. Ever wondered about hard disk space usage, and who's wasting the lion's share of those precious gigabytes? df and du are command line tools giving you some hints. However, you don't need to touch them if you prefer GUI tools; Dr. Linux gives an introduction to both approaches. Feel like playing a different game? MindRover lets you construct robots and train them. Then you can let them fight it out with someone else's MindRover bots. Gnomogram and Korner give you (as usual) the latest news on GNOME and KDE; the latter gives an in-depth introduction to KDE's printing tool klpq. If desktop environments aren't exactly what you want, check out Desktopia: here we look at plain X – OK, window managers are allowed. This issue presents a nice tool called xnodecor. Crontabs, or cron tables, can be used for daily, weekly or otherwise repetitive tasks. Don't do them manually – let your computer handle them automatically. We've also got some interesting facts on the MP3 format and available players. Naturally we've not forgotten about the command line. Read some introductory words on command lines and have a look at the famous screen tool, which allows you to leave a terminal session and reconnect to it from another machine. If you want to see more, proceed to Programming Corner to find out what the shell knows about string manipulation. Enjoy the LinuxUser pages,
CONTENTS

BEGINNERS
85 This is where you should start in the world of the free.
86 Dr. Linux – Dr. Linux prescribes the remedies to make your system healthy and fit for life.
90 Desktopia – How to make your screen look better. We show you two utilities to improve your looks.
92 Gnomogram – The latest news in the Gnome world. Find out what's happening to your favourite desktop.
94 Programmers' Corner – Following on from last month, we continue to explain variables and how to call them.
98 Ksplitter – News from the land of K. How to use aRts for your sounds.
100 Ktools – How to spool that data and prioritize your jobs.
101 Take Command – Improve the functionality of your terminal applications using a tool called screen.

SOFTWARE
104 MP3 players – Want to play but don't know which to play with? We explain all.
108 Cron job tables – Easy to use tools to save you time.
114 MindRover – Virtual Robot Wars.
Hans-Georg Esser hgesser@linux-user.de
Dr. Linux

NEXT PATIENT PLEASE
BY MARIANNE WACHHOLZ

Complicated organisms, since that's what Linux systems are, have little complaints all of their own. Dr. Linux observes the patients in the Linux newsgroups, issues prescriptions here for current problems and suggests alternative healing methods.
Is there room for just one more?

I have the feeling that the space on my computer is getting a bit tight. How do I find out how much hard disk space is occupied?

Dr. Linux: First of all, there are programs you can call from the command line, for example the command df (meaning disk free):

user$ df
Filesystem  1k-blocks  Used     Available  Use%  Mounted on
/dev/hda6   3470648    2045904  1245220    62%   /
/dev/hda2   932912     637288   248236     72%   /RedHat
This promptly supplies you with the memory occupied by all file systems which you currently have mounted.
mount: This command integrates media (e.g. hard disk partitions and CDs) into the Linux file system. This is normally reserved for root. Before you can remove a mounted CD or diskette from the drive, an umount command is essential. Hard disk partitions, too, can be put out of reach under Linux again in this way.

File system: The ways and means of organising data on a data carrier vary between operating systems and storage media. For example, under Windows 9x an extension of the DOS file system FAT named VFAT is usually in use, while Linux likes its data partitions to be in the ext2 file system. On data CDs, on the other hand, iso9660 is used.

Kilobyte: Memory is divided into memory cells, which contain either the value 0 or 1. Such a memory cell, or the data stored in it, is called a bit. Several bits can be combined into units such as a byte, word or long word: a byte, for example, corresponds to eight bits. This is sufficient to store one (Latin) letter. The word ”space” accordingly needs five bytes of storage space. A page of text contains approx. 1500 characters and therefore needs approx. 1500 bytes of storage, unformatted. A kilobyte (kB, KByte), incidentally, corresponds not to 1000 but to 1024 bytes. One kilobyte times one kilobyte gives one megabyte (MB, MByte), which is exactly 1,048,576 bytes. A gigabyte (GByte), that is one kilobyte times one kilobyte times one kilobyte, is lots and lots of bytes – 1,073,741,824 to be exact. ■
The memory space is displayed in one-kilobyte units (blocks). You are given details of the whole space on the respective hard disk partition, diskette, CD, etc., with an indication of how much of it is occupied (used) or free (available), plus the memory space used as a percentage. On checking these figures it will strike you that the used (plus the still available) memory comes to only about 95 per cent of the total. This is not a program or calculation error, but a deliberate limitation of the memory space for normal mortal users. The last five per cent is only available to the superuser. This gives him or her the option of making space on a full disk by means of a pack program, which itself also needs some memory space to work. If you would prefer a specification in mega- or gigabytes, give df the option -h:

user$ df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/hda6   3.3G  2.0G  1.2G   62%   /
/dev/hda2   911M  622M  242M   72%   /RedHat

For users who prefer to work with graphical interfaces, a few applications are also available. In the KDE menu you might find, in the Utilities sub-menu, the programs KDFree (Figure 1) and KDu (Figure 4). If not, your distribution may supply these in a kdu package for later installation. With SuSE Linux this is in the series kpa. If you have no luck there, you can download the appropriate package from http://rpmfind.net/linux/RPM/kdu.html. KDFree first offers an overview of all data carriers entered in your /etc/fstab. These are also selectable individually via tabs, which provide a pie chart and information on the selected drive (Figure 1). As with df, only those data carriers are taken into account which are mounted at a mount point in the system.
DR. LINUX
In direct comparison, the program GNOME Free Disk (Figure 2), which you will find in the GNOME sub menu Tools, looks a bit sparse in its graphic representation. Here round gauges show you a percentage value representing how full your disk is. This tool can also be invoked from the command line using the command gdiskfree. GNOME also offers you the option of inserting an applet into the control panel with which you can keep a constant eye on the memory space (Figure 3). To do this, in the GNOME menu, under Panel/Add applet/Status display, select the item Disk space. On the command line, entering diskusage_applet & achieves the same result. What is displayed is, as with df, the mount point (MP:) and, after av: (as in "available"), the available memory space in kilobyte blocks, one line per mount point. If you click with the right mouse button on the applet icon in the panel, a few more settings can be configured via the menu entry Properties; in particular the refresh rate of the display may be of interest.
Memory space in directories
How can I find out how much memory space individual directories are occupying?
Dr. Linux: Change to a command line in the directory whose memory occupancy interests you, and enter the command:
user$ du -k
(short for: disk usage). You will receive an output of the memory occupancy in kilobytes for each sub directory, and the entire memory usage of the directory. On the graphical user interface the aforementioned KDu, in the sub directory Utilities of the KDE menu, offers you the option of displaying the memory usage of directories (Figure 4).
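To see du's bookkeeping in action you can feed it a directory of known size; the sketch below creates a scratch directory with one 100 kilobyte file (the paths are throwaway examples):

```shell
# build a scratch directory containing a single 100 kB file
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/testfile" bs=1024 count=100 2>/dev/null
# -s: summary for the directory as a whole, -k: kilobyte blocks
du -sk "$dir"
rm -r "$dir"
```

du should report at least 100 blocks; file systems may round up to their own block size.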
If you don’t ask... My computer seems so slow to me – what’s the possible diagnosis?
BEGINNERS
Dr. Linux: Your Linux system naturally comes with a few programs which can be invaluable in helping you to analyse your system. I would like to introduce you to vmstat (virtual memory statistics). Invoked on the command line, this produces something like the following output:
procs                  memory      swap          io    system       cpu
 r b w  swpd  free  buff  cache  si  so   bi  bo   in   cs  us sy id
 0 0 0   212  4936  3104  23556   0   0    3   0  128  266   2  0 98
[left] Fig. 1: kdfree shows the memory allocation of individual drives as an overview or in detail
[right] Fig. 2: The display of gdiskfree looks a bit sparse by comparison
Here is how to decrypt this jumble of figures and abbreviations:
• r (run): The higher the number you find here, the slower your system. It shows how many processes could run if they did not have to wait for processor time.
• b (block): Processes counted here are waiting for specific events (but not for processor time) before they can continue running.
• w: The number at this point shows you the number of processes currently swapped out to the swap area
/etc/fstab: This configuration file is used on system start-up by the mount program to mount hard disk partitions and other data carriers in the file system, and keeps information ready for later mount actions. The first four columns in the file are the most interesting: the first specifies the device to be integrated, the second the mount point, the third the type of file system used on the data carrier. The mount options are listed in the fourth column. The following example automatically mounts the root file system on the first partition of the first SCSI hard disk on booting, but not the ATAPI (IDE) CD-ROM (noauto). However, later on, normal users (thanks to the option user) can, with the command mount /cdrom, make any inserted data CD accessible under /cdrom as read-only:
#Devicename  mounted as  Filesystem  mount-options
/dev/sda1    /           ext2        defaults        0 0
[...]
/dev/hdc     /cdrom      iso9660     ro,noauto,user  0 0
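Because the file is plain whitespace-separated text, its columns are easy to pick apart in a script. A sketch that prints device, mount point and file system type, fed with the example lines above via a here document (every real /etc/fstab differs):

```shell
# skip comments and blank lines, then print the first three fstab columns
awk '!/^#/ && NF >= 3 { printf "%s on %s type %s\n", $1, $2, $3 }' <<'EOF'
#Devicename  mounted as  Filesystem  mount-options
/dev/sda1    /           ext2        defaults        0 0
/dev/hdc     /cdrom      iso9660     ro,noauto,user  0 0
EOF
```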
Mount point: The directory in which the files of a data carrier are "mounted" so that they can be accessed under Linux and other Unix operating systems. ■
Fig. 3: The applet Disk space and its configuration menu
Fig. 4: KDu shows the memory space in directories
• swpd states the swap memory currently in use in kilobytes
• free shows how many kilobytes of RAM are unused right now
• buff shows the size of the areas of memory in which the in-/output buffers are located; the unit is again the kilobyte. When data are being written, there is not a transfer to the hard disk for each character or each block: all in-/outputs are first placed in a buffer zone. If, for example by reading, a program tries to access a block inside a file, the operating system checks whether the block sought is already in the buffer. If so, it is loaded from there and made available to the program which requested it
• Under cache can be found, in kilobytes, how much memory is currently being used to cache hard disk contents
• si (swap in) and so (swap out) show how many kilobytes of data per second have been loaded from the swap zone on the hard disk into main memory, or swapped from RAM onto the disk
• bi (block in) and bo (block out) refer to hard disk activity, for example: displayed here is the number of blocks per second which have been sent to block devices or received from them. Block devices are, for example, floppies and hard disks.
• in shows the number of interruptions per second arising
from hardware requests; the so-called interrupts
• cs (context switch) shows how often per second a switch is made from one program to the next (multitasking)
• us reflects what percentage of the processor time used is consumed by application programs, while
• sy shows the processor time used by the system
• id is the unused processor time, again as a percentage
An id number in the double-digit range together with massive activity in si and so can be the first indication of too little main memory. A high figure for unused processor time means, in such a case, that the system often has to wait for access to the hard disk. This is confirmed if swpd is high while free, buff and cache stay relatively low. If, on the other hand, id, si and so are constantly in the region of practically zero, these are serious indications of a processor which is too slow. To track down such problems, you should make vmstat produce periodic reports while they are occurring. But you will probably have to be patient at this point if everything is moving as slowly as a tortoise in winter. The following example shows a vmstat output with six repetitions (second figure) in one-second cycles (first figure):
user$ vmstat -n 1 6
procs                  memory      swap          io    system       cpu
 r b w  swpd  free  buff  cache  si  so   bi  bo   in   cs  us sy id
 1 0 0   208  1556  2616  20736   0   0    3   0  130  265   2  0 97
 2 0 0   208  1552  2616  20552   0   0  230   0 2019 2358  36 16 48
 0 0 1   208  1544  2616  18868   0   0   91   0  830 2607  41 19 40
 0 0 0   208  1544  2616  18868   0   0    0   0  102  208   0  1 99
 1 0 0   208  1500  2616  18912   0   0   11   0  358  734   0  5 95
 0 1 0   208  2336  2616  19188   0   0   69  60 1150  727  10  5 85
Creating the swap file is done with the command:
root# dd if=/dev/zero of=Filename bs=1024 count=Filesize_in_kilobytes
This copies the file /dev/zero into the specified file name. As file size, enter a value between 40 and 131073 kilobytes. A specification of 131073 makes a file of around 128 megabytes, which is the maximum size for a swap area. The smallest manageable quantity of 40 kilobytes for a swap area is not really sensible. Using the command:
/dev/zero: The content of the zero device, as the name indicates, consists of zeros (which go on forever). Everything written into this special file is discarded. ■
root# mkswap Filename Filesize_in_kilobytes
the file is given a swap signature. Only then can the file be used as swap storage. You can think of this procedure as similar to the installation of a file system with the command mkfs. Before you log the new file into the system, make sure that only the superuser has read and write privileges for it. Otherwise your system will either complain about the insecure permissions or issue error messages and refuse access to the new swap area. In case of doubt use chmod 0600 Filename to set the correct permissions. Lastly, activate the new memory with the command:
root# swapon Filename
The swap memory will be available to you immediately.
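Put together, the whole procedure looks like the sketch below. Only the dd, sync and chmod steps will run as an ordinary exercise; mkswap and swapon need root and a real system, so they are shown as comments (the file name is made up):

```shell
# 1. create a 1 MB file filled with zeros
dd if=/dev/zero of=swapfile.demo bs=1024 count=1024 2>/dev/null
sync                       # make sure the blocks really reach the disk
# 2. lock the file down before activating it
chmod 0600 swapfile.demo
ls -l swapfile.demo        # should show -rw------- and 1048576 bytes
# 3. as root on a real system you would continue with:
#       mkswap swapfile.demo
#       swapon swapfile.demo
#    and deactivate it again later with: swapoff swapfile.demo
rm swapfile.demo           # tidy up after the demonstration
```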
Anyone not wanting to go rummaging in the Man page, can use vmstat x to receive a brief introduction to the use of this handy tool.
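Incidentally, the raw figures vmstat works from live in the /proc file system, so a quick look is possible even without vmstat; a sketch reading the memory and swap fields straight from /proc/meminfo (present on any Linux system, though the exact set of fields varies with the kernel version):

```shell
# pick the free-memory and swap figures out of /proc/meminfo (values in kB)
awk '/^MemFree:|^SwapTotal:|^SwapFree:/ { printf "%-10s %10s kB\n", $1, $2 }' /proc/meminfo
```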
Space in the tiniest hut
When I set up my Linux system I apparently created too small a swap partition. Can I enlarge the swap memory without repartitioning?
Dr. Linux: As a temporary solution I would recommend a swap file, which you may be familiar with from Windows. Linux has two options for swapping data from RAM onto the hard disk:
• When installing your system you inevitably came across the swap partition. This is the first and the faster option
• The swap file is used less often. This works more slowly than a swap partition
Linux can manage up to 16 swap zones at once, so if necessary additional swap files can simply be added. The making and activating of a swap file is done by the superuser on the command line.
In Listing 1 you can see the messages produced by Linux when you create a swap file. Between the commands that create the file, the command sync is inserted; this makes sure that everything has been written to the hard disk before the next command performs further actions on the file. With:
root# swapoff Filename
you can log the additional swap memory off from the system if you no longer need it. ■
Listing 1: Creating and activating a swap file
root@maxi:/ # dd if=/dev/zero of=swapfile bs=1024 count=131073
131073+0 records in
131073+0 records out
root@maxi:/ # sync
root@maxi:/ # mkswap swapfile 131073
Swap area version 1 with a size of 134213632 bytes is being set up.
root@maxi:/ # sync
root@maxi:/ # chmod 0600 swapfile
root@maxi:/ # swapon -v swapfile
swapon for swapfile
DESKTOPIA
Jo's alternative desktop
BIT OF MAKE-UP? BY JO MOSKALEWSKI
What your Linux desktop looks like is something only you can decide. With deskTOPia we take you with us on a regular journey into the land of window managers and desktop environments, presenting the useful and the colourful, as well as pretty toys. And true to our motto "Our desktop will improve the way it looks" we are now going to show you a tool to put a bit of pizzazz into any window manager. Make-up for the desktop.
xosview 1.7.1, xnodecor LinuxMagazine/desktopia/
Window manager: Manages the windows of the graphical user interface X Window, which is also used by KDE or GNOME (both also only fall back on a window manager, which equips the applications with frames). GNOME Panel: The start panel of the GNOME environment which can be deployed separately; counterpart to the KDE start panel or Windows task bar. /dev/null: Unix device file which makes data ”disappear”: Data written into this file is simply forgotten by the system. X Session: Session on the graphical desktop – from the start of the interface (X) to the close. ■
Many people have already tried to smarten up a puny window manager with an application such as the system monitor gkrellm – usually successfully. But what can you do if the tool then settles stubbornly on only the first of your virtual desktops, and insists on a presence in the window manager's task list? Or if the dream tool actually gives you optical nightmares – e.g. a GNOME Panel with window frames: who wouldn't want to send this combination as quickly as possible to /dev/null?
Our leading actor
But if you want to turn your dreams into reality rather than dispose of them, there is a solution, as will be shown using the example of that system monitor classic, xosview. And as xosview comes with every distribution in a sufficiently up-to-date form, we are not going to concern ourselves with its installation, but with the task of integrating it inconspicuously into the desktop.
The Emperor's new clothes
A window manager normally provides each program with a frame (provided the program does not expressly wish to remain incognito). Unfortunately not every window manager understands such a wish, or the program author has simply not thought of expressing it (as, for example, in the case of xosview). So what is needed is the
possibility of hiding any program, regardless of whichever window manager is in use – it has to become invisible. But it is only rarely that we have any influence on the program – or on the window manager. But if, after starting an application and before starting our window manager, we could rob this program of its window attributes, the window manager would not take it under its wing. Since this does not usually involve programs which are briefly started and stopped again during an X Session, but rather those such as a system monitor (or a clock, start panel, notification of incoming mail, etc.) which are only supposed to be started automatically when the system is turned on, these need to be included in the start files of our X Session.
Changing rooms
Every user can concoct his own X Session start file. This is called either ~/.xsession (usually used when the user logs in graphically – e.g. via xdm) or ~/.xinitrc (when starting X via the command startx). Both files are constructed identically, so it is possible to kill two birds with one stone using a symlink. These files are executed line by line, and once the last command in them has finished, the X Session is also ended. So one first starts tools in these files, such as our xosview, adding a background image if required, and only then is a window manager called up. If this is ended, X stops. Our ~/.xinitrc could thus contain the following:
[left] Fig. 1: xosview in the guise of a window manager with entry in its window list [right] Fig. 2: xosview is invisible to the window manager
xosview &
sapphire
The & after xosview is necessary because without it the window manager (in this case sapphire) would only start after our system monitor had finished running – but with it the xosview process is sent into the background and it's the turn of the next line. And so this is also the ideal place to make our xosview invisible before starting the window manager. For this there is the little-known program xnodecor, which can now only be found on the Internet at ftp://ftp.42.org/pub/wmx/contrib/xnodecor.c. The author's Web site mentioned there is no longer available.
Tailor-made suit
But that's not enough – because what you get there is nothing but naked program code, and very few people would know where to start with that. And those who do will find out that it may well compile under the operating system it was written for, but not under Linux. This is why I have included it on the coverdisc – together with the necessary extras – as an all-round, carefree package. This is unpacked and installed just like other tar.gz source archives:
tar xvzf xnodecor.tar.gz
cd xnodecor
su
(Password...)
make
make install
exit
If unexpected problems should arise, there is also a ready-compiled binary included with the package, which root can simply copy to /usr/local/bin.
Behind the scenes
How xnodecor is to be called up is revealed by the command xnodecor -h, and it is, as you can imagine, easy: where previously in our ~/.xinitrc first xosview and then a window manager were called up, we now simply insert an xnodecor -w xosview@computername:
xosview &
xnodecor -w xosview@planet
sapphire
xosview@computername? Right, xosview goes by this name, and not, as you might expect, by plain xosview. In case of doubt the name can be determined with the command xprop | grep WM_NAME and a mouse click on the window to be hidden. Usually, though, a program identifies itself by the name of the executed file.
Make Up
Anyone now following up with practical trials will certainly have found out, to his or her disappointment, that xosview itself also draws a little frame, that it does not necessarily go to the top right-hand corner of the screen, and that it simply will not match the desktop background colour. But that too can be changed, and this is done via the X resources. Simply create – if not already present – a file ~/.Xdefaults and add to it the following entries:
xosview*geometry: -0+0
xosview*background: #102040
xosview*foreground: #102040
Obviously your own values should be inserted here – and for xosview there exists an almost infinite number of other options, which can be found in the manual: man xosview. By now our system monitor is looking like the one in Figure 2, and is present on all desktops. It no longer appears in the window list of our window manager. Anyone with lots of time and patience could now theoretically design a matching and functioning background graphic using xnodecor. ■
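The value #102040 is simply hexadecimal notation: two digits each for the red, green and blue components. If you want to see what a given colour means in decimal, the shell's printf can do the conversion (a throwaway helper, nothing to do with xosview itself):

```shell
# split an X colour such as #102040 into decimal R/G/B components
colour=102040
r=$(echo "$colour" | cut -c1-2)
g=$(echo "$colour" | cut -c3-4)
b=$(echo "$colour" | cut -c5-6)
printf 'R=%d G=%d B=%d\n' "0x$r" "0x$g" "0x$b"
```

For #102040 this prints R=16 G=32 B=64 – a dark blue.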
Symlink: Directory entry behind which no file is hidden, but which only represents a cross-reference to another file. So the command "ln -s .xinitrc .xsession" can get round creating the file ~/.xsession, if an ~/.xinitrc with the requisite content already exists.
X resources: Classic X programs take their default values – usually concerning their appearance – from these. ■
SOFTWARE
GNOMOGRAM
GNOME news and programs
GNOMOGRAM BJÖRN GANSLANDT
GNOME, and the GTK toolkit on which it is based, have been attracting more and more followers in recent years. There are now programs for almost every task, and new ones are being added daily. In the Gnomogram column we present each month the pearls among GNOME tools and report on the latest rumours and information about GNOME.
GNOME Foundation elections over
helix-setup-tools 0.2.1, gsmssend 1.0, smssend 2.0 LinuxMagazine/gnomogram/
Unlike the US elections, the election of the eleven directors of the GNOME Foundation Board went off without a hitch and with no great surprises. Naturally Miguel de Icaza and Federico Mena Quintero, the founders of GNOME, are members of the board, as are Havoc Pennington and Owen Taylor, who are working on many fundamentals such as GConf and GTK. Also elected were Bart Decrem, who made a crucial contribution to the creation of the GNOME Foundation, Maciej Stachowiak, an important Nautilus hacker, Dan Mueth, co-ordinator of the GNOME documentation, and John Heard, responsible for the collaboration between Sun and GNOME. Experience with organisations like the GNOME Foundation is also being contributed by Jim Gettys and Daniel Veillard. The only German candidate, Martin Baulig, unfortunately fell short by a few votes, which means the Board is still a tad short on Europeans. But overall some very highly qualified directors were elected, and it is to be hoped that any problems the future may bring can be dealt with by the GNOME Foundation Board.
Kylix for GNOME
According to zdnet, Borland is going to disclose the sources of its Delphi (and later C++) development environment Kylix, at least to the GNOME Foundation, and expand Kylix by adding GNOME support. Borland is also going to join the GNOME Advisory Board and
collaborate on Bonobo; Kylix, too, is to support Bonobo in the long term. Whether Kylix will become open source or free software has not yet been settled – it is not expected that the complete development environment will get a free licence, just CLX (the library on which programs developed with Kylix are based). But it remains to be seen when Borland will actually take steps to realise any of this; at the time of going to press there was no official comment.
Mandrake joins GNOME Advisory Board Apart from Borland, MandrakeSoft has also joined the GNOME Advisory Board and will take part in the development of GNOME Office and/or provide this with financial support, as announced on linuxmandrake.com.
Helix setup tools
With Helix Setup Tools, Ximian, formerly known as Helix Code, is now also making an effort to simplify the configuration of Linux and other Unix variants. Despite the low version number the tools are already very stable; nevertheless, a certain amount of care is still needed when tinkering with the innards of the system. As is to be expected from Helix Code, each of the tools consists of a back-end written in Perl and a graphical front-end. Care has been taken here to ensure the back-ends detect existing configurations and do not simply overwrite them. Front-end and back-end communicate via XML, which makes it possible simply to save an existing
configuration in a file and load it again as necessary. Although this feature cannot yet be used via a graphical interface, it is possible to save the current configuration with the command
programmname-conf -g > now.xml
or to load it with
cat now.xml | programmname-conf -s
The respective back-ends can be found in /usr/share/helix-setup-tools/scripts/, if the tools have been compiled as recommended with the options
./configure --prefix=/usr --sysconfdir=/etc
All components of the set-up tools will be included in the next version of the control centre, though at present one still has to start each tool individually, which of course means you also need root privileges to make any modifications. Even if it is already possible to do some configuration with the tools, sound, printer, hardware configuration and other components will be added later. With disks-admin you can get an overview of all hard disks and partitions in the system and alter settings such as mount point and file system. shares-admin configures all file systems imported via SMB or NFS, or even exports to other systems. The swap memory in use can be configured with memory-admin, and with networking-admin the basic network settings can be made. Everything to do with DNS, and thus name resolution, can be found in nameresolution-admin. time-admin not only sets the time, but also configures time synchronisation with other servers via NTP. And with the aid of users-admin there is no longer any need to edit /etc/passwd or /etc/group by hand to administer the groups and users in the system.
Icons on the GNOME desktop
URLs
Anyone with icons on his desktop is certainly aware of the problem that the background and text colours set by the GTK theme clash with the desktop background. Fortunately, the coloured boxes around the icons or their text can be switched off in the menu Desktop Properties/Desktop, which opens by means of a right click on the desktop. Nevertheless it may be that a few icons are displayed with a coloured edging, or that the text does not match in colour even without a box. To change the colours, the file ~/.gtkrc.mine has to be edited, which is loaded from ~/.gtkrc with the line
include "/home/user/.gtkrc.mine"
As can be seen in Listing 1, the colours of the foreground and background are set in selected and in normal condition. The three figures per value correspond to the RGB value of the respective colour, where you can specify values between 0.0 and 1.0. (hge) ■
Listing 1: .gtkrc.mine
style "my-desktop-icon" {
  fg[NORMAL] = { 0.8, 0.8, 0.8 }
  bg[NORMAL] = { 0.0, 0.7, 0.0 }
  fg[SELECTED] = { 0.0, 0.0, 0.8 }
  bg[SELECTED] = { 0.0, 0.8, 0.1 }
}
widget_class "*DesktopIcon*" style "my-desktop-icon"
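The 0.0 to 1.0 figures are simply fractions of full intensity, so a familiar 0 to 255 channel value is converted by dividing by 255. A tiny awk sketch with arbitrarily chosen values:

```shell
# convert 8-bit RGB channel values to GTK's 0.0-1.0 scale
awk 'BEGIN { r = 204; g = 178; b = 25
             printf "{ %.2f, %.2f, %.2f }\n", r/255, g/255, b/255 }'
```

204/255 comes out as 0.80, the value used for the normal foreground in Listing 1.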
www.zdnet.com/eweek/stories/general/0,11011,2652581,00.html
www.linux-mandrake.com/en/pr-fgnome.php3
www.ximian.com/desktop/setuptools.php3
zekiller.skytech.org/smssend_en.html ■
The author Björn Ganslandt is a student and passionate bandwidth squanderer. When he is not busy testing new programs, he reads books or plays the saxophone.
[top] Fig. 2: SMS without a mobile [above] Fig. 1: Disk-admin in action
Gsmssend
SMS mania goes on, even with GNOME – with Gsmssend you can send SMS messages without a mobile. To do this, Gsmssend falls back on smssend, which in turn makes use of free SMS offers on the Web. Smssend has its own script language, which defines to which sites the SMS is sent, and saves the user from logging on manually. Even if a provider expects you to look at certain pages before sending an SMS, Smssend can pull the wool over its eyes by using a false referrer. For each script Gsmssend has a view in which one can enter the necessary options and the actual SMS, always displaying how many characters are still available to you. This can vary from provider to provider, as advertising messages of different lengths are attached to the text you send. Since the providers' sites change from time to time, under Settings/Check for Updates you have the option of checking whether more recent versions of the scripts exist. Unfortunately Gsmssend in the current version does not yet pass on the information issued by smssend to the user, so you have to start it in an xterm in order to make sure no errors occurred during sending.
PROGRAMMING CORNER
Part 2: Principles of Bash
ARRAY OF LIGHT MIRKO DÖLLE
After taking a look in the last issue at metacharacters and the basic use of variables, this time we are thrusting forward into the territory of multidimensional variables and towards the end we will be concerned with processing character strings.
This time we are going to start with variable fields, the so-called arrays. It may sound very complicated but in reality it's very simple to use. A one-dimensional array is nothing but a series of variables one after the other, as if one had a long strip of squared paper: in each box one can write a value. Since such an array variable no longer holds a single value, we have to say each time which box we mean. To do this we append to the name of our field, in square brackets, the number of the box which is to be used:
V[1]=Hello
V[2]=world!
In the first line we write Hello into box number 1 and in the second world! into box number 2. As so often in the computer world, Bash starts counting at zero, so before "Hello" we still have one box free:
V[0]="We say:"
For element zero, incidentally, the explicit designation "[0]" can be left out, both in assignments and in output, as the following shows:
echo $V ${V[1]} ${V[2]}
We say: Hello world!
And this is the snag with arrays: to reach an element, i.e. the content of a box, we have to use curly brackets – otherwise Bash would think "$V[1]" was "$V" – thus "$V[0]" – followed by a literal "[1]", instead of the content of element number one of our array. So let's just take a look at what is actually in our array. To do this we use the command typeset, with which among other things one can query the status and content of a variable:
typeset -p V
declare -a V='([0]="We say:" [1]="Hello" [2]="world!")'
The output from typeset corresponds to what one would have to enter to recreate the array from scratch. The new part is the command declare -a, with which one can declare variables. "-a" explicitly defines that V is an array, following which the values of the elements 0, 1 and 2 are entered. declare has other parameters apart from "-a", which we will look at as necessary. As so often, we can also do without the declare in this case, by simply writing:
V=([0]="We say:" [1]="Hello" [2]="world!")
At this point, just a word about programming in general. A programming language serves to give the computer instructions in a form which is legible and comprehensible for humans. One could also feed the computer directly with processor commands, but then one would have serious problems correcting errors later or adding new functions – after a certain time, one would no longer understand one's own program. The last stop on this line is often the waste paper basket, together with a complete redesign of the program.
This is why it is very worthwhile to use declare to declare more complex structures, arrays for example: an outsider is then much more likely to understand the program. But that's not enough. You really must get used to documenting your programs, regardless of whether you are writing them in C or Bash. The middle way between a Spartan copyright annotation and a comment on every line is, as usual, the best. An output line in which you concisely report an error is something you don't need to document.
You may assume that a reader has complete mastery of Bash. But if you start to process data with external programs, and at the same time build in interlocks or even tricks for faster processing, then such a point obviously must be documented with more than one line. I will go into this again when we come to corresponding examples in the next instalment. But back to the arrays. Let's just assume we had gaps in our field, as in the following example:
N="The"
N[3]="house"
N[6]="."
On our strip of squared paper we would occupy boxes 0, 3 and 6. Bash manages its memory better, though, and leaves no gaps there:
typeset -p N
declare -a N='([0]="The" [3]="house" [6]=".")'
Our three entries are simply stored one after the other, and Bash remembers their element numbers. In this way we can at any time fill in the as yet unused numbers in the gaps:
N[1]="is"
N[2]="the"
N[4]="owned by"
N[5]="Nikolaus"
typeset -p N
declare -a N='([0]="The" [1]="is" [2]="the" [3]="house" [4]="owned by" [5]="Nikolaus" [6]=".")'
Finally it should be mentioned that array elements can be used at any point where a normal (scalar) variable could be placed, but in most cases one simply has to use the notation with the curly brackets.
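All of the sparse-array behaviour described above can be tried out in one short script; a sketch (it must be run with bash, since plain sh has no arrays):

```shell
#!/bin/bash
# a sparse array: only boxes 0, 3 and 6 are filled
N="The"
N[3]="house"
N[6]="."
echo "elements stored: ${#N[@]}"   # counts only the assigned boxes
echo "indices in use:  ${!N[@]}"   # ...and these are their numbers
# now fill in the gaps and print the whole sentence
N[1]="is" N[2]="the" N[4]="owned by" N[5]="Nikolaus"
echo "${N[@]}"
```

Before the gaps are filled, ${#N[@]} reports 3 elements, at indices 0, 3 and 6.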
Special variables
Bash has access to a whole range of special variables, such as the parameters provided at a program start. Here are just the most important ones. The variables $0, $1, $2 etc. are the parameters which were provided by the user when invoking a Bash script (or a function, but more on that later). You can work with these in exactly the same way as with all other variables, except that you cannot assign values to them – variable names cannot start with a figure. The variable $0 is always set: this is where the program name, as used to call it up, is found. The total number of parameters can be polled with $#. Note that $0 is not included in this count; if $# supplies "4", this means that there are $1 to $4 plus $0 in addition. Let's just take a look at the example "myecho" in the following listing:
#!/bin/bash
echo [$#]: $1 $2 $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12}
The curly brackets from parameter ten onwards are necessary, by the way, as otherwise Bash – as will be familiar from arrays – would first insert $1 and then append a literal zero, and so on. Simply write the little program in a text editor, for example kedit, mcedit or even emacs, and save it under the name myecho. Then you have to make the file executable with the command chmod a+x myecho, and call it up twice:
./myecho 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
[15]: 1 2 3 4 5 6 7 8 9 10 11 12
./myecho "1 2 3 4 5 6 7 8 9 10 11 12 13 14 15"
[1]: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
The different result from the two calls is due to the quote marks. While on the first occasion we have, according to $#, 15 parameters, with the second call there is only one involved. Bash interprets spaces as separators between two parameters in program calls; consequently the 15 numbers separated by blanks are 15 parameters. As already outlined in the first part, we must either quote parameters containing blanks in full or escape the blanks. In the second call Bash thus has only one parameter, consisting of all 15 numbers separated by blanks, while $2 to ${12} in our program remain unused. To be able to output any desired number of parameters, we would otherwise have to go through and individually output all parameter variables from $1 to the end, for example in a loop. The two special variables $* and $@ save us this task; here for example is our altered myecho:
#!/bin/bash
echo [$#]: $*
The difference between $* and $@ only becomes clear when both are placed in quotes. With four parameters, "$*" turns into "$1 $2 $3 $4", while "$@" turns into "$1" "$2" "$3" "$4". At first glance the difference does not appear to be of any significance, but that's because the internal Bash variable IFS is pre-set. The first character of this variable is used with "$*" to separate the individual parameters. Normally IFS contains three characters, namely space, tab and newline.
The following example shows the difference, by placing a comma before the standard character in IFS:
#!/bin/bash
IFS=",$IFS"
echo [$#]: "$*"
echo [$#]: "$@"

The third line lists all the numbers separated by commas, while the fourth still separates them with blanks. On the other hand $@ is not superfluous either, and for this example we shall take a new program with the name whichfiles:

#!/bin/bash
ls -l "$@"
echo ---
ls -l "$*"
echo ---
ls -l $@
echo ---
ls -l $*

Please remember to make this program executable too, with chmod a+x whichfiles. All we need now is two files, one of them with spaces in the name. Next we call up whichfiles and specify - careful with the blank spaces - both files as parameters:

echo "Hello world" > "hello world.txt"
echo "That is Nikolaus's house" > nikolaus.txt
./whichfiles "hello world.txt" nikolaus.txt

The result requires a little more explanation. Here we have placed all four notations of $* and $@ one after the other; the echo instructions merely serve as separators to give a better overview. The first ls call correctly showed us both files, hello world.txt and nikolaus.txt. As expected, "$@" turned into hello world.txt and nikolaus.txt; ls thus received two parameters and displayed both files. The second ls rightly complained at being unable to find any hello world.txt nikolaus.txt. The reason is the property of "$*" of supplying all parameters, separated by a character, in one quoted string; accordingly ls received only one parameter, namely hello world.txt nikolaus.txt. Calls three and four produce the same result: both parameters are passed to ls separated by spaces. Because of the second space in hello world.txt, ls received three parameters; hello and world.txt thus correctly gave rise to complaints, as neither of these files exists. Finally the variables $? and $! should be mentioned, which serve to query program response values, the so-called exit status. At present they are not yet important to us, but a short description can be found in the table "Special variables in Bash".

Special variables in Bash
$*      All parameters assigned to the program, separated by blanks. $*=$1 $2 $3 $4 ...
"$*"    All parameters assigned to the program, in quotes, separated by the 1st character (c) of the variable IFS. "$*"="$1c$2c$3c$4" ...
$@      All parameters assigned to the program, separated by blanks. $@=$1 $2 $3 $4 ...
"$@"    All parameters assigned to the program, individually enclosed in quotes and separated by blanks. "$@"="$1" "$2" "$3" "$4" ...
$#      Number of assigned parameters
$?      Response value of the last command
$$      Process ID (PID) of the current program
$!      Process ID (PID) of the last program started in the background
$_      Last parameter of the last program called up

96 LINUX MAGAZINE 6 · 2001
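Although the exit status is deferred for now, a two-line sketch of our own already shows $? at work (the exact non-zero code from a failed ls varies between systems, but success is always 0):

```shell
#!/bin/bash
# $? holds the response value (exit status) of the last command.
ls /definitely/not/there 2>/dev/null
echo "status after failed ls: $?"   # non-zero on error
true
echo "status after true: $?"        # 0 on success
```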
String processing

Most script languages, Bash among them, are principally concerned with processing character strings, usually referred to simply as strings: a series of letters, numbers, special and control characters - for example the chapter of a book. It would be wrong to think only of letters, numbers and the odd special symbol, because a string can easily run to several lines and contain formatting characters such as tabs or a page break. This clarification is important - we must always be aware that a string variable could hold a whole novel, or an entire file. The assignment and output of a character string has already been shown in many examples. Next we come to the possibilities for finding out something about the text hidden in variables, and for manipulating it. In contrast to version 1.1 of Bash we now have access to a vast range of functions, but let's start with something very simple: how do you find out that a variable is empty? One possibility would be to compare the content with an empty string (which brings in the control structures); another is simply to determine the length of the string:

a=""
echo ${#a}
0
a="four"
echo ${#a}
4

The quotes in the assignment can be left out, even if the first line would then look somewhat unusual. We can even - without having introduced control structures yet - react to an unset parameter with an error message:

#!/bin/bash
a="Hello"
: ${a:?'yes'}
echo 'no'

Save this little program under is-a-empty and call it up after you have made it executable with chmod. The program answers correctly with no, which is hardly surprising, since after all we output it with echo. Now delete Hello from the second line, so that a is empty, and call up the program once more:
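The two checks can also be combined in one sketch of our own (the ${b:?} abort is confined to a subshell here, so the script carries on afterwards):

```shell
#!/bin/bash
a=""
echo "${#a}"     # prints 0: the string is empty
a="four"
echo "${#a}"     # prints 4
# ${var:?message} aborts when var is empty or unset; running it in
# a subshell keeps the abort from ending this script:
( unset b; : "${b:?b is empty}" ) 2>/dev/null || echo "check failed"
```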
a: yes It is obvious that our echo no from the last line of the program has not been executed this time. That’s the result from the third line. We know of
the colon command from the last instalment of Programming Corner: it does nothing at all. Its parameters, however, are still evaluated, and that is where the check is hidden. The instruction ${variable:?errormessage} first tests whether variable is empty or does not exist at all. If so, the error message after the question mark is issued and the program is aborted - which is why the echo in the fourth line never got a look in. When the error message is output, Bash prefixes it, in the usual way, with the program name and the location at which the error arose. For the next two instructions we shall first modify our script whichfiles:

#!/bin/bash
ls -1 ${1:-$HOME}

Now we get a listing either of the specified directory or, failing that, of the contents of our home directory $HOME:

./whichfiles /bin
arch
bash
cat
...
./whichfiles
dead.letter
mail
...

The instruction ${variable:-string} inserts string where the variable is empty or non-existent; otherwise it inserts the content of $variable. In our case we check the first parameter of the program: if a directory has been specified, $1 is filled and the instruction delivers its content; if not, $HOME is inserted. Almost the same effect is produced by ${variable:=string}, except that string is additionally assigned to the variable if it is empty or does not exist:

#!/bin/bash
directory="$1"
: ${directory:=$HOME}
echo "show $directory"
ls -1 "$directory"

${variable:+string} works the other way round: string is inserted only when variable contains something; if it is empty, nothing is inserted. This instruction is very seldom used in practice, which is why I have been unable to find a useful example of it. In the next instalment of Programming Corner we shall take a look at extracting substrings and at search and replace with regular expressions in Bash.
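For comparison, all four expansion forms in one short sketch of our own:

```shell
#!/bin/bash
unset v
echo "${v:-default}"    # inserts "default"; v itself stays unset
echo "${v:=assigned}"   # inserts "assigned" AND assigns it to v
echo "$v"               # v now really contains "assigned"
echo "${v:+replaced}"   # v is non-empty, so "replaced" is inserted
unset v
echo "x${v:+replaced}x" # v is unset: nothing inserted, prints "xx"
```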
For now all that is left is for me to wish you a good start to the new millennium which now lies before us. ■
Terms
Character string, string: A series of letters, numbers, special and control characters. Character strings can certainly span several lines, and the content can also be a program or an image file.
Field, array: Consists of several elements, which are addressed via numbers. In one-dimensional arrays only one number is necessary to select an element, in two-dimensional arrays two, and so on.
Element of an array: Addressed via its position. Elements can be used like regular variables; values can be stored and retrieved.
Response value, exit status: The value sent back to the caller when a program ends. The return is guaranteed by the system; not to be confused with messages on the screen. If the program executed successfully, the value 0 is usually returned; if errors or problems arose, the value is greater than 0. One can often determine the error from the response value; explanations of their meaning can usually be found in the program documentation. ■
KORNER
K-splitter
MELODIC STEFANIE TEUFEL
Who says there is no place for gossip and scandal in a Linux magazine? K-splitter broadcasts monthly news from the K-World and noses around here and there behind the scenes.
Play it again
xmms plugin for aRts: LinuxMagazine/ksplitter/
Apart from many visual improvements, additional configuration options and a whole range of programs such as KOffice, the multimedia architecture is one of the really juicy new features of KDE 2. It is based on aRts (Analog Realtime Synthesizer), which allows you to play several audio or video data streams at once, both on the desktop and over the network. But where there is bright light there is also plenty of shade - or, in the case of aRts, silence. Maybe one or two of you have already noticed this when trying to operate one of your old familiar noise-makers under KDE 2.0: many non-aRts applications (and that means all those not part of the KDE family) just will not work with this smart new system at all. What now? Wait. And wait for exactly 60 seconds, because that is how long it takes for the KDE sound server to release your sound system or, put another way, your sound card. What if you wait that long but the stubborn program still just won't go? Bad luck. Apart from the brute force method there is as yet no elegant way of temporarily disabling the server
[left] Figure 1: Small driver, big action
[right] Figure 2: A bouquet of colourful images
artsd. The only option left if this happens is to get rid of artsd with a firm

killall artsd ; killall artswrapper

call up the respective problem child, and afterwards start aRts afresh with

kcminit arts

After that, please don't be surprised by unstable applications. Stupidly, many aRts programs currently still tend to crash as soon as the server is killed and restarted in this manner. Problem detected, problem banished? Not quite, unfortunately. But no fear, help is at least partly on the way, because for xmms, one of the most popular applications, there is already a more refined solution. If you don't want to give up your customary MP3 player on aRts, simply go to http://home.earthlink.net/~bheath/xmms-arts/ where you can download the package xmms-arts-0.4.tar.gz. This clever little aRts output driver makes sure that xmms and aRts no longer get into a scrap, but instead live together peacefully. Once it is installed, select the newly arrived item aRts Driver 0.4 [libartsout.so] under Options/Preferences/Audio I/O Plugins (Figure 1), and the sound of silence is a thing of the past.
Informative

Those unable to get enough KDE 2.0 news, or wanting to show off online to their dear acquaintances, friends and relatives just what the new desktop can do - no problem. The answer awaits at http://www.linuxmandrake.com/en/demos/Tutorial/KDE2desktop/, where the makers of the Mandrake distribution present, in an extensively illustrated online tutorial, just what our favourite desktop can do (Figure 2).
Joint forces

Combining forces to achieve more was an idea thought up by a few companies, and shortly before the end of the year the KDE League was born. In future, big companies such as IBM and Hewlett-Packard, together with the KDE developers, will be ensuring with money and good words that KDE goes into action on more desktops than before. In addition, the implementation of KDE on the increasingly popular handhelds will also be promoted. Have no fear, though: the independence of the KDE project remains guaranteed in spite of all this, since responsibility for the development of core
applications and libraries will continue to lie solely with the KDE developers (but they are always glad to receive good code contributions from third parties). IBM has also incidentally let it slip out that work is proceeding jointly with the other League partners to make components of IBM’s ViaVoice available for KDE.
Choice

Admittedly, a bit of programming know-how is certainly needed to really do anything with the new KDE 2.0 development manual. But for all those not completely unfamiliar with terms such as C++, there awaits at http://kde20development.andamooka.org/ one of the largest current online references on KDE and Qt programming. The operators go one further, and also offer the complete book for download as an HTML or PDF file. A new feature of the project is that readers are invited to comment online in detail on the content: the structure of the site allows visitors to comment on each section of the book and to read the remarks made by other readers - an open support system of the new generation. ■
MP3: MP3, or MPEG 1 Audio Layer 3, is the name given to what is currently the best compression algorithm for audio data. It was developed at the German Fraunhofer Institute. Using this method, sampled data can be compressed by a factor of approximately 12 without problematic loss of quality. ■
K-tools
PHOTO FINISH STEFANIE TEUFEL
In this column we present tools which have proven to be especially useful when working under KDE, solve a problem which is otherwise deliberately ignored, or are just some of the nicer things in life which - once discovered - you wouldn't want to do without.

Anyone who is used to the graphical representation of the printer spool under Windows 9x has certainly already cursed at one time or another about printing under Linux. Why is this? Surely not because the Linux developers are actually sadists who gloat fiendishly when we users sit bawling in front of our computers trying to print. It's more to do with the fact that Linux is a full-blown multi-user operating system. In such a system it can easily happen that your beloved girlfriend wants to get the latest snapshots of Brad Pitt onto paper while you are trying to print out the memo urgently prepared for your boss. You may dispute the priority of the tasks (I'd side with Brad Pitt), but in any case, in our example two people are sending a print job to the printer at once. Or trying to, at least - because if they actually did so, there would be a huge printer fiasco: memos and Brad Pitt really don't go together very well.

In this type of situation our favourite operating system acts as mediator. Under Linux you never talk to the printer yourself, but use the utility program lpr, which first dumps the various print jobs, on a democratic basis, into the spool directory. As soon as the printer is free, the printer daemon lpd swings into action and shovels the respective file to the printer. But what has all this to do with KDE, you ask? Quite a bit, since KDE provides you with klpq - nicer and more functional than ever in version 2.0. And what does it do, mainly? One could say it makes things transparent: it shows you the printer queue, removes waiting jobs from the spool or changes their priorities at a click of the mouse, and increasingly saves you having to resort to lpr on the command line. You don't have to worry greatly about the
Figure 1: Bring klpq back onto the right track...
Spool directory: The waiting room in which files wait patiently until they are dealt with (e.g. printed out). The (print) jobs are worked through in sequence, and files which have been processed are then deleted from the directory.
Printer daemon: A utility program which ensures, largely unnoticed in the background, that print jobs are sent from the spool directory to the printer.
URL: Uniform Resource Locator, the unique address of a file on the Internet. For example: http://www.linux-magazine.co.uk/.
Temporary file: A file created by a program to store data temporarily, and later deleted by it.
installation of the printer's little helper, as it is an integral part of the kdeutils package. Configuration is also simple - but that will only be of interest to you once you can start the program in the first place. This is done in one of two ways: either enter the command

klpq &

in a terminal emulation of your choice, or take the route via the K menu, where Utilities/Manage print jobs will guide you to your target. On first starting, klpq merely wants to know which printer spool system you are using. The default is BSD, which will usually put you on the right track on a Linux system. If the commands lpq, lprm and lpc, which klpq will in future make zealous use of, are located in some obscure directories deviating from the standard paths, modify these under Options/Spooler (Figure 1). What is the use of these little helpers? The command lpq shows on the command line the files lying in the spool directory waiting to be printed, together with their job numbers; lprm removes a print job; and with lpc you control the printer itself. Now the print management can begin.
Is that it?

As soon as you give klpq an argument on the command line, you can even use the program as a substitute for lpr. For example, instead of

lpr printmebaby

kill two birds with one stone and try it directly with

klpq printmebaby

In this case klpq not only starts, but also selflessly executes the necessary lpr command for you. klpq supports lots of more frolicsome actions too, and accepts URLs as command line arguments. As long as you are online, it uses Konqueror to copy the corresponding file(s) temporarily onto your computer and send them to the printer. Leave klpq running during the entire download: if you shut the program down in the meantime, the downloaded files cannot be printed. Like almost all KDE programs, klpq naturally also understands KDE's own drag and drop protocol, so to print a file you just have to drag it out of Konqueror with the mouse and drop it over the klpq window. As soon as you activate the Auto button, klpq automatically rereads the current printer queue every x seconds, so you don't constantly have to bother with the Update button (Figure 2). Under Options/Update frequency you set whichever time interval seems appropriate; if the adjuster in the associated dialog box is set to 0, this mechanism is disabled (Figure 3). The rest is pretty much self-explanatory: if you want to delete print jobs from the queue, select the respective file in the klpq window and click on the Dequeue button. Of course, this only works with print jobs which are yours; if you want to remove other people's jobs from the spool, you have to be in possession of root privileges. You are? Then klpq even allows you to set priorities. If you want to sneak your oh-so-important memo past the Pitt photos, then shame on you - but as soon as you click on the file and then the button Move to top, the memo, despite all your girlfriend's protests, goes into first place. ■

[left] Figure 2: klpq in all its glory
[right] Figure 3: For those who want to do it themselves
AT YOUR COMMAND
At your command
A TERMINAL CASE HEIKE JURZIK
Even though lots of things can easily be controlled by means of graphical user interfaces such as KDE or GNOME, anyone really wanting to stretch his Linux system cannot avoid using the command line. Apart from this, there are also many situations where it's good to know your way around the command line jungle a bit.
Have you ever got fed up with switching back and forth between lots of terminals because you wanted to run several applications in the foreground at once? Or have you been downcast because a process you had to start on a machine at work was not finished in time for going home, but you still wanted to check the program's output? screen is an extremely powerful tool which can make many such tasks easier for you. Before starting the program for the first time, it is best to check what value the environment variable TERM (terminal emulation) is set to, since this is evaluated immediately when the program starts:

huhn@asteroid:# echo $TERM
xterm
huhn@asteroid:# export TERM=vt100
huhn@asteroid:# echo $TERM
vt100

The programmers of screen point out explicitly in their documentation that this tool gets along best with vt100 - so it's best to check first. With screen you can simulate up to ten virtual windows in a single xterm (or on the console). You can then run programs in all these windows, with each of the virtual windows being independent of the others. Simply type screen - after a short greeting text there are instructions on what happens next:

[Press Space or Return to end.]

Using the space bar, you now enter the realm of the infinite expanse of terminals. You have access to a whole range of commands, all starting with [Ctrl-a]: hold down the [Ctrl] key and type [a]. The program then waits for the next command input: [Ctrl-a] [?], for example, gives a complete overview of the key bindings (see Table 1).
Out of the blue – screen! Apart from all the control commands within the window it is of course also possible to provide the
Terminal emulation: The program responsible for screen output, appears to the system as a terminal. Linux consoles or an Xterm for example can use certain control sequences for highlighting, cursor positioning etc. Sometimes these emulate real hardware terminals, e.g. those of the type DEC vt100. If the environmental variable TERM is set to vt100, the program can be controlled like a vt100 terminal. ■
6 · 2001 LINUX MAGAZINE 101
program with various parameters at the start. In case you have started screen several times and no longer know how many there are and whether they are currently active, there is the option -ls (short for -list):

huhn@asteroid:# screen -list
There are screens on:
    1200.pts-10.asteroid   (Attached)
    1203.pts-14.asteroid   (Detached)
The author Heike Jurzik works at the computing centre of the University of Cologne as Administrator of the local news-server. She has been working on Linux systems since 1996. And because the computer keyboard is enough when it comes to keyboard instruments, instead of piano she prefers to play the violin in a symphony orchestra and when she gets the chance enjoys reading a good book.
Here you can see the process ID (pid), then the virtual terminal (tty) in which the screen was started, the host (asteroid) and, as the last piece of information, whether it is currently active ("attached") or has been put to sleep ("detached"). Inactive screens can be brought back to life with screen -r [pid.tty.host]; it is only necessary to specify the process number and terminal if several screens are inactive. You can make this task much easier for yourself by giving the session a name right from the start:

screen -S petronella

names your screen "petronella". In the overview this then appears as 1364.petronella - the name thus replaces terminal and host. By the way, if a screen process ever hangs, you can spot this in the overview from the status flag "dead", and get rid of it elegantly with the parameter screen -wipe. When you revive a screen which has been put to sleep, you may sometimes want to scroll back to look at the last output of the programs running in it. The standard buffer is 100 lines; you can alter this with the option -h followed by a number of lines, so screen -h 1000 lets you go back 1000 lines. To move around in this buffer there is a range of keyboard commands. First go into copy/scrollback mode (see Table 1, [Ctrl-a] [Esc]). If you already know and use the editor vi, you will certainly be familiar with the commands for cursor movement; otherwise a short reference can be found in Table 2.
Stars of the small screen

You can create a configuration file in your home directory, .screenrc, in which you enter specific wishes for the program's behaviour. For example, if you enter

startup_message off

the greeting message at the start of the program will be left out. Another practical option is defining your own commands. For example, if you write in the .screenrc

bindkey ^f screen ssh marvin.cologne.de

(and not, as described in the man page, bind xy!), then when [Ctrl-f] is pressed in the screen window, a new screen window will automatically be opened with an ssh connection to the computer marvin.cologne.de. In this way you can define lots of useful aliases. If you would like more than 100 lines of standard buffer, you can use the entry

defscrollback 1000

to define your own buffer size. Another nice feature is the so-called vbell_msg. For this you must first set vbell on, then define the message which is to appear when a window receives a "beep" ([Ctrl-g]), e.g.

vbell_msg "Hello! Here's a beep!"

There is a whole range of tips and tricks on this subject in the very comprehensive man page. It is also worth taking a look at the default configuration file /etc/screenrc. If you would like to read more on this subject, most distributions ship a very well-written README and an FAQ; the directory in which these files are located depends on the distribution. For Red Hat it is /usr/share/doc/screen-3.9.5/, for Debian /usr/doc/screen/, and for Mandrake /usr/doc/screen-3.9.5/. (Tip: if these files end in .gz and are thus gzip-compressed, you can read them with the program zless - this is the case with Debian Linux, for example.) Otherwise the following applies:

Send bugreports, fixes, enhancements, t-shirts, money, beer & pizza to screen@uni-erlangen.de ■

(hge)

Screen URLs
The homepage of the GNU project screen is http://www.gnu.org/software/screen/
A nice collection of information can be found at http://www.math.fu-berlin.de/~guckes/screen/
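Collected into one file, the options discussed give a ~/.screenrc along these lines (a sketch; marvin.cologne.de is simply the article's example host):

```
# ~/.screenrc - sample configuration built from the options above
startup_message off                        # skip the greeting text
defscrollback 1000                         # 1000-line scrollback buffer
vbell on                                   # use the visual bell
vbell_msg "Hello! Here's a beep!"          # message shown on a beep
bindkey ^f screen ssh marvin.cologne.de    # Ctrl-f: new window with ssh
```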
Table 2: The most important movement commands
h, j, k, l: move the cursor left, down, up or right, line by line or column by column.
0, $: go to the extreme left or right end of the line.
H, L, M: move the cursor, in the far left column, to the top, bottom or middle.
+, -: one line up or down.
G: jumps to the end of the buffer.
g: jumps to the start of the buffer.
w, b, e: jump word by word: forward, back, and to the end of the word.
FAVOURITE COMMANDS
Table 1: Key combinations in screen

[Ctrl-a] [?] (help): Lists all key bindings.
[Ctrl-a] [c] (screen): Opens an additional virtual window.
[Ctrl-a] [space bar] (next): Changes to the next window; if the command is repeated, one can "run through" all the windows.
[Ctrl-a] [Ctrl-a] (other): Constantly changes back and forth between two windows.
[Ctrl-a] [0...9] (select n): Changes to window no. n.
[Ctrl-a] [w] (windows): Shows briefly, in a line at the lower edge, how many windows have been started, the current one being highlighted with *.
[Ctrl-a] [a], [s] or [q] (meta/xoff/xon): Sends a [Ctrl-a], [Ctrl-s] or [Ctrl-q] directly to the window; needed for some programs (e.g. Emacs) which have their own [Ctrl-a] control sequences.
[Ctrl-a] [x] (lockscreen): Locks the screen; after entering a valid password you can carry on working.
[Ctrl-a] [H] (log): Logs the standard output in a file; depending on the number of the window (1-10), the logfile is called screenlog.n. Calling [Ctrl-a] [H] again ends the logging.
[Ctrl-a] [Esc] (copy): Changes to copy mode. If there is no mouse to mark text, one can move with the letters h, j, k, l to the desired point on the screen, start the marking with the space bar, move to the end point and press the space bar again to store the text on the "clipboard". With [Ctrl-a] []] ([Ctrl-a] followed by a closing square bracket) the marked text is inserted; with [Esc] the action is interrupted.
[Ctrl-a] [d] (detach): "Releases" the screen; all processes started in it continue to run, but the program detaches itself from the terminal, so you can log out. With screen -r the screen can be called up again (full explanations in the text).
[Ctrl-a] [D] [D] (pow_detach): "Power detach" - not only detaches the screen, but also immediately logs out of the terminal.
[Ctrl-a] [K] (kill): Destroys the whole screen; fortunately there is a safety check at this point: Really kill this window [y/n]
A series of tips on command line favourites.
AT YOUR COMMAND RICHARD SMEDLEY
| less [pipe, less]

If you are fairly new to the command line, you may have been occasionally frustrated by a list command, such as ls /etc, scrolling off the screen past the information that you wanted. Perhaps you have used <RtShift><PgUp> and <RtShift><PgDn>, very useful if your shell allows it, but annoying over several pages. The answer is:

ls /etc | less
The | (pipe) is the Unix tool for gluing commands together by sending the output of one to the input of the next. Piping several screens of output to the pager less enables easy back-and-forth scrolling with <b> and <f>. For more details of the scrolling commands available, type

less --help

If you have never used less before, you need to know that q will exit the program (and this is also the command to leave man pages). ■
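Pipes compose with any filter, not just pagers; a quick example of our own using the standard wc tool:

```shell
# Pipe the listing into wc -l to count the entries in /etc
# rather than page through them:
ls /etc | wc -l
```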
SOFTWARE
MP3 PLAYERS
Getting to know MP3s
MP3 PLAYERS COLIN MURPHY
Although they are not quite giving media players away with Cornflakes yet, they do appear to be everywhere. Here we will look at just a few to see what features are available.
MPEG Audio Layer 3, a subset of the MPEG standard for A/V storage, is an audio format that produces highly compressed files while sacrificing very little audio quality: the perceived frequency response and signal-to-noise ratio are retained. Essentially, MP3 works by removing inaudible information. Compression ratios of up to 12:1 (for stereo files) can be achieved with very little degradation. MP3 files are compressed sound files that rely on a Fraunhofer compression routine. This is similar to zipping a sound file, but it also removes any sound information that could not be heard by the human ear. This greatly reduces the size of the file: a normal stereo CD holds some 650MB and 74 minutes, whereas in MP3 format this usually comes to about 60MB. Mono speech (such as a radio show) is compressed even further. This means you could make a CD-R of over ten hours of music, or put days' worth onto your hard drive. The Fraunhofer compression is a proprietary algorithm, which may be charged for in the future. This causes problems for producers of free players and encoders. Currently Fraunhofer does not charge for players that are given away, but that could always change. For a shocking example of the charging rates visit http://www.mp3licencing.com
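The sizes quoted above are consistent with the 12:1 ratio; a quick shell arithmetic check of our own, using the article's figures of 650MB and 12:1:

```shell
# Rough check: a 74-minute CD of about 650MB, compressed at
# roughly 12:1, comes to about 54MB - in the region of the
# "about 60MB" quoted for a typical album.
cd_mb=650
ratio=12
echo "approx MP3 size: $((cd_mb / ratio)) MB"
```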
To overcome this limitation an open source project has produced Ogg Vorbis. The Ogg part is a framework in which streams of data can be presented; one such stream can be audio. The Vorbis part is the audio codec, which has been written patent-free and released under the LGPL. Ogg Vorbis tracks are currently slightly larger than MP3 as the code is not yet optimised. Listening to a track in MP3 encoding and comparing it to an Ogg Vorbis encoded file reveals no noticeable difference. As things currently stand you are more likely to get hold of MP3 files, but you should aim to take advantage of Ogg Vorbis. With any type of compressed audio file the drawbacks are, firstly, that to play the files you need a player and, secondly, that you have to get the files. To obtain the files you could download them (from sites such as http://www.MP3.com), buy them online or at a computer show, or you could always make your own. Downloading MP3 files on a normal 56K modem is still painfully slow, with a five-minute tune usually taking about half an hour. Buying has the problem that the range is severely limited. Making your own MP3 file requires encoder software. With Linux we have encoders in the form of BladeEnc, LAME, oggenc and mp3encode. These take the file or files and output the required MP3
104mpplayers.qxd
02.02.2001
17:00 Uhr
Seite 105
MP3 PLAYERS
SOFTWARE
format file. On a standard 500MHz machine this usually takes double the time of the track to encode. As these are so time-consuming they are usually console-based, but graphical front-ends (such as Grip) are available. MP3 players usually support playlists. This means you can create a list of your favourite tunes to play from whichever directories that you’ve saved the MP3 files in. They can then be saved as a playlist. This has the advantage that you can have collections of themed music set up, depending on your mood, without having to search each directory again. Streaming is where the audio file is sent out in a continuous stream to be listened to live. For example, a music concert may decide to stream its broadcast so those connected online can listen in real time. If you want to stream data then you need to set up a streaming server (i.e. an Internet audio broadcasting system based on Mpeg audio technology). There are several, such as SHOUTcast and icecast. To receive the streamed data you need a stream compliant player.
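An .m3u playlist like the ones mentioned above is, at bottom, just a text file listing one path per line, so you can build one without any player at all. A minimal sketch using a throwaway directory (all the file names here are invented):

```shell
# create a couple of dummy tracks and collect them into an .m3u playlist
dir=$(mktemp -d)
touch "$dir/one.mp3" "$dir/two.mp3"
find "$dir" -name '*.mp3' | sort > "$dir/favourites.m3u"
wc -l < "$dir/favourites.m3u"   # prints 2
```

Point any of the players below at the resulting file and it will work through the list in order.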
Mpg123
Now at version 0.59r from http://www.mpg123.de/, mpg123 is a real-time MPEG audio player for Layers 1, 2 and 3 (MPEG 2.0 with Layer 1/2 is not heavily tested). It has been tested with Linux, FreeBSD, SunOS 4.1.3, Solaris 2.5, HP-UX 9.x and SGI Irix, and plays Layer 3 in stereo on an AMD 486-120MHz or faster machine. This is the base decoding engine used by many of the following players.
Grip
Grip is GTK-based and capable of handling all the encoders, but also supports ripping and playing of MP3 files. Currently at version 2.95, it is only a 155K download for the RPM. It has the ripping capabilities of cdparanoia built in, but can also use external rippers (such as cdda2wav). It also provides an automated front-end for MP3 encoders, letting you take a disc and transform it easily straight into MP3s.
Grip can also handle CDDB. This is where you decide to rip a CD album but, rather than having to type in all the ID3 information yourself, the system uses the Internet to look it up in a database. If the information is not there, you update the database once you have typed in the ID3 tags, on the basis that if everyone does a little it saves time for everyone. Using this system worked well – finding a popular Moby album instantly, although, as expected, an obscure album was not present. The lower part of the Grip screen is the built-in player. Grip also supports DigitalDJ to provide a unified computerised version of your music collection. As well as Grip, there is a CD player only version, called
[left] Grip before connecting to a CDDB [right] Grip after connecting to a CDDB and retrieving information
XMMS with a skin
GQmpeg
GQmpeg 0.8.1, by the same author as GQview, can be downloaded from http://www.geocities.com/SiliconValley/Haven/5235/mpeg-over.html. It is an X front-end to the mpg123 MPEG audio player, and similarly includes playlist support and playback options. GQmpeg requires mpg123 version 0.59o for actual playback of MPEG audio files; if you have mpg123 v0.59p or later then streaming inputs are possible. It supports Winamp skins as well as its own custom skins, and comes complete with a skin editor.
Kmp3
[top] Gamp showing you do not need X for fancy graphics
GCD, for those who are not interested in track ripping or encoding.
XMMS
XMMS started out as a clone of the popular Windows application WinAmp. It is currently at version 1.2.4 and available from http://www.xmms.org. Along with being an audio player it supports skins and plug-ins. Skins are similar to themes in that they allow you to change the look and feel of the player; this is done by creating bitmap files of the image you require and storing these in the ~/.xmms/Skins directory. Changing the skin is done either with the drop-down menu or by pressing Alt+S to bring up a list of those contained in the directory. The bitmaps are WinAmp 2.0 standard skins and can be placed in the directory in their compressed form. There is a built-in graphic equaliser and either an oscilloscope or spectrum analyser, and the volume can be controlled with a wheel mouse if required. If you are not content with these features, you can use the plug-ins that are available. These are varied, ranging from input decoders (Ogg Vorbis files are supported) to visualisation add-ons. Again the range is huge and includes diverse modules such as a Tux penguin dancing in time to the music, or spectrum analyser effects like a blur scope.
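Setting up the skin directory is a one-liner (the skin archive name below is invented, as an example):

```shell
# create the directory XMMS searches for skins (listed with Alt+S)
mkdir -p ~/.xmms/Skins
# WinAmp 2.0 skin archives can then be dropped in, zipped or unpacked, e.g.:
#   cp metal_skin.zip ~/.xmms/Skins/
```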
Gamp
If resources are low then Gamp, being a console-based player, could be the answer. Available from http://www-users.cs.umn.edu/~wburdick/gamp/, it is an ncurses MP3 player for Linux. The codec for Gamp is based upon amp by the Croatian Tomislav Uzelac. The ncurses interface gives most of the functionality of an X-based MP3 player without all the bulk, and without requiring X. As can be seen in the figure, it includes a spectrum analyser.
GQmpeg showing a port of a K-jofol skin.
Kmp3 1.0 has been released. Kmp3 is a KDE MP3 player that runs on a number of different Unix systems, including Linux. The MPEG audio engine is based on mpg123; however, the end result is a player that uses less CPU than the console-only mpg123 on many systems. Sporting an attractive, easy-to-use and full-featured GUI, Kmp3 is suitable for practically anyone. Capable of using both EsounD and ALSA, it can also be run from the command line with
kmp3 song1.mp3 ... songN.mp3
kmp3 *.m3u
The latter is for playing full playlists. http://www.kmp3.org/
Kmpg
Similarly to Kmp3, Kmpg is an MP3 player for KDE, downloadable from http://www.rhrk.uni-kl.de/~mvogt/linux/kmpg/. It supports playlists, has a built-in mixer for MP3 streams, and supports zipped Winamp skins. It is also an MPEG video player.
Kmp3 standard player
CDparanoia
CDparanoia III 9.7, from http://www.xiph.org/paranoia/, is a Compact Disc Digital Audio (CDDA) extraction tool, commonly known on the net as a ‘ripper’. The application is built on top of the Paranoia library, which does the real work (the Paranoia source is included in the CDparanoia source distribution). Like the original cdda2wav, the CDparanoia package reads audio from the CD-ROM directly as data, with no analogue step between, and writes the data to a file or pipe in WAV, AIFC or raw 16-bit linear PCM. CDparanoia is a bit different from most other CDDA extraction tools. It contains few-to-no ‘extra’ features, concentrating only on the ripping process and knowing as much as possible about the hardware performing it. CDparanoia will read correct, rock-solid audio data from inexpensive drives prone to misalignment, frame jitter and loss of streaming during atomic reads, and will also read and repair data from CDs that have been damaged in some way. At the same time, however, CDparanoia turns out to be easy to use and administer; it has no compile-time configuration, happily autodetecting the CD-ROM, its type, its interface and other aspects of the ripping process at runtime. A single binary can serve the diverse hardware of the do-it-yourself computer laboratory from Hell. ■
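Typical invocations look like this (shown as a sketch only – they need an audio CD in the drive, and the output file name is invented):

```
$ cdparanoia -B              # batch mode: rip the whole disc, one WAV file per track
$ cdparanoia 3 track03.wav   # rip just track 3 into a named file
```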
Kmp3 skin configuration
Kmpg showing a Winamp skin and the default player
Kmpg running a video CD
108crontab.qxd
31.01.2001
16:23 Uhr
SOFTWARE
Seite 108
CRONTAB
Crontables at the click of a mouse
TASK CREATOR AT YOUR SERVICE BY PATRICIA JUNG
The Cron daemon may be very useful for making the computer execute one task or another at specific times, but the format in which it accepts requests takes a bit of getting used to. Which is where graphical Crontab creation programs can help.
Standard output, standard error output: command line tools send their results to the pre-defined standard output channel stdout and their error messages to the standard error output stderr. Both are normally ‘linked’ to the screen, while the standard input channel stdin is fed from the keyboard. As a user, you have the option of telling a command to use a file instead of the screen as stdout: command > file. Similarly, error messages can be redirected to a file: command 2> file. ■
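The redirections in the box can be tried out directly; a small sketch (the file names under /tmp are invented):

```shell
# stdout goes to one file, stderr to another
ls /etc > /tmp/listing.txt 2> /tmp/errors.txt
# discard both channels completely, as the alarm-clock Crontab entry does
ls /etc > /dev/null 2>&1
```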
Whether you want to cleanse your hard disk at regular intervals of the remains of dying programs (the core files), remember the birthday of your beloved in time, or be woken tomorrow by your MP3 collection – none of this is a problem when the computer is running almost round the clock. Simply record a task for the Cron daemon, and it will all be taken care of. The catch: there is indeed a program named crontab. It invokes your favourite editor and uses a rough syntax check to prevent too much mischief finding its way into the system, but the bottom line is that you have to write the Cron table, with its very demanding structure, yourself.
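That structure is five time fields (minute, hour, day of month, month, day of week) followed by the command. A hedged sketch of one entry (the log file name is invented):

```
# min  hour  day-of-month  month  day-of-week  command
30     7     *             *      1-5          echo "wake up" >> /tmp/wake.log
```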
The acid test Creating Crontab entries is not exactly one of those things you stay in practice with by doing at least twice a day. It’s all too easy for a little error to slip into the time details or the syntax, and some newcomers may be so frightened by the syntactical hurdles that they will have nothing more to do with this useful tool.
Graphical front-ends for crontab suggest themselves as a way out of this dilemma, and we are now going to take a closer look at a few of these little helpers. Our test course is not completely undemanding. With a really complex invocation we will check whether each program can cope with long command lines (the Cron daemons found under Linux usually accept up to 1024 characters per Cron table entry). To do this we build an alarm clock from the two command line tools mix (ftp://sunsite.unc.edu/pub/Linux/apps/sound/mixers/mix-1.0.tar.gz) and mpg123. This will search the directory /music for MP3 files and play them from Tuesday to Friday at 07.25, at half volume (vb=50) and in a random sequence (-z), sending all output from standard output and standard error into the data nirvana of /dev/null. Four minutes later, hopefully, we are out of bed, and so want the annoying cacophony to be switched off automatically at 07.29 with killall. We also check whether each program actually makes use of the whole time spectrum of the Cron daemon (every minute). For those who always find it hard to
Fig. 1: kcron needs the initial user instructions
get out from under the duvet in the morning, the alarm clock goes off for a second time from 07.35 to 07.39. Next, we want to be presented with the morning paper every day at 8am (netscape http://www.thetimes.co.uk). Ideally, the program will let us know that we are going to have a problem with this if we are not logged on and there is no X server running under our name. As the fourth task, a self-written script from the subdirectory bin of our home directory will perform a few clearing-up tasks daily, every six hours between 0.00 and 23.00. We want to save ourselves the bother of specifying the precise path when entering the command, and therefore set the Crontab PATH variable in advance to $HOME/bin. With this we can test whether the front-end can set variables not just globally once at the start of the Cron table, but also before a new entry. As a rule, Linux Cron daemons only evaluate a few special variables; consequently, a really good GUI Crontab program does not allow just any variable to be set. Listing 1 shows the corresponding hand-written Crontab, which all the programs should also be able to read in. Particular attention is paid here to periods produced with - and / (0-23/6). These are recognised by the Cron daemons commonly used under Linux, but not by the Cron implementations of other Unix operating systems. Another test criterion for reading in the hand-written Crontab lies in the correct reproduction of the comments. Lastly, we check how each program acts with respect to impossible dates such as 30 February or 31 November. This, and a plausibility check of the program to be executed, may be requirements that not even the command-line crontab program meets. Since click programs are specially intended to make access easier for the less adept user, though, they will have to be measured against stricter criteria. Table 1 shows the most important criteria and
information on each individual program. For comparison, crontab, the command line tool installed as standard on almost every Linux system, is tested alongside.
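The - and / period notation the test exercises looks like this in a user Crontab (the cleanup script path is invented):

```
# every six hours, i.e. at 0:00, 6:00, 12:00 and 18:00
# (Linux Cron daemons also accept the equivalent short form */6)
0 0-23/6 * * * $HOME/bin/cleanup
```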
Listing 1: Example of a personal Cron table
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.1514 installed on Wed Jan 6 21:44:50 1999)
# (Cron version — $Id: cron2_red1.html,v 1.4 1999/03/06 22:44:46 lm Exp $)
# Alarm
25,35 7 * * Tue-Fri (/usr/local/bin/mix vb=50; /usr/bin/mpg123 -b 2048 -z `find /music -name \*.mp3`) > /dev/null 2>&1
# after 12 minutes turn off the annoying alarm clock
29,39 7 * * tue-fri killall mpg123
# The morning paper
0 8 * * * netscape http://www.thetimes.co.uk/
# From now on only programs in $HOME/bin without path specification
# will be found
PATH=$HOME/bin
# Clear up script
0 0-23/6 * * * clearup
[top] Fig. 2: Cleared up, but not finely adjustable
[above] Fig. 3: Shame kcrontab did not get there after KDE 2
Kcron
The fact that an all-round carefree desktop environment like KDE comes with a Crontab front-end is more or less to be expected. In fact this has only been the case since KDE 2.0: kcron, from the kdeadmin package or else installed as an individual RPM package, can be found in the K menu under System/Task Scheduler, or can be started with the command kcron & (Figure 1). Apart from the actual application window, a short set of instructions also appears – and these really are needed. Most users would presumably expect a new task to be created by selecting the Tasks folder with a double click; but then nothing happens, which means we must fiddle about instead by selecting the entry New... in the Edit menu, or quickly learn the keyboard shortcut Ctrl+N. The task creation window (Figure 2) is clear, but it only allows tasks at a granularity of one every five minutes. When importing an existing Crontab there are in fact no problems, even with a number of minutes not divisible by 5 – unless you want to alter those entries. If no month is entered, the program complains, and you are forced to tick every one of the 12 months individually if you want to imitate the little star in the month column with kcron. The details of the variables are explained: those permitted by Cron can be selected direct from
the menu and are immediately provided with an explanatory comment. However, kcron does not refuse made-up variables, and changing the environment for individual commands, by placing the appropriate variables before each Crontab entry, is likewise impossible. If you make use of the option of searching via the Select menu for the program to be executed, it will in any case be entered with its complete path, which takes the sting out of the criticism over the PATH variable. Picking the command this way also gets you a free check of its execute permissions. Both variables and Cron jobs may be commented; the only shame is that kcron refuses to import multi-line comments. The program also holds a surprise in the form of a very useful feature, even if it is not visible at first glance: Crontab entries can be deactivated in kcron. Behind the scenes such an entry is simply commented out; it is thus retained, can be modified and, if required, reactivated.
KCrontab
Anyone who has not yet changed over to KDE 2 can still make use of kcrontab, even if it is not that easy to start up this first KDE Crontab manager. Minutes and hours are entered manually (meaning you have complete format control, so can also use *), and thanks to the every month selection item you don’t have to click yourself to death. Apart from catching a forgotten time detail, though, there are no further plausibility tests, not even when setting variables. When entering a variable for the first time it takes some getting used to having to click on Edit to enter it into the list after setting it. Nevertheless kcrontab is much more pleasant to use than the KDE 2 program, especially when the Crontab format is not a complete unknown.
Cromagnon
Confronted with midnight, such a common time for Cron jobs, the program once conceived as the standard Crontab manager for the GNOME desktop refuses: once you realise that the witching hour in Cromagnon’s user dialog starts at 12am (Figure 4), you will no longer be disappointed. Erroneously, this detail is stored as 24 instead of 0. If you have just got over the fact that the GNOME Crontab manager so far only creates Crontables, but does not file them in the spool directory, you will now be saving the generated Crontab in a separate file and trying to install it with crontab filename. crontab will, however, rightly refuse to be convinced by the erroneous 24-hour specification, so the file needs editing by hand. Variables are not an issue for Cromagnon; comments, on the other hand, can not only be made,
108crontab.qxd
31.01.2001
16:24 Uhr
Seite 111
CRONTAB
SOFTWARE
but even have to be entered in the Description column. It is on the basis of the comment lines it generates, commencing with #CroMagnon:, that Cromagnon recognises whether it may modify the entry found in the following Crontab line. All Cron jobs not marked in this way are quite simply taboo for Cromagnon – which is annoying, since a Crontable painstakingly made by hand cannot be modified. On the other hand this has the advantage that non-standard extensions of the Crontab syntax, as used by special Cron daemons such as ucrond or hc-cron, remain untouched. But anyone interested in such special features has long since got the knack of maintaining Crontables by hand. The option of duplicating an existing entry using the Duplicate button, in order to adapt the clone later, saves time. There is no help in creating the task command, and since nothing has happened to the code for a long time, it is to be presumed that the project will not become a mature aid in the foreseeable future.
gat
The gap which Cromagnon leaves in the GNOME project is one which gat is trying to occupy – and in doing so it is taking refreshingly new paths. Confusing to Crontab veterans at first glance, the 0 8 * * * entry from Listing 1 is displayed as Every day at 8:00am, but this translation into natural language means the program can be used more or less intuitively by users without knowledge of Crontab syntax. Unfortunately this concept is not applied consistently throughout. The GTK Task Scheduler tries as far as possible to hide the cryptic Crontab syntax from the user, which is especially evident from the fact that a wizard leads the way through job creation. But this basically good idea is not always clearly thought out or completely realised. For example, it is only possible via the Custom option (Figure 5a) to create a task which is to be executed twice an hour but not exactly every half
hour, and here you need to know your way around the Crontab syntax a bit. Too much knowledge, however, is also dangerous: if, for example, a (legal) abbreviation for a day of the week such as mon (for Monday) is entered, nothing whatsoever happens, not even an error message. When entering a Cron job command, the wizard only provides a Browse... button as an aid (Figure 6), and accepts non-executable files without any problem. But this should not be held against the program: with its Test job button in the main window it is the only candidate in the test field with a test option for the entered command. All that was really missing here was a warning that the Cron daemon can execute graphics programs badly if one has not
Fig. 4: Blackbox: Cromagnon recognises only its own Crontable entries
Fig. 5 : Good ideas, not always ideally realised – gat
Fig. 6 : Plus points for the only Crontab manager with test options for Cronjobs
At-Job: With the at program, jobs can be defined which are to be executed at a certain time – but, unlike Cron jobs, once only instead of repeatedly. GUI toolkit: a programmer’s library, made available as a toolbox of components – windows, menus, buttons, selection lists etc. – for constructing graphical user interfaces. ■
logged on under X, then this point would be realised almost perfectly. The ‘Recommended’ rating is however foolishly thrown away by gat, because it provides no editing option of any kind for jobs once created – so at present all you can do is delete and rewrite. Also, some of you may take a bit longer than expected when first starting, before finding out that to create Crontabs the Recurring jobs tab has to be selected: by default, you land in the One-time jobs index card, which creates At-jobs.

Visual Cron
Apparently just as popular with Crontab-substitute authors as the GUI toolkits Qt and GTK is the combination of the script language Tcl and the toolkit Tk. The protagonist with the longest history in this illustrious tour is called vcron, and it helps not only with the syntactical tricks of a Cron table but (like gat) also with those of an At-job. But there’s no fool like an old fool, and so vcron disappoints by supporting neither comments nor variables. Our long-winded alarm clock entry flummoxes the tool as soon as you try to change it, and the 0-23/6 entry is not correctly interpreted.
Table 1: Comparison of Crontab managers

kcron 2.0pre (KDE mirrors)
Requirements: KDE 2.0/Qt 2.x. Accuracy when creating the Crontab: 5-minute cycle. Rejects impossible dates like 30.02. or 31.11.: no. Time details with - and /: on import. Command length: 100 symbols. Testing commands: no. Executability check via selection dialog: yes. Comments: yes, for variables and task details. Variables: any, with menu, each settable only once per Crontab. Reads in crontab-installed Crontables: yes. Imports hand-written Crontabs from other files: no. Files the created Crontable in the Cron spool directory: yes. Support for at: no.

kcrontab 0.2.2 (RPMFind, e.g. http://rpmfind.net/linux/RPM/powertools/6.0/i386/kcrontab-0.2.2-1.i386.html)
Requirements: KDE 1.0/Qt 1.x. Accuracy: 1-minute cycle. Rejects impossible dates: no. Time details with - and /: on import and when creating. Command length: 100 symbols. Testing commands: no. Executability check: no. Variables: any, each settable only once per Crontab. Support for at: no.

cromagnon 0.1 (http://www.andrews.edu/~aldy/cromagnon.html)
Requirements: GNOME/GTK. Accuracy: 1-minute cycle. Rejects impossible dates: no. Time details with - and /: no. Testing commands: no. Comments: yes. Variables: no. Files into the spool directory: no. Support for at: no.

gat 0.9 (http://www.cs.duke.edu/~reynolds/gat/)
Requirements: GNOME/GTK. Accuracy: 1-minute cycle. Rejects impossible dates: no. Time details with - and /: on import. Command length: 1024 symbols (does write longer commands, but then crashes). Testing commands: ”Test job” button. Executability check: no. Comments: yes (but cuts comments as soon as a new job is created with gat). Variables: no. Support for at: yes.

vcron 1.5 (http://www.linux-kheops.com/pub/vcron/vcronGB.html)
Requirements: Tcl/Tk 8.0. Accuracy: 1-minute cycle. Rejects impossible dates: no. Time details with - and /: no ”/” on import. Command length: > 4000 symbols. Testing commands: no. Comments: no. Variables: no. Support for at: yes.

tkt 1.1 (http://www.spin.net.au/~mich/tkcron/)
Requirements: Tcl/Tk 8.0. Accuracy: 1-minute cycle. Rejects impossible dates: no. Time details with - and /: on import and when creating. Command length: 2064 symbols. Testing commands: no. Variables: MAILTO, SHELL, PATH. Reads in installed Crontables: only from ~/.tct. Support for at: no.

crontab (Vixie-Cron) (usually part of the basic installation)
Requirements: command line. Accuracy: 1-minute cycle. Rejects impossible dates: no. Time details with - and /: yes. Command length: > 4000 symbols. Testing commands: no. Comments: depend on the editor. Variables: yes. Reads in installed Crontables: yes. Files into the spool directory: yes. Support for at: no.
108crontab.qxd
31.01.2001
16:24 Uhr
Seite 113
CRONTAB
SOFTWARE
Fig. 7: vcron even falsifies standard Cron entries
Another disappointment was the user interface: why tabulator symbols are displayed as \t, when it makes no difference to Cron whether one uses a blank space or a tabulator as column separator, is something you will have to ask the author himself. The fact that the at window, even when empty, still claims the same amount of space as the Cron part also no longer corresponds to modern GUI standards, and the helpful feature of having the current time always in view does not really make up for this (Figure 7).
tct
The second candidate from the Tcl/Tk faction is hesitant at first even to read in an existing Cron table: only when the installed Cron table has been read into the personal tct configuration file, e.g. with crontab -l > ~/.tct, can it be loaded into the editor window using the Get Crontab button and edited there. Install then actually does what it is meant to, and installs the modified Cron table. Anyone wanting to click together a new job should get ready for a tiresome procedure: each of the five time-setting buttons has to be set to a specified value and a command selected before the task can be included in the editor window via the Add button, where it can then only be altered manually. The respective selection dialogs are very powerful, though there is no display showing which button already has a value defined, and the whole thing is not very helpful, since the typical Tcl/Tk error messages usually give more cryptic than meaningful information. Astonishingly, on the other hand, our over-complicated 0-23/6 detail was converted by tct into the identical but more compact form */6 in the Cron table. Once all times have been set, they are stored as default values for the current session until something else is changed, so for additional jobs only times which deviate from the previous settings have to be set anew. The Find button for selecting a command produced only error messages in the tested version, though, so there is no alternative to entering the command by hand. Using the Environment button, the three most common Cron environment variables can be entered at the start of the Cron table; anyone wanting to set them specially for a job has to modify them in the editor. This is presumably where the main benefit of tct lies hidden: one quickly tires of the fiddly user interface, but thanks to the editor being visible at all times it is a valuable learning tool.
One very soon learns how to write Crontables by hand.
What’s left
The conclusion of this test is sobering. Apart from gat, all the programs presented try to imitate the crontab original graphically, with varying degrees of success. But, not least because of this paucity of ideas, not one of them manages to be an adequate (never mind a better) substitute. Apart from the ability to click together the time details (which is often in need of improvement itself), additional benefits are few and far between. Perhaps the most useful programs are those which make themselves superfluous as quickly as possible by turning out to be learning aids for Crontab syntax. One candidate or another may help newcomers over the syntax hurdles of simple Cron jobs, but for more demanding contemporaries all the tools tested get stuck down blind alleys or are too basic. The only comfort here is that one can in any case only rely on the aid of Cron table managers for personal Cron tables. When it comes to managing the system-wide /etc/crontab, it would be better to use your favourite text editor. ■

Info
Cron daemons & co. are available for download: ftp://sunsite.unc.edu/pub/Linux/system/daemons/cron/ ■

Fig. 8: Powerful, but extremely fiddly – tct
114mindrover.qxd
31.01.2001
16:37 Uhr
SOFTWARE
Seite 114
MINDROVER
Linux Games are here!
MINDROVER FIONN BEHRENS
When I was about 16, during my blessed Amiga days, a game called C-Robots fell into my hands. It involved using the programming language C (in simplified form) to program a robot, which then had to defend itself independently against other robots with the aid of this program. The game introduced me to the C programming language, and I have always wondered why nobody was making such a game in a more modern form – till now. And what lies before me now definitely has a high addiction factor.
Fig. 1: Well-made online help and context-oriented tutorial
There has certainly been some progress in terms of form in the past ten years: the robots no longer race around as red and white dots on a whitish-grey chequered square, but are shown smart and full of detail in 3D, racing through arenas full of obstacles and traps, equipped with an enormous selection of accessories and aids. But I digress – let’s start from the beginning. The Mindrover game has been around for some time for an unimportant operating system, but now its maker Cognitoy has decided to bring the new and heavily revised version of the program to Linux. The advantage of this is that even a rank beginner can immediately access a large collection of completed robots, forums and add-ons on the Web, which provide help and examples if you have a problem. In practical terms the new version is fully compatible with the old one; old robots just have to be recompiled. First, the game has to be installed. While it is common nowadays to see games gobbling up disk space by the gigabyte, at just over 60MB Mindrover’s appetite will strike you as pretty laughable. As usual, installation goes smoothly with the Loki installer (which also brings with it the necessary icons for the Gnome and KDE desktops). As already mentioned, the game requires a 3D graphics card with OpenGL support; apart from this the demands made on the computer are only slight. On older computers the game takes a bit longer to start, but even with a P200 everything should then go smoothly. After the start Mindrover welcomes you with a log-in screen, so that several users can use the one program with different settings and robots. After
Fig. 2: Enormous varieties of scenarios allow for a multiplicity of solutions
entering your name, you come to the console, where you have to decide on a scenario. Mindrover offers a very large and effective selection of task settings for the ambitious robot constructor. The scenarios are divided into several sub-groups such as Fight, Sport, Race, Various and Training. With regard to training: practically the entire content of the well-made manual can also be found within the game, along with occasional context-sensitive online help, and an extensive tutorial competently introduces you to the first steps towards your own robot. Of course, old favourites such as Rocket Arena are among the scenarios, but also others such as Sumo Wrestling, in which the aim is to avoid getting caught. There are some where you have to find and collect as many valuable items as possible, and various types of races; many other challenging and original situations are provided by the game. To fulfil this multitude of tasks the robot can be issued with an equally large number of sensors and devices. These are then linked with each other using the built-in graphical programming interface and must be made to co-operate. To do this, the programmer has the use of logic operators, counters, registers, status models and much more in graphical form, all well explained and very simple to use. Two components are always linked by mouse through a vector, which then defines the type of interaction. That is very simple in principle, yet one very soon reaches a degree of complexity in the wiring that makes it hard to keep an overview. But it appears that even users with no programming experience cope very well with this system. However, it is advisable to assign each component and each vector a meaningful name right at the start. If you have finally created something and think it really ought to work, you can pit it against another robot.
Neatly enough, CogniToy has also thought of ways of issuing status messages and the like, in the form of text, coloured lights or even fireworks – because your own creations do not always behave as intended. To be more precise, they hardly ever do. The beginner rapidly learns that robotics really is a complex field of activity. Unfortunately, the occasionally clumsy and hard-to-influence camera makes it difficult in complex scenarios to observe the action close up and from a good angle; there is some room for improvement here.

If at some point the perfect robot is completed, it can be swapped with friends, with or without the graphical source code, and your creation can then be set against their robot. This is where the true challenge of the game lies, as the robots supplied with the game may be very good but are not exactly works of genius.
Fig. 3: Robots can be programmed to be as complex as you like – but more often than not simple programs are the most effective
6 · 2001 LINUX MAGAZINE 115
[top] Fig. 4: Strike! But a few seconds later both are badly hurt... [above, left] Fig. 5: The finished vehicle [above, right] Fig. 6: Parental nightmare 2010: the children’s programmable fighting machines turn the hall of your house into a theatre of war
The author Fionn Behrens is a student of technical IT. But you can’t study all the time... He can be contacted on the Net as Fionn at IRCnet.
By its nature the game has no multi-player mode or anything similar, and the 3D shooter fan will probably find it dreadfully boring. I can only say that once you have built something yourself and it wins, you'll spend the whole day thinking up new algorithms and dreaming up further improvements to your own baby. The construction of a new machine can easily bind you to the screen for many hours – and when else could you have such fun giving yourself a splitting headache?

The only thing I really missed was an option for experienced programmers to program the machines in text mode, bypassing the graphical interface. This is in fact possible, because the graphical wiring diagram is translated by the game into source code, which a compiler then turns into an executable robot. These sources can also be edited and compiled outside the game (although the graphical representation then gets lost), but the game has to be restarted each time, and MindRover is not exactly a fast starter, so conventional text-based debugging of a robot rapidly turns into a game of patience. Also, the programming language is a completely in-house development and very limited in many respects.
Conclusion

Negative: OK, something did happen, but I'm not at all sure all the improvements will really be better. It may be that I'm now on the wrong side of the fence to judge whether it is really useful to tempt in computer newbies with a purely graphical programming interface. As a vi-spoilt Visual Basic hater I certainly had some trouble with what MindRover presented me with here. Nevertheless it was more fun than bother.

Positive: Forget crossword puzzles, forget logic games. This thing doesn't even cost one fifth of the price of an ordinary Lego Mindstorms basic building set, but can be at least as challenging. Anyone who could use an 'educationally valuable' change from the standard 3D bang-you're-dead brew – one that also exercises the grey cells and has a true long-term fun factor – should grab it. ■
Evaluation:
Long-term game fun: 95%
Graphics: 68%
Sound: 40%
Control: 55%
Multi-player: n/a
Total score: 80%
COMMUNITY
FREE SOFTWARE
Free software is a matter of liberty, not price
HOW FREE IS FREE? RICHARD SMEDLEY
Free Software (FS) and Open Source Software (OSS): perhaps you have heard these terms used interchangeably, perhaps even in opposition by the two sides of a licensing argument. We attempt to tease apart the meanings of these two phrases and the purposes of the movements behind them, and finish up with a brief look at software licenses for your project.
There are many definitions of Free and Open Source Software, so let's start with the Free Software Foundation (FSF): "Free software is software that comes with permission for anyone to use, copy, and distribute, either verbatim or with modifications, either gratis or for a fee. In particular, this means that source code must be available. If it's not source, it's not software." Before we go deeper into this we need to journey back into the history of the FSF and the Free Software movement. Writing software is like any other scientific endeavour: there is a process of discovery, then one of justification. The hypothesis, test conditions and results must be shared with other scientists, to see if the process is replicable – this is the justification stage. There must be sharing of the source code – the hypothesis and test conditions – for others to build upon each discovery and advance that area of science. Scientists have occasionally lapsed into secrecy due to strong rivalry, and some discoveries (Mendel and his vertically challenged peas) are made in total isolation, but ultimately science only advances through the sharing of ideas. Originally the world of computer science was just like this: sharing the source was taken for granted. Outside circumstances led to changes, thoroughly documented elsewhere by Eric Raymond (conventionally referred to as ESR), the semi-official anthropologist to the hacker community. Like much of modern computing, our story starts at the MIT Artificial Intelligence Laboratory in the 1970s. Here the Lab's hackers had written the
Incompatible Timesharing System (ITS) in assembler code to replace the Operating System (TOPS-10) supplied with its PDP-10 minicomputer by the manufacturer, Digital Equipment Corporation (DEC). It was in 1971 that Richard Stallman (conventionally referred to as RMS) joined the MIT AI Lab and became immersed in their culture of hacking and code sharing. It was 1971, coincidentally, when Ken Thompson and Dennis Ritchie won an internal contract at Bell Labs to produce an office-automation system using their recently developed Unix operating system. Three years later they had recoded it in C and ported it to several different machines. By 1983, when DEC cancelled plans for a follow-up to the PDP-10 range, Unix (usually running on a PDP-11 or Vax) was a strong alternative to the PDP-10/ TOPS-10 solution previously favoured by academia and research laboratories.
Cooperation is forbidden

Many people know the story of RMS being refused the source code for the control program of the Lab's printer. This crystallised his opposition to the closing-off of source code into proprietary programs and he became a fierce opponent of the commercialisation of the Lab. In 1982 the MIT AI Lab, having lost many of its original ITS team to new computer companies, went with DEC's own, non-free timesharing OS for its new PDP-10. To use the OSs of the time one had to sign a non-disclosure agreement just to get an executable copy. Stallman
was faced with a choice. He could ignore his principles and work with a system based on not sharing with, and not helping, fellow members of the hacker community – or leave computing, which would have squandered his skills and training. RMS came up with a third option. He decided to leave the AI Lab and found an organisation, the Free Software Foundation (FSF), to write a free operating system (OS) and to encourage a worldwide community of co-operating hackers. The OS would be made compatible with Unix so that Unix users could easily switch to it, and it would be easily portable. Following a hacker tradition, the self-recursive acronym GNU, for GNU's Not Unix, was chosen. In 1984 changes at AT&T meant Unix becoming a fully commercial product. Having seen freely shared code taken up by commercial organisations and put into proprietary software, RMS worked on a license to protect users' freedom to "run, copy, distribute, study, change and improve the software." The FSF defined the Four Freedoms:
• The freedom to run the program, for any purpose (Freedom 0)
• The freedom to study how the program works, and adapt it to your needs (Freedom 1). Access to the source code is a precondition for this.
• The freedom to redistribute copies so you can help your neighbour (Freedom 2)
• The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (Freedom 3). Access to the source code is a precondition for this.
To protect these freedoms the GNU General Public License (GPL) was developed. Described as "a necessary evil", the GPL is the classic "you are free to do what you like, as long as you do not remove freedom from others" license. The restrictions in the license apply only to those distributing modified forms of a GPL'ed program, and are designed to pass on the same freedoms that you had with the code.
In recent years the license has become the battleground between the worlds of proprietary and free software; we will return to this theme later with a look at available licenses. The GPL, and the freedoms it protects, have become the standard against which other licenses are measured. This is in part due to one young programmer's decision to adopt the GPL for a particular piece of code he had written – something he has said is the smartest decision he made.
Enter the Penguin In their first decade the FSF were quite successful at producing most of the unglamorous programs that go to make up an operating system - such as the linker, assembler, C library and so on. Following
contemporary OS theory they were developing a microkernel - the HURD - and it was taking a long time. In 1991 this gap was filled by Linux. Wanting to run a Unix-like system on his 386 PC, and dissatisfied with the shortcomings of Minix (a cheaply available academic OS), Helsinki University student Linus Torvalds developed a kernel using the GNU tools. He released the source code on the Internet and a group of hackers rapidly grew around the project. Within two years GNU/Linux had become a stable OS, competing with commercial Unices and attracting ports and new software. Free Software was now well and truly competitive.
Open Source

The growth of GNU/Linux, in comparison to FreeBSD as well as to proprietary software, is often attributed to the GPL, the license which protects freedom. Although the GPL has always been anti-proprietary, it has never been anti-commercial; indeed it insists that there be no restriction on the commercial use of a piece of software. Nevertheless a number of supporters of freedom and Free Software are anti-business, and whilst this has been no barrier to businesses releasing their software under the GNU GPL, it was seen by a number of influential Linux figures as a barrier to the further growth of Linux. Meeting on February 3rd 1998 in Palo Alto, California, to discuss the opportunity presented by Netscape's decision to open up the source to its browser, Eric Raymond, Jon 'maddog' Hall and Larry Augustin (both of Linux International), Sam Ockman (of the Silicon Valley Linux User Group), Todd Anderson and Chris Peterson (of the Foresight Institute) were looking to make a pragmatic case to businesses. Concerns of freedom, responsibility and ethical issues were seen as obstructive to getting businesses on board, so a "better" term was sought. Peterson came up with "Open Source" and a new movement was born. Linus Torvalds, Bruce Perens (of Debian) and Phil Hughes (editor of Linux Journal) soon became involved, and on the wave of publicity surrounding Netscape's announcements the term – and the movement – hit the mainstream, being profiled in the Economist and Forbes before the year was out.
But what about freedom?

Richard Stallman of the Free Software Foundation has been quite critical of the term "Open Source" and has covered a number of the above points on the FSF's Web pages. His strongest criticisms are reserved for companies who use their association with the Open Source movement to leverage proprietary (non-free and closed source) products, which has taken a number of forms. For example, the featured speaker at a Linux trade show in late 1998
was an executive from a prominent software company that had decided to support Linux. Unfortunately, their form of support consists of releasing non-free software that works with the system – using the Free Software community as a market but not contributing to it. He said, "there is no way we will make our product open source, but perhaps we will make it 'internal' open source. If we allow our customer support staff to have access to the source code, they could fix bugs for the customers, and we could provide a better product and better service."

As a term, Free Software suffers from the dual meaning of the English word "free". Many other languages have separate words for "without cost" and "without restriction", and in parts of Europe FS is referred to as "Libre Software". However, it is a simple matter to explain that it's about freedom: "think free speech rather than free beer". Open Source implies nothing about protecting the freedom to run and distribute modified copies, and has led to a plethora of different licenses (see table) which allow access to the source code but place burdensome restrictions upon the use, modification or distribution of the software.

[Table: "Given free license" – FSF approved, GPL-compatible, OSI approved and copyleft status for each of: GNU General Public License (GPL); GNU Library or 'Lesser' Public License (LGPL); Guile/GNU Ada; X11/XFree86/Cryptix; Original BSD; Modified BSD; The Artistic License; Clarified Artistic License; Netscape JavaScript License; Netscape Public License; Mozilla Public License 1.0 (MPL); Mozilla Public License 1.1 (MPL 1.1); Qt Public License (QPL); IBM Public License; Sun Public License; Sun Community Source License; Sun Internet Standards Source License 1.0; Sun Solaris Source Code (Foundation Release) 1.1; MITRE Collaborative Virtual Workspace License (CVW License); Ricoh Source Code Public License; Python license (1.6a2 & earlier); Python license (1.6b1 & later); zlib/libpng license; Apache Software License; Zope Public License; Apple Public Source License (APSL); Intel Open Source License; Jabber Open Source License.]

The Open Source Initiative (OSI), as well as publishing a definition of Open Source – based upon Perens' Debian Free Software Guidelines – uses the
OSI Certified Mark on licenses they believe are compatible with their Open Source Definition. This, they say, is because "the term 'Open Source'... has become widely used and its meaning has lost some precision." Perens, on the OSI Web site: "To be Open Source, all of the terms below must be applied together, and in all cases. For example, they must be applied to derived versions of a program as well as the original program. It's not sufficient to apply some and not others, and it's not sufficient for the terms to only apply some of the time. After working through some particularly naive interpretations of the Open Source Definition, I feel tempted to add – this means you!"

The Open Source argument – that to businesses appearance is everything, and the word 'free' is a great obstruction to business involvement – is counterbalanced by OSS advocates Chris DiBona, Sam Ockman and Mark Stone in the introduction to Open Sources: Voices from the Open Source Revolution: "The success of the open-source movement does not depend on businesses adopting it. It's not 'in the market' except in the sense that movement is in the bazaar. Nobody needs to buy it for it to succeed. The success of open source software depends on people taking pride in their work and in doing it right, and deriving their sense of worth from that. That the products are useful and desirable flows
from the success of craftsmanship, not the other way around."

OSS and FS are not two factions of the same ideology, with the same enemy but different tactics. They are two different ideologies with different enemies but the same tactics and short-term goals. For Free Software advocates the enemy is restrictions upon the freedom to share knowledge. For Open Source proponents the enemy is poorly written software, particularly products in a monopoly position with no likelihood of change.

There is no doubt that we owe a colossal debt to the Free Software Foundation for the GNU project and the GNU GPL, as well as for a whole generation of programmers brought up on the benefits of gcc and other free tools. However, it is also true that some of the recent growth of GNU/Linux is due to corporate interest in the practical benefits of the Open Source idea, and that a number of these corporations are unhappy with ideas about freedom – unhappy enough that only the different emphasis of the Open Source movement encouraged them to GPL their software. We have seen, however, that many corporations have little interest in the open source community, only in attaching themselves to the kudos of the Open Source tag. To remain free, Open Source needs constant explaining. For Free Software there is no doubt that it is about freedom, and that the source should always be available. Sticking to purely free software may mean missing out on some tempting closed-source apps in the short term, but the better quality software will continue to arrive. At the moment there is little practical difference between FS and OS. If people cease to value the freedom of their software, will that always be the case?

To protect the freedoms discussed above the GNU project uses copyright law to enforce the freedoms of the GPL. As well as declaring the right of anyone to run, copy, modify and distribute modified copies of the software, it refuses these rights to anyone who seeks to add restrictions of their own.
This reversal of the traditional use of copyright law has been named copyleft, following a quip scribbled on the envelope of a letter to RMS: "copyleft – all rights reserved". Some FS licenses do not protect software from future restrictions; many programs under these non-copyleft licenses have been absorbed into proprietary code, with further development not returned to the community. The GPL has been covered earlier, so we turn to the LGPL. The GNU Library General Public License was originally conceived for tactical reasons. The GPL does not allow a non-free piece of code to be linked to GPL'ed code; since there were already many C libraries, the GNU C library was given a special license so that it would be more widely used, leading to the rapid spread of the GNU tools. For specialist libraries, such
as the GNU Readline, developed to provide the Bash shell with command line editing, the GPL is more appropriate, as it gives an advantage to free software (i.e. only free software can link to it). As people were beginning to LGPL their libraries as a matter of course, the LGPL was renamed the GNU Lesser General Public License, to give a less misleading impression. The licenses for Guile and the GNU Ada compiler are similar. There are many free licenses that are incompatible with the GPL, due to restrictions on use or modification of the software – see table. However, the largest group of GPL-compatible free licenses to "rival" the GPL is the X11/BSD type of license.
The author Richard Smedley is an organic gardener by training, an engineer by temperament and a writer because he has to pay the rent.
Do what thou wilt

In the 1980s the many competing windowing systems for Unix were vanquished by the X11 windowing system. This was licensed under permissive (non-copyleft) terms, which gave users permission to do what they liked with the code, but placed no restrictions upon taking the code and making it proprietary. Thus commercial Unix vendors soon each had their own proprietary X11 version. If your aim is many users for your standard, then this is a useful license; however, it does not protect the freedom of future users of the software. The modified BSD license, under which the "other free Unix-like operating system" – FreeBSD (and its close relatives) – is released, is a similarly permissive license. As BSD-licensed code can be linked with any code, proprietary or not, some developers and firms see this as a big advantage. Earlier versions contained an advertising clause, insisting on credit for earlier authors being placed in advertisements for modified versions, which resulted in some advertisements containing 70 or more credits. It is the permissive nature of the license that has helped to attract Apple to use FreeBSD as the heart of its soon-to-be-released OS X. If you really want to investigate the minutiae of all the other free software licenses, the list at the FSF Web site is a good place to start. However, my personal advice is that your time would be much better spent coding! ■
Info
The Free Software Foundation: http://www.fsf.org
The GNU project: http://www.gnu.org
Open Source Initiative: http://www.opensource.org/
O'Reilly open source network OSS news site: http://opensource.oreilly.com/
Open Sources: Voices from the Open Source Revolution, various authors, O'Reilly 1999. ISBN 1-56592-582-3
Eric Raymond's histories of hackers and the Open Source movement: http://www.tuxedo.org/ ■
Yet more licenses...
Other GPL-compatible FS licenses include: iMatix Standard Function Library; W3C Software Notice and License; Berkeley Database License (as published 1999-09-12). Other GPL-incompatible FS licenses include: Arphic Public License (no incompatibility when used for fonts); OpenLDAP License; Phorum License; LaTeX Project Public License; Netizen Open Source License (NOSL), Version 1.0; Interbase Public License, Version 1.0; Freetype License; Open Compatibility License; PHP License, Version 2.02 [this is used for PHP4; PHP3 is dual-licensed under the GPL]. Non-free: Plan9 license; Open Public License.
COMMUNITY
NATIONAL INSTALLATION EVENT
More chances to give away your favourite OS
INSTALLATION FOR THE NATION RICHARD SMEDLEY
Linux may have been gaining ground in the last few years, but it remains in minority use on desktops. Those frustrated with this position will be happy to hear that a national Linux event, in the form of an Installation Day, is being planned for 29th April.
Don’t forget the signature
Events will be taking place across the country, but your help is needed to organise and run them. As well as individuals and User Groups, Linux businesses will be involved in giving free demonstrations to business users.
Use the LUGs
Info
Linux Day: http://www.linuxday.org.uk/
UK LUGs: http://www.lug.org.uk/
Cheap CD-ROMs of GNU/Linux distributions are available from a number of firms. Follow the links from any Linux portal, such as http://www.linux.org.uk/ ■
Local User Groups will be the main focus for events, co-ordinating installfests and local publicity across much of the country. If you are not yet in touch with your nearest LUG, see our User Group pages – they will be glad to hear from you. Those in a region without an active LUG can find out about forming one at the UK LUG Web site, where you can also find LUGs not yet listed by us. Details of the events can be found at the Install Day site. Installing a new OS is not always a straightforward business (even with Linux), and there is a lot of discussion on the mailing list about the pros and cons of this approach. Alternative promotional events will be run alongside installations, including software demonstrations and possibly group training events. Shops are encouraged to participate, and an installation pack is available for them on the Web site.
Details of installation procedures are up on the site. Clear notice of limited liability for damage caused should be given, and advance notification of hardware specifications should be asked for. Get PC owners to sign the waiver. You may want to insist that hard disk drives arrive ready-partitioned, or have a separate area at your event for this. Useful on the day would be handouts of URLs for help, HOWTOs and manpages, online books, LUG details, book lists and magazines. We will follow up preparations for Linux Day over the next two issues, and look forward to hearing about your own events.
Back to school

As a community event looking to reach the most users interested in free (in every sense of the word) software, schools may be particularly good candidates for hosting events. A Linux install on a spare school desktop or two (and maybe a server as well) may be the beginning of a total move to free software for some schools. Don't forget that many schools still have scores of old Acorn and Amiga machines, as well as old PCs and Apple Macs, often with network cards, into which Linux will breathe new life. Distributions for these architectures are easily available. ■
COMMUNITY
The monthly GNU Column
BRAVE GNU WORLD GEORG C. F. GREVE
Debian Jr.

The Debian Jr. project has been started by Ben Armstrong in order to create a special Debian distribution for kids from 1 to 99, with the goal of introducing children to using computers in general – and the Debian system in particular – as early as possible. Using a standard Debian GNU/Linux or GNU/HURD distribution should then hold no problems for this generation. The current focus of development is the age group up to 8 years, with the group up to 12 years to be approached in the next step.

The initial target audience consists of parents, teachers and older friends and relatives who are using Debian already and wish to share it with the kids in their lives. Strictly speaking this gives the Debian Jr. project two target audiences. The aforementioned group will be setting up and administering the systems, so the goal here is to make the installation of kid-friendly packages and prepared setups as easy as possible. This will be done the Debian way, by creating task packages for the different tasks, with dependencies on the packages each task requires. The second target audience is, of course, the kids. For them it is important that programs are easy to use and tailored to their needs (using the programs also needs to be fun, as this is an important motivational factor).

But there is a second aspect to be considered when modifying a system for children. As Ben Armstrong said, "If there is any way to break a system, a kid will find it very, very quickly". His own daughter almost drove him nuts by changing her password on a daily basis and promptly forgetting it the next day. And then there was the day when she decided to load every file she could get hold of – including MP3 files, tar archives, the kernel and /dev/dsp – into the buffer of her pico editor. For this reason some attention should be devoted to kidproofing the system.
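One flavour of such kidproofing is simply capping what a process may consume. A real Debian Jr. setup would use system-wide mechanisms such as disk quotas or PAM limits, but the idea can be sketched with Python's standard `resource` module (Unix only; the specific numbers below are purely illustrative):

```python
# Sketch of "kidproofing" via per-process resource limits (Unix only).
# The limits chosen here are illustrative, not Debian Jr. recommendations.
import resource

def apply_kid_limits():
    """Lower the soft limits for file size and open files.

    Only the soft limits are touched, so an administrator (or a
    privileged process) can still raise them again later.
    """
    # No single file larger than 10 MB.
    _, hard_fsize = resource.getrlimit(resource.RLIMIT_FSIZE)
    resource.setrlimit(resource.RLIMIT_FSIZE, (10 * 1024**2, hard_fsize))
    # At most 64 open file descriptors - enough for an editor,
    # not enough to open "every file you can get hold of".
    _, hard_nofile = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard_nofile))

apply_kid_limits()
soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft)   # 64
```

Limits set this way are inherited by child processes, which is what makes them useful for a login session.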
Steps like the liberal application of limits and quotas, as well as restricting access to some tools and functions, are a very good idea. It is only a rumour that some system administrators already dream about using the
experience for setting up workstations in their companies.

According to Ben Armstrong, Debian has been chosen as the basis for this project because of its strong Free Software philosophy. It is an open system that has seen a lot of users become maintainers, expanding and improving the system; to him this seems like the perfect basis on which to introduce children to computers. One of the biggest strengths of the system – being maintained by a lot of volunteers – is also one of its greatest weaknesses, as co-ordination of the participants is sometimes problematic. Of course Ben cannot manage such a project without help, but the number of volunteers is much too large to mention everyone here; at the moment there are about 160 subscribers on the mailing list. There is still a lot to be done, though, and help is always welcome – especially with translating the packages into other languages, which seems like a good idea since a lot of children do not have English as their native language. Additionally, the project is currently looking for a suitable logo, and it is planned to create special themes to give people the chance to identify a system as Debian Jr.

Personally I consider this project to be very important, since it introduces children to computers early and at the same time shows them how systems can be built upon the spirit of freedom. This serves as a very good first experience with information technology. That said, I'll come to two projects of general interest.
Welcome to another issue of Georg’s Brave GNU World
GTKtalog

GTKtalog, by Yves Mettier, is a GTK+ based program for cataloguing CDs. Owners of big CD collections should find it especially useful, as it makes finding files on them easy. The application is already very good as far as creating a catalogue of your favourite MP3 CDs goes, but GTKtalog has an additional feature that makes it even more useful: it supports virtual file
The Debian Jr. project is looking for a logo
systems like tgz or rpm, so files within these archives can be found without a problem.

One of the most pressing problems, according to Yves, is the lack of documentation and the absence of German and Italian versions of the menus; for several other languages the internationalisation is already complete. His other plans are to improve functionality – some parts contain solutions that he would not call bugs, but that could be solved in a more user-friendly and elegant way. One thing emphasised by Yves was that he would prefer outside help with completing the documentation: his reasoning is that he would automatically write it from an author's point of view, which is not necessarily the most useful perspective for the reader. For further development there is a rather voluminous TODO file that the author wants to work his way through. But despite being under development, GTKtalog is definitely already usable and has found its way into the Mandrake distribution.

On top of all this, Yves Mettier wanted to point out that for him one of the best aspects of developing are the e-mails he gets – sometimes it is a little like Christmas, when people send him another translation or a pretty big patch. Of course GTKtalog is released under the GNU General Public License.
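The "virtual file system" idea – indexing files inside tgz or rpm archives so they can be searched without digging out the CD – is easy to demonstrate with Python's standard `tarfile` module. The snippet below is a generic illustration of the principle, not GTKtalog code (GTKtalog itself is written in C against GTK+):

```python
# Demonstrates cataloguing the contents of a .tar.gz "virtually":
# the file names inside the archive become searchable entries.
import io
import tarfile

def index_tgz(path):
    """Return the names of the regular files stored inside a .tar.gz."""
    with tarfile.open(path, "r:gz") as tar:
        return [m.name for m in tar.getmembers() if m.isfile()]

# Build a tiny archive in place of a real CD image, then index it.
with tarfile.open("demo.tar.gz", "w:gz") as tar:
    for name in ("songs/track01.mp3", "songs/track02.mp3"):
        data = b"fake audio data"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

catalogue = index_tgz("demo.tar.gz")
print(catalogue)   # ['songs/track01.mp3', 'songs/track02.mp3']
```

A catalogue built this way stores only the index, which is why the original disc never needs to be mounted just to answer "which CD is that file on?".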
Jam
[left] GTKtalog brings order: You can select file formats and categorise programs [right] There are two applications for searching within GTKtalog – here you can see the more detailed one
Jam is also a project for creating and maintaining CD catalogues, but unlike the previous project it concentrates on music CDs. It can archive not just MP3 CDs but also regular audio CDs, storing all your music in a MySQL database. Especially interesting is the ability to import and export CD lists as XML files: since the database possesses an owner field, a user can import the lists of a friend and so browse their music collection as well. Jam has been written by Fabian Mörchen and Thomas Schwarzpaul under the GNU General Public License. Unfortunately there is still a drawback: Jam uses some proprietary (although gratis) libraries, which raises pretty much the same complex
126 LINUX MAGAZINE 6 · 2001
of problems that KDE had. As a perspective it would probably be better to replace the proprietary components with Free Software, or work on making them free. The problematic area at the moment is the installation, the authors say. It doesn’t work automatically on some systems although installation by hand is relatively easy. Jam is completely console and command line driven, so a GUI is on top of the list of tasks to be done. Furthermore the authors would like to create output backends for HTML and TeX. As far as the core functionality is concerned, it is planned to have Jam administrate playlists and to use these to create CDs or tapes. Additionally they would like to interface with a MP3 player and encoder as well as supporting the creation of CD covers. But there is still a lot to be done to reach that point and help is explicitly welcome. The background of this project is a work the authors had to create for their university. Once they were done they liked it so much that they released it as Free Software. These roots pose the advantage that the source code has a clean structure and should be relatively understandable for beginners.
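Jam's actual XML schema is not documented here, so the following is only a sketch: assuming each CD entry carries an owner attribute (an invented layout, not necessarily Jam's real format), merging a friend's exported list with your own and browsing by owner could look like this:

```python
# Sketch: merging two hypothetical Jam-style XML CD lists and
# browsing by owner. The element and attribute names are invented
# for illustration -- Jam's real schema may differ.
import xml.etree.ElementTree as ET

MINE = """<cdlist>
  <cd owner="me" artist="Miles Davis" title="Kind of Blue"/>
</cdlist>"""

FRIEND = """<cdlist>
  <cd owner="alice" artist="Nina Simone" title="Pastel Blues"/>
</cdlist>"""

def load_cds(xml_text):
    """Parse a CD list and return one dict per <cd> element."""
    root = ET.fromstring(xml_text)
    return [dict(cd.attrib) for cd in root.iter("cd")]

def by_owner(cds, owner):
    """Browse only the discs belonging to a given owner."""
    return [cd for cd in cds if cd["owner"] == owner]

# Importing a friend's list simply merges the two collections;
# the owner field keeps them distinguishable afterwards.
collection = load_cds(MINE) + load_cds(FRIEND)
print(by_owner(collection, "alice")[0]["title"])  # Pastel Blues
```

The point of the owner field is exactly this: one shared database can hold several collections without mixing them up.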
GNU Font Editor

The GNU Font Editor (GFE) is a relatively young GNU Project by Anuradha Ratnaweera. It is a GTK+ based graphical WYSIWYG editor for fonts that will support raster and vector fonts once it is complete. The target audience is professional designers as well as end users. Since the programs in this area were mostly command line driven or based upon non-free toolkits like Motif, Anuradha saw a need here. He plans to use existing solutions like the GNU Fontutils for orientation, though. Currently GFE only supports BDF fonts, so the next important steps are support for PCF (X) fonts and afterwards PS fonts. Later he also wants to tackle TTF and other formats. He expects problems once support for vector-based fonts is included, as the mixed representation has not yet been completely thought through. As soon as GFE is ready, it will become an
extraordinarily useful tool, as it allows converting all kinds of fonts into each other, and it will fit in with the GNOME/GTK GUI very well. But until then there is still a lot of work to be done, and what Anuradha needs most is more people on the mailing list providing stronger peer review, in order to speed up the development process.
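To give an idea of what GFE is editing: BDF (Glyph Bitmap Distribution Format) fonts are plain text files, with each glyph's bitmap stored as hexadecimal rows. The tiny decoder below renders such a bitmap as ASCII art; the glyph data is a hand-made sample, not taken from any real font:

```python
# Sketch: decoding the BITMAP section of a BDF glyph.
# BDF stores one hex-encoded row per scanline, with the most
# significant bit as the leftmost pixel. This 8x8 letter "T"
# is hand-written sample data, not from a real font.
GLYPH = """STARTCHAR T
ENCODING 84
BBX 8 8 0 0
BITMAP
FF
FF
18
18
18
18
18
18
ENDCHAR"""

def render_bitmap(bdf_glyph, width=8):
    """Turn the hex rows between BITMAP and ENDCHAR into '#'/'.' art."""
    lines = bdf_glyph.splitlines()
    start = lines.index("BITMAP") + 1
    end = lines.index("ENDCHAR")
    rows = []
    for hex_row in lines[start:end]:
        bits = bin(int(hex_row, 16))[2:].zfill(width)
        rows.append("".join("#" if b == "1" else "." for b in bits))
    return rows

for row in render_bitmap(GLYPH):
    print(row)
```

Because the format is this simple, raster editing is the easy part; the mixed raster/vector representation mentioned above is where the hard design work lies.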
GNU Typist

GNU Typist is also a new GNU Project. It is originally a program by Simon Baldwin, who still maintains a Java version of it. The derived GNU Project is run by Igor Támara and Vladimir Támara. Typist is a program for training correct and efficient typing. It supports different keyboard layouts and is internationalised with NLS. So far it has reportedly been used on x86 GNU/Linux, x86 Windows NT, x86 DOS, AIX and Sparc/SunOS, which makes it rather portable. The program itself follows the concept of an interpreter for lesson files, which can be expanded indefinitely. As the program is already relatively complete, the creation of these files for more languages and keyboard layouts is the major task at hand. Although the next project may not have immediate importance to most people, it could be of great interest to anyone with a shopping site on the Internet.
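The lesson-file interpreter idea behind Typist can be sketched in a few lines: each line of a lesson carries a one-letter command and a payload. The command letters below mimic gtypist's banner, instruction and drill lines, but both the lesson text and this mini-interpreter are illustrative, not gtypist itself:

```python
# Sketch of an interpreter for typing-lesson files in the spirit of
# gtypist: each line is "<command>:<payload>". The command letters
# imitate gtypist's (B = banner, I = instruction, D = drill), but
# this lesson and interpreter are invented for illustration.
LESSON = """B:Lesson 1 -- the home row
I:Place your fingers on the home row and type the line below.
D:asdf jkl; asdf jkl;"""

def run_lesson(text, typed_response):
    """Interpret a lesson; return (messages shown, drill mistakes)."""
    shown, mistakes = [], 0
    for line in text.splitlines():
        cmd, _, payload = line.partition(":")
        if cmd in ("B", "I"):          # banner / instruction: display it
            shown.append(payload)
        elif cmd == "D":               # drill: compare the user's input
            mistakes += sum(a != b for a, b in zip(typed_response, payload))
            mistakes += abs(len(typed_response) - len(payload))
    return shown, mistakes

shown, errors = run_lesson(LESSON, "asdf jkl; asdf jkl;")
print(errors)  # 0 -- a perfect run
```

Since a lesson is just such a text file, adding a new language or keyboard layout means writing new lesson files, not new code, which is why that is the main task left.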
ISoSy

The International Shopping System (ISoSy) by Stefan Zapf is a PHP and MySQL based online shopping system under the GNU General Public License. Its advantages are stability, easy customisation and a logical separation between general design, content and source code. The project supports multiple languages and currencies while keeping all customer information in a single database, which makes it possible to have a single page and ordering interface for all customers. The project is already usable; small and medium sized companies in particular can use it to improve their customer contact. Incidentally, the HTML design used was written by Robert Koetter aka Athelas. Stefan's plans for future development are increasing the ease of customisation, moving some options from the source code into configuration files, and automations such as switching the currency automatically when a different language is selected. The long-term plans include a GTK/C++ based program for initialising and managing the database, as well as automatic FTP transfer to the server. So there are still some things to be done, and for this he seeks contact with PHP hackers as well as people with C and GTK+ experience. The author would like to emphasise that the special value of this project lies in offering a way to make money with Free Software without compromising the ideals and philosophy of the Free Software community.

COMMUNITY

Info
Send ideas, comments and questions to Brave GNU World: column@brave-gnuworld.org
Homepage of the GNU Project: http://www.gnu.org/
Home page of Georg's Brave GNU World: http://brave-gnu-world.org
"We run GNU" initiative: http://www.gnu.org/brave-gnuworld/rungnu/rungnu.en.html
Debian Jr. project home page: http://www.debian.org/devel/debian-jr
GTKtalog home page: http://gtktalog.sourceforge.net/
Jam home page: http://www.mybytes.de/jam/
GNU Font Editor home page: http://www.gnu.org/software/gfe/gfe.html
GNU Typist home page: http://www.gnu.org/software/gtypist
Typist (Java version): http://www.ocston.org/~simonb/typist/
International Shopping System home page: http://isosy.sourceforge.net/
MIX Development Kit home page: http://mdk.sourceforge.net/
Donald Knuth, The Art of Computer Programming, Addison Wesley: http://Sunburn.Stanford.EDU/~knuth/taocp.html
Mixal project description: http://www-cs-faculty.stanford.edu/~knuth/mmix.html
MDK

The MIX Development Kit (MDK) by Jose Antonio Ortega Ruiz is a project which allows the development and running of MIXAL programs in a MIX virtual machine. It should probably be said that MIX is the mythical computer described by Donald Knuth in his book The Art of Computer Programming, and that it is programmed in MIXAL, the MIX Assembly Language. MDK contains a MIXAL assembler and the already mentioned MIX virtual machine with a command line interface. Unlike the Mixal project, MDK does offer debugging capabilities and supports block devices. MDK also comes with rather comprehensive documentation. The only drawback, according to Jose, is that MDK does not contain a GUI at the moment, so developing an ncurses and/or GTK+/GNOME front end is the next step. Afterwards he plans to support MMIX, the RISC-based version of MIX planned by Donald Knuth.

Information about Compact Discs can also be presented very clearly on a plain old terminal, thanks to Jam.
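For a flavour of what a MIXAL program looks like, here is the classic "hello world" in the style of the MDK manual. Device 19 is MIX's console typewriter, and ALF packs five characters into one MIX word; the exact listing may differ slightly from the one shipped with MDK:

```
* hello.mixal: say hello world to the MIX operator
TERM    EQU     19              the MIX console device number
        ORIG    3000            start address of the program
START   OUT     MSG(TERM)       output the data stored at MSG
        HLT                     halt execution
MSG     ALF     "MIXAL"         message text, five characters per word
        ALF     " HELL"
        ALF     "O WOR"
        ALF     "LD   "
        END     START           end of the program, entry point START
```

Assembled with MDK's assembler and loaded into the virtual machine, such a program can then be stepped through with the debugging facilities mentioned above.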