Linux Magazine
Issue 3: December 2000
• Comment: Linux to go to school
• Letter: Responses to our launch issue
• News: Red Hat Linux 7, SuSE adds SPARC distribution, etc.
• Report: Extra-terrestrial intelligence with Linux
• Report: Distributed file sharing - current state of the art
• On Test: Group Test: 12 of the latest 3D Graphics cards
• On Test: Acrylis WhatifLinux Personal Edition
• On Test: Compaq iPaq - installing Linux on an iPaq
• On Test: IBM RS/6000 B50 preinstalled Linux
• Cover Feature: Running Win programs on your Linux Desktop
• Cover Feature: VMware - a virtual machine under Linux
• Cover Feature: Win4Lin - VMware's rival
• Cover Feature: Wine - Windows APIs for Linux
• Feature: Cluster computing - high processing power at low cost
• Feature: Mosix Clustering - creating clusters of Linux computers
• Know How: Blender scripting using Python
• Know How: Framebuffer graphics as an alternative to X
• Programming: Using Qt Designer
• Programming: Fast and Light Toolkit for graphical apps
• Beginners: Intro & Overview
• Beginners: How to: Boot Linux from DOS
• Beginners: The Tutor: Replacing sendmail with Postfix
• Beginners: Command Line: Using ImageMagick's convert utility
• Beginners: How to: Create KDE desktop themes
• Software: Out of the box: ncp - the network copy program
• Software: Nautilus - the new file manager for GNOME
• Dockapps: small utilities like clocks and resource monitors
• Software: How To: Tackle problems when installing programs from source code
• Community: Brave GNU World - the monthly GNU column
• Cover CD: WINE and VMware, Quake3Arena (demo), Parsec (LAN-test), 150 KDE themes, XFree86 4.0.1, drivers for Voodoo5, drivers for Nvidia, GLX and DRI CVS, latest Kernels, MOSIX Software, Nautilus, Mozilla M18
General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
www.linux-magazine.co.uk
Subscriptions: subs@linux-magazine.co.uk
E-mail Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk
Editor Julian Moss
jmoss@linux-magazine.co.uk
Staff Writers
Keir Thomas, Dave Cusick, Martyn Carroll
Contributors
Jenny Bailey, Jono Bacon
International Editors
Harald Milz hmilz@linux-magazin.de
Hans-Georg Esser hgesser@linux-user.de
Bernhard Kuhn bkuhn@linux-magazin.de
International Contributors
Nils Färber, Tobias Freitag, Michael Engel, Peter Ganten, Mirco Dölle, Dennis Schön, Matthias Warkus, Christoph Dalitz, Martin Strubel, Patricia Jung, Hagen Höpfner, Christian Perle, Jo Molkalewski
Design
vero-design Renate Ettenberger, Tym Leckey
Production
Hubertus Vogg
Operations Manager
Pam Shore
Advertising
01625 855169
Neil Dolan Sales Manager ndolan@linux-magazine.co.uk
Linda Henry Sales Manager lhenry@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt Osmund@Ohm-Schmidt.de
Publishing
Publishing Director Robin Wilkinson rwilkinson@linuxmagazine.co.uk
Subscriptions and back issues 01625 850565
Annual Subscription Rate (12 issues): UK £44.91, Europe (inc Eire) £73.88, Rest of the World £85.52
Back issues (UK) £6.25
Distributors
COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE
R. Oldenbourg
Linux Magazine is published monthly by Linux New Media UK, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England. Copyright and Trademarks (c) 2000 Linux New Media UK Ltd No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, e-mails, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing.
TIME FOR LINUX TO GO TO SCHOOL Microsoft, the Redmond-based software developer we all know and love, is getting twitchy about the amount of licence revenue it loses to software piracy. Back in the summer it commissioned a report on the problem of piracy in schools which found that more than a third of schools in some areas were breaking the law by allowing teachers and pupils to copy software illegally. My reaction when I saw this was one of shock and horror, though perhaps not in the way the report’s authors intended. The horror was the thought that the cash-rich software corporation was going to make schools cough up money they could ill-afford to pay for the extra software licenses. The shock was the realisation of just how much money this must be. And this led to the question: why? Why are British schools paying a small fortune for software when there is an effective and completely free alternative? A few schools are using Linux. Very few. Roger Whittaker of SuSE UK runs a mailing list for Linux users in schools. (SuSE’s schools web page is at http://www.suse.de/uk/schools.) The list has around 150 members. Most of those represent secondary schools at which an IT teacher has installed a Linux box or two as file, print or web servers. There is no use of Linux on the desktop yet, despite the fact that most systems in schools are used not to run custom Windows-based applications but for things like word-processing, spreadsheet work and so on – just the kind of thing that could be accomplished using Linux and StarOffice. Think of the benefits a greater use of Linux in schools would bring. Not just a saving in licence fees but a saving in administration costs thanks to Linux being a secure operating system, safe from tinkering. It would be immune to the viruses that can spread like wildfire in the school environment too – the saving in anti-virus software licence fees would be another bonus. Lack of Linux experience among teaching staff (not to mention local authority education departments) will be one obstacle to overcome. Here, perhaps, Linux User Groups could help. Many LUG members must be parents and contribute in one way or another to school funds. Your Linux expertise could potentially be of more value to a school than time devoted to other fund-raising activities. There’s also enormous potential for developing educational software for Linux. The open source development model is perfect for this. Skilled programmers and teachers could work together creating educational programs that would be free and available to all. Nor should the value in introducing computer users to Linux at such a young age be overlooked. The nation’s children are in danger of growing up thinking that computers and Windows are synonymous. Schools are a huge market – many British IT companies grew big devoting themselves exclusively to it – and they present a wonderful opportunity for Linux and the open source movement to make a difference where it would really be noticed. Who wants to seize the initiative?
Julian Moss
ISSN 14715678 Linux is a trademark of Linus Torvalds Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany Disclaimer Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine or any material provided on it is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction. Technical Support Readers can write in with technical queries which may be answered in the magazine in a future issue, however Linux Magazine is unable to directly provide technical help or support services either written or verbal.
We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux Community. We're not simply reporting on the Linux and open source movement - we're part of it.
Readers’ Letters
WRITE ACCESS Converted At last, the UK has a decent Linux magazine! Excellent, well done! I have found the first issue to be most informative and entertaining. The tutorials were very good: I felt that I had actually learnt something and was not left hanging in the air, unlike the "other" UK magazine, which never seems to go into any detail. I have been converted and will be subscribing shortly. All the best for the future and keep up the good work. David Price Thanks, David! We are pleased to have received many letters in this vein. But we are still striving to make the magazine even better! [Ed.]
User-unfriendly?
Letters
We welcome your letters either about the magazine or on topics of general Linux and open source interest. Letters will be edited if not brief and to the point. Send your letters to letters@linux-magazine.co.uk. Please include your full name. ■
After reading issue 1 I feel compelled to give you my thoughts on Linux. Over the years I've used DOS 5, BeOS, Windows 3.1, 95 and 98 and have tried to use various flavours of Linux including versions from Corel, Red Hat, Slackware, Turbolinux (all quickly abandoned) and recently SuSE 6.4. The latter installed very quickly but took hours to get the display, sound and modem working correctly in spite of the relevant cards being "supported". So I tried installing "Kruiser" from your cover CD. All seemed well until "configure" terminated with the error message "checking for X… configure: error: Can't find X includes. Please check your installation and add correct paths". What the hell does that mean? Then I tried being really brave and installing EPIwm. This resulted in a screenful of errors. I decided to delete SuSE and install Mandrake, but I was wary. The installation screen has no "Exit Install" star, partially obscured buttons at the bottom of the screen and no scrollbars in the help section (unlike your illustration). Having come across this sort of sloppiness before during installation (I
forget which version of Linux) and being unable to proceed, my immediate reaction was to exit. How? Switch off of course! As a matter of interest, Windows 98 SE was custom installed from scratch without a hitch on the same computer in about 40 minutes, plus another 10 minutes to set up and download e-mail and access the internet. All my experiences of trying to use Linux have convinced me that it's the most user-unfriendly operating system ever devised. No doubt I'll persevere once my frustration has abated, but at the moment the saying "Linux is an OS designed by a committee" comes to mind, with apologies to the camel. John Hartley The errors you saw trying to install kruiser and EPIwm were caused by missing header files. Many distributions don't automatically include all the C headers necessary to compile graphical applications from source code. The problem and its solution are tackled in the article "Installation problems" in this issue. Locating and installing the libraries can be hard work, but you only have to do it once. Unfortunately the alternative – putting a ready-compiled binary version on the CD – would be even more unsatisfactory because the package may only work on the same processor type under a similar distribution to the one it was compiled on, leaving many readers disappointed. As to your problems installing Linux, it sounds as if you have just been unlucky. Technically it is very difficult to write an installer that can cope with the near-infinite permutations of hardware that can be found in a PC. Microsoft has been able to spend a lot of time and even more money on the Windows setup program. Linux is improving all the time in this area but often you still need a bit of effort and determination to get everything working right. But Linux is not designed by a committee. Every user who is so inclined can help with its development and influence its direction. So don't criticise – get involved! [Ed.] ■
IBM gets into kernel development
David Turek, IBM VP Deep Computing
IBM is set to invest millions of dollars on Linux kernel development. As part of the company’s strategic focus on Linux, $200 million US has already been pledged to create centres for Linux application development and integration in Europe, and a similar amount for Asia. During the summer the company also set up an operating system development laboratory in Portland, USA with the aim of supporting Linux OS development activity around the world. Already staffed by around 60 developers, this number is expected to increase to several hundred during the coming year. These developers will be focussing mainly on issues of performance, availability and scalability. Speaking to Linux Magazine at the European Linux Cluster Roadshow in Warwick, IBM VP Deep Computing David Turek said: ”We think that IBM can make a substantial contribution to accelerate the development of Linux. We can help OS developers by providing access to equipment for development and testing. The aim is to accelerate Linux penetration of the enterprise space.” IBM’s main interest in Linux is as a server platform and for deployment in computing clusters. ”The advantage of Linux and its kernel is that it is
unencumbered by the philosophy characteristic of many proprietary operating systems produced to date,” says Turek. ”Linux hasn’t been designed as a general purpose computer operating system. It is ideal for use in a small set of functionality type of environment. Its simplicity turns out to be a virtue. It means that Linux can be tested in those environments quite effectively.” Despite IBM’s massive investment in Linux, the company has no intentions of entering the market with its own IBM-branded distribution. ”This is not part of our strategy,” claims Turek. ”There is a real and legitimate need to have multiple sources of supply. For IBM to enter the marketplace would put us in competition with other distribution companies. That could damage the marketplace and affect the perception of how Linux will be developed over time.”
Info IBM 01256 343000 http://www.ibm.com/linux/europ ■
Microsoft bails out Corel, .NET gain for Linux? Corel Corporation has entered into what it calls a "strategic alliance" with Microsoft which will lead to the Canadian company's involvement in Microsoft's .NET initiative. Microsoft has invested $135 million U.S. by purchasing 24 million non-voting convertible preferred shares in Corel. At the same time, Corel has reorganised its board of directors. Derek J. Burney, its interim president and CEO, has been appointed to the position of president and CEO on a permanent basis. Four new executive vice-presidents have also been appointed to the board, with responsibility for specific product areas. They are: Ian LeGrow, executive vice-president, creative products; Graham Brown, executive vice-president, business applications; Rene Schmidt, executive vice-president, Linux products; and Annette McCleave, executive vice-president, corporate marketing. The alliance with a company that many
supporters of Linux regard as the arch enemy has led to speculation that Corel’s strategy to become a major force in the Linux market will be torn up. But talking to Linux Magazine, CEO Burney emphatically denied this. ”The deal we did with Microsoft was to give us access to the .NET functionality that they are developing,” he said. ”We had been looking for the past year or so at how to move our applications to the web. We found that there wasn’t a set of tools to enable us to do a good job of it. In .NET, Microsoft has a powerful offering which will enable us to move our applications to the web space.” Burney denies suggestions that Corel will be under any pressure from its new partner to stop work on products that would compete with Windows or Office. ”We’re in complete control of our future,” he said. ”If Microsoft was interested in stomping out this market there are ways it could have done that without having this deal.” If anything,
Burney believes that Microsoft’s intentions are just the opposite. ”I can’t speak for Microsoft,” he pointed out, ”but they might be using this as a way to get involved in Linux somehow. If so, we will be more than happy to accommodate them.” Corel’s plans to launch server and enterprise versions of Corel Linux OS remain in place. ”It will be interesting what we can do with Linux on the .NET side,” said Burney, ”and the Microsoft deal means we now have plenty of cash in the bank to get on with development. As the Internet becomes more of a platform for applications so the desktop becomes irrelevant. To that extent, .NET is good news for Linux.”
Info Corel http://linux.corel.com/ ■
Acer to ship systems with TurboLinux Computer giant Acer’s European division has begun shipping Linux on selected models in its AcerPower and Veriton business desktop ranges, which are sold only through resellers. The TurboLinux distribution will be available installed and configured in a choice of seven languages. ”TurboLinux offers a robust and complete business solution with its operating system that will add significant value for our customers in Europe who wish to work with Linux,” said Maya Panconcelli, Software Marketing Manager for Acer Europe. ”We think that TurboLinux has excellent localised language
versions of its software and that our customers will appreciate the ease of use and power of the TurboLinux operating system." Acer's web site had yet to be updated with details of the announcement, however. When Linux Magazine looked, the business desktops page still bore the statement that "Acer recommends Windows 2000 Professional for business."
Info Acer http://www.acer.co.uk/ TurboLinux http://www.turbolinux.com ■
Acer business systems – now available with TurboLinux
Unitree adds Linux-based HSM to its portfolio UniTree Software recently released a Linux version of its Hierarchical Storage Management (HSM) product UCFM for Linux. UniTree Central File Manager (UCFM) transparently moves data from disk file systems to removable media such as tape or optical disk for automated disk management and more cost effective storage. Files migrated to removable media still appear to be on disk and are transparently restored to the disk-based file system on a user's request. UCFM is the first commercially available HSM to support Linux, UniTree claims. "Linux is quickly emerging as a mainstream operating system and we see a growing need for more cost effective data
management in the Linux market,” says Nigel Dear, European Director for sales and marketing, UniTree. ”With hardware manufacturers enhancing performance at a drastic rate combined with the efficiency of the Linux operating system, we see data management for the Linux operating system as a significant growth area.” Initially UCFM for Linux will ship on Red Hat Linux release 6.0.
Info UniTree 01628 486773 http://www.unitree.com/ ■
The appliance of Linux
The power of Linux, but the ease of use of an appliance
Linux support company Linuxsure is introducing a range of Linux-based office servers designed to deliver essential services economically for office networks. The Celestix Aries Integrated Server is compact (170mm high and 100mm wide) and provides easy-to-use resource-sharing and Internet access for workgroups of up to 50 users. With the reliability and simplicity of a consumer appliance (claims the company) the Aries requires no maintenance from the user. Just plug in a few cables, enter your Internet settings on the front panel and you have full Internet access for the whole office in less than ten minutes. The Aries also runs a powerful file and print sharing service compatible with Windows, Macintosh and Unix/Linux. The £1,499 product is primarily targeted at small offices and home offices (SOHO) that do not wish to hire full-time technical specialists to administer their
network. System housekeeping tasks are minimal and even software upgrades can be carried out without user intervention or system shutdown. The product is also suitable for mobile users who need to move their network frequently or have to set up ad hoc networks in temporary environments such as meeting rooms and hotel rooms. The Aries has a 200-MHz Pentium MMX-equivalent processor, 64MB of memory and 6GB of hard disk, which is claimed to be sufficient for most small offices with less than 20 users. Optimised operating system drivers and extensive use of caching enable the unit to perform on a par with most desktop servers. Aries supports an extensive array of connectivity options from dial-up V.90 or ISDN connection through an external serial modem or PCMCIA card modem, to ADSL or cable router using the second Ethernet port. Support for USB-based communication devices is planned for the future.
Info Linuxsure 01636 650223 http://www.officeservers.co.uk/ ■
Free firewall/router smooths Internet access
SmoothWall – a free SOHO router/firewall
Network users can now have secure Internet access thanks to SmoothWall, a free open source software firewall and dial-up router based on Linux. SmoothWall turns a redundant PC (486 or P100 upwards) into a fully fledged dial-up router and firewall for SOHO networks. It is a fully functional firewall with full fault tolerance and auditing functionality and can be administered from any browser on any platform. SmoothWall allows you to create a dial-up server for your network that has been exhaustively penetration tested and documented and has already proved a popular solution in networks large and small. SmoothWall was developed using freely available Linux components and is freely distributable under the GNU General Public Licence. The project was managed using SourceForge. A team of six Linux User Group members made up of developers, testers and editors from the UK development community and headed by Richard Morrell, a technical consultant at VA Linux Systems UK, designed and built SmoothWall in just two months. Less than two weeks after that the SmoothWall website had
received over 142,000 hits and 13,000 downloads by users from right across the globe. Richard Morrell said: ”The success of the SmoothWall project provides a perfect example of the power of the Open Source model over the proprietary software model in terms of the penetration and speed of worldwide distribution. The real advantage of SmoothWall is that the benefits of Linux and Open Source can spread quickly across PCs without requiring people to change operating systems first. We are automating the ability for people to secure the way they work and to use the power of internet connectivity without worrying about ‘script kiddies’ attacking their workstations and servers.” An ISO CD image can be downloaded from the SmoothWall web site (an 18MB download.) Copies may also be purchased through selected retailers.
Info SmoothWall http://www.SmoothWall.org/ ■
Linux system managers get beta Caldera Systems’ Linux management solution, formerly known as ”Cosmos”, has now entered open beta. The product is a browser-based management tool which uses the inherent strengths of LDAP directories and enables network administrators to manage from a few to thousands of Linux systems from a central point. Caldera claims that the tool will significantly benefit ASPs, ISPs, hosting companies, systems integrators and network administrators, dramatically reducing costs, saving time and increasing efficiency. ”This is a solution that is in line with the needs of a university setting,” said one user, Brian Haymore, systems engineer at the Center for High Performance Computing at the University of Utah. ”It enables us to centralize management functions on a large and varied network without having to individually manage each system. By using policies to control inventory and distribution, Caldera’s management solution facilitates single-point
management for our cluster of 188 heterogeneous Linux processors.” ”This management solution will be a catalyst for increased adoption of Linux by any size of business – particularly the enterprise,” claimed Ransom Love, president and CEO of Caldera Systems. ”Previously, the time and cost of deploying Linux networks has been staggering. By consolidating the effort needed for network management, Caldera continues to lead the way in Linux for business solutions.” The beta is available for download from Caldera’s Web site at http://www.calderasystems.com/beta/.
Info Caldera http://www.calderasystems.com/ ■
Red Hat releases version 7 and launches new network Red Hat Linux 7 is now available. The latest version of this popular distribution provides enhanced security, new ease-of-use features, optimised software for higher-end Intel chips and increased 3D support, along with dozens of new enterprise-ready applications, plus a free trial of the new Red Hat Network. Important new features of Red Hat Linux 7 for enterprise users include integrated security with OpenSSL for secure communication via the Web, graphical configuration tools, the MySQL database and the fact that the operating system is kernel 2.4 ready. Workstation users will find a more customisable desktop environment, more software to choose from including digital image viewing and diagramming programs, improved default security levels and better 3D graphics support. Developers will benefit from the enhanced internationalisation sub-system, more complete C++ support with a new compiler, an updated development suite and a preview of many new development tools such as the new GNU compiler for Java. Red Hat Linux 7 is available in three versions. Each includes the operating system, productivity applications (both full and trial versions), the StarOffice Office Suite and the Extra Binge! Package including a SysAdmin
Survival CD and two Loki games CDs. The versions are:
• Standard edition (£30.00): 60 days Web support, 60-day free Red Hat Network trial;
• Deluxe (£60.00): 90 days Web support, 90-day free Red Hat Network trial, 30 days phone support;
• Professional (£130.00): 90 days Web support, 180 days free Red Hat Network trial, 30 days phone support, 30 days Apache configuration support.
All Red Hat 7 customers will be entitled to a free trial of the new Red Hat Network, a Web-based service for deploying and managing open source platforms. It will provide customisable preferences for security alerts, update management (tightly integrated with RPM) and technical support. Its aim is to improve system administrator productivity and enhance the security, reliability and performance of networked systems, while reducing costs for customers. "Red Hat Network is the future of software: an integrated set of technology and services that speeds the deployment and reduces the costs of management for Internet infrastructure" said Paul McNamara, vice president of products and platforms at Red
Red Hat 7 includes free access to a dedicated support network
Hat. ”Red Hat Network simplifies deployment and delivers proactive services to keep systems secure and reliable. By managing the constant stream of open source innovations through redhat.com, customers will get maximum value from the open source development model.”
Info Red Hat http://www.europe.redhat.com/ Red Hat Network http://www.redhat.com/network ■
010News.qxd
23.10.2000
14:50 Uhr
Seite 14
Anzeige Intel
010News.qxd
23.10.2000
14:50 Uhr
Seite 15
Anzeige Intel
010News.qxd
23.10.2000
14:50 Uhr
Seite 16
NEWS
SYSTEMS
SuSE adds vital SPARC SuSE Linux has now added a distribution for the SPARC architecture from Sun Microsystems to its range. This move further emphasises SuSE’s strategy of cross platform Linux support making the free operating system available for all key platforms used in the professional environment. It joins the growing family of SuSE distributions for the PowerPC, S/390 and Alpha processors in addition to the Intel platform. The release of a SPARC port makes SuSE Linux an ideal common server platform within the professional environment, the
company claims. SuSE Linux offers comprehensive networking, stability and flexibility together with uniform and consistent administration of cross platform, heterogeneous networks. This enables costs for the development and purchase of strategic software products within an enterprise to be minimized. The low purchase cost of Linux and the reduced administration overhead allow a significant reduction in the ownership costs of large server farms. The complete version of SuSE Linux 7.0 for SPARC is now available for download free
of charge. SuSE Linux is actively looking for users to promote the further development of this major project. For more information see the mailing list at suse-sparc@suse.com.
Info SuSE Linux: 0208 387 4088 http://www.suse.de/en/ Download: ftp://ftp.suse.com/pub/suse/sparc ■
Mathsoft S-PLUS 6 available for Linux and Solaris Mathsoft has announced a major upgrade of its statistical data mining software S-PLUS 6, which is initially being made available for Linux and Solaris. The new release features a powerful new Java-based point-and-click user interface to simplify and accelerate access, analysis and visualisation of technical and business data. A new integration method called CONNECT/Java allows software developers to enhance the analytical and graphics capabilities within their applications by embedding the S-PLUS engine. The visualisation capabilities of S-PLUS 6 include a new type of Java-based graphic called Graphlets, which makes it easy to deploy interactive, drill-down graphics via Web pages. Data can be transformed into easy-to-understand charts with multiple tabbed pages. Users can export graphs to files in JPEG, TIFF, PNG, PNM and Windows BMP formats in addition to live Graphlets.
Info Mathsoft http://www.splus.mathsoft.com/ ■
AMD releases x86-64 technology simulator Now you can start porting Linux to a platform that doesn't even exist yet. AMD has now released the x86-64 technology simulator (also known as the AMD SimNow! simulator) to enable developers to work on x86-64 technology based code prior to the release of AMD's 64-bit processors (codenamed "Hammer") at the end of 2001. The simulator, which was ported to Linux by CodeSourcery, is available free of charge at http://www.x86-64.org/. AMD's x86-64 technology builds upon the x86 instruction set and provides support for applications that need to address large amounts of physical and virtual memory and which find the current 32-bit 4GB addressing limit too much of a restriction. The processors will be designed to be backward compatible,
automatically detecting whether 32-bit or 64-bit addressing is required. The SimNow! simulator includes a model of a theoretical microprocessor based on the AMD Athlon processor but enhanced with the addition of x86-64 architecture support. It contains all the classic pieces of a PC system (CPU, memory, Northbridge, Southbridge, display, IDE drives, floppy, keyboard, and mouse support). Features of the simulator include the ability to single-step, peek at registers and memory, test with 64-bit mode and debug kernel bugs without having access to 64-bit processors. "AMD designed and built the AMD SimNow! simulator to provide developers of x86-64 technology with tools for debugging their code and applications prior to the release of the "Hammer" family of x86-64
technology enabled processors,” said Richard Heye, vice president and general manager, AMD Texas Microprocessor Division. ”AMD is committed to supporting the Linux community and is proud of the Linux partners also supporting x86-64 technology including Ada Core Technologies, CodeSourcery and SuSE.”
Info x86-64.org http://www.x86-64.org/ x86-64 Architecture Programmers Overview http://www.amd.com/devconn/index.html x86-64 simulator http://www.x86-64.org/downloads ■
Searching for extra-terrestrial intelligence
IS THERE ANYBODY OUT THERE? JENNY BAILEY
Go out on a dark, cloudless night and look up at the stars. There could be someone looking back, although of course, we couldn't see them. Their image would be history, as light from a planetary system 10 light years away would take 10 years to arrive here. A space vehicle using our current technology would take 300,000 years to cover the 4 light year distance to Proxima Centauri, our nearest neighbour. So how can we find out whether or not we are
alone in the universe?
Listening for a Radio Transmission
Electromagnetic radiation (light, radio waves, x-rays) is the fastest communication medium known to us at the moment. It is therefore our best hope for contact with another intelligent species.
In 1937 Nikola Tesla suggested using radio for extraterrestrial communication. Since then there have been a number of radio based SETI (Search for Extra Terrestrial Intelligence) projects to search for radio transmissions from distant solar systems, starting with Frank Drake's Project Ozma in 1960. Every new project boasts better sensitivity and more advanced search algorithms. Project Argus proposes 5000 small dishes operated by Amateur Radio Astronomers, each dish pointed in a different direction. We are developing Linux software for Project Argus stations. Each Project Argus station is searching for a beacon signal targeted towards our solar system: a transmission powerful enough to be heard on the
earth that is both narrow band and coherent and which therefore stands out above the galactic noise. Unlike Hollywood's portrayal of SETI people, we do not spend our time next to a radio telescope wearing headphones and listening to white noise. Automated searching for a signal in noise is a job for a computer.

Current SETI projects
SETI@home: Launched on 17 May 1999, SETI@home uses data from the Arecibo Radio Telescope as part of Project SERENDIP.
Project Phoenix: Observations began in February 1995. Currently conducting a targeted search from Arecibo, with Jodrell Bank, Manchester checking candidate signals.
Searching for a Needle in a Haystack Project Argus assumes that an Alien Civilisation is trying to signal us using a radio beacon. We must make some logical deductions about this beacon based on physics, which we believe to be universal. We could try to detect wideband signals such as accidental radio leakage from, say, their domestic television transmitters or equivalent. These signals would be modulated and therefore spread over a wider band. They would look like galactic background noise. If an alien intelligence is trying to contact another civilisation they might send out a narrow band signal to likely looking planetary systems. To hear this we must have a fully functioning SETI station pointed in the right direction, at the correct frequency, with an appropriate polarisation.
Direction Some SETI projects are "targeted searches" where the most likely stars are monitored continuously for many hours. The telescope must track the star as it rises and sets. This type of search tends to be favoured by well-funded projects because of the cost, and noise, of a constantly tracking aerial arrangement. The other main search type is the "all sky search" where the radio telescope remains fixed in one position and the sky moves past. A small dish – 4m diameter – has a wider beamwidth (will see more sky) than a 100m dish, albeit with less sensitivity. To cover all the sky, all the time, over 5,000 small radio telescopes are needed, distributed all over the earth. This is the goal of Project Argus as managed by The SETI League.
Polarisation We are trying to detect a signal which has been sent using an unknown polarisation (a term that describes the plane of the received radio waves.) To receive the signal we must be using a similar polarisation at our receiver or the signal we want will be severely attenuated. The aerial that feeds the dish can be linearly polarised: it can use horizontal polarisation, vertical polarisation, or something in between. There are also more complex polarisations such as circular (clockwise and counter-clockwise) which are used for space communication.
Frequency The Earth's atmosphere is only transparent at certain frequency bands. The microwave window between 1GHz and 10GHz is both relatively noise free and low-loss through the atmosphere (see Figure 1.) A signal heard throughout the Universe is the Hydrogen line. This is a narrow band, non-coherent signal generated by interstellar hydrogen gas, the most abundant element in the Universe. This frequency also happens to be within one of the "frequency windows" in the atmosphere, so we choose to look for signals in the band around the Hydrogen Line. The band around the Hydrogen Line also has the advantage of being protected from transmissions, so there should not be any man-made signals on these frequencies. However, there is no "incorrect" frequency for SETI. Other projects look around frequencies such as 4461MHz (π × the hydrogen line) and other mathematical constants that an intelligent civilisation might use.
[top] Fig. 1: The best frequencies for listening to signals from space are between 1 and 10GHz [above] Fig. 2: An FFT routine converts audio samples into frequency bins
Bandwidth The sensitivity of a receiver depends on its bandwidth. A narrow band receiver will hear less noise than a wideband receiver, and therefore a signal will stand out better. However, the less bandwidth we look at, the less chance we have of hearing a signal, because we don't know exactly what frequency to listen on.
Doppler shift: This is the change in apparent frequency observed by the listener, depending on whether the transmitter is coming towards you or moving away from you. The same effect results in the change in tone of a car horn that is sounded as the car drives past. ■
Therefore, we need to look at as much bandwidth as possible. The answer to these two contradictory requirements is to use a computer to split a large bandwidth into lots of narrow "frequency bins" and look for a signal in each one. The function to split the bandwidth is called a Fourier Transform and a cunning hack to speed up processing makes it a Fast Fourier Transform – FFT. When you double the size of a Fourier Transform you quadruple the number of multiplications. When you double the size of a Fast Fourier Transform you approximately double the number of multiplications. The computer sound card samples the audio and an FFT routine converts these samples into frequency bins (see Figure 2.)
A little FFT calculation
The sample rate is the number of samples (bytes or words) per second. With a SoundBlaster card or compatible the maximum is 44kHz. The maximum frequency we can detect at this sample rate is equal to the sample rate divided by two (also known as the Nyquist frequency). The sample size is the number of samples that we process in one block. The bin size is then the sample rate divided by the sample size.
• Audio bandwidth from receiver = 20KHz (DC .. 20KHz)
• Maximum sample rate (SoundBlaster 16) = 44KHz
• Data rate (assuming 16 bit samples) = 88Kbytes/second
• Target bin size = 10Hz
• Bin size = sample frequency/FFT length = 44000/4096 = 10.7Hz
• Number of FFTs per second = sample frequency/FFT length = 10.7 FFTs per second
• Maximum time for each FFT cycle = 93ms
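By way of illustration, this short C listing reproduces the figures in the box above. The sample rate and FFT length are the ones from the example; the listing itself is not part of setisearch.

#include <stdio.h>

int main(void)
{
    const double sample_rate = 44000.0; /* samples per second from the sound card */
    const int    fft_length  = 4096;    /* samples processed in one FFT block */

    double bin_size   = sample_rate / fft_length; /* width of one frequency bin in Hz */
    double ffts_per_s = sample_rate / fft_length; /* blocks that arrive each second */
    double ms_per_fft = 1000.0 / ffts_per_s;      /* time budget for each FFT cycle */

    printf("bin size        = %.1f Hz\n", bin_size);   /* about 10.7 Hz */
    printf("FFTs per second = %.1f\n",    ffts_per_s); /* about 10.7    */
    printf("time per FFT    = %.0f ms\n", ms_per_fft); /* about 93 ms   */
    return 0;
}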
Why CPU Power is important Fig. 3: The components of a Linux SETI program
Calculating an FFT involves many complex-number calculations. Narrow bin size – and therefore better
sensitivity – will result from more CPU time. Therefore any SETI program should minimise CPU use elsewhere (such as for the display). So we now have the situation where the sensitivity of a SETI station can be improved by optimising the FFT routine! A soundcard (such as a SB16) can record two channels simultaneously: stereo. Therefore one sound card can support the output from two receivers as long as the CPU can process fast enough. Linux can support more than one soundcard. There is, however, a limit to narrowing the bin size beyond which valid signals might be missed. Narrow band signals will start to spread as they pass through the Interstellar Medium. This alone will limit the minimum bin size to greater than 0.1Hz. Bin levels tend to be averaged over a number of FFT calculations to further enhance sensitivity. During this time the signal can move from one bin into an adjacent one due to Doppler shift. The averaging may then lose the signal. Doppler shift occurs because of the changing relative velocities of the transmitting station (a rotating alien world orbiting a distant Sun) and the receiving station. Programs like SETI@HOME have the time to "chirp" the data, i.e. they move the frequency of the received data to compensate for Doppler shift. We can't predict the Doppler shift and so SETI@HOME performs FFTs and averaging for each of the many possible Doppler shifts up to ±10Hz. This is a luxury you only have when processing off-line.
Why Linux? The SETI software runs 24 hours a day, 7 days a week, usually unattended. A robust operating system with the ability to handle large data throughput and gigabyte logfiles is an essential requirement. Another reason for using Linux is that documentation on, for instance, programming the soundcard is widely available on the Internet, in various HOWTOs and excellent O'Reilly books. If all else fails, then you can just dive inside the kernel source and see what is going on yourself. Once you have discovered the sndconfig command or its counterpart in your chosen distribution, installing a sound card for Linux couldn't be easier. The /dev/dsp and /dev/mixer devices give consistent interfaces for many types of sound card.
Why Open Source? The algorithms used to detect signals can be made very sensitive, catching even the faintest signal but giving many false alarms, or they can be set insensitive to minimise false detections and possibly miss real signals. Once the framework for SETI detection software is in place, the benefit of open source is that others can add their own improved search algorithms.
The software will need to interface to many different types of hardware, such as the many types of receiver that need to be remotely programmed. One designer cannot code for all receivers. Once a signal is detected, it is possible to verify the signal by switching off amplifiers, or moving the aerial slightly. The sequence of events is very dependent on the SETI station configuration and an operator may want to customise the code.
Setisearch To meet the above requirements I have been developing a Linux SETI program called setisearch. This is open source and modular so that development of a module can be undertaken in isolation. Figure 3 contains a diagram of the software architecture. From the above discussion you can see that it is important to optimise a SETI program for maximum FFT CPU time. As the display might only be viewed at start-up and then during an alarm it is a waste of CPU time to keep updating a pretty display, so a text display was chosen with a user selectable level of debug. For debugging and showing your friends, it is sometimes useful to see a graphical representation of the received noise/signal. Ideally this will be run from a different computer, taking data over the network via a sockets interface.
Fig. 4: How the audio samples are processed
Sound card

A sound card's input configuration consists of a mixer followed by an A/D converter. The A/D converter samples the input audio voltage at a rate programmed by the SETI code; the mixer level is controlled by a software AGC loop to keep the input level more or less constant. Details of programming the soundcard and mixer, and taking data from /dev/dsp, are available in the Linux Multimedia Guide by O'Reilly. Audio samples are available in either 8 bit or 16 bit form, depending on the soundcard, with various formats for the sample such as signed or unsigned. Programming the soundcard for stereo will give alternating left-right-left-right samples.
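As a rough illustration of the set-up just described, here is a minimal C sketch using the OSS interface to /dev/dsp. It is not taken from setisearch; the block size and requested rate are example values, and the mixer/AGC handling is omitted.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

#define BLOCK 4096                       /* samples per buffer handed to the FFT thread */

int main(void)
{
    int fd = open("/dev/dsp", O_RDONLY); /* blocking reads: the collection thread waits here */
    if (fd < 0) { perror("/dev/dsp"); return 1; }

    int fmt = AFMT_S16_LE;               /* 16-bit signed samples */
    int channels = 2;                    /* stereo: two receivers on one card */
    int rate = 44000;                    /* requested rate; the driver may round it */

    if (ioctl(fd, SNDCTL_DSP_SETFMT, &fmt) < 0 ||
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels) < 0 ||
        ioctl(fd, SNDCTL_DSP_SPEED, &rate) < 0) {
        perror("ioctl");
        return 1;
    }
    printf("sampling %d channel(s) at %d Hz\n", channels, rate);

    short buf[BLOCK * 2];                /* interleaved left-right-left-right samples */
    ssize_t n = read(fd, buf, sizeof(buf));
    printf("read %zd bytes\n", n);       /* the real program would loop, filling ring buffers */

    close(fd);
    return 0;
}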
Data collection thread

The /dev/dsp entity will give a stream of data. The process will block whilst awaiting data from /dev/dsp. Whilst you are away processing this data it may lose samples and the sampled audio would become discontinuous and corrupted. In order to perform an FFT you will need to take a number of samples, 4096 for instance, and then process them. Therefore one process should collect data and put it into buffers and another process should take full buffers and FFT them. Hence the need for a multithreaded program.

Data processing thread

Once there are full buffers available, the processing thread will FFT the data as shown in Figure 4. Two buffers are concatenated and then the contents multiplied by a window function. The first sample of data effectively makes a step jump from zero up to the value of sample 1. This is a false transition because of the way that we are processing blocks of data rather than continuous data, and it can cause extra noise in the results. A window function effectively creates a smooth transition rather than a step at the beginning and end of each pair of buffers, thereby reducing the noise. Unfortunately we are losing data by this process, so we process pairs of buffers: buffer 1 and 2, then 2 and 3, then 3 and 4 and so on. All the data will then be processed. The result of the FFT is a set of complex values in bit-reversed order: the bins do not come out in linear order from 0..4096, so they have to be re-ordered. You could argue that for peak detection the order is not critical. Each complex value contains both the magnitude and phase information of the audio stream. We calculate the magnitude and throw away the phase information.
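The windowing and magnitude steps might look something like the following sketch. The article does not name an FFT routine, so this example assumes the FFTW library (link with -lfftw3 -lm), which happens to return the bins already in natural order rather than bit-reversed; the test tone and buffer length are illustrative.

#include <math.h>
#include <stdio.h>
#include <fftw3.h>          /* an assumption: any real-to-complex FFT routine would do */

#define N 4096              /* two concatenated 2048-sample buffers, as described above */

int main(void)
{
    double       *in  = fftw_malloc(sizeof(double) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * (N / 2 + 1));
    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

    /* test input: a 1 kHz tone sampled at 44 kHz, multiplied by a Hann window */
    for (int i = 0; i < N; i++) {
        double window = 0.5 * (1.0 - cos(2.0 * M_PI * i / (N - 1)));
        in[i] = window * sin(2.0 * M_PI * 1000.0 * i / 44000.0);
    }

    fftw_execute(plan);     /* FFTW delivers the bins in natural order */

    /* keep the magnitude, discard the phase, and find the strongest bin */
    int peak = 0;
    double peakmag = 0.0;
    for (int k = 0; k < N / 2 + 1; k++) {
        double mag = sqrt(out[k][0] * out[k][0] + out[k][1] * out[k][1]);
        if (mag > peakmag) { peakmag = mag; peak = k; }
    }
    printf("peak in bin %d (about %.1f Hz)\n", peak, peak * 44000.0 / N);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}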
Signal detection
Integrating many samples over a few seconds can further enhance the system sensitivity. An exponential smoothing algorithm was used as it does not require much CPU time or memory. Doppler shift is the limiting factor when integrating signals over time, as the signal may drift outside the bin limits during the period of integration. Once the bin data is available we need to differentiate the noise from the data. A "beacon" type signal should fill one bin more than any other, so we are looking for bins with an above average level. The algorithm for detecting a signal in the noise should be sensitive enough to not miss valid signals, but also proof against false detection. The setting for this threshold is user-configurable via a configuration file. On signal detection, a logfile of the 'hit' is taken whilst the signal is still present. The following data is recorded to a log file:
• Time and Date;
• Frequency of the hit;
• Position in the sky where the hit was detected;
• Bin data once a second for the duration of the hit.
The log file can be analysed using a TCL script to give a 3-D representation of the signal with Doppler shift drifting from 20KHz to DC and back. As well as writing a logfile, hardware connected to the parallel port – such as a siren and a flashing beacon – can alert anyone in the vicinity that there is an interesting signal on the receiver. Writing data to the parallel port is relatively easy and the programming is described in the IO Programming HOWTO.
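A minimal sketch of the averaging and threshold test described above might look like this. It is not the setisearch source; the smoothing factor, threshold and bin count are illustrative stand-ins for values that would come from the configuration file.

#include <stdio.h>

#define NBINS 2048                    /* magnitude bins delivered by the FFT thread */

static double avg[NBINS];             /* exponentially smoothed level of each bin */

/* Called once per FFT result. alpha and threshold are illustrative values. */
void integrate_and_detect(const double mag[NBINS], double alpha, double threshold)
{
    double total = 0.0;

    /* exponential smoothing: cheap in both CPU time and memory */
    for (int i = 0; i < NBINS; i++) {
        avg[i] = alpha * mag[i] + (1.0 - alpha) * avg[i];
        total += avg[i];
    }
    double mean = total / NBINS;

    /* a beacon should fill one bin well above the average noise level */
    for (int i = 0; i < NBINS; i++) {
        if (avg[i] > threshold * mean) {
            printf("hit in bin %d: level %.2f (mean %.2f)\n", i, avg[i], mean);
            /* the real program would now log time, frequency and sky position,
               and could raise the parallel-port alarm */
        }
    }
}

int main(void)
{
    double frame[NBINS];
    for (int i = 0; i < NBINS; i++)
        frame[i] = 1.0;                   /* flat background noise */
    frame[512] = 40.0;                    /* one bin carrying a steady signal */

    for (int pass = 0; pass < 5; pass++)  /* integrate a few successive FFT results */
        integrate_and_detect(frame, 0.2, 10.0);
    return 0;
}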
Typical Station

A small radio telescope suitable for Project Argus will consist of a 3 to 5 metre parabolic dish suitable for the target frequency. Many of these larger dishes are now available as people move from the old 'C' band satellite dishes to the smaller 'Sky' dishes and minidishes that work on 10GHz. The dish feed and low noise amplifier (LNA) units can be purchased as kits for less than £100, although it is possible for the enthusiastic amateur to build their own. There are many 'scanner' type receivers that will work in SSB mode at 1420MHz, although the receiver bandwidth should be modified to pass 20KHz of audio. The Icom IC-R7000 is a favoured receiver and can be purchased second hand for less than £400.

Fig. 5: A SETI station

Fig. 6: The first recorded unexplained signal from space

Info
Setisearch website http://www.setisearch.org/
JennyB seti website http://www.jsquared.co.uk/seti/index.html
SetiLeague website http://www.setileague.org/
Seti-uk website http://www.jsquared.co.uk/seti-uk/
Linux Multimedia Guide (a bit dated, but the programming details are still invaluable) O'Reilly ISBN 1-56592-219-0
Beginning Linux Programming WROX Press ISBN 1-861002-97-1
The Cathedral & The Bazaar O'Reilly ISBN 1-56592-724-9 ■
What we have heard so far Many signals have been heard so far, but the vast majority of them have been explicable as man-made interference, either due to a faulty transmitter or poor receiver design at our SETI station. The time will come when the radio bands are so full of interference that SETI stations can only operate on the dark side of the moon. Inexplicable signals have been recorded, however. The earliest known was the 'Wow' signal as dramatised on the X-Files. This was recorded in 1977 using the "Big Ear" Radio Telescope built by John Kraus. It shows that the signal was narrow band and that it came and went with the Gaussian response of the aerial. Unfortunately there was no follow-up Radio Telescope to confirm that the signal was extra-terrestrial rather than local. There are currently 98 Project Argus stations in 18 countries. These stations have received many unconfirmed and unexplained signals. Co-ordination between Project Argus stations will one day confirm the reception of an extraterrestrial signal. ■
Fig. 7: A setisearch hit
Distributed File sharing, part I
COLLECTIVE MEMORY TOBIAS FREITAG
The whole world as a file system which everyone can access, reaching their home directory even from an internet café in Timbuktu – this, or something similar, is what the developers of Freenet, Gnutella and Co. imagine the world could be with their software. This two-part article aims to show how close they are to achieving this goal.
It’s a logical idea: a globalised world should have a global data archive. The idea is realised in each of the many projects connected with it, usually by a small program that, in contrast to normal file transfer programs, not only transfers data from other computers but is also able to provide it. Users themselves are solely responsible for what kind of data they transfer. Often, only the basic system is stipulated. This is the case with probably the best-known operator of a file sharing network. Napster, a company set up two years ago by an American student and now used by almost 5 million people, only allows MP3 files to be transferred. Unfortunately, it currently faces legal proceedings – initiated by the heavy metal band Metallica among
others – which aim to prevent the service from transferring copyrighted music. This is because users wishing to build up a music collection with the help of Napster or the other networks (see Table 1) do not actually take any notice of copyright. Until now the court case has had the opposite effect of that intended. According to Media Metrix's estimates, the number of users rose from 1.1 million in January to 4.9 million in July of this year, making Napster the fastest growing application ever registered on the internet. Nevertheless, the question remains whether the service will be closed in the coming months. For precisely this reason the Gnutella project was formed on a website hosted by AOL.
Shawn Fanning, just nineteen years old, founded Napster together with his uncle
In contrast to Napster, this service aims to allow users to transfer all kinds of files and not to rely on one central server to produce and manage the index of all the distributed data. Instead, a ”real” network is in operation. Each participant produces their own index, and search requests are forwarded to the next clients, which in turn forward these to their network neighbours. The reasoning behind this is that if there is no central server, a court cannot shut down the network. But this is where the system reaches its limits. When it seemed that Napster was about to be shut down, thousands of former Napster users flooded to the ”rival” Gnutella, whose capacities were put to the test for the first time. And it seems that the concept behind the protocol used had not been properly thought through. The web publisher Clip2 published a report according to which Gnutella had now come up against a ”modem barrier”. The many search requests forwarded by each client have slowed the throughput to such an extent that it is no longer possible to download properly, and this affects not only modem owners but also all the other participants, as their search requests are also forwarded, and so delayed, by the ”weak” modem users.
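To see why each extra node adds traffic for everyone, consider a much-simplified sketch of the query-flooding scheme described above. This is purely illustrative C, not the real Gnutella code; the neighbour count, ID cache and stub functions are assumptions made for the example.

#include <stdio.h>
#include <string.h>

#define NEIGHBOURS 8                      /* peers this node is connected to */
#define CACHE      1024                   /* recently seen message IDs */

struct query {
    unsigned char id[16];                 /* unique message ID, used to drop duplicates */
    int           ttl;                    /* how many more hops the query may travel */
    char          keywords[64];
};

static unsigned char seen[CACHE][16];
static int seen_count;

static int already_seen(const unsigned char id[16])
{
    for (int i = 0; i < seen_count; i++)
        if (memcmp(seen[i], id, 16) == 0)
            return 1;
    if (seen_count < CACHE)
        memcpy(seen[seen_count++], id, 16);
    return 0;
}

/* stand-ins for the real network and index code */
static void send_to(int neighbour, const struct query *q)
{
    printf("forwarding \"%s\" to neighbour %d (ttl %d)\n", q->keywords, neighbour, q->ttl);
}

static void search_local_index(const char *keywords)
{
    printf("searching own index for \"%s\"\n", keywords);
}

void handle_query(int from, struct query *q)
{
    if (already_seen(q->id) || q->ttl <= 0)
        return;                           /* drop duplicates and expired queries */

    search_local_index(q->keywords);      /* every node answers from its own index... */

    q->ttl--;                             /* ...and passes the query on to all its other */
    for (int n = 0; n < NEIGHBOURS; n++)  /* neighbours, so one slow link delays everyone */
        if (n != from)                    /* whose queries travel through it */
            send_to(n, q);
}

int main(void)
{
    struct query q = { .id = {1}, .ttl = 3, .keywords = "monty python" };
    handle_query(0, &q);                  /* a query arriving from neighbour 0 */
    return 0;
}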
Classified information Though Gnutella claims to be open source and part of GNU, the reality appears to be a little different. Even when development began, when AOL still accommodated the Gnutella website directly on one of its servers, the developers postponed the publication of the source code until the distant future. They said the code would be released once they had reached version 1.0. But after more than half a year’s development work they are only up to version 0.56. Even if the developers could finally make up their mind to make their program open source, they fear that AOL could intervene. After all, the software was developed by the company Gnullsoft, backed by employees of the company Nullsoft (particularly well-known in Windows circles for its MP3 player Winamp), which in turn was bought up some time ago by AOL. The crucial detail: AOL, following the merger with Time Warner if approved by the competition regulators, will be one of the major players in the music business. Table 1: Common file-sharing networks Name Founded in OpenSource Freenet
June 1999
yes
Average data volume available GByte
Gnutella Napster
1997 May 1999
planned no
20 TByte 4 TByte
Scour
September 1997
no
20 TByte
*All hosts that can be searched simultaneously when operating normally 32 LINUX MAGAZINE 3 · 2000
Freenet Freenet is quite clearly open source and completely open to anyone wishing to participate. The project was first conceived by Ian Clarke while he was studying for his doctorate and now has a few ”nodes” or installations. The system was immediately implemented in Java for preview purposes, and a graphical user interface is under development. However, the client isn’t yet suitable for serious use and the indexing and searching with what are called keys doesn’t even seem to be able to satisfy several developers. The Freenet nodes don’t just form an interface with the released and exported resources. They also automatically replicate data where it is frequently requested. However, as they only index some of the data available on the network, it is possible that data for which there is insufficient demand will disappear from the network completely. Therefore, this approach is not suitable for the ”Eternity Service” (based on a paper by Ross J. Anderson) which is said to keep data for all eternity. The Eternity Service attempts to combine the advantage of longevity with that of modern data communications by storing the data on servers scattered around the world. There seems to be a need for it, particularly in view of the ever decreasing lifespans of the storage media used today to store all kinds of data regardless of whether it is a medical report or birth certificate. However, Anderson actually had quite different ideas about his vision. He draws a comparison with the first translations of the Bible, which were one of the main reasons for the Reformation, the subsequent social upheaval and everything that followed. He says that, then as now, a technical innovation – in this case the printing press – ensured that the acquired information could not be suppressed or lost. He therefore requests that the system protect the anonymity of its users and not allow any government or other institution the opportunity to take the information away from the world.
... and all the rest

The idea of sharing something among equals on the peer-to-peer network goes even further. Just as users of file search services only share hard disk
space with other users, unused computer time could be redistributed. However, the projects which have focussed on this subject to date, such as Seti@HOME or Distributed.net, don't distribute computer capacity but use it for a special purpose. Distributed.net, for example, is currently attempting to crack the RC5 encryption algorithm. The GnuSpace project hosted on Sourceforge intends to redistribute CPU time to the user. Unfortunately, the project is still in its infancy and so usable programs can't be expected for some time. There is no shortage of ideas about where to go with the concept of ”distributed resources”. The best example was O'Reilly's Peer-to-Peer Summit, where Hank Barry, CEO of Napster, Andy Hertzfeld from Eazel and representatives of IBM, Microsoft, Red Hat and Gnutella, among others, came together in the middle of September. Also participating in the event was Stanford professor Lawrence Lessig, who has made a name for himself with the papers he has published on computers and society. This informal meeting was also concerned with investigating the potential of distributed systems for technology and society and countering the idea that this kind of technology would only make it easier for pirates to make copies. The participants agreed that the use of dormant memory capacity, computer time and transfer bandwidth through peer-to-peer networks and technologies can only be useful. Due to the increase in the amount of data, it will be increasingly important in future to be able to categorise data sensibly. XML is only the beginning, and it must be made much easier for users to furnish newly produced data with the correct meta data straight away, they said. This technology obviously harbours a considerable degree of dynamism. By the time you read this a verdict will have been reached in the Napster case, and there may be other networks too. There will certainly be new clients and implementations. We will go into more detail on the latter next time. ■

Info
Eternity Service implementation: http://www.cypherspace.org/~adam/eternity/
Another implementation: http://www.kolej.mff.cuni.cz/~eternity/
The Eternity Service, Ross J. Anderson: http://www.cl.cam.ac.uk/users/rja14/eternity/eternity.html
GnuSpace Project: http://gnuspace.sourceforge.net
O'Reilly's Peer-to-Peer Summit: http://www.oreillynet.com/pub/a/linux/2000/09/22/p2psummit.html ■
Table 2: Useable filesharing clients/programs

Freenet:
official Java client - http://freenet.sourceforge.net/ (Java client)

Gnutella:
gnubile - http://www.gnutelladev.com/source/gnubile/ (the developers' Linux client, can be used for uploads)
gnujatella - http://gnujatella.sourceforge.net/ (Java client)
gnut - http://www.mrob.com/gnut/ (console client)
gtk-gnutella - http://gtk-gnutella.sourceforge.net/
hagelslag - http://TieFighter.et.tudelft.nl/hagelslag (console client, GUI planned)

Napster:
gnapster - http://jasta.gotlinux.org/gnapster.html
gnome-napster - http://gnome-napster.sourceforge.net/
gtk-napster - http://www.geocities.com/xilliator/
iNapster - http://members.optusnet.com.au/~iwade/inapster/
javaNapster - http://www.mp3s4u.f2s.com/jnapster/
jnap - http://www.perham.net/mike/jnap/
jNapster - http://members.nbci.com/harikris_v/
jnerve - http://jnerve.sourceforge.net/
knapster - http://knapster.netpedia.net/
Linux Napster Client - http://www.gis.net/~nite/
Lopster - http://lopster.sourceforge.net/
MyNapster - http://mynapster.sourceforge.net/
TekNap - http://www.teknap.com/
XNapster - http://www.xnapster.com/server.html

Scour:
gsx - http://freshmeat.net/projects/gsx/homepage/
JavaScour - http://freshmeat.net/projects/javascour/homepage/
JScour - http://freshmeat.net/projects/jscour/homepage/
Scour Media Agent - http://freshmeat.net/projects/scourmediaagent/homepage/
ON TEST
3D GRAPHICS CARDS
12 graphics cards put to the test
3D GETS NEW IMPETUS BERNHARD KUHN
Direct Rendering Infrastructure has made its way into a number of distributions through XFree86 4.0. What’s more, the 3D drivers available under the GPL or the vendor’s proprietary software can often be found on the CD. For example, SuSE 7.0 provides the option to run a number of ATI, Voodoo or nVidia cards or FireGL1 with 3D hardware acceleration without requiring an in-depth knowledge of Linux. An XFree86 driver module for the G series from Matrox (G200/G400/G450) recently became available too. It isn’t just the silicon that determines the performance of a graphics card. The graphics chip’s driver software also plays an important role. Therefore, we have taken a close look at a dozen graphics cards and observed their stability as well as their performance. Table 1 provides an overview of all the graphics cards that were put to the test (the performance data was provided by the vendors).
Linux supports an increasing number of 3D graphics cards. To see how well they performed, we subjected a dozen different cards to a thorough examination at the Linux Magazine hardware lab.

Voodoo magic in moderation
Two of the three test subjects were familiar from earlier tests. In contrast, the Voodoo5 5500 with its two VSA 100 chips is new to us. Unfortunately, the current Linux driver can’t cope with both chips at present: neither »Scan Line Interleaving« (SLI) nor »Full Screen Anti Aliasing« (FSAA) can be used. Due to the massive size of the card (around 2/3 full size) it was fortunate that, in the case of our test system, a collision did not occur with the midi-tower casing’s drive bays. A suitable drive unit power cable
was quickly found, and this was used to connect the 3dfx monster. Anyone who was irritated by the habit of using a fan on the graphics chip will be even less pleased about the two fans on the two VSA 100 chips. The driver is relatively easy to install. With XFree86-4.0.x correctly configured, you only need to install the appropriate Glide3 library for the graphics card (V3/V5) and a DRM kernel module (see Info box). The rules for the conversion of OpenGL statements to Glide functions are already contained in the XFree86 module tdfx_dri.so. The module section of XF86Config must be expanded to include the two entries Load »dri« and Load »glx« (as is the case with all other 3D cards). Voodoo card drivers have been suitable for common use in games for years. However, SPECviewperf occasionally shows display errors.
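For reference, the relevant part of XF86Config is just a Module section along these lines (a minimal sketch; a working 2D configuration is assumed for everything else):

Section "Module"
    Load "glx"    # GLX extension
    Load "dri"    # Direct Rendering Infrastructure
EndSection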
Matrox races to catch up

As well as a Matrox G400DH-MAX, a brand new G450 (Dual-Head) also took part in our test. The fact that the PCI identifier is the same as that of the earlier model suggests that Matrox hasn't produced a new chip design for its new product. However, the added feature, which mirrors the monitor picture to a television via a composite cable on the second video output, can even be used under Linux. XFree86-4.0 makes the first step in putting 3D into operation child's play: obtain a driver module (mga_drv.o) from the vendor and copy it to /usr/X11R6/lib/modules/drivers. However, as is usually the case with DRI, you also require a card-specific kernel module (mga.o) and »AGP Kernel Support« (see box). The two Matrox cards were extremely stable during the tests. However, 3D acceleration is only functional at the present time (status: mga_1_00_03_beta) at a 16 bit colour depth and without the multihead feature. Anyone with only one monitor but a dual-head compatible card would be better off connecting the monitor to output two if they don't wish to keep plugging into other sockets: until the X server is booted the screen output is replicated on both connections, but the first output switches off in graphics mode.
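In practice the Matrox installation amounts to something like the following sketch (file names as per Table 2; it assumes the driver archive has already been unpacked in the current directory and that the AGP support described in the box is in place):

cp mga_drv.o /usr/X11R6/lib/modules/drivers/    # XFree86 driver module from Matrox
insmod ./mga.o                                  # DRI kernel module built from the DRI CVS tree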
Finally 3D with ATI

Along with two older Rage Fury cards, our hardware lab had at its disposal a Rage Fury Maxx and the brand new Radeon 256. Unfortunately, we were unable to run the latter two under Linux (see box »ATI Fury MAXX and Radeon«). Although ATI published the specifications for its Rage128 chips very early on, it unfortunately took quite a while for a stable X server (2D) to come along for these graphics cards, and there has been no trace of 3D functionality to date. XFree86-4.0 provides a remedy, however. 3D mode requires just two kernel modules, although producing these may present quite a challenge even for advanced Linux users (see box).
[top] Fig. 1: Voodoo5 5500: up to 32 VSA 100 chips can be used, but Linux currently supports a maximum of one chip.
[above] Fig. 2: Matrox G450: 400 series graphics cards are quite suitable for 3D games, but multihead only works in two dimensions at present.
Table 1: An overview of all the graphics cards

Vendor            Product                 Graphics chip    AGP   RAMDAC [MHz]   Colour depth (3D)
Matrox            Millenium G400 DH MAX   G400             4x    360            16
Matrox            Millenium G450          G450             4x    360            16
Elsa              Erazor III              TNT2             4x    300            16/24
Creative Labs     3D Blaster              TNT2 Ultra       4x    300            16/24
Elsa              Gladiac                 GeForce2 GTS     4x    350            16/24
Silicon Graphics  VR3                     Quadro           4x    350            16/24
Diamond           FireGL 1                IBM Rasterizer   2x    250            24
3dfx              Voodoo3 2000            Voodoo 3         2x    300            16
3dfx              Voodoo3 3000            Voodoo 3         2x    350            16
3dfx              Voodoo5 5500            2 x VSA 100      4x    350            16
ATI               RageFury                Rage 128         4x    230            16/24
ATI               RageFuryPro             Rage 128 Pro     4x    250            16/24
The two graphics cards tested were satisfactorily stable. However, minor display errors (with SPECviewperf) are a common occurrence.
nVidia is way out in front

Besides two older cards with TNT2 chips we investigated the performance of an Elsa Gladiac and the SGI VR3 (salvaged from a Silicon Graphics SGI 230). Together with SGI and VA Linux, nVidia has developed an XFree86 driver extension which is similar to »Direct Rendering Infrastructure«. Unfortunately, the source code doesn't contain the proprietary extras, which makes it more difficult to search for errors. If the vendor could guarantee flawless operation, we could forgive the unusual file structure of the drivers (in comparison with DRI). However, in our test lab we occasionally observed the following effect with several nVidia cards and mainboards: if we tried, for example, to take a screenshot of a 3D application using xv, the X server often froze while the 3D program communicated like mad with the server – possibly a deadlock in the arbitration via the GLX protocol. If this communicative process is killed (kill -9), the X server continues to run quite happily – however, Linux users without a second computer (for remote login) are unable to do this and must resort to the reset button. Without open source, the onus is now on the vendor. Interestingly, it was not possible to lock up the SGI VR3 with its Quadro chip, even with a great deal of effort, so the annoying bug could also be hidden in the depths of the nVidia retail chips.
[above] Fig. 3: RageFuryPro: it puts up a brave fight – even if it is somewhat older.
[right] Fig. 4: Elsa Gladiac: only the Quadro chip of the SGI VR3 was able to outperform this card in several tests.
AGP Kernel Support

With the advent of XFree86 4.0 a number of current graphics cards can now prove their three-dimensional capabilities under Linux. This may require additional XFree86 modules and certainly a DRI kernel module, as per Table 2, in addition to a working XFree86-4.0 configuration (2D). Matrox and ATI cards also require a kernel with AGP support. In the case of Linux 2.4.0-test*, the modules agpgart.o, r128.o (for ATI) and mga.o (Matrox) can be compiled directly into the kernel if selected correctly during make menuconfig. As converting to Kernel 2.4 can involve a lot of work due to the different module architecture (depending on the distribution), there is a patch for Kernel 2.2.x for agpgart.o. However, the features of the newer mainboard chipsets are not always completely supported and the AGP module then refuses to be loaded with insmod. If this is the case, entering options agpgart agp_try_unsupported=1 in /etc/modules.conf (or /etc/conf.modules in the case of older distributions) sometimes helps. Users have to create the DRI kernel modules (r128.o and mga.o) for 2.2.x from the DRI CVS tree. Those who, quite understandably, don't wish to undertake a complete 17 MByte CVS update can obtain the considerably smaller kernel module package from 3dfx. This is a snapshot of the kernel/drm directory in the DRI CVS tree, which contains the kernel modules for almost all 3D cards supported by XFree86-4.0.
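As a rough sketch, the workaround for an unsupported chipset looks like this (assuming the agpgart patch has already been applied to a 2.2.x kernel and the DRI module has been built):

# in /etc/modules.conf (older distributions: /etc/conf.modules)
options agpgart agp_try_unsupported=1

insmod agpgart      # should now load despite the unknown chipset
insmod ./r128.o     # or mga.o, built from the DRI CVS tree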
Diamond FireGL1

The Diamond FireGL1, designed for professional CAD applications and built around a Graphics Rasterizer chip from IBM, no longer seemed quite so fresh when subjected to our performance test. Like Matrox and ATI, Diamond is standing dutifully by the DRI specification and so, after downloading the driver package, you just need to move a few files into the correct position (in accordance with Table 2). Unfortunately, the kernel module needed is not yet available in source code form, and so the supplied binary has to be forced upon the operating system (insmod -f firegl1). This could well be the reason the system crashes quite frequently when 3D applications are executed.
The performance tables and diagrams we produced consequently contain some gaps. Users shouldn’t be concerned by the fact that the graphics card can only be operated at a 24 bit colour depth.
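Taken together with Table 2, the FireGL1 installation boils down to something like this sketch (file names as listed in the table; the -f switch forces the binary-only module past the kernel version check):

cp firegl1_drv.o /usr/X11R6/lib/modules/drivers/
cp firegl1_dri.so /usr/X11R6/lib/modules/dri/
insmod -f firegl1        # binary-only kernel module, forced load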
Test environment The measurements provided come from a system with an Athlon Thunderbird 800 based on an Asus A7V mainboard (with VIA-KT133 chipset and 256 MByte PC100 SDRAM). In the case of Quake3Arena the default settings were used and only the resolution was varied. Exception: one of the test subjects only reached its »Top Speed« at 512x384 pixels with deactivated »Game Options« and low texture and geometry details.
Full Screen Anti Aliasing (FSAA)

The annoying »stepped effect« in scenes with a high contrast can be reduced using a simple but computation-intensive trick. The events are recorded by several cameras with a parallel viewing direction. The maximum distance of the cameras from the original viewpoint corresponds to half a screen pixel (see Figure 6). The resulting images, interestingly enough, look better when the cameras are not arranged in a uniform raster.

Fig. 6: Full Screen Anti Aliasing: the same scene is viewed superimposed from slightly different viewpoints

The images of the individual cameras are superimposed before being output to the screen. This reduces the steps (see Figure 7). However, the disadvantage of this process is obvious – the scene has to be rendered several times. Graphics cards with just one pixel pipeline can only draw and mix one scene after the other in the »Accumulation Buffer«. Modern cards have several pixel pipelines which render the same scene at the same time. In the case of 3dfx it takes two (model 5500) or even four (model 6000) VSA 100 chips to complete the job. Unfortunately, the vendor's Linux drivers can't provide full screen anti-aliasing at the present time. Things look different with the GeForce2 GTS from nVidia. Using export __GL_ENABLE_FSAA=1 you can inform the libGL that the four pixel pipelines on the chip should be operated in FSAA mode. According to the nVidia driver's README, export __GL_FSAA_QUALITY=[0-2] can then be used to set the FSAA quality. Our attempts always gave the same result (obviously only 2-fold FSAA). But the improvement in the rendering compared with simple rendering was clear to see (see Figure 7), although the frame rate in our example was halved from 224 to 93 FPS. An export __GL_SYNC_TO_VBLANK=1 also proved to be very useful for synchronising the scene structure in the graphics card with the monitor's image refresh rate, as otherwise linear sequences of motion can, subjectively, seem a little jerky. Strangely enough, Quake3Arena ignored the FSAA environment variables completely and produced the ugly stepped effect as before.

Fig. 7: Once with and once without FSAA: the difference is clear to see.
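Putting the environment variables mentioned above together, a session for an OpenGL application on a GeForce2 GTS might look like the following sketch (the program name is just a placeholder):

export __GL_ENABLE_FSAA=1          # use all four pixel pipelines for FSAA
export __GL_FSAA_QUALITY=2         # quality level 0 to 2 according to the README
export __GL_SYNC_TO_VBLANK=1       # synchronise buffer swaps with the monitor refresh
./some-opengl-app                  # placeholder for any OpenGL program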
Fig. 5: Diamond FireGL1: the good SPECviewperf results produced by this older card give rise to hopes of excellent prospects for later models.
CAD Performance

There's a substantial difference in the performance of the test subjects when measured against the SPECviewperf benchmark. The Graphics Performance Characterization Group (GPC) of the Standard Performance Evaluation Corporation (SPEC) has recently raised the benchmark with the new version 6.1.2 of the 3D performance test. In order to reduce the obstacles to be overcome by graphics cards in the lower price bracket, its predecessor 6.1.1 was also applied (where possible); however, this didn't help the weaker participants in the test very much. The current graphics chips from nVidia are at the top of the rankings. The gap between the Quadro chip on the SGI VR3 and the considerably cheaper Elsa Gladiac (GeForce2 GTS) is small. However, the high-end graphics card is about twice as fast as its retail counterpart in the important ProCDRS test (see Figure 8). In spite of its ripe old age of almost two years, the Diamond FireGL1 leads the rest of the field – as long as you are prepared to disregard its weaknesses (see above). However, the good partial results give rise to hope that the new, generically related FireGL2/3 chips will allow for new maximum 3D performance as soon as the Linux drivers promised by Diamond/S3 are available.
Table 2: Drivers and files

G4x0: XFree86 4.0.1; deviating files in /usr/X11R6/lib: modules/drivers/mga_drv.o; DRI kernel module: mga.o; where to get it: DRI-CVS (Kernel 2.2), in the kernel (Kernel 2.4)
Voodoo3/4/5: XFree86 4.0.1; deviating files: libglide3*; DRI kernel module: tdfx.o; where to get it: 3dfx or DRI-CVS (Kernel 2.2), in the kernel (Kernel 2.4)
Rage128/Pro: XFree86 4.0.1; deviating files: none; DRI kernel module: r128.o; where to get it: DRI-CVS (Kernel 2.2), in the kernel (Kernel 2.4)
nVidia: XFree86 4.0.1; deviating files: modules/drivers/nvidia_drv.o, modules/extensions/libglx.so*, libGL.so.1.2*, libGLcore.so.1* (delete libglx.a and libGLcore.a in modules/extensions); kernel module: NVdriver; where to get it: nVidia (Kernel 2.2 and 2.4)
FireGL1: XFree86 4.0.0; deviating files: modules/drivers/firegl1_drv.o, modules/dri/firegl1_dri.so, modules/linux/libfgl1.a, libGL.so.1.2; kernel module: firegl1[-SMP].o; where to get it: Diamond (Kernel 2.2)
RagePro/SiS/Savage3D: XFree86 3.3.6; deviating files: modules/glx.so, libGL.so.1.0; no DRI kernel module
Table 3: Quake3Arena at 16 bpp [FPS]

Card            Top Speed   512x384   640x480   800x600   1024x768   1280x1024
Rage 128        96.9        56.6      43.2      29.6      20.3       13.1
Voodoo3 2000    80.8        55.2      44.3      34.1      24.2       15.9
Voodoo3 3000    82.1        59.1      49.8      38.0      27.9       18.3
G450            117.9       69.1      56.4      40.7      27.8       18.0
Voodoo5 5500    82.5        62.5      57.0      44.6      33.2       21.4
Rage128Pro      101.6       66.6      61.8      46.4      31.9       20.6
Riva TNT2       113.5       77.7      76.3      54.8      34.1       20.7
Riva TNT2U      115.2       77.7      76.3      54.8      34.1       20.7
G400            119.3       74.2      71.3      56.3      38.5       24.9
Quadro          147.8       109.2     108.7     106.1     87.3       55.5
GeForce2 GTS    148.5       109.7     109.3     108.6     103.9      78.6
Table 4: Quake3Arena demo001 at 24 bpp [FPS]

Card            Top Speed   512x384   640x480   800x600   1024x768   1280x1024
FireGL1         31.6        25.3      18.2      12.3      0          0
Rage 128        70.6        31.0      24.2      17.6      11.7       7.3
Rage 128Pro     94.3        62.7      49.6      36.6      23.6       14.7
Riva TNT2       86.9        69.8      54.9      35.3      20.8       12.9
Riva TNT2U      102.6       69.8      54.9      35.3      20.8       12.9
Quadro          147.9       109.2     107.2     93.0      60.7       36.6
GeForce2 GTS    147.8       109.3     108.1     102.2     65.0       39.5
Beaten by miles, the other products fight for the last places in the rankings. Although, in the case of the top product from 3dfx, this is disappointing, it isn’t devastating as this card is not intended for CAD applications. In this discipline we would have expected more from the G400, which is also aimed at the professional market. However, these 3D drivers are at the beta test stage and under development at Precision Insight, financed by Matrox. Clearly there is still room for improvement.
Games Performance

Tables 3 and 4 show the comparison of all the test subjects with the Quake3Arena demo001. The area marked in green illustrates the resolution at which each of the graphics cards was able to produce (subjectively, for the tester) fluid images. Users may wonder why a card with an average of 25 images per second should seem jerky, but in particularly intense gaming situations just 10 FPS may remain of this average – and this favours the virtual opponent. Once again it is the graphics cards with nVidia chips that stand out when measured against this benchmark. Even at a high resolution the game results aren't bad. Only at a colour depth of 24 bit is the Dual Data Rate RAM no longer able to deal adequately with the demands placed on it. The discrepancy between the high-end graphics card SGI VR3 and the Elsa Gladiac at high resolutions is interesting: the product at the top of nVidia's price range isn't even designed for games. At lower resolutions and texture/geometry details the other contenders can follow the leading duo very well. Even a RageFury achieves around two thirds of the performance of a high-end card with just under 100 FPS. However, this is a purely theoretical benefit, as high frame rates of this kind don't have a substantial influence on the game's events: higher resolutions or texture/geometry details and a lower frame rate (above 30 FPS) give the players a better chance of victory – and not just in violent shoot-em-up action games. Substantial performance losses can occur during trilinear surface filtering where older graphics cards have only one Texture Mapping Unit. However, as all the current graphics cards in the test have at least two TMUs per pixel pipeline, the deviation from bilinear filtering is only minimal (see Table 5).
Fig. 9: SPECviewperf 6.1.2 [FPS] (viewsets Light-04, DRV-07, DX-06, ProCDRS-03, MedMCAD-01 and AWadvs-04 for Voodoo5 5500 at 16bpp, G450 at 16bpp, FireGL1 at 24bpp, Gladiac at 24bpp and VR3 at 16 and 24bpp). Amazing: the two year old FireGL1 beats a number of younger competitors by miles. Disappointing: Voodoo5 in last place.
Fig. 8: SPECviewperf 6.1.1 [FPS] (viewsets Light-03, DRV-06, ProCDRS-02, DX-05 and AWadvs-03 for Voodoo5 5500, Rage128Pro and G450 at 16bpp, FireGL1 at 24bpp, Gladiac at 16 and 24bpp and VR3 at 16 and 24bpp). Whether at a 16- or 24-bit colour depth, nVidia is almost unstoppable. But the driver's stability is disappointing.

Table 5: bilinear vs. trilinear filtering [FPS] (Q3A demo001 at 1024x768)

Card            Bi     Tri
G450            27.8   25.1
Rage128Pro      31.9   30.3
Voodoo5 5500    33.2   31.2
GeForce2 GTS    65     63.7
Performance in 2D operating mode

The differences in performance between the test subjects on an ordinary X server (without 3D) aren't worth worrying about in everyday use. Any half-decent graphics card with supported 2D hardware acceleration functions is fast enough for the job. Even if the speed is measurably doubled, many users won't notice the improvement.
UTAH-GLX for older models

In contrast to the »Direct Rendering Infrastructure« (DRI) of XFree86-4.0, 3D hardware acceleration with older graphics cards is only possible with the help of the »utah-glx« project, though it is then network transparent. As well as the Matrox G200/G400, the ATI RagePro, Intel i810, nVidia Riva (not TNT and younger), SiS 6326, S3 ViRGE and S3 Savage3D are also supported. You »only« need the libGL client library and the GLX server module (don't forget Load »glx.so« in the modules section of XF86Config) and a working 2D XFree86-3.3 configuration. However, these software components have to be built manually from the source. This means (see the checkout sketch after this box):
• Check out the utah-glx source from the developer CVS tree
• Check out the Mesa source from the developer CVS tree
• Compile
These efforts are seldom successful. In our attempts with an ATI RageIIc, S3 Savage4 and SiS 6326 only the latter (with a number of display errors in 2D operating mode) showed the initial stages of 3D hardware acceleration. Even then, this was only with a simple OpenGL example application – the test subjects often responded to the launch of Quake3Arena with a system crash. Experiments with the AGP support provided by utah-glx (agpgart) didn't produce any worthwhile results either.
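A checkout sketch for the two CVS trees (the server paths and module names follow the usual SourceForge anonymous CVS scheme of the time and are assumptions, not taken from the article):

cvs -d:pserver:anonymous@cvs.utah-glx.sourceforge.net:/cvsroot/utah-glx login    # empty password
cvs -d:pserver:anonymous@cvs.utah-glx.sourceforge.net:/cvsroot/utah-glx co glx
cvs -d:pserver:anonymous@mesa3d.sourceforge.net:/cvsroot/mesa3d login
cvs -d:pserver:anonymous@mesa3d.sourceforge.net:/cvsroot/mesa3d co Mesa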
ATI Fury MAXX and Radeon

Two normal Rage 128Pro chips sporting noisy fans reside on the ATI Rage Fury Maxx. Each of the chips has 32MB of its own memory and is supposed to boost 3D performance with the aid of Scan Line Interleaving, as is the case with the Voodoo5 5500. Once again this feature can't be used with Linux. However, the two Rage Fury Maxx chips can be configured individually using XFree86-4.0 and combined to form a multihead display – it is just a shame that the second socket and the other needed components aren't integrated on the card (see Figure 16). Although they are normal Rage 128Pro chips, we were unable to configure a single-head 3D solution with just one of the chips within a reasonable amount of time: XFree86 always allowed the second chip to join in (»Multihead configuration found«). The outlook is even gloomier for the latest product from ATI. The VESA framebuffer device provides Radeon users with a large number of colours and high resolutions but no 2D/3D hardware acceleration.
Fig. 16: XFree86 4.0 would work with the Rage Fury Maxx in dual-head mode, if only the second output were fully equipped.
GLX vs. DRI

GLX is an extension of the X protocol for OpenGL developed by SGI. A 3D application calls functions of the 3D library libGL, which then sends the relevant network packets to the X server. These are interpreted there by the GLX module. The graphics card-dependent 3D driver glx.so is found in the directory /usr/X11R6/lib/modules and must be declared in the module section of XF86Config (see text). The libGL (often known as libMesaGL too) is independent of the graphics card used. Critics of this architecture bemoan the overhead resulting from the network layer: if the 3D application is executed on the same computer as the graphical output, GLX is more of a hindrance than a help. This is why the company Precision Insight came up with the Direct Rendering Infrastructure, which (like the GLX protocol) is implemented in XFree86 version 4.0. GLX and DRI allow graphics card vendors to create operating system-independent drivers for their products. The modules are binary-compatible with all systems running XFree86 (e.g. Linux, BSD, Solaris for IA32). Further information on GLX, DRI and XFree86-4.0 can be found at the following URLs:
http://www.sgi.com/software/opensource/glx
http://www.precisioninsight.com/piinsights.html
http://www.xfree86.org/releaseplans.html
Matrox G200-MMS

With four graphics chips on one PCI board the G200-MMS from Matrox is an interesting product for professional visualisation applications. The G200 chips are hidden behind a DEC 21152 PCI-to-PCI bridge and are treated by the operating system as if they were four separate graphics cards. The board also contains hardware for connecting a video source, though this feature can't yet be used under Linux. There isn't room on a standard slot panel for four 15-pin Sub-D sockets, so two special connections are provided for the composite cables (see Figure 17). However, these are too wide for many computers and can't be fitted properly as the back section of the casing is in the way. Putting a »Multi Monitor System« (hence the abbreviation MMS in the product name) into operation is no easy task for Linux beginners. Although XFree86 4.0 can work with a single Matrox G200 without any problems, automatically creating XF86Config (using X -configure) for multihead operating mode can still be difficult. Therefore, users must be prepared for some manual work. The vendor does provide detailed documentation on its website, which is also useful when setting up other graphics cards. Unfortunately, the G200-MMS isn't blessed with 3D hardware acceleration under Linux: the implementation of DRI in XFree86 only allows one card to work in 3D mode at the present time, so the dream of 3D games on four screens is over. But at around £500 the Matrox G200-MMS is probably only for professional clients anyway.
Fig. 17: Matrox G200-MMS: is it useful to have four G200s on one board?
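The manual work mentioned in the box essentially means one Device/Monitor/Screen triplet per chip plus a ServerLayout tying them together. A minimal, incomplete sketch (the identifiers and BusID values are examples only and have to be taken from lspci on the machine in question; matching Monitor and Screen sections are assumed):

Section "Device"
    Identifier "G200-head1"
    Driver     "mga"
    BusID      "PCI:2:8:0"     # example address, check lspci
EndSection
# ...one further Device section per chip, each with its own BusID...

Section "ServerLayout"
    Identifier "MMS"
    Screen 0 "Screen-head1" 0 0
    Screen 1 "Screen-head2" RightOf "Screen-head1"
EndSection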
Fig. 12: 2D Speed Index [kObject/sec] at 16 and 24 bpp for all twelve cards. All the cards are stable and adequately fast in 2D operating mode, though we expected more from the Diamond FireGL1.
Nevertheless, the LM Speed Index (see Figure 12) proves that the current nVidia chips are in a class of their own. The other contenders, beaten by miles, have little to choose between them, with only the three at the bottom of the rankings differing markedly from those in the middle. However, it's surprising that the brand new G450 was hardly any quicker than a Rage Fury at completing its 2D tasks.
On balance

It was mainly the nVidia products that showed outstanding performance. Unfortunately the beta test status of the software sometimes became obvious where there was a high auxiliary load as the console froze (only in 3D mode). We hope that the vendors SGI and nVidia improve upon their proprietary drivers before the end of the product cycle so that they can also be used in CAD departments and not just to give players a few more frames per second. The Voodoo5 5500 was disappointing in every respect. Without SLI or FSAA the monstrous AGP card hardly differs in its performance from a Voodoo3 3000. While the Voodoo chips were in the lead a year ago when measured against the SPECviewperf benchmark, they are now at the bottom of the rankings. Although 3dfx offers probably the most stable driver, which is almost impossible to kill even if you try, 3dfx products cannot be recommended as a basis for CAD applications. The ATI cards of the Rage 128Pro series produced some surprising results. The somewhat aged Rage FuryPro often outperformed the current products from 3dfx and Matrox. Unfortunately the DRI kernel module requires AGP kernel support. Therefore, users need kernel patches (only for Kernel 2.2) and compilation – an annoying obstacle. The Linux 3D drivers for current Matrox graphics cards are still very young. It's no wonder, therefore, that the performance potential has been inadequately harnessed. Nevertheless, the G400 has already been able to catch up with the leading nVidia products – although only in terms of the game benchmark. With the coming multi-head 3D support the dual-head models from Matrox may attract some professional buyers too. The dusty high-end card from Diamond was able to show what it's got, particularly with regard to the Advanced Visualiser Benchmark (SPECviewperf-6.1.2), and reached half the speed of the leading SGI VR3. Gamers will be put off by the high price of this old model and the poor texture performance.

Info:
Glide3 for Voodoo3: http://linux.3dfx.com/open_source/download/voodoo3_banshee_dri.htm
Glide3 for Voodoo5: http://linux.3dfx.com/open_source/download/voodoo5_dri.htm
DRI kernel module for Voodoo3/4/5: http://linux.3dfx.com/open_source/download/dri/tdfx_drm-1.0-3.src.rpm
XFree86 driver module for Matrox G400/G450: http://www.matrox.com/mga/support/drivers/files/linux_03.cfm
AGP support for kernel >= 2.2.16: http://utah-glx.sourceforge.net/gart/agpgart-2.2.16.patch
XFree86 driver modules for Diamond FireGL1: ftp://ftp.diamondmm.com/pub/display/fire-gl/fire-gl-1
GPC/SPEC: http://www.spec.org/gpc/
XFree86 driver modules for nVidia graphics chips: ftp://ftp1.detonator.nvidia.com/pub/drivers/english/XFree86_40/0.9-5
Xdrivers: ftp://ftp.linux-magazin.de/pub/XFree86/modules/Xdrivers-28092000.tgz
Direct Rendering Infrastructure: http://dri.sourceforge.net
Utah-GLX project: http://utah-glx.sourceforge.net
Precision Insight: www.precisioninsight.com ■
A future for Linux3D

Considerable progress has been made when you consider what the state of the 3D art under Linux was just a year ago. This is due in no small part to the many industrious XFree86 developers and the creators of the Direct Rendering Infrastructure at Precision Insight. However, a number of graphics card vendors have also realised that the complexity of 3D drivers requires them to put in some effort of their own to help speed up development if customers are to be provided with usable software within the life of the product. After the products from Redmond, Linux paired with XFree86 may now be among the operating systems supporting the widest range of current 3D graphics cards. ■
ON TEST
SOFTWARE MANAGEMENT
WhatifLinux Personal Edition
LICENSED TO MANAGE JULIAN MOSS
WhatifLinux is a web-based software management system for computers that run Linux. It will tell you if there are newer versions of packages to be installed, let you see the dependencies and assess the effect of uninstalling a package, and will alert you to important issues – such as security matters – affecting the packages you use. The recently launched Personal Edition is designed for users of standalone computers. We took a look at it.

Info
WhatifLinux Personal Edition
http://www.whatiflinux.com/
Cost: $49
30-day free trial available ■
Installation is performed by running a script from a console window
WhatifLinux consists of a Java-based agent that runs on your computer and communicates with the server at whatiflinux.com. The agents use the information contained in your system’s RPM database, in conjunction with the WhatifLinux Knowledge Base, to determine what updates or alerts are of interest to you. Therefore, prerequisites to using the service are that you run an RPM package-managed distribution and have a Java run-time environment 1.2.2 or later installed. If you are prone to update the software on your system by other means than installing a precompiled binary RPM package, WhatifLinux won’t know what has happened and the
information it provides may not be accurate. Installation is fairly straightforward, but with a couple of obstacles that would probably defeat inexperienced users. From the web site you download a small install script which you must then run from a console window. This downloads the agent software and runs InstallAnywhere to set up WhatifLinux on your system. You must also register with WhatifLinux, supplying an email address, password and a few other details. Registration failed in our case because the user we used to run the installation did not have write access to a Java security file. A suggested script modification didn't work either, perhaps because the instructions as to how to modify it weren't clear enough. In the end we chose the alternative workaround, which was to complete this part of the process with root privileges. Some people may be uncomfortable with the idea of allowing a remote system full access to the details of their system while running as root. Once installation has been completed you must edit a script and also the crontab file, to ensure that the Whatif Agent is running. Inexperienced users may find this a bit difficult, and there is a risk of them messing up an important system file. Of course, you can simply start the agent manually when needed and stop it when finished. This is probably a good idea if you use a dial-up Internet connection as we found that connections were regularly opened whilst the agent was running.
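The crontab change simply makes sure the agent is started periodically. A hypothetical root crontab entry might look like the following sketch (the path and script name depend on where InstallAnywhere put the agent and are not taken from the product's documentation):

# restart the WhatifLinux agent at the top of every hour
0 * * * * /opt/whatiflinux/agent/start-agent.sh >/dev/null 2>&1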
Once an agent is running you can log on at www.whatiflinux.com using your email address and password and start the WhatifLinux Console. The server connects to the agent and displays an alphabetically indexed list of all the RPM packages installed on your system. After selecting a package from the list you can view details about it, including a full list of the package contents, dependencies, known conflicts and any alerts. You can also see if there would be any undesirable effects if you uninstalled the package and find out if there are any newer versions of it. In our tests the list of newer packages seemed not always to be accurate. For example, all the newer versions of wine listed appeared older than the one we had installed. Version 0.60 of the mail client Mahogany was shown as available although we could find no trace of its release at the time of testing either by using rpmfind or by looking on the developers' web site. If you select one of the newer versions of a package listed by WhatifLinux you can view the possible conflicts and unsatisfied dependencies that might arise if you installed it. What you can't do, unfortunately, is actually install it. This option is apparently available in the Workgroup Edition, which also claims to let you update packages simultaneously on many computers across a network. We consider it surprising that this facility has been omitted from the Personal Edition as it is probably the most useful feature for the ordinary user and would be a significant incentive to subscribe to the service. The ability to be alerted of security problems, updates, patches and other important issues affecting the software you are running is another benefit that will be especially useful for administrators of web servers and other business critical systems. The information provided online is detailed, giving an explanation of the problem as well as a recommended fix. Links are also provided if you want more information. You don't have to keep checking the WhatifLinux web site to get this information in a timely fashion as you can have it emailed directly to you as soon as it happens. However, there's a price to pay. WhatifLinux Personal Edition costs $49 a year and the question most people will be asking is: is it worth it? After all, a package manager such as rpm or one of its graphical equivalents will let you install and uninstall packages and help you avoid dependency problems and conflicts. WhatifLinux goes further than this. It will keep you updated with the latest information related to the packages you use and let you check for possible conflicts and dependency problems associated with updates before you even download them. It can also do this for new packages you haven't installed, as long as they are in the WhatifLinux Knowledge Base. Most Linux users are technically self-reliant and are used to getting things for free, so we can't see too many personal users wanting to pay for this
service. It would have more value for newbies – but to be accessible to them the installation procedure should be fully automatic and more bulletproof. If you need to be running as root in order to install the software, the installer should say so. We also think that the facility to install packages using the system should be part of the Personal Edition. Without this, it isn't likely to be of sufficient use to most personal Linux users. ■
[top] The Console displays a list of packages newer than the one installed on your system – but you can’t initiate an update! [below] Alerts concerning security and other important issues can also be sent to you
ON TEST
IPAQ
Compaq iPaq on test
YOU SEXY THING NILS FAERBER
Handhelds are advancing. Although Compaq doesn’t yet ship its new organiser flagship with Linux preinstalled, there are now two Linux distributions for the iPaq. Now no-one has to go without a proper shell while out and about.
I didn’t pay much attention when the first rumours surrounding the iPaq started at the beginning of the year. There had already been a great many product announcements, which soon turned out to be design studies or prototypes and dashed all our hopes. The rumours surrounding the iPaq quietened down too. Then suddenly the finished product was announced and there was an abundance of information, and at almost the same time, news of the Linux port developed by Compaq itself. The first concrete information became available: 206 MHz StrongARM CPU, 32 MByte RAM, 16 MByte flash, a colour TFT screen. What would it be, then? A netwinder for the rucksack? Observers may have thought it impossible to fit all that into a PDA. Then the first pictures on the Net. It’s a PDA! And rather a good-looking one too. It holds IrDA, USB and RS232 interfaces, almost a connectivity wonder. With a compact flash and PCMCIA slot too. And even more gems in the form of stereo-audio with 16 bit 44 kHz and a headphone connection. Even more pictures! It really does exist. But the pictures certainly conceal the true size. It must be large – how else can it accommodate everything? Then the shock – finally, the dimensions were found on the Net: 78mm wide, 130mm high and 16mm thin. The whole thing weighs less than 170g. After a virtually endless search we finally found two iPaqs available for testing but were unable to wait for them to be dispatched. We immediately drove to collect them. The price of nearly £400 inc. VAT was high, but we felt it was worth it. 46 LINUX MAGAZINE 3 · 2000
Unpacking it

The iPaq comes with a serial docking station, power pack, spare pen, the inevitable Windows synchronisation software (Active Sync), a leather case and a dummy jacket. The iPaq's casing is made of metal-plated plastic, fits snugly into the hand and feels solid. At first glance, the built-in holder for the pen seems to have a small design flaw. The pen itself has an oval profile, but unfortunately it is not symmetrical! If you are not careful when you insert the pen into the holder and push it in with just a little too much force, it is almost impossible to get it out again. The power pack can be connected to both the docking station and the iPaq itself. Very sensible, as users don't always need the station to recharge the iPaq. When we first unpacked the iPaq it almost immediately turned into a fiasco, as neither of the two iPaqs was prepared to show the slightest sign of life. The trick was that the resourceful Compaq engineers had incorporated a switch to protect the built-in lithium polymer battery (900mAh). This can be used to completely cut off the power supply: particularly important if the iPaq is not used for a longer period, as you must never let a lithium polymer battery run low! When the iPaq is shipped from the factory this small switch on the lower edge of the iPaq is in the ”off” position. Once switched on, the picture was immediately quite different. With a resolution of 320x240 pixels, the iPaq's screen is really quite large in comparison with a Palm or Helio. The contrast is good and the
backlight ensures that the display is always readable. If the backlight is turned off, the transflective display still provides enough contrast for readability to remain good where the lighting conditions are somewhat better. Under Windows CE 3.0 the brightness of the backlight is adjusted automatically via a built-in light sensor to suit the lighting conditions.
Jackets and Sleeves

Expansion packs for the iPaq are described as ”Jackets” or ”Sleeves”, into which the iPaq can be inserted quite easily. The jackets hardly make the iPaq any bigger, just somewhat fatter depending on the type of jacket. For this purpose the iPaq has on its lower edge not only a socket for the combined RS232/USB and power pack plug but also a system connector where all sorts of interesting signals can be found. These include the compact flash and PCMCIA interface, an audio line-out and an input for an external battery pack. Only the compact flash jacket is available at the present time. The PCMCIA jacket, with a built-in battery pack to supply the PCMCIA cards, is to follow very soon. Other jackets are already under discussion, including a game jacket with larger loudspeakers and special controls that are especially suitable for games. There are practically no limits to the imagination here and there are sure to be several crazy developments. Compaq has published the specifications for the interface and for the iPaq as a whole. There will also be a development jacket for interested developers. This consists of an empty jacket with the special plug-in contact so that one's own design can be inserted fairly easily. Unfortunately, the jackets are relatively expensive. The compact flash jacket will cost between £40 and £50, the PCMCIA jacket between £75 and £100. Let's hope that these prices will fall if sales are high.
But now to Linux!

As mentioned, the iPaq incorporates 16MB of flash memory. This is loaded with Windows CE 3.0 at the factory. Unfortunately, this is deleted by the installation of Linux. Although it is possible to read out and save the flash memory contents using the Windows CE boot loader, no-one is guaranteeing that it will ever be possible to get the saved code back into the flash. But quite honestly, who wants that? This is also the most critical moment in the Linux installation process. The entire flash is reprogrammed, including the boot loader, which is what brings the hardware up once it is switched on. If the data is written incorrectly during the installation process you will have a wonderful but expensive paperweight or a case for the service engineers. It is
not known whether and how Compaq provides compensation or carries out repairs if this happens. I have only ever heard of one case where this was necessary, and the party concerned lived two streets from the Compaq lab in the USA and was invited by the developers to simply drop by. The iPaq's initial installation process is relatively simple. You should obtain the latest version of the Windows CE boot loader (currently osloader1.3.0.exe from distribution V0.15) from handhelds.org. This is a program for Windows CE containing the same boot loader used to boot the system, only this one is started not from the flash memory but as a Windows CE program. There are only three ways to get the osloader program onto the iPaq, and the first was ruled out for us straight away:
• Load the file onto the iPaq from a Windows PC using Active Sync. However, I do not have Windows.
• Write the file to a compact flash card and start it on the iPaq using the compact flash jacket. If you do not have a CF jacket or cannot write to CF cards, only the third option remains.
• Establish a direct PPP connection between the iPaq and the Linux computer and transfer the file from the Linux computer using Windows CE Internet Explorer. Not the simplest method, but it works (a sketch of the Linux side follows at the end of this section).
Once the OS loader is safely on the iPaq, things become exciting. On the Linux computer start a terminal program (e.g. minicom) and configure it for the interface to which the iPaq is connected. The following parameters are probably the most advisable: 115200 baud, 8 data bits, 1 stop bit, no handshake, neither RTS/CTS nor XON/XOFF. Once the terminal program has reached this stage, you should start the OS loader on the iPaq and select Tools–>Bootld–>Run. Shortly after, the iPaq's screen should switch off and the boot loader prompt boot> should appear in the terminal program. Thus far nothing has happened and a restart would restart Windows CE. However, the next step could be the killer, and some preparation is required.
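For the PPP route, the Linux side only needs pppd listening on the serial line and any small web server from which Pocket Internet Explorer can fetch the file. A rough sketch (the device name, IP addresses and the web server's document root are assumptions for the sake of the example):

# put osloader1.3.0.exe into the web server's document root, e.g. /var/www
pppd /dev/ttyS0 115200 192.168.1.1:192.168.1.2 local noauth persist
# then point Windows CE Internet Explorer at http://192.168.1.1/osloader1.3.0.exe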
Installation of the boot loader

The boot loader, and all other components to be programmed into the flash, are transferred to the iPaq via the serial XModem protocol. The XModem implementation in the boot loader is still somewhat flaky and doesn't like it at all if there is a delay before the transfer starts. The recommended procedure is as follows. First enter the boot loader command to start receiving. Then send the file from the Linux computer. There should be a period of no more than one to two seconds between the time the boot loader command is entered and the time the file is sent. However, the method described below works reliably. In the case of Minicom, quit it using the
option ”Q” instead of ”q” so that the current interface settings are not reset. If you are not using Minicom, the following script can be used to set up the interface:

#!/bin/sh
if [ -z "$1" ]; then
    echo "Usage:"
    echo "  $0 <ttydev>"
    exit
fi
stty 115200 -echo -echok -echoe -echoctl -echoke -onlcr -inlcr < $1

The command to load the boot loader then has to be issued to the iPaq. A simple echo is sufficient:

echo "load bootldr" > <ttydev>

A second script is now used to send the file:

#!/bin/sh
if [ -z "$1" -o -z "$2" ]; then
    echo "Usage:"
    echo "  $0 <filename> <ttydev>"
    exit
fi
sx -b $1 > $2 < $2
cat $2

This must be started immediately after the command to receive the file! bootldr-0000-2.9.5 is given as the file for the first load bootldr command. This is the boot loader written to the boot sector of the flash (from address 0). After the transfer is completed the display shows the boot loader messages, which should end with ”verifying … done.”. After this the iPaq can be removed from the station and the reset button activated (a small hole on the lower edge, opposite the power pack socket). Back in the station, the boot loader should generate another message. If in doubt, you should quit the script and restart the terminal program, pressing Return where necessary. Congratulations! The boot loader has now been started from the flash and the installation process can continue.
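Saving the two scripts as, say, setup-tty.sh and send-xmodem.sh (the names are our own, not from the distribution), a transfer session for the boot loader then looks roughly like this:

./setup-tty.sh /dev/ttyS0                        # prepare the serial line
echo "load bootldr" > /dev/ttyS0                 # tell the boot loader to start receiving
./send-xmodem.sh bootldr-0000-2.9.5 /dev/ttyS0   # send the image within a second or two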
Status Quo

The next part of the installation procedure depends on what is to be installed on the iPaq. There are currently two alternatives available:
• Handhelds.org iPaq Linux Distribution, V0.12 to V0.15
• PocketLinux
The distributions named above differ in their installation methods. Even the installation method for the iPaq Linux V0.14 from Handhelds.org differs from that for V0.15. Fortunately, the installation method for the Handhelds.org version is well
documented for each release. Users will also find HTML files in the subdirectories of the individual versions.
Handhelds.org Linux

Handhelds.org is the main development site for the current iPaq Linux. The site is intended to be a general platform for handheld Linux and offers the usual services such as WWW servers, mailing lists, FTP servers and CVS servers. It is all sponsored by Compaq, so it is no wonder that the focus is currently on the iPaq. The current development version of the iPaq distribution can also be found on the FTP server ftp.handhelds.org in the subdirectory /pub/linux/compaq/ipaq/. The current version is V0.15. The distribution includes:
• Bootloader V2.9.5
• Linux kernel 2.4.0-test6-rmk5-np2-hh3, normal zImage
• Initial RAMdisk, cramfs image
• root file system, cramfs image
• usr file system, cramfs image
Due to the ample flash memory on the iPaq, the root and usr file systems are well filled and practically all the standard shell tools, such as grep, cat, wc and ls, are included. This may sound somewhat strange, but those of you who have worked with downsized systems will know what a relief it is when everything works like a normal Linux system. The cramfs images are written to the relevant flash areas during the installation process and mounted from there. This means there is no RAM disk copied to memory. This saves a considerable amount of RAM, but has the disadvantage that the file system is not writeable. If it is to be changed, a new image has to be produced and programmed into the flash, which is practically infeasible while the system is running. In addition to the file systems with static data, a dynamic RAM disk is also set up. This RAM disk is dynamic in two senses. First, data can be stored there at run time, although it is no longer available after a reset or other restart. Second, this is the new RAM-FS, which, in contrast to normal RAM disks of a fixed size, dynamically takes the memory it requires from main memory and gives it back again too. This provides some scope for experimenting. After the file system images have been installed using the boot loader and a reset has been activated, the Handhelds.org Linux immediately starts in graphical mode, the LED flashes green and the user is requested to calibrate the pen on the touchscreen. Once this is done, believe it or not, X11 starts! Yes, an X server runs on the iPaq (see Figure 1). The X server currently runs in landscape mode (i.e. not properly), but work is being done on this. Jim Gettys, one of the fathers of X11 and now an employee at Compaq, made a substantial
contribution to the X11 version used. The X server itself is an extremely slimmed-down version with a file size of around 600 Kbytes and approximately 1.2 Mbytes of memory usage at run time. The picture shows not only some of the standard X11 applications such as xterm, oclock, xeyes and xload but also xscribble, a handwriting recognition program for X11 which could be described as a mixture of Jot (used in Windows CE) and Graffiti (which comes with PalmOS). With some good will, this functions quite well and sends the recognised writing to the X11 program that has the input focus. TWM is currently being used as the window manager. More complex applications such as the MP3 player GQmpeg have also been ported and are contained in the distribution, which leads us directly to the next topic.
Available drivers

Kernel development is mainly under way at Compaq, and so the most up-to-date drivers can be found in the distribution from Handhelds.org. Quite obviously, the frame buffer is supported, as the X server used relies on it. The touchscreen works and serves as a mouse substitute under X. GQmpeg has been ported for a good reason, as audio device support is provided too. The IrDA interface is recognised by the kernel and Fast IrDA (FIR, 4 MBit/sec) should function, but unfortunately the current version lacks the IrDA protocol driver. The compact flash and PCMCIA jackets can be used, making data exchange considerably easier. It wasn't possible to test the USB driver due to the lack of a USB station, but it is also contained in the distribution. When connected to a Linux host computer it will then be possible to use the USB connection as a network connection, and therefore to use TCP/IP as normal.
Things that don't yet work

Power management is among the things that don't yet work. This means that the only way to put the iPaq in energy-saving mode is to switch it off. However, work is already under way on various options. One of these is known as a clock scaling extension. This reduces or increases the CPU speed depending on the workload and thus saves a considerable amount of energy. The backlight and the display itself are already switched on and off by the X11 and console screen savers. What is still lacking is a way of putting the computer to sleep properly, i.e. putting the CPU and the DRAM in standby mode and switching off all the peripherals. This would then be the ultimate low-energy mode; power consumption can hardly get any lower than that. However, this gives rise to a problem, as the iPaq doesn't have a real-time clock and the system time simply stands still in this kind of mode. It isn't easy to wake up the CPU either, as all the laboriously adjusted system settings, e.g. for the DRAM, have to be recalculated and reinitialised. The iPaq currently runs for about six to nine hours with a fully charged battery and without any excessive demands on the system. Apart from the light sensor, all the hardware drivers are available and functioning (in the case of the audio driver, only from V0.16 onwards).
[left] Fig 2: Handhelds.org-Linux with GQmpeg MP3 Player [right] Fig 1: Handhelds.org Linux, X11
PocketLinux
Sponsored by Transvirtual Inc., PocketLinux is said to be a fully-fledged Linux PDA desktop system using Java. It is based on a customised version of the Kaffe Java VM and, again, uses the frame buffer directly; its own GUI toolkit takes care of the graphical elements and the display. For text input the developers have implemented, in Java, their own handwriting recognition system similar to xscribble as well as a virtual keyboard. The sparkling announcements and the screenshots shown on the website are promising, but a test is more sobering. The screenshots feign a completeness that doesn't exist in reality. Almost nothing works, with the exception of the application manager and the notepad (at least
not in our test). Performance also leaves a lot to be desired. The iPaq is unbelievably fast under X11 with the Handhelds.org Linux, but under PocketLinux delays and glitches occur in the graphical interface. The announcement that PocketLinux will be offered in the same form for the less powerful VTech Helio too is not promising. The ideas behind PocketLinux are certainly good, but it doesn't seem possible at the present time to produce a complete graphical application suite for PDAs in Java. Nevertheless, it is worth a try. There's a downloadable demo for anyone wishing to try it on their iPaq. Unfortunately, there are no installation instructions, so here are a few quick tips (consolidated in the sketch below):
• Only the two files ipaq-imagelinuxworld.video.zImage and ipaq-imagelinuxworld.video.gz are required for the demo.
• In contrast to the current Handhelds.org Linux version, these two files have to be programmed into the flash from the bootloader using load kernel and load ramdisk.
• The command line for the Linux kernel should be linuxargs="initrd root=/dev/ram ramdisk_size=16384".
• boot then starts the demo.
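Put together, a session at the bootloader prompt would look roughly like the following. The prompt string and the way the two image files are transferred to the device are assumptions on our part rather than anything taken from (non-existent) installation instructions; only the command names and kernel arguments above are given by the demo itself.

boot> load kernel      (transfer ipaq-imagelinuxworld.video.zImage)
boot> load ramdisk     (transfer ipaq-imagelinuxworld.video.gz)
boot> linuxargs="initrd root=/dev/ram ramdisk_size=16384"
boot> boot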
In-house developments
Compaq has also thought about your own in-house development. A complete ARM cross-development package is available from a second research project known as Skiff. The package contains all the programs required to build the kernel and user-mode applications under x86 Linux. This saves a considerable amount of work and fiddling about, and functions as soon as it is installed. The current kernel source and a few more source and binary packages are also available on the Handhelds.org FTP server. Although, due to the currently rapid rate of development, some of the documentation on the Handhelds.org WWW server is out of date, it is still a valuable source of information. Those of you with experience of system development shouldn't find it difficult to work on your own projects using the tools and documentation. Compaq has another gem for larger development projects. Skiff, mentioned above, is a StrongARM-based project that seems to be focusing on small servers. A few of these prototypes are already available in the Compaq labs and are freely accessible over the Internet. Five of these machines can be found at the addresses skiffcluster[1-5].handhelds.org and can be accessed via the account guest without a password using telnet, ftp and rcp. As hard disks are connected to these devices, larger projects can be compiled "native" directly, without going via a cross-development environment.
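To give a feel for how the cross-development package is used, here is a minimal sketch. The unpack location and the arm-linux-gcc tool name follow the usual cross-toolchain conventions and are assumptions on our part; only the archive name comes from the Info box below.

tar xzvf arm-linux-toolchain-post-2.2.13.tar.gz   # unpack the toolchain on the x86 host
export PATH=$PATH:/path/to/arm-toolchain/bin      # illustrative path to the cross tools
arm-linux-gcc -o hello hello.c                    # cross-compile a test program for the StrongARM

Alternatively, smaller projects can simply be copied to one of the skiffcluster machines with ftp or rcp and compiled natively there.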
Fig. 3: PocketLinux Desktop
Outlook
It seems as if the whole world is about to turn to embedded Linux. New desktops and new embedded toolkits are being announced everywhere. Trolltech has announced an embedded Qt which runs on the raw frame buffer without X11, and in the GTK camp there are more and more rumours that this cannot simply be left unanswered, so there are plans to do something. Tangible results, however, are rare, if they appear at all. So let's stick to the facts. Apart from the Agenda VR3 from Agenda Computing, a fully-fledged Linux PDA doesn't exist at the present time. All the currently available models have a problem somewhere, usually in their power management. The future of graphical interfaces for Linux PDAs is also completely unclear. For smaller devices X11 represents overkill; for larger ones like the iPaq it seems feasible. The only question is whether X11 on a PDA makes sense. The highly regarded network transparency can hardly be the reason – when is a PDA ever on the network? That leaves the portability of existing applications. But even this is not without its problems, as a resolution of just 320x240 pixels is insufficient for most X11 programs today. The real question is whether the window paradigm is practical for a PDA at all. These are exciting times. Every day there is something new to discover in this area and I am sure there will be something for everyone. ■
Info
Handhelds.org homepage: http://www.handhelds.org/
Handhelds.org iPaq Linux FTP: ftp://ftp.handhelds.org/pub/linux/compaq/ipaq/
PocketLinux: http://www.pocketlinux.org/
PocketLinux demo: ftp://ftp.pocketlinux.org/dists/bignoodle/main/flash-images-ipaq/
ARM cross-development package: ftp://ftp.handhelds.org/pub/linux/arm/toolchain/arm-linux-toolchain-post-2.2.13.tar.gz
Agenda Computing Inc: http://www.agendacomputing.com/
vCard and vCal specifications: http://www.imc.org/pdi/ ■
SuSE Linux 6.4 for PPC on an IBM RS/6000 B50
STRONG AND BLACK MICHAEL ENGEL
The B50 is one of the first RS/6000 models available with the option of preinstalled Linux. We looked at what this computer has to offer and whether it lives up to the promise of its chic exterior. The B50 reached us from IBM well-packaged in a large crate on a pallet (including several kilos of prospectus material – sadly the barbecue season is now over!) It arrived with Linux from SuSE (SuSE PPC 6.4) already installed. At the time of our test, version 7.0 of SuSE was only available for x86-based systems and (for the first time) SPARC. Owners of PowerPC and Alpha-based systems will have to wait a bit longer.
Hardware
The computer itself, at 8.9cm x 44.7cm x 61.2cm (H x W x D), looks well designed and is intended for installation in a 19-inch rack, preferably in large quantities (IBM likes to advertise the B50 with pictures of a 19-inch rack in which ten B50s are installed). The 14.5 kg weight is, thanks to two fold-down handles cleverly fitted on the sides, easy for a single system administrator to carry and shove into the 19-inch rack. From the outside it appears that IBM has thought of everything when equipping the B50. It comes with interfaces for keyboard and mouse (PS/2), two serial ports, one parallel port, an
external Ultra SCSI connection and 10/100 Mbit Ethernet. As for mass storage, IBM offers a 1.44MB diskette drive, a SCSI CD-ROM drive and two hard disk plug-in slots for UW-SCSI hard disks, accessible from the front behind a flap. Plus, as with every RS/6000, there is an LED diagnostic display which, after the system has been switched on, shows a code indicating the self-test step or section of the boot procedure currently being executed – handy for diagnosing problems that occur before screen output is possible. But it's what's inside that really counts. The B50 is very engineer-friendly – most activities can be performed without a tool. For example, the casing cover is secured by three knurled screws. After removing the cover you find yourself looking at a very neatly constructed computer which, thanks to the PowerPC, also manages with only three fans (I was unable to determine precisely whether the hard disk slots had their own fans). The RS/6000 came to us fully equipped: that is, with a 375 MHz PowerPC 604e CPU with 1 MB L2 cache, 1 GB RAM in four 256 MB DIMM modules and two IBM DMVS18D U2W-SCSI hard disks, each with 18 GB capacity. It also boasted a Matrox G200 graphics card (a PCI version) and an additional SMC PCnet32 Fast Ethernet card. The package was rounded off by a well-designed PS/2 keyboard in
Front with CD-ROM drive and plug-in slots for the hard disks
chic black (and with no Windows keys) and a remarkably small two-button PS/2 mouse, which I found hard to get along with. But since the B50 is really mainly intended for rack installation, the keyboard and mouse will seldom come into play. The SCSI controller in the B50 comes in the form of the tried and tested Symbios Logic 53C875, onboard. The installed hard disks from the Ultrastar product line are high-tech too: with 10,000 rpm, 2 MB cache, an SCA-2 connector and seek times of 4.9 milliseconds, they are ideal for a server machine like the B50.
Software
A fully-loaded rack
A glance at the inside
As mentioned, the B50 came with a pre-installed SuSE PPC 6.4 distribution. Unfortunately the root password was unknown, so after switching on for the first time the guessing game began – without success. We decided to try the tried and tested kernel option init=/bin/bash. Just a minute, though: how? The B50 began to boot Linux immediately after the graphical start-up screen; there was no sign of a boot loader. Like all modern PowerPC systems, however, the B50 ought to have OpenFirmware. The long and the short of it is that a bit of a search on the Net produced the answer: when switching on the system you have to hold down the F8 key, after which you are greeted by the OpenFirmware prompt and the machine can be booted "by hand". Once this little problem had been solved, there was no longer any obstacle to curiosity. SuSE installs a 2.2.14 series kernel with almost all the necessary drivers compiled in. After booting you are greeted by a nice penguin – so framebuffer support in the kernel is also present. A quick ls -l > /dev/fb0, which ought normally to produce coloured patterns on the console, however, on the
B50 led to a kernel panic. Not a pretty sight … One nice feature for long-term operation, however, came to light immediately – the B50 restarted automatically a short time after the kernel panic. On closer examination of the installation one notices that the machine has been installed by IBM. The LVM (Logical Volume Manager) from SuSE is straightforward to use and all partitions (apart from the boot partition) are arranged in a volume group named rootvg – AIX says hello. The use of the LVM certainly makes sense for server applications, since it can help to reduce downtime. The file system used, however, is still ext2, since ReiserFS at present only functions on x86 systems and IBM still has work to do on its JFS implementation for Linux before it will be ready for the end user.
The X Window System was configured, to play safe, with a resolution of 640x480 pixels. As usual with SuSE, SaX can be used for additional configuration of X11. At this point it appeared that the graphics card was only supposed to have 2 MB of video RAM. This would of course considerably restrict the choice of video modes for the X server (which is a framebuffer X server), and it seemed to us highly unlikely that a current graphics card would be delivered with such a small amount of video RAM. Another search on the Net showed that the problem is already known to IBM – a patch is already available which enables the video RAM to be correctly recognised (IBM designates the Matrox G200 as GXT130P). There are a few other interesting patches for RS/6000 systems running Linux, such as those which make it possible to compile IBM's JFS on PowerPC.
A number of interesting tips are also starting to appear on SuSE's web site. Among other things, there is a description of how the onboard sound chip, a Cirrus Logic CS4236+, which is not automatically recognised by the SuSE installation, can be configured. For the curious, we have immortalised the YaST parameters to be set in the following table:

YaST parameter    Value
snd_port          0x534
snd_cport         0x538
snd_irq           5
snd_mpu_port      0x330
snd_mpu_irq       9
snd_fm_port       0x388
snd_dma1          1
snd_dma2          3
snd_isapnp        0

Why IBM uses an ISA sound chip is still a mystery – maybe it's just a leftover from the old RS/6000 43P model.
Network connections functioned straight away and the additional Ethernet card worked immediately (both the onboard Ethernet and the extra card are based on the AMD PCnet32 chipset). If you want or have to use Token Ring, this is also supported by IBM. A patch for Olympic and LanStreamer Token Ring cards is available but, lacking both the card and a Token Ring network (Ethernet, FDDI and ATM will have to do), we could not test it.
For installation in a 19-inch rack the computer will of course be used without a VGA monitor, keyboard and mouse, since these take up space unnecessarily in the rack. IBM has thought of that: the B50 can be installed and used without any problem via a terminal on the first serial port. When booting Linux you must not forget to pass the parameter console=ttyS0 as well, otherwise there won't be much to look at. A plea to the developers – unless we are very much mistaken, the console just used can be detected by querying an OpenFirmware environment variable, which would enable Linux to select the console automatically.
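For reference, this is roughly what booting the B50 "by hand" looks like. The OpenFirmware prompt and the device path are illustrative assumptions (they depend on the firmware and on where the kernel lives on your disks); the kernel parameters console=ttyS0 and init=/bin/bash are the ones discussed above:

0 > boot disk:2,vmlinux console=ttyS0 init=/bin/bash

The second parameter also solves the unknown-root-password problem: once the resulting root shell appears, the password can be reset in the usual way.

# mount -o remount,rw /
# passwd root
# sync; mount -o remount,ro /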
All new ...
Unlike the kernel versions for most other processor platforms, the PowerPC port was developed with the aid of a version management tool. This step is to be welcomed in principle. Unfortunately, the tool used is not the normally used (and de facto standard for open source projects) CVS but a tool named "BitKeeper", which is available free of charge from www.bitkeeper.com. Unfortunately it only appears to be available as a binary for the commonest Unix platforms and Windows. Even if the step towards version management is welcome, the use of a non-open-source tool for this is to be deprecated. The development method used is presumably the reason why current standard 2.4 test kernel versions cannot be compiled without problems for PowerPC. Before the final release of kernel 2.4 there is surely still a bit of work to be done here. Current kernels (both the 2.2 and the 2.4 test series) are available (see Info box). In order to use one of these kernels, however, you must first download BitKeeper and, after it has been installed, run it as follows:

bk clone bk://oss.software.ibm.com:port version

The values to be used for port and version can be found in the following table:

Version             port    version
2.2, stable         8000    stable_2_2
2.2, development    8001    dev_2_2
2.4, stable         8002    stable_2_4
2.4, development    8003    dev_2_2
More info on LinuxPPC from IBM can be found at http://oss.software.ibm.com/developerworks/opensource/linux/projects/ppc/
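As a concrete example, fetching the stable 2.4 tree from the table above and building it might look like this. The build targets shown are the usual ones for a PowerPC kernel of that era and are an assumption on our part, not taken from IBM's instructions:

bk clone bk://oss.software.ibm.com:8002 stable_2_4
cd stable_2_4        # or whatever directory name bk created
make menuconfig
make dep && make vmlinux && make modules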
Applications
What can you use the B50 for, then? Certainly the B50 would work well as a workstation computer. But at a price of around £3,000 for
even a basic configuration, that can hardly be its main intended use. Rather, IBM sees the B50 under Linux being used as a server, especially as a web server for ISPs. For other services on the network, such as a robust file server, high-speed router or firewall, the B50 would also work very well. During our tests we pushed the system load up to 60, which didn't bother it in the slightest – though running at such a high system load does make things somewhat sluggish. Which brings us to a point that should not go unmentioned – the 604e processor used comes from the penultimate PowerPC generation and is really slow in comparison with the current PowerPC G4 and x86-compatible processors (not to mention 21264-series Alphas). So perhaps it is not such a good idea to build a Beowulf cluster for number-crunching out of B50s.
Desirable
Apart from a few little flaws in the preinstalled SuSE version and some components that have to be installed later, the B50 in our test proved to be a problem-free and very reliable computer, with advantages for typical server applications compared with x86-based systems or even G4 PowerMac systems under Linux (just think of the serial console, which works without any problems at all). But there are still a few wishes left … Concerning SuSE, we wish that the support for the B50 were complete. With the excellent support from IBM this should be no problem from the software side. The documentation on installation on non-PowerMacs could also easily be a bit more comprehensive, as this would certainly make it easier for a few AIX-spoilt administrators to get on board with Linux. A boot loader that was easy to configure and use would be very nice, too: at the moment the kernel lies raw on a special boot partition. A yaboot version has since become available from IBM's web site, but only as a last resort. A functioning journalling file system would be wonderful, but that is probably just a question of time. Oh yes, it would also be a good idea if the whole memory could be recognised without any problem – anyone who orders the B50 with 1 GB will certainly be cross otherwise … From IBM we would like to see a more up-to-date processor (a fast G4 would surely fit the bill) and perhaps even support for a second CPU. For a server in this performance class the prices are certainly reasonable, and IBM also grants generous discounts to educational and research establishments, removing the biggest obstacle to the use of the B50 with Linux. We did have just one last wish … but no: sadly, the B50 had to go back to IBM after our testing was completed. ■
Info
Info on B50: http://www.rs6000.ibm.com/hardware/enterprise/b50.html
Linux for IBM RS/6000 systems: http://oss.software.ibm.com/developerworks/opensource/linux/projects/ppc/
SuSE 6.4 for PowerPC: http://www.suse.de/de/produkte/susesoft/ppc/index.html
Patch for the recognition of the complete graphics memory: http://oss.software.ibm.com/developer/opensource/linux/patches/ppc.php#GXT130P
IBM Ultrastar hard disks: http://www.storage.ibm.com/hardsoft/diskdrdl/prod/us18lzx36zx.htm
Tips for using SuSE 6.4 on the B50: http://oss.software.ibm.com/developerworks/opensource/linux/projects/ppc/b50tips.php
Current Linux/PowerPC kernel versions: http://oss.software.ibm.com/developerworks/opensource/linux/projects/ppc/code.php ■
So many possibilities
WINDOWS EMULATION BY HANS-GEORG EßER
Be honest, even as a fully-fledged Linux devotee, you are bound to miss one program or another from the world of Windows. Some manufacturers are still stubbornly refusing to port their software, so in many cases the only option is to keep on booting Windows.
It doesn’t have to be that way. When it comes to using your favourite Windows programs under Linux you have a range of options. Consequently, the Windows partition can disappear from your computer once and for all. There are broad variations in the techniques used for emulation: The free project Wine is one example. This makes libraries available that directly convert Windows function calls into the corresponding Linux calls. It is not necessary to have a version of Microsoft Windows installed. VMware and Win4Lin take an alternative approach. Both emulate a complete PC and allow you to install an original Windows package on it. VMware goes one better, and provides support which makes it possible (in principle) to run just about any operating system you like in the virtual machine.
Windows 98 in the VMware window under Linux
The VMware virtual machine comes up with a BIOS opening message and even offers a BIOS set-up. Windows can be installed just as on any "normal" PC: simply insert the Windows installation CD, boot the virtual machine and run through the installation procedure. Windows 95, 98, Millennium Edition, NT 4.0 and 2000 will all, in principle, run well. The installation of any preferred Windows programs (such as Microsoft Office 2000) is no problem at all. But for the latest versions of Windows you will need a fast host computer. Windows 2000 struggled with the installation for an hour and a half in VMware on a dual Pentium III 500, and even then it was not particularly fast. By comparison, NT 4.0 runs perfectly smoothly on the same computer under the same version of VMware. Wine, on the other hand, offers less compatibility. It's still in the alpha stage and has serious problems with many current Windows programs. But it does offer seamless integration into the Linux environment: any Windows program executed under Wine runs in a normal Linux window, and these windows can share a desktop with other programs, whereas VMware opens one big Windows desktop window containing all Windows programs. Windows programs also run noticeably faster under Wine, since they are not running on simulated hardware. If the programs you need are compatible with Wine, then Wine is the best choice – especially since you don't need a Windows licence to run it. Whichever way you look at it, it's possible to use any program which has not yet been ported to Linux. Never reboot again! ■
PC emulation
VIRTUAL COMPUTER HANS-GEORG EßER
Possibly the most popular product among Windows emulators is VMware, a commercial program from the US-based company of the same name. But VMware is more than a Windows emulator: it actually emulates an entire PC.
Seeing a freshly-installed and appropriately configured VMware installation start for the first time is a sensation in itself. In a black window, the BIOS start-up message of an ordinary PC appears, the memory is counted up and you have the option of calling up the BIOS set-up, just like on a real PC.
You will find a sample installation with explanations in the box "Installation with vmware-config.pl".
Installing VMware

Two steps are necessary to get the benefit of Windows emulation. Firstly, the VMware software itself has to be installed. A licence is required to be able to use VMware. You can obtain a free 30-day evaluation licence from VMware's home page, or you can buy a long-term VMware licence; the price starts at US$99 for personal, non-commercial use. Whatever you do, the licence must be stored in the .vmware directory in your home directory under the name license. The performance of VMware is unaffected by the type of licence obtained. In most cases you will install an RPM package – in the case of SuSE Linux 7.0 it is the file vmware-2.0-55.i386.rpm. You install this as usual as administrator root using the command

rpm -i vmware-xxx.i386.rpm

This does not mean VMware is ready to use. A few special kernel modules are required. These are included in the VMware package for various kernel versions; if you are running a distribution for which precompiled modules are not supplied, the installation script will compile them. To install – and if necessary, create – the kernel modules, call up the program vmware-config.pl (again as root). A longish question-and-answer game then begins. This will install the kernel modules and, if required, add a DHCP server from which the operating system running on the virtual PC (known as the guest operating system) can automatically obtain its own IP address. Your Linux computer will receive a second IP address at this point, via which it can be contacted from the guest system.

Preparing VMware for Windows installation

When starting VMware for the first time (by entering vmware as a normal user, not as root) you can call up the VMware Configuration Wizard, with which you can prepare VMware for a Windows installation. To do this, first select the guest operating system. In VMware 2.0.x the systems on offer are MS-DOS, Windows 3.1, 95, 98, NT 4.0 and 2000 as well as Linux (yes, you can run Linux in a virtual machine under Linux!) and FreeBSD. You can also experiment with other operating systems, such as BeOS. Here, we are only looking at the installation of Windows, since our focus is on Windows emulation. In the first dialog box of the Configuration Wizard, select the Windows version you wish to install on the virtual machine (Fig. 1).

Fig. 1: The Configuration Wizard is ready for the installation of a range of operating systems

Next, select the directory into which VMware Windows is to be installed. Note: this will not contain a directory structure in which Windows files accessible directly from Linux will be stored. Instead, VMware will create a single file there which contains a virtual hard disk. This will later become a perfectly normal-looking (at first unpartitioned) disk. The disk will then be partitioned and formatted as usual by Windows. A good location for this directory is /home/user/vmware/windows/. VMware will suggest an appropriate directory. What is important is that there is sufficient free space in this directory. In the next dialog box you have to specify how large the virtual hard disk is to be. Obviously, this needs to be large enough to hold the version of Windows you are installing, the applications and files you will use under it, plus the swap file. Windows 2000 alone demands about 900 MB, though older Windows versions can get by with less space.
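Before pointing the wizard at a directory, it is worth checking that the file system in question really does have room for a disk file of the size you intend to create; a quick look with df is enough (the mount point here is just an example):

df -h /home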
Next you must state whether the CD-ROM drive and the diskette drive are to be available for the guest operating system when VMware starts up in future. A positive answer to this question will not, incidentally, prevent you also mounting a CD or diskette under Linux.
Network
The next item concerns the network connection between the virtual PC and your Linux machine: at this point you can choose between No networking, Bridged networking and Host-only networking
(and a combination of the last two variants). If you don't need any connection between the virtual Windows system and any other computer, you can also select No networking. VMware makes a recommendation for the amount of main memory which the emulated PC is to have; this cannot be altered at this point. (You can, however, correct the RAM size later on via the VMware configuration menus.) Finally, a summary appears of the options with which VMware will start in future and of which files were created in which directory (Fig. 2). Confirm this with Done to finish the Configuration Wizard.
Installation with vmware-config.pl

[root@dual esser]# vmware-config.pl
Making sure VMware's services are stopped. ....
Trying to find a suitable vmmon module for your running kernel.
None of VMware's pre-built vmmon modules is suitable for your running kernel. Do you want this script to try to build the vmmon module for your system (you need to have a C compiler installed on your system)? [yes] [Return]
What is the location of the directory of C header files that match your running kernel? [/usr/src/linux/include] [Return]

If the header files of the kernel are located elsewhere, specify the correct path: /usr/src/linux/include/ should usually be correct.

Extracting the sources of the vmmon module. ....
The module loads perfectly in the running kernel.
Making sure that both the parport and parport_pc kernel services are available.
Trying to find a suitable vmppuser module for your running kernel.
None of VMware's pre-built vmppuser modules is suitable for your running kernel. Do you want this script to try to build the vmppuser module for your system (you need to have a C compiler installed on your system)? [yes] [Return]

The same procedure follows for a kernel module which later permits direct access to the printer port, so that (for example) parallel port scanners can be addressed directly under Windows.

Extracting the sources of the vmppuser module. ....
The module loads perfectly in the running kernel.
Do you want this script to automatically configure your system to allow your Virtual Machines to access the host filesystem? (yes/no/help) [yes] [Return]
The version of Samba used in this version of VMware is licensed as described in the "/usr/share/doc/packages/vmware/SAMBA-LICENSE" file.
Hit enter to continue. [Return]
Enabling networking (this is required to share the host filesystem).
Trying to find a suitable vmnet module for your running kernel.
None of VMware's pre-built vmnet modules is suitable for your running kernel. Do you want this script to try to build the vmnet module for your system (you need to have a C compiler installed on your system)? [yes] [Return]

And so it goes on. The vmnet module will be needed later for the network connection between Linux and the guest system.

Extracting the sources of the vmnet module. ....
The module loads perfectly in the running kernel.
Enabling host-only networking (this is required to share the host filesystem).
Do you want this script to probe for an unused private subnet? (yes/no/help) [yes] [Return]

Under some circumstances VMware can itself find a free subnet, i.e. a collection of IP addresses which is not yet in use. This is important, to avoid any conflicts. In this case, it has not worked:

Probing for an unused private subnet (this can take some time).
Unable to sendto: network is unreachable
We were unable to locate an unused Class C subnet in the range of private network numbers because we did not manage to send ICMP ping packets on the network (which is normal if your host is not connected to an IP network). You will need to explicitly specify a network number.

At this point, enter the IP address which your Linux computer is to have later when seen by the VMware computer, thus from Windows. As a rule, an address in the form of 192.168.x.y is suitable. The associated netmask for this is 255.255.255.0.

What will be the IP address of your host on the private network? [192.168.2.1] 192.168.2.1 [Return]
What will be the netmask of your private network? [255.255.255.0] [Return]
This system appears to have a CIFS/SMB server (Samba) configured for normal use. If this server is intended to run, you need to make sure that it will not conflict with the Samba server set-up on the private network (the one that we use to share the host filesystem). Please check your /etc/smb.conf file so that:
- The "interfaces" line does not contain "192.168.2.1/255.255.255.0"
- There is a "socket address" line that contains only your real host IP address

If you are using a Samba server, it has to be reconfigured so that it allows accesses from the virtual PC. Then you can assign your Linux home directory to a drive letter under Windows and access your private files.

Hit enter to continue.
Starting VMware services:
Virtual machine monitor                              [ OK ]
Virtual bidirectional parallel port                  [ OK ]
Virtual ethernet                                     [ OK ]
Bridged networking                                   [ OK ]
Host-only and samba networking (background)          [ OK ]
You have successfully configured VMware to allow your Virtual Machines to access the host filesystem. Your system appears to already be set up with usernames and passwords for accessing the host filesystem.
Would you like to add another username and password at this time? (yes/no/help) [no] yes

The following dialog only appears if a running Samba server has been found. You can then add a new username under which you can log onto the Samba server from Windows:

Please specify a username that is known to your host: esser
New SMB password:
Retype new SMB password:
Added user esser.
Password changed for user esser.
You have successfully configured VMware to allow your Virtual Machines to access the host filesystem. Your system appears to already be set up with usernames and passwords for accessing the host filesystem.
Would you like to add another username and password at this time? (yes/no/help) [no] [Return]

(One new account is enough…)

... The configuration of VMware 2.0 build-476 for Linux for this running kernel completed successfully.
You can now run VMware by invoking the following command: "/usr/bin/vmware".
Enjoy,
the VMware team

With this, vmware-config.pl has completed its task.
Fig. 2: Finally, the Installation Wizard summarises everything once more
Fig. 3: The virtual PC is equipped with a Phoenix BIOS
Fig. 4: Configuration of the Windows components to be installed (here: NT 4.0)
Fig. 5: NT 4.0 recognises an AMD PCNET network card
Windows Installation
The next step is the installation of the guest operating system – in this case, Windows in one of the versions 95, 98, Millennium Edition, NT 4.0 or 2000. All are supported in principle by VMware and co-operate with varying degrees of success with the virtual PC. For this article we performed tests with Windows 98, NT 4.0 and Windows 2000. The test computer was a dual Pentium III 500 with 256 MB RAM and a fast SCSI hard disk. In all cases the installation ran exactly as it would on a normal PC. You insert the Windows installation CD in the CD-ROM drive and click on the Power On button, which switches on the virtual PC. It's most impressive when the usual BIOS start-up messages appear and the (virtual) main memory is counted down, just as a normal PC would do. By pressing [F2] at this point you could call up the BIOS set-up, in which the usual things (including, among others, the boot sequence) can be adjusted (Fig. 3). Since the newly-created virtual hard disk is still unpartitioned, VMware won't find an operating system there. You will only notice this if you have not inserted the Windows CD, because in the standard configuration the virtual PC first attempts to boot from the diskette or CD. What happens next is the completely normal installation procedure of the version of Windows you selected. You will be guided through the partitioning and selection of the Windows components to be installed, and from time to time the virtual PC will reboot, a habit Microsoft operating systems seem unable to give up. You can make the usual adjustments (time zone, name of the Windows directory etc.). Since VMware presents the Linux system time as the hardware clock of the emulated PC, you should select "no automatic change to winter/summer time", or the summer time adjustment will be applied twice. Automatic hardware recognition by Windows should find, among other things, an AMD PCNET card. This virtual network card will be used later for the network connection between Linux and the guest system. If you chose bridged networking as well, a second virtual network card will be created for connection to the local network (Fig. 5). Once installation is completed, the virtual PC will reboot. Take the Windows CD out of the drive so as not to start the installation program again. Now VMware should boot from the virtual disk and finally present the Windows log-in screen (Fig. 6).

Bridged networking: In bridged networking the virtual machine obtains transparent network access to the entire network on which your Linux computer is located (including the Internet, if your network is connected to it). This enables, for example, a web server which has been installed on Windows NT running under VMware to be accessed from any other computer. You can use VMware Windows to access other computers in the local network via a web browser or Telnet. Your virtual Windows box will have exactly the same accessibility as a real Windows box connected to your network.
Host-only networking: In this variant only a (virtual) network connection between the Linux system (host) and the virtual computer (guest) is made. The virtual machine will be invisible to any other computer on the network. The purpose of this option is to allow you to access your Linux files from Windows (with assistance from a Samba server). ■
Graphics driver
Only a standard VGA graphics card is recognised when VMware is installed. Consequently, Windows presents itself at the start in an unsatisfactory 640x480 16 colour resolution. This mode has other disadvantages apart from the interface being much too small. As soon as you click for the first time in the VMware window, the mouse remains trapped there. You can still move the Windows mouse cursor, but in order to leave the window you must "release" the mouse with the key combination [Ctrl+Alt+Esc]. However, VMware has special drivers for Windows 95/98/NT 4.0/2000 which are much easier to work with. They are installed from a virtual diskette which is provided on request by VMware. To install them, select the menu item Settings/VMware Tools Install (see Fig. 7). After a warning notice, all subsequent accesses to the A:
drive no longer lead to any diskette which may have been inserted, but to a virtual driver diskette. Now go through the usual procedure for the chosen Windows version for installing a new graphics card and select the appropriate driver from the diskette. After restarting the VMware PC you can then set a higher resolution. Installing the VMware Tools under Windows also adds an entry to the Windows StartUp group, which places an icon in the Windows system tray. You can use this icon to permit or deny VMware access to the CD and diskette drives.
[left] Fig. 6: Log-in screen for NT 4.0 [right] Fig. 9: At last, with the aid of VMware tools Word and Excel from the Office 2000 package are available under Linux in a usable resolution (Linux: 1280x1024, Windows: 1152x864).
Network
VMware automatically starts a DHCP server under Linux if at the start of the VMware configuration (vmware-config.pl) you selected one of the options Host-only networking or Bridged networking. From this, the installed Windows can automatically obtain an IP address. To do this, you must of course tell Windows that it should search the "local network" for such a server. This works brilliantly under Windows 95/98 and NT 4.0 too. In the test configuration with NT 4.0 the Linux PC had the IP address 192.168.0.4 (in the local network 192.168.0.*) and also the address 192.168.2.1 configured especially for VMware; in the emulation NT was assigned the address 192.168.2.128 by the DHCP server, and it was immediately possible, for example, to telnet to the Linux computer; via a WWW proxy running under Linux (e.g. wwwoffle or squid) it was also possible to surf the Internet. Only Windows 2000 refused to co-operate in our tests and obviously could not find the DHCP server.
It is easy at this point to determine whether the DHCP server has actually been contacted and has issued an IP address. In the file /etc/vmware/vmnet1/dhcpd/dhcpd.leases you will find, for each IP address issued, an entry of the form

lease 192.168.2.128 {
  starts 0 2000/09/03 23:25:18;
  ends 0 2000/09/03 23:55:18;
  hardware ethernet 00:50:56:c0:67:33;
  uid 01:00:50:56:c0:67:33;
  client-hostname "VMWARE";
}

If, on the other hand, the file is empty (apart from a few comment lines beginning with "#"), no address has been issued. Under most Windows versions you can also determine, in a COMMAND.COM window, whether an IP address has been issued using the command

C:\> route print

If only the standard IP address 127.0.0.1 (localhost) pops up there, it hasn't worked. If correct, it should look like Fig. 8.
Fig. 8: Here ”route print” also shows the IP address 192.168.2.128 issued by the DHCP server
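The same check can be made from the Linux side without any Windows tools; the commands below only assume the host-only address range used in our test set-up:

cat /etc/vmware/vmnet1/dhcpd/dhcpd.leases   # should contain a lease block like the one shown above
ping -c 3 192.168.2.128                     # the address the DHCP server handed out to the guest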
Fig. 7: Insert the VMware tools diskette...
The world of Windows programs
There is now nothing to stop you from installing any preferred Windows program. Install Microsoft Office 2000, for example, (see Fig. 9) and free yourself from the problems of trying to exchange files with other people who run it. You do of course need a licence for both the operating system and all programs installed under it. If you need to buy a Windows licence especially for your virtual PC, VMware will even sell you a package containing both VMware and a Windows licence. ■
Info
VMware: http://www.vmware.com/ ■
Suspend and Resume
One especially useful feature of VMware (from Version 2.0) is the option of freezing the current state of the emulated PC at any time by using the Suspend button. VMware then stores the entire main memory and some other data in a file named vmware/nt4/nt4.std (for example) and then stops the PC. After this you can close down VMware. When you next start it, reload this state information by using the Resume button. You can then carry on working from where you left off. The advantage of this method is that you save yourself the bother of the lengthy Windows boot procedure and having to open all the applications again. But a great deal of disk space is needed to store the status.
VMware files
Each virtual machine configured by you has its own directory (usually $HOME/vmware/os/, where "os" stands for the name of the operating system, e.g. nt4 or win2000). There you will find, respectively, a configuration file (nt4.cfg), the virtual hard disk (nt4.dsk), a log file (nt4.log), the BIOS configuration (nt4.nvram) and (if applicable) the most recent status stored using Suspend (nt4.std). ■
TreLOS Win4Lin Version 1.0
DON'T TURN AROUND BY MIRKO DÖLLE
After VMware, a second PC emulator gets off the blocks. But unlike VMware, Win4Lin from TreLOS doesn’t attempt to simulate a universal computer. The emulation is tailored specifically for Windows 95/98. The result is a noticeably leaner and faster Windows emulator, but one that can only handle the corresponding versions of Windows.
Special kernel
Fig. 1: Internet Explorer worked perfectly, thanks to network emulation
Win4Lin requires a specially adapted kernel, so installation requires fundamental changes to the system. The first task was to set to work with the patches provided on the TreLOS FTP server. However, we were unable to obtain a working kernel with Win4Lin support for versions 2.2.16 and 2.2.15. In the case of the 2.2.16 kernel we used the patch intended for Red Hat systems, but it didn't apply completely. Repairs turned out to be extensive and our efforts
ended with an error in memory management. The patch for version 2.2.15 ran more promisingly, and the adapted source code could then be compiled into a new kernel. It was only after a restart that the installation program complained that no Win4Lin support was included. All the more useful, then, that TreLOS both bundles complete kernels on CD for every conceivable distribution and makes even more up-to-date versions available on its FTP server. These are the standard installation kernels for the individual distributions, expanded to include Win4Lin support. However, if you can't use the standard kernels from your distribution you'll probably be unable to use Win4Lin, at least not until TreLOS sorts out its kernel patches. We downloaded a kernel 2.2.14 for SuSE Linux 6.4 as an RPM package. It was quickly installed using rpm -i kernel-Win4Lin1-SuSE6.4_2.2.1402.i386.rpm. After restarting, we took the chance of installing the RPM package Win4Lin1.0.rpm containing the actual emulator software. A restart is necessary because the emulator package installs only on a Win4Lin-compatible system, preferably one running a graphical interface.
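Put together, the kernel swap on our SuSE 6.4 test system amounted to something like the following. The boot loader step is an assumption about a typical x86 LILO set-up rather than something prescribed by TreLOS, and the package file names are the ones quoted above:

rpm -i kernel-Win4Lin1-SuSE6.4_2.2.1402.i386.rpm   # install the prepared kernel
lilo                                               # make sure the boot loader knows about it (assumed LILO set-up)
reboot
rpm -i Win4Lin1.0.rpm                              # after the reboot, install the emulator itself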
Windows 98 SE
To complete the emulator installation we were asked for the Windows CD. This was most inconvenient because, since we were installing the RPM package from the CD, we were obviously
unable to unmount it. The only thing we could do then was to interrupt the installation. At this point, we could have done with a bit more user-friendliness. The best thing to do, in fact, is to load the Windows CD in later by hand using the configuration program winsetup. Using Load Windows CD (under Systemwide Win4Lin Administration) we read in the entire Windows CD, including a boot diskette. After that, you can confidently put away both disks, as the Windows setup is done entirely from the hard disk. The Windows installation which follows is something which, for security reasons, you should not perform as administrator root; you must log in as a normal user. After switching users, call up winsetup under the graphical interface and select User-defined Configuration. The emulator is called up via Install Windows and Windows Setup then starts. After accepting the Microsoft licence agreement and entering the product ID code, to our surprise, the rest of the installation ran completely automatically: all confirmations and inputs are performed by the Win4Lin installation program itself. With Windows 98 Second Edition this was as far as it went. Windows completed the first part of the installation but Win4Lin was unable to apply its patches to kernel32.dll and stopped. There was no way round this problem, nor was there anything anywhere in the TreLOS support database about it. After two days we had to admit defeat.
Windows 98
We then tried Win4Lin with Windows 98 in the original version. This time installation finally worked as planned and we were able to get the system up and running. The network was available without any further configuration and Internet Explorer, configured with LAN access, started without delay. Via a special driver, Win4Lin offers direct access to the Linux network, modems and ISDN cards; thus, you need only be connected to the Internet under Linux. Unfortunately we couldn't listen to any RealAudio files with Internet Explorer, as no pseudo-driver was installed for the sound card, and installing a Windows driver for the built-in card brought no improvement.
Microsoft Office
Since Windows by itself is not much use, the next thing to do was to install Microsoft Office 2000. This is where the first problem arose: Windows was missing the driver for the pseudo CD-ROM and, as a result, was unable to read the CD. A solution was found via a symbolic link in the home directory. When installing Windows, the win directory is set up, and this appears as the C: drive under Windows. So we created a symbolic link ~/win/cdrom pointing to /cdrom and mounted the CD by hand on /cdrom.
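In shell terms the workaround amounts to the following two commands; the CD-ROM device name is whatever your system uses, /dev/cdrom being the usual symlink:

mount /dev/cdrom /cdrom    # mount the Office CD where we want Windows to find it
ln -s /cdrom ~/win/cdrom   # make it appear inside the C: drive as \cdrom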
Another tip on this – if you've started a program via Start/Execute, the CD-ROM drive remains blocked even after the program has terminated. You cannot unmount it and insert a new CD; obviously, Windows is still keeping files or directories on the CD open. If you then start a program via Start/Execute from another directory, the CD-ROM is released and you can unmount it. Merely selecting another program is not sufficient, however: it must actually be started. Once the CD was accessible to Windows, the problems continued. The Office 2000 installer was unable to detect the available space on the hard disk (Windows Explorer reported the entire hard disk as "free") and stopped with a whole range of error messages. All in all, the detection of the available hard disk space failed. The installation procedure for Microsoft Office 97 did not run any better: it insisted on having at least 5 MB free, even though at that point around 1 GB was available. In short, Microsoft Office would not install, so we had to be satisfied with calling up the individual programs from the Office 97 CD directly. Surprisingly, Word was extremely stable, even with a book containing over 500 pages. With Excel we had only small spreadsheets to hand, but these were no problem either.
Fig. 2: Word ran without any problems and amazingly quickly, even with large documents
Fig. 3: StarOffice ran absolutely perfectly.
Sun StarOffice
In order to be able to try out at least one common Office application we installed Sun's StarOffice 5.2 for Windows. Unlike the Microsoft counterpart, installation was exemplary. There were neither error messages nor any peculiarities during operation. The only snag was that importing the 500-page document took its time and showed the usual import problems with respect to embedded objects and scripts.
Info TreLOS http://www.trelos.com FTP server ftp://ftp.TreLOS.com ■
[left] Fig. 4: Grouse was perfectly playable, but key scrolling was a bit jerky. [right] Fig. 5: Tricky stuff. Microsoft flips for Win4Lin. [above] Fig. 6: Unplayable. Graphics are in the right place, but reversed!
Grouse hunt
Besides office compatibility, another reason for keeping Windows is the large range of games available. We tried the universally popular Grouse Hunt. Like many games, Grouse Hunt requires DirectX. Certainly the grouse set no new speed records on our 400 MHz Pentium II, but it played surprisingly well. Scrolling with the cursor keys was jerky, since the key repeat didn't work too smoothly. The installation of DirectX 7 for the new version of Grouse failed; obviously, Win4Lin had a hand in this. Last but not least we looked at Apple's QuickTime and ran a small 320x240-pixel video. The speed corresponded roughly to that of a Pentium 233 system under Windows 98.
Microsoft flips
One stumbling block with Win4Lin is its habit of mirroring graphics horizontally under certain conditions. We promptly fell flat on our faces at the first installation: all graphics were reversed horizontally! After looking into the TreLOS support database we decided to empty the symbol cache and install update number 4 as a definitive bug fix. To our astonishment, installing the update did not change anything. As can be seen from Figure 6, graphics programs such as AstroriX remained unusable.
At this point, our luck began to change. First, we made the screenshot of AstroriX with xv, but got only bars instead of an image. We found out that xv couldn't cope with the 24-bit colour depth of the XF86_SVGA server being used and changed to a 16-bit colour depth. Now xv worked perfectly – but so did Win4Lin. The X server plays a crucial role in this fault: at 24-bit colour depth all graphics were reversed; at 16 bits the phenomenon disappeared (even without the update). We then made the screenshots again in 24-bit mode with import from the ImageMagick package.
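For anyone wanting to reproduce the screenshots, the commands involved were along these lines; the -bpp switch is the XFree86 3.x way of selecting the colour depth and the output file name is just an example:

startx -- -bpp 16                        # run the X server at 16-bit colour depth
import -window root win4lin-shot.png     # grab the whole screen with ImageMagick's import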
Conclusion
Win4Lin does represent an alternative. How useful this alternative is, however, is a matter of personal choice. Overall, Windows worked. Once the Office installation problems are overcome, there won't be any obstacle to using the programs you have grown to love under Windows with Linux. Another interesting option offered by Win4Lin is that of making different versions of Windows available to different users. This means the kids can rearrange the desktop to their heart's content without you having to live with the end result. All the same, we did sometimes find the emulator crashing, and we even had to reboot the system on one occasion – obviously, the kernel had come out in sympathy. And not all the games – which we took from http://www.download.com – would run: Invaders 1.0, for example, kept crashing with a protection fault. Before you spend 35 dollars on Win4Lin we recommend that you try out the 45 MB demonstration version, which allows sessions of up to 60 minutes and can be found on the TreLOS home page. We liked Win4Lin because of its fast running speed compared to other emulators, and we are looking forward to seeing the upcoming version 2.0 and its improvements. ■
Windows under Linux
FULL BODIED AND MATURING NICELY PETER GANTEN
Imagine if Windows programs could be used on any Linux system. Imagine too, that this could be achieved without the need for a cumbersome emulator or a Windows operating system software license. Well, with Wine, it’s possible.
Thanks to the Wine project it is possible to run Windows programs under Linux without a need to pay money to Microsoft. The snag is, so far not all Windows programs will run. But the developers of Wine have made enormous progress and the number of Windows programs that can be used under Linux with Wine is increasing all the time. In this article you’ll find out what Wine really is, how it works and how you can install Wine on your system in order to use Windows programs.
EXE files?
The fact that any i386-compatible processor can execute Windows programs doesn't mean that such programs will automatically run under Linux on Intel. Something more is needed so that Windows programs can be loaded from the hard drive into working memory and then executed. This task is performed by the program loader. When you start Windows programs under Windows (perhaps by selecting a program in the Start menu) this function is carried out by the Windows operating system. Linux has similar functionality by which native Linux programs can be loaded when, for example, you call up an application in the KDE menu, from the shell or using the GNOME panel. If a user tries to start a program the operating system first checks whether the corresponding program file is on the hard disk
and in the correct file format. Just as Netscape cannot show StarOffice files, Linux cannot simply load Windows programs. If it appears that the file to be executed has an unknown format, the operating system interrupts and emits an error message. Therefore, in order to start Windows programs under Linux, a special Windows program loader is required.
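You can see the loader problem for yourself at the shell: asked to run a Windows binary directly, Linux refuses, whereas Wine knows what to do with it. The notepad.exe below is just a stand-in for any Windows program you happen to have; it is assumed the file has been marked executable, and the exact wording of the error message varies between shells:

$ ./notepad.exe
bash: ./notepad.exe: cannot execute binary file
$ wine notepad.exe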
Fig. 1: Handy for the development of web pages. The two rivals, Netscape and Internet Explorer, side by side displaying the homepage of one of Linux Magazine's sister publications. Wine makes it possible

i386: Designation of the processor architecture developed by the chip manufacturer Intel. Among Intel 386-compatible processors are Intel's 80386, 80486, 80586 (Pentium) and 80686 (Pentium II) processors, but also processors from other CPU manufacturers such as AMD's Athlon or VIA's Cyrix III. The i386 architecture dominates the domain of the desktop computer. When i386-compatible computers are mentioned, this often also means PCs or IBM-compatible computers. ■

API differences
Then there is another problem: each operating system makes available certain functions to be used by programs that run under it in order to open files,
display things on the screen, receive data from the Internet and so on. These interfaces are referred to as the API of the operating system. APIs, and the way in which they are used, differ considerably between Windows and Linux. The way APIs are used under Windows can best be explained by means of an example. In order to open or create a file the CreateFile() API is used. This is a function located in a program library. A Windows program that uses this function has to load the corresponding program library (in this case, the library KERNEL32.DLL). In this way the function call of the program is linked with the function in the library. When the function CreateFile() is called, control is handed over to the library. Depending on which version of Windows (NT or 95/98) is involved, some very different functions can be used by the library in order to execute the required operation (i.e. the opening or creation of a file). If a Windows program was loaded into memory under Linux and then executed, it would almost certainly fail. This would be because functions such as CreateFile() are not available. The Linux kernel provides a similar function, in this case one known as open(), but it is called in a completely different way. In the realm of computer programs, similar is not good enough. How can this problem be solved? You may have already guessed. The necessary APIs have to be reproduced under Linux. They can then be linked with the program to be executed, just as occurs under Windows. If, for example, the program running under Linux invokes the Windows CreateFile() API, the library is called, which in turn calls the corresponding Linux system calls. Any result returned by the Linux call is transformed if necessary into the form expected by the Windows program. This perhaps seems complicated, but in practice it doesn't have any disadvantages in comparison with the "real" Windows. To stay with the example, under DOS-based Windows versions such as Windows 95/98, CreateFile() under certain conditions calls DOS routines in order to actually open a file. Under Windows NT or Windows 2000 the corresponding NT API (in this instance NtCreateFile()) is called from CreateFile(). Even under the "real" Windows, more and more layers have to be run through. Exactly the same happens under Linux. In fact, where Linux performs a function more efficiently than Windows, a Windows program running under Linux may, despite the overhead of Wine, still execute more efficiently than under Windows.

API: Application Programmer's Interface. This refers to the interfaces of an operating system, a system component or a program library which can be used by other programs.
Program library: This is a file containing program code which can be executed by the processor. It is not in itself a complete program. In general, program libraries lack the so-called main() function, which is called up by the operating system in order to start a program. The code in program libraries can however be used by executable programs. A program library can for example provide functions to display windows on the screen. Programs that use this library do not need to contain the corresponding code themselves, which saves memory space. This also ensures a uniform appearance for all windows of programs which use the library. Program libraries under Windows often have the filename suffix .DLL (Dynamic Link Library). The equivalent under Unix/Linux have names that end in .so (shared object). ■

Fig. 2: Windows programs under Wine and Windows NT in comparison
Safer than the original

So what precisely does Wine contain? For a start, it has a program loader for Windows programs. With this, 32-bit and 16-bit Windows programs (and also DOS programs) can be loaded into the working memory and executed. This is only a small (if very important) part of its functionality, however. Most of Wine consists of the program code which makes available the APIs that DOS and Windows programs expect to find. These are located, as under Windows, in special libraries, which are linked by the loader or at run time with the Windows program to be executed. Wine is an ordinary Linux program from the point of view of the Linux kernel. It doesn't even require special rights to be executed. Windows programs can thus be more safely executed under Wine than for example under Windows 98, where each program has full access rights to the whole computer and all files. The acronym Wine, by the way, stands for "Wine Is Not an Emulator". This is intended to point out that Wine doesn't emulate a Windows computer. Instead, Wine executes Windows programs directly, precisely as happens under Windows. Nevertheless Wine does also mean WINdows Emulator, because the Windows APIs it provides do not contain the same code as the "real" APIs written by Microsoft. Wine could be said to emulate Windows APIs. However, the word reimplementation would perhaps be more appropriate. Before getting started on the installation and configuration of Wine, one more word about the current state of its development. The project is at present still in the alpha stage, i.e. in the middle of development. Many Windows programs do in fact
already run very stably with Wine. Unfortunately, many others still don’t run at all. Because Wine is still being worked on very intensively, it could also happen that a program which functioned well with one version of Wine no longer runs with a more recent version. In such cases it is a good idea to send a bug report to the Wine newsgroup on the Internet and wait for the next version of the program package.
Installation

Almost all Linux distributions now include Wine packages which can be installed via the distribution package management program. Due to the rapid development of Wine, however, these packages are already obsolete by the time the distribution comes out. So it can pay to download a current Wine package from the Internet and install this. If you do this, you have the option of either installing a binary package in RPM or Debian format, or of using the Wine source code and compiling this on your own system. This sounds more complicated than it really is. Compilation on your own computer has the advantage that the Wine program thus created is precisely tailored to the software of your own computer. When this happens a check is made first as to which components are present in the system. Wine can then include properties that use these components. So for example the OpenGL components of Wine only function when an OpenGL library with specific properties is on the system. If you are using a binary package containing a version of Wine which has been compiled to support a specific OpenGL version, but the requisite OpenGL library has not been installed on your system, the result may be that Wine will not run. Conversely, you cannot use the OpenGL functionality unless your version of Wine has been compiled for use by OpenGL, even if an OpenGL library is present in your system. In order to use the latest version of Wine it is recommended that you take a source code package and compile it yourself.
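If you do decide on a binary package instead, installation is a single command. The package file names below are only placeholders for whatever version you actually download; the rpm and dpkg invocations themselves are standard:
# rpm -Uvh wine-20000821-1.i386.rpm
# dpkg -i wine_20000821-1_i386.deb
The -Uvh options tell rpm to install (or upgrade) the package and show its progress, while dpkg -i simply installs the named .deb file.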
Software requirements

In order to compile Wine and then use it, certain programs and files must be installed on the system. These files are all an integral part of modern distributions, but this doesn't necessarily mean they are currently installed. To prevent any problems arising during compilation you should check whether the following components are already on your system (a sample installation command follows the list):
• Linux kernel version 2.2.x. Wine also functions with older kernels (version 2.0.x). However, there can be problems with these kernels when 32-bit programs with several threads are executed.
• The GNU C run time library. The recommended version here is 2.1. You can also use Wine with the older version 2.0. It is not advisable to use the now-obsolete C library libc5. Apart from the actual library, which is installed on every Linux system, you also need the development files for it. These files are included under Debian 2.2 in the package libc6-dev and under Suse 7.0 in the package libc (Series d). Also, under Suse the header files of the kernel have to be installed as well, and these are in the linclude package, Series d.
• Wine normally uses the X Window system to display windows on the screen, so X has to be installed. You will also need the X developer files (Debian: xlib6g-dev package, Suse: xdevel package, Series x).
• The X Pixmap library (libxpm) will also be required. Under Debian you will have to install the packages xpm4g and xpm4g-dev. Under Suse the corresponding packages are included in the packages shlibs and xdevel.
• In order for Wine to be compiled, you will of course need a compiler. The minimum requirement in this case is the GNU C Compiler, version 2.7.2. The latest version, 2.95, is recommended. You can find these compilers under Debian, Suse and doubtless most other distributions in the gcc package.
• Plus, you'll need a few ancillary tools such as make, bison and flex, which are found under Debian and Suse in packages with the corresponding names.
• Wine can optionally use a few other libraries. These primarily include the ncurses library (Debian: libncurses5 and libncurses5-dev packages. Suse: ncurses, Series a) and OpenGL libraries. You'll find developer files for OpenGL under Debian in the package mesag-dev and under Suse in the package mesadev (Series x3d). Please note that you will also need either an OpenGL graphics card and an X server with OpenGL support for your card or the (slow) OpenGL software implementation Mesa (Debian: mesag3. Suse: mesa and mesasoft, Series x3d).
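On a Debian 2.2 system, for example, the packages named above can be pulled in with a single apt-get call. This is only a sketch covering the Debian package names mentioned in this article; the selection (and the package names) will differ on other distributions:
# apt-get install libc6-dev xlib6g-dev xpm4g xpm4g-dev gcc make bison flex libncurses5-dev mesag-dev
Under Suse the corresponding series can be selected with the distribution's own package tool instead.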
Fig. 3: Still room for improvement: Explorer from the German version of Windows 95 in the window
OpenGL: OpenGL is a collection of elementary graphics functions which, unlike other graphics libraries (e.g. Direct3D under Microsoft Windows), has the enormous advantage of being not only hardware-independent, but also suitable for any platform (and also for any operating system!). You can find out more at http://www.opengl.org/. ■
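If you are not sure whether an OpenGL library is present on your system at all, one rough check (a heuristic, not a definitive test) is to ask the dynamic linker which GL libraries it currently knows about:
$ /sbin/ldconfig -p | grep -i libGL
If nothing is listed, install the Mesa or vendor OpenGL packages mentioned above before expecting the OpenGL features of Wine to work.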
Creating and installing the source code

The current Wine source code can be downloaded from the Internet from one of the following addresses:
• ftp://metalab.unc.edu/pub/Linux/ALPHA/wine/
• ftp://ftp.infomagic.com/pub/mirrors/linux/sunsite/ALPHA/wine/development/
• http://metalab.unc.edu/pub/Linux/ALPHA/wine/development/
You will find compressed tar archives in directories with names consisting of the designation Wine-, the date on which the respective version was published and the ending .tar.gz. Thus the file Wine-20000821.tar.gz, for example, contains the Wine source code for the version of 21 August 2000. Normally you should use the latest available version. Once you have downloaded the tar archive and stored it in your home directory you can unpack it with the following command:
$ tar -xvzf Wine-20000821.tar.gz
You must obviously adjust the file name (in this case, Wine-20000821.tar.gz) if you are using another version.

Compilation and installation of Wine

As soon as the archive has been unpacked, the source code can be compiled and Wine can then be installed. To do this, first change to the directory in which Wine was unpacked. The name of this directory is made up of the character string wine- and the date of the Wine version, so for example it could be called wine-20000821.
$ cd wine-20000821
Then enter this command in order to configure the source code for your system:
$ ./configure
All being well, after a series of messages, the following text will appear:
Configure finished. Do 'make depend && make' to compile Wine.
Should this not be the case it is probably due to the fact that certain files have not been found. A corresponding error message should have been issued. You will have to install the missing package and try again. When all is well you can continue following the instructions and enter this command:
$ make depend && make
Tip: if you have little space free on your hard disk and wish to use Wine but not to debug it, you can omit the debug information in the binary files created during compilation. In that case you should use this command to compile the code:
$ make depend && make CFLAGS=-O2

Debug: The process of debugging: removing bugs (not unwanted insects, but faults) from the program code. ■
Fig. 5: Wine provides a Windows registry under Linux, which can of course be edited using Windows tools

Now the program can be installed so that it can be used by all users of the system. This must be carried out with the rights of the administrator, root. If necessary acquire these rights by typing:
$ su
and entering the root password. Then install the newly-compiled Wine using:
# make install
If you don't wish to make Wine available for the whole system you can bypass this step.
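A quick way to confirm that the freshly built and installed binary actually runs is to ask it for its option summary (the --help option is described in more detail later in this article; if you skipped the make install step, call the binary from the build directory instead):
$ wine --help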
Configuration

Configuring Wine is relatively time-consuming in comparison with other software packages. This is mainly due to the fact that Windows programs expect a certain infrastructure, which of course is already in place on a Windows system. Under Linux, however, it must first be created. For example, configuration data and other information is stored under Windows in a system file called the Registry, which of course has to be made available by Wine. There is also a considerable difference between Unix/Linux and Windows in that under Windows, drive letters are used to designate different storage devices. If a Windows program wants to open, for example, the file C:\My Documents\Letter.doc, Wine has to convert this filename into a valid Linux filename and open the appropriate file. To do this, a configuration file is used in which, among other things, Windows drive letters are assigned to directories under Linux. So for example the C: drive could be assigned to a directory called /c so that the file /c/My Documents/Letter.doc will be opened by accessing C:\My Documents\Letter.doc. There are basically two different ways of configuring Wine: with and without an existing Windows installation. If Windows is installed on your computer (for example in a dual-boot configuration, where the Windows partition can be mounted under
Linux), Wine can share this installation. Then all settings and files from this existing Windows installation are taken over. This has the advantage that programs installed under Windows can be executed under Linux with Wine without having to be reinstalled. Wine can then make use of a whole range of libraries in the Windows installation. Since at present the number of Windows libraries emulated (or re-implemented) under Wine is still very much a subset of those commonly used, many Windows programs run better under Wine if the "proper" libraries are available. There is no risk of Wine changing Windows configuration files. Changes to the registry or to INI files which are made by the programs running under Wine are stored separately in the home directory of the user concerned and not written back to the Windows installation. This can lead to inconsistencies if you install programs with Wine and files in the Windows installation are overwritten. Of course, the partition containing the Windows installation has to be available under Linux so that Wine can access it. If this is not yet the case on your system you should make a corresponding entry in the file /etc/fstab. Assuming that your Windows installation is located on the first primary partition of your first IDE hard disk and it is to be mounted on the /c directory, the /c directory should be created first, using:
# mkdir /c
Then the following line should be entered in the file /etc/fstab:
/dev/hda1 /c vfat defaults
If you wish to exclude completely the possibility that Wine might change anything on your Windows installation you can also mount the partition as read-only:
/dev/hda1 /c vfat defaults,ro
Now the partition still has to be actually mounted. This can be done with this command:
# mount /c
You can, of course, also use Wine without any existing Windows installation. You must then define at least one directory, which should be assigned to a drive letter, and create this directory if it doesn't yet exist. Also, a few sub-directories must be created in the directory – these are already in place in a Windows installation – together with a Registry. The simplest way to perform this step, together with the creation of a Wine configuration file, is using the script wineinstall, which can be found in the tools subdirectory of the source code directory.
For all seasons: wineinstall

Wineinstall can be used to configure Wine for use with an existing Windows installation. The script can tell whether one exists by checking whether a Windows partition is mounted in the file /etc/fstab. So if you have mounted a Windows installation but
don’t wish it to be used by Wine it is advisable to comment out the entry before you run wineinstall. You can then later remove the comment. Depending on whether you run wineinstall as root (system administrator) or as an ordinary user, the script will create a configuration which is valid for the whole system or one that is merely valid for the user who is running it. In the case of a configuration which is valid for the whole system the configuration data is stored in the /usr/local/etc directory. The most important file in this directory is the wine.conf file. This is the central configuration file for Wine. In the case of a user configuration the configuration file is written in the home directory (~) of the user who is running it, where it bears the name .winerc (The dot before the file name, by the way, means that this is a ”hidden” file which is only displayed by ls when the additional parameter -a is provided to it.) If both files exist, Wine will use the file in the home directory of the user who is running it. So you should enter the following command to configure Wine, either as user or as administrator:
Fig. 5: Handy for viewing Word documents under Linux: the free Word Viewer from Microsoft.
# tools/wineinstall
A check will then be made as to whether Wine has been compiled and installed. If not, the required steps will be taken. After that there will be a check as to whether an existing Windows installation is in place. If so, the required configuration file will be created. If not, you will be asked which directory should correspond to the C: drive. This directory will be created if it doesn't yet exist. The registry will also be created. After that a configuration file will be written.
Fig. 6: Another possibility: creating a presentation via Wine with PowerPoint
The big moment

Now you can try to execute a simple Windows program in order to test Wine. If you are using an existing Windows installation try starting the program notepad.exe. If not, you can start with a test by going straight into the installation of a program, as described further on. To start Windows programs with Wine from the command line, the name of the Windows program to be run must be specified as an argument to the wine program. So to run notepad.exe the following command should be entered:
$ wine notepad.exe
You can also provide a full path to the program to be run. This is necessary if it cannot be found in the search path for Windows programs, which is specified in the configuration file. When you do this you can specify either the Unix path name or the Windows path name, depending on the assignments in the configuration file. So if the /c directory is assigned to the C: drive, both of the following commands would have the same effect:
$ wine /c/windows/notepad.exe
$ wine c:\\windows\\notepad.exe
Note that the backslash was specified twice in each case in the second command because it has a special significance for the Linux shell. No matter how you specify the file name of the program to be executed, one thing must always be noted: each Windows program to be executed has to be in a directory to which a path can be formed starting from one of the directories assigned to a drive letter. These assignments are made in the configuration file, and have the following format:
[Drive C]
Path=/c
Type=hd
Label=MS-DOS
Filesystem=win95
What matters here in particular is the designation in the square brackets (by which the name of the drive is defined) and the value following the Path designator, which defines the Unix directory the drive corresponds to. The Filesystem designator should in most cases be followed by win95, regardless of which file system is actually being used on that drive. If you would like to have the whole file system of your computer available to programs running under Wine, simply define a drive corresponding to the root directory of your system:
[Drive R]
Path=/
Type=hd
Label=ROOT
Filesystem=win95
If Wine is used with an existing Windows installation, the drive assignments under Wine should be kept the same as under Windows. Otherwise there can be problems. If a program, when running under Windows, searches on the C: drive for a file but under Wine the file is found on the D: drive, for example, you are likely to receive an error message. You can of course define additional drives under Wine, perhaps to be able to address the root directory of your system or your own home directory.

Info
The Wine Project http://www.winehq.com/ ■

Command line options

[top] Fig. 7: Installing a Windows program under Linux with Wine. Does this look familiar to you?
[above] Fig. 8: Et voilà! Wine repays us for all this effort with a "managed" newsreader

If you have already tested Wine as described you will probably have noticed that windows controlled by Wine look like Windows windows rather than native Linux windows and work independently of the Linux window manager. On the whole it is better for Windows programs running under Wine to appear as normal Linux applications and to be controlled like all other windows via the window manager. The command line option --managed is used to specify this preference. So if Notepad is to run "managed", Wine would be invoked as follows:
$ wine --managed notepad.exe
Wine can also be operated in "desktop mode". The windows of Windows programs are then, as under VMware, all shown together in one window. The command line option for this is --desktop. If desired, the size of the desktop window to be displayed can be specified. In order to execute notepad.exe in a desktop window with a size of 640 x 480 dots, Wine would be invoked as follows:
$ wine --desktop 640x480 notepad.exe
Many Windows programs run under Windows 95 but not under NT, while others may have the opposite preference. Using the command line option --winver you can inform Wine as to which version of Windows it should pretend to be. In order to make it appear to a Windows program that it is being executed under Windows NT 4.0, Wine would be run as follows:
$ wine --winver nt40 notepad.exe
To "fake" Windows 3.1 the value win31 would be used. Other possible arguments are win95 and win98. If Wine is run using the command line option --help then, like many Linux programs, it presents a list of all valid options. Another important option is --dll. With this it is possible to specify which libraries Wine should use from an existing Windows installation and which it should provide itself. To do this, the option should be followed by the name(s) of the libraries (without the .dll) for which the default setting should be overridden; where several libraries are specified their names should be separated using commas. An equals sign can be used to specify whether the version provided by Wine (b for built-in) or the Windows version (n for native) is to be used. In order, for example, to use the Windows versions of the shell and shell32 libraries, you could invoke Wine like this:
$ wine --dll shell,shell32=n notepad.exe
The option --dll offers many more options, which are described in the Wine man pages. To alter the default settings and avoid the need to continually type lengthy command lines the configuration file can be modified.
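As an example of such a change, the drive-definition format shown earlier can be used to add further mappings of your own. The entry below is only a sketch (the directory name is a placeholder for your actual home directory), but it would make that directory available to Windows programs as drive H:
[Drive H]
Path=/home/yourname
Type=hd
Label=HOME
Filesystem=win95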
Installing Windows programs under Wine

If you don't have a Windows installation available to you, Windows programs which you want to use under Linux must be installed under Wine. In principle that's no problem, because installation programs are of course nothing more than ordinary Windows programs. In practice, however, it often turns out that the installation programs do not work under Wine even though the actual programs themselves, once installed, are perfectly usable. If you cannot install a program under Wine you should either wait for a new version of Wine (as mentioned, development is making rapid progress) or read the Wine documentation and then write up a bug report.
To demonstrate the installation of Windows programs under Wine we will use the newsreader Free Agent (that's the freeware version of Forte Agent), which is popular among Windows users. It can be obtained from http://www.forteinc.com/getfa/download.htm. Download the file fa32121.exe from this website. This is the 32-bit version of the program. There is also a 16-bit version of the program available which can also be used. Before attempting to install the program you must first ensure that Wine has write access to the Windows directory. If necessary, Wine should be executed by the administrator root. Then Wine can be run with the name of the downloaded file as an argument, like this:
# wine fa32-121.exe --managed

[right] Fig. 9: The Wine uninstaller is found in the programs subdirectory of Wine. The programs installed under Wine can be managed with this.
[left] Fig. 10: The game StarCraft shows off the multimedia properties of Wine. Get the StarCraft demo from http://www.blizzard.com/starcraft/scdemo.shtml.

The installation program should now start. Don't worry about the minor graphical errors which can sometimes arise at this point. Respond to all requests from the program just as you would under Windows. The installation program creates among other things an entry in the Windows Start menu. There is currently no link between these entries and the start menus of GNOME or KDE. After installation you must therefore start Free Agent from the command line, although you can of course create a GNOME or KDE menu entry manually. If you have installed the program in the directory C:\Program Files\Agent and the drive letter C: corresponds to the Unix directory /c, you could start the newsreader using the following command line:
$ wine /c/Program\ Files/Agent/agent.exe --managed
(The backslash after Program is necessary here to inform the shell that the space which follows it is part of the file name.) This command starts the program, which ought to be as usable as it is under Windows.

[top] Fig. 11: Wing Commander is a popular game that can be played under Wine. Obtain a demo from http://www.wingcommanderprophecy.com/demo.html
[above] Fig. 12: Can you tell the difference? Internet Explorer with the homepage of the Wine project and HTML help

If you've made it this far, you can now try to execute the programs in your existing Windows installation with Wine – just start Internet Explorer! If you have no Windows installation on your computer, try installing some other programs. But bear in mind that Wine is still very much under development and consequently is not completely stable. If your program hangs, you can terminate it using the command:
$ killall -9 wine

Further information

This article can only give a glimpse of the possibilities of Wine. You'll find additional information in the documentation directory of the Wine source code, where there is also a comprehensive set of installation instructions. We'd also recommend a visit to the Wine Project's home page. There you'll find links to the Wine FAQ and a Wine HOWTO. Online support can be found in the Wine newsgroup, which goes by the name of comp.emulators.ms-windows.wine. ■
Compute farms
MANY LINUX BOXES MAKE FAST WORK

ALAIN WIEDMER

Clustering is a technology that enables you to perform computing tasks faster than they would run on a single machine. Linux is the ideal operating system for this application.
A compute farm is a lot of computers strung together to run power-hungry applications in record time. When you do this, you need to use a reliable and secure operating system. There's a tendency to go for the reassurance of using well-marketed and supported software, even though it ties you into a particular vendor's expensive hardware. But what about Linux? Just because its commercial backing isn't yet as great as that of its rivals and the technology hasn't been cemented into our minds by clever and persistent marketing – the so-called "Microsoft effect" – it doesn't mean that Linux isn't mature enough to do the job. But is Linux the answer for every compute farm, or is it still a niche choice for those who have advanced technical expertise and those suffering processing budget constraints? Linux-based enterprise server sales are riding high, suggesting that the technology is increasingly being deployed by users keen to take advantage of Linux as a more cost-effective, open source alternative to standard proprietary operating systems. Linux-based enterprise server sales worldwide are growing at a compound annual growth rate of 57% according to IDC, having increased from $320 million to be worth $4.1 billion today, and an estimated $7.4 billion by 2002. In comparison, total server sales worldwide encompassing every operating system have a
compound annual growth rate of 17% for the same period. Moreover, the latest EEtimes EDA 2000 research study shows that there will be more than a three-fold increase in the use of Linux over the next two years, from 11% to 38%.
Phenomenal growth

Much of this phenomenal growth in the use of Linux will come from its use in compute farms. Clustering Linux-based systems gets around the problem that Linux is not as scalable as other Unix-based operating systems. Remove the issue of scalability and Linux is easily comparable in quality to any other mature operating system and far outstrips the rest in terms of cost-effectiveness. Running on Intel architecture, the performance of which is comparable to that of 32-bit Unix and RISC-based systems, Linux-based compute farms are easily the most cost-effective way of getting the power needed to run big or complex applications. Linux can run on any make of PC or server right up to the biggest RAID arrays of SCSI devices and Gigabit Ethernet interfaces. That means you can use your existing kit or buy the cheapest on the market without being constrained by your choice of manufacturer. Being free to choose hardware that delivers a low cost of ownership means that the decision to throw it out after 18 months is not such a financial headache. In fact, clustering or farming processors is a cost-effective method of extending the life of hardware, because new hardware can be added and obsolete hardware retired "on the fly." More and more industries are realising the benefits
of using server farms to undertake CPU-intensive computing tasks. As a consequence, Linux is steadily becoming an operating system of choice mostly because of the value for money it delivers. The most popular server in the world, the open-source Apache web server, runs on Linux, as do media servers such as the RealServer, encoders, secure enterprise information management systems and many e-mail, news and other types of server. But for Linux, being open source software is as much of a minus as it is a plus. Whilst it means that if you've got a problem you can quickly fix it yourself, it also means that if you get a problem you have to fix it yourself. This puts off some users from betting their business on Linux. They worry about issues of support and whether or not staff are sufficiently technically skilled to tinker with the software when tailoring needs or problems arise.
Mature and secure

Nevertheless, Linux is a mature and secure network operating system with all the robustness, familiarity and power of Unix. Also, hardware manufacturers such as IBM and SGI are increasingly offering support contracts that cover different flavours of Linux so you can have the reassurance of an all-inclusive hardware and software services contract. Linux certainly delivers on the functionality requirements of a compute farm. The new 2.4 kernel adds excellent multiprocessor support and very fine-grained security capabilities down to Internet Protocol level. "Because Linux is built from the ground up as a secure network operating system you need never visit your server unless it is to change its hardware," says John Hayward-Warburton, an independent consultant and multimedia producer whose clients include Hyperion Records. "I have servers that have never been visited since before the beginning of this year and have been able to keep them fully up-to-date with operating software, kernels, etc, from my desk on a hilltop in Herefordshire." He adds, "I will not hand control of my clients' valuable data and hardware over to any closed source company." While Linux is reliable, customisable, and comes free from license fees, it doesn't usually come preinstalled on systems. But management is easy using your own custom-built tools scripted in shell languages, Perl or Python, or those developed by commercial enterprises. "That does mean that you need more technical knowledge than you would of your average operating system, especially if you intend to tinker with the software," says Bill McMillan, a technical specialist in product management with Platform Computing. "If things fall apart then it's your problem!" Platform is the developer of LSF, a distributed resource management package for computing clusters. The company backed Linux in response to increasing customer demand. Indeed, LSF is used by
[left] A cluster of SGI computers running Linux
chip design company Transmeta, which employs the father of Linux, Linus Torvalds, as a software engineer. Transmeta operates a pool of 600 high-performance processors for the design of its Crusoe range of chips, which calls for complex simulations and builds. Without LSF Suite, the simulations would take around a week to run instead of hours, and between eight and ten times the amount of computing power would be needed, according to Transmeta's IT manager, Ray Borg. He adds, "If I turn LSF off today, my week-long job could take three or four weeks." Most of Transmeta's machines run Linux but some run Sun Microsystems' Solaris operating system. LSF ensures that jobs run transparently and reliably on the most appropriate machine within the cluster. At Transmeta, LSF ties Linux and Solaris together in a seamless fashion so that the pool becomes one giant processor. "Linux takes advantage of resources a lot more smoothly and efficiently than other software brains," says Borg. Predictably, Linux has gained an early foothold in sectors where staff are more techno-savvy, such as in the film-resolution rendering market and in electronics design (EDA). EDA users have pushed for Linux because they want more flexibility out of their operating system so they can tinker with configurations, which the likes of Windows NT don't allow. Other sectors are really only behind in their adoption of Linux because they're waiting for necessary software to be ported to it. However, with Oracle and IBM backing Linux, appropriate enterprise solutions won't be long in coming. "In our mind, there's now a level of agreement in the industry about what is required from a base operating system, and Linux meets that requirement," says John Fleming, SGI's UK marketing manager. "For users, if they want to be totally in control of their own destiny and build their own software and hardware compute farm, clearly Linux is the only choice. For the rest, Linux is still the best option, but using the services of a vendor like ourselves which offers a Linux conceptual framework, commodity components and assembly." He concludes, "Linux for compute farms is here and it's here to stay. The question that's more open is: will Linux invade the desktop space? There the answer is a little less obvious, but it's coming." ■
Info Platform Computing http://www.platform.com/ SGI http://www.sgi.com/ ■
THE AUTHOR ALAIN WIEDMER IS VP SALES EMEA FOR PLATFORM COMPUTING
Mosix Clustering with Linux
PENGUIN POWERED SUPERCOMPUTERS

BERNHARD KUHN
Multiprocessor systems with more than two processors are relatively expensive. On the other hand a computing cluster – the alternative – doesn't always do what is expected of it. Mosix is a cost-saving software solution that falls mid-way between these two worlds. This report provides an introduction to the subject matter and gives some test results.
Where users have a distributable task that needs a lot of computing power they often turn to a Beowulf cluster. This takes the form of a network of computers (very often identically equipped, low-cost standard PCs), which are loosely coupled using as fast a network as possible (most often Fast Ethernet, sometimes SCI or Myrinet). Here, the separate entities of the distributed application on the respective nodes exchange their data with the help of communication libraries such as PVM (Parallel Virtual Machine) or MPI (Message Passing Interface). A server program runs on every computer (e.g. pvmd if PVM is used), which in association with the other nodes takes care of automatic distribution of the load. The relevant technology has matured over many years now, and has proven itself extremely well not only in the scientific environment but in commercial applications too. In the private sphere it is extremely instructive, at the very least, to make a cluster with one's second and third computers in this way. Incidentally, as it is released under the GPL, Mosix is free of licensing charges.
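As a tiny illustration of how loosely such a Beowulf-style machine is assembled, with PVM one common way is to build the virtual machine interactively from the PVM console; the host names here are placeholders, and applications then still have to be written against the PVM library:
$ pvm
pvm> add node2 node3
pvm> conf
pvm> halt
The add command starts pvmd on the named hosts, conf lists the current virtual machine, and halt shuts the daemons down again.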
This traditional cluster technique has its disadvantages as well though: with applications that are particularly communication-intensive the network is prone to collapse very quickly and the nodes then work with mediocre efficiency. This problem cannot necessarily be resolved by using a faster network system, quite apart from the fact that the costs for connection elements with high numbers of units soon exceed those for ”true” high performance multiprocessor systems (with 128 or 256 processors, for example). With multiprocessor systems, programmers do not need to concern themselves, by and large, with any particular communication libraries, which means that processes are distributed to the computing units of the multiprocessor system without any special effort on the programmer’s part.
SMP/CC Cocktail

The functionality of an SMP system can, however, very nearly be achieved using a "normal" computing cluster and the support of Mosix. This
Fig. 2: Ingenuity: via the link layer processes are relocated transparently to other nodes
software is an operating system extension (incidentally, not just for Linux) which allows a network of loosely coupled computers to appear like a multiprocessor system from the viewpoint of an application. A Mosix cluster is thus a kind of hybrid between a traditional (Beowulf) network of computers and a conventional multiprocessor system. Mosix was developed at the Hebrew University of Jerusalem and was used for the first time more than 18 years ago on a PDP-11 computer. In spite of the
Fig. 1: One hundred computers working productively: Linux-Mosix cluster at the Hebrew University of Jerusalem
immense time-span, the construction of ”true” multiprocessor systems from discrete computers is still the subject of much research. Table 1 describes possible Mosix cluster variants.
Distributed magic

Figure 2 depicts the basic mode of operation of Mosix. Usually, a Linux process comprises a user-space part and occupies a data area in the kernel as well. The user context comprises the program code, stack and user data, while the system context keeps operating parameters for I/O access and interprocess communication ready for use (see Figure 2, process A). Through a cleverly-devised load distribution algorithm, the user part of a process may have been moved to a different node – the representative system part (called Deputy) remains on the Unique Home Node (see Figure 2, process B). If the local process A (on node 1) now sends a signal to the migrated process B (on node 2), this will be intercepted by an adapter link layer and forwarded (the red path in Figure 2). If as a result of this, the removed process wants to write data back to the
Table 1: Types of MOSIX clusters
Single Pool Mode: All of the workstations and servers in the network form a common cluster (all computers have the same mosix.map).
Server Pool Mode: The clusters are formed by the servers only – the workstations are retained for the respective users. Disadvantage: computing-intensive processes do not migrate automatically to the servers.
Adaptive Pool Mode: The cluster consists of the servers, and the workstations are connected up from time to time only, e.g. in the evening or when they are not being utilised (this can be achieved, for example, by appropriate mosctl entries in the crontab).
Half Duplex Pool Mode: A workstation is part of the cluster only for those applications that were started on it. That is, the taking over of processes of other workstations is declined.
local computer’s hard disk (node 1), then this too will be diverted accordingly via the link layer (see Figure 2, blue path). Incidentally, process C is unable to send any signals to process B, although the latter physically takes up computing time on the same node, since from a process logic standpoint B is still situated on node 1. A small overhead arises due to the link layer, but this is not noticeable in practice (see below).
Opportunity knocks

Control of the adaptive load distribution is decentralised. This allows new nodes to be integrated into the system without having to interrupt processing operations. Also, nodes that signal their demise is imminent can be isolated and
switched off without having to abort and restart protracted computation processes. At first sight, adaptive load distribution is somewhat puzzling: each node informs a random group of cluster participants about its current state. Amongst other things, this procedure prevents peak loads from occurring in computers little utilised up until then. On the basis of the received status information and observation of the processing or I/O response of the local processes, a node then decides whether a migration makes sense. This occurs not just when the processor load is too high, but also, for example, when a process demands more memory than the amount physically available on the node. In this case the memory-hungry process is simply relocated to a computer that is able to satisfy its needs.
Mosix installation

As is virtually common practice with projects of this complexity that are tightly integrated with the kernel, the operating system kernel has to be rebuilt. Mosix offers a largely automatic installation (which even worked in our test with RedHat 6.1). Those who have no confidence in this can also, of course, patch the kernel manually and then prepare the necessary binaries. The relevant guide can be found in the Mosix package's README. The following section gives a description of the installation process. Incidentally, it is never a bad idea to do a system backup prior to such a delicate operation! After this you acquire and unpack the necessary software:
cd /tmp
wget ftp://ftp.de.kernel.org/pub/linux/kernel/v2.2/linux-2.2.14.tar.gz
wget ftp://ftp.cs.huji.ac.il/users/mosix/MOSIX-0.97.2.tar.gz
cd /usr/src
# delete the old link
rm linux
tar -xzf /tmp/linux-2.2.14.tar.gz
mv linux linux-2.2.14-mosix
ln -s linux-2.2.14-mosix linux
mkdir mosix && cd mosix
tar -xzf /tmp/MOSIX-0.97.2.tar.gz
./mosix.install
You are now led through the installation procedure and have to respond to a number of irksome questions. With RedHat-based distributions, very often it is possible to accept the default value by pressing the Enter key:
Specify in which run-levels (2-5) to run MOSIX [2345]:
The question concerning the preferred boot mechanism is generally responded to with the standard default 1 (LILO). Caution: if you employ an exotic boot loader you could possibly wipe out the system through this!
We can automatically add MOSIX to your Boot Loader
Chose one from the list below:
1 LILO
2 GRUB
3 None
([1]/2/3): 1
It is best to give a negative response to the following question if you just want to look at Mosix:
Would you like MOSIX for LINUX to be your default boot kernel ([Y]/n): n
Since as yet there is no Mosix kernel on the computer, you have to respond with "(Y)es" to the following question:
If you already have a compiled MOSIX kernel from another node and the configurations are the same, you need not compile the kernel again.
Do you need to compile the kernel on this node ([Y]/n)?:
The specification of the path to the Linux source directory follows our example:
Path of the Linux v2.2.14 kernel sources ("-" to skip the kernel installation) [/usr/src/linux-2.2.14]: /usr/src/linux-2.2.14-mosix
A negative response should be given to the following question about the symlink for the include path since the link /usr/src/linux has already been set correctly further up.
Make Symlinks from /usr/include to /usr/src/linux-2.2.14-mosix/include ([Y]/n) n
In configuring the kernel the Mosix-specific default values can be left unchanged:
Which method would you like to use to config the kernel
1 config (the regular plain questions).
2 menuconfig (menu driven configuration)
3 xconfig (Tcl/Tk windows interface)
([1]/2/3): 2
Compilation and installation of the kernel and the Mosix commands should now proceed smoothly. Prior to the subsequent reboot it is advisable to create the /etc/mosix.map as well (see "Mosix example configuration" below) and to specify the command:
/sbin/versionate 2.2.14-mosix
(A shared directory through which each computer is able to access the necessary data – e.g. an NFS mount – is also useful.)
Limitations

Unfortunately, a whole series of programs are unable to be migrated away from the home node. This is the case if an application uses shared or locked memory, has direct access to the hardware or if a number of clones of itself (especially threads) run in the same address space. This severely limits Mosix's usability in day-to-day use. Many applications such as Apache or Zope cannot be distributed and a number of end-user programs such as Netscape or StarOffice may also not be relocatable to a different computer. Posix real-time tasks (SCHED_RT) and the init process of course cannot be migrated either.
Careless: locked out!

There are some programs and services that should not be migrated even though this may be
Mosix example configuration

Suppose that the cluster comprises four computers with the IP addresses 192.168.1.1 to 192.168.1.4 along with two computers with addresses 192.168.1.10 and 192.168.1.11. The corresponding configuration file /etc/mosix.map will have the following structure:
#   IP            Range
1   192.168.1.1   4
5   192.168.1.10  2
Each entry in the first column indicates the node number for the computer while the IP address is shown in the second column. The third column reveals how many computers with consecutive IP addresses still belong to the cluster. The computer with IP 192.168.1.11 in our example would thus have node number 6.
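If, say, four further machines with the consecutive addresses 192.168.1.20 to 192.168.1.23 were later added to the cluster, the next free node number would be 7 (nodes 1 to 4 and 5 to 6 are already taken), so a third line would simply be appended to the same file on every node:
7   192.168.1.20  4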
technically feasible. The gettys of the virtual consoles are an obvious example: were these to be moved to a different computer and you then
Fig. 3: With pvmpov there is hardly any difference between symmetric multiprocessing and Mosix cluster computing
Table 2: Important Mosix commands (used in the command line)
mosrun: can precede any given command and defines whether and how the process started by it may be distributed.
migrate: moves a process to any given node. Caution: unless specified to the contrary with mosctl, the migrated process can again roam further at any given time.
mosctl: defines the "migration tactics" of the separate processes. This allows processes of external nodes to be expelled (mosctl expel), for example, in order to isolate the computer from the cluster and be able to switch it off.
setpe: integrates the local computer into the current cluster (is usually invoked by /etc/rc.d/init.d/mosix).
mon: text graphic for visualisation of the current status of load distribution for the separate computers (see Figure 5, centre left).
tune_kernel: optimises 14 different Mosix kernel parameters such as those for processor, storage and network speeds. A second computer as partner is required for this.
prep_tune: prepares a node as partner for the optimisation process. Both computers have to be in single-user mode (init 1).
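For example, before a node is taken out of the cluster for maintenance, guest processes that have migrated to it can be pushed back to their home nodes with the mosctl subcommand mentioned in the table above (typically run as root on the node concerned):
# mosctl expel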
Server in Mosix test: SGI 1400L The spacious server housing accommodates six plug-in modules for U2W hard disks, three redundant power supply units and an immense fan battery (replaceable during operation). The 4-way Intel server main board (including dual on-board U2W controller, on-board S3 graphics card and 6 PCI slots) can be maintained remotely by modem through its special server BIOS if the need arises (not normally the case). Included in the basic equipment in addition to the two Xeon 550 processors are 512 MB RAM, a 9GB hard disk, CD-ROM drive and a 17-inch monitor. The accompanying accessories (mouse, keyboard) are of course in the stylish SGI design. With four processors the device is hellishly fast: 50 seconds for the compilation of a standard kernel 2.2.14 could certainly not be achieved with a currently available desktop system. With approximately 20 seconds for the PVM-Skyvase benchmark, the computer is indeed 10 per cent faster than a dual Alpha 21264, but in the configuration level needed for this it is also considerably more expensive.
Table 3: Add-on programs for Mosix (to be found under [4])
mps: works just like the familiar ps command used in the command line, but as well as displaying the usual process information it also shows the particular node that the process has just migrated to.
mtop: like top, mtop displays the process information continuously, indicating the respective node, in text mode (see Figure 5).
qps: is based on the Qt GUI toolkit and is rather more "diverse" than mtop.
Fig. 4: The SGI 1400L servers used in this test can be completely dismantled in less than two minutes if necessary
simulate a failure of just this computer (by switching it off, for example), the migrated gettys can no longer respond to keyboard input. You are then locked out from the keyboard, even though the node is still running merrily. (Incidentally, the Mosix kernels of the computers affected automatically resynchronise as soon as you turn the computer back on.) For this reason the automatic installation prefixes some services in /etc/inittab or /etc/inetd.conf with the command mosrun, which prevents them from being migrated.
Tuning

Just as the pistons in an internal combustion engine work together optimally only if the engine unit has been tuned beforehand, so the processors in a Mosix cluster work with maximum efficiency only if they have been tuned to one another. Mosix recognises no less than 14 operating parameters (cat /proc/mosix/admin/overheads), each of which has a significant effect on the system's performance. Manual tuning would therefore be a laborious procedure.
Fig. 5: The two Mosix nodes during the computation for Skyvase. With mtop (below) easily discernible: the distribution of the pvmpov processes (N#: 0=home node, 2=second node)
Fortunately there are tools available that carry out the optimisation automatically. With a cluster consisting of similar types of computer this makes improving the efficiency child’s play. Two of the computers first have to be taken into single-user mode. Then prep_tune is started on one of them. This computer acts as the partner for the cluster participant on which the optimisation is initiated with tune_kernel. The process lasts several minutes and as far as possible should be kept free of disturbance (caused, for example, by high network loading of other users). The tuning can be made temporarily effective by notifying the Mosix kernel about the new operating parameters using cat /tmp/overheads > /proc/mosix/admin/overheads. For a permanent solution, the file created during the optimisation procedure should be copied across (cp /tmp/overheads /etc/overheads), so that with every system start the settings can take effect automatically with the aid of /etc/rc.d/init.d/mosix.
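Pulled together, the tuning procedure described above might look roughly like this. Treat it only as an outline: the exact invocations of the two tuning tools may differ between Mosix versions, so consult the Mosix documentation for the precise syntax. On the partner node (in single-user mode):
# prep_tune
On the node being tuned (also in single-user mode):
# tune_kernel
# cat /tmp/overheads > /proc/mosix/admin/overheads
# cp /tmp/overheads /etc/overheads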
Practical experience

Enough of pure theory! Two SGI 1400L Linux servers (see box) were available for our experiments. During the tests, one of the two Intel 4-way server main boards was allowed to simultaneously deal with all four Xeon 550s in order to show the difference between cluster computing and symmetric multiprocessing. As is evident from Figure 5, in the Povray Skyvase benchmark the computation times initially are correspondingly lower the more processes there are running in parallel. However, if you exceed twice the number of available processors, this leads to poorer results because communication overhead, process synchronisation and context switching times take their toll. With two processes per processor everything still runs fine though. This is because if one process is slowed down due to file I/O activities, for example, the second process can immediately use the released time slots. With pvmpov it is of no great concern here whether the processors slave away in just a single computer or are distributed across a number of computers. It is quite apparent, though, that the Mosix cluster requires somewhat longer to process the task than the SMP system once more processes are started than the home node has processors for. This is because the Mosix kernel observes newly started processes first and only after a couple of seconds have elapsed decides whether migration to another computer is worthwhile, since for I/O-intensive applications it is preferable that the process remains on the home node. Consequently, the Mosix cluster requires just marginally more time to complete the Skyvase benchmark than an SMP machine with the same number of processors: in Figure 5 it is quite apparent that the second computer (xosview@lab1) commences its activities slightly later than the home node (xosview@lab0). As previously mentioned, the overhead caused by the link layer is not noticeable. Whether with the normal SMP kernel or Mosix, the execution times for the Skyvase test are identical provided that the events take place on the same node. Mind you, the Mosix system is rather more easily influenced than a true SMP system: an xosview running on the nodes at any given time worsens the results significantly (to approximately 28 seconds on average instead of 23), although this takes up hardly any processing time. It would seem that synchronisation is disturbed by this. With a kernel compilation things look even worse. The conversion of a source file into object code requires about the same length of time that a Mosix kernel normally takes for its migration decision. As a result, gcc processes are either not transferred at all or else at a very late time. Depending on the application a Mosix cluster has to be adjusted separately through /proc-fs.

Info
Beowulf home page: http://cesdis.gsfc.nasa.gov/linux/beowulf/
PVM home page: http://www.epm.ornl.gov/pvm/pvm_home.html
MPI home page: http://www.mpi-forum.org/index.html
Mosix home page: http://www.mosix.org ■
Work in Progress

Of course, the Mosix project, like many other Linux projects, is still very much on the move. In one of the forthcoming stages a mechanism is to be implemented for so-called Network RAM. This allows the storage needed for a process to be distributed over a number of computers. The process then always travels to the data instead of the reverse of this. The objective is ultimately Distributed Shared Memory (DSM), with which threads etc. can also be distributed across the Mosix cluster. A further future option is migratable sockets. With this, migrated processes also transfer their network I/O activities to the remote nodes and thus reduce latency time and save bandwidth. The Mosix File System (MFS) is already implemented: instead of diverting data accesses to the Deputy (this is the system part that is located on the home node) migrated processes can access the desired data using "real" system calls (read/seek etc.) via MFS. Result: significantly higher performance. However, MFS is still at an early alpha test stage.
Dreams for the future?

Even today the use of Mosix in many, mostly scientific, spheres makes sense. Mosix will become really interesting though when Distributed Shared Memory is fully supported, thus allowing the migration of any given applications. One can imagine a Mosix server cluster with a lot of X terminals attached: if a shortage of processing power or storage arises for the enterprise-wide solution, you simply plug in an extra "fireball" to the network or shut down a server "just briefly" to upgrade the storage – and nobody will notice! ■
Writing Blender scripts in Python
SNAKE CHARMER

MARTIN STRUBEL
In versions greater than 1.67 and with the C-Key extensions the well-known animation package Blender allows users to explicitly manipulate 3D objects and their attributes. The script language it uses to do this, Python, is popular on every platform and very easy to learn. In this article we will make a start writing Blender scripts with Python. For the time being, the Python extensions are not available in the freeware version of Blender. This means that without a C-Key you cannot try out the examples provided. However, with Blender 2.0 (the Blender games development system), there is a certain dynamism about Python which could help those who do not yet own a C-Key to feel that acquiring one would be worthwhile.
First steps

We do not intend to describe in detail the structure and implementation of Python in this article. At the Python home page you will find a complete user manual on Python, which you do not have to read through right now. Python is actually very simple and you could probably learn it standing on your head. However, we must define some key concepts. As the term "object" is used both in Blender and in Python, we intend to describe a specific Python object as "PyObject" in the rest of the article. However, just to immediately confuse you again, an object in Blender can also be a PyObject – in fact, as soon as you address a Blender object using Python. But, a PyObject can also be a material or a light source in Blender and is not simply a variable or a data record. The best thing to do is to take a look at how we manipulate Blender objects. And the best way to do this is to start the Blender in a window (not full-screen) from the shell:
blender -p 0 200 640 480
This way you can still view the shell window displaying the current stdout and stderr of the Blender Python module. Now call up the text editor in Blender using [Shift+F11] and select "Add New" in the menu panel. Now you are ready to start writing. Try out the famous Hello program, somewhat extended:
# Everything after the "#" is a comment
a = 1
print a
print "hello"
a = a + 1
print a
Now run the script using [Alt+P]. Good, that was easy. However, we can establish straight away that "a" is a PyObject, and one of the simplest at that: an integer value (int). You can use:
print type(a)
to establish what type of PyObject it is; the output in the shell is: <type 'int'>. Make a small change to the script by setting "a = 1.0" instead of "a = 1". Check again what type a is. Aha!
Now we take the surface which is usually presented first when we start Blender. Its default name is Plane (OB:Plane) and it is controlled using the EditButtons menu [F9] (see Figure 1). Please note: the relevant Polygon object (Mesh) is also called Plane. However, we are only addressing its entity (i.e. the Blender object itself) and therefore always use the OB name.
Enter the following script and run it using [Alt+P]:
Figure 1: The EditButtons menu [F9]
Figure 2: The IPO window
import Blender
obj = Blender.Object.Get("Plane")
obj.LocX = obj.LocX + 0.5
obj.RotZ = obj.RotZ + 0.2
Blender.Redraw()

Something happened! You can print out the coordinates using print if you want to check them. And now a brief explanation. The function Blender.Object.Get() expects the name of the Blender object as an argument and delivers the pointer to the data record (for C hackers: much like a struct) of the PyObject concerned as a return value. Thus we have specifically allocated a PyObject to the Blender object: if we change the attributes of the PyObject obj, the attributes of the Blender object change in the same way. However, this function is not built into Python - clever readers will already have guessed - it is located in the module called Blender, which must first be imported. Blender.Redraw() causes the objects to be redrawn (so that you can also see the effect immediately). Of course, this is not necessary when you compute an animation. obj.LocX is - quite obviously - the X co-ordinate of the plane (strictly speaking, of the purple centre point) and obj.RotZ the angle of rotation around the Z axis, where the unit is radians (a circle, i.e. 360 degrees, corresponds to 2*Pi, or around 6.28). We will look at how to query Pi as a variable later. That was the basics, but we will also show you a few tricks so that you can check out all the Python functions in Blender.
Hierarchical society

As you can guess, Python has a similar type of class hierarchy to C++ or Java. The dir function provides you with a list of strings which contain the names of the class members (or methods) of the argument. Try out another script ("ADD NEW" in the menu):

import Blender
print dir(Blender)

Your output will be something like:

['Camera', 'Const', 'Get', 'Lamp', 'Material', 'NMesh', 'Object', 'Redraw', 'World', '__doc__', '__name__', 'bylink', 'link']

Now try to move further down the hierarchy, for example using:

print dir(Blender.Object)

Everything alright? In principle, you can scout out new functions yourself (which you will probably receive with each new version of Blender). The most
important functions are those shown in the following form:

Blender.<class>.Get("<Name>")

For <class> you can use almost anything you obtained above with dir: Camera, Lamp, Material, Object and World. As a return value you always receive a data record object, the type of which corresponds to the class (and, accordingly, its attributes depend on it too). The aforementioned term "method" always stands for a function which is applied to a specific PyObject; for example, one of the standard methods is the function to attach a PyObject to a list (the method list.append). Blender.Get() is also a method. But what is a list? If you do not enter anything as an argument for Blender.Object.Get(), you will not receive the data record of a single object as a return value but a whole list of all the objects. For example:

import Blender
obj = Blender.Object.Get()
print obj
print len(obj)

delivers the following output:

[[Object Camera at: <0.000000, -8.128851, 0.000000>], [Object Plane at: <4.500000, 0.000000, 0.000000>]]

List elements can be addressed in the same way as arrays in C: the X co-ordinate of the object Plane can therefore be queried or changed using obj[1].LocX in this instance. The number of elements in the list can be established using len(); in this case print len(obj) produces the result 2. Now, of course, we ask what else can be manipulated apart from the co-ordinates. Answer: almost all the attributes which can be controlled using an IPO curve. Simply switch to the IPO editor [Shift+F6] and view the attribute names on the right-hand side (see Figure 2). We want to show you a few examples (see Table 1).
Table 1: Blender scripting with Python

cam = Blender.Camera.Get("Camera")
x = cam.Lens
    x = "focal distance" of the camera lens

cat = Blender.Object.Get("cat")
cat.SizeZ = cat.SizeZ / 10
    Poor cat (no comment)

mat = Blender.Material.Get("Blue mat")
mat.B = 0.0
mat.R = 1.0
    We have coloured the blue mat red...

la = Blender.Lamp.Get("Lamp")
la.Energ = la.Energ - 0.1
ob = Blender.Object.Get("Lamp")
print "co-ordinates:", ob.loc
    We want to dim the light a little - but notice: la and ob are not the same!
As the last example in Table 1 illustrates, the data blocks retrieved using the different functions are not the same. Try to establish the data type (as above, with print type). If we select the lamp and switch to the EditButtons menu [F9], we see Figure 3. The variables la and ob have been used in the example above to give you the gist: la is a PyObject for the data record of the lamp parameters and ob is the parent object for this data - the lamp entity with the name "Lamp". We already know that an object in Blender defines position, rotation, size etc., and refers to a data structure of the corresponding object type (Lamp, Mesh, Surface, Camera, etc.). In the relevant PyObject this happens via the pointer ob.data. Again, note the data type shown by ob.data. In the example above with the lamp, ob.data points to la, of type Lamp. We could therefore write the above example differently:

ob = Blender.Object.Get("Lamp")
la = ob.data
la.Energ = la.Energ - 0.1

What is the point in saving the parameters separately like this? Those of you who have clashed with Blender's object hierarchy will certainly know the difference between "linked copies" and normal copies (single user copy). An object can share its parameters with several other objects in other positions, i.e. if I change these parameters, all the partners' parameters change in the same way. Therefore, you can duplicate ten lamps "linked" (using [Alt-D]) and use la.Energ to change their brightness all at the same time. To do this, you enter the "LA:" name at Blender.Lamp.Get(). However, if I simply wish to change the position of the individual lamps, I use ob.loc and have to enter the "OB:" name at Blender.Object.Get(). And now we should schedule a coffee break before things become too confusing. Before we go for coffee, though, I would just like to add one thing. Those of you who (with the lamp selected) switch to the IPO window and click on the lamp icon will find more attributes of the PyObject la of type "Lamp". And those of you who wish to view this whole object hierarchy can do so in the OO window using [Shift-F9].
Complex images

Now it gets serious. The great thing about scripts is that you can program complex animations quite easily instead of having to enter them laboriously by hand as IPO curves. The attributes of an object can depend on the attributes of another object animated using an IPO curve, or directly on time.
Figure 3: The EditButtons menu [F9]
Figure 4: The ScriptLink menu
We have already seen an example of the first case; now we need to access a time variable. We get this in the form of the "frame number" using:

time = Blender.Get(Blender.Const.BP_CURTIME)

The inquisitive reader who immediately tries out print dir(Blender.Const) will find yet another variable, BP_CURFRAME. The difference is that Blender.Get(Blender.Const.BP_CURFRAME) provides an integer value (indicating the frame currently being rendered). In contrast, the time values are not necessarily integers, e.g. where half images (in PAL format) are rendered, or rendering involves Motion Blur. Note: it is usually a good idea to deduct 1.0 from the time variable so that the animation begins with time = 0.0. Now it would be good if the script were called automatically whenever an object was moved or whenever a new frame was rendered. It's all possible! Select the object concerned and switch to the ScriptLink menu (see Figure 4). On the right-hand side are the scene script links: click on "New" and enter the name of the script in the text field (in our example sway.py). This is called each time the frame is changed. On the left-hand side are more link options. Depending on which type of object was selected you will see the symbols for object, material, lamp, world etc. in the menu panel. However, we will not go into more detail about these link types just now. Let's try it out. Let's make a virtual lamp sway around and flicker a little. We want to do this using a scene link. We begin by adding a lamp using "ADD NEW->Lamp". However, that gives us only a light source. We want to see the lamp properly, so we add another "Plane", delete three of its vertices in EditMode and move the remaining vertex to the position of the lamp, which we make the parent of the vertex so that it moves with the lamp too. As material we set a red halo. Our script can be seen in Listing 1. Finally, the script is linked to the scene via the ScriptLink menu so that it is called when the animation is played (Alt+A) and during rendering, as described above. A small snapshot, rendered using Motion Blur, can be seen in Figure 5.
A little bit of math and a little bit of chance

Without any further explanation, we have imported the math module in Listing 1. It contains the functions of the standard C library, as you will see if you use print dir(math).
Figure 5: The sway script in action
Here we also find the value of Pi promised earlier: math.pi. We used the statement:

from math import *

so that we do not always have to type in the module prefix math. The math module functions used to calculate the circular movement, or the sway in the orbit, are sin() and cos(), best known in the context of trigonometry. In addition, we would like to have a greater element of chance. For this there is an extra module by the name of whrandom in the standard Python directory (under Linux usually /usr/lib/python1.5 or /usr/local/lib/python1.5). However, the environment variable $PYTHONPATH is not set everywhere, and so the standard module path may not be found. Using import sys, the module search path can be replaced with the assignment sys.path = ['/usr/lib/python1.5', ...] (or extended with sys.path.append()), or set via the environment variable $PYTHONPATH. The random function whrandom.random() always provides a floating-point value between 0.0 and 1.0. You can read everything else from the script or simply try it out in the example file.
A few pearls of wisdom

We can really do something with the methods described. However, things become very interesting - but more time-consuming - as soon as we begin to simulate functions which are no longer as easily predictable but depend on the position of other objects (e.g. collisions, chaotic functions etc.). This is a topic for another time. Empties are very useful for establishing the starting position of an object. You can also use them as a kind of slider without having to enter variables in the script. If you wish to simulate cannon fire, for example, you can stipulate the starting position and firing direction of the cannon ball using two Empties. The fact that variables and modules are not deleted after the script has run but remain available in memory - and globally too - is very important. Therefore, if script A sets a variable, script B can read it again. Of course, this can be very useful, but it can also be somewhat confusing at times. When experimenting with scripts, you should test whether your script is foolproof by saving the work, restarting Blender, reloading the file and running the script again. There are still a few things we haven't discussed yet. Since version 1.69 we have been able to manipulate or query the vertex data of a mesh and the texture co-ordinates directly. This is particularly interesting for the export and import of models (e.g. for Quake2 etc.). It also enables you to generate complex objects easily - Lindenmayer systems or genetic algorithms used to create plant-like objects and trees come to mind.
In addition, modules can be developed in C which can be loaded quite easily as dynamic libraries (like the Blender plug-ins) using import <module>. So anyone who still thought that Python was too slow as an interpreted language has hopefully been convinced otherwise. The rapid development of Blender allows us to dream. In future versions there will be built-in collision detection, and work is under way on extensions allowing users to drive the Blender GUI from Python. There are likely to be more new features by the time you read this. Therefore, stay on the ball and allow yourself to be surprised. ■
Info
Python home page: http://www.python.org/
Brief Python documentation on Blender: http://www.blender.nl/complete/index.html
Blender shop: http://www.blender.nl/shop
Listing 1: The dance of the lamps

# sway.py by ms, 11.1999
from Blender import *
from math import *
import whrandom

# Number of frames for "once around"
# the higher the number, the more slowly the lamp sways
speed = 100
pi2 = pi * 2

lamp = Object.Get("Lamp")
box = Object.Get("Box")
t = Get(Const.BP_CURTIME) - 1.0   # Start at t = 0.0

# Make the lamp sway, taking into consideration the size of the
# box - change the size of the box in order to test it and press Alt-A again
# the radius of the orbit should oscillate somewhat
r = box.SizeX * (0.7 + 0.1 * sin(10 * t * pi2 / speed))
lamp.LocX = r * cos(t * pi2 / speed)
lamp.LocY = r * sin(t * pi2 / speed)

# Make the lamp flicker:
lampdata = Lamp.Get("Lamp")
r = whrandom.random()
lampdata.Energ = 1.0 + 0.5 * r

# Also make the halo size flicker:
mat = Material.Get("Halo")
mat.HaSize = 0.10 * (1.0 + 0.5 * r)
How to use framebuffer devices
SMALL IS BEAUTIFUL DENNIS SCHÖN AND BERNHARD KUHN
Linux handhelds have become all the rage now that framebuffer graphics have become available as an alternative to the resource-hungry X Window System. In this article we look at how to use this display technology.
The X Window System has been around for more than a decade. It's a tried and tested graphics interface for Linux desktop computers and other Unix workstations. Resource usage isn't a great concern in that environment, but it has been the main obstacle to the development of Linux as an embedded OS for handheld computers (PDAs, organisers and so on). In these tiny computers every megabyte saved counts. In the past few years several toolkits have been developed to help create applications with more efficient graphical user interfaces. Many of these are based on the "kernel framebuffer device" originally developed for Linux/M68K. The graphics subsystems of these platforms (Amiga, Atari, Macintosh) offer little in the way of hardware acceleration but share a very similar representation format.
Fig. 1: One is not enough: the generic console driver allows several consoles per computer.
It seemed appropriate to produce a generic graphics driver. From kernel 2.1.107 on, the framebuffer device for all platforms (x86, Alpha and so on) is integrated into the standard kernel. As shown in Figure 1, the generic driver for the text console runs as desired either on the ordinary VGA driver or on fbcon on top of the underlying framebuffer devices (fbdev).
Fig. 2: Economical with resources: this statically linked framebuffer version of a Tetris clone is just 11Kb.
Configuration of the kernel

When using the framebuffer it is advisable to employ the latest stable kernel. The 2.4.0-test* kernels are currently experimental; as such, they should only be used if kernel 2.2.x has no support for the desired graphics card, which should be an exception. The framebuffer device for the VESA BIOS is usually found in IBM-compatible PCs. Special drivers such as those for products from 3dfx, ATI or Matrox allow higher resolutions and refresh rates than the VESA driver. After downloading, the kernel sources can be unpacked (with root privileges) under /usr/src and then configured using make menuconfig or make xconfig. You must activate the menu item "Prompt for development and/or incomplete code/drivers" under "Code maturity level options". If this isn't done, the option "Support for frame buffer devices" will not appear in the "Console Drivers" menu later on.
Table 1: List of graphics chipsets supported
Graphics card chipset         Kernel 2.2.16   Kernel 2.4.0-test1
VGA 16                        X               X
VESA 2.0 compatible           X               X
Permedia2                     X               X
Matrox                        X               X
ATI Mach64                    X               X
nVidia Riva                   -               X
Cirrus Logic GD542x/543x      -               X
ATI Rage128                   -               X
SiS630/540                    -               X
Framebuffer vs. X Server

In the traditional X Window System, the application communicates with the X server via the network layer, which then accesses the graphics hardware in user space. With the framebuffer device, the application accesses the graphics memory via the device files /dev/fb*.
X-Server: flexible but complex. It’s simpler with a framebuffer device
Advantages of the framebuffer device subsystem
• The framebuffer is a powerful and efficient alternative to the X server.
• Many graphics cards are switched into graphics mode by firmware and therefore provide no hardware text mode. For such graphics cards, framebuffer-type drivers are necessary in any case.
• The framebuffer allows very flexible use of the graphics card. It offers various resolutions, refresh rates, colour depths and type sizes, either with X Windows (XF68_FBDev) or on the console. The console can be run at a resolution of 1600x1200 at 90 Hz (200x150 text lines) with powerful enough hardware. Even at 1024x768 / 75 Hz, ergonomics are noticeably better than with standard VGA (normally used with 640x480 pixels at 60 Hz).
• With the framebuffer there is no limit on the number of characters in a line of text on the console.
• Some graphics cards have no dedicated X server in existence, but do have a VESA BIOS (established in the PC environment for about five years). With the aid of the framebuffer X server (or the corresponding XFree86 4.0 driver module) an X Window System can be used on them anyway.
• A kernel equipped with a framebuffer can display a "tux" logo (or several, on multiprocessor machines) in the top part of the screen during the boot procedure, instead of a plain black screen.
• The framebuffer architecture is very simple. An experienced programmer can implement a new driver in a single afternoon.
The following options must be activated:

[*] VGA text console
[*] Video mode selection support
....
[*] Support for frame buffer devices

Extended options will follow. Apart from the driver for the graphics card (such as VESA 2.0, ATI Mach64, 3Dfx Banshee/Voodoo3) the following options can still be activated:

[*] Advanced low level driver options
....
    8 bpp packed pixels support
    16 bpp packed pixels support
    24 bpp packed pixels support
    32 bpp packed pixels support
....
[*] Select compiled-in fonts
....
    [*] VGA 8x16 font

Using "x bpp packed pixels support", select the possible colour depths with which the framebuffer device can be operated. Under "Select compiled-in fonts" you can choose the fonts for the console. Activating more than one font makes it possible later on to select which one should be used for the console as a boot parameter.
Configuring the bootloader

Depending on the bootloader used, the files /etc/lilo.conf (for lilo) or /boot/grub/menu.lst (for grub) can be modified once the kernel has been compiled and the modules installed. This must be done in order to activate the framebuffer device at the next reboot. Here are two examples:

# LILO configuration file
boot = /dev/hda3
# Linux bootable partition config begins
image = /vmlinuz
  append = "video=atyfb:1024x768-8@76,font:SUN8x16"
  root = /dev/hda3
  label = Linux
  read-only

# GRUB configuration file
# For booting Linux
title GNU/Linux (experimental 2.4.0-test1 "1024x768@76")
kernel (hd0,0)/vmlinuz video=atyfb:1024x768-8@76,font:SUN8x16 root=/dev/sda1
Configuration of the X server XF68_FBDev

One of the main applications for the framebuffer, apart from the graphical console, is the X server XF68_FBDev. Its name doesn't mean that this is a number cruncher - this X server was originally developed for platforms with Motorola 68000 processors. From version 4.0 on, XFree86 gives the driver module fbdev_drv.o of the generic X server direct access to the graphics hardware. By now every current distribution includes the framebuffer server in pre-compiled form. After installing the software package for the respective distribution, or compiling and installing the source package (only advisable in exceptional cases), a few small alterations have to be made to the configuration file for the X server (version 3.x) (/etc/X11/XF86Config or /etc/XF86Config). The "Screen" section might look like this:

Section "Screen"
    Driver "FBDev"
    Device "Primary Card"
    Monitor "Primary Monitor"
    SubSection "Display"
        Modes "default"
    EndSubSection
EndSection
With these configurations the X server starts in the resolution at which the framebuffer has most recently been set (either by boot parameter or with the command-line tool fbset). Resolutions can also be specified in the XF86Config file. This makes it possible to modify the resolution of the X server at run time. Unfortunately the timing values of the video mode have to be specified in a different format than in the normal XF86Config. There are two ways to find out the correct values: either convert the existing XF86Config values into the new framebuffer values, using the formulas given in /usr/src/linux/Documentation/fb/framebuffer.txt, Section 6, "Converting XFree86 timing values into frame buffer device timings"; or switch to the resolution desired for X Windows using fbset and observe the timing values displayed when you run fbset -x:

# fbset -x
Mode "1024x768"
    # D: 84.991 MHz, H: 62.493 kHz, V: 75.933 Hz
    DotClock 84.992
    HTimings 1024 1032 1152 1360
    VTimings 768 784 787 823
    Flags "-HSync" "-VSync"   # Warning: XFree86 doesn't support accel
EndMode

This mode specification can now be used easily, by means of copy and paste, in the "Monitor" section of XF86Config. Apart from this the XF68_FBDev server can be used like any other: using the "Virtual" keyword, a virtual resolution can be set.

In XFree86 4.0 and later versions, only the driver module fbdev_drv.o has to be taken into account:

Section "Device"
    Identifier "3dfx"
    Driver "fbdev"
EndSection

Section "Screen"
    Identifier "Screen 1"
    Device "3dfx"
    ...
EndSection
The meaning of the kernel parameter "video" should be self-explanatory. The ATI driver should be loaded at a resolution of 1024x768 pixels with a colour depth of 8 bits per pixel and a refresh rate of 76 Hz. The font compiled into the kernel is SUN 8x16. Depending on the graphics card driver the kernel parameter may be omitted or look completely different. The VESA BIOS and the Matrox driver, for example, have to be given a VESA mode number (such as append = "video=matrox:vesa:440"). More details on these drivers can be found in the directory /usr/src/linux/Documentation/fb/. This is also where special options for the drivers are explained (for example ypan and ywrap to increase the scroll rate). Until now almost every framebuffer driver has used its own video modes. A good way of avoiding this chaotic waste of resources can be found in the recent kernel series 2.4. Here, the drivers for Amiga (ami), ATI Mach64 (atyfb), ATI Rage128 (aty128fb) and Voodoo3 (tdfx) all share a single video mode database (modedb). The following self-explanatory format specifies a valid video mode:
Fig. 3: Microwindows benefits from the ever-growing popularity of Linux handhelds. Apart from alpha blending it also supports anti-aliased TrueType fonts.
"video=<driver>:<xres>x<yres>[-<bpp>][@<reU fresh>]"
Other configuration

If no device files have yet been created in the /dev directory for the framebuffer devices, this must be done by hand:

for i in 0 1 2 3 4 5 6 7; do
    mknod /dev/fb$i c 29 $[$i * 32]
done

After a restart, a message similar to the one below should appear during the boot procedure:

atyfb: 3D RAGE PRO (BGA, AGP) [0x4742 rev 0x7c]
8M SGRAM, 14.31818 MHz XTAL, 230 MHz PLL, 100 MHz MCLK
Console: switching to colour frame buffer device 128x48
fb0: ATY Mach64 frame buffer device on PCI

The graphics card immediately switches to graphics mode. The tux logo should appear during the operating system boot-up.
Troubleshooting

If problems arise, the first thing to do is to consult the documentation under /usr/src/linux/Documentation/fb/. The texts on the various drivers used (aty128fb, tdfx, ...) are particularly good sources of information. Another good place to start if there are problems is the Linux framebuffer project homepage (listed below). Here you'll find a link to the Linux framebuffer HOWTO (which unfortunately has not been updated for a year). There is also a mailing list, which can be found at linux-fbdev@vu.union.edu. There can be many different problems. Typical approaches to solutions are listed below:
• If the framebuffer does not start (error messages during the boot procedure!) the wrong driver has probably been compiled or the card is not yet supported. In that case the VESA BIOS framebuffer or the experimental kernel (2.4.0-test*) should be tried.
• If the framebuffer does start, but does not switch to the specified mode, all the options which are not vital (ypan, ywrap, font) should be deactivated and then a lower-resolution mode (for example 800x600) tried.
Fig. 4: Small but a bit of all right: the FLTK GUI toolkit leaves little to be desired for PDA and organiser applications.
Tools and applications

The fbset program by Geert Uytterhoeven makes it possible to alter the resolution of the framebuffer during operation. It manages its own database of video modes (in /etc/fb.modes), which can obviously be extended as desired. A whole range of sample modes accompany the sources of the program, for example for ATI graphics cards, the Atari Falcon, or video modes to drive NTSC or PAL TV monitors. Another useful tool is fbview, with which image files can be displayed on the console. It now runs with almost all framebuffer drivers. If you don't like seeing the penguin when booting, take a look at fblogo. With this program, images in Tiff format can be converted into your own boot logo header files for the kernel. Tomas Berndtsson has begun a very interesting project with Zen. This is a web browser with various interfaces (plain text, oFBis, ncurses, GTK). The interface of interest here is the oFBis interface, which consists of a library of graphics routines for the framebuffer device. With the aid of this library, Zen is able to display images on the console.

Table 2: Framebuffer-based windowing systems and GUI toolkits
Microwindows   Windows CE-look-alike API                            www.microwindows.org
Nano-X         X Window-look-alike, network-transparent API;
               based on Microwindows                                www.microwindows.org
FLTK           Object-oriented GUI toolkit for Nano-X               www.microwindows.org
Tiny-X         Trimmed-down X server with limited functionality     www.xfree86.org
Qt Embedded    Framebuffer variant of Qt                            www.troll.no
This project is still alpha software, but will be worth watching out for. Microwindows is a portable and extremely efficient windowing system (see Figure 3) which runs in a whole range of hardware and software environments. It was developed for the handheld and pocket PC market (LinuxCE). On a 16-bit system (ELKS) it needs less than 64Kb of memory for the mouse, keyboard and screen drivers. But Microwindows also runs on modern PC systems under Linux, with a bit of help from the framebuffer devices or the SVGAlib library, or under X Windows in a separate window. The latter makes application development a great deal easier. With Nano-X, Microwindows has an API very similar to that of the X Window System. This means the resource-sparing yet powerful Fast Light Toolkit (FLTK, see Figure 4) can be used, and applications such as the web browser ViewML (see Figure 5) developed. The latter uses the highly refined HTML engine of KFM (the KDE file manager). Because of its extremely low resource requirements it is of particular interest for handhelds. Last, but not least, "Qt Embedded" deserves a mention: the object-oriented GUI toolkit, already well known as the basis of the KDE desktop, is also available, from version 2.2.0, in a variant which allows direct access to the graphics hardware. All class definitions are fully compatible with the X Window version. This is why Qt/KDE applications can be used, without any porting effort, even without an X server.
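To illustrate that last point, the fragment below is an ordinary Qt 2.x "hello world" of our own making; under our assumptions the same source could be built against Qt/X11 or against Qt Embedded, where QApplication talks to the framebuffer instead of an X server (depending on the setup, the first Qt Embedded application may additionally need to be started as the GUI server). It is a minimal sketch, not example code from either project.

#include <qapplication.h>
#include <qpushbutton.h>

// A minimal Qt program: the same source builds against Qt/X11 or
// Qt Embedded - only the library underneath changes.
int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QPushButton hello("Hello, framebuffer!", 0);
    hello.resize(200, 60);
    app.setMainWidget(&hello);
    hello.show();
    return app.exec();
}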
Fig. 5: The ViewML browser is happy with just 700Kb ROM and 2MB RAM
Framebuffer device programming with C and C++

The framebuffer device of the first graphics card in the computer can be addressed via the device file /dev/fb0. For compatibility reasons there are often links from /dev/fb0current or /dev/fb to this file. With a command such as

cat /dev/fb0 > /tmp/fbdump

the complete memory (yes, all of it!) of the graphics card can be read out. The visible area of the "graphical text console" always begins at file position 0. Modifying the screen contents using seek, read and write is tedious and relatively slow. Therefore, the framebuffer device can be mapped, with the aid of mmap, into the data segment of the application (see listing, line 25). Note, by the way, that further graphics cards can be controlled using the device files /dev/fb1 to /dev/fb7. The following sample program colours the console blue (at eight-bit colour depth the defaults for colours 0 to 15 correspond to the usual 4-bit VGA palette). The program can be compiled using g++ -o fbdemo fbdemo.cpp.
01 // Read in header files
02 #include <sys/types.h>
03 #include <sys/stat.h>
04 #include <fcntl.h>
05 #include <linux/fb.h>
06 #include <unistd.h>
07 #include <sys/mman.h>
08 #include <sys/ioctl.h>
09
10 int main() {
11
12   // open framebuffer device and read out info
13   int fd = open("/dev/fb0", O_RDWR);
14   struct fb_var_screeninfo screeninfo;
15   ioctl(fd, FBIOGET_VSCREENINFO, &screeninfo);
16
17   // continue only if 8 bit colour depth
18   if (screeninfo.bits_per_pixel == 8) {
19
20     // determine size
21     int width = screeninfo.xres;
22     int height = screeninfo.yres;
23
24     // embed framebuffer into memory
25     unsigned char *data = (unsigned char*)
26       mmap(0, width * height,
27            PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
28
29     // process screen content line by line
30     for(int row=0; row<height; row++) {
31       for(int column=0; column<width; column++) {
32         data[column + row * width] = 0x01;
33       }
34     }
35
36     // mask framebuffer out of memory
37     munmap(data, width * height);
38
39   }
40 }
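The listing assumes an 8-bit mode and that one screen line occupies exactly xres bytes. That is not always true: in truecolour modes each pixel is several bytes wide, and many cards pad their lines, so the fixed screen information (FBIOGET_FSCREENINFO, see Table 3 below) should be consulted for the real line length. The following variant is only a sketch under our own assumptions (32 bpp, no error handling); the file name, variable names and the way the blue value is built from screeninfo fields are ours, not part of the original listing.

// fbdemo32.cpp (sketch): fill a 32 bpp framebuffer with blue,
// respecting the driver's line padding.
#include <sys/types.h>
#include <fcntl.h>
#include <linux/fb.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <stdint.h>

int main() {
  int fd = open("/dev/fb0", O_RDWR);
  struct fb_var_screeninfo var;
  struct fb_fix_screeninfo fix;
  ioctl(fd, FBIOGET_VSCREENINFO, &var);
  ioctl(fd, FBIOGET_FSCREENINFO, &fix);       // fix.line_length = bytes per line

  if (var.bits_per_pixel == 32) {
    size_t size = (size_t)fix.line_length * var.yres;
    unsigned char *data = (unsigned char*)
      mmap(0, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

    for (unsigned row = 0; row < var.yres; row++) {
      uint32_t *line = (uint32_t*)(data + row * fix.line_length);
      for (unsigned col = 0; col < var.xres; col++)
        line[col] = 0xffu << var.blue.offset;  // a pure blue pixel
    }
    munmap(data, size);
  }
  close(fd);
}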
GGI: Alternative to the framebuffer?

At one time a majority of kernel developers were against the idea of implementing graphics drivers at kernel level. The driver software, including 3D functions, rapidly exceeded the complexity of the operating system core, and as it increased in size it jeopardised the stability of the system. The necessary text emulation for the Atari, Amiga and friends, however, forced at least a rudimentary mechanism to be included, which was then steadily built up. The framebuffer device has thus achieved integration in the standard kernel through the back door. The "Generic Graphics Interface" (GGI for short) was not so lucky with its "Kernel Graphics Interface" (KGI). The embedding of complex 2D and 3D interfaces, including multi-input and multi-head support, was going too far in the view of the hard-line core developers. What was once considered "Not a Bad Idea" is now eking out a shadowy existence, although there is a big "fan club" for it and therefore also a whole range of impressive demos and applications (GGI X server, games).
Table 3: ioctl functions of the framebuffer device
FBIOGET_VSCREENINFO   determine variable dimensions of the framebuffer
FBIOPUT_VSCREENINFO   define variable dimensions of the framebuffer
FBIOGET_FSCREENINFO   determine fixed dimensions of the framebuffer
FBIOGETCMAP           determine colour palette
FBIOPUTCMAP           define colour palette
FBIOPAN_DISPLAY       move physical display within the virtual one
FBIOGET_CON2FBMAP     salvage content of console
FBIOPUT_CON2FBMAP     restore content of console
FBIOBLANK             delete content of console
FBIOGET_VBLANK        determine current raster beam position
FBIO_ALLOC            allocate graphics memory for own purposes (e.g. dual buffer)
FBIO_FREE             free graphics memory
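As a small illustration of the first two entries in Table 3, the sketch below asks the driver for a different colour depth and then re-reads the values to see what was actually granted. This is only an illustrative fragment under our own assumptions (no error handling; the driver is free to round the request to something it supports).

// Sketch: request 16 bpp from the framebuffer driver and report the result.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>

int main() {
  int fd = open("/dev/fb0", O_RDWR);
  struct fb_var_screeninfo var;
  ioctl(fd, FBIOGET_VSCREENINFO, &var);    // current variable dimensions
  var.bits_per_pixel = 16;                 // the change we would like
  if (ioctl(fd, FBIOPUT_VSCREENINFO, &var) == 0) {
    ioctl(fd, FBIOGET_VSCREENINFO, &var);  // what the driver really set
    printf("now %dx%d at %d bpp\n", var.xres, var.yres, var.bits_per_pixel);
  }
  close(fd);
  return 0;
}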
As an especially neat and useful gimmick, Qt Embedded has a widget for inputting and recognising handwriting (see our test in the previous issue). For commercial use, however, one-off developer licence fees have to be paid for each workstation. These are roughly in the same price range as the Windows variants. There are also costs for run-time licences ($2 per device, assuming high numbers of items).

Info
Framebuffer home page: http://www.linux-fbdev.org
Framebuffer HOWTO: http://www.linuxdoc.org/HOWTO/framebuffer-HOWTO.html
Framebuffer mailing list: http://www.linux-fbdev.org/mlist.html
XFree86 home page: http://www.xfree86.org
Various framebuffer tools: home.tvd.be/cr26864/Linux/fbdev/
fbview home page: http://www.nocrew.org/software/fbview/
Zen web browser: http://www.nocrew.org/software/zen/
User-space graphics functions for the framebuffer device (oFBis): http://osis.nocrew.org/ofbis/
fblogo home page: http://home.sol.no/~dvedoy/
Microwindows / Nano-X home page: http://www.microwindows.org
ViewML website: www.viewml.com
Home page of the GGI project: http://www.ggi-project.org/
Framebuffer goes into action

In comparison with tried and true graphics output through a hardware-accelerated X server, applications based on framebuffer devices are noticeably slower at high desktop resolutions such as 1280x1024 or greater. The displays of pocket PCs and handhelds, however, have considerably fewer pixels (320x240 or less). Windows are smaller and are represented by less data, so the lower performance is scarcely noticed. Memory consumption is another matter: a few megabytes can be saved by doing without an X server. Thanks to the framebuffer device and its associated GUI toolkits it seems likely that more mobile Linux PDAs will be coming on to the market in the near future. ■
Qt Designer and KDevelop
DESIGNER SOFTWARE JONO BACON
Today, most computer users want graphical applications. But developing graphical applications can be hard work for the programmer. There are, however, tools that can make the job easier. If you want to develop applications using Qt, the library used by the KDE project, one of the best tools to use is Qt Designer. KDE developer Jono Bacon demonstrates how to use it.
Widget: An element of a graphical interface, such as a container window, button or field for entering text.

Layout management: This term describes the way in which widgets are arranged in a window. In its simplest form, an element may be placed at a specific position and given a specific height and width. But graphical environments under Linux allow for the system to manage the layout of widgets according to the size and characteristics of the window they are displayed in. ■
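As a hand-written illustration of what layout management means in Qt code, the short sketch below lets a QVBoxLayout, rather than fixed coordinates, decide where two widgets go when the dialog is resized. It is a minimal example of our own, assuming Qt 2.x, and is unrelated to the generated code discussed later in the article.

#include <qapplication.h>
#include <qdialog.h>
#include <qlayout.h>
#include <qlabel.h>
#include <qlineedit.h>

// The layout object repositions and resizes its children automatically
// whenever the dialog is resized; no pixel coordinates are given.
int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QDialog dlg;
    QVBoxLayout *layout = new QVBoxLayout(&dlg, 10);   // 10-pixel margin
    layout->addWidget(new QLabel("Name:", &dlg));
    layout->addWidget(new QLineEdit(&dlg));
    app.setMainWidget(&dlg);
    dlg.show();
    return app.exec();
}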
Software is a funny thing. It is developed by lots of different people with lots of different ideas and ways of working. Some like the simple vim+consoles approach in which they manually edit and compile source files. Some prefer a more integrated development system such as KDevelop. Personally I fall into the latter category, preferring to have the class view, automatic Makefile handling and other goodies that KDevelop offers. Although KDevelop supports the development of GNOME, Qt and console applications it really comes into its own with the development of KDE applications. One feature of KDevelop that has particular appeal is its graphical dialog box designer. With the built-in designer you can visually "draw" your dialog boxes and the code is then generated for you. Although the designer is good, it isn't great, and it has some flaws including:
• a limited selection of widgets - not all widgets and properties are selectable;
• limited layout management;
• limited support for dialog types (QDialog, QWizard, QWidget etc.)
Unfortunately these limitations have quite an impact on the development of KDE applications, as you often need widgets that are not supported, or you need layout management. As with any software, there are of course alternatives that provide better support for designing dialogs. But everything changed when Troll Tech, the developers of Qt, released its own dialog creator named Qt Designer. Qt Designer is not just a dialog box creator. It can also design widgets, wizards and more. Qt Designer has advanced layout management support and supports all Qt widgets. Qt Designer also lets you change and edit a great many properties for each widget you use in a dialog and is a very flexible tool in general. In this article we are going to show you how to harness the power of both KDevelop and Qt Designer to simplify and speed up the development of your applications. Please be aware that this is not a tutorial on KDE/Qt programming or on using KDevelop. You should already be familiar with both of these, although the article will still be useful if you are only learning KDE development. Also bear in mind that the example application we will build here
is not the most extensive application and is intended simply to illustrate the concepts being described.
Getting going

To get us started doing something useful with Qt Designer we are going to build a simple program that takes a name, email address and a witty comment and generates a signature for an email. To create the application we will need to follow the steps below:
• Design the application in Qt Designer;
• Create the slots and connections in Qt Designer;
• Generate the source code using Qt Designer;
• Embed the generated code in KDevelop and write the functional code.
Designing the program

To design the program we first need a visual idea of the design. To make this easier I have created the design for you: you can see a screenshot of it in Figure 1. As you can see, we have a window with a number of different items (or widgets) on it, designed so that the user puts the right information in the right boxes. To create this design we need to first fire up Qt Designer. You can do this either by clicking on Qt Designer from the Development option in the application starter or by loading the program from the command line from $QTDIR/bin.
($QTDIR is, of course, the home directory of your Qt files.) When Qt Designer has loaded, you will be presented with a screen similar to that in Figure 2. Qt Designer's interface is essentially split into three areas. At the top, under the menu bar, is the widget toolbar section. Here you can see a number of icons representing the different types of widgets that can be used in your program.
Fig. 1: The program we will create in the article
How to get Qt Designer

The Qt Designer package itself comes with version 2.2.1 of Qt. It is important that you get at least this version or Qt Designer will not be included. There are several ways to get hold of Qt. The most common method is explained here.

Downloading from Troll Tech
You can download the file from ftp://ftp.troll.no/qt/source. In that directory you will find a variety of versions of Qt available. Make sure you get the right version for your system. Unzip and extract the package when you have downloaded it. You can then compile it.

Compiling Qt
To compile Qt you must first set the QTDIR environment variable. This should point to the directory into which you installed Qt. For example, if you installed it to /build/qt you can set this variable in bash by typing:

export QTDIR=/build/qt

To make life easier you should edit .bash_profile so this is set automatically when you log in. To compile you should then issue these commands in the following order:

./configure -sm -gif -system-jpeg -no-opengl
make

Note: make install is not needed. Qt Designer will be located in the bin directory of your Qt installation directory. You will need to add this directory to your PATH so uic can be found when using Qt Designer.
To the left of the screen you can see the Property Editor. This is where your widgets can be fine tuned to behave how you want them to. To the right of this is the Object Hierarchy. This box shows a parent/child relationship view of the various widgets in your program. The space in the middle of the screen is where you will graphically design your program.
Creating the framework
Figure 2: Qt Designer
Let's start by developing the framework that our program will sit in. This can be done quickly and easily in KDevelop by using the Application Wizard to create a KDE2 Mini application. (There was an article about using KDevelop in Linux Magazine, November 2000.) Call the application SigCreate. Once the Application Wizard has created your application, compile it to ensure that everything is fine. We now have our framework and are ready to start developing our program. We can now start creating the program interface so that it looks similar to that of Figure 1. First, switch to Qt Designer and select File->New. A box will pop up allowing you to select a variety of program templates. The default template is a dialog, and that is fine for our program, so click on the OK button. A blank window with grid points in it will now pop up. This is your program's window, within which you will design your user interface. You will also see that the Property Editor has been filled with details about the box you have just created. At the top is the name of the box. This will form the class name of the dialog so you should name it something useful. Name it SigCreateDlg for now. To do this simply type "SigCreateDlg" into the text box to the right of the name property. This is how properties are changed: select the property, then change its setting on the right.
Adding widgets

To start we will insert the text at the top of the program window which can be seen in Figure 1. This text tells the user how to operate the program. This type of widget is called a Label and you can put one on your program like this:
• Select the 'A' icon or Tools->Display->TextLabel from the menu;
• The cursor will become a crosshair over your program. Draw a box for the label, just as you would in a paint program, and you will see that the label is created with some dummy text in it;
• To change this text, double click on the label in the box and type in the text;
• Finally, resize the widget using the handles so it is the correct size and at the top of the box.
Try to centre the label by moving it with the mouse. This is just a temporary measure. Later on we will look at a more elegant layout management technique. You follow pretty much the same procedure for embedding any type of widget that is supported by Qt Designer: select it, draw it and finally change its properties and size. An interesting concept in Qt Designer is that widgets can act as containers for other widgets. This will be demonstrated in our next task, which is to create the input fields inside the frame. You can see that in Figure 1 we have a bunch of labels and text boxes inside a frame. This frame is called a Group Box and acts as a container for the labels and text boxes inside it. Let's first create the frame by selecting the Group Box icon or Tools->Containers->GroupBox from the menu. You can drag the mouse to create the box again. In the Property Editor you can change the title property to alter the text in the frame. You may also notice a + symbol in this entry in the Property Editor. This indicates that the property has subproperties that can also be changed. Once you have created the frame, create three more labels as before, but when you draw them, draw them inside the Group Box frame. You can then see in the Object Hierarchy to the right that the labels have become children of the Group Box frame. Once you have done this you can then create the text boxes. The name of this type of widget is a Line Edit. (There is also a Multi Line Edit that we will use later.) To create a Line Edit select the Line Edit icon or use Tools->Input->LineEdit from the menu. Up to now we have not named any of the widgets that are being placed in our program. We have set the text of the labels and the frame, but we have not set the internal program names for them, which are set via the name property at the top. The reason for this is that although it is a good idea to give all widgets a name, it is only really important to set the name of widgets that you are actually going to deal with in your program. In our program we need to manipulate the data from the two Line
Edit boxes so we can generate our signature. As we need to read the text from these boxes we should give them a name using the name property. To do this set the name of the top box to ”nameBox” and the bottom box to ”mailBox”. You will see later how these internal names are used. We can now begin adding the other widgets in the same manner. Most of the widgets simply require you to draw them on the form. One widget that does need a little more explanation is the Combo Box widget that will hold the comment for the user to select. Start by creating it like any other widget, then when it is displayed double-click on it. You will then be presented with a box into which you can add the contents of the combo box. Click on the ‘New Item’ button. A text box will appear. Into it you can type a comment. When you have typed the first comment you can click ‘New Item’ again and start entering the second comment. Repeat this for all the other comments. When you are finished, click the OK button. After you have entered the comments you need to name this widget using the ‘name’ property in the Property Editor. This is because we need to access the comments of the box in our program. Call the Combo Box ”commBox”. You can now go on and add the other widgets. You will need to add a Label (the text above the large space), a Multi Line Edit (the white space) and two buttons (at the bottom). Name the Multi Line Edit as ”sigBox”. The other widgets do not need to be named, although you can name them if you want.
Figure 3: Using spacers in our design
Figure 4: The completed layout management

Getting spaced out

Now all your widgets are in place we can have a quick preview of the form by selecting Preview->Preview Form from the menu. Try to resize the window and you will notice that the widgets do not adjust appropriately. To make them do so we need to use a feature of Qt Designer called Spacers. Spacers can be thought of as springs which push the widgets on each side apart. We can use these virtual springs to design our dialogs so that they resize effectively when the user resizes the box.
The use of spacers and layout management is a skill that is developed through trial and error. The best idea is just to play with Qt Designer and look at other people's good and bad efforts. The key rule to remember when dealing with spacers is that you work horizontally first and then vertically. Now you have all your widgets set out, let's get the spacers created. The first thing we will do is to centre the text at the top of the box. To do this we will need a spacer at either side of the text. To create a spacer you can click on the spring icon or select Layout->Add Spacer from the menu. Either way you will be presented with a menu from which you can choose either a Horizontal or a Vertical spacer. Choose Horizontal from the menu as we need to centre the text horizontally. Next you must click on the form to set the location of the spacer. Click the space to the left of the text and the blue spacer will appear. Repeat this process for the spacer on the right. You can see what this should look like in Figure 3. Now we have put the spacers in we need to tell Qt Designer how to look after the layout management. To do this we can use either Vertical, Horizontal or Grid management. As we have three objects in a row (the two spacers and our label) we can use Horizontal management. To do this we need to select the left spacer with the mouse, hold down Shift and then select the label and the right spacer so all three are selected. We can then click on the Horizontal Layout icon, which is three blocks next to each other, or use Layout->Lay out Horizontally from the menu. You will then see a resizable red line around the three objects to indicate that their layout is being managed. We can now repeat this procedure for the three labels inside the group box (we don't need spacers between each label), this time using vertical layout management (not spacers). You can also use vertical management for the two text boxes and the combo box. The reason why we are using vertical management for the labels is because we want them to be aligned as they currently are, and spacers can distort objects that need to be aligned in a specific way. Vertical management is also used on the text boxes and combo box as they are equally sized.
Figure 5: Making the connection
Figure 6: Making a slot
The text above the multi line edit needs a spacer to the right and horizontal management, and the buttons at the bottom need a spacer to the left and horizontal management. After all this your form will contain a number of red boxes. To finish the layout we need to let the form look after the laid-out boxes. This is a simple matter of right clicking the form and selecting ”Lay out in a Grid” from the menu. The final design with layout lines should resemble something similar to Figure 4.
Slotting things into place Signals and slots: Signals and slots are the mechanism used by a program to communicate with the widgets that form its graphical interface. ■
Now the widgets are implemented and the layout is arranged the final thing we need to do in the design stage of the form is to create the signal/slot connections. To do this manually requires coding a connect() function, but Qt Designer provides a simple yet effective solution. Before we look at how to create the connections in Qt Designer, we must explain how Qt Designer handles slots in your programs.
92 LINUX MAGAZINE 3 · 2000
The whole idea of Qt Designer is that you can visually create a dialog box, Qt Designer will generate the code, and you will not need to modify that code at all. This is all fine and dandy until you want to create your own slot: to do this you would need to edit the code to create the functionality of your slot. This obviously contradicts the idea of not editing the generated code, so a solution is needed. Qt Designer solves this problem with a nifty bit of coding: it uses virtual methods. If you are unsure what virtual methods are, I would recommend picking up a good C++ book and reading up on virtuals. What you need to do, very basically, is to create a subclass of the dialog's class and in it create a slot of the same name. When you then create an object of this subclass the right slot will be used. In many ways this is a good thing to do, as it keeps the implementation separate from the interface, which is at the heart of good C++ coding. To create the signal/slot connections we need to use the connecting tool. To do this either select the icon (it looks like a red arrow going into a blue square) or select Tools->Connect Signals/Slots from the menu. To create the connection click on the widget that is going to be dealing with the slot, drag the line off the form and release the mouse button. Let's deal first with the Create! button. Click on the button with the crosshair and drag the line off the form completely. When you have released the button you will see the connections tool shown in Figure 5. What we want to do is to create a slot that will create our signature when the user clicks on the button. To do this we first need to create the slot, and then make the connection. To create the slot we need to click on the "Edit Slots" button. You will see the slot creation box shown in Figure 6 appear. Now click on the 'New Slot' button and a slot will appear in the box. You can now rename it and set its access specifier. For our project, set the name to slotCreateSig() and leave the access specifier as public. When you click on OK you will be returned to the connections box and you will see your new slot in the Slots section of the box. To make a connection you simply select the appropriate signal (which is clicked() in our case) and then select the slot (which is our new slot slotCreateSig()). When you have selected both signal and slot you will see the connection made at the bottom of the screen. After you are finished click OK. You can repeat this procedure for the Cancel button by using the clicked() signal and the reject() slot.
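For comparison, the connections that Qt Designer records here correspond to ordinary connect() calls in the code it will generate. The fragment below is only a sketch of what such calls look like in hand-written Qt; createButton and cancelButton are placeholder names of our own, not the names Qt Designer actually assigns.

// Hand-written equivalent of the two Designer connections (Qt 2.x syntax).
// When the Create! button emits clicked(), our slot builds the signature;
// the Cancel button is wired to the dialog's built-in reject() slot.
connect(createButton, SIGNAL(clicked()), this, SLOT(slotCreateSig()));
connect(cancelButton, SIGNAL(clicked()), this, SLOT(reject()));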
Generating the source

Now we have created the box, widgets, layout managers and connections, we can finally generate the source code for your dialog. To do this we will be using a special command line tool called uic that is included with Qt Designer. The function of uic is
to take the saved file that Qt Designer creates when you save your designed dialog (which is a file full of special XML code) and convert it into the C++ code that the compiler understands. The first thing to do is to copy the Qt Designer file to your project directory if you haven't done so already. The file needs to be in the same directory as the other project source code (in our example ~/sigcreate/sigcreate/). You can then use uic to generate the header. We will assume that your Qt Designer file is called sigcreatedlg.ui:

uic -o sigcreatedlg.h sigcreatedlg.ui

To create the file with the implementation code in it we use the following command:

uic -i sigcreatedlg.h -o sigcreatedlg.cpp sigcreatedlg.ui

Now that the code for the dialog has been generated we can add it into our KDevelop project. Fire up KDevelop if it is not already open and select Project->Add existing file(s) from the menu. You can then import the .h and .cpp files that you just generated into the project. When these have been imported, compile the project by hitting F9 to make sure everything worked OK. If no errors are reported, everything worked fine. The next step is to adjust the class that KDevelop generated for you so that it inherits the new dialog class. To do this add:

#include "sigcreatedlg.h"

at the top of the sigcreate.h file, and add ": public SigCreateDlg" to the end of "class SigCreate". Next we need to add some other include files to the various files so that the new dialog class is loaded correctly. You will need to add:

#include <qmultilinedit.h>
#include <qlineedit.h>
#include <qcombobox.h>

We can then create the slot in our subclass by right-clicking on SigCreate in the class view and selecting "Add member function". In the box create a slot called slotCreateSig() and make sure it is public. In the generated slot we can then actually write the code that generates the signature:

this->sigBox->insertLine("\n--", 1);
this->sigBox->insertLine(this->nameBox->text(), 2);
this->sigBox->insertLine(this->mailBox->text(), 3);
this->sigBox->insertLine(this->commBox->currentText(), 4);

Finally, ensure that all references to QWidget are removed from the SigCreate class declarations. This is because our dialog uses QDialog and the KDevelop-generated projects use QWidget. The first line of the SigCreate class should be:

class SigCreate : public SigCreateDlg
and the constructor should be SigCreate::SigCreate() in the sigcreate.cpp file. The class declaration of the constructor should be SigCreate();. Once all of these steps have been completed you can compile and run the program.

If you had any trouble understanding the changes that you made to the code, we would recommend reading up on KDE and C++ programming. Unfortunately there isn't the space to fully explain the changes just made, bearing in mind that the focus of the article has been on showing how Qt Designer can help your program development. This article has provided a simple walkthrough on getting started with Qt Designer. Although we have covered the main elements of Qt Designer, there are many more concepts and techniques that can be learned. It is well worth taking a look at some of the things listed in the Info box to see where you can get more information on getting the best out of Qt Designer and KDevelop.

With KDE becoming increasingly popular with home users, business users and enthusiasts, the scope for KDE development is getting more and more exciting. Take that together with the rapid development and maintenance of KDE itself and the increased productivity that can be achieved using development tools such as Qt Designer and KDevelop, and you have a lot of opportunities available. Good luck, and let me know how you get on! If you have an IRC client go to irc.openprojects.net and join #kde. My nickname is [vmlinuz]. Come and chat to me if you have any problems. If I am not there, just ask someone in the channel when I will be on and I will try to help as best I can. ■
Info
KDE home page: http://www.kde.org/
Troll Tech: http://www.troll.no/
KDE Developers' site: http://developer.kde.org/
KDevelop home page: http://www.kdevelop.org/
KDE mailing list info: http://www.kde.org/contact.html
KDE mailing list archives: http://lists.kde.org/ ■
Getting the example source code
If you would like to download the source code for the example project you can get it from my web site at http://www.jonobacon.co.uk/writing/qtdestut/index.html. The file is called sigcreate.tar.gz. Once you have downloaded the code, you can unzip it by typing:

gunzip sigcreate.tar.gz

You can then unpack the code with:

tar xvf sigcreate.tar

The code will then be extracted into a directory. In there will be the KDevelop file that you can load to play with the code.
PROGRAMMING
DEVELOPING GRAPHICAL APPLICATIONS
FLTK: a new C++ GUI toolkit
FAST AND LIGHT CHRISTOPH DALITZ
The ”Fast and Light ToolKit” (FLTK) is a new addition to the range of graphical user interface (GUI) programming libraries for X and Win32. Besides featuring a graphical dialog designer, FLTK stands out due to its high speed and small appetite for resources.
The standard library Xlib, which should be part of any X installation, provides only basic functions for drawing, querying mouse positions and so on. If you don't want to program a GUI from scratch (and who does?) you'll need a more powerful programming library. Five years ago the choice was usually Motif. Things have changed quite dramatically since then and developers are now spoilt for choice (see the overview to be found at http://sal.kachinatech.com). These toolkits are now joined by another, FLTK, the recently published library developed by Bill Spitzak at Digital Domain.

Fig. 1: Standard FLTK file dialog; functional but modest in its design
Fig. 2: The result produced by the example code
The advantages of this toolkit are:
• FLTK is covered by the LGPL, i.e. the source code is available and it may be used in commercial applications;
• FLTK is a C++ class library, which makes it considerably easier to work with than C libraries like Motif or GTK;
• FLTK has been ported to Win32 and so the same source code can be compiled under both Unix and Win32;
• FLTK is a small library. Therefore, even statically linked programs can be as little as 200Kb and take no time at all to launch.
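For example, building a program against the static library is simply a matter of naming libfltk.a on the link line. Something along these lines should work, although the install path and the exact list of X libraries vary from system to system:

g++ -o example example.cxx -I/usr/local/include /usr/local/lib/libfltk.a -lXext -lX11 -lm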
Installation and contents
The FLTK source code can be downloaded from the FLTK home page and installed without any problems by following the instructions in the Readme file using the usual commands ./configure, make and make install. There is no need to bother about producing the dynamic library as FLTK is so small that the extra work involved is hardly worthwhile. If you have no need for the OpenGL link you can switch off the relevant option in the file config.h before compiling FLTK. Once the installation process is complete, you will find the following components on your system:
• the static library libfltk.a;
• the header files FL/*.H, where various formats are supported by symbolic links (not necessarily a good idea…);
• the "FL User Interface Designer" fluid, with which dialogs can be designed graphically;
• very good HTML documentation in the documentation sub-directory of the source code. This not only describes all the classes but also contains tutorials on FLTK programming and on FLUID;
• demo programs for individual interface elements and concepts in the sub-directory test of the source code.
Further example applications and components can be found in the "Bazaar" on the FLTK home page. As FLTK is still quite young, the selection isn't extensive. However, there are one or two useful extensions such as a powerful editor widget and alternative file dialogs. Although the standard file selection dialog is functional and user-friendly (with tab completion and wildcard support) it is rather modest in its design (see Figure 1).
Fig. 3: The dialog designer FLUID in action

Widgets
Basically, FLTK provides two different types of interface elements or "widgets":
• "Composite widgets" incorporate other widgets
and are responsible for their layout, e.g. a window (Fl_Window) or movable areas within the window (Fl_Tile). Fl_Group is the base class.
• "Control widgets" are the actual elements which allow for user interaction, e.g. buttons (Fl_Button) or input fields (Fl_Input). Fl_Widget is the base class.
Widgets are allocated to a composite widget either explicitly, using the methods Fl_Group::add, Fl_Group::insert and Fl_Group::remove, or implicitly, by constructing new widgets between the methods Fl_Group::begin and Fl_Group::end. For example, the following code produces a window with an input field and a button (see Figure 2):

Fl_Window* window; // Window
Fl_Input* input;   // Input field
Fl_Button* button; // Button
window = new Fl_Window(200,200,"Example FLTK");
window->begin();
{
  input = new Fl_Input(80,50,100,20,"Input:");
  button = new Fl_Button(50,100,100,50,"Ok");
}
window->end();

The figures in the constructors give the positions and sizes. Although this method allows all the widgets to be placed directly on the main window, the layout isn't adjusted when the window is resized. A widget can be marked as resizable for a dynamic layout; it then always fills the remaining space in the composite widget. An alternative method is to incorporate other composite widgets (for example, areas separated by movable bars (Fl_Tile)) in the window and place the "control widgets" there. The composite widget Fl_Pack takes over the layout of its child widgets by itself, in just the same way as the Tcl/Tk pack geometry manager.

Widget properties are changed and queried using member functions. With the sole exception of menu items, FLTK does not use public properties. Access functions are overloaded on the property name instead, so that Fl_Input::value() returns the text of an input field and Fl_Input::value("bla fasel") sets it.
Events The function pointer of the callback function is assigned to the widget via the function Fl_Widget:©allback. This is done in order to link events such as the pressing of a mouse button with the program routines (”callback functions”) that respond to them. Users can determine when this callback is triggered via the method Fl_Widget::when. In most cases it is sensibly preassigned. The callback function must determine the nature of the event if you need to execute different actions depending on the event This procedure 96 LINUX MAGAZINE 3 · 2000
differs from Borland C++ Builder or Qt. Both of these can directly assign different callback functions to different events. In order to do this, these toolkits use their own C++ language extensions which are only understood by the proprietary compiler (Borland) or a special pre-processor (Qt). Buttons and menu items have the property shortcut, to which a key combination can be assigned that triggers the callback directly.
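As a rough sketch of how the pieces fit together (assuming FLTK 1.x and reusing the widgets from the example above; the callback body is only an illustration), a complete program might look like this:

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Input.H>
#include <FL/Fl_Button.H>
#include <stdio.h>

// Invoked by FLTK when the button is activated; the user_data pointer
// passed to callback() below arrives here as the second argument.
static void ok_cb(Fl_Widget*, void* data) {
    Fl_Input* input = (Fl_Input*)data;
    printf("Input was: %s\n", input->value());
}

int main(int argc, char** argv) {
    Fl_Window* window = new Fl_Window(200, 200, "Example FLTK");
    window->begin();
    Fl_Input*  input  = new Fl_Input(80, 50, 100, 20, "Input:");
    Fl_Button* button = new Fl_Button(50, 100, 100, 50, "Ok");
    window->end();
    button->callback(ok_cb, input);   // link the button event to ok_cb
    window->show(argc, argv);
    return Fl::run();                 // enter the event loop
}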
The dialog designer
Complex dialogs can be difficult to design when it comes to obtaining precisely the right sizes and positions for the individual widgets. Fortunately, FLTK provides the "FL User Interface Designer" FLUID, which makes it easy to put dialogs together quickly. It allows individual widgets to be set to the desired position and size using the mouse. Figure 3 shows work under way in FLUID. The edited dialog can be seen in the window in the centre. The class hierarchy is illustrated and edited in the left-hand window while the properties of the currently marked widget are displayed and edited on the right-hand side. Like all graphical dialog designers, FLUID doesn't work directly on C++ source code but instead has its own source format from which the C++ source and header files are then generated. This process can be automated using rules in the makefile. Unlike the dialog designers popular under Microsoft Windows (such as Borland C++ Builder), the FLUID source files are pure ASCII files. Consequently, they can also be edited using an editor. More importantly, they can be managed without any problems using version control systems (e.g. RCS, CVS). Although FLUID provides a way to enter C++ source code, the editor fields provided for this are not overly user-friendly. This is not a disadvantage but an incentive to manage the interface code with FLUID and to code the actual functionality in separate source files, which helps the clarity and maintainability of the source code.
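A sketch of what such a makefile rule might look like (the file names are only examples; fluid's -c option compiles the .fl file into the matching .cxx and .h without opening the GUI, and remember that make wants the command line indented with a tab):

# regenerate the C++ source and header whenever the FLUID file changes
dialog.cxx dialog.h: dialog.fl
	fluid -c dialog.fl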
Unix Integration
FLTK applications work together smoothly with the X clipboard. This means that text can be exchanged with other X applications by marking with the left mouse button and inserting with the centre mouse button. However, the shortcuts in the edit fields are Emacs-compatible by default, so the shortcut to insert is Ctrl-Y and so on. Unlike Motif, FLTK is not based on the Xt library, so X resources set in /usr/lib/X11/app-defaults or ~/.Xdefaults are not evaluated automatically. If you consider this important, you will have to load and evaluate the resources yourself using the appropriate Xlib functions (XrmGetFileDatabase etc.). This problem is not specific to FLTK; it applies to all newer toolkits (Qt, GTK…) generally.
FLTK supports the command line options -geometry, -display, -name, -title, -iconic, -fg, -bg and -bg2, where -bg refers to the background of menus and buttons and -bg2 refers to the background of text fields. There is no standard option -fg2, so -bg2 is of limited use. A keen "focus-follows-mouse" fan will appreciate one special feature of FLTK: a window can be displayed relative to the current mouse position (using the property Fl_Window::hotspot). Consequently, it is possible to display dialogs which are automatically in focus. The Gimp (written with GTK) doesn't do this, which can be rather annoying for the user. The option in FLTK to trigger callbacks when file descriptors (files, pipes, devices, sockets) change is very useful for inter-process communication. Sadly, this only works with sockets under Microsoft Windows due to the limitations of that platform. It is of incalculable use under Unix when programming GUIs for command line tools.
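The call in question is Fl::add_fd(). A minimal sketch (assuming FLTK 1.x; the descriptor and the printf are only placeholders for real widget updates) could watch the program's standard input while the GUI keeps running:

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <unistd.h>
#include <stdio.h>

// Called from FLTK's event loop whenever the registered descriptor is readable.
static void fd_ready(int fd, void*) {
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read from fd %d: %s", fd, buf);   // a real GUI would update a widget
    }
}

int main(int argc, char** argv) {
    Fl_Window win(160, 60, "fd demo");
    win.show(argc, argv);
    Fl::add_fd(0, fd_ready);   // 0 = stdin; a pipe or socket works the same way
    return Fl::run();
}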
Practical use
The printer interface ESP Print Pro proves that FLTK is suitable for real world applications despite its youth. In the case of this application, the small size of FLTK makes a particularly good impression – it isn't acceptable for a print dialog to wait several seconds for the Qt libraries to load. Another
advantage of the small size is the option to link statically, something which makes installing binary RPMs extraordinarily easy no matter what distribution is used: something that will be increasingly valuable as Linux becomes more widely used by the masses. However, FLTK's immaturity can be a problem when it comes to the development of sophisticated interfaces. FLTK doesn't have anything like the variety of third-party widgets available for toolkits like Motif and Tcl/Tk. Users will have to implement the unavailable widgets themselves. The KDE team had the same problem with Qt and solved it. However, not everyone has such patience. For a company or a professional developer it may be cheaper to buy a ready-to-use HTML widget than to try to write it yourself. FLTK also requires extensions in several places. The following points are on the to-do list for the next version: focus on buttons (deliberately omitted in the current version 1.0.7!), balloon help and a way to plug in users' own widgets in FLUID. Proposals for extensions are discussed vigorously on the FLTK mailing list. With around 30 mails a day, the list is quite active. The concepts of FLTK are convincing and easy to understand. As the documentation is good, FLTK programming can be learnt quite quickly. Anyone who is still in search of an easy-to-use GUI toolkit should certainly take time out to try FLTK. ■
Info
Overview of GUI toolkits for X: http://sal.kachinatech.com/F/5/index.shtml
FLTK homepage: http://www.fltk.org ■
CONTENTS
BEGINNERS
99 How to: Boot Linux from DOS
If you have both Linux and Windows on your hard disk you can boot Linux from MS-DOS and create your own boot-up menu. Julian Moss explains how to do it.
102 The Tutor: Posting the mail
A mail server is useful on any Linux system, but setting up the standard package sendmail is a job for time-served gurus. The Tutor shows you how to replace sendmail with a simpler alternative, Postfix, and how to use it to send Internet mail.
109 Command Line: convert
Here's a command line tool that even the keenest graphics fan will find a use for. We show how to use ImageMagick's convert utility to transform image files and convert them from one format to another.
110 How to: Create KDE desktop themes
KDE supports »desktop themes« that change the look of the desktop and all the applications running on it. Part two of our series shows you how to change the look of windows using a KDE theme.
SOFTWARE
114 Out of the box: ncp
The network copy program ncp lets you move files from one computer to another across a network or even the Internet. As Chris Perle explains, it's indispensable.
116 Nautilus
Nautilus is the new file manager for GNOME: the successor to GNOME Midnight Commander. We take an in-depth look at this powerful new program.
120 Desktops: Dockapps
Dockapps are small utilities like clocks and resource monitors that dock to WindowMaker and provide extra functionality. Jo Moskalewski shows off a selection of the most interesting and useful dockapps.
123 How To: Tackle installation problems
Installing Linux programs from source code archives isn't always easy. Hans-Georg Esser describes the problems you may encounter and what to do about them.
Welcome to the LinuxUser section where we focus on introductory topics and interesting software packages for Linux. This month we have articles about GNOME's new file manager Nautilus, easy image conversion, a look at Window Maker dock applications, some cures for installation problems and the sequel to last issue's KDE themes workshop. We hope you find this an interesting mix. For the more technically inclined we explain how to set up a decent mail server (using Postfix), introduce you to loadlin (an alternative to the standard Linux boot manager lilo) and discuss a neat network copying tool (ncp). We also have a special feature about Windows emulation which is in a more prominent place in the magazine – find out what you can do to make your favourite Windows programs run on Linux. Check the cover CD-ROM to see what files we put there. Many of the articles in this section have a CD-ROM icon which means that you can find program sources or installable binaries (RPMs) in the LinuxMagazine subdirectory. Enjoy the LinuxUser pages,
Hans-Georg Esser
hgesser@linux-user.de
BOOT-UP
BEGINNERS
How To: Create your own boot menu
ALTERNATIVE BOOT JULIAN MOSS
When you install Linux on a PC it is usually started using a boot manager called lilo. This usually works well; however, the lilo prompt isn’t the most user-friendly thing you could see when you turn on your computer. There are alternatives, for example grub which is now used by the Linux-Mandrake distribution. But if Linux co-exists with Windows on your hard disk and you still use DOS or Windows a lot of the time a good solution is to use loadlin, which lets you manage your boot-up options using DOS.
The name loadlin is derived from ”load Linux” and this describes exactly what the program does. It is an MS-DOS program that loads a Linux kernel into memory, thereby starting the boot process. It is convenient if you wish to run DOS and Windows as well as Linux. If your system is set up to boot into DOS, you can start Windows by typing ”win” or you can start Linux by running ”linux”. Even more conveniently, you can arrange to select one of these choices from an MS-DOS boot menu. Why would you want to do this, when lilo will do the same job and is a more commonly used solution? Well, for a start, lilo will only work if the Linux boot partition is contained within the first 1,024 cylinders of the hard disk. In the most common new-user scenario, in which Linux has been installed into the space made by shrinking an existing DOS partition
that occupied the whole disk, this is very often not the case, leading to the situation where the would-be newbie spends an hour installing Linux and then finds that they can't boot it. Other benefits for the newcomer are that loadlin is easy to set up and allows you to change your boot options and kernel parameters using familiar DOS tools. It avoids changing the contents of the boot sector and the risk of losing the ability to access DOS and Windows. It also avoids the risk of losing the ability to boot Linux if you reinstall Windows in the future: for some reason, Microsoft doesn't think that anyone would want to use a non-Microsoft operating system and so when you install Windows, lilo or any other boot manager you are using is overwritten without warning. Because loadlin is an ordinary program it won't be
overwritten (unless you format the Windows partition, of course.) Even some more experienced Linux users like to use loadlin. If you like experimenting with new Linux kernels you can easily create a system using batch files or a boot menu that lets you choose which of many kernels to use at boot-up. Or you can specify a different kernel on the command line. So all in all, if you use DOS at all, loadlin is a very flexible tool for loading Linux.
Installation
Where can you find loadlin? Well, if you have a set of CDs from a full distribution you'll probably find it on the first disk in a directory called something like /dosutils. If you have a cover CD version like Linux Mandrake 7.1 from our issue 1 cover disk, you may not. If you can't find it, don't worry. We've included a copy on this month's cover CD for you. Most distributions supply loadlin uncompressed in its own directory, ready to use, but it's possible to find it as a compressed archive called lodlin16.tgz. In that case you'll need to extract the files from the archive first using a command similar to:

tar xzf /mnt/cdrom/dosutils/lodlin16.tgz
Create a directory on your DOS disk called ”linux” or something equally appropriate. (Keep the name to eight characters or less because you will be running this under real-mode DOS where there is no long filename support.) Copy to this directory the files loadlin.exe, linux.bat, and test.par. You could also copy the doc directory, which contains the full documentation for the utility. (You can probably manage without the src directory, which contains the assembly language source code!)
Boot manager: The PC's BIOS is designed to boot just one operating system using program code stored in the first sector (or boot sector) of the first disk partition. A boot manager is a program that replaces the boot-up code in the boot sector and presents you with a choice of operating systems to boot from. Depending on your choice, it then loads the boot-up code from the appropriate partition.
BIOS: This stands for "Basic Input-Output System". It is program code stored in permanent read-only memory which is executed by the computer at start-up and enables it to access the main peripherals and load the operating system's boot-up code.
Partition: A hard disk is organised into one or more areas known as partitions. These partitions are then formatted for use by the operating system you want to use.
Cylinder: A hard disk is made up of one or more disks or platters coated with magnetic material, on to which data is recorded on concentric tracks. Although many modern hard disks have just a single platter, older ones were made up of several. A surface linking all the same-numbered tracks of all the platters would form a cylinder. From this the term is derived. ■
Next, place a copy of your current Linux kernel in the directory. If you are running Linux at the moment you can do this by mounting your DOS drive and copying the kernel directly from its location in /boot. If you are logged in as root while doing this (which you shouldn't be!) and are using a graphical file manager, be very careful that you copy the kernel instead of moving it. You'll also find a copy of the kernel on the boot floppy you made when installing Linux. The kernel usually has a name like vmlinuz-2.2.15-4mdk, which is one of the stock kernels for Linux-Mandrake. The exact name will depend on both the distribution and the version. When copying it to the DOS directory give it a name eight characters long or less, such as vmlinuz.
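For example, assuming the DOS partition is mounted under /mnt/dos and that you created the C:\linux directory in the previous step, the copy and the rename can be done in one go (the kernel file name here is the Mandrake example just mentioned):

cp /boot/vmlinuz-2.2.15-4mdk /mnt/dos/linux/vmlinuz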
Configuration
Now you must create a loadlin parameter file. It can be called anything you like, but for the sake of example we will call it linux.par. The file test.par that is usually included with loadlin is an example, so you could start by editing that. A simple parameter file will look like this:

C:\Linux\vmlinuz   # the first value must be the name of the Linux kernel
root=/dev/hda7     # the device which is mounted as root FS
ro                 # mount root read-only
mem=128M           # tell the kernel to use all the memory
The first three parameters are essential, and with the comments almost self-explanatory. The first value, which you may want to change, is the DOS path to the file containing the Linux kernel. To use a different kernel you could simply create a new parameter file containing a different filename. The second value which starts with ”root=” must specify the device which is mounted as root (”/”). If you can’t remember what this is from when you installed Linux you can find out (if you’re in Linux at the moment) by running the command mount. This will list all the mounted filesystems: the device you want is the one that is listed as ”on /”. The third value is standard, and ensures that root is initially mounted read-only. You may not need any parameters after that. However, a common one if your system has more than 64MB of RAM is the ”mem=” parameter which tells the kernel how much memory it should use. The stock kernels in most current distributions are compiled to use a maximum of 64MB by default, so without this parameter Linux may run a lot more slowly than it could. For a full description of the parameters recognised by loadlin or the kernel see the file params.doc in the doc subdirectory of the loadlin package.
Starting Linux
Having done all this, and making sure that you are at a real DOS prompt and not a virtual one under Windows (booting Linux in the middle of a Windows session is not recommended, at least for the health of Windows) you can start Linux by executing the command:

LOADLIN @linux.par

assuming that you are in the directory containing both loadlin.exe and the parameter file and that the latter is called linux.par. If not, modify the paths accordingly. In the loadlin directory you should see a batch file named linux.bat. You can edit this to contain the correct invocation of the above command, using full paths to both the program and its parameter file, and put it somewhere in the DOS search path. This will enable you to start Linux by just typing linux.
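A suitable linux.bat can be as short as this (the paths are the example paths used in this article; adjust them to match your system):

@echo off
rem Start Linux by handing the parameter file to loadlin
C:\Linux\loadlin.exe @C:\Linux\linux.par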
Boot menu
But there's an even better way. MS-DOS supports a facility that enables users to construct their own boot-up menus using commands in the file config.sys. You can use this facility to create a menu that lets you choose between Windows and Linux at boot-up. A simple example looks like this:

[MENU]
MENUITEM=LIN,Linux
MENUITEM=WIN,Windows
MENUDEFAULT=LIN,10

[WIN]
REM OPTIONS NEEDED BY WINDOWS (IF ANY) GO HERE

[LIN]
SHELL=C:\Linux\loadlin.exe @C:\Linux\linux.par

[COMMON]
This example creates a boot menu with just two options, Windows and Linux (see Figure 1) and makes Linux the default (of course.) If Linux is chosen, DOS executes the commands under the section headed [LIN] which was associated with this option. The section contains a SHELL command that runs loadlin with the appropriate parameter file. As with the batch file previously mentioned, you’ll need to change this line if the paths and parameter file name on your system are not the same.
An MS-DOS start-up menu offering a choice of Linux or Windows
Diehard Linux fans may object to the fact that the Microsoft Windows start-up logo briefly appears (which can be changed) or the fact that the menu is titled ”Microsoft Windows 98 Startup Menu.” But then diehard Linux fans wouldn’t be running Windows in the first place, would they?
Conclusion
loadlin has many more options. For example, if you want to experiment with different Linux kernels you can run it using different parameter files. Or you can override the kernel file name parameter using a command line argument, like this:

LOADLIN @linux.par image=path_to_kernel

You can even run loadlin without a parameter file at all, specifying all the options on the command line (though be warned that DOS command lines are limited to 128 characters in length.) For more information see the loadlin documentation. However, the basic method of operation described here will be sufficient for most people's needs. If you run Windows or DOS on your computer as well as Linux you will probably find loadlin to be a very useful utility. ■
BEGINNERS
TROUBLESHOOTING
The Tutor
POSTING THE MAIL PATRICIA JUNG
Computers can often give you a surprise, even under Linux. Often things don’t work as they are actually meant to. The Tutor shows you how to deal with these little problems.
It’s unusual in the Windows world for a home or workstation PC to be equipped with its own mail server, but in Linux installations the SMTP server (also known as MTA – ”Mail Transfer Agent”) is part of the basic system, for good reason. Without it, programs like Cron Daemon, which controls the automatic processing of specified tasks at predefined times, have a bit of a problem, as they like sending both failure and success messages by mail to the local user concerned. Even if, as a home user, you switch off as many Internet servers as possible on safety
grounds (services which are not made available by your own computer can't be abused or attacked by net baddies) or to conserve resources, why not use your local SMTP server when sending mail, and be independent of your access provider?

SMTP: "Simple Mail Transfer Protocol" is a convention for how email is transported through the Internet.
Daemon: A class of program (the name comes from "disk and execution monitor") that is started at boot-up and runs in the background performing some essential service for the system.
Cron: The time daemon (cron or crond, depending on the distribution) deals with tasks specified in Crontabs ("Cron tables") at the time specified therein. ■
It doesn't have to be Sendmail
Many distributions have the grandfather of MTAs, Sendmail, pre-installed. This program is very powerful but has one major drawback: its configuration file, /etc/sendmail.cf, is written in such a way that while it is easy for machines to understand it is more or less incomprehensible to humans. In the versions of sendmail now commonly used it is possible to create an easily understandable sendmail.mc file and transcribe this using the m4 pre-processor into a suitable sendmail.cf. But in all honesty, do you trust yourself to configure correctly and satisfactorily a program whose version number you can't find using sendmail -v or sendmail --version, but only by means of /usr/sbin/sendmail -d0 -bt < /dev/null ?
Many problems with Sendmail arise only because even fairly proficient Unix users don’t have the energy to plough through several hundred pages of documentation. The upshot of this has been that setting up your own mail server is generally considered to be a task achievable by only the most wizened of gurus.
Alternatives
It doesn't have to be like that, though. Such user-hostility has even bothered the gurus, providing them with just the excuse they needed to go off and write their own mail server. So now there are a wide variety of Sendmail alternatives, ranging from the "little" smail via Postfix, Exim and Zmailer (which was specially optimised for heavily-used mail servers) to Qmail, which really does take some getting used to. All are usable to some extent by applications designed to work with Sendmail. So for reasons of compatibility there is always a symbolic link named /usr/sbin/sendmail. For the home user, optimal behaviour of the MTA under a heavy load – a key criterion for internet service providers, universities or big companies – doesn't matter. The most important thing in this situation is good documentation and comparatively simple installation and configuration. All three criteria together are satisfied best, in my experience, by Postfix (http://www.informatik.uni-bonn.de/pub/software/postfix/start.html).
Symbolic link: A reference to another file created using the command ln -s. By this means a file can be addressed by different names.
Dependencies: Classic Unix programs require that they can dispose of the emails they create using a local mail server: they depend on it in such a way that their presence on a computer without an MTA would be pointless. Each rpm or deb package contains information about this dependency so that the package manager will warn you if you try to install a program that requires an MTA when one is not present, or if you try to uninstall an MTA when other programs are installed that depend on it. It will also prevent you from installing different MTAs side by side, because some of the settings needed will clash.
POP: The "Post Office Protocol" provides a way to receive mail even when not constantly online to your ISP's SMTP server. The POP server plays post-box and allows the user to pick up mails collected (by SMTP) at a later time using a suitable tool (for example, fetchmail) or a POP-compatible mail program. ■

Postfix has the additional advantage that it comes with many distributions as an rpm or deb archive, or is at least available as such. Red Hat users won't actually find the package in the core distribution, but in powertools (e.g. at ftp://ftp.redhat.com/pub/redhat/powertools/6.2/i386/i386/). Since a whole range of programs can only be installed using a package manager if the latter is aware of the existence of an MTA, it is essential in order to avoid later problems to install the SMTP server using the package manager, too. Otherwise rpm or dpkg are almost unbeatable for stubbornness. This is exactly the problem we have to combat when we want to use something instead of Sendmail. In the first instance, simple installation (-i) of the Postfix package as root doesn't work:

[root@pc software]# rpm -ivh postfix-19991231_pl02-4.i386.rpm
error: failed dependencies:
        sendmail conflicts with postfix-19991231_pl02-4

The option -h (as in "hash", meaning to use # symbols as a progress indicator) shows that nothing has happened because rpm flatly refuses to load a package whose dependencies are not fulfilled. Thanks to -v ("verbose") we learn that Sendmail and Postfix are mutually incompatible. All right, we'll solve the problem by uninstalling (-e as "erase") the sendmail package …

[root@pc software]# rpm -e sendmail
error: removing these packages would break dependencies:
        smtpdaemon is needed by fetchmail-5.3.1-1
        smtpdaemon is needed by mutt-1.0.1i-6
        smtpdaemon is needed by nmh-1.0.3-6x

… or maybe not: The installed mail programs mutt and nmh and the POP client fetchmail absolutely will not relinquish their SMTP server. Without this, their functionality would be affected, and as rpm doesn't know that we are immediately going to provide them with a new replacement mail server, it gets awkward. Who is the boss here? The rpm man page, despite all its complexity, is really helpful in this situation. It reveals that by using the option --nodeps ("no dependencies") the package manager can be made to understand that it should simply overlook dependencies. But it's better to be safe than sorry: After installing Postfix, is everything now in order for our sendmail-loving mail assistants? Let's just ask ("query": -q) the Postfix rpm package (-p as "package") to see what it actually makes available (--provides).

[trish@pc software]$ rpm -qp --provides postfix-19991231_pl02-4.i386.rpm
MTA
smtpd
smtpdaemon

The answer tells us we can take the momentous step of uninstalling Sendmail with an untroubled mind: the smtpdaemon that mutt, fetchmail and nmh all need will be restored by the Postfix installation.
Escaping from Sendmail
So we can now try again to remove sendmail:

[root@pc software]# rpm -e --nodeps sendmail
[root@pc software]# rpm -ivh postfix-19991231_pl02-4.i386.rpm
postfix  ##################################################
postfix-script: warning: creating missing Postfix pid directory
postfix-script: warning: creating missing Postfix incoming directory
[...]
postfix-script: warning: creating missing Postfix private directory

rpm is no longer carping that we are destroying package dependencies by deleting sendmail (-e), and with sendmail uninstalled we also see a bit of the progress indicator (-h). Because verbosity is switched on (-v) we learn from rpm that when loading the postfix package a couple of previously missing directories named pid, incoming and so on have now been set up.
So long as it stays at home...
Even if you are not quite confident enough to use this server to send messages into the big, wide world, at least local users can now send emails to each other. Is it really that simple? Using:

[trish@pc software]$ telnet localhost smtp
telnet: Unable to connect to remote host: Connection refused

we check whether there is actually a mail server lurking on the SMTP port of our local computer ("localhost"). Connection refused – bad luck! No answer from the mail server and thus no chance, either, of sending even the tiniest bit of mail. Before you start turning the air blue, though, you should bear in mind that the requirements for mail servers are so diverse that most administrators would certainly start cursing if suddenly, after installation, a mail server were immediately to run, and in the worst case do everything but what it was supposed to. Wrongly configured mail systems can cause an awful lot of trouble. However, if you are certain, on your single-user computer, that no-one but you will be sending mail, you can risk testing the default configuration at least just for local mail.
Postfix is a daemon, and like all such programs there is a start-stop file for it in the directory init.d ("Initialisation of Daemons"), which, depending on the distribution, can be found under /etc, /etc/rc.d, /sbin etc. (Exotic distributions like Slackware, which don't use this logical System-V Init concept, are something we will avoid discussing at this point.) If you change (using cd) into this directory, all you need is the command ./postfix start (note the dot slash, "./", in the path) before trying telnet localhost smtp once again:

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 computername ESMTP Postfix
If I were a mail program
Anyone who would now like to know how mail programs (alias "Mail User Agents" – MUA) and SMTP servers converse should enter the command help now. Postfix reacts with a curt:

502 Error: command not implemented

Shame, we ought to have done that first when we still had access to Sendmail. Then, it would have provided some fine assistance on how to converse with it:

help
214-This is Sendmail version 8.9.3
214-Topics:
214-    HELO    EHLO    MAIL    RCPT    DATA
214-    RSET    NOOP    QUIT    HELP    VRFY
214-    EXPN    VERB    ETRN    DSN
214-For more info use "HELP <topic>".
214-To report bugs in the implementation send email to
214-    sendmail-bugs@sendmail.org.
214-For local information send email to Postmaster at your site.
214 End of HELP info

Could the clarification of the individual SMTP commands be a bit more detailed? Certainly:

help MAIL
214-MAIL FROM: <sender> [ <parameters> ]
214-    Specifies the sender. Parameters are ESMTP extensions.
214-    See "HELP DSN" for details.
214 End of HELP info
./: Executable files that live in directories which are not in the search path must be selected using the full path. Unlike DOS, under Linux the current directory (which can be referenced as ".") is not made part of the search path. If the program you want to run is in the current directory, you must specify dot and forward slash to create a full path. The command echo $PATH prints out the search path.
CR/LF: The good old ASCII symbols carriage return ("Carriage Return", named after the key of the same name on typewriters) and line feed ("Line Feed") are, under DOS and derivatives, both generated at the same time by pressing the Enter key. In this context, however, the instruction simply means that you should press Return. Under Unix a hard line break is just LF (under MacOS just CR), which certain Windows editors (and some printers) take very badly. It is therefore important even for plain text files to use the correct coding of the line break and if necessary to use tools like dos2unix and unix2dos to perform conversion. ■
So MAIL is the command to specify the sender, and RCPT…

help RCPT
214-RCPT TO: <recipient> [ <parameters> ]
214-    Specifies the recipient. Can be used any number of times.
214-    Parameters are ESMTP extensions. See "HELP DSN" for details.
214 End of HELP info

… specifies the recipient. But instead of these friendly explanations provided by Sendmail, Postfix is silent at this point. Its author, Wietse Venema, made the quite reasonable assumption that anyone who was so inclined as to try to talk to a mail server using a terminal program would already know the language to use, while the MUAs that would be communicating with it most of the time would have no use for online help. We can try sending a message using the terminal program to test the system, which will also serve as an instructive demonstration of how SMTP works. The conversation goes:

MAIL From: yourusername@your.mailaddress
250 Ok

This specifies the sender of the mail (you should, of course, put your real name and address in here) and:

RCPT To: yourusername
250 Ok

specifies the recipient, the user you are logged in as. After:

DATA
354 End data with <CR><LF>.<CR><LF>

you type in the rest of the mail text…

testmail
.

…which you terminate using a single dot on a line all on its own – just as specified by the help prompt: 354 End the data with Enter-Dot-Enter.

250 Ok: queued as BD7F34EC9

This means the mail is ready to be delivered. We can now say goodbye using:

quit

Has the mail reached you (or rather yourusername)? To find out, it is important that your e-mail program uses the file /var/spool/mail/yourusername as the Mail folder. This is where all mails to yourusername received via SMTP are usually stored. With traditional Unix mail programs like mutt or pine you should have no problem with this. Netscape and other graphical MUAs on the other hand are not always so easy to persuade to look for mail in the spool directory and if necessary also to leave messages there.

Ruthless uninstallation with Debian's apt-get or dpkg
The Debian package manager also carps when you want to remove packages that it considers vitally necessary using -r or --remove:

lillegroenn:/home/trish/docs# dpkg -r sendmail
dpkg: dependency problems prevent removal of sendmail:
 mutt depends on mail-transport-agent; however:
  Package mail-transport-agent is not installed.
  Package sendmail which provides mail-transport-agent is to be removed.
[...]
 anacron depends on exim | mail-transport-agent; however:
  Package exim is not installed.
  Package mail-transport-agent is not installed.
  Package sendmail which provides mail-transport-agent is to be removed.
[...]
dpkg: error processing sendmail (--remove):
 dependency problems - not removing
Errors were encountered while processing:
 sendmail

But thanks to the --force option, this too can be overcome. To find out all the possible options try dpkg --force-help. For example:

depends [!]            Turn all dependency problems into warnings

Turn all dependency problems into simple warnings? In this case that appears to be right up our street. But anyone who is a bit anxious about the effect of dpkg -r --force-depends sendmail can first try a dry run (--no-act) to see what happens:

lillegroenn:/home/trish/docs# dpkg --no-act -r --force-depends sendmail

After the long list of warnings, dpkg then tells us with …

Would remove or purge zmailer ...

… that it is – no matter how reluctantly – clearing us a space for the Postfix installation with dpkg -i postfix_0.0.19991231pl05-2_i386.deb or apt-get install postfix. Obviously apt-get can also be used to remove sendmail:

lillegroenn:/home/trish# apt-get --no-act remove sendmail
Reading Package Lists... Done
Building Dependency Tree... Done
The following packages will be REMOVED:
  anacron mailx mutt sendmail
0 packages upgraded, 0 newly installed, 4 to remove and 16 not upgraded.
Remv anacron
Remv mailx
Remv mutt
Remv sendmail

As you can see, this resolves the dependency problem in its own way: it removes all the packages which would be in the way of a clean uninstallation in one go.

Household remedy
Before we go wandering off completely into the forest of configuration options required by most MUAs we still have the Unix classic mail. Anyone who has ever had anything to do with this program will probably agree that thanks to its somewhat cryptic user interface it is no use as an everyday mail program. But for our diagnostic purposes it will do splendidly.
DNS: The "Domain Name System" is, in effect, a database spread over a great many machines called name servers, that allow you to address computers using meaningful textual addresses ("www.linux-magazine.co.uk" for example) instead of user-unfriendly IP addresses such as 195.143.20.22. The names consist of the computer name ("www"), subdomain(s) ("linux-magazine") and top-level domains ("co.uk"). The DNS database also keeps records of which mail server accepts mail for a specific domain (e.g. for anyone at linux-magazine.co.uk).
FQDN: The "Fully Qualified Domain Name" of a computer consists of the computer name (e.g. www) and the complete domain (linux-magazine.co.uk).
Header: Electronic mail always consists of two parts: the housekeeping information like sender and recipient address, the subject (Subject), the type of document (MIME type) etc., and the actual message, the body. Many mail programs hide most of the header from users. Most Unix mail clients, though, store mails in their original plain ASCII text form (even images and other non-text file attachments are converted for the purposes of transport into pure ASCII symbols and have to be converted back by the mail program afterwards into their original format.) This means that using a simple less /var/spool/mail/username (or similar) you can look at the whole inbox and see the headers in all their glory.
Spam: The popular name for Unsolicited Bulk Email, which everyone who has used the Internet for any length of time will certainly have received, and wished they hadn't.
grep: A standard Unix command line tool for locating character strings. If you push the output of a command (in this case from rpm -ql postfix) through the "pipe" | into grep, it searches this output for the specified string (in this case main.cf). ■

[trish@pc software]$ mail
Mail version 8.1 6/6/93. Type ? for help.
"/var/spool/mail/trish": 1 message 1 new
>N  1 trish@regtest.enitel.net  Wed Aug 23 20:09  12/371
&

At the mail prompt, which in this instance looks like & just for a change, you can obtain help using a question mark. Or you can simply type in 1 to read mail number 1:

& 1
Message 1:
>From trish@regtest.enitel.net Wed Aug 23 20:09:09 2000
From: Patricia Jung <trish@regtest.enitel.net>
To: trish@regtest.enitel.net
Date: Wed, 23 Aug 2000 20:08:46 +0200

testmail

&

Recognise your test mail? The From: line exactly matches what you typed after the MAIL command, while the To: line on the other hand is what came after RCPT, supplemented if necessary by the computer name. Then there is the date stamp, which the MTA has generated. But wait: there is in fact a second From, this time
without a colon, which was produced by the mail server. All very exciting. Maybe all that's of interest now is how to delete this mail No. 1. The command ("delete") is:

& d1

… after which we can list the remaining mails …

& h
No applicable messages

Having read the only mail, we can use the command q ("quit") to end the program:

& q

Of course, mail can also be used for writing a test mail:

[trish@pc software]$ mail -v yourusername
Subject: test
testmail 2
.
Cc: [ Press Enter ]

… and thanks to the verbosity parameter -v ("verbose") mail now shows us how the mail is delivered to Postfix:

send-mail: open maildrop/56CAC13ABD
On the margin of legality?
It would be nice if that was all. But when it comes to answering our test mail, its sender address (in the example trish@regtest.enitel.net) is anything but usable. If it arrived like this at another computer it would even be illegal, because it doesn't exist. This is just the start of our configuration problems. Private individuals rarely have the details of their personal computer entered into the DNS. So there is no way at installation that Postfix can find out for itself the Fully Qualified Domain Name (FQDN) of the machine to which replies should be addressed. Each mail must be labelled in such a way that the address to which an answer can be sent is shown too. At this point things look bleak for computers which are not in the DNS. In good mail programs a Reply-To: header can be defined, which informs the recipient MUA of the address to which it should reply. Poor mail programs, though, often don't take this into account. Besides this, we still have no valid sender address and thus are violating Internet standards. Therefore most good mail programs allow the From: sender header to be set by the user to a valid mail address. That would be enough if we simply sent mail only to the smart host SMTP server of our Internet access provider. But the whole point is that we don't want to be dependent on them, and that's where we stumble over another problem. Although we may be able to specify what to put in the Reply-To: and the From: headers, the receiving mail server generates, as we have seen with our test mail, a so-called
Envelope address. The content of this From line (the one without a colon) is checked by some mail servers to see if it really exists, since spam usually appears to originate from an address which cannot be looked up in the DNS and this is one way of rejecting it. With an unresolvable address we have got ourselves a great deal of trouble. It isn’t just that we can’t deliver
mail to many mail servers. Some send the error message intended to inform us that the mail was refused to the address which doesn’t really exist, so that we never receive it. When addressees repeatedly, obstinately insist that they haven’t received mails, while the sender cannot remember any messages being returned as undeliverable, this is often what has happened.
Now just configure — the main parameters from main.cf
Fortunately almost all the parameters in Postfix's main configuration file are set to sensible default values. Also, they are mostly documented very thoroughly. Nevertheless you ought to run through the configuration to at least try to understand it, and modify it if necessary, before starting the server. As usual, in this file a # serves as a comment symbol, which prevents Postfix from treating the rest of the line as an instruction. A $ with a variable name after it means that Postfix replaces this combined symbol by the value stored in the variable.
• If you are using an rpm or deb package made for your distribution, the various directories (queue_directory for the mail queue, command_directory for the Postfix helper programs and daemon_directory for the necessary servers) are set to the right values. If something doesn't work, check that these directories are present and that the last two are filled with the appropriate programs. If, however, you notice settings which are clearly wrong at this point, and which are not the result of changes you have made, save yourself what may be a great deal of aggravation by trying a new installation using a new package.
• mail_owner, the owner of the mail queue and most Postfix processes, should never, ever be set to root. Instead, use a special user (such as the very boring postfix). Look in /etc/passwd to check that this user is also there. If not, you must create it and if necessary give it access rights to the mail spool and the various Postfix programs.
• For many processes Postfix needs no specific special rights (as possessed by mail_owner or even root), nor should it have them. Therefore default_privs should be set to a user with the lowest possible level of rights, such as the user nobody which is preconfigured on most systems.
• When myhostname contains the FQDN of the computer you no longer need to worry about mydomain (which by default contains only the domain component of myhostname) and myorigin (normally the same as myhostname). You can find out the standard value using the command hostname --fqdn. If this value doesn't match your default settings, change the variables. And when you do, don't forget to remove the comment symbol.
• You may have noticed that every mail includes a header named Message-ID:. This marking consists of a unique sequence of symbols generated by your mail server, an @ and the value of myorigin. This ID is supposed to be unique for that message. But since mail servers know nothing about any ID strings that may have been generated by others, myorigin should also be unique to your computer, so that no duplicates occur. For this reason, don't make myorigin equal to the domain name of your provider or to localhost.
• If you only use your mail server for sending mail out and for local mail, but not for receiving mail from the Net, you should restrict inet_interfaces to $myhostname, localhost.
• The last stop (mydestination) for incoming mail is your mail server for mails which may be addressed to localhost, $myhostname.
• In the file aliases you can define who receives the post which goes to specified local users. It's important that you direct mail addressed to root to a user who actually reads mail and is responsible for the system (e.g. yourself). A reasonable default value is:
alias_maps = hash:/etc/postfix/aliases

You should then modify /etc/postfix/aliases so that the line with root: on the left side has, on the right, next to it, either your local username or one of your valid mail addresses. Example (for modification, not to be mindlessly copied!):

root:    pjung@linux-user.de
If you have gone to a lot of trouble with your old /etc/aliases stemming from the days of sendmail you can of course reuse this in alias_maps. But don’t forget to convert the file using the postalias command into the required hash format. Traditionally /var/spool/mail has been the mail_spool_directory under Linux. With home_mailbox, though, you can also keep local mail in a file in each user’s home directory. This is sensible if they are using programs such as Netscape or Kmail. You can also accommodate people who, instead of a folder-file, like to store each mail as an individual file with home_mailbox. Do experiment a bit here by sending mails to a local test user (and please note that Postfix has to be restarted after each alteration to its configuration.)
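Pulling these settings together, the relevant part of a home user's main.cf might look roughly like this (the host name and the mailbox setting are only examples, not values to copy blindly):

# host name and sender domain (value is only an example)
myhostname = pc.home.example
myorigin = $myhostname
# listen only locally and on our own name
inet_interfaces = $myhostname, localhost
# final destination for mail addressed to ourselves
mydestination = localhost, $myhostname
# where mail for root and co. gets redirected
alias_maps = hash:/etc/postfix/aliases
# optional: deliver local mail to a file called Mailbox in each home directory
home_mailbox = Mailbox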
Address manipulation
So with a heavy heart, we pick up the Postfix documentation. The keystone of the configuration is the file main.cf, usually found in /etc/postfix. But the package manager also provides information, using rpm -ql postfix | grep main.cf or dpkg -S main.cf. (The rpm options can be queried with -q for "query" and -l for "list (all files contained in the package)"; the -S in the Debian package manager stands – even more simply – for "Search".) It's worth taking a closer look at the basic configuration described at http://totem.fix.no/postfix/basic.html (especially if the attempt at sending a test mail failed: see also the box "Now just configure – the main parameters from main.cf"). We are mainly interested in the item Address Manipulation (http://totem.fix.no/postfix/rewrite.html) in this documentation. The title suggests something mildly illegal, and yet this is the only way for those of us without a fixed IP address and DNS registration to operate our own mail server. The item ADDRESS REWRITING in main.cf sounds like exactly what we want: rewriting or masking sender addresses. A quick look into the sample configuration file provided, sample-canonical.cf (in the Postfix documentation directory /usr/doc/postfix-19991216 or similar), confirms that we have found the right tool:

# The parameter sender_canonical_maps specifies optional tables,
# which can be referred to in order to find out which Envelope- and
# Header-Address a specified sender should receive.
#
# You will need this e.g. if you want to convert the sender address
# user@ugly.domain into user@pretty.domain, where it should still
# be possible to send mail to the receiver address user@ugly.domain.

Our keyword here is Envelope- and Header-Address. The first means the topmost From line with no colon in the header of a mail, while "Header-Address" means the From: line. So we create, following the example in the documentation, a text file named /etc/postfix/sender_canonical. In this file, the mail address as it is automatically created when a user sends a mail goes on the left, and on the right is how the appropriate valid address should be worded. You can find the left hand side from your test mail. This consists of the user name on the system (i.e. the name with which the user concerned logs on), the @ symbol, and the value of the configuration variable myorigin from main.cf (see box). An example:

trish@regtest.enitel.net    pjung@linux-user.de
This means that mail from the local user trish now goes out under the valid sender address pjung@linux-user.de. Now all we have to do is set the variable sender_canonical_maps in main.cf to the name of our table file:

sender_canonical_maps = hash:/etc/postfix/sender_canonical
How do I tell my mail server?

Now we have everything we need, and thanks to the command /usr/sbin/postfix reload we don't even need to restart Postfix with ./postfix restart in the init.d directory. A new local test mail is written … but what arrives still has the wrong header (i.e. the "old" address from the left-hand side):

>From trish@regtest.enitel.net Wed Aug 23 21:19:06 2000
[...]
From: trish@regtest.enitel.net (Patricia Jung)

What did we do wrong? When the number of users of a mail server is large, it takes far too long to read in a very long text file, so Postfix can use several formats for conversion tables such as /etc/postfix/sender_canonical. Which map types are available can be found out, according to the FAQ (http://www.postfix.cs.uu.nl/faq.html#intranet), using the command:

[root@pc software]# /usr/sbin/postconf -m
nis
regexp
environ
btree
unix
hash

With the prefix hash: in sender_canonical_maps we specified that the hash format must be used. In other words, we wanted Postfix to use not a slow text file but a special binary format – which we haven't yet created. To create the "map" from the text file (the meaning of the -m in the postconf command) there is a special command called /usr/sbin/postmap, to which we give our sender_canonical text file as an argument:

[root@pc software]# /usr/sbin/postmap /etc/postfix/sender_canonical

An ls -al /etc/postfix now reveals that a new file named sender_canonical.db has been generated in this directory. After that – and we've worked hard for this – everything falls into place with the test mail:

>From pjung@linux-user.de Wed Aug 23 22:04:02 2000
[...]
From: pjung@linux-user.de (Patricia Jung)

And that's it!
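To recap, the whole address rewriting setup boils down to a handful of commands (using the example addresses and paths from above; adjust them to your installation):

# left: the local sender as it appears in the test mail, right: the valid address
echo "trish@regtest.enitel.net    pjung@linux-user.de" >> /etc/postfix/sender_canonical
# turn the text file into the binary hash map Postfix expects
/usr/sbin/postmap /etc/postfix/sender_canonical
# make sure main.cf contains:
#   sender_canonical_maps = hash:/etc/postfix/sender_canonical
# and tell Postfix to pick up the change
/usr/sbin/postfix reload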
■
Graphics made to fit – fast
IMAGEMAGICK'S CONVERT

Whether for use within web sites or for other purposes, graphics files frequently have to be converted from one format to another and at the same time altered in terms of size and quality. With convert from the ImageMagick package you can do this quickly on the command line.
HANS-GEORG EßER
The simplest function of convert is conversion between different image formats. To do this you simply specify the names of the original and target files – convert recognises the correct format from the filename extension. So, for example, to convert an image from JPEG into GIF format enter:

convert grouse.jpg grouse.gif

The list of formats known to convert is amazingly long. If the program doesn't recognise the extension of an image file it may be expecting a slightly different extension. If so, call up the man page for convert with

man convert
On the left is a grouse at its original size; on the right you can see it after reduction (for comparison, printed here at the same size)
and search for the desired file type there. Every file type has a unique extension such as "BMP" for Microsoft Windows bitmaps or "JPEG" for the JPEG format frequently used on the Internet. The plus sign ("+"), which follows some of these types, can be ignored. Image files don't have to have an extension indicating the image type, however. To convert into
Good for scripts

Play around with convert and you'll be amazed at the possibilities. The program is particularly suitable for use within shell scripts, which can be written to automate complex image processing operations. For example, to reduce all the GIF images in a directory to small 20x20 icons you could use the following shell script:

#!/bin/bash
for image in *.gif; do
  target=`echo "$image" | sed 's/\.gif$/_small.gif/'`
  convert "$image" -geometry 20x20 "$target"
done
an image in the JPEG format with the unusual name of test.image, use the invocation:

convert grouse.gif JPEG:test.image

By prefixing the filename with "JPEG:" it is made clear to convert what should be done, even though the target filename doesn't end in .jpg. An interesting command line argument is "X:", meaning the X Window system. This is only available if you are working under X and has the following meaning:
• when used as "input file" in the form of "convert X: output.jpg" the result is that the next window clicked on is captured and stored in output.jpg;
• when used as "output file" in the form "convert image.jpg X:" the graphic is simply displayed in an X window.
And much more

The ability to convert between diverse file formats would in itself be enough to make this an essential utility on any system, but convert can do much, much more. As part of the ImageMagick package it has access to many of the filters available in ImageMagick. For example, if you want to shrink a large image to a lower resolution the result often looks very pixelly: it's better if you first run it through the blur filter, which makes the image less sharp. If you are using the graphics program xv this is usually the most sensible way to proceed. convert is cleverer, though, and automatically produces a high-quality result when reducing an image in size. In order to make a large image small we use the "-geometry" option to change the size:

convert large.jpg -geometry 300x200 small.jpg
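The geometry doesn't have to be an absolute size, by the way: convert should also accept a percentage, so halving an image can be written as:

convert large.jpg -geometry 50% half.jpg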
In the somewhat complicated third line a target file name anything_small.gif is derived from each source file name anything.gif in the current directory. To do this the name is piped, via echo, into the command line tool sed. There, the ending ".gif" is replaced by "_small.gif". The result of this sed call is then assigned, via the backwards quotes "`", to the variable target.
The man page for convert provides, at the end (after a comprehensive listing of all the image formats supported), a list of all the options for changing your image. Of interest are "-crop" (to cut a rectangle out of the original image) and "-draw" (for drawing simple graphical elements – line, circle, Bezier curve, text and much more – over the image). ■
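To give a rough idea of the syntax (the details can differ between ImageMagick versions, so do check the man page), cutting a 200x150 pixel rectangle out of an image, starting 10 pixels in from its top left corner, or writing a short text onto it might look like this:

convert grouse.jpg -crop 200x150+10+10 cutout.jpg
convert grouse.jpg -draw "text 10,30 'grouse'" labelled.jpg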
How To: Create KDE Themes, Part 2
GIVING KDE A MAKEOVER

HAGEN HÖPFNER
KDE gives you the ability to customise its appearance very quickly using ”Themes.” In this series we show you how you can create your own themes.
Theme: A Theme is a collection of multimedia elements which have a common theme as regards content. For example, if you are a fan of a rock group, you could use a digitised photo of the band as a background image and extracts from their songs as system sounds, creating a Theme. ■
In the first part of this trilogy we tackled the start panel, the background images and the icons. Let's take a look at the things covered in this series of articles:
• Start panel
• Background image
• Icons
• Window buttons
• Window title bars
• Window frames
• System sounds
• Colour schemes
• KFM settings
This second part of the Workshop concerns the design of the window. To make it easier for you to get on board, we are not starting a new listing. At this point, we will simply expand Listing 1 (already discussed in Part 1). So don't be surprised when at the end of this article you discover an eclipse.themerc which covers the first two parts.
And so it goes ... … one more time. In the first part we took a look at the central KDE Design manager kthememgr and went into the meaning of the file eclipse.themerc. This knowledge is the basis of the entire series of
articles and so applies for the following steps. If you are a little hazy on these key points it may be a good idea to refresh your memory by referring to last month’s article before continuing.
New buttons for new clothes

… and for new windows too. In order to define new window buttons, we shall add two more sections to our eclipse.themerc. The first one defines which images are to be used and the second defines how they should be arranged. The corresponding parameters for defining the images can be found in Table 1 (lines 041 to 047 in Listing 1). This section also includes details concerning the appearance of the title bar. "Title bar", by the way, is taken to mean the part of a window which displays the name of the window. The possible parameters for this are also shown in Table 1. Since KDE supports colour shading by default, no title background images are specified in our eclipse theme. Impatient readers will have to wait until the third part of this series to find out how to do this.
Table 1: Images for window buttons
Section designator                                             [window titlebar]
image for button to close window                               CloseButton=filename
image for button to enlarge window to maximum                  MaximizeButton=filename
image for button to restore the original size of the window    MaximizeDownButton=filename
image for button to minimize the window                        MinimizeButton=filename
image for button to attach window                              StickyButton=filename
image for button to cancel the attachment of the window        StickyDownButton=filename
title-background image for the active window                   TitlePixmapActive=filename
title-background image for the non-active window               TitlePixmapInactive=filename
Should the title pixmap be used behind the title text?         PixmapUnderTitleText=yes / no
Should the title bar of the active window be shown shaded?     TitleFrameShaded=yes / no
Alignment of the text in the title bar                         TitleAlignment=left / middle / right

The arrangement of the window buttons is defined in the section [window button layout]. This is where you must assign specific functions to the six possible buttons. The following function designators are available for this purpose:
• Menu: A click on this type of button opens a menu containing various functions that can be applied to the window (e.g. moving the window to another virtual desktop).
• Sticky: A click on this type of button attaches the window, so that it is visible on all virtual desktops.
• Off: This switches off a button.
• Close: A click on this type of button closes the corresponding window.
• Maximize: A click on this type of button enlarges the window to its maximum extent.
• Iconify: A click on this type of button has the effect that the window is no longer displayed on the current desktop. It can be made visible again either by using the key combination Alt+Tab or by a click on its entry in the taskbar.
The corresponding section in our eclipse.themerc (lines 048 to 054 in Listing 1) has the structure shown in Table 2. In principle, it is possible to create the small images in the same way as the icons from Part 1, so we won't go into detail again at this point. Using the Firetext plug-in for The Gimp can save a lot of pixel work if you want to have matching fire images. For the images in the title bar of windows you can "burn" the appropriate symbols using this tool:
• The menu button is occupied in KDE applications by a mini-image of the large icon.
• CloseButton: A large "X" is especially suitable for this.
• MaximizeButton: Enlargement is as a rule illustrated by means of a pointed image facing upwards (e.g. an arrow), so for this we will simply use the exponent symbol "^".
• MaximizeDownButton: Since this button restores the original size of a window, an underscore "_" can be used for this. I found this looked too straight and so opted for the tilde "~".
• MinimizeButton: For this button you can rotate the "maximize" button by 180 degrees.
• StickyButton: A capital "O" can be used for attaching …
• StickyDownButton: … and a small one for releasing.
The window button images should not exceed 20x20 pixels in size. You can admire the effects of the changes made in this section in Figures 1 and 2.
New coat of paint As you may have guessed, there is also a section devoted to window frames (lines 055 to 063 in Listing 1) in the configuration file. You can see the corresponding inputs in Table 3. The Firetext plug-in of Gimp can also provide valuable assistance in creating individual frame fragments. You should be able to handle this tool blindfolded by now. So here are just a few more tips which are handy when creating window frames. A window frame consists (as can be seen from Table 3) of eight parts. Let’s start with the simpler ones and do the straight pieces. Always bear in mind that KDE lines these individual pieces up in a row until this row, together with the corners, covers the full width or height of the window. If the straight pieces are too long, this will make the frame bigger than the window. To prevent this, we will select a width of 1 pixel for the pieces that run longitudinally. For the right and left border the height will then be 1 pixel. We create a line which is ”on fire” by processing the underline ”_” character with the Firetext plug-in of Gimp, and after zooming, cut out a piece 1 pixel wide. To ensure that the border at the end only shows ”flames” on the outside, you should make sure that when cutting out, you take only the bottom half. Figure 3 shows how this border part could look.
Attaching a window: By using the sticky option for a window it is possible to have this window appearing on all virtual desktops. Virtual desktop: KDE offers the option of having several desktops at the same time (One, Two, Three, Four,…). On desktop One, for example, a text editor can be started, while a web-browser is running on desktop Three. By switching back and forth between the different virtual desktops you will have considerably more screen space available for your programs. Taskbar: The taskbar shows the application programs which are running. By clicking on the buttons on it you can switch between the various open programs. ■
[top] Fig. 1: Title bar in the Standard KDE Look [bottom] Fig. 2: Title bar with theme
Table 2: Function definition of window buttons
Section designator                   [window button layout]
first window button from left        ButtonA=function
second window button from left       ButtonB=function
third window button from left        ButtonC=function
first window button from right       ButtonD=function
second window button from right      ButtonE=function
third window button from right       ButtonF=function
[left] Fig. 3: Bottom border part enlarged 1300% [right] Fig. 4: Right bottom corner (Intermediate step 1) 800% enlarged
Infos
KDE Homepage: http://www.kde.org/
The sample theme "eclipse": http://kde.themes.org/themes.phtml?cattype=inc=trad=0=1=eclipse
KDE Themes Homepage: http://kde.themes.org
The Gimp Homepage: http://www.gimp.org
KDE Design manager: ftp://ftp.kde.org/pub/kde/unstable/apps/themes/kthememanager-1.0.0-src.tar.gz
■
The other straight border pieces are simply created by rotation. You can find this function in Gimp as follows:
• Right click in the image
• Image/Transforms/Rotate
We thus create, by means of successive rotation by 90 degrees, the remaining straight border pieces. Let's take a look at the corners. To create the right bottom corner, which is shown in Figure 7, we shall cut out a somewhat larger piece from our burning underline "_" (again, only half!). We copy this (Ctrl+C), create a new transparent file (Ctrl+N) and paste it to this (Ctrl+V). Now the copy is rotated in such a way that the "flames" are shown outside on the right. The result is two parts which simply have to be placed together.
Newly created images take as their standard size the values (width and height) of the image currently in the buffer. Again, we copy one of the two parts into the buffer (Ctrl+C) and create a new transparent file (Ctrl+N). When doing so, we set both the width and the height of the image to be created to the larger of the two values suggested. Now the copy is pasted into the new file and moved to the appropriate border. The same is done with the other part. The result of these efforts can be seen in Figure 4. The remaining corners can be created simply by cutting the bottom right corner piece out of the newly-created bottom right corner image. By means of rotation, you can turn it into the parts that are still needed. As an example of this, take a look at the bottom left border corner in Figure 8.
Fig. 5: Right bottom corner (Intermediate step 2) 1300% enlarged
Fig. 6: Right bottom corner (Intermediate step 3) 1300% enlarged
Fig. 7: Right bottom corner 1000% enlarged
Table 3: Window frames
Section designator                        [window border]
image for top window border               shapePixmapTop=filename
image for bottom window border            shapePixmapBottom=filename
image for left window border              shapePixmapLeft=filename
image for right window border             shapePixmapRight=filename
image for top left window corner          shapePixmapTopLeft=filename
image for top right window corner         shapePixmapTopRight=filename
image for bottom left window corner       shapePixmapBottomLeft=filename
image for bottom right window corner      shapePixmapBottomRight=filename
Fig. 8: Left bottom corner 1000% enlarged
Listing 1: eclipse.themerc
001 [General]
002 name=eclipse
003 author=Hagen Hoepfner
004 email=Hagen.Hoepfner@gmx.de
005 description=A dark sun for KDE (made with gimp and its Firetext-plugin)
006 version=0.3
007 [Display]
008 CommonDesktop=true
009 Wallpaper0=bg.jpg
010 WallpaperMode0=Scaled
011 [Panel]
012 background=panel.xpm
013 [Icons]
014 PanelGo=go.xpm:mini-go.xpm
015 PanelExit=exit.xpm
016 PanelKey=key.xpm
017 Home=kfm_home.xpm
018 Trash=kfm_trash.xpm
019 TrashFull=kfm_fulltrash.xpm
020 [Extra Icons]
021 Extra1=kfind.xpm
022 Extra2=image.xpm
023 Extra3=sound.xpm
024 Extra4=aktion.xpm
025 Extra5=kwrite.xpm
026 Extra6=folder.xpm
027 Extra7=kcontrol.xpm
028 Extra8=kdehelp.xpm
029 Extra9=kmail.xpm
030 Extra10=kfm_refresh.xpm
031 Extra11=folder_open.xpm
032 Extra12=3floppy_mount.xpm
033 Extra13=3floppy_unmount.xpm
034 Extra14=5floppy_mount.xpm
035 Extra15=5floppy_unmount.xpm
036 Extra16=core.xpm
037 Extra17=document.xpm
038 Extra18=input_devices_settings.xpm
039 Extra19=kab.xpm
040 Extra20=kvt.xpm
041 [window titlebar]
042 CloseButton=close.xpm
043 MaximizeButton=maximize.xpm
044 MaximizeDownButton=maximizedown.xpm
045 MinimizeButton=iconify.xpm
046 StickyButton=pinup.xpm
047 StickyDownButton=pindown.xpm
048 [window button layout]
049 ButtonA=Menu
050 ButtonB=Sticky
051 ButtonC=Off
052 ButtonD=Close
053 ButtonE=Maximize
054 ButtonF=Iconify
055 [window border]
056 shapePixmapTop=wm_top.xpm
057 shapePixmapBottom=wm_bottom.xpm
058 shapePixmapLeft=wm_left.xpm
059 shapePixmapRight=wm_right.xpm
060 shapePixmapTopLeft=wm_topleft.xpm
061 shapePixmapTopRight=wm_topright.xpm
062 shapePixmapBottomLeft=wm_bottomleft.xpm
063 shapePixmapBottomRight=wm_bottomright.xpm

Fig. 9: Window in the standard KDE look
Fig. 10: window with theme
End of window-cleaning

What have we achieved so far? We have given our windows new button images, tidied up the window frame and altered the title bar, even if the latter is not yet complete. You can clearly see the differences if you compare Figures 9 and 10. Well, all right, we did cheat a little: our images and our frame did not look too brilliant with the standard colours, so we also changed the colour scheme. We'll find out later in the series how to do that. Now all we have to do is pack the eclipse/ directory into a tar archive:

tar cvf eclipse.tar eclipse/

and then compress this using

gzip eclipse.tar

Then we can admire what has been achieved so far. To do this we start kthememgr, remove the old eclipse entry and add our new package. Now just click on OK and be amazed.
Removing themes

Unfortunately kthememgr is unable to delete themes cleanly. For this reason, the individual images must be deleted manually before changing the theme. This is done by running three commands:

rm -rf ~/.kde/share/icons/*
rm ~/.kde/share/apps/kwm/pics/*
rm -rf ~/.kde/share/apps/kpanel/pics/*

Don't worry, the Design manager retains a copy of the files, so that they are still available if you want to use a theme again. ■
Buffer: The buffer is used when copying within a program. If a text or an image is copied with Ctrl+C, the copy is first placed in the buffer and can then be pasted from there into another file using Ctrl+V.
tar archive: tar is a program which was originally designed to allow backups to be made to a tape drive. But it can also be used to combine several files into one. No data compression is used during this process, however. In order to save space, such archives are usually compressed in a following step using gzip or bzip2. ■
NCP
NETWORK COPY TOOL

CHRIS PERLE
There are thousands of tools and utilities for Linux. ”Out of the box” takes the pick of the bunch and each month presents a program which we consider to be indispensable or, perhaps, little known. This time we take a look at the Net copy tool ncp.
ftp: "file transfer protocol" is a widely used protocol for moving files across the Internet. However, as all data (including passwords) is transferred unencrypted and can therefore be intercepted, it is most often used to download files from the archives of public FTP servers or to upload files to web servers, not to transfer data between two users' systems.
scp: "secure copy" is part of the secure shell package ssh, with which files are transferred encrypted.
Server: A program offering certain services which client programs can use when they connect to it. Examples of the services offered include http (provided by web servers), ftp and finger.
bzip2: an alternative to gzip, the standard compression program on Linux systems. Often bzip2 compresses data better than gzip.
tar: "tape archiver" is the standard archiving program under Unix. It reduces a whole directory structure to a single file, which can then be written, optionally compressed, to magnetic tape (hence the name).
Compile: A program cannot be executed by the operating system while it is in the form of source code. Only after it has been compiled (converted) into executable code using a compiler is it in a form that can be run by the processor. The advantage of distributing programs in source code form is that they can be compiled to run on different platforms (Intel, Sparc, Alpha ...) as well as making it easy for you to make your own modifications. ■
To move files from one computer to another over the Net we usually use ftp or scp. But what do we do if we just want a quick file transfer between two computers, or want to send the output of a command line operation somewhere else? In such cases the utilities mentioned above, with the servers they require, are too cumbersome. Felix von Leitner, the developer of ncp, has evidently run into this problem too.
Unusual zipping

From the ncp homepage (http://www.fefe.de/ncp/) we obtain the source code of the program, which is available as a bzip2-compressed tar archive. As not all tar versions offer automatic use of bzip2, we unpack it as follows:

bunzip2 -c ncp-1.0.tar.bz2 | tar xf -

Next we must compile the program. To do this you only need to run make. Then we must install the executable program in the directory /usr/local/bin and set up two symbolic links with root rights, as only root has write permission in system directories such as this.
cd ncp
make
su
(enter root password)
cp ncp /usr/local/bin
cd /usr/local/bin
ln -s ncp npoll
ln -s ncp npush
exit
Different operating modes

ncp can be used in different modes. If it is called up using the name ncp, it sends or receives one or several files. As an example, we could send the whole /etc directory from computerA to computerB. To do this, ncp is started on computerB in server mode and then on computerA in client mode:

[ComputerB]$ ncp
ncp: server mode. waiting for connection.
[ComputerA]$ ncp ComputerA /etc
tar: Remove leading `/' from absolute file name in archive.
drwxr-xr-x root/root         0 2000-05-30 09:26 etc/
-rwxr--r-- root/root      2096 1999-03-11 18:03 etc/hosts
...

Once again the modular structure of Unix pays off. Instead of having to gather all the files in the /etc directory yourself, ncp leaves this task to tar. This tool is also called up at the receiver's end so that the files can be written there. For security reasons tar refuses to include the absolute path name, so that the /etc directory on computerB is not inadvertently overwritten.

If you call up ncp under the name npush or npoll (these are the symbolic links we set up during the installation procedure) it sends or receives the standard input. It is up to us what we feed to the npush standard input. As an example, we transfer the content of the text directory as a bzip2-compressed archive, which is only to be saved and not unpacked on the opposite side:

[ComputerA]$ tar cf - text | bzip2 | npush
npush: IPv4 multicast failed, trying IPv4 broadcast
[ComputerB]$ npoll ComputerA > text.tar.bz2
connecting to ::ffff:192.168.0.1

Let's break down the command string on computerA. First of all tar creates an archive using c ("create") which goes via "f -" to the standard output. With the help of the pipe sign | this is diverted to bzip2, which in turn forwards its output to npush. On the opposite side (computerB) npoll receives the data and writes it to the standard output, which is redirected to a file by the name of text.tar.bz2.

make: a program for controlling the sequence of events needed to create an executable program from source code. The configuration file for make (the Makefile) contains information on dependencies between individual program modules, for example.
Symbolic link: On Unix file systems users have the option to give a single file several different names. The file blah could also be reached under the name blubb after using the command ln -s blah blubb.
/etc: The configuration files of many different programs are kept in this directory. It is worth backing up /etc so that all the work you may have done modifying these files is not lost after a disk crash, a careless deletion while logged in as root, or in the case of a re-installation.
Absolute path name: A complete indication, starting from the root directory, of the location of a file. For example, the absolute path of a file logs/log.txt in my home directory is /home/chris/logs/log.txt.
standard input, standard output: Many command line programs offer the opportunity to omit the name of the input file. If this is done the program reads from the standard input, which is usually the keyboard. If the name of the output file is omitted, many programs write to the standard output, which is normally the terminal.
|: The pipe sign (representing a pipe) connects the standard output of one program to the standard input of another. In this way several programs can be linked together in a pipeline to form one processing step.
Null modem cable: A cable providing a direct connection between two computers via the serial interface. In contrast to normal serial cables, the send and receive lines are connected crosswise. ■

ncp on disk

There is another gem for users with some experience of the command line. ncp is very useful in that it can be used to transfer data between two computers with a minimum of requirements. For example, I modified the small Linux distribution hal91 so that two computers can be connected via a null modem cable. All this requires is for hal91 to be booted from the diskette on both computers and the connection to be established using the script ppp-nullmodem (which you may have to modify):

#!/bin/sh
# Compression for PPP (not absolutely necessary)
insmod /tmp/bsd_comp.o
# establish PPP connection
# Serial interface: /dev/ttyS0 or /dev/ttyS1
# Give as IP address pair:
# Here in the example on computerA: 192.168.0.1:192.168.0.2
# On computerB: 192.168.0.2:192.168.0.1
pppd /dev/ttyS1 115200 asyncmap 0 noauth persist local passive nodefaultroute 192.168.0.1:192.168.0.2

That way, data can be moved to a notebook which has only 8MB of main memory and no network card, even if Linux is not installed. ■
Nautilus, GNOME’s new file manager
MANAGER FOR ALL SEASONS?

MATTHIAS WARKUS
The Nautilus Desktop Shell is intended to supersede the GMC file manager (which was derived from the venerable Midnight Commander) in new versions of GNOME. What looks superficially like Yet Another File Manager appears at second glance to be a great deal more.
File managers are often a topic of lively debate among Unix operating system devotees. Traditionalists lavish care and attention on the content of their home directories using cp, ls, mv and rm. Nautilus could become one of the first graphical file managers which succeeds in satisfying both the command line aficionado and the Linux neophyte whose only experience of the free operating system has been gained using a graphical environment such as GNOME or KDE. The developers don't call their creation a mere file manager, but give it the far more important-sounding designation of "Desktop Shell". Once you have got the Nautilus preview up and running (not the easiest of tasks) you are confronted by a "Druid" (otherwise known as an "Assistant"). You must then decide on a level of use, that is, whether you are a beginner, an advanced user or an expert ("Hacker"). Nautilus also asks if you wish to register for Eazel services (more on this later), as well as whether you wish
immediately to upgrade to the latest version. Before you can pounce excitedly on the first Nautilus window, a dialog box appears to remind you of the alpha status and the consequent instability of the application. Whether Nautilus is running in beginner, advanced user or expert mode is shown by a symbol in the menu list. The very concept of user levels, especially in a file manager, is highly unusual. Yet the differences between the levels, which can be changed at any time, are rather subtle. The ”Home” button, for example, leads beginners into a special directory with various ready-made links. Experts are taken to their own home directory. Beginners also normally do not see invisible files. This concept is going to be extended somewhat by the developers. The first directory view (Fig. 1) isn’t particularly remarkable. A window with icons on the right, a sidebar with details of the selected file on the left, menus, icons and an address line with automatic address completion isn’t really anything new. Using
a button in the toolbar it's possible to switch to a list view (Fig. 5) which is a lot more functional and informative than that of the predecessor, GMC. The usual features of a file manager are all supported: context menus, drag and drop, selection with a frame (which in this case is a semi-transparent rectangle reminiscent of the Enlightenment file manager). As usual, links to applications can be defined for any file and any file type – in this case for as many actions as required – and a button in the sidebar corresponds to each of these. The first real "Aha!" experience comes when you take a look at the zoom controls (the magnifying-glass symbol).
Zoom and views Both the icon as well as the list view can be zoomed from 25% up to 400% of the normal viewing size in six stages. Not only do the icons change their size when this happens, but more information appears in the icon captions, for which this creates more room. This information can be configured. Normally in the icon view the stages below 100% show only the file name. At 100% the file size is also shown (for directories, the number of entries); from 150% up the modification date is added; and at 400% the MIME types of the files become visible. In order to reduce the ”pixelling” of the icons Nautilus provides many of them in various sizes and automatically selects the most suitable version. Also, all views are shown by default with full antialiasing. This makes the graphics really slow, however, and at high zoom levels they can occupy astronomical amounts of memory. On less powerful computers it is better to switch off this option. The zoom option is more than just a gimmick, however, mainly due to the facility for looking into files (Fig. 2). Files that according to their MIME type are image files actually show themselves as icons, which turns Nautilus into a really handy graphics viewer. Files with a text MIME type display, in the empty space of their icon, a cut-out of the top left corner of their content. (Although Nautilus’ ”default” theme is just about the only one with which this does not work! More on themes later.) Naturally, when zooming in, the index images and views into text files shown also become larger. If that isn’t enough, you can stretch an icon to any size you like in order to see more of a file’s content. Even peeks into audio files are possible: For example if you move the mouse cursor over an MP3 file, a note symbol appears above your icon and the file is played. If the cursor leaves the icon, the music stops immediately.
[below] Fig. 1: Nautilus displays a directory
[bottom] Fig. 2: A view at maximum enlargement; an MP3 is played straight away

Under the bonnet

Nautilus obviously makes extensive use of MIME types, as one would expect of a new-age file manager. Not only do they determine how files are displayed, they are also used to link files with applications – unlike GMC, which used the file suffix. Technically, this is accomplished using a new library which future versions of GNOME will use for all file handling operations. It is called gnome-vfs, where "VFS" stands for "Virtual File System". A program like Nautilus, even for simple views, is already shovelling enormous amounts of data out of the file system. It reads file names, file contents and file properties. To determine the MIME type the VFS often does more than look at the file name: it also examines the file content itself via a so-called "snoop buffer". So that Nautilus doesn't hang around, unusable, during the reading and writing of information, as was the case with GMC, gnome-vfs makes its accesses asynchronously in the background, either in a single process or in a thread. When a view is opened the icons, data and file contents gradually appear while Nautilus reads in the information. The program remains fully
usable during this time and, in the same manner as with a web browser, it is possible to interrupt any activity using the Stop icon. This concept is employed throughout the program, in the tree view, in the properties dialog windows and so on: it reads in all information in succession while you carry on working. To appreciate this, you really have to experience it for yourself. Such asynchronicity creates the conditions for network transparency. In fact gnome-vfs can perform all file accesses over a network, too.

Nautilus on the net

Fig. 4: Nautilus' sidebar, here with the tree view

For Nautilus users there is no such thing as directories, web sites, FTP servers or archives. All these things are "locations", each having a unique address (URL) and dealt with by the program in the same way, regardless of how they look in the view. (The GNOME desktop, too, which in future will be managed by Nautilus, is only one directory which will be displayed in a special way: as the screen background.) This means that typical browser functions such as bookmarks, history, back and forward buttons etc. will work at any time. With files accessed via a protocol like FTP or HTTP, all operations that Nautilus allows are possible in principle, as long as the protocol supports them. In theory this architecture only needs an HTML renderer in order to work as a web browser. Some time in the late 1990s the idea took hold that a file manager should be able to do precisely that. You can argue over the sense or nonsense of this until the cows come home, but obviously the market is expecting it. So Nautilus, too, as soon as you home in on an HTML page, offers in its list of possible views the option "Display as web page". This action is carried out either with the lightweight GtkHTML components or (preferably) with an embedded Mozilla engine.

The components concept

Fig. 3: The music view; as much as possible is folded away

This leads us neatly on to an excursion into the technology. Everything in Nautilus is based on Bonobo components. Views such as the icon, list and web page views are such components, all of which have a defined CORBA interface. The unremarkable little index tabs in the left sidebar are special components. A nice side-effect of this is that components are only loaded when needed and a crashing component doesn't take the entire application down with it (which is a particular blessing with software at this alpha stage). Because of the open APIs, anyone can write additional views for Nautilus. Already the Nautilus user can view text or graphics files. But considerably more complex views are also provided. A mere click on "Display as music" and a directory full of MP3 files is shown as a tracklist with titles, artist and playing time (Fig. 3) – at which point, unfortunately, Nautilus always tries to sort the tracks as if they all came from the same album. Beneath the list, play controls appear and Nautilus turns into an MP3 player. File searching also benefits from the view concept. Any number of complex search criteria can be specified in the toolbar and Nautilus shows the search results with any view you like – whether as icons, a list or perhaps even as a music view. The panels in the sidebar have already been briefly mentioned. Nautilus provides more than half a dozen of these little helpers, of which the sidebar shows as many as you like under index tabs (Fig. 4). These too are components. "Notes" stores text notes on the object currently being shown in the view window; "Web Search" is a front end to a huge variety of search engines; "Tree" shows the directory tree; "History" is a chronicle similar to the ones found in web browsers; and "Help Index" and "Help Search" make it appear likely that Nautilus will supersede GNOME's help browser. Anyone who feels bothered by the panels can switch them off either partly or completely. The sidebar also folds away, while all the other control elements apart from the menu bar can be deactivated. But these are not the only options for customising Nautilus to your own taste – not by a long chalk.
Better living with Nautilus In the ”Edit” menu there’s a veritable land of milk and honey hidden away for anyone who loves themes and graphical frippery of all kinds. First off, there are themes that can change the whole appearance of the application, from the standard backgrounds and the look of the user elements up
to the icon set in use. The default theme is the spitting image of GMC, while the dull Eazel theme in fact represents the ”corporate identity” of Eazel, Inc. (more on this later). Of interest is the chic theme ”Arlo” (Fig. 5). As an experiment a ”Vector” theme is also provided that instead of using images in pixel format uses SVG vector graphics as icons which can in principle be zoomed without limit and in infinitely variable steps without ”pixelling”. In a selection window (Fig. 6) you can see backgrounds and colours which are assigned to parts of the application (such as sidebars or a folder background) by dragging them. One category here could be an aid to productivity. Lots of little ”emblems” – symbols which stand for things like ”important”, ”encrypted”, ”OK” or ”urgent” – can be dragged onto file icons and remain attached to them. There isn’t a lot of functionality coupled to these emblems yet, apart from the special ones that symbolise whether a file is read or writeprotected, but their marking function alone makes these emblems a useful aid to keeping files organised.
Metadata Anyone who has given their J. S. Bach MP3s an Escher background image but used a stylish rough fibre wallpaper for their folders of StarOffice files will notice that Nautilus stores the background image for each folder separately. But that’s not all. For every location and every object, whether local or on the network, the program keeps all attributes, converted into a special XML format, in so-called ”Metadata.” This includes backgrounds, colours, icon position, icon size, the view most recently used for inspection, emblems and so on. Nautilus can, as an alternative, store all metadata centrally in its own directory (~/.nautilus) but if possible it saves it to each directory as .nautilus-metafile.xml. In our earlier excursion into the Nautilus viewing concept we mentioned that in theory just about anything can be implemented as a Nautilus view. Applications using Bonobo could come with a wrapper which encapsulates itself as a Nautilus view: a spreadsheet or word processing function would make it possible for files to be viewed and edited directly in Nautilus.
All-round support thanks to Eazel?

Eazel, Inc., the company chiefly responsible for developing Nautilus (as the program comes under the GNU GPL there are also of course volunteers taking part), would like to make money from the possibility of using Nautilus as an application platform. "Eazel services", provided for a charge over the Net, should be available through Nautilus as special views. The services could include system
updates, remote servicing, secure storage of files on Eazel's servers (protected by hard cryptography) and similar nice touches. Their target group is mainly desktop users who are not well-versed in technical matters. You might well be sceptical about this. Whether Eazel will really manage to live profitably on the income from these services is a matter for debate: how much value will these services have, and for whom? It's too early to say. But there's no doubt that the company has developed one of the most impressive programs in the Linux software world: those involved include the designer of the graphical user interface of the original Macintosh and many key workers from the GNOME project. With Nautilus, GNOME's file management is making a quantum leap from the simple technology of the GNOME Midnight Commander (which in respect of performance had nothing to be ashamed of) to a new concept which, though not revolutionary, is jam-packed with well-thought-out innovations and inventions. ■
[top] Fig. 5: The List view in the ”Arlo” look [above] Fig. 6: The attribute viewer with available emblems
Info
GNOME: http://www.gnome.org
Enlightenment File Manager: http://www.enlightenment.org/efm.html
Mozilla: http://www.mozilla.org
Bonobo: http://developer.gnome.org/arch/component/bonobo.html
Eazel, Inc.: http://www.eazel.com
■
Jo‘s Alternative Desktop
DOCKAPPS

JO MOSKALEWSKI
You alone determine the look of your Linux desktop. In this series we look at some alternative window managers and desktop environments. This month, we take a look at the special dockapps which make working with the window manager Window Maker so much easier.
wmmail 0.64, mountapp 2.7, wmcp 1.2.8, wmmon 1.0b2, wmppp 1.3.0, wmsysmon 0.7.6, wmtime 1.0b2, yawmppp 1.2.1
?????/desktopia/
Since Windows 95 came out we all expect at least one clock on the desktop. Environments such as GNOME or KDE meet these expectations perfectly. But these environments have also committed themselves to doing a great deal more. There are tools for connecting and disconnecting to an ISP or a pager to switch between different desktops. All this – and a lot more besides – is there for the taking in Window Maker too. Dockapps are small programs which can be attached like an icon onto the dock (or clip) of Window Maker. You do this simply by dragging an icon to the desired point on the dock (or clip).
So where do they run, then ...
Fig. 1: WMTime in both analogue and digital form
Fig. 2: WMMon in all views
Unfortunately at present one of the most important web pages for Window Maker dockapps is being restructured and thus at the moment has been wiped clean (http://windowmaker.mezaway.org/). All I can do now is, as with so many tools, to pack my own sources onto the CD without reference to any newer versions on the Web. But they all work wonderfully, and anyone who riffles through the CDs of his distribution will certainly find one dockapp or another ready for use. The following web sites are also good places to find dockapps:
• http://www.BenSinclair.com/dockapp/
• http://freshmeat.net/appindex/x11/window%20maker%20applets.html
• http://nis-www.lanl.gov/~mgh/WindowMaker/DockApps.shtml
• http://www.cs.mun.ca/~gstarkes/wmaker/dockapps/
WMTime ... … is definitely the clock for Window Maker. Anyone who calls it up with the parameter -digital will see it in digital form – otherwise it appears as analogue. Some nice person has tinkered about a bit with the somewhat dowdy digital mode and equipped it with Internet time – not that anyone needs it, but a person can never have enough … So we are indulging in this luxury and have packed the sources of precisely that version onto this issue’s CD. Anyone familiar with WMTime and who is now looking at the illustrations of this article in amazement due to the wrong colour reproduction may like to know that it is possible to alter the colours of most dockapps by modifying the graphics in the source package. Of course, that is often really tedious, but is still feasible. Also, in many programs it is possible to select the text colours via command line options – the option -help will usually come up with a lot more about the dockapp in question.
Tank watches Many people find a system monitor rather obtrusive, but anyone like me who is keen on tinkering with the system will certainly want to know about it when a program runs amok. And as Linux actually runs several applications at the same time, it may be that one only notices the lack of memory caused by some program malfunction when other programs are already tumbling out of memory. Luckily, there are dockapps which leave us in no doubt about the system status:
WMMon This dockapp has three views. By default it monitors the CPU. If you like you can just click on the mouse and then instead of the CPU, hard disk activity will be monitored. After another mouse click we find out all sorts of information about the memory, the swap (use of swap memory), and the uptime (time elapsed since last boot procedure). If you’d rather have everything in view at the same time simply start this dockapp three times: the second time with the option -i, and the third time with the appendage -s.
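If you want all three views on screen permanently, the three invocations can simply be put into Window Maker's autostart script (usually ~/GNUstep/Library/WindowMaker/autostart, though the path and the binary name – here assumed to be wmmon – may differ on your system):

wmmon &      # default view: CPU load
wmmon -i &   # second view: hard disk activity
wmmon -s &   # third view: memory, swap and uptime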
WMSysMon

But surely, rather than starting WMMon three times, it's nicer to replace two WMMons with one WMSysMon. Only two, because WMSysMon doesn't display the CPU load, though it shows everything else (and more) in a convenient form. Anyone looking for the associated home page can find it at http://www.gnugeneration.com/software/wmsysmon.html.
Fig. 3: WMSysMon

mount.app

One handy tool for mounting storage media is mount.app. It allows you to mount and unmount the drives listed in the file /etc/fstab. As a little bonus, the configuration of mount.app – as shown in Figure 5 – is done with a separate tool called mount.conf, which is called up by the dockapp with a double click. The future development of mount.app can be followed at http://mountapp.sourceforge.net/.

Fig. 4: mount.app
Fig. 5: Configuration of mount.app

WMPPP

WMPPP is a marvellous tool which makes or breaks a configured Internet connection at the click of a mouse, displays the status of the connection, graphically displays data transfer and shows it in bytes, displays online time and also – provided the modem configuration allows – reveals the connect rate. But it is not a front end for pppd, so the
Fig. 8: Alternative display by YAWMPPP
Fig. 6: Connection to an ISP with WMPPP
connection to the ISP has to be set up some other way. WMPPP can only execute a command line to make a connection. A remedy for this is promised by a slightly modified version of this tool named YAWMPPP, found at http://yawmppp.seul.org/. With this, you should have the same functionality that you get with kppp , or so I believe, as I personally have been unable to make a connection using either of them. I feel it is better to dial into the Internet without a graphical front end, and so my preference is for the simpler WMPPP.
Fig. 7: KPPP replacement YAWMPPP

WMMail.app
At http://www.eecg.utoronto.ca/cgi-bin/cgiwrap/chanb/www/wmmail you can find a dockapp which can check mailboxes of all conceivable kinds in all possible ways for their content. It has already become so powerful that even changing over from version 0.59 to 0.64 requires lengthy perusal of the documentation. It may not sound fun, but it's worth spending the time, as the different views in Figure 9 prove. Anyone installing the current package may find themselves looking in despair for WMMail.app – it can be found in the directory /usr/local/GNUstep/Apps/WMMail.app/, where it answers to the name of WMMail. But that's not all: to be able to use WMMail.app each user needs their own configuration file, which is copied from the Defaults subdirectory into the user's own GNUstep defaults directory, thus to ~/GNUstep/Defaults/WMMail. After that, this file still has to be modified using a text editor.
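In other words, something along these lines should set a user up (the exact name of the file inside the Defaults subdirectory is an assumption here – check the package documentation):

mkdir -p ~/GNUstep/Defaults
cp /usr/local/GNUstep/Apps/WMMail.app/Defaults/WMMail ~/GNUstep/Defaults/
# then adjust ~/GNUstep/Defaults/WMMail in a text editor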
Fig. 9: How WMMail can look
Fig. 10: wmcp in "scalpel" and "gv"
Fig. 11: Options for a dockapp

wmcp

Those coming from KDE will certainly miss the buttons that enable you to switch from desktop 1 to
desktop 4. You can of course switch between desktops using the corner of the clip (paper clip), but it isn’t as easy. The remedy for this is provided by a pager going by the name of wmcp which can be found on the attached CD or at http://www.linuxave.net/~bac/. The author of this little tool also supplies two different faces (Figure 10) – and anyone who is a bit creative can whip up their own graphics here.
Useless? Here’s another illustration with all sorts of dockapps not yet introduced. Since over 120 of them exist and not one of them is a standard dockapp (and for almost all of which there are equally usable alternatives introduced by myself), I will say no more about them now, but would encourage you to take a look for yourself in the dockapp archive. Then you can use all sorts of things which you certainly do not need, simply because you like them. But without further delay let’s look at what the screenshot shows: wmglobe (the earth with light and darkness depending on the sun’s position), wmWeather (your local weather report), ascd (CD player), asmixer (small audio mixer) and wmeyes (eyes follow the mouse cursor). Whoops – ”ascd” instead of ”wmcd”? Right! Some ”dockapps” designed to work with AfterStep can also be used with Window Maker, which increases the number of tools at your disposal even more … There’s still the question of how to make these little helpers always start automatically when you log in. To do this, just click on the edge of a dockapp with the right mouse button, select Settings… and tick Start when Window Maker is started. ■
How To: Solve Installation Problems
PAINSTAKINGLY RECREATED

HANS-GEORG EßER
The installation of Linux programs from source code generally follows the ”classic” three step path of ./configure, make and make install. But what happens if, for instance, packages required during the ”make” process are missing? We examine some typical problems.
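For reference, the classic sequence, run from inside the unpacked source directory, looks like this (the install step normally needs root rights):

./configure
make
su -c "make install"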
When installing a program from a source code archive the aim is to create executable binary programs that can be run by the computer. To do this, a compiler must be installed. As Linux programs are normally written in C or C++ you will need a C compiler; in most distributions this is part of the egcs package, which must be installed. But the compiler alone is not sufficient. During compilation what is known as a makefile is used. This file contains the instructions describing how the program is to be built. The program make reads this file and carries out the instructions, so it must also be installed. To check whether these packages are present run the commands:

rpm -q egcs
rpm -q make

rpm should respond in each case with the package name and a version number (e.g. egcs-1.1.2-30 and make-3.78.1-4). If it tells you that a package is missing, look on the installation CD of your Linux distribution for the corresponding packages (e.g. egcs-1.1.2-30.i386.rpm and make-3.78.1-4.i386.rpm) and install these while logged in as root using the command:

rpm -Uvh package-name.rpm

The first step after unpacking the source code archive (and changing to the directory containing the code) is the configuration of the sources. This is the step during which the makefile is created. The command for this is:

./configure

The dot and slash at the start are normally needed so that your shell searches the current folder for the script configure. The shell runs this script, which searches through your system checking for the presence of the programs and libraries that are required to successfully compile the new software. If you want to install, say, a KDE program, the X Window and KDE libraries and the respective include files are required at the very least. The same applies to GNOME programs. If files needed to compile the program are missing, ./configure will produce error messages like this:

checking for X... configure: error: Can't find X includes.
Please check your installation and add the correct paths!

The packages you'll most often need are:

XFree86-devel-3.3.6-20.i386.rpm
kdelibs-1.1.2-15.i386.rpm
kdelibs-devel-1.1.2-15.i386.rpm
qt-2.1.0-4.beta1.i386.rpm
qt-devel-2.1.0-4.beta1.i386.rpm
gnome-libs-1.2.0-0mdk_helix_2.i586.rpm
gnome-libs-devel-1.2.0-0mdk_helix_2.i586.rpm

The version numbers shown above are of course only examples. If queries such as rpm -q XFree86-devel, rpm -q kdelibs-devel or rpm -q gnome-libs-devel give you the reply "package … is not installed", look on your distribution CD for similarly named packages and install these as explained above. Note that GNOME libraries are not required for KDE programs and vice versa. ■

libraries: Libraries contain program code that is designed to be called by other programs. This avoids the need for each program to have its own version of this code, saving programmer time and memory usage. Libraries usually reside in /usr/lib/, /usr/local/lib/ or their subfolders. Only a few important system libraries are located directly in /lib.
include files: Include files provide information to the compiler as to how a program gains access to the libraries. They have a file extension of .h and are usually found in /usr/include, /usr/local/include and their subfolders. ■

KDEDIR and QTDIR

When compiling KDE programs a common problem is that the environment variables KDEDIR and QTDIR are not correctly set. This typically occurs if both Qt library versions (1.x and 2.x) are installed. If you encounter problems compiling, set QTDIR to the correct folder (for example with export QTDIR=/usr/lib/qt2.1.0). KDEDIR should also be correctly set. With most distributions the correct value is /opt/kde or /usr.
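In practice that means something like the following before re-running the configure step (the Qt path is only an example and depends on where your distribution has installed Qt 2.x):

export QTDIR=/usr/lib/qt2.1.0   # adjust to your Qt 2.x directory
export KDEDIR=/opt/kde          # or /usr, depending on the distribution
./configure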
The monthly GNU column
BRAVE GNU WORLD

GEORG C. F. GREVE
Welcome to Brave GNU World. This month we have got several small projects covering a wide spectrum of applications. Let’s begin with a very interesting mail reader.
The mail reader Sylpheed with Aqua-Look
Sylpheed is an extremely user friendly mail reader developed by Hiroyuki Yamamoto and others. Hiroyuki started the Sylpheed project because he was dissatisfied with existing mail reading programs. Emacs-based mail readers have problems handling attachments properly, are pretty heavyweight and require starting emacs in order to read mail. Graphical mail clients, on the other hand, have interfaces that the author felt were hard to get used to, sometimes they don’t support threads, don’t know about the existence of other mail readers or some features can’t be controlled via the keyboard. If you have had similar thoughts in the past, why not give Sylpheed a try? Sylpheed has some pretty impressive advantages. Based on the GTK+ toolkit it has a very intuitive interface. Its mail filtering capability and its support of message threading helps you keep your mail well-organised. Of course, it supports multipart MIME and has an integrated image viewer. X-Faces are supported, as are clickable URIs. But despite this heavy reliance on a GUI everything can be done from the keyboard as well. One excellent feature is the XML-based address book - this is guaranteed to be portable and available for other applications. The mail folders are portable, as well. Since messages are saved in the MH format they can be used by other mail readers like Mew. On top of this Sylpheed can read (but not write) news and supports calling the external programs fetchmail and procmail to retrieve mail. Outgoing mail can be collected in a queue before sending. Sylpheed is rather young so there is still a pretty long to-do list that contains IMAP4, LDAP, PGP/GPG support, SSL, compression of mail folders and more.
It also needs some work on the English localisation, but since it is gettext-based this shouldn't be too hard. Debian users can install Sylpheed directly from the given address; others aren't too seriously disadvantaged, since it supports autoconf/automake. If you are unhappy with your mail reader at the moment, you should definitely consider giving Sylpheed a try.
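For non-Debian users, building Sylpheed from a source tarball follows the usual autoconf/automake routine. A minimal sketch - the version number and tarball name are only examples, and the GTK+ development packages discussed earlier must already be installed:
tar xzf sylpheed-0.4.0.tar.gz   # example tarball name
cd sylpheed-0.4.0
./configure                     # checks for GTK+ and friends
make
make install                    # usually run as root; installs under /usr/local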
Xzgv
Xzgv is an image viewer for X11 that has been inspired by the console tool zgv. It was written by Russell Marks because he has a great affinity for the console and was unhappy with the X11 image viewers available. His original plan was just to write a zgv port for X11, but he soon realised that this wouldn't give the desired results.
Xzgv can be controlled completely using mouse or keyboard and uses a single window for selecting and displaying images: no more of the "window zoo" that some image viewers tend to create. Since it has been written using the GTK+ toolkit without any dependency on GNOME or KDE, it can be used anywhere without a problem and is very responsive as well as highly usable.
The author emphasises that writing a good image viewer isn't as trivial a task as some people seem to think. Of course it isn't a highly complicated task just to display an image. But creating a general image viewer that "does things right" requires solving some non-obvious problems. Xzgv is already very much finished. Plans for further development are limited to maybe getting away from the Imlib 1.x library and optimising viewing for really big pictures.
As it is licensed under the GNU General Public License, Xzgv can be recommended without reservation.
[Figure: Xzgv - simple but efficient]
Solfege
Solfege is a program to train your ears. Until now this area has been almost completely dominated by proprietary programs running under Microsoft Windows or MacOS. Solfege has been written by Tom Cato Amundsen, a music teacher who began the project for his own lessons.
A remarkable feature is that it is extensible via "lesson files" in ASCII format, so that people with knowledge of music teaching can easily add their own lessons. Writing more of these files is one of the most urgent tasks at hand. In the long run the author would like to reimplement large parts of the program, as the solutions chosen so far have proven unsatisfactory to him. In doing so, the help of people with experience of OSS programming would be very welcome.
The obvious target audience - music students and others interested in training their hearing - shouldn't expect too much from the program at the moment, because Solfege V1.0 is still some way off.
Ggradebook
As the name suggests, Ggradebook by Eric Sandeen, Hilaire Fernandes and several volunteers is a program to manage grades. It uses the GTK+ toolkit and can optionally be compiled with GNOME support. Ggradebook can handle grades in percentage, abstract numerical or alphabetical form.
The last two can be automatically matched to percentages, and their scales are customisable. A current problem is that with percentage and alphabetical grades it is not yet possible to use a "+" or "-" to raise or lower the grade; this can only be done for numerical grades. Further plans involve import/export filters for similar programs, internationalisation and more features. Additionally, although the interface does allow all functions to be accessed, it could definitely be improved upon as far as user-friendliness goes.
The biggest difficulty the authors face at the moment is the lack of feedback: they need more. So all readers who could use something like this are encouraged to give it a try and tell the authors what they think.
[Figure: No luck for pupils - teachers won't go wrong with Ggradebook]
Common C++
The next three projects are really of interest to developers. Common C++ originally started out as the APE project written by David Sugar. In March 2000 it was merged with Common C++ by David Silverstone, and since David Sugar preferred the name Common C++ the joint project kept that name.
Basically, the Common C++ project starts where the ANSI standard C++ library stops. The goal is to create a portable class library that provides functionality for tasks like abstract threading, sockets, serial I/O and persistent class serialisation on Posix and Win32 systems. Most of this functionality is already provided by single, unrelated libraries. For David Sugar, the advantage of Common C++ lies in the consistent structure for all these tasks. It is also very portable: instead of including everything in one big library, Common C++ uses different libraries for the different systems.
This makes it possible to custom-fit the libraries to the machine and system, which reduces the overhead. At the same time, Common C++ applications will work on both architectures. As far as compilers are concerned, Common C++ is very tolerant and supports several C++ compilers that don't obey the standards. However, the developers believe that with increasing standardisation this is becoming less important.
As most developers know, pretty much every Unix implementation has its own thread implementation that usually doesn't conform to any standard. For these, Common C++ performs an extensive range of tests at configuration time to ensure they can be abstracted in a clean way. Although it doesn't yet co-operate directly with GNU Pth (GNU Portable Threads), it should already be able to work with its pthread emulation. Native GNU Pth support is planned for the future.
A rewrite for version 2.0 is currently scheduled for this autumn, in order to remove anachronisms needed for old C++ compilers. It is also planned to support more platforms (BeOS). As far as functionality is concerned, a library for network protocols will be implemented; the lack of this library is the biggest weakness of Common C++ right now, as far as David Sugar is concerned. In the long term he expects to take a more administrative role, as there is a very active community of developers for this project. He especially wants Henner Zeller and Gianni Marianni mentioned, as they put a lot of work into the latest release.
The license chosen for this project is the GNU General Public License with a few extra terms, somewhat like the Guile license. This places it between the GPL and the GNU Lesser General Public License as far as the granted rights and freedoms are concerned. Since this has led to some confusion in the past, version 2.0 will probably be distributed under the LGPL. As of April 2000 Common C++ is an official GNU Project, which isn't surprising as it is already the basis for several Free Software projects such as Bayonne.
GNU marst
GNU marst is a program by Andrew Makhorin that translates Algol to C. It is another recent addition to the GNU Project. It implements the complete Algol 60 programming language as specified in the "Modified Report on the Algorithmic Language Algol 60" [11]. As this language doesn't really change much any more, the author's current task is to write a front-end shell that makes handling easier for the user. If you happen to have free Algol 60 programs the author would like to hear from you: he is looking for programs of any kind at the moment.
Gengetopt
Just like the others, Gengetopt is a new official GNU Project. It is a tool originally developed by Roberto Arturo Tena Sanchez and now maintained by Lorenzo Bettini. Its job is to create a C function that checks and parses the command line options of other programs; the results are returned in a struct. This is a very useful tool for every C/C++ programmer, since it takes over the rather tedious but very important job of parsing the command line. The programmer simply specifies the desired options, whether they are mandatory or optional and whether they take a parameter. The C source code generated by Gengetopt is added to the program, and a simple function call validates and evaluates the command line.
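As a rough illustration of the workflow, the sketch below writes a small option specification and runs gengetopt over it. The file name, option names and the exact specification syntax are only assumptions for this example - consult the gengetopt documentation for the syntax your version expects:
# A tiny option specification (syntax is an assumption based on
# later gengetopt releases - check your version's documentation).
cat > cmdline.ggo <<'EOF'
package "mytool"
version "0.1"
option "output"  o "file to write results to" string optional
option "verbose" v "print progress messages"  flag   off
EOF

# gengetopt reads the specification from standard input and writes
# cmdline.h / cmdline.c, which typically define cmdline_parser() and
# a gengetopt_args_info struct; compile them into your program.
gengetopt < cmdline.ggo
gcc -c cmdline.c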
OpenTheory
We'll end on a topic not directly related to software. The OpenTheory project by Stefan Meretz tries to transfer the spirit and methods of Free Software development to the creation of documents. All the documents are under the GNU Free Documentation License, and the project is meant to host anything from documentation and books to concepts or theoretical papers.
The main site offers a coordinating hub driven by a web-based tool that allows thoughts and ideas to be exchanged. Projects can be started online and documents can be made available for discussion. All text is transmitted in ASCII format and stored in a MySQL database. The pages are based on PHP3, which makes them very dynamic. Each project has its own mailing list that can be posted to over the web interface. Additionally, similar projects can be linked and share their mailing lists. The only thing Stefan Meretz is missing right now is a mail interface to control the OpenTheory functions: at the moment documents cannot be submitted via mail.
The project has already been applied to some rather interesting tasks. The Oekonux project sprang to life there, and people have also used it to discuss and improve their talks for the LinuxTag in Stuttgart, Germany. The OpenTheory project itself - which refers to the software providing the functionality - is licensed under the GNU General Public License. It is at version 0.4, and other developers are still very much invited to join.
Info
Send ideas, comments and questions to Brave GNU World: column@brave-gnuworld.org
Home page of the GNU Project: http://www.gnu.org/
Home page of Georg's Brave GNU World: http://brave-gnuworld.org
"We run GNU" initiative: http://www.gnu.org/brave-gnuworld/rungnu/rungnu.en.html
Sylpheed home page: http://sylpheed.goodday.co.jp/index.cgi.en
Xzgv home page: http://rus.members.beeb.net/xzgv.html
Solfege home page: http://www.gnu.org/software/solfege/
Ggradebook home page: http://www.gnu.org/software/ggradebook/
Common C++ home page: http://www.commoncpp.cx/
GNU marst home page: http://www.gnu.org/software/marst/
[11] Modified Report on the Algorithmic Language Algol 60. The Computer Journal, Vol. 19, No. 4, Nov. 1976, pp. 364-79
Gengetopt home page: http://www.gnu.org/software/gengetopt/
OpenTheory home page (partially German): http://www.opentheory.org ■
See ya later...
That's it for this month. With a little luck I'll have something rather special next month - but no promises. However, as usual I would like to encourage you to contact me with new projects, ideas, questions and suggestions. ■