COMMENT
General Contacts
General Enquiries: 01625 855169
Fax: 01625 855071
www.linux-magazine.co.uk
Subscriptions: subs@linux-magazine.co.uk
Email Enquiries: edit@linux-magazine.co.uk
Letters: letters@linux-magazine.co.uk
CD: cd@linux-magazine.co.uk
Editor
John Southern jsouthern@linux-magazine.co.uk
Assistant Editor
Colin Murphy cmurphy@linux-magazine.co.uk
Sub Editor
Gavin Burrell gburrell@linux-magazine.co.uk
Contributors
Alison Davies, Richard Ibbotson, Steven Goodwin, Janet Roebuck, David Tansley, Wednesday White, Bruce Richardson, Jack Owen, Jono Bacon
International Editors
Harald Milz hmilz@linux-magazin.de
Hans-Georg Esser hgesser@linux-user.de
Ulrich Wolf uwolf@linux-magazin.de
International Contributors
Björn Ganslandt, Georg Greve, Jo Moskalewski, Sebastian Eschweiler, Anja Wagner, Carsten Zerbst, Patricia Jung, Marianne Wachholz, Stefanie Teufel
Design
Advanced Design
Production
Rosie Schuster
Operations Manager
Debbie Whitham
Advertising
01625 855169
Carl Jackson, Sales Manager cjackson@linux-magazine.co.uk
Verlagsbüro Ohm-Schmidt Osmund@Ohm-Schmidt.de
Publishing
Publishing Director
Robin Wilkinson rwilkinson@linux-magazine.co.uk
Subscriptions and back issues: 01625 850565
Annual Subscription Rate (12 issues) UK: £44.91. Europe (inc Eire): £59.80. Rest of the World: £77.00. Back issues (UK): £6.25
Distributors
COMAG, Tavistock Road, West Drayton, Middlesex England UB7 7QE
R. Oldenbourg
Linux Magazine is published monthly by Linux New Media UK Ltd, Europa House, Adlington Park, Macclesfield, Cheshire, England, SK10 4NP. Company registered in England.
Copyright and Trademarks (c) 2001 Linux New Media UK Ltd. No material may be reproduced in any form whatsoever, in whole or in part, without the written permission of the publishers. It is assumed that all correspondence sent (for example, letters, emails, faxes, photographs, articles, drawings) is supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing. ISSN 14715678. Linux is a trademark of Linus Torvalds. Linux New Media UK Ltd is a division of Linux New Media AG, Munich, Germany.
Disclaimer: Whilst every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the CD provided with the magazine, or any material provided on it, is at your own risk. The CD is comprehensively checked for any viruses or errors before reproduction.
Technical Support: Readers can write in with technical queries, which may be answered in the magazine in a future issue; however, Linux Magazine is unable to directly provide technical help or support services, either written or verbal.
Current issues
WAY AHEAD
In the next few months expect the market to get a shake-up. Rather than lots of separate distribution companies all working their own way and repeating each other's work, a major co-operation will be announced. This has the disadvantage of less diversification. For me this is not a worry, because if an opportunity exists someone will fill it, and there are enough small distributions to keep everyone on their toes. The first advantage is cost reduction, in that the wheel is not reinvented at each development centre. The second is that by pooling resources, work can be more focused and directed rather than just developing at someone's whim. Not all of the big players are included; if they were, it would have hinted at a cartel. Those that are included will have to find a way to add value to differentiate their products. Adding value means more features, which in turn requires more development, and so the product improves. The next big move will be in High Performance Computing. This will allow Linux to run on very large systems, all the time. Although Beowulf clusters exist, they are mostly experimental. Enterprise-wide deployment means a new set of problems. Sure enough, the main contenders for these markets (IBM, Compaq and Sun) are all busy developing to scale their Linux work to on-demand systems. Running Linux on a mainframe is possible, but the ability to run it on large clusters with reliability is now approaching. Happy Hacking
John Southern
Editor
We pride ourselves on the origins of our magazine, which come from the very start of the Linux revolution. We have been involved with the Linux market for six years now through our sister European-based titles Linux Magazine (aimed at professionals) and Linux User (for hobbyists), and through seminars, conferences and events. By purchasing this magazine you are joining an information network that enjoys the benefit of all the knowledge and technical expertise of all the major Linux professionals and enthusiasts. No other UK Linux magazine can offer that pedigree or such close links with the Linux community. We're not simply reporting on the Linux and open source movement - we're part of it.
Issue 18 • 2002
LINUX MAGAZINE
3
NEWS
LINUX NEWS
Loki closed for business
Loki, who helped bring to the Linux community games such as Railroad Tycoon and Quake III, regretfully filed a Chapter 11 petition back in August last year and finally closed for business on 31st January 2002. Concern was raised about what would happen with regards to the support and maintenance of the whole range of Loki products, so the Loki team were keen to point out what steps they had taken to try and minimise people's fears:
● All patches, FAQs, newsgroups and other online support services will continue to operate with a
Tim O’Reilly receives award Congratulations to Tim O’Reilly, who has been selected to receive the 2002 RIT Isaiah Thomas Award in Publishing, which is sponsored by the Xerox Corporation. This award, which was presented in New York by the Rochester Institute of Technology’s School of Printing Management and Sciences (or SPMS) on the 20 February 2002, recognises outstanding contributions made to the publishing industry. According to Frank Romano, SPMS chair, “Tim O’Reilly has truly documented the digital revolution. Programmers, IT professionals, and many other users have learned and applied new electronic tools through his company’s books. If there is a foundation for the digital age, it rests with O’Reilly publications”. Often regarded as the first and most trusted source of technical information, the books from O’Reilly & Associates occupy a treasured place on the shelves of the developers building the next generation of software. O’Reilly books are probably best known for the regular animal motifs on the covers, allowing readers to relax and not get into a hump while reading about Perl or go batty over SendMail.
third-party host, like www.icculus.org/lgfaq/. The Loki domains will be redirected to point to the new host, so there should be no need to make any changes to continue to use these services.
● All source code has been returned to the respective licensers. Although we cannot guarantee that each licenser will continue to support the Linux versions of their titles, we have made certain that they have all the necessary tools to do so.
The Loki team suggest that the Linux community should not be shy about respectfully letting any of the licensers know its thoughts on this subject.
Skygate SpamDecoy
Spam, or unsolicited email, has emerged as an ever-increasing drain on everyone, wasting both time and money. To combat this drain, Skygate has developed SpamDecoy. At best spam wastes network bandwidth, productivity and resources; at worst it is a very real security threat, disseminating malicious code in its body. Skygate Technology's SpamDecoy is the most effective tool available for protecting against the numerous threats inherent in email spam. "We started research into the spam threat a year ago," says Skygate director Pete Chown. "As a security consultancy we were only too aware of how serious and underrated a security threat spam was. We also felt that the handful of products designed to block spam were less than satisfactory for at least two reasons: firstly, they didn't actually prevent spam from penetrating a network in the first place, and secondly, they relied on lists of known spammers which quickly become obsolete. Our objective was to develop an effective, technical solution that would stop spam before it entered the network. The result is a simple, largely automated solution that, in tests, has stopped 90 per cent of spam." SpamDecoy is currently available for Linux mail servers, though builds can be made available for Unix systems running on Enterprise-scale installations. Skygate Technology, based in south London, is an independent consultancy specialising in the fields of cryptography, security, e-commerce and the Internet. The company provides international organisations with strategic consultancy and software development capabilities.
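Chown's second criticism is easy to illustrate. A static blocklist only rejects senders it has already seen, so every freshly minted address sails straight through. The sketch below is purely illustrative: the addresses are invented, and this is not SpamDecoy's actual mechanism, which Skygate does not detail.

```shell
# A naive static blocklist of the kind the article criticises.
BLOCKLIST="spammer@example.com bulkmail@example.net"

check_sender() {
    # Reject mail only if the sender is already on the list
    for bad in $BLOCKLIST; do
        if [ "$1" = "$bad" ]; then
            echo "reject"
            return 0
        fi
    done
    echo "accept"
}

check_sender "spammer@example.com"   # prints "reject": already listed
check_sender "newspam@example.org"   # prints "accept": a fresh address slips through
```

A list like this only ever catches yesterday's spammers, which is exactly the obsolescence problem Chown describes.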
Info sales@skygate.co.uk www.skygate.co.uk
MandrakeSoft ships Linux for IBM eServer xSeries
IBM Introduces new dedicated Linux Servers
MandrakeSoft has announced that it will deliver a fully-supported Linux distribution for IBM eServer xSeries, IBM's Intel-based server targeted at small businesses through to large enterprises. Mandrake Linux 8.1 is a complete operating system that provides both fully configured, easy-to-use desktop solutions and advanced professional solutions for powerful servers, providing for both business and personal needs. As part of their leading-edge support for Linux, MandrakeSoft offer customer services such as MandrakeExpert, which enables Mandrake Linux users to benefit from community technical support and top-level support, offered by MandrakeSoft's experts within a 48-hour response time. MandrakeSoft successfully tested the Mandrake Linux 8.1 release on IBM xSeries servers at IBM's Solution Partnership Centre in Essonne, near Paris. Mandrake Linux 8.1 currently supports eServer xSeries models x300, x330, x340, x370, x200 and x220 and the Netfinity 6000R, Netfinity 4500R and Netfinity 5100. Testing is also currently underway on IBM's 64-bit Itanium-based server, the eServer x380. "I'm extremely pleased that customers can now take advantage of the high availability, low cost IBM eServer xSeries systems running the Mandrake Linux operating system", said Marc Joly, Linux Business Manager, IBM France. Netkonect, a UK-based business-to-business ISP, looked to IBM and MandrakeSoft to provide them with a stable and reliable solution to run their business. The company installed IBM x340 and x330 servers running Mandrake Linux. Stuart Henderson, Business Development Manager with Netkonect, said, "As many of our customers' entire business depends on the high availability of their servers, a stable and reliable platform was of paramount importance. Our users are continually developing applications on the latest scripting technologies and therefore an up-to-date OS release that does not compromise stability was essential. In choosing this solution, support from IBM and stability of the product was vital. After looking around at the different Linux distributors, we chose the Mandrake Linux OS because it offered the highest level of functionality in an integrated, easy-to-use package."
IBM today announced plans to deliver two new dedicated Linux servers, including a first-of-its-kind Linux-only mainframe that requires no traditional mainframe operating system experience. The IBM eServer zSeries allows for the consolidation of between 20 and 200 standalone servers, bringing the superior total cost of ownership and bullet-proof performance and security of the IBM mainframe to an entirely new class of customers. The announcement follows a year of remarkable growth and interest in Linux on the mainframe at IBM, as the eServer zSeries became the fastest growing platform in the industry and the only server platform to chalk up five consecutive quarters of growth.
Info
Info
www.ibm.com/servers www.mandrakesoft.com
www-1.ibm.com/servers/eserver/zseries/ www.turbolinux.com www.suse.com
IBM also announced plans to deliver an aggressively priced, easy-to-manage Linux server specifically for small and medium-sized businesses. The IBM eServer iSeries offering for Linux uses IBM's advanced "partitioning" technology to help customers to reduce cost and complexity by consolidating up to 15 standalone Linux and Windows servers onto a single physical server. It supports the SuSE and Turbolinux distributions of Linux and includes an installation wizard for rapid deployment. Both servers are intended for infrastructure applications such as firewall, Web serving, file and print serving, and mail serving and are expected to be available in the first quarter of this year. "These new Linux servers answer the call of every customer who is serious about reducing server sprawl and dramatically improving their total cost of ownership," said Bill Zeitler, senior vice president and group executive of IBM eServer. "Companies large and small are turning to 'virtual' Linux servers to save 'real' dollars as they gain better control over their e-business infrastructure." Running IBM's industry-leading z/VM virtualisation technology, the eServer zSeries for Linux offers an ideal platform for server consolidation, utilising the mainframe's ability to create as few as 20 and up to hundreds of virtual Linux servers on a single physical box, saving customers substantially on energy, floor space, and maintenance expense.
Dreamworks and HP to use Linux for more film entertainment
Dreamworks used Linux servers for much of the brute work to help render many of the images in the box office success 'Shrek'. But that was two years ago. Since then, Linux has taken over almost completely in the role of helping to make Dreamworks' new feature "Spirit: Stallion of the Cimarron", or so they told the world at the recent LinuxWorld Conference and Expo in New York. Hewlett-Packard took up the strain to help fill and improve on some of the holes Linux has for this type of development, helping to produce a system that was 50 per cent cheaper than the system used to produce 'Antz' two to three years ago while, at the same time, giving a 400 per cent power improvement. "We're in the process of shifting away from specialised high-end computing hardware and software to Intel-based PCs running Linux," said Ed Leonard, Dreamworks' chief technology officer. "We worked very hard with our third-party software partners and convinced them why we believed this was interesting. We brought most of those key partners on board," Leonard said. With HP's help, Dreamworks translated to Linux its own in-house animation tools, millions of lines of code long. HP Chief Executive Carly Fiorina announced the deal in a keynote address at the LinuxWorld Conference and Expo.
Compaq ProLiant email servers thanks to Sendmail
Sendmail helped define how Internet email operates and is considered the benchmark for open standards and Internet email innovation. Now, Sendmail, Inc., has joined forces with Compaq Computer Corporation in announcing the availability of the first enterprise-class, Linux-based email solutions for industry-standard Compaq ProLiant servers. These customisable solutions combine ProLiant servers with Sendmail's Mailstream Manager and Sendmail's Integrated Mail Suite (IMS). These highly available, secure email systems are ideal for organisations and service providers that are looking to reduce the costs associated with deploying and maintaining their current RISC-based email infrastructure, and extend email access to additional users. These email solutions take advantage of industry-standard software and server platforms to provide maximum availability and a superb price performance ratio. Performance testing indicated that Sendmail, Inc. software running on Compaq ProLiant server clusters significantly surpassed more expensive RISC-based
servers in message throughput. “Cost-effectiveness is a top priority for email and messaging solutions,” said Mark Levitt, research director for Collaborative Computing at IDC. “The combination of Sendmail, Inc.’s software with Compaq’s Linux-based servers is designed to deliver highly available systems that can grow easily and cost-effectively to support rising email volumes and expanding user populations.” “Compaq’s leadership in the Linux server market combined with the strong management capabilities of Compaq’s products, make ProLiant an excellent platform for delivering enterprise-class email solutions,” said Greg Olson, chairman and co-founder of Sendmail, Inc. “Our lab tests have shown the price/performance ratio to be stunning.” The Compaq and Sendmail, Inc. solutions
can stand alone or enhance existing messaging and groupware platforms with additional security and availability. The high availability offered in these solutions is ideal for enterprise customers and service providers who consider email to be a critical component of their 24x7 business operations. The solutions provide additional mail security filters, directory services and secure mail routing, and centralise administrative tasks into a single secure SSL-based management interface. Early adopters of the Compaq and Sendmail, Inc. package include a leading GSM mobile telephony provider and one of the most respected health care companies in California.
Info www.compaq.com/products/software/linux/ mailmsg/ www.sendmail.com
Caldera Volution Manager 1.1 receives industry accolade
Volution Manager 1.1 is Caldera's secure, Web-based management and administration solution. This new version, which has been available since February, now supports the latest versions of all major Linux distributions as well as Caldera OpenServer and Open Unix products. In addition, Caldera has introduced several new system management features in Volution Manager to help system administrators and solution providers save time, scale resources, and ease deployments cost effectively. The Aberdeen Group, a leading industry research firm, recently released a competitive analysis white paper by Dr. Bill Claybrook on Linux management solutions. The Aberdeen Group claimed Caldera provided the best Linux management solution, saying, "Today, the leading overall Linux software management solution is an integration of Caldera Volution Online and Caldera Volution Manager. Caldera has the potential for large amounts of
success with the integrated product solution both within its large installed base and outside its installed base because it has brought cross-platform management into one interface for several distributions of Linux, OpenServer, and Open UNIX, with future plans for Solaris and Windows." Some of the enhancements to Caldera Volution Manager include:
● Management of multiple platforms via one interface.
● New install options that simplify reviews, evaluations and deployments.
● New wizards that simplify common tasks and reduce the initial learning curve.
● Integration with Volution Online, providing an Internet-delivered, proactive software management service for Linux and UNIX systems.
Info www.caldera.com/products/volutionmanager
Linux on the PlayStation 2
Sony Computer Entertainment America Inc. announced the availability of "Linux (for PlayStation 2)" Release 1.0, targeted towards the Linux development community. Designed as a hobbyist development environment, it lets users not only run the wide variety of computer applications written for the Linux operating system, but also create original programs and applications designed to run on "Linux (for PlayStation 2)". The company expects the kit to sell for about £200 when it is made available in May 2002, exclusively through its Web site. The release of "Linux (for PlayStation 2)" will, for the first time, allow developers access to the PlayStation 2 runtime environment and system manuals, though rumours abound as to how full and detailed that access will be. The "Linux (for PlayStation 2)" Release 1.0 kit includes:
● Internal hard disk drive for PlayStation 2 (HDD) with 40Gb capacity.
● Network adaptor (Ethernet) (for
PlayStation 2) with 100 Base-T Ethernet interface.
● Computer monitor adaptor (for PlayStation 2) (with audio connectors).
● USB keyboard and mouse (for PlayStation 2).
The software, which is expected to be supplied on two DVDs, will contain:
● Linux kernel version 2.2.1 (with USB device support).
● gcc 2.95.2 and glibc 2.2.2 with VU assemblers.
● XFree86 3.3.6 with PlayStation 2 GS support.
Linux use is growing at unprecedented levels and, according to IDC, a leading market research firm, the Linux operating system's market share is expected to reach 38 per cent worldwide by 2004. In response to many requests from the Japanese Linux community to enjoy Linux programming on PlayStation 2, a Linux Beta Version Kit was made available to approximately 7,900 enthusiasts in Japan in July 2001. Since then, more than 28,000 people have expressed interest in the "Linux (for PlayStation 2)" development kit. This led the company to release a new version as Release 1.0 to support users of the worldwide Linux community. The company responded to feedback received from the beta test program in Japan, and modifications were incorporated into "Linux (for PlayStation 2)" Release 1.0. Customer support and other community-based features will be handled through the North American PlayStation 2 Linux Web site. More details regarding the "Linux (for PlayStation 2)" release, FAQs and related news will be disseminated through this Web site as they become available.
Info http://playstation2-linux.com/
AMD introduces new mobile ATHLON 4 1500+ Processor for Notebook PCs
AMD's new mobile processor, the Athlon 4 1500+, features QuantiSpeed architecture and AMD PowerNow! technology, joining the AMD Athlon XP and AMD Athlon MP processors. The new mobile processor will be identified with model numbers, as opposed to clock speed in megahertz. In October 2001, AMD announced the True Performance Initiative (TPI), through which AMD will assist customers in understanding the benefits of PC performance. The mobile AMD Athlon 4 processor features QuantiSpeed architecture, which incorporates a nine-issue, fully pipelined superscalar micro-architecture, a superscalar floating-point unit, hardware data prefetch, and exclusive and speculative Translation Look-aside Buffers (TLBs). Other features of the mobile AMD Athlon 4 processor include support for AMD's 3DNow! Professional instruction set for enhanced multimedia capabilities, and AMD PowerNow! technology for extended battery life.
The new mobile processor delivers the highest notebook PC application performance, matching the demands of today's mobile business professionals. Systems based on the mobile AMD Athlon 4 processor 1500+ are expected to be immediately available from Compaq Computer Corporation. "Going forward, AMD continues to listen closely to end users that are calling for a better measure of performance," said Patrick Moorhead, AMD vice president of Customer Advocacy, who is driving the TPI. "AMD is working diligently to drive these discussions in a positive manner and devise an improved measure embraced throughout the PC industry." Compaq will feature AMD mobile processors in its Compaq Presario 700 notebook series. The Presario 700 can be ordered immediately from the Compaq Web site.
Info
www.athome.compaq.com
MontaVista to strengthen Linux ties to IBM PowerPC Processors
MontaVista has announced a technological agreement under which MontaVista's embedded Linux platform will expand its support for the IBM PowerPC microprocessor. The agreement is designed to help ensure that the IBM PowerPC microprocessors are fully supported by MontaVista Linux for embedded system and networking applications. IBM and MontaVista also plan to pursue joint sales and marketing activities. "Linux has become a requirement for many embedded and networking applications, so we're strengthening our relationship with MontaVista Software to build on the strong enabling of Linux in IBM products," said Scottie Ginn, vice president, Pervasive Computing Group, IBM Microelectronics. "IBM continues to invest in the hardware and software technology essential to meeting the needs of our customers. We believe that MontaVista Linux, along with the IBM PowerPC and IBM PowerNP, can provide a consistent, scalable environment for everything from cell phones to routers."
MontaVista Software was one of the first embedded software companies to develop a Linux-based platform for the IBM PowerPC family of processors, a focus that the company has maintained during its two years of successful commercial operation. The two companies had previously announced that they are collaborating to make the Linux operating system available for IBM PowerPC-based set-top box chips and IBM PowerNP NP4GS3 network processors. “Embedded systems developers have long valued the stability and performance embodied in the PowerPC architecture,” said Jim Ready, CEO, MontaVista Software. “Now these same developers can couple nextgeneration IBM PowerPC and IBM PowerNP hardware platforms with the openness and performance of MontaVista’s embedded Linux environment. MontaVista Software joins IBM in delivering the software and hardware foundation for the next wave of networking systems and consumer devices.” The scalable IBM PowerPC architecture provides a common Linux programming
platform across the e-business infrastructure. The IBM PowerNP and IBM PowerPC processors and cores are suited for data storage devices and servers that feed the network; wired hubs, routers, and switches that make up the network; and the handheld communications devices and other pervasive computing products that access the network. MontaVista Linux Professional Edition is available direct from MontaVista and MontaVista distribution channels worldwide as a product subscription, providing the MontaVista Linux kernel, software updates, utilities, development tools and one year of technical support. It is Open Source, royalty-free, and built from 100 per cent pure Linux sources. Also available from MontaVista Software are expanded technology add-on products, including embedded Java, High-Availability technology, and powerful graphical toolkits.
Info MontaVista – www.mvista.com/ IBM – www-3.ibm.com/chips/
NEC adopts NetVault as its Linux backup software
BakBone Software is an international storage management software company that develops and globally distributes high-performance storage management solutions to the open systems markets; it is especially well known for NetVault. The NEC Corporation has announced the addition of NetVault to its Linux backup solution product line-up. This enables NEC to sell NetVault as backup software for NEC's "Express 5800 Series" IA server products. NetVault is the de facto standard for Linux backup software in the Japanese market. NEC adopted NetVault based on capabilities not currently supported by other Linux backup solutions, including support of single files greater than 2Gb when using a 64-bit Linux 2.4 kernel,
RAW Device plug-in enabling backup of character devices and block devices, handling of Japanese file names, and support of a variety of Linux distributions such as Red Hat, Turbolinux, Miracle Linux, Caldera, VA Linux, and Vine. “Our adoption of NetVault backup software enables NEC enterprise customers to use our Linuxbased system products more reliably. We aim to further expand our market share by using our experience in the market and providing Linux solution consulting and support services to our customers,” said Takayuki Okada, from the NEC Corporation.
Info www.nec-global.com/ NetVault - www.bakbone.com
INTERVIEW
RMS We were lucky enough to ask RMS, founder of the GNU project and author of Emacs, a few questions about his work
Richard M. Stallman – Campaigner
Info Richard Stallman www.stallman.org/ The GNU project www.gnu.org Free Software Foundation Europe fsfeurope.org/
At the recent FOSDEM conference in Brussels, Richard M. Stallman was good enough to give up a few moments of his time to answer some questions about the GNU project, GNU/Linux and the freedom to know your code.
Linux Magazine - Did you find the people at FOSDEM enthusiastic?
Richard M Stallman - Yes
LM - Do you prefer this type of event more than an expo?
RMS - Absolutely. The expo is more commercial and can be useful to spread GNU ideals to additional people, but even at a meeting like FOSDEM it's necessary to do that. People come to FOSDEM because they like the atmosphere of development, being coders, and that's good. But it is also important to come here to hear about the political and ethical issues we face, and to get people to organise against software laws such as the new European software patents.
LM - Do you feel passionate about hardware specifications?
RMS - If the specifications for hardware are secret, the Free Software community cannot support that hardware, and this is a serious problem. Hardware vendors producing products with no Free drivers is a very bad thing. People should not buy that hardware; they should tell the manufacturers that they insist on hardware that can be run with Free Software. Should hardware vendors think that it is better for them to attack our freedom, then it is they who should suffer.
LM - Rich countries are donating hardware complete with non-free software. Is this good or bad?
RMS - When Microsoft do this it can be thought of as Microsoft colonialism. Microsoft is trying to colonise the world, which is true for first world countries as well as the third world. They should not be able to dominate anyone. They are not satisfied with a market share; they want to dominate that share, and they want that share to be the whole. There are those that argue that it serves the economy. They are wrong: it only serves a few people in that economy. It is the nature of non-Free Software that it creates
domination: the owner of the software dominates the users.
LM - Marketing plays a vital role in their stronghold. How can the Free Software community challenge that?
RMS - Word of mouth. The more you talk about the ethical importance of Free Software, the more people will see that there is something really important at stake rather than short-term convenience and expense.
LM - Do you have a term for 'the opposition'?
RMS - I refer to them as Software hoarders or Software privateers. Privateers, originally, were authorised by governments to attack the shipping of another country. In this context, they have been known to refer to their enemies as pirates, so it's only fitting that we can call them privateers.
LM - What are you most proud of?
RMS - The Free Software movement. With it I have found a way to stand up for freedom. Before that I cared about freedom, but I had never found a way to stand up for it. And that, of all the things a person can do, is something to be most proud of. The heroes I most admire have stood up in tremendous ways for freedom. So now I've found a certain, smaller, way to stand up for freedom. Now I feel I've done something good with my life. Just developing software is good, but not as good, not nearly as important.
LM - Has the GNU project gone as well as you had hoped?
RMS - I didn't know how much to hope for; I always imagined success and total failure. We have achieved a substantial amount but, at the same time, there are still real problems in the community, both internal and external. In the community we have a weakness: many people appreciate Free Software, but not as an ethical principle. They like having freedom, but they are willing to use proprietary software for short-term convenience. Those people help in various ways, but they can be easily lost to us. It is easy to tempt them to leave our community and not support it. They are not likely to fight hard to overcome an obstacle.
Then, externally, we face dangers such as software patents, the DMCA, the EU copyright directive and the Cybercrime treaty. If we want Free Software we have to fight against all of these things. Another danger comes from the manufacturers who don't publish specifications, which prevents us from writing Free Software for their hardware. So all of these things are external threats to our community. What we have, then, are threats and, at the same time, a lack of resolve in many of the people in our community. That is why I focus my efforts on showing people how to gain and keep this resolve. If a person appreciates Free Software, that may be enough to convince them to contribute to the community with code, but it may not convince them to act politically to protect the community from prohibition.
ON TEST
Dolphin
DOLPHIN SERVER

The cost of running your own email and Web space server has been prohibitive to all but the most committed of businesses. The Positive Internet Company hope to have found a niche in the market with the "Dolphin Server", which is aimed at businesses that have low capital cost and efficiency among their priorities, as well as filling the requirements of the high-end home user. For £159 a month (plus VAT, if that sort of thing is of concern to you) you get your own dedicated server, locked away in a server room with all the climatic control and conditioned air that you would give to your own much-loved equipment.
What you get for your money

The Dolphin Server is housed in a 1U rackmount 19" case which contains an Intel PIII-933MHz processor, 256Mb of RAM and two 20Gb E-IDE hard drives, with a 10/100 network card giving you a voice to the outside world via a dedicated 100Mb/s switch port. This configuration will provide enough power for process-intensive sites. There is a generous monthly allowance of 40Gb of data transfer to take advantage of, via high quality networks with multiple international bandwidth providers. This would allow you to make maximum use of the server, giving it the opportunity to get its teeth into something regardless of where in the world your customers might be. The second drive in the server allows for daily backups, so you can rest assured that, in the unlikely event of a drive failure, you'll have a recent set of data to rebuild from. Because this is a dedicated server you have full control over what runs there, and you have the ability to furnish your server with whatever you need, should there be anything that is not included already. The server includes the SSL-ready Apache Web server with PHP and Perl. Full access to Java servlets and Java Server Pages support means you can run rings around your Web visitors or, more hopefully, guide them towards the information they want with some carefully crafted server-side pages. Also included in Dolphin is the MySQL database server, a must for e-commerce, as is reliable processing of email through your site – achievable with the Qmail mail server, which is reliable and secure. You even have full control over your DNS management, as BIND is also installed by default.
How is all of this possible?

And more to the point, what's all this got to do with Linux Magazine? Well, that's the clever thing: to give you all of this flexibility, Positive Internet give you a server with a full implementation of Debian GNU/Linux on it. Debian has a very long pedigree: the project started in August 1993 and has followed a most methodical and cautious development path, making it very stable and secure. Debian has not seen the need for advanced graphical installation interfaces, which can be off-putting for first-time Linux users. Once running, though, Debian comes into its own as one of the simplest Linux distributions to maintain and upgrade, thanks to the apt-get feature, which fetches applications – either upgrading them or adding something new for the first time – finding and solving any dependency problems as it goes. This can be done from the command line, or some clever person can write a graphical front end for those that prefer to point and click. Don't let stark installation processes put you off because, if you are going to be using Debian on a Dolphin server, the installation will have been done for you – and yes, the guys at Positive Internet have been wise enough to write a graphical interface to make upgrading a point-and-click affair, if that's what you want.

Should you be in the market for a good value dedicated server for a range of Internet requirements, then the Dolphin Server from The Positive Internet Company may be what you are looking for. Colin Murphy takes a closer look
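The apt-get routine described above boils down to a couple of commands at the root prompt. A hedged sketch – the package name here is purely illustrative:

```
# apt-get update            # refresh the list of available packages
# apt-get install mutt      # install (or upgrade) a package, resolving dependencies
# apt-get upgrade           # bring every installed package up to date
```

Each command needs root privileges and a network connection to a Debian package mirror.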
Full control

As an example of how far Positive Internet are prepared to go to give their customers as much control as possible, the Dolphin Server comes complete with a "Zero Delay Reboot" feature. ZDR will enable you to power cycle your server whenever you want to, via a secure Web-based switching utility. You may never want to tinker with your Dolphin Server, in which case you may never have need for this feature, but should all that application-getting get the better of you, then it might just save the day.
Dolphin Server
Supplier: The Positive Internet Company
Price: £159.00 per month
Web: www.positive-internet.co.uk
For: Value for money
Against: No full RAID
NEWS
K-splitter
UPGRADE OR NOT

Thanks to KDE's appearance on CD-ROM and the new programs, icons and features of KDE 3.0, one thing is certain: things won't be boring on the desktop this year. There's a lot of compiling to do, so let's get stuck in!

Always something new

Scarcely is the latest release, KDE 2.2, safely installed on your computer when the KDE developers start preparing the next serving – KDE 3.0. And they're not doing it on the quiet, but publicly on the Web and in the relevant newsgroups. With so many innovations (we present some of them in this K-splitter) it's hard to resist the siren song of the new software. But what should you do with the old version of KDE? Should you really, just because of a few new gimmicks, put a functional and stable working environment at risk? It's not an easy decision. So what could be better than deciding on both? José Pablo Ezequiel (Pupeno) Fernández must have had this same idea, because he is letting all curious KDE users share his experiences of installing and running both versions on his homepage at http://www.pupeno.com.ar/runningkdes. Mandrake users are best off, because the tutorial uses that distribution as its starting point. However, the Red Hat users among you may also find it useful because of the similarity between the two Linux distributions. Only SuSE users may have to proceed differently every now and then. But who knows, maybe one of them will then sit down and put their experiences of dual operation under SuSE Linux on the Net.
Figure 2: The new localisation tool, Qt Linguist
Major events...

... cast their shadows before them, as progress on KDE 3.0 development is making great strides. We present one way of installing the recently-released Alpha 1 in the section "Always something new". But why should you take the risk? The answer is obvious: the KDE surprise package of the third generation has more than one marvel in store. The majority of the innovations result from the changeover to Trolltech's new release of the Qt GUI toolkit. One of the most important expansions will be especially appreciated by all users who have a lot to do with databases, because the new KDE version will at last include an independent interface which supports almost all the well-known SQL database
Figure 1: With a good tutorial, dual installation is much easier
Happy programming!

We have previously reported on the Andamooka site, http://www.andamooka.org/. This is a site where you can find online versions of useful Linux and KDE books – and not just computer books, as you will find a wide range. What makes the site special is the ability to annotate the books for others. This lets you see, as you read, what other people thought and the further hints they want to give.

Figure 3: New icons for the 3rd generation
Figure 4: KDE 3.0 has new themes in its baggage, too
systems, including Oracle, PostgreSQL, MySQL and also the ODBC interface. The developer community is working equally feverishly on further internationalisation. Since Qt 3.0 greatly improves the display of non-Latin character sets, such as those running from right to left, KDE also benefits from this. What's more, the "Qt Linguist" tool from the new Qt distribution helps in translating all text visible to users in Qt-based programs into another language (Figure 2). There are also glad tidings for the graphics people among you: in future you will at last be able to run several monitors at once under KDE. The graphical user interface will also be able simply to fade out unnecessary icons by means of so-called alpha blending. Although that would be a shame, as Figure 3 proves, because apart from lots of new functional features, KDE 3.0 also comes with a treat for the eyes, like the gaily coloured icons from Figure 3 or the new style themes (Figure 4). Anyone not wanting to wait for version 3.0, and who would rather avoid Alpha and Beta versions, will not have to forgo the icons and the new styling, because both can now be downloaded separately. You can find the style theme at http://clee.azsites.org/kde/, whilst the icons can be downloaded from http://users.skynet.be/bk369046/icon.htm. In addition to visual innovations, there are also new programs in KDE 3 to quicken the pulse. So, for example, KonCD, the burning program we introduced to you in Linux Magazine issue 11, has made its entrance into the main distribution.
Imagemaps made easy

Imagemaps are a fine thing. These little images, which include a number of links within the graphic – for example, the left half of the image could point to the Web page "1.html" and the right-hand one to "2.html" – are a splendid gimmick for any Web site. With KImageMapEditor (the latest version can be found at http://kimagemapeditor.sourceforge.net/) the author Jan Schäfer has developed a KDE tool with which such imagemaps can be created with wonderful ease. Neatly enough, the editor can be completely integrated into Quanta, one of the most powerful HTML editors under Linux. In order to do this, you will need Quanta 2 Pre 2, because it is only from this version on that your own actions can be integrated. To do so, select the menu item Settings/Configure Actions, and click there on the New button. In the Input tab, in turn, select None. With the aid of the button with the three dots, you can now track down the binary of your map editor. Found it? The correct path to it should now appear in the text field in front of the dot button. Add an extra -c there and a %f as placeholder for the current document. Now you can look for a suitable icon in the top row, with which the Imagemap action just made will later appear in the Quanta toolbar. You also have the option of sending the icon on its way with a so-called tooltip, i.e. text which appears whenever you move the mouse over the icon. Enter the text of your choice in the ToolTip box. Now click on the Output tab. Here you should opt for the item Insert at cursor position. One last click on the OK button, and you will have integrated the imagemap editor into Quanta. If the action is to pop up in the toolbar, you must end by telling your HTML editor this via the menu item Settings/Install toolbar.

Figure 5: Imagemaps made easy

Strategic

The game players among users of KDE will soon have one more game to enjoy: Andreas Beckermann recently announced on the KDE Games Developer List that he has started further development of Boson, a real-time strategy game in the style of Command & Conquer and StarCraft. For the project to progress rapidly, Beckermann is still seeking keen combatants. Anyone interested can contact the team at bosondevel@lists.sourceforge.net and offer assistance.
GNOME NEWS
Gnomogram
THE SHAPE OF THINGS TO COME

This month Gnomogram looks at Evolution and Galeon
GNOME for MacOS X

There are now CDs on offer with an office suite named OpenOSX Office. This is not something along the lines of OpenOffice, but a few Gtk and GNOME packages, such as Gnumeric and AbiWord, ported to MacOS X. Porting is relatively simple, since beneath the colourful interface of MacOS X there is a Unix kernel related to BSD, named Darwin, at work. However, you don't need to pay the high price of OpenOSX to get GNOME running under MacOS X or Darwin: the Fink project offers considerably more GNOME packages for MacOS X, and at no charge. In fact OpenOSX even makes use of Fink in a few places. Another good source is the site of the GNU-Darwin distribution, where you can also get a comparatively cheap CD with OS X ports. However, this CD is based directly on the ports system as used in FreeBSD. For Debian users in particular, Fink's approach, which implements dpkg and installs via deb packages, might be easier. Less comprehensive packages in the actual MacOS X format can be found at http://www.osxgnu.org.
Evolution 1.0

Following six betas and several months of intensive bug fixing, Evolution will finally reach 1.0 as this issue is published. Even though by our deadline only the first release candidate was available, it is obvious that the error correction has paid off: Evolution is absolutely ready for everyday use and demonstrates everything that can be done with GNOME. The beating heart of this Outlook-style GroupWare is the mailer. Evolution can handle both POP and IMAP and can also protect connections via these protocols with SSL. The Camel library is responsible for this, which in addition to these protocols also supports various storage methods for messages, such as Mbox and Maildir. Since Camel is independent of Evolution, it is easy to add new protocols, which can then be accessed simply via a new URL such as mbox://file. Camel also saves a summary file for each folder, in which a few items of header data are stored, so it is possible to search for specific messages very quickly. Such a search is obviously limited to characteristics such as sender and subject – the content itself is indexed by another library named Ibex, so that queries about the body of the message can also be processed very rapidly.
GNOME 2 news

When GNOME 2 is launched, it's not just the software which will change – a new Web site is also planned. Unlike GNOME 2 itself, the site already comes with lots of visual innovations and can be admired in a preliminary version. In order to make GNOME 2 accessible to the disabled, two accessibility projects with the nice names GOK and Gnopernicus have been started. GOK is an on-screen keyboard,
while Gnopernicus functions as a screen magnifier and a screen reader at the same time. There has also been plenty of hard work aimed at improving the panel. In future, applets will be loaded via Bonobo, and the menu structure will no longer be a direct portrayal of a directory structure, but will be produced as required from a Gnome-vfs module.
Gflow
Figure 3: Evolution for mail handling
In addition to the normal folders, it is also possible to create "virtual" ones in Evolution, containing messages which match one or more defined search criteria. Anyone who would rather sort his mails into real folders can fall back on the filter editor. This allows, in addition to moving and copying, a specific colour or a certain status to be assigned to a message, and is much simpler to use than Procmail. Support for PGP and GPG is an important feature, especially in view of increasing surveillance – and S/MIME support is in development. As GNOME's front-runner program, Evolution naturally uses the component architecture of Bonobo. All the actual functions, such as the mailer and calendar, are independent components, which are loaded as required into the so-called shell. The advantage of this is that in future new components can be added without endangering the so painstakingly achieved stability of the other components. But the whole thing does have the catch that the components remain in memory, even when the shell has crashed. To end all the tasks which are left behind in this case, Evolution comes with the program killev. Components additionally open up the possibility of displaying attachments directly in Evolution, if the corresponding Bonobo embeddables are on the system. If there are no embeddable components, Evolution is still, thanks to Gnome-vfs, capable of providing a few standard viewers for an attachment, or simply storing it. As if Evolution did not already consist of enough independent parts, the calendar is also split into a front end and a back end. The back end, named Wombat, can make the calendar and address data available to other applications too, such as Pilot-Conduits, so a PDA can be synchronised even if Evolution is not running. The back end also makes it possible to massively expand the GroupWare functions of Evolution in future. Already the calendar offers the option of coordinating appointments.
Under Action/Assign task in the window for new tasks, a group can be defined which receives the appointment in an email as an iTip attachment. If a participant agrees, the appointment is transferred directly into the tasks and confirmed via email. Of course other GroupWare solutions, such as Outlook or Lotus Organiser, also support this standard. As usual with Free software, Evolution is also in line with open standards in other ways: the contact manager supports, in addition to the Vcard format, LDAP address databases via Wombat. Other formats can at least be imported – data from programs like Elm, Pine or Netscape can be carried over with no problem. Finally, the summary provides an overview of all information; this used to be known as "My Evolution". In addition to information from all other components, RDF streams from news sites and weather reports can be included here.

Gflow allows the flow around a symmetrical object to be simulated and displayed graphically. Since Gflow is based on the constant properties of a fluid, the program is obviously suitable for the construction of submarines. Nevertheless it gives a good insight into the behaviour of fluids of varying density and speed. To do so, Gflow computes three graphs, in which the flow, the strength of turbulence and the pressure are displayed. The computation of the graphs is done iteratively, whereby a large part of the physical parameters can be altered during the computation. To make this possible, the pause between iterations can be extended as required. Anyone working with the developer version can also print out the results via gnome-print. Anyone preferring the stable version can still store the results of the calculations. Gflow does not, however, store any images, just raw data, which can for example be output using gnuplot via the command "splot filename". Gflow creates several text files for each object, which contain the object itself, the values calculated and the parameters applied. Anyone wanting to create an object themselves can do so with the extremely limited program gfdedit. But it is easier to simply create an object in a program like The Gimp, in which case the image must be 200x100 pixels in size. Furthermore, the graphic must be indexed to two colours, since Gflow regards all non-black colours as part of the object. The image format itself is not crucial, since Gflow, with the aid of gdk-pixbuf, supports the majority of current formats.

Figure 1: Surrounding a GNOME character with virtual fluid

Figure 2: How the flow data calculated by Gflow looks in Gplot
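Plotting the raw data Gflow writes takes only a moment at the gnuplot prompt. A hedged sketch – the filename is illustrative, so substitute whichever data file Gflow produced:

```
$ gnuplot
gnuplot> splot "flow.dat"    # draw a surface plot of the values Gflow saved
```

splot is gnuplot's three-dimensional plotting command, which suits the grid of values in Gflow's output files.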
LINUX MAGAZINE
15
Figure 4: Galeon displays the preview of the GNOME-2 site with no problem
Galeon 1.0

The developers behind Galeon have, within a comparatively short time, managed to program a browser that has little to fear from the competition. The main reason for this rapid development is that Galeon relies on Mozilla's rendering engine, Gecko, for the actual representation of HTML. But since Galeon, unlike Mozilla, uses Gtk+ instead of the slow and sometimes overloaded XUL interface, the browser is less demanding when it comes to memory and processing power. Nevertheless the user need not do without the advantages of a flexible interface. In addition to the Gtk theme, almost all icons in Galeon can be changed, and the menu bar is freely configurable. This is not the only point where Galeon improves on Mozilla: interrupted sessions can be stored and continued, and if Galeon should ever crash, the last session is even offered automatically for loading. For some time now Galeon has also supported working with different tabs in one window, so it is possible, despite full-screen mode, to display several Web sites at the same time. Anyone who has shut down all the toolbars in full-screen mode will also be pleased that certain functions, such as displaying bookmarks, can be bound to a mouse button, and that there are keyboard shortcuts for most functions. These shortcuts can be freely edited, like almost everything else in Galeon. For bookmarks, Galeon offers not only a very easy-to-use editor, but is also able to import bookmarks in other formats, such as those of Netscape or XBEL. With the aid of the program IE2G it is even possible to convert bookmarks from Internet Explorer. On its first launch the browser will also offer to import parts of your Netscape configuration. Galeon can also add these bookmarks directly to its "smart" bookmarks: these are displayed on a special toolbar and allow search queries to be passed on immediately to specified sites such as Google or Freshmeat. "Auto-bookmarks" get no special toolbar of their own – this is the folder where the most-visited sites are displayed. Anyone who can't bear to part from the site list in Mozilla even has the option of displaying bookmarks or history docked at the side of the page. Another feature which Galeon has picked up from the competition is the option of using "galeon -s" to start a windowless instance, which accelerates all further calls of Galeon – obviously at the expense of increased memory use. Also useful, in view of the daily flood of banners, is the ability to block images from certain sites directly in the pop-up menu. Furthermore, cookies can be restricted to certain sites and a list of passwords can be kept. Since these passwords are completely unprotected, this is obviously not always a good idea. One setting which makes many Web sites more readable switches off animated graphics.

Info
OpenOSX: openosx.com/office
Fink: fink.sourceforge.net/pdb/section.php/gnome
Darwin: gnu-darwin.sourceforge.net
MacOS X format: www.osxgnu.org
Gnome2 preview: www.mindspring.com/~digitect/gnome/v2/templates.html
GOK: developer.gnome.org/projects/gap/AT/GOK/index.html
Gnopernicus: developer.gnome.org/projects/gap/AT/Gnopernicus/index.html
GFlow: www.vp7.dk/gflow
Evolution: www.ximian.com/products/ximian_evolution/index.html
Galeon: galeon.sourceforge.net
IE2G: sourceforge.net/projects/ie2g
Transfer Manager: gtm.sourceforge.net

Libraries required
Gflow: Gdk-pixbuf >= 0.8.0 (Gnome-print)
Evolution: Scrollkeeper >= 0.1.4, Libxml >= 1.8.10, Gnome-print >= 0.25, Gdk-pixbuf >= 0.9.0, ORBit >= 0.5.8, Oaf >= 0.6.2, Gconf >= 0.6, Gnome-vfs >= 1.0.0, Libglade >= 0.14, Bonobo >= 1.0.3, Gal >= 0.18.1, Gtkhtml >= 0.16.1, Intltool or Xml-i18n-tools
Galeon: Scrollkeeper >= 0.1.4, Libxml >= 1.8.14, Gdk-pixbuf >= 0.10.1, ORBit >= 0.5.7, Oaf >= 0.6.5, Gconf >= 1.0.4, Libglade >= 0.13, Gnome-vfs >= 1.0.1, Mozilla >= 0.9.5, Intltool or Xml-i18n-tools
If the site uses style sheets, it is also possible to load an alternative style sheet which does without colourful knick-knacks altogether. Galeon can now cope without the GNOME Transfer Manager, even though this is still supported. In addition to a small downloader and FTP support, Galeon also comes with the option of displaying GNOME help documents. This means the program offers a good alternative to the ageing GNOME help browser and to Nautilus. The latter may indeed offer the integration of Scrollkeeper, and thus the option of searching through the entire help files, but as a browser it is absolutely no match for Galeon.
KNOW HOW
Membership lists
PROTECT YOUR WEB PAGES
If you're lucky enough to have been to a Formula One Grand Prix, then you'll know that there is a members' area and a VIP area. By entering these areas you'll have immediate privileges and access not afforded to the general public. A similar thing can be said about Web pages. Most Web pages you will want everyone to see – after all, that's the whole point of the World Wide Web. On some pages, however, you may want to restrict the viewing to members or special users – VIPs, if you will. You can protect Web pages based upon the calling browser, IP address, domain name or simply via password protection. We will look at the latter, which is more commonly known as basic authentication. We will also look at how to personalise those nasty error messages that get thrown in your face when you try to go to a page that is missing (see Figure 1) or to an unauthorised area.
The challenge/response process

First, let's look at the process involved. Here's how it goes: you point your browser at a Web page protected by a username and password. The Web server then looks for a file in that directory called .htaccess; if that file is present, it reads the directives (configuration) to obtain the type of authentication (if any) and which files to protect with this information, and authentication begins. What happens now is commonly called the challenge/response cycle. The Web server sends an authentication request to your browser, and the browser prompts you for a username and password within a dialog box. The user enters their username/password then clicks on OK, and the information is sent back to the Web server. The Web server then validates the username and password against the information held in a password file. If the user passes the authentication (there is a username/password match) the page is displayed; if not, the Web server throws up a 401 error page in the browser, see Figure 2.

Figure 1: Requesting a non-existent document. A 404 error page
Some Web pages restrict access to authorised members. If you've ever wondered how this is done then wonder no more – David Tansley shows us how

Setting up your password file

To enable access for certain users, you first have to create a password file. We will call our password file ".ht_users", though you can choose your own meaningful name if you wish. Note this is not the password file that gets read when you log in to your Linux machine; this is a totally different file. To create this file we use the htpasswd utility, which enables you to add users and their passwords to an encrypted flat password file. As this file will hold users' names and passwords, it is best to keep it (at least) off the main Web root directory. For goodness sake, DO NOT put it in your HTML, CGI-BIN or ICONS directory. Create a new directory, called "private" say, off the www directory. (All Apache installs now put your HTML (or HTDOCS), CGI-BIN and ICONS directories within this www directory layout by default.) Next, let's create a couple of users, say "davetan" and "paulinetan":

$ mkdir private
$ pwd
/var/www/private
$ htpasswd -c /var/www/private/.ht_users davetan
New password:
Re-type new password:
Adding password for user davetan
$ htpasswd /var/www/private/.ht_users paulinetan
New password:
Re-type new password:
Adding password for user paulinetan

Figure 2: Authentication Failed. A 401 error page
Figure 3: A challenge/response dialogue box.
Notice that when adding user davetan we have used the -c option after htpasswd. The -c option tells htpasswd that this is a new file, and thus a new file should be created. We give the full pathname to the location of the password file (.ht_users). In this case we are putting the file in /var/www/private – you may want to use a different directory structure. After a space, the username we are adding is given. Finally, htpasswd prompts for password confirmation for that user. When adding user paulinetan there is no need to specify the -c option, as we do not want to create a new file, only append to it. If you do use -c, guess what: the file contents previously held will be wiped. Here's how the file we just created looks:

$ more .ht_users
davetan:ETEkRxqtoentY
paulinetan:C.ePHk1ASFlIs

Notice the usernames and passwords are colon-separated, and the passwords are encrypted.
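If you are curious where those encrypted strings come from, the same colon-separated format can be reproduced by hand. A minimal sketch, assuming the openssl command-line tool is installed – the username, salt, password and filename here are purely illustrative, and htpasswd remains the proper tool for the job:

```shell
# Build one .ht_users-style line (username:hashed-password) by hand.
# Classic htpasswd uses crypt(3); -apr1 below is Apache's MD5 variant,
# which htpasswd also understands.
hash=$(openssl passwd -apr1 -salt xyz secret123)
printf 'davetan:%s\n' "$hash" > ht_users_demo
cat ht_users_demo
```

Because the salt is fixed here, the hash is reproducible; htpasswd normally picks a random salt, which is why two users with the same password get different strings.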
Informing the Apache Web server
You can use different realms to protect different parts of your Web page directory
By default, Apache comes pretty much secured. Locate the httpd.conf file and do a bit of editing. To find out where your httpd.conf resides, use the find utility to do all the work for you:

$ find / -name "httpd.conf" -print
/etc/httpd/conf/httpd.conf

Next, using vi, vim or some other text editor, edit httpd.conf and locate the directory directive: <Directory>. Make sure you have the correct AllowOverride entry within this directive. It will probably have:

AllowOverride None

Change this to "AllowOverride All", so you have an entry like so:

<Directory />
Options None
AllowOverride All
</Directory>
Setting up the .htaccess file

Now for the meaty part. Change into the directory where the HTML files you wish to protect are located and create a .htaccess file. For example, to protect all pages whose filenames start with the word "private", the following pattern match will do it for us:

private*.*

So the above pattern would match all of these files: private_main.html, privatepage1.html, private_page2.html and private.php. Create a file called .htaccess with the following contents:

AuthUserFile /var/www/private/.ht_users
AuthName "Hey! Restricted Directory"
AuthType Basic
<Files private*.*>
require valid-user
</Files>
In the first line, AuthUserFile tells Apache where the file we created to hold the usernames and passwords is located. In the second line, AuthName is the realm name – you can use different realms to protect different parts of your Web page directory structure. For the basics, just use it as a header line that will be displayed in the dialog box when a browser tries to access a protected page. You must enclose it in double quotes if it is more than one word, as above. In the third line, AuthType is Basic; this means we are only using basic authentication, as mentioned at the beginning of the article. The Files directive specifies that we are protecting the files "private*.*", which will protect all files that match this pattern. The require valid-user line means the HTML page(s) matched will not be loaded unless the user first gets successfully authenticated. Now load up the browser and point it at a file that is protected and you will get a challenge sent from the server to your Web browser, similar to Figure 3. If you hit Cancel, your browser will throw up a 401 error page, as in Figure 2. Assuming you enter a correct username/password, the protected page you requested will be displayed.
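You can also exercise the challenge/response cycle from the command line. A hedged sketch, assuming the curl utility is installed – the hostname and password are illustrative, so substitute your own server and credentials:

```
$ curl -I http://www.example.com/private_main.html
$ curl -I -u davetan:secret123 http://www.example.com/private_main.html
```

The first request, with no credentials, should come back with a 401 Authorization Required status; the second uses curl's -u option to send the username and password via basic authentication, just as the browser dialog box would, and should return the protected page.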
If you have made changes to your configuration file, you must restart the Apache Web server. On a Red Hat box with Apache put in place at installation, you can use the rc script to stop/start the Apache Web server:

$ /etc/rc.d/init.d/httpd restart

18 LINUX MAGAZINE Issue 18 • 2002

KNOW HOW

Other examples

To limit access to a page to a single user:

<Files top_secret.html>
require user davetan
</Files>

The above only allows the user davetan to access the page top_secret.html.

You may be thinking: what if somebody points their browser at an HTML directory and specifically tries to load the .htaccess file? No problem, just deny viewing to everybody:

<Files .htaccess>
deny from all
</Files>

The above Files directive denies access to everybody, so your .htaccess file is safe. If someone tries to access it directly, a 403 Forbidden error page will be thrown up in their browser, telling them they do not have access to this file. Neat, eh?
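Incidentally, the stock httpd.conf shipped with Apache usually contains a similar stanza already, refusing any file beginning with .ht site-wide. The exact form below is a sketch of the Apache 1.3-era default, so verify it against your own httpd.conf rather than taking it on trust:

```apache
# Deny direct access to any file whose name begins with .ht
# (this covers .htaccess and .htpasswd-style files).
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
</Files>
```

If your server's configuration already carries this, the per-directory deny shown above is belt-and-braces rather than strictly necessary.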
Personalising error pages

Ever gone to a broken link and had a totally unfriendly "Not Found" document thrown in your face? It is possible, however, to make these pages friendlier to the calling browser. There are quite a few error code pages on a Web server. The most common ones are:

204  No Content
401  Authorisation Required
403  Forbidden
404  Not Found
500  Internal Server Error
Let's see how to create a "404 Not Found" error page; the principles are the same for any other error pages you wish to personalise. All you need to do is put an entry in the .htaccess file that you created earlier, like so:

ErrorDocument 404 /icons/not_found404.html

Each ErrorDocument for a different error code must go on a new line. The format of the entry is:

ErrorDocument <error code> <path to error page>

In the example shown above, I have put my error document in /icons, which is off the Web root directory. You are not restricted in where you put these HTML pages; some like to create a separate directory and stick them in there – it's up to you. Also notice that the name I have given the HTML page is a meaningful one that corresponds to the actual error code: in my example I have used not_found404.html, so I know it is concerned with the 404 error code page.

When throwing up personalised error pages it is considered good practice to always put a link back to your homepage, or at least to some main Web site (like http://www.netscape.com). There should also be a way for the user to complain that something is wrong. Listing 1 shows my very sparse, but more friendly, HTML code for a 404 error page.

Listing 1: not_found404.html

<HTML>
I am sorry, but the file you requested could not be found,<BR>
it may have been moved, deleted or simply just does not exist.<BR>
Back to <A HREF=/index.php>Home</A><BR>
If you have a query or something we should know about email the administrator at <a href="mailto:webadmin@localhost">webadmin@localhost</a><BR>
<BR><BR>
<HR>
<CENTER><IMG SRC="/icons/apache_pb.gif" border=0></CENTER>
<HR>
</HTML>

Please note that you do not have to create usernames/passwords if you only wish to personalise your error pages; simply create a .htaccess file and insert the entries for the error pages you are personalising, as shown above.
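To tie the whole thing together, here is what a complete .htaccess combining the authentication directives and personalised error pages might look like. This is a sketch: the /errors/ filenames for 401 and 403 are hypothetical names of our own, while the other paths are the examples used in this article.

```apache
# Combined example: Basic authentication plus personalised error pages.
# The /errors/auth_required401.html and /errors/forbidden403.html names
# are hypothetical; create your own pages to match.
AuthUserFile /var/www/private/.ht_users
AuthName "Hey! Restricted Directory"
AuthType Basic

<Files private*.*>
require valid-user
</Files>

<Files .htaccess>
deny from all
</Files>

ErrorDocument 401 /errors/auth_required401.html
ErrorDocument 403 /errors/forbidden403.html
ErrorDocument 404 /icons/not_found404.html
```

One file now covers who may see the protected pages, keeps the .htaccess itself out of sight, and softens the error pages a visitor will meet along the way.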
Figure 4: Personalised 404 error page
Conclusion

I have demonstrated how to carry out basic authentication on a Web server, protecting individual or many Web pages via basic pattern matching. There are many more directives that you can specify; however, space does not allow me to go through all of them.

When testing your .htaccess configuration directives it is always a good idea to open up a new shell window and continuously page the end of your error log file, so you can pick up any misconfigurations in the .htaccess file straight away and fix them. Like so:

$ tail -f error_log

When a user has been validated, they remain validated – even if they go off to another site and then come back to view the same protected page – so long as they have not closed down their browser. To reset the authentication, the user must restart their browser. Bear this in mind when testing your authentication procedures.

Being able to personalise your error pages makes your Web site friendlier and more professional to a visiting user. When these types of hiccups do happen, it shows you care about your Web site.
Info

Apache homepage: www.apache.org
FEATURE
Webcams
NEVER WORK WITH CHILDREN AND SMALL, FAT ANIMALS

So all the Windows users get broadband and go on about webcams of their goldfish. John Southern shows you that webcams can have other uses.

What to buy?

Wanting an easy life, we started out by simply typing the words webcam and Linux into Google. The first site had not been updated since 2000, while the second was still being actively maintained. This second site, at http://www.smcc.demon.nl/webcam/, lists the devices currently supported by the drivers. Armed with this list we set off for the shops and returned with a Philips PCVC740K webcam, more commonly known as the ToUCam. Back to the Web site to download the modules and drivers. The usb-pwc file is just 70K in size but requires us to patch the kernel. Full instructions were online – until we read the small print: from version 2.4.6 of the kernel this module is included. That meant we had nothing to do. We downloaded the CamStream application, as it is by the same developer – so if anything should work, this should be it. At version 0.25 this requires unpacking and the usual ./configure followed by make, then make install. In a shell we type camstream and a window opens. Nothing is visible. Choose Files -> New -> Viewer and any of the sizes. A small window appears with the webcam live. Okay, so it works, but the image is very small.
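Since the pwc module ships with kernels from 2.4.6 onwards, a quick version check saves you from patching unnecessarily. A small sketch, assuming GNU sort with its -V version-sort option:

```shell
# Is the running kernel new enough (>= 2.4.6) to include the pwc driver?
# Assumes GNU coreutils' sort, whose -C flag checks that input is sorted.
ver=$(uname -r)
if printf '2.4.6\n%s\n' "$ver" | sort -C -V; then
    echo "kernel $ver: pwc should already be included"
else
    echo "kernel $ver: older than 2.4.6, patching required"
fi
```

The same pattern works for any "is my version at least X" check, which makes it handy before chasing driver patches.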
Finally a 640x480 image
Starting another viewer and choosing the VGA output size gives us a large grey border, but the image is still small. Not ideal – but maybe the quality is VGA and we are just displaying it wrongly. Back to the Web page, and this time a read of the FAQ. (Note to self: really must read before trying things. Especially backup software!) There is the answer: no pwcx.o module is loaded. Another download, this time of usb-pwcx (11K). Copy it to /lib/modules/usb/ or, in the case of Mandrake, /lib/modules/2.4.8-26mdk/kernel/drivers/usb. Change to root and use:

insmod -f pwcx.o

The -f is to force the module in, as it is compiled against 2.4.6 and not the version I was using. The warning can be ignored. Now restart CamStream and a new VGA-size viewer shows the full image. Now we can start to explore the software. The first options under
At least the camera works
Target acquired
CamStream, apart from the size of the image, are to do with framerate. The higher the image size, the lower the maximum framerate; a lower framerate brings jumpy images and makes viewing unpleasant. This is a compromise, and you need to balance the image size you can live with against the framerate. For webmeetings it is better to go with the smaller image size and have smoother video, whereas for snapshots just go with the maximum size. The next set of controls are to do with the input levels, and here you can control the brightness and apply gamma correction to the picture. You then have the ability to upload the image via FTP to a Web site to provide a webcam accessible to all. I can see great potential for this, but I will need to lock down my firewalls first. You can always save the stream to disk, and it is now becoming popular in the USA to use webcams to spy on child minders during the day. I am not sure of the legality in the UK, but I know I would lose my mother's Sunday lunch if I tried! Now to look at what other software we can use. The next from the list of software at the driver Web site is Vgrabbj. This is a command-line tool for controlling the webcam. Maybe not everyone's favourite, but so easy to include within a script. At this point we are almost back on track for mounting onto the robot. Just one piece of software required.
Motion

Motion is a small utility that compares images. It looks at the differences between two samples and, based on the changes, can save an image. To speed up comparisons you can also apply masks: for example, you can block out most of an image so that it only detects a change at, say, a door or window. Very useful in guarding and security work. The latest stable version is 2.6.3, although the unstable branch has reached 2.9.2. The rpm requires the MySQL client library, while building from source does not. It is available from http://motion.technolust.cx/. To start the program we need:

motion -d /dev/video0

Be careful when testing this, because as soon as an image difference (motion) is detected in the webcam stream a snapshot is taken and stored. This has to be a quick way of using up all your disk space. The saving location is a directory tree based on the date the files are taken, for example: /home/john/2002/02/10/17/11/10-03.jpg. Fortunately, images are only about 12K in size. Back to the robot. Fixing the camera to the Lego Mindstorm was achieved with hot-melt glue (it peels off easily later). Now the only limiting factor is the USB cable. Ours was two metres long and so limited the range dramatically. Moving the robot and using Motion resulted in a constant stream of snapshots. These could then be turned into an animation file to show the rover in action. For a more jerky but less CPU-intensive result, we chose to put a pause between motion detections.
The outline shows the area that Motion detects a change
motion -a /dev/video0 -G 15

to give comparisons every fifteen seconds.
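Given how quickly even 12K snapshots mount up, a small cleanup script is worth having. This sketch is our own addition: the directory name and the seven-day window are example choices, not Motion defaults.

```shell
#!/bin/sh
# Prune Motion snapshots older than 7 days so they don't fill the disk.
# SNAPDIR and the 7-day window are example choices, not Motion defaults.
SNAPDIR="${SNAPDIR:-$HOME/motion-snapshots}"
if [ -d "$SNAPDIR" ]; then
    find "$SNAPDIR" -name '*.jpg' -mtime +7 -print -delete
fi
```

Run it from cron once a day and the snapshot archive stays at a week's worth of images.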
Multimedia
LIGHTS CAMERA ACTION!

The DeCSS court case is still rumbling on in the USA. While we are all waiting, let's watch a film under Linux. Colin and John get ready to munch some popcorn.

Quite possibly, DVD could be described as multimedia at its best: video, audio and text all rolled together into a rich mixture of sensorial delight – if you are allowed access to it. DVD is digital data – flawlessly copyable data – which has brought about moves by DVD manufacturers to try and limit who can get access to the information contained on it, and how; it all revolves around software licenses. Unfortunately, no one in the Linux community will want to go to the expense of producing a commercially licensed DVD player, due to cost – and maybe they shouldn't, due to principle. But give the Linux community an envelope and they will try their hardest to push it until it bursts. In this article we will go through just some of the exciting developments that enable Linux users to view DVD movies, as well as some other video formats, explain the state of their development, and describe how we got some of them to run. The envelope in this instance is DeCSS, the decryption counterpart to CSS, the Content Scrambling System, which all players require in order to watch the vast majority of commercially purchased DVDs. Because of the licensing and legal issues surrounding DVDs and some of the encryption they use (see the DeCSS boxout for some links), no single complete package exists, and they all require a certain amount of work to get running. Many are under heavy development and so may not be fully suitable for beginners to play with, but this will improve over time.
Real Player If you are lucky, your distribution will already have RealPlayer G2 from http://www.real.com installed. RealPlayer is designed to cope with streaming data, such as that from an Internet news channel or live webcast. It is commonly used to play sound samples of music you want to buy. Built into G2 for us is an AVI file format plug-in. A trip down to the local computer show lets us buy a whole host of video CDs, which use the AVI format. These usually have very low quality graphics with disjointed playback. They do however hold many hours of film on a standard CD. It is not unusual to find ten hours of TV shows on one CD. With RealPlayer G2 the files are displayed in a small window but full screen is possible with visible pixelation. OSS or Esound drivers are supported, and with webcasting being its main task, using a proxy is straightforward.
DVD, DeCSS and Linux

On January 20, 2000, a United States District Judge issued a preliminary injunction prohibiting the distribution of computer code for reading encrypted DVDs, but only if it is readable by a computer. The upshot of this is that everyone is now only too keen to tell you what the code is to decode a DVD, by printing it on ties, writing it on the back of envelopes or putting it in verse. To read more about this, see the sites listed below.

VideoCD (MPEG-1)

To play VCD (VideoCD) files we turn to the SMPEG library from Loki (http://www.lokigames.com/development/smpeg.php3). This is an MPEG decoder based on UC Berkeley's mpeg_play. As a front-end for the player we can choose from quite a range, but found either Enjoympeg or ZZPlayer to be our favourites. SMPEG requires the SDL library; this Simple DirectMedia Layer is fast becoming a must-have standard, not only for video work but for so many of the new games. Enjoympeg is 63Kb in size. The VCD is treated just as a DVD would be, although it is of somewhat lower quality. ZZPlayer is designed for the KDE environment, although a GNOME GUI also exists.

ZZPlayer GUI controller
EnjoyMPEG running
● ZZPlayer – http://zzplayer.sourceforge.net/
● Enjoympeg – http://people.freenet.de/for_Ki/

As an alternative, xmms can handle the VCD format with the avi plug-in.
XMMS playing the MPEG1 file with avi-xmms
DeCSS Central – http://web.lemuria.org/DeCSS/
Gallery of CSS Descramblers – http://www.cs.cmu.edu/~dst/DeCSS/Gallery/
Ogle

Ogle was the first open DVD player to support DVD menus and navigation. It is capable of reading from mounted or unmounted DVDs and hard drives, displaying both encrypted and unencrypted discs using libdvdread/libdvdcss. It also has a fullscreen mode. Start by installing the libxml2 library and its development package. Next install both ogle and ogle_gui, then use the command ogle to start the player. Sound control is achieved via the usual mixer controls, such as aumix. Unlike Xine, Ogle has little control over themes and skins and so looks dated in appearance. But with that said, what really matters is DVD playback. Using the same libdvdcss package to read encrypted discs, it does allow good control over things such as fast forward and skipping sections. Menu control is possible with either the mouse pointer or via a series of buttons that act on the menu functions. Turning on the subtitles was possible with just one click. The only thing keeping this from being the overall recommendation was that changing DVDs required us to restart the program – and is that really such a big hindrance? The latest version of Ogle is 0.8.2 and is available from http://www.dtek.chalmers.se/groups/dvd/.
Xine Xine is probably the most popular of the DVD players currently running under Linux and it’s certainly the one with the most pedigree. The User Interface has been refined over the years and now contains many useful features like the snapshot facility, with which you can create backgrounds of your favourite images.
Lights! Xine is not the most straightforward application to download and run, and it does have quite a few quirks. The main files – the files that make up the
Xine started for the first time
Mplayer

Aimed at being the next generation in multimedia players, Mplayer has a lot to live up to, and it's certainly doing just that. Available from http://www.mplayerhq.hu/homepage/, it comes as source and requires gcc 2.95 or 3.0.x (not 2.96). The player itself is now at version 0.60 and is ready to go after a simple ./configure; make; make install. Don't forget to download a skin so the GUI will work, and a font, as well as the codecs. The codecs are the killer feature, as all the new types of video and audio compression are covered. Want to play WMA files in the future? Mplayer will be your friend. Want DivX capability? It's built in. Many codecs exist because they have had to be reverse-engineered to exploit the proprietary file formats. This also means that, for legal reasons, they cannot be included within the standard Mplayer package. In fact, to make Mplayer work as well as it does, the developers have had to call on a wide range of sources, including borrowed code. The way some of this code is licensed means that Mplayer can only be distributed in source code format, which you need to compile.
core of Xine – are held in the xine-lib package, currently at version 0.9.8. But don't stop downloading there; several other things are needed to make full use of the player. We recommend that you also download the xine-ui package – to give you the graphical user interface – and the recently released xine-dvdnav, so you can navigate menus in films. The Xine player is capable of handling a wide variety of data formats, so many that the unwary can easily be bamboozled. MPEG 1 and 2 film formats are what you will find on a standard film DVD. Then there is DivX, which is based upon Microsoft's MPEG-4 format; MPEG-4 is similar to MPEG 2 but has a much higher level of compression applied to it. This means that both the DivX and MPEG-4 formats can pack a DVD film into the space of a normal CD with very little loss in quality. The OpenDivX format works just as well, if not better, thanks to its fixed and openly scrutinisable format and structure. The other common film format you are likely to come across, though it is now losing out in the popularity stakes, is VCD. This will also give you hours of video on a normal CD, but the quality is poor and can be a little jumpy. Once you have xine-lib installed you should, in theory, be able to watch unencrypted DVDs. This is fine, but you will have to control Xine from the command line and, probably more importantly, you will have to find an unencrypted DVD to watch.
Camera!

With xine-ui you get a nice graphical interface to start and stop your film playing. Another very useful addition to tack on to Xine is the dvdnav plug-in. With this you gain a new nav device, with which you can take control of some of the more modern features in today's DVDs, like accessing different camera angles and menus – in fact, everything that goes to make DVD the fun multimedia format it is. Special effort has had to go into providing features like nav. The next challenge is the decryption of DVDs. First you need to install the libcss files, then libdvdcss. This will leave you at a point where using your DVDs will seem worthwhile. Now you can fire off Xine using the shell command xine, which will open two windows. The first is the main DVD viewing area and the other is the user interface. Should this second window not appear all by itself, then a right mouse button click in the viewing area might be just enough to coax it into life. From this user interface choose the playlist button and from there take the new menu from the nav
Xine has now settled as a standard application
device. This should bring up a playlist of the DVD files in the drop-down window. Select one, hit the play button and the film should start. The reason for running this in a terminal the first time is that any errors that occur are shown in a separate window, rather than Xine just crashing out and leaving you without a clue. The first error that usually occurs is that there is no input file to handle the DVD encryption, which means that you failed to load the css or dvdcss library files. When loading these from rpm we noted that dvdcss said it was dependent on libdvdread, but in fact we also needed to load the libdvdread-develop package – so there is another file to add to your download list.
Action!

The next stage is sound. The film might be playing, but without audio output. Choose the settings icon on the user interface and then the audio tab. Here we see the audio device is set by default to NULL. Load in the xine-lib-oss libraries and restart Xine with the command line xine -A oss. Finally, with any luck, the film is playing as we want. Now to configure it to appear just as we want, using the control button. This allows us to adjust the brightness, hue and so on, and also lets us play with the themes and skins available for the player. The nav device also allows easier use of DVD menus so, for example, you can choose to have subtitles on screen or play the soundtrack in another language. However, once you've watched the only DVD you have seven times, boredom may set in. At this point you can start to have fun changing some of the other libraries. Install xine-lib-aa and you can play a film rendered in ASCII characters. This is quite amazing the first time you see it, although it is best watched at a
distance. You can force the console output to the /dev/null device. The console output is useful, as it continually tells you what is happening, but we found better performance was achieved with no output. VCDs and audio CDs played just as easily, though you will find problems if you are trying to get your data off a SCSI rather than an IDE CD-ROM device. One last thing to look out for: higher quality SVCDs will play, but the audio track is usually not in sync. This is easily cured by starting the program with xine -a 8.
Making full use of the features in DVD
Info Xine itself, as well as links to the other accessories mentioned here can be found at http://xine.sourceforge.net/
Arcad – Architecture under Linux
ALL BUILDINGS GREAT AND SMALL
Arcad is a pureblooded Linux CAD program for architects. With a long history, an active user community and favourable licenses it offers enough to warrant a serious look. Ulrich Wolf does just that
Boris Becker’s villa in Mallorca was built in Arcad from the original plans (Architect bureau Hainz, Munich)
CAD programs for architects (CAAD software) should be able to do everything; not so much regarding the variety of functions, but rather the methods by which architects work. For this reason there are, on the one hand, ultramodern CAAD programs which are completely parameterised. This means that, for example, building components such as walls or windows are "objects" with properties that do not just describe the geometry, but also the material, the price, the manufacturer and much more. On the other hand, there are also programs that treat components, such as doorframes, as mere geometrical shapes with no other properties. Arcad, created by a small company of the same name, falls into the second category. Arcad comes originally from the world of DOS; over the past few years, however, newer versions have been developed exclusively for Linux. The relatively faithful user community is strongly involved in the development and testing of new software. Arcad has a popular user forum on its Web page, where questions regarding operation are in the main patiently and competently answered.
Favourable licenses

The manufacturer offers favourable campus licenses in order to help spread the good word. For 65 euros, the purchaser receives a full program for unlimited duration, inclusive of installation support. The purchaser may use the program unrestrictedly for himself or herself, but contract work for third parties is not permitted. For those who want this option, there are reduced full versions starting at 450 euros; the complete outfit hits the bank at around 8,500 euros. A "Maintenance Fee" of ten per cent of the purchase price per year covers unlimited updates to the newest version.

Space for what is important: the Arcad interface

Arcad is a "3D volume" program, which also allows design in two dimensions, as some architects and civil engineers prefer this mode of operation. Others can develop 3D models and generate section views from these – the user is therefore given the choice. The dimensioning of the 3D model is associative, which means that the measurements are automatically changed when modifications are made to the
geometry. This is not however the case for designs in two dimensions. Another disadvantage of the program is that if sectional views (such as plan views or elevations) are produced from the 3D model, these are independent drawings with no link to the model. This allows for quick output because of the small data quantities, however each small modification must be carried out by hand on the section view as well as on the model. Subsequent changes on the model do not automatically update the views.
Rendering included

Of particular interest for design presentations is the possibility of rendering 3D scenes. For this purpose, Arcad features an interface to the Open Source renderer POV-Ray. This also enables users to render films showing fly-throughs of an object, as well as real-time animations. Acceleration of the 3D functions via OpenGL is at present in the beta phase. Arcad comes with a large library of textures and different elements such as doors, windows, plants and the like. Additional libraries (in the DXF format) can be included through several paths. HPGL, HPGL2 and PostScript are supported as print formats and, thanks to Ghostscript, almost any printer can be used. One of the standout features of Arcad is its intelligent user interface. The pull-down menus are freely moveable and can be faded in and out with a simple tap of the Tab key. Each command can be transferred to an icon list, and special commands such as zooming, moving or tilting are possible during processing using a combination of mouse and keyboard keys.
Clearer interface As is typical in Linux programs, all three mouse buttons are utilised. Left-clicking on an object calls up the command with which the object was created and thereby takes on all the object’s current parameters; clicking the middle mouse button leads to a menu from which the parameters can be changed; and a right mouse click ends the selected operation, as is usual in many CAD programs. The user can also choose between different forms for the cursor. This is just a plaything for desktop applications but for CAD applications it becomes very useful. For example, objects can be aligned more easily with a cross-hair cursor than with an orthogonal. The “Snap” property of the cursor can also be defined. It can for example be selected to automatically snap to corner points, edges or the like. For most users, Arcad will be quite easy to learn. This also applies to those without much CAD experience but with previous knowledge in architecture and the building industry. The program includes an extensive manual, which exists however only as a Postscript file – a printed version is not published at present. A less extensive HTML version is likewise included, which,
linked to the standard browser of the system, serves as context sensitive online help. However, the constant appearance of the Netscape window quickly becomes too much to take, so this feature is only really worthwhile in exceptional cases.
Cutting through bureaucracy The most important activities next to design in architecture/planning offices are the organisation of the tendering process, and the assignment and invoicing of jobs (AVA). There are numerous independent programs, as well as modules for CAD software, designed for this purpose. However you don’t need to resort to any of these if you have Arcad. The earlier versions of the program already included an integrated AVA, and this has been greatly extended in the present beta version. It is now possible to bill according to the architect’s fee regulations, to calculate according to the area in square meters, or to administer tenders and jobs. An address administration and a text processor have even been integrated. The architect need now use no other platform other than fast and reliable Linux.
Summary

Arcad under Linux offers the opportunity for architects who work alone or for smaller offices to switch completely to Linux. This is presuming that you prefer the classical design style with non-parameterised models. The license costs are at the lower end of the scale, and the campus license makes a trial both an attractive and affordable proposition. The manufacturer co-operates and works very well with its users, and those who like to can take an active part in the development. The largest risk might be that Arcad is the product of a very small company; if this should find itself in difficulties, a migration to alternative Linux applications would be inevitable. Perhaps then it would become Open Source.
Info Arcad: http://www.arcad.de/ gb/home.htm
QCad – CAD in two dimensions
VIRTUAL DRAWING BOARD For ambitious hobby designers and students QCad is a good introduction to the world of CAD. Ulrich Wolf takes a look at the capabilities of this free GPL CAD program
For the most part, professional CAD systems are not only extremely expensive but also so complicated that they can hardly be operated without some form of training. They offer a wide variety of functions, many of which are only really required by highly specialised professionals. QCad is the exact opposite of such software packages: it's a 2D CAD program with relatively few functions, but those it has are well chosen and include all the most important features. QCad is included in almost every Linux distribution and can also be downloaded from the program's Web site. The program is also available under Windows, as it uses the portable Qt library; there is, however, a royalty licence cost for the Windows version. rpm packages with dynamically linked libraries are available, as, naturally, is the source code; the current version is 1.4.7. If you still have Qt 2.1 and don't want to go through a new compilation, you should install the binaries of version 1.4.4 – the function range is almost identical. An intensive revision and the leap to QCad II are already planned. QCad's author, Andreas Mustun, also has a commercial package. The program, called
QCad includes a small library of DXF examples
QCad is a good introduction to CAD
CAMExpert, is currently available for a commercial license fee of $160. It essentially extends QCad by offering the possibility of creating NC programs for Computer Aided Manufacturing (CAM). It handles formats such as Gerber, G-code and HP/GL. QCad itself, however, reads and writes DXF files and additionally exports EPS; QCad does not have its own proprietary format. Those who are just finding their feet with CAD have it relatively easy with QCad. The operation is self-explanatory and the interface is designed around accustomed standards. One small difference is found in the behaviour of the mouse: a right mouse click does not, as is normal, call up a context menu, but instead concludes the current drawing operation. The absence of scroll bars also takes a while to get used to, though the Pan Zoom tool serves as a substitute. One very positive note is the existence of a helpful user manual – something which is unfortunately not always a matter of course with Free software – though there are some points that are not described in as much detail as we might like, and there is no find function to help you locate the information you need. Whilst working, tool tips (or speech bubble help) assist in finding the functions covered by each icon. QCad is also localised for numerous languages, including Japanese. The manual is currently available in English, French and German. The program comes with a small library of
Even complex drawings are not a problem for QCad.
prefabricated items such as screws, frames and even a small Tux. The library is too small to be of any real use, however. In order for it to be useful, you must expand it with drawings created by yourself – i.e. normal DXF files – or procure DXF files of standard parts from the manufacturer and integrate them into the library path.
Laying layers

As in all modern drawing and graphics programs, QCad lets you create different layers, also referred to as folios. Graphic designers normally create a different layer for each new object. For architects and engineers, however, it is sensible to divide up layers according to functional criteria: for example, one layer might be used for the outlines, one for dimensions, and separate layers for frames, text, help lines and so on. This is also recommended because QCad has no separate function for help lines. It is practical to use one pre-defined line colour for each layer. Layers can be individually hidden and revealed again; all elements of a layer can be selected and edited as one; and individual elements can be shifted from one layer to another.
Working to measure

QCad's dimensioning function leaves nothing to be desired: the units range from nanometres to light years, thus permitting the construction of small galaxies, should you wish. Diameters, radii and angles can easily be given dimensions, as can imaginary distances such as the distance between the centre points of drill holes. One great feature, often absent in larger CAD packages, is that the dimensions are automatically updated when modifications are made to the drawing. Although QCad is not a parameter-based CAD program, it is nevertheless possible to stretch, squash or distort closed forms. To actually get the desired result, however, the operation's so-called point of reference must be selected very carefully. In all operations, whether drawing or editing forms, the multitude of supported "snap points" can prove very helpful: the cursor automatically snaps to object or line intersections, centre points or similar prominent points.
The business of import/export

The author of QCad didn't waste his time inventing his own format, so one can assume that the portable DXF format is sufficiently well supported. A 1.2Mb file from an architect's bureau caused only slight problems: the characters in some fonts were missing, and the whole project, a very extensive building, was scaled down to the size of a postage stamp when opened. All the important information was preserved, however, including each of the ten layers and their designations. The only unfortunate aspect was that the dimensions did not dynamically adapt to subsequent drawing changes. This seemed to be more a problem of the program from which the file was exported than a failing of QCad: when importing directly from AutoCAD or other programs, the dimensions did adapt to changes, even when they were on other layers.
Weaknesses

95 per cent of the QCad code comes from the main author, Andreas Mustun. For this reason, the fact that some functions are still missing is only too easy to forgive. For example, there is not yet a function for constructing ellipses – this must currently be done using curves with defined reference points. One sorely missed option is the ability to group unconnected elements at will. The export possibilities are also very limited: QCad can only output files in the DXF and EPS formats. It's worth noting that all of these points are at the top of Andreas Mustun's to-do list.
Summary

3D design is a matter of experience and requires quite a different approach to planning than 2D design. Those who merely want to switch from the drawing board or vector-based drawing programs to CAD, however, will have their wishes satisfied by the free design environment of QCad. The program is extremely stable, sufficiently fast and can even handle larger files from foreign programs without a problem. At first glance the lack of component libraries seems to be the largest disadvantage. There are, however, many manufacturers who make standard components of their products, such as screws and profiles, available as DXF files, which counteracts the problem of the small library to some extent. Since the program is limited entirely to two dimensions, it offers little help in creating different views of an object; users who want to implement complex projects should already be familiar with the methods of a technical designer or draughtsman. The intuitive and easy-to-learn operation, however, makes it possible to concentrate fully on the construction of the project at hand.
Info
QCad and CAMExpert homepage: http://www.ribbonsoft.com
KNOW HOW
Java for the Linux platform
COFFEE WITH MILK AND SUGAR

Anyone who has worked with Java programs will be aware of their great advantage: platform independence. Java programs do, however, need a runtime environment for execution. In this article, Sebastian Eschweiler tells you what options are currently available under Linux
While Microsoft is trying to ban Java completely from the latest version of Windows (Windows XP) due to the licence dispute with Sun, the Linux user has more options than ever to make his system Java-capable. This article gives you a short overview of the Java SDKs and JREs available for Linux.

Spoilt for choice

In addition to the well-known JDK/JRE from Sun, Java users have a huge choice of alternatives under Linux. Here are the main Java kits for the Linux platform:

● Sun JDK 1.3.1/1.4 Beta 3
● Blackdown 1.3.1 FCS
● IBM Java 2 SDK/JRE 1.3
● Kaffe 1.0.6

Let's take a look first at the best-known option for making a Linux system Java-capable: the Sun JDK/JRE.

Java SDKs or JDKs: Java Software Development Kits provide the entire Java environment for the programmer. You will only need an SDK if you want to develop Java programs yourself. If a JDK is in place you need no separate JRE in order to execute programs – the JDK already includes all the functions of the JRE.

JRE: Java Runtime Environment. Since Java programs exist only as so-called byte code (a sort of Java machine code), a Java interpreter, which is part of the JRE, must be used to execute them.

Java from the inventor

At present, Sun is between the two JDK versions 1.3.1 and 1.4. Version 1.4 is in fact still in beta status, but is nevertheless already highly stable and can be recommended for private use. The Sun JDK and JRE can be found at java.sun.com. Whether you settle on the JDK or the JRE makes little difference to the installation procedure; below we describe only the JDK installation. When downloading from the aforementioned Web site you can choose between a tar.gz archive and an rpm package. Depending on which format you choose, one of the two following files should be on your hard drive after the download:

* j2sdk-1_4_0-beta2-linux-i386-rpm.bin
* j2sdk-1_4_0-beta2-linux-i386.bin

The respective bin file must first be made executable:

chmod a+x j2sdk-1_4_0-beta2-linux-i386-rpm.bin

or

chmod a+x j2sdk-1_4_0-beta2-linux-i386.bin

Now the program can be started:

./j2sdk-1_4_0-beta2-linux-i386.bin

You will first be confronted by the licence agreement. After confirmation, the JDK is installed in the subdirectory j2sdk1.4.0 of the current directory. When using the archive with rpm in its name, confirming the licence agreement produces an rpm file, which root can then install with the command:

rpm -iv j2sdk-1_4_0-beta2-linux-i386.rpm

Once installation is complete you must still set two important system variables, so that the newly installed Java environment can be found by Java application programs:

export JAVA_HOME=/usr/lib/j2sdk1.4.0
export PATH=$PATH:/usr/lib/j2sdk1.4.0/bin

Figure 1: Java Web site from Sun (java.sun.com)
Figure 2: Java commandline options

Blackdown

The developer group at http://www.blackdown.org also provides a JDK and JRE. The objective of this group is to port Java to Linux on the basis of the Sun source code. You may wonder why they bother when a Linux version from Sun is already available: Blackdown promises special adaptations for the Linux platform, as a result of which the package should be more stable and faster. Since the work builds on the Sun implementation, it takes some time before Blackdown reaches the corresponding version numbers; the latest version at present is 1.3.1. Opinions vary widely on whether to rely on a version from Sun or to use the Java implementation from Blackdown, but it has been shown on many occasions that Blackdown's implementations work very reliably, so it's well worth taking a look at this alternative. Installation is again very simple and finished in a few steps. Whether you decide on the JDK or merely want to use the JRE does not affect the installation in any way (apart from the directory names, obviously). Once you have downloaded the archive from the Web site, extract it in the usual way:

tar xjvf j2sdk-1.3.1-FCS-linux-i386.tar.bz2

You will then find a new directory, j2sdk1.3.1, in the working directory. With Blackdown, too, you must not forget to set or adjust the two system variables PATH and JAVA_HOME as described above.

tar.gz: The latest tar versions use the option "j" to unpack a bzip2-compressed tar archive. For older tar variants this is "-I" (with a capital I), while very old ones have no appropriate option at all. If your tar reacts to both variants with an error message, decompress the archive with bunzip2 first.

Big blue

In recent times IBM, too, has recognised the importance of Java and offers a JDK and JRE. The available versions can be found at ibm.com, and here, too, you can choose between a download as an rpm package or a tar.gz archive. The installation is largely identical to the one described above. Apart from the option of a complete download, IBM also offers a download split into four (JDK) or three (JRE) files, which will be of interest if your Internet connection is prone to dropping. The individual files then have to be combined prior to installation. This is done with the cat command in the following form:

cat [file1] [file2] [file3] [file4] > [outputfile]

The IBM Java package stands out in particular for its high speed, since large parts of it are written in C++.

Kaffe

The "Kaffe" project is attempting to reimplement the Java Virtual Machine, including the class libraries, as an Open Source project. The project was created by Tim Wilkinson and is now supported by a great many other Java programmers. Unfortunately, the version numbers of the Kaffe implementation do not correspond to the usual versions from Sun, which makes categorising its project status difficult. At present (version 1.0.6) Kaffe is between Java versions 1.1 and 1.2 from Sun, and some functions are still not implemented. Sadly, installation is not as simple as with the other packages mentioned. The latest release does not work with current versions of glibc, so compilation is not possible on many Linux distributions. This means you need the latest version from the CVS repository. To fetch it, enter the following commands:

cvs -d :pserver:readonly@cvs.kaffe.org:/cvs/kaffe login
cvs -d :pserver:readonly@cvs.kaffe.org:/cvs/kaffe co kaffe

After that, the source code of the latest version of Kaffe will be found in the new subdirectory kaffe. Compilation is done with the commands:

./configure --prefix=/usr/lib
make
make install

The parameter "--prefix" in the configure script specifies the directory in which Kaffe is to be installed. After that, Kaffe should be ready to start work.

CVS: The "Concurrent Versions System" gives everyone involved in a large programming project write/read access to the source files. At the same time CVS offers version control, so that current or older versions can be extracted at any time from the CVS "tree".

Figure 3: The Kaffe homepage (http://www.kaffe.org)

Conclusion

As you have seen, there are numerous options for Java programmers and users under Linux. Which alternative best suits your requirements is something you should find out by testing the various packages. If you want to play safe, it is advisable to turn first to the Sun JDK/JRE. In the next article we will put the newly installed JRE into practice and try out the first Java applications.

URLs
JDK and JRE: http://java.sun.com/j2se
BlackDown Java: http://www.blackdown.org
IBM: http://www-106.ibm.com/developerworks/java/jdk
Kaffe: http://www.kaffe.org
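The two export lines above only last for the current shell session; to make them permanent, the same lines belong in a profile file such as ~/.bashrc. A small sketch of that step, using a hypothetical helper function of our own (the JDK path is the one from the rpm install above – adjust it if your JDK landed elsewhere):

```shell
# java_env is our own hypothetical helper: it prints the two export
# lines for a given JDK install directory, so they can be appended to
# a profile file in one go.
java_env() {
    JDK_DIR="$1"
    echo "export JAVA_HOME=$JDK_DIR"
    # \$PATH is escaped so the literal text lands in the profile file
    echo "export PATH=\$PATH:$JDK_DIR/bin"
}

# Append the settings to ~/.bashrc so they survive a logout:
# java_env /usr/lib/j2sdk1.4.0 >> ~/.bashrc
java_env /usr/lib/j2sdk1.4.0
```

After opening a new shell, `java -version` should then report the newly installed JDK.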
Freemind
IDEAS TAKE SHAPE

MindMap tools are currently enjoying enormous popularity, since they allow collected data to be shown and edited in a tree structure. Thanks to Freemind, Linux users need no longer be left out in the cold. Sebastian Eschweiler finds out why

Following installation of the Java toolkits for Linux previously mentioned, it now seems like a good idea to test them. Since Freemind is a Java program, this brief article does two jobs at once: firstly it shows how Java programs can be made to run, and secondly it introduces an interesting and useful tool.

Installation

Freemind can be found at http://freemind.sourceforge.net. Downloads are available as either tar.gz or zip archives. Once the package has landed on your hard disk, the rest of the installation can be completed in just a few steps. You will need a JRE of at least version 1.2 on your computer. If you have one, all you need to do is create a directory and unpack the Freemind archive into it. Care should be taken here, as the archive does not create its own directory automatically, so you should only unpack it in a subdirectory created for this purpose.
On the CD
Freemind is included on the coverdisc.
mkdir freemind
mv freemind-bin-0_4.tar.gz freemind
cd freemind
tar xvzf freemind-bin-0_4.tar.gz

(The zip archive is unpacked in the same way with "unzip ...".) After these few steps the program is ready to go and can be invoked using the command
java -jar ./lib/freemind.jar &

Alternatively, the shell script freemind.sh, which contains the same command, can also be used.
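If you prefer something a little more defensive than freemind.sh, the invocation can be wrapped in a small function that checks the jar is where you expect before handing it to java. A minimal sketch; the function name and directory argument are our own assumptions (pass wherever you unpacked the archive):

```shell
# Hypothetical launcher: verify freemind.jar exists under the given
# install directory, then start it in the background.
launch_freemind() {
    jar="$1/lib/freemind.jar"
    if [ -f "$jar" ]; then
        java -jar "$jar" &
    else
        echo "freemind.jar not found under $1" >&2
        return 1
    fi
}

# Typical call, assuming the archive was unpacked into ~/freemind:
# launch_freemind "$HOME/freemind"
```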
Usage

The program is very simple to use and largely intuitive. After creating a new MindMap, you will first see a central root element with the wording "New MindMap". Starting from this node, you can now build your own personal MindMap. This is simplest if you use the pop-up menu of the node you wish to edit or extend. The menu contains four sub-items: "Node", "Branch", "Edge" and "Patterns". In the Node submenu you will find all the settings relating to individual nodes; here you can add new nodes and navigate the structure of the existing elements. Branch offers all the relevant settings for formatting the text style of the individual nodes – for example, bold or italic highlighting. The Edge submenu defines the formatting of the individual node connections, and finally, in Patterns you will find predefined formatting templates which can be applied directly to the various nodes. Since the project is still at an early stage of development there are still a few bugs, such as selected formatting not always being displayed or applied directly. This will surely improve in subsequent versions. The print function also has a few bugs and does not always deliver perfect results.
What the future holds
Figure 1: Illustration of a simple MindMap
Apart from getting rid of the individual bugs, many additional features are planned for future versions: a complete file browser will be integrated, an HTML/XML editor is to be developed and an applet to display MindMaps in the browser will be added. There are also plans to develop a MindMap server, so that several users will be able to work together on a MindMap over a network. All in all, Freemind, with all the planned features, is a highly promising project.
jEdit – a professional Java-based editor
PERFECTLY EDITED

Where once high-powered, but hard-to-learn, editors such as Emacs or Vi used to be the order of the day, there are now numerous alternatives. jEdit is a graphical editor which sets great store by user-friendliness. Sebastian Eschweiler puts its feature range through its paces

The jEdit application is not just a simple text editor, but an editor with many additional functions, which can considerably lighten the load of your daily work. Since jEdit is a very wide-ranging program, we will limit ourselves in this article to the basic functions. To give you a simple overview of the possibilities of jEdit, here is a rundown of its existing features: in addition to an unlimited undo/redo function, jEdit offers any number of buffers, which it refers to as registers. The editor also enables so-called markers to be set, which make it especially easy to find text positions again once they are marked up. jEdit lets you open as many files as you like at the same time in separate editor windows; each window can then be subdivided again as often as you like.

Syntax highlighting inclusive

The built-in syntax highlighting is of particular importance: a total of 60 different highlight modes are supported. These include all the main programming languages, including C++, HTML, Java, JavaScript, JSP, Pascal, Perl, PHP, PL-SQL and XSL. By far the most important feature, though, is the fact that jEdit can be expanded by means of plug-ins. The built-in plug-in manager makes adding and removing plug-ins easy, and even the downloading of plug-ins is integrated into it, so manual downloads are no longer required. For a list of all the plug-ins available there is a central starting point at http://plugins.jedit.org.

Installation

Before we go deeper into the abilities of the editor we'll need to install it. Thankfully this is really easy: the program files can be found at http://www.jedit.org under the heading "Download". There are two packages on offer for the Linux user, the first being an rpm archive and the second a package with an integrated installation program. Let's take a look at the installation with the aid of the installation program. After downloading the file jedit322install.jar, start the graphically supported installation by means of:

java -Djava.compiler=none -jar jedit322install.jar

Installation consists essentially of three steps, in which you select the installation directories and the components to be installed. Since these steps are self-explanatory, we will not go into them in detail here. The installation should be performed by the administrator, root. Once successfully completed, jEdit should start when you enter jedit. On some distributions there is already an editor of the same name (part of the jstools package), so it may be necessary to enter the full path for our Java version.
Figure 1: Installation in two steps
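Where the jstools jedit shadows the Java version, one workaround is a small wrapper function in ~/.bashrc that pins the full path. This is a sketch under the assumption that jEdit was installed into ~/jedit and is started via its jar file; the function name and path are our own inventions, not part of the jEdit package:

```shell
# Assumed install directory -- substitute the one you chose during
# installation.
JEDIT_HOME="$HOME/jedit"

# Wrapper that always starts the Java jEdit, passing any file
# arguments straight through to the editor.
jedit_java() {
    java -jar "$JEDIT_HOME/jedit.jar" "$@"
}
```

`jedit_java notes.txt` then opens the file in the Java editor regardless of what else is on the PATH.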
Using the program

Now that the installation has been successfully completed, on first start you should see a window similar to Figure 2.

Figure 2: jEdit after starting

The first input area, called a buffer in jEdit, is already open and you can start entering text immediately. The input area can now be altered as you like: the menu items Utilities/Buffer Option and Utilities/Global Options offer all conceivable options. The palette ranges from simple colour adjustment through printer settings to changing the drop-down menus. You thus have the option of adapting practically every single element of jEdit to your own requirements and wishes. The standard functions of an editor, such as opening and closing files, searching within files and cutting and pasting text, are obviously all here, but we will not go into them in any more detail since they are self-explanatory. The extended functions of jEdit are more interesting.

Macros

jEdit includes full macro support. Under the menu item Macros you will find a macro recorder, which allows you to record and save your own macros simply, so you can play them back as required. jEdit comes with a range of ready-made macros covering many useful functions; they can also be found in the Macros menu, divided into five sub-menus: Files, Java, Misc, Search and Text. If you want to record your own macro, select the menu item Macros/Record Macro.... You will be asked for a name under which the macro will be stored. When you have finished the recording, simply select the menu item Macros/Stop Recording.

Plug-ins

As already mentioned, one of the most important features of jEdit is its support for plug-ins. Since there are a great many plug-ins available, jEdit can be expanded with many useful functions, turning the editor into a multifunctional application. Plug-ins can be installed directly from within the editor via the integral plug-in manager, which first fetches the list of available plug-ins from the specified Web site and then presents the user with a clear selection dialog. The marked plug-ins are then automatically downloaded and installed without further intervention by the user. After that, all you need do is restart jEdit, and the added plug-ins will be at your disposal.

Figure 3: The selection list of the plug-in manager
Macros: automated tasks within a program. Often, you will be offered a function for recording a macro. This means that keyboard inputs and mouse clicks are saved by the program and repeated identically when the stored macro is called up later.
Figure 4: The plug-in manager during installation
The plug-in manager is reached via the Plugins/Plugin Manager... menu. In the Plugins menu you will also find menu items for plug-ins already installed. Here there are lots of options for installing and adapting the various expansions.
In conclusion

The purpose of this article was to give a brief insight into the wide-ranging features of jEdit. Whether you are seeking an editor for programming or simply want to adapt configuration files, jEdit offers ideal support in either case and is especially easy to use. Add the extended functions that can be reached via plug-in installations, and jEdit evolves into a multi-talented application. It's well worth having a look at this program.
Scheduling appointments and tasks with KOrganizer
ALWAYS SOMETHING THERE TO REMIND YOU

The Outlook calendar has helped Windows users overcome the problem of forgotten appointments, and with KOrganizer you don't have to forgo punctuality and order under Linux either. Anja M. Wagner finds out why

The KDE's scheduler KOrganizer offers nearly as much usability as Outlook's built-in calendar. Their functionality and desktop structure are very similar, so the switch from Windows to Linux's KDE desktop needn't require a period of readjustment. KOrganizer is a standalone application and is not integrated into an email client such as KMail; nevertheless, it is possible to send invitations to attendees via email. Should you be looking for KOrganizer in the standard installation of SuSE Linux 7.2, on which this workshop is based, you will be searching in vain, as this tool has to be installed separately. That's no problem though, as it is one of the software packages included in the distribution.

YaST: YaST stands for Yet another Setup Tool and is the tool that guides you through the installation of the operating system in the SuSE distribution. Subsequent changes to the system, such as the installation of additional software packages, setting up an Internet connection, network setup or the installation of new hardware, are also performed using YaST2. As an alternative there is also the text-based YaST, where some other configurations are carried out and which requires fewer resources.

Start YaST2 via K/SuSE/System/Configuration/YaST2 and enter the root password. Launching YaST2 will take a moment. Select Software/Install/Delete Software. KOrganizer can be found in the package kdepim – Personal Information Manager; you will find the software under X11/KDE/Basics. Double-click the package (causing an "x" to appear at the beginning of the line) and confirm with OK. The software will now be installed on your system, and you can then find the scheduler in the start menu under K/SuSE/Office Applications/Organisation/KOrganizer. For a frequently used application like a scheduler it is handy to be able to start the program via the panel or desktop. If you would like to create a panel button for KOrganizer, open the context menu by right-clicking a free space on the panel and select Add/Button followed by the menu path.

You can export your Outlook calendar to KOrganizer

There is some good news to start with: you can import your Outlook calendar into KOrganizer. To do this, you need to export the data in the vCalendar format (.vcs files). The easiest way to do this is to select File/Import and Export in Outlook. A wizard appears. Select "Export to a File" and then "Personal Folder File (.pst)". The folder from which you want to export is the calendar, so you need to select that. In the following step, "Save exported file as", click on Browse and enter a name for the file to be saved, with the extension ".vcs". Set the file type to "All Files" and if necessary delete the default format "*.pst". Now copy this vcs file into your home directory. In KOrganizer select File/Open, look for the file and double-click it.
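The exported file is plain text in the vCalendar 1.0 format, so it is easy to inspect in an editor if an import fails. As a rough illustration, a minimal hand-written .vcs file looks something like this (the appointment details are invented):

```text
BEGIN:VCALENDAR
VERSION:1.0
BEGIN:VEVENT
SUMMARY:Dentist
DTSTART:20020131T093000
DTEND:20020131T100000
DESCRIPTION:Ask about the crown
END:VEVENT
END:VCALENDAR
```

If KOrganizer refuses a file, checking that these BEGIN/END pairs are intact is a quick first sanity check.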
Let's get to work

Home directory: Linux automatically creates a home (i.e. personal) directory for each system user. It is similar to the "My Documents" folder under Windows.
Now we come to setting up and working with KOrganizer. The desktop is divided into three sections. In the top left-hand corner is a calendar view of a month, which doesn’t contain text, only numbers with letters representing the days of the week. This is called the date navigator. Days for which appointments have been entered are represented in bold, but more on that later.
The constituents of the KOrganizer desktop
Below the date navigator is a field listing existing tasks. The largest part of the screen is taken up by the calendar view. As in Outlook, this can be switched between a day view, a view of the working week from Monday to Friday, a view of the entire week or a month view using the appropriate buttons on the right side of the toolbar. That should be enough to give you a rough overview. The date navigator is a sort of eternal calendar: you can use its four navigation buttons to move backwards and forwards in time. The buttons with double arrows change the year, while the single-arrow buttons change the month. We stopped our backwards journey at the time of the French revolution of 1789; making an early note of your trip on the USS Enterprise in the year 2367 also presents no problems. Clicking on a date in the date navigator causes the day view for that date to appear in the main window. Shades of grey are used to distinguish between working and other time; you can customise this according to your own preferences. But before we get involved in design matters we will first explain how to enter appointments and tasks, using some examples. For instance, to enter a dentist's appointment in the calendar, select Actions/New Appointment. An entry form appears in which you can input the details of the new appointment. On the "Summary" line enter the text to be displayed in the calendar. Some sort of heading, like "dentist", would be useful, but this is not always possible: if the text you have entered is too long it will be truncated in the calendar. We will explain later in the workshop how
you can still display it in its entirety. Now select the date of your visit to the dentist and enter the start and end time of the appointment. You can select the times from the dropdown menu down to the nearest quarter hour, or you can manually enter any time you like. Leave the option “Recurring Event” deactivated, as this is a one-off, or at least an irregular appointment. On the other hand, it might be good to activate the “Reminder” option to ensure the dentist doesn’t end up waiting for you in vain. You specify how many minutes, hours or days before the event you want to be reminded. Click on the icon with the note and select a sound file. The duration of the appointment can be shown as “busy” or “free”, although this does not seem to make any difference to the way it is displayed. In the large text field you can enter additional information and details for the appointment if required, for instance the address and telephone number of the dentist or a question you are planning to ask him. The priority of each appointment defaults to one, the highest, though it can be set as low as five.
Entering a new appointment
Finally, the appointment can be assigned a category. Click on the Category button and select one of the twelve entries available, among them meeting, business, birthday, etc. You can also add your own categories: click on Edit Category and write, say, "doctor" in the text field at the bottom of the new window. After clicking on Add the new category will appear in the list. Highlight a category in order to remove or edit it, for instance to change "business" to "job" or "work". Tick your chosen category and confirm with Apply/OK. The selection of a category is not mandatory, but since you can set the KOrganizer desktop to show each category in a different colour, it can make it easier to find your way around the calendar (see below).

Viewing your reminder
No rules without exceptions
Unlike in Outlook, you cannot access the entry form for creating, viewing or editing appointments by right-clicking on the day view, but double-clicking does work. Depending on which half-hour segment you double-click, that time is set as the start time.
Smart rule design saves work
Recurrences

You only need to set up a regularly occurring appointment once. KOrganizer will then create appointments in the appropriate places according to the rules you define. You meet up once a month with friends to discuss Linux? If you do this on every third Wednesday of the month, go back to Actions/New Appointment on the menu bar. The settings are the same as for a one-off event, but this time you activate the option "Recurring Event". This takes you to the Recurrence tab, where you specify the criteria for the recurrence of the appointment. Is it a daily, weekly, monthly or annual event? The entry field on the left changes according to the frequency of repetition. For our example choose "monthly", then specify that the event takes place on the third Wednesday of every month. It is also possible to set up every second or third month. You can specify exceptions from this pattern, for example public holidays. Under "Exceptions", click on the button to the right of the date field. A small calendar opens; select the exception and click on Add. Each exception needs to be added separately, so only appointments with irregular exceptions should be entered in this way. In the case of regular exceptions you ought to amend the recurrence rule instead. An event that only occurs on certain days of the week, for instance only from Mondays to Fridays, should be marked as weekly with the relevant days ticked. You can also define the end of a regular event, for example if you're taking ten dance classes. It is, of course, also possible to enter annual events such as birthdays, anniversaries and the like. Events that are not associated with a particular time appear at the top of the day view, making them more noticeable.

You can move or change one-off appointments with the mouse in the day or week view. If you click on an appointment and keep the mouse button pressed, the pointer turns into a cross. You can now "grab" the appointment and move it to a different part of the day, or even to another day by using the date navigator. If you touch the border of the appointment field with the mouse pointer it changes into a double arrow; you can then move the border and change the duration of the appointment. Recurring events cannot be moved with the mouse. To change these, open the context menu by right-clicking and select Edit to change the settings. Clicking on View in the context menu shows a complete summary of the heading, date, time and any other information you entered when you created the appointment. The option Reminder On/Off allows you to activate or deactivate the alarm.
It’s all over after ten appointments
It is also possible to configure appointments with several attendees. However, the group scheduling feature has not been implemented yet. When creating group appointments you initially follow the same steps as for any other appointment. Then select the Attendees tab. In the text boxes enter the name and possibly the email address of the attendees. A dropdown menu allows you to select their role in the meeting from attendee, organiser, owner and representative. There are eight status options for attendees: preparation required, accepted, sent, attempted, confirmed, rejected, completed and delegated. Once you have defined the features click on Add and the attendee appears in the large text field. Should you need to amend the status of an attendee later on, for instance because he has cancelled, highlight him with a mouse click, change the status and then click on the Edit button. If invitations are to be emailed you need to have entered the address. Close the edit window with
Issue 18 • 2002
LINUX MAGAZINE
39
KNOW HOW
Apply/OK, click on the appointment in the calendar and select Actions/Appointment By Email from the menu bar. The recipient of your message will see the appointment’s summary line as the subject and the date, start and end time in the body of the message.
When the going gets tough, the tough start task lists
No set date: todos
There are many things in life we have to do that are not attached to a specific date. The car needs to be washed, the tax return has to be done, the suit must be taken to the dry cleaners and lunch with a colleague should really be arranged. All these things come under the heading of todos. Whenever you start up KOrganizer you will be reminded of any outstanding tasks by the list of incomplete todos in the bottom left-hand corner. To enter a new task on the todo list select Actions/New Todo from the menu bar. The process is similar to creating a new appointment, but you activate the options “Without Date”, “No Start Date”, “No Fixed Time”. Todos can also be assigned categories. The button on the left above the day view opens a large window showing all tasks. For more complex tasks – a tax return, for example – you can set up subgroups such as “find P60”. Open the context menu by right-clicking on a todo and select New Sub-Todo. Todos can be moved within the todo window or copied into another open KOrganizer using drag & drop. Completed todos are
Customising the scheduler
marked with a tick. They remain on the list until you right-click them and select Purge Completed.
Configuration
To configure your own individual scheduler go to Settings/KOrganizer Set-up on the menu bar. In the Personal section you can tell the scheduler who you are. Text boxes are available for your name, address and email address. This is also where you specify whether, and if so at which intervals, the calendar will be saved automatically, as well as which holidays are to be displayed in the calendar. Since you can set up several KOrganizers, one of them can show the holidays for a different country. That way you will know, for instance, when your friends in the US are enjoying a day off. When you email invitations to appointment attendees you can receive a copy of the mail yourself if you activate the option “Send Copy To Owner When Mailing Events”.
Setting time zones and working times
The section “Time & Date” relates to the five-day week view. The working day is pre-set to start at 8am and end at 5pm. You can adjust this to reflect your working day by entering your own times. The most noticeable changes can be made in the “Colors” section. You can choose colours to distinguish between work and leisure time as well as for the appointment categories, allowing you to tailor the appearance to your own preferences (as is normal in KDE applications). To choose a colour click on the appropriate button on the right and select a colour from the colour palette using your mouse or enter the relevant HTML colour code. Colour selection is particularly useful for categories. The standard setting is to display all categories in a light shade of grey. If you emphasise a category by assigning it a more noticeable colour you will be able to tell at a glance which sort of appointment you’re looking at. In the category dropdown menu select, for example, “special” and click on Select Colour. You can change the font and its size under Font. The section Views contains some more useful
options. If you specify here at what time your day begins, the hour you have selected appears at the very top of the day view. That means you have to scroll to get to earlier times, since, like in real life, every day actually starts at midnight. The month view does not have much space to display your appointments, so the headings are truncated after only a few characters. However, it is possible to set up a scrollbar for the individual day fields. If you activate the relevant option you will be able to read the whole entry by scrolling along it. The easily cluttered month view can be maximised to fill the entire screen. The same is true of the todo list. If the relevant options are deactivated these views form part of the main window, which takes up two thirds of the desktop. A very useful option is “Enable tooltips for event summary display”. When this is activated and the mouse pointer is moved over the summary displayed in the calendar the entire text is displayed in a separate field. The principle should be familiar from the tooltips for buttons in Windows as well as KDE. The two options “Show Daily Events In Date Navigator” and “Show Weekly Events In Date Navigator” should be deactivated – that way only irregular appointments are shown in bold in the navigator. Otherwise it won’t be long before all numbers are bold and the emphasising effect is lost.
Even without a laptop you can take KOrganizer with you wherever you go, as a paper printout. Under Print you can define what sort of paper is in the printer. When printing via the menu bar you can choose between the day, week and month views and the todo list. A print preview option is available for each option.
Categories in striking colours make the calendar clearer
Search function included
Once you have used KOrganizer quite heavily for a while you will get to the point of asking yourself “when was that again?” without wanting to click through the whole scheduler with the mouse. There’s no need to, either, thanks to a useful feature to be found under Edit/Search on the menu bar. Simply enter a search term, which can include the wildcard “*” for any number of characters and “?” for a single character. You can also limit the search by time or subject (headings, descriptions and/or categories). Double-clicking on a search result opens the appointment, enabling you to edit it. In theory KOrganizer should be able to archive old entries, but unfortunately the tool seems to crash in the attempt. You are not limited to a single calendar – via File/New you can create additional ones. All calendars are written to your home directory as vcs files. Use File/Open to select another calendar. However, only one calendar can be active; this is the one that will be loaded automatically when starting the program and also the only one that can trigger alarms. The status is displayed in the title bar of whichever KOrganizer window is open. To activate another calendar, open it and select File/Activate. It is possible to merge several calendars. Click on File/Merge Calendar. The directory containing all calendar files opens and you can select the one that is to be merged with the calendar that is currently open. The calendar that is added in this way remains intact.
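Incidentally, the vcs files mentioned above are plain-text vCalendar data, so they can be inspected, backed up or merged by hand. As a rough sketch, a single recurring event inside one might look like this – the exact fields and the recurrence-rule syntax depend on your KOrganizer version, and the summary shown is purely illustrative:

```
BEGIN:VCALENDAR
VERSION:1.0
PRODID:-//K Desktop Environment//NONSGML KOrganizer//EN
BEGIN:VEVENT
SUMMARY:Linux user group meeting
DTSTART:20020116T190000
DTEND:20020116T210000
RRULE:MP1 3+ WE #0
END:VEVENT
END:VCALENDAR
```

The RRULE line sketches the vCalendar 1.0 grammar for “every month, on the third Wednesday, forever” – the sort of rule the Recurrence tab generates for you.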
Month view with and without scrollbars
Linux Authentication: Part 2
THE KERBEROS NETWORK AUTHENTICATION SYSTEM
In the second article in this series Bruce Richardson looks at how Kerberos can be used to implement a centralised network authentication system
What is Kerberos?
Kerberos is an authentication system designed to provide secure remote authentication and encrypted access to network services based upon that authentication. It’s fast, relatively easy to set up, an open standard and Open Source software (though proprietary implementations of the standard also exist). Kerberos was developed at MIT as part of Project Athena, the university’s distributed network computing project. Kerberos is also the name of the three-headed dog which, according to Ancient Greek legend, guards the entrance to the underworld. Due to its strong encryption, the full MIT Kerberos code is classed as a munition and cannot be obtained outside the US. To get around this, a version of the code was stripped of all the encryption (and given the slightly ghoulish nickname “E-bones”). Developers at the Royal Institute of Technology in Stockholm then reimplemented the encryption. Their version of the code is called “Heimdal”, named after the Viking god who guards the entrance to Valhalla. The Kerberos protocol is currently on version 5. Version 4 was the first version that was stable and secure enough for practical use, but it has significant disadvantages compared to V5 and should be treated as an item of historical interest only.
How does it work?
The Kerberos authentication system is based on tickets. It involves a simple three-step process:
● You identify yourself to a service.
● The service grants you a ticket.
● You use that ticket to get access to network resources.
The first time you go through this process is when you log in to your Kerberos realm. In step one you identify yourself to the Key Distribution Centre (KDC) by giving it your password. The KDC grants you an initial ticket. This ticket will act as proof of your identity until it expires (eight hours is the default lifetime).
When you access a Kerberos-aware service (see the section called Kerberos-ready applications) you go through this process again. In step one you identify yourself to the service by showing it your initial ticket. The service checks your ticket with the KDC and then gives you another ticket, which enables you to access its resources. That ticket is usually good for one session (login session, mail retrieval or whatever). Due to the initial ticket’s role in getting you further tickets it is usually referred to as a Ticket Granting Ticket (TGT).
The Kerberos network model
To understand how Kerberos works you should be familiar with the key components of a Kerberos network. The Realm is the organisational unit of the Kerberos network, comparable in many ways to the NT domain. Each realm is associated with a KDC and an admin server. It is entirely up to the system administrator how realms are named and which users/machines/services are members of which realms. The convention, however, is to map Kerberos realms to DNS domains and to give the realm the same name as the corresponding domain, only in upper case (realm names are case-sensitive). So the realm for charity.org would be CHARITY.ORG. If no domain is specified, as a command-line argument or in a config file, Kerberos software will assume that this convention has been followed. It is possible to establish trust relationships between realms, so that users in one realm may access services on another. This article does not go into that. Each realm has at least one Key Distribution Centre, which stores the password database and grants Ticket Granting Tickets. If a realm has more than one then one is the master and the others are slaves, synchronising their databases from the master. It is essential to keep your KDC secure: if it is cracked then your whole network is compromised. The administration server allows the Kerberos database to be manipulated remotely, enabling an administrator to add accounts, change passwords
etc. It is not essential to run one: you could make all changes while directly logged in to the KDC, which would be secure if limiting. Admin servers are usually run on the same host as the KDC for convenience and security but this is not a requirement.
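On the client side, the realm-to-KDC mapping described above conventionally lives in /etc/krb5.conf. A minimal sketch for the CHARITY.ORG realm used in this article’s examples – the hostname kdc.charity.org is an assumption for illustration:

```
[libdefaults]
        default_realm = CHARITY.ORG

[realms]
        CHARITY.ORG = {
                kdc = kdc.charity.org
                admin_server = kdc.charity.org
        }

[domain_realm]
        .charity.org = CHARITY.ORG
        charity.org = CHARITY.ORG
```

The [domain_realm] section spells out the DNS-domain-to-realm convention explicitly, so clients in charity.org know which realm a given hostname belongs to.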
Tickets
Every service available through Kerberos requires a ticket. Each service requires a different kind of ticket, but all tickets have these things in common:
● They are issued to a specific principal, granting access to a specific service.
● They have a fixed lifetime, after which they expire if not explicitly renewed.
● They are issued for a specific host. That is, by default they can only be used from the host on which they were requested (see the section called A typical user session).
Credentials cache
The credentials cache stores all the tickets you have been issued during your current Kerberos session. By default this is a file in /tmp readable only by you, but this is configurable. It is possible to open multiple concurrent Kerberos sessions, in which case you will have multiple caches.
A typical user session
Fred is already logged on locally at his Linux workstation but hasn’t yet logged in to the Kerberos realm. To do this he uses the kinit utility:

$ kinit
fred@CHARITY.ORG’s password:

Because the local Kerberos config files do not specify a domain and because he passed no special arguments to kinit, kinit assumes that his principal name matches his local account name and that the realm is the upper-case version of the local DNS domain. Luckily, this is correct and once he has typed in his password he is issued a TGT. The hosts on Fred’s network run kerberised telnet daemons. Fred decides to log into his network’s mailhost:

$ telnet -x -l fred mailhub.charity.org
trying 192.168.10.12...
Connected to mailhub.charity.org (192.168.10.12)
Escape character is ‘^]’
Negotiating encryption...
Last Login: Dec 22 14:03:45 from workstation.charity.org
$

Note that Fred didn’t need a password and that his Telnet session is encrypted.
Principals
A Kerberos principal is roughly analogous to a Unix account. It may represent a human user, a machine or a network service. Principal names are constructed from up to three components (in practice you will never see more than two) and the realm name, in the form component/component/component@realm. The first component is referred to as the name, the second as the instance, and there is as yet no standard use for the third. A typical principal name would be fred@CHARITY.ORG. Just as there is a convention to map realm names to DNS domains, so there is one to name user principals after Unix accounts. As a result, users’ principal names often match their email addresses. If Fred were an administrator then he would usually also have an account fred/admin@CHARITY.ORG, which he would use to access the admin server. Note that although this extra account is referred to as Fred’s “admin instance” there is in fact no link between fred@CHARITY.ORG and fred/admin@CHARITY.ORG. They are completely separate principals with different passwords and network privileges. Fred could log into the admin server as vendingmachine/repairman@CHARITY.ORG if there were such an account. It is simply the convention to name organisationally related accounts in this way. If Fred runs the kadmin utility without specifying a principal then it will assume that fred/admin@CHARITY.ORG is the principal as whom it should try to connect.
At this point, Fred remembers that he has something he needs to do on the proxy server. So:

$ telnet -x -l fred squid.charity.org
trying 192.168.10.1...
Connected to squid.charity.org (192.168.10.1)
Escape character is ‘^]’
Debian GNU/Linux 3.0 squid
squid login:
Oops. Fred forgot that Kerberos tickets are, by default, only good for one host. His tickets are no good on mailhub – in fact they don’t even exist on mailhub, having been left behind on workstation. So the kerberised telnet service fell back on the plain old unencrypted and insecure standard. Now, Fred could run kinit on mailhub but that would be insecure: the whole point of Kerberos is
that your password is not transmitted across the network. So he logs out of mailhub, returning to workstation. Out of curiosity he checks to see the details of the tickets he has acquired so far:

$ klist
Ticket file: /tmp/krb5cc_1002
Principal: fred@CHARITY.ORG
Issued           Expires          Principal
Jan 05 12:37:22  Jan 05 20:37:22  krbtgt/CHARITY.ORG@CHARITY.ORG
Jan 05 12:38:12  Jan 05 20:37:22  host/mailhub.charity.org@CHARITY.ORG
This shows him the original TGT and the Telnet ticket from mailhub. Note that the Telnet ticket expires at the same time as the TGT used to obtain it: a service ticket may expire before the original TGT but may not outlive it. But Fred wants to start afresh, so:

$ kdestroy
Tickets destroyed
$ kinit -f
fred@CHARITY.ORG’s password:

His new ticket is now forwardable. If he re-runs Telnet, adding an -F option, his TGT will follow him to the new host and to any other host he telnets into from there. If he runs telnet with the -f option then his TGT will follow him to the new host but will not be further forwardable (i.e. if he telnets to mailhub
using the -f option then he will be able to telnet without a password from there to squid but not from squid to anywhere else.) For a glimpse under the hood of Kerberos, have a look at the sidebar A Kerberised Telnet session in detail, which gives a detailed technical account of how the telnet session is authorised. One thing to take particular notice of is the paranoid and secure fashion in which Kerberos creates an encryption key for the session. It is this key which provides the mechanism for encrypting the subsequent telnet communications. In this fashion any properly kerberised application can enjoy the benefits of secure encrypted operation across the network.
Using Kerberos on your network
Unless you are a highly skilled developer, there are essentially three ways to use Kerberos on your network:
● Install services (and clients to access them) which have already been developed to use Kerberos. Do check the documentation to see how fully the application supports/uses Kerberos: some applications only use it for authentication, others make full use of its features to enable secure, encrypted communication.
● Install services/clients which use a generic high-security mechanism (e.g. SASL, GSS-API) that can use Kerberos as a backend. These generic security layers are actually more complex than Kerberos and an application that properly supports them can make full use of Kerberos security.
A Kerberised Telnet session in detail
To give an idea of how paranoid Kerberos security is, here is that Telnet session in detail:
● Fred sends a request (using his Ticket Granting Ticket) to the KDC: “I want to talk to the Telnet daemon on charity.org” (well, the kerberised Telnet client does it, but let’s keep this simple).
● The KDC generates a new session key, which Fred and the Telnet daemon will use to secure their communication.
● The KDC sends two messages to Fred: the first contains a copy of the new key and the name of the remote Telnet daemon and is encrypted using Fred’s key. The second contains a copy of the new key and Fred’s name and is encrypted using the Telnet daemon’s key (this is Fred’s “ticket” to talk to the Telnet daemon). Note: the KDC is not involved from this point on.
● Fred decrypts the first message (he can’t decrypt
the second as he doesn’t have the key) and extracts the new session key. ● Fred creates a message containing the current time (the “authenticator”) and encrypts it using the session key. ● Fred sends the new message and the ticket he received from the KDC to the Telnet daemon. ● The Telnet daemon decrypts the ticket from the KDC (passed on to it by Fred) and extracts the session key and Fred’s name. ● The Telnet daemon uses the session key to decrypt the authenticator from Fred and checks the time. ● At this point, Fred has authenticated himself to the Telnet daemon and they can use the session key for further communication. But Fred may want the Telnet daemon to authenticate itself to him, in which case: ● The Telnet daemon takes the timestamp from Fred’s authenticator, adds its name and encrypts the result with the session key to create its own authenticator, which it sends back to Fred.
● Install the PAM Kerberos 5 module and use that to integrate Kerberos into your network authentication policy.
Kerberos-ready applications
The Kerberos source comes with a selection of kerberised replacements for standard Unix apps (Telnet, ftp, rsh etc.). While these are interesting to experiment with, they are based on creaky old code and I wouldn’t advise using them seriously on your network. Kerberised versions of the more recent Linux apps are out there. There is an ever-increasing number of serious applications available using Kerberos authentication, either directly or through GSS-API or SASL. This includes PostgreSQL, OpenLDAP and Cyrus IMAP. Of particular interest is Cyrus IMAP, which will not only use Kerberos for authentication and encryption but can also use it to store group membership information (Cyrus employs a sophisticated system of group and user permissions to allow access to mail folders). Of course, you’ll need a mail client that can use these security mechanisms. Mutt is a good example for Unix and the respected Eudora mail client does the same for Windows. One very interesting Kerberos-based application is the Andrew File System, which uses the Kerberos security model to provide a distributed network filesystem. It’s rather more sophisticated than NFS and much more secure!
PAM
PAM offers the crudest way to integrate Kerberos into your network. It provides a relatively simple authentication interface with no provision for the encrypted communications features of Kerberos. Still, if you add the Kerberos module to the stack of the Linux login app then it will authenticate the login against the KDC, fetch a TGT, store it and destroy it when you log out. If you combine the Kerberos module with the mkhomedir module, which automates the creation of local home directories for newly authenticated users, you can implement your own roaming logon system (assuming you are fortunate enough to have Linux desktops in your workplace). Of course, you are all now PAM experts, having read the first article in this series, and will find this no challenge at all.
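As a rough sketch, the stack for the login service (/etc/pam.d/login) might combine the two modules like this – module names, option spellings and file locations vary between distributions and pam_krb5 implementations, so treat this as illustrative only:

```
auth       sufficient   pam_krb5.so
auth       required     pam_unix.so use_first_pass
account    required     pam_unix.so
session    required     pam_mkhomedir.so skel=/etc/skel umask=0077
session    optional     pam_krb5.so
```

Here a successful Kerberos authentication is sufficient on its own, with ordinary Unix passwords as the fallback, and pam_mkhomedir creates a home directory on first login.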
Working with Windows 2000
You may have heard that Active Directory bases its security model on Kerberos. This is true and, although Microsoft have, as usual, “embraced and extended” the protocol, it is still possible to authenticate users against an Active Directory server and even to create trust relationships between Kerberos and Active Directory domains. See the Info box for details.
An admin session
Fred has to do some admin work on the Kerberos realm. First he needs to connect to the admin server. Because he doesn’t specify a principal, kadmin assumes he wants to connect as fred/admin@CHARITY.ORG. Note that it is only when he asks for Wilma’s details that he is asked for his password.

$ kadmin
kadmin: getprinc wilma
Principal: wilma@CHARITY.ORG
Expiration date: 2004-01-12 14:22:35
Last password change: 2001-12-22 09:31:05
Password expiration date: 2002-03-22 09:31:05
Last modified: 2001-12-22 09:31:05 (fred/admin@CHARITY.ORG)
Last successful authentication: 2001-12-21 09:35:43
Last failed authentication: 2002-01-05 11:20:19
Failed password attempts: 3
Number of keys: 1

If you look at the information Fred retrieved about Wilma, you’ll see that she’s come back from holiday and forgotten her password. So:

kadmin: cpw wilma
Enter password for principal “wilma”:
Re-enter password for principal “wilma”:
Password for “wilma@CHARITY.ORG” changed.

That done, Fred wanders off to get a coffee. When he comes back he finds that he has to re-authenticate himself, as the admin server has been set to grant tickets with five-minute lifetimes to secure it against careless nerds like him. This behaviour differs from that of the kerberised Telnet daemon, which will not abort a telnet session once the ticket expires but will refuse to authorise any fresh ones.
Summary
If this article has done its job then you have learned how Kerberos can bring centralised, secure authentication, user administration and reliable encrypted communications to your network. You’ve seen practical examples of its use and an overview of its architecture and philosophy. So why aren’t you using it? What do you have that’s better?
Info
Kerberos FAQ http://www.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html
Heimdal http://www.pdc.kth.se/heimdal
Kerberos for Morons http://www.isi.edu/~brian/security/kerberos.html
Why not use Kerberos? http://www.redhat.com/docs/manuals/linux/RHL-7.2-Manual/ref-guide/s1-kerberos-whynot.html
Win2K Kerberos Guide http://www.microsoft.com/windows2000/techinfo/planning/security/kerbsteps.asp
The pitfalls of DNS
DNS SUBTLETIES
DNS is a distributed system that handles the correspondence between hostnames and IP addresses. In this article Wednesday White aims to discuss some of the more interesting cases that you may fall afoul of once you’ve got a basic DNS implementation up and running.
Alternative DNS servers
It’s not compulsory to use the ISC’s BIND package for DNS, and not everyone does; even if you do use BIND, you have a choice between the more stable and better understood version 8 and the relatively new version 9. Version 9 introduces some ingenious new features, some of which I will discuss below, but is probably more prone to new security holes being found and to general instability. I wouldn’t suggest using version 9 unless you need one of these new features. The other viable alternative is a package by Dan Bernstein (DJB) called “djbdns”, available from http://cr.yp.to/djbdns.html. Djbdns is free of cost, but the license is not Open Source (although the source is open for inspection), meaning your GNU/Linux distribution probably does not include it; a more serious drawback of djbdns is that it assumes that you wish to organise your systems exactly as DJB would; and of course article authors will persist in being awkward and discussing everything in terms of BIND. However, djbdns is believed to be extremely secure; at this time, no security holes in it have been exploited, and plenty of people have been looking – DJB offers a reward of $500 for finding one. BIND, by comparison, has been compromised all too often. If you are paranoid and want to run a DNS server that provides a service to the whole world or to possibly malicious people, djbdns may be for you. Djbdns also offers superior performance, but that’s unlikely to be an issue except for the largest of sites. The most ingenious feature of djbdns is that it
Multiple DNS servers If you’re running DNS in anything other than your home, pretty soon you’re going to want to have more than one DNS server. But how many? In medium to large sized organisations, DNS servers can serve many functions. You’ll want lightweight caching-only servers over a large network, to provide low latency answers to users; you’ll also have nameservers that are connected to the public Internet, both to pass requests out from internal users and to provide information about the domain or domains you run yourself – and you probably want these to be separate machines, since the machines that accept requests from the outside world are necessarily more of a security liability and will ideally be placed in a DMZ. If you run a multiple horizon set-up, you’ll need a second set of nameservers that provide the internal view of your domains.
divides up the features of a DNS implementation – caching, normal queries, answering zone transfers – into separate programs, making it easy to provide only the functionality you require on a particular server. Microsoft’s Windows NT also provides a DNS server, but I will not be discussing it here.
More about security
Don’t forget to subscribe to a relevant mailing list for security alerts – probably the one run by your chosen GNU/Linux distributor. Ensure you are running up-to-date versions of your software; 8.1.3 or 9.2.0 for BIND, 1.05 for djbdns at the time of writing. It is well worth disabling zone transfers except from approved machines (with the allow-transfer statement, when using BIND); this should normally only be those machines that slave zones off a particular DNS server. This will prevent the black hats from grabbing a complete copy of your zone and looking through it for attractive targets, and will also prevent (possibly accidental) DoS attacks on your server with a series of zone transfer requests. If you run a “hidden primary” configuration, you may be able to disallow all requests to that server except from its slaves, not just zone transfers. Do firewall off port 53 except to servers that actually provide a DNS service to the outside world; don’t make the mistake of permitting only UDP port 53 because “only zone transfers use TCP” – any reply over a certain length will use TCP port 53. Consider also filtering outgoing port 53; you’ll want to ensure that outgoing requests come only from the server or servers you want them to, especially if you’re running a multiple horizon set-up – a set-up where the same domain has different data for internal users and external queries, which is very common if you don’t want random people to know all the names of your internal machines. However, this is only appropriate if your internal machines’ IP addresses can never be used on the public Internet (owing to some kind of NAT arrangement, perhaps using the RFC 1918 reserved ranges) – if their IP addresses are visible, they ought to have names, too!
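In BIND, the zone transfer restriction described above is a short addition to named.conf. A sketch, with a hypothetical zone and slave addresses – substitute the addresses of your own slaves:

```
// Global default: nobody may request zone transfers...
options {
        allow-transfer { none; };
};

// ...except the known slaves, listed per zone
zone "charity.org" {
        type master;
        file "db.charity.org";
        allow-transfer { 192.168.10.2; 192.168.10.3; };
};
```

Setting the global default to none and then opening it up zone by zone means a newly added zone is closed to transfers until you decide otherwise.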
Reliability
Various system-monitoring scripts exist which can monitor several DNS servers and check they are all still answering queries; however, when setting up
such a thing, it’s all too easy to ensure that the failure of a single monitoring machine or of your mail system completely disables alerting! Be careful. It’s not much good having several DNS servers if they are all on the same network subnet, where the failure of a single router or switch can take them all out; try to ensure that all your DNS servers could only be rendered inaccessible if the network was completely unusable. If your organisation is large enough to have more than one route to the Internet, try to ensure that your DNS architecture has at least one server using each one. Conversely, hardware for DNS servers does not need to be hugely expensive – although shelling out a little more is often worthwhile. The DNS is designed so that at every stage of the process, systems can have a choice of three or more servers to query; if you have avoided the network problems above, you will survive the failure of any particular server. However, you should ensure that you have copies of the configuration for each DNS server you possess in a number of places; then, when a particular machine suffers a terminal hardware failure, you can very easily produce another system with the same configuration to replace it – particularly if you use Free operating systems on cheap hardware, and can hence readily have spare machines with an OS installed ready to be used at any time.
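On the client side, that choice of servers is simply the nameserver lines in /etc/resolv.conf. A sketch with hypothetical addresses, one caching server per subnet so that no single switch failure removes them all:

```
# /etc/resolv.conf - addresses are illustrative only;
# list caching servers that sit on different subnets
search charity.org
nameserver 192.168.10.2
nameserver 192.168.20.2
nameserver 192.168.30.2
```

Note that the resolver traditionally honours only the first three nameserver lines, and tries them in order, so put the closest server first.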
Some common errors
Unfortunately there are more common errors than can be listed here; these are just some of the more awkward ones.
● The standards specify that an MX record – used for mail delivery – cannot point to a CNAME. Unfortunately, this usually appears to work OK, and so goes unfixed; nevertheless, it is a surprisingly awkward case for authors of mail transfer agents to get right, and should be eliminated. MX records should always point to A records.
● When editing a zonefile, leaving the trailing dot off a fully qualified domain name is an incredibly common error; this of course results in the zone being appended to the entry, producing absurdities like “reverse entry for 192.168.53.90 is snake.example.com.53.168.192.in-addr.arpa.” It’s easy to do; the answer is always to test an entry immediately after changing it and reloading the nameserver.
● A particular IP address can have multiple A records pointing to it; a common error is to fail to notice that a reverse record already exists when adding a second A record pointing to a given IP address, and to add a second reverse entry, which causes the nameserver to reject the reverse zonefile. The simplest answer is to keep reverse zonefiles sorted – then the previous reverse entry will be obvious when you try to add the new bogus one.
● Failing to increment the serial number when editing a zone file causes remote nameservers not to notice that the zone has changed. Unfortunately, this is just a matter of training yourself not to forget – or using a tool like h2n that does it for you.
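The serial-number pitfall in particular can be scripted away. A rough shell sketch (not a hardened tool – the file path, contents and the convention of tagging the serial’s line with “; serial” are assumptions for illustration) that bumps a YYYYMMDDnn-style serial:

```shell
# Create a hypothetical zone file to demonstrate on
cat > /tmp/db.example.com <<'EOF'
@ IN SOA ns1.example.com. hostmaster.example.com. (
        2002010500 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; minimum
EOF

TODAY=$(date +%Y%m%d)
OLD=$(awk '/; serial/ {print $1}' /tmp/db.example.com)
if [ "${OLD%??}" = "$TODAY" ]; then
    NEW=$((OLD + 1))      # already edited today: bump the revision
else
    NEW="${TODAY}00"      # first edit today: start at revision 00
fi
sed -i "s/$OLD/$NEW/" /tmp/db.example.com
echo "serial: $OLD -> $NEW"
```

Run it (or something like it) as part of every zone edit and the serial takes care of itself; remember it is still up to you to reload the nameserver afterwards.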
Hidden primary
A "hidden primary" configuration is one where the master for a particular zone is not actually mentioned in the NS records for that zone at all; instead, a set of machines, all of which slave the zone off it, are mentioned. This has some advantages; the hidden primary never receives any DNS requests except approved zone transfers (no one knows its name, and it need not even be willing to answer ordinary queries), so will not be heavily loaded even if you run all your zones off it; and if you make an error editing a zone file and the nameserver refuses to load it, none of the nameservers that anyone actually uses will be refusing queries because they have no data for that zone. The benefit of concentrating all your zone files in one place without performance worries is considerable, and should not be overlooked.
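As a sketch (not from the article), a hidden primary arrangement in BIND's named.conf might look like this – the 192.0.2.x addresses are placeholders for your own machines:

```
// On the hidden primary (never listed in the zone's NS records):
zone "example.com" {
    type master;
    file "master/example.com";
    allow-transfer { 192.0.2.10; 192.0.2.11; };  // the public slaves
    notify yes;                                  // tell the slaves about edits
};

// On each public nameserver that IS listed in the NS records:
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };                      // the hidden primary
    file "slave/example.com";
};
```

The public servers answer all queries; the hidden machine only ever talks to its approved slaves.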
Multiple horizon

Traditionally multiple horizon setups have required two complete sets of nameservers, which is a pain. BIND 9 added the "split view" facility that, with appropriate configuration, allows you to load two different sets of zone files and answer requests based on the IP address of the calling client. In a hidden primary setup, the primary can use split view – with reduced security worries, since although it will run BIND 9 it need not accept DNS traffic from random machines at all – to serve both internal and external zone files to its slaves, permitting you to concentrate all editing on one machine.

A simplification of multiple horizon setups is to use a separate domain for all your internal entries; if your world-facing domain is "example.com", ensure that all your internal machines are in "internalexample.com" (however, you should ensure you register this domain) – then your multiple horizon setup need only ensure that your world-facing DNS servers believe they are authoritative for it and load an empty zone file for it.
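A cut-down sketch of the BIND 9 view mechanism just described – the match-clients range and file paths are illustrative assumptions:

```
view "internal" {
    match-clients { 192.168.0.0/16; };   // your internal networks
    zone "example.com" {
        type master;
        file "internal/example.com";     // full internal data
    };
};

view "external" {
    match-clients { any; };              // everyone else
    zone "example.com" {
        type master;
        file "external/example.com";     // world-facing data only
    };
};
```

Views are matched in order, so the internal view must come first; a client that fails its match-clients test falls through to the next view.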
Reverse DNS
Reverse DNS is something that is traditionally messed up; but I'd encourage you to make a break with tradition and get it right! Very few people do; ISPs are some of the worst offenders, with plenty of Internet-accessible machines (usually routing hardware) lacking reverse entries. If a given IP address is in use – if a computer has it assigned, or if any forward DNS entry resolves to that IP address – that IP address ought to have a reverse DNS entry; and that reverse entry ought to resolve to a name which can itself be looked up to yield the same IP address. Note that it's not a problem if elephant.example.com resolves to 192.168.53.76 and 192.168.53.76's reverse entry is rhino.example.com – provided that rhino.example.com also resolves to 192.168.53.76.

The first problem that you will probably encounter is that your ISP is unable or unwilling to delegate the relevant reverse ranges to you. This is very common with bargain-basement operations that will sell you a domain and delegate you the forward zone, but find reverse DNS to be a mystery. Normally this is just a matter of persuasion, but it's more difficult in the case where your IP address range is not what used to be a class A, B or C network (for example, when your subnet mask is not 255.0.0.0, 255.255.0.0 or 255.255.255.0). Fundamentally, the design of the in-addr.arpa zone used for reverse DNS is intended only to deal with those classful cases, since it predates the CIDR system now in use. If your ISP is not sufficiently competent, they will tell you it can't be done.

How can you deal with this? It's detailed in RFC 2317; on your end it's simple enough. You insert zone file definitions starting like this:

    zone "64/27.53.168.192.in-addr.arpa" {

into your named.conf – this one would be to deal with reverse entries in the 192.168.53.64/27 subnet, which contains 32 IP addresses. (Of course, the IP addresses here are from the RFC 1918 reserved ranges, and so would never be used on the global Internet.) Your ISP – which controls 53.168.192.in-addr.arpa, we hope – delegates 64/27.53.168.192.in-addr.arpa to you with lines like this in the zonefile for 53.168.192.in-addr.arpa:

    64/27    NS    <your name server>
    64/27    NS    <your other name server>

They also create one entry for each IP address in your subnet, like this:

    64    CNAME    64.64/27.53.168.192.in-addr.arpa.
    65    CNAME    65.64/27.53.168.192.in-addr.arpa.
    (and so on for 30 more entries up to)
    95    CNAME    95.64/27.53.168.192.in-addr.arpa.

Of course this is a pain, but they only have to do it once and all these entries can be automatically generated. Now you can create entries in your zonefile for 64/27.53.168.192.in-addr.arpa like this one:

    73    PTR    giraffe.example.com.

Now if someone looks up 73.53.168.192.in-addr.arpa, they will find a CNAME to 73.64/27.53.168.192.in-addr.arpa; they will find that 64/27.53.168.192.in-addr.arpa is delegated to you, and ask your name servers; and those will return the answer giraffe.example.com.

Alternatives to editing zone files

It's not really an alternative, but a lot of the pain of editing zone files can be alleviated by using a version control system such as GNU CVS. If you have more than one person editing zone files, I would go so far as to say that this is an absolute requirement. Beyond that, the venerable h2n script transforms lists of hosts and IP addresses into correctly formed zone files; it can readily include other chunks of zonefile, for things you cannot describe as host-IP pairs (like MX records). It increments the serial number, eliminating another common source of error. If you aren't doing anything overly complex, ensuring you edit lists of hosts and then run h2n on them can greatly simplify DNS maintenance. If you want to get more sophisticated, you will end up writing your own Perl scripts to find free addresses, free up old addresses, check the correctness of zonefiles, make the coffee, and so forth. This can certainly be an interesting project (and provides for people who faint at the words 'text editor'), but is probably overkill unless you really are running a huge DNS set-up. Some proprietary software vendors make "IP management" software; in my experience these are clunky, slow, painful to use, and do not provide even the most basic sanity checking. Steer clear.

LINUX MAGAZINE
Issue 18 • 2002
DNSSEC
This is perhaps the most significant improvement in BIND 9. The DNS is very vulnerable to 'spoofing' – the insertion into caches of bogus data designed to misdirect traffic to the wrong machines. A detailed discussion of DNSSEC would require another article, but essentially DNSSEC uses public key cryptography so that a zone can sign its subzones; hence, if I have a public key for "example.com" and I receive data for "animals.example.com", my nameserver can check that the source of data for "animals" has a public key signed by the owner of the private key for example.com, and hence that the data comes from an approved source. Ultimately, of course, the key signing "web" will come down from the root nameservers, so it will not be necessary to trust any keys at all – the key for example.com will be signed with the key for .com, which will itself be verified by the root nameservers. The BIND 9 Administrator's Manual contains a discussion of the necessary steps to get DNSSEC up and running; it's worth a look.
KNOW HOW
Qt tutorial – Part 5
GETTING STARTED WITH QT

One of the many benefits of using Qt is that it comes with a rich array of readymade widgets, which you can pick and use. Let's now take a quick look at some of these widgets and how we can put them to work.
Welcome to the fifth and final part of our series on using the Qt toolkit for creating graphical applications. In this issue Jono Bacon covers the remaining features that Qt has to offer. For any of the features that we don't have space to cover, refer to the documentation and the Qt-interest mailing list.

Office land here we come

One of the main uses Qt can be put to is building typical office-style applications, which use many of the widgets you see in normal everyday applications. Here is a breakdown of these widgets and the classes you can use for them:

Menus

The purpose of a menubar is to act as a placeholder for menus (which are called popup menus). The menubar is created using the QMenuBar class, and then each menu is created using the QPopupMenu class. Once these have been created, we can then use insertItem() to add an item to each menu. You can also set slots to connect to when you add the item. The following code creates a couple of menus on the menubar and then adds some entries:

    QMenuBar * menuBar = new QMenuBar(this);

    QPopupMenu * fileMenu = new QPopupMenu(menuBar);
    QPopupMenu * itemMenu = new QPopupMenu(menuBar);

    fileMenu->insertItem( "&New Item", this, SLOT( slotNewItem() ), CTRL+Key_N );
    itemMenu->insertItem( "&Edit...", this, SLOT( slotEditItem() ), CTRL+Key_E );

    menuBar->insertItem( "&File", fileMenu );
    menuBar->insertItem( "&Item", itemMenu );

When you have added items to a menu, you should get something looking similar to Figure 1.

Toolbars

Toolbars are widgets which can look similar to a menubar, but contain buttons (called toolbuttons) instead. The purpose of a toolbar is to present a button with an icon on it, which connects to a frequently used function or action. Toolbars use the QToolBar class, and contain toolbuttons built from the QToolButton class. A toolbar and its toolbuttons are usually used together with a QMainWindow, which we will cover later.

Status bar

The status bar widget is usually found at the bottom of the main window and is intended for showing concise information about what is currently going on. In many ways it is intended as a metaphorical dashboard for an application. The status bar is implemented using the QStatusBar class, and it basically has three different modes:

● Temporary – occupies most of the status bar briefly. Used for explaining tool tip texts or menu entries, for example.
● Normal – occupies part of the status bar and may be hidden by temporary messages. Used for displaying the page and line number in a word processor, for example.
● Permanent – is never hidden. Used for important mode indications. Some applications put a Caps Lock indicator in the status bar.

The status bar is often used with the QMainWindow class, which we will cover later.

Action time

Although it is perfectly fine to use the QMenuBar, QPopupMenu, QToolBar and QToolButton classes, it is often a good idea to also use the QAction class to build actions. Actions are basically things a user will do while using the application; e.g. opening a file, printing a document etc. The QAction class lets us group the user interface elements for these actions (menus and toolbars) so when we add an action, we get the necessary user interface elements automatically. Take a look at the QAction documentation for more details.

Class Mania!

Here is a quick rundown of the classes you would use for typical functions within your applications:

Create tab pages – QTab
Creating radio buttons – QRadioButton
Creating combo boxes – QComboBox
Creating a step by step wizard – QWizard
Let the user open a file – QFileDialog
Manipulate files – QFile
Manipulate regular expressions – QRegExp
Drawing graphics – QCanvas, QPainter, QPixmap
Manipulating mouse actions – QMouseEvent, QEvent
Editing multiple lines of text – QMultiLineEdit
Connecting multiple objects to slots and checking which was the caller – QSignalMapper
Dealing with data structures – QArray, QList, QVector, QStack, QQueue, QDom[...]
Storing coordinate points in a data structure – QPointArray
Creating tooltips – QToolTip
Playing sounds – QSound
Creating widget themes and styles – QStyle
Printing – QPrinter
Handling the mouse wheel event – QWheelEvent
Dealing with HTML code – QDom[...] extension, QXml[...] extension
Networking – QSocket, QNetworkProtocol, QNetworkOperation, QFtp

The magical main window

Qt has special support for another type of window (we have already covered QDialog-based windows and we have discussed widgets). The intention of this type of window is that it forms the main body of your application and contains the menubar, menus, toolbar, toolbuttons, status bar, documents etc. that your application provides. The class used to create this special type of window is QMainWindow. A QMainWindow is basically a normal window, but it has a number of convenience methods which can be used to make life a little easier. The usual usage of a QMainWindow is to inherit from it, and then use these convenience methods where needed.

The usual behaviour for the window building process is to create some methods which build the various parts of the window, and execute these methods in the constructor. I usually create the following methods to build the various parts:

initActions() – Creates the menus (menubar and menu items)
initToolbar() – Creates the toolbar and toolbuttons
initView() – Creates the main view of the application (usually the document, for example)
initStatusbar() – Creates the status bar

Many coders also include some other methods for building other parts of the application:

initConfig() – Load the config file for the app
initDefault() – Set up any default settings
initDoc() – Create any document data related objects and constants

Once these things have been executed the window is pretty much built. The QMainWindow has a number of different convenience methods, but the most common ones you are likely to use are for setting up the menu, toolbar and status bar and for setting the main widget for the application. Facilities available in QMainWindow include addToolBar() for adding toolbar items, while menuBar() returns the menu bar, creating a new one if needed. menuBar() and statusBar() are also available as convenience methods for building those widgets, and they manage the relevant space needed by those widgets. One of the main uses of a QMainWindow is for managing the screen space of the main area in the window (the place where the document traditionally is). This space is called the main or central widget, and you can set any widget as the central widget using setCentralWidget(). QMainWindow won't actually affect the widget itself – it will just manage its geometry.

Non graphical aspects of Qt

Although most people who have seen Qt will think of it as a GUI toolkit, the functionality of Qt certainly doesn't end with on-screen widgets. Qt has substantial support and classes for non-graphical processing. The first thing we can look at is the data structures that Qt has. The first and possibly the simplest is QString. The QString class offers a number of methods and facilities for common string usage, and because QString uses implicit sharing, it is fast. Another useful class is the QStack class. There is support for pushing and popping data onto the stack with push() and pop(), and much more. Another useful class is the QList class. QList gives a lot of support for typical lists, and is often used in conjunction with a QListView or QTable widget. QList is a full template class with support for doubly linked lists. QList also uses the internal QLNode class to hold pointers to the usual next and previous items. Using this class can make handling lists a breeze, so I suggest a good read of the documentation for QList. Other classes such as QVector, QQueue and QArray are worth looking into regarding data structures.

KDE support

Qt is a fantastic widget set, and if you use KDE you'll be pleased to know that KDE is written using Qt. The KDE project has developed a number of extension classes and technologies which extend Qt applications for desktop integration, inter-application communication and more. These extensions are called the KDE Libraries, and if you are planning on writing an application for use on a desktop UNIX-based system, looking into providing KDE support is a wise idea. KDE supplies the following services built from Qt to extend Qt:

● Integration with the desktop – integrating files, directories, icons and more.
● KParts component model – KParts enables support for applications that can be embedded: applications within applications.
● DCOP – powerful interprocess communication and scripting support.
● Addressbook, kded, shared resources – KDE has shared address books, daemons and other resources.
● aRts – KDE natively uses the aRts digital synthesis server for powerful music capabilities.

All of these fit naturally with Qt, as most of them were coded using Qt. This desktop support extends your application if you wish.

Wrapping things up

Well, it has been an interesting journey into the world of Qt development, and I hope I have helped you get started with Qt development. Qt is a truly powerful API and it has a learning curve, but once you are started, progress can be made smoothly. I am always eager to hear how you get on, so drop me an email, via the magazine, and let me know. Good luck!
PROGRAMMING
Useful Tcl and Tcllib functions
HIDDEN TREASURES

The Tcl/Tk distribution contains a lot of useful functions that many programmers don't know about. These can help to solve many common problems, such as parsing the command line, very quickly. Carsten Zerbst takes a closer look

When writing applications you often encounter problems that countless developers have had before you. Besides the normal system functions, Tcl provides solutions for many problems, so that programmers don't need to go to the trouble of sorting them out for themselves. These solutions come in the form of packages, which are easily loaded into the interpreter and are usually part of a normal system installation. An earlier instalment of Tcl/Tk has already introduced the msgcat internationalisation package; this time we are going to look at three more packages.

Split

Don't you just love the thousands of options on the command line that control an application? Before the program can make use of these parameters there is an arduous task to be performed. The string entered by the user must be divided up and examined for valid and invalid options. You could, of course, write this yourself, but why bother when a solution already exists in the shape of the opt package? This package contains the command tcl::OptProc, which is used instead of the normal proc command. The command has three parameters: tcl::OptProc name, parameter description and body. The name of the procedure to be created is followed by the description of its parameters. This description is actually a list of the valid passing parameters. Each individual parameter is in turn described in another list that contains the following elements:

● Parameter name, with a hyphen for optional arguments.
● Type. OptProc recognises -boolean, -int, -real, -list and -choice, the latter including the choices available.
● Default value. Used if this parameter has not been set when the procedure was called.
● Description.
Listing 1: The opt package

01 #!/bin/sh
02 # Example for the opt package
03 # \
04 exec tclsh $0 $@
05
06 package require opt
07
08 tcl::OptProc main {
09     {required -string "file name"}
10     {-flag}
11     {-int 2 }
12     {-real 1.0 "flag, default 1.0"}
13     {-bool -boolean false "boolflag, default false"}
14     {-choice -choice {1 2 3} "selection, 1, 2 or 3"}
15     {-list -list {} "list, default {}"}
16     {?more? -string "" "unparsed remainder"}
17 } {
18     foreach v [list required flag int real bool choice list more] {
19         puts stdout [format "%14s : %s" $v [set $v]]
20     }
21 }
22
23 if {[catch {eval main $argv} err]} {
24     puts stderr $err
25     exit
26 }
It is not necessary for all four elements to be defined. For example, OptProc can automatically determine the type from the default value. Listing 1 shows some different examples of definitions, as well as nested lists in the second parameter of tcl::OptProc. This is followed by the actual function. When calling a command defined in this way the parameters are passed as individual strings, which the procedure automatically parses according to the definition. This is also the reason for the eval construction in line 23: $argv contains the command line parameters in the form of a list, but main is expecting them as individual options. The parameters to be passed are available as variables within the procedure. Another interesting feature of the example is the foreach loop: at each iteration of the loop the variable v receives the name of a variable containing the value of the respective option. $v in line 19 therefore only outputs the name of a variable; [set $v] is required to show its content. If the parser encounters an error, either because the type of a variable is incorrect or because a required parameter is missing, it returns an error. The calling section of the program can then report this to the user (catch in line 23 and puts in line 24). Another popular option is -help, which outputs the definition of the arguments with their description. In Figure 1 you can see the script from Listing 1 in action. The opt package allows you to provide Tcl programs with a useful command line interface very quickly. Even individual procedures can benefit from this flexibility.

Let's have it

Of course Tcl has much more to offer, including an implementation of HTTP, the Hypertext Transfer Protocol. The HTTP package is part of a standard Tcl installation and allows access to Web pages. At the heart of the package is the command http::geturl; it can load files with all the trimmings, fill in forms or simply retrieve information about a page. The return value of the command is a token. This token is the name of an array, which in turn contains information about the page and, depending on the request, possibly even the file itself. After each use of http::geturl it is therefore necessary to delete this array with the command http::cleanup; otherwise the interpreter becomes bloated with more and more data. First of all it makes sense to have a look at the environment of a Web page. The flag -validate restricts http::geturl to only loading information like the size and MIME type of the file or the Web server type, instead of the entire file. This is what happens in the first few lines of Listing 2, while line 14 outputs this meta-information. However, not all Web servers reveal their meta-data without pages actually being requested. The CUPS server, for instance, is very unforthcoming in this respect and only supplies metadata with a file.

Latest news

No matter how insecure domain management by email at InterNIC may be, even without security gaps things can go badly amiss with name server entries. In this particular case the cause is something that is often red, always small and has two cute little round eyes. Others see it as an expensive toy with which its owners are trying to recapture their lost youth. We are, of course, talking about the new Mini. What exactly is the connection between BMW and the Internet? At first glance nothing, apart from the fact that BMW also uses Tcl. Since 1996 an enormous font of Tcl knowledge has been available at the "Tcl'ers Wiki". Its URL http://www.mini.net recently started to rather unexpectedly link to BMW. The entry at registers.com had been transferred to BMW without anyone bothering to consult the domain's owner, Jean-Claude Wippler. The company had bought a number of other addresses featuring the Mini, but not mini.net. Wiki users were quick to notice the error, but it took some time before the Wiki was back in business. Accidents like this just go to show once again how easy it is to cause major disruptions on the Internet.

Combat: CORBA scripting with Tcl

CORBA is the solid foundation of many an application. Frank Pilhofer's Combat has probably been the best Tcl binding available for this middleware for quite some time. It allows you to write CORBA clients as well as servers. Until now it required MICO as its basic ORB – but with Combat 0.7 there now exists a pure Tcl implementation, so that complicated libraries are no longer required.

Patchlevel Tcl 8.3.4

The latest patchlevel for Tcl 8.3 is now available in Tcl 8.3.4. The improvements primarily concern 64bit platforms and are therefore (not yet) of much general interest. While Tcl can generally do more with each new release, naturally growing ever bigger at the same time, CISCO is currently financing a project by ActiveState to develop a modular Tcl. The aim is to only load those modules into the interpreter at startup that are absolutely necessary. What can be gained by this is demonstrated by NASA's Mars rover, or currently by the game Wiggles, in which each figure runs on a pared-down interpreter which takes up all of 17Kb.

New Tcl database

A close symbiosis has existed between databases and Tcl for quite a while. There is hardly an SQL database that doesn't come with a Tcl extension, if Tcl isn't used for system tools or code tests anyway, as in the case of Adabas or Oracle. Sometimes you need just a little bit more than a simple text file without really requiring a full-blown database. For these occasions Richard Hipp offers SQLite, a small database engine that runs directly in the application and understands a sufficient subset of SQL. This means that small applications for interactive customer catalogues or CD collections can be implemented without elaborate server processes.
Listing 2: Files from the WWW

01 #!/bin/sh
02 # Example for the http package
03 # \
04 exec tclsh $0 $@
05
06 package require http
07
08 set url http://tcl.activestate.com
09 #set url http://127.0.0.1:631
10
11 # meta-information
12 set token [http::geturl $url -validate 1 ]
13 foreach {name value} [set $token\(meta)] {
14     puts stderr [format "%-20s = %-20s" $name $value]
15 }
16
17 # file
18 set fd [open as.html w]
19 puts stderr "get $url"
20
21 proc progress {handle max size } {
22     puts -nonewline stderr [format " %.0f%% " [expr 100.0*$size/$max]]
23 }
24
25 set token [http::geturl $url \
26     -channel $fd \
27     -blocksize 2048 \
28     -progress progress ]
29
30 puts stderr "finished"
31 puts stderr [http::code $token]
32
33 http::cleanup $token
34 close $fd
35 exit
Once you have the information about a page you might want the whole thing. The next lines of Listing 2 contain a simple example. As described above, geturl normally returns a token that describes an array in which the file itself eventually ends up. In Listing 2 the page is instead written directly into an open file, thanks to the option -channel. The command http::geturl blocks the interpreter until it is finished or an error occurs. So that the user gets some sort of feedback during loading, geturl invokes the callback procedure progress every 2048 bytes; progress simply outputs the percentage of the file that has already been loaded. As soon as the file has been transferred completely, the information held in the token becomes available. Various functions from the HTTP package access these data; http::code, for example, requests the transfer status code. The output of the script in Listing 2 can be seen in Figure 2.

The Web has much more to offer than simple data transfer from server to client. The form tag allows you to design simple user interfaces in HTML with input fields, radio buttons and the like. A popular service requiring user input is AltaVista's Babelfish, which can translate single words, sentences or entire HTML pages between various Western and Eastern languages. The values of the buttons and entry fields must be suitably packaged for transfer to the server. This is done using the command http::formatQuery, which expects a list of variable names and values as input. The variable names can be found in the HTML code as attributes of the tags input, select or textarea. For radio buttons and the select tag the code also contains the valid parameters. The request's target URL is contained in the form tag as the attribute action; the formatted request will need to be appended to this. A simple example can be seen in Listing 3, where a single word is translated using Babelfish. The opt package is used to parse the command line.
Line 14 assembles the required URL for translation of the word in the desired language combination. Unlike most of our other examples this one also contains error handling, otherwise it wouldn't be much use. Babelfish is often busy, so our program cuts its losses after 30 seconds. A check in line 19 whether the transfer was successful is followed by the code for processing the received data.

Info
Tcl'ers Wiki – http://www.mini.net
Getleft – http://personal1.iddeo.es/andresgarci/getleft/english/
GNOCL – http://www.dr-baum.net/gnocl
CORBA – http://www.omg.org
COMBAT – http://www.fpx.de/Combat
Wiggles – http://www.wiggles.de
SQLite – http://www.hwaci.com/sw/sqlite/
Wrong registration – http://www.mini.net/tcl/2355.html

Figure 1: Entry using the opt package

Figure 2: Trawling through the WWW with Tcl: the script first shows information about http://tcl.activestate.com, then it loads the page and gives a running update of its progress
Worldwide

Internally Tcl works with Unicode and is therefore able to handle Chinese or Arabic characters. For strings that either originate externally or are intended for external use, Tcl assumes Western European ISO 8859-1 encoding. However, AltaVista's pages are created in UTF-8, so the text needs to be converted to the internal Unicode format before processing. This is done using the encoding command in line 23. Next, we need to extract the translated word from the page. The most elegant way of doing this is with the help of the W3C's DOM model, which we are going to have a closer look at in a future instalment. Until then we are going to extract it the old-fashioned way: the HTML code including the result is split into individual lines. The translated word is located between the start and end tags of textarea; the script simply combines any relevant lines and discards any unwanted tags with regsub. Not pretty, but effective.
The author

Carsten Zerbst works for Atlantec on a specialised PDM system for the ship-building industry. Apart from that he devotes his time to the general application of Tcl/Tk.
Listing 3: Client for interactive HTML pages

01 #!/bin/sh
02 #
03 # Translation using Altavista's Babelfish \
04 exec tclsh $0 $@
05
06 package require opt
07 package require http
08
09 tcl::OptProc main {
10     {text -string "text"}
11     {-langs -choice {en_de en_fr en_it fr_en fr_de de_en de_fr it_en} "languages, default en_de"}
12 } {
13     set url http://world.altavista.com/tr
14     append url "?[http::formatQuery tt urltext urltext "$text" lp $langs]"
15
16     if {[catch {http::geturl $url -timeout 30000} token]} {
17         error "Problem with network: $token"
18     }
19     if {[http::ncode $token] != 200} {
20         error "Problem with server, $token"
21     }
22     # "Brutal" data extraction method
23     set htmllist [split [encoding convertfrom UTF-8 [http::data $token]] \n]
24     http::cleanup $token
25     set index0 [lsearch -regexp $htmllist "<textarea"]
26     set index1 [lsearch $htmllist "</textarea>"]
27     if {($index0 < 0) || ($index1 < 0)} {
28         error "Problems with parsing"
29     }
30     set result [join [lrange $htmllist $index0 [expr $index1 -1 ]]]
31     regsub {<textarea[^>*]>} $result "" result
32     puts stdout $result
33     exit
34 }
35
36 if {[catch {eval main $argv} err]} {
37     puts stderr $err
38     exit
39 }
C: Part 5
LANGUAGE OF THE 'C'

In part 5 of our C tutorial Steve Goodwin adds control and finesse to our printing, as well as looking at keyboard input
There was a lecturer who taught Pascal programming to a group of first year students. He taught the course in a very rigid and structured manner: in the first week, he taught everything about Pascal beginning with the letter 'A'. The second week's lecture was brought to the students by the letter 'B', week three, 'C', and so on. It certainly split the course up nicely and provided a good aide-memoire for the students; however, it took 16 weeks before they could print anything to the screen. I hope I haven't followed in his footsteps!
Get outta my dreams

The most common means of printing text is with the printf function. We've seen this function many times before, and you've probably deduced how it works. If not, see Listing 1. printf consists of a text string to print and (optionally) any number of additional parameters
Listing 1

#include <stdio.h>

int main(int argc, char *argv[])
{
    int iItemCount = 24;
    float fAverageTemperature = 42.5f;

    printf("There are %d items, the average temperature was %f\n",
           iItemCount, fAverageTemperature);
    return 0;
}
holding data. The string may consist of text and/or "format specifiers". These may appear anywhere within the string, but always begin with a % symbol, and are followed by one or more characters. When displayed on the screen, each format specifier is replaced by the next available parameter (the string itself, however, remains unchanged). The manner in which it is replaced is determined by the specifier itself: a %d means print the argument as a decimal integer, for example. It is imperative that the variable type given matches the specifier in the string, or it will print garbage on the screen. Also, you must make sure the number of format specifiers equals the number of extra arguments, or you will get similar-looking garbage. The more common format specifiers are shown in Table 1. Although it's acceptable to print single characters using an integer variable, it can be quite problematic. This is because not all integers will result in printable characters (see the Weak types boxout). But that's only chapter one! C supports some groovy additions that enable you to control the number of significant digits presented, as well as the layout.
Numbers

If the first character after the % is a number, this means the printf will display at least this number of characters – be they digits, letters or padding. It may print more, but never less. This is called the "field width". Its antithesis is the precision value (any number which follows a dot, "."). This indicates the maximum number of characters it can print. In the case of numbers it refers to the digits after the decimal point. With letters, it means the number of characters in general. Either, or both, of these numbers may be omitted. For those hungry for examples, your dessert is Listing 2!
Listing 2

 1 #include <stdio.h>
 2
 3 int main(int argc, char *argv[])
 4 {
 5     printf("a - '%2d'\n", 5);
 6     printf("b - '%3.1f'\n", 7.53f);
 7     printf("c - '%.2f'\n", 3.1415926535897932384626433832795f);
 8     printf("d - '%10s'\n", "Ten");
 9     printf("e - '%-10s'\n", "Ten");
10     printf("f - '%-10.3s'\n", "April");
11     return 0;
12 }
Table 1

Format  What it outputs                  Suggested types
%d      Decimal integer (signed)         int, short
%ld     Long decimal integer (signed)    long
%u      Decimal integer (unsigned)       int, short
%lu     Long decimal integer (unsigned)  long
%o      Octal integer (unsigned)         int, short, long
%x      Hexadecimal integer              int, short, long
%f      Floating point number            float, double
%c      Single character                 char, int
%s      NULL terminated string           char *
Note: In order to print a % sign, we must use the string %%. No arguments are needed.

Listing 2: Output

a - ' 5'
b - '7.5'
c - '3.14'
d - '       Ten'
e - 'Ten       '
f - 'Apr       '

Listing 2: An explanation

Line 5: The '2' means we must output at least two digits, so printf pads the number 5 with a leading space. (Note the use of single quotes to indicate padding in each example – thanks, Dennis!)
Line 6: Padding the entire number to 3 characters, with a maximum of one digit after the decimal point.
Line 7: By omitting the field width, the only restriction is the precision. In this case, two decimal places. This format is fairly common.
Line 8: The string is padded into 10 character wide fields, regardless of string width.
Line 9: A minus sign will adjust all output to the left-hand side.
Line 10: Again, the minus sign justifies the text to the left of the field, but here will also limit it to a maximum of three characters.

Single girl

In addition to the printf function, there are two other oft-used functions that we will briefly mention: puts and putchar. puts writes a single string (automatically adding a new line) to the output. It doesn't do any format conversions, however, which makes it slightly faster.

puts("Another way of writing 'Hello, World!'");

And putchar, which outputs a single character:

putchar('X'); /* Note the single quotes */
All output takes place on stdout. This is the standard output, and is usually the screen. However, if the program's output has been redirected (with > or >>) or is piped into another program (using |) the shell will automatically take this output and pass it on to the appropriate parties. Don't try to be clever by looking to see if the error stream has been redirected into a file, and appending text to that file directly – it won't work! However, for ease of description, I shall refer to the screen as your standard output device.

These three functions will output text to the screen. However, when this information will be written is not guaranteed. The screen, like everything in Linux, is a file and by default a "buffered" stream: all text sent to printf, puts and putchar is not sent to the screen when you call the function, but is sent sometime later. This could occur when:

1 Its memory (aka the buffer) is full.
2 A specific character, say a carriage return, is outputted.
3 When some input is required. It would be silly for the prompt to be sitting in the buffer, when the user is expected to be entering data.
4 The stream is closed. (Something that makes more sense with files than with the screen.)
5 It is explicitly requested by the C program.

Since 5 is the only one we can control (without changing the operation of our program), it is the only one we will discuss. If your program performs a lot of processing, but only minimal output (say a prime number calculator) then you'd want to output each digit as soon as the program has made it available. To do this, we have to "flush" the output buffer to the screen.
fflush(stdout);

The fflush function takes one argument, indicating the "stream" to empty. This can be a file pointer (as we'll see later), or one of the special file pointers like stdout, the standard output device, or stderr, the standard error stream. If the stream is given as NULL, then all streams are flushed. It is expected that all output (which gives the results of a program's operation) goes to stdout, whilst any errors that occur as a consequence of trying to produce that output are given to stderr.

Rhythm is the key

Keyboard handling within C is fairly limited. This is because the language was designed in an era when games like Quake didn't exist, and its only purpose was in creating software that required a more sedate rate of data entry! Even to us Linux users of today, these routines are largely adequate, because a lot of our work involves batch files rather than interactive input. ANSI C can't even write the classic 'press any key to continue' prompt! However, if you play to the strengths of the language you'll find there's enough functionality to go around.

Searchin'

scanf (short for scan formatted) is the sister of printf. It takes a string describing the format of the line and a list of arguments. These arguments are the locations in memory where the read data is to be placed, i.e. they must be pointers. Failure to do so will cause a segmentation fault. (Omitting the & in front of a variable name is all too common.)

char szFromUnits[32];
float fConversionNumber;

scanf("%s %f", szFromUnits, &fConversionNumber);

A space between each format specifier (%s and %f, in this example) tells scanf to ignore all white space between the values. A non-space character (for example a letter) will tell scanf to expect that letter (and only that letter) in the input stream. If that letter is not forthcoming, scanf will not read any more data and will exit, having failed. Its return value will be a count of the parameters it managed to read successfully. For more complex input, there are other format specifiers you may use, which are listed in Table 2. When reading a hexadecimal number, only valid hex characters will be considered. Should you enter a value outside the [0..9,a..f] range, scanf will assume the hex number has completed and move on. It will take the last (invalid hex) character and treat it as the first character in the next input field. Should this character not fit in with the next specifier (for example, it may be a letter, whereas scanf expects a %d) the function will return as usual, indicating the number of valid parameters it managed to read.

Table 2

Format  What it reads
%d      Decimal integer (the variable it is read into determines whether the result will be signed or unsigned)
%o      Octal integer
%x      Hexadecimal integer
%f      Floating point number
%c      The next character – %c doesn't skip white space like the others. It is more usual for this to feature in low-level file parsing
%s      A string – characters are copied until white space is encountered, at which point scanf will terminate the string with NULL

Y kant Tori read

In addition to scanf there are three other functions to consider: getchar, ungetc and gets. All handle standard input, but again, for ease of description, we shall assume this to be the keyboard:
ch = getchar();

This retrieves a single key, returning its ASCII value into the variable, ch. However, the input isn't flushed (and the getchar function doesn't return) until you press the enter key. As a consequence, the input buffer still contains an enter key, which will get used by the next call to getchar (but not scanf or gets). If several getchars are called, thus:

ch1 = getchar();
ch2 = getchar();
ch3 = getchar();
ch4 = getchar();

and you type: char {ENTER}, the first getchar will not return until you hit the enter key, at which point all the variables (up to the point
where {ENTER} was pressed) will be filled with the appropriate character (ch1='c', ch2='h', ch3='a', ch4='r'). I recommend experimenting with this concept by pressing {ENTER} at different places, until you're confident with it.

A saucerful of secrets

I will now spill the beans on two interesting facts about getchar. No really, they ARE interesting! The first is that a Ctrl+D from the shell will cause getchar to exit immediately with an error code: EOF, or -1. This is the only other key press (besides {ENTER}) that will do this. This also leads us onto the second fact: the get character routine must return an integer, not a character. Otherwise, getchar could not return all possible characters (from 0 to 255) as well as the distinct error code EOF (-1).

Weak types

So what are they? Well, a variable of type char lets us store and process character data. However, if we were to use it in a situation where it would be considered as a number (and NOT a character), it would 'behave' as a number. In the printf example, we might ask the character to be printed out as a number. In which case, when behaving like a number, the character A would be outputted as 65 (its ASCII value). The same is true if working backwards: an integer with the number 65, when treated as a char and printed with %c will produce an A. As we've already mentioned, this can cause problems. The ability to do this within a language means it is a weakly typed language. And the types themselves are weak types.

Space oddity

ungetc is a peculiar little function that will place any single character "back"! It doesn't write it out to the screen, but places it into the keyboard buffer – which is what we actually read from when using gets, scanf or getchar. Consequently ungetc will affect all three of these input functions.

int ch1, ch2, ch3;
ch1 = getchar();
ch2 = getchar();
ungetc('x', stdin);
ch3 = getchar();

'x' is the character we write back, whilst stdin means the standard input stream (since the same function can also be used with files – see later). By using this function we have forced the letter x to jump to the front of the queue (so to speak), causing the following getchar function to return an 'x' instead of any other character still in the input buffer.
Get shorty

gets(szInputString) takes one line of text (terminated when the user presses Enter) and places it into szInputString. For ease of use, the resultant string ends with a NULL terminator, instead of a new line. However, gets is one of the worst functions in the C library! If you frowned at the implementation of getchar (the get character routine, that returned an int), you will be horrified by gets! Why? It is not protected! Functions like strncpy allow you to specify the maximum number of characters that will be copied into the string. It is a simple precaution that stops the library writing into memory it doesn't own. However, the gets function is wearing no such condom! Instead, I would therefore suggest using:

char szInputString[80];
fgets(szInputString, 80, stdin);

In fact, gets is such a bad function that even gcc tells you it is 'dangerous and should not be used'. And gcc is a piece of code! So, when an inanimate program suddenly becomes sentient, gains a personality and has enough compassion to tell you something is dangerous, I think you should listen, don't you?
The author Steven Goodwin celebrates (really!) 10 years of C programming. Over that time he’s written compilers, emulators, quantum superpositions, and four published computer games.
COMMUNITY
CRYPTO

Let's get this out in the open from the start: I did not like this book. I found the style of writing to be verbose, and it will date easily with its frequent use of modern "buzz" phrases. That said, I found the subject matter sufficiently fascinating to read through to the end, and for those of you who like this style of writing the book should be a very good read. It deals with the development of cryptography since the 1960s and its role in the development of global communications. It covers in depth the creation of RSA and DES and explores the motives of the men behind them. The latter part of the book goes into the often secret battles between scientists and the fledgling encryption industry and the shadowy government forces, out to protect the interests of national security. Most of the book is based in the US and only the last
chapter goes into any detail about the activities of encryption experts in Britain and Europe. The development of PGP is mentioned, although not in as much detail as RSA, and several pages are devoted to the court case against Phil Zimmerman. Crypto is an interesting work about an important subject that affects all of us, often without us realising, and apart from the irritating style it is written in, it could be a good read for anyone wanting to find out more on the topic. The text is supported by extensive notes and a bibliography for those wanting to continue their own research.

Author: Steven Levy
Publisher: Penguin
Price: £7.99
ISBN: 9-780140-244328
CLONING SILICON VALLEY

This is a business text. The book seemed very promising at the start: an examination of technological clusters around the world and their similarities and differences to Silicon Valley in the US. David Rosenberg picks out six areas around the world to focus on in depth, asking the same questions about each to give a unified overview of what they are like. It might have been interesting to have asked the same questions about Silicon Valley itself to give a comprehensive picture of the contrasts. The text was very business orientated, which I found a little off-putting and sometimes incomprehensible. A glossary of terms would have been useful, and some explanation of the parameters of the data in the tables might have helped. Possibly I was prejudiced by an in-depth analysis of
technology companies around Cambridge that barely mentioned Sinclair Research in passing, but at times seemed to be including half of southern England under Cambridge. I found the other places covered in the book – Helsinki, Tel Aviv, Bangalore, Singapore and Hsinchu-Taipei – more interesting, possibly because I know less about them. As a guide to the next generation of high tech business leaders it is a very useful text with much to think about. Unfortunately I cannot recommend it as an entertaining read.

Author: David Rosenberg
Publisher: Reuters
Price: £18.99
ISBN: 9-781903-684061
BEGINNERS
G-tools
COVERING THE BASE

This month GNOME tools looks at Moleskine to help you code and Procman to watch your system
GStreamer

GStreamer allows the construction of graphs of media-handling components, ranging from simple MP3 playback to complex audio (mixing) and video (non-linear editing) processing. Applications can take advantage of advances in codec and filter technology transparently. Developers can add new codecs and filters by writing a simple plug-in with a clean, generic interface. GStreamer is released under the LGPL, with many of the included plug-ins retaining the licence of the code they were derived from, usually GPL or BSD. Version 0.3.2 is now released from http://gstreamer.net/. Applications that have so far reached a fair working status with GStreamer include mjukplay, gstmediaplay and ZStreamCaster for icecast streams.
gstmediaplayer in action with the Gstreamer library
Procman

Procman is a GNOME process viewer and system monitor. This allows you to see what is happening on your system and notice any anomalies. The unstable version is 1.1.1, but the stable version is 1.0. Hurrah! It requires libgtop 1.0.6 and gal 0.19.0 or greater. Available from http://www.personal.psu.edu/users/k/f/kfv101/procman/ and GPL licensed.
G-tools In this column we feature a monthly round-up of some of the best tools available from the gnome.org Web site. Whether they be essential tools for everyday GNOME users or interesting curiosities, you’re sure to find them here. Monitoring usage
Moleskine

Moleskine by Michele Campeotto is a source code editor for the GNOME desktop. Moleskine is developed in Python and uses Scintilla as its text-rendering engine. Moleskine takes its name from the artists' notebook of the same name, on which so many of the world's favourite books started out. The latest version, 0.7.7, has now been released; it supports word wrapping, and multiple files can be dragged from the file manager. It has a GUI configuration tool, auto-completion for words, brace matching and syntax highlighting. Because it uses Scintilla, many programming languages are fully supported in the default configuration, and as long as Scintilla supports a language you want, you can also add your own. Three modules are required – Moleskine, PyGtkScintilla and GtkScintilla. Available from http://www.sourceforge.net/projects/moleskine.
Conc

Conc is a console concentrator for Linux and GNOME. It features remote maintenance of systems over IP, and concurrent connections to consoles. Serial lines on multiple machines may be pooled into one system, allowing a virtually unlimited number of consoles to be managed – ideal for large server farms, clusters or off-site server rooms. The system consists of three components. The first is concserv – the central daemon that keeps logs from all the consoles and co-ordinates the rest of the system. When it starts, concserv spawns a number of termserv processes that control the serial lines to which the console lines are connected. The link between a termserv and concserv is encrypted, and termservs may run on machines separate from concserv, communicating over TCP/IP. The final component, conc, is the user interface. It connects to concserv over an encrypted TCP/IP link, and allows the system administrator to view the logs of a particular machine, connect to its console, add and remove consoles etc. There is also a small, text-based interface called console that allows connection to a single console. Any number of user interface programs may run concurrently, and multiple connections to the same console are possible, allowing groups to work on one system. Having all the components communicate by TCP/IP allows administration of machines from off site, or unifies management of co-located and local equipment. Tested with Comtrol's RocketPort card to give a reliable 16-channel serial interface. Available from http://www.jfc.org.uk/software/conc.html
Configuring Conc
Conc overview
Aricalc

Yet another project that has made it to version 1.0 this year. Aricalc is a simple calculator for people who have to work with imperial values – dimensions given in feet, inches and fractions of an inch. It can do all the standard maths functions with those kinds of values. It can also work with square or cubic values (for area or volume dimensions), and can calculate all the parameters of a slope (pitch, run, rise and diagonal). All the people working in the construction industry will appreciate Aricalc, plus a few others. Available from http://www.total.net/~harrych/aricalc/aricalc.htm and as usual under the GPL. There is also a link for the online help pages, which also give the keyboard shortcuts and simple examples of using the calculator.

Designed for construction
Gnect Gnect is a four in a row board game for GNOME. The object is to build a line of four of your counters while trying to stop your opponent (human or computer) building a line of his or her own. A line can be horizontal, vertical or diagonal. Gnect has two computer-driven players. One’s very simple – it’s included to provide a fun opponent for young children. The other is Giuliano Bertoletti’s Velena Engine. The Velena Engine takes a much more sophisticated approach – its strongest level is unbeatable if it makes the first move. Velena is “A Shannon C-type program which plays connect 4 perfectly”. A beautiful SVGA DOS version is available, as is the Velena Engine source. It has now reached version 1.4.3. Available from http://homepages.ihug.co.nz/~trmusson/gnect.html
Velena Engine at work
BEGINNERS
K-tools: Noatun and Aviplayer
ROLL FILM

Do you fancy having Hollywood's dream factory exclusively on your Linux computer? As Stefanie Teufel explains, it's no longer just a dream thanks to Aviplayer and Noatun
Plug-in
A program fragment which can be inserted ("plugged") into a larger program as an expansion. Aside from Noatun, both The Gimp and XMMS make use of plug-in technology.

Ogg Vorbis
According to the definition from its developers, Ogg Vorbis is a completely open, non-proprietary, licence- and patent-free, compressed multi-purpose audio format of high quality, similar to MP3, which is still the better known format at present. Incidentally, the name "Vorbis" stems from a character in a Terry Pratchett novel.
There is now an abundance of MP3 players under KDE, but what can you do if your favourite song comes along not only acoustically, but also in the form of a multimedia video? In the days of DSL flat rates and diverse peer-to-peer systems (Qtella is a suitable client, and we introduced it to you in Linux Magazine issue 16) this is by no means an unrealistic everyday situation. Don't despair, just ask Noatun – or better, crank it up. Behind the somewhat offbeat name hides the most popular KDE media playback system, which can, as standard, play the MP3 and WAV audio formats as well as the video format MPEG-1. Via so-called plug-ins, additional formats, such as Ogg Vorbis, can also be used. The media player is an integral part of the kdemultimedia package (version 1.2.0 described here is part of KDE 2.2.1) and therefore does not have to be downloaded and installed separately. It's also very easy to start: either enter a simple noatun & command in any terminal emulation of your choice, or fire up the player by clicking in the KDE start menu (Multimedia/Media playback). Noatun starts off with the so-called Excellent plug-in as its graphical user interface (Figure 1), since this interface displays the greatest similarity of all to other KDE applications. If this rather simple display is not to your taste, you may be glad to hear that Noatun also has lots of more tailored outfits to wear (as in Figure 2). You can give Noatun a new appearance via the
Figure 1: Not just for simple minds
66
LINUX MAGAZINE
Issue 18 • 2002
K-tools In this column we present tools, month by month, which have proven to be especially useful when working under KDE, solve a problem which otherwise is deliberately ignored, or are just some of the nicer things in life, which – once discovered – you wouldn’t want to do without.
Figure 2: Noatun in the Kjofol look
menu item Settings/ install Noatun. In the window which will then appear, choose the item Plugins / Interfaces and there select the design of your choice by mouse click. Once we are in the configuration environment, we immediately carry on with the General Options. On the tab of this name, which can be reached via the Options entry, specify that playback should start immediately when the player is started, or define how many Noatuns can run at the same time. Behind the Young Hickory entry you’ll find the configuration options for the system range of the KDE control bar. In the standard setting this is found on the left next to the clock. If you click with the right mouse button on the corresponding symbol (Figure 3), a small menu appears (Figure 4), with which you can in future operate your media player with ease. You can define, via the plug-in, which symbol should appear and whether or not you are interested in brief info about the respective current piece. Not all plug-ins are loaded by default. If you
want to expand Noatun by additional set pieces, you can do so at any time via the item Plugins/Additional plugins. The plug-ins loaded later will then appear with further configuration options in the now-familiar settings menu. As soon as new plug-ins not included in the original package come out, you will find them at http://noatun.kde.org/ plugins.phtml.
Figure 3: Noatun icon in the system panel
Patched up

Like every good media player, Noatun has a play or item list. Since Noatun requires at least a graphical user interface and an item list in order to function, every time you feel like a change you must load a new graphical user interface or a new playlist before you can delete the old one. When you do so, a new item list automatically replaces the old one. At the same time you can display the respective current list in a separate window, if you select the menu item Settings/Display item list (Figure 5). The sequence of the pieces can then be changed very simply using drag and drop.
Figure 5: Piece by piece
If you have added the plug-in “Export playback lists in HTML”, you can even transfer your item list into an HTML table. The installation page allows for the setting of colours, background image and activation of the paint-over mode, in which the colour of a link changes when a mouse pointer is run over it. The equaliser, which you can access via the item Settings/Equaliser, allows the song being played to be manipulated with various sound-effects. There are already a variety of pre-assigned effects, which can be expanded as you like by your own settings. But there is no way to alter videos as yet.
Friends from Windows world

As DivX becomes more widespread on the Internet, the format also becomes of greater interest to us Linux users. Unfortunately, Noatun is still flummoxed by playing DivX films at present, which is why we would like to introduce to you the Aviplayer, another multimedia player. You can find the latest version at http://ftp.kde.com/Multimedia/Video/AviPlayer/ or on the (hard to reach) homepage of the team of authors Eugene Kuznetsov, Zdenek Kabelac & co. at http://divx.euro.ru/. Strictly speaking, this is an application and a library named avifile, which has been adapted for Linux. The basic idea is to integrate Win32 binaries as plug-ins, in order to use these for playback. The avifile library makes it possible to play back the AVI codecs which run under Windows, under Linux as well. At the same time the application does not restrict itself solely to the available AVI codecs, but also supports other formats such as Microsoft's MPEG-4 or Motion JPEG.

You are best downloading the necessary files from http://www.linuxberg.cz/files/binaries-010122.zip. The authors do offer a download option from their site, but because of technical problems the download usually crashes halfway through if you use this route.

Once you have obtained the necessary packages, you have to create, as root, a "win32" directory, using the command mkdir /usr/lib/win32. Then unpack the codecs with unzip binaries-010122.zip -d /usr/lib/win32 into the freshly-created directory, and install your Aviplayer using the Linux three-step ./configure; make; make install. Done.

By entering aviplay filmofyourchoice.avi you can now immediately fire up the film you want. A simple aviplay allows you, as an alternative, to click forwards to your favourite video file in a dialogue window. Whichever way you choose, you will always be rewarded with a playback window like the one in Figure 6. If you prefer full screen mode, you can toggle between postage stamp size and full screen at any time with the Esc key.

DivX
This video codec is based on the MPEG-4 compression format released by the "Moving Picture Experts Group". Compared with previous standards MPEG-1/2, MPEG-4 needs only a fraction of the memory capacity in order to attain a satisfactory image quality. If one takes it quite literally, the DivX format does not represent a modification of the MPEG-4 format, but a hack of the MPEG-4-based ASF format from Microsoft. The MPEG-4 codec has been further developed by Microsoft and is based on MPEG-2 technology. However, the bit rate of the individual frames is increased considerably by the strongest possible compression, so that the files remain very small, despite being of top quality.

Figure 4: Complete control
Figure 6: Linux is DivX-capable, too
The Answer Girl
INSTALLATION WITHOUT TEARS There is one main drawback to selfcompiled software: whether or not it can be neatly uninstalled later depends on your own discipline. As Patricia Jung explains, there is a remedy at hand
Anyone who installs software via the rpm or deb packages pre-compiled for their respective distribution does so in the safe knowledge that, if necessary, they can also be removed in the same manner. Even if the process does leave behind dead files on the system, this doesn't detract from the feeling of being blameless: if this happens, then it was the package compiler who did the sloppy work. Even distributions well-equipped in terms of software have to pass at some point: not every useful program comes shake-and-bake for all distributions and distribution versions. When the words "self-compiling" appear, the moment of truth will arrive by the time you get to make install, if not before. If you don't keep track precisely, you may well find the installed binary and possibly the corresponding manpage, but was that really all that this inconspicuous command – executed with root rights – has tucked away in the depths of your filesystem?
Work ban for make

There is a manpage for make, however, and this explains that the -n option lists all commands that come into play during the handling of the Makefile, although it doesn't execute them. That's no problem, so long as this list is as clear and simple as it is in the mirror package:

pjung@chekov:~/software/mirror$ make -n install
[...]
install -m 755 -g gnu mirror.pl /usr/local/sbin/mirror
[...]
Makefile: A file containing the instructions that “make” should execute. If the GNU version of make is not told, with the option -f, which file this is meant to be, it searches methodically for a file named GNUmakefile, makefile or Makefile in the working directory. mirror: A tool which “reflects” directory hierarchies from one computer on to another, so that there are identical copies.
Issue 18 • 2002
The Answer Girl
The fact that the world of everyday computing, even under Linux, is often good for surprises is a bit of a truism: time and again things don't work, or at least not as they're supposed to. The Answer Girl in Linux Magazine shows how to deal elegantly with such little problems.

Only the install command is invoked here, which copies the file mirror.pl into the target directory /usr/local/sbin and at the same time renames it mirror. There, using the mode option -m 755, it is given the rights rwxr-xr-x, and with -g it is assigned to the gnu group (which must already exist). If the commands to be executed by make install are copied into a file with the redirection operator > (make -n install > filename) and that file archived, the installation process can be reconstructed if you later go on an uninstallation binge and want to delete /usr/local/sbin/mirror and its companions from the system by hand with the rm command. But as soon as make install executes long-winded, highly complex shell scripts, which in turn rely on scripts that come with the source package, discipline alone will no longer suffice. A different approach is needed.
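The record-first approach can be sketched on a throwaway stand-in package (the Makefile, file names and prefix below are invented for illustration; a real package's install target will be more involved):

```shell
set -e
# build a tiny stand-in package whose Makefile has an install target
dir=$(mktemp -d)
cd "$dir"
mkdir -p prefix/sbin
touch mirror.pl
printf 'install:\n\tinstall -m 755 mirror.pl prefix/sbin/mirror\n' > Makefile
# -n prints the commands make would run, without executing them
make -n install > install.log
cat install.log
# archive install.log, then do the real installation
make install
```

The archived install.log then tells you, line by line, what a later uninstallation has to undo.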
rwxr-xr-x
The ls option -l shows which users may do what with a file or a directory: r, "read"; w, "write"; x, "execute" (in the case of directories: change into) – or not (–). The first trio of letters, rwx, says that the file owner can do everything. The group to which the file is assigned, on the other hand, only has read and execute rights – the "–" in the middle trio means it cannot alter (write) the file. All other users, too, may only read and execute (or change into). A different notation for rwxr-xr-x is 755. This is produced by coding r as 4, w as 2, x as 1 and – as 0, and adding up the values within each trio: 4+2+1=7, 4+0+1=5.
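The correspondence between the symbolic and the octal notation can be checked directly in the shell; note that stat -c is the GNU coreutils form of the command:

```shell
# give a scratch file mode 755 and read the mode back both ways
f=$(mktemp)
chmod 755 "$f"
ls -l "$f" | cut -c1-10    # -> -rwxr-xr-x
stat -c '%a' "$f"          # -> 755 (GNU coreutils form)
rm "$f"
```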
To each his own directory

The simplest solution would be to give each self-compiled software package a directory of its own. The "Filesystem Hierarchy Standard" (http://www.pathname.com/fhs) suggests /opt/packagename for "add-on application software packages". Another sensible alternative is a package directory under /usr/local, even if the FHS makes no provision for this. In this package directory a typical Unix directory hierarchy should then be created – a bin directory for user programs, sbin for the binaries reserved for the system administrator "root", man for manpages, lib for libraries, include for any header files and so on – provided the software package concerned comes with suitable files. So how do you persuade make install to copy into such a directory? By modifying the file makefile or Makefile. Once a back-up copy has been made, it's time to search for the install target. The Makefile in fact specifies the arguments with which make can be invoked: only words that stand at the start of a line and end in a colon are permitted. The makefile from the mirror package is very clear:

install:
[...]
	install -m $(EXMODE) -g $(GRP) mirror.pl $(BINDIR)/mirror
[...]
Here we find, indented by precisely one tab character, the commands to be executed by make install – although concrete values are replaced by variables. A comparison shows that $(BINDIR) obviously inserts the content of the variable BINDIR at this point, and a few lines higher up BINDIR is assigned the value /usr/local/sbin:

# directory to install public executables
BINDIR = /usr/local/sbin
If we want to change the "directory to install public executables" to /opt/mirror/sbin, all it takes is one correction in the makefile:

BINDIR = /opt/mirror/sbin

The challenge in this method consists only of finding all variables relating to the installation step. This can turn into production-line work, in which many control runs of make -n install become necessary. With such hand-written makefiles it also often happens that the authors have not worked tidily and have "unintentionally" included static path specifications. The only remedy for this is a methodical search for the stubborn directory specification. There is another problem too: if make install complains with an error message such as

install: cannot create regular file `/opt/mirror/sbin': No such file or directory

then the target directory /opt/mirror/sbin is missing and must be made by hand. If the parent directory mirror is missing as well as its subdirectory sbin, you can save yourself a mkdir command by using:

mkdir -p /opt/mirror/sbin

to make all the necessary parent directories at once.
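A quick sketch of the difference; the directory names are throwaway examples:

```shell
d=$(mktemp -d)
# plain mkdir refuses to create a directory whose parent is missing
mkdir "$d/mirror/sbin" 2>/dev/null || echo "parent missing"
# -p creates the whole chain of missing parents in one go
mkdir -p "$d/mirror/sbin"
ls -d "$d/mirror/sbin"
```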
Configure taken at its word

With software projects above a certain size it becomes too tedious for most authors to write the Makefile by hand. In these cases they rely on mechanisms that produce the Makefile automatically. The drawback here is that the Makefile gets very complex – one reason why make -n install becomes difficult to decode. This is offset by the advantage that the Makefile creation tool is very easy to influence. If the configure script that comes with a software package rightly bears its name, it should also be possible to specify the place to which the files it creates should be copied. In fact configure responds to the common help option --help – Listing 1 shows extracts from the help for the configure script of the mail program sylpheed. Equipped with this information, with:

./configure --cache-file=/tmp/sylpheed.tests --prefix=/opt/sylpheed --enable-ssl

we ensure that SSL support is included and all files are stored on installation with make install under /opt/sylpheed. --cache-file allows a file to be specified after the equals sign, in which the test results found by configure are to be saved. Normally this is config.cache in the current directory, but here
/usr/local
This directory, which ideally sits on its own partition, is intended to hold locally installed software. "Local" here means both "not part of the distribution" and (in networks) "actually installed on the hard drive of this computer and intended only for this computer" – unlike centrally stored resources, which may be made available to the individual workstations via the "Network File System" (NFS).

Header files
If you want to compile software which dynamically uses functionality from libraries, the interfaces of these libraries – the API (Application Programmer Interface) – must be at hand at compile time. In C and C++ programs these are found in so-called header files with the ending .h.
we have instead selected /tmp/sylpheed.tests. Anyone who now goes off and makes changes in the Makefile created is, by the way, doing so with a safety net in place: should any manual corrections turn out to be wrong, all you need do, instead of the whole configure rigmarole, is run ./config.status. This executable file records precisely what needs to be done to reach the same conclusion as the last configure run. A make compiles the software, while make install creates an orderly Unix file tree under the prefix directory and copies in the files needed to use the software. In the case of Sylpheed, a bin subdirectory is created under /opt/sylpheed containing the executable program, plus a share directory, which contains the online manual in HTML format. Since /opt/sylpheed/bin does not lie in the search path, which can be displayed with echo $PATH, a simple sylpheed& will not work (or starts a program of the same name which may already be in one of the directories listed in PATH). Only when the full path to the binary is specified...

/opt/sylpheed/bin/sylpheed&

...does the new program start.
Figure 1: Sylpheed running
Listing 1: Help for a configure script
pjung@chekov:~/software/sylpheed-0.6.5$ ./configure --help | less
Usage: configure [options] [host]
Options: [defaults in brackets after descriptions]
Configuration:
  --cache-file=FILE   cache test results in FILE
  --help              print this message
[...]
Directory and file names:
  --prefix=PREFIX     install architecture-independent files in PREFIX [/usr/local]
[...]
Features and packages:
[...]
  --enable-ssl        Enable SSL support using OpenSSL [default=no]
[...]
Forging paths

With an

export PATH=$PATH:/opt/sylpheed/bin

the PATH variable can be extended quickly and simply in a shell. $PATH fetches the previous content of PATH, and the new search directory is attached after a colon; the result is in turn assigned to PATH as its new value. export ensures that the bash passes the variable on to "child processes", such as a terminal started from the current shell. If the shell has already found a sylpheed binary in the old path, this resetting will be precious little help, since the shell always takes the file it finds first. If, on the other hand, when resetting the path, one places /opt/sylpheed/bin before all previous search directories:

export PATH=/opt/sylpheed/bin:$PATH

...the tables are turned, and the binary previously found without specifying the directory is now the one that must be called up explicitly. But who wants to change the path every time he or she needs a program from a "non-standard" directory? If all users of the computer are to benefit from the path extension, it is advisable to enter the change as root in the system-wide configuration file /etc/profile. (Some distributions, such as SuSE, also read in files specially intended for local changes, such as /etc/profile.local.) If the path extension is only meant to apply to your own user account (which makes sense for software installed in your own home directory, say under ~/bin), then the personal start files of the shell (for the bash: ~/.bashrc and ~/.bash_profile) are the candidates for correction.
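That "first find wins" behaviour is easy to demonstrate with two throwaway directories, each holding a script of the same name (all names below are invented):

```shell
set -e
base=$(mktemp -d)
mkdir "$base/old" "$base/new"
printf '#!/bin/sh\necho old\n' > "$base/old/hello"
printf '#!/bin/sh\necho new\n' > "$base/new/hello"
chmod 755 "$base/old/hello" "$base/new/hello"
PATH="$base/old:$PATH"; hash -r   # hash -r clears the shell's command cache
hello                             # -> old
PATH="$base/new:$PATH"; hash -r   # the freshly prepended directory wins
hello                             # -> new
```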
No new paths

Anyone who does a lot of self-compiling will soon tire of the constant path corrections – especially since a multi-line PATH monster is no longer easy to grasp. What a good thing there are alternatives. Links in a Linux/Unix filesystem ensure that a file can be addressed by several names. Instead of copying the sylpheed binary from /opt/sylpheed/bin into the directory /usr/local/bin and thus having two copies of it, a "symbolic link":

ln -s /opt/sylpheed/bin/sylpheed /usr/local/bin/sylpheed

is all it takes to make the binary accessible using either specification. For fairly small programs like Sylpheed this is enough – but woe betide you if you are dealing with larger software packages, which create several binaries at the same time, have lots of manpages and, in the worst case, also come with their own libraries. Nobody wants to set so many links by hand, and you certainly won't want to remove them again if /opt/sylpheed falls victim to the uninstaller rm -rf /opt/sylpheed. Although the mini-program symlinks can ferret out and delete any broken links that point into oblivion, this useful tool is pre-installed on few systems. So we can set about looking for a tool that takes over the whole task of link management. Debian users are lucky, since this distribution contains just such a mini-program in the form of stow. apt-get install stow automatically downloads the corresponding package (assuming you've got Net access) and installs it immediately, provided root invokes the command. But users of other distributions need not despair. Any Debian software can be downloaded from a Debian mirror not only as a pre-compiled deb package, but also as the original tarball from the author of the software. The Debian download pages on the Web (Figure 2) offer three links in the lower part under the heading Source Code: the package specification as a dsc-formatted ASCII file, the original tar.gz source archive, and the source code of the changes which the Debian package builder has integrated into the binary package, as a gzip-packed diff. The tar file is unpacked in the usual way with tar xzvf archivefile (the option -z unpacks the gzip compression, -x extracts the content, -v makes tar a bit more talkative, "verbose", and -f archivefile states which file is to be unravelled). With
Mirror: A server which "reflects" the data of another, i.e. stores a copy that is kept as up to date as possible. The tool "mirror" is readily used for this purpose.
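The tar options described above can be tried out safely on a scratch archive:

```shell
set -e
# pack a small directory, delete it, then restore it from the archive
work=$(mktemp -d); cd "$work"
mkdir pkg
echo data > pkg/file
tar czf pkg.tar.gz pkg     # c: create, z: gzip-compress, f: archive name
rm -r pkg
tar xzvf pkg.tar.gz        # x: extract, z: uncompress, v: verbose
cat pkg/file               # -> data
```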
Figure 3: With the prefix /opt/stow, stow gets its own directory hierarchy
Hidden treasures So now stow may be installed, but a bit of documentation wouldn’t go amiss. There is in fact an info file in /opt/stow/info, but anyone not already familiar with this information system will not have much luck with info -f /opt/stow/info/stow.info (see Figure 4). An HTML file, which can be viewed in the browser and printed out neatly formatted, is high on the wish list.
./configure --prefix=/opt/stow

in the directory stow-1.3.2.orig we also configure stow with a target directory of its own, /opt/stow. For once we can do without make, since on this occasion there is nothing to compile. make install then provides a file structure as in Figure 3.
Figure 4: Info with info
The Makefile in stow-1.3.2.orig does in fact contain a few useful secrets: even if we aren't stow developers, we need not be scared off by the comment

# The rules for manual.html and manual.texi are only used by
# the developer

and make a note of the promising make target manual.html, which is defined thereafter:
Figure 2: Debian always comes with the original source code
manual.html: manual.texi
	-rm -f $@
	texi2html -expandinfo -menu -monolithic -verbose $<
The first line says that this rule depends on the rules of the (later defined) target manual.texi having been executed before its own commands (indented with Tab) come into play. The first real action taken by the manual.html target consists of deleting any existing file of the same name with rm -f ("force"); in the make syntax, $@ stands for the target itself, i.e. what stands before the colon on the non-indented line. Should there be an error message now (perhaps because no such file as manual.html exists), make should say nothing – hence the minus before the rm command. What really interests us is the second rule: the texi2html program is, says its manpage, a "Texinfo-to-HTML converter". Since $< in the make syntax stands for the first prerequisite, i.e. what comes after the target name and its colon, it soon becomes clear: this rule produces an HTML file from a file (created by the manual.texi target) named manual.texi. We can see, in turn, from the texi2html manpage that the program – provided it is invoked using the -monolithic option – creates from a Texinfo file named foo a single file foo.html (instead of swapping footnotes and tables of contents into additional individual files). The command make manual.html in the stow source directory thus looks precisely as if it will make our wish for a stow manual in HTML format come true. And indeed, afterwards there is a usable manual.html in the current directory (Figure 5).
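A two-line scratch Makefile shows both automatic variables in action (all file names invented):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# $@ expands to the target, $< to the first prerequisite
printf 'out.txt: in.txt\n\t@echo "target=$@ prereq=$<"\n\t@cp $< $@\n' > Makefile
echo hello > in.txt
make            # -> target=out.txt prereq=in.txt
cat out.txt     # -> hello
```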
Figure 5: Rewarded by a “make manual.html”
The way it should be

To use stow successfully, a few concepts need clarifying. When the documentation talks about the "stow directory", it means the directory containing the sub-directories that hold the file hierarchies of the individual compiled software packages. In other words, the preparatory work for using stow consists of specifying the prefix stow-directory/packagename in the configure run for each software package. In our plan, the stow directory thus bears the simple name /opt.
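What stow will do for us can be imitated by hand with symlinks, which makes the layout easier to picture. A rough sketch with invented paths (no substitute for stow itself, which also keeps track of the links so it can remove them again):

```shell
set -e
# stow layout: <stow dir>/<package>/bin/<program>, linked into <target>/bin
top=$(mktemp -d)
mkdir -p "$top/opt/demo/bin" "$top/usr/local/bin"
printf '#!/bin/sh\necho demo\n' > "$top/opt/demo/bin/prog"
chmod 755 "$top/opt/demo/bin/prog"
# one symlink per file makes the package visible in the target hierarchy
ln -s "$top/opt/demo/bin/prog" "$top/usr/local/bin/prog"
"$top/usr/local/bin/prog"    # -> demo
```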
The next piece of information stow needs is the directory hierarchy into which it should link the contents of stow-directory/packagename. A highly suitable target directory is /usr/local, especially if /usr/local/bin is already contained in the PATH. The target directory and stow directory can be specified with the options -t and -d, and the latter option can be left out if the stow directory is the current working directory (cd /opt). Now we still have to link /opt/stow/bin/stow itself in orderly fashion to /usr/local/bin/stow, before we can call it up without specifying the path. A dry run

/opt # ./stow/bin/stow -v -v -v -n -t /usr/local stow
Stowing package stow...
Stowing contents of stow
Stowing directory stow/bin
Stowing contents of stow/bin
LINK /usr/local/bin/stow to ../../../../opt/stow/bin/stow
Stowing directory stow/info
LINK /usr/local/info to ../../../opt/stow/info
gives us control over what would happen to the content of the directory stow if we were to link it to /usr/local. The option -n ensures that nothing actually happens, while each -v ("verbose") makes the program a bit more talkative – but only up to chatter level 3. If directories found under ./stow do not yet exist in /usr/local (/usr/local/info, for example), the program ./stow/bin/stow does not create them. It makes things easy for itself instead: /usr/local/info is made to point to /opt/stow/info. If the respective directory already exists (as it does in the case of /usr/local/bin), stow sets a link inside it to the corresponding file (/usr/local/bin/stow points to /opt/stow/bin/stow). For the source file, stow specifies relative paths starting from the target directory. That looks sensible, so we do the real run with

./stow/bin/stow -v -t /usr/local stow

and take a look at the result:

/opt # ls -Al /usr/local
total 3
drwxr-xr-x 2 root root  55 Nov 26 20:02 bin
drwxr-xr-x 2 root root 150 Nov 26 18:28 ftp
lrwxrwxrwx 1 root root  22 Nov 26 20:02 info -> ../../../opt/stow/info
drwxr-xr-x 2 root root  57 Nov 26 18:28 man
The trouble with bugs

Before we get euphoric and start stowing other software, we had better first check whether the promised deinstallation with the option -D really is that simple. If /usr/local/bin lies in the search path, we can now call up stow without specifying the directory:
/opt # stow -v -v -v -n -D -t /usr/local stow
Unstowing in /usr/local
Unstowing in /usr/local/bin
Unstowing in /usr/local/ftp
Unstowing in /usr/local/ftp/bin
Unstowing in /usr/local/ftp/dev
Unstowing in /usr/local/ftp/etc
Unstowing in /usr/local/ftp/lib
Unstowing in /usr/local/ftp/usr
Unstowing in /usr/local/ftp/usr/bin
Unstowing in /usr/local/ftp/msgs
Unstowing in /usr/local/man

That looks funny – why is /usr/local/info never mentioned? Why is there nothing saying that /usr/local/bin/stow is to be deleted? And what business does stow have in subdirectories such as man and ftp, into which it has linked nothing at all? The brave will now back up the complete /usr/local hierarchy and let stow -D run again without the -n option. But this is no help either:

/opt # ls -al /usr/local/bin
total 16
lrwxrwxrwx 1 root root 29 Nov 26 20:02 stow -> ../../../../opt/stow/bin/stow

The links are still there. No matter how much we try, here and there, at some point we have to swallow the bitter pill and admit to ourselves: stow is faulty and inadequately tested. It will only really function when the stow directory containing the package subdirectories is itself a subdirectory of the target directory. It's a good job we know what has been linked:

/opt # rm /usr/local/info
/opt # rm /usr/local/bin/stow

So we make a new stow directory, stow, under /usr/local and pack our stow package directory into it:

/opt # mkdir /usr/local/stow
/opt # mv stow /usr/local/stow/

Stowing

The advantage of the new stow directory is that we no longer have to specify the target directory /usr/local as well:

/usr/local/stow # ./stow/bin/stow -v stow
Stowing package stow...
LINK /usr/local/bin/stow to ../stow/stow/bin/stow
LINK /usr/local/info to stow/stow/info

This links correctly, and the deinstallation also looks reasonable:

/usr/local/stow # stow -v -D stow
UNLINK /usr/local/bin/stow
UNLINK /usr/local/info
RMDIR /usr/local/bin

Links are removed, and directories which are now empty, such as /usr/local/bin, are deleted. After relinking stow, /usr/local/bin points, as a newly made link, to /usr/local/stow/stow/bin. To install sylpheed neatly too, we must however reconfigure and compile the mail program with the new prefix /usr/local/stow/sylpheed before stow can pursue its linking work after a make install:

/usr/local/stow # stow -v sylpheed
Stowing package sylpheed...
UNLINK /usr/local/bin
MKDIR /usr/local/bin
LINK /usr/local/bin/stow to ../stow/stow/bin/stow
LINK /usr/local/bin/sylpheed to ../stow/sylpheed/bin/sylpheed
LINK /usr/local/share to stow/sylpheed/share

...and this will give you a nice surprise:

/usr/local/stow # ls -al /usr/local/bin
total 0
lrwxrwxrwx 1 root root 21 Nov 26 20:13 stow -> ../stow/stow/bin/stow
lrwxrwxrwx 1 root root 29 Nov 26 20:13 sylpheed -> ../stow/sylpheed/bin/sylpheed

Suddenly /usr/local/bin is no longer a symlink to the bin directory of the stow package, but an ordinary directory in which two new symlinks can be found: after the old link to /usr/local/stow/stow/bin was broken, the stow program was simply linked anew. If we should now yearn to get rid of sylpheed again, everything rolls itself neatly back:

/usr/local/stow # stow -v -D sylpheed
UNLINK /usr/local/bin/sylpheed
UNLINK /usr/local/share
UNLINK /usr/local/bin/stow
RMDIR /usr/local/bin
LINK /usr/local/bin to stow/stow/bin
/usr/local/stow # ls -al /usr/local/bin
lrwxrwxrwx 1 root root 13 Nov 26 20:14 /usr/local/bin -> stow/stow/bin

An rm -rf /usr/local/stow/sylpheed then ensures that (apart from personal mail and configuration files) there really are no remains left behind on the system.

.
Short notation in the shell for the directory one is currently in. Two dots (..) on the other hand designate the "parent directory" lying directly above the working directory.

ls -A
The option -A makes sure that ls also lists "hidden files", whose names begin with a dot. Unlike -a, the user does not additionally see the current (.) and parent (..) directories listed.
Info
mirror: http://sunsite.org.uk/packages/mirror
sylpheed: http://sylpheed.good-day.net
symlinks: http://packages.debian.org/unstable/utils/symlinks.html
stow: http://packages.debian.org/stable/admin/stow.html
Dr. Linux
BOOT OPTIONS
Marianne Wachholz dons the white coat this month to serve up a few tricks for LILO

No comment?

Q: Although as superuser (root) I have decommented the lines

#image = /boot/memtest.bin
#label = memtest86

for the boot manager in my /etc/lilo.conf, when I boot up the selection memtest86 still appears as a menu item and also functions. Why is my change not being accepted?
Dr. Linux: You've obviously not quite completed all the superuser's administrative work. The Linux Loader (LILO) needs to be re-installed not only after installing a new kernel, but also after any change to /etc/lilo.conf. Only then does the lilo command transfer the changes into the map file, which in a standard installation is found in the /boot directory, and create an updated version of /boot/boot.b. The start sectors of these two files on the hard disk, along with other information, are written into the boot sector. This procedure is explained by a look at the way LILO works. There is not enough space in the boot sector to store large programs. For this reason, only the first part of the boot manager is located here, as a tiny piece of code (the start program), which is executed by the BIOS of the computer. It has the task of loading the main LILO program, the file /boot/boot.b. However, at this point there is no operating system of any kind running yet which could make a driver available to access a filesystem. As such, the start program can't do anything with file names and path specifications. In the interaction with the BIOS of the computer, there is in the first instance

Decomment
In scripts and many configuration files there exists the option of having lines ignored by the program reading them in, by placing the character # at the beginning of a line. This means you can insert explanations into a file without its function being affected. The additional information remains visible to humans, but is ignored when the computer evaluates the file. This is a particularly useful option for backing up an original configuration and testing new configurations.
Dr. Linux Complicated organisms, which is just what Linux systems are, have some little complaints all of their own. Dr. Linux observes the patients in the Linux newsgroups, issues prescriptions here for the latest problems and proposes alternative healing methods.
only the option of reading sectors of a hard drive. For the LILO start code to find the main program, it turns to the sector numbers that LILO has recorded in the boot sector, at which /boot/boot.b can be found. After a successful start, the main program allows the user to choose between the various operating systems (where available). If the choice falls on Linux, LILO has to load the kernel. To do so, the file /boot/map is evaluated, which in turn contains the start sectors at which the kernel file begins, plus additional data which LILO needs to start an operating system. This also explains why any write access to files under /boot, and likewise moving files into or out of /boot, renders the current /boot/map unusable. This is true even when files are later written back under the same name into the /boot directory, because it is
more than likely that these will end up in different sectors from the old data. The necessary maps and the installation of the LILO bootloader are dealt with as root:

perle@maxi:~ > su - root
Password: your_root_password
root@maxi:~ # /sbin/lilo

Before making even minor changes it is preferable, and safer, to let the bootloader installation program lilo first "say" what it thinks of the new configuration. You can take this phrase quite literally: add the options -v (for "verbose", talkative) and -t (for "test mode") to the installation command:

root@maxi:~ # /sbin/lilo -v -t
LILO version 21.6 (test mode), Copyright (C) 1992-1998 Werner Almesberger
Linux Real Mode Interface library Copyright (C) 1998 Josh Vanderhoof
Development beyond version 21 Copyright (C) 1999-2000 John Coffman
Released 04-Oct-2000 and compiled at 21:55:07 on May 15 2001.
Warning: COMPACT may conflict with LBA32 on some systems
Reading boot sector from /dev/fd0
Merging with /boot/boot.b
Mapping message file /boot/message
Syntax error near line 20 in file /etc/lilo.conf

The options leading to the conflict, compact and lba32, are explained in Listing 1. The syntax error, on the other hand, demonstrates what happens if a newly entered option is written incorrectly (such as dafault instead of default). The option -v can be applied up to five times in one command. But since the level of "maximum verbosity" is only of use to absolute experts, it's
Boot sector On data media such as diskettes or partitions on a hard drive, the first sector is designated as boot sector. In the case of hard drives, the term used for the first sector is the Master Boot Record (MBR). BIOS Basic Input/Output System; the basic software of the computer. When the computer is fired up, this program performs a self-test, which among other things defines the graphics mode and checks the motherboard and the main memory. Users can make changes to the BIOS settings. More precise information can be found in the pamphlets that come with computers or a motherboard. You can also find further information on the Web site http://sysdoc.pair.com/bios.html. Kernel The operating system kernel consists of the components which comprise the actual operating system. Only the kernel has direct access to the resources of the computer, i.e. to disk space, memory and computing time, the keyboard etc. If a command is sent or a program invoked, the kernel loads the required program code into the main memory and starts the corresponding task. Tasks have no access to these resources – they have to ask the kernel for them. The Linux operating system kernel distributes the necessary computing time and the memory so quickly, that it gives the impression that programs can run at the same time.
better to begin with fewer -v options. If you need lilo's messages in order to track down unfamiliar error messages with the aid of the documentation (Box 1), create a text file of the output:

root@maxi:/tmp # /sbin/lilo -v -v -v -t > Liloinf.txt
The boot sector and the map file have *NOT* been altered.

In the example, LILO's messages are filed in Liloinf.txt in the current directory (in this case /tmp). Since we are in test mode – as the message line "The boot sector and the map file have *NOT* been altered." reassuringly informs us – working with LILO loses a great deal of its terror.
Box 1: LILO documentation
Along with LILO, the comprehensive English User's Guide by Werner Almesberger is also installed on your system. You will usually find this under /usr/share/doc/packages/lilo or else in /usr/doc/lilo. The file is often called user.ps.gz, or on some systems user.tex or user.dvi. You'll find everything your system can provide on the subject of LILO – directories of this name, documentation files or the binary program itself – by entering on a command line the command:

perle@maxi:~ > locate lilo

Precise information on the LILO program and on the configuration file lilo.conf is provided by the manpages ("manual pages"), from which the options proposed here also originate. The LILO Mini-HOWTO provides background information and describes the standard installation of the LILO bootloader. The Linux BootPrompt-HOWTO contains a collection of all boot parameters you can pass, with the aid of LILO, to the Linux kernel during the boot procedure. If, when booting with LILO, nothing whatsoever appears, or only part of the word, such as "LI" or even "L01010101", the SuSE database (http://sdb.suse.de/en/sdb/html/kgw_lilo_errmsg.html) is of interest, and not only to users of SuSE Linux. Additional support database entries on LILO can be found at http://sdb.suse.de/en/sdb/html/key_form.html.
Password for Linux
Q: On my computer I run Windows (for my children's games and educational software) and Linux, which can be used for connecting to the Internet. I would like to prevent my children booting Linux, as at their age they should only be using the Internet under strict supervision. I would also like to avoid the computer being switched off "improperly" if Linux were to start unintentionally. To make use of the available gaming and educational software, the children should be able to boot Windows on their own, but not Linux. Are such requirements realistic?
manpage under “GLOBAL OPTIONS”. A system section begins with an “image” or an “other” line (“per-image section”), which applies to any of the respective operating systems LILO is to boot. A few of the global options here can be overwritten on a case-by-case basis. The global option “default” is used to define which operating system starts after switching on the computer provided no user input is made. If this option does not exist, the operating system normally boots in the first system section. The password option can be used in both the global section and in the system sections of the individual operating systems. In the example configuration in Listing 1 the system section for Linux contains a corresponding entry. In this way, Linux can only be booted when the right password is entered.
Listing 1: Windows automatically and Linux only with password

# LILO configuration file /etc/lilo.conf
#
# Beginning of the global section
boot=/dev/fd0
# "boot" refers to the device in whose first sector the LILO
# boot sector is installed: here, the floppy drive /dev/fd0.
#
compact
# "compact" boots (a few seconds) faster, but does not work
# with all systems and is (usually) only worthwhile when a
# boot diskette is used to start the system.
#
lba32
# Keyword to avoid the 1024-cylinder limit. Only functions
# when the BIOS supports it, and can conflict with the
# option "compact".
#
vga=normal
# "vga" sets the VGA text mode at the start.
#
message=/boot/message
# Here LILO finds the file to be output on the screen during
# loading, e.g. an image or text.
#
read-only
# The root partition is first mounted read-only for the
# filesystem check, and only after that remounted read-write.
#
prompt
# Request for input, enabling the desired system to be
# selected when there is more than one bootable operating
# system on the hard disk.
timeout=100
# Wait-time in tenths of a second for keyboard input at the
# prompt. If no input is made before the time runs out, the
# first image in the system sections is booted, unless
# another is prescribed by the option "default".
#
default=windows
# End of the global section
#
# Linux system section
image=/boot/vmlinuz
    root=/dev/hda7
    label=Linux
    password=your_password
# To protect the password from unauthorised eyes, simply
# delete it from /etc/lilo.conf after installation.
# Alternatively, change the file permissions so that only
# root can read the configuration file.
#
# append="<parameter>"
# Passes kernel parameters, e.g. for hardware components.
# Not set here.
#
# Windows system section
other=/dev/hda1
    label=windows
    table=/dev/hda
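Two follow-up steps the listing implies but does not show (a sketch; paths are the standard ones): LILO only reads lilo.conf when the map installer is run, so the new configuration must be activated with /sbin/lilo, and the password is only safe if the file is unreadable to ordinary users. The permission change can be demonstrated on any scratch file:

```shell
# After editing /etc/lilo.conf, the boot map must be rewritten
# (as root) or the old configuration stays active:
#   /sbin/lilo
# Restrict the file so only root can read the password line:
#   chmod 600 /etc/lilo.conf
# Demonstration of the permission change on a scratch file:
f=$(mktemp)
chmod 600 "$f"
stat -c %a "$f"    # prints 600: owner read/write only
rm -f "$f"
```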
Jo’s alternative desktop: PWM
CLEVERLY CONTRIVED How would you like a window manager that supports dockapps and lets you access several applications from tabs on a single window? Jo Moskalewski investigates PWM, which does just that
On his Web site, http://www.students.tut.fi/~tuomov/pwm/, PWM’s author Tuomo Valkonen declares that his window manager “may not be the easiest window manager to get into, but most good things aren’t”. In fact, the list of features absent from PWM may quickly convince many a Linux Magazine reader that it would be better to look elsewhere. However, it’s also clear that the program’s failings are soon more than compensated for by its unusual functions.
Faulty goods
Before we get down to PWM’s juicy features, let’s pick out the bones on which some users might choke. The biggest difference between the various window managers lies in their focus behaviour and program-specific handling. PWM provides only so-called “sloppy focus”: the window over which you place your mouse responds to keyboard input, but it remains covered by other windows until you bring it into the foreground manually, either with a mouse click or via a keyboard command. Such behaviour is unfortunately something you’ll just have to put up with, as is PWM’s appearance: its window decorations light up only in simple, plain colours. Anyone who wants colour gradients, or even graphics, will have no luck with this window manager.
Another peculiarity is the lack of any window buttons with which windows can be closed or maximised. Instead, a right mouse click on the title bar activates a drop-down menu with these functions. However, if you’re expecting to see an “iconify” option here then you’ll be disappointed, as it’s just not present. To complete your misery, PWM doesn’t come with any configuration tool of its own, but leaves this task to your favourite text editor. Many of you may of course regard that as a plus: it is after all very nice to be able to use your usual editor, instead of having to burrow through a new GUI. There is yet more delight when one notices that windows may not reduce to an icon, but they can shrink to their small title bar: double-clicking on this bar with the left mouse button very practically “rolls” the application up.

deskTOPia
Only you can decide how your Linux desktop looks. With deskTOPia we regularly take you with us on a journey into the land of window managers and desktop environments, presenting the useful and the colourful, viewers and pretty toys.
Compensation
Main PWM screen
One of the most outstanding characteristics of this desktop controller is hidden not behind the left, but the middle mouse button: with it you can tear off a title bar (Figure 1) and drag it onto another window’s bar. After this action PWM displays the applications concerned in the same window. The title bar then mutates – as can be seen in Figure 2 – into tabs. Again using the middle mouse button, it is now possible to select the desired application. A newly added program is adapted to the size of the existing window. It is downright astonishing how easily this can be used to place and use any number of applications on a single desktop.
Damned nuisance
PWM also has a number of virtual desktops: while one can be devoted to the actual work, another can be used for Web browsing and the next for image processing. If space on the desktop ever runs out, you just take a new, empty one. If you want already-opened windows to be dragged along when you switch desktops, you simply pin them to the surface of the screen: a right mouse click on the title bar opens the window menu, which contains the entry “Tg stick”. If a window has this option activated, a marking in the top right-hand corner of the title bar indicates it. An application thus marked is present on all virtual desktops. If you select the corresponding menu item again, the sticker is removed and the application is left behind in the current workspace.
Menus can also be pinned on – not to take them with you on your journeys through the various workspaces, but to keep them open at all times in the visual field. A mouse click on a menu’s title bar is the stepping-stone to continuous display, and a double click (or the Esc key) lays the ghost to rest again. PWM has two menus: the “Go to window” menu accessed via the middle mouse button (the task list), which toggles between the available windows, and a start menu accessed via the right mouse button.
Figure 1: Title bar being moved
Into the bargain
Anyone who hankers for more than just these features should take a deep breath: PWM has a dock for Window Maker “dockapps”. These are special programs, reduced to 64x64 pixels, which act as icons and are designed to be clipped (docked) onto the Window Maker window manager’s interface. PWM users don’t even need to clip them on by hand: its dock automatically draws dockapps to the right spot, and does so reliably, as Figure 3 shows.
Figure 3: Dockapps
The dock always remains in the foreground – any overlapping active window pushes itself underneath. If it ever really gets in your way, you can simply minimise it with the key combination Mod1+T. On most keyboards and system configurations Mod1 corresponds to the Alt key. It’s almost superfluous to mention that PWM can also be used fully and sensibly via the keyboard. The standard assignment is shown in Table 1.
On a plate
You will soon have this window manager installed: either install a ready-made RPM package with your distribution’s package manager, or else go for the source. In the latter case the demands on your computer are gleefully meagre – you only need the XFree86 developer package (usually called xdevel or similar) and the tool make, together with a compiler. After that the source code can be built and installed as follows:

tar xvfz pwm-1.0.tar.gz
cd pwm-1.0
make depend && make
su
cd pwm-1.0
make install
Starting PWM is somewhat more intricate than installing it. Apart from a few distribution-dependent variants, however, the following path should lead to the destination. The crux is the file ~/.xinitrc (or, for a graphical login, ~/.xsession). If this file does not exist on your computer it can simply be created from scratch. Its content could look like this:

#!/bin/sh
exec pwm

Make the whole file executable, and the next time you invoke startx (or at the next graphical login) PWM will greet you – even if it does so in a highly inconspicuous way.
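The steps above can be sketched end-to-end; the file is written to a temporary directory here so that no existing ~/.xinitrc is overwritten (copy the result into place yourself):

```shell
# Generate the two-line start file and make it executable.
dir=$(mktemp -d)
cat > "$dir/xinitrc" <<'EOF'
#!/bin/sh
exec pwm
EOF
chmod +x "$dir/xinitrc"
test -x "$dir/xinitrc" && echo "ready to copy to ~/.xinitrc"
rm -rf "$dir"
```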
Getting cosy
Figure 2: Three Konquerors and one ATerm in a single PWM window
If the gentle PWM user does not provide his or her own configuration file, the window manager uses the system one under /etc/pwm/pwm.conf (or /usr/local/etc/pwm/pwm.conf). The user’s own desires can be turned into reality in the newly created file ~/.pwm/pwm.conf. You can certainly bundle the entire configuration into a single ~/.pwm/pwm.conf, but it is also possible to distribute the content among a number of configuration files, which must then be included from pwm.conf. It’s certainly not advisable to alter the system configuration files (with the exception of the start menu) – incorrect entries completely deactivate their function. Anything which is not validly configured will
later not even be available. If, for example, you do not define any mouse operations, then you won’t be able to use the mouse! For this reason it is better simply to implement your own desires at user level and otherwise leave the system configuration as it is. An example of a personal configuration file can be found at /etc/pwm/sample.conf, which you can simply copy to ~/.pwm/pwm.conf. This is the central configuration file, into which the start menu, the keyboard assignment and the mouse operation are loaded first (at this point you could, if you like, type their content in its entirety into this one file; the only thing to suffer would be clarity):

include "menus-default.conf"
include "keys-default.conf"
include "buttons-default.conf"

If you would like to set up a few defaults of your own, simply create such a file following the examples in /etc/pwm and enter the modified files into your pwm.conf. The files to be loaded can live both in the system directory and in the user’s own – PWM searches both, but prefers the user directory ~/.pwm. For your alterations to take effect, restart the window manager: click with the right mouse button on an empty patch of desktop, then select Exit followed by Restart.
Treat for the eyes
The example configuration file continues with the appearance:

screen 0 {
    include "look-brownsteel.conf"
    workspaces 6
    dock "-0-0", 1
}
As well as look-brownsteel.conf, look-beoslike.conf also comes with the package (additional look-*.conf files can be found on the coverdisc). A look into one of these files reveals that it is easy to create your own colour settings. Six virtual desktops (workspaces) should be enough for anyone, but if you like you can of course increase this number. Finally, the dock is placed in the bottom right-hand corner (geometry data “-0-0”) and aligned horizontally; anyone who prefers it vertical should set a “0” instead of the final “1”.
You can also make further entries in this section: with the line font “lucida”, the font Lucida will be used in future. opaque_move 50 means that a window larger than 50 per cent of the desktop area is moved with only the window frame, not the content, displayed (a feature which owners of decrepit old computers will value). The full list of possible tweaks for the screen section (and other sections) can be found in the file /usr/*/doc/pwm/config.txt.

Table 1: Keyboard combinations
Command             Function
Alt+Tab             Change active window, including raising
Alt+(1-9,0)         Change workspace (“0” is workspace 10)
Alt+M               Open start menu
Alt+G               “Go to window” menu – the task list
Alt+D               “Detach” menu – release an attached window
Alt+T               Maximise or minimise the dock
Alt+E               Call up a new xterm
Alt+Return          Move the active window with the arrow keys
Shift+Ctrl+W        Close application and close window
Shift+Ctrl+X        Close window without closing the application (application crashes!)
Shift+Ctrl+S        Reduce window to title bar
Shift+Ctrl+Z        Pin on window
Shift+Ctrl+V        Maximise window vertically
Shift+Ctrl+H        Maximise window horizontally
Shift+Ctrl+M        Maximise window
Shift+Ctrl+(R/L)    Raise window
Shift+Ctrl+A        Pin window to another
Shift+Ctrl+O        Window menu
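Pulling the appearance options discussed above into one place, the screen section might then read like this (a sketch; option names as given in the text and in /etc/pwm/sample.conf):

```
screen 0 {
    include "look-brownsteel.conf"
    workspaces 6
    dock "-0-0", 1
    font "lucida"
    opaque_move 50
}
```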
Magic box
It gets really interesting with the following lines:

winprop "Netscape.Navigator" {
    frame 10
}

This makes pop-up (or mischievously newly opened) Navigator windows appear automatically in one and the same PWM window. Opera’s windows could thus be combined with Netscape Navigator’s, although this is not really the intention at all.
With PWM it’s not only possible to combine windows – if required, you can also show applications without window frames. For example, to start the system monitor xosview without a frame, the following entry helps:

winprop "*.xosview" {
    wildmode yes
}

Conversely, with no instead of yes a window frame can be forced if, on its own initiative, an application wants to go without one.
Because of its dock and its unique window handling options, PWM is definitely an odd sort of program. Its fundamentally different concept should be regarded as an opportunity for a different way of doing things, rather than as a failing. Anyone who takes a good look at this one may very well in future look at the greats of the genre with a bit of sympathy.
Virtual desktop: Most window managers offer several “screens”, which can be filled with windows or applications. You can switch between these without having to close an application, but you can only see those applications that were started on the current desktop.
COMMUNITY
Internet
THE RIGHT PAGES
Janet Roebuck introduces the latest Internet bookmarks to tickle our fancy in the Linux Magazine offices.

Linux Newbie Administrator Guide
http://sunsite.dk/linux-newbie/index.htm
We’ve only just discovered this site, but it’s wonderful for the way it explains how to become an administrator for either a home system or a small office.
fax4CUPS
http://gongolo.usr.dsi.unimi.it/~vigna/fax4CUPS
So you want to do away with your old fax machine? Well, now you can with CUPS. Sending faxes is much easier with the help of this back-end: you compose your fax, print it, and CUPS will use the modem to send it out.
Celestia http://ennui.shatters.net/celestia/index.html This is a real time space simulator with amazing graphics. Add-on packages expand the planets and satellites that you can visit.
Linux Focus http://www.linuxfocus.org/English Linux Focus is a free online e-zine produced every couple of months. It’s aimed at giving out information and has a neat PDA conversion button so you can read it anywhere.
Linux Utilities http://home.xnet.com/~blatura/linapps.shtml A quick list of applications and utilities for Linux. Although it’s not been updated for some time, you’ll usually find some software to do the job.
Tom’s Hardware Guide http://www.tomshardware.com Tom’s Hardware page is the Web’s best guide for information on hardware, as well as the latest news and good articles about different hardware topics. Useful if you want to have the latest and greatest for a LAN party.
Linux Hardware Database http://lhd.datapower.com A database of compatible hardware submitted by Linux users and not tied to any specific distribution.
Quesa http://www.quesa.org Quesa is a high-level 3D graphics library, which offers binary and source level compatibility with Apple’s QuickDraw 3D API.
GnuCash http://www.gnucash.org A way to manage your finances. Be careful which version you download as some require major library updates and are not recommended. See their main page for details.
FreeAmp
WAPsh
http://www.freeamp.org A multiplatform audio player. Supports MP3 and Ogg Vorbis files as well as streaming and standard CD audio. You can use themes to make it fit in with your desktop style.
http://www.exolution.de/wapsh/index.html Got a WAP phone? Want to login to a remote host? Then this is the site for you. Create shortcuts to save typing on your phone and use secure communications.
SPICE
Linux System Labs
http://fides.fe.uni-lj.si/spice SPICE is a circuit simulator with optimisation utilities. It features an excellent graphical front-end with plotting functions and a large OPUS catalogue of semiconductors, which is constantly updated.
http://www.lsl.com Good site for news and a nice FAQ section, which has been recently updated.
Geek Stuff http://www.ewal.net Cool LCD display and IR detectors to attach to your system.
Bochs http://bochs.sourceforge.net Bochs is a PC emulator program, which lets you run a virtual Windows computer under Linux. Many operating systems will run under Bochs enabling you to play with lots of software.
Basilisk II
Nerf HQ http://www.angelfire.com/wa/rythom/nerfhq.html Nerf warfare! Raise the stakes and get kitted out for the easy way to end a Quake party dispute. This site tells you which to get and why.
http://www.Uni-Mainz.DE/~bauec002/B2Main.html An Open Source 68K Macintosh emulator. You still need a copy of MacOS and a Mac ROM image but once you have those then the world of Macs is yours.
Kalendus http://kalendus.sourceforge.net Kalendus is a Web calendar built with Perl and MySQL. It supports multiple calendars, events spanning multiple days, repeated events, and customisable HTML templates.
Audacity
Jinx http://www.jinxhackwear.com Get kitted out in style with this range of clothing for hackers and geeks.
http://audacity.sourceforge.net Audacity is a free audio editor. You can record sounds, play sounds, import and export WAV, AIFF and MP3 files, and more. Use it to edit your sounds using Cut, Copy and Paste (with unlimited Undo), mix tracks together, or apply effects to your recordings.
The monthly GNU Column
BRAVE GNU WORLD
Welcome to another issue of Georg CF Greve’s Brave GNU World. As announced in the previous issue, here are a few more games to play over the Easter holidays.

The original NetHack

NetHack – Falcon’s Eye
Falcon’s Eye is a graphical user interface for the game NetHack, which has had a well-deserved community of fans for about 20 years now. This makes NetHack one of the oldest computer games still seeing further development. NetHack is a single-player Rogue-like game in which a player aims to explore dungeons and survive encounters with often unfriendly creatures. Blizzard’s “Diablo” is a well-known commercial example of this genre. The content and gameplay of NetHack are rather complex, so players with an “if it moves, kill it” attitude will find their characters facing an untoward end pretty quickly. That said, the interface of NetHack is very simplistic: with no sound and only ASCII graphics, it challenges the player’s imagination. This certainly offers the advantage of being able to play NetHack in a console or on a terminal, but some more eye-candy is also nice at times. This is where Falcon’s Eye comes into play. It
replaces the ASCII art with a high-resolution isometric display featuring dynamic lighting effects, several interface screens and a graphical introduction sequence. It also provides sound effects for the different events in the game and allows for MIDI and MP3 background music. Falcon’s Eye also adds new ways of controlling the game, as it allows mouse use, movement via “autopilot” and context-sensitive menus. A description of objects with so-called “tool tips” allows beginners to get into the game more easily. The interface is highly customisable: not only can the screen resolution be chosen, but sound effects and key mapping can also be modified. The game content itself is delivered by NetHack, which is why the combination is referred to as NetHack – Falcon’s Eye.

The Finnish developer Jaakko Peltonen wrote Falcon’s Eye almost single-handedly. He not only did the programming and the interface of the game, but also the graphics and music. User requests and feedback were an important part of the development process, however, since they allowed him to improve the project in many ways. Jaakko first thought about this project in 1999, when he experimented with isometric graphics. He only discovered NetHack later, when he realised that his original plan, outfitting Ultima IV with a graphical user interface, would fail because Ultima is proprietary software. Development began in October 2000 and since then a lot of time and work has been spent on Falcon’s Eye – the interface alone saw five revisions.

At the moment Jaakko is busy fixing some bugs and problems and thinking about making the interface more attractive by including animations. The turn-based nature of NetHack makes this a little difficult, but at least “static” animations like flickering torches should be possible. The next version will also contain a lot of new graphics that will improve the overall attractiveness, and a zoom feature is also planned. In this area in particular there is a lot of freedom for potential volunteers willing to work on Falcon’s Eye. The complexity of NetHack makes it impossible for Jaakko to discover all problems himself, so he needs people to playtest it. I’m sure there will be no problem finding people willing to make this sacrifice.

Falcon’s Eye was written in C, with some C++ parts where they became necessary – to access DirectX, for instance. NetHack – Falcon’s Eye is tested to run on GNU/Linux, DOS, Windows (95 and later), BeOS and Solaris SPARC. Installing NetHack and Falcon’s Eye from scratch is still problematic, but fortunately there are prebuilt packages and online help available to make this much easier. Just like NetHack itself, Falcon’s Eye is released under the NetHack General Public License by M. Stephenson, which was written to be like the BISON General Public License by Richard M. Stallman, although that license has now been replaced by the GNU GPL. After all these years of development without copyright assignments, changing the NetHack license is practically impossible. But it might have been more useful to release Falcon’s Eye under the GNU General Public License, as it does not have this legacy. However, this should not keep you from having a lot of fun with Falcon’s Eye or possibly contributing to it.

Falcon’s Eye

VegaStrike
The second game of this issue, VegaStrike, is a 3D space combat simulation under the GNU General Public License that certainly doesn’t have to be afraid of competing with proprietary games. In the beginning, Daniel Horn, a student at the University of California, Berkeley, wrote a GLide-based clone of the non-Free game Wing Commander that was even mentioned on the Origin homepage. According to Daniel, this code was extremely ugly and unclean, because at the time he didn’t know a whole lot about programming. He decided to start over and write an entirely customisable space combat simulator without any connection to the Wing Commander game. Not knowing about Free Software or GNU/Linux at the time, he originally wrote it for Windows using OpenGL and D3D. It wasn’t planned, but in January 2001 he decided to port it to GNU/Linux and make VegaStrike platform-independent.

The current version of VegaStrike uses C++ and the OpenGL, OpenAL, glut, SDL and expat libraries. The latter is used to process XML data, which VegaStrike uses extensively for all configuration and communication. In Daniel’s eyes, this exclusive and wide usage of XML is one of the big advantages of VegaStrike, since it allows even non-programmers to configure and expand the game. Over the past year, VegaStrike was improved with the help of other students from Berkeley and other members of the Free Software community, which made it one of the best space combat simulations available at the moment. However, development is still far from finished. After the technical issues have been settled, VegaStrike will develop in two directions simultaneously. On one side, the explorative side and the social interaction will be expanded for players to experience alone or with friends: it will be possible to gain financial resources by trade, piracy or opening a business, and players will be able to engage in politics. On the other side, strategic aspects will be expanded, so players can control several ships at once, leading big fleets into combat.

In order to realise all these plans, the project team still seeks help in many forms. It needs people with a talent for artwork to work on the 3D models, developers willing to work on a platform-independent basis, and game testers to balance out the values of the different parts and components. Daniel would also like to find someone to further improve the physics model. Enough said: if you are interested in VegaStrike, take a look at the homepage.

VegaStrike
GSL
The GNU Scientific Library (GSL) is a modern numerical library providing a huge number of mathematical routines for C and C++. The library itself, which is available under the terms of the GNU General Public License, was written in ANSI C. The collection of over 1,000 functions provided by GSL covers areas like random number generation, fast Fourier transforms (FFT), histograms, interpolation, Monte Carlo integration, functions for vectors and matrices, permutations and linear algebra. The library follows an object-oriented design and allows loading or changing functions dynamically without needing to recompile the program. Users with a little experience in C should have no problems using the GSL, thanks to the pretty extensive 500-page documentation available online. In the near future it will also be possible to buy a handbook, which will simultaneously be available under the GNU Free Documentation License. The interface was designed specifically to allow the use of GSL in high-level languages like GNU Guile or Python, and of course the GSL is thread-safe.

The project began about five years ago, when Dr M Galassi and Dr J Theiler of the Los Alamos National Laboratory began working on a consistent and solid Free Software computational library. Since those days, it has been developed by a group of physicists with experience in the field of computational physics. In order to avoid mistakes in the algorithms, tried and tested Fortran algorithms were re-implemented for GSL whenever possible. Further plans include adding more functionality, but preserving consistency and stability is paramount; after a rather work-intensive period, the GSL can now be considered stable and ready for daily use.

Using proprietary software for the kind of task where working co-operatively in international groups with replicable results is essential does not make sense, and this is particularly true in science. Brian Gough, who filled out the Brave GNU World questionnaire for GSL, emphasised this in his email. The additional costs for software licenses, the limitations in using the software and later publishing results derived from it, as well as the lack of transparency inherent in proprietary software, make Free Software the only acceptable choice for science. The GSL very consciously chose the GNU General Public License to ensure that scientific applications would remain available to the scientific community after their publication. Out of practical considerations and respect for privacy, the GNU General Public License allows “in-house” modifications and applications that do not have to be published; only when these are distributed outside a company, house or institute must the terms of the GPL be upheld. There is an amazingly obvious parallel to the generation and publication of scientific results in this.

GNU indent
The history of GNU indent began in 1976 as a part of BSD UNIX, in order to be “donated” to the Free Software Foundation later, which makes the program almost as old as Unix itself. GNU indent helps to improve the readability of C source code and can transform different types of C source code formatting into each other. Since different developers, projects or companies often consider different types of formatting most comprehensible, this can be extremely useful. The standard setting of GNU indent is to convert source code according to the GNU Coding Standards. Additionally, GNU indent may be used to check C syntax, and so help hunt for bugs and maintain projects. The project was written in ANSI C and is released under the GNU General Public License by the FSF; its age and flexibility in particular make the program quite special. The current maintainer, David Ingamells, who recently took over GNU indent from Carlo Wood, seeks help with internationalisation, as it is only available in English and Taiwanese at the moment. If you would like to help keep one of the old-timers alive and attractive for other users, this is your chance.

GNU GaMa
The name of the next project, GaMa, is an acronym for “Geodesy and Mapping”. Geographers at least should now be aware that the project stems from the areas of geodetic surveying, remote sensing and cartography. Geographers may pardon the simplification, but this requires some introductory words for non-geographers: as most people know, the shape of our planet has always held some challenges for cartographers. In order to be able to create two-dimensional maps, different projections are used, all of which distort some features. However, even in three dimensions exact measurement is far from trivial, as a rotating geoid tumbling through space, which is what the planet Earth really is, has no fixed reference points. Every position measurement is always an error-prone relative measurement between two arbitrary points. As in many other disciplines, this is countered by taking as many measurements as possible. In geography these measurements are referred to as “observations”. The goal of geodesy, a branch of applied mathematics, is to correlate all of these observations with each other in order to generate the best possible model of reality. It needs to be taken into account that geodesy has to make a statement about the quality and error range of the result based on the quality of the initial observations. Anyone who has ever done error calculation will have a rough idea of what this means.

GNU GaMa can calculate local geodetic networks with an essentially unlimited number of observations of different observation types. Observations are specified in XML format and can even be entered into GNU GaMa via email. The programming language used for GNU GaMa is C++, and the code was kept platform-independent enough to compile on both GNU and Windows systems. Being a part of the GNU Project, GaMa is available under the GNU General Public License. Further development of GaMa seeks to create independent components communicating through XML in order to improve efficiency, and GNU GaMa will hopefully be able to calculate global geodetic networks one day. Ales Cepek began work on GaMa in 1998 but quickly sought help from students of his department and others, especially Jiri Vesely, Petr Doubrava, Jan Pytel, Jan Kolar and Petr Soucek. Help is needed to revise the documentation and to create the planned Qt GUI that Jan Pytel is currently working on; the latter will hopefully make GNU GaMa much more attractive to the end user. If you are interested in computer-based geography, please take a look at the FreeGIS homepage.

The FSF speak about Free Software
“We speak about Free Software”
The Free Software Foundation Europe launched the “We speak about Free Software” initiative in mid-November 2001. The campaign originated with companies in and around Free Software who, unhappy with the abuse and fuzziness of the term Open Source, asked the FSF Europe to point out publicly why Free Software is not only the better concept but also the better term. The central arguments are that Free Software is easier to understand, as it refers to the freedoms defining the phenomenon; that it is harder to abuse; and that its definition is more solid. Free Software also stands for additional values that are not part of Open Source. The initiative received a very positive response, especially from companies that have been involved in Free Software a little longer; ten of them immediately asked to be listed on the campaign’s Web page. The feedback from private individuals was also quite good. In one case the FSF Europe made an exception and listed the support on the page: Bruce Perens, co-founder of the Open Source movement and author of the Debian Free Software Guidelines and the Open Source Definition, asked to be listed as a supporter of the initiative. If you are interested in the initiative or would like to get your company listed, please take a look at the homepage.
See you...
So much for the Brave GNU World this month; I hope some of you came away with interesting suggestions and impressions. As usual I’d like to encourage plenty of feedback containing ideas, questions, comments and introductions to interesting projects to the usual address, because only the steady support of the Free Software community makes the Brave GNU World possible.
Info
Send ideas, comments and questions to: column@brave-gnu-world.org
Homepage of the GNU Project: http://www.gnu.org
Homepage of Georg’s Brave GNU World: http://brave-gnu-world.org
“We run GNU” initiative: http://www.gnu.org/brave-gnu-world/rungnu/rungnu.en.html
NetHack – Falcon’s Eye homepage: http://www.hut.fi/~jtpelto2/nethack.html
NetHack homepage: http://www.nethack.org
VegaStrike homepage: http://vegastrike.sourceforge.net
GNU Scientific Library homepage: http://www.gnu.org/software/gsl
GNU GaMa homepage: http://www.gnu.org/software/gama
FreeGIS homepage: http://www.freegis.org
GNU indent homepage: http://www.gnu.org/software/indent
Free Software Foundation Europe homepage: http://fsfeurope.org
“We speak about Free Software” homepage: http://fsfeurope.org/documents/whyfs.en.html
Issue 18 • 2002
LINUX MAGAZINE
91
COMMUNITY
Want to know more about NetBSD?
POWER TO THE DAEMON In last month’s Free World Richard Ibbotson took a closer look at FreeBSD. This month it’s the turn of NetBSD. Never heard of it? Then read on
BSD, just like GNU/Linux, has its fashionable version of the year. With BSD we nearly always find that it’s FreeBSD and that’s the end of it. Just lately, however, NetBSD has been showing signs of being the trendy and fashionable BSD about town. A quick look at the NetBSD site will reveal that this version of BSD will run on just about anything that resembles computing hardware and, no matter what that hardware is, things nearly always work as they should. NetBSD is somewhere between FreeBSD and OpenBSD in terms of ease of use and security. It is probably just as easy to get support for NetBSD as it is for FreeBSD and
the mailing lists that can be accessed through the NetBSD site are just about the best on the Internet.
The smallprint
Just before or just after installing the NetBSD software you should get hold of the documentation from the NetBSD site in the form of the “NetBSD Operating System: A Short Guide”, written by Federico Lupi. This is an excellent document and you should read it thoroughly. It’s available in several formats, so you shouldn’t have any problems reading it. On page eleven you can read the licence, which says: “Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: ● Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. ● Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. ● All advertising materials mentioning features or use of this software must display the following acknowledgement: this product includes software developed by Federico Lupi for the NetBSD Project. ● The name of the author may not be used to
endorse or promote products derived from this software without specific prior written permission.” This more or less means that you can do what you like as long as you retain the copyright statement with any software that you redistribute. It’s not quite the GPL that most of us are used to, but some people prefer this kind of licence to anything else. Of the features that are explained in the short guide, the following might be of interest to you: ● Code quality and correctness. ● Adherence to standards. ● Research and innovation. The target audience is said to be R&D, computing professionals and hobbyists who want a bit more from their software than a crashed computer, a blue screen and no explanation for it. NetBSD is used at NASA’s Numerical Aerospace Simulation facility on Alpha machines, which is probably one of the best recommendations for the respectability of the software. Booting from the first CD you should see the sysinst utility appear on the screen (see Figure 1). You will be asked to select one of the options; Install NetBSD might be a good one. The next screen will ask you whether or not you wish to partition the hard drive. Several other screens follow. If you get confused at this point, refer once again to the online manual, or download it in whatever format you prefer. You will be asked about several important things to do with installing the software onto your hard drive. Make sure that you get this part of it absolutely correct.
Multimedia applications on NetBSD
Figure 1: The NetBSD sysinst utility

When you have been through these, a screen will appear asking you to select a standard installation without the X Window System, one with it, or a custom installation of your own. At this point you might wish to choose KDE2, GNOME or Window Maker for your desktop. Further screens will reveal that the software sets have been extracted and you will be asked to reboot the machine. A successful start up will show that you have hardware such as Ethernet cards or modems installed, which will now need to be configured by hand as described in the guide. If you are still confused then why not subscribe to one of the online lists so that you can ask your questions. At this time it is good to consider such things as the /etc/resolv.conf file or perhaps hosts.deny. Do you want to use your new NetBSD computer as a workstation, a server or a firewall? Which do you want?

Figure 2: Installing NetBSD correctly
Fun desktops on NetBSD
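As a small illustration of the hand configuration mentioned earlier, two of the files you might edit could look like this. The nameserver address and domain are made-up examples, not values the guide prescribes:

```
# /etc/resolv.conf -- which DNS servers to ask (example values)
search example.org
nameserver 192.168.1.1

# /etc/hosts.deny -- read by the TCP wrappers; refuse every service
# not explicitly permitted in /etc/hosts.allow
ALL: ALL
```

The deny-everything-then-allow-exceptions pattern in hosts.deny is a sensible conservative default whether the machine ends up as a workstation, a server or a firewall.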
Working day desktops on NetBSD
Security
As ever, the perennial questions of network security and the untrustworthy Internet connection come along next. You must have a firewall on your network somewhere if you wish to use the Internet. Fortunately for us, the IPF syntax for BSD firewalls is much simpler than that of iptables under Linux, and it can actually be just as sophisticated. The firewall HOWTO is probably one of the best documents on the Internet; you can see the Web address for that document below. A simple intro to IPF begins with...
block in all
pass in all
and then it goes on in some detail about how to build up chains of rules so that you can be reasonably sure that Grandma’s shopping list isn’t being read by the wrong people. The untrusted outward-pointing device is the thing that creates a great deal of interest and discussion. NetBSD works fine with modems, ISDN and ADSL as well as Ethernet cards. My own computer was configured to work with ADSL and, after a certain amount of command line adventure at the configuration stage, it now works fine. If you do have problems then persevere and ask questions – it’s worth the effort. Now that you have an installed and working system you can download updates from the Web with some simple command line arguments. Maintenance is a very simple task, carefully thought through by the developers who wrote the software. The package management system is an excellent example of complex technology designed to reduce an update to a no-brainer decision. NetBSD started out in 1993 with the 0.8 version. At the time of writing version 1.5.2 is in use and there is some talk of another release at some point. Although NetBSD can be used on just about any hardware, for the purposes of this review it is assumed that the person installing NetBSD for the first time will use i386 hardware, as this is readily available at sensible prices. Platforms that can be used with NetBSD are: acorn32, algor, alpha, amiga, amigappc, arc, arm26, arm32, atari, bebox, cats, cesfic, cobalt, dnard, dreamcast, evbsh3, hp300, hpcarm, hpcmips, hpcsh, i386, luna68k, mac68k, macppc, mipsco, mmeye, mvme68k, netwinder, news68k, newsmips, next68k, ofppc, pc532, playstation2, pmax, prep, sandpoint, sgimips, sparc, sparc64, sun2, sun3, vax, walnut, x68k and, last but not least, x86_64. Quite amazing, eh?
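To make the two IPF rules quoted earlier a little more concrete, here is a sketch of what a small /etc/ipf.conf for a home gateway might grow into. The interface names are illustrative assumptions (lo0 for the loopback, ep0 standing in for the outward-pointing device, which will differ on your hardware), so consult the firewall HOWTO before using anything like this:

```
# /etc/ipf.conf -- illustrative sketch only
block in all                  # start by denying all inbound traffic
pass in quick on lo0 all      # always trust the loopback interface
# keep state on outbound connections so their replies are let back in
pass out quick on ep0 proto tcp from any to any keep state
pass out quick on ep0 proto udp from any to any keep state
# drop inbound packets claiming a private source address (spoofing)
block in quick on ep0 from 192.168.0.0/16 to any
```

Rules are evaluated in order, with quick stopping the search at the first match, which is what lets a short file like this express a sensible default-deny policy.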
Where can I get NetBSD from?
BSD Central might be a good place to start. Linux Emporium also sells a set of CDs. You can also have a look at the NetBSD site for a list of distributors. You might be one of those lucky people with a broadband connection, in which case you can download the whole thing. For more information have a look at the useful URLs below. Next month we take a look at OpenBSD and what it’s all about.
Info
NetBSD site: http://www.netbsd.org
Documentation: http://www.netbsd.org/Documentation/
Mailing lists: http://www.netbsd.org/MailingLists/
Linux Emporium: http://www.linuxemporium.co.uk
BSD Central: http://www.bsdcentral.com
The latest stuff: http://www.daemonnews.org
NetBSD with ISDN: http://www.netbsd.org/Documentation/network/isdn/
NetBSD with ADSL: http://www.xsproject.org/speedtouch/
Firewall HOWTO: http://www.obfuscation.org/ipf/ipf-howto.txt

NetBSD in action