linux magazine uk 10




TECHNOLOGY OVERVIEW

INTERVIEW

RED HAT 7.1

GURU DAY JOHN SOUTHERN

Red Hat recently held a Guru day to show off the features of its new release that are often missed.

Planting a flag The day started with a talk by Alan Cox setting out why Free Software matters. Timing is everything: that very week both Eazel and BRU had folded. This amply showed the advantage of open source, in that we have the code for Nautilus and so its development can continue, whereas BRU, although an excellent product, was proprietary, so we are unlikely ever to see the source or move development forward. After a quick look at US versus European patent law, Alan moved on to the history and culture of Open Source. This may appear to have had little to do with Red Hat 7.1, until it finally sinks in that Red Hat truly believes in Open Source as a social code, not just a business model: they are not just willing to tick the boxes agreeing to OS, but also to live and work the dream.

Smooth and secure The day then continued with a hands-on look at the new boxed set. Red Hat do have a slight upper hand compared to other distributors in that they employ eight out of the top ten kernel developers. This means that the developers who would know how to break it test the RH distribution. Compared to version 7.0, the latest release has moved on with some dramatic new features, most of them due to the incorporation of the 2.4 kernel. Little features such as SMP scalability increased to 8 processors and 64GB RAM support may not appeal to most users, but they do mean that RH is looking both to the future and to the Enterprise market. Like most of the new distributions the graphical install was painless, and I was most impressed by the KickStart feature, which allows a setup to be predefined so that re-installing is completely automated. New configuration tools made setting up Apache, BIND and ADSL easier. It was nice to see

Ogg Vorbis being supported along with a whole raft of additional security features. A very brief overview was then given of Stronghold and Interchange. Stronghold is Red Hat's new secure Web server, based on Apache with additional security. Interchange is developed from the e-commerce software by Akopia: very flexible, fast and with a choice of options ranging from payment acceptance methods to databases. The Red Hat Network provides simple updates and system configuration. My initial fears that this would force users to either pay or do without the updates were met with disbelief. It was categorically stated that Red Hat will always offer all updates free via FTP, as it always has done. The day went very quickly and aroused sufficient interest for me to have finally taken the plunge and converted to Red Hat. All painless, especially the printer and graphics card support. The only package that I have since upgraded has been Gnome, from 1.2 to 1.4. Overall, I would recommend Red Hat 7.1, and having been shown some of the little features that normally you skip over I am now quite biased and proud of it. ■


FEATURE

RUBY

Object-oriented script language, Ruby

RED STAR TJABO KLOPPENBURG

Ruby is an object-oriented scripting language which has won itself a place among the established languages Perl, Python and maybe PHP, too. Ruby, a development from Japan, also has a certain entertainment value, which makes getting started that much easier. From Japan, the Land of the Rising Sun, comes a new star in the sky of scripting languages. But does the world need a new one? After all, there are already established languages such as Perl, Python, Tcl and many more. Ruby is a new attempt to produce a well-thought-out, object-oriented scripting language which is easy to use. Since it is very recent, there are not yet as many libraries available as there are for Perl or Python, yet it is already usable for many purposes. This article introduces the concepts by means of a few examples and shows how compact and yet neatly structured Ruby can be.

Jack of all languages? Ruby was developed between 1993 and 1995 by Yukihiro (Matz) Matsumoto in Japan; Perl 4 and Python already existed. While Python is a hybrid language, with functions for procedural programming and with objects for OO programming, Ruby is purely object-oriented: there are no functions, just methods, and everything is an object. There is a substitute, though, for

functions: a method defined outside a class turns into a private method of the main class Object, as a result of which it becomes available globally. The syntax and the design philosophy are heavily based on Perl, so there are statement modifiers (if, unless, while and others), integral regular expressions, special variables such as $_ and the difference between "..." and '...' strings. Ruby thus helps itself to various programming languages and combines them into a new one. Unlike in Perl, $, @ and % do not refer to different types of data but to the scope of a variable: normal local variables manage perfectly well without characters such as $, % or @, $var refers to a global variable of any type and @var to an instance variable of any type in an object. There are no semicolons at the ends of lines in Ruby programs.
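A short sketch of our own (class and variable names invented for illustration) shows the three scopes side by side:

```ruby
$counter = 0            # $ marks a global variable, visible everywhere

class Greeter
  def initialize(name)
    @name = name        # @ marks an instance variable of this object
  end
  def greet
    text = "Hello, #{@name}!"   # no sigil: an ordinary local variable
    $counter += 1               # the global is reachable here too
    text
  end
end

puts Greeter.new("Ruby").greet   # prints "Hello, Ruby!"
puts $counter                    # prints 1
```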

Object-oriented As with all OO languages, practical standard classes are also available in Ruby, and they are easy to use. Of course you can also define classes yourself, including inheritance and private,



protected and public methods. But the object orientation of Ruby goes a bit further: to cut a long story short, everything that can be modified is an instance of a class (or else a reference thereto). This applies even to completely ordinary numbers, so for example -42 is an instance of the Integer class and thus also has access to the methods defined in it. As a result of this object orientation, which extends as far as the basic elements, one obtains information about a number or a string not through a function such as sizeof(scalar) or int2str(int), but directly from the object, through one of its methods. For example, in order to obtain the string representation of 42, you can call the method 42.to_s. This is, of course, a banal example, but the (overwritable) method .to_s is already defined in the class Object, the mother of all classes, and is thus available in absolutely every class, including your own. As well as the usual class and instance methods, Ruby offers the option of using so-called iterators. These are methods which iterate over the individual elements of an object container: the elements of an array, a hash, the lines in a text file or again, your own container class. Even the Integer class, itself not actually a container, has the use of helpful iterators such as .upto(num) or .downto(num), which iterate starting from the object in steps of one to num. In this case the iterator is given a block of code, which is called in sequence for each element. To show how this works, let's take a look at a small sample of code, where the .upto iterator of integers is put to use to output the squares of the numbers from one to ten:

#!/usr/bin/ruby
Max = 10             # constant
1.upto(Max) do |i|   # iterated element
  print "#{i}^2 = ", i*i, "\n"
end


In the first line we define a constant Max, whose name begins with a capital letter. .upto is the iterator already mentioned, which in this case iterates from 1 to Max. do and end enclose the code block which is passed to the iterator. Alternatively the block can also be bracketed with { and }. |i| assigns the element of the current iteration to the variable i; the print line finally outputs i and i*i. Bear in mind that normal variables have to start with a lower-case letter and constants with a capital letter. If there is an objection on the tip of your tongue that for is much more universal, I admit it, you're right. But since in nine out of ten cases an increment of one is being counted up or down, the upto method (and downto, step and others) is a genuine advantage. As we are about to see from a somewhat more complex example, iterators in many other cases also allow for highly efficient programming. But first take a look at the line Max = 10. We need to visualise once more the idea that in this case the 10 is not simply a ten, but an instance of the Integer class. The assignment operator creates a new instance and Max is then assigned a reference to it. Let's look at a somewhat more realistic problem: you want to rename all the files in the current directory ending in .MP3, so that they end in .mp3. Since the mv command from Linux is of little use in this everyday problem (unless you speak fluent Bash), we shall build our own script in Ruby. The task: one has a directory, wants to read out the contents and, depending on the form of each entry, rename the file. While in Perl one has to tussle with functions such as opendir and readdir, the problem can be solved in Ruby exactly as we have formulated it: one takes a Dir object, asks it for its entries and renames files as appropriate.
So now look at the following code:

Dir.open(".").entries.each do |entry|
  new = entry.gsub(/\.MP3$/, ".mp3")
  File.rename(entry, new) if new != entry
end

This is really easy to understand: Dir.open provides a new instance of the class Dir, which we do not assign to a variable but re-use immediately. The entries method of the Dir instance provides an array with all directory entries in the form of string objects, and lastly each is the standard iterator over all elements of a container (here: an array). All elements of the array land one after another in the variable entry. The string method gsub replaces .MP3 with .mp3 and the result lands in the variable new. The last line finally uses the class method rename of the class File to undertake any renaming which may be necessary. Class methods, unlike instance methods, can be called without an existing instance of a class. The attached if expr is a construct familiar from Perl,



Listing 1: The Tribble class

class Tribble                      # Start with a capital letter!
  def initialize                   # constructor
    @conditions = Array.new        # possible conditions
    @condition = nil               # default condition
  end

  def setValues( arr )             # enter permitted values
    if (arr.type.to_s == "Array")
      arr.uniq!                    # kill duplicated elements
      @conditions = arr if arr.size == 3
    end
  end
  protected :setValues

  def get                          # method, read out condition
    @condition
  end

  def set( new )                   # method, set condition
    @condition = new if @conditions.include?(new)
  end

  def ==(other)                    # comparison operator
    return nil if get.nil? or other.get.nil?
    @condition == other.get
  end
end

which sometimes makes your hair stand on end. It would of course have been possible to use a normal three-line if ... end construct, but the notation selected is often clearer, especially with single-line if blocks. Even if the example may have appeared rather odd at first, the realisation is very much in line with how one thinks, and one becomes accustomed correspondingly rapidly to this type of programming. Another option worth mentioning here is that of passing small programs on the fly to the interpreter with the -e '...' parameter. But in that case the individual commands must be separated by semicolons.
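To make the statement modifiers mentioned above concrete, here is a small sketch of our own:

```ruby
x = 11

puts "big"   if x > 10       # runs: the condition holds
puts "small" unless x > 10   # unless is the negated form; skipped here

# while also works as a modifier and repeats the statement on its left
x -= 2 while x > 0
puts x                       # prints -1
```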

Your own classes Now that we have seen how standard classes and iterators are used in two simple examples, we will look at how you can define your own classes and, later, iterators in Ruby. In order also to show the more specialised properties of classes in Ruby, we shall look at the implementation of a class whose instances can take one of three stable conditions: the Tribble class. Why none of the common languages has come up with this sort of data type is remarkable in itself.

Such classes are not actually in any way absurd: one could, for example, use such a class with the conditions should, must and either/or to express the wishes of a customer with respect to an article, in order then to seek out the most suitable item for precisely this customer. Or how about a class that helps us make difficult decisions by using the conditions yes, no and ye-no? We will next define a general class Tribble, from which we will then, for test purposes, derive the class YNTribble, which will help us to make decisions. Look at Listing 1 for a first simple class Tribble. The name of the class must begin with a capital letter. A method is defined using def; initialize is the reserved name for the constructor. This is where we create the instance variables @conditions (possible conditions) and @condition (current condition), which can be accessed by all methods within an instance. The method setValues serves to define the possible conditions of the Tribble. This method is protected, so that it can only be used in Tribble and derived classes. .get serves to read out the current condition, .set(value) sets the current condition. In .get there is no return, as it can be left out: the value of the last expression is automatically the return value of the method. The third method finally defines the == operator for the Tribble class: if the condition of one of the Tribbles is nil, then the result is also nil; otherwise the conditions of the two Tribbles involved are compared and the result returned as true or false. The .nil? method is defined in the class Object and only returns true if the object whose nil? method is being called is nil. In other words: a variable is never undefined, but is at least an instance of NilClass. To have one usable class to play with, we shall derive from Tribble the new class YNTribble, which makes life easier for us with analytical functions.
The new class is to have the functionality of the Tribble class and be able to take the conditions yes, no and ye-no. In Ruby a class can only inherit from one other class. To make additional methods available in a class, modules can be integrated which are defined outside the class. In our new class (Listing 2) we define a new constructor, which calls the constructor of the parent class and defines the possible conditions of instances of our class. It is possible here to specify a start condition as an option; the default is yes. Also, we define a new operator +, which links two instances of YNTribble and, depending on their conditions, comes up with a new YNTribble instance. The logic can be accommodated in three lines by using statement modifiers: if two yes Tribbles are combined, the result is yes, yes + no turns into ye-no and the rest becomes no.
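The module mechanism just mentioned can be sketched in a few lines (module and class names are our own invention):

```ruby
# A module bundles methods outside of any class hierarchy...
module Chatty
  def describe
    "I am a #{self.class}"
  end
end

# ...and include mixes them into a class as instance methods.
class Gadget
  include Chatty
end

puts Gadget.new.describe    # prints "I am a Gadget"
```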



This means there is now a usable Tribble class available for first tests. To do this, we instantiate YNTribble twice and compare the contents of the objects:

t1 = YNTribble.new("yes")
t2 = YNTribble.new("no")
print "Comparison: ", t1 == t2, "\n"
print "yes + no = "
t3 = t1 + t2      # yes + no = ?
puts t3.get

The output is:

Comparison: false
yes + no = ye-no

Obviously, since yes + no now produces a crystal-clear ye-no, what else? The instruction puts outputs the transferred string followed by \n.


Listing 2: YNTribble, the defined ye-no

class YNTribble < Tribble             # inherits all methods
  def initialize(condition="yes")     # optional default value
    super()                           # calls the constructor of the parent class
    setValues( ["yes", "no", "ye-no"] )   # array on the fly...
    @condition = condition if @conditions.include?(condition)
  end

  def +(other)                        # a self-defined operator: t3 = t1 + t2
    return YNTribble.new("yes")   if (@condition == "yes") and (other.get == "yes")
    return YNTribble.new("ye-no") if (@condition == "yes") or (other.get == "yes")
    return YNTribble.new("no")
  end
end

Home-made iterators Where, then, are the promised iterators? Well, we shall build ourselves a container for YNTribbles, HTArray, with an iterator each, which can iterate over all elements (Tribbles) without bothering the user with tedious details about the internal storage of the Tribbles. Also, we include the two methods add(object) and get(num) to store and read out Tribbles.

Listing 3 shows a simple implementation, which would store not only Tribbles (arrays can hold any objects) and which in this case could actually be replaced by a normal array. But using one's own class with iterators does have the advantage that later we could change the in-class realisation of the storage without any problem, while the interface remained the same externally.
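To illustrate the point, here is a hypothetical variant of such a container that keeps its elements in a Hash instead of an Array; from the outside, add and each look exactly the same:

```ruby
class HashStore
  def initialize
    @h = {}          # elements live in a Hash, keyed by insertion order
    @size = 0
  end

  def add( obj )     # same interface as an array-backed container
    @h[@size] = obj
    @size += 1
  end

  def each           # same iterator interface as before
    0.upto(@size - 1) { |i| yield @h[i] }
  end
end

s = HashStore.new
s.add("yes"); s.add("no")
s.each { |e| print e, " " }   # prints "yes no "
print "\n"
```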




Listing 3: An array made of Tribbles

class HTArray
  def initialize
    @arr = Array.new       # data in an array
  end

  def add( tribble )       # attach tribble
    @arr.push( tribble )
  end

  def get( num )           # read out tribble
    @arr[num]
  end

  def each                 # iterator over all tribbles
    @arr.each { |tribble|
      yield tribble        # call up block with tribble
    }
  end
end

The definition of the iterator method is relatively simple: one writes a method which takes all the stored objects in turn and passes them on to the yield command. When the iterator is used, the transferred code block stands in for the yield for each object, and the parameters of the yield command land in the variable specified in |...|. Listing 3 shows one possible implementation of HTArray with an .each iterator. You can try out the HTArray with the following snippet of code. If you have saved the listings for YNTribble and HTArray in individual files, you must integrate the class definitions using require:

require 'yntribble.rb'
require 'htarray.rb'

ha = HTArray.new
ha.add( YNTribble.new("yes") )
ha.add( YNTribble.new("ye-no") )
ha.add( YNTribble.new("no") )

print "Compare 'yes' with all values:\n"
ha.each { |t|      # yield tribble, tribble -> t
  print((t + YNTribble.new("yes")).get, "\n")
}

Previously we left off the brackets from print, but as soon as the expressions after print become complicated you should include the brackets, so that the interpreter knows which part of the text after print is a parameter for the call. In case of doubt, ambiguous points in the source code can be displayed with the aid of ruby -w.
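Stripped of the container around it, the yield mechanism boils down to this little sketch:

```ruby
# A method becomes an iterator simply by calling yield:
# each yield runs the block supplied at the call site.
def twice
  yield 1
  yield 2
end

twice { |n| print n * 10, " " }   # prints "10 20 "
print "\n"
```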

Controlled exceptions: Exceptions When interpreting Ruby programs, it is possible for type conversions to fail (if, unexpectedly, a variable turns out to be nil) or for a file to be read to be missing. In these exceptional situations the interpreter raises so-called exceptions, which are also familiar from Java and C++. If one does not catch an exception oneself, the program automatically shuts down. But this is not necessarily in one's own best interests, as it can result, for example, in painstakingly calculated figures or tedious user inputs being lost. Therefore, exceptions can be caught in the program, so as to react appropriately. The following small example reacts, when storing a result, to the problem that the file is not writeable from time to time. This effect sometimes occurs if one starts a program from the home directory of another user, who has hard-wired the name of the destination file. Exceptions are caught using begin/rescue/end:

resfile = "/home/root/.result"
begin
  file = File.open(resfile, "w")
  # save data
rescue
  puts "error!"
  print "Specify a writeable "
  print "file for the results: "
  resfile = STDIN.gets.chomp
  retry               # agaaaain!
ensure
  file.close          # close file
end

If resfile is not writeable, File.open(..., "w") raises a corresponding exception. rescue catches it, and we try to get round the problem: the user is asked to enter a new file name, which is read in via STDIN.gets (chomp strips the trailing newline). The effect of retry is that the critical block runs through once more. In this example this could go round in circles forever, until the user finally specifies the name of a writeable file. The code after ensure is executed in any case. If one uses both rescue and ensure, rescue must appear first. Catching exceptions is one thing, but of course one can also raise exceptions oneself. This is done simply via the command raise, to which a descriptive string is given. This can then be read out via $!
in the exception treatment:

begin
  raise "unexpected error"
rescue
  print "Exception!: ", $!, "\n"
  print "(", $!.type, ")\n"
end

Exceptions are instances of exception classes, which are all derived from Exception. These in turn are children of the parent of all objects (Object) and thus can use the method .type, with which the



name of the class of an instance can be read out. So we can find out the type of exception in the exception treatment:

begin
  a = 1 / 0
rescue
  print $!.type, " : ", $!, "\n"
end

In the first example it still looked as if $! was simply a string. But it is obviously a class instance, and print has somehow called an appropriate method on $! in order to display the string. We should by now no longer be surprised that this is the standard method .to_s.
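Since exceptions are ordinary class instances, one can also catch just one particular exception class and bind the exception object to a variable of one's own choosing. A minimal sketch (note that current Ruby spells the .type used above as .class):

```ruby
begin
  1 / 0                         # raises ZeroDivisionError
rescue ZeroDivisionError => e   # catches only this class and its subclasses
  print "caught: ", e.message, " (", e.class, ")\n"
end
```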

Graphical Allsorts In case the properties described so far have given the impression that Ruby is purely a console language: this is not so. Ruby has integral support for Tk, with which an experienced programmer can at least quickly knit together some simple GUIs. Unfortunately there is no proper graphical snap-together GUI tool yet, although there is already an interface to Glade, the Gtk GUI builder, and, of course, a direct interface to Gtk+, which can be found at www.ruby-lang.org/gtk/en complete with documentation. For programming Tk you should in any case take a look at the obligatory reading for Ruby, for example the downloadable online version of the just-published book "Programming Ruby".

All about Ruby online The fairly expensive printed book is available in English. It is very up to date, which sadly cannot always be said about the official reference documentation. Ruby is after all a Japanese development, and any current documentation usually comes out first in Japanese and then, with a bit of luck, is translated into English. Even the XML sources of the book are available. The XML code is hideous, though, as confirmed by one of the authors, who extracted it from TeX code. Nevertheless it is possible to patch together very passable brief references from the XML files or to adapt the layout of the book to one's own preferences. Unfortunately the appropriate XSLT code for the conversion does not come with the book; this is something you will have to work out for yourself. For example, you could use the class xmlparser or one of the other XML extensions found in the Ruby Application Archive or at Ruby Mine. When doing so it is often advisable not to follow the download link immediately, but to look for a more recent version on the homepage of the respective extension, since every author updates his homepage first.


Info

Which language is better?: http://www.perl.com/pub/2000/12/advocacy.html
Ruby homepage: http://www.ruby-lang.org/en/
Thomas, Hunt, Matsumoto: Programming Ruby; Addison-Wesley; ISBN 0201710897
Online version of the book: http://www.pragmaticprogrammer.com/ruby/
ruby-gtk: http://www.ruby-lang.org/gtk/en/
Ruby Application Archive: http://www.ruby-lang.org/en/raa.html
Ruby Mine: http://www.ale.cx/mine/raa.html
Ruby-FAQ: http://www.ruby-lang.org/
Open Directory Project: http://www.dmoz.org/Computers/Programming/Languages/Ruby ■

For the rest, it's written in the book This brings us to the end of the article. One or two things have not yet been addressed, for example threads, since this article is not intended to be a substitute for a book. Information about threads and about testing in Ruby can of course be found in the online book mentioned, and also in diverse other online documents which any good search engine should be able to find on the Net. An up-to-date reference can be obtained via the tool ri, which can be had from the Application Archive. The call ri File, for example, spits out all methods of the File class. The console one-liner ruby -e 'puts File.methods.join("\n")' does much the same, but without any further explanations. The English Ruby mailing list is also good for additional information (details on the Ruby homepage). Anyone who reads it will gain an overview of the latest developments and can even take part in further development, as "Matz" Matsumoto makes enquiries on the list before making any changes. Finally, flat-rate and other hard-core surfers can also set a course for the channel #ruby-lang on IRC via the server irc.openprojects.net, in which there are always a few Ruby enthusiasts, including the authors of the online book. Anyone who wants to go further with Ruby should in any case read up on the difference between the code blocks of iterators and the code within while, until and if, and which variables can be changed locally and which cannot. Have fun. ■

The author Tjabo Kloppenburg is, strictly speaking, studying electrical and electronic engineering at the University of Siegen, although he has tended to specialise in IT. Is there anything nicer than a bit of DIY with IT scripts? He thinks not.


FEATURE

SECURITY

Market Survey of Firewalls

LABYRINTH JAN SCHUBERT

Firewalls have now developed into a standard utility for network protection. Equally comprehensive is the number of systems on the market. We present a survey...

Figure 1: The Astaro interface also provides, apart from configuration of the firewall, exhaustive reporting. The picture shows throughput in the internal network.

The market presents a truly bewildering selection of different firewalls. But often, especially with Linux firewalls, the only differences are in the details, such as a different administration interface, behind which a standard kernel does all the actual work. In our market survey, we show a selection of different Linux systems, together with a few well-known commercial firewalls. The spectrum runs from pure software solutions to special appliances, and from packet filters via application level gateways up to multifunction devices with their own FTP and Web servers. We have deliberately left out prices. These depend not only on various licence models, but also on additional hardware, personnel expenditure and suchlike. Although project costs can easily reach a six-figure amount, the price of the firewall ought to play a fairly unimportant role. More important are the respective requirements, serviceability and, above all, the available know-how. All details are based on information supplied by the manufacturers. In the case of software solutions,


which require platforms other than Linux on Intel CPUs, this is also stated.

Astaro Security Linux: Special Linux distribution with firewall functionality Astaro Security Linux is a specially protected Linux distribution. On the hardened operating system there are packet filters with stateful inspection, application level gateways, content filters (banner, virus scan), VPN and a Web-based administration front-end (see Figure 1). The front-end in the test version unfortunately offers only limited options. The system is offered for download free of charge as an ISO CD image, but firms must acquire a licence later. Costs are in line with the number of computers to be protected. Complete Astaro solutions are offered on the Cobalt Raq3i. Also of interest is the paid Up2Date service, with which, for example, patches and the latest virus patterns can be loaded automatically. LinkX offers a similar product with the Securepoint firewall server (http://www.securepoint.de). Licence: Basic GPL, partly proprietary. Free of charge for private users. http://www.astaro.de/products



Biodata BIGfire: Stand-alone firewall appliance Biodata, with BIGfire, offers a firewall as a black box. The 19-inch plug-in offers packet filters with stateful inspection, NAT and VPN. Software and hardware are both in-house developments by the manufacturer, who has specialised for many years in security. In combination with BIGapplication it is possible to expand to application level gateways and to integrate products from third-party suppliers (such as the Cobion Webfilter). Licence: Commercial http://www.biodata.com/de/products/bigfire/biodata_bigfire.cphtml

Checkpoint Firewall-1: Proven software solution with a high market share With Firewall-1, which has been established for many years, the Israeli manufacturer Checkpoint has achieved very high market penetration. Checkpoint was a leader in the development and implementation of new technologies, especially stateful inspection and the company's own INSPECT technology. In this process, all the necessary information is extracted between the data link layer and the network layer (OSI layers 2 and 3) and compared with the security policy. If the data does not contradict it, it is passed on to the IP layer (OSI layer 3) of the operating system. Potential errors in the TCP/IP stack of the operating system therefore have no effect on the firewall rules. In the latest version 4.1 Checkpoint offers, in addition to the packet filter, a range of integrated application level gateways, plus support for NAT and VPN. Components from third-party suppliers can be integrated through the company's own standard OPSEC (Open Platform for Security). With the exception of a few control commands, a special interface has to be used for configuration (Figure 2). The connection between GUI and firewall is provided by a management server; this allows several firewalls to be administered centrally and consistently. Like many other commercial products, Firewall-1 is also certified by the ICSA (http://www.icsalabs.com/html/communities/firewalls/certification/vendors). Nokia offers an Intel-based appliance in the IP range with Firewall-1 (http://www.nokia.com/securitysolutions/network/firewall.html). The combination of established firewall software and special network and routing functionality should meet high standards of security and speed. Licence: Commercial Platforms: HP-UX, AIX, Linux, Solaris, Windows http://www.checkpoint.com/products/firewall-1

Figure 2: With Checkpoint’s Visual Policy Editor, even complex firewall policies can be created simply and easily.

Cisco PIX 500: High-performance complete solution with high market penetration The PIX firewall series builds on Cisco's many years of know-how in network management and routing. The extremely high data throughput stands out as a particular characteristic. There is support for stateful packet filtering, NAT and VPN. In the latest version 5.3, remote access via secure shell is possible. Also, the PIX uses its IDS (Intrusion Detection System) to recognise some well-known methods of attack and block them. Other established manufacturers in the field of communications are also offering complete solutions, such as 3COM with the SuperStack III and Zyxel with the ZyWALL 10. Licence: Commercial http://www.cisco.com/go/pix

GeNUA GeNUGate: Several firewalls in one The GeNUGate actually consists of two or more firewalls. This is intended to reduce the high expense involved in the integration and servicing of multi-level firewall scenarios. The solution includes, in the basic expansion stage, a packet filter and an application level gateway (see Figure 3). At this expansion stage GeNUGate separates four networks which are independent of each other. As a result of the modular structure, in the form of processor boards in an ordinary commercial 19-inch casing, redundant, hot-standby and more powerful expansions are possible. A Web-based administration front-end makes it easy to configure these complex setups, in which, for example, a separate nameserver operates in each sub-network.



Figure 3: Schematic structure of the GeNUGate basic expansion level. External network, secureserver network and admin network are linked by means of an application level gateway. The internal network is also protected by a packet filter.

The basic software is the commercial BSD version BSD/OS 4.01 (4.2 planned). This operating system is a secure basis, especially thanks to its special access permissions at file level, which go beyond the normal options of UNIX. The open architecture allows manual configuration and individual software and hardware expansions. The product is currently certified by the German Federal Office for IT Security to the international standard ITSEC E3 High.
Licence: Commercial
http://www.genua.de/produkte/ggfamilie

Linux Netfilter/Iptables: Professional packet filter in the Linux kernel 2.4

Figure 4: Solsoft NP Lite, programmed in Java, can configure Linux firewalls, with not only the rules but the entire network topology being clearly displayed with various objects.

As successor to the tried and tested Ipchains (in kernel 2.2), Iptables offers simplified configuration as well as additional options. Instead of the previous three chains, forwarded data packets now need to run through only a single chain.

Additional features now include support for stateful inspection, filtering by MAC address and very comprehensive NAT functions. Application level gateways and VPNs can be realised with additional solutions, for example with the TIS firewall toolkit (see below) or the IPSec implementation FreeS/WAN. Configuration is done solely on the command line, but add-on products make GUI-supported configuration possible too; examples are Solsoft NP Lite (see below) and Firewall Builder. Older rules, which were created for Ipchains (kernel 2.2) or Ipfwadm (kernel 2.0), can continue to be used: Iptables offers its own compatibility layer for this purpose. Ipfilter by Darren Reed is suitable for various BSD versions and also a few commercial UNIX systems (http://www.ipfilter.org).
Licence: GPL
http://netfilter.samba.org
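To give a taste of the stateful filtering mentioned above, here is a minimal rule-set sketch. The open port (SSH) is an example choice, not taken from the article; the chains and the `-m state` match are standard iptables features of kernel 2.4:

```shell
# Minimal stateful filter sketch for iptables (kernel 2.4).
iptables -P INPUT DROP               # default policy: drop incoming packets
iptables -A INPUT -i lo -j ACCEPT    # always allow loopback traffic
# Let replies belonging to connections we opened back in:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Accept new inbound SSH connections only:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
```

These four rules already express the core of stateful inspection: the connection-tracking code, not the administrator, decides which reply packets belong to a legitimate session.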

Network Associates Gauntlet: Comprehensive application level gateway

Although packet filters and NAT are supported in the latest version 6.0, NAI recommends use as a pure application level gateway. If necessary an additional packet filter can be integrated before or after the Gauntlet firewall. Gauntlet contains proxies for a great many current application protocols, including database links and print services, and there is even a UDP proxy. The basis for its development was the TIS firewall toolkit. Unlike with older versions, unfortunately, it is no longer possible to peek into the source code.
Licence: Commercial
Platforms: Solaris 8, HP-UX 11.0
http://www.pgp.com/products/gauntlet

Smooth Wall: Linux distribution with firewall functionality

Designed as a complete distribution, Smooth Wall is based on a special kernel 2.2.18 with IPSec support. Smooth Wall is designed as a more secure router, with Web-based configuration of packet filters, proxy, DHCP and PPP (via ISDN and DSL). As a special highlight, a Java SSH client is included. Smooth Wall can be downloaded free as an ISO CD image.
Licence: GPL
http://www.smoothwall.org

Solsoft NP: Management platform for several firewalls

Solsoft NP (Net Partitioner) serves to manage various firewalls. In addition to the various Linux filters, the graphical tool supports Checkpoint Firewall 1, Cisco PIX and the configuration of access control lists for diverse routers and switches. NP can transfer the resulting configuration to the firewalls. The aim is a configuration which is as error-free and intuitive as possible. The free version, NP Lite 4.1 for Linux (see Figure 4), is interesting: it supports graphical configuration of Iptables.
Licence: commercial, free version for Linux
Platforms: Linux, AIX, HP-UX, Solaris, Windows
http://www.solsoft.com/products/net_partitioner.html

Symantec Raptor: Comprehensive software solution

After the takeover by Symantec, there was silence around this product, which was tried and proven in the past. In the latest version 6.5 some fairly obvious characteristics are being presented as special features (NAT, application level gateway and VPN). Stateful inspection is apparently not possible.
Licence: Commercial
Platforms: Tru64, Solaris, HP-UX, Windows NT
http://enterprisesecurity.symantec.com/products/products.cfm?Product-ID=47

FEATURE

Figure 5: The LAN Internet Support Station is, as a network appliance, a ready-made piece of hardware. The connections for the networks are located, together with a few status indicators, on the front plate.

Telco Tech LAN Internet Support Station: Linux-based access router with firewall function

In addition to the management of domains, email and Web server, this flexible solution also offers packet filters, VPN and IDS. Based on Linux 2.2, a single standard interface is provided to manage all services. Initial configuration of the 19-inch device (see Figure 5) is performed by floppy disk, after which a Web front-end is available. Similar products include Linogate Defendo (http://www.defendo.de) and the Firebox II series from Watchguard (http://www.watchguard.com/products/firebox.asp), the latter with kernel 2.0.
Licence: Commercial
http://www.liss.de/

TIS Firewall Toolkit: Building set for application level gateways

Although the last version came out over three years ago, FWTK is still an interesting building set for constructing application level gateways. All proxies are configured in the file netperm-table. Users can use authsrv to authenticate themselves at the firewall before they are allowed to use a proxy. There is support for a range of standard protocols, and additional ones can be relayed via the universal gateway (plug-gw). There is also a port scanner and other utilities. The accessible source code helps in building an individual application level gateway which is as secure as possible.
Licence: Free, also available in source code. Not for commercial use.
http://www.tis.com/research/software/index.html



The Linux Intrusion Detection System (LIDS)

ROOTING OUT ATTACKS DAVID SPREEN

When an attacker gains Linux root privileges, simple tools won't fend them off. LIDS offers methods to deny the intruder sight of and access to important parts of the system. On Linux systems, root privileges are necessary for many actions. Unfortunately, there are no further restrictions for root, so root can simply read everything, write everything, delete everything. When an intruder, rather than the system administrator, gains root privileges, he too can do everything: read private emails, swap programs, install back doors, wipe away his own tracks and obtain any information whatsoever from the system. With LIDS, files and directories can be protected even from root, so his omnipotence is at an end. You can define access rights to files more precisely (with ACLs, Access Control Lists), and via capabilities you can allow individual programs to do things which normally require root privileges. For example, opening a port below 1024 normally requires root privileges; with the appropriate capability a process can do this without being root.

LIDS is just its name

You may be wondering why the whole thing is called an Intrusion Detection System. There were certainly some long discussions about this name, but after all, the primary objective of LIDS is not to spot break-ins, but to protect the system after a break-in. Nevertheless there are a few mechanisms aimed at recognition. When a LIDS rule is violated, LIDS not only closes the shell from which the rule was violated: it can also inform the administrator, say via email. LIDS also offers a port scan detector in the kernel, although this is far from being as refined as Port Sentry.


Issuing access rights with ACLs

Being able to protect files from root is certainly a useful thing. But for your Linux to continue to function as it should, you can give individual programs their read and write permissions back. For example, LIDS hides the file /etc/shadow from all users, but explicitly allows /bin/login to read this file. The first step makes sense, since passwords can be cracked with the aid of the shadow file. But without any access to this file, no user could log in, as /bin/login could no longer check the password entered. In fact, a further exception is needed: without write access to /etc/shadow, no user can change his password. At first glance the fix appears perfectly simple: you could give the program /usr/bin/passwd write access to the shadow file. In that case, though, root could again change all the passwords, and since with LIDS we are assuming there is an attacker, we ought to exclude this possibility. LIDS has a solution for this too: the LIDS free session (LFS). An LFS applies to the current terminal and all programs started from it; within it, the LIDS restrictions do not apply. Since an LFS only starts after you enter the lidsadm password, only the sysadmin can change the passwords. If there are no users on the system apart from the admin, this is a workable approach.



New versions of LIDS

While this article was being written, more recent versions of LIDS appeared. Their syntax is incompatible with the older ones and not yet adequately documented, but on the other hand it is more consistent. This article is mainly concerned with the old versions: 0.9.13 for kernel 2.2.18 and 1.0.5 for kernel 2.4.1. By reading the source code and with the aid of the LIDS mailing lists I have still been able to make some amendments to the text in time.

But if several people have access, you will have to choose between comfort for the user and reinforced security.

For which systems does LIDS actually make sense?

LIDS is designed to prevent the overwriting of important parts of a system. It therefore makes little sense to use LIDS on a system which changes daily, unlike production systems with individual services, which are installed once and then run in a stable environment. Applying a bugfix or an update from time to time is not a problem, but automatic upgrades such as Debian's apt-get will not get along with LIDS. A Linux with a 2.2.x kernel is recommended. There is also a LIDS for the Linux kernel 2.4, but as a rule the use of a kernel below x.x.10 is not advisable. Also, LIDS is still in development, so you should keep an eye open for updates at regular intervals.

Installation of LIDS

A kernel patch and the administration programs are available as .tar.gz archives at http://www.lids.org. Once the package for the corresponding kernel version has been downloaded and unpacked, change to the directory /usr/src/linux. The patch is applied from here with

patch -p1 < .../lids-version.patch

after which the kernel is configured.

To make the LIDS options appear in the kernel configuration, the following items must be selected:

[*] Prompt for development and/or incomplete code/drivers
[*] Sysctl support

Now you can configure and compile your LIDS kernel. Next the lidsadm tool needs to be compiled. You will find the sub-directory lidsadm-version in the LIDS package. Before you type make in there, however, in version 0.9.12 a minor change needs to be made to the Makefile: complete the CFLAGS=... line with the entry -DLIDS_CONFIG. Now call make && make install to compile and install the program.

Before first booting with the new kernel you must set a LIDS password and synchronise the configuration with your computer. The LIDS password is set using lidsadm -P. It will be needed later to deactivate LIDS or to change to an LFS. If you boot the LIDS kernel without first having set a password, the attempt will end with a kernel panic.

We have already mentioned the ACLs. These are stored in the file /etc/lids/lids.conf; in addition to the file and directory names, inode numbers are also entered there. The inode number next to a file name is comparable to an address for the file. Since file system accesses can occur not only via file names but also via inode numbers, LIDS obviously has to know the numbers of the files to be protected. For the ACLs which were preinstalled with the LIDS package to fit your system,



LIDS and VFS

(Figure 1 diagram: the system call interface sits on top of the VFS "Virtual Filesystem" common code, beneath which the individual file systems, such as ext, ext2, proc, vfat, nfs and smbfs, are attached.)

LIDS sits on the VFS layer. Linux is designed to be able to work with many different file systems. The VFS layer is a general interface to all file systems supported in the kernel. When the file system is accessed, the system-call interface does not access the individual file system directly but, put in simplified terms, forwards the query to the VFS. In reality, the device driver subsystem also plays a role in this. Figure 1 shows the principle; details can be found at http://plg.uwaterloo.ca/~itbowman/CS746G/a2

Figure 1: The system-call interface (the file system interface available for user processes) does not access the individual file system, but its common interface, the VFS.
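You can see the VFS at work on any running Linux: the kernel lists every file system type currently registered with the VFS in /proc/filesystems. A quick look (assuming a Linux system with /proc mounted):

```shell
# File system types the running kernel's VFS knows about.
# "nodev" marks types without a backing block device (e.g. proc).
cat /proc/filesystems
```

Whatever appears in this list is reachable through the same VFS interface, which is exactly why LIDS, hooked in at that layer, covers them all at once.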

update the inode numbers with lidsadm -U. The ACLs which relate to files and directories are active as soon as the kernel loads the LIDS system. The ACLs which deal with capabilities, on the other hand, only become active when lidsadm -I is first executed, so you must ensure that this happens immediately on booting. Decide precisely when you want the capability rules to become active: for example, it makes sense to have them take hold only after activating the firewall, if the firewall makes use of capabilities which you want to block. You must complete the configuration before you boot the system with the new kernel, so that the machine is actually still usable when it next starts. If your computer ever becomes unusable as the result of LIDS rules, pass LILO the parameter security=0 at boot.
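The reason lidsadm -U is needed becomes clear with a small experiment: a file keeps its inode when renamed, but replacing it (as a package upgrade does) produces a new inode, so the number stored in lids.conf would no longer match the file. A minimal sketch using standard GNU tools:

```shell
# A file's inode survives a rename, but not a replacement.
f=$(mktemp)
ino_before=$(stat -c %i "$f")
mv "$f" "$f.renamed"                  # same inode: only the name changed
echo "after rename:  $(stat -c %i "$f.renamed")  (was $ino_before)"
# Replace the file with a freshly created one, as an upgrade would:
: > "$f.new" && mv "$f.new" "$f.renamed"
echo "after replace: $(stat -c %i "$f.renamed")  (was $ino_before)"
rm "$f.renamed"
```

After any such replacement of a protected file, running lidsadm -U makes LIDS look the names up again and store the new inode numbers.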

Configuration

New rules are added with lidsadm. For a file rule the lidsadm command looks like this:

lidsadm -s /path/program -o /path/file_or_directory -j RULE

The syntax is a bit like that of Ipchains. The option -s assigns the subject of the rule, the option -o the object. If the subject (here, a program) wants to access the object (a file, directory or capability), the ACL (or more precisely: the rule assigned with -j) governs whether and how the subject gains access. The following call allows /bin/login read access to /etc/shadow:

/sbin/lidsadm -s /bin/login -o /etc/shadow -j READ

To make /etc/shadow unreadable for all, the following call would be necessary:

/sbin/lidsadm -o /etc/shadow -j DENY

Further rules can be found in the lidsadm manpage. Why LIDS works for any number of file systems, not just Ext2, is explained in the box "LIDS and VFS".

Capabilities

Figure 2: In step 1 Insmod wants to access the module loader. First the kernel checks, in step 2, whether Insmod has the capability CAP_SYS_MODULE, and only then, in step 3, allows the module to be loaded.

Since kernel 2.2.x the Linux kernel has had a sort of access control of its own: capabilities. For certain actions, tasks need more privileges than a normal user has. Instead of the all-or-nothing division between root and normal user, there are more finely subdivided authorisations. For example



the loading and removal of modules requires the capability CAP_SYS_MODULE, so insmod needs precisely this capability (see Figure 2). In the standard Linux kernel it is not possible to assign capabilities to a program file, as there is no support for this in the file system; capabilities can only be given to or taken from a running process. The kernel does check capabilities, but root processes automatically receive all of them. To take a capability away from a root program, it has to be removed globally from the system. LIDS, on the other hand, offers the option of giving capabilities to and taking them from each program individually, regardless of whether it has root privileges or not. /etc/lids/lids.cap defines which capabilities are available by default and which are only issued by special rules in the ACLs. So, for example, the Apache Web server can be assigned the capability CAP_NET_BIND_SERVICE, so that it can occupy a port below 1024 although this capability has been globally deactivated. This example requires the following command:

/sbin/lidsadm -s /usr/sbin/httpd -t -o CAP_NET_BIND_SERVICE -j INHERIT

The option -t says that the object of this rule is a capability. The rule INHERIT determines that the capability is also inherited by child processes of /usr/sbin/httpd; the opposite would be NO_INHERIT. An overview of the capabilities relevant to LIDS can be found in the lidsadm manpage.

From version 0.9.14-2.2.18 or 1.0.6-2.4.2 respectively, the syntax of lidsadm has changed slightly. Here the call would look like this:

/sbin/lidsadm -s /usr/sbin/httpd -i -1 -o CAP_NET_BIND_SERVICE -j GRANT

The options -s and -o still assign subject and object to the ACL. The option -t is dropped and the rules INHERIT and NO_INHERIT also cease to apply. Instead, for capabilities the rule GRANT is assigned to allocate the capability to the subject. The option -i -1 means that the child processes of httpd also receive the capability. What's new is the option of defining the depth of inheritance: while -i -1 means infinitely deep inheritance, with -i 1 the capability would still be inherited by the child processes of httpd, but no longer by their children (the grandchildren). To prevent inheritance completely, specify -i 0 or leave out the option -i altogether.

Hidden Processes

With the methods proposed so far, you can certainly make life difficult for an intruder. In the real world, though, one would rather leave him groping in the dark for as long as possible. A firewall can achieve this very well towards the outside, for example by concealing the inner network structure.


LIDS achieves the same effect inside the computer: processes are hidden if the program is given the capability CAP_HIDDEN:

/sbin/lidsadm -s /usr/sbin/popper -t -o CAP_HIDDEN -j INHERIT

The result of this example, as you will surely have guessed, is that /usr/sbin/popper can no longer be seen in process lists such as ps and top. Here, too, the call in the newer versions of lidsadm looks somewhat different:

/sbin/lidsadm -s /usr/sbin/popper -i -1 -o CAP_HIDDEN -j GRANT

Info
Brian Ward & Peter Sütterlin: Linux kernel HOWTO: http://www.tu-harburg.de/dlhp/HOWTO/DE-kernel-HOWTO.html
Ivan Bowman, Saheem Siddiqi & Meyer C. Tanuan: Concrete Architecture of the Linux kernel: http://plg.uwaterloo.ca/~itbowman/CS746G/a2
Steve Bremer: LIDS FAQ: http://www.clublinux.org/lids/
Port Sentry: http://www.psionic.com/abacus/portsentry/ ■

Switching LIDS

Hiding processes, granting or refusing capabilities, restricting file access even for root: all with the aim of making life harder for an intruder. Unfortunately, this also makes one's own work harder. Sometimes, as legitimate admin, one simply has to change something quickly, say a route or a gateway, or changes need making to the firewall. For these cases there is the LFS (LIDS Free Session), in which you can again work as normal. With the aid of the LIDS password a terminal can be released from the LIDS controls. The command for this reads:

/sbin/lidsadm -S -- -LIDS

But if a service is to be restarted, or even a reboot is imminent, then even the LFS is no longer adequate. At this point the actions are no longer under the control of the one released shell, so we must first deactivate LIDS completely. The following command does this:

/sbin/lidsadm -S -- -LIDS_GLOBAL

To reload an altered LIDS configuration while LIDS is active, call the following:

/sbin/lidsadm -S -- -RELOAD_CONF

Most of the time, LIDS should do its job without needing many changes. We hope this article has given you a basic look at the methods and operation of LIDS. Nevertheless, you should certainly consult both the lidsadm manpage and the LIDS FAQ for advice on configuration, so that your Linux does not suddenly regard you as the enemy and deny you entry. ■

The author

David Spreen is the Debian maintainer of the LIDS packages. Apart from studying, he also works for NetUSE AG, an ISP in Kiel, and spends most of the rest of his time on his Linux box or programming. He would like to thank, among others, Benjamin Traube and Eugene A. Brin.



COVER FEATURE

XCDROAST

Easy Burner

X-CD-ROAST KARSTEN GÜNTHER

X-CD-Roast is one of the oldest graphical user interface applications for burning CDs known to Linux. We’ll show you why it’s still one of the most powerful...

BURN-Proof: CD burner technology which prevents buffer underruns. Without it, a burner ruins the blank CD if the data to be burned is not fed in quickly enough by the computer. ■

There are currently two versions of X-CD-Roast in circulation: the now fairly old version 0.96 (ex) and the one still marked as "Alpha", version 0.98xx. At the moment, the Alpha release of November 2000 is still the most recent. The developer, Thomas Niederreiter, deems this version more refined and stable than version 0.96, so users should ideally use X-CD-Roast 0.98. The current version is available for download at http://www.xcdroast.org

Unlike the older versions, which relied on the Tcl/Tk toolkit, X-CD-Roast 0.98 uses Gtk (the Gimp toolkit also used in GNOME) from version 1.2.3, so the corresponding libraries must first be installed. X-CD-Roast is also often installed via a link under the name xcdrgtk.

A word about compiling the sources: less experienced users may face a few problems here in some circumstances since, as the developer is at pains to point out, the software is Alpha code. The ready-made RPM packages should be unpacked and used if you have any problems. X-CD-Roast also includes adapted versions of cdrecord (version 1.9, the back-end for writing the CDs), mkisofs (to create image files, "masters"), readcd to read from CDs (corresponding to the dd of earlier versions) and vrfytool to check finished CDs. The commands cdda2wav, cddbtool and wavplay are used for processing audio CDs.

Audio and data CDs can be created or copied directly with X-CD-Roast from data on the hard disk. Apart from CD-Rs, rewriteable CDs (CD-RWs) can also be used for this purpose. It's even possible to create mixed mode and boot CDs using this program. Mixed mode CDs consist of both data and audio components, but can't be played on normal CD players. Data CDs can be mastered on the fly, but multiple image files are also supported. The new version of cdrecord (1.9) can also be used with BURN-Proof technology. Only multi-session CDs can't be created yet.
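To get a feel for what the front-end drives, here is a rough sketch of the underlying tools used by hand; the device address dev=0,4,0 and the paths are example values, not taken from the article:

```shell
# The back-ends X-CD-Roast wraps, driven directly.
cdrecord -scanbus                              # find the burner's bus,id,lun address
mkisofs -r -o /tmp/master.iso /home/data       # master an ISO image ("master")
cdrecord -v speed=4 dev=0,4,0 /tmp/master.iso  # burn the image onto a blank
readcd dev=0,4,0 f=/tmp/copy.iso               # read a CD back into an image file
```

X-CD-Roast essentially builds such command lines for you and shows their progress in its dialogs.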

0.96 and 0.98alpha7 differences

Principally, the two versions differ in ease of use. While the earlier version distinguished between six modes, the new version reduces this to three. When creating CDs, the user no longer has to decide whether audio or data CDs are being copied or created; this option (amongst others) is now taken care of automatically. Additionally, the new version has been expanded with several useful and practical features:
• Image files can be stored in several data directories.
• CDs can be mastered on the fly, saving on image files.
• Selected tracks can be read from CDs.
• Mastering has been expanded so any number of directories can be specified.
• X-CD-Roast can also be used by ordinary users.

Parallel installation of X-CD-Roast 0.96 and 0.98 under SuSE Linux

X-CD-Roast 0.96 and 0.98 can be installed in parallel under SuSE Linux, provided a few points are taken into account; the two versions are automatically installed in different directories (see Box 1). The binaries of both versions are installed under /usr/X11R6/bin as xcdroast, which is a conflict. In version 0.98 this is a link to xcdrgtk. After the installation of 0.98 the link must be removed, in order to install the RPM package of X-CD-Roast 0.96:



[left] Figure 1: Recognising the CD burner [right] Figure 2: Settings for the CD burner

[left] Figure 3: Miscellaneous settings [right] Figure 4: Copy a CD

rpm -ihv xcdroast-0.96ex2-50.i386.rpm ...

One overlap still occurs. It concerns the README.nonroot file in the /usr/share/doc/packages/xcdroast/ directory: one version must consequently be saved under a different name (README.nonroot-0.98).
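Put together, the parallel installation could look roughly like this; the package file name comes from the article, while the exact sequence is a sketch:

```shell
# 0.98 is already installed; make room for the 0.96 RPM alongside it.
rm /usr/X11R6/bin/xcdroast        # remove 0.98's link to xcdrgtk
mv /usr/share/doc/packages/xcdroast/README.nonroot \
   /usr/share/doc/packages/xcdroast/README.nonroot-0.98   # keep 0.98's copy
rpm -ihv xcdroast-0.96ex2-50.i386.rpm
```

Afterwards 0.96 answers to xcdroast and 0.98 can still be started as xcdrgtk.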

Configuration

Before using X-CD-Roast, the software has to be configured. On first starting up, a reference is made to one of the new features of the program: in the current version, normal users can also make use of X-CD-Roast (formerly, its use was reserved for the administrator root). Additional information and special details can be found in the README.nonroot file. The entire configuration is carried out in six dialogs.

The CD writer is determined first. With SCSI devices this is no problem at all: CD writers are recognised as CD-ROM drives as a matter of course. Highly detailed information about the hardware can be displayed in this window with a double-click on the corresponding device symbol. The use of ATAPI writers is examined elsewhere; X-CD-Roast includes a README.atapi, in which a few special instructions are given. But you can always fall back on the excellent CD-Writing-HOWTO by Winfried Trumper in case of doubt and/or problems. The latest version of this document can be found on the Internet at http://www.linuxdoc.org/HOWTO/CD-Writing-HOWTO.html.

In the second dialog (Figure 2), special settings are made for the CD writer. The burn rate can be altered manually later, so only a default is set here. As the tool tips show, the read configuration depends largely on what hardware is used; not all devices even evaluate these settings. If the audio read rate is set too high, this can lead to errors, so somewhat lower settings often provide better results. The lowest two settings can help if there are errors. The online help supplies truly exhaustive information on this topic.

Image files are stored in the directory which you enter in the third dialog. Any desired directory can be included using the Add button, although each directory must have space for at least one (whole) image file. Under Miscellaneous, sounds for different purposes can be defined. By the way, no sound card is necessary for this, as X-CD-Roast can also use the internal loudspeaker.

Box 1: X-CD-Roast directories

# ls -l /usr/X11R6/lib/xcdroast-0.9*
/usr/X11R6/lib/xcdroast-0.96ex:
total 9
drwxr-xr-x  5 root root 1024 Mar 21 19:48 .
drwxr-xr-x 33 root root 5120 Mar 22 01:01 ..
drwxr-xr-x  2 root root 1024 Mar 21 19:48 bin
drwxr-xr-x  2 root root 1024 Mar 21 19:48 logo
drwxr-xr-x  2 root root 1024 Mar 21 19:48 sound

/usr/X11R6/lib/xcdroast-0.98:
total 10
drwxr-xr-x  6 root root 1024 Mar 21 19:37 .
drwxr-xr-x 33 root root 5120 Mar 22 01:01 ..
drwxr-xr-x  2 root root 1024 Mar 21 19:37 bin
drwxr-xr-x  2 root root 1024 Mar 21 19:37 icons
drwxr-xr-x  2 root root 1024 Mar 21 19:37 lang
drwxr-xr-x  2 root root 1024 Mar 21 19:37 sound



Figure 5: Creating an image file of a data CD

CDDB: Compact Disc Database, originally a free service from cddb.org. A server, when queried about any known CD, returns information about the artist and title. This database is used by many CD-to-MP3 converters in order to create useful file names and track information. Since CDDB has now been commercialised (cddb.com) and program manufacturers now demand licence fees, it is better to head for the free alternative at freedb.org. ■

The additional options are described at length in the online help, which also clarifies the options available under Options. The last dialog controls the access options for the users of a multi-user system; here several references are made to the README.nonroot file, in which the details of this feature are explained. X-CD-Roast takes care of access control automatically. You can save a configuration file as xcdroast.conf in the /etc directory using the Save configuration button.

Practice

Once configuration is complete, X-CD-Roast can be put to use. Clicking on OK switches back to the initial mode. First, a disc will be copied. Pressing the Duplicate CD button leads to the window in Figure 4. X-CD-Roast automatically makes a distinction here between audio and data CDs. First the program gathers the information on the CD. Individual tracks can be selected on audio CDs, and additional information can be called up via CDDB queries when there is an Internet connection.

An image file is usually created first in order to copy a data CD. Copying directly from drive to drive is of course possible if an additional CD drive is available. The advantages of image files lie in being able to work with just the CD writer and to create several copies of a CD relatively quickly (Figure 5). You can set the read rate and the directory used for image files manually here. The image file can be checked after reading in, with Verify CD.

Individual (or all) tracks on an audio CD can be output via Play Audio-Tracks for checking before they are burnt onto a new CD. You can access the individual tracks on audio discs (Figure 6). If the "Read CD" option Perform index scan has been selected, the read-in takes a relatively long time; on some CDs this information cannot be completely read out. It's possible to listen to individual tracks for test purposes.

The CD is then burnt using the Write CD dialog (Figure 7). The most important parameters (such as the write rate) are summarised in the window and can also be adjusted. With Simulation write, the whole write procedure is conducted with the write laser switched off. The tracks on audio CDs can be filled up automatically so that a proper audio CD can be created from any WAV files. In some cases it is necessary to Swap audio byte order: badly created CDs contain nothing but a steady hiss.

CD-RWs can now be deleted with X-CD-Roast before being written (Figure 8). This wasn't possible at this stage in version 0.96 and always had to be done on the command line. X-CD-Roast supports the usual variants, from deleting the whole CD (by completely over-writing all tracks) to Fast delete, where only the table of contents, PMA and pregap are deleted. The last variant is normally sufficient and the process only takes one or two minutes.

[left] Figure 6: Copying an audio CD [right] Figure 7: Writing a CD
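For comparison, blanking a CD-RW by hand, as version 0.96 still required, looks roughly like this with cdrecord (the device address is an example):

```shell
# Blank a CD-RW before rewriting it.
cdrecord dev=0,4,0 blank=fast    # erase only TOC, PMA and pregap (quick)
cdrecord dev=0,4,0 blank=all     # over-write the entire disc (slow)
```

blank=fast corresponds to X-CD-Roast's Fast delete and is normally all you need.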

Creating a CD

A data CD can also be created from files already present on the hard disk. From the start dialog, first click on Create CD to call up the appropriate window (Figure 9). The Master tracks button is a new feature in this window. Click here to access the dialog that allows you to adjust the relevant settings. Click the Add button under Master source to select the desired paths; subdirectories can be left out with Exclude. One especially practical feature is the option of presenting the mastered paths on the CD differently from the way they exist on the source, via Redirect. In the example, the directories housed under /lib are moved into the /work directory of the backup CD. This leads to considerably simpler paths when restoring the backup. A wrongly specified path can be deleted at any time using Remove.
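mkisofs, the back-end that does the mastering, offers the same kind of renaming through graft points; the following call is a guess at the command-line equivalent of the Redirect example above:

```shell
# Master /lib from the hard disk so that it appears as /work on the CD.
mkisofs -r -graft-points -o /tmp/backup.iso /work=/lib
```

The pathspec NEW=OLD on the right tells mkisofs where each source tree should land in the image.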



In the second dialog, ISO9660 options (Figure 11), extremely detailed adjustments are made before the CD is created. All the options are again explained by the online help. For this reason, only the difference between Rock Ridge (Anonymous), where all users are given access to the files on the CD, and Rock Ridge (Data protection), where the original permissions are applied to the CD, is mentioned here. Not backup files ... means that all files ending with a tilde "~" or including a hash "#" in the name, as well as those with the extension .bak, are ignored when mastering. Another important option is Follow symbolic links: with it, the corresponding original files are copied onto the CD too; without it, only the links themselves would be copied. Diverse settings for creating bootable CDs are summarised under Boot Options.

The ISO9660 header (Figure 12) allows CDs to be marked professionally. There should be at least one TOC file (table of contents), entered under Abstract information, on the CD.

The final preparations for creating the CD can be made in the last dialog, Create session/image. A catalogue of the files copied onto the CD must first be created: click on Calculate size in the Create session on disk field, remembering that an image file is not yet created in this step. The track that has been created can be marked using File prefix for ease of classification. In the Write session on the fly field, information on the medium can be viewed and adjusted. Unfortunately, there is no multi-session CD support currently available. An image file can now be created via Master as image file, or the CD can be created directly through Master and write on the fly. The second option proved workable, provided the system load was not too high. The creation of CDs without space-grabbing image files is certainly an advantage over the predecessor, version 0.96ex, which should not be underestimated.

Conclusion The new version of X-CD-Roast is one of the best applications available for creating CDs under Linux. It makes many of cdrecord's features accessible and at the same time offers a simple, intuitive interface which allows even relatively inexperienced users to create and copy audio and data CDs. Any worries with respect to the Alpha status of the software turned out to be completely unfounded. The online help via tool tips is exemplary – and something we wish more applications would make available. The clear structure of the program guides the user effortlessly, so that problems rarely occur. If any steps have been forgotten, the program notices and points out the missing settings. X-CD-Roast demonstrates how the combination of back-end commands and front-end GUI, so often used under Linux, can function well. Quite simply, working with X-CD-Roast is a lot of fun. ■

Figure 8: Deleting a CD-RW

[left] Figure 9: Dialogs for creating a CD [right] Figure 10: Add paths

[left] Figure 11: ISO9660 Options [right] Figure 12: ISO identifiers


COVER FEATURE

MKISOFS AND CDRECORD

CDs written fast on the command line

BURNING BY COMMAND HANS-GEORG ESSER

The graphical user interface may be the user friendly way of burning audio and data to disc, but for pure speed the command line simply can’t be beaten.

CD Image A CD image is an identical copy of the raw content of a data CD. Such an image can be created under Linux from a data CD using the command dd if=/dev/cdrom of=/tmp/cd.iso. Here, dd simply reads out the CD, byte for byte, and writes the content in the output file cd.iso. This CD image can then be re-written, using cdrecord, onto a blank CD or even be mounted in the file system with mount, like a proper CD. ■
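The dd round trip can be tried without a drive – a minimal sketch in which a scratch file stands in for /dev/cdrom (all file names here are illustrative):

```shell
# A scratch file stands in for /dev/cdrom; dd copies it byte for byte,
# exactly as it would read out a real data CD.
printf 'pretend this is raw CD data' > /tmp/fake-cdrom
dd if=/tmp/fake-cdrom of=/tmp/cd-copy.iso 2>/dev/null
cmp -s /tmp/fake-cdrom /tmp/cd-copy.iso && echo "byte-for-byte identical"
```

With a real disc, replace the input with /dev/cdrom; the resulting image can then be mounted with mount -o loop.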

Why wrestle with CD-burning command-line tools when you can simply use one of the graphical applications already available? Well, there are several benefits to writing discs at the command line:

• Speed: You can burn all the files in an existing directory onto a data CD with one or two brief commands using mkisofs and cdrecord, or a series of WAV files onto an audio CD. Try it out – no GUI tool can do it that fast.

• Control: Command line tools give you complete control over the task to be performed. You tell the program in detail what it should do via options, meaning that there will be no surprises.

• Resources: Even an ancient computer, if it has been equipped with a new ATAPI or SCSI CD burner, can be used as a burning machine. There is no need for an up-to-date graphics card; you can use text mode throughout. It even works without a monitor if you log in from another computer via a network connection.

• Principle: Dedicated users of the shell will always employ it as a first option for performing tasks. Even if you don't intend to use the command line for burning, it still makes sense to understand these tools, as the programs described here are working away busily behind the scenes of your GUI. If things go wrong with your graphical application you'll still need to know your way around mkisofs and cdrecord.

Data CDs The simplest way of burning a data CD is to back up a complete sub-directory, provided the content is under 650MB in the case of a standard CD-R. To burn something like the entire /tmp/cddata/ directory onto CD, you could use a variant of the two following commands:

mkisofs -r -J -o /tmp/cddata.iso /tmp/cddata/
cdrecord -v dev=0,4,0 speed=4 /tmp/cddata.iso

In the two commands above mkisofs – make ISO9660 filesystem – creates an ISO file system (ISO9660 being the default file system type for CDs under both Windows and UNIX). mkisofs -o /tmp/cddata.iso /tmp/cddata/ creates a CD image called /tmp/cddata.iso according to the strict ISO standard. This contains all the files from the /tmp/cddata/ sub-directory. However, when it comes to naming and other file system features (such as UNIX access rights and symbolic links) the ISO standard is limited. An ordinary file system, easily legible under Linux/UNIX and Windows, is only obtained by extending the ISO-conformant directory tree with Rock Ridge extensions (UNIX) and Joliet extensions (Windows). This is precisely what the -r and -J options do.

In the second step, the CD image is written onto a blank CD using the cdrecord command. The two parameters without a "-" define which device (the CD burner) should be used and what burn rate is required. In practice this means:

• dev=0,4,0: The burner is attached to the first SCSI controller (counting from 0), has the SCSI ID 4 and the LUN (Logical Unit Number) 0. The LUN is always 0 for burners. If you do not know which SCSI ID your burner has, use the cdrecord -scanbus command. We'll be explaining more about using ATAPI burners later.

• speed=4: Burn at 4x speed.

The additional parameter -v stands for verbose (wordy) and issues a status message at regular intervals during the burn procedure on what percentage of the CD has so far been completed. The file name of the CD image should come at the end of the command.

Figure 1: The man pages for cdrecord and mkisofs are exhaustive and very helpful – a look at the more esoteric options can be very entertaining.
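Before mastering, it is worth checking that the directory actually fits on a 650MB blank. A small sketch – fits_on_cd is my own helper, not part of mkisofs:

```shell
# Rough pre-flight check: will this directory fit on a 650MB blank?
fits_on_cd() {  # usage: fits_on_cd DIR ; exit status 0 if under 650MB
  kb=$(du -sk "$1" | awk '{print $1}')
  [ "$kb" -le $(( 650 * 1024 )) ]
}

mkdir -p /tmp/cddata
echo "some data" > /tmp/cddata/file.txt
fits_on_cd /tmp/cddata && echo "fits - go ahead and master it"
```

Note that the ISO9660 image adds some file system overhead on top of the raw file sizes, so leave a little headroom.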

On the fly It is also possible to burn a directory directly onto the CD, without taking up 650MB of disk space first. Output from mkisofs is simply transferred via a pipe to cdrecord; mkisofs is called up without an output file (-o file) being specified and simply writes the ISO image to standard output:

mkisofs -r /tmp/cddata | cdrecord -v fs=6m speed=4 dev=0,4,0 -

The minus sign at the end of the command is necessary so that cdrecord reads the data from standard input and does not attempt to read from a file. A buffer of 6MB (option fs=6m) should prevent buffer underrun. But such buffer underruns (which result in the destruction of the blank) are highly unlikely under Linux anyway, as the operating system has good multitasking. Even with the most vigorous activities being performed by other programs, we have never known cdrecord to miss a stroke. In operating systems with less powerful multitasking, burnt-out blanks are commonplace – even the screensaver kicking in can cause problems. It is only on very low-powered or very heavily overloaded computers that this type of failure would be conceivable under Linux, and the industry has created a remedy for this scenario: devices with BURN-Proof technology reliably prevent buffer underruns. cdrecord supports this technology.

Audio CDs Since audio CDs do not contain a file system, mkisofs is not needed for them. The tracks are simply written one after another onto the CD. As




with data CDs, cdrecord is used for the burn procedure, with a few modifications. The sample command from the start of the article now becomes:

cdrecord -v -audio -pad dev=0,4,0 speed=4 /tmp/track*.wav

The dev and speed options have not been altered, but two new ones have been added:

• -audio tells the burn software that an audio CD is to be burned.

• -pad is less obvious. Audio CDs contain tracks in accordance with the CD-DA (Compact Disc Digital Audio) standard, and tracks to this specification need a few special characteristics. They need a sampling rate of 44,100 samples per second and their file size must be a multiple of 2352 bytes. Since the wav or au files available for burning do not, as a rule, comply with this file size requirement, the -pad option is needed to add an appropriate number of zero bytes to the end of the file.

It is also possible to create audio CDs incrementally. The additional option -nofix is used to do so. The following three commands each burn two tracks onto the CD and only fix the blank on the third write procedure, thus finishing it off completely:

cdrecord -v -audio -pad -nofix dev=0,4,0 speed=4 t1.wav t2.wav
cdrecord -v -audio -pad -nofix dev=0,4,0 speed=4 t3.wav t4.wav
cdrecord -v -audio -pad dev=0,4,0 speed=4 t5.wav t6.wav

A CD which has not been fixed can only be played back in CD burners and not in simple CD-ROM drives or audio CD players. You can finish these discs off at any time using the -fix option:

cdrecord -v -fix dev=0,4,0

Sometimes the resultant CD is missing some audio tracks – with noise taking their place. This is due to the structure of your wav files: the byte sequence of the audio coding is wrong. The sequence can be Little Endian or Big Endian. To get rid of this very strange phenomenon (and to correct the byte sequence), use the -swab (SWAp Bytes) option.

Little / Big Endian: Little and Big Endian are terms that describe a processor architecture's byte order. Values which cannot be stored completely in one memory cell have to be distributed over several such cells: something like the value 43981 (hexadecimal ABCD) is split into AB and CD. With Big Endian, AB is stored in the first memory cell and CD in the second; in the case of Little Endian it is exactly the opposite. ■

Reading back audio tracks The read-out of an audio CD is a slightly different subject to that of burning, but since when copying such CDs the read-out comes before the burn, we will also briefly introduce the appropriate command line tool here. Unlike data CDs, audio CDs do not contain a normal file system which can be read out using dd. Instead, a special CDDA grabber has to be used. The standard tool for this task is cdda2wav. MP3 files can also be burned directly onto CD (with at least ten albums usually fitting onto one blank). However, these are then normal data CDs, which cannot be played back using a CD player. To be played on a CD player, they first have to be converted into normal wav files.

ATAPI burner We've only covered SCSI burners so far. But ATAPI owners are not excluded from the joys of burning. Linux has a SCSI emulation for ATAPI devices which allows an ATAPI burner to be recognised as a SCSI device and controlled with the usual SCSI commands. This SCSI-ATAPI emulation is not usually built into the kernel, since it is seldom needed. To load the corresponding kernel module, enter, as root, the command:

modprobe -v ide-scsi

The module should now be loaded, and using the command

cat /proc/scsi/scsi

you will receive a summary of the newly added SCSI devices in your computer. If a proper SCSI controller is present, the SCSI-ATAPI emulation is added to the system as a second SCSI bus (No. 1). The dev option necessary for the cdrecord command is then dev=1,X,0. ■

Problem-solving
• Access rights: To scan the bus and to perform the actual burn procedure, you need read privileges on the device files /dev/sg* (and write privileges on the file belonging to the burner). You can either perform these steps as root, or, as root, release the read privileges for all users with chmod a+r /dev/sg* and then set just the burner's device file to "a+rw".
• Generic devices: cdrecord uses direct access to the CD burner for burning – for which the "generic devices", addressed via the device files /dev/sg* (SCSI generic), have to be recognised. As a rule this is done by the sg module, which is loaded automatically when one of these files is accessed. If this does not happen, load it manually with modprobe -v sg.
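The 2352-byte padding that -pad applies to CD-DA tracks can be reproduced by hand – a minimal sketch in which pad_cdda is my own helper name, used purely for illustration:

```shell
# Reproduce the zero-padding that 'cdrecord -pad' applies: grow a file to
# the next multiple of 2352 bytes (the CD-DA sector size).
pad_cdda() {
  size=$(wc -c < "$1")
  rem=$(( size % 2352 ))
  if [ "$rem" -ne 0 ]; then
    dd if=/dev/zero bs=1 count=$(( 2352 - rem )) >> "$1" 2>/dev/null
  fi
}

head -c 5000 /dev/zero > /tmp/track.raw   # a dummy 5000-byte "track"
pad_cdda /tmp/track.raw
wc -c < /tmp/track.raw                    # now 7056, i.e. 3 x 2352
```

cdrecord does the same thing internally, so there is normally no need to pad files yourself; the sketch simply makes the sector arithmetic visible.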

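The byte-swap that -swab performs on wrongly ordered wav data can be demonstrated with dd's standard conv=swab flag, using the 0xABCD example from the Little/Big Endian box (the file names are illustrative):

```shell
# Write the two bytes 0xAB 0xCD, then swap each byte pair the way
# 'cdrecord -swab' corrects Little/Big Endian confusion.
printf '\253\315' > /tmp/be.bin            # octal 253 = 0xAB, 315 = 0xCD
dd if=/tmp/be.bin of=/tmp/le.bin conv=swab 2>/dev/null
od -An -tx1 /tmp/le.bin | tr -d ' '        # prints: cdab
```

On real audio data the effect is the same, just applied to every pair of bytes in the track.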


CONVERTING AUDIO FORMATS


Converting Standard Audio Formats

CD2MP3 HANS-GEORG ESSER

You may already have the hang of converting audio CDs to wav files and downloading mp3 files from the Net. But how is one format converted to the other? If you'd like to listen to mp3 files in your car, but you don't have a new-fangled in-car mp3 player, you've got a problem: your CD with the 180 best tracks from the Net cannot be played on a normal CD changer. The files have to be converted into the memory-guzzling wav format and then reprocessed into an ordinary audio CD. Conversion in the other direction is just as useful: unless you have a 50GB hard drive, you'll want to store your music as MPEG 1 Layer 3 rather than as wav files. If you're willing to work on the command line, you'll find that Linux provides fast and practical tools for both tasks.

mp3 to wav One of the most popular console MP3 players is mpg123 (http://www.mpg123.de/, latest version: 0.59r). Anyone taking a look at its options will immediately find an easy way to convert from mp3 to wav:

mpg123 -w song.wav song.mp3

does the necessary (see Figure 1). This is unsurprising, since every mp3 player converts files into wav format in order to play them. On Freshmeat's mpg123 project homepage (http://freshmeat.net/projects/mpg123/) there is a script which burns a series of mp3 files directly onto an audio CD:

for var in `ls -1 $1`; do
  echo Burning $var ..........
  mpg123 -s $var | cdrecord dev=imation -v -nofix -audio -swab
done
cdrecord dev=imation -fix

The script uses the option -s ("write to stdout"), and the output from mpg123 is piped directly into the cdrecord process. Those who prefer to take it easy can also use xmms (http://www.xmms.org/): To write wav files, open the settings via the menu item Options/Preferences (or [Ctrl-P]) and change the output plugin to Disk Writer Plugin. Then click on

Configure and specify a destination directory for the wav files (see Figure 2). You can now create an mp3 playlist for xmms. When your playlist is compiled, you can simply click with the mouse to start playback. Although you might not hear anything at first, xmms is working through your playlist systematically, placing the wav files in the chosen directory.

Figure 1: mpg123 effortlessly converts from mp3 to wav

wav to mp3 mpg123 is usually employed as an mp3 decoder, so going from wav to mp3 requires an encoder. One good candidate is bladeenc (http://bladeenc.mp3.no); an alternative is LAME (http://www.mp3dev.org/mp3/). The syntax of bladeenc is, not surprisingly:

bladeenc in.wav out.mp3

Since encoding takes up considerably more resources than decoding, bladeenc cannot encode the data in real time. This means that the procedure takes longer than it would to play the piece at normal speed. bladeenc creates mp3 files at 128 kbit/s by default. If you want better quality, you should use the -br (bit rate) option. The bit rates available are 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s. So, for the 192 kbit/s frequently found on Napster servers, use the command:

Figure 2: Anyone who doesn’t want to work on the console can fall back on xmms

bladeenc -br 192 in.wav out.mp3
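The practical effect of the bit rate is easy to estimate: an mp3 occupies roughly seconds x bit rate / 8 kilobytes. A quick back-of-the-envelope sketch (the mp3_size_kb helper is my own, not part of bladeenc):

```shell
# Estimated size of an encoded file: seconds x bit rate (kbit/s) / 8 = KB.
mp3_size_kb() {  # usage: mp3_size_kb SECONDS KBITRATE
  echo $(( $1 * $2 / 8 ))
}

mp3_size_kb 240 128   # a 4-minute track at the default rate: 3840 KB
mp3_size_kb 240 192   # the same track at 192 kbit/s:         5760 KB
```

So the jump from 128 to 192 kbit/s costs you half as much disk space again per track.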

Automation One advantage of working on the command line is that you can automate procedures, making it easy to build mpg123 and bladeenc into shell scripts. These scripts can then be used to convert whole directories from one format into the other overnight. Whether by script or by hand, you can now have lots of fun converting – and enjoy many hours of listening pleasure, on the computer or in the car. ■
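The overnight directory conversion just described can be sketched as a small loop. In this sketch the function name and the DRY switch are my own; a real run assumes bladeenc is installed:

```shell
# Convert every wav file in a directory to mp3; with DRY set, only print
# what would run instead of calling bladeenc.
wav2mp3_dir() {
  for w in "$1"/*.wav; do
    [ -e "$w" ] || continue
    m="${w%.wav}.mp3"
    if [ -n "${DRY:-}" ]; then
      echo "would run: bladeenc -br 192 $w $m"
    else
      bladeenc -br 192 "$w" "$m"
    fi
  done
}

mkdir -p /tmp/wavdemo
touch /tmp/wavdemo/one.wav /tmp/wavdemo/two.wav
DRY=1
wav2mp3_dir /tmp/wavdemo   # dry run: lists the conversions
```

Unset DRY (and point the function at your real music directory) to start the actual encoding run.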

Figure 3: bladeenc is an mp3 encoder



ON TEST

BURN-PROOF CD WRITERS

Burn-proof technology

FIREBREAK CHRISTIAN REISER

Even in these days of gigahertz CPUs, CD burning often means burnout. But you can blank out memories of meltdown with BURN-Proof technology.

Most burn interruptions are due to buffer underruns. A buffer underrun occurs when the computer fails to transport data fast enough to the CD burner: the internal buffer memory of the CD burner runs empty, rendering the CD-R incomplete and unusable. The reason for this is that the data has to be written onto the CD at a constant rate. It is therefore crucial that both the speed of CD rotation and the laser path across the surface of the disc are kept at a regular rate. The inner tracks are shorter, the outer ones longer, so the disc must spin faster while the laser is over the inner tracks than when it is over the outer ones. This is CLV (Constant Linear Velocity): it ensures that the laser always covers the same distance per unit of time. In the case of single-speed devices, the data rate is 2352 bytes per sector times 75 sectors per second, giving a speed of 176,400 bytes per second. A burner with 12x acceleration consequently writes at 12 x 176,400 bytes per second – just over 2MB per second. This quantity of data has to be supplied constantly by the computer. Until now, incremental burning was not an option.
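The data-rate arithmetic can be checked in any shell:

```shell
# Single-speed CD data rate: 2352 bytes per sector x 75 sectors per second.
single=$(( 2352 * 75 ))
echo "1x:  $single bytes/s"              # 176400
echo "12x: $(( single * 12 )) bytes/s"   # 2116800 -- just over 2MB/s
```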

Buffer solution Several procedures have been developed to circumvent this well-known problem. The first solution was to incorporate a buffer into the burner itself; newer burners hold buffer capacities of up to 4MB, which at maximum burn rate lasts for about 1.5 to 2 seconds. The burning software also maintains a buffer in main memory; in software such as cdrecord, the buffer allocation is up to you, with a standard size of 4MB.

The latest method for avoiding buffer underrun is BURN-Proof, which stands for Buffer Under RuN and comes from Sanyo (http://www.sannet.ne.jp/BURNProof/). BURN-Proof takes the innovative approach of attempting to prevent not the underruns themselves, but their consequences. A small microcontroller inside the CD burner constantly monitors the fill level of the built-in buffer. If it detects a risk of buffer underrun (the buffer being less than 10% full), it stops the burn procedure and remembers the location of the last data written onto the CD-R. As soon as the buffer is full again, the CD burning software restarts the burn procedure – which means that this software must also support BURN-Proof – and the microcontroller recommences burning at the point at which the last data was burned. Similar technologies from Ricoh and Yamaha are called JustLink and Waste-Proof respectively. So far, however, only BURN-Proof has made it onto the market. Linux supports this technology with cdrecord and with cdrdao from version 1.1.5 onwards.

How we tested Tests were performed on an AMD K6-2 at 350MHz with kernel 2.4.2. In the CD-R tests a 691MB ISO9660 image was burnt onto the CD-R. In the CD-RW tests we settled for 137MB. The read tests always went via the inner tracks – the first 100MB and the first 10 minutes of the test CD.


Using BURN-Proof You can determine whether a drive supports BURN-Proof by using the command

# cdrecord -checkdrive dev=0,X,0 driveropts=help
Cdrecord 1.10a16 (i586-pc-linux-gnu) Copyright (C) 1995-2001 Jörg Schilling
[...]
Driver options:
burnproof     Prepare writer to use Sanyo BURN-Proof technology
noburnproof   Disable using Sanyo BURN-Proof technology

The X can be defined via cdrecord -scanbus; an example can be found in the "IDE/ATAPI burner under Linux" box. BURN-Proof is activated by adding the parameter driveropts=burnproof to the normal cdrecord command.

CD burners with buffer-underrun protection

Manufacturer                Lite-On              Plextor         Plextor         Plextor         Ricoh              Teac
Website                     www.liteonit.com.tw  www.plextor.be  www.plextor.be  www.plextor.be  www.ricoh.de       www.teac.de
Model                       LTR-12101B           PX-W1210A       PX-W1210S       PX-W1610A       MP9120A-DP         W512E
Shop price                  £115                 £150            £225            £170            £175               £120
Speed (Write/Rewrite/Read)  12/10/32             12/10/32        12/10/32        16/10/40        12/10/32(/8 DVD)   12/10/32
Connection                  ATAPI                ATAPI           SCSI            ATAPI           ATAPI              ATAPI
Buffer-underrun protection  BURN-Proof           BURN-Proof      BURN-Proof      JustLink        BURN-Proof         BURN-Proof
Burn CDR [kB/s]             1750                 1736            1747            2265            1761               1739
Fix CDR [s]                 25.3                 24.2            24.6            18.9            22.9               27.9
Burn CDRW [kB/s]            1212                 1278            1272            1107            1406               1302
Fix CDRW [s]                28.0                 33.8            34.1            33.7            35.4               34.4
Read CD-ROM [kB/s]          2145                 2479            2512            3094            2483               2414
Read audio (playing time/read time) 2.27         5.45            6.06            5.22            1.86               3.05

But does it work? Now to reality: we had five drives using BURN-Proof and one drive using JustLink in our test. Although employing the same approach to buffer underrun as BURN-Proof, JustLink has to be activated differently; cdrecord cannot do it. We did not experience this problem with either SCSI or ATAPI. But it was considerably harder than expected to push the CD burners into using BURN-Proof. Since cdrecord uses the POSIX real-time extension, it has a higher priority than all other processes; only the kernel takes precedence, so there is nothing to be gained there. There is also no point in blocking the IDE bus in order to slow the flow of data to the burner, since no modern IDE device is capable of overtaxing the IDE bus permanently. The only remaining option is to slow down the data to be burnt before it gets to cdrecord. We created the data on the fly (without dumping it on the hard disk) with mkisofs and brought this task to a crawl by lowering its priority and starting other tasks. In the end we managed to fabricate a buffer underrun on all the drives, and BURN-Proof always caught them – although cdrecord does not issue a message when this happens.
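Whether a drive's driver offers the burnproof option can be checked with a simple grep. In this sketch the sample text is the cdrecord 1.10a16 output quoted in this article; on a real system you would pipe cdrecord -checkdrive dev=0,X,0 driveropts=help (run as root) into the same grep:

```shell
# Look for the 'burnproof' driver option in a driveropts listing.
sample='Driver options:
burnproof Prepare writer to use Sanyo BURN-Proof technology
noburnproof Disable using Sanyo BURN-Proof technology'

if printf '%s\n' "$sample" | grep -q '^burnproof'; then
  echo "driver supports BURN-Proof"
fi
```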

Conclusion All the burners with BURN-Proof functioned perfectly; the interruption of the flow of data had no effect on the readability of the CDs. Sanyo does state, however, that CDs produced using BURN-Proof should preferably be read with a 4x CD-ROM drive or a CD player built after 1995: every interruption in the write process leaves a small gap, which could in some circumstances cause older drives to lose their place. Only the Ricoh drive produced illegible blanks in the test. JustLink is not currently supported by any Linux burning program, so if you are buying a drive, check whether cdrecord or cdrdao can cope with it. Considering today's high-performance computers, however, a question naturally presents itself: do we really need a device with this technology? Ruined blanks are something of a rarity – at least under Linux.

hdX: Under Linux, all IDE devices are addressed as /dev/hdX, where hdX stands for:
hda: first controller, master
hdb: first controller, slave
hdc: second controller, master
hdd: second controller, slave ■

IDE/ATAPI burner under Linux cdrecord directly supports SCSI burners, assuming correct installation of the SCSI host adapter. ATAPI devices must first be mapped from IDE to the SCSI API. This is done via the kernel module ide-scsi: this module provides a SCSI emulation for every IDE drive which is not yet occupied by another driver, so that it appears to programs as a genuine SCSI drive.

The real problem is to keep the CD burner free for the ide-scsi module. Here the methods differ depending on whether IDE CD-ROM support is compiled permanently into the kernel or loaded as a module (ide-cd). The simplest way to find out is to mount a data CD into the file system and then use lsmod to check whether the ide-cd module has been loaded.

If IDE CD-ROM support is firmly anchored in the kernel, you have to tell the kernel by boot parameter that it should keep the burner free for the SCSI emulation. This is done using the parameter hdX=ide-scsi. From then on, /dev/hdX is no longer available and the burner can be addressed via /dev/scd0. The boot parameter should be permanently entered in the bootloader; in the case of LILO, the append entry in /etc/lilo.conf is extended to do this:





append = "hdX=ide-scsi"

If parameters already exist under append, hdX=ide-scsi is placed before them, separated from the rest by a space.

If ide-cd is not fixed in the kernel, then the ide-cd module's parameters must first be adapted so that it no longer accesses the burner; secondly, the ide-scsi parameters must be altered so that other devices are not claimed, even when their drivers have not yet been loaded. Both settings are defined in the file /etc/modules.conf. For the ide-cd module, just the line

options ide-cd ignore=hdX

needs to be inserted. Making sure that ide-scsi only claims the burner is unfortunately not that simple – the module has no option for it. For this reason, you need to make sure that all drivers for any other possible devices are already loaded before the ide-scsi module is initialised. This is effected by the following entry:

pre-install ide-scsi modprobe -k ide-cd; modprobe -k ide-tape; modprobe -k ide-floppy

The parameter -k ensures that the drivers are unloaded automatically when they are no longer needed. Last of all, in both cases the ide-scsi emulation must be entered as a SCSI controller:

alias scsi_hostadapter ide-scsi

Whether everything has worked is shown by:

# cdrecord -scanbus
Linux sg driver version: 3.1.17
Cdrecord 1.10a16 (i586-pc-linux-gnu) Copyright (C) 1995-2001 Jörg Schilling
Using libscg version 'schily-0.4'
scsibus0:
  0,0,0   0) 'TEAC    ' 'CD-W512EB       ' '2.0B' Removable CD-ROM
  0,1,0   1) *
  0,2,0   2) *
  0,3,0   3) *
  0,4,0   4) *
  0,5,0   5) *
  0,6,0   6) *
  0,7,0   7) *
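The three modules.conf entries can be generated for a given drive letter. In this sketch, modules_conf_fragment is my own convenience wrapper; it only prints the lines and deliberately leaves /etc/modules.conf untouched:

```shell
# Print (not install) the /etc/modules.conf lines for a burner on hdX.
modules_conf_fragment() {  # usage: modules_conf_fragment hdX
  cat <<EOF
options ide-cd ignore=$1
pre-install ide-scsi modprobe -k ide-cd; modprobe -k ide-tape; modprobe -k ide-floppy
alias scsi_hostadapter ide-scsi
EOF
}

modules_conf_fragment hdc   # burner as second controller, master
```

Redirect the output into a scratch file, check it, and merge it into /etc/modules.conf by hand as root.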



CALDERA TECHNOLOGY PREVIEW


Short Test of the Caldera Beta

HAS CALDERA CRACKED THE KERNEL? ANDREAS GRYTZ

Like most other distributions, Caldera relies on a graphical user interface during installation. The screen is split into a central area for displaying options, drop-down menus or text boxes, and a narrow field to the right which offers help on the respective installation steps. You can travel back and forth through the installation routine via the navigation areas at the lower edge of the screen.

Package installation But once the selection of packages has been made and the installation kicks off, the set-up program no longer allows any steps backwards. While the packages are being copied onto the hard drive, the user can still go on configuring the network, setting the computer name and installing a modem. But you will search in vain for an ISDN or DSL connection here. Installation and the rest of the configuration run in parallel, though the copy procedure can take somewhat longer. On completion of the installation, you are given the option of making a start diskette. A replication diskette, with which the installation could be replayed again and again in the same configuration, is not provided.

Pure and simple The package installation itself has been kept straightforward. Several categories are pre-set as a rough division. In a little box, a tick shows whether packages from the respective group have been selected. Each group can have several sub-groups, which in turn contain the individual packages. Caldera has decided on a display that presents the user with a plain-language description of each package. The precise version number is only revealed by clicking on the package, whereupon another window displays an additional description (if available). If it is your first time installing Linux and you don't want to be bothered with a detailed installation, you can simply let the computer make the pre-selection. A small bar helpfully displays, in colour, the available disk space and the space that will be occupied after installation.

Partitioning Partitioning in the Expert mode offers the user four partitions. This corresponds to the maximum number of primary partitions that can be created. The user has access to both ext2 and ReiserFS file systems. You can also define a swap partition via the menu or make an extended partition, as well as installing Soft RAID. It’s a bit worrying that only the partition limits are stated and thus no values in kilobytes or megabytes — making judging the right size something of a guessing game for the layman.

It's been fairly quiet on the OpenLinux front recently, but the new kernel release has forced Caldera into a frenzy of activity.

Under the bonnet With this new version, Caldera relies exclusively on KDE again (Version 2.1 this time). For the X server the user can choose between the almost up-to-date XFree86 4.0.2 and the older version 3.3.6. The Technology Preview sent to us came with kernel 2.4.2. XFree 4 is installed as standard, but on our test device (a notebook) the server had severe problems recognising the Savage MX graphics card. A nice touch for KDE devotees: COAS, the administration tool from Caldera, has been completely integrated into KDE's control centre. So Caldera has now drawn level with a few other distributions.

Conclusion The Technology Preview of the next Caldera OpenLinux version is simple to install. The user is guided with assurance to a usable system by the graphical user interface. Apart from a few minor glitches (such as the partition limits in the repartitioning of the hard drive), the lack of a keyboard option without dead keys is the most glaring omission. Generally, the whole installation goes off very quickly and doesn't overtax users with too many technical questions. It's just a shame that this is also the case in the Expert mode, and that the installation routine doesn't offer the option of switching off the graphical log-in – this would have been a big help, given the existing graphics card problem. ■



NOTEBOOKS UNDER LINUX

iBook and PowerBook

APPLE PIE MICHAEL ENGEL

If you're at all interested in notebooks, you can't fail to have noticed the introduction of Apple's innovative iBook and Powerbook onto the market. But how do they fare under Linux?

We tested a first-generation iBook and a Powerbook G3 Firewire. The G3 processor, clocked at 500MHz, is the heart of the Powerbook, which, with its 14.1-inch TFT display at XGA resolution, 128MB RAM, 20GB hard drive, DVD-ROM drive and diverse ports, truly represents a complete desktop replacement. The Powerbook is persuasive when it comes to details, too. Peripheral components such as batteries and drives fit into two bays in the notebook, on the left and right. The DVD drive can be swapped for a second battery or a different drive, and the range of accessories from third-party suppliers runs from ZIP via MO to CD-RW. It is even possible to hot-swap these under Mac OS, although unfortunately Linux can't do this yet. Another nice detail is the target mode, which can be selected at start-up. This mode allows a second Mac to address the hard drive of the Powerbook like a normal external Firewire disk.

Linux on the Powerbook G3 The installation of SuSE Linux 7.0 for PPC went smoothly, thanks to Yast 2, and, since the Powerbook has a 20GB hard disk, no space problems were anticipated. Unfortunately, not all hardware is supported – IrDA only works as SIR at a maximum of 115,200bps; the Fast-IrDA mode, which the Powerbook should have been able to handle, could not be activated. And we couldn't play DVDs, despite trying various tools. The soundcard refused to work after installation, but that was quickly corrected with modprobe dmasound – Yast had forgotten the corresponding entry in /etc/modules.conf. Otherwise the Powerbook G3 proved to be a stable and fast-working device.

iBook The iBook is Apple’s entry-level notebook, which is obvious from the equipment. There is an out-moded 12inch TFT display with SVGA resolution (800x600 pixels) together with a 3GB hard disk. There are no Firewire ports, PCMCIA slots or external monitor connections whatsoever, the memory, at a measly 32MB RAM, is very small and even at maximum expansion can only be doubled. Overall, this device is scarcely capable of expansion. On the other hand, processing and speed are good. The iBook has a very robust plastic casing, which would undoubtedly survive the odd tumble from the edge of a table, a usable keyboard (even if

The Powerbook G3 offers all the ports you’d expect from a desktop. 46 LINUX MAGAZINE 10 · 2001


NOTEBOOKS UNDER LINUX

ON TEST

Apple Powerbook G3-500 Firewire

the key travel is short) and a 300MHz G3 processor. Fast Ethernet, USB and Airport capability are all available.

Linux on the iBook Unlike the installation of SuSE 7.0 on the G3 Powerbook, on the iBook things went slightly awry. This was due to the insufficient memory. At just 32MB, SuSE’s installation routine felt itself forced to do without Yast 2’s graphical installation and opted for the tried and trusted Yast 1 instead. The text interface would come as something of a culture shock for a seasoned Mac OS user. A full installation was not possible with 2GB free hard disk space. This resulted in a trimmed-down system, in which a few vital X11 components could not be installed, necessitating many later installations. The start up of the X-Server was successful, although after that the iBook was largely tied up with swapping, thanks to KDE.

Airport support

All the latest Apple computers offer the option of adding an economical Airport card; the necessary slot and the aerial are already built in. Airport is actually Wireless Ethernet standardised as IEEE 802.11b, with a transfer rate of up to 11Mbit/s. With this, connections can be made both to other computers equipped with Wireless Ethernet and also to special base stations, which provide routers to Ethernet and modem connections. Airport is compatible with other products that work to IEEE 802.11b, such as Lucent Wavelan or Elsa Airlancer. While cards from other manufacturers usually sit in a normal PCMCIA slot, Apple has implemented a trimmed-down solution, which is why the normal Linux driver for Wavelan cards (wvlan_cs) does not work. An appropriate driver has been developed by Benjamin Herrenschmidt and can be found at http://www.penguinppc.org/benh/.

Apple iBook




Notebooks at a glance

Values are listed in the order: Apple Powerbook G3-500 Firewire · Apple iBook · Apple Powerbook G4-400.

Category: Top class · Entry-level model · High end
Memory [MB]: 128 · 32 · 128
Hard drive [GB]: 20 · 3.2 · 10
Diskette drive: – · – · –
CD-ROM: – · 24x/internal · –
DVD-ROM: 8x/plug-in · – · 8x/internal
Drives open sideways: yes · yes · no
Expansion slot usable for drives / Second battery: yes · not included · not included
Ports: PS/2 / serial / parallel / Line: –/–/–/+ · –/–/–/+ (Line Out only) · –/–/–/+ (Line Out only)
USB / IrDA / TV: +/+/+ · +/–/+ (later models only) · +/+/+
Docking port / ext. drives: no / yes (Firewire) · no / yes (later models only) · no / yes (Firewire)
Display: Type / Size [inch]: TFT / 14.1 · TFT / 12.1 · TFT / 15.2
Colour distribution / Brightness control: yes / yes · yes / yes · yes / yes
Keyboard: Key travel / Pressure point: good / noticeable · good / noticeable · good / noticeable
Offset cursor block: yes · yes · yes
Cursor functions without Fn key: yes · yes · yes
Loudspeaker covered when typing: no · no · no
Mouse type / Keys: Touchpad / 1 · Touchpad / 1 · Touchpad / 1
Battery: Type / Capacity [Wh]: Lithium-Ion / 45 · Lithium-Ion / 50 · Lithium-Ion / 50
Accessories: Special power cable for mains: yes · yes · yes
Modem / Ethernet cable / Video adapter: +/–/+ · +/–/– · +/–/+
Factory-installed operating system / Media: Mac OS 9.04 / Install and recovery CD · Mac OS 8.6 / Install and recovery CD · Mac OS 9.1 / Install and recovery CD
Graphics chip: ATI Mach 64 · ATI Mach 64 · ATI Rage 128
Graphics memory [MB]: 4 · 4 · 8
Modem chip (addressable): Apple's own (+) · Apple's own (+) · Apple's own (+)
Ethernet chipset (addressable): Apple's own (+) · Apple's own (+) · Apple's own (+)
Sound chip (addressable): Apple's own (+) · Apple's own (+) · Apple's own (+)
Cardbus chipset: TI PCI1131 (+) · not included · TI PCI1131 (+)
Cardbus slots: 1 x Type II or Type III · – · 1 x Type II or Type III
Cardbus cover: Spring shutter · – · Spring shutter
IrDA chipset (addressable): – (SIR) · not included · – (SIR)
TV output / port usable at the same time as display: + / SVHS; +/+ · not stated · + / SVHS; +/+
Max. useful resolution: 1024x768 · not stated · not stated
Power off / sleep / suspend to disk usable: +/+/– · +/+/– · +/+/–
Usable on text console / under X11: +/+ · +/+ · +/+
W / D / H [cm] (Weight [kg]): 32.3 / 26.4 / 4.3 (3.0) · 34.4 / 29.4 / 4.6 (3.0) · 34.1 / 24.1 / 2.6 (2.4)
Guarantee [months]: 12 · 12 · 12
Market launch: February 2000 · September 1999 · January 2001
Price [£]: 1650 · 1050 · 2300



Apple Powerbook G4-400

General

Many people will be wondering whether Linux on Powerbooks actually makes any sense. There are still very few commercial applications available for PPC-based Linux systems, but SuSE Version 7.0 seems to provide just about everything you could need, and the porting of Open Office to PowerPC is coming on apace. Apple's Powerbook and iBook deliver solid hardware at competitive prices, although in the case of the iBook you have to make a lot of compromises in return, while the Powerbook G3, on the other hand, keeps up well with the top models from the x86 range. You have to live with a few Apple idiosyncrasies: many people will curse the lack of second and third mouse buttons under X11. The support for external Firewire devices could easily be improved and, unfortunately, the latest 2.4 kernel versions don't have fully-integrated PPC support yet, forcing you to seek out a few patches. However, MOL (Mac On Linux), a virtual Mac OS machine, gives you the option of running Mac OS in an X window in a similar way to VMware on x86 systems (though MOL comes under the GPL). The integral Fast Ethernet port, the expandability with IEEE 802.11 wireless Ethernet at 11Mbit/s and the connection option for external Firewire devices of the G3 Powerbook are a combination of characteristics which you'd be hard pressed to find on x86 notebooks.

Titanium Powerbook G4

The latest Powerbook, the G4, has seen Apple make a radical break in design terms: the new Powerbook casing is manufactured completely out of titanium. Anodising the metal means that many colour ranges can be created, so we can look forward eagerly to future design ideas. Apart from the chic new outside, the Powerbook offers a Motorola G4 processor (PPC7410) clocked at either 400 or 500MHz, 128 or 256MB RAM (expandable up to 1GB), a hard drive with 10 or 20GB capacity, a 15.2 inch TFT display (in 3:2 picture format), a slot-in DVD drive, Airport capability, Fast Ethernet, Firewire, USB and infrared ports, together with a lithium polymer battery with a capacity of up to 5 hours' operation. At MacWorld Expo, held at the beginning of January, the developers of LinuxPPC succeeded in booting up a G4 Powerbook with LinuxPPC 2000 Q4. There is now also a HowTo describing the Linux installation on one of these little gems, which can be found at http://www.powermaclinux.net/php/powermaclinux_g_h.php3?single=53+index=0. ■

The author Michael Engel is involved with RISC processors and Linux. His latest interests include Embedded Linux, particularly its use in mobile devices.

Info
Powerbook info: http://www.powerbooklinux.org
Linux on Macs: http://linux.macnews.de
LinuxPPC developers: http://www.penguinppc.org ■


MAILBOX

KNOW HOW

Mail Delivery made easy

PROCMAIL COLIN MURPHY

Last month’s article described how to set up and configure a basic mailserver for dial-up machines, giving you control of your email rather than relying on monolithic mail programs like Netscape Messenger. Fetchmail was charged with the responsibility of collecting email from your ISP; now we have to do something with it to make all of this worthwhile. Procmail is a small but powerful program which will not only act as our mail delivery agent – moving the mail from our local mailserver to where a local user can read their messages in the home directory – it will also allow us to filter, or otherwise process, our mail. It will either remove unwanted mail or organise wanted messages so that they are not buried in a pile of spam screaming about get-rich-quick pyramid schemes. As with Postfix and Fetchmail, Procmail is widely available, being supplied with all of the boxed set distributions we’ve tried. Procmail is a feature-rich program that no overview could fully describe; all we can do here is give you a taste of the powerful control this application could unleash on your email handling. Here’s a list of some of the things it can do, as described in the documentation that comes with Procmail:

Event driven (invoked automagically when mail arrives)
Does not use *any* temporary files
Uses standard egrep regular expressions

Poses a very low impact on your system’s resources (it’s 1.4 times faster than the average /bin/mail in user-CPU time)
Allows for very easy-to-use yes/no decisions on where the mail should go (can take the size of the mail into consideration)
Also allows for neural net-type weighted scoring of mails
Filters, delivers and forwards mail *reliably*
Provides a reliable hook (you might even say anchor) for any programs or shell scripts you may wish to start upon mail arrival
Supports four mailfolder standards: single file folders (standard and non-standard VNIX format), directory folders that contain one file per message, or the similar MH directory folders (numbered files)
Native support for /var/spool/mail/b/a/bar type mailspools
Variable assignment and substitution is an extremely complete subset of the standard /bin/sh syntax
Provides a mail log file, which logs all mail arrival, shows in summary whence it came, what it



was about, where it went (what folder) and how long (in bytes) it was
Uses this log file to display a wide range of diagnostic and error messages (if something goes wrong)
Does not impose *any* limits on line lengths, mail length (as long as memory permits), or the use of any character (any 8-bit character, including ‘\0’, is allowed) in the mail
It has man pages (boy, does it have man pages)
Can be used as a local delivery agent with comsat/biff support (*fully* downwards compatible with /bin/mail), in which case it can heal your system mailbox if something messes up the permissions
Secure system mailbox handling (contrary to several well-known /bin/mail implementations)
Provides for a controlled execution of programs and scripts from the aliases file (i.e. under defined user ids)
Allows you to painlessly shift the system mailboxes into the users’ home directories

Procmail calls on .procmailrc, a file in the user’s home directory, which you will need to write with your favourite text editor. In its simplest form, you just need to set up some default variables so that Procmail knows where to find and post its work. It needs to look something like this:

PATH=$HOME/bin:/usr/bin:/usr/ucb:/bin:/usr/local/bin:.
MAILDIR=$HOME/Mail        # You’d better make sure it exists
DEFAULT=$MAILDIR/mbox
LOGFILE=$MAILDIR/from
LOCKFILE=$HOME/.lockmail

A quick-start tutorial

You will need to confirm that the directories are actually available, otherwise your precious email may go missing. With the latest Procmail versions, it is mandatory that this file is not writable by any user other than yourself, so you may need to run:

chmod go-w ~/.procmailrc

Procmail compares every email it receives against this file. As it stands, the file will simply move all your mail into the default mail directory, which is not very exciting, so we should add some filtering recipes:

:0                                   # Anything from thf
* ^From.*thf@somewhere.someplace
todd                                 # will go to $MAILDIR/todd

This is very straightforward. The :0 signifies the start of a new recipe. On the second line, the * tells us that this is a condition: the “From” header of the email is compared against the text expression. The third line gives the directory to which the email will be delivered should the condition above prove true, if we have remembered to create the directory first! So, emails from thf@somewhere.someplace and 123thf@somewhere.someplace should all now appear in the todd subfolder in KMail. Here are some more examples:

:0                                   # Anything from people at uunet
* ^From.*@uunet
uunetbox                             # will go to $MAILDIR/uunetbox



[left] FAQs galore [right] Procmail.org is always worth a look

:0                                   # Anything from Henry
* ^From.*henry
henries                              # will go to $MAILDIR/henries

# Anything that has not been delivered by now will go to $DEFAULT
# using LOCKFILE=$DEFAULT$LOCKEXT

These mail subdirectories will appear in the mail user agent, KMail for instance, as subfolders. Should you be using some other MUA, you should bear in mind that the mail folder used by default might be different. Netscape, for example, uses $HOME/nsmail by default for its mail folder, so that’s what you would set the MAILDIR variable to. This still hasn’t done anything outstanding though, nothing that we couldn’t have achieved using the built-in filtering routines in programs like Netscape. The .procmailrc file uses a shell-like syntax, and there is nothing to stop us from calling other scripts should a filtering condition be met, and this is where the power and control lie. With some creative .procmailrc files you can automatically delete the unwanted, unsolicited commercial emails which clog up the inbox, or fire klaxon warnings should you receive a message from your editor telling you how late your copy is. Procmail comes with some other utility programs – Formail is one. With these you could write a script to reformat digest mailing list messages back to their original form, making it much easier to follow the thread of a discussion, like so:

:0
* ^Subject:.*Digest
| formail +1 -d -s procmail

LOGFILE=$MAILDIR/from    # Put it here, in order to avoid logging the
                         # arrival of the digest.

You’re not limited to moving emails between folders – you can also copy them. This would allow you to keep logs of all emails received, recording just the details you think are necessary. This could mean merely logging the time, Subject and From header fields, or preserving the entire email, a copy of which could be sent directly to some archive/compression routine. Procmail comes with a wealth of documentation and there are lots of third-party tutorials, FAQs and discussion groups, probably because it is an ideal utility to tinker with, as well as providing a much-needed service. ■

Bristol University is helpful in setting up filters
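Two more sketches of the ideas above: piping matched mail to your own program, and carbon-copying with Procmail’s c flag. The address, script path and folder name here are invented purely for illustration:

```
:0                                    # Mail from the editor...
* ^From.*editor@somewhere.someplace
| $HOME/bin/klaxon.sh                 # ...is piped to your own warning script

:0 c                                  # The c flag takes a copy of every message
archive                               # into $MAILDIR/archive, then lets the
                                      # message fall through to later recipes
```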



SOFTWARE

GIMP WORKSHOP

Image processing with Gimp: Part 3

PICTURE PERFECT SIMON BUDIG

Last month we covered using Gimp’s basic painting tools, which should mean you’re well equipped for this issue’s topics: selection and photograph reprocessing. Let’s conduct an experiment: try filling a new image with black, create an elliptical selection and use the fill tool to colour it white. At first glance you’ll see that a perfectly clean white egg on a black background has been created. On closer inspection, however, a couple of things don’t look quite right. You’ll notice that the border between the two areas is not a hard edge where black and white meet; in fact it is a muddy concoction of shades of grey.

Figure 1: The fill tool has spilt some colour outside the border

By taking a closer look with the magnifying glass tool, you’ll see that the fill tool has filled beyond the selected area and that the inside of the egg is not consistently shaded (Figure 1, the marching ants are coloured for clarity’s sake). But how has this happened? Gimp pixel selection is not just a simple on or off proposition. It is possible to select the saturation level for each pixel — the higher the saturation, the brighter the colour. When using the selection tool to select a gradated area of colour, you’ll discover that the whole of the area doesn’t appear to be selected. The marching ants only surround an area of colour with a 50% or more saturation level, although the whole of the area is in fact selected. The ellipse tool employs saturation options when using Anti-aliasing, which means the edges of the finished image don’t look so serrated. The idea of only half selecting pixels seems a bit odd at first, but is very important when reprocessing photos. You will find an image of me on the coverdisc (simon.png), and we’ll use this to take a look (you can of course use a picture of your own).

Skin problems

Figure 2: Soft edges allow gentle transitions

Anti-aliasing makes lines and edges look smoother. If an oblique edge were drawn in a 100% saturated colour, ‘stairs’ or Aliasing would be visible. However, if several gradated saturation values are used to blur the steps, the line appears smoother. In Gimp, the paintbrush tool uses anti-aliasing — the pencil tool does not. ■

Open the image in Gimp. Create an ellipse, on the forehead for example, and select <image>/Filters/Artistic/Apply Canvas. Chasing away the marching ants with <image>/Selection/None reveals a sharp-edged screen effect. This type of edge often looks unnatural, which is why we’re going to create a soft transition with the aid of half-selected pixels. Open the tool settings for the ellipse selection by double-clicking on the appropriate symbol. Now use the slider to select the radius of the feather edge — 20 pixels is a good value. Select a new area in the image. At first glance, the ants seem to be marching



[left] Figure 3: Select areas with similar colours [right] Figure 4: Quickmask mode allows you to correct selections using painting tools

in the same area as before, but calling up the screen effect reveals that the edge has softened. But, having created a soft edge without the guidance of the marching ants, how else can you tell whether the edge is softened or not? As mentioned earlier, it’s possible to set the saturation level of a pixel in Gimp or, more precisely, a pixel can be selected in 256 levels. A selection is nothing but a grey-shade image, or channel: the paler a pixel is, the stronger the selection. Clicking on the red square at bottom left in the image window enables you to edit this channel directly. The selection then turns into a mask and the channel lies — coloured red — over the original image. The various graduations of a selection can be recognised here: the selected areas show the original image. The second ellipse clearly displays a soft edge, which you can work on using the painting tools. The smudge tool is really handy for pinpointing any sharp edges you’d like to blur.

Let’s try out a practical example. You’ll find an image of a tiger (tiger.jpg) on the coverdisc, which you can load into Gimp. Let’s make the background a bit less sharp, so that the tiger stands out from it more clearly. To do this, we have to select the background. First, make a rough selection using Select/By Color: select 50 as the Fuzziness Threshold value and click on the grey area in the top left of the image. The black and white image of the selection channel now appears in the dialog (Figure 3). As you can see, unfortunately it is not only the background that’s been selected. Hold down the [Shift] key to remedy this. This adds the two selections together (as you may remember from part 1 of this series). Now click in the black area at top right. To sharpen up the tiger you have to change to mask mode; do this by closing the Colour Selection dialog and clicking on the red square at bottom left in the image window.
Select the paintbrush tool and set black as the foreground colour and white as the background colour. Using a coarse paintbrush from the paintbrush selection, paint inside the tiger so that it is protected from the mask. It doesn’t matter if you paint slightly over the outline. Then swap the

Figure 5: The tiger with background blur

foreground and the background colour using the [X] key and delete the red marks in the background. Select a smaller paintbrush with a somewhat softer edge and correct the outline of the tiger where necessary. If you can’t see the outline for edges, simply invert the mask via Image/Colours/Invert. The tiger should now be much more clearly defined, and the background covered in red. If you are satisfied with the result, click on the hatched square at the bottom left in the image window to toggle back into normal mode. The tiger should now be encircled with marching ants. Before we finish off, double-check that the background is selected (the marching ants should be surrounding the tiger and the edge of the image). If the background isn’t selected, invert the selection using Select/Invert or [Ctrl+i]. Then you can blur the background using Filters/Blur/Gaussian Blur (IIR) with a radius of 15. The result looks something like Figure 5.
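The blending that a 256-level selection mask performs can be sketched numerically. This is a minimal Python illustration (not Gimp code) of the principle: the paler the mask pixel, the more of the filtered result shows through. All pixel values are made up:

```python
def blend(original, filtered, mask):
    """Per-pixel blend of single-channel values: mask 255 = fully
    selected (filtered result wins), 0 = unselected (original kept)."""
    return [
        (m * f + (255 - m) * o) // 255
        for o, f, m in zip(original, filtered, mask)
    ]

original = [200, 200, 200, 200]   # untouched pixels
filtered = [100, 100, 100, 100]   # the same pixels after, say, a blur
mask     = [0, 64, 192, 255]      # a feathered selection edge

print(blend(original, filtered, mask))  # [200, 174, 124, 100]
```

The gradual ramp from 200 to 100 is exactly the soft transition a feathered edge produces.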

Retouching One of the most common, and perhaps most useful, applications of image processing programs is that of retouching photographs. Removal of the red-eye effect is a popular way of improving images and the basic steps for removing red-eye are explained in Figure 6. The image used can be found on the coverdisc as red-eye.png. You can’t remove this effect automatically in Gimp, but nevertheless, it’s very easy to do. Zoom into the image so that you can work accurately

Red-eye effect: This often happens when portraits are taken using a flash in dim light. The pupils of the subject’s eyes are dilated and the sudden burst of light from the flash is reflected back by the retina in similar way to cats’ eyes. To prevent this, most cameras offer a pre-flash, which causes the pupils to close and can reduce the effect. ■




[top left] Figure 6: Removing the red-eye effect

[top right] Figure 7: It is easier to remove blemishes in a photo than from actual skin...

[bottom left] Figure 8: Use the clone tool to copy areas of an image. [bottom right] Figure 9: Change colours using the levels tool. The area in the circle has been intensified.

inside the eyes. Paint the red pupils black with the paintbrush tool, using black as the foreground colour and a small, gently fading out paintbrush. Make sure the light reflection remains visible and don’t paint over the iris. Skilled artists can also work with the airbrush tool or a lower covering power (found in the tool settings). The light reflection quality of the eyes may have suffered somewhat as a result of our corrections, making the eyes appear a bit listless. To correct this, select white as the foreground colour and enhance the reflection of light slightly. When doing so, it’s important to try to keep the shape and position of the reflection the same — otherwise the eyes could begin to look a little strange.


Now we’ve removed the demonic eye effect, let’s turn our attention to corrective surgery. I have a small blemish between the eyes and it would be nice if this could be eradicated from the photo. The fastest way of doing this is to use the smudge tool. Select a medium-sized paintbrush with a soft edge and wipe it from the outside onto the spot (Figure 7). After a short time, the spot should have disappeared. Only small faults can be cleared up like this, since the smudge tool eventually obliterates all detail.

If you want to correct larger areas, you will have to fall back on the clone tool, which copies parts of the image. The smudge tool would be no use in getting rid of the shadows behind my head, since the photographic grain would be completely obliterated. We have to paint over the shadows with illuminated parts of the wallpaper. To do this, click on the Clone Tool symbol, open the tool settings with a double-click and set the tool to Aligned (see Figure 8). Then select a large paintbrush with a soft fade-out and find the area in the image you’d like to copy. Press the [Ctrl] key, click on your chosen starting point and then release [Ctrl]. When you then click in the shadows, the relationship between source and target point is defined, as you can tell by the small cross-hairs. You can now eliminate the shadows gradually. From time to time, you should redefine the start point with the [Ctrl] key, so that the various bright parts of the wallpaper fade smoothly into each other. By copying in pieces, the structure of the wallpaper is retained. Just a couple more words on the various settings for the clone tool: Aligned means that the relationship between target and source is defined



when you click in the image (first with, and then without, the [Ctrl] key). Non-aligned defines the source with a [Ctrl] click. A fresh start is made at this source with each subsequent mouse click. This is useful when you want to copy large areas. The last point, Registered, always fixes source and target at the same co-ordinates. This is unnecessary within one image, but makes it easy to copy between different images.

So green...

Now let’s look at changing colours. On the coverdisc you will find forest.jpg, an image of a rain forest. As you will surely notice, the image is markedly overexposed. Such errors can be corrected with the commands from the Image/Colors/... menu. First select Levels — the dialog in Figure 9 will appear. You can correct the basic brightness of the image here. With the button at the top of the dialog you can set whether you want to edit all or just individual colour channels (red, green or blue). In the area below you will see a histogram, showing how the colours are distributed in the image. Using the triangles below the two grey colour scales you can control precisely how the colours should be adjusted. The black and the white triangle in the input levels are mapped onto the corresponding triangles at the output levels; the grey triangle can be used to shift the average value a bit. In this case it is enough to drag the grey triangle slightly to the right, to darken the bright medium colour tones. If you’d like to create a misty atmosphere, you can drag the two triangles at the output levels roughly to the centre. The contrast is considerably reduced as a result — it’s rather like the smoky haze after a battle. While the Levels tool essentially retains the colours, the Curves tool allows more flexibility. Start it via Image/Colors/Curves. In this dialog, you can define almost any curves and easily create solarisation effects (Figure 10). The results are


sometimes reminiscent of badly coloured images — looking like satellite photos. Edit the individual channels separately to correct this. The rain forest scene still doesn’t look exactly verdant, so you could reduce the blue a bit. Use the top button to select the blue colour channel and click in the curve area slightly below the centre. An additional control point will be added to the curve, and the curve is slightly dented. The blue channel is slightly darkened where the image is of medium brightness, which makes for more saturated greens. Play around with the curves until you have grasped the idea behind them. This is a very powerful tool for correcting colours.
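The mapping behind the Levels dialog can be written as a small formula. This Python sketch (not Gimp’s actual code) maps a channel value through the input triangles, a gamma for the grey triangle, and the output triangles; the concrete numbers are just examples:

```python
def levels(v, in_lo=0, in_hi=255, gamma=1.0, out_lo=0, out_hi=255):
    """Map one 0-255 channel value the way a Levels dialog does."""
    x = min(max((v - in_lo) / (in_hi - in_lo), 0.0), 1.0)  # normalise to input range, clamp
    x **= 1.0 / gamma                                       # grey-triangle (mid-tone) shift
    return round(out_lo + x * (out_hi - out_lo))            # stretch into output range

# Dragging both output triangles towards the centre compresses contrast:
print(levels(0, out_lo=80, out_hi=180), levels(255, out_lo=80, out_hi=180))  # 80 180
```

Pure black becomes 80 and pure white 180: the reduced contrast gives exactly the “misty atmosphere” described above.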

Perspectives

The last tool we’ll look at is the transformation tool, which allows you to distort images. Gimp provides four different types of distortion via the tool settings: Rotation, Scaling, Shearing and Perspective. Rotation and perspective distortion are especially useful. To turn our primeval forest into a “slantwise photographed wallpaper”, select Perspective from the “Tool Option” settings (double-click on the Transform Tool icon) and click in the image. A grid appears over the image, along with an additional dialog. Distort the grid in the image window by clicking close to the vertices and dragging them to the target position. After a click on Transformation the image is distorted accordingly. These transformations are also useful for correcting mistakes: if you have scanned in an image askew, for instance, select corrective rotation and adapt the grid to the edges which should really be straight. After clicking on Rotate, the image is pulled straight. It can now be put into the appropriate format using the Crop tool, and saved. We will deal with the grey quadrangles that might have cropped up during the distortion in our next installment. Tricks such as transparency and levels will also be on the agenda.

The Author Simon Budig suffered badly from Murphy’s Law when producing this piece. He would like to express many thanks to Michael Engel for rescuing this article from his hard disk.

[bottom left] Figure 10: The curves tool produces vivid colour effects [bottom right] Figure 11: Perspective distortions are defined by means of a grid.




MAIL PROTOCOL

Multiple Personalities

IMAP MIKE BRODBELT

I’ve been using electronic mail for some years now, and, like many other Linux users, I have more than one email address. Currently, I have about six accounts that I actively use, and several dozen other addresses that deliver to me. I used to use my email to manage my work — these days managing my email has become my work, along with managing mail for my network. For anyone who has a requirement to access multiple separate mail accounts that reside on different machines, access to email becomes problematic. Many mail systems use the tried and tested POP3 protocol. POP3 client applications download the mail from the server, and store it on the client computer. Most clients store mail in their own format, making it inaccessible to other mail programs, and most machines which run POP3 clients are desktops, which are rarely on 24/7, further reducing access to the mail once it has been downloaded. The IMAP protocol attempts to remedy some of these problems. The strength of IMAP (Internet Message Access Protocol) lies in online and disconnected operation. Unlike POP3, mail is not copied from the server and then deleted. Instead, IMAP clients manipulate the mail on the server, and permit access to remote, server hosted mailboxes as though they were local resources. An IMAP mail system has a number of immediate advantages for users: As all mail is stored on the server, changing mail client becomes the work of seconds. All that is

As of this writing, the IMAP capabilities defined are:

ACL                 [RFC2086]
IDLE                [RFC2177]
LITERAL+            [RFC2088]
LOGIN-REFERRALS     [RFC2221]
MAILBOX-REFERRALS   [RFC2193]
NAMESPACE           [RFC2342]
QUOTA               [RFC2087]
UIDPLUS             [RFC2359]
STARTTLS            [RFC2595]
LOGINDISABLED       [RFC2595]
ID                  [RFC2971]

required is to configure a new IMAP client with the IMAP account details.
An IMAP client can easily be configured to view multiple mailboxes on physically separate servers.
Multiple IMAP clients can be used by each user. This makes implementing a Web mail solution for roaming users a simple task.
IMAP maintains message status flags on the server for read, answered, etc.
IMAP allows shared folders. This makes it easier to implement generic email accounts for an organisation, and then allow multiple users to access those accounts.
Many implementations also allow server-side filtering of mail. This can be an extremely useful feature when users are accessing their mailboxes through different email clients.
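To make the “manipulate the mail on the server” point concrete, here is a rough sketch of a raw IMAP4rev1 exchange, as you might see it by talking to port 143 by hand. The username, password and message number are invented, and server replies vary between implementations:

```
C: a001 LOGIN mike secret
S: a001 OK LOGIN completed
C: a002 SELECT INBOX
S: * 18 EXISTS
S: a002 OK [READ-WRITE] SELECT completed
C: a003 STORE 7 +FLAGS (\Seen)
S: a003 OK STORE completed
C: a004 LOGOUT
S: * BYE IMAP4rev1 server terminating connection
```

The STORE command marks message 7 as read on the server itself, which is why every client the user connects with afterwards sees the same status flags.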

Software

There are a number of IMAP servers available, but this discussion is limited to those which fall under some sort of open source licence. IMAP offers a number of extensions to the basic protocol, and different servers implement different subsets of this functionality. Detailed information about any of these can be gleaned from the appropriate RFC, but those likely to be of most interest are ACL (access control list) support, which offers fine-grained control over user access to mailboxes, QUOTA support, which permits mailbox-level quotas independent of any disk quota scheme in use, and STARTTLS, which allows IMAP over SSL-secured connections. The three main open source IMAP servers in use are: Courier IMAP (http://www.inter7.com/courierimap/), University of Washington (UW) IMAP


MAIL PROTOCOL

(http://www.washington.edu/imap/), and Cyrus (http://asg.web.cmu.edu/cyrus/imapd/). These servers all offer slightly different feature sets, and which is best will depend entirely on the demands of the user base it is expected to serve. The choice of IMAP server may well be dictated by the MTA (Mail Transfer Agent) in use, as both the MTA and the IMAP server must understand a common mailbox format. The UW server offers no support for maildir, and has no plans to do so at the time of writing. The Courier server was specifically written to allow IMAP access to maildir format mailboxes, and so users of Qmail (which uses maildir) will find Courier to be their only choice of IMAP server at this time. The Cyrus server supports only its own format, but provides with the distribution a local delivery agent that can understand this format, so integrating it into most MTAs should be possible. The UW server supports several mailbox formats, so if access to mail via Elm or any other mail client that reads the mailbox directly is required, then UW will be the server of choice.

If the choice of server has not already been made by the above paragraph, other features may be important. The Cyrus server allows you to run it as a black-box system: Cyrus users need neither shell access to the IMAP server nor an account in /etc/passwd, whereas users of UW or Courier servers need accounts in /etc/passwd to receive mail. Cyrus implements the IMAP ACL and QUOTA extensions; UW relies on OS-level disk quotas, and thus can generate hard bounces for over-quota situations.

For my purposes, I chose to use the Cyrus server. Cyrus is a feature-rich server, and supports several features which I consider important. It implements the IMAP ACL and QUOTA extensions, which give it great administrative flexibility. It has full support for several encrypted authentication methods, via the Cyrus-SASL library (see below), and it supports IMAP over SSL.
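As an illustration of the ACL and QUOTA extensions in practice, Cyrus administration is typically done through the cyradm shell. The mailbox names and limits below are hypothetical, and the exact option syntax varies between Cyrus versions, so treat this as a sketch rather than a recipe:

```
cyradm> createmailbox user.alice           # cm: create a mailbox for user alice
cyradm> setquota user.alice 10000          # sq: impose a storage quota (in KB)
cyradm> setaclmailbox user.alice bob lrs   # sam: let bob look up, read and
                                           # keep his own seen state on it
```

The ACL letters (l, r, s and friends) come from the IMAP ACL extension itself, so the same rights model applies whatever client or admin tool is used.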

KNOW HOW

BOX 1
configdirectory: /var/imap
partition-default: /var/spool/imap
admins: cyrus
sendmail: /usr/sbin/sendmail

IMAP client IMP

Compiling and installing the Cyrus server

There are binary distributions of Cyrus available in rpm or deb format, and installing one of these may well represent the simplest way to get the Cyrus server installed. Nevertheless, I chose to compile the server from source, as this provides far more flexibility than a precompiled distribution, and, with a package as complex and powerful as Cyrus, the time invested in customising the setup for your needs is time well spent. While Cyrus is an excellent package, the documentation left much to be desired.

The first problem to be faced in attempting to compile Cyrus was the Cyrus SASL (Simple Authentication and Security Layer) library. Recent versions of Cyrus (version 2 or greater) require the Cyrus SASL authentication library to be installed before the IMAP server. SASL is an authentication multiplexer — it can be compiled to use a number of authentication methods, and it hides the details of these authentication methods from the application using them. A site may have a number of applications that use SASL, and these applications need only be written to authenticate via SASL. The SASL library can be built to authenticate via Kerberos, GSSAPI, CRAM-MD5, DIGEST-MD5, and others.

SASL provides the option of storing authentication information within a Berkeley database on disk, for those who do not have a Kerberos or similar infrastructure in place. If this is to be used, it is important that the SASL library and the applications using SASL be compiled with the same version of libdb. SASL will happily compile with the version included with glibc on most systems, but Cyrus IMAP will not, and requires Berkeley DB. The Berkeley DB package can be downloaded from http://www.sleepycat.com. I installed it in /usr/local/BerkeleyDB.3.2/ and then configured SASL to use it:

# export LIBRARY_PATH=/usr/local/BerkeleyDB.3.2/lib/
# export C_INCLUDE_PATH=/usr/local/BerkeleyDB.3.2/include/
# export LDFLAGS=-R/usr/local/BerkeleyDB.3.2/lib/
# ./configure --prefix=/usr --disable-gssapi --disable-krb4 --with-pam=yes --with-dblib=berkeley --with-rc4=/usr/local/ssl/

This configuration was for a test system with no Kerberos or GSSAPI authentication, with OpenSSL 0.9.6. OpenSSL should be compiled to generate a shared library. This compiles a SASL library with support for anonymous, CRAM-MD5, DIGEST-MD5, and PLAIN authentication methods. Any application compiled against the SASL library will now be able to


Mutt as the client

offer any of these authentication methods to client applications. This can provide significant additional security — PINE and Mutt both support CRAM-MD5 authentication, which obviates the need to send authentication credentials in the clear. The SASL architecture allows more authentication methods to be plugged in to SASL as they are developed.

Once the SASL library is installed, Cyrus can be compiled relatively easily, though it is important to remember to add a user account for Cyrus to run under to /etc/passwd before compiling the server. The version I used was 2.0.11, and it was configured as follows:

# ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --with-openssl=/usr/local/ssl/ --with-sasl=/usr/lib/sasl/ --without-krb --with-dbdir=/usr/local/BerkeleyDB.3.2/

To make Cyrus compile correctly, I had to make two small alterations. I added a symbolic link from /share to /usr/share (without this the compile_et program caused the compile to fail), and I also had to copy the SSL shared libraries from /usr/local/ssl/lib to /usr/lib before the compilation could find them.

Configuring and testing the Cyrus server

BOX 2
pop3    110/tcp
imap    143/tcp
imsp    406/tcp
acap    674/tcp
imaps   993/tcp
pop3s   995/tcp
kpop    1109/tcp
sieve   2000/tcp
lmtp    2003/tcp
fud     4201/udp
After installation, there are several steps necessary to get your new IMAP server up and running. First, create your /etc/imapd.conf file. This is a simple configuration file, and a basic setup should look something like the file in Box 1. For a full description of the fields in this file, see the imapd.conf(5) man page. Next, create the “configdirectory” specified in the imapd.conf file. Ensure this is owned by the Cyrus user and group (by default, cyrus:mail), and change its permissions to 750. Do the same for the “partition-default” directory. Then, run the tools/mkimap script from the Cyrus distribution as the Cyrus user; this will create the Cyrus directories under those you just created. On Linux ext2 filesystems (this does not apply to ReiserFS, XFS, or similar), it is important to use the “chattr +S” command to set these directories and their contents for synchronous updates; without this attribute, ext2 can be prone to mailbox corruption under certain circumstances. Using synchronous updates forces the operating system to flush changes to these directories to the disk immediately, and incurs a performance overhead. For a large system, it may be preferable to use a journaling filesystem to obviate the need for this. Ensure that your /etc/services file contains all the entries in Box 2.
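As a quick sanity check, a sketch like the following greps a services file for the Box 2 entries (only a subset is shown; nothing is modified, so it is safe to run against the real /etc/services):

```shell
# Report any Box 2 entries missing from a services file (sketch).
SERVICES=${SERVICES:-/etc/services}
while read -r name port; do
    grep -Eq "^$name[[:space:]]+$port" "$SERVICES" || echo "missing: $name $port"
done <<EOF
pop3 110/tcp
imap 143/tcp
imaps 993/tcp
pop3s 995/tcp
sieve 2000/tcp
lmtp 2003/tcp
EOF
```

Any line it prints names an entry that still needs to be added by hand.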

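The directory preparation described above can be condensed into a few commands. This is a sketch run against a scratch root, so it is safe to try unprivileged; on a real system you would operate on the real /var paths as root and add the chown, mkimap and chattr steps:

```shell
# Sketch of the mailstore directory setup. ROOT points at a scratch
# directory here; use the real paths (as root) for an actual install.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/imap" "$ROOT/var/spool/imap"
chmod 750 "$ROOT/var/imap" "$ROOT/var/spool/imap"
# On the real system, additionally:
#   chown cyrus:mail /var/imap /var/spool/imap
#   su cyrus -c 'tools/mkimap'            # from the Cyrus source tree
#   chattr +S /var/imap /var/spool/imap   # ext2 only: synchronous updates
ls -ld "$ROOT/var/imap" "$ROOT/var/spool/imap"
```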

Finally, the master process must be configured. The Cyrus distribution comes with a number of sample configurations in the master/conf directory. Choose the appropriate one, copy it to /etc/cyrus.conf, and uncomment the entries required. To test connections to the IMAP server, start the master process and try to telnet to the server on the IMAP port:

$ telnet bifrost 143
Trying 192.168.1.4...
Connected to bifrost.altair.nexus.
Escape character is '^]'.
* OK bifrost.altair.nexus Cyrus IMAP4 v2.0.11 server ready

If you see a greeting message like that above, your server is running. Add a user and password to your SASL secrets file using the saslpasswd utility (this won’t be necessary if you already have an authentication framework like Kerberos in place). You can then test connections for this user with the imtest script from the Cyrus distribution:

# /usr/bin/imtest -m login -a imapuser bifrost
C: C01 CAPABILITY
S: * OK bifrost.altair.nexus Cyrus IMAP4 v2.0.11 server ready
S: * CAPABILITY IMAP4 IMAP4rev1 ACL QUOTA LITERAL+ NAMESPACE UIDPLUS ID NO_ATOMIC_RENAME UNSELECT MULTIAPPEND SORT THREAD=ORDEREDSUBJECT THREAD=REFERENCES IDLE AUTH=DIGEST-MD5 AUTH=CRAM-MD5
S: C01 OK Completed
Password:
C: L01 LOGIN imapuser {9}
+ go ahead
C: <omitted>
L01 OK User logged in
Authenticated.
Security strength factor: 0
. logout
* BYE LOGOUT received
. OK Completed
Connection closed.

At this stage, you have a working IMAP server installed. You now need to add user mailboxes. This is done with a Perl program called cyradm, which is installed as part of the Cyrus distribution. This should be run as the Cyrus user, and allows a number of administrative operations:

$ cyradm bifrost.altair.nexus
Please enter your password:
bifrost.altair.nexus> ?
authenticate, login, auth         authenticate to server
chdir, cd                         change current directory
createmailbox, cm, create         create mailbox
deleteaclmailbox, dam, deleteacl  remove ACLs from mailbox
deletemailbox, delete, dm         delete mailbox
disconnect, disc                  disconnect from current server
exit, quit                        exit cyradm



help, ?                           show commands
listacl, lam, listaclmailbox      list ACLs on mailbox
listmailbox, lm                   list mailboxes
listquota, lq                     list quotas on specified root
listquotaroot, lqr, lqm           show quota roots and quotas for mailbox
renamemailbox, rename, renm       rename (and optionally relocate) mailbox
server, servername, connect       show current server or connect to server
setaclmailbox, setacl, sam        set ACLs on mailbox
setquota, sq                      set quota on mailbox or resource
version, ver, info                display version info of current server

Each user should have a mailbox created. For the imapuser test user, create a mailbox called user.imapuser through cyradm. This will become the INBOX for that user. All other mailboxes will be subordinate to this one, and are best created via a mail client. To complete the installation, you need to arrange for the Cyrus master process to start when the system boots, and also configure your MTA to deliver mail into the Cyrus mailstore. Cyrus provides a local delivery agent, and the MTA must be configured to call it for local mail. The Cyrus documentation provides information on how to achieve this with sendmail. For other MTAs, different procedures will be required.
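For illustration, a hypothetical cyradm session creating this mailbox might look like the following (the quota value is an arbitrary example in kilobytes, and the exact prompts and subcommand syntax vary slightly between Cyrus versions):

```
$ cyradm bifrost.altair.nexus
Please enter your password:
bifrost.altair.nexus> cm user.imapuser
bifrost.altair.nexus> sq user.imapuser STORAGE 10000
bifrost.altair.nexus> lm
bifrost.altair.nexus> exit
```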


Conclusion

It should be clear from the above that a Cyrus installation provides a powerful and flexible mail system. Installing and configuring it correctly is not a simple process, but once set up, it provides a vastly superior alternative to the traditional POP3 mailbox setup. Most mail clients can be configured to use multiple IMAP accounts, so for users with many mailboxes IMAP simplifies mail handling immensely. All your mail accounts can be handled from anywhere with an IMAP client. Several IMAP Web mail clients exist; these can vastly simplify life for roaming users, who can then access their mail from anywhere with an Internet connection. Examples of these are Squirrel Mail (http://www.squirrelmail.org) and IMP (http://www.horde.org/imp/2.2). Both of these require additional configuration work, as they are PHP based. Squirrel Mail uses its own implementation of the IMAP protocol, and for this reason is probably easier to set up than IMP, which requires an external library distributed with the UW IMAP server. The screenshots show three IMAP clients in use — Netscape, Mutt, and IMP. All three are using different forms of encrypted authentication, and are viewing the same mailbox.

BOX 3 IMAP over SSL
Enabling SSL support in Cyrus can be simply achieved by creating a self-signed X509 certificate and private key pair with OpenSSL:

$ openssl req -new -x509 -nodes -out /var/imap/server.pem -keyout /var/imap/server.pem -days 365

Then, uncomment the imaps service definition in /etc/cyrus.conf, and add the following lines to /etc/imapd.conf:

tls_cert_file: /var/imap/server.pem
tls_key_file: /var/imap/server.pem

After restarting the imap server, SSL support will be enabled. ■
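As a runnable sketch of this step, the certificate can be generated non-interactively into a scratch directory (the fixed -subj avoids the prompts; substitute your real hostname and the /var/imap path in production). The key and certificate are written separately, then concatenated into the single server.pem file that the tls_* settings point at:

```shell
# Generate a self-signed cert/key pair in a scratch dir (sketch).
DIR=$(mktemp -d)
openssl req -new -x509 -nodes -days 365 \
    -subj "/CN=bifrost.altair.nexus" \
    -keyout "$DIR/server.key" -out "$DIR/server.crt"
cat "$DIR/server.key" "$DIR/server.crt" > "$DIR/server.pem"  # combined file for Cyrus
openssl x509 -in "$DIR/server.crt" -noout -subject           # verify the subject
```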

Mail server settings for Netscape


REVIEW

BOOKS

JUST FOR FUN ALISON DAVIES

This is the gospel according to Torvalds: the account of the birth of Linux by the one person who knows the truth, and the story of the man who went from computer programming student to multimillionaire and figurehead of the Open Source movement. The book was written in conjunction with David Diamond, a journalist for the New York Times, Business Week and Wired, and editor of Red Herring Magazine. It interweaves accounts of Diamond’s meetings with Torvalds to collaborate on various chapters with the account of Torvalds’ life and his own views on Linux and Open Source. The title — Just for Fun — comes from Torvalds’ belief, stated more than once in the book, that the ultimate motivation for everything in life is that it is ‘just for fun’. Entertainment is the meaning of life, once it has progressed through survival and social order, and the development of Linux was a product of this. He started writing the program as entertainment, to learn more about the workings of his new computer, and the development continued as a team entertainment over the Internet as more and more people had an input. The book chronicles Torvalds’ life from his childhood and first encounters with his grandfather’s VIC 20, through his initial choice of a Sinclair QL and his student days with his new PC bought with birthday and Christmas money. It

covers a little of his family background, but he admits that his memory is not as clear when it comes to family events as it is when asked about events in the life of Linux. His description of his early development of Linux is preceded by a warning that the section contains geek language, so we have been warned! Having followed the birth of Linux, the story follows Linus’ move to the US, the birth of his children and his financial fortunes due to the stock market launch of Red Hat. Some scenes give us both points of view, Torvalds’ and Diamond’s, with fascinating insights into what different people consider important. Occasionally we are treated to a third point of view, when Tove, Linus’ wife, gives her clarification of an issue (the famous penguin was originally her idea). Linus sets out his very pragmatic views on Open Source; he defends himself against accusations that he betrayed the movement by allowing Linux to be distributed commercially and by taking a job with a very secretive company. He also defends himself against accusations that fame and fortune have made him less accessible. The book explains his philosophy on life and makes no apology for his geekiness, in fact he revels in it. It is an entertaining account of a life that has changed the course of computer history, and compulsory reading for anyone interested in Linux. ■

Info
Published by Texere, £17.99 ■



COMMUNITY

BRAVE GNU WORLD

The monthly GNU Column

BRAVE GNU WORLD GEORG C. F. GREVE

Welcome to Georg’s Brave GNU World. As the last issue showed, Free Software in schools is an interesting area with a lot of potential, so I have another topic from that area to start with this month.

GNU/Linux TerminalServer for Schools

Using software from the Linux Terminal Server Project (www.LTSP.org), you can turn the Iopener into a diskless thin client computer.

The “GNU/Linux TerminalServer for Schools” project tries to provide an easy means for the installation and administration of a GNU/Linux based terminal server. Background info: a terminal server provides the full functionality of a system to several users; their respective workplace machines are merely terminals. This has a lot of advantages. First of all, old hardware can still serve as a terminal: even a 486 is sufficient for this, so the solution is very cost-efficient. Also, the system administrator only has to administer a single system, which saves a lot of time. Thanks to the central structure, backups are also easy to make. The project began on a mailing list of the “Freie Software und Bildung e.V.” (“Free Software and Education Association”), where it was proposed by Hans-Josef Heck. At a congress in November 2000, it was decided to push work on this project. It is mainly maintained by Christian Selig, who is supported by Georg Baum and Jason Bechtel. Jason joined the team through the “Linux Terminal Server Project” (LTSP), which provides the core functionality of the “GNU/Linux TerminalServer for Schools” project. The main task of the project is not to write software packages but rather to create a good and easy-to-use installation and configuration system. The administration is based on a webmin module written in Perl for this purpose. A CD is planned for easy installation. One of the things considered crucial for the CD and the configuration is distribution-independence and internationalization. Translations into French (by Joel Amoros), Swedish (by Michael Habbe) and Spanish (by Angel Eduardo Porras Meza) are already


available, and further translators are quite welcome. According to Christian, the special strength of the project is that it isn’t special: it is an easy solution to a common problem. The weak point in his eyes is that there are no statistics about the hardware and network bandwidth needed for a certain number of users. Although the LTSP has a lot of users, it is hard to determine authoritative numbers in the way proprietary vendors do. Those vendors’ numbers are very often only rough estimates themselves, but they are presented to the end user in a very convincing way. The short-term goal is to complete the CD for installation on all common distributions. To reach this goal, there is still a need for beta-testers who are willing to probe the CD and the administration program for weaknesses. In the long term, it is planned to also include other educational software on the CD in order to further spread the use of Free Software in schools. Additional information about this can be found on the Ofset homepage. The LTSP and the administration module are released under the GNU General Public License. The license of the documentation is not clear yet, but only licenses accepted by the FSF are considered acceptable. Personally I would like to see the project make use of the GNU Free Documentation License (FDL). I’ll continue with an update about a project that has been mentioned in an earlier issue.

GNU HaliFAX

In Linux Magazine issue 4 I wrote about the GNU FaXile project, whose goal is to create a complete and comfortable fax environment in the GNU Project. The project has now been merged with GNOME-GFax and renamed GNU



HaliFAX with the active page as the yellow thumbnail

HaliFAX. Of the planned functionality (see issue 4), the GNU HaliFAX viewer (ghfaxviewer) and the GNU HaliFAX sender are already usable. The fax viewer is already pretty advanced: it has an easy-to-use graphical user interface, does anti-aliasing, has an improved zoom algorithm and of course also allows faxes to be printed. Some things, like a binding of GNU HaliFAX to SANE as well as a project-management part, are still planned; the maintainers, Wolfgang Sourdeau and George Farris, still have a lot of work to do. They received a lot of help from Till Bubeck, who did many things on gfax and also did a German translation for the ghfaxviewer. The original translator into German was Thomas Bartschies; the Chinese translation was done by Kevin Chen and the Polish by Zbigniew Baniewski. Help in the form of funding, allowing Wolfgang and George to concentrate more on this project, would be very welcome. Although the next project may not be directly relevant to many readers, it is certainly extremely interesting.

GOSSIP

The “GOSSIP Simulation Environment” by Marius Vollmer works on the creation of a simulation environment for use in communications engineering and digital signal processing. It is implemented as an extension to Guile, the Scheme implementation of the GNU Project. The project consists of essentially four parts: the simulation engine (gossip-sim), a schematic capture tool (gossiped), a group of supporting libraries (gossip-lib-*) and an extension that makes it possible to read VHDL files (gossip-vhdl). The description of the simulation is text-based, through Scheme programs which are

being executed by gossip-sim. Currently gossip-sim can only work with synchronous data-flow. Asynchronous data-flow or discrete events should not be impossible, but the implementation of the necessary simulation engines and their interaction has a pretty low priority for Marius Vollmer. The 1.0 release is almost ready: all features planned for it are already implemented in gossip-sim, and the other parts are also almost ready for real use. For this the author would like to see constructive criticism, especially of the simulation engine, because he is no expert in simulation techniques. In his eyes GOSSIP should provide a powerful tool with flexibility and simple structure as its primary attributes. The way some software packages patronize the user is something he especially dislikes; he would rather give users the opportunity to determine their own needs themselves. The biggest problem is the old chicken-and-egg problem: GOSSIP is still pretty young, so it lacks simulation modules, which would have to be written by users. But they will only write them if they can use GOSSIP, which requires many good simulation modules. This project opens the possibility to strengthen the scientific principle of freely exchanging knowledge, and its traceability, through Free Software in the field of digital signal processing. Therefore I would like to encourage everyone using such software to start using GOSSIP. Anyone having experience with packages such as COSSAP, SPW or Cocentric SystemStudio should already be familiar with the concepts of GOSSIP, which should greatly simplify a transition and the porting of existing modules. Now I’m coming to the next project, which may seem pretty abstract at first but is dealing with a very important task.



Jude in action

Jude

Jude is part of the thesis of Massimo Zaniboni. He developed it in order to create a “Workgroup Application” for the Crystal Engineering Laboratory of Ciamician, the chemical department of the University of Bologna, in Italy. Fortunately he did all this under the GNU General Public License. Jude is essentially a toolkit or framework for application development that allows the implementation of solutions in the “Data Management” and “Workgroup Application” areas while being simple to use for users and developers at the same time. The server side is based on an object-oriented model; the user side presents itself in an agent-based, compound-document way. The technologies used to create such solutions, such as relational or object-oriented databases, document-management systems, XML documents, agent-based systems or Java, are well known. Implementation of the desired structure is very often problematic, though, especially if only one or a few of these techniques are being used. Jude tries to provide all the advantages of these technologies in a single coherent development environment. Jude allows the developer to enter a simplified and abstract representation of the problem in a very high-level declarative and object-oriented language in order to get a fully functional workgroup application. This allows developers to access many existing modules and makes reusing old code much easier. The user only sees a coherent and simple-to-use environment based on Java and Swing as the interface to documents and structured information. Currently Jude is still in the alpha stage and hard to install, so normal users should not

give it a try yet. Consequently, making it ready for production use is Massimo’s next goal. Afterwards he would like to expand it with a transaction manager, encryption, offline capabilities and PDA support. But it will be a long time before this is ready. Help or sponsors are very welcome, so if you think this could be an interesting project to spend your time on, feel free to get in touch with Massimo. I’d say this was abstract enough for this month; I’ll now come back to an area that every one of us has contact with on a daily basis, the Web.

HyperBuilder

HyperBuilder is a project by Alejandro Forero Cuervo, who originally began it in order to have a tool to manage big static web sites with more than 100 pages. To allow this, the reusability of information was a crucial design factor, since it is rather annoying to insert the same header and footer on 200 pages by hand. Although HyperBuilder is still a good choice for maintaining large static web sites, its real strength is now on the dynamic side. HyperBuilder runs on the web server and parses the documents on demand, allowing for maximum dynamics. For this, it is best used as an Apache module, although it is also possible to run it as a CGI script, which is much slower. The web site files themselves are written in a kind of extended HTML that is easily understandable and editable. The HyperBuilder modules can be included as HTML-like tags. Modules for several standard problems like message boards, polls, SQL backends, user authentication



and more are already implemented. Additionally it is possible to include Perl or Scheme (Guile) code directly in the files; if desired, even both in the same file.

HyperBuilder provides great advantages for non-programmers because it is very easy to learn how to use it. For instance the line

<p>Visits: <counter src="file" id="counter_id" inc="1" show="yes"></p>

is all that is needed to put a counter on a web page. The alternative would be to write the counter yourself, which would take much more than the single line above, or to download someone else's CGI script for this purpose and integrate it into the web page. So HyperBuilder lets you get rid of many CGI scripts.

But programmers also benefit from HyperBuilder. If you plan to create a web interface for an application, it is definitely a very good idea to implement it as a HyperBuilder module. This way it becomes possible for every user to structure and compose the interface according to personal preference and taste. It also allows the developer to forget about layout and to focus on the functionality instead of the interface and its graphical details.

HyperBuilder itself has been developed in C, under the GNU General Public License, with POSIX threads for performance reasons. It runs on Unix systems and has been tested on different versions of GNU/Linux, Solaris/SunOS, *BSD and Irix. As languages for the dynamic creation of web pages it is possible to use Perl, C and Scheme. HyperBuilder is fully functional. The biggest problems right now are the lack of documentation and not enough users to find the remaining problems. Alejandro is especially unhappy about the lack of documentation and offers to coach volunteers on the project internals. Further plans are to write more modules, such as a module for the creation of pages from XML files, a module for communication through XML-RPC, a module to include the whole functionality of GIMP, and modules for Ruby and Java.

As a side note I should mention that there is already a complete portal site called “FuWeb” based on HyperBuilder, available under the GNU General Public License, so it can easily be used for your own projects. This should be enough to give you an idea of what this project can do and I can only recommend taking a look at the homepage, which also contains examples of the mentioned FuWeb portal.

Brave GNU World internals

Finally I can announce that the Brave GNU World is now translated into another language. Thanks to Fernando Lozano and Hilton Fernandes, who joined the Brave GNU World family, the Brave GNU World is now also available in Portuguese.


Thank you so much, guys! Now this column can be read in seven languages (German, English, French, Japanese, Spanish, Korean, Portuguese), something I certainly did not expect when I started it. I’d also like to welcome Gero Takke, Michael Scheiba and Steven R. Baker to the family. These three have taken over the Brave GNU World web site. Together with Alejandro Forero Cuervo, the author of HyperBuilder, who volunteered to help, they will do the upcoming redesign of the Brave GNU World web site.

Enough for today Alright, that should have been enough for this month. I hope to have provided some interesting input and as always I’m hoping for tons of email with ideas, feedback, comments and project suggestions to the known address [1]. ■

The Matanza web site is also built on HyperBuilder.

Info
[1] Send ideas, comments and questions to Brave GNU World: column@brave-gnuworld.org
Homepage of the GNU Project: http://www.gnu.org/
Homepage of Georg’s Brave GNU World: http://brave-gnu-world.org
“We run GNU” initiative: http://www.gnu.org/brave-gnuworld/rungnu/rungnu.en.html
GNU/Linux TerminalServer for Schools homepage: http://termserv.berlios.de/
Freie Software und Bildung e.V. homepage: http://fsub.schule.de/
Linux Terminal Server Project homepage: http://www.ltsp.org/
“Ofset - Organization for Free Software in Education and Teaching” homepage: http://www.ofset.org/
FSF license list, documentation licenses: http://www.gnu.org/philosophy/license-list.html#DocumentationLicenses
GNU HaliFAX homepage: http://www.ultim.net/~wolfgang/gnu_halifax/ghfv.html
GOSSIP Simulation Environment homepage: http://gossip.sourceforge.net/
Jude homepage: http://jude.sourceforge.net/
HyperBuilder homepage: http://bachue.com/hb/ ■

